19:03:46 #startmeeting infra
19:03:47 Meeting started Tue Jun 27 19:03:46 2017 UTC and is due to finish in 60 minutes. The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:48 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:50 The meeting name has been set to 'infra'
19:03:57 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:03:59 #topic Announcements
19:04:04 #info Don't forget to register for the PTG if you're planning to attend!
19:04:06 #link https://www.openstack.org/ptg/ PTG September 11-15 in Denver, CO, USA
19:04:08 as always, feel free to hit me up with announcements you want included in future meetings
19:04:14 #topic Actions from last meeting
19:04:20 #link http://eavesdrop.openstack.org/meetings/infra/2017/infra.2017-06-20-19.03.html Minutes from last meeting
19:04:25 #action ianw abandon pholio spec and shut down pholio.openstack.org server
19:04:35 oh, forgot to abandon the spec
19:04:44 #link https://review.openstack.org/#/c/477386/
19:04:54 review to remove it from system-config
19:04:59 cool!
19:05:19 and no worries. i just noticed one we agreed to move out of the priority efforts list to implemented that i forgot to clean up, so will have a patch up for that forthwith as well
19:05:53 #topic Specs approval
19:05:57 #info APPROVED "PTG Bot" spec
19:06:03 #link http://specs.openstack.org/openstack-infra/infra-specs/specs/ptgbot.html "PTG Bot" spec
19:06:10 #info APPROVED "Provide a translation check site for translators" spec
19:06:14 #link http://specs.openstack.org/openstack-infra/infra-specs/specs/translation_check_site.html "Provide a translation check site for translators" spec
19:06:37 #topic Priority Efforts - Gerrit 2.13 Upgrade: status update (clarkb)
19:07:14 i know we discussed last week that we'd follow up this week
19:07:40 ya
19:07:47 so it turns out that 2.13 is a really weird release
19:08:06 weird even as gerrit releases go
19:08:32 yowza!
19:08:37 and if you upgrade to 2.13 < 2.13.8 you have to do db migrations by hand when you go to 2.13.8 (and presumably 2.14)
19:08:57 so we reverted the 2.13.7 upgrade and are working on a plan to go straight to 2.13.8
19:09:04 migrations... between different db backends too
19:09:16 I have changes up under the gerrit-upgrade topic to get artifacts built
19:09:28 so reviews on those would be great
19:09:56 once we have artifacts I just need to update our process doc on etherpad and we can give it another go
19:10:01 maybe tomorrow even
19:10:38 #link https://review.openstack.org/#/q/is:open+topic:gerrit-upgrade
19:11:37 also the new db can't have the same name as the old db or you lose your old db
19:11:48 that's a nice feature that only just got documented in tip of the 2.13 branch
19:13:06 so that's the basic update. Things to review to make wars and plugin jars so that we can try 2.13.8
19:13:32 "new" being the "accountPatchReview" database?
19:13:44 yes
19:14:39 that's what i thought, just clarifying for those coming up to speed ;)
19:15:05 okay, so you'll give it a shot on review-dev possibly as early as tomorrow?
19:15:23 yes if we can get those changes merged today then plugins will be built overnight allowing us to attempt upgrade tomorrow
19:15:38 I'd like another infra-root around just to bounce ideas off of especially considering the way the last one went
19:15:44 fungi: are you willing/able to do that again?
19:15:47 you can count me in for that
19:16:14 yah - same here
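[Editor's note on the database-name caveat discussed above: a minimal sketch, assuming the new per-user reviewed-flag store in 2.13 is pointed at a database via an accountPatchReviewDb url in gerrit.config alongside the existing database section. The site path and key names here are assumptions for illustration, not a statement of the actual review.openstack.org deployment.]

```python
#!/usr/bin/env python3
# Minimal sketch: warn if the 2.13 accountPatchReviewDb would collide with
# the main ReviewDb name. Assumes gerrit.config uses git-config syntax with
# a [database] section (key "database") and an [accountPatchReviewDb] url;
# adjust the site config path for the real deployment.
import subprocess
import sys

SITE_CONFIG = "/home/gerrit2/review_site/etc/gerrit.config"  # hypothetical path


def git_config_get(key):
    """Read a single key from gerrit.config via `git config -f`."""
    try:
        return subprocess.check_output(
            ["git", "config", "-f", SITE_CONFIG, "--get", key]
        ).decode().strip()
    except subprocess.CalledProcessError:
        return ""


reviewdb_name = git_config_get("database.database")
patchreview_url = git_config_get("accountPatchReviewDb.url")

if reviewdb_name and reviewdb_name in patchreview_url:
    sys.exit("accountPatchReviewDb url appears to reuse the ReviewDb name "
             "(%s); pick a different database before upgrading" % reviewdb_name)
print("database names look distinct; ok to proceed")
```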
19:16:35 worth noting, if we switch to building off the upstream stable branch tips rather than point release tags, we will be making non-fast-forward changes to our local branches (possibly orphaning earlier changes that add our local .gitreview et cetera) when we update that branch
19:17:15 consider me on standby for this
19:17:22 awesome
19:17:47 sounds good, I can likely get going as early as 9am pdt tomorrow
19:18:24 fungi: we modify so little now, i'm not too worried about changes on the branches
19:18:53 jeblair: that was my feeling on it as well, just wanted to make sure that (slight) change in workflow was known
19:19:05 ya it's mostly just the submodule change to make builds work
19:19:18 anything else we should know about at this stage?
19:19:31 that is all I have
19:19:44 unless we want to talk about how much more fun 2.14 will be :)
19:19:45 i know you had to abort before viability last week, so not much was learned about anything besides the upgrade process itself
19:20:24 discussing 2.14 upgrades at this stage is probably premature
19:20:59 unless we get far enough with 2.13 to decide that we really should be running 2.14 because of some currently unknown problem
19:21:11 ya I don't think we are anywhere near that point
19:21:47 thanks clarkb! and thanks mordred and jeblair for volunteering to help put out fires if we cause any
19:22:11 #topic Infra-cloud's future and call for volunteers (fungi)
19:23:15 i've been trying to take point on the private three-way discussions with hpe and osu on the possibility of relocating the infra-cloud hardware
19:23:25 but things have gotten more complicated
19:24:11 particularly because, over the course of one or more of the previous data center moves, the server mounting rails and top of rack switches we originally had were "lost"
19:24:56 and it's starting to look like if we want to continue running infra-cloud, we'll need to buy some
19:25:22 i don't personally have the bandwidth to research, spec and price out the necessary additional gear
19:25:43 so i'm looking for volunteers to pick up this project
19:25:51 fungi: and that is because osuosl isn't going to have switches we can plug into (I'm guessing that's a capacity problem?) and rails are always hard to come by seems like?
19:26:04 yeah, pretty much
19:26:10 fungi: is hpe wanting to turn them off in the current shelf/sit on floor/daisy chained to router setup?
19:26:22 they're... what, stacked on top of each other now?
19:26:31 jeblair: I'm assuming based on lack of rails
19:26:44 we'll need fairly high-speed interconnections between the servers so basic connectivity is going to be insufficient
19:27:00 jeblair: correct, they are stacked directly on one another
19:27:06 according to what we've been told
19:28:32 "neat"
19:28:35 i can still continue acting as go-between on the foundation budget/funding/contracts part
19:28:46 i can try reaching out to some people internally at RH who might have recent experience with this
19:29:25 but i'm wondering whether it makes sense to try to move forward with the current hardware if we're going to need a significant capital investment in rails and switch gear
19:29:30 personally i know not very much about rack/rail solutions, but if we can find someone from RDO cloud ....
19:30:35 fungi: do we have a hardware manifest somewhere?
19:31:46 i haven't seen one that i can recall. i believe most of the hardware model info we have was worked out by rcarrillocruz, yolanda or cmurphy remotely
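[Editor's note on the manifest question above: a minimal sketch of gathering chassis and FRU details remotely over IPMI with ipmitool, along the lines of the iLO-based audit suggested below. The host list, credentials and account are placeholders; a real inventory would come from whatever bifrost records exist.]

```python
#!/usr/bin/env python3
# Minimal sketch: pull FRU/chassis info from each node's management
# controller over IPMI to build a rough hardware manifest. Host list and
# credentials are placeholders; real values would come from the bifrost
# inventory. Assumes ipmitool is installed and the BMCs answer lanplus.
import subprocess

HOSTS = ["10.0.0.11", "10.0.0.12"]   # hypothetical BMC addresses
USER = "ilo-audit"                   # hypothetical read-only account
PASSWORD_FILE = "/root/.ipmi-pass"   # hypothetical password file


def ipmi(host, *args):
    """Run one ipmitool command against a BMC and return its output."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host,
           "-U", USER, "-f", PASSWORD_FILE] + list(args)
    try:
        return subprocess.check_output(cmd, timeout=30).decode()
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired) as exc:
        return "ERROR: %s" % exc  # some BMCs may simply not respond


for host in HOSTS:
    print("==== %s ====" % host)
    print(ipmi(host, "fru", "print"))   # chassis/board models and serials
    print(ipmi(host, "mc", "info"))     # management controller firmware details
```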
19:32:07 https://docs.openstack.org/infra/system-config/infra-cloud.html does not look up to date....
19:32:43 there may be some stuff in bifrost files, but i don't know if we would have (accurate?) chassis information if all of that was collected remotely
19:32:55 i can ask the data center manager who's been e-mailing our contact at osu if they can do an audit of the hardware that's in place
19:33:18 we can gather that via ilo at least
19:33:25 it may be slow but should work
19:33:46 and possibly incomplete for any where the ilo fails to respond
19:33:52 but better than nothing
19:34:35 i reckon that's step one if we want to find rails
19:35:01 personally, i'd ask the dc manager. i'd prefer that to trusting ilo.
19:35:06 it'll also only take me a few minutes to ask them to confirm one last time that there really are no mounting rails attached to the servers (maybe we've misinterpreted what they were saying about that) and get us a count of switchports/speeds
19:35:35 (or, you know, both. trust but verify. :)
19:36:01 after that, i can start a thread on the infra ml about this i guess and see if we get any good suggestions/offers
19:37:16 #action fungi start a ml thread about the infra-cloud rails and switching situation
19:37:40 we can continue there, hopefully reaching a wider audience
19:37:47 fungi: can you go ahead and ask the dc op too?
19:38:06 jeblair: yeah, i figured i was going to do that first
19:38:10 cool
19:38:45 #action fungi get details on current server models, presence of rails and switchport counts for infra-cloud
19:39:21 #topic Should we skip our next meeting? (fungi)
19:39:39 I will not be here.
19:39:43 yes
19:39:54 in reading yesterday's zuul meeting minutes to make sure i hadn't missed anything important while i was out, i was reminded that it will be a holiday in the usa
19:40:10 so if people want an infra meeting next week, we'll need a volunteer chair
19:40:36 because my wife will probably hound me to stay off the computer
19:40:48 (i'm really proud that i actually remembered to bring that up in meeting for once)
19:41:06 (and just look at the example it set!)
19:41:28 i could ... but i think it's not worth it
19:41:54 thanks ianw. not hearing any objections, let's go ahead and cancel next week
19:42:52 #agreed The Infra team meeting for July 4 will be skipped due to a major holiday for many attendees; next official meeting will take place on Tuesday, July 11.
19:43:29 #topic Open discussion
19:43:57 I'm fairly confident dns over ipv4 was the vast majority of our dns problems
19:44:19 the leftovers appear related to how various jobs deploy software
19:44:31 eg docker using 8.8.8.8 in osic anyways
19:45:08 i spotted this on the general ml but haven't had time to look into it:
19:45:09 ++ my testing has stabilised greatly
19:45:10 #link http://lists.openstack.org/pipermail/openstack/2017-June/045095.html git-review removing /gerrit
19:45:14 would be really awesome if someone has time to jump in on that since it's another case of our tools being used outside the openstack community
19:45:35 #link https://review.openstack.org/#/c/477736/
19:45:47 new batch of beaker fixes here, mordred already +d' all of them https://review.openstack.org/#/q/project:%22%255Eopenstack-infra.*%2524%22+topic:fix-beaker
19:46:12 there's been some stretch talk ... patches for nodes etc. does anyone mind if i do 477736 and i imagine we need a manual run to get the first sync down
19:46:35 ianw: seems fine to me
19:46:49 ianw: ++ also check volume quota/size/space first
19:47:03 will do
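[Editor's note on the DNS point above (docker in some clouds falling back to 8.8.8.8): a minimal sketch, assuming the fix is simply to pin the docker daemon to local resolvers via the standard "dns" key in /etc/docker/daemon.json. The resolver address is a placeholder, not the actual unbound configuration used on our nodes.]

```python
#!/usr/bin/env python3
# Minimal sketch: pin dockerd to local resolvers instead of letting it fall
# back to 8.8.8.8. The resolver address below is a placeholder; a job would
# substitute the node's real resolver entries. Restarting the docker daemon
# afterwards is left to the caller.
import json
import os

DAEMON_JSON = "/etc/docker/daemon.json"
LOCAL_RESOLVERS = ["192.0.2.53"]  # placeholder; use the node's real resolvers

config = {}
if os.path.exists(DAEMON_JSON):
    with open(DAEMON_JSON) as f:
        config = json.load(f)

config["dns"] = LOCAL_RESOLVERS  # standard dockerd daemon.json option

with open(DAEMON_JSON, "w") as f:
    json.dump(config, f, indent=2)
    f.write("\n")
```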
19:47:20 s/+d/+2'd
19:48:05 thanks for adding all those cmurphy
19:48:27 i also approved the change to make the beaker jobs voting a few hours ago
19:48:37 ianw: I think someone said we might need to add the stretch release gpg key too
19:48:42 fungi: awesome
19:49:11 clarkb: yes, the key should be added but it's not actually in use yet afaik
19:49:31 the "stretch release key" is so named because it may start being used following the stretch release
19:49:43 ah
19:49:53 (sort of backward from how we name our signing keys)
19:52:59 jhesketh: not sure if you're awake yet, but are you still free to process the requested stable/mitaka eol? that would take one more thing off my plate
19:53:11 #link http://lists.openstack.org/pipermail/openstack-dev/2017-June/118473.html Tagging mitaka as EOL
19:54:06 i'll follow up to your reply on the ml
19:54:59 clarkb: do you (or anybody else) know the status on gerrit review tags? ttx was asking in this thread:
19:55:01 #link http://lists.openstack.org/pipermail/openstack-dev/2017-June/118960.html Turning TC/UC workgroups into OpenStack SIGs
19:55:38 i vaguely recall earlier discussions suggesting that was waiting for their notedb implementation
19:55:48 so not sure if 2.13 gets us far enough for that
19:56:10 fungi: I tried to find mention of them in changelog and docs and found nothing
19:56:21 I wonder if they just stalled out
19:56:59 what was the word they actually used for those (as "tags" would be ambiguous with git tags)?
19:57:53 hashtag was used at one point
19:57:55 I don't recall
19:58:13 thanks!
19:58:24 #link https://bugs.chromium.org/p/gerrit/issues/detail?id=287 arbitrary labels/tags on changes
19:59:58 i'll follow up to the ml after reading whatever i can find, if nobody beats me to it
20:00:00 erm
20:00:03 https://review.openstack.org/Documentation/config-hooks.html#_hashtags_changed
20:00:08 my google bubble pointed me at that
20:00:27 strange indeed!
20:00:29 looks like you have to enable notedb
20:00:31 and we're out of time
20:00:38 thanks everyone!
20:00:40 which last I read is not stable
20:00:45 #endmeeting
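[Editor's follow-up to the hashtags question at the close of the meeting: newer Gerrit exposes a "Set Hashtags" REST endpoint on changes once notedb is enabled. A minimal sketch, assuming HTTP-password authentication and a notedb-enabled server; the host, credentials and change number are placeholders, and depending on the server's auth scheme digest auth may be needed instead of basic.]

```python
#!/usr/bin/env python3
# Minimal sketch: add a hashtag to a change via Gerrit's Set Hashtags REST
# endpoint, which only works once notedb is enabled. Host, credentials and
# change number are placeholders.
import json
import requests

GERRIT = "https://review.example.org"   # placeholder host
AUTH = ("myuser", "my-http-password")   # placeholder HTTP credentials
CHANGE = 12345                          # placeholder change number

resp = requests.post(
    "%s/a/changes/%d/hashtags" % (GERRIT, CHANGE),
    auth=AUTH,
    json={"add": ["sig-example"], "remove": []},
)
resp.raise_for_status()
# Gerrit prefixes JSON responses with )]}' to defeat XSSI; strip that line first.
hashtags = json.loads(resp.text.split("\n", 1)[1])
print("hashtags now:", hashtags)
```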