19:02:32 #startmeeting infra
19:02:32 Meeting started Tue Apr 18 19:02:32 2017 UTC and is due to finish in 60 minutes. The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:33 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:36 The meeting name has been set to 'infra'
19:02:42 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:47 #topic Announcements
19:02:56 #info Infra contributor bootstrapping intro 16:40 EDT Monday at the OpenStack Forum in Boston
19:03:06 90-minute window shared with some other horizontal teams, so we have about 15 minutes for a quick intro and some questions from the audience
19:03:31 i could use some volunteers to help out with our section of that
19:04:15 get up with me if you're interested, we'll probably just rehash a few slides we have from our infra overview and talk about systems administration as code for a bit
19:04:46 also worth noting, in case you're torn between the two, it overlaps with the session about getting rid of stackalytics
19:04:55 hah
19:05:07 so, um, choose wisely
19:05:31 as always, feel free to hit me up with announcements you want included in future meetings
19:05:48 #topic Actions from last meeting
19:05:58 #link http://eavesdrop.openstack.org/meetings/infra/2017/infra.2017-04-11-19.01.html Minutes from last meeting
19:06:13 #action fungi put forth proposal to flatten git namespaces
19:06:31 i'm still pondering this one, but it will likely take the form of an infra spec
19:07:12 pabelanger Open an Ubuntu SRU for bug 1251495
19:07:14 bug 1251495 in mailman (Ubuntu Trusty) "Lists with topics enabled can throw unexpected keyword argument 'Delete' exception." [High,Triaged] https://launchpad.net/bugs/1251495
19:07:28 yes! I hope to have this created this week. I've started on packaging already
19:07:37 cool, i was not seeing it in the bug
19:07:51 #action pabelanger Open an Ubuntu SRU for bug 1251495
19:07:59 yes, no bug yet. Was going to do it all in one shot
19:08:08 clarkb to add citycloud to nodepool
19:08:23 i know you're working on that one, though i'm guessing no config change proposed yet
19:08:32 I've made an account and have sent its info back to citycloud so they can "verify" the account and set quotas
19:08:46 oh, right, you cc'd me on that
19:08:49 no config change proposed yet because I can't create the two infra users in the account until after it gets verified
19:08:59 yep, makes total sense
19:09:04 #action clarkb to add citycloud to nodepool
19:09:18 basically the "we aren't spammers" step
19:09:28 (speak for yourself!)
19:09:44 * fungi is kidding, of course ;)
19:10:16 passwords file is up to date with current info and I will add the users once we have them
19:10:38 #topic Specs approval: PROPOSED: Zuulv3 Executor Security Enhancement (SpamapS)
19:10:44 #link https://review.openstack.org/444495 Zuulv3 Executor Security Enhancement
19:11:08 this was discussed at last week's meeting with a council voting deadline of about 12 minutes ago
19:11:20 so i'm approving it now
19:12:00 #info APPROVED: "Zuulv3 Executor Security Enhancement" spec
19:12:27 #topic Priority Efforts
19:12:53 nothing called out specifically here this week, though the above approved spec is certainly associated with the Zuul v3 priority effort
19:13:19 I haven't had time to work on gerrit upgrade stuff in the last week unfortunately
19:13:43 yep, lots of people are spread thin
19:13:47 #topic Mitaka EOL: When to remove which changes? (AJaeger)
19:13:51 #link https://review.openstack.org/#/q/status:open+project:openstack-infra/project-config+branch:master+topic:mitaka-eol Current list of changes
19:14:14 looks like there are two pending
19:14:37 Just want to know how to move forward - we have some changes proposed and merged already like https://review.openstack.org/#/c/455637/ (twice proposed) and others proposing changes
19:14:44 * AJaeger started with those two
19:14:45 for python 3.4 job removal (mitaka was the last release before we switched to 3.5) and stable/mitaka bitrot job removal
19:15:21 Do we want to wait for tagging the branches - or start with some cleanups slowly?
19:15:24 we can also drop trusty jobs
19:15:42 And what to do with projects that still have mitaka branches and use trusty jobs?
19:15:43 well, except for infra's testing on trusty since we still deploy things on trusty at the moment
19:15:45 all of them?
19:16:07 we need trusty for grenade from mitaka to newton, so cannot retire it completely
19:16:33 AJaeger: we stop testing mitaka to newton when we drop mitaka
19:16:34 * AJaeger already -1s all changes that add new trusty jobs
19:16:40 clarkb: ah, good
19:16:54 so that shouldn't be an issue. Guessing it's just going to be deployment-related things that will have trusty (infra, maybe osa?)
19:17:24 AJaeger: i think openstack-ansible at least expressed a concern because their plan (up until the ptg when we discussed it anyway) had been to perform upgrade tests of stable/newton on trusty servers (they apparently test in-place upgrades of the distro along with openstack)
19:17:36 clarkb: ya, I think deployment jobs might still want trusty.
19:18:15 I'm personally not in a rush to remove trusty, I feel nodepool-builder is now working great and current breakages have been minimal
19:18:48 yah - as long as it's not difficult to keep around, I don't feel a strong urge to remove it
19:18:51 that certainly helps. I mostly want it gone because upstart
19:19:12 yah. much as I hate systemd, I prefer _one_ init system to multiple
19:19:18 eventually I won't have to think about multiple init systems
19:19:36 I will also admit that sdague's systemd logging changes in devstack are nice
19:19:39 well, what i told them is that we'll be stuck keeping trusty around for a while due to the infra team's needs
19:20:14 we've got another year until there is a new lts to replace xenial, right? so we can tweak the policy between now and then based on what we learn from trusty perhaps?
19:20:33 but that 1. it's possible stable/newton of some projects may cease working on trusty at any time because they don't gate on it any longer, and 2. wait times for trusty nodes could be long under heavy ci load
19:21:20 right
19:21:31 ++
19:21:56 also i strongly encouraged them to consider the qa team's distro alignment where grenade is concerned (i.e., stop testing upgrades for the oldest stable branch)
19:22:13 So, if we keep trusty jobs for deployment projects, what are we doing for non-deployment projects that won't EOL directly?
19:22:44 AJaeger: I think we remove all instances of trusty tied to mitaka, which is the vast majority of them
19:23:06 ++
19:23:38 i guess the hard part of that is probably figuring out which ones aren't running trusty because of mitaka
19:23:57 so that they don't get removed
19:24:13 ya you'll have to check against the regex in layout.yaml that specifies based on branch
19:24:25 anything that matches the mitaka side can be removed
19:25:43 we have a default regex for trusty jobs to run only on mitaka, so unless that's overwritten for a specific job...
19:28:04 do we have enough info/consensus to be able to move forward? i mean, we should still wait to drop jobs until the bulk of the eol tags are pushed i think
19:28:17 so, shall I WIP my two changes?
19:28:33 for example, i saw a thread where the glance team (i think it was glance) was requesting a delay because they had a stable/mitaka fix they want to land
19:29:04 ya I think we wait for eol then make our changes
19:29:13 wfm
19:29:14 because once eol happens no more changes can come in so safe to remove jobs
19:29:45 #link http://lists.openstack.org/pipermail/openstack-dev/2017-April/115529.html Tagging mitaka as EOL
19:29:56 yeah, it was indeed glance
19:30:27 there's some question as to whether it's a valid stable branch change, but i don't think discussion has concluded
19:31:04 my point being, if we drop their jobs, it'll be hard for them to test and land that (if they do wind up moving forward pre-eol)
19:31:31 note that it will be hard for them to test if everyone eols and they don't
19:31:34 as devstack won't do things
19:31:36 right
19:31:45 so far the jobs proposed will not block merging any changes - but at some point that might indeed happen...
19:31:50 but it looks like eol hasn't happened anywhere _yet_
19:32:03 at least nova, for example, doesn't have a mitaka-eol tag that i can see
19:32:19 fungi, indeed, tags are not done yet.
19:32:55 ya, I think our move is to wait for the moment
19:33:07 fungi, we can move on to the next topic and discuss further once the first tags have been done...
19:33:57 #agreed Wait for the main batch of EOL tagging/branch deletion, then remove mitaka-specific jobs (including some trusty-specific or Python 3.4-based jobs if only needed by stable/mitaka branches)
19:34:04 that sum it up?
19:34:34 +1
19:34:44 thanks for bringing it up~
19:34:46 !
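[Editor's note: for context on the branch filtering mentioned in the discussion above, Zuul v2 job restrictions live in project-config's zuul/layout.yaml. The sketch below is a hypothetical example of the kind of entry being referred to -- the job-name and branch regexes are illustrative, not the repository's exact values.]

    jobs:
      # Hypothetical restriction keeping trusty-based job variants on mitaka only.
      # Entries whose branch regex matches nothing but stable/mitaka become
      # removable once the mitaka-eol tags land and the branches are deleted.
      - name: ^gate-.*-ubuntu-trusty$
        branch: ^stable/mitaka$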
19:34:49 #topic Scheduling proposed project renames (fungi)
19:35:21 the defcore working group changed their name to the interop working group a while ago and wants to rename their repo
19:35:53 i also have a change to push for renaming a couple of infra repos specific to the user story dashboard (now the feature tracker)
19:36:37 any infra-root planning to be around, say, this friday and interested in helping rename a handful of repos?
19:37:11 I'm free
19:37:21 as am I
19:37:23 i'm happy to do the bulk of the work and announcing, just want to make sure i'm not flying solo in case things go terribly wrong and we need some extra hands
19:37:51 i'll be around
19:38:14 shall we say 20:00 utc?
19:38:50 a one hour window (with the gerrit/zuul outage expected to conclude early in the window)?
19:39:05 I'll be around
19:39:18 wfm
19:39:31 i'll whip up the task tracker change in the next day or so
19:40:18 #info Gerrit and Zuul will be offline for project renaming maintenance Friday, April 21, 20:00-21:00 UTC.
19:40:33 look right?
19:41:18 are we doing online reindexes now?
19:41:26 yes
19:41:39 (the upgrade to 2.13 is a special case where we can't, but a general project rename should be fine)
19:41:39 yep
19:41:43 neat. then lgtm. :)
19:42:29 so, for example, we can bring gerrit and probably zuul back online right away, but some jobs may get incorrect results due to git replication delays (so we'd want to avoid tagging releases during that window)
19:42:47 * fungi consults release schedule real quick
19:43:34 yeah, milestone 1 was last week. this is a dead week on the schedule from what i can see
19:43:46 #link https://releases.openstack.org/pike/schedule.html Pike Release Schedule
19:44:23 ttx: dhellmann: dims: ^ heads up, lmk if that's a no-go for a maintenance window and i can readjust
19:44:33 * dhellmann looks at scrollback
19:45:17 this week or next seems fine to me, but maybe get ttx to sign off
19:45:31 no problem, it's never etched in stone ;)
19:45:41 * jeblair puts chisel down
19:45:45 this is r-19, next is r-18, and r-17 is the week before summit so it's probably best not to do it then
19:45:46 #topic Attending September 2017 PTG in Denver, CO, USA (fungi)
19:46:13 fungi: sounds good to me
19:46:31 i've been asked by the ptg organizers to attempt to gauge how many infra team members want to attend the ptg in denver this september (week of september 11 i believe)
19:47:20 really just trying to get a rough count so they can start planning sizing for our room and whatnot
19:47:29 * mordred will be there - although would like to complain now that he'll miss opening night of the symphony season
19:47:46 o/
19:47:50 I'll be there
19:47:51 travel budget pending, plan on attending
19:48:12 so even if you don't think you have funding from your employer to attend but might want to take advantage of travel support or something, that's fine
19:48:47 slight overestimation is better than significant underestimation
19:49:32 mordred: should we plan a group outing to see the colorado state symphony or something?
19:49:38 * olaph hopes to attend
19:49:41 * cmurphy will try to be there
19:49:51 * zara_the_lemur__ is also hoping
19:50:00 fungi: :) maybe so
19:50:35 okay, so including me and those speaking up in the meeting, that's at least 8 so far
19:51:04 #info Let fungi know if you hope to attend the PTG in Denver this September so he can get a rough head count
19:51:38 i'll reach out a few other ways, i don't need to have numbers back to diablo_rojo just yet
19:51:49 glad so many people might be coming!
19:52:01 i love hanging out with all of you and getting things done together
19:52:20 and seeing no other last-minute topics on the agenda...
19:52:24 #topic Open discussion
19:52:36 pabelanger: you had something to bring up during open discussion, you said
19:53:26 Ah, just wanted to gauge interest in starting infracloud upgrades, we'd likely bring down a region at a time to rebuild everything
19:53:57 also, wouldn't mind trying to land https://review.openstack.org/455480/ today. Adds infra-root-keys to nodepool (we need a stop / start first on nodepool.o.o)
19:54:00 would be a good time to make sure we don't have the same kernel issue that baremetal had on the other servers
19:54:21 linux-image-generic needs to be installed
19:54:37 should be able to fix that
19:55:02 we should probably start with whichever region is smaller (chocolate?)
19:55:32 if we do chocolate first, it will be a good time to split strawberry too
19:56:06 timing might also coincide with bringing osic offline... i assume everyone's heard the news about osic wrapping up a year ahead of schedule... so not taking the larger of the two offline right when that happens might help us better gauge the impact
19:56:22 and maybe see if we can't get citycloud up first just to offset
19:56:45 Right, this would depend on OSIC timing also
19:56:45 yes, that
19:56:59 not a rush, just something we've talked about
19:57:26 as i said earlier in #openstack-infra, if it's something you want to work on then let's do it
19:57:44 ++
19:57:53 has to happen sometime, after all
19:59:20 once I get citycloud account verified and quota'd it shouldn't take more than a day to get it running so that will be quick once ready
19:59:26 so shouldn't be a big speedbump here
20:00:13 a the a the a the a that's all folks!
20:00:18 thanks everyone
20:00:22 #endmeeting