19:03:57 #startmeeting infra
19:03:58 Meeting started Tue Aug 8 19:03:57 2017 UTC and is due to finish in 60 minutes. The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:59 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:04:01 The meeting name has been set to 'infra'
19:04:05 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:04:28 #topic Announcements
19:04:40 #info New code contributors are no longer forced to join the foundation
19:04:49 #link http://lists.openstack.org/pipermail/openstack-dev/2017-August/120771.html New code contributors no longer forced to join the foundation
19:05:01 #info We're removing support for the Contact Store feature in puppet-gerrit
19:05:10 #link http://lists.openstack.org/pipermail/openstack-infra/2017-August/005540.html Removing support for the Contact Store feature in puppet-gerrit
19:05:19 #link https://review.openstack.org/491090 Stop supporting the Contact Store feature
19:05:51 that is a good first step in simplifying our puppet deployment for modern gerrit
19:05:52 since i'm stuck in a car at this time next week, clarkb has volunteered to chair the meeting
19:06:00 on august 15
19:06:01 we have a lot of older stuff in there that can go away (like contact store)
19:06:19 and since we'll have a new ptl by the meeting after that, this is my last meeting as infra ptl
19:06:44 so unlike in the past, feel free to hit someone else up with announcements you want included in future meetings!
19:06:50 ;)
19:07:22 #topic Actions from last meeting
19:07:31 fungi's lame duck session commences
19:07:46 Shrews: hush or we'll make you ptl
19:07:48 #link http://eavesdrop.openstack.org/meetings/infra/2017/infra.2017-08-01-19.02.html Minutes from last meeting
19:07:55 he can do that
19:07:59 :)
19:08:06 fungi get switchport counts for infra-cloud
19:08:11 er, that should be
19:08:15 #action fungi get switchport counts for infra-cloud
19:08:22 because i still haven't heard back from hpe on it
19:08:39 the next two fared better than i
19:08:42 clarkb send advance notice of Gerrit 2.13 upgrade to mailing lists
19:08:49 i saw that, lemme get a link real fast
19:09:29 #link http://lists.openstack.org/pipermail/openstack-dev/2017-August/120533.html review.openstack.org downtime and Gerrit upgrade
19:09:53 mordred send advance notice of Zuul v3 cutover to mailing lists
19:09:59 i think i have a link for this one too
19:10:25 #link http://lists.openstack.org/pipermail/openstack-dev/2017-August/120499.html Rollout of Zuul v3 at the PTG
19:10:54 that concludes our action items
19:11:04 #topic Specs approval
19:11:27 we don't seem to have anything new up this week, but i expect to put a completion change up for the contact removal spec next week
19:11:44 anybody know of anything else coming soon?
19:12:16 I have on my todo list to write a spec for the log server move (I don't think we have one already)
19:12:20 but I haven't had time to write that yet
19:12:25 nothing jumping out at me in open infra-specs changes at the moment
19:12:37 I hope to write up something about ara.openstack.org for PTG
19:12:40 yeah, that would be a good one to have underway soon
19:12:48 the logserver move
19:13:09 ara also sounds fun, we've had a lot of the community lamenting the loss of puppetboard
19:13:35 #topic Priority Efforts
19:13:39 nothing called out specifically here for this week
19:14:03 though the announcements earlier related to the gerrit contactstore removal priority effort
19:14:29 anyone have last-minute things they need to cover on a priority effort from our list?
19:15:20 cool, that leaves more time for general topics
19:15:34 #topic Bandersnatch upgrade (ianw)
19:15:45 i saw you hacking on testing this out
19:15:49 how'd it go?
19:16:04 it works, and our changes seem to have been incorporated correctly into bandersnatch 2.0
19:16:33 i tested by running updates on a clone of the pypi mirror, and it looks as you'd expect
19:16:38 so, now what :)
19:17:07 it technically requires python 3.5, but i tested with 3.4 on the existing trusty host and it works
19:17:18 i say live dangerously, upgrade to 2.0 on trusty, and remember this one's a prime candidate to replace with xenial?
19:17:19 I think release team wants us to be slushy but that seems like a safe change since it's tested and we only publish if things work well
19:17:27 maybe a good time to migrate to xenial first?
19:17:32 "technically" == documented as, i mean
19:17:34 so worst case we downgrade and rebuild the mirror
19:18:06 the main danger with running it on trusty is that the 2.0 release notes specifically say they plan soon to switch to relying on some 3.5+ features
19:18:21 so we may find that 2.1 breaks for us
19:18:36 i don't think we need it -- it turns out the UA regex matching that was blocking us on pypi was unintentional
19:18:46 which is why upgrading to xenial soon would still be good regardless
19:18:55 ++
19:19:13 so when would soon be, in people's mind. too much before release?
19:19:13 i do think generally keeping up with latest releases of the tools we use is still a "good thing"[tm]
19:19:32 if it's non-urgent, we could put it off until post-release
19:19:34 and do we want to snapshot & dist-upgrade or start fresh?
19:19:51 ianw: we tend to start fresh. The exception to that rule was lists.o.o to keep its IP addr
19:20:09 in this case I would start fresh, then we can just stop cron on trusty node and let cron run on xenial node
19:20:09 i think starting fresh would be fine, because we don't have to swap it into production until it's done populating the dataset anyway
19:20:13 should be a straightforward cutover
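(A minimal sketch of that cutover, assuming the sync jobs live in root's crontab and that puppet can be paused agent-side; the real deployment may manage both differently, and any hostnames are placeholders:)

    # old (trusty) mirror-update node: keep puppet from re-adding the jobs,
    # then take the crontab out of play so no syncs fire from this host
    sudo puppet agent --disable "migrating mirror-update to xenial"
    sudo crontab -l > /root/crontab.trusty.bak   # saved copy in case we roll back
    sudo crontab -r

    # new (xenial) node: confirm puppet has installed the sync jobs before letting them run
    sudo crontab -l | egrep 'bandersnatch|reprepro'

(Disabling puppet first matters because a subsequent puppet run would otherwise just reinstall the cron entries on the old host.)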
19:20:36 granted we have to do it for reprepro too right?
19:20:37 except i don't want them racing and destroying each other's mirrors, since it's all backed on the same afs dir
19:20:40 oh, in fact the data's already in afs
19:20:48 so nothing really stateful locally on it anyway
19:20:57 we don't really have a global lock, other than turn one box off i guess :)
19:20:58 ianw: ya so you'd turn off the cron entries on old node before enabling them on new node
19:21:02 ya that
19:21:13 our "global lock" is zuul, technically
19:21:20 we could pause it
19:21:26 if really concerned
19:21:51 nah I think it would be fine to just disable cron on one node or turn it off (I think rax makes it hard to turn it off though)
19:21:56 insofar as preventing a slightly extended "safe" downtime from impacting jobs
19:22:28 clarkb: you can `sudo halt` or `sudo poweroff` a vm in rackspace, i do it pretty regularly
19:23:01 though there is the risk they might accidentally boot it back up, if we disable puppet temporarily and comment out cron jobs on it, no real risk
19:23:02 ok, so agree to start a fresh host. should i get started, or leave it for a while?
19:23:08 fungi: and use nova api to reboot it?
19:23:18 clarkb: that's what i've done before, yep
19:23:22 if so neat, I hadn't realized that was viable though makes sense
19:23:26 well, openstack server reboot
19:23:33 but yes
19:23:44 ianw: I'd say it's fine to get started. If we want to be very careful we can run it by release team first
19:23:59 i'm behind that plan
19:24:03 wfm
19:24:22 well it would be trusty->xenial and then bandersnatch separately, so it does minimise things as much as possible
19:24:49 ok, well seems an agreement we'll I start to look at it in near future
19:25:05 sounds like a good plan. i do think it's worth running by release team if we do it before release.
19:25:05 I'll start to look at it, i mean
19:25:36 ok, will loop in
19:26:03 that's all then, thanks
19:27:05 i guess the other risk with the xenial upgrade is that we do additional mirroring things on there besides just bandersnatch
19:27:30 but that's mostly just rsync and some reprepro right? or is reprepro also happening on another server?
19:27:37 ya it's those two things
19:27:50 reprepro for debian and ubuntu mirrors and rsync for centos and fedora iirc
19:27:55 no reprepro is on there, and some rsync-ing for suse
19:28:01 rsync behavior i doubt would change significanyly, reprepro maybe
19:28:16 s/significanyly/significantly/
19:28:43 but really i don't expect either to be problematic
19:28:50 okay, thanks ianw!
19:29:20 #agreed Check with the release team, but plan to build a replacement mirror update server for Xenial and bandersnatch 2.0+
19:29:28 that cover it?
19:29:33 ++
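(For the halt-and-reboot approach discussed above, a rough sketch; the server name is a placeholder and the exact invocations are assumptions drawn from the conversation rather than a documented procedure:)

    # inside the old VM, once puppet and its cron jobs are disabled: power it off cleanly
    sudo poweroff

    # later, from a machine with the rackspace credentials loaded, bring it back up;
    # fungi mentioned using `openstack server reboot`, and `openstack server start`
    # also exists for instances left in SHUTOFF state
    openstack server reboot mirror-update01.example.org

(Either way, the point is that only one host should have active sync jobs at a time, since both would be writing to the same AFS-backed mirror volumes.)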
19:30:22 #topic PTG planning (fungi)
19:30:55 we've got about 30 minutes left we can discuss some of the stuff on the etherpad, though i'd like to cut to open discussion maybe at 5-10 minutes left in case others have anything else
19:31:18 #link https://etherpad.openstack.org/p/infra-ptg-queens Infra planning pad for Queens PTG in Denver
19:31:30 oh, correction from last week
19:31:41 I'll be arriving on Saturday afternoon if anybody else is arriving early
19:32:20 pabelanger: me too. It was either that or really late sunday so saturday afternoon won
19:32:20 the ethercalc i linked was for the _PIKE_ ptg, i was ding-dongity-wrong about the one for the upcoming (queens) ptg being up yet
19:32:48 realized it just moments after i embarrassed myself asking the coordinators questions about trying to update it
19:33:31 my plane is scheduled to cease hurtling through the aether at approximately 16:20 local denver time saturday
19:33:38 i think that's "mdt"
19:33:47 (utc-0600?)
19:34:49 seems like the plan is to then attempt to take the commuter train to the vicinity of the ptg hotel
19:35:09 after which i should be available for zuulishness and other infra shenanigans
19:35:43 Ya, I haven't looked yet how to get to hotel. But recall some services from last time we did mid-cycle in CO
19:36:25 there is commuter train service from the airport to relatively near the event hotel
19:36:43 well, the mid-cycle we did in fort collins wasn't really near where the ptg is being held
19:36:58 so no idea how much overlap there is for transit
19:37:10 fungi: right, but I remember the desk of transit service that were offered to other places
19:37:15 Central Park station on the A line then walk about 1/4 mile
19:37:20 pabelanger: ahh, yeah that makes sense
19:37:21 but sounds like train is the way to go
19:37:41 1/4 mile walk from the station sounds fine to me
19:37:54 i pack light
19:37:55 $9
19:41:01 i went and retagged some of the new additions to the planning etherpad just now as well
19:41:37 mmedvede's one on the third-party ci monitoring dashboard feels like it could sort of be either a helproom topic or a hacking topic
19:42:30 fungi: yeah, there is definitely some hacking left
19:43:01 For the uninitiated, could we define what the tags mean in the etherpad?
19:43:06 +1
19:43:10 i'll do that
19:43:16 hacking, reserve, etc.
19:45:33 third-party ci dashboard deployment would involve spinning up a VM on infra (or reusing existing one), setting up gerrit account, probably fixing a few puppet quirks along the way. I think that leans towards [hacking] a bit more
19:45:35 dmsimard: does what i put in there clarify?
19:46:05 yes!
19:46:32 mmedvede: sure, if what you need is mostly guidance/assistance then the helproom setting is the thing, but if it'll be more group collaboration that's where i think the hacking days are geared
19:47:00 though again, it's sort of a fuzzy distinction
19:47:31 and as this is the first ptg we've tried this split with (and only the second ptg total for that matter) it's anybody's guess how it'll turn out
19:47:56 it seems like that topic at least wants a bit of discussion first, then possibly hacking...? (unless some decisions can be made before ptg)
19:48:48 Ya, was just thinking if that spec might have changed a bit based on zuulv3 things now
19:49:07 right, definitely no need to artificially block any effort or discussion on the ptg, but we're all pretty heads-down on zuul and ptg prep stuff so the ptg may be the first opportunity for some to get much available bandwidth from us
19:50:32 well, not just zuul and ptg prep stuff for that matter. release time and elections and our usual dog-paddle to try and stay afloat in a sea of crumbling systems ;)
19:51:42 if hacking time better spent on zuul v3, I can probably handle most of ci dashboard myself with a bit of assistance
19:51:55 okay, we can continue to do ptg prep discussion but i'm going to officially switch to open discussion for the remainder of the meeting in case people have other non-ptg stuff they want to bring up
19:51:58 #topic Open discussion
19:52:18 we ran out of time the past few meetings to do open discussion
19:52:40 I'm going to be afk tomorrow. Babysitting fell through so I'm stuck with the kids
19:52:41 so just wanted to make sure to make time this week in case
19:53:16 oh, yes, as mentioned i'm completely afk most of the 15th, but will also be travelling the 17th through the 25th and only intermittently available
19:53:53 (of august)
19:54:48 seems like maybe nobody has anything else? happy to wrap the meeting up a few minutes early for a change
19:56:12 seems that way
19:56:22 yep, these systems don't break themselves!
19:56:26 well, they do
19:56:31 but i still like to help
19:56:36 it's been a pleasure serving as your ptl for these last two years, thanks everyone!
19:56:50 fungi: thank you!
19:56:53 thanks fungi, appreciate all the help!
19:57:00 and best of luck to whosoever takes on the mantle
19:57:08 #endmeeting