19:03:57 <fungi> #startmeeting infra
19:03:58 <openstack> Meeting started Tue Aug  8 19:03:57 2017 UTC and is due to finish in 60 minutes.  The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:59 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:04:01 <openstack> The meeting name has been set to 'infra'
19:04:05 <fungi> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:04:28 <fungi> #topic Announcements
19:04:40 <fungi> #info New code contributors are no longer forced to join the foundation
19:04:49 <fungi> #link http://lists.openstack.org/pipermail/openstack-dev/2017-August/120771.html New code contributors no longer forced to join the foundation
19:05:01 <fungi> #info We're removing support for the Contact Store feature in puppet-gerrit
19:05:10 <fungi> #link http://lists.openstack.org/pipermail/openstack-infra/2017-August/005540.html Removing support for the Contact Store feature in puppet-gerrit
19:05:19 <fungi> #link https://review.openstack.org/491090 Stop supporting the Contact Store feature
19:05:51 <clarkb> that is a good first step in simplifying our puppet deployment for modern gerrit
19:05:52 <fungi> since i'm stuck in a car at this time next week, clarkb has volunteered to chair the meeting
19:06:00 <fungi> on august 15
19:06:01 <clarkb> we have a lot of older stuff in there that can go away (like contact store)
19:06:19 <fungi> and since we'll have a new ptl by the meeting after that, this is my last meeting as infra ptl
19:06:44 <fungi> so unlike in the past, feel free to hit someone else up with announcements you want included in future meetings!
19:06:50 <fungi> ;)
19:07:22 <fungi> #topic Actions from last meeting
19:07:31 <Shrews> fungi's lame duck session commences
19:07:46 <fungi> Shrews: hush or we'll make you ptl
19:07:48 <fungi> #link http://eavesdrop.openstack.org/meetings/infra/2017/infra.2017-08-01-19.02.html Minutes from last meeting
19:07:55 <jeblair> he can do that
19:07:59 <pabelanger> :)
19:08:06 <fungi> fungi get switchport counts for infra-cloud
19:08:11 <fungi> er, that should be
19:08:15 <fungi> #action fungi get switchport counts for infra-cloud
19:08:22 <fungi> because i still haven't heard back from hpe on it
19:08:39 <fungi> the next two fared better than i
19:08:42 <fungi> clarkb send advance notice of Gerrit 2.13 upgrade to mailing lists
19:08:49 <fungi> i saw that, lemme get a link real fast
19:09:29 <fungi> #link http://lists.openstack.org/pipermail/openstack-dev/2017-August/120533.html review.openstack.org downtime and Gerrit upgrade
19:09:53 <fungi> mordred send advance notice of Zuul v3 cutover to mailing lists
19:09:59 <fungi> i think i have a link for this one too
19:10:25 <fungi> #link http://lists.openstack.org/pipermail/openstack-dev/2017-August/120499.html Rollout of Zuul v3 at the PTG
19:10:54 <fungi> that concludes our action items
19:11:04 <fungi> #topic Specs approval
19:11:27 <fungi> we don't seem to have anything new up this week, but i expect to put a completion change up for the contact removal spec next week
19:11:44 <fungi> anybody know of anything else coming soon?
19:12:16 <clarkb> I have on my todo list to write a spec for the log server move (I don't think we have one already)
19:12:20 <clarkb> but I haven't had time to write that yet
19:12:25 <fungi> nothing jumping out at me in open infra-specs changes at the moment
19:12:37 <pabelanger> I hope to write up something about ara.openstack.org for PTG
19:12:40 <fungi> yeah, that would be a good one to have underway soon
19:12:48 <fungi> the logserver move
19:13:09 <fungi> ara also sounds fun, we've had a lot of the community lamenting the loss of puppetboard
19:13:35 <fungi> #topic Priority Efforts
19:13:39 <fungi> nothing called out specifically here for this week
19:14:03 <fungi> though the announcements earlier related to the gerrit contactstore removal priority effort
19:14:29 <fungi> anyone have last-minute things they need to cover on a priority effort from our list?
19:15:20 <fungi> cool, that leaves more time for general topics
19:15:34 <fungi> #topic Bandersnatch upgrade (ianw)
19:15:45 <fungi> i saw you hacking on testing this out
19:15:49 <fungi> how'd it go?
19:16:04 <ianw> it works, and our changes seem to have been incorporated correctly into bandersnatch 2.0
19:16:33 <ianw> i tested by running updates on a clone of the pypi mirror, and it looks as you'd expect
19:16:38 <ianw> so, now what :)
19:17:07 <ianw> it technically requires python 3.5, but i tested with 3.4 on the existing trusty host and it works
19:17:18 <fungi> i say live dangerously, upgrade to 2.0 on trusty, and remember this one's a prime candidate to replace with xenial?
19:17:19 <clarkb> I think release team wants us to be slushy but that seems like a safe change since it's tested and we only publish if things work well
19:17:27 <pabelanger> maybe a good time to migrate to xenial first?
19:17:32 <ianw> "technically" == documented as, i mean
19:17:34 <clarkb> so worst case we downgrade and rebuild the mirror
19:18:06 <fungi> the main danger with running it on trusty is that the 2.0 release notes specifically say they plan soon to switch to relying on some 3.5+ features
19:18:21 <fungi> so we may find that 2.1 breaks for us
19:18:36 <ianw> i don't think we need it -- it turns out the UA regex matching that was blocking us on pypi was unintentional
19:18:46 <fungi> which is why upgrading to xenial soon would still be good regardless
19:18:55 <pabelanger> ++
19:19:13 <ianw> so when would soon be, in people's mind.  too much before release?
19:19:13 <fungi> i do think generally keeping up with latest releases of the tools we use is still a "good thing"[tm]
19:19:32 <fungi> if it's non-urgent, we could put it off until post-release
19:19:34 <ianw> and do we want to snapshot & dist-upgrade or start fresh?
19:19:51 <clarkb> ianw: we tend to start fresh. The exception to that rule was lists.o.o to keep its IP addr
19:20:09 <clarkb> in this case I would start fresh, then we can just stop cron on trusty node and let cron run on xenial node
19:20:09 <fungi> i think starting fresh would be fine, because we don't have to swap it into production until it's done populating the dataset anyway
19:20:13 <clarkb> should be straightforward cut over
19:20:36 <clarkb> granted we have to do it for reprepro too right?
19:20:37 <ianw> except i don't want them racing and destroying each other's mirrors, since it's all backed on the same afs dir
19:20:40 <fungi> oh, in fact the data's already in afs
19:20:48 <fungi> so nothing really stateful locally on it anyway
19:20:57 <ianw> we don't really have a global lock, other than turn one box off i guess :)
19:20:58 <clarkb> ianw: ya so you'd turn off the cron entries on old node before enabling them on new node
19:21:02 <clarkb> ya that
19:21:13 <fungi> our "global lock" is zuul, technically
19:21:20 <fungi> we could pause it
19:21:26 <fungi> if really concerned
19:21:51 <clarkb> nah I think it would be fine to just disable cron on one node or turn it off (I think rax makes it hard to turn it off though)
19:21:56 <fungi> insofar as preventing a slightly extended "safe" downtime from impacting jobs
19:22:28 <fungi> clarkb: you can `sudo halt` or `sudo poweroff` a vm in rackspace, i do it pretty regularly
19:23:01 <fungi> though there is the risk they might accidentally boot it back up, if we disable puppet temporarily and comment out cron jobs on it, no real risk
19:23:02 <ianw> ok, so agree to start a fresh host.  should i get started, or leave it for a while?
19:23:08 <clarkb> fungi: and use nova api to reboot it?
19:23:18 <fungi> clarkb: that's what i've done before, yep
19:23:22 <clarkb> if so neat, I hadn't realized that was viable though makes sense
19:23:26 <fungi> well, openstack server reboot
19:23:33 <fungi> but yes
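The poweroff/reboot flow fungi and clarkb describe can be sketched roughly as follows; the hostname and the cron-commenting one-liner are hypothetical illustrations, not the actual production procedure:

```shell
# Hedged sketch of the cut-over discussed above; the hostname and the
# exact cron entries are hypothetical, not the production setup.

# On the old (trusty) node: comment out every active crontab entry so
# the two hosts can't race on the shared AFS-backed mirror volume...
crontab -l | sed 's/^\([^#]\)/#\1/' | crontab -

# ...then halt the VM in place; a halted Rackspace instance stays down.
sudo poweroff

# If it ever needs to come back, boot it through the Nova API:
openstack server reboot mirror-update.openstack.org
```

The sed pipeline simply prefixes `#` to every not-already-commented crontab line, which is reversible once the new node is confirmed working.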
19:23:44 <clarkb> ianw: I'd say its fine to get started. IF we want to be very careful we can run it by release team first
19:23:59 <fungi> i'm behind that plan
19:24:03 <pabelanger> wfm
19:24:22 <ianw> well it would be trusty->xenial and then bandersnatch separately, so it does minimise things as much as possible
19:24:49 <ianw> ok, well it seems we have agreement; I'll start to look at it in the near future
19:25:05 <jeblair> sounds like a good plan.  i do think it's worth running by release team if we do it before release.
19:25:36 <ianw> ok, will loop in
19:26:03 <ianw> that's all then, thanks
19:27:05 <fungi> i guess the other risk with the xenial upgrade is that we do additional mirroring things on there besides just bandersnatch
19:27:30 <fungi> but that's mostly just rsync and some reprepro right? or is reprepro also happening on another server?
19:27:37 <clarkb> ya its those two things
19:27:50 <clarkb> reprepro for debian and ubuntu mirrors and rsync for centos and fedora iirc
19:27:55 <ianw> no reprepro is on there, and some rsync-ing for suse
19:28:01 <fungi> rsync behavior i doubt would change significantly, reprepro maybe
19:28:43 <fungi> but really i don't expect either to be problematic
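As a rough illustration of the rsync side of the mirror updates just mentioned, the invocation amounts to something like the following; the upstream URL and AFS path are made-up stand-ins, not the real job configuration:

```shell
# Illustrative only: an rsync pull of the kind the mirror-update host
# runs for the RPM-based distro mirrors. The source URL and target
# path here are hypothetical stand-ins.
rsync -rlptDvz --delete-after \
    rsync://mirror.example.org/centos/7/ \
    /afs/.openstack.org/mirror/centos/7/
```

`--delete-after` removes files that have disappeared upstream only after the transfer completes, which keeps the mirror consistent for readers during the sync.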
19:28:50 <fungi> okay, thanks ianw!
19:29:20 <fungi> #agreed Check with the release team, but plan to build a replacement mirror update server for Xenial and bandersnatch 2.0+
19:29:28 <fungi> that cover it?
19:29:33 <ianw> ++
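For reference, a bandersnatch 2.x configuration in the spirit of what was agreed might look like the fragment below; every value is illustrative (including the AFS path), not the production settings:

```ini
; Illustrative bandersnatch 2.x config sketch -- not the real settings.
[mirror]
; AFS-backed mirror tree, as discussed above (path is hypothetical)
directory = /afs/.openstack.org/mirror/pypi
master = https://pypi.python.org
timeout = 10
workers = 3
stop-on-error = false
```

Running `bandersnatch mirror` against such a config on the new xenial host, with the cron entries on the trusty host disabled first, matches the cut-over plan sketched in the discussion.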
19:30:22 <fungi> #topic PTG planning (fungi)
19:30:55 <fungi> we've got about 30 minutes left we can discuss some of the stuff on the etherpad, though i'd like to cut to open discussion maybe at 5-10 minutes left in case others have anything else
19:31:18 <fungi> #link https://etherpad.openstack.org/p/infra-ptg-queens Infra planning pad for Queens PTG in Denver
19:31:30 <fungi> oh, correction from last week
19:31:41 <pabelanger> I'll be arriving on Saturday afternoon if anybody else is arriving early
19:32:20 <clarkb> pabelanger: me too. It was either that or really late sunday so saturday afternoon won
19:32:20 <fungi> the ethercalc i linked was for the _PIKE_ ptg, i was ding-dongity-wrong about the one for the upcoming (queens) ptg being up yet
19:32:48 <fungi> realized it just moments after i embarrassed myself asking the coordinators questions about trying to update it
19:33:31 <fungi> my plane is scheduled to cease hurtling through the aether at approximately 16:20 local denver time saturday
19:33:38 <fungi> i think that's "mdt"
19:33:47 <fungi> (utc-0600?)
19:34:49 <fungi> seems like the plan is to then attempt to take the commuter train to the vicinity of the ptg hotel
19:35:09 <fungi> after which i should be available for zuulishness and other infra shenanigans
19:35:43 <pabelanger> Ya, I haven't looked yet how to get to hotel. But recall some services from last time we did mid-cycle in CO
19:36:25 <clarkb> there is commuter train service from the airport to relatively near the event hotel
19:36:43 <fungi> well, the mid-cycle we did in fort collins wasn't really near where the ptg is being held
19:36:58 <fungi> so no idea how much overlap there is for transit
19:37:10 <pabelanger> fungi: right, but I remember the desk of transit service that were offered to other places
19:37:15 <clarkb> Central Park station on the A line then walk about 1/4 mile
19:37:20 <fungi> pabelanger: ahh, yeah that makes sense
19:37:21 <pabelanger> but sounds like train is the way to go
19:37:41 <fungi> 1/4 mile walk from the station sounds fine to me
19:37:54 <fungi> i pack light
19:37:55 <clarkb> $9
19:41:01 <fungi> i went and retagged some of the new additions to the planning etherpad just now as well
19:41:37 <fungi> mmedvede's one on the third-party ci monitoring dashboard feels like it could sort of be either a helproom topic or a hacking topic
19:42:30 <mmedvede> fungi: yeah, there is definitely some hacking left
19:43:01 <dmsimard> For the uninitiated, could we define what the tags mean in the etherpad ?
19:43:06 <mmedvede> +1
19:43:10 <fungi> i'll do that
19:43:16 <dmsimard> hacking, reserve, etc.
19:45:33 <mmedvede> third-party ci dashboard deployment would involve spinning up a VM on infra (or reusing existing one), setting up gerrit account, probably fixing a few puppet quirks along the way. I think that leans towards [hacking] a bit more
19:45:35 <fungi> dmsimard: does what i put in there clarify?
19:46:05 <dmsimard> yes!
19:46:32 <fungi> mmedvede: sure, if what you need is mostly guidance/assistance then the helproom setting is the thing, but if it'll be more group collaboration that's where i think the hacking days are geared
19:47:00 <fungi> though again, it's sort of a fuzzy distinction
19:47:31 <fungi> and as this is the first ptg we've tried this split with (and only the second ptg total for that matter) it's anybody's guess how it'll turn out
19:47:56 <jeblair> it seems like that topic at least wants a bit of discussion first, then possibly hacking...?  (unless some decisions can be made before ptg)
19:48:48 <pabelanger> Ya, was just thinking if that spec might have changed a bit based on zuulv3 things now
19:49:07 <fungi> right, definitely no need to artificially block any effort or discussion on the ptg, but we're all pretty heads-down on zuul and ptg prep stuff so the ptg may be the first opportunity for some to get much available bandwidth from us
19:50:32 <fungi> well, not just zuul and ptg prep stuff for that matter. release time and elections and our usual dog-paddle to try and stay afloat in a sea of crumbling systems ;)
19:51:42 <mmedvede> if hacking time better spent on zuul v3, I can probably handle most of ci dashboard myself with a bit of assistance
19:51:55 <fungi> okay, we can continue to do ptg prep discussion but i'm going to officially switch to open discussion for the remainder of the meeting in case people have other non-ptg stuff they want to bring up
19:51:58 <fungi> #topic Open discussion
19:52:18 <fungi> we ran out of time the past few meetings to do open discussion
19:52:40 <clarkb> I'm going to be afk tomorrow. Babysitting fell through so I'm stuck with the kids
19:52:41 <fungi> so just wanted to make sure to make time this week in case
19:53:16 <fungi> oh, yes, as mentioned i'm completely afk most of the 15th, but will also be travelling the 17th through the 25th and only intermittently available
19:53:53 <fungi> (of august)
19:54:48 <fungi> seems like maybe nobody has anything else? happy to wrap the meeting up a few minutes early for a change
19:56:12 <fungi> seems that way
19:56:22 <jeblair> yep, these systems don't break themselves!
19:56:26 <jeblair> well, they do
19:56:31 <jeblair> but i still like to help
19:56:36 <fungi> it's been a pleasure serving as your ptl for these last two years, thanks everyone!
19:56:50 <jeblair> fungi: thank you!
19:56:53 <mmedvede> thanks fungi, appreciate all the help!
19:57:00 <fungi> and best of luck to whosoever takes on the mantle
19:57:08 <fungi> #endmeeting