19:01:07 <clarkb> #startmeeting infra
19:01:08 <openstack> Meeting started Tue Jun 12 19:01:07 2018 UTC and is due to finish in 60 minutes.  The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:12 <openstack> The meeting name has been set to 'infra'
19:01:14 <ianw> o/
19:01:22 <clarkb> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:13 <clarkb> The openstack summit berlin CFP opened. Feel free to submit to that. I'm happy to review submissions if you want extra eyeballs on them too
19:02:24 <mordred> o/
19:03:06 <clarkb> Also I've been asked to point out that the openstack user survey is open now. If you run an openstack cloud they love to hear from you
19:03:37 <clarkb> oh I didn't set the topic to announcements
19:03:43 <clarkb> #topic Announcements
19:03:51 <clarkb> #info The openstack summit berlin CFP opened. Feel free to submit to that. I'm happy to review submissions if you want extra eyeballs on them too
19:04:01 <clarkb> #info I've been asked to point out that the openstack user survey is open now. If you run an openstack cloud they love to hear from you
19:04:29 <clarkb> #topic Actions from last meeting
19:04:39 <clarkb> #link http://eavesdrop.openstack.org/meetings/infra/2018/infra.2018-06-05-19.01.txt minutes from last meeting
19:05:01 <clarkb> I don't see any
19:05:30 <clarkb> #topic Specs Approval
19:06:07 <clarkb> mordred: any chance that your spec will be up soon and we can try and focus on it?
19:06:24 <clarkb> (it's not an easy spec to write so I don't want to rush, just curious where it's at)
19:06:50 <mordred> yes! I swear
19:06:55 <fungi> i guess i can probably push up a change to move the limesurvey spec to completed now that the system-config docs for it have merged
19:07:04 <clarkb> fungi: that would be great, thanks
19:07:54 <cmurphy> these are also up for review https://review.openstack.org/563592 https://review.openstack.org/449933
19:08:09 <clarkb> I'm beginning to think that devoting a good chunk of ptg time to this modernization of config mgmt will be helpful, so making sure we are ready to proceed there would be good
19:08:13 <anteaya> thank you
19:08:22 <clarkb> #topic Priority Efforts
19:08:34 <clarkb> #link https://review.openstack.org/563592 Upgrade to puppet 4
19:08:40 <corvus> clarkb: that's a great idea
19:08:48 <clarkb> #undo
19:08:49 <openstack> Removing item from minutes: #link https://review.openstack.org/563592
19:08:59 <clarkb> I think I got the links backwards /me tries to get them right
19:09:01 <cmurphy> yeah the other one
19:09:23 <anteaya> fungi and ianw thanks for the reviews
19:09:24 <clarkb> #link https://review.openstack.org/#/c/563592/ Mark puppet4 prelim testing spec as done
19:09:34 <clarkb> #link https://review.openstack.org/449933 upgrade to puppet 4
19:10:22 <clarkb> #info Clarkb's current thought process is to work through the config management modernization spec this summer so that we can devote PTG time to working on it
19:10:57 <cmurphy> i'm banking on my side being done long before ptg
19:11:14 <clarkb> cmurphy: ya I think that is doable
19:11:36 <mordred> cmurphy: I wouldn't be surprised if your bit is done before I finish updating the spec
19:11:38 * mordred sighs
19:11:43 <cmurphy> mordred: lol
19:12:25 <clarkb> cmurphy: mordred: anything else we can help with in the near future?
19:12:35 <anteaya> mordred, have you considered putting what you have in an etherpad so others can help with the first draft?
19:13:08 <cmurphy> clarkb: there are some easy-ish puppet-4 fixups in topic:puppet-4 if you want to help clear my queue
19:13:48 <clarkb> #link https://review.openstack.org/#/q/topic:puppet-4+status:open clear out cmurphy's puppet4 queue
19:14:16 <cmurphy> and feedback on https://review.openstack.org/572856 https://review.openstack.org/572861 and https://review.openstack.org/572218 would be good, those should be noops
19:14:55 <clarkb> #link https://review.openstack.org/#/c/572856/ https://review.openstack.org/572861 https://review.openstack.org/572218 noop changes for puppet 4
19:15:01 <cmurphy> there are a few modules i want to get proper tests for, like puppet-nodepool, but some of the other nodes we could start migrating nowish; just need to coordinate with rooters
19:15:02 <mordred> clarkb: not on my end - I need to throw _something_ up for comment
19:15:32 <clarkb> cmurphy: ok, I'll make an effort to get through the review backlog today on that topic
19:15:44 <cmurphy> tyty
19:15:57 <clarkb> #topic General Topics
19:16:46 <clarkb> It was a busy week last week. Or maybe ending friday with firefighting makes it feel that way.
19:17:10 <clarkb> The good news is that we have managed to update nodepool with a fairly large change to its schema and zuul is now running with ansible 2.5
19:17:24 <corvus> there are some problems.  some jobs are failing with permission errors on /tmp/console-None.  we're working through these in #zuul.  i think we at least understand the problem now.
19:17:40 <corvus> and we have some partial fixes standing by
19:17:45 <anteaya> yay?
19:17:49 <clarkb> If people have questions about weird job behavior, sharing that with zuul is important to make sure that zuul covers all the corner cases here
19:17:49 <mordred> it's ... fun
19:18:17 <corvus> yeah, any more cases of console-None would be helpful to know about
19:20:17 <clarkb> I'd also like to remind the group that if there is interest in picking up some of the stuff Paul was working on that would be helpful. Specifically Gerrit 2.15 upgrade, zuul/nodepool zk cluster migration, and control plane upgrades
19:20:32 <clarkb> You don't have to pick it all off either :) any help is helpful
19:21:14 <clarkb> We also have new cloud credentials for bringing on a platform9 managed cloud running in packet
19:21:39 <clarkb> if you are interested in going through the bring a new cloud online process let me or fungi know.
19:21:47 <clarkb> otherwise I expect I'll work on that
19:21:55 <corvus> clarkb: can you say words about 'platform9' and 'packet' that will help me understand their use in that sentence?
19:22:03 <fungi> oh, yep, if anyone is interested in picking up the packethost/platform9 nodepool provider addition from me, lmk. it's all set up in clouds.yaml now but we still need mirror/cache server and some initial testing
19:22:23 <mordred> corvus: I had much the same thought :)
19:22:39 <clarkb> corvus: platform9 is running an openstack cloud on packethost hardware for us to use for test resources
19:22:51 <clarkb> right now it is just x86 hardware but packet also has arm64 stuff that we may get access to aiui
19:22:59 <corvus> clarkb: is it our own cloud, or a shared cloud?
19:23:05 <clarkb> corvus: it is our own cloud aiui
19:23:25 <fungi> they're gunning for the minimal 100 node quota from the start, so it'll be a helpful addition if we can get it up
19:23:34 <anteaya> does this cloud have a corresponding geographical location?
19:23:47 <fungi> it's in the republic of texas, i believe
19:23:53 <anteaya> thank you
19:24:05 <anteaya> hopefully away from a flood plain
19:24:05 <fungi> might be in mordred
19:24:05 <ianw> fungi: i can bring up mirror server today if you like, then you can test easier during US hours?
19:24:10 <fungi> mordred's basement?
19:24:18 <clarkb> corvus: the layering here makes it a little weird. our openstack is running within their baremetal cloud
19:24:23 <clarkb> corvus: but the openstack we talk to is only for us
19:24:28 <anteaya> anything is possible for both mordred and texas
19:24:31 <corvus> platform9 buys packet bare-metal-servers-as-a-service and manages openstack clouds on them?
19:24:38 <ianw> i imagine it's better to turn it on when people are awake
19:24:39 <clarkb> yup
19:24:40 <fungi> ianw: well, i haven't written a change to configure the mirror server yet (i think i need to do that next?)
19:24:49 <mordred> fungi: if it's in my basement, I suppose I'll need to try to figure out where a basement might be hiding
19:25:06 <corvus> mordred: it's the room with all the fire ants
19:25:20 <fungi> a.k.a. "storm cellar"
19:25:23 <mordred> corvus: there is NOT enough room in that room to keep servers
19:25:29 <mordred> do you know how many fire ants there are?
19:25:44 <clarkb> fungi: ianw I don't think we need a new change for that, just to boot the instance
19:25:46 <ianw> fungi: if the creds are in puppetmaster, it should just follow our standard host bringup?
19:25:55 * fungi is now reminded of the scene from pi where the protagonist is finding ants crawling on the server's processors
19:26:12 <anteaya> flee to the boat
19:26:13 <clarkb> the other step that would be helpful soonish is having nodepool upload images to it so that we can run performance tests on it with our real images (or we can just upload one out of band)
19:26:15 <fungi> ianw: yeah, it's already in there
19:26:36 <fungi> ianw: i suppose it's just cacti which needs adding
19:26:59 <fungi> (once the server is launched and in dns)
19:27:24 <clarkb> corvus: this sort of nested openstack is apparently a semi common thing in the wild
19:27:48 <ianw> fungi: yep, also assuming no other weird corner case things are triggered, which is not out of the realms of possibility :)
19:28:31 <fungi> it _is_ openstack, after all ;)
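[For anyone picking this up: once the credentials are in clouds.yaml on the puppetmaster, a quick sanity check with openstacksdk can confirm auth before the mirror and nodepool work. This is only a sketch; the cloud name "packethost" is an assumption, use whatever entry name actually landed in clouds.yaml.]

    # Minimal sanity check of new cloud credentials via openstacksdk.
    # The cloud name "packethost" is assumed; match the clouds.yaml entry.
    import openstack

    conn = openstack.connect(cloud='packethost')

    # Listing flavors and images confirms auth and catalog access before
    # launching the mirror server or adding the region to nodepool.
    for flavor in conn.list_flavors():
        print(flavor.name, flavor.vcpus, flavor.ram)
    for image in conn.list_images():
        print(image.name, image.status)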
19:29:45 <ianw> i'm making some progress on some grafana & monitoring things
19:29:49 <ianw> http://grafana02.openstack.org/d/ACtl1JSmz/afs?orgId=1&from=now-7d&to=now
19:30:11 <clarkb> ianw: I was going to ask, will you be switching grafana.o.o to the new 02 host and deleting the old one?
19:30:15 <clarkb> or is that still a work in progress
19:30:17 <ianw> corvus & auristor's suggestion of tracking the creation time of readonly volumes got me what i wanted in "how long since we released a volume"
19:30:41 <ianw> clarkb: i'm finding it quite handy to hack on that with puppet disabled ATM, while i sort out adding some new things to grafyaml, so WIP
19:30:59 <clarkb> ok
19:31:39 <ianw> if i could get one more sanity check on the puppet for that would be good -> https://review.openstack.org/573493
19:32:10 <clarkb> #link https://review.openstack.org/573493 Puppet for afs monitoring on mirror-update
19:32:24 <clarkb> ianw: did you have to edit the firewall rules? if so did that get pushed up as a change as well?
19:32:56 <ianw> i also have a nodepool one for fixing up the dib-image-list / image-list output with a browser, something i'm fairly frequently pointing people to.  i know we've got other dashboard stuff in the works, but good to have it working now
19:33:04 <ianw> #link https://review.openstack.org/#/c/573053/
19:33:17 <ianw> clarkb: yeah, i did the firewall rules a while ago when first testing
19:35:08 <clarkb> ianw: for the "how long ago did $volume release" stat, if we can include the frequency of releases on that volume too that might be helpful
19:35:40 <clarkb> and whether or not that will update even if the mirror has no new content
19:36:54 <ianw> i think that the "vos release" will always create a new R/O volume; it's the creation time of that which is being tracked
19:38:00 <ianw> i'm sure that with some more data, you could do some graphite derivative() type thing to track the "acceleration" of results?
19:38:20 <clarkb> ya that might be what we want
19:38:54 <fungi> oh, speaking of afs i have a change up to provide an example of a manual content alteration (e.g., deleting some files)
19:38:56 <fungi> #link https://review.openstack.org/572821 Document an example for deleting content from AFS
19:38:59 <ianw> it is basically a stair-graph where we're putting in the unix timestamp of the creation date each stats run
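[A rough illustration of the stat ianw describes, not the actual mirror-update script: read the creation time of the read-only volume, push it as a gauge, and let graphite/grafana derive age or release frequency, e.g. with derivative(). The volume name, the "Creation" line format in vos examine output, and the statsd host and metric names below are all assumptions.]

    # Hypothetical sketch: track "time since last vos release" from the
    # creation time of a read-only volume. Volume name, `vos examine`
    # output format, statsd endpoint and metric names are assumptions.
    import re
    import subprocess
    import time

    import statsd

    def ro_creation_timestamp(volume='mirror.ubuntu.readonly'):
        out = subprocess.check_output(
            ['vos', 'examine', volume], encoding='utf-8')
        # Assumes a line like: "    Creation    Tue Jun 12 06:10:00 2018"
        match = re.search(r'Creation\s+(\w+ \w+ +\d+ [\d:]+ \d{4})', out)
        return time.mktime(
            time.strptime(match.group(1), '%a %b %d %H:%M:%S %Y'))

    client = statsd.StatsClient('graphite.openstack.org', 8125)
    created = ro_creation_timestamp()
    # Gauging the raw timestamp each stats run gives the stair-step graph;
    # a derivative() on top of it (or of a release counter) shows how often
    # the volume is actually releasing.
    client.gauge('afs.mirror.ubuntu.ro_creation', int(created))
    client.gauge('afs.mirror.ubuntu.seconds_since_release',
                 int(time.time() - created))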
19:40:36 <clarkb> Unrelated to AFS I've started work to have zuul run jobs for kata.
19:40:47 <clarkb> #link https://github.com/kata-containers/proxy/pull/74 https://review.openstack.org/#/c/573748/
19:41:23 <clarkb> Right now I'm waiting on vexxhost to get nested virt deployed, but otherwise I think it is mostly working. Rough plan is to use this as a "here it can work for you" proof of concept and if kata want to move forward we would deploy them in their own tenant
19:41:36 <clarkb> current test is running as third party check pipeline job in openstack tenant
19:42:12 <corvus> clarkb: sounds like a plan.  any blockers there?
19:42:31 <clarkb> corvus: we'll want to clean up the zuul + github behavior I found (I promised a bug for that which I still need to write)
19:42:37 <clarkb> otherwise just waiting for cloud resources I think
19:42:49 <clarkb> I'm told those will be available this week
19:44:08 <clarkb> as an exercise it is interesting to work on CI tooling for a project that doesn't share a background with the ones we work in
19:44:29 <clarkb> your assumptions have to change, and lack of set -x is frightening :)
19:45:02 <anteaya> clarkb, I'd be interested in what assumptions you discovered you were carrying
19:45:08 <clarkb> the good news is that things like our distro mirrors appear to have just worked
19:45:29 <anteaya> clarkb, doesn't have to be now
19:45:39 <clarkb> anteaya: ya can talk to that outside the meeting
19:45:44 <anteaya> thanks
19:46:11 <clarkb> Anything else before I open the floor?
19:46:38 <clarkb> #topic Open Discussion
19:47:13 <clarkb> We had a bunch of rain and now a bunch of sun and my allergies are going crazy. So I may be a bit slow for a bit while the plants assault my immune system
19:47:26 <anteaya> :(
19:47:48 <mordred> clarkb: I've been having a similar experience
19:48:30 <anteaya> as a Canadian citizen I have learned I am now being attacked by the US executive
19:48:33 <anteaya> which is new for me
19:48:43 <anteaya> so I am defending myself
19:48:47 <clarkb> anteaya: the rest of us still like canada
19:48:54 <anteaya> not that anyone here will be surprised by this
19:48:59 <anteaya> clarkb, I know
19:49:13 <anteaya> which is why my defence is only in response to the attack
19:49:16 <anteaya> no one else
19:49:26 <anteaya> clarkb, and thank you
19:49:58 * fungi secedes from the usa and offers his yard up as a new canadian territory
19:50:07 <anteaya> fungi, thank you
19:50:18 <anteaya> fungi plant a cactus in my name?
19:50:30 <clarkb> well if there is nothing else, I think we can talk secession in other channels and I can get lunch 10 minutes early :)
19:50:35 <clarkb> thank you everyone.
19:50:40 <anteaya> thank you clarkb
19:50:45 <clarkb> #endmeeting