19:01:34 <ianw> #startmeeting infra
19:01:34 <frickler> o/
19:01:35 <openstack> Meeting started Tue Jan 23 19:01:34 2018 UTC and is due to finish in 60 minutes.  The chair is ianw. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:36 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:38 <openstack> The meeting name has been set to 'infra'
19:02:00 <ianw> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:01 <AJaeger> o/
19:02:13 <ianw> #topic Announcements
19:02:27 <ianw> #info Clarkb and fungi missing January 23rd meeting due to travel.
19:02:48 <ianw> thus you have me, and pabelanger ready to point out what i've messed up :)
19:03:06 <ianw> #info OpenDev being folded into Vancouver Summit. Note this changes the dates and location for the event.
19:03:15 <ianw> #link http://www.opendevconf.com/
19:03:59 <ianw> this one is all about ci/cd, so probably of interest to most people here
19:04:14 <ianw> #info Vancouver Summit CFP open. Submit your papers and/or volunteer to be on the programming committee.
19:04:14 <ianw> #link http://lists.openstack.org/pipermail/openstack-dev/2018-January/126192.html
19:04:40 <pabelanger> there is also a ML post by dmsimard about ideas for talks
19:04:55 <ianw> #link http://lists.openstack.org/pipermail/openstack-infra/2018-January/005808.html
19:05:07 <ianw> dmsimard: ^
19:05:20 <dmsimard> Oh opendev is folded into summit
19:05:41 <dmsimard> I guess it'll replace the concept of open source days which I actually asked about
19:06:03 <pabelanger> not sure, I think it is still up in the air
19:06:06 <corvus> o/
19:06:14 <pabelanger> but, just heard about the move today myself
19:06:18 <dmsimard> https://twitter.com/OpenStack/status/955793345162432515
19:06:35 <dmsimard> It's weird because their answer is a bit ambiguous
19:06:49 <dmsimard> Twitter was probably not the right medium to ask in the first place
19:06:52 <dmsimard> ¯\_(ツ)_/¯
19:07:03 <ianw> "The Vancouver Tracks are a bit different than usual and are open to fellow open source projects. We're encouraging @ansible and @kubernetesio talks to be submitted! Send by Feb 8!"
19:07:35 <ianw> a lot of the sydney talks were about 60% kubernetes anyway :)
19:08:07 <corvus> yeah, on the opendev call on friday they suggested that some amount of redirecting cicd talks to opendev may take place.  and opendev would be 2 days that run concurrently with the summit.
19:08:24 <corvus> so some of us may be double booked.
19:08:32 <dmsimard> Fun
19:08:34 <corvus> we may have to try to schedule forum sessions during the not-opendev days
19:09:21 <ianw> sounds like WIP, but if you've got a good talk, it will find a home somewhere
19:09:25 <corvus> i guess regardless, submit talks and they should eventually end up in the right place :)
19:09:36 <ianw> #info Zuul has mailing lists http://lists.openstack.org/pipermail/openstack-infra/2018-January/005800.html
19:09:41 <ianw> subscribe!
19:09:59 <ianw> #info PTG topic brainstorming happening now https://etherpad.openstack.org/p/infra-rocky-ptg
19:11:04 <ianw> not much more to say, all ideas welcome
19:11:15 * mordred waves
19:11:15 <ianw> #topic Actions from last meeting
19:11:29 <ianw> #link http://eavesdrop.openstack.org/meetings/infra/2018/infra.2018-01-16-19.01.log.txt
19:11:47 <ianw> if we've grepped correctly ...
19:12:01 * ianw corvus patchbomb project name removals from zuul.yaml
19:12:30 <hrw> o/
19:13:02 <ianw> i'm not sure i saw that?
19:13:08 <corvus> i will do that this week now that we have more ram
19:13:16 <corvus> #action corvus patchbomb project name removals from zuul.yaml
19:13:44 * ianw clarkb to take pass through old zuul and nodepool master branch changes to at least categorize changes
19:13:52 <AJaeger> corvus: I updated the docs projects already - and whenever I edit a zuul.yaml, I remove the project stanza. So, 10+ fewer projects ;)
19:14:16 <corvus> AJaeger: cool, being lazy is working out!
19:14:20 <AJaeger> ;)
19:14:38 <corvus> ianw: on that front, i did abandon about 40 zuul changes with nice messages
19:15:14 <corvus> there's still probably about 60 for folks to go through and think about more (i only did a superficial pass based on commit message subject, basically)
19:15:46 <ianw> ok, i think that's a constant background thing for every project
19:16:04 <corvus> yeah, we just have a surplus at the moment because of feature branch myopia
19:17:25 <ianw> ok, do we want to put it back on the agenda for next time so we remember to talk about it again?
19:18:06 <pabelanger> sure
19:18:07 <ianw> #action clarkb / corvus / everyone / to take pass through old zuul and nodepool master branch changes to at least categorize changes
19:18:23 * ianw corvus send email to openstack-dev, openstack-infra, zuul-announce about feature branch merge
19:18:42 <ianw> i think we're all good there ...
19:18:43 <corvus> that's done
19:18:51 <ianw> #topic Specs approval
19:19:03 <ianw> #link http://lists.openstack.org/pipermail/openstack-infra/2018-January/005779.html Cleanup old specs.
19:19:51 <pabelanger> I still need to look at my old specs myself
19:20:16 <ianw> i don't think there was any disagreement with that mail
19:20:41 <pabelanger> I don't believe so
19:21:46 <ianw> let's put in an action item to abandon them, so we don't forget
19:22:00 <ianw> #action clarkb abandon specs per: http://lists.openstack.org/pipermail/openstack-infra/2018-January/005779.html
19:22:30 <ianw> other than that, i didn't see anything updated past the great jenkins removal of '17
19:23:06 <ianw> #topic Priority Efforts
19:23:17 <ianw> #topic Storyboard
19:23:26 <ianw> mordred: anything of note?
19:24:17 <ianw> let's loop back if so
19:24:17 <mordred> ianw: nothing from my end
19:24:23 <ianw> thanks :)
19:24:25 <ianw> #topic Zuul v3
19:25:31 <ianw> I think the server replacement has gone ok?
19:26:24 <AJaeger> ianw: yes, works nicely as far as I can see
19:26:31 <AJaeger> let's wait for corvus' stress test ;)
19:27:19 <ianw> and then the other thing is the feature branch merge
19:27:33 <ianw> i imagine if you had changes in flight you've already moved them over
19:28:02 <ianw> #link https://etherpad.openstack.org/p/zuulv2-outstanding-change-triage
19:28:20 <ianw> pabelanger: ^ anything to say on that ... just go through it as people have time?
19:29:11 <pabelanger> ianw: yah, I think when people have time. feature/zuulv3 has been merged into master already
19:30:32 <ianw> ok, anything i'm missing?
19:31:21 <pabelanger> nothing else from me
19:31:41 <ianw> i saw a couple of newish waves too; note zuul has its own meeting https://wiki.openstack.org/wiki/Meetings/Zuul
19:32:03 <ianw> #topic General topics
19:32:03 <ianw> #topic Handling review spam (frickler)
19:32:15 <ianw> #link http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-01-22.log.html#t2018-01-22T11:53:20
19:32:32 <ianw> #link http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-12-22.log.html#t2017-12-22T09:57:39
19:32:37 <frickler> not sure we need a big discussion here, just wanted to make sure that folks are aware of that
19:33:18 <corvus> thanks... after reviewing the irc logs, i think that's all good.
19:33:55 <corvus> i'm not sure we need to suspend accounts based on activity like that, but unresponsiveness after contacting them seems like a really good reason.
19:34:19 <ianw> yep, do we think it was robotic?
19:34:52 <ianw> i do seem to get random +1's as others do, but it seems quite sporadic
19:35:36 <frickler> no, didn't look like a bot, too irregular for that
19:36:19 <ianw> cool, well if you're doing odd things and we can't contact you to discuss it, i think that's pretty solid grounds
19:36:33 <ianw> and i think we're all in agreement discussion is the first step
19:36:38 <ianw> so yay us :)
19:37:13 <ianw> #topic aarch64 update
19:37:45 <ianw> So linaro have very kindly offered us aarch64 resources for CI
19:37:53 <ianw> #link https://review.openstack.org/536187
19:38:12 <ianw> mordred: ^ if you can take another pass over that (linaro credentials)
19:38:14 <hrw> yay us!
19:38:50 <ianw> on the infra side, I'm trying to get a Xenial image up so we can see just how far off our deployment stuff is from working
19:39:10 <ianw> i got some notes back about some more flags to try booting the image, and will take that up today
19:39:28 <pabelanger> cool
19:39:40 <ianw> i'll see how it all goes and send an email or reviews or whatever
19:39:42 <mordred> ianw: sure nuff!
19:39:45 <gema> ianw: +1, let us know
19:40:36 <ianw> so, in theory, we should be able to bring up a nodepool builder and mirror in our control plane account
19:40:55 <hrw> ianw: Xinliang can be helpful, as living in China he is closer to your timezone
19:41:19 <ianw> the other side is ... what is the nodepool builder going to build
19:41:34 <pabelanger> are we planning on running all jobs on the cloud, or a subset of them?
19:41:46 <gema> pabelanger: kolla subset to start with
19:41:48 <hrw> ianw: kolla guys want to build images afaik
19:42:03 <gema> pabelanger: they've offered to be first
19:42:11 <ianw> my thought is that it would be a separate node type, and we'd start like that
19:43:05 <gema> hrw: and run tests
19:43:11 <ianw> xenial-aarch64 or something like that, and define separate jobs
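(A minimal sketch of how such a separate node type might look in nodepool v3's YAML; the provider, cloud, and flavor names below are hypothetical, not from the meeting:)

    labels:
      - name: xenial-aarch64
        min-ready: 1

    providers:
      - name: linaro-cloud              # hypothetical provider name
        cloud: linaro                   # hypothetical clouds.yaml entry
        diskimages:
          - name: ubuntu-xenial-arm64   # built by the aarch64 nodepool builder
        pools:
          - name: main
            max-servers: 8
            labels:
              - name: xenial-aarch64
                diskimage: ubuntu-xenial-arm64
                flavor-name: aarch64-standard  # hypothetical flavor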
19:43:18 <pabelanger> would we gate jobs on the cloud? Or just a pipeline specific to aarch64?
19:43:54 <ianw> i imagine it would be a very soft start
19:43:59 <gema> pabelanger: from discussions with fungi and inc0, we start small
19:44:06 <ianw> gema has expressed that capacity and performance are still scaling up
19:44:06 <gema> let other teams try if they want
19:44:10 <gema> and then grow capacity
19:44:30 <gema> yep, we are working on that in the background
19:44:59 <pabelanger> sure, I'm thinking we might have it set up like we do with tripleo-test-cloud-rh1 today, where jobs opt into a pipeline with aarch64 nodes
19:45:12 <pabelanger> but, we aren't there yet
19:46:12 <ianw> yep, i think that's a good option as a model too
19:46:28 <hrw> first let's get the base stuff running
19:46:52 <ianw> so yeah, it's going to be easiest if we can get dib building images just like everywhere else
19:47:05 <ianw> to that end we've been working on gpt/efi support
19:47:10 <ianw> #link https://etherpad.openstack.org/p/dib-efi
19:48:36 <hrw> ianw: thanks again for doing most of the work
19:48:48 <ianw> I'm pretty confident we'll get that going, which would be easier than going back to snapshot images or something
19:48:56 <hrw> finding out how dib is layered was making me mad
19:50:03 <pabelanger> agree, getting DIB working should be the first step
19:50:15 <corvus> pabelanger: why a pipeline?
19:50:28 <corvus> oh, because we only have one cloud?
19:50:33 <hrw> with the current set of patches we are at the point where the resulting images should be bootable
19:50:51 <pabelanger> corvus: yah, single cloud question
19:51:27 <corvus> cool, any plans yet to add a second?  or see how things go first?
19:51:38 <gema> corvus: the second is in the making
19:51:48 <gema> more hw is coming to the first or the second or the third
19:51:58 <gema> and we'll add capacity as it reaches us and we spin up new clouds
19:52:23 <gema> corvus: depending on the member, they ship us hardware to different datacenters, and it is our job to make it available via the developer cloud amongst other projects
19:52:29 <corvus> cool.  i think once we have some confidence that a cloud blip won't cause projects to get stuck, we can fold the new pipeline back into the normal check pipeline.
19:52:34 <hrw> and that keeps gema happy cause servers stop stacking on her desk
19:52:43 <gema> hrw: absolutely
19:52:58 <gema> corvus: new cloud being spun up in the UK shortly after Queens releases
19:53:00 <pabelanger> corvus: +1
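(For reference, the opt-in model discussed above might look roughly like this in zuul v3 configuration; the pipeline and job names are hypothetical, only the label wiring is the point:)

    - nodeset:
        name: xenial-aarch64
        nodes:
          - name: primary
            label: xenial-aarch64       # served only by the Linaro cloud

    - job:
        name: kolla-build-aarch64       # hypothetical job
        nodeset: xenial-aarch64

    - project:
        name: openstack/kolla
        check-arm64:                    # hypothetical opt-in pipeline
          jobs:
            - kolla-build-aarch64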
19:53:08 <ianw> oh, the one other thing i wanted to run by the peanut gallery is if we should switch infra images to GPT based boot by default
19:53:13 <hrw> gema: and it will be running queens ;D
19:53:26 <gema> hrw: hopefully
19:53:31 <ianw> that would effectively make it the default
19:53:38 <hrw> gema: I won't accept any other option
19:53:59 <hrw> ianw: and do we want rootfs on vda1, or do we not care?
19:54:19 <ianw> right, it involves extra partitions
19:54:36 <hrw> the good part is GPT allows us to number partitions however we want
19:54:45 <hrw> cirros has ESP as part15 and rootfs as part1
19:55:00 <ianw> i'm thinking that cinder, etc, are all pretty isolated from this
19:55:09 <hrw> so we can have: ESP/15 + BSP/14 + rootfs/1
19:55:37 <hrw> that way all projects assuming root=/dev/vda1 will still work
19:56:47 <ianw> i don't imagine anyone has an immediate response, but keep it in mind
19:57:09 <ianw> anyway, thanks to hrw and gema and you'll be hearing more about it!
19:57:20 <gema> ianw: thank you for your help :D
19:57:28 <hrw> ianw: we may merge gpt stuff without affecting x86 images
19:57:29 <gema> y'all, actually :D
19:57:34 <ianw> #topic Open discussion
19:57:46 <hrw> ianw: then add block-image-aarch64.yaml which will be gpt: esp+root
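(A rough sketch of what such a block-image-aarch64.yaml could contain, assuming the GPT syntax being developed on the dib-efi etherpad lands as drafted; explicit partition numbering to match the cirros-style ESP/15 + rootfs/1 layout is still an open question, so treat this as illustrative only:)

    - local_loop:
        name: image0
    - partitioning:
        base: image0
        label: gpt
        partitions:
          - name: ESP                   # EFI system partition (hrw would number it 15)
            type: 'EF00'
            size: 8MiB
            mkfs:
              type: vfat
              mount:
                mount_point: /boot/efi
                fstab:
                  options: "defaults"
                  fsck-passno: 2
          - name: root                  # kept first so root=/dev/vda1 assumptions hold
            type: '8300'
            size: 100%
            mkfs:
              type: ext4
              mount:
                mount_point: /
                fstab:
                  options: "defaults"
                  fsck-passno: 1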
19:57:57 <pabelanger> I think infracloud might be dead, trying to see if I can get onto the ilo of the controller
19:58:08 <pabelanger> but we are expecting servers to be reclaimed by HPE
19:58:17 <corvus> pabelanger: a timely reminder of why we like cloud diversity :)
19:58:23 <pabelanger> indeed
19:58:26 <ianw> hrw: yeah, i think that's a first step
19:58:39 <hrw> ianw: 'do no harm' way
19:59:48 <hrw> ianw: so we get the stuff to get aarch64 running, and in the meantime x86 devs can try gpt images
20:00:15 <hrw> 99% of x86 VM instances are i440fx/bios ones anyway
20:00:49 <ianw> ok, that's time for us ... come by #openstack-infra for more discussions!
20:00:55 <ianw> #endmeeting