19:01:34 #startmeeting infra
19:01:34 o/
19:01:35 Meeting started Tue Jan 23 19:01:34 2018 UTC and is due to finish in 60 minutes. The chair is ianw. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:36 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:38 The meeting name has been set to 'infra'
19:02:00 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:01 o/
19:02:13 #topic Announcements
19:02:27 #info Clarkb and fungi missing January 23rd meeting due to travel.
19:02:48 thus you have me, and pabelanger ready to point out what i've messed up :)
19:03:06 #info OpenDev being folded into Vancouver Summit. Note this changes the dates and location for the event.
19:03:15 #link http://www.opendevconf.com/
19:03:59 this one is all about ci/cd, so probably of interest to most people here
19:04:14 #info Vancouver Summit CFP open. Submit your papers and/or volunteer to be on the programming committee.
19:04:14 #link http://lists.openstack.org/pipermail/openstack-dev/2018-January/126192.html
19:04:40 there is also a ML post by dmsimard about ideas for talks
19:04:55 #link http://lists.openstack.org/pipermail/openstack-infra/2018-January/005808.html
19:05:07 dmsimard: ^
19:05:20 Oh opendev is folded into summit
19:05:41 I guess it'll replace the concept of open source days which I actually asked about
19:06:03 not sure, I think it is still up in the air
19:06:06 o/
19:06:14 but, just heard about the move today myself
19:06:18 https://twitter.com/OpenStack/status/955793345162432515
19:06:35 It's weird because their answer is a bit ambiguous
19:06:49 Twitter was probably not the right medium to ask in the first place
19:06:52 ¯\_(ツ)_/¯
19:07:03 "The Vancouver Tracks are a bit different than usual and are open to fellow open source projects. We're encouraging @ansible and @kubernetesio talks to be submitted! Send by Feb 8!"
19:07:35 a lot of the sydney talks were about 60% kubernetes anyway :)
19:08:07 yeah, on the opendev call on friday they suggested that some amount of redirecting ci/cd talks to opendev may take place, and opendev would be 2 days that run concurrently with the summit.
19:08:24 so some of us may be double booked.
19:08:32 Fun
19:08:34 we may have to try to schedule forum sessions during the not-opendev days
19:09:21 sounds like WIP, but if you've got a good talk, it will find a home somewhere
19:09:25 i guess regardless, submit talks and they should eventually end up in the right place :)
19:09:36 #info Zuul has mailing lists http://lists.openstack.org/pipermail/openstack-infra/2018-January/005800.html
19:09:41 subscribe!
19:09:59 #info PTG topic brainstorming happening now https://etherpad.openstack.org/p/infra-rocky-ptg
19:11:04 not much more to say, all ideas welcome
19:11:15 * mordred waves
19:11:15 #topic Actions from last meeting
19:11:29 #link http://eavesdrop.openstack.org/meetings/infra/2018/infra.2018-01-16-19.01.log.txt
19:11:47 if we've grepped correctly ...
19:12:01 * ianw corvus patchbomb project name removals from zuul.yaml
19:12:30 o/
19:13:02 i'm not sure i saw that?
19:13:08 i will do that this week now that we have more ram
19:13:16 #action corvus patchbomb project name removals from zuul.yaml
19:13:44 * ianw clarkb to take pass through old zuul and nodepool master branch changes to at least categorize changes
19:13:52 corvus: I updated the docs projects already - and whenever I edit a zuul.yaml, remove the project stanza. So, 10+ less projects ;)
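
[Note: for context on the project-name removals above, a minimal sketch of an in-repo .zuul.yaml project stanza once the now-redundant "name:" line is dropped (Zuul infers the project name for in-repo configuration); the job names are purely illustrative:

    - project:
        check:
          jobs:
            - openstack-tox-pep8
        gate:
          jobs:
            - openstack-tox-pep8
]
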
19:14:16 AJaeger: cool, being lazy is working out!
19:14:20 ;)
19:14:38 ianw: on that front, i did abandon about 40 zuul changes with nice messages
19:15:14 there's still probably about 60 for folks to go through and think about more (i only did a superficial pass based on commit message subject, basically)
19:15:46 ok, i think that's a constant background thing for every project
19:16:04 yeah, we just have a surplus at the moment because of feature branch myopia
19:17:25 ok, do we want to put it back for next time to remember to talk about it again?
19:18:06 sure
19:18:07 #action clarkb / corvus / everyone / to take pass through old zuul and nodepool master branch changes to at least categorize changes
19:18:23 * ianw corvus send email to openstack-dev, openstack-infra, zuul-announce about feature branch merge
19:18:42 i think we're all good there ...
19:18:43 that's done
19:18:51 #topic Specs approval
19:19:03 #link http://lists.openstack.org/pipermail/openstack-infra/2018-January/005779.html Cleanup old specs.
19:19:51 I still need to look at my old specs myself
19:20:16 i don't think there was any disagreement with that mail
19:20:41 I don't believe so
19:21:46 let's put in an action item to abandon them, so we don't forget
19:22:00 #action clarkb abandon specs per: http://lists.openstack.org/pipermail/openstack-infra/2018-January/005779.html
19:22:30 other than that, i didn't see anything updated past the great jenkins removal of '17
19:23:06 #topic Priority Efforts
19:23:17 #topic Storyboard
19:23:26 mordred: anything of note?
19:24:17 let's loop back if so
19:24:17 ianw: nothing from my end
19:24:23 thanks :)
19:24:25 #topic Zuul v3
19:25:31 I think the server replacement has gone ok?
19:26:24 ianw: yes, works nicely as far as I can see
19:26:31 let's wait for corvus' stress test ;)
19:27:19 and then the other thing is the feature branch merge
19:27:33 i imagine if you had changes in flight you've already moved them over
19:28:02 #link https://etherpad.openstack.org/p/zuulv2-outstanding-change-triage
19:28:20 pabelanger: ^ anything to say on that ... just go through it as people have time?
19:29:11 ianw: yah, I think when people have time. feature/zuulv3 has been merged into master already
19:30:32 ok, anything i'm missing?
19:31:21 nothing else from me
19:31:41 i saw a couple of newish waves too; note zuul has its own meeting https://wiki.openstack.org/wiki/Meetings/Zuul
19:32:03 #topic General topics
19:32:03 #topic Handling review spam (frickler)
19:32:15 #link http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-01-22.log.html#t2018-01-22T11:53:20
19:32:32 #link http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-12-22.log.html#t2017-12-22T09:57:39
19:32:37 not sure we need a big discussion here, just wanted to make sure that folks are aware of that
19:33:18 thanks... after reviewing the irc logs, i think that's all good.
19:33:55 i'm not sure we need to suspend accounts based on activity like that, but unresponsiveness after contacting them seems like a really good reason.
19:34:19 yep, do we think it was robotic?
19:34:52 i do seem to get random +1's as others do, but it seems quite sporadic
19:35:36 no, didn't look like a bot, too irregular for that
19:36:19 cool, well if you're doing odd things and we can't contact you to discuss it, i think that's pretty solid
19:36:33 and i think we're all in agreement discussion is the first step
19:36:38 so yay us :)
19:37:13 #topic aarch64 update
19:37:45 So linaro have very kindly offered us aarch64 resources for CI
19:37:53 #link https://review.openstack.org/536187
19:38:12 mordred: ^ if you can take another pass over that (linaro credentials)
19:38:14 yay us!
19:38:50 on the infra side, I'm trying to get a Xenial image up so we can see just how far off our deployment stuff is from working
19:39:10 i got some notes back today about some more flags to try booting the image, and will take that up today
19:39:28 cool
19:39:40 i'll see how it all goes and send an email or reviews or whatever
19:39:42 ianw: sure nuff!
19:39:45 ianw: +1, let us know
19:40:36 so, in theory, we should be able to bring up a nodepool builder and mirror in our control plane account
19:40:55 ianw: Xinliang can be helpful as, living in China, he is closer to your timezone
19:41:19 the other side is ... what is the nodepool builder going to build
19:41:34 are we planning on running all jobs on the cloud or a subset of them?
19:41:46 pabelanger: kolla subset to start with
19:41:48 ianw: kolla guys want to build images afaik
19:42:03 pabelanger: they've offered to be first
19:42:11 my thought is that it would be a separate node type, and we'd start like that
19:43:05 hrw: and run tests
19:43:11 xenial-aarch64 or something like that, and define separate jobs
19:43:18 would we gate jobs on the cloud? Or just a pipeline specific to aarch64?
19:43:54 i imagine it would be a very soft start
19:43:59 pabelanger: from discussions with fungi and inc0, we start small
19:44:06 gema has expressed that capacity and performance are still scaling up
19:44:06 let other teams try if they want
19:44:10 and then grow capacity
19:44:30 yep, we are working on that in the background
19:44:59 sure, I'm thinking we might have it set up like we do with tripleo-test-cloud-rh1 today: jobs opt into a pipeline with aarch64 nodes
19:45:12 but, we aren't there yet
19:46:12 yep, i think that's a good option as a model too
19:46:28 first let's get the base stuff running
19:46:52 so yeah, it's going to be easiest if we can get dib building images just like everywhere else
19:47:05 to that end we've been working on gpt/efi support
19:47:10 #link https://etherpad.openstack.org/p/dib-efi
19:48:36 ianw: thanks again for doing most of the work
19:48:48 I'm pretty confident we'll get that going, which would be easier than going back to snapshot images or something
19:48:56 finding out how dib is layered was making me mad
19:50:03 agree, getting DIB working should be the first step
19:50:15 pabelanger: why a pipeline?
19:50:28 oh, because we only have one cloud?
19:50:33 with the current set of patches we are at the point where resulting images should be bootable
19:50:51 corvus: yah, single cloud question
19:51:27 cool, any plans yet to add a second? or see how things go first?
19:51:38 corvus: the second is in the making
19:51:48 more hw is coming to the first or the second or the third
19:51:58 and we'll add capacity as it reaches us and we spin up new clouds
19:52:23 corvus: depending on the member they ship us hardware to different datacenters and it is our job to make it available via the developer cloud amongst other projects
19:52:29 cool. i think once we have some confidence that a cloud blip won't cause projects to get stuck, we can fold the new pipeline back into the normal check pipeline.
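
[Note: a rough sketch, not from the meeting, of what the separate node type and opt-in jobs discussed above could look like in Zuul configuration; the label, nodeset, and job names are illustrative and nothing here was settled:

    - nodeset:
        name: xenial-aarch64
        nodes:
          - name: primary
            label: xenial-aarch64

    - job:
        name: kolla-build-images-aarch64
        parent: base
        nodeset: xenial-aarch64
        voting: false
]
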
19:52:34 and that keeps gema happy cause servers stop stacking on her desk
19:52:43 hrw: absolutely
19:52:58 corvus: new cloud being spun up in the UK shortly after Queens releases
19:53:00 corvus: +1
19:53:08 oh, the one other thing i wanted to run by the peanut gallery is if we should switch infra images to GPT-based boot by default
19:53:13 gema: and it will be running queens ;D
19:53:26 hrw: hopefully
19:53:31 that would effectively make it the default
19:53:38 gema: I do not take other option
19:53:59 ianw: and do we want rootfs on vda1 or do we not care
19:54:19 right, it involves extra partitions
19:54:36 the good part is GPT allows us to number partitions like we want
19:54:45 cirros has ESP as part15 and rootfs as part1
19:55:00 i'm thinking that cinder, etc, are all pretty isolated from this
19:55:09 so we can have: ESP/15 + BSP/14 + rootfs/1
19:55:37 that way all projects assuming root=/dev/vda1 will still work
19:56:47 i don't imagine anyone has an immediate response, but keep it in mind
19:57:09 anyway, thanks to hrw and gema and you'll be hearing more about it!
19:57:20 ianw: thank you for your help :D
19:57:28 ianw: we may merge gpt stuff without affecting x86 images
19:57:29 y'all, actually :D
19:57:34 #topic Open discussion
19:57:46 ianw: then add block-image-aarch64.yaml which will be gpt: esp+root
19:57:57 I think infracloud might be dead, trying to see if I can get onto the ilo of the controller
19:58:08 but we are expecting servers to be reclaimed by HPE
19:58:17 pabelanger: a timely reminder of why we like cloud diversity :)
19:58:23 indeed
19:58:26 hrw: yeah, i think that's a first step
19:58:39 ianw: 'do no harm' way
19:59:48 ianw: so we get stuff to get aarch64 running and in the meantime x86 devs can try gpt images
20:00:15 99% of x86 VM instances are i440fx/bios ones anyway
20:00:49 ok, that's time for us ... come by #openstack-infra for more discussions!
20:00:55 #endmeeting
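
[Note: for reference, a rough sketch of the "gpt: esp+root" layout hrw mentions for a block-image-aarch64.yaml, loosely following diskimage-builder's block-device YAML format; the partition sizes and type codes are assumptions, and this does not attempt the ESP/15 + BSP/14 + rootfs/1 numbering discussed above:

    - local_loop:
        name: image0

    - partitioning:
        base: image0
        label: gpt
        partitions:
          - name: ESP
            type: 'EF00'
            size: 550MiB
            mkfs:
              type: vfat
              mount:
                mount_point: /boot/efi
                fstab:
                  options: "defaults"
                  fsck-passno: 2
          - name: root
            type: '8300'
            size: 100%
            mkfs:
              type: ext4
              mount:
                mount_point: /
                fstab:
                  options: "defaults"
                  fsck-passno: 1
]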