19:03:49 #startmeeting infra
19:03:49 Meeting started Tue Feb 14 19:03:49 2017 UTC and is due to finish in 60 minutes. The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:50 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:52 The meeting name has been set to 'infra'
19:03:55 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:04:00 #topic Announcements
19:04:04 o/
19:04:07 #info Final version of the foundation's logo work for the Infra team is now available
19:04:13 #link http://lists.openstack.org/pipermail/openstack-infra/2017-February/005151.html Foundation mascot and logo treatments for Infra team (final version)
19:04:46 clarkb also mentioned to me that if you're going to do something with those, avoid the jpegs. the png versions are devoid of terrible jpeg artifacting
19:05:02 (and there are also vector versions as well)
19:05:14 as always, feel free to hit me up with announcements you want included in future meetings
19:05:21 #topic Actions from last meeting
19:05:28 #link http://eavesdrop.openstack.org/meetings/infra/2017/infra.2017-02-07-19.04.html Minutes from last meeting
19:05:37 Shrews propose a change adding yourself to modules/openstack_project/manifests/users*.pp in system-config
19:05:42 #link https://review.openstack.org/430421 Add David Shrewsbury to users/infra-root
19:05:48 that's been approved!
19:06:00 Shrews is not in here though
19:06:31 #topic Specs approval
19:06:41 we don't seem to have anything new up this week
19:06:53 #topic Priority Efforts
19:07:04 nothing called out specifically here, though there are some zuul/nodepool topics coming up later in the meeting
19:07:23 #topic Zuul related PTG prep (jeblair)
19:07:26 #link http://lists.openstack.org/pipermail/openstack-infra/2017-February/005131.html Zuul info for PTG
19:07:35 * Shrews throws a delayed wave o/
19:07:42 that's an awfully good thread (though i'll admit i haven't finished digesting it yet)
19:07:58 if folks could read that before arriving at the ptg, that would be great
19:08:26 we will at least have a common vocabulary to discuss things :)
19:08:51 we have also been working on the list here
19:08:56 #link https://etherpad.openstack.org/p/pike-ptg-zuul
19:09:18 just to reiterate, as zuul v3 and its related effects are probably the largest undertaking we've worked on as a team in recent years, expect it to be the primary source of activities for us at the ptg
19:09:42 gosh. i hope it works then :)
19:09:50 we're making good progress on that so i think we'll be able to accomplish something. :)
19:10:04 we will make time/space for collaboration on other things, but zuul/nodepool work will likely demand the main room a lot of the time
19:11:03 fungi: i think that about covers it
19:11:10 thanks jeblair, looking forward to it!
19:11:19 #topic Devstack trusty & nodepool (ianw)
19:11:24 #link https://review.openstack.org/433218 Remove distro support based on new libvirt minimum
19:11:28 is there a rough schedule on how much time we'll spend together each day? For those of us that may have things going on in the evenings.
19:11:41 just to sync up on this, mostly for dib purposes ...
19:11:57 BobH: we've got a general ptg planning topic coming up later in the meeting too, we can get into it there
19:11:58 so all our nodepool dsvm testing is xenial right?
even though the builders are actually trusty?
19:12:17 that sounds right
19:12:29 i presume that as we bring zuulv3 online, we'll be using xenial underneath?
19:12:31 ianw: i think that's pretty temporary. we all agreed, i think, that we want to rebuild them on xenial shortly
19:13:13 yes the risk was also thought to be minimal since for non dib things it's all python and rest apis
19:13:24 i expect that it'll be xenial (or at least not trusty) by the time we go into production
19:13:45 *all python 2.7
19:14:00 we could probably start replacing the nb hosts now if we wanted?
19:14:09 ++
19:14:14 jeblair: yup and we could add them then delete old ones
19:14:26 that might not be a bad idea to shake out issues early
19:14:33 yeah, i think we had briefly discussed deploying them initially on xenial but opted to do trusty first and then rolling replace with xenial soon after
19:14:41 maybe we could start that after the ptg....
19:14:45 and i could reduce trusty testing on dib too
19:15:09 shortly post-ptg wfm
19:15:13 after the PTG sounds sane
19:15:42 ok, i'll be happy to help out with that
19:15:53 #agreed Soon after the PTG we should start replacing the 14.04-based nodepool builder instances with 16.04
19:16:05 that was all, just wanted to sync up on the deprecation, thanks
19:16:30 thanks ianw! good to point out the implications of libvirt minimums there
19:17:03 one (less savory) option would also be to test nodepool against stable/ocata until we can switch to master nova's expectations
19:17:17 but i doubt it will come to that
19:17:19 fungi: we test on xenial though
19:17:26 so we meet nova's expectations
19:17:38 oh, fair enough
19:17:48 (this was part of the argument for running on xenial, it would be harder to keep all that stuff working than a simpleish python 2.7 process)
19:18:01 i meant if we wanted to test accurately for production, but yeah
19:18:01 yeah, we are on xenial.
we still run dib functional tests on trusty
19:18:03 then we would over time get there on the daemon side
19:18:43 yep, seems fine to me
19:18:53 to be fair, we don't actually care that nodepool works with nova master, we care that it works with clouds we use. devstack/nova master is a proxy for that. :)
19:19:20 (and possibly a poor one, but at least an optimistic assumption!)
19:19:39 indeed
19:19:49 but yeah, the only sane thing for us to do i think is to continue testing master, so i'm glad we're doing that.
19:20:07 anything else here before we forge on in the agenda?
19:20:26 nope
19:20:41 #topic Redirect developer and docs from HTTP to HTTPS (fungi)
19:20:44 #link https://review.openstack.org/432334 Redirect developer and docs from HTTP to HTTPS
19:21:10 just a quick note that the docs team reached consensus on only serving via https going forward and redirecting all http to https
19:21:16 Yay
19:21:39 that change has a couple of +2s so i'm planning to approve it shortly following the meeting unless there are any objections
19:21:47 cool, that seemed fine, but i figured we should merge it when people were around
19:21:49 woo
19:22:24 i'll be around a good chunk of the evening, but wanted to make sure everyone in subsequent timezones was also aware (and knew what to revert if needed)
19:22:31 ++
19:22:44 \o/
19:22:58 any questions/concerns on this before we move on?
19:23:17 i like the speed and efficiency with which consensus was reached.
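[editor's note] The redirect behavior being approved in the topic above can be described mechanically. This is a hypothetical sketch, not part of the actual change in review 432334 (which lives in the web server configuration in system-config); the function name and URLs are illustrative only:

```python
from urllib.parse import urlsplit


def is_https_redirect(request_url, location):
    """Return True if `location` is the https:// equivalent of an
    http:// request_url: same host, same path, only the scheme changes."""
    req = urlsplit(request_url)
    loc = urlsplit(location)
    return (
        req.scheme == "http"
        and loc.scheme == "https"
        and req.netloc == loc.netloc
        and req.path == loc.path
    )


# e.g. the Location: header a redirected http:// request should receive
print(is_https_redirect("http://docs.openstack.org/infra/",
                        "https://docs.openstack.org/infra/"))  # True
```

A check like this only verifies shape; the revert plan discussed above (knowing which change to revert) remains the real safety net.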
:)
19:23:53 yes, AJaeger is good at squeezing answers out of people ;)
19:24:14 #topic Release bindep 2.2.0 (fungi)
19:24:24 #link http://git.openstack.org/cgit/openstack-infra/bindep/log/ Recent bindep commits
19:25:00 we've had a number of requests to get a new bindep in circulation, particularly for the rhel support change
19:25:37 there is one outstanding change for bindep with no negative reviews we could wedge into this before tagging too, if anyone's interested in looking over it
19:26:07 #link https://review.openstack.org/428394 Extract file finding and processing to functions
19:26:27 I know ihrachys said https://review.openstack.org/#/c/381979/ was important and asked for reviews on the mailing list, so I reviewed and it hasn't gotten a response
19:27:01 yeah, i was waiting for him to resurrect that one before looking at it again
19:27:14 so am not expecting it to make this release
19:27:39 from the already merged changes since 2.1.0 i think this needs to be 2.2.0 (new features, but no backward-incompatibilities)
19:27:51 anyone disagree?
19:28:23 nope
19:28:34 is this too risky at this point in the release cycle? (it's not in global requirements and isn't a dependency of anything, but does get used in lots of jobs)
19:28:59 i feel like our existing functional testing is pretty thorough at least
19:29:10 along with significant unit test coverage
19:29:35 that's a good point, because it isn't constraints managed it isn't super simple to avoid a broken release
19:29:54 ttx: as release ptl now, you might have some input there
19:29:55 but it probably isn't too hard to modify our bindep job macro to install the previous version if necessary
19:30:43 yeah, i do think we have some easy and quick ways to solve any unintended breakage, including just quickly patching and tagging another release (because of no requirements sync baggage), or rolling back nodepool images
19:30:55 any downside to adding it to constraints?
19:31:13 what would be the up-side?
it's not a dependency of anything
19:31:34 and we run it from a pre-built virtualenv baked into our images
19:31:47 true, was just asking
19:31:49 yeah, i think it's used too early in the process for constraints to help
19:32:03 right, putting upper-constraints.txt wouldn't do anything afaik
19:32:08 er, putting it in
19:32:54 okay, i'll double-check with the release team just to be sure they're aware (and let them know that we don't expect any impact)
19:33:11 otherwise planning to move forward with tagging 2.2.0 in the next day or so
19:33:12 right bindep has to run before pip with constraints does in order to get system deps in place
19:34:05 #info Expect bindep 2.2.0 in the next 24-48 hours unless there is strong objection from the Release Management team
19:34:23 #topic General PTG planning (fungi)
19:34:27 #link https://etherpad.openstack.org/p/infra-ptg-pike Infra Pike PTG planning pad
19:34:39 BobH: can you repeat your question from earlier, if you're still around?
19:35:19 re General PTG Planning I wanted to get the test failure debugging session onto the project room schedule
19:35:26 sure, do we have an idea on how much time we'll spend together during the day to allow for time in the evening for other activities
19:35:28 andreaf: if you are around you might have input on ^ too
19:36:07 I was wondering what people thought about when would be most effective, during the first two days of ptg or last 3 days? or maybe do it twice once for each group?
19:36:38 BobH: great question.
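[editor's note] The version-number reasoning in the bindep topic above (new features but no backward-incompatible changes since 2.1.0, hence 2.2.0) is standard semantic versioning. A hypothetical helper, not part of bindep or the release tooling, just to illustrate the rule:

```python
def next_version(current, breaking=False, features=False):
    """Pick the next semver release number: breaking changes bump the
    major version, new features bump the minor, fixes alone bump the patch."""
    major, minor, patch = (int(part) for part in current.split("."))
    if breaking:
        return f"{major + 1}.0.0"
    if features:
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"


# new features, no backward-incompatibilities since 2.1.0
print(next_version("2.1.0", features=True))  # -> 2.2.0
```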
i'm looking now to see if we have specific hours for start/finish published for the conference space
19:37:04 I couldn't find anything other than it was up to the respective teams
19:37:08 fungi: reading
19:37:31 the ethercalc makes it look like 9-5
19:37:36 jeblair: 5:30
19:37:42 I think the 5 block runs until 5:30
19:37:53 rooms will stay open until 6pm
19:38:03 Normal hours are 9-5
19:38:09 clarkb: indeed
19:38:10 thanks jeblair/clarkb/ttx
19:38:11 but overtime is ok
19:38:23 i feel like 9-5 is plenty. don't want to burn everyone out
19:38:55 re: release management I'd rather keep things safe until next Wednesday
19:39:00 slushy slush
19:39:20 9-5 sounds good to me
19:39:26 especially for things that are a bit special
19:39:28 works for me.
19:39:40 ttx: mmm, okay i can put off the new bindep until after release day
19:39:51 i keep forgetting it's next wednesday already
19:40:10 a week from tomorrow
19:40:11 how important are the rhel support updates? are they blocking any (release related) work?
19:40:43 re joint infra/qa session on debugging test failures. Seems like we currently pull more people from cross project work into that so possibly valuable to do it during the first two days. But also valuable to do it during the last 3 days to hopefully make more of the projects self-sufficient in debugging things
19:40:44 jeblair: there shouldn't be any release dependency on bindep since it's intended for developers setting up development environments
19:41:05 and for some convenient automation in our ci
19:41:12 fungi: poorly phrased question -- i mean are they blocking any openstack developer's ability to complete the release of their project?
19:41:24 not to my knowledge, no
19:41:38 cool
19:42:40 if no one else has opinions on that session scheduling I will probably just pencil it in twice for the two groupings of days
19:42:55 clarkb: i wonder how many vertical devs will be around the first 2 days...?
19:43:02 clarkb: yeah, if we only have one joint debugging session it likely needs to be monday/tuesday because it's harder for the various services teams to work out a time they can have representatives available
19:43:12 clarkb: yeah, both as an experiment sounds like a good idea
19:43:16 oh, great point jeblair
19:43:45 ya thinking we want to reach both groups and having it during each chunk of time might be the best way to do that
19:43:45 fungi: if we had to (guess and) pick 1, i would probably actually pick last 3 days...
19:43:58 i somehow ignored that some of the services teams might not have anyone show up until wednesday
19:44:02 also thinking half an hour might not be enough and should go for an hour?
19:44:13 fungi: i agree with your point about difficulty in scheduling though.
19:45:00 there, I penciled in two hours
19:45:17 andreaf: if you can provide feedback on those selections that would be great
19:45:33 https://ethercalc.openstack.org/Pike-PTG-Discussion-Rooms is the URL in question, took me a while to find it
19:45:33 clarkb: i watched you pencil them in! ++
19:45:38 clarkb: i agree that's safest. is there a rough set of examples you expect to work from?
19:46:17 fungi: I was planning on digging through recent ones
19:46:21 probably fridayish
19:46:32 so the kolla jinja2 failure
19:46:36 ttx: is there a reason not to link that spreadsheet in https://wiki.openstack.org/wiki/PTG/Pike/Etherpads (or is it there and i'm not seeing it)?
19:46:38 some of the OOM fails
19:46:45 the libvirt crashes with nested virt
19:47:40 yeah, i guess the odds of unsolved gate failures more or less evaporating before then is slim
19:47:52 and if they do, we should spend those slots celebrating?
;)
19:48:09 but want to focus on general debugging too
19:48:17 and reading logs etc
19:48:37 seems like all of that is covered pretty thoroughly if you just dive into example failures anyway
19:48:39 since a large amount of issues that get run by us seem to be from people just lost in the maze of openstack
19:48:50 ++
19:49:16 but yes, big thumbs-up to trying to reduce the general confusion over where and how stuff gets logged
19:50:09 fungi: no reason, please fix
19:50:19 ttx: thanks, will do momentarily
19:52:56 clarkb: we could just pick something from the uncategorized failure page and dig into it
19:53:09 skimming through our planning pad (and ignoring the stuff i pasted in there as conversation starters), it looks like we have interest in hacking on the next generation of zuul/nodepool (obviously), storyboard stuff (especially promotion and migration tooling), the debugging sessions clarkb has been discussing, maybe some work on firehose, manage-projects stuff (if there's any left to do by then), xenial control-plane migrations and discussing moving some of our control plane to different service providers, and the devstack-gate support for local.conf
19:53:28 mtreinish: ++ I like that
19:54:53 do we have our room assignment so that can be put in the etherpad too? (though there will be maps and plenty of signs aiui so maybe not necessary)
19:55:18 there will be maps
19:55:22 there are always maps
19:55:37 reading the maps is important.
19:55:49 as stated previously i'm trying to avoid over-structuring this since it's the first ptg and we want to see how things might emerge organically, but one idea i've been toying with is to have at least one volunteer for each of the topics we wind up working on to take notes they can feed me with urls to outcomes (documentation of decisions reached, features implemented, results of demos/testing, whatever) so i can more easily produce a summary without missing bits for topics in which i wasn't personally involved
19:56:22 * jeblair wonders what ttx has hidden in the maps
19:57:07 fungi: seems like a good approach
19:57:13 so between now and monday, be thinking about whether there are things you plan to be around for, where you wouldn't mind keeping track of what's going on for posterity (you don't need to be leading the work, just paying attention and making some notes is enough)
19:58:27 also someone noted there would be an atlanta guide for eateries?
19:58:32 jeblair: special prizes
19:58:33 we can break some of the more involved topics out into their own tracking pads too and link them from our main etherpad to keep stuff under control
19:58:39 once I find that I will try to respond to the ml thread about dinner options
19:59:11 i still liked the trader vic's suggestion, but i'm probably just biased by the recommended attire
19:59:28 Pitty Pat's Porch looks very atlanta and is downtown. I'm good with Trader Vic's and we all dress appropriately :)
19:59:35 they have mai tais too
19:59:56 I may have to borrow an adequate shirt
20:00:06 they would have you believe they're the home of the original mai tai
20:00:25 oh, we're out of time
20:00:30 thanks everybody!
20:00:34 #endmeeting