19:03:20 #startmeeting infra
19:03:21 Meeting started Tue May 31 19:03:20 2016 UTC and is due to finish in 60 minutes. The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:22 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:24 The meeting name has been set to 'infra'
19:03:29 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:03:31 fungi: not really
19:03:37 (sorry)
19:03:52 no worries, just making sure you know there's a meeting afoot
19:03:57 #topic Announcements
19:04:05 #info REMINDER: Gerrit downtime on Friday 2016-06-03 at 20:00 UTC
19:04:12 #link http://lists.openstack.org/pipermail/openstack-infra/2016-May/004322.html
19:04:27 for those planning to help with that, remember that's this friday
19:04:57 also we're still missing a review for the openstack-infra/ansible-puppet -> openstack-infra/ansible-role-puppet rename, in case someone wants to volunteer to write it before friday
19:05:29 in unrelated news, anyone have any stats for the rename sprint from last week?
19:05:35 er, upgrade sprint
19:05:48 like how many servers we upgraded? how many left to go?
19:06:20 I think we managed to upgrade about 43 servers last week
19:06:20 (don't all shout at once!)
19:06:36 with 5 (or 6) remaining
19:06:42 pabelanger: I counted closer to 30, we will have to see where our numbers differ :)
19:07:05 20 logstash-workers, 6 elasticsearch, logstash.o.o, eavesdrop, zuul-dev, cacti, and probably a small handful I am forgetting
19:07:08 o/
19:07:15 zuul-merger
19:07:20 graphite.o.o
19:07:29 zm was 8
19:07:30 #info At least 30 servers were upgraded from Ubuntu 12.04 to 14.04 during last week's sprint, with roughly half a dozen remaining
19:07:31 oh right the mergers
19:07:32 gg
19:07:45 apps.o.o
19:07:47 so more than 40?
19:07:49 o/
19:07:54 fungi: sounds like it :)
19:07:57 #undo
19:07:58 Removing item from minutes:
19:08:03 #info At least 40 servers were upgraded from Ubuntu 12.04 to 14.04 during last week's sprint, with roughly half a dozen remaining
19:09:02 #topic Actions from last meeting
19:09:08 #link http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-05-24-19.03.html
19:09:16 set up filtering to direct provider account contact address(es) into a high priority inbox
19:09:30 er, pleia2 set up filtering to direct provider account contact address(es) into a high priority inbox
19:09:47 (my highlighter didn't highlight right the first time)
19:10:18 I think she said she was going to work on that while she was vacationing
19:10:20 pleia2 was travelling this week, so might not be around for the meeting
19:10:24 yeah
19:10:34 i'll punt discussion of that to next week
19:10:41 #action pleia2 set up filtering to direct provider account contact address(es) into a high priority inbox
19:10:51 #action fungi start an ml thread revisiting current priority efforts list
19:10:55 i didn't get to that either
19:11:04 hopefully tonight
19:11:11 fungi: on that I think we can finally remove the nodepool dib effort
19:11:19 that is all done now with the bare-precise nodes gone
19:11:30 excellent, and yes i agree. if you want to propose that for removal i'll happily approve it
19:11:41 the only thing left is tripleo-cloud, they still do a fedora-22 snapshot build
19:11:51 I'm still working to migrate them to centos-7 DIB
19:11:59 oh, hrm
19:12:12 right, i wonder for purposes of our priorities list whether that matters
19:12:15 oh for some reason I thought that was done too, in any case really close
19:12:18 The work is done, but they are having HDD issues (lack of space)
19:12:30 i would like to remove the snapshot code from nodepool
19:12:49 we had to use different centos images with large partition
19:12:54 So, I could use some help getting tripleo on board with the migration :)
19:12:55 yeah, so if we want to consider removing snapshot support from nodepool a priority, then it needs to stay on the list for now
19:13:16 anyway, we can discuss this on the ml
19:13:19 pabelanger, ping me or jesusaur later on that
19:13:30 #topic Specs approval
19:14:18 (very) late addition to the agenda, but i wanted to propose the current state of https://review.openstack.org/314185 (A common task tracker for OpenStack) for council vote
19:14:30 or at least test the waters on how close we are to being able to vote on it
19:14:48 i tried to address the outstanding questions/concerns on it today
19:15:44 :)
19:16:33 and if we don't think it's ready for voting this week, i wanted to at least try to get some consensus on whether we should encourage unofficial projects who are interested in helping beta test it and possibly contribute to development
19:17:24 i know dmsimard is interested in having task tracking for his ara project and was disinterested in using lp for that, and wanted to use storyboard if possible
19:17:47 (that was you, right dmsimard?)
19:18:13 yeah, I'd like to skip an eventual migration from launchpad especially for a new project
19:18:29 I don't like launchpad too much anyway so I'm biased :)
19:18:37 in a move that surprises nobody, I'm in favour of having more contributors :)
19:18:54 * anteaya is not surprised
19:19:07 I don't think I have the bandwidth or the throughput to contribute code to storyboard but if I can contribute feedback, comments and ideas, I sure will
19:19:15 I'm in favour of letting folks who want to use storyboard use storyboard
19:19:24 I'm also in favour of allowing folk like dmsimard to jump on and use it
19:19:33 * fungi wonders if the usa contingent is fighting holiday weekend hangovers
19:19:35 dmsimard: your contributions have been supportive thus far, thank you
19:19:50 fungi: paint fume induced yes
19:19:58 clarkb: mmmm paint fumes
19:20:34 I have no objections to putting the spec to a council vote
19:21:35 I'm hearing no one else object
19:21:51 I am fine with opening it up to the adventurous
19:21:51 * jhesketh is fighting the morning..
19:22:04 oh, i should have linked it for convenience :/
19:22:08 #link https://review.openstack.org/314185
19:22:28 and putting the spec up for voting seems fine, we need to start moving on that early this cycle if we want to have something concrete done by the end of the cycle
19:22:36 +1
19:23:38 #info Council voting on "A common task tracker for OpenStack" is open until 19:00 UTC Thursday, June 2
19:24:10 #topic Priority Efforts: Common OpenStack CI Solution (mmedvede)
19:24:32 some questions here about a couple of reviews for elasticsearch and logstash support?
19:24:39 yes
19:24:43 #link https://review.openstack.org/240011
19:24:53 #link https://review.openstack.org/199790
19:24:56 I have a topic item but I have to step away for around 20 minutes -- sorry. Keep me for the end?
19:25:08 spec had the move of elasticsearch/logstash to openstackci
19:25:09 Be back asap.
19:25:23 the move is half done (patch that adds things to openstackci merged, removal patch did not)
19:25:26 dmsimard: sure (though you're the only other topic besides open discussion, but i can always add something)
19:25:56 fungi: either that or feel free to discuss it, I think the review and the openstack-infra thread contains a sane amount of information
19:26:04 mmedvede: we likely need to reconverge the setups then get the deletion one in
19:26:07 so I'd like advice on whether to abandon the move as it is now, and revert the patch that merged, or split the move into smaller pieces
19:26:22 mmedvede: my suggestion would be to do it per service
19:26:36 elasticsearch, logstash worker, logstash.o.o, and subunit workers as 4 migrations
19:26:52 will be easier to debug issues and revert if we need to
19:27:04 ++
19:27:10 mmedvede: what patch do you want to revert?
19:27:12 should we revert the puppet-openstackci changes and do them clean?
19:27:22 yeah, it's a lot of moving parts, i agree an incremental move would introduce fewer things to have to debug at each step
19:27:30 asselin: I don't think you need to do that, just update them
19:27:42 anteaya: the patch that merged into openstackci
19:27:47 well I think we need to take a look at who is using openstackci
19:28:00 does the target audience want elk too?
19:28:08 mmedvede: have you a url?
19:28:30 that is the only concern, if any third-party CI operators use the pieces already
19:28:41 anteaya, yeah it was part of the original spec
19:28:47 #link openstack common-ci spec https://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html
19:28:57 just a separate change for each class in 199790 right?
19:29:03 mmedvede: I have not come across anyone asking about elk for their third party ci
19:29:18 fungi: yes, I'd prefer to split it, and make it as atomic as possible
19:29:22 I think if we rework https://review.openstack.org/#/c/240011/ to be a short stack of changes that each address a separate service, then we can move forward without a revert
19:29:28 anteaya: I think people are more interested in it for their on-prem CI, maybe less so for CI that reports to us
19:30:01 that is fine
19:30:33 my concern is having to field support questions on it from folks who can't get their zuul figured out, if it is an all-in-one
19:30:54 asselin: and my apologies for not following the spec closely enough to know elk was in it
19:31:45 anteaya, no worries, elk is an acronym. spec as written is only including the 'l' and 'k' portions: Logstash / kibana (optional)
19:32:33 ok, so as I understand, agreed on reverting the patch (maybe send out email just in case) and do the move by service?
19:32:51 mmedvede, no revert, just update
19:33:18 ok, no revert. thanks
19:33:38 okay, so way forward determined?
19:33:45 fungi: affirmative
19:33:55 I think so, update openstackci, do one service at a time for migrations
19:33:58 any core sponsors to help review?
19:34:03 anteaya: if your main concern is fielding support questions, I understand the system pretty well and generally reply to pings
19:34:14 I can help with it though am fairly swamped with life right now
19:34:43 #agreed Split proposed elasticsearch/logstash refactor changes into incremental per-class migrations rather than the current big bang
19:35:37 jesusaur: well yes I know it is less that we don't know how to support it and more the consumers don't know enough to even be able to explain what they are experiencing
19:36:04 anteaya: ahh
19:36:09 jesusaur: yeah
19:36:46 anyway looks like there is agreement on a thing
19:36:48 moving on
19:36:48 #link storyboard for common-ci https://storyboard.openstack.org/#!/story/2000101
19:36:58 i think the stated goal of the openstackci module was to be able to stand up a complete ci system (eventually including log analysis and code review system)
19:37:49 and that it was just starting at the third-party ci use case with plans to extend to the rest of our commonly-consumed ci stack
19:38:03 fungi, ++
19:38:24 anyway, sounds like this is licked for now
19:38:57 #topic Gauging interest in a late-cycle in-person sprint (fungi)
19:39:14 filler topic while we wait for dmsimard to return
19:39:25 fungi: do you have any idea on location and suggested dates?
19:39:41 neutron just announced theirs in cork ireland for 17-19 aug
19:39:46 it's been offered that the organization interested in sponsoring the qa sprint this cycle would be interested in making it a joint qa and infra sprint
19:39:59 my biggest thing would be to have a cohesive topic that we can collaborate on. I really liked how the infra cloud one turned out (maybe we do infra cloud pt 2?)
19:40:14 about to say that thing
19:40:16 :-)
19:40:25 qa isn't in the wiki yet
19:40:31 yeah, this is my concern as well. for one, i don't want us to get locked into the idea that we need to have a "mid-cycle" get together every cycle
19:40:32 where and when is the qa sprint?
19:40:52 mtreinish: ^^
19:41:02 it's still being debated, but the proposal is in the frankfurt area again, latter half of september
19:41:08 * notmorgan would consider showing up to a late cycle sprint.
19:41:09 ohhh
19:41:11 wow that is late
19:41:22 we could consider a zuulv3 sprint -- that may be a good time for it
19:41:28 jeblair: :)
19:41:32 my concern is that this puts it during the final rc weeks, immediately prior to release week
19:41:33 * rcarrillocruz is cool with germany again
19:42:11 i'd be cool with a reduced-scope agenda for it so that it doesn't seem like something everyone in infra feels like they need to show up for
19:42:12 (i'm hoping by then we're on our way to running it)
19:42:21 +1 for zuulv3
19:42:47 is it going to be a single topic to get the most out of it or a couple of them
19:42:47 fungi: I'm available for questions
19:42:54 I
19:42:56 i think the numbering pattern plus renaming could be good too
19:43:02 late september is also during the ptl and tc elections
19:43:13 ttx: awesome, i'll wedge in a topic for further task tracker discussion in just a sec
19:43:24 *I think a sprint on any of our priority efforts would be time well spent
19:43:47 jhesketh: hope you can attend this time, missed you last time
19:44:04 anteaya: yeah, i have a feeling i wouldn't go mostly because i want there to be some coverage for election and release activities, but i also don't want to turn down a hosting offer on behalf of our team if there are people interested in taking advantage of it
19:44:19 fungi: I understand
19:44:21 anteaya: so do I :-)
19:44:25 jhesketh: :)
19:44:50 fungi: but this hosting opportunity doesn't have to be the only opportunity for an infra sprint
19:44:57 it is just _an_ opportunity
19:45:37 I have no objection to Germany but it'll be more expensive for most people
19:46:22 anteaya: agreed. i'd also be strongly in favor of doing a virtual sprint for one of our priority efforts (including zuul v3)
19:46:50 i feel like last week's upgrade sprint went awesomely (did i mention that earlier? thanks everyone for pitching in on that!)
19:47:20 3 europe trips in less than 6 months does seem like a bit much but I am happy to go if we think it would be useful
19:48:01 On the election if I could get some reviews on https://review.openstack.org/#/q/topic:add_elections it'd be good to do soon
19:48:07 okay, so if i get back to the people offering to host a joint qa/infra thing in late september in germany, any guess how many people i should estimate from infra?
19:48:17 fungi: fair enough
19:48:31 clarkb: 3?
19:48:51 jhesketh: I get on a plane for openstack day prague on saturday
19:48:55 jhesketh: agreed
19:49:03 Ah cool
19:49:09 I'm back o/
19:49:26 fungi: Depending on travel approval, I'm a maybe
19:49:29 jhesketh: foundation staff are taking advantage of openstack events in hungary and czech republic to co-opt a meeting space for our quarterly meeting
19:49:36 i depend on approval
19:49:37 fungi: I would make an effort to go but can't promise anything (budget, timing etc)
19:49:49 obviously it's more likely i get approval on EMEA than US/APJ
19:50:10 sounds like maybe we would be in the 5-10 infra attendees range?
19:51:10 that sounds about right. ft collins was a little closer to 20 iirc
19:51:36 okay, i'll iterate with them and i guess position it as a zuul v3 polishing sprint?
19:51:49 or infra cloud second installment?
19:52:00 I think both of those make good sprints
19:52:01 those were the only concrete ideas i saw pitched so far
19:52:05 ++
19:52:22 fungi: why not just priority efforts?
19:52:54 that doesn't seem like a focus
19:53:10 also, i bet in a few weeks/months we will have a better idea of which might be effective...
19:53:27 i guess for some people it is, but ideally our priority efforts on the whole are hopefully things we're generally working on anyway
19:53:27 (current/future infra-cloud hardware state -- progress/roadmap on zuulv3)
19:53:57 It allows others to come and pick their work, but I do see the advantages of being narrow
19:54:25 waldorf or dresden
19:54:34 yeah, if everyone picks their work then we'll likely be sitting in a room together more or less working on what we always work on
19:55:20 can we get to dmsimard's thing and come back to the sprint?
19:55:42 yep
19:55:44 \o
19:55:50 thanks
19:56:03 #topic Ara project interest (dmsimard)
19:56:13 #link https://review.openstack.org/321226
19:56:35 dmsimard: i guess you put this on the agenda because you're curious about whether infra use cases are compelling for it?
19:56:52 Hi, tl;dr ARA is a project born out of RDO because we have a lot of CI that installs and tests openstack-related things through ansible.
19:57:07 And we have logs like these: https://dmsimard.com/files/ansible-jenkins.txt
19:57:10 These are not fun.
19:57:40 that looks familiar to me
19:57:40 well, i can tell that tool can be useful in the world where we automate one-off playbooks against our infra servers... looks very cool
19:57:44 oh yeah, we ran into the "all puppet stdout in one line" thing too :)
19:57:57 We're seeking a home for the project and believe that OpenStack is a good one for it
19:58:10 Projects like jenkins job builder were also born out of OpenStack requirements
19:58:34 dmsimard: starting as an unofficial project in openstack's gerrit, sounded like, and maybe looking to join infra later or something?
19:59:05 Up to you guys, really - a first step would be to join the ecosystem first, yes.
19:59:19 I added the item on the agenda due to lack of -2 or +2, really, though
19:59:30 i think it might prove really useful in zuulv3, but i don't think we're ready to think about the details yet. i'd love to see it generating some static stuff from devstack-gate tempest runs though -- that might be useful, shouldn't be too hard, and would show it off a bit
19:59:42 Because the project needs a home and if that home isn't openstack, we can house it as a RDO project
19:59:48 we might want to think about using it in connection with/instead of puppetboard
19:59:57 (for infra system-config reporting)
20:00:00 jeblair: yes, ara is fairly analogous to puppetboard
20:00:07 yeah, it seems like it might be a fit for becoming an infra project, _if_ we get to the point where we're depending on it. i'd just hesitate to adopt it officially if we're not making use of it
20:00:38 also we're out of time
20:00:44 thank you fungi
20:00:52 thanks for your time :)
20:00:54 ttyl folks
20:01:07 anyway, i guess anyone interested in maybe implementing ara, review dmsimard's change and get up with him
20:01:13 thanks everyone!
20:01:18 #endmeeting
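For context on the final topic: ARA works by registering an Ansible callback plugin that records each playbook run to a database, which its web UI then renders (the "fairly analogous to puppetboard" comparison above). A minimal configuration sketch, assuming ARA is pip-installed; the plugin path below is hypothetical and depends on where your Python environment places the package:

```ini
; ansible.cfg -- enable ARA's callback plugin so playbook runs are recorded
; (path is an assumption; locate the ara package in your own install)
[defaults]
callback_plugins = /usr/lib/python2.7/site-packages/ara/plugins/callbacks
```

With this in place, any subsequent `ansible-playbook` invocation is recorded, and the results can be browsed through ARA's web interface instead of scrolling raw console logs like the one linked at 19:57:07.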