19:02:43 #startmeeting infra
19:02:44 Meeting started Tue Jun 14 19:02:43 2016 UTC and is due to finish in 60 minutes. The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:45 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:48 The meeting name has been set to 'infra'
19:03:03 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:03:09 #topic Announcements
19:03:18 #info CI outage Friday, June 17 for ~2 hours so we can upgrade the operating system for zuul.openstack.org and logs.openstack.org, time TBD
19:03:23 pabelanger: did you have an announcement worked up for that yet? (sent?)
19:03:43 fungi: no, I have not. Apologies. I will do that now
19:03:49 an etherpad link I mean
19:03:51 o/
19:04:02 i won't #action it, since the window is prior to next meeting anyway
19:04:12 agreed
19:04:28 also rehashed from last week's announcements for those who might have missed it...
19:04:30 #info Tentative late-cycle joint Infra/QA get together to be held September 19-21 (CW38) at SAP offices in Walldorf, DE
19:04:32 check with mkoderer and oomichi for details
19:04:34 #link https://wiki.openstack.org/wiki/Sprints/QAInfraNewtonSprint
19:05:14 also here's a good one!
19:05:29 #info StoryBoard Bug Squash! 22nd and 23rd of June
19:05:36 #link http://lists.openstack.org/pipermail/openstack-infra/2016-June/004402.html
19:05:36 \o/
19:05:37 fungi: yeah, we will write the details on the wiki with mkoderer and notify you :)
19:06:14 thanks, oomichi!
19:06:23 anyone know any other important upcoming things i've missed announcing before i move on?
19:06:24 I'm hoping to have gerrit and storyboard interacting by bug squash
19:06:33 \o/
19:06:35 :D
19:06:37 at least on test servers
19:07:20 #topic Actions from last meeting
19:07:22 we didn't formally #action it, but at the last meeting I said I'd draft up the wiki upgrade spec, review here: https://review.openstack.org/#/c/328455/ (also topical from discussions in channel this morning)
19:07:34 pleia2: yay
19:07:56 pleia2: ahh, yep, i'd like to have that up for council approval next week, but doesn't mean we can't get started implementing sooner
19:07:57 anteaya: thanks for your review
19:08:05 #link http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-06-07-19.03.html
19:08:08 thanks for writing the spec
19:08:09 o/
19:08:12 (none)
19:08:23 #topic Specs approval
19:08:47 none this week, but as mentioned above, wiki upgrade spec is up for review and should hopefully be ready for approval next week
19:08:55 #link https://review.openstack.org/328455
19:09:06 #topic Priority Efforts
19:09:33 i started a ml thread reviewing the current priority list, proposing some cleanup, asking for suggestions
19:09:41 #link http://lists.openstack.org/pipermail/openstack-infra/2016-June/004374.html
19:10:13 i meant to have an update to infra-specs encapsulating the feedback up for today, but ran out of time so expect it in the next day or two hopefully
19:10:30 i started looking at the ansible-puppet issues
19:10:33 in the meantime, please follow up there on the ml if you have anything to add
19:10:40 thanks rcarrillocruz!
19:10:43 already pushed https://review.openstack.org/#/c/327789/ , for syslogging on puppetmaster
19:11:08 #topic Priority Efforts: Use Diskimage Builder in Nodepool (pabelanger)
19:11:11 puppetdb on 3.x and all will go shortly
19:11:25 i approved the change to close this one out a few minutes ago
19:11:31 #link https://review.openstack.org/329080
19:11:32 \o/
19:11:37 Yes!
19:11:37 congrats all
19:11:39 another one down!
19:11:46 indeed, congrats to everybody on that
19:11:49 excellent work, everyone
19:12:14 is there any redux/post-mortem needed on it?
19:12:38 fungi: maybe as part of removing snapshot builds from nodepool we can write something up
19:12:45 "hey this happened this is why" etc
19:13:06 i think this is one of those which started out a little vague and grew too many tentacles which should have been treated as related/prerequisite priority efforts
19:13:24 https://review.openstack.org/#/c/325339/ for deprecating
19:14:49 in the future, we should keep in mind that scope changes on existing priority efforts might be better off as additional (perhaps small) specs which we can just declare as a de facto priority because of being a blocking prerequisite for a priority we've approved. it makes it a little easier to be able to see the progress being made and keeps us from ending up with specs which linger forever without
19:14:51 clear reasons for taking so long
19:15:33 i'm partially to blame on this one for deciding to tack on bindep implementation in jobs without creating a separate blocking spec to cover that
19:15:57 glean was another massive thing that jammed the works
19:16:10 yep, same sort of situation
19:16:11 not strictly required but we went down the rabbit hole so far that we didn't really have a choice but to dig out the other side
19:16:15 well, i pushed that too. it was administratively easy to do at the time, but is perhaps worth the extra effort as you say
19:17:03 anyway, awesome all of this is done and we can now reap the benefits of using our own images and _only_ our own images in ci
19:17:13 thanks again everyone
19:17:20 ++
19:17:22 yay glean
19:17:28 rcarrillocruz: :D
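
[editor's note: for readers unfamiliar with the workflow this effort moved nodepool onto, here is a minimal sketch of building an image by hand with diskimage-builder. The element list, release, and output name are illustrative assumptions, not the project's actual nodepool image definitions.]

```
# Install diskimage-builder, which provides the disk-image-create command.
pip install diskimage-builder

# Build a qcow2 from elements; nodepool automates this build plus the
# upload to each cloud provider. Element names here are examples only.
DIB_RELEASE=trusty disk-image-create -o ubuntu-trusty-test \
    ubuntu-minimal \
    simple-init \
    vm
# Result: ubuntu-trusty-test.qcow2, the same kind of artifact that
# replaces the deprecated snapshot builds.
```
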
19:17:35 #topic Priority Efforts: Infra-cloud (crinkle)
19:18:03 so when we shut down the servers in fort collins some of them were on kilo and some were on liberty
19:18:16 we now have code to update the puppet module for mitaka
19:18:31 i recall we decided our upgrade strategy was just to do a full redeploy
19:18:35 and rcarrillocruz is redeploying everything as we speak
19:18:45 so i think it makes sense to just merge the mitaka code and have mitaka?
19:18:46 well, fixing inventory
19:18:54 we haven't gone that far to redeploy (yet)
19:19:01 okay, right, not up to the redeploy stage yet
19:19:03 yeah i think we're some work away from redeploy but still
19:19:21 fungi: nice try -- next time you say that, it'll be true! :)
19:19:27 baremetal00.vanilla.ic.openstack.org is online and managed by ansible / puppet again too
19:19:30 i'd say yes to mitaka
19:19:30 crinkle: what is the downside of merging the mitaka code?
19:19:33 I've been using the mitaka puppet modules elsewhere, they're in good shape
19:19:35 but i agree, trying to redeploy with mitaka and working out obvious bugs would be a great use of the environment while we have it (until the upcoming move)
19:19:44 :-)
19:19:46 o/
19:19:49 pabelanger suggested i check and see if anyone had strong feelings about upgrading all to mitaka as opposed to equalizing to liberty
19:19:52 anteaya: none that i can see really
19:19:54 which, unofficially, will happen end of next month
19:19:59 I think we should mitaka
19:20:06 crinkle: great, thanks then I am in favour of merging
19:20:06 I'm for mitaka too
19:20:07 we'll pass on info as we get it
19:20:18 yeah, redeploy means we don't really care that there's a release skip for some of the systems in the mix
19:21:03 so no objection fromme. if anything, complete agreement
19:21:09 okay sounds good
19:21:14 ++
19:21:25 #agreed upcoming infra-cloud redeploy should be 100% mitaka
19:21:28 #link https://review.openstack.org/#/c/312319/
19:21:33 #link https://review.openstack.org/#/c/304946/
19:21:44 fromme means pious apparently
19:21:45 thanks crinkle, rcarrillocruz!
19:21:51 ooh
19:22:04 i like when my terrible typos have actual meaning in other languages
19:22:13 me too, it was a good one
19:22:24 anything else on this?
19:22:28 not from me
19:23:26 neither from me
19:23:53 #topic Updated Gerrit GC test results (zaro)
19:24:01 #link https://www.mail-archive.com/openstack-infra@lists.openstack.org/msg04374.html
19:24:19 hopefully everyone's been following the ml thread
19:24:53 are there further objections to or test requests before turning on garbage collection and pruning?
19:24:59 so i got more info. wondering if we wanted to make any changes to git repos?
19:25:30 and was the suggestion to do git gc or jgit gc? noting that we'll presumably still need to git gc on the mirrors as well?
19:26:11 maybe decide on interval as well?
19:27:02 jeblair: ^ you were probably the most vocal as far as specific ideas of what should be tested first
19:27:12 the existing repack is set for weekly.
19:27:44 zaro seems to have covered everything! :)
19:27:51 i’ve prepared a change to switch out repack with gc (same interval), #link https://review.openstack.org/#/c/329566
19:29:13 yeah, i'm satisfied we've got sufficient diligence on this, given the risk such a change potentially carries
19:29:42 zaro: thanks so much for all the work trying out combinations and reporting stats
19:29:58 np. glad to do :)
19:30:08 and hashar (not in channel) for pitching in details from wikimedia foundation's deployment
19:30:43 i’ve also prepared changes to upgrade to latest gerrit 2.11 but wanted to wait until the gc change and see what happens
19:31:16 so the proposed change is to stick with git gc on the gerrit server rather than jgit gc in the background, since we can make it match what we do on the cgit servers, right?
19:31:32 i like that approach
19:31:34 makes sense to me.
19:32:02 #agreed Let's start doing git garbage collection on review.openstack.org and the git.openstack.org servers
19:32:10 anything else on this?
19:32:28 nope.
19:32:44 ohh wait, weekly ok?
19:33:01 it's what we've done so far. i think it's a fine first step
19:33:08 cool.
19:33:27 if we decide we want to even it out more, we can make it more frequent later now that we're at least comfortable it's a working solution
19:33:35 ++
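
[editor's note: a rough shell illustration of the agreed weekly git gc pass; the actual change is the puppet-managed cron in https://review.openstack.org/329566, and the repository path and options below are assumptions for illustration only.]

```
#!/bin/sh
# Hypothetical /etc/cron.weekly/git-gc on a git/gerrit server: walk every
# bare repository and run a full garbage collection (which also prunes
# unreachable objects), replacing the previous weekly git repack.
for repo in /var/lib/git/*/*.git; do
    git --git-dir="$repo" gc --quiet || echo "gc failed for $repo" >&2
done
```
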
19:33:49 #topic Open discussion
19:33:53 I should have added this to the agenda, but I could use a bit of help with the translations checksite, re: http://specs.openstack.org/openstack-infra/infra-specs/specs/translation_check_site.html
19:34:31 I've been working with Frank Kloeker, but we've run into some snags I'm not sure how to solve
19:34:59 we have a new puppet module for installing devstack and pulling in the translations in openstack-infra/puppet-translation_checksite
19:35:20 on a good day, it pretty much works (still needs a patch for a longer timeout)
19:35:35 also, on the earlier topic of the wiki upgrade and improvements, i'm hoping to propose including it in the newton priorities list once the spec is approved. i at least intend to be spending a fair amount of effort getting it into better shape
19:35:56 I have a couple of reviews I'd like looked at, more gentoo stuff
19:35:59 on most days, ./stack.sh will fail to build in some way and we're left with a broken instance
19:36:02 i'm curious if there are proposed topics for the infra/qa midcycle? i might have missed it
19:36:12 5 weeks old :P https://review.openstack.org/#/c/310865/
19:36:17 crinkle: suggestions were zuul v3 or infra-cloud again
19:36:29 crinkle: but nothing solid yet. open to ideas!
19:36:36 I'd appreciate some feedback on my IRC spec (https://review.openstack.org/#/c/319506/1) and maybe some guidance on what to do next? Or just wait 'till the spec is approved and go from there?
19:36:44 ianw: you think I can unmark that wip?
19:36:49 fungi: thank you
19:37:20 so our plan of refreshing devstack weekly as outlined in the spec is complicated 1) by the devstack builds failing and 2) by the downtime when we rebuild devstack, though weekly for a couple hours seems reasonable assuming the build is successful
19:37:39 also need reviews for https://review.openstack.org/#/c/318994/ to clean up the nodepool dib elements
19:37:54 pleia2: do you think devstack is the right tool for this?
19:37:57 I was wondering if we're going about this wrong, since we have tooling to build devstack instances already, and this is essentially a read-only instance from the perspective of translators
19:38:01 docaedo: yeah, after looking into it, and alternatives, and watching some of the same discussion unfold for the python infra community i think i'm close to being in favor
19:38:23 pleia2: do you have thoughts on what approach would work better?
19:38:25 clarkb: we need a build of openstack that we can apply the in-progress translations to during the last month before release, devstack seemed like the right thing
19:38:54 pleia2: why do the devstack builds fail?
19:38:58 would a disk image builder image make more sense?
19:39:00 prometheanfire: have you built a gentoo image? we can work on integrating it but the first step is to know that dib with all the infra elements is going to actually output a usable qcow2
19:39:04 pleia2: ok I just ask because if it fails more often than not that seems like potentially a bad idea. But may be worth figuring out those fails too
19:39:09 fungi: thanks - since you're the spirit guide for it I'll just pester you occasionally for additional feedback ;)
19:39:40 (theoretically, devstack should almost always work -- it's gated)
19:39:43 ianw: I'll test tonight or tomorrow
19:39:45 docaedo: was there a demo for the interface somewhere?
19:39:51 jeblair: last time it was a puppet timeout, another time it was network weirdness on my instance, we have no fault tolerance in the module so any problems are fatal
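
[editor's note: pleia2's point above is that a single stack.sh failure leaves the checksite broken; the following is a minimal sketch of retry-based fault tolerance for the weekly refresh, assuming a devstack checkout in /opt/stack/devstack. The real logic lives (or would live) in openstack-infra/puppet-translation_checksite and may take a different approach.]

```
#!/bin/bash
# Hypothetical weekly refresh wrapper for the translations checksite.
set -u
cd /opt/stack/devstack    # assumed devstack checkout location

for attempt in 1 2 3; do
    ./unstack.sh || true              # tear down any partial previous run
    if ./stack.sh; then
        echo "devstack refresh succeeded on attempt $attempt"
        exit 0
    fi
    echo "stack.sh failed (attempt $attempt), retrying" >&2
done
echo "devstack refresh failed after 3 attempts; leaving site down" >&2
exit 1
```
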
19:39:53 ianw: https://review.openstack.org/#/c/318994/ shouldn't need to wait though
19:40:33 fungi: no, but I can make an account on my test server and PM you the details
19:41:12 fungi: user account management is a gap right now too, feel like that will need some discussion and a brilliant plan from you or some other genius on the infra team
19:41:24 pleia2: ah ok the fails are not in devstack itself but in the surrounding tooling/env
19:41:56 docaedo: oh, that would be cool. and yeah account setup is one of those sticking points for wider adoption. while the idea has merit the opportunity to attract abusive users without vetting is high
19:42:05 clarkb: yeah, once or twice it's been I got unlucky and devstack was actually broken, but toward the end of the cycle when translators are active this should not be a problem, devstack should be pretty solid by then
19:42:06 docaedo: could we hook it into openstackid/ipsilon?
19:42:48 jeblair: what is the current status of ipsilon progress?
19:43:01 jeblair: in theory, yes, but someone would need to write the JS for it as that doesn't exist in The Lounge right now
19:43:55 basically running a thelounge instance without limiting it to specific users and specific channels would be likely to attract spammers and other abusive users because we'd basically just be operating an "open proxy" to freenode
19:44:23 anteaya: not yet begun
19:44:26 correct, even if it was tied to openstackID I would want some curated list of accounts that are active
19:44:35 jeblair: thanks
19:44:50 docaedo: that is fair
19:45:00 First pass at our gerrit outage: https://etherpad.openstack.org/p/upgrade-zuul-trusty
19:45:04 docaedo: limiting it to channels where openstack infra has ops would also be good, if we can do that
19:45:38 basically anything we can do to prevent freenode staff from blocking its ip address due to abuse
19:45:51 because then, nobody's able to use it
19:45:57 fungi: that's probably not too difficult, but I would argue that's not necessary if we're deciding who we set up with it (presumably a small audience of mostly working group people)
19:46:08 usually they limit connections at 10ish per ip anyways, so we'd have to talk to them regardless
19:46:11 fungi: but yes, getting banned by freenode would not be great
19:46:12 pleia2: likely knows more
19:46:44 yeah, we should notify them that we're hosting it
19:46:44 we might want to see if tomaw can provide some feedback on the idea
19:46:51 docaedo: one solution would be to delegate account control to others in the community (working group leads or something) to field new account requests
19:47:14 since they're likely to know when someone requesting access is legitimately needing it
19:47:44 fungi: I would be all for that, I do not want to be the person in charge of IRC accounts by any means
19:48:02 as widespread and in touch with the community as our team is, we're not omnipotent and don't know a lot of the regulars on various board-appointed working groups and the like
19:48:48 but opening it up to anyone with an openstack foundation member account is just likely to result in lots of people signing up as foundation members (with bogus info) so they can abuse the proxy
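
[editor's note: a sketch of the kind of restrictions being discussed for a hosted The Lounge instance -- per-user accounts plus private/locked configuration so it cannot act as an open proxy. Command and option names are assumptions based on The Lounge circa 2016 and would need verifying against whatever version actually gets deployed.]

```
# Hypothetical provisioning for a restricted instance.
npm install -g thelounge

# Accounts created manually (or by delegated working-group leads) rather
# than via open signup; the CLI prompts for a password.
lounge add example-wg-lead

# In the generated config.js, the relevant knobs would be roughly:
#   public: false,        // require a login, no anonymous use
#   lockNetwork: true,    // users cannot point the client at other networks
#   defaults: { host: "chat.freenode.net", join: "#openstack-meeting" }
```
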
19:49:22 what problem is this solving?
19:49:34 sorry I haven't been following along
19:49:41 what are we fixing with this
19:50:04 anteaya: https://review.openstack.org/319506 proposes a spec to host a persistent web irc client
19:50:05 anteaya: the short answer is helping people who aren't comfortable with IRC get on IRC
19:50:34 anteaya: essentially giving them an IRCCloud account without them having to pay for it (by hosting an alternate with similar UX and functionality)
19:51:03 given the amount of human work that's going to be needed to ensure the tool isn't abused would it make sense to offer instructions to those who are uncomfortable to become comfy?
19:51:19 as opposed to pushing them to freenode's webchat (which doesn't catch scrollback when they're offline, so will miss async private messages) or irccloud (which costs money)
19:51:23 I mean humans will have to be actively monitoring its use anyway
19:51:41 who needs to pay for irc?
19:51:59 I've never heard of having to pay to use irc
19:52:08 irccloud is a popular paid service
19:52:15 anteaya: to be frank, i pay for access to irc (because i run my irc client on a persistent virtual host in a public cloud provider)
19:52:27 fungi: well yes, that is true
19:52:28 same as fungi
19:52:33 yes, good point
19:52:37 anteaya: there's a service (irccloud.com) that has a free tier, but the paid tier gives you scrollback when you leave and return
19:52:51 so teaching folks xchat is off the table?
19:52:56 * zaro pays for the irccloud
19:53:05 if that has already been discarded then fine
19:53:08 anteaya: you could definitely give some feedback on the spec :)
19:53:25 so take random project manager at $member_company who wants to participate in an openstack board-designated working group
19:53:28 it just seems this is going to be maintenance heavy
19:53:44 anteaya: the other thing I want to solve is the drive-by IRC use, where folks pop on just for a meeting, and don't use IRC again (ideally helping bring more people into our globally distributed IRC community)
19:54:16 docaedo: that is fair, I think there is value in helping them find where the channel logs are hosted
19:54:24 anteaya: it might be maintenance heavy, it's one of the concerns I noted in the spec
19:54:36 the likely end results are 1. we convince them to use irc and to have their meetings where the technical community has their meetings by providing a low-barrier-to-entry interface to irc for them, or 2. they use google hipmeets or whatever
19:54:41 okay I've said my bit, thanks for listening
19:55:13 fungi: exactly, well said, thanks.
19:55:30 fungi: also can you share an invite to google hipmeets if you have one? thanks!
19:55:44 docaedo: i think it's integrated with orkut
19:56:00 fungi: :P
19:56:07 A potentially relevant data point: one of the behaviours I see in many contexts for this class of folk is a reliance on conference calls, sometimes with IRC adjuncts, where the IRC consists of #action, #info, #topic, but no content.
19:56:28 persia: thanks, that is certainly a distinct risk
19:56:34 persia: any suggestions for getting all the content on irc?
19:56:43 persia: or just an observation
19:57:04 re: translations checksite, I'll get the stakeholders on a thread on the -infra list with a more cohesive explanation of what we're struggling with so we can ask more specific questions about the direction we're going in
19:57:17 though i'd argue that having a meeting secretary shorthand the minutes into our meetbot is still better than nothing public at all
19:57:21 anteaya: Reduce barriers to entry for persistent IRC. Note that there are some engineering teams who contribute to parts of OpenStack (e.g. bits of Neutron) that have the behaviour I described.
19:58:02 persia: can you review https://review.openstack.org/#/c/319506/ with your observations?
19:58:15 Sure.
19:58:17 also I acknowledge some Neutron folks still have this habit
19:58:25 we're down to the last couple minutes. anybody have anything else?
19:58:31 though overall there has been great improvement in that project
19:58:34 pleia2: sounds good re: the translations checksite
19:58:34 persia: thanks
19:59:08 i didn't expect running and periodically rebuilding a devstack instance would be trivial. after all, i've seen our ci jobs ;)
19:59:32 indeed :)
20:00:02 okay, we're at time. thanks everyone! see you in #openstack-infra
20:00:06 @endmeeting
20:00:09 gah
20:00:12 #endmeeting