19:01:39 #startmeeting infra
19:01:39 O/
19:01:42 o/
19:01:43 Meeting started Tue Apr 21 19:01:39 2015 UTC and is due to finish in 60 minutes. The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:44 o/
19:01:44 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:47 The meeting name has been set to 'infra'
19:01:48 o/
19:02:21 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:21 o/
19:02:23 o/
19:02:27 o/
19:02:43 #topic Actions from last meeting
19:02:57 jeblair send announcement for april 17 2200 utc 2-hour outage for renames and utf8 conversion
19:03:02 #link http://lists.openstack.org/pipermail/openstack-dev/2015-April/061489.html
19:03:08 that's done and done
19:03:21 jeblair send announcement for may 9 1600 utc 4-hour outage for 2.10 upgrade
19:03:25 #link http://lists.openstack.org/pipermail/openstack-dev/2015-April/061490.html
19:03:27 also announced
19:03:45 that's it for previous action items
19:03:51 #topic Priority Efforts
19:04:12 we skipped these last time, but we also have a few new topics after so let's keep these brief if we can
19:04:13 i think there were a few questions but no substantial issues raised as a result of the upgrade announcement
19:04:28 #topic Priority Efforts (Swift logs)
19:04:46 do we have any next steps on this that need attention/highlighting?
19:05:04 yes, the stack from jhesketh on os-loganalyze to allow configurable file passthrough from swift is up for review
19:05:10 o/
19:05:11 should have topic set to enable_swift too
19:05:24 I am happy to babysit those if people want to +2 without approving
19:05:31 clarkb: thanks
19:05:37 but do ping me if you do that so I know they are ready for me
19:05:41 #link https://review.openstack.org/#/q/status:open+topic:enable_swift,n,z
19:06:06 anything else there?
19:06:24 #topic Priority Efforts (Nodepool DIB)
19:06:58 fungi: I approved the bindep fallback update yesterday
19:07:04 so hopefully today's images have the latest fallback
19:07:25 once that's in we can start moving forward on using bindep right? the changes you wanted in bindep itself have merged?
19:07:35 ahh, yep saw that. i'm doing some final testing with that now before i put the jobs and venv for it in
19:07:37 the dib patches are starting to merge for nodepool dib to consolidate our images
19:07:43 i've seen more stacks of shade changes flying by. are those all nodepool/dib-related?
19:07:58 fungi: no I think the majority of them are not dib related
19:08:19 fungi: they add features like volume support and stuff which is useful for shade but not specific to nodepool's need for dib images
19:08:19 well we need shafe to do rax, so kind of related?
19:08:23 shade
19:08:24 also the change to add a "centos-6" worker to nodepool merged, and we added a 0.5tb cinder volume to the nodepool server to accommodate a larger dib cache and image set
19:08:43 fungi: thank you for doing that
19:08:46 clarkb: errrm.. many of them are test coverage
19:08:51 well, it needed doing
19:08:53 fungi: are we okay on cinder quota?
19:09:13 jeblair: probably? i honestly didn't check our quota but i didn't get an error
19:09:21 k
19:09:32 i should have a peek at it later
19:09:37 clarkb: test coverage that will necessarily precede the migration to no-more-clientlib objects.
19:09:43 #action fungi check our cinder quota in rax-dfw
19:09:47 I seem to recall there was a volume cleanup around when afs went in
19:10:17 anything else on the nodepool/dib front?
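A minimal sketch of the kind of check that quota action item implies, using python-cinderclient; the credentials, auth URL, and tenant id below are placeholders rather than anything from the meeting, and the actual follow-up could just as easily be done with the cinder CLI or the provider dashboard:

    # Hypothetical sketch only: placeholder credentials/endpoint, and it assumes
    # the account used can see all of the tenant's volumes in the region.
    from cinderclient.v1 import client

    cinder = client.Client('USERNAME', 'API_KEY', 'TENANT_NAME',
                           'https://AUTH_URL/v2.0')

    quota = cinder.quotas.get('TENANT_ID')                  # per-tenant limits
    used = sum(vol.size for vol in cinder.volumes.list())   # GB in existing volumes
    print('volume gigabytes: %d used of %d allowed' % (used, quota.gigabytes))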
19:10:39 #topic Priority Efforts (Migration to Zanata)
19:10:48 pleia2: cinerama: what's the good news?
19:10:55 o/
19:11:21 I was able to log in as an admin and make a project, so yay openstackid working + my admin username
19:11:28 hi there. so we have a working translate-dev server and the next step is to get the client-side jenkins stuff ported over
19:12:17 i think that is about it
19:12:22 I need to spend some time going through the spec and reprioritizing some of our tasks
19:12:29 you can make projects without admin rights too
19:12:31 s/and/and spend some time
19:12:35 which may or may not be a problem
19:12:49 yep. we need to work that out
19:12:52 yeah, i think at some point we need to sync up on how we want project creation to work there
19:12:54 I also have an admin section, but that is something we should look at
19:13:10 probably should just follow up in channel on that for starters
19:13:32 it's a bit tedious doing it via transifex (we need to work directly with their help staff), but I don't think we want everyone to be able to create things
19:13:38 yeah, out of meeting is fine for this discussion
19:13:54 sounds like great progress!
19:14:01 things are moving along though, huge thanks to cinerama for picking this up while I've been traveling (but I'm home now for a while!)
19:14:15 * anteaya applauds cinerama
19:14:45 anteaya: thanks *blush*
19:15:00 yes, thank you!
19:15:04 that's all for now :)
19:15:17 #topic Priority Efforts (Downstream Puppet)
19:15:37 asselin_ has been recruiting help
19:15:41 i think all of the metadata changes asselin_ put in yesterday have merged now
19:15:45 I don't know if he is here right now
19:15:58 hi
19:16:00 fungi: yep, last I checked they were at least all approved, aiming for merging
19:16:07 and yes, there was a call for participants on the ml
19:16:17 the thread seems to be picking up steam
19:16:19 i'm meeting with gozer folks from hp that are interested in participating too
19:16:26 yes, recruiting, and a few people are interested. I got some private e-mails too
19:16:49 #link http://lists.openstack.org/pipermail/openstack-dev/2015-April/061929.html
19:16:56 fbo submitted a patch to refactor zuul....nice to see ;)
19:17:02 #link https://review.openstack.org/#/c/175970/
19:17:32 if we could get some core reviews on that to ensure asselin_ and I have been giving fbo the correct direction that would be great
19:17:43 pleia2 seems to have just approved the last awaiting metadata change as we discussed this
19:18:09 so we should be very close to being able to start uploading modules to the forge now?
19:18:13 fungi: hah, yeah, I was waiting for jenkins
19:18:16 I have a request into ttx to get asselin_ a table at summit to work on it there
19:18:25 looking like Tuesday afternoon
19:18:52 It would be good to get the log server one merged (on the openstackci side) https://review.openstack.org/#/c/167425/
19:19:05 anteaya: what does a table at summit mean?
19:19:23 jeblair: a room with a table for work to happen
19:19:28 (just rebased on metadata change)
19:19:36 anteaya: are those not allocated to projects?
19:19:44 ttx has a few available
19:19:55 but at poor times, tuesday end of day for example
19:20:19 oh, a session slot in a workroom vs a fishbowl
19:20:25 workroom
19:20:31 more work less talk
19:20:34 yeah, i just thought infra had some of those
19:20:49 jeblair: we do, I'm not sure what you have planned though
19:20:57 so I didn't offer one of ours
19:21:08 which should probably have been a separate meeting topic--we need to put together a list of what we want to use our allotted rooms/times to discuss or work on
19:21:23 more of a psa i guess
19:21:32 sorry to derail
19:21:34 right, well, i think it would be rather near the top of the list
19:21:37 probably shouldn't spend the meeting putting it together
19:21:41 of things infra would want to work on
19:22:17 jeblair, +1
19:22:25 i can start an infra ml thread or an etherpad or something to start collecting ideas for our summit sessions, if that will help
19:22:34 jeblair: whatever you want to have happen is fine with me, just didn't know what that was
19:22:50 and wanted asselin_ to have a chance to get some new help on it
19:23:18 fungi: i'll take care of it
19:23:26 okay, great!
19:23:28 I have offered the cross-project third party session for this discussion, but if we want to do it in an infra session even better
19:24:22 anything else downstream puppet wise we need to cover today?
19:24:33 not from me
19:24:44 krtaylor: thanks -- it feels less like a cross-project thing to me. i'll see what we have available though and chat with you and asselin_ about it
19:25:10 jeblair, I agree and that works for me
19:25:29 #topic Priority Efforts (Askbot migration)
19:25:57 mrmartin has been working on the staging/dev server deployment solution, we talked through it some earlier today
19:26:07 thanks to jeblair for catching the backups issue, I just confirmed now that it's working
19:26:15 I'm here
19:26:24 we still have that problem with the caching of the solr stuff in /opt but mrmartin wrote an upstream puppet patch to address that
19:26:37 so hopefully that gets in then we can fix that remaining issue
19:26:50 oh, and we had redis running out of memory
19:27:02 fungi: again?
19:27:06 that's solved thanks to mrmartin spotting something we needed to add to the config
19:27:14 mrmartin: no, it's been fine since that last config update
19:27:17 oh ok.
19:27:45 I need an approval on the theme pinning patch, because Evgeny cannot publish changes
19:27:50 #link http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=2544&rra_id=all
19:28:01 without breaking production ask.o.o
19:28:11 mrmartin: have a link for that?
19:28:38 https://review.openstack.org/#/c/171066/
19:28:51 #link https://review.openstack.org/171066
19:29:16 jhesketh: ^ there are replies to you in that review
19:29:47 anyway, nibalizer has approval rights on the vamsee/puppet-solr repository, so the good news is that he can help us accept the required upstream pull request
19:30:01 so we can close the /tmp solr warning issue
19:30:33 are things still complicated or in need of coordinating at this point that it needs to remain a priority effort meeting topic? we can leave it on the agenda for one more week if anyone wants and revisit
19:30:57 otherwise i'll clear it off after the meeting wraps up today
19:31:07 everything is ok, I guess we can go on with the staging
19:31:14 but it'll take some time
19:31:21 I'll write a spec for that
19:31:27 if required
19:31:47 mrmartin: is it substantially different than prod?
19:31:47 I don't think we need a spec for that
19:31:48 question on ask.o.o graph 5min avg: How can Current be 2.03G and maximum 1.90G?
19:31:55 okay, if you think it's complex enough to warrant a spec feel free, but i wouldn't worry about it unless it's going to need a lot of help (beyond just spinning up a server and testing)
19:32:03 jeblair, yes, because we need to rewrite the update mechanism
19:32:04 that used mem
19:32:06 it should be pretty similar to what we have done for prod but deploy latest all the things
19:32:37 mrmartin: if it's what clarkb says, i think we can probably just cover that in review
19:32:50 the environment is the same, the deployment model is a bit different, because we need to deploy askbot-devel from a remote upstream github repo
19:32:57 we had a discussion about this this morning
19:33:17 mrmartin: right but isn't that as simple as changing version => x.y.z to version => latest
19:33:26 yes
19:33:30 mrmartin: we should be able to do that in a straightforward manner
19:33:38 mrmartin: yeah, skip the spec then and just start with a change to add the server
19:33:41 Rockyg: good eye... i'm going to guess we've got something setting maximum to the average value there for some reason
19:33:50 I hope the patch will land this week, but it requires some careful review
19:34:02 ok.
19:34:10 so that's all related to askbot
19:34:15 basically the prod is working well
19:34:24 thanks mrmartin!
19:34:59 mrmartin: awesome, great work there
19:35:05 thnx
19:35:07 #topic Priority Efforts (Upgrading Gerrit Saturday May 9, 2015)
19:35:25 any new developments we need to be mindful of here?
19:35:27 nothing much to say here.
19:35:30 I have a question related to this
19:35:38 I'll just note the agenda says 2.9 and I think we agreed last week to 2.10
19:35:45 jeblair: do we want to put your connection debugging change atop 2.10.3 just in case we end up needing it?
19:35:59 anteaya: right, i simply dropped the version number in the meeting topic for that reason
19:36:05 clarkb: i don't think it would hurt, but i do not know the status of the bouncy castle war problem
19:36:15 fungi: okay
19:36:25 clarkb: (however, i don't think that's related to my change, so i'm not sure that matters)
19:36:38 fungi says he was going to try to manually redeploy?
19:36:44 at a slow time
19:36:51 zaro was looking at it yesterday but couldn't reproduce with a similar config and that warfile
19:36:58 clarkb: it _does_ suggest that we should make very sure we have a working 2.10 deployed on -dev and that's what we use for the upgrade :)
19:37:06 jeblair: ++
19:37:12 so, yeah, i need to try again and see if i can collect some additional state details
19:37:16 cause right now the status of that seems like "voodoo build bug"
19:37:54 it may be a problem with the warfile library unpacking done by the gerrit manifest
19:38:00 er, puppet manifest
19:38:04 i thought about trying to repro on review-dev but didn't want to go through the trouble of reverting to 2.8 to validate
19:38:27 unless you guys think it's worth it
19:38:59 zaro: could probably repro on a throwaway personal server, right?
19:39:23 jeblair: requires contact store to be enabled to hit that lib.
19:39:49 zaro: but you said you tested again with that turned on, right?
19:39:53 jeblair: i tried that on my VM but gerrit got stuck on startup. didn't even make it to the point where it would fail
19:40:21 hrm. we should probably offline this and move on :/
19:40:31 ahh, yep.
veering well off-topic
19:40:40 but we'll continue later in the infra channel
19:40:45 ok
19:41:11 doing the next topics slightly out of order because this one's sort of time-sensitive...
19:41:23 #topic Outreachy: we have a prospective intern candidate but need a volunteer mentor (vkmc)
19:41:35 someone who wants to work on infra?
19:41:37 #link https://wiki.openstack.org/wiki/Mentors
19:41:40 neat!
19:41:47 yes, there's someone who wants to do infra work!
19:42:04 vkmc was going to try to make the meeting i thought, but perhaps she was not able
19:42:06 sweet
19:42:17 i have some info from reed that confirms we do though
19:42:20 I would be happy to help but ETWINS
19:42:41 clarkb: that'll teach you
19:42:41 she isn't in this channel; I just pinged her in #openstack-opw
19:42:51 I can help this time around, but I can't be a primary mentor
19:42:53 hi o/
19:42:56 and that vkmc contacted the people listed on the mentors sign-up sheet (including jhesketh and nibalizer) but that they were not able to assist after all so we need some other volunteers
19:42:57 welcome vkmc
19:42:59 there she is
19:43:04 thanks :)
19:43:17 thanks for bringing this topic up in the weekly meeting, I appreciate this
19:43:22 vkmc: oops! sorry, started without you
19:43:27 vkmc: so far we are looking at https://wiki.openstack.org/wiki/Mentors
19:43:28 fungi, no worries!
19:43:41 anteaya, cool
19:44:15 the candidate is in the American Pacific timezone, right? so volunteers in that tz would probably be the best fit
19:44:27 well, my main concern here is that whereas I got really good comments from this applicant, I'd really like the prospective mentor to get to know them as well
19:44:53 and make a project plan for the internship
19:45:15 the selected applicants announcement is this Friday
19:45:19 right, we'd need to pick a couple of moderate-complexity tasks which don't require a lot of ramp-up learning curve
19:45:29 agree
19:45:55 and what's the start and stop date for interns and rough number of hours per week?
19:46:04 interested mentors please contact me and I'll give further details about the applicant's background and the program
19:46:19 as a team, could we brainstorm some projects? I don't mind being a go-to for helping, but I'm winding down another mentorship project (no outreachy) where I had to do way too much ground work
19:46:29 s/no/not
19:46:31 sure, the internship starts on May 25 and ends on August 25
19:46:51 40 hours a week when I did it, the candidate is not required to have any kind of programming or ops experience (unless that has changed)
19:47:00 ^^ that's what I fear
19:47:28 and yeah, as anteaya mentioned, 40 hours per week
19:47:30 in this case it seems like the candidate probably has a fair amount of programming experience though
19:47:31 for the intern
19:47:44 I'm all for bringing in new people, but I personally don't have time to be the primary mentor though I can offer support as a tertiary mentor
19:48:16 pleia2: i like your suggestion about brainstorming topics
19:48:20 clarkb, anteaya - care to mentor as a trio? (and vkmc is that ok?)
19:48:51 it is yes :)
19:49:04 oh I wouldn't go that far
19:49:13 ok
19:49:18 so why don't we brainstorm topics in -infra post meeting, then we can go from there?
19:49:19 sorry but I still am trying to find the work/burnout balance
19:49:29 and things don't go well for me if I can't find it
19:49:29 clarkb: wfm
19:49:35 sounds great
19:49:36 trying to be supportive here
19:49:37 anteaya: totally understand
19:49:48 thanks vkmc we'll try to get up with you in a couple of days
19:49:57 if that works
19:49:58 vkmc: thanks for joining the meeting
19:50:07 sure, it sounds good to me
19:50:11 feel free to reach me any time
19:50:24 #topic Spec proposal - Integration tests for System-config Openstack_project using containers (fbo)
19:50:34 #link https://review.openstack.org/172833
19:50:41 discussion of this can probably happen in that review
19:50:44 for the sake of time
19:50:56 #topic Neutron-lib proposal (dougwig, mestery)
19:51:02 hi there
19:51:04 howdy
19:51:05 it was mentioned that we should bring up an upcoming split of some library stuff out of neutron here. proposal:
19:51:05 https://review.openstack.org/#/c/171836/
19:51:05 it's not final or certain yet, and I also wanted to point out that it's not a "split" with history. there is so much refactor involved, it'll almost certainly be just a repo create and move forward with regular gerrit reviews.
19:51:05 actual repo creation WIP is here:
19:51:05 https://review.openstack.org/#/c/174952/
19:51:06 bringing it up here to see if there are any concerns or things that we need to do to make life easier all around.
19:51:06 do either of you have a quick summary?
19:51:12 dougwig does I think :)
19:51:17 i consider clarkb and nibalizer key participants in the testing spec review
19:51:28 #link https://review.openstack.org/174952
19:51:32 jeblair: i would agree
19:51:35 ya I can review it today
19:51:52 mostly I thought this should have infra eyes to ensure the steps to split are ones we support
19:52:08 mestery, dougwig: this is making a lib out of the guts of neutron I'm guessing?
19:52:08 trying to avoid the scenario last time with the *-aas splits
19:52:13 did i run afoul of the rate limiter?
19:52:13 mordred: ++
19:52:14 #link https://review.openstack.org/171836
19:52:28 it was mentioned that we should bring up an upcoming split of some library stuff out of neutron here. proposal:
19:52:33 https://review.openstack.org/#/c/171836/
19:52:33 dougwig: no we're just digesting :)
19:52:35 Yes, what dougwig said
19:52:39 oh, ok. :)
19:52:42 dougwig, mestery: seems fine to me
19:52:53 mordred: Cool, we just wanted to bring this up so infra was aware.
19:53:02 I'm confused but I think I opened the wrong change
19:53:13 fungi: you want to undo that and link the other change
19:53:20 #undo
19:53:21 Removing item from minutes:
19:53:44 dougwig: have a link to the right change for your proposal?
19:53:51 https://review.openstack.org/#/c/171836/
19:53:54 https://review.openstack.org/#/c/174952/
19:54:00 spec and wip for project-config
19:54:07 nice work dougwig :)
19:54:18 dougwig: you aren't linking to the spec
19:54:21 171836 is "Non-json body on POST 500's"
19:54:29 is that the right one?
19:54:36 if it is then I am really confused
19:54:40 no it isn't
19:54:44 sigh, sorry. did i mention that i ran out of red bull yesterday?
19:54:49 let's try:
19:54:50 https://review.openstack.org/#/c/154736/
19:54:53 lol
19:55:08 #link https://review.openstack.org/154736
19:55:15 thanks, that makes more sense ;)
19:55:34 cool, so probably not something we have time to get too deep into discussing in the meeting, but those reviews look like a good place to go deeper
19:55:52 fungi: Yes, comments very welcome there! Thanks!
19:55:54 thanks for bringing it to our attention!
19:55:59 please do. thanks.
19:56:02 mostly to ensure that if they follow these steps we don't have to dig them out of a hole
19:56:13 #topic Renaming stackforge/mistral to openstack/mistral (rakhmerov)
19:56:13 dougwig mestery thank you
19:56:20 thanks fungi anteaya ! :)
19:56:24 anteaya: it is designed to avoid the previous holes. we hope.
19:56:27 rakhmerov: get up with us in the infra channel on the details for how to do this
19:56:39 it doesn't really need meeting time devoted to it though
19:56:49 #topic Tag all the things! (fungi)
19:56:49 and i think we decided to schedule further renames post-summit
19:56:54 or at least post-release
19:57:04 jeblair: ++
19:57:08 just a reminder we wanted a nodepool tag for hashar's packaging
19:57:17 and a zuul tag for zuul cloner
19:57:24 i'm planning to tag bindep in the next day or so
19:57:30 ahh, yes, zuul
19:57:42 and we're overdue to do another git-review release
19:58:02 anything else we should be planning to tag rsn?
19:58:11 zuul was all I had
19:58:51 okay, in that case...
19:58:53 #topic Open discussion
19:59:01 i give you one minute of open discussion!
19:59:05 yay
19:59:06 fungi: nicely done! :)
19:59:07 I would like to suggest turning hpcloud back off; looking at status graphs we have better long-term throughput on rax alone
19:59:13 Gosh, the weather has been good lately
19:59:20 do we have an etherpad for summit brainstorming?
19:59:22 basically hpcloud starts ok but seems to get worse over time
19:59:33 I need to finalize the connection to give you those extra VMs
19:59:33 clarkb: eek, yeah it looks sort of unhappy
19:59:37 clarkb: yeah, i think the cleanup process may not be far enough along?
19:59:38 (with OVH)
19:59:42 pleia2: not yet, jeblair wants to take care of that
19:59:54 jeblair: ya, and we are probably just piling more onto it as we go
19:59:57 anteaya: thanks
20:00:06 ttx: yeah, mordred was looking at going a day early for that (so meet on tuesday)
20:00:07 jeblair: who should serve as a contact on your side? you?
20:00:14 clarkb: and yeah, our workers line dropped with hpcloud under load, compared to without hpcloud at all
20:00:18 ttx: if that fails, i can stay a day later and meet (so the following monday)
20:00:32 jeblair: ideally we'd move before, while the topic is hot
20:00:33 workers/running anyway
20:00:35 fungi: with some new tags to follow for Zuul. There are a few patches for Zuul I am dying to see merged before the grand refactoring of v3 :D
20:00:48 ttx: agreed -- plan A is mordred on tuesday
20:00:49 do we still need the meeting open?
20:00:58 hashar: thanks for the reminder. yes we normally check outstanding reviews before releasing
20:01:01 okay, we're over time
20:01:10 thanks everybody!
20:01:12 #endmeeting