19:03:23 #startmeeting infra
19:03:24 Meeting started Tue Jan 21 19:03:23 2014 UTC and is due to finish in 60 minutes. The chair is pleia2. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:25 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:27 The meeting name has been set to 'infra'
19:03:42 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:03:57 #topic Actions from last meeting
19:04:49 #link http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-01-14-19.02.html
19:05:08 last meeting's minutes; fungi had a number of items but he's on a plane
19:05:19 around-ish
19:05:34 fungi: any updates from last meeting action items worth mentioning?
19:06:03 o/
19:06:04 zaro: as of this morning, I believe you and clarkb are still looking at the scp race condition issues?
19:06:33 yes, got a fix, just testing it now.
19:06:37 mordred: any meeting-worthy updates on manage-projects failures?
19:06:43 zaro: great!
19:07:11 pleia2: i'd have to pull up the meeting minutes, so better to just assume i need most of those action items reapplied
19:07:13 pleia2: nope. I ran it several times by hand to try to catch a fail
19:07:28 pleia2: so far, that has been unsuccessful
19:07:30 mordred: still at "puppet is doing something weird to make it fail" then?
19:07:44 yeah. that's the current unproven working theory
19:07:54 * pleia2 nods
19:08:08 I need to set up a new testbed thing that I can hammer on with a big hammer
19:08:52 ok, going to #action a bunch now, we can follow up next week on others that need to be removed when people are not so flying
19:09:18 i did get mordred's creds reinstated
19:09:23 #action reed to talk to smarcet and find a mentor to help him get through the CI learning curve faster
19:09:35 \o/
19:09:45 and there was one other on there... i got the tripleo credentials tested
19:10:06 #action mordred to continue looking into manage-projects failures
19:10:34 fungi, clarkb - thoughts on timing for the -metering to -telemetry rename?
19:10:58 mordred: your groups: http://paste.openstack.org/show/61640/
19:11:01 #action fungi upgrade jenkins.o.o and jenkins01 to match 02-04
19:11:05 clarkb: how's this weekend?
19:11:23 i'll be home friday night unless winter weather delays flights through chicago
19:11:42 #action fungi move graphite whisper files to faster volume
19:11:50 #action fungi prune obsolete whisper files automatically on graphite server
19:11:58 #action fungi request org.openstack group in sonatype jira for maven nexus
19:12:05 fungi: mornings are fine
19:12:24 that one is back-burner as it turns out, since we don't need it (should also be org.stackforge instead)
19:12:38 #undo
19:12:39 Removing item from minutes:
19:12:56 #action clarkb to rename stackforge/cookbook-openstack-metering to -telemetry
19:13:01 (had to give one to not-fungi!)
19:13:09 we can action it when the clouddocs-maven group decides how they're doing releases
19:13:15 ok, great
19:13:19 ok, that's it for action items from last meeting
19:13:28 #topic Trove testing
19:13:35 jeblair: thanks!
19:13:58 mordred, any updates? (I don't see hub_cap or SlickNik here)
19:16:01 going to go with "no" or "check in with the others" so we can move along
19:16:14 #topic Tripleo testing (lifeless, pleia2, fungi)
19:16:28 we had some -infra patches merged last week related to this (thanks fungi!)
19:17:11 not much else to report on the infra side right now I don't think
19:18:30 lifeless: you have anything to add?
(otherwise we just pick up the less-infra parts of this topic in the tripleo meeting)
19:19:39 i think he was working on some of the nodepoo prep scripts which needed tweaking still to get servers to build successfully
19:19:47 nodepool
19:19:58 dependency-wise, right?
19:20:24 nodepoo
19:20:24 i don't recall the details there
19:20:27 I'm a child
19:20:36 fungi: pleia2 hi yes
19:20:43 there's a patch from deryck
19:20:47 and we need to turn nodepool on
19:20:55 then we'll start to see how far it gets
19:21:11 ok cool, I'll have a look at that in a bit
19:21:18 mordred: you need to make that tshirt for the next design summit
19:21:38 https://review.openstack.org/#/c/67958/ and https://review.openstack.org/#/c/67685/
19:21:44 ^ neither should affect anyone else at all
19:22:12 great
19:22:44 looks like the next two agenda items were covered in the action items review: Requested StackForge project rename (fungi, clarkb, zhiwei) & Ongoing new project creation issues (mordred)
19:22:59 #topic Pip 1.5 readiness efforts (fungi, mordred)
19:23:01 yup
19:23:24 1.5.1 is out
19:23:32 reqs integration jobs work again, as much as any jobs are working at the moment
19:24:12 virtualenv 1.11.1 was coming earlier than pip 1.5.1 i thought, but i haven't seen it yet
19:24:20 maybe that plan changed
19:24:41 fungi: 1.11.1 is on pypi
19:24:59 yup, just saw it
19:25:07 slow browsing on the plane
19:25:39 so in theory we could lift our 1.10.1 pin when we're ready to babysit that
19:25:54 maybe that's an action item
19:26:05 who wants that one?
19:27:21 ok, we can give it to mordred this week and reshuffle as needed :)
19:27:39 #action mordred to lift virtualenv 1.10.1 pin when we're ready to babysit it
19:27:44 ++
19:27:54 #topic OpenID provider project (fungi, reed)
19:28:45 no reed here today
19:28:53 i've got an action item to describe the bits of automation which are still missing so that mrmartin can help smarcet get it done
19:29:17 * pleia2 nods
19:29:34 #topic Graphite cleanup (fungi)
19:29:48 we had to re-#action a bunch of graphite stuff, I assume "ongoing" here too?
19:30:06 this is still untouched. got too busy, though someone fixed it yesterday when it toppled over
19:30:19 someone fixing it \o/
19:30:19 as in got it running again
19:30:19 it wasn't me. I was curious who that was as well. guessing jeblair
19:30:29 yeah
19:30:29 nope
19:30:37 must have been mordred
19:30:40 an elf
19:30:50 we have a helpful elf
19:30:54 i logged in and everything looked normal. busy but normal.
19:30:54 wasn't me
19:31:07 huh. weird
19:31:11 also, while i noticed a gap in some graphs, i did not see it in all
19:31:28 possibly a data reporting problem from one source?
19:31:34 network dropping udp packets?
19:31:40 okay, so whoever said it fell over and someone fixed it speculating
19:31:48 er, was speculating
19:32:03 #topic Upgrade gerrit (zaro)
19:32:36 review-dev.o.o has the upgraded gerrit (ver 2.8)
19:33:06 I believe I've also enabled all the features requested in the upgrade etherpad #link https://etherpad.openstack.org/p/gerrit-2.8-upgrade
19:33:37 I've also created a semi-automated upgrade script #link https://etherpad.openstack.org/p/gerrit_upgrade_script
19:34:13 it's ready to go whenever we want to flip the switch.
19:34:19 zaro: has it been tested with git-review and zuul?
19:34:42 zaro: also, gerritbot and the jeepyb hooks?
19:35:07 i have been using git review and that seems fine. however i have not tested with zuul.
19:35:40 could switch zuul-dev to point at it
19:35:42 i have not looked into gerritbot nor jeepyb integrations.
19:36:17 i assumed a zuul was already pointing at gerrit-dev. but it isn't?
19:36:40 zaro: i think the last thing we did with zuul-dev was point it at prod gerrit to load test jenkins
19:37:39 ok. i'll put that on my tdl. i'll also review gerritbot and jeepyb integrations to see what to test there.
19:38:05 #action zaro to point zuul-dev at gerrit-dev
19:38:29 #action zaro to review gerritbot and jeepyb integrations with new gerrit
19:38:44 anything else?
19:39:18 yes, zuul changes last week
19:39:30 jeblair: not sure if you caught this, but zuul's scratch git space is running on a tmpfs
19:39:38 (I meant for this topic :))
19:39:50 clarkb: (i saw; clever)
19:40:00 oh this topic, sorry I am bad at context switching right now
19:40:04 #topic Private gerrit for security reviews (zaro, fungi)
19:40:20 this is on hold, nothing new to report.
19:40:21 I think we're still waiting on this for the 2.8 upgrade of regular gerrit
19:40:32 Savanna testing (SergeyLukjanov)
19:40:36 #topic Savanna testing (SergeyLukjanov)
19:40:45 SergeyLukjanov: any updates here?
19:40:47 nothing really interesting / new
19:41:01 still waiting for review on patches for tempest
19:41:13 #topic Jenkins SCP Plugin fix for Elastic Search (clarkb, zaro)
19:41:30 mentioned this one earlier, testing is happening now
19:41:51 btw, once the first savanna api tests are merged into tempest, I'd like to enable them for savanna as voting (https://review.openstack.org/#/c/68066/)
19:42:03 after updating the SCP plugin to handle new jenkins we realized it was finishing jobs and emitting 0mq events that could be processed before the console logs had even started being written on the log server
19:42:43 this meant that the logstash machinery got 404s and those logs weren't processed. now we wait for the file to be created before continuing the job, but there was a small bug in that which allowed jobs to get stuck (manually killing them fixes the problem)
19:42:52 fix for the latest bug is being tested now by zar
19:43:44 (zar is pirate pizza)
19:44:06 ha ha ha
19:44:46 ok, thanks clarkb and zaro
19:44:52 #topic Open Discussion
19:44:57 now, all the other things!
19:45:33 I had some lovely parsnips and brussels sprouts last night in a mustard sauce. is that off topic?
19:45:39 my flight is finally taking off. i will supposedly have 40 minutes to catch my connecting flight in vegas. no time for slot machines
19:45:45 while jeblair and I had an 8 hour layover in Auckland we finally got all those historic publications up on our site, yay! http://docs.openstack.org/infra/publications/
19:45:57 fungi: there are slot machines in the airport - you can totally slot in 40 minutes
19:46:00 8 hour layover? crazy
19:46:03 pleia2: woot!
19:46:37 fungi: it was in the air new zealand lounge which had an automatic pancake making machine. i'm not complaining.
19:47:08 neat
19:47:16 pancake dispenser
19:47:21 it was pretty neat
19:47:22 I do want to point out that the git operations that zuul does do seem to get slower over time
19:47:28 maybe https://github.com/openstack-ci can go away now? :)
19:47:37 the tmpfs is definitely better, but I believe performance is degrading there too
19:47:47 clarkb: :(
19:47:50 clarkb: that doesn't seem nice
19:47:51 any ideas on why that may happen? do we need to gc and pack more often?
19:47:58 did you upgrade git?
19:48:13 anteaya: we did not, local very unscientific testing showed it would not help much
19:48:15 I know you discussed upgrading git
19:48:19 okay
19:48:21 granted that was some very simple cases being tested
19:48:24 clarkb: sanity check that we're packing once/day?
19:48:31 jeblair: /me checks
19:48:57 7 4 * * 0 looks like it
19:49:26 oh wait
19:49:28 ahahahahahaha
19:49:52 hrm that shouldn't matter
19:49:58 what?
19:50:02 that's weekly, yeah?
19:50:04 it's not providing a working dir for the git pack but git pack doesn't need one
19:50:57 yeah, that is weekly
19:51:01 yeah the 0 at the end
19:51:13 should we s/0/*/
19:51:29 still...
19:51:42 jeblair: I think the thrash is a big part of it
19:51:46 root@zuul:/var/lib/zuul/git/openstack/nova/.git# ls -la packed-refs
19:51:47 -rw-rw-rw- 1 zuul zuul 1856171 Jan 21 03:15 packed-refs
19:51:47 root@zuul:/var/lib/zuul/git/openstack/nova/.git# wc -l packed-refs
19:51:47 19834 packed-refs
19:51:47 root@zuul:/var/lib/zuul/git/openstack/nova/.git# find refs/|wc -l
19:51:47 2857
19:52:04 jeblair: zuul is creating hundreds of refs * number of resets
19:52:15 it seemed to pack refs this morning, and the bulk of the current refs are packed
19:53:35 i wonder if gitpython deals with large numbers of refs or objects particularly poorly.
19:54:39 well, change pushed, we can think it over in review
19:56:23 it did seem much snappier when zuul was operating on fresh clones, but i have no empirical evidence, just perception
19:56:24 might be 2 min late for tc meeting, switching locations
19:56:45 fungi: it was, the entire zuul git operations were sub-2 seconds then compared to 9-15
19:56:52 fungi: it is now about 5 seconds
19:59:05 ok, sounds like we're done, thanks everyone
19:59:08 #endmeeting
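
For the graphite action item recorded at 19:11:50 ("prune obsolete whisper files automatically"), a minimal sketch of what such a cleanup could look like. This is not the script fungi ended up deploying; the whisper path, the 90-day notion of "obsolete", and the dry-run default are all assumptions for illustration.

```python
#!/usr/bin/env python
# Illustrative sketch only: prune whisper (.wsp) files that have not been
# updated recently. The path, age threshold and dry-run default are
# assumptions, not the script actually used on the graphite server.
import os
import sys
import time

WHISPER_ROOT = '/var/lib/graphite/whisper'  # assumed data location
MAX_AGE_DAYS = 90                           # assumed meaning of "obsolete"


def prune(root, max_age_days, dry_run=True):
    cutoff = time.time() - max_age_days * 86400
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name.endswith('.wsp') and os.path.getmtime(path) < cutoff:
                print('pruning %s' % path)
                if not dry_run:
                    os.remove(path)
        # drop directories left empty after pruning (but keep the root)
        if not dry_run and dirpath != root and not os.listdir(dirpath):
            os.rmdir(dirpath)


if __name__ == '__main__':
    # pass --force to actually delete; default is a dry run that only prints
    prune(WHISPER_ROOT, MAX_AGE_DAYS, dry_run='--force' not in sys.argv[1:])
```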
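
On the pip 1.5 readiness discussion (19:23-19:27): before lifting the virtualenv 1.10.1 pin, one quick sanity check is to confirm what a build node actually has installed. A tiny sketch, not infra tooling; the package names are real, and the minimum versions are simply the releases mentioned in the meeting (pip 1.5.1, virtualenv 1.11.1).

```python
# Illustrative sketch, not infra tooling: confirm what a node has installed
# before lifting the virtualenv pin. Package names are real; the minimum
# versions are just the releases mentioned in the meeting.
import pkg_resources

MINIMUMS = {
    'pip': '1.5.1',
    'virtualenv': '1.11.1',
}

for name, wanted in sorted(MINIMUMS.items()):
    try:
        installed = pkg_resources.get_distribution(name).version
    except pkg_resources.DistributionNotFound:
        print('%s: not installed' % name)
        continue
    new_enough = (pkg_resources.parse_version(installed) >=
                  pkg_resources.parse_version(wanted))
    print('%s: %s (want >= %s): %s'
          % (name, installed, wanted, 'ok' if new_enough else 'too old'))
```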
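
The SCP-plugin race clarkb describes at 19:42 (the job finishes and the 0mq event fires before the console log exists on the log server, so logstash gets 404s) was fixed in the Jenkins SCP publisher plugin itself, which is Java. Purely as an illustration of the logic, here is a minimal Python sketch of "poll until the console log appears, but bound the wait so a missing file cannot wedge the job"; the function names, URL check, timeout and poll interval are all assumptions.

```python
# Minimal sketch of the wait-for-console-log logic described at 19:42.
# The real fix is a change to the Jenkins SCP publisher plugin (Java);
# the names, URL check, timeout and poll interval here are illustrative.
import time
import urllib2


def console_log_visible(url):
    """Return True once the log server stops answering 404 for the log."""
    try:
        urllib2.urlopen(url, timeout=10)
        return True
    except urllib2.HTTPError as err:
        return err.code != 404
    except urllib2.URLError:
        return False


def wait_for_console_log(url, timeout=300, interval=5):
    """Poll until the console log exists, but never wait forever.

    Bounding the wait is the point: an unbounded loop is how a job could
    get stuck when the file never shows up, which is the small bug
    mentioned in the meeting.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if console_log_visible(url):
            return True
        time.sleep(interval)
    return False
```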
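
For the ref-count numbers clarkb pasted at 19:51 (roughly 19.8k lines in packed-refs and 2.8k entries under refs/ in zuul's nova repo), here is a small sketch that produces comparable counts and repacks when loose refs pile up. The 1000-ref threshold and the idea of running `git pack-refs --all` more often than the weekly "7 4 * * 0" cron are assumptions, not something the meeting settled on.

```python
# Sketch only: count packed vs loose refs the way clarkb did by hand, and
# repack when loose refs pile up. The threshold is arbitrary; the meeting
# only noted the existing weekly "7 4 * * 0" repack cron.
import os
import subprocess


def count_refs(git_dir):
    """Return (packed, loose) ref counts for a repository's .git directory."""
    packed = 0
    packed_refs = os.path.join(git_dir, 'packed-refs')
    if os.path.exists(packed_refs):
        with open(packed_refs) as refs:
            # skip the header comment and peeled-tag ("^...") lines
            packed = sum(1 for line in refs
                         if line.strip() and not line.startswith(('#', '^')))
    loose = 0
    for _, _, files in os.walk(os.path.join(git_dir, 'refs')):
        loose += len(files)
    return packed, loose


def maybe_pack(repo_path, loose_threshold=1000):
    packed, loose = count_refs(os.path.join(repo_path, '.git'))
    print('%s: %d packed refs, %d loose refs' % (repo_path, packed, loose))
    if loose > loose_threshold:
        # consolidate loose refs into packed-refs; git gc does this too
        subprocess.check_call(['git', 'pack-refs', '--all'], cwd=repo_path)


if __name__ == '__main__':
    maybe_pack('/var/lib/zuul/git/openstack/nova')
```

Whether gitpython itself handles that many refs poorly, as jeblair wondered at 19:53, is a separate question this sketch does not touch.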