19:02:41 #startmeeting infra
19:02:42 Meeting started Tue Mar 26 19:02:41 2013 UTC. The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:43 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:45 The meeting name has been set to 'infra'
19:03:03 #action clarkb set up new hpcs account with az1-3 access
19:03:12 #topic actions from last meeting
19:03:25 fungi: it looks like you started on renaming -drivers in gerrit to -milestone, and creating -ptl groups and giving them tag access
19:03:34 jeblair: yes
19:03:38 #link https://etherpad.openstack.org/3a5sCguY7B
19:03:42 working up a plan there
19:03:50 the rename is the touchy bit
19:04:02 the -ptl groups should be easier
19:04:15 ttx: ^ fyi
19:04:30 ttx also opened a related bug
19:04:33 #link https://bugs.launchpad.net/bugs/1160277
19:04:35 Launchpad bug 1160277 in openstack-ci "Groups have similar names in LP and gerrit but are no longer synced" [Undecided,New]
19:05:16 not much else to add on that at the moment
19:05:26 fungi: cool. definitely think the sql query is the way to go there. :)
19:05:30 i think we can maybe announce it for a very short window, or no window
19:05:52 should only impact things like +2/approve and only for a few minutes
19:06:34 #action fungi finish working on -milestone/-ptl group changes
19:06:39 fungi: yeah, i don't think i'd bother scheduling an outage; just maybe announce that the names have changed
19:06:51 wfm
19:07:13 fungi: it's possible that if you use the webui there would be no effect (as i believe that it updates the acls simultaneously)
19:07:30 oh, interesting. i'll test that
19:07:37 fungi: but either way, i think the effect is too small to bother people with.
19:07:48 agreed
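For reference, a minimal sketch of the SQL-based rename discussed above. It assumes the Gerrit 2.x ReviewDB schema, where a group's name lives in both the account_groups and account_group_names tables, and access to the gerrit gsql admin console over ssh; the group names are hypothetical examples, not the actual groups from the etherpad plan.

#!/usr/bin/env python
# Sketch: rename a Gerrit group in place via the gsql admin console.
# Assumes Gerrit 2.x schema; the group names below are hypothetical.
import subprocess

OLD, NEW = 'example-drivers', 'example-milestone'  # hypothetical groups

def gsql(statement):
    """Feed one SQL statement to Gerrit's gsql console over ssh."""
    subprocess.run(
        ['ssh', '-p', '29418', 'review.openstack.org', 'gerrit', 'gsql'],
        input=(statement + ';\n').encode(), check=True)

gsql("UPDATE account_groups SET name = '%s' WHERE name = '%s'" % (NEW, OLD))
gsql("UPDATE account_group_names SET name = '%s' WHERE name = '%s'"
     % (NEW, OLD))
# Gerrit caches group data, so flush afterwards, e.g.:
# ssh -p 29418 review.openstack.org gerrit flush-caches --cache groups

As jeblair notes, a rename through the web UI may update ACL references at the same time; the direct SQL path does not, which is why even a short window is worth announcing.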
19:08:12 zaro: ping
19:08:33 yes
19:09:26 was there a question?
19:09:50 gearmand: it's running on zuul.o.o now, right?
19:10:06 actually i haven't checked yet.
19:10:15 fungi, do you know?
19:10:21 i just logged in and looked; it seems to be running
19:10:28 zaro: it was there when i looked, yes
19:10:33 great!
19:10:51 zaro: how's your jenkins-dev testing going?
19:10:57 #topic gearman
19:11:13 it's going well. haven't found any new bugs
19:11:24 on jenkins-dev anyway.
19:11:54 zaro: you want to link to your test plan etherpad?
19:11:59 the only new thing to report this week is that the jenkins-gearman plugin is now hosted on jenkinsci
19:12:55 sent brian an email asking about the gearman cancel feature but have not gotten a reply.
19:13:20 supposedly he just needs to get it past CI.
19:13:36 he says the feature is complete on his dev branch.
19:14:17 other than that news, i've added some tests for the plugin: unit and integration.
19:14:37 it sounds like that's the only thing we're waiting on before we start hooking it into zuul
19:14:43 fungi: it's not a real test plan, so i won't bother linking to the etherpad.
19:15:06 we can do some prelim testing with it though if you would like to do that.
19:15:39 just without the cancel feature.
19:16:04 the cancel gearman job feature has nothing to do with the gearman-plugin.
19:16:19 cancel will be done on zuul clients.
19:17:35 hello??
19:17:58 we're still here
19:18:24 sorry, not used to silence during a meeting.
19:18:33 we're a quiet bunch
19:18:47 and we're waiting on krow, so there's not much more to say
19:19:12 what do you think about hooking up to prod jenkins for testing?
19:19:28 after the release
19:19:48 ok, when is that again?
19:19:56 we're in a bit of a self-imposed slushy-freeze rsn, i assume
19:20:19 zaro: april 4
19:20:39 cool.
19:21:14 zaro: though i don't think we'll do much testing in production
19:21:50 zaro: at most, we'll install the plugin and make sure it doesn't have any adverse effects; i think the first time it's going to get used on the production server is when zuul starts sending it jobs
19:22:03 zaro: we'll test zuul development against jenkins-dev.
19:22:35 sounds good.
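Checking whether gearmand is up doesn't require logging in: the server speaks a plain-text admin protocol. A minimal sketch, assuming the default gearmand port (4730) and that zuul.o.o is shorthand for zuul.openstack.org:

#!/usr/bin/env python
# Sketch: confirm a gearman server is running by asking its admin
# interface for status. 4730 is the gearmand default port.
import socket

def gearman_status(host='zuul.openstack.org', port=4730):
    """Return the server's status table; raises if nothing is listening."""
    sock = socket.create_connection((host, port), timeout=5)
    try:
        sock.sendall(b'status\n')
        buf = b''
        # Response: one tab-separated line per registered function (name,
        # total, running, available workers), ending with a lone "." line.
        while not buf.endswith(b'.\n'):
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
        return buf.decode()
    finally:
        sock.close()

print(gearman_status())

An empty function list (a response of just ".") still confirms the daemon is listening; registered functions only appear once workers such as the gearman-plugin connect.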
19:23:16 #topic pypi mirror/requirements
19:23:45 we just merged a change that will switch all the slaves to using our mirror exclusively
19:24:15 and i have a change in progress to move building that mirror to two jenkins slaves (for python 2.6 and 2.7) which will then put the mirror on static.o.o
19:24:48 after that, i think we'll swing back around and set up some gating jobs for the requirements repo
19:25:15 any questions about that?
19:25:55 is the plan to remove things from the openstack-specific mirror over time as they no longer fall within the requirements allowances?
19:26:36 fungi: that's not in the short-term plan. we should definitely talk about that at the summit
19:27:08 or is just making sure that the requirements for a given project fall within the global requirements for the release they're on sufficient?
19:27:20 yeah, worth a discussion
19:28:39 #topic baremetal testing
19:28:55 so, I had a good chat with devananda about this yesterday
19:29:18 the first step we need to take here is getting diskimage-builder into our testing infrastructure
19:29:45 turns out SpamapS has done some work on this, adding tox.ini and making some other changes: https://github.com/stackforge/diskimage-builder/
19:30:05 which should mostly satisfy the "check python stuff" requirement
19:30:44 there's also testing 1) whether it can create an image of a specific type (image exists, exit status 0) and 2) that it boots
19:31:10 this is where I'll need help; I haven't yet written a jenkins job and I'm not sure exactly how our infrastructure will support this
19:32:38 pleia2: is anyone helping you on that?
19:32:42 * ttx waves
19:32:48 no
19:33:07 what do you mean by writing a jenkins job?
19:33:41 zaro: well, the test
19:34:19 I assume jenkins would be the one to build these images
19:34:20 pleia2: can you be more specific about what you're concerned about? jenkins can run shell scripts. what other infrastructure is needed?
19:34:25 ohh. you mean create a job in jenkins to test something?
19:34:52 if so i can help there
19:35:28 jeblair: creating the image shouldn't be a problem, but then it needs to launch these VMs and do further tests - so it needs a VM that runs the test, then it creates another VM and launches it
19:36:32 pleia2: the devstack-gate management jobs launch vms, so you might use that as a model
19:36:53 ok, I'll take a look there
19:37:12 pleia2: and in fact, the image-update job launches a vm, configures it, creates an image, and then deletes the vm.
19:37:41 pleia2: is there an overall plan written down somewhere?
19:38:06 only vaguely in the bug report
19:38:25 https://bugs.launchpad.net/openstack-ci/+bug/1082795
19:38:26 Launchpad bug 1082795 in openstack-ci "Add baremetal testing" [High,Triaged]
19:38:37 #link https://bugs.launchpad.net/openstack-ci/+bug/1082795
19:39:36 pleia2: the thing i don't understand is how we use an image from dib.
19:40:25 pleia2: we can't boot machines from a glance server in our public clouds, so i think we'd have to do the thing where we boot a public image and then sync that to the image from dib
19:40:29 mordred's idea was to stash it in glance and then when devstack-gate is run it runs a different script which pulls that image from glance instead of doing its image-update thing
19:40:30 (i forget what robert calls that)
19:40:34 ah
19:41:07 honestly, that sounds slow to me, but i haven't seen it in action. :)
19:41:36 well, I think this will only be important for the "baremetal" one, since demo is run tripleo style
19:42:03 (demo is another image we'll test building with dib)
19:43:27 pleia2: i'm happy to help with overcoming technical obstacles, but you may have to go to mordred if you find you're missing an architectural piece.
19:43:49 jeblair: ok, thanks
19:44:11 I'll poke around jenkins and see if I can get some drafts for these tests in place too, seeing as I have no familiarity with it yet :)
19:44:21 pleia2: you'll want to read up on jenkins-job-builder
19:44:27 right
19:44:35 #link http://ci.openstack.org/jenkins-job-builder/
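A rough sketch of the two checks pleia2 enumerates: the build succeeds (exit status 0, image file exists) and the result boots. The dib elements, image and flavor names, and credentials are placeholders, the glance upload between the two steps is elided, novaclient's v1_1 API was the current one at the time, and the boot half assumes mordred's stash-it-in-glance approach, which the discussion above leaves as an open question for our public clouds.

#!/usr/bin/env python
# Sketch of the two checks: (1) disk-image-create exits 0 and leaves an
# image file behind, (2) a server booted from that image reaches ACTIVE.
# The elements ('ubuntu', 'vm'), names, and credentials are placeholders.
import os
import subprocess
import time

from novaclient.v1_1 import client

# Check 1: the image builds (exit status 0) and the file exists.
subprocess.check_call(
    ['disk-image-create', '-o', 'test-image', 'ubuntu', 'vm'])
assert os.path.exists('test-image.qcow2'), 'dib exited 0 but left no image'

# ... upload test-image.qcow2 to glance as 'dib-test-image' (elided) ...

# Check 2: the image boots. Modeled on the image-update job pattern:
# boot a vm, wait for ACTIVE, then clean up.
nova = client.Client(os.environ['OS_USERNAME'], os.environ['OS_PASSWORD'],
                     os.environ['OS_TENANT_NAME'], os.environ['OS_AUTH_URL'])
server = nova.servers.create('dib-boot-test',
                             nova.images.find(name='dib-test-image'),
                             nova.flavors.find(name='m1.small'))
try:
    for _ in range(60):  # wait up to ten minutes
        server = nova.servers.get(server.id)
        if server.status in ('ACTIVE', 'ERROR'):
            break
        time.sleep(10)
    assert server.status == 'ACTIVE', 'failed to boot: %s' % server.status
finally:
    server.delete()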
19:44:51 #topic releasing git-review
19:44:56 fungi: ?
19:45:22 mordred wanted to wait for his rebase configuration patch
19:45:42 he and saper are still hashing out that pair of intertwined patches, it seems
19:45:49 there was more work done on them over the weekend
19:46:33 i'm strongly inclined to let those wait for the next release and go ahead and tag one in the interim
19:47:00 on a related note, this has just reminded me to tag zuul 1.2.0.
19:47:31 #topic open discussion
19:48:07 anyone have anything else they would like to chat about?
19:48:08 bug 1157618
19:48:09 Launchpad bug 1157618 in openstack-ci "swift-tarball job produces incorrectly-versioned tarballs" [Undecided,New] https://launchpad.net/bugs/1157618
19:48:22 i discussed it with mordred
19:48:33 We'll work around it manually for the remaining swift rcs
19:48:38 i believe he promised all projects would have the new versioning code by the release. :(
19:48:44 fungi: so i'll use your rename powers
19:48:57 ttx: i'll be happy to lend them
19:49:44 my rename powers consist of 'ssh tarballs.o.o sudo mv foo bar'
19:50:01 that's all I need :)
19:50:24 bug 1160269 is probably completed
19:50:27 Launchpad bug 1160269 in openstack-ci "stable/essex is maintained by ~openstack-essex-maint" [Medium,In progress] https://launchpad.net/bugs/1160269
19:50:41 ttx: did you respond to my question for clarification there?
19:50:46 fungi: yes
19:50:58 okay, cool. lp e-mail is sometimes sloooow
19:51:17 okay, awesome. marking it released
19:51:30 that's all I had
19:51:36 ttx: is diablo dead yet?
19:51:48 jeblair: for some definition of dead, yes
19:52:11 jeblair: our position was that it's alive as long as someone cares about it
19:52:33 jeblair: i.e. as long as openstack-diablo-maint is staffed
19:52:54 but i think in practice it's pretty dead now
19:53:18 okay, we should think about removing it from infrastructure soon... maybe a summit chat. :)
19:53:18 we do security for essex and folsom
19:53:28 there is a session on stable branches at the summit
19:53:50 cool, it can wait till then i think.
19:54:04 it won't be much deader by then.
19:54:18 just as a heads up, i'm going to be continuing this as an employee of the openstack foundation after next week... so don't try to e-mail me at my hp.com address (not that i ever checked it regularly anyway)
19:54:58 fungi: congratulations!
19:55:03 thanks jeblair
19:55:20 congrats fungi!
19:55:23 fungi: congrats!
19:55:37 i appreciate your support
19:56:30 fungi: i never even knew you had an hp email.
19:56:40 on that note: thanks everyone!
19:56:41 #endmeeting