19:02:41 <jeblair> #startmeeting infra
19:02:42 <openstack> Meeting started Tue Mar 26 19:02:41 2013 UTC.  The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:43 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:45 <openstack> The meeting name has been set to 'infra'
19:03:03 <jeblair> #action clarkb set up new hpcs account with az1-3 access
19:03:12 <jeblair> #topic actions from last meeting
19:03:25 <jeblair> fungi: it looks like you started on renaming -drivers in gerrit to -milestone, and creating -ptl groups and giving them tag access
19:03:34 <fungi> jeblair: yes
19:03:38 <fungi> #link https://etherpad.openstack.org/3a5sCguY7B
19:03:42 <fungi> working up a plan there
19:03:50 <fungi> the rename is the touchy bit
19:04:02 <fungi> the -ptl groups should be easier
19:04:15 <jeblair> ttx: ^ fyi
19:04:30 <fungi> ttx also opened a related bug
19:04:33 <fungi> #link https://bugs.launchpad.net/bugs/1160277
19:04:35 <uvirtbot> Launchpad bug 1160277 in openstack-ci "Groups have similar names in LP and gerrit but are no longer synced" [Undecided,New]
19:05:16 <fungi> not much else to add on that at the moment
19:05:26 <jeblair> fungi: cool.  definitely think the sql query is the way to go there.  :)
19:05:30 <fungi> i think we can maybe announce a very short outage window, or no window at all
19:05:52 <fungi> should only impact things like +2/approve and only for a few minutes
19:06:34 <fungi> #action fungi finish working on -milestone/-ptl group changes
19:06:39 <jeblair> fungi: yeah, i don't think i'd bother scheduling an outage; just maybe announce that the names have changed
19:06:51 <fungi> wfm
19:07:13 <jeblair> fungi: it's possible that if you use the webui there would be no effect (as i believe that it updates the acls simultaneously)
19:07:30 <fungi> oh, interesting. i'll test that
19:07:37 <jeblair> fungi: but either way, i think the effect is too small to bother people with.
19:07:48 <fungi> agreed
19:08:12 <jeblair> zaro: ping
19:08:33 <zaro> yes
19:09:26 <zaro> was there a question?
19:09:50 <jeblair> gearmand; it's running on zuul.o.o now, right?
19:10:06 <zaro> actually i haven't checked yet.
19:10:15 <zaro> fungi, do you know?
19:10:21 <jeblair> i just logged in and looked; it seems to be running
19:10:28 <fungi> zaro: it was there when i looked, yes
19:10:33 <zaro> great!
19:10:51 <jeblair> zaro: how's your jenkins-dev testing going?
19:10:57 <jeblair> #topic gearman
19:11:13 <zaro> it's going well. haven't found any new bugs
19:11:24 <zaro> on jenkins-dev anyways.
19:11:54 <fungi> zaro: you want to link to your test plan etherpad?
19:11:59 <zaro> the only new thing to report this week is that the jenkins-gearman plugin is now hosted on jenkinsci
19:12:55 <zaro> sent brian an email asking about gearman cancel feature but have not gotten a reply.
19:13:20 <zaro> supposedly he just needs to get it past CI.
19:13:36 <zaro> he says feature is complete on his dev branch.
19:14:17 <zaro> other than that news, i've added some tests for the plugin.  unit and integration.
19:14:37 <jeblair> it sounds like that's the only thing we're waiting on before we start hooking it into zuul
19:14:43 <zaro> fungi: it's not a real test plan. so i won't bother linking to the etherpad.
19:15:06 <zaro> we can do some prelim testing with it though if you would like to do that.
19:15:39 <zaro> just without the cancel feature.
19:16:04 <zaro> the cancel gearman job feature has nothing to do with the gearman-plugin.
19:16:19 <zaro> cancel will be done on zuul clients.
19:17:35 <zaro> hello??
19:17:58 <pleia2> we're still here
19:18:24 <zaro> sorry, not used to silence during a meeting.
19:18:33 <fungi> we're a quiet bunch
19:18:47 <jeblair> and we're waiting on krow, so there's not much more to say
19:19:12 <zaro> what do you think about hooking up to prod jenkins for testing?
19:19:28 <jeblair> after the release
19:19:48 <zaro> ok, when is that again.
19:19:56 <fungi> we're in a bit of a self-imposed slushy-freeze rsn, i assume
19:20:19 <jeblair> zaro: april 4
19:20:39 <zaro> cool.
19:21:14 <jeblair> zaro: though i don't think we'll do much testing in production
19:21:50 <jeblair> zaro: at most, we'll install the plugin and make sure it doesn't have any adverse effects; i think the first time it's going to get used on the production server is when zuul starts sending it jobs
19:22:03 <jeblair> zaro: we'll test zuul development against jenkins-dev.
19:22:35 <zaro> sounds good.
19:23:16 <jeblair> #topic pypi mirror/requirements
19:23:45 <jeblair> we just merged a change that will switch all the slaves to using our mirror exclusively
19:24:15 <jeblair> and i have a change in progress to move building that mirror to two jenkins slaves (for python 2.6 and 2.7), which will then put the mirror on static.o.o
19:24:48 <jeblair> after that, i think we'll swing back around and set up some gating jobs for the requirements repo
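(For reference, a minimal sketch of how a slave can be pointed exclusively at a pypi mirror through pip configuration; the mirror URL below is illustrative, not the actual value deployed by the change discussed above:)

    # hypothetical slave setup: send all pip traffic to the mirror only
    mkdir -p ~/.pip
    cat > ~/.pip/pip.conf <<EOF
    [global]
    index-url = http://pypi.openstack.org/openstack/
    EOF
    # any later "pip install <package>" on the slave now resolves
    # packages solely from the mirror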
19:25:15 <jeblair> any questions about that?
19:25:55 <fungi> is the plan to remove things from the openstack-specific mirror over time as they no longer fall within the requirements allowances?
19:26:36 <jeblair> fungi: that's not in the short-term plan.  we should definitely talk about that at the summit
19:27:08 <fungi> or is it sufficient to just make sure that the requirements for a given project fall within the global requirements for the release they're on?
19:27:20 <fungi> yeah, worth a discussion
19:28:39 <jeblair> #topic baremetal testing
19:28:55 <pleia2> so, I had a good chat with devananda about this yesterday
19:29:18 <pleia2> the first step we need to do here is get diskimage-builder into our testing infrastructure
19:29:45 <pleia2> turns out SpamapS has done some work on this, adding tox.ini and making some other changes: https://github.com/stackforge/diskimage-builder/
19:30:05 <pleia2> which should mostly satisfy the "check python stuff" requirement
19:30:44 <pleia2> there's also testing 1) whether it can create an image of a specific type (image exists, exit status 0) and 2) that it boots
19:31:10 <pleia2> this is where I'll need help - I haven't yet written a jenkins job and I'm not sure exactly how our infrastructure will support this
19:32:38 <jeblair> pleia2: is anyone helping you on that?
19:32:42 * ttx waves
19:32:48 <pleia2> no
19:33:07 <zaro> what do you mean by writing jenkins job?
19:33:41 <pleia2> zaro: well, the test
19:34:19 <pleia2> I assume jenkins would be the one to build these images
19:34:20 <jeblair> pleia2: can you be more specific about what you're concerned about?  jenkins can run shell scripts.  what other infrastructure is needed?
19:34:25 <zaro> ohh.  you mean create a job in jenkins to test something?
19:34:52 <zaro> if so i can help there
19:35:28 <pleia2> jeblair: creating the image shouldn't be a problem, but then it needs to launch these VMs and do further tests - so it needs a VM that runs the test, then it creates another VM and launches it
19:36:32 <jeblair> pleia2: the devstack-gate management jobs launch vms, so you might use that as a model
19:36:53 <pleia2> ok, I'll take a look there
19:37:12 <jeblair> pleia2: and in fact, the image-update job launches a vm, configures it, creates an image, and then deletes the vm.
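(A rough shell sketch of the first half of what pleia2 described - build an image with diskimage-builder and confirm it exits 0 and produces the expected file; the element names and output path are placeholders, not an agreed-on job definition:)

    #!/bin/bash -xe
    # hypothetical check: clone diskimage-builder, build an image,
    # and rely on -xe so any non-zero exit fails the job
    git clone https://github.com/stackforge/diskimage-builder.git
    cd diskimage-builder
    ./bin/disk-image-create -o test-image ubuntu vm
    # verify the expected image file was produced
    test -f test-image.qcow2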
19:37:41 <jeblair> pleia2: is there an overall plan written down somewhere?
19:38:06 <pleia2> only vaguely in the bug report
19:38:25 <pleia2> https://bugs.launchpad.net/openstack-ci/+bug/1082795
19:38:26 <uvirtbot> Launchpad bug 1082795 in openstack-ci "Add baremetal testing" [High,Triaged]
19:38:37 <jeblair> #link https://bugs.launchpad.net/openstack-ci/+bug/1082795
19:39:36 <jeblair> pleia2: the thing i don't understand is how we use an image from dib.
19:40:25 <jeblair> pleia2: we can't boot machines from a glance server in our public clouds, so i think we'd have to do the thing where we boot a public image and then sync that to the image from dib
19:40:29 <pleia2> mordred's idea was to stash it in glance and then when devstack-gate is run it runs a different script which pulls that image from glance instead of doing its image-update thing
19:40:30 <jeblair> (i forget what robert calls that)
19:40:34 <pleia2> ah
19:41:07 <jeblair> honestly, that sounds slow to me, but i haven't seen it in action.  :)
19:41:36 <pleia2> well, I think this will only be important for the "baremetal" one, since demo is run tripleo style
19:42:03 <pleia2> (demo is another image we'll test building with dib)
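(And a loose sketch of the glance-stash idea mordred suggested, assuming the glance CLI is available to the job; the image name is a placeholder:)

    # hypothetical follow-on: stash the built image in glance
    glance image-create --name dib-test-image \
        --disk-format qcow2 --container-format bare \
        --file test-image.qcow2
    # a later devstack-gate-style job could then fetch this image from
    # glance (rather than rebuilding it) before running its boot tests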
19:43:27 <jeblair> pleia2: i'm happy to help with overcoming technical obstacles, but you may have to go to mordred if you find you're missing an architectural piece.
19:43:49 <pleia2> jeblair: ok, thanks
19:44:11 <pleia2> I'll poke around jenkins and see if I can get some drafts for these tests in place too, seeing as I have no familiarity with it yet :)
19:44:21 <jeblair> pleia2: you'll want to read up on jenkins-job-builder
19:44:27 <pleia2> right
19:44:35 <jeblair> #link http://ci.openstack.org/jenkins-job-builder/
19:44:51 <jeblair> #topic releasing git-review
19:44:56 <jeblair> fungi: ?
19:45:22 <fungi> mordred wanted to wait for his rebase configuration patch
19:45:42 <fungi> he and saper are still hashing out that pair of intertwined patches, it seems
19:45:49 <fungi> there was more work done on them over the weekend
19:46:33 <fungi> i'm strongly inclined to let those wait for the next release and go ahead and tag one in the interim
19:47:00 <jeblair> on a related note, this has just reminded me to tag zuul 1.2.0.
19:47:31 <jeblair> #topic open discussion
19:48:07 <jeblair> anyone have anything else they would like to chat about?
19:48:08 <ttx> bug 1157618
19:48:09 <uvirtbot> Launchpad bug 1157618 in openstack-ci "swift-tarball job produces incorrectly-versioned tarballs" [Undecided,New] https://launchpad.net/bugs/1157618
19:48:22 <ttx> i discussed it with mordred
19:48:33 <ttx> We'll work around it manually for the remaining swift rcs
19:48:38 <jeblair> i believe he promised all projects would have the new versioning code by the release.  :(
19:48:44 <ttx> fungi: so i'll use your rename powers
19:48:57 <fungi> ttx: i'll be happy to lend them
19:49:44 <fungi> my rename powers consist of 'ssh tarballs.o.o sudo mv foo bar'
19:50:01 <ttx> that's all I need :)
19:50:24 <ttx> bug 1160269 is probably completed
19:50:27 <uvirtbot> Launchpad bug 1160269 in openstack-ci "stable/essex is maintained by ~openstack-essex-maint" [Medium,In progress] https://launchpad.net/bugs/1160269
19:50:41 <fungi> ttx: did you respond to my question for clarification there?
19:50:46 <ttx> fungi: yes
19:50:58 <fungi> okay, cool. lp e-mail is sometimes sloooow
19:51:17 <fungi> okay, awesome. marking released
19:51:30 <ttx> that's all I had
19:51:36 <jeblair> ttx: is diablo dead yet?
19:51:48 <ttx> jeblair: for some definition of dead, yes
19:52:11 <ttx> jeblair: our position was that it's alive as long as someone cares about it
19:52:33 <ttx> jeblair: i.e. as long as openstack-diablo-maint is staffed
19:52:54 <ttx> but i think in practice it's pretty dead now
19:53:18 <jeblair> okay, we should think about removing it from infrastructure soon...  maybe a summit chat.  :)
19:53:18 <ttx> we do security for essex and folsom
19:53:28 <ttx> there is a session on stable branches at the summit
19:53:50 <jeblair> cool, it can wait till then i think.
19:54:04 <jeblair> it won't be much deader by then.
19:54:18 <fungi> just as a heads up, i'm going to be continuing this as an employee of the openstack foundation after next week... so don't try to e-mail me at my hp.com address (not that i ever checked it regularly anyway)
19:54:58 <jeblair> fungi: congratulations!
19:55:03 <fungi> thanks jeblair
19:55:20 <pleia2> congrats fungi!
19:55:23 <zaro> fungi: congrats!
19:55:37 <fungi> i appreciate your support
19:56:30 <zaro> fungi: i never even knew you had an hp email.
19:56:40 <jeblair> on that note: thanks everyone!
19:56:41 <jeblair> #endmeeting