19:01:14 <mtaylor> #startmeeting
19:01:15 <openstack> Meeting started Tue May 15 19:01:14 2012 UTC.  The chair is mtaylor. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:15 <LinuxJedi> o/
19:01:16 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:27 <clarkb> o?
19:01:36 <clarkb> er o/
19:01:42 <mtaylor> #topic gerrit trigger plugin
19:01:46 <mtaylor> jeblair: you go
19:01:47 <jeblair> that looks exactly like a person scratching their head
19:01:50 <mtaylor> what's up?
19:02:00 <jeblair> o?  <- gerrit trigger plugin makes jim scratch his head a lot
19:02:05 <jeblair> so...
19:02:06 <mtaylor> indeed
19:02:14 <jeblair> our current changes have been merged upstream
19:02:21 <mtaylor> woohoo
19:02:33 <jeblair> darragh points out that a few things may have been missed, but i'm sure they can be fixed with small patches
19:02:48 <mtaylor> cool. does that mean we can close out https://bugs.launchpad.net/bugs/903375
19:02:49 <uvirtbot> Launchpad bug 903375 in openstack-ci "Finish and install new Gerrit Trigger Plugin" [High,Fix committed]
19:02:53 <soren> What were these changes?
19:03:01 <soren> Too many to enumerate?
19:03:33 <jeblair> i'm working on speculative execution, which will let us test lots of changes in parallel and merge them in series, maintaining our current behavior of testing patches "as they will be merged", but parallelizing the process for speed
19:03:45 <jeblair> mtaylor: i think so
19:03:49 <mtaylor> jeblair: awesome
19:04:02 <jeblair> soren: we added support for triggering on comment-added and ref-updated events
19:04:17 <jeblair> comment-added is what we use to trigger testing and merging on APRV+1 votes
19:04:29 <jeblair> ref-updated we use for building tarballs, etc, when changes land
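[Editor's note: the comment-added / ref-updated triggering jeblair describes can be sketched as a small dispatcher over Gerrit's `stream-events` JSON. The event shapes below follow Gerrit 2.x's documented stream-events format, but the exact field values and handler names here are illustrative, not the plugin's actual code.]

```python
import json

# Minimal dispatcher in the spirit of the two trigger events described
# above. "comment-added" with an APRV+1 vote starts test+merge;
# "ref-updated" (a change landing) starts tarball builds etc.

def dispatch(event_json, on_approved, on_ref_updated):
    """Route a single stream-events line to the right handler."""
    event = json.loads(event_json)
    if event.get("type") == "comment-added":
        # Only an APRV (approval) vote of +1 should trigger test+merge.
        for approval in event.get("approvals", []):
            if approval.get("type") == "APRV" and approval.get("value") == "1":
                return on_approved(event["change"])
    elif event.get("type") == "ref-updated":
        # A ref moved: kick off post-merge jobs (tarballs, docs, ...).
        return on_ref_updated(event["refUpdate"])
    return None

sample = json.dumps({
    "type": "comment-added",
    "change": {"project": "openstack/nova", "number": "1234"},
    "approvals": [{"type": "APRV", "value": "1"}],
})
triggered = dispatch(
    sample,
    on_approved=lambda change: ("test+merge", change["number"]),
    on_ref_updated=lambda ref: ("tarball", ref),
)
```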
19:04:51 <soren> jeblair: Neat. I was just sweating at the Gerrit trigger plugin a couple of hours ago for not supporting that.
19:04:55 <soren> Er...
19:04:57 <soren> swearing.
19:04:58 <soren> Mostly.
19:05:03 <mtaylor> soren: you should use ours
19:05:10 <soren> Clearly!
19:05:29 <mtaylor> hrm
19:05:38 <jeblair> we have a jenkins job that builds ours and has the hpi as an artifact
19:05:41 <mtaylor> LinuxJedi: your changes to the docs for that don't seem to have made it to ci.openstack.org
19:05:54 <jeblair> so whatever craziness we're working on is available pre-built
19:05:59 <LinuxJedi> mtaylor: awesome, something to look at
19:06:44 <mtaylor> soren: https://jenkins.openstack.org/view/All/job/gerrit-trigger-plugin-package
19:06:45 <jeblair> so, immediate future work for me: continue working on spec-ex, fixing upstream merge problems as i go, and roll that out to openstack
19:06:46 <mtaylor> /lastSuccessfulBuild/artifact/gerrithudsontrigger/target/gerrit-trigger.hpi
19:06:48 <mtaylor> gah
19:06:56 <mtaylor> soren: https://jenkins.openstack.org/view/All/job/gerrit-trigger-plugin-package/lastSuccessfulBuild/artifact/gerrithudsontrigger/target/gerrit-trigger.hpi
19:07:23 <mtaylor> jeblair: sounds fantastic. you are enjoying java threading I take it?
19:08:03 <mtaylor> LinuxJedi: AH - I know why...
19:08:18 <jeblair> i'm hoping that the spec-ex patch will be pretty small, but there are a lot of events and listeners going on here, so it'll take a bit to get it just right.  :)
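[Editor's note: the "speculative execution" idea jeblair outlines — test lots of changes in parallel, merge them in series, each tested "as it will be merged" — can be modeled with a toy queue. This is a sketch of the concept only; names and structure are made up and bear no relation to the plugin's real event/listener machinery.]

```python
# Each queued change is tested against the current tip PLUS every change
# ahead of it in the queue, so test runs can happen in parallel while
# merges still land strictly in series.

def build_test_states(tip, queue):
    """For each change, the speculative repo state it should be tested on."""
    states = []
    ahead = []
    for change in queue:
        states.append((change, tuple([tip] + ahead)))
        ahead.append(change)
    return states

def merge_in_series(tip, queue, passed):
    """Merge one at a time; a failure stops the series, since everything
    behind the failed change was tested on a now-invalid speculative state
    and (in the real system) would need to be retested."""
    merged = [tip]
    for change in queue:
        if not passed(change):
            break
        merged.append(change)
    return merged
```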
19:08:31 <mtaylor> LinuxJedi: when I was cleaning unused stuff from openstack-ci, I removed setup.py, but we use that to build docs...
19:08:41 <mtaylor> jeblair: cool
19:08:41 <LinuxJedi> haha! :)
19:09:23 <jeblair> (eol)
19:09:27 <mtaylor> sweet
19:09:31 <mtaylor> #topic etherpad
19:09:37 <mtaylor> clarkb: how's tricks?
19:09:51 <clarkb> I think linuxjedi merged the puppet module today
19:09:59 <mtaylor> I believe you are right
19:10:13 <LinuxJedi> I did, was I not supposed to?
19:10:18 <mtaylor> nope, that's great
19:10:39 <clarkb> there are a couple extra things that I should eventually fix in that module, but for now you get everything but ssl certs, backups, and the json settings file (because passwords)
19:11:11 <clarkb> Once I get accounts I can spin up a box to run that on and migrate the data from the old etherpad to the new
19:12:00 <mtaylor> clarkb: LinuxJedi would be more than happy to spin you up a machine :)
19:12:00 <clarkb> I suppose I should also document this which has not been done.
19:12:17 <mtaylor> clarkb: we have an openstackci account at rackspace that we use for important servers
19:12:26 <LinuxJedi> sure thing
19:12:28 <mtaylor> speaking of ... we should probably delete some old servers from the openstack account
19:12:33 * LinuxJedi makes a note...
19:12:37 <clarkb> that works for me too
19:13:11 <mtaylor> but yeah - docs would probably be splendid. :)
19:13:13 <LinuxJedi> mtaylor: there is a stale meetbot server that can die
19:13:39 <mtaylor> there are several stale servers ...
19:13:41 <clarkb> document note is on the whiteboard
19:13:52 <mtaylor> Shrews: you around?
19:13:57 <Shrews> yup
19:14:01 <mtaylor> #topic pypi mirror
19:14:09 <LinuxJedi> mtaylor: if you have a list of them I can clear them out
19:14:20 <Shrews> pypi mirror is initialized and up and running on http://pypi.openstack.org
19:14:27 <mtaylor> Shrews: ++
19:14:35 <Shrews> right now, only updating once a day. may need to adjust that at some point
19:15:05 <Shrews> now trying to figure out how to use it correctly so that we fall back to normal pypi.python.org in case there is something we are not mirroring
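[Editor's note: one common way to get the fall-back behavior Shrews describes is pip's `extra-index-url` option, which lets pip consult upstream PyPI for anything the mirror lacks. The index paths below are guesses for illustration; only the `pypi.openstack.org` hostname appears in the log.]

```ini
# ~/.pip/pip.conf -- sketch only: prefer the local mirror, fall back to
# upstream PyPI for packages the mirror doesn't carry. Paths are assumed.
[global]
index-url = http://pypi.openstack.org/simple
extra-index-url = http://pypi.python.org/simple
```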
19:15:21 * Shrews not 100% convinced that we ARE mirroring everything, but not sure how to verify
19:15:31 <soren> What makes you think we aren't?
19:15:57 <Shrews> soren: download size is around 6GB. from older posts about setting it up, i was expecting much more
19:16:57 <soren> Yeah, that doesn't sound like much
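[Editor's note: a rough way to check Shrews's worry — whether the mirror really has everything — is to diff the package-name list from upstream's simple index against what the mirror serves. The fetching is elided here; the sketch below shows only the comparison, with PyPI-style name normalization. All names are illustrative.]

```python
def normalize(name):
    """PyPI treats names case-insensitively and '-'/'_' as equivalent."""
    return name.lower().replace("_", "-")

def missing_packages(upstream_names, mirrored_names):
    """Packages upstream knows about that the mirror lacks."""
    upstream = set(map(normalize, upstream_names))
    mirrored = set(map(normalize, mirrored_names))
    return sorted(upstream - mirrored)
```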
19:17:11 <clarkb> will it be a public mirror at some point? or is that more trouble than its worth?
19:17:47 <mtaylor> well, I'm mostly wanting it to reduce latency and make our stuff more resilient... not so sure I care if other people get benefit from it :)
19:18:07 <mtaylor> although there's really nothing preventing its use by anyone at the moment I guess
19:18:23 <Shrews> future stuff: see if pygerrit is worth anything
19:19:00 * Shrews done
19:19:08 <mtaylor> excellent ...
19:19:16 <mtaylor> #topic jenkins job filer 2.0
19:19:56 * LinuxJedi up?
19:20:09 <mtaylor> LinuxJedi: yup
19:20:14 <LinuxJedi> ok, so...
19:20:43 <LinuxJedi> after massive complications with the puppet way of trying to create jobs in jenkins I have now re-written this in Python
19:20:58 <LinuxJedi> and it takes YAML scripts for job configuration parameters
19:21:04 <LinuxJedi> and is all nice and modular and stuff
19:21:13 <mtaylor> it makes me happy
19:21:32 <LinuxJedi> it also talks the Jenkins API so can add/modify/delete jobs without any reload/restart
19:21:40 <soren> Yeah, generating config.xml from Puppet templates doesn't seem like much fun. I've been doing that a fair bit the last while.
19:21:47 <LinuxJedi> and logs everything in the job config history correctly and stuff
19:21:53 <soren> LinuxJedi: Sweet.
19:22:01 <mtaylor> soren: you should look at LinuxJedi's new stuff ... I think you'll like it
19:22:05 <soren> LinuxJedi: So is Puppet involved in that at all?
19:22:20 <LinuxJedi> soren: yes, just to specify which projects to push live
19:22:28 <LinuxJedi> soren: and it executes the python script
19:22:55 <mtaylor> soren: https://github.com/openstack/openstack-ci-puppet/tree/master/modules/jenkins_jobs
19:22:55 <LinuxJedi> soren: so nothing essential
19:23:23 <soren> LinuxJedi: I'll take a look. Thanks!
19:23:26 <clarkb> LinuxJedi: you wrote a new implementation of the api for it?
19:23:36 <LinuxJedi> next step is to make it support batches of jobs instead of having a long YAML file per project.  I've made a start on this but it won't be finished until at least tomorrow
19:23:49 <LinuxJedi> clarkb: yes, I tried 4 different APIs, they all sucked
19:24:10 <LinuxJedi> clarkb: the only one that supported all the commands we needed didn't actually work :)
19:25:04 <LinuxJedi> unfortunately it took me *much* longer to test those libraries than to write a new one
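[Editor's note: the YAML-driven job filler LinuxJedi describes boils down to turning a job definition into the `config.xml` that Jenkins' remote API accepts. The sketch below shows that conversion for a hypothetical job definition (given as the dict a YAML loader would produce, to stay stdlib-only); the definition's field names are made up, though `hudson.tasks.Shell` is Jenkins' real shell-builder element.]

```python
import xml.etree.ElementTree as ET

# A parsed YAML job definition, represented as a plain dict.
job = {
    "name": "gerrit-trigger-plugin-package",
    "description": "Build the gerrit trigger plugin hpi",
    "builders": ["mvn clean package"],
}

def job_to_config_xml(job):
    """Render a job definition as Jenkins config.xml. The real tool would
    then POST this to the Jenkins API, so jobs are added/modified without
    a reload/restart and changes land in the job config history."""
    project = ET.Element("project")
    ET.SubElement(project, "description").text = job["description"]
    builders = ET.SubElement(project, "builders")
    for command in job["builders"]:
        shell = ET.SubElement(builders, "hudson.tasks.Shell")
        ET.SubElement(shell, "command").text = command
    return ET.tostring(project, encoding="unicode")
```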
19:25:36 <mtaylor> sigh
19:25:58 <mtaylor> cool
19:26:02 <mtaylor> #topic openvz
19:26:02 <LinuxJedi> Stackforge RedDwarf (currently disabled) and Ceilometer are using it at the moment
19:26:16 <mtaylor> oops.
19:26:21 <mtaylor> LinuxJedi: anything else?
19:27:30 <mtaylor> devananda: you wanna tell the nice folks how openvz support is going?
19:28:16 <LinuxJedi> mtaylor: nothing else on jenkins jobs right now
19:30:08 <jeblair> mtaylor: is jclouds plugin far enough along to be used instead of devstack-gate on the HP internal jenkins for openvz (assuming that's the plan)?
19:30:34 <mtaylor> jeblair: I do not know.
19:31:35 <mtaylor> I'm going to see if I can do a unittests poc with it this week some time
19:31:45 <soren> I forget... Why do we care about openvz?
19:31:55 <mtaylor> the story so far on openvz is that we can finally build the kernel module
19:32:24 <mtaylor> soren: hp and rackspace both want nova to support it to use behind dbaas stuff ... the migrations feature I think is one of the big plusses iirc
19:32:44 <mtaylor> but we're not going to merge the patch until we can test the patch
19:32:56 <devananda> mtaylor: sorry, missed the ping...
19:32:58 <mtaylor> s/we/vish/
19:33:00 <mtaylor> all good
19:34:58 <devananda> so, like mtaylor said, we've got a .deb package of openvz kernel that boots in ubuntu.
19:35:22 <LinuxJedi> devananda: you made it work with 3.x or is it an older kernel?
19:35:44 <devananda> i'll be working this week to get jenkins building and testing it (probably with significant help from jeblair)
19:35:52 <devananda> LinuxJedi: 2.6.32
19:35:59 <LinuxJedi> ah, ok :)
19:36:12 <devananda> that's the last one supported by openvz, as far as they've said to me
19:37:00 <devananda> as far as what tests to run on it, or gating, etc, i leave to others at this point :)
19:37:24 <mtaylor> yeah - I think for now we're just gonna focus on being able to spin up openvz enabled machines
19:37:44 <mtaylor> once we've got that, other folks can help actually drive testing and stuff
19:38:48 <mtaylor> #topic open discussion
19:38:50 <mtaylor> anything else ?
19:38:55 * LinuxJedi raises hand
19:39:00 <mtaylor> LinuxJedi: go!
19:39:07 <LinuxJedi> stackforge email...
19:39:24 <LinuxJedi> the stackforge gerrit server has been migrated to a different cloud account
19:39:37 <mtaylor> ah yes.
19:39:40 <LinuxJedi> this needed to happen anyway, but was accelerated due to mail not actually sending
19:39:58 <LinuxJedi> about 20 minutes ago I was finally told why that happened and that it will happen again
19:40:21 * jeblair perks up
19:40:22 <LinuxJedi> so we need an action plan that will most certainly involve a relay server outside of HP Cloud
19:40:35 <LinuxJedi> jeblair, mtaylor: I've just emailed you the exact reason
19:41:06 * mtaylor is so happy ...
19:41:33 <LinuxJedi> yep, I want to use pointy things because it took a week to find out this information
19:41:44 <LinuxJedi> and I was told it when I wasn't even looking for it
19:42:18 <mtaylor> LinuxJedi: do we know what the port 25 rate limit actually is?
19:42:42 <LinuxJedi> mtaylor: I didn't get that far, but it explains why a few cronspam were getting through
19:42:56 <LinuxJedi> mtaylor: let's just assume really damn low for now
19:43:02 <mtaylor> yeah. that's probably fair
19:43:13 <LinuxJedi> mtaylor: so low you will see that before PBL
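[Editor's note: while the actual limit stays unknown ("assume really damn low"), an outbound-mail queue can protect itself with a token bucket: allow at most `rate` sends per `per` seconds and refuse the rest. This is a generic sketch of that technique, not anything from the infrastructure discussed here; the class and parameter names are invented.]

```python
import time

class TokenBucket:
    """Toy token-bucket limiter: capacity `rate`, refilled at rate/per
    tokens per second. `now` is injectable so the clock can be faked."""

    def __init__(self, rate, per, now=time.monotonic):
        self.capacity = float(rate)
        self.tokens = float(rate)
        self.fill_rate = rate / per
        self.now = now
        self.last = now()

    def try_send(self):
        """Consume one token if available; return whether a send is allowed."""
        current = self.now()
        elapsed = current - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.fill_rate)
        self.last = current
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```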
19:44:02 <mtaylor> jeblair: any thoughts other than just running a mail relay on rackspace?
19:44:09 <jeblair> mtaylor: i don't think that's appropriate
19:44:15 <mtaylor> I don't either
19:44:22 <LinuxJedi> so I'm going to need to investigate this further this week
19:44:40 <mtaylor> it's possible we might be able to get the rate limiting lifted on our account I believe
19:44:43 <LinuxJedi> as there is an implied workaround on a case-by-case basis
19:44:46 <LinuxJedi> yep
19:45:28 <LinuxJedi> we just need to figure out who to talk to, and the guy I emailed you about is probably a good starting point
19:45:34 <mtaylor> great
19:46:18 <LinuxJedi> just wish *someone* had told me in all those mails in the last week :)
19:46:34 <LinuxJedi> I can't be the only person that is going to hit this :)
19:46:51 <LinuxJedi> </rant>
19:48:46 <jeblair> mtaylor: eom?
19:48:52 <mtaylor> yeah. I think so
19:48:54 <mtaylor> thanks everybody!
19:48:55 <clarkb> oh I will be out friday
19:48:59 <mtaylor> #endmeeting