19:08:12 <mtaylor> #startmeeting
19:08:13 <openstack> Meeting started Tue Aug 14 19:08:12 2012 UTC.  The chair is mtaylor. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:08:14 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:08:30 <mtaylor> what's up folks?
19:08:40 <harlowja> so, only thing i have to bring up is possibly, https://bugs.launchpad.net/openstack-ci/+bug/1035966
19:08:41 <uvirtbot> Launchpad bug 1035966 in openstack-ci "Move 'anvil' to stackforge" [Undecided,New]
19:09:05 <harlowja> i'm fixing it up/cleaning it up for folsom and it might be useful for it to show up in stackforge instead of the y! organization
19:09:28 <harlowja> thoughts?
19:09:43 * mtaylor looks at the bug...
19:10:19 <harlowja> k, anvil is similar to devstack for those that are wondering
19:10:44 <harlowja> but i've tried to add features that devstack doesn't have and tried to refactor it (in python) with other goodies as well
19:10:51 <mtaylor> ah. hrm.
19:10:52 <harlowja> example outputs are attached to that bug...
19:11:18 <harlowja> but it's a useful tool, so that's why i think it belongs on stackforge, being that the forge site seems like the right place?
19:12:02 <mtaylor> yeah - so, are there any concerns?
19:12:35 <harlowja> not from me (i'd wait on moving it for say another week though as i finish up some fixes in it)
19:12:50 <harlowja> idk about the devstack guys but devstack seems like it should also be on stackforge, idk
19:12:59 <harlowja> *i won't push that, but seems to make sense
19:14:25 <mtaylor> seems to make general sense to me
19:14:34 <mtaylor> my main concern would be in what testing for it would look like
19:15:31 <harlowja> ya, unit level stuff would make sense, the capability to do more than that could be added on (setting up openstack components on different distros...), but that might not be useful since that's approaching integration tests (but idk)
19:16:06 <mtaylor> exactly, and that winds up starting to be a larger engineering effort
19:16:26 <mtaylor> I'd like to keep it to unittesty stuff, because that's all stuff you can manage pretty easily by just putting in patches to the puppet repo
19:16:38 <harlowja> that's fine with me
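To make the agreed scope concrete, a unit-level check of the kind being discussed would look roughly like the sketch below: pure-Python logic, no distro setup, no running services. The merge_config helper is a stand-in invented for illustration; anvil's real functions and module layout may differ.

```python
import unittest


def merge_config(base, override):
    """Stand-in for the kind of pure helper anvil could unit test;
    anvil's real functions and module layout may differ."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)
        else:
            merged[key] = value
    return merged


class TestMergeConfig(unittest.TestCase):
    """Fast, dependency-free checks: no distros, no openstack services."""

    def test_override_wins(self):
        merged = merge_config({"db": {"host": "localhost"}},
                              {"db": {"host": "10.0.0.5"}})
        self.assertEqual(merged["db"]["host"], "10.0.0.5")

    def test_unrelated_keys_survive(self):
        merged = merge_config({"db": {"host": "localhost", "port": 3306}},
                              {"db": {"host": "10.0.0.5"}})
        self.assertEqual(merged["db"]["port"], 3306)


if __name__ == "__main__":
    unittest.main()
```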
19:17:08 <mtaylor> kk. then I don't see any difference between this and heat from a suitability perspective
19:17:19 <jeblair> mtaylor: is there documentation on how a project joins stackforge?  and what is expected, as far as self-managing and ci?
19:17:35 <harlowja> ya, that'd be useful also, cause i wasn't really sure :)
19:17:45 <mtaylor> jeblair: there is not. so far it's all been one off requests
19:18:00 <mtaylor> jeblair: and I believe the expectation is that the project should be mostly self-sufficient
19:18:20 <mtaylor> iirc, the fine folks at heat added all of their job-builder and zuul config themselves
19:18:21 <jeblair> mtaylor: ok.  i thought there used to be docs, but i couldn't find them.
19:18:51 * LinuxJedi thought I wrote them too, I guess I thought about writing them and it didn't happen
19:19:00 * mtaylor could be wrong about the docs
19:19:08 <harlowja> might be useful to have more docs, i could see a lot of 'tools' or similar wanting to get on that site
19:19:38 <harlowja> since it seems to be the central openstack useful tools 'place'
19:19:43 <jeblair> http://ci.openstack.org/stackforge.html
19:19:44 <jeblair> huh
19:19:52 <jeblair> it's not linked
19:21:04 <mtaylor> weird
19:21:09 <harlowja> right, i just wonder if it's useful to have a 'what could show up on stackforge' list, but idk
19:21:32 <harlowja> 'similar to that of the main OpenStack project but for use with projects that are not under the main OpenStack umbrella.' is pretty vague ;)
19:22:02 <jeblair> harlowja: so how to create the jenkins jobs is not very well documented at the moment, you may have to do some digging.  it's all in the openstack-ci-puppet repo.
19:22:25 <harlowja> np
19:22:38 <jeblair> harlowja: one of us will have to do the initial import into gerrit, but you should be able to self-manage the jenkins bits after that.
19:22:45 <harlowja> cool beans
19:23:02 <mtaylor> jeblair: any other concerns from your end?
19:24:44 <jeblair> i think that covers it
19:25:02 <mtaylor> cool cool. then harlowja we'll sync up with you on getting you imported and stuff
19:25:10 <harlowja> sweet
19:25:13 <harlowja> thx guys!
19:26:16 <mtaylor> jeblair: you mentioned being blocked on me ...
19:26:45 <jeblair> i have made no progress on backups, other than to find out that hpcloud requires custom novaclient in order to use their block storage
19:27:13 <mtaylor> awesome. so, apparently that's a diablo v. folsom issue
19:27:24 <jeblair> i don't feel that's appropriate for a project level activity like this -- they're not even available without an hpcloud account.
19:27:35 <mtaylor> in that novaclient apparently has ceased supporting diablo or something
19:27:48 <jeblair> mtaylor: the bug indicated it was related to keystone.
19:27:50 <mtaylor> I agree - I do not think that we should use a modified novaclient
19:29:34 <mtaylor> jeblair: yeah, apparently something changed between then and now and novaclient doesn't support the old way at all?
19:29:46 <jeblair> mtaylor: that's vague
19:29:47 <mtaylor> anyhoo
19:30:26 <mtaylor> apparently block storage is also having data corruption issues at the moment
19:30:34 <mtaylor> so it might not be terrible that we can't get it
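For reference, the operation under discussion with a stock novaclient of the era looked roughly like the sketch below (credentials, sizes, and IDs are placeholders, and the signatures are from memory of the v1_1 client, so treat them as approximate). The complaint above is that hpcloud's endpoint would not accept this without a patched client.

```python
from novaclient.v1_1 import client

# Placeholder credentials; auth_url points at a keystone endpoint.
nova = client.Client("username", "password", "project",
                     "https://keystone.example.com:5000/v2.0/")

# Create a volume and attach it to a server -- the step that
# reportedly required a custom novaclient against hpcloud.
volume = nova.volumes.create(size=100, display_name="backup-volume")
nova.volumes.create_server_volume(server_id="server-uuid",
                                  volume_id=volume.id,
                                  device="/dev/vdb")
```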
19:30:47 <jeblair> effectively, we have no viable option for off-site backup (let's define that as at least not the same cloud provider as our main servers)
19:31:16 <mtaylor> jeblair: what about copies to multiple backup hosts, even if it's on ephemeral drives?
19:31:44 <mtaylor> jeblair: like, backup to rax and hp az{1,2} - and it would take a 4-way failure to take us all the way to dead
19:31:46 <mtaylor> ?
19:32:38 <jeblair> mtaylor: well, i'm unaware of a site-level raid system -- each time hpcloud loses a node, we would need to copy the data back from another host
19:34:27 <jeblair> (i don't think that's feasible -- no one here is actually interested in backups, so anything that relies on anyone doing anything by hand isn't going to work)
19:35:52 <jeblair> mtaylor: so do you expect the situation with hpcloud to change, or do we just need to write it off completely?
19:36:06 <mtaylor> jeblair: I do not expect the situation with hpcloud to change any time in the near future
19:36:29 <mtaylor> so if the lack of block storage access means that our current plan is unworkable, then I think we will need to come up with a new plan
19:36:33 <mtaylor> which is quite sad
19:36:54 <jeblair> it is.  hpcloud foils every attempt of ours to use it.
19:36:59 <LinuxJedi> :(
19:37:32 <mtaylor> yup
19:37:32 <LinuxJedi> maybe we should get physical kit from somewhere for this?
19:37:46 <jeblair> yeah, that may be a good idea.
19:38:32 <jeblair> mtaylor: got anything laying around?
19:39:18 <mtaylor> jeblair: hrm
19:39:20 * LinuxJedi has a beat-up old Pentium 4 laptop in the loft.  Probably more reliable
19:39:37 <jeblair> other options: ask other partners if they have something appropriate, or punt until the foundation is formed
19:39:54 <jeblair> (presumably the foundation will have an interest in this, and a budget to procure commercial solutions if necessary)
19:40:49 <jeblair> okay, well, if anyone has ideas, let us know.  :)
19:40:53 <LinuxJedi> we all work for companies with data centres with lots of kit.  Surely we can punt for a half-rack or something
19:41:03 <mtaylor> it's not just kit
19:41:07 <mtaylor> bare metal needs to be managed
19:41:09 <mtaylor> etc.
19:41:14 <LinuxJedi> true :/
19:41:24 <jeblair> true, a backup server with a degraded raid array is no fun
19:41:30 * mtaylor will ask around
19:41:57 * jeblair anticipates mtaylor getting the response "why not use hpcloud?"
19:42:04 <LinuxJedi> lol! :)
19:42:05 <creiht> mtaylor: you should store backups in swift datastores :)
19:42:31 <jeblair> creiht: we considered that -- however, that means having read-write credentials to the data store on the hosts being backed up...
19:42:45 <jeblair> creiht: we'd like our backups to be append-only
19:43:13 <jeblair> creiht: if you know of a swift provider that supports an append-only configuration, i'm all ears.  :)
19:43:21 <creiht> ahh heh
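For context, pushing a backup into swift with python-swiftclient looks roughly like the sketch below (credentials and container names are placeholders). The catch jeblair raises is visible here: the same credentials that can PUT can also overwrite or delete, so timestamped object names avoid overwrites only by convention, and a compromised host could still destroy its own backup history.

```python
import time
import swiftclient

# Placeholder credentials for a swift provider.
conn = swiftclient.client.Connection(
    authurl="https://auth.example.com/v1.0",
    user="account:backup", key="secret")

def upload_backup(container, name, path):
    """Upload one backup as a new, timestamped object.  Nothing stops
    these read-write credentials from deleting old objects -- that is
    exactly the append-only gap being discussed."""
    obj = "%s-%d" % (name, int(time.time()))
    with open(path, "rb") as f:
        conn.put_object(container, obj, contents=f)
    return obj
```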
19:44:15 <jeblair> so how about some good news?  swift is part of the devstack gate now
19:44:30 <jeblair> i set up bitrot jobs in jenkins: https://jenkins.openstack.org/view/Bitrot/
19:44:44 <jeblair> (i'll email the ML about that after the new ones have their first run)
19:45:03 <mtaylor> jeblair: woohoo!
19:45:19 <jeblair> i'd like to get someone/group set up to receive email alerts when those fail.  maybe QA team, maybe core devs, maybe PTLs, maybe stable-maint...
19:45:38 <mtaylor> jeblair: ++
19:45:46 <mtaylor> ttx: around? thoughts on above? ^^ jaypipes  ?
19:46:14 <jeblair> (if not, i'll bring it up on the ML)
19:46:53 <jeblair> so in dog-fooding cherry-pick mode, i realized that having gerrit-git-prep do merges while the project is in cherry-pick mode is problematic
19:47:08 <mtaylor> jeblair: oh, heh
19:47:27 <jeblair> if you approve a series of changes, once the first one is cherry-picked, the subsequent ones can't merge because they conflict
19:47:49 <jeblair> (because an older version of the cherry-picked patch is in their ancestry)
19:47:59 <mtaylor> jeblair: I noticed that
19:48:11 <mtaylor> jeblair: when doing puppet this weekend, but didn't make the causation connection
19:48:17 <jeblair> so, rather than adding complexity to gerrit-git-prep for that, i started moving that complexity into zuul
19:48:20 <jeblair> https://review.openstack.org/#/c/11349/
19:48:49 <mtaylor> jeblair: saw that come through - looking at it now
19:48:58 <jeblair> that's a change so that zuul starts managing copies of git repos, and whenever it wants a change tested, it merges/cherry-picks as appropriate whatever changes it wants tested
19:49:15 <jeblair> so the end result is that zuul passes a single ref to jenkins, and jenkins checks that out and tests it
19:49:39 <jeblair> this should make zuul much easier for other people to use, as they (and we!) can (eventually) get rid of the gerrit-git-prep script
19:50:08 <jeblair> mtaylor has a patch submitted to the jenkins git module to fix one problem we'd have with doing that right away
19:50:19 <jeblair> but we should be able to get to that point, and pretty soon.
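What that flow might look like, sketched with plain git commands; the function, the ref-naming scheme, and the change refs are illustrative, not zuul's actual implementation.

```python
import subprocess
import uuid

def prepare_ref(repo, base_branch, changes, mode="merge"):
    """Apply a queue of changes onto the target branch and publish a
    single ref for jenkins to fetch and test.  Illustrative only."""
    def git(*args):
        subprocess.run(["git", *args], cwd=repo, check=True)

    git("checkout", base_branch)
    for change in changes:
        git("fetch", "origin", change)        # e.g. refs/changes/49/11349/1
        if mode == "cherry-pick":
            git("cherry-pick", "FETCH_HEAD")
        else:
            git("merge", "--no-edit", "FETCH_HEAD")
    ref = "refs/zuul/%s" % uuid.uuid4().hex   # the one ref jenkins sees
    git("update-ref", ref, "HEAD")
    return ref
```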
19:50:59 <jeblair> so an open question is where should zuul's git repos be served from -- currently zuul is running on jenkins.o.o, which means they would have to be served from there (via apache, presumably)
19:51:22 <mtaylor> jeblair: I was just about to ask about that
19:51:51 <jeblair> we could move zuul to the gerrit server, but they'd still need to be separate from the replica repos, because zuul needs a working tree, and gerrit requires replicas be bare
19:52:21 <jeblair> (presumably, we could do the work in zuul, and then push the results to the replica repos)
19:52:28 <jaypipes> mtaylor: reading back.
19:52:59 <jeblair> or we could put zuul on its own server.
19:53:33 <jeblair> anyway, it's not too important.  we'll find some place.
19:54:03 <mtaylor> ++
19:54:07 <jeblair> once we switch gerrit-git-prep to using the new method, i also plan on renaming all the GERRIT_ env variables, so there will be a bit of an overhaul for all the jenkins jobs that reference them.
19:54:48 <mtaylor> kk
19:54:56 <jeblair> after the switch, i'd like to burn in cherry-pick mode some more before we examine changing the projects.
19:55:25 <mtaylor> I agree with that - cherry-pick so far has been mildly weird
19:55:25 <jeblair> eol
19:55:34 <mtaylor> but that's probably largely the merge thing
19:55:46 <jeblair> yeah, i think so.  but good to be sure.
19:57:24 <mtaylor> and with that - I think we're at about time
19:58:54 <ttx> mtaylor: stable-maint sounds good to me
19:59:10 <ttx> for bitrot jobs
19:59:44 <ttx> since they apply to stable/*
20:00:16 <jeblair> ttx: cool, i'll make that a strawman suggestion when i post to the ml
20:00:42 <ttx> jeblair: could be some group that people opt in
20:00:52 <ttx> I mean, if people are into that...
20:01:42 * med_ thinks there's an endmeeting that needs to occur.
20:02:29 <med_> mtaylor, ^?
20:02:31 <jeblair> #endmeeting