19:06:08 <jeblair> #startmeeting infra
19:06:09 <openstack> Meeting started Tue Mar 12 19:06:08 2013 UTC.  The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:06:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:06:12 <openstack> The meeting name has been set to 'infra'
19:06:30 <jeblair> #link http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-03-05-19.02.html
19:06:42 <jeblair> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting
19:07:26 <jeblair> mordred: i think the decision about hpcs az3 was to make a new account, yeah?
19:07:36 <mordred> jeblair: yes
19:07:58 <jeblair> #action clarkb set up new hpcs account with az1-3 access
19:08:22 <jeblair> fungi: you did #2 and #3, yeah?  wanna talk about those?
19:08:28 <fungi> sure
19:08:42 <jeblair> dtroyer: ping
19:08:57 <dtroyer> jeblair yo
19:09:11 <jeblair> dtroyer: awesome, can you hang for a sec, i have a question for you when fungi's done
19:09:16 <fungi> okay, so #2 was really just follow up to the cla maintenance
19:09:21 <dtroyer> np
19:09:25 <fungi> dredging up ml archive link now
19:10:08 <fungi> basically started with
19:10:12 <fungi> #link http://lists.openstack.org/pipermail/openstack-dev/2013-March/006344.html
19:10:28 <fungi> evolved later into
19:10:31 <fungi> #link https://wiki.openstack.org/wiki/CLA-FAQ
19:11:00 <fungi> pretty cut and dried. as far as improvements to the foundation's e-mail matching, i haven't heard back from todd
19:11:06 <fungi> i'll check in with him today
19:11:30 <fungi> quantal slaves is the other thing there...
19:11:45 <fungi> we're running all our infra jobs which used to run on precise slaves there now
19:12:19 <fungi> only issue encountered so far (to my knowledge) was a race condition in zuul's tests which we exposed in this change, and which jeblair patched
19:12:55 <fungi> do we have a feel for when we should move forward proposing patches to other projects?
19:13:15 <jeblair> since we're so late in the cycle, we should do it very cautiously.
19:13:19 <fungi> i have the job candidates in a list here...
19:13:22 <fungi> #link https://etherpad.openstack.org/quantal-job-candidates
19:14:15 <fungi> well, technically jobs, templates and projects
19:15:04 <fungi> we're already in phase 1. phase 2 there could certainly be split out into phases 2-10 or whatever
19:15:20 <jeblair> yeah, i'd try to batch those a bit
19:15:26 <jeblair> maybe do all the non-openstack projects
19:15:47 <fungi> sure, i'll make stackforge et al phase 2
19:15:55 <jeblair> then maybe the openstack projects.
19:15:59 <jeblair> you tested nova, right?
19:16:08 <fungi> repeatedly, yes
19:16:21 <fungi> and spot tested a lot of the other server and client projects too
19:16:33 <jeblair> ok, so it's probably okay to batch the openstack ones together too
19:16:40 <jeblair> just be ready to quickly revert.
19:16:46 <jeblair> when is rc1?
19:17:07 <fungi> yeah i ran all of the main openstack components' py27 jobs on their master branches to test
19:17:51 <jeblair> the first rc is thursday
19:18:12 <jeblair> i'm starting to think we should postpone until the summit
19:18:21 <jeblair> or after the release
19:18:24 <fungi> so 48 hours... yeah, i agree for the official openstack bits
19:18:36 <fungi> i'll press forward with the rest though for now
19:18:39 <jeblair> sounds good
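
(A minimal sketch of what one of these per-job migration changes might look like, assuming the Jenkins Job Builder definitions in openstack-infra/config select slaves via a "node" attribute; the file path, branch name, and commit message below are illustrative, not an actual proposed change:)

    # Hypothetical sketch: switch a job's slave label from precise to quantal
    # in the JJB definitions and send the change for review.
    cd config   # a local openstack-infra/config checkout (assumed)
    sed -i 's/node: precise/node: quantal/' \
        modules/openstack_project/files/jenkins_job_builder/config/projects.yaml
    git checkout -b quantal-stackforge
    git commit -a -m "Run stackforge jobs on quantal slaves"
    git review
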
19:18:57 <fungi> we also have rhel6 nearing a similar state which can occupy my time
19:19:16 <fungi> presumably follow the same pattern there for oneiric-rhel6
19:19:33 <jeblair> yeah, i also think that needs to wait until after the release for the official projects
19:19:42 <fungi> completely agree
19:20:00 <fungi> should be dtroyer's turn now, unless there are other related questions
19:20:00 <jeblair> #action fungi migrate stackforge projects to quantal, possibly rhel6
19:20:14 <jeblair> dtroyer: so i just noticed you +1'd https://review.openstack.org/#/c/23209/
19:20:21 <jeblair> which is what i was going to ask about, so cool.
19:20:38 <jeblair> dtroyer: is there any time in particular you would like me to merge that?
19:20:52 <jeblair> dtroyer: (so you can know when to update git remotes, etc)
19:21:39 <dtroyer> anytime is good…Grenade is running through right now, but not all of the tests are passing yet, so there is presumably some upgrade work missing.
19:22:14 <jeblair> dtroyer: okay, i'll merge it this afternoon and let you know.
19:22:20 <dtroyer> great, thanks
19:22:26 <jeblair> thank you!
19:22:43 <jeblair> #topic oslo-config rename
19:23:17 <jeblair> i'm going to test renaming a project with a hyphen to a dot on review-dev, just to make sure there's no gotchas there
19:23:28 <jeblair> assuming that works, when should we schedule the gerrit downtime for the rename?
19:23:48 <jeblair> friday night?
19:24:14 <fungi> i'm cool with helping on friday night
19:24:35 <fungi> is the day after rc1 release likely to be a bad time? or a good time?
19:24:57 <jeblair> we could do wed night, which is the night before rc1
19:24:58 <fungi> i guess the gerrit outage is brief either way
19:25:11 <jeblair> or even tonight.
19:25:31 <mordred> jeblair: tonight we're busy
19:25:33 <jeblair> i think after would be better
19:25:40 <jeblair> mordred: well, after that.  :)
19:25:44 <mordred> heh
19:26:00 <jeblair> but anyway, maybe friday night...
19:26:14 <jeblair> mordred and i are at pycon this weekend, so sat/sun mornings are probably bad
19:27:17 <fungi> i'm good with whenever's convenient for you and doesn't get in ttx's way for rc1 stuff
19:27:40 <jeblair> let's do 9pdt friday unless ttx objects
19:28:07 <fungi> wfm
19:28:22 <jeblair> #action jeblair send gerrit outage announce for 9pdt friday for oslo-config rename
19:28:26 <jeblair> any other renames we need to do?
19:28:45 <fungi> none which have come to my attention, other than your test
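
(For reference, a rough sketch of the kind of steps the oslo-config → oslo.config rename involves on the Gerrit side, assuming a Gerrit 2.x install with a MySQL backend; the paths, database and service names below are assumptions to be verified on review-dev first, as discussed above:)

    # Hedged sketch only -- verify on review-dev before doing it for real.
    sudo service gerrit stop                      # the brief outage window
    cd /home/gerrit2/review_site/git              # Gerrit's git base path (assumed)
    sudo mv openstack/oslo-config.git openstack/oslo.config.git
    mysql reviewdb -e "UPDATE changes
        SET dest_project_name='openstack/oslo.config'
        WHERE dest_project_name='openstack/oslo-config';"
    sudo service gerrit start
    # contributors then repoint their remotes, e.g.:
    #   git remote set-url gerrit ssh://USER@review.openstack.org:29418/openstack/oslo.config
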
19:28:52 <jeblair> #topic gerrit/lp groups
19:29:12 <jeblair> mordred, fungi: thoughts on whether we should sync core/drivers -> lp, or rename the groups?
19:29:47 <fungi> renaming is where i'm leaning. they seem like distinct responsibilities which may or may not share the same individuals
19:30:17 <fungi> and they also vary in use between server/client/infra/stackforge/et cetera
19:30:52 <jeblair> that's my preference because there's less to break.  if the core group in lp is important, then that makes doing a sync script pretty compelling...
19:31:06 <mordred> jeblair: renaming ++
19:31:23 <fungi> i'm still not sure i entirely follow why core is important to sync to lp. when i want to keep track of bugs for a project i go to lp and subscribe
19:31:37 <jeblair> i think it's for security bugs
19:31:52 <fungi> ahh, that's something i'm as of yet still blind to
19:32:22 <fungi> and does introduce a bit of a wrinkle, i can see
19:32:29 <jeblair> but it seems like ttx is willing to work around that, and we're more leaning toward renaming, so let's do that.
19:33:21 <jeblair> we'll rename -drivers in gerrit to -milestone, and create -ptl groups and give them tag access
19:33:53 <jeblair> #topic gearman
19:34:13 <jeblair> zaro, mordred: what's the latest?
19:34:16 * mordred landing - will drop from meeting in a bit
19:34:23 <mordred> jeblair: krow is hacking on cancel today
19:34:28 <jeblair> sweet
19:34:43 <zaro> jeblair: i'm working on getting label changes to update gearman functions.
19:34:54 <zaro> jeblair: having a difficult time due to jenkins bugs.
19:35:03 <jeblair> ugh
19:35:33 <jeblair> sounds like things are moving; let me know if you need anything from me
19:35:51 <jeblair> #topic reviewday
19:35:58 <zaro> ok.  might need your help if don't get it done today.
19:35:59 <pleia2> ok, I think I finally got all the variables sorted out, needs some code reviews
19:36:06 <pleia2> https://review.openstack.org/#/c/21158/
19:36:12 <pleia2> also need a gerrit user; not sure how automatic the setup is on the server. only have the .ssh/config file in puppet right now, but "git review -s" may need to be run
19:36:51 <pleia2> so 1. create gerrit user 2. test that it works with the files we have I guess
19:37:16 <jeblair> you mean 'reviewday' user?
19:37:22 <pleia2> yeah
19:37:30 <jeblair> on the host or in gerrit?
19:37:35 <pleia2> which goes in to pull reviewday stats
19:37:36 <fungi> a reviewday account in gerrit
19:37:36 <pleia2> gerrit
19:37:41 <pleia2> host is handled in puppet
19:37:55 <jeblair> ah, ok.  gotcha, the 'git review -s' bit confused me
19:38:08 <jeblair> yeah, adding a role account is a manual process for an admin, i think.
19:38:16 <pleia2> well, the user on the host connects as the gerrit user; it has a .ssh/config file
19:38:18 <jeblair> fungi: you done that yet?  you want to take this one?
19:38:21 <fungi> pleia2: i can help take care of the gerrit account
19:38:26 <pleia2> thanks fungi
19:38:28 <fungi> jeblair: you beat me to it
19:38:30 <fungi> yes, and yes too
19:38:42 <jeblair> #action fungi create reviewday user in gerrit
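
(A minimal sketch of creating such a role account with Gerrit's create-account ssh command, run as a Gerrit administrator; the account name, email address, and key path are assumptions:)

    # Names, email, and key path here are assumptions.
    ssh -p 29418 review.openstack.org gerrit create-account \
        --full-name "Reviewday" \
        --email reviewday@openstack.org \
        --ssh-key "$(cat /path/to/reviewday_rsa.pub)" \
        reviewday
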
19:39:23 <jeblair> pleia2: i think the only thing you'd need is to accept the authorized keys
19:39:24 <fungi> pleia2: i'll get up with you on that after i review your change
19:39:35 <pleia2> jeblair: yeah, that's my hope
19:40:03 <jeblair> pleia2: you could either do that by having the reviewday program accept them, or add known_hosts to puppet
19:40:39 <pleia2> I'd prefer known_hosts (adding autoaccept to reviewday makes me feel not so good)
19:41:16 <jeblair> that's fine.  i wouldn't add 'always accept' but rather, 'accept on first connection'.
19:41:20 <pleia2> so I can add that patch in to puppet for the gerrit server
19:41:23 <pleia2> ah, fair enough
19:41:44 <jeblair> your call
19:41:52 <pleia2> I'll do known_hosts
19:42:03 <jeblair> ok
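
(A small sketch of the known_hosts approach: pre-seed the Gerrit host key for the reviewday user, e.g. from puppet, so the first ssh/git-review connection doesn't prompt; the username and home path are assumptions:)

    # Seed the Gerrit host key into reviewday's known_hosts once.
    ssh-keyscan -p 29418 review.openstack.org >> /home/reviewday/.ssh/known_hosts
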
19:42:11 <jeblair> #topic pypi mirror / requirements
19:42:14 <mordred> sigh. websockify has made a broken release. nova needs to be version pinned
19:42:27 <jeblair> mordred: perfect timing
19:42:29 <mordred> yah
19:42:40 <fungi> "reasons we need better mirroring"
19:43:02 <fungi> or better requirements handling i guess in this case
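
(As an illustration of the pin being discussed: nova's requirements list would constrain websockify to a known-good range until the broken release is sorted out; the exact bounds below are hypothetical, not the actual fix:)

    # Hypothetical pin in nova's requirements (tools/pip-requires at the time):
    #   websockify>=0.3,<0.4
    # which pip then resolves to the last working release, e.g.:
    pip install 'websockify>=0.3,<0.4'
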
19:43:07 <jeblair> so mordred and i just hashed out a possible way to deal with the situation of having transitive requirements through 3 levels of unreleased software.
19:43:46 <fungi> does it involve bodily violence at pycon?
19:43:50 <jeblair> it's going to take some time, including things like subclassing or extending pip.
19:44:01 <jeblair> so it might be next week before i have something.
19:44:26 <jeblair> #topic baremetal testing
19:44:35 <pleia2> so, I got familiar with how baremetal testing w/ incubator works: https://github.com/tripleo/incubator/blob/master/notes.md
19:44:44 <pleia2> then last week spoke with devananda about loose plan and updated the ticket: https://bugs.launchpad.net/openstack-ci/+bug/1082795
19:44:47 <uvirtbot`> Launchpad bug 1082795 in openstack-ci "Add baremetal testing" [High,Triaged]
19:44:56 <pleia2> echohead is working on removing devstack requirement, instead editing diskimage-builder to create a more appropriate "bootstrap" VM
19:45:14 <pleia2> (not really sure how it differs from devstack exactly, but might have less going on, horizon probably isn't needed...)
19:45:33 <pleia2> anyway, the thought is to use devstack-gate to run a sort of bootstrap-gate instead which will then spin up the baremetal instances and do all tests (essentially, "do incubator notes")
19:45:48 <pleia2> what I need are some pointers to getting a handle on this overall, I've read some about devstack-gate, went back to read about the whole process to find out when exactly it's run (also read up on glance, since we'll be doing some demo baremetal image-stashing during this process)
19:46:14 <pleia2> but when it comes to starting to add all of this to ci, I'm still lacking a bit of the glue in my understanding that will make this all possible practically
19:46:21 <pleia2> so while echohead gets bootstrap in diskimage-builder ready, I need some background (maybe some docs to read today, then check in with someone tomorrow morning to outline some practical first steps?)
19:46:26 * ttx waves
19:46:45 <pleia2> EOF :)
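
(For background on the diskimage-builder piece: it builds images by composing elements; a rough sketch of what producing the bootstrap image might look like, where the element names and paths are placeholders rather than the actual tripleo ones:)

    # Illustrative only: compose diskimage-builder elements into a bootstrap
    # image that the devstack-gate-created node could boot as a VM.
    export ELEMENTS_PATH=~/diskimage-builder/elements   # assumes a local checkout
    ~/diskimage-builder/bin/disk-image-create -o bootstrap vm ubuntu
    # produces bootstrap.qcow2
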
19:46:51 <jeblair> ttx: mind if we take a 5 min gerrit outage on friday to rename oslo-config to oslo.config?
19:46:59 <jeblair> ttx: friday evening us time
19:47:03 <ttx> jeblair, fungi: there is no specific date for RC1
19:47:23 <ttx> so there is no good or bad time for an outage
19:47:25 <jeblair> ttx: oh, ok.  then do you have a feeling for when would be best to take that outage?
19:47:48 <ttx> jeblair: whatever is best for you.
19:47:56 <jeblair> okay, well, fri night still seems good, so let's stick with that then.
19:48:10 <jeblair> pleia2: have you read the devstack-gate readme?
19:48:15 <pleia2> jeblair: yep
19:49:05 <pleia2> doesn't really seem built at the moment to s/devstack/some other prebuilt openstack instance/
19:49:31 <ttx> jeblair: for LP/Gerrit groups: +1 to renaming. I can work around the core issue. Subscribing all of them to security bugs was just a cool shortcut
19:49:59 <devananda> pleia2: a while back, i worked through this, and found it very helpful: https://github.com/openstack-infra/devstack-gate/blob/master/README.md#developer-setup
19:50:20 <pleia2> devananda: ok great, I got that started on my desktop yesterday but haven't brought it up yet
19:50:45 <jeblair> pleia2: so there are two parts of devstack-gate that deal with devstack, the image-creation, and then running devstack itself.  the rest is general node management.
19:51:09 <jeblair> pleia2: we can change whichever bits of those we need to to make plugging something else in possible.
19:51:12 <pleia2> image creation is done with diskimage-builder, correct?
19:51:21 <pleia2> at least, partially
19:51:39 <jeblair> pleia2: devstack-gate does not do that now, it spins up a new vm and images it using the cloud provider's api.
19:51:48 <pleia2> ah, ok
19:52:01 <jeblair> pleia2: are you going to pycon?
19:52:15 <devananda> pleia2: we'll still build the bootstrap, deploy, and demo images using dib (or download those from /LATEST/, once dib is gated)
19:52:15 <pleia2> jeblair: no, but I can wander down some evening next week if it's helpful
19:52:57 <ttx> jeblair: in corner cases we might have to rename -core teams in LP instead of deleting them -- I remember LP is a bit picky when there is a ML attached to a group.
19:53:09 <pleia2> devananda: yeah, I think I was wondering devstack-gate-wise whether we'd want some standby bootstraps like we currently have standby devstacks all ready for testing (as I understand it), and how those would be created
19:53:46 <devananda> pleia2: d-g should probably use its existing methods for creating those
19:53:46 * ttx preventively purges the nova-core ML
19:54:26 <devananda> pleia2: but, no, standby-bootstrap doesn't really make sense to me. bootstrap will be a VM running inside the devstack-gate-created instance
19:54:36 <pleia2> devananda: ok
19:54:47 <pleia2> right, that makes sense
19:55:19 <ttx> jeblair: weird, most -core teams on LP activated their ML.
19:56:04 * ttx purges
19:56:06 <pleia2> ok, I'll play around with the devstack-gate developer-setup to get a better understanding of this and go from there
19:56:17 <jeblair> ttx: good
19:56:40 <jeblair> pleia2: cool, i'm happy to help out, and we should see if we can get together with mordred this week.
19:56:59 <pleia2> sounds good
19:57:00 <ttx> jeblair: I'll ping PTLs before purging the cinder and quantum ones.
19:57:12 <jeblair> ttx: I have made the core groups not be admins of openstack-cla.
19:57:22 <jeblair> ttx: people were getting confused and still approving membership.
19:57:51 <jeblair> ttx: i expect we'll want to delete that group eventually, but i'm hesitant to do so right now.
19:58:02 <ttx> jeblair: there is a corner case if some people are in ~openstack only by way of a core team
19:58:13 <ttx> they could lose the general ML list subscription
19:58:33 <ttx> but we can manually check that before deleting the group
19:58:39 <jeblair> #topic open discussion
19:58:58 <jeblair> ttx: yeah, that would be a nice thing to do.
19:59:28 <jeblair> thanks everyone!
19:59:35 <jeblair> #endmeeting