19:06:08 #startmeeting infra
19:06:09 Meeting started Tue Mar 12 19:06:08 2013 UTC. The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:06:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:06:12 The meeting name has been set to 'infra'
19:06:30 #link http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-03-05-19.02.html
19:06:42 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting
19:07:26 mordred: i think the decision about hpcs az3 was to make a new account, yeah?
19:07:36 jeblair: yes
19:07:58 #action clarkb set up new hpcs account with az1-3 access
19:08:22 fungi: you did #2 and #3, yeah? wanna talk about those?
19:08:28 sure
19:08:42 dtroyer: ping
19:08:57 jeblair yo
19:09:11 dtroyer: awesome, can you hang for a sec, i have a question for you when fungi's done
19:09:16 okay, so #2 was really just follow up to the cla maintenance
19:09:21 np
19:09:25 dredging up ml archive link now
19:10:08 basically started with
19:10:12 #link http://lists.openstack.org/pipermail/openstack-dev/2013-March/006344.html
19:10:28 evolved later into
19:10:31 #link https://wiki.openstack.org/wiki/CLA-FAQ
19:11:00 pretty cut and dried. as far as improvements to the foundation's e-mail matching, i haven't heard back from todd
19:11:06 i'll check in with him today
19:11:30 quantal slaves is the other thing there...
19:11:45 we're running all our infra jobs which used to run on precise slaves there now
19:12:19 only issue encountered so far (to my knowledge) was a race condition in zuul's tests which we exposed in this change, and which jeblair patched
19:12:55 do we have a feel for when we should move forward proposing patches to other projects?
19:13:15 since we're so late in the cycle, we should do it very cautiously.
19:13:19 i have the job candidates in a list here...
19:13:22 #link https://etherpad.openstack.org/quantal-job-candidates
19:14:15 well, technically jobs, templates and projects
19:15:04 we're already in phase 1. phase 2 there could certainly be split out into phases 2-10 or whatever
19:15:20 yeah, i'd try to batch those a bit
19:15:26 maybe do all the non-openstack projects
19:15:47 sure, i'll make stackforge et al phase 2
19:15:55 then maybe the openstack projects.
19:15:59 you tested nova, right?
19:16:08 repeatedly, yes
19:16:21 and spot tested a lot of the other server and client projects too
19:16:33 ok, so it's probably okay to batch the openstack ones together too
19:16:40 just be ready to quickly revert.
19:16:46 when is rc1?
19:17:07 yeah i ran all of the main openstack components' py27 jobs on their master branches to test
19:17:51 the first rc is thursday
19:18:12 i'm starting to think we should postpone until the summit
19:18:21 or after the release
19:18:24 so 48 hours... yeah, i agree for the official openstack bits
19:18:36 i'll press forward with the rest though for now
19:18:39 sounds good
19:18:57 we also have rhel6 nearing a similar state which can occupy my time
19:19:16 presumably follow the same patter there for oneiric-rhel6
19:19:19 er, pattern
19:19:33 yeah, i also think that needs to wait until after the release for the official projects
19:19:42 completely agree
19:20:00 should be dtroyer's turn now, unless there are other related questions
19:20:00 #action fungi migrate stackforge projects to quantal, possibly rhel6
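For context on the quantal migration above: the per-project patches under discussion would presumably just repoint each project's test jobs at the new slave label in the Jenkins Job Builder definitions. A hypothetical before/after sketch (the job name and exact layout are illustrative, not the actual openstack-infra/config contents):

    - job:
        name: gate-nova-python27
        node: quantal        # was: precise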
19:20:14 dtroyer: so i just noticed you +1'd https://review.openstack.org/#/c/23209/
19:20:21 which is what i was going to ask about, so cool.
19:20:38 dtroyer: is there any time in particular you would like me to merge that?
19:20:52 dtroyer: (so you can know when to update git remotes, etc)
19:21:39 anytime is good…Grenade is running through right now but all of the tests still aren't passing so there is presumably some upgrade work missing.
19:22:14 dtroyer: okay, i'll merge it this afternoon and let you know.
19:22:20 great, thanks
19:22:26 thank you!
19:22:43 #topic oslo-config rename
19:23:17 i'm going to test renaming a project with a hyphen to a dot on review-dev, just to make sure there's no gotchas there
19:23:28 assuming that works, when should we schedule the gerrit downtime for the rename?
19:23:48 friday night?
19:24:14 i'm cool with helping on friday night
19:24:35 is the day after rc1 release likely to be a bad time? or a good time?
19:24:57 we could do wed night, which is the night before rc1
19:24:58 i guess the gerrit outage is brief either way
19:25:11 or even tonight.
19:25:31 jeblair: tonight we're busy
19:25:33 i think after would be better
19:25:40 mordred: well, after that. :)
19:25:44 heh
19:26:00 but anyway, maybe friday night...
19:26:14 mordred and i are at pycon this weekend, so sat/sun mornings are probably bad
19:27:17 i'm good with whenever's convenient for you and doesn't get in ttx's way for rc1 stuff
19:27:40 let's do 9pdt friday unless ttx objects
19:28:07 wfm
19:28:22 #action jeblair send gerrit outage announce for 9pdt friday for oslo-config rename
19:28:26 any other renames we need to do?
19:28:45 none which have come to my attention, other than your test
19:28:52 #topic gerrit/lp groups
19:29:12 mordred, fungi: thoughts on whether we should sync core/drivers -> lp, or rename the groups?
19:29:47 renaming is where i'm leaning. they seem like distinct responsibilities which may or may not share the same individuals
19:30:17 and they also vary in use between server/client/infra/stackforge/et cetera
19:30:52 that's my preference because there's less to break. if the core group in lp is important, then that makes doing a sync script pretty compelling...
19:31:06 jeblair: renaming ++
19:31:23 i'm still not sure i entirely follow why core is important to sync to lp. when i want to keep track of bugs for a project i go to lp and subscribe
19:31:37 i think it's for security bugs
19:31:52 ahh, that's something i'm as of yet still blind to
19:32:22 and does introduce a bit of a wrinkle, i can see
19:32:29 but it seems like ttx is willing to work around that, and we're more leaning toward renaming, so let's do that.
19:33:21 we'll rename -drivers in gerrit to -milestone, and create -ptl groups and give them tag access
19:33:53 #topic gearman
19:34:13 zaro, mordred: what's the latest?
19:34:16 * mordred landing - will drop from meeting in a bit
19:34:23 jeblair: krow is hacking on cancel today
19:34:28 sweet
19:34:43 jeblair: i'm working on getting label changes to update gearman functions.
19:34:54 jeblair: having a difficult time due to jenkins bugs.
19:35:03 ugh
19:35:33 sounds like things are moving; let me know if you need anything from me
19:35:51 #topic reviewday
19:35:58 ok. might need your help if don't get it done today.
19:35:59 ok, I think I finally got all the variables sorted out, needs some code reviews
19:36:06 https://review.openstack.org/#/c/21158/
19:36:12 also need gerrit user, not sure how automatic set up is on the server, only have .ssh/config file right now in puppet but "git review -s" may nede to be ru
19:36:17 run
19:36:51 so 1. create gerrit user 2. test that it works with the files we have I guess
19:37:16 you mean 'reviewday' user?
19:37:22 yeah
19:37:30 on the host or in gerrit?
19:37:35 which goes in to pull reviewday stats
19:37:36 a reviewday account in gerrit
19:37:36 gerrit
19:37:41 host is handled in puppet
19:37:55 ah, ok. gotcha, the 'git review -s' bit confused me
19:38:08 yeah, adding a role account is a manual process for an admin, i think.
19:38:16 well, the user on the host uses the gerrit user, it has a .ssh/config file
19:38:18 fungi: you done that yet? you want to take this one?
19:38:21 pleia2: i can help take care of the gerrit account
19:38:26 thanks fungi
19:38:28 jeblair: you beat me to it
19:38:30 eys
19:38:33 yes too
19:38:42 #action fungi create reviewday user in gerrit
19:39:23 pleia2: i think the only thing you'd need is to accept the authorized keys
19:39:24 pleia2: i'll get up with you on that after i review your change
19:39:35 jeblair: yeah, that's my hope
19:40:03 pleia2: you could either do that by having the reviewday program accept them, or add known_hosts to puppet
19:40:39 I'd prefer known_hosts (adding autoaccept to reviewday makes me feel not so good)
19:41:16 that's fine. i wouldn't add 'always accept' but rather, 'accept on first connection'.
19:41:20 so I can add that patch in to puppet for the gerrit server
19:41:23 ah, fair enough
19:41:44 your call
19:41:52 I'll do known_hosts
19:42:03 ok
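For reference, the known_hosts approach settled on above amounts to pre-seeding the reviewday user's ~/.ssh/known_hosts (managed by puppet) with the Gerrit server's host key, so no interactive acceptance is needed on first connection. Since Gerrit's SSH API listens on a non-standard port, the entry would look roughly like the following; the key shown is a placeholder, not the real value:

    [review.openstack.org]:29418 ssh-rsa AAAAB3NzaC1yc2E...placeholder-host-key...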
19:42:11 #topic pypi mirror / requirements
19:42:14 sigh. websockify has made a broken release. nova needs to be version pinned
19:42:27 mordred: perfect timing
19:42:29 yah
19:42:40 "reasons we need better mirroring"
19:43:02 or better requirements handling i guess in this case
19:43:07 so mordred and i just hashed out a possible way to deal with the situation of having transitive requirements through 3 levels of unreleased software.
19:43:46 does it involve bodily violence at pycon?
19:43:50 it's going to take some time, including things like subclassing or extending pip.
19:44:01 so it might be next week before i have something.
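For context, the pin being called for is a one-line version cap in nova's requirements list (tools/pip-requires at the time) excluding the broken websockify release. The exact bound depends on which release broke; a hypothetical example:

    websockify<0.4    # cap below the broken release (illustrative bound only)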
19:44:26 #topic baremetal testing
19:44:35 so, I got familiar with how baremetal testing w/ incubator works: https://github.com/tripleo/incubator/blob/master/notes.md
19:44:44 then last week spoke with devananda about loose plan and updated the ticket: https://bugs.launchpad.net/openstack-ci/+bug/1082795
19:44:47 Launchpad bug 1082795 in openstack-ci "Add baremetal testing" [High,Triaged]
19:44:56 echohead is working on removing devstack requirement, instead editing diskimage-builder to create a more appropriate "bootstrap" VM
19:45:14 (not really sure how it differs from devstack exactly, but might have less going on, horizon probably isn't needed...)
19:45:33 anyway, the thought is to use devstack-gate to run a sort of bootstrap-gate instead which will then spin up the baremetal instances and do all tests (essentially, "do incubator notes")
19:45:48 what I need are some pointers to getting a handle on this overall, I've read some about devstack-gate, went back to read about the whole process to find out when exactly it's run (also read up on glance, since we'll be doing some demo baremetal image-stashing during this process)
19:46:14 but when it comes to starting to add all of this to ci, I'm still lacking a bit of the glue in my understanding that will make this all possible practically
19:46:21 so while echohead gets bootstrap in diskimage-builder ready, I need some background (maybe some docs to read today, then check in with someone tomorrow morinng to outline some practical first steps?)
19:46:26 * ttx waves
19:46:45 EOF :)
19:46:51 ttx: mind if we take a 5 min gerrit outage on friday to rename oslo-config to oslo.config?
19:46:59 ttx: friday evening us time
19:47:03 jeblair, fungi: there is no specific date for RC1
19:47:23 so there is no good or bad time for an outage
19:47:25 ttx: oh, ok. then do you have a feeling for when would be best to take that outage?
19:47:48 jeblair: wahtever is best for you.
19:47:56 okay, well, fri night still seems good, so let's stick with that then.
19:48:10 pleia2: have you read the devstack-gate readme?
19:48:15 jeblair: yep
19:49:05 doesn't really seem built at the moment to s/devstack/some other rebuilt openstack instance
19:49:27 s/rebuilt/prebuilt
19:49:31 jeblair: for LP/Gerrit groups: +1 to renaming. I can work around the core issue. Subscribing all of them to security bugs was just a cool shortcut
19:49:59 pleia2: a while back, i worked through this, and found it very helpful: https://github.com/openstack-infra/devstack-gate/blob/master/README.md#developer-setup
19:50:20 devananda: ok great, I got that started on my desktop yesterday but haven't brought it up yet
19:50:45 pleia2: so there are two parts of devstack-gate that deal with devstack, the image-creation, and then running devstack itself. the rest is general node management.
19:51:09 pleia2: we can change whichever bits of those we need to to make plugging something else in possible.
19:51:12 image creation us done with diskimage-builder, correct?
19:51:21 at least, partially
19:51:39 pleia2: devstack-gate does not do that now, it spins up a new vm and images it using the cloud provider's api.
19:51:48 ah, ok
19:52:01 pleia2: are you going to pycon?
19:52:15 pleia2: we'll still build the bootstrap, deploy, and demo images using dib (or download those from /LATEST/, once dib is gated)
19:52:15 jeblair: no, but I can wander down some evening next week if it's helpful
19:52:57 jeblair: in corner cases we might have to rename -core teams in LP instead of deleting -- I remember LP is a bit picky when there is a ML attached to group.
19:53:09 devananda: yeah, I think I was wondering devstack-gate-wise whether we'd want some standby bootstraps like we currently have standby devstacks all ready for testing (as I understand it), and how those would be created
19:53:46 pleia2: d-g should probably use its existing methods for creating those
19:53:47 * ttx preventively purges the nova-core Ml
19:54:26 pleia2: but, no, standby-bootstrap doesn't really make sense to me. bootstrap will be a VM running inside the devstack-gate-created instance
19:54:36 devananda: ok
19:54:47 right, that makes sene
19:54:49 sense
19:55:19 jeblair: weird, most -core teams on LP activated their ML.
19:56:04 * ttx purges
19:56:06 ok, I'll play around with the devstack-gate developer-setup to get a better understanding of this and go from there
19:56:17 ttx: good
19:56:40 pleia2: cool, i'm happy to help out, and we should see if we can get together with mordred this week.
19:56:59 sounds good
19:57:00 jeblair: I'll ping PTLs before purging the cinder and quantum ones.
19:57:12 ttx: I have made the core groups not be admins of openstack-cla.
19:57:22 ttx: people were getting confused and still approving membership.
19:57:51 ttx: i expect we'll want to delete that group eventually, but i'm hesitant to do so right now.
19:58:02 jeblair: there is a corner case if some people are in ~openstack only by way of a core team
19:58:13 they could lose the general ML list subscription
19:58:33 but we can manually check that before deleting the group
19:58:39 #topic open discussion
19:58:58 ttx: yeah, that would be a nice thing to do.
19:59:28 thanks everyone!
19:59:35 #endmeeting