Tuesday, 2013-04-02

*** sacharya has joined #openstack-meeting-alt00:00
*** bdpayne has quit IRC00:21
*** yidclare has quit IRC01:08
*** sarob has joined #openstack-meeting-alt01:44
*** esp1 has joined #openstack-meeting-alt02:55
*** esp1 has left #openstack-meeting-alt03:04
*** bdpayne has joined #openstack-meeting-alt03:24
*** bdpayne has quit IRC03:26
*** esp1 has joined #openstack-meeting-alt03:28
*** sarob has quit IRC04:13
*** SergeyLukjanov has joined #openstack-meeting-alt04:15
*** sacharya has quit IRC04:57
*** amyt has joined #openstack-meeting-alt05:05
*** esp1 has quit IRC05:12
*** amyt has quit IRC05:12
*** amyt has joined #openstack-meeting-alt05:13
*** amyt has quit IRC05:22
*** rmohan has quit IRC05:35
*** rmohan has joined #openstack-meeting-alt05:36
*** SergeyLukjanov has quit IRC05:40
*** nimi has left #openstack-meeting-alt07:34
*** nimi has quit IRC07:34
*** openstack has joined #openstack-meeting-alt07:41
*** ChanServ sets mode: +o openstack07:41
*** SergeyLukjanov has joined #openstack-meeting-alt07:47
*** SergeyLukjanov has quit IRC07:57
*** SergeyLukjanov has joined #openstack-meeting-alt08:56
*** SergeyLukjanov has quit IRC10:01
*** SergeyLukjanov has joined #openstack-meeting-alt10:03
*** dhellmann has quit IRC11:50
*** rnirmal has joined #openstack-meeting-alt12:17
*** SergeyLukjanov has quit IRC12:44
*** SergeyLukjanov has joined #openstack-meeting-alt13:07
*** dhellmann has joined #openstack-meeting-alt13:33
*** SergeyLu_ has joined #openstack-meeting-alt13:40
*** SergeyLukjanov has quit IRC13:41
*** SergeyLu_ is now known as SergeyLukjanov13:41
*** SergeyLukjanov has quit IRC13:42
*** sacharya has joined #openstack-meeting-alt13:46
*** jcru has joined #openstack-meeting-alt13:49
*** SergeyLukjanov has joined #openstack-meeting-alt13:54
*** SergeyLukjanov has quit IRC13:56
*** djohnstone has joined #openstack-meeting-alt14:15
*** cloudchimp has joined #openstack-meeting-alt14:21
*** SergeyLukjanov has joined #openstack-meeting-alt14:31
*** rnirmal_ has joined #openstack-meeting-alt14:43
*** rnirmal_ has joined #openstack-meeting-alt14:44
*** jcru is now known as jcru|away14:46
*** cp16net is now known as cp16net|away14:46
*** rnirmal has quit IRC14:47
*** rnirmal_ is now known as rnirmal14:47
*** sdake_ has quit IRC14:49
*** sacharya has quit IRC14:55
*** jcru|away is now known as jcru14:59
*** cloudchimp has quit IRC15:04
*** amyt has joined #openstack-meeting-alt15:10
*** dhellmann has quit IRC15:16
*** SergeyLukjanov has quit IRC15:56
*** SergeyLukjanov has joined #openstack-meeting-alt15:58
*** sacharya has joined #openstack-meeting-alt16:05
*** amyt has quit IRC16:20
*** amyt has joined #openstack-meeting-alt16:20
*** vipul is now known as vipul|away16:24
*** vipul|away is now known as vipul16:24
*** rnirmal has quit IRC16:29
*** bdpayne has joined #openstack-meeting-alt16:33
*** dhellmann has joined #openstack-meeting-alt16:34
*** SergeyLukjanov has quit IRC16:37
*** rmohan has quit IRC16:42
*** rmohan has joined #openstack-meeting-alt16:42
*** esp1 has joined #openstack-meeting-alt16:42
*** esp1 has left #openstack-meeting-alt16:47
*** rmohan has quit IRC16:48
*** rmohan has joined #openstack-meeting-alt16:50
*** yidclare has joined #openstack-meeting-alt17:02
*** rnirmal has joined #openstack-meeting-alt17:03
*** rnirmal has quit IRC17:04
*** rnirmal has joined #openstack-meeting-alt17:08
*** sacharya has quit IRC17:08
*** rmohan has quit IRC17:15
*** rmohan has joined #openstack-meeting-alt17:18
*** sdake_ has joined #openstack-meeting-alt17:56
*** SergeyLukjanov has joined #openstack-meeting-alt18:01
*** yidclare has quit IRC18:06
*** cp16net|away is now known as cp16net18:07
*** yidclare has joined #openstack-meeting-alt18:11
*** SlickNik has joined #openstack-meeting-alt18:22
*** SlickNik has left #openstack-meeting-alt18:23
*** sacharya has joined #openstack-meeting-alt18:32
*** sarob has joined #openstack-meeting-alt18:34
*** vipul is now known as vipul|away18:49
*** vipul|away is now known as vipul18:49
*** rmohan has quit IRC18:51
*** rmohan has joined #openstack-meeting-alt18:52
*** vipul is now known as vipul|away18:53
*** vipul|away is now known as vipul18:53
*** yidclare has quit IRC18:59
*** jcru has quit IRC19:00
*** yidclare has joined #openstack-meeting-alt19:02
*** vipul is now known as vipul|away19:08
*** jcru has joined #openstack-meeting-alt19:09
*** sarob has quit IRC19:17
*** SergeyLukjanov has quit IRC19:22
*** SergeyLukjanov has joined #openstack-meeting-alt19:25
*** heckj has joined #openstack-meeting-alt19:26
*** sacharya has quit IRC19:27
*** cp16net is now known as cp16net|away19:30
*** cp16net|away is now known as cp16net19:31
*** vipul|away is now known as vipul19:56
*** cp16net is now known as cp16net|away20:14
*** cp16net|away is now known as cp16net20:15
*** rnirmal has quit IRC20:19
*** yidclare has quit IRC20:25
*** yidclare has joined #openstack-meeting-alt20:27
*** SergeyLukjanov has quit IRC20:34
*** sdake_ has quit IRC20:47
*** hub_cap has joined #openstack-meeting-alt20:52
*** sdake_ has joined #openstack-meeting-alt20:52
*** jcru has quit IRC20:54
*** esp1 has joined #openstack-meeting-alt20:54
*** robertmyers has joined #openstack-meeting-alt20:54
*** jcru has joined #openstack-meeting-alt20:54
*** esp1 has joined #openstack-meeting-alt20:54
*** datsun180b has joined #openstack-meeting-alt20:58
hub_cap#startmeeting reddwarf20:59
openstackMeeting started Tue Apr  2 20:59:35 2013 UTC.  The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.20:59
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:59
*** openstack changes topic to " (Meeting topic: reddwarf)"20:59
openstackThe meeting name has been set to 'reddwarf'20:59
datsun180bhello20:59
robertmyershello20:59
hub_capas usual, >>> time.sleep(120)20:59
djohnstonehi21:00
*** SlickNik has joined #openstack-meeting-alt21:00
vipulhola21:00
*** saurabhs has joined #openstack-meeting-alt21:00
SlickNikhey there21:00
annashenhi21:00
esp1hello21:01
cp16netpresent21:01
hub_cap#link https://wiki.openstack.org/wiki/Meetings/RedDwarfMeeting21:01
hub_cap#link http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-03-26-20.59.html21:01
juicegreetings21:01
imsplitbitgreets21:01
imsplitbitwe ready to get this party started?21:02
hub_capyup21:02
hub_cap#topic Action items21:02
*** openstack changes topic to "Action items (Meeting topic: reddwarf)"21:02
* juice is doing a shot21:02
hub_capnice21:02
hub_capsomeone snag grapex21:02
vipulwhere's the alcohol?21:02
SlickNikLet's do it.21:02
datsun180bhe's trying to resolve his connection21:02
hub_capand smack him w/ a trout21:02
hub_capok datsun180b, still smack him w a trout, hes on a mac21:03
hub_capso ill skip him for now21:03
hub_capmy action item is next, and i actually added it to the agenda today21:03
hub_capso ill skip it till then (action / action items)21:03
hub_capvipul: yer next, patch for backUps to database-api21:03
vipulyea, haven't gotten around to it21:04
vipulpromise to do it this week!21:04
vipul#action Vipul to publish backup API to database-api21:04
hub_capcool, can u re-action it21:04
hub_capso backupRef vs backupUUID21:04
hub_capi believe we decided backupRef was fine right, but it could just be a uuid no biggie?21:04
hub_capoh nm21:05
hub_capi emailed jorge21:05
hub_caphe never emailed me back21:05
hub_caphe hates me21:05
SlickNik#action SlickNik to finish publishing security groups API to database-api21:05
hub_capill send him another email21:05
hub_cap<3 SlickNik21:05
SlickNikI remember I started on that one, but I still have a couple of changes to it.21:05
hub_cap#action get ahold of jOrGeW to make sure about backupRef vs backupUUID21:05
vipulSlickNik yea i think that was abandoned21:05
*** sdake_ has quit IRC21:05
vipulevery openstack project seems to do this shit differently21:05
vipulin terms of uuid vs ref21:06
SlickNikI have the change, just haven't gotten around to making the couple of changes I wanted.21:06
hub_capi know vipul...21:06
hub_capthere is no standard21:06
hub_capits terrible21:06
SlickNikyeah, sucks.21:06
vipulno wonder George Reese is always putting the smack down21:06
hub_cappersonally, i think of a ref being a remote system21:06
hub_capoh god nooooooo21:06
vipulhe's on my twitter feed21:06
vipulbig time complaints :)21:06
SlickNikWho's George Reese?21:06
vipulenstratus21:07
hub_capSlickNik: search your openstack mail21:07
SlickNikAnd why are his peanut butter cups awful?21:07
*** grapex has joined #openstack-meeting-alt21:07
hub_capgrapex: !!!21:07
grapexhub_cap: That was awesome21:07
SlickNikhub_cap: will do…21:07
grapexWhat's up?21:07
hub_capok so back to ref vs uuid, in my brain21:07
hub_capa ref is remote21:07
hub_capand a uuid is local21:07
datsun180blacking context, it seems the 'uu' part of 'uuid'  disagrees with that21:08
robertmyersnice21:08
vipulheh... had to bring that up21:08
hub_capsorry my definition of local is not correct21:08
hub_caplocal being controlled by reddwarf21:08
hub_capas opposed to controlled by a 3rd party system21:08
hub_capyes uu still applies :)21:08
vipulin that case we should go with UUID then21:09
hub_capwell thats what im leaning toward21:09
hub_capbut let me just touch back w/ jorge21:09
hub_capits fairly easy to change right?21:09
vipulYea pretty minor21:09
robertmyershow about backupID?21:09
vipulit's ref now21:09
robertmyersnot UUID21:09
hub_capId might be better than uuid21:09
vipulyea that's probably better21:09
SlickNikIt's ref now, but easily changed...21:09
hub_capso BackUpId21:09
hub_cap:P21:10
SlickNikI like backupId21:10
grapexSorry gang, my IRC client had a visual glitch- are we talking about using ID or UUID for what's now known in the client as "backupRef?"21:10
vipulk, we'll wait for Jorje?21:10
vipuljorge21:10
hub_capya but lets lean heavily toward backupId21:10
SlickNikyes, grapex21:10
grapexOk21:10
vipulsteveleon: can we change that now then?21:10
vipulesmute ^21:10
hub_capgrapex: lets go back to your action item21:10
*** sdake_ has joined #openstack-meeting-alt21:10
hub_capxml lint integration in reddwarf grapex21:10
grapexSorry, still nothing so far. Had a conference last week which threw me off track.21:11
hub_capfo sure, plz re-action it21:11
grapexOk21:12
hub_capok so lets say we are done w action items21:12
esmuteyeah we can change it..21:12
esmutei will have to change the rd-client that just got merged a few hours ago too21:12
hub_cap#topic Status of CI/jenkins/int-tests21:12
*** openstack changes topic to "Status of CI/jenkins/int-tests (Meeting topic: reddwarf)"21:12
hub_capesmute: okey21:12
vipulesmute: thanks21:12
vipulSo not having a int-test gate has been killing us it seems21:13
hub_capvipul: its all yo mang21:13
vipulSlickNik and I are working on a jenkins here at HP that will listen to gerrit triggers21:13
vipuland give a +1 / -1 vote21:13
hub_capok it can be nonvoting too21:13
vipulgot it sort of working, but the plugin we use to spin up a VM needs some love21:13
hub_capif thats hard ro anything21:13
hub_capAHH21:13
datsun180bshould it be more like check/cross then?21:13
hub_capare u using the jclouds one?21:13
vipulno, home grown21:14
hub_capdatsun180b: j/y21:14
vipuljruby thingie21:14
datsun180bconsidering that builds that don't pass int-tests aren't worth shipping?21:14
hub_capoh fun vipul21:14
SlickNiknope, it's something that one of the folks here came up with.21:14
hub_capdatsun180b: correct21:14
vipulYea but i think we want voting21:14
hub_capSlickNik: do you guys adhere to the openstack api internally? if so the jclouds one is bomb21:14
vipulhub_cap: We should try the jclouds one.. honestly haven't even tried it21:14
SlickNikYeah, it needs a couple of changes to be able to pass the gerrit id's from the trigger to the new instance it spins up.21:15
hub_capits great21:15
hub_capitll spawn a server, if it fails itll spawn a new one21:15
hub_capit sets up keys to ssh21:15
hub_capit does a lot of work for u21:15
SlickNikhub_cap, do you have a link to the jclouds plugin you speak of?21:15
vipulone other thing missing is checking to see if tests passed or not..21:15
hub_caphttps://wiki.jenkins-ci.org/display/JENKINS/JClouds+Plugin21:15
hub_cap#link https://wiki.jenkins-ci.org/display/JENKINS/JClouds+Plugin21:15
vipulcurrently can run them, but no check to see if it worked properly21:15
SlickNikhub_cap: thanks!21:15
datsun180bgrep for OK (Skipped=)21:16
hub_capvipul: ahh, i think that the jclouds plugin will fix that21:16
datsun180bat minimum21:16
vipulyea, that's something we're trying to get added to our jruby plugin21:16
hub_capitll fail if the int-tests emit an error code21:16
vipulhub_cap: Jclouds does that already?21:16
datsun180beven better21:16
hub_capwell u just tell jclouds to execute X on a remote system21:16
hub_capand if X fails, it fails the job21:16
vipulhub_cap: so jenkins plugin is building a jenkins slave? or arbitrary vm21:17
vipulcuz i don't care for the jenkins slave.. just want a vm21:17
hub_capvipul: there is not much difference between them, but it can easily do arbitrary vm21:17
grapexvipul: Does StackForge CI use the jclouds plugin and make it an official Jenkins slave or does it just create a VM without the jenkins agent?21:18
datsun180bi like the idea of int-tests running on a machine that doesn't persist between builds and so doesn't rely on manual monkeying for tests to work21:18
vipulthey have a pool of servers grapex21:18
hub_capit _is_ a slave in terms of jenkins but thats convenient for making sure the node comes online etc21:18
vipulnot sure exactly how they allocate them21:18
esp1datsun180b: mee too.21:18
*** cloudchimp has joined #openstack-meeting-alt21:18
SlickNikThey have home-grown scripts to allocate them…21:18
vipuldatsun180b: yep, fresh instance each time21:18
hub_capanyhoo, i say look into it21:19
grapexSlickNik: home-grown instead of using the jenkins agent?21:19
hub_capit may or may not21:19
hub_capwork for u21:19
grapexI'm not pro or con Jenkins agent btw, just curious21:19
hub_capgrapex: the ci team? ya its all homegrown :)21:19
vipulYea... so still a WIP.. i think we need to give this a bit more time..21:19
vipulBUT we're getting close21:19
vipullast week all tests passed21:19
hub_caphell yes21:20
hub_capeven if its nonvoting and it triggers and we can just look @ it b4 approving21:20
hub_capthats a good step 121:20
hub_caplets just get it runnin21:20
SlickNikWe get the voting part from the gerrit trigger.21:20
hub_capso we can stop committing code that fails21:20
vipulyep, can't wait21:20
datsun180b+4021:20
cp16net+!21:20
hub_capim fine w/ it always voting +1 since it doesnt tell if it passes or fails yet21:20
SlickNikAnd I've set up the accounts to be able to connect to gerrit.21:21
hub_caplets just get a link put up21:21
vipulOH and need to do openID integration21:21
esp1yeah, we probably need to run the int-tests locally before checking21:21
hub_caprather than taking it to the finish line fully working21:21
vipul#action Vipul and SlickNik to update on status of VM Gate21:21
esp1I meant checking in21:21
hub_caplets get a baton pass by getting it running for each iteration asap ;)21:21
SlickNikagreed hub_cap21:21
SlickNikWe need this goodness! :)21:21
hub_capyup21:22
hub_cap#action stop eating skittles jelly beans, they are making me sick21:22
hub_capok we good on ci?21:22
SlickNikthanks for actioning, Vipul21:22
vipuli think so21:22
hub_cap#Backups Discussion21:22
hub_capstatus first21:22
vipuli think juice / robertmyers / SlickNik you're up21:23
*** cloudchimp has quit IRC21:23
robertmyersbackups are good, lets do it21:23
SlickNikSo we got a sweet merge from robertmyers with his streaming/mysqldump implementation...21:23
SlickNikto our shared work in progress repo.21:23
hub_caprobertmyers: lol21:24
robertmyerswe need a good way to run the restore21:24
*** dhellmann has quit IRC21:24
hub_capare we trying to get the backup _and_ restore in one fell swoop?21:24
hub_capor are we going to break it up to 2 features?21:24
SlickNikI am working on hooking up our innobackupex restore implementation to it. (Testing it out now, really)21:24
hub_capsince we havent nailed the api to omuch for the restore etc21:24
vipuli think we have hub_cap21:25
vipulthat's the backupRef piece21:25
robertmyerswell, we should at least have a plan21:25
SlickNikI think esmute has the API/model pieces for both ready to go.21:25
hub_capi agree we need a plan21:25
hub_capoh well then SlickNik if its that easy21:25
SlickNikI'll let esmute comment.21:25
SlickNik:)21:25
vipulhub_cap: so the backupRef vs backupId discussion is related to restore21:25
esmuteyup... just need to do some renaming from ref to id21:25
vipulwhere we create a new instance, and provide the backup ID21:25
vipulthat's the API21:25
robertmyersthere may be extra things we need like to reset the password21:25
SlickNikBut the plan was to check with him and push those pieces up for gerrit review.21:26
vipulam i missing something?21:26
juicerobertmyers: could they do that after the restore?21:26
esp1robertmyers: I was wondering about that.21:26
robertmyerswe could do it automatically after the restore21:26
vipulwhich password? the root user? os_admin?21:27
esp1should they get the original password by default?21:27
robertmyersthey may want to use all the same users/etc21:27
juicerobertmyers: how would the user get the new password21:27
robertmyersI think the root mysql user password will need to be reset21:27
esp1juice: it comes back in the POST response body for create21:27
vipulso i thought we'd slide the restore piece into the current 'prepare' workflow which i believe does that after the data files are in place?21:28
hub_caprobertmyers: yes it would21:28
esp1or you can do a separate call as robertmyers said21:28
hub_capand it shouldnt be enabled by default since new instances dont come w/ root enabled by default21:28
esp1got it21:28
hub_cap<321:28
hub_capand the osadmin user/pass might be goofy21:29
hub_capim assuming we are pullin in the user table21:29
hub_capso given that, we will have a user/pass defined for that, as well as a root pass21:30
robertmyersthat is the plan, a full db backup21:30
SlickNikYeah, we're pullin in the user table as part of restore.21:30
hub_capso we might have to start in safe mode, change the passes, and then restart in regular mode after writing osadmin to the config21:30
SlickNikWhat's the behavior if a db with root enabled is backed up?21:31
SlickNik(on restore)21:31
hub_capid say do _not_ enable root21:31
hub_capno matter what21:31
robertmyers#agreed21:31
hub_capcuz that needs to be recorded21:31
hub_capand it becomes a grey area for support21:31
SlickNikSo the restored instance is the same except with root _not_ enabled…?21:31
hub_capcorrect21:32
hub_capsince enable root says "im giving up support"21:32
vipuland a different password for os_admin21:32
SlickNikgotcha vipul.21:33
hub_capso great work on status team21:33
hub_capnow division of labor21:33
hub_capwhos doin what21:33
hub_capcuz i aint doin JACK21:33
SlickNikJust one clarification.21:33
hub_capwrt backups21:33
hub_capyes SlickNik sup21:33
SlickNikSo it's fine for them to backup from an instance on which they have given up support and restore to an instance for which they will have support.21:34
hub_caphmm thats a good point of clarification21:34
vipulthat's tricky..21:34
vipulcuz we don't know what they may have changed21:34
hub_capok so given that21:34
hub_caplets re-record that root was enabled on it21:35
hub_capand keep the same root pass21:35
juicesounds like the right thing to do21:35
hub_capreally its the same database21:35
grapeximsplitbit: for some backup strategies, doesn't the user info get skipped?21:35
hub_capso the ony pass that should change is os_admin cuz thats ours21:35
grapexMaybe I'm thinking back to an earlier conversation, but I remember this came up and the idea was the admin tables wouldn't be touched on restore.21:36
hub_capgrapex: i think that was b4 mysqldump right?21:36
vipulgrapex: you can choose which tables to include in backup21:36
grapexhub_cap: Maybe, it seems like I might be speaking crazy here.21:37
robertmyersI think we want to do all and get the users21:37
hub_capi think grapex is alluding to a conversation that was had regarding a internal product here21:37
grapexvipul: For iter 1 let's just do the full thing and remember what the root enabled setting was21:37
hub_capyup just make sure to re-record the root enabled setting w/ the new uuid, and leave the root pass the same21:37
grapexhub_cap: No, it was earlier than that, talking about import / export... n/m. For iter 1 you guys are right and we should record the setting.21:37
SlickNikgrapex / vipul / hub_cap: I like grapex's iter 1 idea for now.21:37
hub_capupdate os_admin21:37
vipulnot sure about mysqldump.. but xtrabackup supports expressions to exclude/include tables21:37
hub_capvipul: same w/ mysqldump21:37
vipulso then we could make a call now21:38
vipulif we want to start with a fresh set of users each time21:38
vipulthen we just exclude it now21:38
hub_capnaw i dont think so, it doesnt make sense to _not_ include users21:38
grapexFor imports it might-21:38
hub_capeven if so, u still have the root enabled issue21:38
grapexthat's down the road though21:38
robertmyerswell, I think that can be set by the implementation21:38
vipuland to get manageability on restore back21:38
SlickNikWell, from a users perspective, if I've done a bunch of work setting up new users, I don't want to have to redo that on restore, though...21:38
vipulyea nvm21:38
vipulyou still have other tables21:39
hub_caplets go w/ all or nothing as of now :)21:39
hub_capwe are devs, we dont know any better :D21:39
hub_caplet some mysql admins tell us we are doing it wrong later21:39
hub_cap;)21:39
* grapex blows horn to summon imsplitbit21:39
vipulwe can add a flag later to support this use case21:39
robertmyerswell, right now the command is plugable21:39
hub_cap#agreed w/ grapex iter 121:39
hub_capi say we move on21:39
robertmyersso it can be easily changed21:39
hub_captru robertmyers21:40
hub_capdivision of labor21:40
SlickNiksounds good, thanks for clarification.21:40
hub_capwhos doin what21:40
* robertmyers will work on all the fun parts21:40
hub_caplol21:40
SlickNikhaha21:40
SlickNikI'm working on the innobackupex pluggable part.21:40
robertmyersright now I'm looking at the backup model to see if we can remove logic from the views21:41
robertmyerslike a test to see if a job is running21:42
vipulanyone doing the streaming download?21:42
juicerobertmyers: what is your intention for work on the restore?21:42
*** sacharya has joined #openstack-meeting-alt21:42
juicevipul: I think the download is a lot more straightforward than the upload21:42
robertmyerswell, I was thinking that we create a registry, and look up the restore process from the backup type21:43
juicesince swift handles the reassembly of the pieces … or at least that is what I read in the documentation21:43
juicerobertmyers: do we do that or just mirror the configuration that is done for the backup runner?21:44
robertmyersjuice:  yes if you download the manifest it pulls down all21:44
hub_capid say lets think we are doing 1x backup and restore type for now21:44
SlickNikrobertmyers, do we need a registry? It makes sense to have the restore type coupled with the backup_type, right? I don't see a case where I'd backup using one type, but restore using another...21:44
hub_capcorrect, for now21:44
hub_capin the future we will possibly have that21:44
robertmyerswell, since we are storing the type, one might change the setting over time21:44
SlickNikhub_cap, that was my thinking  for now, at least.21:44
hub_capa user uploads a "backup" of their own db21:44
hub_caprobertmyers: i dont think that we need that _now_ tho21:45
vipuldont' we already have the use case of 2 types?21:45
hub_capthat could happen in the future, and we will code that when we think about it happening21:45
vipulxtrabackup and mysqldump21:45
robertmyersNo i'm talking about us changing the default21:45
grapexI agree21:45
grapexLet's just put in types now.21:46
hub_capvipul: but xtrabackup will have its own restore, and so will mysqldump right?21:46
hub_capgrapex: types are in the db i think already21:46
hub_capright?21:46
vipulright but you need to be able to look it up since it's stored in the DB entry21:46
robertmyersso we store the backup_type in the db and use that to find the restore method21:46
hub_capso what will this _restore method_ be21:46
vipulit's really a mapping.. 'xtrabackup' -> 'XtraBackupRestorer'21:46
hub_capgrab a file from swift and stream it in?21:46
robertmyersvipul: yes21:46
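The backup_type-to-restorer mapping robertmyers and vipul describe above could be sketched roughly like this (the 'xtrabackup' -> 'XtraBackupRestorer' name comes straight from the discussion; everything else, the class shapes and registry layout, is a hypothetical illustration, not the actual Reddwarf code):

```python
# Hypothetical sketch of a backup_type -> restorer registry, per the
# discussion above. Names and shapes are illustrative, not Reddwarf's code.

class MySQLDumpRestorer:
    """Restore by streaming a logical dump into a running mysqld."""
    def restore(self, stream):
        return "mysqldump-restore"


class XtraBackupRestorer:
    """Restore by unpacking an xbstream archive, then running 'prepare'."""
    def restore(self, stream):
        return "xtrabackup-restore"


# The backup_type stored with each backup record keys into this mapping,
# so old backups keep restoring correctly even if the (pluggable) default
# backup type changes later, which was robertmyers' concern.
RESTORE_REGISTRY = {
    "mysqldump": MySQLDumpRestorer,
    "xtrabackup": XtraBackupRestorer,
}


def get_restorer(backup_type):
    """Look up the restore implementation for a stored backup type."""
    try:
        return RESTORE_REGISTRY[backup_type]()
    except KeyError:
        raise ValueError("unknown backup type: %s" % backup_type)
```

Keying off the type stored with the backup record, rather than the current config at restore time, is what makes changing the default setting safe.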
hub_capthe xtrabackup file cant be streamed back in?21:47
hub_caplike a mysqldump file21:47
robertmyerswell, that part will be the same21:47
SlickNikit needs to be streamed to xbstream to decompress it.21:47
juicethis discussion is do we use configuration to more or less statically choose the restore type or do we use some component that chooses it based off of the backup type?21:47
robertmyersbut the command to run will be different21:47
SlickNikBut then it has an extra step of running the "prepare"21:47
hub_capim confused21:47
hub_capw/ xtra do u not do, 1) start reading from swift, 2) start streaming to mysql21:48
juicedownload then backup21:48
hub_caplike u do w/ mysqldump21:48
juicedownload is the same for either case21:48
hub_capmysql < dumpfile21:48
vipulhub_cap: no... you don't pipe it in21:48
juicebackup process may vary yes?21:48
hub_capthats just terrible21:48
hub_cap:)21:48
vipulyou have to 'prepare' which is an xtrabackup format -> data files21:48
vipulthen you start mysql21:48
SlickNikhub_cap: db consistency isn't guaranteed unless you run prepare for xtrabackup.21:49
hub_capok lets go w/ what robertmyers said then.... i thought they were the same21:49
juicesame download + different restore + same startup21:49
vipulis it the same startup?21:49
vipulone assumes mysql is up and running21:49
vipulother assumes it's down.. and started after restore21:49
SlickNikSeems like it's different. I think for mysqldump, mysql already needs to be running so it can process the logical dump.21:50
hub_capok so lets not worry bout the details now that we _know_ its different21:50
SlickNikvipul: yeah.21:50
hub_caplets just assume that we need to know how to restore 2 different types, and let robertmyers and co handle it accordingly21:50
vipulrobertmyers: so where do store the dump?21:51
vipulassume that there is enough space in /vda?21:51
vipulor actually you stream directly to mysql21:51
vipulwhere xtrabackup streams directly to /var/lib/mysql21:51
robertmyersgood question, we may need to check for enough space21:51
robertmyerswe can see if streaming is possible21:52
vipulmysql < swift download 'xx'?21:52
hub_capmysqldump shouldnt store the dump i think21:52
hub_capstream FTW21:52
SlickNikI think you may be able stream it to mysql directly.21:52
hub_caplets assume yes for now, and if not, solve it21:53
hub_capi _know_ we can for mysqldump21:53
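The streaming restore hub_cap is pushing for (never staging the dump on local disk, which sidesteps vipul's /vda space worry) can be sketched as a plain subprocess pipeline; the swift and mysql invocations in the trailing comment are illustrative assumptions, not the agent's actual commands:

```python
import subprocess

def pipe_restore(download_cmd, restore_cmd):
    """Stream download_cmd's stdout straight into restore_cmd's stdin,
    so the dump is never written to local disk. Commands are caller-
    supplied lists, e.g. a swift download piped into a mysql client."""
    download = subprocess.Popen(download_cmd, stdout=subprocess.PIPE)
    restore = subprocess.Popen(restore_cmd, stdin=download.stdout)
    # Drop our handle so the downloader gets SIGPIPE if the restore dies,
    # instead of blocking forever on a full pipe.
    download.stdout.close()
    restore.communicate()
    if download.wait() != 0 or restore.returncode != 0:
        raise RuntimeError("streaming restore failed")

# Hypothetical usage (container/object names and credentials made up):
# pipe_restore(["swift", "download", "backups", "dump.sql", "-o", "-"],
#              ["mysql", "-uos_admin", "-psecret"])
```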
hub_capmoving on?21:54
SlickNikoh, one other clarification.21:54
hub_capkk21:54
SlickNikI'll probably be done looking at the xtrabackup backup piece by today/tom.21:55
SlickNikand so juice and I can start looking at restore.21:55
hub_capcool21:55
hub_capso on to notifications21:55
hub_cap#topic Notifications Plan21:55
*** openstack changes topic to "Notifications Plan (Meeting topic: reddwarf)"21:55
hub_capvipul: batter up!21:55
SlickNikcool, thanks.21:55
robertmyersI updated the wiki with our notifications21:56
vipulSO thanks to robertmyers for filling out the info in the wiki21:56
vipuli wanted to see where this is on your radar21:56
SlickNikthanks #robertmyers.21:56
vipulin terms of pushing up the code21:56
vipulotherwise we can start adding it in21:56
vipulnow that we have a design for what we need to do..21:56
vipulalso wanted to see how you emit 'exists' events21:56
robertmyerswell, we have it all written... so we should make a patch21:56
grapexvipul: Describe an "exists" event.21:57
vipuldo we have a periodic task or something?21:57
vipulthat goes through and periodically checks every resource in the DB21:57
grapexvipul: We do something like that once a day.21:57
robertmyerswe have a periodic task that runs21:57
hub_capisnt that _outside_ of reddwarf?21:57
robertmyersyes21:58
hub_capwould reddwarf _need_ to emit these?21:58
grapexhub_cap: I don't think it should.21:58
vipulWell it seems that every 'metering' implementation has exists events21:58
hub_capvipul: sure but something like ceilometer should do that21:58
vipulso it seems anyone using reddwarf would have to build one21:58
hub_capnotifications are based on events that happen21:59
grapexvipul cp16net: What if we put the exist daemon into contrib?21:59
hub_capi personally disagree w/ exists events too, so it may color my opinion :)21:59
grapexhub_cap: Honestly, I don't like them at all either. :)21:59
vipulcontrib sounds fine to me grapex21:59
hub_capi dont think that nova sends exists events vipul22:00
vipulit's easy enough to write one.. i just feel that it's kinda necessary..22:00
hub_capit sends events based on a status change22:00
vipulit might not be part of core.. not sure actually22:00
grapexvipul: Ok. We're talking to this goofy external system, but we can find a way to separate that code. If there's some synergy here I agree we should take advantage of it.22:00
hub_capits necessary for our billing system, but i dont think reddwarf needs to emit them. they are _very_ billing specific22:00
hub_capbut im fine w/ it being in contrib22:01
grapexSo how does Nova emit these events?22:01
hub_capi dont think nova does grapex22:01
grapexOr rather, where? In compute for each instance?22:01
vipulnova does the same thing as reddwarf using oslo notifications22:01
grapexevents == notifications roughly22:01
grapexto my mind at least22:01
imsplitbitsorry guys I got pulled away but I'm back now22:01
vipulyep agreed interchangeable22:01
hub_caphttps://wiki.openstack.org/wiki/SystemUsageData#compute.instance.exists:22:01
hub_capnot sure if this is old or not22:02
grapexHmm22:02
vipulthere is a volume.exists.. so it's possible that there is something periodic22:02
grapexvipul: We should probably talk more after the meeting22:02
hub_capif there are exists events _in_ the code, then im ok w/ adding them to our code22:02
hub_capbut god i hate them22:03
vipulgrapex: sure22:03
grapexMy only concern with combining efforts is if we don't quite match we may end up complicating both public code and both our billing related efforts by adding something that doesn't quite fit.22:03
vipulif we keep it similar to what robertmyers published i think it'll be fairly generic22:03
vipuland we need them for our billing system :)22:04
hub_caphttps://wiki.openstack.org/wiki/NotificationEventExamples#Periodic_Notifications:22:04
grapexvipul: OK. He or I will be doing a pull request for notifications stuff very soon22:04
hub_capif nova already does this grapex maybe we are duplicating effort!22:04
grapexwithin the week, hopefully22:04
vipulgrapex: sweet!22:04
*** sdake_ has quit IRC22:04
hub_capbut just cuz there is a wiki article doesnt mean its up to date22:05
SlickNikNice.22:05
vipulhub_cap: even if nova emitted events, I think reddwarf should be the 'source of truth' in terms of timestamps and whatnot22:05
imsplitbithub_cap: if it's on the internet it's true22:05
grapexvipul: Agreed22:05
hub_capvipul: ya i didnt mean use novas notification22:05
grapexvipul: Working backwards from Nova to figure out what a "Reddwarf instance" should be could lead to issues...22:05
SlickNikintruenet...22:05
hub_capi meant that we might be able to use their code :)22:05
vipulyep22:05
hub_capto emit ours22:05
vipulright.. ok22:05
grapexhub_cap: Sorry for the misinterpretation22:05
hub_capno worries22:06
vipulcool.. i think we're good on this one..22:06
vipullet's get the base in..22:06
hub_capdef22:06
vipuland can discuss periodic22:06
vipulif need be22:06
hub_cap#link https://wiki.openstack.org/wiki/NotificationEventExamples22:06
hub_cap#link https://wiki.openstack.org/wiki/SystemUsageData#compute.instance.exists:22:06
hub_capjust in case22:06
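[Editorial aside: for context on the "exists" events debated above, a periodic usage notification in the compute.instance.exists style is just a structured payload emitted on an audit interval. The sketch below is illustrative only — the field names follow the pattern on the SystemUsageData wiki page, but the `reddwarf.instance.exists` event type and the helper function are assumptions, not the project's settled schema.]

```python
# Hypothetical sketch of a periodic "exists" usage event, modeled on the
# compute.instance.exists pattern from the SystemUsageData wiki page.
# Event type and field names are illustrative, not a final reddwarf schema.
from datetime import datetime, timedelta

def build_exists_event(instance_id, audit_start, audit_end):
    """Build the payload a periodic audit task would emit for billing."""
    return {
        'event_type': 'reddwarf.instance.exists',  # assumed name
        'payload': {
            'instance_id': instance_id,
            'audit_period_beginning': audit_start.isoformat(),
            'audit_period_ending': audit_end.isoformat(),
        },
    }

now = datetime(2013, 4, 2, 22, 0, 0)
event = build_exists_event('abc-123', now - timedelta(hours=1), now)
```

The point of the audit-period fields is that billing can reconcile "this instance existed for this hour" even when no state-changing event fired during that hour.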
vipulawesome thanks22:07
hub_cap#action grapex to lead the effort to get the code gerrit'ed22:07
hub_cap#dammit that doesnt give enough context22:07
vipulthere should be an #undo22:07
hub_cap#action grapex to lead the effort to get the code gerrit'ed for notifications22:08
hub_caplol vipul ya22:08
SlickNikor a #reaction22:08
hub_cap#RootWrap22:08
hub_caplol22:08
hub_cap#topic RootWrap22:08
*** openstack changes topic to "RootWrap (Meeting topic: reddwarf)"22:08
hub_capso lets discuss22:08
vipulok this one is around Guest Agent..22:08
vipulwhere we do 'sudo this' and 'sudo that'22:08
vipulturns out we can't really run the guest agent without giving the user sudoers privileges22:08
hub_capfo shiz22:09
SlickNik#link https://wiki.openstack.org/wiki/Nova/Rootwrap22:09
vipulso we should look at doing the root wrap thing there22:09
* datsun180b listening22:09
hub_capyes but we should try to get it moved to common if we do that ;)22:09
hub_caprather than copying code22:09
vipulI believe it's based on config where you specify everything you can do as root.. and only those things22:09
datsun180bsounds about right22:09
vipulhub_cap: it's already in oslo22:09
SlickNikIt's already in oslo, I believe.22:10
SlickNikWe need to move to a newer version of oslo (which might be painful) though22:10
vipuldatsun180b: I think the challenge will be to define that xml.. with every possible thing we want to be able to do22:10
vipulbut otherwise probably not too bad22:10
vipulso maybe we BP this one22:10
datsun180bwe've got a little experience doing something similar internally22:11
hub_capvipul: YESSSSS22:11
SlickNikI think we should bp it.22:11
datsun180bi don't think a BP would hurt one bit22:11
SlickNikI hate the fact that our user can sudo indiscriminately...22:11
vipulyup makes hardening a bit difficult22:11
datsun180bwell if we've got sudo installed on the instance what's to stop us from deploying a shaped charge of a sudoers ahead of time22:12
datsun180bspitballing here22:12
datsun180baren't there provisos for exactly what commands can and can't be run by someone granted powers22:12
vipulyou mean configure the sudoers to do exactly that?22:13
robertmyerssudoers is pretty flexible :)22:13
hub_capsure but so is rootwrap :)22:13
datsun180bright, if we know exactly what user and exactly what commands will be run22:13
hub_capand its "known" and it makes deployments easier22:13
hub_cap1 line in sudoers22:13
datsun180bwhat, rootwrap over sudoers?22:13
hub_caprest in code22:13
hub_capyes rootwrap over sudoers22:13
vipulprobably should go with the common thing..22:13
hub_capsince thats the way of the openstack22:13
SlickNikyeah, but I don't think you can restrict arguments and have other restrictions.22:14
robertmyersyou can do one line in sudoers too, just a folder location22:14
*** djohnstone has quit IRC22:14
*** sdake_ has joined #openstack-meeting-alt22:14
hub_capsure u can do all this in sudoers22:14
hub_capand in rootwrap22:14
hub_capand prolly in _insert something here_22:14
robertmyersoptions!22:14
hub_capbut since the openstack community is going w/ rootwrap, we can too22:15
* datsun180b almost mentioned apparmor22:15
juicehub_cap: I think root_wraps justification is that it is easier to manage than sudoers22:15
SlickNiklol@datsun180b22:15
hub_capyes juice and that its controlled in code vs by operations22:15
hub_caphttps://wiki.openstack.org/wiki/Nova/Rootwrap#Purpose22:16
vipul+1 for root wrap22:16
hub_cap+1 for rootwrap22:17
hub_cap+100 for common things shared between projects22:17
vipulSo I can start a BP for this one.. hub_cap just need to get us a dummy bp22:17
SlickNik+1 for rootwrap22:17
datsun180b-1, the first step of rootwrap in that doc is an entry in sudoers!22:17
datsun180bi'm not going to win but i'm voting on the record22:18
SlickNikAh, did we run out of dummies already?22:18
hub_capvipul: https://blueprints.launchpad.net/reddwarf/+spec/parappa-the-rootwrappah22:18
SlickNiknice name22:18
grapexhub_cap: that is the greatest blue print name of all time.22:18
hub_capdatsun180b: read the rest of the doc then vote ;)22:18
hub_cap:P22:18
datsun180bit looks to be about par for nova22:18
hub_capthere was _much_ discussion on going to rootwrap 2 version agao22:19
hub_cap*ago22:19
hub_capmovin on?22:19
SlickNikAre we good with rootwrap?22:20
SlickNiksounds good.22:20
vipuloh yes.. i think we're good22:20
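[Editorial aside: one detail worth pinning down from the discussion above — despite the mention of defining "that xml", nova/oslo rootwrap filter definitions are INI-style `.filters` files, not XML. A hedged sketch follows; the file path and the specific commands a guest agent would need are invented for illustration.]

```ini
# Hypothetical /etc/reddwarf/rootwrap.d/guestagent.filters sketch.
# Each entry names a filter class, the exact executable allowed, the
# user to run it as, and (for RegExpFilter) per-argument patterns.
[Filters]
# name: CommandFilter, executable, run-as-user
mount: CommandFilter, /bin/mount, root
service: CommandFilter, /usr/sbin/service, root
# name: RegExpFilter, executable, run-as-user, regex per argument
chown_mysql: RegExpFilter, /bin/chown, root, chown, mysql:mysql, /var/lib/mysql(/.*)?
```

Sudoers then shrinks to the single line hub_cap mentions, something like `reddwarf ALL = (root) NOPASSWD: /usr/local/bin/reddwarf-rootwrap` — the wrapper path here is hypothetical. Everything else the agent may run as root lives in the filter files, i.e. "in code" under review rather than in an operations-managed sudoers.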
hub_cap#topic quota tests w/ xml22:20
*** openstack changes topic to "quota tests w/ xml (Meeting topic: reddwarf)"22:20
datsun180byes let's22:20
hub_capgrapex: lets chat about that22:20
grapexhub_cap: Sure.22:20
grapexIt looks like the "skip_if_xml" is still called for the quotas test, so that needs to be turned off.22:21
grapexhttps://github.com/stackforge/reddwarf/blob/master/reddwarf/tests/api/instances.py , line 24322:21
vipuli thought we were fixing this a week ago22:21
vipulmaybe that was for limits22:21
vipulesp1: weren't you the one that had a patch?22:22
grapexvipul: Quotas was fixed, but this was still turned off. I was gone at the time when the test fixes were merged, so I didn't see this until recently... sorry.22:22
esp1vipul: yeah22:23
esp1I can retest it if you like.22:23
*** sdake_ has quit IRC22:23
esp1but I'm pretty sure they were working last week.22:23
hub_capis the flag still in the code?22:24
grapexesp1: The issue is in the second test run - right now if "skip_with_xml" is still called, the tests get skipped in XML mode. That one function needs to be removed.22:24
vipulesp1: https://github.com/stackforge/reddwarf/blob/master/reddwarf/tests/api/instances.py#L24322:24
hub_capps grapex, if u do github/path/to/file.py#LXXX it will take u there22:24
hub_capgrapex: like that :P22:24
grapexhub_cap: Thanks for the tip... good ole github22:24
esp1grapex: ah ok.22:24
datsun180beasy enough to reenable the tests, the hard part is making sure we still get all-green afterward22:25
esp1I think I saw a separate bug logged for xml support in quotas22:25
grapexesp1: Sorry, I thought I'd made a bug or blueprint or something for this explaining it but I can't find it now... *sigh*22:25
grapexesp1: Maybe that was it22:25
esp1grapex: I think you did.  I can pull it up and put it on my todo list22:25
grapexSo in general, any new feature should work with JSON and XML out of the gate... the skip thing was a temporary thing to keep the tests from failing.22:25
hub_cap+1billion22:25
grapexesp1: Cool.22:26
SlickNikI agree. +122:26
esp1np22:26
esp1#link https://bugs.launchpad.net/reddwarf/+bug/115090322:26
grapexesp1: One more tiny issue22:26
esp1yep,22:26
grapexesp1: that test needs to not be in the "GROUP_START", since other tests depend on that group but may not need quotas to work.22:27
grapexOh awesome, thanks for finding that.22:27
esp1grapex: ah ok.  yeah I remember you or esmute talking about it.22:27
hub_capgrapex: is there a doc'd bug for that?22:27
esp1I'll take care of that bug too.  (maybe needs to be logged first)22:28
vipul#action esp1 to re-enable quota tests w/XML support and remove them from GROUP_START22:28
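[Editorial aside: a minimal sketch of why the quota tests silently vanish in XML mode and what deleting the guard buys. The names are illustrative of the pattern in reddwarf/tests/api/instances.py, not copied from it.]

```python
# Illustrative sketch (not the actual reddwarf test code): the
# skip_if_xml-style guard makes a test a no-op in one of the two
# serialization runs; removing it exercises both wire formats.

class SkipTest(Exception):
    """Stand-in for the test runner's skip exception."""

def run_quota_test(serialization, guard_with_skip=True):
    """Simulate the test body; the guard call is the line to delete."""
    if guard_with_skip and serialization == 'xml':
        raise SkipTest('skip_if_xml still in place')
    return 'quota test ran (%s)' % serialization

# With the guard removed, the same test runs under both formats:
results = [run_quota_test(fmt, guard_with_skip=False)
           for fmt in ('json', 'xml')]
```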
hub_capperfect, we good on that issue?22:28
grapexhub_cap: Looks like it.22:28
datsun180bsounds good22:28
vipuland delete the 'skip_if_xml' method :)22:28
vipulall together22:29
esp1right22:29
SlickNikSounds good22:29
esp1sure why not.22:29
SlickNikThanks esp122:29
hub_capbaby in ihandss22:29
hub_capsry22:29
esmutewhat is the xml support?22:29
hub_cap#topic Actions / Action Events22:29
*** openstack changes topic to "Actions / Action Events (Meeting topic: reddwarf)"22:29
SlickNikdoes it improve your spelling? :)22:29
hub_caplol no22:30
hub_capits terrible either way22:30
vipulesmute: the python client can do both json and xml.. we run tests twice, once with xml turned on and once without22:30
hub_capso i thought of 3 possible ways to do actions and action_events22:30
esp1esmute: we support both JSON and XML in the Web Service API22:30
hub_cap1) pass a async response uuid back to asyn events and poll based on that (our dnsaas does this for some events)22:31
hub_caplemme find the email and paste it22:31
esmutethanks vipul, esp1... is the conversion happening in the client?22:31
hub_cap1) Async callbacks - a la DNS. Send back a callback uuid that a user can query against a common interface. This is more useful for things that do not return an id, such as creating a database or a user. See [1] for more info. For items that have a uuid, it would make more sense to just use that uuid.22:31
*** amyt has quit IRC22:31
hub_cap2) HEAD /whatever/resource/id to get the status of that object. This is like the old cloud servers call that would tell u what state your instance was whilst building.22:31
hub_cap3) NO special calls. Just provide feedback on the GET calls for a given resource. This would work for both items with a uuid, and items without (cuz a instance has a uuid and u can append a username or dbname to it).22:31
hub_cap[1] http://docs.rackspace.com/cdns/api/v1.0/cdns-devguide/content/sync_asynch_responses.html22:31
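[Editorial aside: option 3 above amounts to plain polling against the resource's own GET until the async action settles. A sketch of the client side — the status values and response shape are assumptions for illustration, not a published reddwarf API.]

```python
# Illustrative client-side polling for option 3: keep issuing
# GET /instances/<uuid> until the status leaves a transient state.
import time

def poll_until_done(get_resource, timeout=300, interval=2):
    """Poll the resource's GET until its status is no longer transient."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        resource = get_resource()
        if resource['status'] not in ('BUILD', 'RESIZE'):
            return resource
        time.sleep(interval)
    raise RuntimeError('timed out waiting for the async action to settle')

# Fake GET that becomes ACTIVE on the third poll:
states = iter(['BUILD', 'BUILD', 'ACTIVE'])
result = poll_until_done(lambda: {'status': next(states)}, interval=0)
```

Option 1 differs only in what is polled (a separate callback/job resource instead of the instance itself), which is why the two collapse together when the resource already has a uuid.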
*** robertmyers has quit IRC22:31
esp1esmute: sorta I'll walk you through it tomorrow :)22:31
esmutecool22:32
hub_capi think that #3 was the best option for uniformity22:32
hub_capdoes anyone feel same/different on that?22:32
vipulwait so this is how the user determines the status of an action (like instance creation state)?22:32
vipul3 is what we do right22:33
vipultoday22:33
hub_capcorrect22:33
hub_capbut it gives no status22:33
hub_caperr22:33
hub_capit gives no description22:33
hub_capor failure info22:33
grapexhub_cap: Do you mean uniformity between other OS apis?22:33
hub_capgrapex: uniforimity to what nova does22:33
hub_capand uniformity as in, itll work for actions that dont have uuids (users/dbs)22:33
grapexhub_cap: I think we should go for #1. I know it isn't the same as nova but I think it would really help if we could query actions like that.22:34
grapexEventually some other project will come up with a similar idea.22:34
hub_capmy thought is to start w/ #322:34
hub_capsince itll be the least work and itll provide value22:34
vipulin #1, the user is providing a callback (url or something)?22:34
SlickNikSo, if I  understand correctly 3 is to extend the GET APIs that we have today to also provide the action description.22:34
grapexhub_cap: That makes sense, as long as #1 is eventually possible22:34
hub_capessentially #3 is #122:34
hub_capbut w/ less data22:35
hub_capthats why i was leaning toward #322:35
grapexYeah, sorry... if we have unique action IDs in the db we can eventually add that and figure out how the API should look22:35
vipul#1 seems more like a PUSH model.. where reddwarf notifies22:35
hub_capthe only reason u need a callback url in dnsaas is cuz they dont control the ID22:35
hub_capwell they are all polling honestly but i think i see yer point vipul22:35
hub_capi honestly dislike the "callback" support22:36
hub_capbecasue whats the diff between these scenarios22:36
hub_cap1) create instance, get a uuid for the instance22:36
hub_capcrap let me start over22:36
hub_cap1) create instance, get a uuid for the instance, poll GET /instance/uuid for status22:37
hub_cap2) create instance, get new callback uuid and uuid for instance, poll GET /actions/callback_uuid for status22:37
hub_capother than that, 2 is more work :P22:37
vipulthat's not very clean22:37
vipulif you're going to do 2) then we should be pushing to them.. invoking the callback22:37
*** heckj has quit IRC22:38
hub_capya and we wont be doing that anytime soon :)22:38
hub_capall in favor for the "Easy" route, #3 above?22:38
vipulI22:38
vipulAye22:38
grapexI'm sorry, I'm confused.22:38
hub_capeye22:38
vipuleye22:38
hub_caplol vipul22:38
hub_capgrapex: gohead22:38
grapex#2 - you just mean the user would need to poll to get the status?22:38
hub_capcorrect just like dns22:39
grapexHow would 1 and 2 map to things like instance resizes?22:39
hub_caphttp://docs.rackspace.com/cdns/api/v1.0/cdns-devguide/content/sync_asynch_responses.html22:39
hub_capGET /instance/uuid vs GET /instance/callback_uuid_u_got_from_the_resize22:40
hub_capGET instance/uuid already says its in resize22:40
hub_capthis will just give u more info if something goes wrong22:40
vipulreally the difference is GET /resourceID vs GET /JobID22:40
hub_capwhich was the original point of actions in the first place22:40
SlickNikHonestly the only reason I'd consider 2 would be if there were actions that are mapped to things other than resources (or across multiple resource).22:40
grapexSlickNik: That's my concern too.22:40
hub_capwe cross that bridge when we come to it22:41
hub_cap^ ^ my favorite phrase :)22:41
grapexOk- as long as we can start with things as they are today22:41
vipuldo we want to consider everything a Job?22:41
grapexand each action has its own unique ID22:41
hub_capgrapex: it does/will22:41
grapexOr maybe a "task"?22:41
grapexThat our taskmanager can "manage"? :)22:41
vipulheh22:41
hub_caplol grapex22:42
SlickNikheh22:42
grapexActually live up to its name finally instead of being something we should've named "reddwarf-api-thread-2"22:42
hub_capehe22:42
vipulthat might have been the intention.. but we don't do a whole lot of managing task states22:42
hub_capreddwarf-handle-async-actions-so-the-api-can-return-data22:42
vipulwe record a task id i believe.. but that's it22:42
grapexvipul: Yeah its pretty silly.22:42
grapexWell I'm up for calling it action or job or task or whatever.22:43
grapexhub_cap: Nova calls it action already, right?22:43
hub_capnova calls it instance_action22:43
hub_capcuz it only applies to instances22:43
vipulgross22:43
grapexAh, while this would be for anything.22:43
hub_capim calling it action cuz its _any_ action22:43
hub_caplikely for things like create user22:44
grapextask would kind of make sense. But I'm game for any name.22:44
hub_capill do instance uuid - username22:44
hub_capas a unique id for it22:44
vipulbackup id22:44
hub_caplets call it poopoohead then22:44
hub_capand poopoohead_actions22:44
SlickNiklol!22:44
juicehub_cap: hope that's not inspired by holding a baby in your hands22:44
grapexhub_cap: I think we can agree on that.22:44
juicesounds like a mess22:44
hub_capHAHA22:44
hub_capnice yall22:45
vipulhub_cap does it only apply to async things?22:45
hub_caplikely22:45
hub_capsince sync things will return a error if it happens22:45
hub_capbut it can still record sync things if we even have any of those22:45
hub_capthat arent GET calls22:45
hub_capbasically anything that mods a resource22:45
grapexSo I'm down for going route #3, which if I understand it means we really won't change the API at all but just add this stuff underneath22:46
vipuli'm assuming we add a 'statusDetail' to the response of every API?22:46
hub_capto the GET calls vipul likely22:46
grapexbecause it seems like this gets us really close to tracking, which probably everyone is salivating for, and we may want to change the internal DB schema a bit over time before we offer an API for it.22:47
SlickNikonly the async GETs, I thought.22:47
hub_capdef grapex22:47
grapexSo instance get has a "statusDetail" as well?22:47
hub_caplikely all GET's will have a status/detail22:48
grapexSo there's status and then "statusDetail"? That implies a one to one mapping with resources and actions.22:48
hub_capmaybe not if it is not a failure22:48
grapexAssuming "statusDetail" comes from the action info in the db.22:48
vipulthat's my understanding as well grapex22:48
hub_capit implies a 1x1 mapping between a resource and its present state22:48
hub_capit wont tell u last month your resize failed22:49
hub_capitll tell you your last resize failed if its in failure state22:49
grapexhub_cap: But that data will still be in the db, right?22:49
hub_capfo shiiiiiiiz22:49
SlickNikWould a flavor GET need a status/detail?22:49
vipulprolly not since that would be static data22:49
hub_capcorrect22:49
grapexOk. Honestly I'm ok, although I think statusDetail might look a little gross.22:50
grapexFor instance22:50
grapexif a resize fails22:50
grapextoday it goes to ACTIVE status and gets the old flavor id again.22:50
grapexSo in that case, would "statusDetail" be something like "resize request rejected by Nova!" or something?22:50
grapexBecause that sounds more like a "lastActionStatus" or something similar. "statusDetail" implies its currently in that status rather than being historical.22:51
hub_capthat is correct22:51
vipuli guess that could get interesting if you have two requests against a single resource22:51
hub_caplet me look @ how servers accomplishes this22:51
vipulyou may lose the action you care about22:51
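[Editorial aside: to make the statusDetail debate concrete, here is what the two GET responses under discussion might look like. The field names and wording are assumptions mirroring grapex's failed-resize example, not a settled reddwarf schema — and grapex's objection stands: on an ACTIVE resource the field is really describing the *last* action, not the current status.]

```python
# Hypothetical GET /instances/<uuid> responses sketching the
# "statusDetail" idea: present only when the most recent action left
# something worth explaining, absent otherwise.
import json

healthy = {
    'instance': {
        'id': 'abc-123',
        'status': 'ACTIVE',
        # no statusDetail: nothing to explain
    }
}

failed_resize = {
    'instance': {
        'id': 'abc-123',
        'status': 'ACTIVE',  # reverted to the old flavor after the failure
        'statusDetail': 'resize request rejected by Nova',
    }
}

print(json.dumps(failed_resize, indent=2))
```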
* hub_cap puts a cap on meeting22:51
hub_caplets discuss this on irc tomorrow22:51
hub_capits almost 6pm in tx22:52
hub_capand i have to go to the bakery to get some bread b4 it closes22:52
hub_capi will say that there is more thinking that needs to go into this bp22:52
SlickNikSounds good. Want to think about this a bit more as well.22:52
hub_capand ill add to the bp (which is lacking now)22:52
hub_capSlickNik: agreed22:52
grapexMaybe we should discuss moving the meeting an hour earlier22:52
vipulyup good talk22:53
datsun180bnot a bad idea22:53
grapexA few people had to go home halfway through today since it's been raining hard here22:53
vipula little rain gets in the way?22:53
vipuli'm game for 1pm PST start22:53
grapexWhich in Texas happens so rarely it can be an emergency22:53
SlickNikI'd be up with that, too22:53
datsun180bthis is austin, rain is a rarer sight than UFOs22:53
hub_capvipul: lol its tx22:53
hub_caprain scares texans22:53
SlickNikIt's probably like snow in Seattle. :)22:53
juiceor sun22:53
vipulor sun22:53
vipuldamn22:54
vipulyou beat me to it22:54
SlickNiknice22:54
hub_capHAHA22:54
SlickNiksame time22:54
vipulthis room always seems empty before us22:54
vipulso let's make it happen for next week22:54
hub_capLOL we are the only people who use it ;)22:54
vipulgrapex: we need to talk about the prezo22:54
grapexvipul: One person on the team said they needed to go home to roll up the windows on their other car which had been down for the past five years.22:54
vipullol22:55
hub_capgrapex: hahahaa22:55
SlickNiklolol22:55
hub_capso end meeting?22:55
datsun180bso then next week, meeting moved to 3pm CDT/1pm PDT?22:55
grapexReal quick22:55
grapexWe're all cool if hub_cap goes forward on actions right?22:55
grapexiter 1 is just db work22:55
grapexand we can raise issues during the pull request if there are any22:56
hub_capyup grapex thats what im gonna push the first iter22:56
vipulYea, let's do it22:56
grapexCool.22:56
SlickNikI'm fine with that.22:56
SlickNik+122:56
grapexI'm looking forward to it. :)22:56
SlickNikSweetness.22:56
hub_capaight then22:56
hub_cap#endmeeting22:56
*** openstack changes topic to "OpenStack meetings (alternate) || Development in #openstack-dev || Help in #openstack"22:56
openstackMeeting ended Tue Apr  2 22:56:56 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)22:56
openstackMinutes:        http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-04-02-20.59.html22:56
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-04-02-20.59.txt22:57
openstackLog:            http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-04-02-20.59.log.html22:57
SlickNikThanks all...22:57
esp1phew!22:57
hub_caplol22:57
hub_capl8r22:57
SlickNikgo get yer bread hub_cap…22:57
SlickNiklaters :)22:57
grapexSee you guys!22:57
hub_capi know! i gotta get it!!!!22:57
*** hub_cap has left #openstack-meeting-alt22:57
*** esp1 has left #openstack-meeting-alt22:57
*** vipul is now known as vipul|away22:59
*** saurabhs has left #openstack-meeting-alt23:03
*** jcru has quit IRC23:04
*** vipul|away is now known as vipul23:04
*** vipul is now known as vipul|away23:05
*** sdake_ has joined #openstack-meeting-alt23:21
*** dhellmann has joined #openstack-meeting-alt23:29
*** vipul|away is now known as vipul23:52

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!