21:59:26 #startmeeting reddwarf
21:59:27 Meeting started Tue Jan 29 21:59:26 2013 UTC. The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:59:28 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:59:30 The meeting name has been set to 'reddwarf'
21:59:36 hello there
21:59:40 hiya
21:59:40 howdy SlickNik
21:59:40 here
21:59:44 and everyone else :)
21:59:54 #link http://wiki.openstack.org/Meetings/RedDwarfMeeting
21:59:57 here
22:00:02 present
22:00:08 lets give a minute for the tricklers
22:00:10 ding
22:00:12 afternoon
22:00:15 sounds good.
22:00:41 bam
22:00:47 lol hi imsplitbit
22:00:55 howdy
22:01:01 sorry I'm late
22:01:13 ok lets rock this, we have enough
22:01:14 he started a minute early
22:01:16 Greets
22:01:17 grapex says he will be late
22:01:20 won't hold it against him though
22:01:21 WOAH nice grapex!
22:01:31 grapex, you live!
22:01:48 #topic action items
22:02:02 vipul: link us your bp plz sir
22:02:06 for quotas
22:02:17 #link https://blueprints.launchpad.net/reddwarf/+spec/quotas
22:02:25 cool, thanks...
22:02:40 annashen, juice added a bunch to this one
22:02:49 i think this needs a good review, and some discussion
22:02:55 which i've put as part of the agenda later
22:02:57 hokey. and we have a topic right?
22:03:05 they've also got a bunch of info up on the openstack wiki...
22:03:34 so the testr BP
22:03:40 link to the wiki again
22:04:00 #link http://wiki.openstack.org/reddwarf-quotas
22:04:01 #link http://wiki.openstack.org/reddwarf-quotas
22:04:01 gotit
22:04:02 grr yes the wiki is more complete let me link to that
22:04:09 thanks vipul and nik
22:04:10 nice
22:04:26 so lets defer too much convo to the actual topic in the meeting
22:04:30 Yes, we have a topic for discussion on this,
22:04:31 sounds good.
22:04:34 and breeze thru these action items
22:04:46 as for testr BP, i reviewed last wk. the only suggestion i have is that i dont think we need to put the files that are changing in the BPs
22:04:57 testr Blueprint... I think we had some discussions here in the office, agreed to put everything testr related in /reddwarf/tests/unittest
22:04:57 otherwise the content is good... cna someone link it?
22:05:05 ok that makes sense
22:05:20 #link https://blueprints.launchpad.net/reddwarf/+spec/testr-unit-tests
22:05:30 thanks vipul
22:05:34 * CaptTofu is lurking
22:05:39 HAI!
22:05:41 i think the idea was to show where tests would live, right esp?
22:05:48 ya ive seen a few of the BPs have full file listings
22:05:49 welcome aboard, capt!
22:05:52 yeah I think so
22:05:53 welcome CaptTofu
22:06:02 hi!
22:06:10 but yea, generally probalby don't need to have every file being changed listed in the bp
22:06:14 there was another recent one that had the 3 files (the guest conf grant chante one)
22:06:15 one needs a break from chef
22:06:37 :)
22:06:38 we decided to kill off the 'functional' package and shove everything under unittest I think.
22:06:46 ok that works for me esp
22:06:54 #link https://blueprints.launchpad.net/reddwarf/+spec/create-restricted-root-account
22:07:10 ^^ we are still working out the details of this one
22:07:23 ya i love the content of that BP im just not sure we need the files at the top of it
22:07:25 any opposition to having a seaprate dbaas.conf file?
22:07:27 but yeah it involves adding a new config for the grant
22:07:45 id prefer a separate guest conf file personally
22:07:56 so we know where the split is if we create a different guest :)
22:08:09 k, I will fix the content of the BP
22:08:11 but we got a bit OT on that one :)
22:08:30 we are on housing multiple images action item
22:08:34 #agrees with hub_cap on separate guest conf
22:08:41 i think weve beat that one to the ground right?
22:08:53 lol, yea…I think so...
22:08:54 :)
22:08:57 esp: seems like we need a separate one then
22:09:21 vipul: yeah I'm good with that
22:09:33 ya /me wants separate config.. it makes more sense
22:09:34 just to be clear, there's gonna be a guestagent.conf.. and another conf for things like grants
22:09:45 ok now im confused
22:09:46 thigns that are hard-cdoed in dbaas.py could live there
22:09:48 lol
22:09:49 lets table this
22:09:49 though so
22:09:52 also note that we are note going to put the actual GRANT stmt in the config but we'll build it in code as per steveleon
22:09:54 come back to it
22:09:59 and put it on the agenda so we can get thru action items
22:10:12 k
22:10:14 vipul: can u amend the agenda?
22:10:24 Slicknik working on making integration tests run post devstack install + local.sh run for CI <-- you are up SlickNik
22:10:32 yep
22:10:38 should we table this to the CI section?
22:10:42 Okay, so I'm fixing up the two problematic tests...
22:11:01 there are 2 action items SlickNik owns, go head and chat em up
22:11:16 issues were being caused by apparmor and upstart trying to continuously start up mysql.
22:11:17 problematic tests? speed wise? or mucking up content?
22:11:19 AHH
22:11:31 im pretty sure upstart _is_ the devil
22:11:43 because of this the log files were never actually getting _fixed_
22:12:08 ya, grapex told me about that one recently
22:12:14 it's the restart tests - problem only seems to be evident on cloud servers.. since it's much slower
22:12:21 anyhow, after these fixes are in all the blackbox tests should run clean.
22:12:45 coolness
22:12:52 Also, we had to take some smaller fixes to make the build/tests run under devstack-vm-gate.
22:12:56 thats great news. then its hook into gerrit time :)
22:13:10 Did it seem like there was anything we could change to make the tests catch that bug every time?
22:13:14 cool SlickNik i figured there would be a bit of work there...
22:13:16 thats awesome news!
22:13:25 Instead of just on the slower environment?
22:13:27 and there's a couple more I need to code up that have to do with us always using 10.0.0.1 for the host ip..
22:13:36 vm-gate uses 10.1.0.1
22:13:37 grapex: I think an explicit 'stop' call may work
22:13:45 and timeout issues with test_instance_created
22:14:03 SlickNik: Ahhhh i was wondering why u mentioned removing that 10.0.0.1 hardcode... thx for the info
22:14:22 Should be able to get to that by tomorrow, so ever inching closer.
22:14:27 so that takes care of 4, 5, and 6
22:14:30 SlickNik: AWESOME
22:14:47 hub_cap: http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-01-22-22.01.html
22:14:52 vipul: #7 is you, once u and grapex figured out the apparmore thing u were good to og right?
22:14:53 are you on the right one?
22:15:07 WEAK vipul no im not
22:15:10 stale link
22:15:20 lol
22:15:29 was cornfused
22:15:30 heh, I was wondering too..
22:15:38 so that takes care of 2, 3 and 4 :)
22:15:49 just one other note - on images, test and performance…I am digging into the over performance issues with the disk image builder images
22:16:15 #info juice looking into why diskimage builder images are slower
22:16:18 if its over performing lets not look at it too hard /rimshot
22:16:27 I wish
22:16:42 lol
22:16:46 hub_cap: its a dog
22:16:57 * hub_cap sheds a tear
22:17:04 so i spoke w/ heckj wrt the api
22:17:04 I think it's an overperforming underperformer :P
22:17:08 :)
22:17:10 heh
22:17:14 ??
22:17:21 hi lurker
22:17:26 * heckj waves
22:17:36 I thought I'd been summoned
22:17:44 nope just talked _about_
22:17:59 we exchanged emailses about the spec and what some good lessons learned are, ill foward them to u vipul
22:18:15 soudns good
22:18:23 #action hub_cap to forward joe hecks (so i dont mention his irc name heckj) email to vipul
22:18:29 heh
22:18:33 see how i did that there :P
22:18:51 so our internal team does _not_ like the 1.0 2.0 spec change
22:18:56 so we might just roll w/ a 1.1 spec
22:19:00 vipul, hub-cap: I don't think anything was terribly private in there, do with it as you please
22:19:10 <3 heckj
22:19:22 and we are currently reviewing the internal 1.0 spec as per our doc writer, it should be up soon
22:19:33 hub_cap... what's the issue with 1.1 vs 2.0?
22:19:37 mostly my with-a-bourbon reflections on what went ok and what fucked up doing spec writing and pimping
22:19:48 mmmm bourbon
22:20:05 well since we havent changd the api significantly from the 1.0 api, why not roll w/ 1.1 since its just mroe features
22:20:18 was their question.. and it makes sense
22:20:28 right, agreed, not a compatilibity change
22:20:29 no reason to go 2.0 until we break the 1x contract
22:20:31 yup
22:20:41 gotcha
22:21:01 hub_cap: you have a 1.1 API flushed out?
22:21:14 vipul: nope we have a full day meeting planned next wk
22:21:24 so any features u have already fleshed out, send our way
22:21:29 or btter yet, blueprint :)
22:21:33 hub_cap: please forward our way
22:21:40 hub_cap: we have a snapshots blueprint just filed
22:21:51 I need to come up with an API around it
22:21:52 DEF. itll be up on the database-api (but ill send u a working copy first)
22:21:55 that's not quite int he BP yet
22:21:58 cool
22:22:13 ya id like ot not reinvent the wheel for that since uve done it already
22:22:21 and we have done work for the my.cnf edits api as well
22:22:32 so we can turn it into a nice little doc and go bakc and forth a bit on it
22:22:43 #link http://wiki.openstack.org/Reddwarf
22:22:47 ^ ^ needs love
22:22:47 cool... hopefuly next couple of weeks we can get a few of thos big items flushed out
22:22:55 vipul: i think next week ill have something for u
22:23:13 i do like that yall put the quotas BP up on that, and i like even more the under construction sign is still there :)
22:23:24 (sorry ive moved on to the next action item)
22:23:46 #action everyone add more content to wiki
22:23:53 #agreed
22:24:10 juice: percona bits to integration, hows that comin?
22:24:11 I can document the disk image builder stuff there
22:24:17 nice plz do juice
22:24:27 hub_cap: that was handed off to two folks here
22:24:40 I think they are just working on getting the flags/switches in there
22:24:52 SlickNik or I can document how to get RD intalled with devstack
22:24:55 great. who is the new handler?
22:24:56 Though I don't know if they have yet gotten guest agent to set status to ACTIVE
22:24:59 #action SlickNik to document info about the new devstack - redstack build to the wiki
22:25:07 thanks
22:25:22 no worries, I'll run it by you vipul...
22:25:24 kaganos, kmansel
22:25:29 ok perfect
22:25:31 hey
22:25:40 updates on Percona image?
22:25:42 sorry, we're head down into something here ...
22:25:46 what was the question?
22:25:49 #action kaganos and kmansel own percona bits for integration
22:25:58 kaganos: just wondering update on status, no worries
22:26:03 k
22:26:11 do what u gotta do we can talk in #reddwarf later
22:26:14 status="working on it...
22:26:16 "
22:26:17 :)
22:26:24 ha i got my smiley in your quotes
22:26:41 so grapex and steveleon, how did the test reviews go?
22:27:19 I know the final review got merged...
22:27:24 guest agent 100%
22:27:44 There's one open from Deniz Demir I haven't looked at yet. Sorry, I've been ill.
22:27:45 nice!!
22:27:45 yup
22:27:53 had some help from grapex....
22:27:56 grapex: caught the black lung
22:28:00 grapex is here!
22:28:07 hope you're feeling better...
22:28:09 and consequently is also a merman
22:28:10 what's black lung
22:28:11 Thanks
22:28:20 grapex: flu or cold?
22:28:20 too many cigarettes
22:28:23 hub_cap: water is the essence of liquid
22:28:24 cigarattitis
22:28:28 vipul, you were saying that there was some intermittent failures with some tests
22:28:46 grapex: :)
22:28:50 steveleon: yes, saw that last night on the 'coverage' patch... sqllite tests fail from time to time
22:29:10 since they are run parallel
22:29:23 ugh... i wonder if it is the fake id we are passing
22:29:36 did yall do the randomizing thing to it?
22:29:40 vipul: lifeless mentioned that if we ran those tests with sqlite in memory only, we'd get rid of those parallel problems
22:30:02 isint that what the nova/cinder/etal tests do?
22:30:06 we probably should just do that... no point in having a separate file, since i think we teardown/recreate db
22:30:09 on each test
22:30:37 vipul: Yeah, we never actually make use of persiting the sqlite db.
22:30:43 I'll make a blueprint to change that.
22:30:55 #action grapex to file BP on in-memory sqlite
22:31:04 wouldnt a bug be sufficient?
22:31:20 I think it's just a param for connecting to sqllite
22:31:20 steveleon, bug works too, shoudl be small
22:31:25 steveleon: Is it breaking anything yet?
22:31:27 db:men or something like that
22:31:36 rackers just want to flaunt the fact that they can create new bps :P
22:31:43 i havent seen
22:31:52 but it has been passing most of the time
22:31:53 I say bp because we'll need to update redstack too. :(
22:32:16 i havent seen it fail running it locally
22:32:31 Ah, I see grapex…
22:32:31 ok bp sounds good
22:32:42 last action item...
22:32:45 ok so the last action item is qutoas
22:32:46 i think it might just be a name change
22:33:05 so instead of specifying the filename, you specify ":memory:"... or something like that
22:33:22 hub_cap you mentioned there was no consensus?
22:33:29 that sounds about right
22:33:35 wellllll..... vipul
22:33:38 wrt :memory: and sqlite
22:33:58 the consensus is that everyone has their own till someone ponys up and works on the kyestone one
22:34:15 so that seems to me that we could do that as well
22:34:36 we will be using repose so i dont think we will be contributing cycles to it, but we welcome reviews to makeing it a better system...
22:34:46 i know currently we only have max volumes and max geebees
22:35:06 i'd like to get a solution in there, that doesn't involve fixing up CI to support Java
22:35:14 so that's why i lean towards an embedded solution
22:35:26 sure, we might just keep the java bits internal
22:35:27 a stop-gap until its added to keystone, or some place better
22:35:33 apt-get install apache-tomcat ;)
22:36:06 cool.. well that maeks the quotas conversation easy
22:36:14 lets go on to CI tho as per th emeeting
22:36:15 might be that simple.. but it'll be a much bigger deal in openstack-ci me thinks
22:36:29 yup
22:36:35 hub_cap: why do you like the repose solution?
22:36:36 yep, have an item to dicuss futher later anyway
22:36:38 #topic testing-ci
22:36:56 so i tihnk we got updated w/ whats going on w/ CI right? lets summarize
22:37:28 #info SlickNik working on getting CI tests working w/ devstack vm gate, fixing small issues, support to come soon
22:37:29 yes..
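The ":memory:" change discussed above boils down to swapping the sqlite connection target; here is a minimal sketch using Python's stdlib `sqlite3` (the helper name is illustrative, not the actual reddwarf patch):

```python
import sqlite3

# Sketch only: with ":memory:" each connection gets its own private,
# non-persistent database, so parallel test runners stop contending
# over a shared sqlite file on disk.
def make_test_connection(in_memory=True):
    # ":memory:" is sqlite's reserved name for an in-memory database.
    db = ":memory:" if in_memory else "/tmp/reddwarf_test.db"
    return sqlite3.connect(db)

conn = make_test_connection()
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO instances (name) VALUES ('db1')")
count = conn.execute("SELECT COUNT(*) FROM instances").fetchone()[0]
```

Since the tests already tear down and recreate the db on each run (as noted above), nothing is lost by dropping the file.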
22:37:32 is that good?
22:37:47 anything more to add SlickNik?
22:38:13 #info dkehn also working on devstack-vm-gate
22:38:18 nope that's it.
22:38:35 should have it pushed up to openstack CI this week (hopefully)
22:38:36 The black box tests should be good to go soon.
22:38:43 SlickNik: Nice!
22:38:54 thats if no more issues
22:39:04 yup
22:39:10 Just some more closing up on the devstack-vm-gate issues that keep cropping up. :)
22:39:25 thatll be so cool
22:39:38 ok do we have any unit test stuffs to talk about?
22:39:46 if so ill mod the title otehrwise ill skip it
22:39:56 nope
22:40:21 hokey
22:40:27 #topic quota consensus
22:40:41 so i feel like we have consensus, let me sumarize
22:41:00 #info quota support that mirrors cinder/nova will be added to reddwarf for the short to medium term
22:41:15 #info eventually we will use what the other openstack projects use but that has yet to materialize
22:41:20 but rackspace is using repose
22:41:25 #info rax to use repose internally
22:41:35 hub_cap.. quick question about repose.. won't you need to add repose APIs ?
22:41:36 can I ask why you guys like that solution
22:42:02 juice: rax wrote repose, and uses it :)
22:42:19 hub_cap: that's a good answer
22:42:20 vipul: ya if we _have_ to add apis we can
22:42:34 hub_cap: ok..
22:42:41 as a matter of design/architecture what do you like about it
22:42:44 hub_cap: rate limits, we ok with similar approach to nova?
22:42:46 rate limits will be another one too...
22:43:02 vipul: ya i think so vipul, we should call them limits, not quotas
22:43:06 to support rate and absolute
22:43:19 yup.. limits.py, possibly a request filter
22:43:29 juice: we are evaluating it now, so lets give u that answer next wk
22:43:33 :D
22:43:35 okie
22:43:38 djohnstone: is your man for that
22:43:54 back looking at that tomorrow
22:43:57 hub_cap: there may be two use cases, one a filter (rate limits) and quotas really are checked upon time of creation)
22:43:59 he just started on it but he can give a summary next meeting
22:44:05 so may make sense to have them different
22:44:13 vipul: likely there will be 2 different things
22:44:19 i just meant we need to support limits of all types
22:44:25 not that the code shoudl be the same :D
22:44:28 ya
22:44:29 ok
22:44:47 i mean, if yall get limits done and they are AWESOME then we might use them :P
22:44:56 #action djohnstone to give us an update on Repose?
22:45:06 lol question mark at the end of that hah
22:45:08 thanks vipul
22:45:12 lol
22:45:19 makes it optional :D
22:45:27 hahah nice
22:45:50 ok we feel good about limits?
22:45:56 ok we good, juice?
22:46:04 Sounds like a plan to me.
22:46:07 yup
22:46:24 coolness
22:46:33 #topic User Management
22:46:49 this is regarding the BP linked earlier...
22:47:06 we wnat to be able to control grants given to root user on 'enableRoot'
22:47:08 esp was briefly mentioning it earlier as well...
22:47:09 ah okey. shall we discuss the multi config file together?
22:47:25 vipul: i dont blame u for that. i think we talked about that too a while ago
22:47:44 yes... any opposition to having those grants live in a separate config (different from guestagent.conf)
22:47:57 since dbaas.py -- has a ton of grants/sql statement hard coded
22:48:01 so is it established that there will be a new conf file used only for root-privileges purposes?
22:48:14 possibly this would be a config file that is gearted towards configuring dbaas.py
22:48:14 so _why_ do we need yet another conf file?
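Stepping back to the quota consensus recorded earlier ("quota support that mirrors cinder/nova"): the absolute-limit side of that can be sketched as below. The names (`QuotaExceeded`, `DEFAULT_QUOTAS`) are illustrative only, not the actual reddwarf or nova code.

```python
# Illustrative defaults; the discussion mentions max volumes and max
# gigabytes as the only limits reddwarf currently enforces.
DEFAULT_QUOTAS = {"instances": 5, "volumes": 10, "volume_gb": 100}

class QuotaExceeded(Exception):
    pass

def check_quota(resource, requested, in_use, quotas=DEFAULT_QUOTAS):
    """Reject a request that would push usage past the absolute limit.

    This is the "checked upon time of creation" case; rate limits
    would instead live in a request filter, as noted above.
    """
    limit = quotas.get(resource)
    if limit is not None and in_use + requested > limit:
        raise QuotaExceeded("quota exceeded for %s (limit %d)" % (resource, limit))
    return in_use + requested

# A tenant at 98 GB asking for 2 more fits the 100 GB limit exactly;
# asking for 3 more would raise QuotaExceeded.
new_usage = check_quota("volume_gb", 2, 98)
```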
22:48:32 why cant we just put anotehr option in the conf file
22:48:47 i thought, when i read that last night, that we were putting it all in the standard reddwarf.conf and got confused
22:48:49 dbaas.py is going to get even more grants and statements shortly :3
22:48:51 we _could_... although the thinking is that we're configuring a subset of the geust agent
22:48:52 not the guest.conf
22:49:12 no, not guest.conf that's diff
22:49:27 vipul: but there is no notion of having > 1 config file anywehre in openstack... it just seems like its going against the current so to speak
22:49:33 ok so these are run by the guest, right?
22:49:34 the grants
22:49:40 they are
22:49:43 correct
22:49:50 why would the grants not be in the conf file that is given to the guest
22:49:51 right... where do we move hard-coded sql statements
22:50:00 i was under the impression that you didnt want the option in the guestagent.conf
22:50:08 steveleon: i was confused last night when i said that
22:50:15 i saw reddwarf.conf i thought in that blueprint
22:50:16 i vote pull them not into a static config file but as a module to be imported by dbaas.py
22:50:24 initially we were putting the full GRANT stmt in the config file which was pretty gross
22:50:39 Is the idea that the image could be build with a conf file that lives in it with the grants, and the dynamic conf file would just point to it?
22:50:45 my opinion is to put them in the guestagent.conf
22:50:45 datsun180b: that's the approach sorta
22:51:17 so the ?.conf file will have a property: root_grant= create delete update alter ….
22:51:18 datsun180b.. there are certain things may need to enumerated.. like a list of privs that guest agnet could construct a grant statement from
22:51:22 they are config values.... everyone will have different config values for their setups
22:51:24 put a list of all the privileges that root will have in a config file.. preferably guestagent.conf
22:51:33 and we will use the properties to build the GRANT stmt in code.
22:51:40 steveleon: it _only_ seems right to put them there steveleon... im sorry for confusion
22:51:41 and dbaas.py will read from it and generate a grant sql query
22:51:53 i feel like the email i sent out last night has caused all this
22:52:06 nah, we've been going back and forth on this
22:52:09 general rule of thumb. if its a config that only the guest will use, put it in the guest.conf
22:52:19 it's already changed like 5
22:52:20 x
22:52:26 if it sgonna differ from install to install put it in _a_ conf (not code)
22:52:29 so... i see two tracks.. put them in a module imported by dbaas.py... another to add to reddwarf-guestagent.conf
22:52:46 so given that both of those are true, it seems it should be ina config right?
22:52:52 we have our own homegrown Query class to facilitate guest agent queries, we can build a Grant to go with it
22:53:01 yep, I see no big deal putting it into reddwarf-guestagent.conf
22:53:09 esp: #agreed
22:53:11 I like the config idea too.
22:53:15 right, let's go with reddwaf-guestagent.conf
22:53:32 datsun180b: let's chat after, seems like that direction I'm going
22:53:36 #info static grants to be configurable through reddwarf-guestagent.conf
22:53:44 #agreed
22:53:46 another thing that surge from this discussion is the ability to have a disable-root feature
22:53:49 I prefer having it in a conf, rather than a module since it really is configuration.
22:53:52 #agreed
22:54:08 ok.. next item around this
22:54:13 adding a new API to disable root user
22:54:15 this will make it easier for support to see if and how long the user have used root privileges
22:54:20 #link https://blueprints.launchpad.net/reddwarf/+spec/revoke-root-user-api
22:54:24 steveleon just filed
22:54:46 any reason to not add this API?
22:54:48 hmm... this is going to be a contention point :)
22:54:48 esp: please do
22:55:04 well... we dont add it cuz once u enable root your support model changes
22:55:11 but thats a rackspace specific thing
22:55:22 right, but you have a history of if it was ever enabled
22:55:35 breaking the seal voids your warranty, in short
22:55:36 so it's something that you could still use to determine that
22:55:40 datsun180b: exactly
22:55:48 vipul: there is, ther is a root history table
22:55:49 I guess I don't see the point of disabling root once you have it, since by then the support model already changes.
22:56:05 well the support model should not govern the code
22:56:13 True
22:56:13 and thats why i said "its a rax specific thing"
22:56:20 grapex: there may be scenarios when the user needs it for a period of time to diagnose an issue.. but possible needs to turn it off when done
22:56:22 so im kinda at odds w/ my brain here
22:56:22 but I'm trying to figure out why someone would want that
22:56:36 i see from a permissions standpoint
22:56:43 dont want root being enabled remotely forever
22:56:50 but want to get in, touch something, and get out
22:56:51 right?
22:56:54 vipul: I think then that may be a different concept. It seems like you're giving someone temporary root permissions, like say someone in support. Like it could be a mgmt api.
22:56:57 does the calling root create api to enable root multiple times have the same effect as resetting the root password?
22:57:02 right, and another tangent.. problay wnat a role-based access to the 'enableRoot' API
22:57:04 hub_cap: correect
22:57:10 esp: Yes
22:57:22 hub_cap: exactly
22:57:37 grapex: i don't see managemnt api something end user would have access to
22:57:50 i see a 'dba' at some company that needs temporary elevated access
22:58:31 ok i think that we need to talk internally about this
22:58:44 hub_cap.. ok, we just filed this today.. please review, add comment to BP
22:58:45 yeah I'm not sold yet :)
22:58:48 #action hub_cap to get back to vipul on root rmemove
22:58:53 wow rmemove?!?!
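The agreement reached above (keep a privilege list in reddwarf-guestagent.conf, build the GRANT statement in code rather than storing full SQL) could look roughly like this. The `root_grant` option name comes from the discussion; the helper itself is only a sketch, not reddwarf's actual Query/Grant classes.

```python
# Sketch: dbaas.py reads a privilege list from the guest conf, e.g.
#   root_grant = alter, create, delete, update
# and assembles the GRANT statement itself, instead of either
# hard-coding the SQL or storing the whole statement in the conf.
def build_root_grant(privileges, user="root", host="%"):
    if not privileges:
        raise ValueError("no root privileges configured")
    priv_list = ", ".join(p.strip().upper() for p in privileges)
    return "GRANT %s ON *.* TO '%s'@'%s';" % (priv_list, user, host)

stmt = build_root_grant(["alter", "create", "delete", "update"])
```

Keeping the privilege list in configuration matches the rule of thumb stated above: anything that differs from install to install belongs in a conf, not in code.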
22:58:58 yeah well we have history of when root was enabled
22:59:01 #action hub_cap to learn how to type
22:59:28 cp16net: right, that's what would be the determining thing for your suppor tmodel i'd imagine
22:59:36 cp16net: yeah the history thing is cool. we were pondering how disable would fit in
22:59:51 esp: temporary elevated privileges
22:59:55 how can we determine which users can and can't use which api functions? would that eventually just grow into ACLs on api calls?
22:59:58 yeah we need to talk about this internally to figure that out
23:00:08 datsun180b: we need to add additonal roles i think to API
23:00:35 enableRoot shoudl only be accessible with a higher privileged user i thnk
23:00:38 we need a policy like nova possibly...
23:00:40 that's a different discussion
23:00:41 user, superuser, and mgmt?
23:00:46 yep
23:00:55 that could be handled with keystone right?
23:01:00 but it can be whatever u want it to be... its configurable (nova policy)
23:01:02 exactly cp16net
23:01:03 maybe would could enable root with a timeout…but perhaps that just complicates things...
23:01:19 esp: naw, some people want root 200% of the tiem :)
23:01:29 thats x2
23:01:31 just an authentication that doesn't allow renewals
23:01:34 esp: I thought about that, but I don't like it...
23:01:38 there's your timeout
23:01:39 yeah makes sense
23:02:14 ok so we never changed topic to the dbaas.py/conf file, but we have consensus ya?
23:02:20 ill topic change and jot it down
23:02:24 oh yea, forgot that was a separate item
23:02:28 we're good on that one now
23:02:29 esp: I don't think that there would be a one size fits all timeout, so then we'd have to get into the business of configuring that, which could potentially get messy
23:02:34 ok lets just #info it then
23:02:35 throwing my hat in for that one
23:02:51 SlickNik: yep. I hear ya.
23:02:52 SlickNik: yeah that would be messy quickly
23:02:52 #info grants will go into reddwarf-guestagent.conf, not a separate file
23:02:57 #info guest conf for configurable sql queries until we have a better solution
23:02:59 doh u beat me to it
23:03:07 #topic open discussion
23:03:16 i dont have long, i have to run and clean my house (closing is thursday)
23:03:27 JOY
23:03:29 thats my open discussion :)
23:03:34 fun
23:03:38 hub_cap: congrats!
23:03:41 are you living in the bay-area now?
23:03:42 #action vipul to file BP on additional roles in reddwarf (user, superuser, admin)
23:04:10 FYI, it's worth a mention that the tests are successfully running on RAX cloud :)
23:04:20 woah nice
23:04:21 WOOO
23:04:28 Cool
23:04:35 vipul: is that to address the revoke api call? or something else?
23:04:36 steveleon: i will be flying out soon to look for a place, im in an apartment here
23:04:41 awesome
23:04:43 in austin
23:04:57 esp: it will be related to that... as well as limiting who can call 'enableRoot
23:04:59 vipul: i tink that we should not be specific on that
23:05:09 if we do a policy like nova does, it wont matter what _roles_ u want
23:05:21 if u can say a user of role X can execute things in module Y
23:05:28 which is what the nova policy file does
23:05:38 hub_cap: ok, will mention that in BP.. have a policy that dictates RBAC
23:05:57 #define RBAC?
23:06:08 role based access control :D
23:06:08 Role Based Access Control, I believe.
23:06:15 ok :)
23:06:29 vipul: yar
23:06:44 it _kinda_ does that currently
23:06:56 right, but it's limited to user/admin only?
23:07:05 its limited to whatever u want really
23:07:06 Currently just admin/non-admin, right?
23:07:09 as long as u configure it
23:07:19 https://github.com/openstack/nova/blob/master/etc/nova/policy.json
23:07:27 u can make yoru own groups
23:07:45 #info https://github.com/openstack/nova/blob/master/etc/nova/policy.json
23:07:47 just what i needed
23:07:55 its just a Yes/No system really, so u can make the groups yourself based on the expressions
23:07:56 nice
23:08:19 what applies this policy though?
23:08:22 where do you implement the rule
23:08:29 admin_or_owner example
23:08:30 "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
23:08:32 "context_is_admin": "role:admin",
23:08:42 those are at the top
23:08:45 oh duh
23:08:48 so u can say what the roles are :)
23:08:56 and then put them accordingly in teh stuf down below
23:09:05 :D
23:09:16 I see.
23:09:16 and it's just a filter that get's added to wsgi?
23:09:33 now that im not 101% sure about but that _seems_ like itd be the only place for it
23:09:38 i havent looked into how it works
23:09:48 ok, cool that's a good start thanks for the info
23:10:08 np!!
23:10:20 i think itll hold us over till keystone has decent rbac
23:10:40 god i wish thsi was all in nova-common
23:10:45 err, oslo-incubator
23:10:48 yeah that _could_ work temp
23:11:10 ok im out of things to discuss
23:11:14 anyone else?
23:11:15 yea temp is fine
23:11:19 nope.. i'm good
23:11:20 vipul: sweet
23:11:24 welcome back esp
23:11:25 :P
23:11:31 excess flood?
23:11:37 esp's water was rising...
23:11:42 :-P
23:11:46 sorry..
23:11:51 heh
23:11:54 his float bobber got too high and pulled the blug
23:11:56 *plug
23:12:01 #action hub_cap still cant type
23:12:08 man tough crowd today.
23:12:08 wait thats mroe info
23:12:13 LOL
23:12:16 ok im gonna call this
23:12:16 keep trying and surely one day....
23:12:24 cp16net: LAWL
23:12:26 #endmeeting
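The `admin_or_owner` rule quoted from nova's policy.json in the discussion above can be illustrated with a toy check. Nova's real policy engine parses these check strings generically, so this hard-coded version is only a sketch of the rule's semantics:

```python
# "admin_or_owner": "is_admin:True or project_id:%(project_id)s"
# A request passes if the caller is an admin OR the caller's project
# matches the project that owns the target resource.
def admin_or_owner(context, target):
    return bool(context.get("is_admin")) or \
        context.get("project_id") == target.get("project_id")

# The owner of the resource passes; a non-admin from another project fails.
owner_ok = admin_or_owner({"is_admin": False, "project_id": "t1"},
                          {"project_id": "t1"})
stranger_ok = admin_or_owner({"is_admin": False, "project_id": "t2"},
                             {"project_id": "t1"})
```

As noted in the meeting, this is just a yes/no system: defining a new role is a matter of adding a rule expression like these and referencing it from the per-API entries below it.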