18:02:52 #startmeeting keystone
18:02:53 Meeting started Tue May 31 18:02:52 2016 UTC and is due to finish in 60 minutes. The chair is jamielennox. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:55 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:57 The meeting name has been set to 'keystone'
18:03:07 o/
18:03:15 hey everybody
18:03:33 quick reminder as with last week that the spec proposal freeze is this week
18:03:51 hi jamesmcarthur
18:03:52 o/
18:03:53 oops
18:03:53 ooh, damn, i have one to get up
18:03:57 jamielennox,
18:04:03 hopefully there is nothing still outstanding that wants to get up this cycle
18:04:10 :)
18:04:36 week of new ideas: the new ideas growth is doubled this week :)
18:04:48 #link https://etherpad.openstack.org/p/keystone-weekly-meeting
18:05:05 #topic Multiple datacenters
18:05:16 agrebennikov, amakarov: all yours
18:05:21 o/
18:05:33 jamielennox, I guess we discussed it last week after the meeting
18:05:37 o/
18:05:37 I've created a spec for ayoung's patch (see etherpad ^)
18:05:40 o/
18:05:56 and ayoung's comment was "create a spec if you need it"
18:06:04 amakarov, I believe it is there
18:06:23 does anyone really have a problem with a cloud admin being able to create a project with a specific ID?
18:06:23 agrebennikov: https://review.openstack.org/323499
18:06:46 ACKing that we would need a microversion anyway?
18:06:49 ayoung: no, not as a break-glass scenario as long as it conforms (uuid)
18:06:57 ayoung: I believe there were no complaints in API v2
18:07:02 ayoung: microversion spec approved!
18:07:09 done done
18:07:11 ayoung: the only case i'm comfortable with is restoring a project so you can delete its resources
18:07:12 heyhey!
18:07:20 henrynash, we were pretty close to microversion compliant thus far anyway
18:07:30 jamielennox, what about syncing two sites?
18:07:32 personally i think that's a keystone-manage command but ok
18:07:34 jamielennox: that would be the use-case imo. but you kind of open the door to lots of things.
18:07:49 vanity IDs for one
18:07:54 ayoung: nope, there's a bunch of ways to do syncing that we shouldn't touch
18:07:58 shaleh: nope
18:07:59 jamielennox: and i am *ok* with the ability being added, not specific to where/how it makes it into the db.
18:08:01 in that case, why stop at projects, why not ALL IDs?
18:08:09 jamielennox: not saying I support them :-)
18:08:31 gyee: honestly, i think this is a case of backing away from removing the row on delete
18:08:33 so what are the objections?
18:08:33 are we limiting it to projects?
18:08:38 in general
18:08:47 gyee: exactly, currently projects is all that is required because they can LDAP backend the rest - but that's just the current pain point
18:08:49 ayoung: this is the current need
18:08:55 gyee: but....
18:08:57 not users, .... roles ... meh, the ids for roles should be the role names
18:09:00 ok hold on
18:09:00 uuids are dumb there
18:09:06 jamielennox, I worry about consistency
18:09:11 special cases = magic
18:09:14 "restore" is off the table as part of this discussion right now
18:09:15 not sure why you can't LDAP backend the projects but whatever
18:09:32 this is the API need amakarov is asking for
18:09:40 jamielennox, we killed that off
18:09:49 ayoung: ++
18:09:50 to have consistent ids across different deployments in keystone
18:09:51 so.
18:09:52 so let's review the conceptual objection to this… since we now know of multiple requirements (which various of us may or may not think important)
18:09:58 on that topic
18:10:02 what is the feeling?
18:10:22 this is clearly an API change.
18:10:24 what is the risk?
18:10:25 I don't have a problem with it, I understand the objective, but the argument seems pretty weak
18:10:28 I think the conceptual objection was to avoid collisions across clouds?
18:10:43 henrynash: no.
18:10:47 I go to a remote system, create a project with a deliberate UUID, and the thing already exists...
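The "break-glass scenario as long as it conforms (uuid)" constraint discussed above could be enforced with a small check like the following. This is a hypothetical sketch — the helper name and the unhyphenated-form rule are assumptions for illustration, not keystone's actual validation code:

```python
import uuid

def validate_supplied_project_id(project_id):
    """Accept an admin-supplied project ID only if it is a well-formed
    UUID in the canonical unhyphenated hex form keystone uses internally.

    Hypothetical helper illustrating the "conforms (uuid)" constraint;
    it rejects vanity IDs and hyphenated UUID strings alike.
    """
    try:
        parsed = uuid.UUID(project_id)
    except (ValueError, AttributeError, TypeError):
        return False
    # keystone stores IDs as 32-char hex strings without hyphens
    return parsed.hex == project_id

print(validate_supplied_project_id(uuid.uuid4().hex))      # True
print(validate_supplied_project_id("vanity-project-id"))   # False
```

A check like this addresses the "vanity IDs" worry raised just above while still allowing the break-glass case.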
18:11:01 that was probably deliberately done
18:11:08 henrynash: to have the same resources (projects, by id) in multiple deployments
18:11:11 cuz... uuid clash... not too likely
18:11:20 if a uuid collides
18:11:26 notmorgan: more i don't like the precedent of having different subsets of data present in different keystones
18:11:28 this allows for more than just making sure the same id exists in a single keystone - for that case wouldn't you just share the resource backend?
18:11:32 i'll buy the next round of drinks for the entire keystone team at the summit
18:11:40 if DB replication is not an option because 1) too expensive, 2) unreliable, and 3) error prone
18:11:44 the whole team. (everyone here)
18:11:45 then let's say it in the spec
18:11:54 notmorgan, like I said, it's probably deliberate
18:11:55 you could be bolder notmorgan
18:11:57 assuming that because you can create a project in another keystone that you can just use that keystone as if it was the original
18:11:58 jamielennox: ++ that worries me
18:11:59 it's not
18:12:04 everyone in -meeting maybe
18:12:11 notmorgan: that's the specific requirement here… we've discussed this on many occasions, but always felt uncomfortable… just trying to (re)flush out the objections
18:12:11 rodrigods: sure.
18:12:12 it doesn't have the roles or any other information
18:12:17 but that could be done today with V2
18:12:20 domains will suffer the same problem
18:12:42 and it can be done today with either a custom backend or a working database sync
18:12:55 jamielennox, I feel that your new role is the "I say no to things" person.
18:13:10 ayoung: someone has to do it
18:13:12 but i don't want keystone to figure out how to provide keystone sync-ing via API
18:13:47 because i'm still not sure why project is special from other things that would need to be transferred to effectively duplicate a keystone deployment
18:13:59 jamielennox, ++
18:14:02 jamielennox, why not. We really don't provide people with sufficient tools for K2K yet
18:14:06 so specifically on this topic, there has been a lot of asking for this feature. because when a cloud has a new deployment they want to have the customer in the same project. and they don't sync keystones.
18:14:22 i don't know if this is a usecase we need to solve?
18:14:34 didn't we have an action item from the last meeting to see what about our federation implementation was preventing this?
18:14:48 jamielennox, because users are the same in the backend, and roles are pretty static
18:15:17 jamielennox: and roles are referenced by name, not id, outside of keystone.
18:15:20 agrebennikov: "same backend"? why not put projects in this same backend that works?
18:15:23 lbragstad, right
18:15:41 jamielennox, I'd love to have them in ldap!
18:15:47 gyee do you know if/where that list of short-comings is?
18:16:00 notmorgan: i don't understand your use case
18:16:01 With K2K, we need some form of sync. And we need to map from local to remote. That could be done by naming, or could be done by ID
18:16:01 lbragstad, no, I was hoping to see them in the spec
18:16:03 agrebennikov: you can't assume that users are in LDAP
18:16:20 dstanek, why not?
18:16:30 so jamielennox's question is valid... how is project different from other data
18:16:42 projects in LDAP was based on an assumption that is no longer valid
18:16:43 agrebennikov: so do it :) we may not provide the backend upstream but there's nothing stopping you from doing what works for your case
18:16:47 ayoung: like it or not we still support other backends
18:16:56 if we do resources in LDAP, we need to redesign, and I would not recommend that
18:16:57 jamielennox: i'm just echoing what i've been asked for.
18:17:02 dstanek, that is not the same thing
18:17:09 dstanek, right, if they are not, the rest of the stuff doesn't make any sense, since it will not be the "clouds in sync" usecase
18:17:24 LDAP users and groups should not be tied to assignments in the LDAP backend
18:17:42 notmorgan: i mean i'm not understanding what's being asked for, 'customer in the same project' why does this involve preset uuids?
18:18:03 jamielennox: oh hold on. customer HAVING the same uuid (project) for ease of access
18:18:18 we go out of our way to be hostile to users... UUIDs are not pleasant things to work with
18:18:30 remember the whole Killer for dynamic policy?
18:18:37 ok so.
18:18:43 "Don't make us add an endpoint, get the uuid, and stick that in the config file"
18:19:08 i think this feature is asking for something that opens weird edge cases and doesn't solve them.
18:19:23 notmorgan, that was already open
18:19:25 basically there is nothing to prevent name collisions in domains if both are active.
18:19:34 for domains*
18:19:40 you have the same issues with domains.
18:19:43 ayoung: so although I wasn't thinking about this use case, the spec I'm working on might help here: https://review.openstack.org/#/c/318605/4
18:19:45 you have the same issues with user_ids.
18:20:00 henrynash, yep
18:20:10 notmorgan, I have the same issues with everything!
18:20:29 i really don't think we can reliably say this is a good plan - creating specific ID'd resources across multiple installations that don't share the authoritative data backend
18:20:55 ayoung: this would certainly allow project scoping without either domain ID or project ID
18:21:21 you need far more synchronization than just project ids... to the level of "not something keystone should figure specifically" imo
18:21:29 notmorgan that sounds like federation
18:21:35 notmorgan, you also need role assignments, etc
18:21:41 i think the spec i want to post for newton will actually address this rather elegantly -- what's the spec proposal deadline exactly, end of week?
18:21:44 you can do it without IDs. Just not via K2K
18:21:54 dolphm, yep
18:21:58 lbragstad: federation would be the logical goal to solve this imo.
18:21:59 so to pick up on ayoung's point, what if you didn't need to use project IDs to auth… would that solve this use case?
18:22:00 dolphm: yes
18:22:00 dolphm, what's the gist of it?
18:22:23 notmorgan, then we need to seriously look at cleaning up the mapping mechanism
18:22:25 notmorgan, but it is allowed for users (since they may be in the same backend), and all others must be different. Or federation. No other options
18:22:28 ayoung: leverage shadow users and make the mapping engine way more powerful
18:22:31 lbragstad: there is a place on Earth where people are killing each other for that word
18:22:50 notmorgan, lbragstad: federation is a difficult thing to set up here because you would need to map every project to every other project - and then assume things like horizon can make the jump
18:23:02 it's not a transparent operation
18:23:23 that's what i think we are missing. that sort of project mapping.
18:23:25 ayoung: user presents keystone with a saml doc, keystone creates a shadow user, maybe creates a project, maybe even creates a role, then does a role assignment... all according to the mapping engine output
18:23:39 we support project mapping today
18:23:42 dolphm: that sounds reasonable
18:23:54 at face value (i'd need to see more obviously)
18:23:54 i think talking about this as a sync is going down the wrong path
18:23:59 jamielennox: the straightforward way is to allow usage of wildcards and aliases in mappings
18:24:04 dolphm, so.. in order to make that manageable, we need to limit "this is what you can map TO"
18:24:05 dolphm so building on our current mapping engine
18:24:08 I'm not sure we want that )
18:24:16 and make it so IdP admins can manage their own mapping files
18:24:18 lbragstad: ++
18:24:27 ok so, going to call time box on this in 5 minutes.
18:24:30 amakarov: yea, we could improve the mapper to handle something like that properly, i just don't think it will today
18:24:36 ayoung: we need that anyway, imho
18:24:41 ayoung: i'd like to see stronger constraints around domains to manage that
18:24:41 henrynash, ++
18:24:58 idp to domain associations
18:25:04 ayoung: actually, i'll raise a spec for that anyway
18:25:13 ayoung: basically yes. so a domain admin is also free to manage a mapping
18:25:14 this idp can map to domains X, y, z but not p, d, or q
18:25:20 folks, don't you think federation setup will dramatically slow down the process of authorization in production?
18:25:22 generally speaking, as an API, I am against user-provided IDs.
18:25:40 agrebennikov: ++
18:25:49 agrebennikov, nope
18:25:52 ayoung, dolphm, henrynash: let's see what the federation spec looks like before we dive too far into it, but it sounds like an option
18:25:55 notmorgan: you don't love race conditions?
18:26:00 amakarov: are you already solving your usecase and just looking to upstream the idea? or have you not started to do it yet?
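The shadow-user flow dolphm sketches above (SAML assertion in; shadow user, auto-provisioned project, and role assignment out, all driven by the mapping engine output) could be mocked up roughly like this. Everything here — the InMemoryBackend, the ensure_* names, the 'member' default — is a hypothetical illustration of the idea, not keystone's real code:

```python
class InMemoryBackend:
    """Toy stand-in for keystone's identity/resource/assignment backends."""

    def __init__(self):
        self.users, self.projects, self.assignments = {}, {}, []

    def ensure_user(self, name, domain):
        # "shadow user": a local record representing the federated identity
        return self.users.setdefault((domain, name), {'name': name, 'domain': domain})

    def ensure_project(self, name, domain):
        # auto-provision the project the mapping asked for, if missing
        return self.projects.setdefault((domain, name), {'name': name, 'domain': domain})

    def ensure_assignment(self, user, project, role):
        entry = (user['name'], project['name'], role)
        if entry not in self.assignments:
            self.assignments.append(entry)


def handle_federated_auth(mapped, backend):
    """'mapped' is the mapping engine's output for one SAML assertion."""
    user = backend.ensure_user(mapped['user_name'], mapped['domain'])
    project = backend.ensure_project(mapped['project_name'], mapped['domain'])
    backend.ensure_assignment(user, project, mapped.get('role', 'member'))
    return user, project
```

The point is that the mapping engine's output, not a pre-agreed project ID, drives what exists locally: a project appears (with a locally generated ID) the first time the mapping produces it, which is why this approach still wouldn't give you a static project id output.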
18:26:06 dolphm: hehe
18:26:08 agrebennikov, look at the Federation via SSSD and mod_lookup_identity setup I did a few years ago
18:26:17 no slower than LDAP, and much cleaner
18:26:25 https://adam.younglogic.com/2014/05/keystone-federation-via-mod_lookup_identity/
18:26:33 amakarov: your use case on the mailing list is exactly what i have in mind
18:26:33 agrebennikov: it shouldn't
18:26:39 dstanek: I'm sticking to upstream wherever possible
18:26:58 ayoung, I mean if every time a user tries to authorize in one cloud it has to call out to the remote keystone
18:27:07 s/wherever possible//
18:27:10 amakarov: ok, was just going to see about any experiments you've done
18:27:19 dolphm: thanks )
18:27:24 agrebennikov: not authorize, just authenticate
18:27:25 ayoung: explain to me from before why you think you couldn't implement a custom LDAP resource backend?
18:27:40 dstanek, we are talking about projects, right?
18:27:46 jamielennox, OK, so when I first did LDAP, Identity and Assignment were in a single backend
18:27:49 amakarov: fwiw, this really sounds like db replication is the solution (or LDAP backend, or anything with replication)
18:27:54 agrebennikov: once you have a token for the cloud you want to operate on you won't talk to the other keystone
18:28:08 today, there would be no clean way to pull in users from federation etc via LDAP
18:28:23 I mean, you could put it in LDAP, but it would be Keystone-only data
18:28:32 dstanek, but when you bring it to the local keystone, does it have to verify the token against the remote one?
18:28:34 and not have the role assignments, as those really should be in SQL
18:28:36 notmorgan: ++ to db replication, this stemmed from having problems doing reliable sync but i think something behind the scenes is still correct here
18:28:37 notmorgan: and db replication sounds like overkill :)
18:28:56 ayoung: the problem is that you want to manage user-specific authorization before there are users, right?
18:29:00 amakarov, you can never have too much overkill
18:29:04 amakarov: it provides consistency and a shared authoritative data store
18:29:35 ayoung: but for whatever reason they are not having to duplicate the assignments backend, just resources, so you could put that in LDAP?
18:29:40 notmorgan, why do you always try to force a third-party service to do a job that could be avoided? :)
18:29:42 amakarov: allowing user-provided ids only solves a very small case and i am almost 100% sure you're going to need a lot more.
18:29:56 jamielennox, they would be back asking for assignments once they realized....
18:30:05 you need the whole kit and kaboodle
18:30:07 agrebennikov: because the NIH in openstack is strong.
18:30:11 ayoung: i'm pretty sure that's going to happen here anyway
18:30:14 and it should not be in LDAP
18:30:23 jamielennox, so... I like dolphm's approach
18:30:29 ayoung, please, bring it back to ldap ))
18:30:30 or SQL sync
18:30:36 agrebennikov, Nope.
18:30:42 agrebennikov, not the right tool
18:30:55 LDAP is general-purpose data, not app-specific
18:31:17 dolphm's approach being to allow the mapper to create projects on first access?
18:31:24 why is ldap replication any better than SQL replication?
18:31:25 jamielennox: ++
18:31:44 ayoung, because LDAP data is "highly static"
18:31:45 jamielennox, it also solves the long-standing "autoprovisioning" feature request
18:31:47 ok we have other topics to cover
18:31:47 dolphm: i like that - but it still wouldn't let you specify a static project id output right?
18:31:53 we should close up this one.
18:31:54 jamielennox: first authN, or on an authN when the mapping produces a new result (new attribute in SAML, or new mapping rules)
18:32:05 dolphm: would your approach deal with the project id issue so that users only have to know one?
18:32:09 we can circle back at the end.
18:32:09 notmorgan: ++ let me get this into a spec, and we can resume next week
18:32:14 yea, ok, allow for evolution of mapping
18:32:14 dstanek: yes, it could
18:32:16 notmorgan: what's the conclusion?
18:32:27 so, in wrapping up
18:32:31 let's see what dolphm proposes
18:32:36 amakarov: conclusion is i owe you a spec :P
18:33:12 i personally am against supporting this in the API, what do the other folks feel (temperature-wise) on the user-supplied ids?
18:33:23 * amakarov writing down in a very special notebook "claim a spec from dolphm"
18:33:32 #action dolphm to write up federation spec for next week.
18:33:38 i'm still -1 on the whole idea, i feel like you need only projects because of a very specific deployment setup
18:33:49 to do this properly would require access to a whole lot more
18:34:19 and i think the answer should be db replication, because you are actually trying to horizontally scale keystone
18:34:19 ayoung, henrynash, dstanek, shaleh, rodrigods, samueldmq?
18:34:25 lbragstad?
18:34:35 yeah, give me my custom project IDs! :P
18:34:48 i'm not a huge fan of user-defined IDs because it can cause weird edge cases
18:34:51 notmorgan, meh
18:34:57 lbragstad, ++
18:35:04 generally i'm -1 on this idea. it seems like it would open too many corner cases
18:35:05 last time it was OK as an admin-only action, no
18:35:06 notmorgan, I want them for other reasons, see no reason to fear them
18:35:07 i remain skeptical of user-defined IDs as well
18:35:07 ?
18:35:07 what is the reasoning behind avoiding DB replication? what's described on the agenda just sounds like a technical exercise rather than a use case
18:35:10 you can check they are unique
18:35:26 dolphm: ++
18:35:29 dolphm: ok let's get an answer to that, then close up the topic
18:35:33 but if we can do it with something like the mapping engine and solve the auto-provisioning case that would be cool
18:35:37 agrebennikov, amakarov ^
18:36:12 notmorgan: via the API, not so keen. As, say, a keystone-manage command so it was ensured for admins for special purposes. Maybe.
18:36:19 "Customer doesn't want to replicate databases between several geo-distributed datacenters" -- why not?
18:36:38 agrebennikov: ^
18:36:41 dolphm: a low volume of change, small-dataset db.
18:36:53 dolphm: keystone with fernet isn't crazy changing data.
18:37:05 * notmorgan narrows the case down a little
18:37:09 dolphm: last week it was said that they had a sync issue and they killed the DBs in multiple datacenters
18:37:11 notmorgan: I am with the other cores, don't like the idea of changing only the project API for a very specific case
18:37:19 what about revocations
18:37:33 samueldmq, another good point
18:37:33 ok, we do need to move along
18:37:36 those would need to be replicated.
18:37:41 ayoung: i know, hold up.
18:37:52 dstanek: ok. fair enough.
18:37:52 sounds like an issue that should be solved at the deployment level, like dolphm just mentioned
18:38:06 dstanek, that was exactly the issue they had
18:38:47 so i'm going to take it as a -1 from most cores to the general idea but we can continue to discuss it all after the meeting
18:38:52 jamielennox: ++
18:39:00 agrebennikov: is it possible that they were just doing it wrong or is this a problem with the sync software you use?
18:39:10 jamielennox: that was the result here.
18:39:10 #topic "Return request-id to caller" and other meta values
18:39:18 let's all resync after dolphm gets a spec up?
18:39:19 breton_: yours
18:39:27 as far as i remember, you guys decided to talk about the subject at the summit
18:39:35 and choose which approach to use for adding properties to responses returned by ksc
18:39:44 there were 2 possible ways:
18:39:57 1. Somehow add properties to default python types (everyone boo here)
18:40:14 breton_: euuuuwwww :(
18:40:15 2. Create a new class, "Response", and do things there
18:40:45 breton_: generally speaking, working with a response object (imo) sounds more correct than wedging data onto python primitives.
18:40:51 i am interested in this because i need to do something similar for horizon -- they want to consume the flag "truncated" returned for list operations
18:41:01 breton_: i didn't just boo. i threw my laptop in a fit of rage
18:41:19 and we decided that i'll do it the same way as "return request id to caller"
18:41:19 dstanek, I can only imagine that :D
18:41:29 https://review.openstack.org/#/c/261188/
18:41:50 https://review.openstack.org/#/c/261188/ is moving but too slow, and i finally want some consensus on that
18:41:50 ah, there it is, i couldn't find it
18:42:03 #link https://review.openstack.org/#/c/261188/
18:42:09 because i need to do my thing too
18:43:33 did you discuss it? Also, please add your thoughts to https://review.openstack.org/#/c/261188/
18:43:34 breton_: maybe you contact the author and pick that up?
18:43:45 amakarov: the author wrote to the mailing list today
18:43:57 breton_: isn't that review doing #1?
18:44:03 dstanek: it is
18:44:38 i'm not sure that request_id ever escapes keystoneclient doing it this way
18:44:45 but they can be review comments
18:45:19 jamielennox: ++ i said that early on. that casting or type manipulation may give you a new python primitive without the request id.
18:45:34 breton_: ok, so we need to get some eyes on that review - i think we need to look at a better response object, i'm not sure about wrapt-ing the whole thing
18:45:50 i think a response object is in order so that we can expose other metadata
18:46:20 dstanek: right, so that response gets turned into a manager-specific resource before returning to the user, so we should be able to figure out something there
18:46:32 breton_: ok for me to move on?
18:46:48 jamielennox: yes. Everyone, please comment on that review.
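For reference, option 2 above ("create a new class, Response") might look something like this minimal sketch — not the actual keystoneclient implementation under review, just an illustration of wrapping results in an object instead of patching python primitives:

```python
class Response:
    """Wrap a list/dict result from the client together with response
    metadata (request id, truncated flag, ...) instead of grafting
    attributes onto python primitives. Hypothetical sketch only."""

    def __init__(self, data, request_id=None, truncated=False):
        self.data = data
        self.request_id = request_id
        self.truncated = truncated

    # delegate iteration/indexing so callers can mostly treat it as the
    # underlying result, while metadata survives explicit access
    def __iter__(self):
        return iter(self.data)

    def __getitem__(self, key):
        return self.data[key]

    def __len__(self):
        return len(self.data)


projects = Response(['p1', 'p2'], request_id='req-123', truncated=True)
print(list(projects), projects.request_id)  # ['p1', 'p2'] req-123
```

Note the concern raised above still applies: the moment a caller does list(response) or otherwise casts the wrapper back to a primitive, the metadata is gone — which is exactly the "request_id never escapes keystoneclient" worry.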
18:46:56 #topic KeystoneAuth Release today
18:46:59 any chance this work could also assist with the cross-project goal of having a shared request_id so a request can be tracked through the layers of OS?
18:47:15 just an FYI, we're going to release ksa this week.
18:47:27 shaleh: sigh - that's a long discussion
18:47:35 it has new betamax support etc.
18:47:36 jamielennox: I know :-(
18:47:45 so https://review.openstack.org/#/c/321814/ is the only outstanding review i would want to see in this release
18:47:52 scream and yell if you're unhappy.
18:48:14 there are a couple of +2s but no +A as ayoung has offered to test it for us :)
18:48:24 ayoung: can you confirm/upgrade to +2/+A today?
18:48:32 ayoung: or tomorrow
18:48:33 jamielennox, was working on getting a setup running again
18:48:46 turns out I was running OSP 7... pre-Mitaka stuff
18:48:47 ayoung: yea, it broke for me too
18:49:08 love to get a gate going on this stuff
18:49:11 but I should be able to test that on my laptop talking to it
18:49:16 i'll hold the release until you've tested it/are ok (at least until wednesday)
18:49:22 will do
18:49:29 ayoung: thanks. :)
18:49:34 #action ayoung test and approve https://review.openstack.org/#/c/321814/
18:49:36 ayoung: you have 6 hours, run.....
18:49:54 #topic Service user permissions
18:49:57 s/wednesday/thursday
18:50:07 ayoung: thanks
18:50:11 #link https://review.openstack.org/#/c/317266/
18:50:42 so i prompted everyone to think about the consequences of this last week (week before?) and so far have no reviews
18:51:10 on the face of it this sounds like a bad idea
18:51:19 i'd really like everyone's impression of this because it's a big conceptual change that i would like people to be on board with or kill
18:51:38 jamielennox, I suggest OSS folks take a look at it as well
18:51:48 "just trust me" is never good security
18:52:01 gyee: good point - i need to ML it with [security]
18:52:35 yeah, disagree
18:52:54 we don't want the token validated, we want the "user still has this allowed" validation
18:53:16 ayoung: i disagree so much with that
18:53:28 two distinct things... a service user should not be tied to the token expiry, but they should only be able to do things a user has asked them... it's like S4U2Proxy
18:53:44 validating that at every single step is crazypants imo.
18:54:18 but then again i clearly am in the minority here.
18:54:24 something to consider, if we are sharing memcache for token validation results, it will have similar issues, security-wise
18:54:36 notmorgan: validating the authz is crazypants?
18:54:49 ayoung: that was my initial gut feeling here as well, still validate the credential headers for consistency at every step
18:55:15 but i'm having a hard time coming up with what that solves rather than just trusting everything from another service
18:55:16 dstanek: validating at every single stage in the chain is crazypants because we end up with the broken model of "oh we didn't know you can't do X 5 services deep"
18:55:16 notmorgan, if a user loses access to, say, a Cinder volume, that she might have had a month ago, the service user should not be able to override
18:55:34 I have two words, PKI tokens!
18:55:38 the service user should not be tied to token expiry, but the delegation should not be infinite, either
18:55:49 a service request should be good for, say, 24 hours
18:55:55 ayoung: and if you're coming back a month later, you shouldn't be using the same request :P
18:56:03 and a token for 5 minutes
18:56:08 notmorgan, exactly
18:56:42 yea, i don't think a month is reasonable here, there is a certain level of trust we would need to put on the services to forward the right thing, but we have that anyway
18:56:43 ayoung: within a specific scope/request, validating permissions (not general auth) at every step is crazy. once you know you're allowed to do X, let them do it.
18:56:57 ayoung: i think we're quibbling over some small details though, mostly we're on the same page.
18:57:09 jamielennox, so what is proposed here is that nova would be able to do anything anywhere
18:57:25 also unfortunately this doesn't fix the "enforce policy only at the edge" issue either, just token expiration
18:57:45 i'm generally +1 to this idea, but i have to admit i haven't read the spec yet
18:58:01 ayoung: it has that effect - however conversely, to make things work at all the nova service user typically has admin already
18:58:08 jamielennox: that is a big step forward
18:58:20 jamielennox: the expiry check is a huge win for openstack
18:58:22 notmorgan, we are not 100% on the same page. I do want to check up front "can the user do, right now, everything they are requesting" and return a 40X if they can't
18:58:24 there we both agree
18:58:29 jamielennox: so, generally +1.
18:58:31 but there are 2 things I want beyond that
18:58:43 1. that the service user does not have carte blanche to do anything anywhere
18:59:06 ayoung: that is a possible future thing
18:59:07 2. that the permission gets rechecked based on the delegation at the time of execution
18:59:09 ayoung: fwiw, they already pretty much do :P most service users *are* admins.
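The expiry behaviour being debated — don't tie the service's work to the user token's expiry, but don't make the delegation infinite either ("good for, say, 24 hours") — might be sketched like this. The function, the token-dict shape, and the grace window are hypothetical illustrations, not the spec's actual design:

```python
import time

# Hypothetical sketch of the relaxed-expiry rule under discussion: accept an
# expired user token only when a valid, unexpired service token accompanies
# it, and cap how stale the user token is allowed to be.
SERVICE_TOKEN_GRACE = 24 * 3600  # "good for, say, 24 hours"

def is_request_allowed(user_token, service_token=None, now=None):
    now = now or time.time()
    if now < user_token['expires_at']:
        return True  # user token still valid on its own
    if service_token is None or now >= service_token['expires_at']:
        return False  # expired user token and no trustworthy service token
    # allow a bounded window past user-token expiry, not infinite delegation
    return now - user_token['expires_at'] < SERVICE_TOKEN_GRACE
```

Under this scheme a 5-minute user token plus a trusted service token lets a long-running operation finish, while the "coming back a month later" case above is still rejected.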
18:59:13 I care far more about 1 than 2
18:59:20 notmorgan, I know
18:59:33 notmorgan, I'm the crazy person that came up with trusts.... remember?
18:59:44 ok - we're out of time again
18:59:51 o/
18:59:54 i'll send a ML this week
18:59:58 jamielennox: ++
19:00:02 lbragstad: ?
19:00:05 that is the next logical step.
19:00:11 quick
19:00:30 please review
19:00:36 #endmeeting