18:00:57 #startmeeting keystone
18:00:57 Meeting started Tue May 17 18:00:57 2016 UTC and is due to finish in 60 minutes. The chair is stevemar. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:58 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:01 The meeting name has been set to 'keystone'
18:01:05 Woo!
18:01:07 courtesy ping for ajayaa, amakarov, ayoung, breton, browne, crinkle, claudiub, davechen, david8hu, dolphm, dstanek, edmondsw, gyee, henrynash, hogepodge, htruta, jamielennox, joesavak, jorge_munoz, knikolla, lbragstad, lhcheng, marekd, MaxPC, morganfainberg, nkinder, raildo, rodrigods, rderose, roxanaghe, samleon, samueldmq, shaleh, stevemar, tjcocozz, tsymanczyk, topol, vivekd, wanghong, xek
18:01:14 o/
18:01:17 Ayuh
18:01:18 o/
18:01:21 jaugustine: that's the kind of excitement we need to see!
18:01:23 o/
18:01:25 _o/
18:01:26 o/
18:01:40 o/
18:01:42 \o
18:02:09 critical mass achieved, let's start
18:02:15 #topci release update
18:02:18 #topic release update
18:02:30 #link http://releases.openstack.org/newton/schedule.html
18:02:49 we are currently in week R20; it counts down to the end of the release, so 20 weeks left
18:03:11 CODE FREEZE!
18:03:14 sorry
18:03:18 we should be cleaning up any old issues from mitaka and preparing for newton-1
18:03:19 hahaha
18:03:26 nowhere near that time yet :)
18:03:45 but the next deadline is R18, in 2 weeks
18:03:51 there are 2 things happening that week
18:03:54 M1?
18:03:56 newton-1 milestone driver
18:03:57 yep
18:03:58 o/
18:04:01 Er.. N1?
18:04:11 did the M mean Milestone or Mitaka?
18:04:12 and that's the spec *proposal* freeze week
18:04:13 Mitaka
18:04:31 o/
18:04:35 ayoung: it's N1 this time around, the letter changes
18:04:47 Noice
18:04:51 so in 2 weeks: newton-1 driver, and spec *proposal* freeze
18:05:08 the latter meaning, if you want a spec for newton, propose it soon!
18:05:23 it doesn't have to merge in 2 weeks, just be proposed and in a reviewable state
18:05:53 any questions about dates?
18:06:03 when is code freeze
18:06:16 dolphm: all dates are here: http://releases.openstack.org/newton/schedule.html
18:06:43 dolphm: keystone is following the general feature freeze, which is R5
18:06:48 in 15 weeks
18:07:34 dolphm: good?
18:07:48 * topol just rsvp'd for the midcycle.. don't forget
18:07:50 ++
18:07:55 if anyone has questions, bug me in -keystone
18:08:04 topol: on that note...
18:08:06 #topic midcycle update
18:08:12 notmorgan: ^
18:08:16 anyone want to go rock climbing in Yosemite either before or after the midcycle?
18:08:19 #link https://docs.google.com/forms/d/1To2qx90Am4hcYdgaqTkRosfNOzxr26M0GiahhiWgAJU/viewform?c=0&w=1
18:08:26 RSVP Form ^
18:08:34 notmorgan is there a recommended hotel
18:08:36 Sign up. we have 35 slots.
18:08:55 topol: no, please feel free to update the wiki/etherpad if you have recommendations...
18:09:00 topol: sec, getting those links too
18:09:19 hi all
18:09:19 #link https://wiki.openstack.org/wiki/Sprints/KeystoneNewtonSprint
18:09:29 #link https://etherpad.openstack.org/p/keystone-newton-midcycle
18:09:51 the RSVP form will be used as the authoritative source of "who is coming" for planning purposes
18:10:22 is this the official rsvp form:
18:10:24 #link https://docs.google.com/forms/d/1To2qx90Am4hcYdgaqTkRosfNOzxr26M0GiahhiWgAJU/viewform?c=0&w=1
18:10:25 ?
18:10:38 rderose: yes.
18:10:39 rderose: yes
18:10:45 cool
18:10:58 done
18:11:03 notmorgan: we need a slot countdown
18:11:20 35?
18:11:22 please take a second and thank Cisco for hosting us *and* cburgess for helping set that up
18:11:29 dolphm: 35 spots.
18:11:31 ayoung: one way of reducing the attendee count….
18:11:31 cburgess thanks!
18:11:44 dstanek: oh, you mean visibility into remaining number of slots?
18:11:47 yea, also a hotel recommendation would be useful
18:12:00 dolphm: yes, exactly
18:12:03 there are ** 23 ** slots remaining for the midcycle
18:12:17 since i know next to nothing about the San Jose area... [sorry], I'm going to ask folks who are more familiar for hotel recommendations
18:12:36 Stay in San Francisco and take Caltrain?
18:12:47 ayoung: long ride, but doable.
18:13:08 there's a couple hotels within a decent walking distance, but we'll probably need a few cars anyway, given it's san jose
18:13:15 https://goo.gl/3gS8GQ
18:13:15 dolphm: yes.
18:13:18 you can take VTA
18:13:20 nearby hotels ^
18:13:32 I think VTA covers that area
18:13:37 Earwigging, here, but I've done that ride and unless you like hours on the train it's not good
18:13:38 thanks lbragstad
18:14:02 Caltrain is like an hour away from SF
18:14:04 lbragstad: please add that to the wiki.
18:14:14 notmorgan ok
18:14:42 You can Caltrain to and VTA from Mountain View - it's not impossible - but that's another half an hour or so depending on where specifically you'll be
18:14:45 Levi's Stadium is nearby so there are a bunch of hotels there
18:14:47 ijw, I'm doing Caltrain every day - it's not that bad :)
18:14:50 Hyatts are usually nice, and nearby there
18:15:08 Aloft?
18:15:15 i think staying close would be much easier
18:15:15 Navy Lodge Moffett if you are DoD
18:15:25 i added the wiki link to the RSVP form
18:15:29 There's a Hyatt House on First that comes recommended, the Crowne Plaza, and the Hilton Garden Inn and Larkspur Landing, all in the reasonable vicinity
18:15:33 ayoung: The hangar at Moffett if you are NASA
18:15:49 ayoung: doesn't google also rent that out now? :P
18:15:52 ok anyway
18:15:54 that's the info
18:16:05 henrynash, there is an Armory floor not too far away if you are Army National Guard. I used to drill out of there.
18:16:23 I believe I stayed at the Santa Clara Marriott. It's nice
18:16:34 thanks ijw and notmorgan
18:16:41 topol: used to be next to Great America….
18:16:55 gyee, dibs on your couch.
18:16:58 Casino M8trix :-)
18:16:59 anyone have an issue with making the results of the RSVP public?
18:17:00 (that was not a Trump reference)
18:17:04 i can add it to the wiki as well.
18:17:12 notmorgan: fine by me
18:17:13 notmorgan: nope
18:17:16 will do
18:17:28 dstanek ++
18:19:11 bug notmorgan if folks have questions :P
18:19:17 #link https://docs.google.com/spreadsheets/d/1qTupqEyYwXnNnO-sW0kRhh-I9hvpPHA7QAuewXnw6AA/edit?usp=sharing
18:19:17 (fine fine, i'll be around too)
18:19:40 on to more contentious topics!
18:19:43 #topic Specify ID for Project or domain creation
18:19:53 amakarov ^
18:19:58 agrebennikov is missing :(
18:20:06 Thanks
18:20:33 The question is: why do we unconditionally overwrite the project ID on creation?
18:20:51 Is there some reason behind it or is it a bug?
18:21:09 amakarov as in you can't create a project with a project ID that you pass in?
18:21:17 I think that's what we do everywhere else?
18:21:30 change question - why should you ever specify an id up front?
18:21:36 jamielennox: ++
18:21:39 jamielennox: ++
18:21:46 I think the argument was IDs are supposed to be globally unique
18:21:48 ++
18:21:59 jamielennox: the actual case: multi-DC env
18:22:01 gyee: yeah, and we want to control that
18:22:01 i don't like that we can do that in some places
18:22:04 gyee: good point
18:22:10 amakarov: to guarantee that project IDs are globally unique and immutable, we don't accept user input for them. project names are mutable, not globally unique, and user-defined
18:22:22 so, i dislike specifying it - if we are doing this it can be ADMIN only
18:22:33 if we create projects and roles with the same ID - the tokens are valid across DCs
18:22:35 and it still must conform to an automatic format such as uuid
18:22:54 to play the devil's advocate, we've already allowed "shadow" users, why not shadow other things?
18:23:04 so there is one other reason this is important
18:23:06 if we all understand the consequences on federation and whatnot, i'm open to breaking that convention
18:23:07 gyee: shadow users don't let you pick the user's id
18:23:16 but shadow users don't allow a user to specify the id of their user
18:23:19 notmorgan: so no big advantage besides knowing it ahead of time (and it still can fail at creation time!)
18:23:19 in some cases you need to restore a project after deletion
18:23:20 notmorgan: we're still ok with that in v2
18:23:23 jamielennox, at least to agree on a global ID
18:23:38 amakarov: the default domain is special, and i always treat it as such
18:23:40 gyee: no, it creates an id for you as does project
18:23:46 amakarov what's the use case for specifying an ID for a project?
18:23:51 right, with a specific algorithm
18:23:55 lbragstad: ^ 2 reasons
18:23:57 lbragstad: ++
18:24:00 gyee: no, uuid
18:24:01 lbragstad: 1) restore a deleted project
18:24:01 this is going to be like Nova hooks isn't it, where someone brings up an old feature and the core immediately deprecates it?
18:24:04 and maintain the id.
18:24:16 well, except V2 is already deprecated
18:24:20 2) same id across deployments (keystones)
18:24:38 i would not make it a "normal" action anyone can do.
18:24:43 in either case.
18:24:45 lbragstad: if you need to authZ in different clouds with a single token
18:24:45 in order for 2 to work you'd have to have the identity backend shared anyway
18:24:45 notmorgan ++
18:24:50 ayoung: v2 is deprecated
18:24:55 notmorgan: if you did then it may be a security issue
18:25:04 raildo, doesn't change the fact that Nova core hates me.
18:25:07 I think we have good use cases to make this happen
18:25:12 dstanek: exactly, it, if anything, is an Admin-only thing
18:25:16 ayoung: I know :(
18:25:18 I have an abandoned spec
18:25:18 notmorgan: why do you need the same id across deployments?
18:25:23 lbragstad: ++
18:25:30 notmorgan: cloud admin?
18:25:31 jamielennox, I see, but they are still using their native ID for auth, no?
18:25:36 rderose: ^^ my answer to lbragstad
18:25:39 #link https://review.openstack.org/#/c/203852/
18:25:40 rderose: a lot of deployers want the same project id in multiple AZs that are managed by different keystones
18:25:52 rderose: not my choice/reasoning, just echoing the reasoning
18:26:03 notmorgan: ++
18:26:04 rderose: the "restore a project" reason is why i support it
18:26:05 i wouldn't even say this is an admin only thing; if your use case is restoring a deleted project then at best it's a keystone-manage feature
18:26:07 amakarov: can't you do that in a more federated approach?
18:26:13 rderose: and that is a break-glass scenario
18:26:25 but it should be doable via the API, not DB manipulation
18:26:32 dstanek: no, as federation is currently in demo state :)
18:26:44 demo state!?
18:26:48 dstanek: i'd make it a cloud-admin thing for sure.
18:26:49 amakarov: ?
18:26:57 dstanek: if i was implementing it. not a domain admin thing
18:27:04 vvtletuvnjejfltljiigeubducbbkiitevfbdhkgnedg
18:27:05 notmorgan: via API ++
18:27:07 notmorgan: but if you create a project, get the id, then you can do the restore use case? right? why would you need to specify the project id in that case?
18:27:13 notmorgan: lol
18:27:15 notmorgan, Yubikey failure?
18:27:17 gyee, dstanek: it lacks, for example, group shadowing
18:27:20 ayoung: yes
18:27:34 * ayoung imagines Q-Bert
18:27:40 but please don't say it's demo state, people worked hard on it
18:27:53 and still do
18:27:59 amakarov: i think it would be better to make federation fit this, rather than do this one-off thing
18:28:08 dstanek ++
18:28:13 `make federation great again`
18:28:16 unfortunately agrebennikov is away, but he explained it very clearly at the design session
18:28:17 ayoung: yes. i need to fix that
18:28:22 stevemar --
18:28:28 ayoung: i keep bumping the OTP button.
18:28:34 when i move the laptop
18:28:37 Heh
18:28:39 agreed, i can see the break-glass case of restoring a project via -manage, but for cross-DC you are needing to do some sort of federation
18:29:01 notmorgan, 3 more of those and I think I'll be able to crack the seed.
18:29:19 amakarov: can you get together a list of things the current federation implementation is missing?
18:29:33 yeah - each keystone across DCs is going to require access to the same identity backend (which naturally fits the federation story)
18:30:00 lbragstad: agree, but is there a big demand for this?
18:30:11 I mean, do we want to give this a priority now?
18:30:12 dstanek: I agree about that approach in theory, though we can solve many problems easily and I don't see why we can't do that right away
18:30:13 ayoung: good thing i don't use that OTP for anything
18:30:20 What will it break?
18:30:45 amakarov: the wrong solution right away isn't necessarily good
18:30:52 i think it's more of losing control
18:30:55 dstanek: why is it wrong?
18:31:14 amakarov: i dislike specifying ids
18:31:21 ++
18:31:38 i would be -2 if users could do it, but i'm still -1 for cloud admins
18:31:52 i like jamielennox's idea of a manage command if that is actually needed
18:32:16 dstanek: sure.
18:32:29 dstanek: ok, what is the suggested way to create/use projects based on LDAP groups?
18:32:29 dstanek, it's an RBAC thing regardless
18:32:34 what guarantees keystone will accept the ID the user passes?
18:32:34 dstanek: a break-glass method that isn't "edit the DB" is what i look for in solving the "restore a project/domain" case
18:32:38 dstanek: i'd support that.
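[Editor's note: a minimal sketch of the create-time behavior the discussion above is questioning, under the assumption that Keystone-style resource creation generates its own identifier. All names here (`create_project`, `_projects`, `create_project_with_id`) are illustrative, not Keystone's actual driver API.]

```python
import uuid

_projects = {}  # stand-in for the resource backend


def create_project(project_ref):
    """Current behavior being questioned: any caller-supplied 'id' is
    unconditionally overwritten with a server-generated UUID, which is
    how keystone keeps project IDs globally unique and immutable."""
    ref = dict(project_ref)
    ref['id'] = uuid.uuid4().hex
    _projects[ref['id']] = ref
    return ref


def create_project_with_id(project_ref, project_id):
    """Hypothetical admin-only / break-glass variant matching amakarov's
    two use cases: restore a deleted project with its old ID, or mirror
    a project across independently managed keystones. As noted above,
    it can still fail at creation time if the ID already exists."""
    if project_id in _projects:
        raise ValueError('project ID %s already exists' % project_id)
    ref = dict(project_ref, id=project_id)
    _projects[project_id] = ref
    return ref
```

[The objections in the meeting map onto the second function: once callers can choose IDs, uniqueness and immutability become promises the deployer has to keep, which is why the room leans toward restricting it to a cloud admin or a keystone-manage command.]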
18:32:55 dstanek, jamielennox: ++ agree on the keystone-manage cmd
18:33:00 Nope
18:33:05 that is the wrong security model
18:33:12 that means people who have access to the machine
18:33:23 you don't want to push people to do that in the course of managing the application
18:33:36 rderose i think we'd need to see a list of shortcomings in the federation implementation in order to prioritize
18:33:48 ayoung: if we're restricting the operation to the cloud-admin then they have access to the machine
18:33:51 lbragstad: yep
18:33:51 lbragstad: good point
18:34:08 if you need to do this operation often enough that logging onto the machine is a problem, you are doing management wrong
18:34:19 jamielennox, we are doing management wrong. It's called Keystone
18:34:33 lmao
18:34:33 we delete a project and all of the resources out there are orphaned
18:34:38 I think we need a well-described use case, and how we would support it via federation (what's missing, as lbragstad said)
18:34:42 ayoung: ++
18:34:57 we emit notifications on resource changes
18:35:06 ayoung: so we need to have a look at keystone's delete project?
18:35:12 deprecate in favour of disable?
18:35:15 more to say, looks like nobody even dares to touch this topic publicly :)
18:35:17 lbragstad, and we have no standard workflow engine so that is not sufficient
18:35:20 soft-delete?
18:35:26 jamielennox: no soft deletes.
18:35:34 jamielennox, gyee has been saying that for years
18:35:35 jamielennox: i'm ok with "never remove from the db"
18:35:37 but a full-on restore of the db entry is also a bad idea
18:35:59 resource life-cycle management
18:36:03 jamielennox: soft-delete implies "could be hard-deleted"
18:36:04 I'm just afraid it is closing the barn door after the horse is long gone
18:36:07 notmorgan: meh, we have an enabled flag on projects, it's essentially a soft delete
18:36:08 that's production operation stuff
18:36:10 i think we're mixing things up here, amakarov doesn't necessarily care about the 'restore a project if deleted' case
18:36:15 something we don't use often enough :-)
18:36:16 i'm 100% ok with never removing it as a delete instead.
18:36:23 we should do event sourcing
18:36:24 stevemar: yes
18:36:33 anyway, keep it as a restricted operation. We can make it more widely exposed later so long as it is RBAC managed
18:36:43 stevemar: ++
18:36:44 ok let's set "restore" to the side
18:36:45 so let's put the whole manage command and soft delete discussion to the side
18:36:52 i'll propose something separate for that
18:37:11 i have an idea and i think it'll work just fine.
18:37:14 You guys are missing the fact that notifications are currently a disaster, so we can't notify untrusted services
18:37:43 ayoung: let's fix it, but it's still part of the other conversation :)
18:37:49 well, let me put it this way: how can I provide a customer the UX where they auth in 1 cloud (in horizon) and then work with resources from the other cloud without re-login?
18:37:51 ayoung: ++ we have a huge problem with nova quota when we delete a project =/
18:37:51 and we don't even know all of the services that would need the notification
18:37:52 amakarov: to do that via federation, what would we need?
18:38:03 amakarov wants to specify an ID only for create, and he is using this in a multi-DC environment. if federation is stinky for this, we should fix it.
18:38:04 amakarov: does that mean 'federated resources'?
18:38:10 considering existing LDAP
18:38:11 amakarov, that sounds like K2K
18:38:24 But...
18:38:36 that is the whole pre-sync between two clouds
18:38:41 amakarov: "other cloud" - same owner, different deployment (like AZ)?
18:38:42 amakarov: one horizon for all DCs right?
18:38:43 which is a good midcycle topic
18:38:50 dstanek: ++
18:38:51 amakarov: or totally different cloud, totally different deployer?
18:38:57 let's have a "larger workflows" track at the midcycle?
18:39:03 notmorgan good question
18:39:16 lbragstad: because the answer to that dictates my view.
18:39:38 amakarov: it sounds like the Mercador idea... https://wiki.openstack.org/wiki/Mercador
18:39:40 notmorgan: wouldn't solving for different owners essentially solve for both?
18:39:42 notmorgan: no. clouds may be thought about as "replicated"
18:39:54 amakarov: so "same deployer"
18:40:02 dstanek: not really but hold on.
18:40:15 notmorgan: same deployer
18:40:31 amakarov: cool. then i think it's not unreasonable
18:40:56 why do 2 clouds from the same deployer wanting to share projects between them not use the same keystone?
18:40:59 we have to move on, there are other topics on the agenda
18:41:01 dstanek: different deployers/owners is the same reason we assume keystone for the local cloud is authoritative.
18:41:10 jamielennox: ++
18:41:12 i'll give this another minute to wrap up
18:41:19 jamielennox: my point was they could also solve this via replicating keystone
18:41:32 jamielennox: it may be a big project distributed geographically
18:41:34 jamielennox: which is where i was going, and why it was "reasonable" to want that kind of replication
18:41:41 amakarov: keystone will support that
18:41:43 not that the API-driven one was the right answer. i'm abstaining there
18:41:45 jamielennox same keystone or just point to the same backend (which is essentially the same thing?)
18:41:46 replicating gets expensive when you have more than a few DCs
18:41:48 jamielennox: looks like we're trying to fix something that should be fixed with the right deployment choice and/or federation
18:42:08 amakarov: write up a detailed use case so we can discuss further
18:42:14 you can horizontally scale keystone and you can put endpoints in different regions
18:42:20 fernet was a huge win for that
18:42:47 dstanek: ++ amakarov please do what dstanek said ^
18:42:51 jamielennox: right, if the project ids are the same
18:42:52 fwiw - we did testing of that across three globally distributed datacenters
18:42:59 samueldmq, dstanek: ack
18:43:02 will do
18:43:08 amakarov: if they are the same keystone, the project ids are unique within it
18:43:13 stevemar: let's get this show on the road!
18:43:39 dstanek: alright!
18:43:45 switching gears
18:43:48 #topic Service Token provides user permissions spec
18:43:51 jamielennox: ^
18:44:12 (i haven't read it yet)
18:44:15 ok, so we discussed this spec in Tokyo and then i never got around to doing it in the last cycle
18:44:15 jamielennox: different keystones
18:44:17 #link https://review.openstack.org/#/c/317266/1/specs/keystone/newton/service-user-permissions.rst
18:45:06 the intent is that we should validate users only on the first service they hit in openstack; from then on, in service-to-service communication we should trust the user permissions provided by the last service
18:45:07 jamielennox: so a token only gets validated once per openstack workflow?
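[Editor's note: a rough sketch of the calling side of the flow jamielennox describes above. X-Service-Token comes from the discussion; the forwarded user-context header names and the `call_downstream` helper are assumptions for illustration, not the spec's final wire format.]

```python
import requests


def call_downstream(url, user_ctx, service_token):
    """Forward a user's request from one service to another.

    The user's token was validated once, at the first service they hit;
    downstream, the calling service asserts the resulting user context
    alongside its own service token, so mid-operation expiry of the
    user's token no longer breaks a long-running workflow.
    """
    headers = {
        # Proves the caller is a trusted service; validated by the receiver.
        'X-Service-Token': service_token,
        # User context asserted by the caller rather than re-validated.
        'X-User-Id': user_ctx['user_id'],
        'X-Project-Id': user_ctx['project_id'],
        'X-Roles': ','.join(user_ctx['roles']),
    }
    return requests.post(url, headers=headers)
```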
18:45:20 nice
18:45:25 samueldmq: yep
18:45:42 samueldmq: if you pass an X-Service-Token then the service token is validated to say that we should be able to trust this information
18:45:48 notmorgan: looks like what we discussed a couple of months ago .. that could be implemented as a ksa plugin etc
18:45:49 this is something i've been advocating for a bit.
18:45:58 so... what if we do: X-Service-Token + user_id + project_id + role_id
18:46:00 so this is a fairly large change in the way we handle tokens in openstack
18:46:00 samueldmq: same thing, slightly different mechanism
18:46:12 but it solves the mid-operation token expiry which has plagued us forever
18:46:16 but the service token still needs to be validated, right?
18:46:23 so no gain in terms of performance?
18:46:25 so the remote side can still confirm that the resource is within the permissions boundary
18:46:26 samueldmq: yes, but that isn't long term
18:46:30 jamielennox, it will be an explicitly opt-in feature right?
18:46:31 and we don't have an elevation of privs
18:46:36 the spec is not completely polished up and given the conceptual size of the change i just want to make sure people are on board first
18:46:37 jamielennox: is this an outcome of the nova/keystone design session, right?
18:46:40 samueldmq: actually one less validation - you don't need to validate twice
18:46:58 pass along the list of roles in the token with which the user was originally validated
18:47:03 ayoung: i still think that we're wrong on that front, but we can expand the discussion *after*
18:47:19 ayoung: it's a different topic that builds on jamielennox's proposal
18:47:25 raildo: well i'm pushing it now because it all came up again at summit and this was our solution that never got proposed
18:47:32 samueldmq: if we figure something out to trust some services - we don't have to validate tokens EVERY time
18:47:49 notmorgan, just the role list?
18:47:50 jamielennox: ++ nice
18:47:52 so there are a bunch of security questions about how we trust those headers
18:47:59 notmorgan: long-term is use certs for that and don't validate the service token at all? (and the user token only once at the beginning of the workflow)
18:48:08 ayoung: like i said, something to discuss later - once we land this i don't want to derail
18:48:19 samueldmq: that will also be an option
18:48:27 but service tokens are easy to refresh
18:48:28 service token == user with service role _ is_admin_project
18:48:28 samueldmq: certs would be cool
18:48:36 er, that _ should be +
18:48:39 ayoung: or whatever the definition is.
18:48:42 amakarov: yes, that could be done using certificates, for example; but that is a separate conversation, maybe long term :)
18:48:44 ayoung: that is an implementation detail
18:48:58 yea, don't need is_admin_project, however you define service is fine
18:49:16 can be whatever KSM is configured for
18:49:34 so there's a question in there about whether we should trust all the attributes passed as headers or whether we should take only the core user_id, project_id, etc. and reconstruct it like we do fernet
18:49:42 notmorgan: jamielennox: nice, so this definitely opens the door for other long-term improvements (like that using certs)
18:49:46 works for me
18:49:49 that i'd like people to have a look at as well
18:49:52 jamielennox: i think nova will no longer hate us if we get this in
18:50:09 stevemar: yep, and all those stupid trust hacks that glance did recently can go away
18:50:16 turn hate into love
18:50:16 So the idea is that we pre-validate at the boundaries, and then trusted services can talk to each other. This is fine for, say, Nova to Glance, but I don't want it for *aaS like Sahara and Trove
18:50:16 glance as an example
18:50:22 jamielennox: yep
18:50:29 gyee: ++
18:50:47 ayoung: right, these are the concerns that i want out on the spec
18:50:48 Yeah... this is not going to scale
18:50:52 jamielennox: is it any worse/better in terms of security to trust all the headers?
18:50:58 this is why I want unified delegation
18:51:18 so i don't expect everyone to have opinions on it now, but i'll put it on the agenda for next week as well and i'd like to hash it out on the spec until then
18:51:25 but this is the right approach
18:51:42 jamielennox: i don't anticipate much pushback
18:51:46 samueldmq: it's an open question
18:51:53 we distinguish between "this is a long-running operation, but treat it all as a unit" versus "do something for me later"
18:52:00 stevemar: this is a giant security model change - it needs to be pushed back on
18:52:11 this has the potential to be a disaster.
18:52:16 ayoung: ++
18:52:28 ayoung: but there's no other reasonable way i can see to solve it
18:52:29 ++
18:53:02 well, currently things just time out, or token expirations are set stupidly high, so... this is an improvement to me
18:53:07 i expect to have to expand the security implications bit to an essay
18:53:11 it would be cool if policies were configured per openstack workflow (create instance and all its involved API calls) rather than separate API operations
18:53:19 samueldmq, ++
18:53:28 and to do that we need to know what perms you need for the workflow
18:53:34 that means, reflect this work in the policy bits
18:53:46 yep - and if we're in the process of designing openstack 2.0 then i have other ideas as well
18:54:00 which leads to dynamic policy and all the heresy I've been spewing these past few years
18:54:03 cool, I like it
18:54:20 jamielennox, what level of confidence do we have that this does not open some big unforeseen security hole?
18:54:36 how do we vet
18:54:39 ?
18:54:42 ayoung: we can do that btw ))
18:54:47 topol, 100% that it does.
18:54:56 topol: it's certainly risky, it escalates the service user from something that is not particularly interesting to something that can emulate everyone
18:54:58 topol: it is not a lot worse than we have now.
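[Editor's note: the receiving side's trust decision, as debated above, in sketch form. `validate_token` and the 'service' role check stand in for whatever keystonemiddleware would actually be configured to do; this is an illustration of the proposal, not its final design.]

```python
class Unauthorized(Exception):
    pass


def authenticate_request(headers, validate_token):
    """Decide which identity a request carries under the proposed model."""
    service_token = headers.get('X-Service-Token')
    if service_token:
        svc = validate_token(service_token)  # the service token is still validated
        if 'service' in svc.get('roles', []):
            # Trust the user attributes asserted by the calling service.
            # Open question from the meeting: trust all forwarded headers,
            # or take only user_id/project_id and reconstruct the rest,
            # fernet-style?
            return {
                'user_id': headers['X-User-Id'],
                'project_id': headers['X-Project-Id'],
                'roles': [r for r in headers.get('X-Roles', '').split(',') if r],
            }
    # No trusted service token: validate the user's own token, as happens
    # today at the boundary of the deployment.
    user_token = headers.get('X-Auth-Token')
    if user_token is None:
        raise Unauthorized('no token provided')
    return validate_token(user_token)
```

[This branch is also where the risk ayoung and gyee raise lives: a compromised service user can assert any user context it likes, which is what makes service users a much juicier target, as the discussion continues below.]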
18:55:08 topol, because we are not going to be able to limit this to services in the service catalog
18:55:10 nova tells me that there is no service user on the compute nodes
18:55:20 because they don't even know their own identity
18:55:23 but it makes the service users a much juicier target
18:55:34 if services are sharing the same memcached, any service could potentially modify the token validation results
18:55:39 just saying :-)
18:56:10 Why did we split OpenStack into microservices again?
18:56:22 ayoung: ++
18:57:12 how does a service know its creds to auth against keystone?
18:57:13 henrynash: i'm assuming 3 minutes isn't enough time for you?
18:57:17 anyway - not much time left, but these questions are why i would like to debate it a lot on the spec
18:57:21 probably not
18:57:24 notmorgan, want to float "signed requests" again?
18:57:36 samueldmq: same as now, they all have it
18:57:42 oauth
18:57:47 samueldmq, they are autogenerated and stuck in the config file by Puppet etc
18:57:48 henrynash: we can chat in -keystone, cc ayoung
18:57:49 jamielennox: config files?
18:57:56 oh
18:57:56 samueldmq, ye
18:58:01 i spent a bunch of time looking at oauth, it's not going to work for us
18:58:02 stevemar: yep
18:58:16 ayoung: isn't that dangerous?
18:58:35 samueldmq, yes
18:59:02 jamielennox: if we're going to certs in the future, let's look at the effort of going with it now vs a 2-step
18:59:14 samueldmq, the old saying about laws and sausages goes double for software and treble for security
18:59:32 jamielennox: (not against it at all, just want to put all the options on the table)
18:59:36 samueldmq: certs is never something we are going to be able to enforce globally
18:59:46 let's roll out
18:59:48 say what?
18:59:54 #endmeeting