18:00:57 <stevemar> #startmeeting keystone
18:00:57 <openstack> Meeting started Tue May 17 18:00:57 2016 UTC and is due to finish in 60 minutes.  The chair is stevemar. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:58 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:01 <openstack> The meeting name has been set to 'keystone'
18:01:05 <jaugustine> Woo!
18:01:07 <stevemar> courtesy ping for ajayaa, amakarov, ayoung, breton, browne, crinkle, claudiub, davechen, david8hu, dolphm, dstanek, edmondsw, gyee, henrynash, hogepodge, htruta, jamielennox, joesavak, jorge_munoz, knikolla, lbragstad, lhcheng, marekd, MaxPC, morganfainberg, nkinder, raildo, rodrigods, rderose, roxanaghe, samleon, samueldmq, shaleh, stevemar, tjcocozz, tsymanczyk, topol, vivekd, wanghong, xek
18:01:14 <rodrigods> o/
18:01:17 <ayoung> Ayuh
18:01:18 <dstanek> o/
18:01:21 <stevemar> jaugustine: that's the kind of excitement we need to see!
18:01:23 <amakarov> o/
18:01:25 <gagehugo> _o/
18:01:26 <topol> o/
18:01:40 <crinkle> o/
18:01:42 <gyee> \o
18:02:09 <stevemar> critical mass achieved, let's start
18:02:18 <stevemar> #topic release update
18:02:30 <stevemar> #link http://releases.openstack.org/newton/schedule.html
18:02:49 <stevemar> we are currently in week R20; it counts down to the end of the release, so 20 weeks left
18:03:11 <ayoung> CODE FREEZE!
18:03:14 <ayoung> sorry
18:03:18 <stevemar> we should be cleaning up any old issues from mitaka and preparing for newton-1
18:03:19 <stevemar> hahaha
18:03:26 <stevemar> nowhere near that time yet :)
18:03:45 <stevemar> but the next deadline is R18, in 2 weeks
18:03:51 <stevemar> there are 2 things happening that week
18:03:54 <ayoung> M1?
18:03:56 <stevemar> newton-1 milestone driver
18:03:57 <stevemar> yep
18:03:58 <edtubill> o/
18:04:01 <ayoung> Er..N1?
18:04:11 <ayoung> did the M mean Milestone or Mitaka?
18:04:12 <stevemar> and that's the spec *proposal* freeze week
18:04:31 <rderose> o/
18:04:35 <stevemar> ayoung: it's N1 this time around, the letter changes
18:04:47 <ayoung> Noice
18:04:51 <stevemar> so in 2 weeks: newton-1 driver, and spec *proposal* freeze
18:05:08 <stevemar> the latter meaning, if you want a spec for newton, propose it soon!
18:05:23 <stevemar> it doesn't have to merge in 2 weeks, just be proposed and in a reviewable state
18:05:53 <stevemar> any questions about dates?
18:06:03 <dolphm> when is code freeze
18:06:16 <stevemar> dolphm: all dates are here: http://releases.openstack.org/newton/schedule.html
18:06:43 <stevemar> dolphm: keystone is following the general feature freeze, which is R5
18:06:48 <stevemar> in 15 weeks
18:07:34 <stevemar> dolphm: good?
18:07:48 * topol just rsvp'd for the midcycle.. don't forget
18:07:50 <dolphm> ++
18:07:55 <stevemar> if anyone has questions, bug me in -keystone
18:08:04 <stevemar> topol: on that note...
18:08:06 <stevemar> #topic midcycle update
18:08:12 <stevemar> notmorgan: ^
18:08:16 <ayoung> anyone want to go rock climbing in Yosemite either before or after the midcycle?
18:08:19 <notmorgan> #link https://docs.google.com/forms/d/1To2qx90Am4hcYdgaqTkRosfNOzxr26M0GiahhiWgAJU/viewform?c=0&w=1
18:08:26 <notmorgan> RSVP Form ^
18:08:34 <topol> notmorgan is there a recommended hotel
18:08:36 <notmorgan> Sign up. we have 35 slots.
18:08:55 <notmorgan> topol: no, please feel free to update the wiki/etherpad if you have recommendations...
18:09:00 <notmorgan> topol: sec getting those links too
18:09:19 <samueldmq> hi all
18:09:19 <notmorgan> #link https://wiki.openstack.org/wiki/Sprints/KeystoneNewtonSprint
18:09:29 <notmorgan> #link https://etherpad.openstack.org/p/keystone-newton-midcycle
18:09:51 <notmorgan> the RSVP form will be used as the authoritative source of "who is coming" for planning purposes
18:10:22 <rderose> is this the official rsvp form:
18:10:24 <rderose> #link https://docs.google.com/forms/d/1To2qx90Am4hcYdgaqTkRosfNOzxr26M0GiahhiWgAJU/viewform?c=0&w=1
18:10:25 <rderose> ?
18:10:38 <notmorgan> rderose: yes.
18:10:39 <dolphm> rderose: yes
18:10:45 <rderose> cool
18:10:58 <henrynash> done
18:11:03 <dstanek> notmorgan: we need a slot countdown
18:11:20 <dolphm> 35?
18:11:22 <notmorgan> please take a second and thank Cisco for hosting us *and* cburgess for helping set that up
18:11:29 <notmorgan> dolphm: 35 spots.
18:11:31 <henrynash> ayoung: one way of reducing the attendee count….
18:11:31 <lbragstad> cburgess thanks!
18:11:44 <dolphm> dstanek: oh, you mean visibility into remaining number of slots?
18:11:47 <jamielennox> yeah, also hotel recommendations would be useful
18:12:00 <dstanek> dolphm: yes, exactly
18:12:03 <dolphm> there are ** 23 ** slots remaining for the midcycle
18:12:17 <notmorgan> since i know next to nothing about San Jose area... [sorry], I'm going to ask folks who are more familiar for hotel recommendations
18:12:36 <ayoung> Stay in San Francisco and take Caltrains?
18:12:47 <notmorgan> ayoung: long ride, but doable.
18:13:08 <dolphm> there's a couple hotels within a decent walking distance, but we'll probably need a few cars anyway, given it's san jose
18:13:15 <lbragstad> https://goo.gl/3gS8GQ
18:13:15 <notmorgan> dolphm: yes.
18:13:18 <gyee> you can take VTA
18:13:20 <lbragstad> nearby hotels ^
18:13:32 <gyee> I think VTA covers that area
18:13:37 <ijw> Earwigging, here, but I've done that ride and unless you like hours on the train it's not good
18:13:38 <stevemar> thanks lbragstad
18:14:02 <gyee> Caltrain is like an hour away from SF
18:14:04 <notmorgan> lbragstad: please add that to the wiki.
18:14:14 <lbragstad> notmorgan ok
18:14:42 <ijw> You can Caltrain to and VTA from Mountain View - it's not impossible - but that's another half an hour or so depending on where specifically you'll be
18:14:45 <gyee> Levi's Stadium is nearby so there are a bunch of hotels there
18:14:47 <roxanaghe_> ijw, I'm doing Caltrain everyday - it's not that bad :)
18:14:50 <stevemar> hyatt's are usually nice, and nearby there
18:15:08 <gyee> Aloft?
18:15:15 <dstanek> i think staying close would be much easier
18:15:15 <ayoung> Navy Lodge Moffett if you are DoD
18:15:25 <notmorgan> i added wiki link to the RSVP form
18:15:29 <ijw> There's a Hyatt House on First that comes recommended, the Crowne Plaza, and the Hilton Garden Inn and Larkspur Landing, all in the reasonable vicinity
18:15:33 <henrynash> ayoung: The hangar at Moffett if you are NASA
18:15:49 <notmorgan> ayoung: doesn't google also rent that out now? :P
18:15:52 <notmorgan> ok anyway
18:15:54 <notmorgan> that's the info
18:16:05 <ayoung> henrynash, there is an Armory Floor not too far away if you are Army National Guard.  I used to drill out of there.
18:16:23 <topol> I believe I stayed at the Santa Clara Marriott. Its nice
18:16:34 <stevemar> thanks ijw and notmorgan
18:16:41 <henrynash> topol: used to be next to Great America….
18:16:55 <ayoung> gyee, dibs on your couch.
18:16:58 <gyee> Casino M8trix :-)
18:16:59 <notmorgan> anyone have an issue with making the results of RSVP public?
18:17:00 <henrynash> (that was not a Trump reference)
18:17:04 <notmorgan> i can add it to the wiki as well.
18:17:12 <henrynash> notmorgan: fine by me
18:17:13 <stevemar> notmorgan: nope
18:17:16 <notmorgan> will do
18:17:28 <lbragstad> dstanek ++
18:19:11 <stevemar> bug notmorgan if folks have questions :P
18:19:17 <notmorgan> #link https://docs.google.com/spreadsheets/d/1qTupqEyYwXnNnO-sW0kRhh-I9hvpPHA7QAuewXnw6AA/edit?usp=sharing
18:19:17 <stevemar> (fine fine, i'll be around too)
18:19:40 <stevemar> on to more contentious topics!
18:19:43 <stevemar> #topic Specify ID for Project or domain creation
18:19:53 <stevemar> amakarov ^
18:19:58 <stevemar> agrebennikov is missing :(
18:20:06 <amakarov> Thanks
18:20:33 <amakarov> The question is: why do we unconditionally overwrite project ID on creation?
18:20:51 <amakarov> Is there some reason behind it or is it a bug?
18:21:09 <lbragstad> amakarov as in you can't create a project with a project ID that you pass in?
18:21:17 <samueldmq> I think that's what we do everywhere else ?
18:21:30 <jamielennox> change the question - why should you ever specify an id up front?
18:21:36 <samueldmq> jamielennox: ++
18:21:39 <dstanek> jamielennox: ++
18:21:46 <gyee> I think the argument was IDs are supposed to be globally unique
18:21:48 <rderose> ++
18:21:59 <amakarov> jamielennox: the actual case: multi-DC env
18:22:01 <rderose> gyee: yeah, and we want to control that
18:22:01 <dstanek> i don't like that we can do that in some places
18:22:04 <samueldmq> gyee: good point
18:22:10 <dolphm> amakarov: to guarantee that project IDs are globally unique and immutable, we don't accept user input for them. project names are mutable, not globally unique, and user-defined
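
(For context on dolphm's point: a minimal sketch of what project creation against the Keystone v3 API looks like today, in Python. The endpoint URL and token are placeholder assumptions.)

    # Sketch: creating a project via the v3 API. The request body carries
    # no "id"; keystone generates the ID itself (historically uuid4().hex),
    # which is exactly the behavior amakarov is asking to override.
    import json
    import requests

    KEYSTONE = "http://localhost:5000/v3"  # assumed local keystone endpoint
    TOKEN = "..."                          # assumed valid admin token

    resp = requests.post(
        KEYSTONE + "/projects",
        headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
        data=json.dumps({"project": {"name": "demo", "domain_id": "default"}}),
    )
    print(resp.json()["project"]["id"])  # server-chosen, e.g. a 32-char hex UUID
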
18:22:22 <notmorgan> so, i dislike specifying it - if we are doing this it can be ADMIN only
18:22:33 <amakarov> if we create projects and roles with the same ID - the tokens are valid across DCs
18:22:35 <notmorgan> and it still must conform to the automatically generated format, such as a uuid
18:22:54 <gyee> to play the devil's advocate, we've already allowed "shadow" users, why not shadow other things?
18:23:04 <notmorgan> so there is one other reason this is important
18:23:06 <dolphm> if we all understand the consequences on federation and whatnot, i'm open to breaking that convention
18:23:07 <jamielennox> gyee: shadow users doesn't let you pick the user's id
18:23:16 <lbragstad> but shadow users doesn't allow a user to specify the id of their user
18:23:19 <samueldmq> notmorgan: so no big advantage besides knowing it ahead of time (and it still can fail at creation time!)
18:23:19 <notmorgan> in some cases you need to restore a project after deletion
18:23:20 <amakarov> notmorgan: we're still ok with that in v2
18:23:23 <gyee> jamielennox, at least to agree on a global ID
18:23:38 <notmorgan> amakarov: default domain is special, and i always treat it as such
18:23:40 <jamielennox> gyee: no, it creates an id for you as does project
18:23:46 <lbragstad> amakarov what's the use case for specifying an ID for a project?
18:23:51 <gyee> right, with a specific algorithm
18:23:55 <notmorgan> lbragstad: ^ 2 reasons
18:23:57 <rderose> lbragstad: ++
18:24:00 <jamielennox> gyee: no, uuid
18:24:01 <notmorgan> lbragstad: 1) restore a deleted project
18:24:01 <ayoung> this is going to be like Nova hooks isn't it, where someone brings up an old feature and the core immediately deprecates it?
18:24:04 <notmorgan> and maintain the id.
18:24:16 <ayoung> well, except V2 is already deprecated
18:24:20 <notmorgan> 2) same id across deployments (keystones)
18:24:38 <notmorgan> i would not make it a "normal" action anyone can do.
18:24:43 <notmorgan> in either case.
18:24:45 <amakarov> lbragstad: if you need to do authZ in different clouds with a single token
18:24:45 <lbragstad> in order for 2 to work you'd have to have the identity backend shared anyway
18:24:45 <ayoung> notmorgan ++
18:24:50 <raildo> ayoung: v2 is deprecated
18:24:55 <dstanek> notmorgan: if you did then it may be a security issue
18:25:04 <ayoung> raildo, doesn't change the fact that Nova core hates me.
18:25:07 <raildo> I think we have good use cases to make this happen
18:25:12 <notmorgan> dstanek: exactly, it, if anyting is an Admin-only thing
18:25:16 <raildo> ayoung: I know :(
18:25:18 <ayoung> I have an abandonded spec
18:25:18 <rderose> notmorgan: why do you need the same id across deployments?
18:25:23 <samueldmq> lbragstad: ++
18:25:30 <dstanek> notmorgan: cloud admin?
18:25:31 <gyee> jamielennox, I see, but they are still using their native ID for auth, no?
18:25:36 <amakarov> rderose: ^^ my answer to lbragstad
18:25:39 <ayoung> #link https://review.openstack.org/#/c/203852/
18:25:40 <notmorgan> rderose: a lot of deployers want the same project id in multiple AZs that are managed by different keystones
18:25:52 <notmorgan> rderose: not my choice/reasoning, just echoing the reasoning
18:26:03 <rderose> notmorgan: ++
18:26:04 <notmorgan> rderose: the "restore a project" reason is why i support it
18:26:05 <jamielennox> i wouldn't even say this is an admin only thing, if your use case is restoring a deleted project then at best it's a keystone-manage feature
18:26:07 <dstanek> amakarov: can't you do that in a more federated approach?
18:26:13 <notmorgan> rderose: and that is a break-glass scenario
18:26:25 <notmorgan> but should be doable via the API not DB manipulation
18:26:32 <amakarov> dstanek: no, as federation is currently in demo state :)
18:26:44 <gyee> demo state!?
18:26:48 <notmorgan> dstanek: i'd make it a cloud-admin thing for sure.
18:26:49 <dstanek> amakarov: ?
18:26:57 <notmorgan> dstanek: if i was implementing it. not a domain admin thing
18:27:04 <notmorgan> vvtletuvnjejfltljiigeubducbbkiitevfbdhkgnedg
18:27:05 <raildo> notmorgan: via API ++
18:27:07 <rderose> notmorgan: but if you create a project, get the id, then you can do the restore use case?  right?  why would you need to specify the project id in that case?
18:27:13 <stevemar> notmorgan: lol
18:27:15 <ayoung> notmorgan, Yubikey failure?
18:27:17 <amakarov> gyee, dstanek: it lacks, for example, groups shadowing
18:27:20 <stevemar> ayoung: yes
18:27:34 * ayoung imagines Q-Bert
18:27:40 <samueldmq> but please don't say it's demo state, people worked hard on it
18:27:53 <samueldmq> and still do
18:27:59 <dstanek> amakarov: i think it would be better to make federation fit this, rather than do this one-off thing
18:28:08 <lbragstad> dstanek ++
18:28:13 <stevemar> `make federation great again`
18:28:16 <amakarov> unfortunately agrebennikov is away, but he explained it very clearly at the design session
18:28:17 <notmorgan> ayoung: yes. i need to fix that
18:28:22 <lbragstad> stevemar --
18:28:28 <notmorgan> ayoung: i keep bumping the OTP button.
18:28:34 <notmorgan> when i move the laptop
18:28:37 <ayoung> Heh
18:28:39 <jamielennox> agreed, i can see the break-glass case of restoring a project via -manage, but for cross-DC you need some sort of federation
18:29:01 <ayoung> notmorgan, 3 more of those and I think I'll be able to crack the seed.
18:29:19 <dstanek> amakarov: can you get together a list of things the current federation implementation is missing?
18:29:33 <lbragstad> yeah - each keystone across DCs is going to require access to the same identity backend (which naturally fits the federation story)
18:30:00 <rderose> lbragstad: agree, but is there a big demand for this?
18:30:11 <rderose> I mean, do we want to give this a priority now?
18:30:12 <amakarov> dstanek: I agree about that approach in theory, though we can solve many problems easily and I don't see why we can't do that right away
18:30:13 <notmorgan> ayoung: good thing i don't use that OTP for anything
18:30:20 <amakarov> What will it break?
18:30:45 <dstanek> amakarov: the wrong solution right away isn't necessarily good
18:30:52 <lbragstad> i think it's more of losing control
18:30:55 <amakarov> dstanek: why is it wrong?
18:31:14 <dstanek> amakarov: i dislike specifying ids
18:31:21 <rderose> ++
18:31:38 <dstanek> i would be -2 if users could do it, but i'm still -1 for cloud admins
18:31:52 <dstanek> i like jamielennox's idea of a manage command if that is actually needed
18:32:16 <notmorgan> dstanek: sure.
18:32:29 <amakarov> dstanek: ok, what is the suggested way to create/use projects based on LDAP groups?
18:32:29 <ayoung> dstanek, its an RBAC thing regardless
18:32:34 <samueldmq> what guarantees that keystone will accept the ID the user passes ?
18:32:34 <notmorgan> dstanek: a break-glass method that isn't "edit the DB" is what i look for in solving the "restore a project/domain" case
18:32:38 <notmorgan> dstanek: i'd support that.
18:32:55 <henrynash> dstanek, jamielennox: ++ agree on the keystone-manage cmd
18:33:00 <ayoung> Nope
18:33:05 <ayoung> that is the wrong security model
18:33:12 <ayoung> that means people who have access to the machine
18:33:23 <ayoung> you don't want to push people to do that in the course of managing the application
18:33:36 <lbragstad> rderose i think we'd need to see a list of shortcomings in the federation implementation in order to prioritize
18:33:48 <jamielennox> ayoung: if we're restricting the operation to the cloud-admin they they have access to the machine
18:33:51 <stevemar> lbragstad: yep
18:33:51 <rderose> lbragstad: good point
18:34:08 <jamielennox> if you need to do this operation often enough that logging onto the machine is a problem you are doing management wrong
18:34:19 <ayoung> jamielennox, we are doing management wrong.  Its called Keystone
18:34:33 <gyee> lmao
18:34:33 <ayoung> we delete a project and all of the resources out there are orphaned
18:34:38 <samueldmq> I think we need a well-described use case, and how we would support it via federation (what's missing, as lbragstad said)
18:34:42 <amakarov> ayoung: ++
18:34:57 <lbragstad> we emit notifications on resource changes
18:35:06 <jamielennox> ayoung: so we need to have a look at keystone's delete project?
18:35:12 <jamielennox> deprecate in favour of disable?
18:35:15 <amakarov> what's more, it looks like nobody even dares to touch this topic publicly :)
18:35:17 <ayoung> lbragstad, and we have no standard workflow engine so that is not sufficient
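
(For context on lbragstad's and ayoung's exchange: keystone publishes resource-change notifications on the message bus. A rough Python sketch of the shape of a project-deletion event follows; the field names match keystone's "basic" notification format, but treat the exact payload as an assumption for your release/config.)

    # Roughly what a consumer sees when a project is deleted:
    event = {
        "event_type": "identity.project.deleted",
        "payload": {"resource_info": "d0d2a4..."},  # hypothetical project ID
    }
    # ayoung's point: nothing guarantees every service consumes this event
    # and cleans up, hence orphaned resources after a project delete.
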
18:35:20 <jamielennox> soft-delete?
18:35:26 <notmorgan> jamielennox: no soft deletes.
18:35:34 <ayoung> jamielennox, gyee has  been saying that for years
18:35:35 <notmorgan> jamielennox: i'm ok with "never remove from the db"
18:35:37 <jamielennox> but a full-on restore of the db entry is also a bad idea
18:35:59 <gyee> resource life-cycle management
18:36:03 <notmorgan> jamielennox: soft-delete implies "could be hard-deleted"
18:36:04 <ayoung> I'm just afraid it is closing the barn door after the horse is long gone
18:36:07 <jamielennox> notmorgan: meh, we have an enabled flag on projects, it's essentially a soft delete
18:36:08 <gyee> that's production operation stuff
18:36:10 <stevemar> i think we're mixing things up here, amakarov doesn't necessarily care about the 'restore a project' if deleted case
18:36:15 <gyee> something we don't use often enough :-)
18:36:16 <notmorgan> i'm 100% ok with never removing it as a delete instead.
18:36:23 <dstanek> we should do event sourcing
18:36:24 <amakarov> stevemar: yes
18:36:33 <ayoung> anyway, keep it as a restricted operation. We can make it more widely exposed later so long as it is RBAC managed
18:36:43 <samueldmq> stevemar: ++
18:36:44 <notmorgan> ok lets set "restore" to the side
18:36:45 <stevemar> so let's put the whole manage command, and soft delete discussion to the side
18:36:52 <notmorgan> i'll propose something separate for that
18:37:11 <notmorgan> i have an idea and i think it'll work just fine.
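
(A minimal sketch of the "enabled flag as soft delete" idea jamielennox raises above, using the v3 update-project call; the endpoint, token, and project ID are placeholders.)

    import json
    import requests

    KEYSTONE = "http://localhost:5000/v3"  # assumed endpoint
    TOKEN = "..."                          # assumed admin token
    PROJECT_ID = "d0d2a4..."               # assumed existing project

    # Disable instead of delete: tokens can no longer be scoped to the
    # project, but its ID and role assignments survive, so a "restore"
    # is just flipping enabled back to true.
    requests.patch(
        KEYSTONE + "/projects/" + PROJECT_ID,
        headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
        data=json.dumps({"project": {"enabled": False}}),
    )
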
18:37:14 <ayoung> You guys are missing the fact that notifications are currently a disaster, so we can't notify untrusted services
18:37:43 <samueldmq> ayoung: let's fix it, but it's still part of the other conversation :)
18:37:49 <amakarov> well, let me put it this way: how can I provide a customer the UX where they auth in 1 cloud (in horizon) and then work with resources from the other cloud without re-login?
18:37:51 <raildo> ayoung: ++ we have a huge problem with nova quota when we delete a project =/
18:37:51 <ayoung> and we don't even know all of the services that would need the notification
18:37:52 <samueldmq> amakarov: to do that via federation, what would we need ?
18:38:03 <stevemar> amakarov wants to specify an ID only for create, and he is using this in a multi-dc environment. if federation is stinky for this, we should fix it.
18:38:04 <samueldmq> amakarov: does that mean 'federated resources' ?
18:38:10 <amakarov> considering existing LDAP
18:38:11 <ayoung> amakarov, that sounds like K2K
18:38:24 <ayoung> But...
18:38:36 <ayoung> that is the whole pre-sync between two clouds
18:38:41 <notmorgan> amakarov: "other cloud" - same owner different deployment (like AZ)?
18:38:42 <dstanek> amakarov: one horizon for all DCs right?
18:38:43 <ayoung> which is a good midcycle topic
18:38:50 <amakarov> dstanek: ++
18:38:51 <notmorgan> amakarov: or totally different cloud totally different deployer?
18:38:57 <ayoung> lets have a "larger workflows" track at the midcycle?
18:39:03 <lbragstad> notmorgan good question
18:39:16 <notmorgan> lbragstad: because the answer to that dictates my view.
18:39:38 <raildo> amakarov: it sounds like the Mercador idea... https://wiki.openstack.org/wiki/Mercador
18:39:40 <dstanek> notmorgan: wouldn't solving for different owners essentially solve for both?
18:39:42 <amakarov> notmorgan: no. clouds may be thought about as "replicated"
18:39:54 <notmorgan> amakarov: so "same deployer"
18:40:02 <notmorgan> dstanek: not really but hold on.
18:40:15 <amakarov> notmorgan: same deployer
18:40:31 <notmorgan> amakarov: cool. then i think it's not unreasonable
18:40:56 <jamielennox> why do 2 clouds from the same deployer wanting to share projects between them not use the same keystone?
18:40:59 <stevemar> we have to move on, there are other topics on the agenda
18:41:01 <notmorgan> dstanek: different deployers/owners is the same reason we assume keystone for the local cloud is authoritative.
18:41:10 <samueldmq> jamielennox: ++
18:41:12 <stevemar> i'll give this another minute to wrap up
18:41:19 <notmorgan> jamielennox: my point was they could also solve this via replicating keystone
18:41:32 <amakarov> jamielennox: it may be a big project distributed geographically
18:41:34 <notmorgan> jamielennox: which is where i was going, and why it was "reasonable" to want that kind of replication
18:41:41 <jamielennox> amakarov: keystone will support that
18:41:43 <notmorgan> not that the API driven one was the right answer. i'm abstaining there
18:41:45 <lbragstad> jamielennox same keystone or just point to the same backend (which is essentially the same thing?)
18:41:46 <gyee> replicating gets expensive when you have more than a few DCs
18:41:48 <samueldmq> jamielennox: looks like we're trying to fix something that should be fixed with the right deployment choices and/or federation
18:42:08 <dstanek> amakarov: write up a detailed usecase so we can discuss further
18:42:14 <jamielennox> you can horizontally scale keystone and you can put endpoints in different regions
18:42:20 <jamielennox> fernet was a huge win for that
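
(Why fernet helps here: tokens become self-contained encrypted blobs, so any keystone sharing the same key repository can validate them without replicating a token table. A toy Python illustration with the cryptography library follows; keystone's real token payload format differs.)

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # in keystone: a synced /etc/keystone/fernet-keys/
    dc1 = Fernet(key)            # stands in for keystone in one DC
    dc2 = Fernet(key)            # keystone in another DC, same key repo

    token = dc1.encrypt(b"user_id|project_id|expiry")
    assert dc2.decrypt(token) == b"user_id|project_id|expiry"
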
18:42:47 <samueldmq> dstanek: ++ amakarov please do what dstanek said ^
18:42:51 <amakarov> jamielennox: right, if the project id's are the same
18:42:52 <lbragstad> fwiw - we did testing of that across three globally distributed datacenters
18:42:59 <amakarov> samueldmq, dstanek: ack
18:43:02 <amakarov> will do
18:43:08 <jamielennox> amakarov: if they are the same keystone the project ids are unique within it
18:43:13 <dstanek> stevemar: let's get this show on the road!
18:43:39 <stevemar> dstanek: alright!
18:43:45 <stevemar> switching gears
18:43:48 <stevemar> #topic Service Token provides user permissions spec
18:43:51 <stevemar> jamielennox: ^
18:44:12 <stevemar> (i haven't read it yet)
18:44:15 <jamielennox> ok, so we discussed this spec in tokyo and then i never got around to doing it in the last cycle
18:44:15 <amakarov> jamielennox: different keystones
18:44:17 <stevemar> #link https://review.openstack.org/#/c/317266/1/specs/keystone/newton/service-user-permissions.rst
18:45:06 <jamielennox> the intent is that we should validate users only on the first service they hit in openstack; from then on, in service-to-service communication we should trust the user permissions provided by the last service
18:45:07 <samueldmq> jamielennox: so a token only gets validated once per openstack workflow ?
18:45:20 <samueldmq> nice
18:45:25 <notmorgan> samueldmq: yep
18:45:42 <jamielennox> samueldmq: if you pass an X-Service-Token then the service token is validated to say that we should be able to trust this information
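
(A minimal sketch of the two-header flow jamielennox describes: the user's token is forwarded as X-Auth-Token while the calling service adds its own fresh token as X-Service-Token; the receiving side validates the service token and, per the spec, trusts the forwarded user context. The URL and tokens are placeholders.)

    import requests

    USER_TOKEN = "..."     # token the end user originally presented
    SERVICE_TOKEN = "..."  # fresh token for the calling service's own user

    requests.get(
        "http://glance.example.com:9292/v2/images",  # assumed downstream API
        headers={
            "X-Auth-Token": USER_TOKEN,        # may have expired mid-operation
            "X-Service-Token": SERVICE_TOKEN,  # what actually gets validated
        },
    )
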
18:45:48 <samueldmq> notmorgan: looks like what we discussed a couple of months ago .. that could be implemented as a ksa plugin etc
18:45:49 <notmorgan> this is something i've been advocating for a bit.
18:45:58 <ayoung> so...what if we do: X-Service-Token + user_id + project_id + role_id
18:46:00 <jamielennox> so this is a fairly large change in the way we handle tokens in openstack
18:46:00 <notmorgan> samueldmq: same thing, slightly different mechanism
18:46:12 <jamielennox> but it solves the mid-operation token expiry which has plagued us forever
18:46:16 <samueldmq> but the service token still needs to be validated, right ?
18:46:23 <samueldmq> so no gain in terms of performance?
18:46:25 <ayoung> so the remote side can still confirm that the resource is within the permissions boundary
18:46:26 <notmorgan> samueldmq: yes, but that isn't long term
18:46:30 <gyee> jamielennox, it will be an explicitly opt-in feature right?
18:46:31 <ayoung> and we don't have an elevation of privs
18:46:36 <jamielennox> the spec is not completely polished up and given the conceptual size of the change i just want to make sure people are on board first
18:46:37 <raildo> jamielennox: this is an outcome of the nova/keystone design session, right?
18:46:40 <notmorgan> samueldmq: actually one less validation- you don't need to validate twice
18:46:58 <ayoung> pass along the list of roles in the token with which the user was originally validated
18:47:03 <notmorgan> ayoung: i still think that we're wrong on that front, but we can expand the discussion *after*
18:47:19 <notmorgan> ayoung: its a different topic that builds on jamielennox's proposal
18:47:25 <jamielennox> raildo: well i'm pushing it now because it all came up again at summit and this was our solution that never got proposed
18:47:32 <amakarov> samueldmq: if we figure something out to trust some services - we don't have to validate tokens EVERY time
18:47:49 <ayoung> notmorgan, just the role-list?
18:47:50 <raildo> jamielennox: ++ nice
18:47:52 <jamielennox> so there are a bunch of security questions about how we trust those headers
18:47:59 <samueldmq> notmorgan: long-term is to use certs for that and not validate the service token at all ? (and the user token only once at the beginning of the workflow)
18:48:08 <notmorgan> ayoung: like i said, something to discuss later - once we land this i don't want to derail
18:48:19 <notmorgan> samueldmq: that will also be an option
18:48:27 <notmorgan> but service tokens are easy to refresh
18:48:28 <ayoung> service token == user with service role + is_admin_project
18:48:28 <jamielennox> samueldmq: certs would be cool
18:48:39 <notmorgan> ayoung: or whatever the definition is.
18:48:42 <samueldmq> amakarov: yes, that could be done using certificates, for example; but is a separate conversation, maybe long term :)
18:48:44 <notmorgan> ayoung: that is an implementation detail
18:48:58 <jamielennox> yea, don't need is_admin_project, however you define service is fine
18:49:16 <notmorgan> can be whatever KSM is configured for
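
(For reference on "however you define service" / "whatever KSM is configured for": keystonemiddleware's auth_token middleware has service-token options along these lines; treat the exact option names and defaults as an assumption against your installed version.)

    [keystone_authtoken]
    # roles a validated token must carry to count as a service token
    service_token_roles = service
    # reject calls whose X-Service-Token lacks those roles
    service_token_roles_required = true
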
18:49:34 <jamielennox> so there's a question in there about whether we should trust all the attributes passed as headers or whether we should take only the core user_id, project_id, etc. and reconstruct it like we do with fernet
18:49:42 <samueldmq> notmorgan: jamielennox: nice so this definitely opens the door for other long term improvements (like that using certs)
18:49:46 <samueldmq> works for me
18:49:49 <jamielennox> that i'd like people to have a look at as well
18:49:52 <stevemar> jamielennox: i think nova will no longer hate us if we get this in
18:50:09 <jamielennox> stevemar: yep, and all those stupid trust hacks that glance did recently can go away
18:50:16 <gyee> turn hate into love
18:50:16 <ayoung> So the idea is that we pre-validate at the boundaries, and then trusted services can talk to each other.  This is fine for, say, Nova to Glance, but I don't want it for *aaS like Sahara and Trove
18:50:16 <jamielennox> glance as an example
18:50:22 <stevemar> jamielennox: yep
18:50:29 <samueldmq> gyee: ++
18:50:47 <jamielennox> ayoung: right, these are the concerns that i want out on the spec
18:50:48 <ayoung> Yeah...this is not going to scale
18:50:52 <samueldmq> jamielennox: is it any worse/better in terms of security to trust all the headers ?
18:50:58 <ayoung> this is why I want unified delegation
18:51:18 <jamielennox> so i don't expect everyone to have opinions on it now, but i'll put it on the agenda for next week as well and i'd like to hash it out on the spec until then
18:51:25 <ayoung> but this is the right approach
18:51:42 <stevemar> jamielennox: i don't anticipate much push back
18:51:46 <jamielennox> samueldmq: it's an open question
18:51:53 <ayoung> we distinguish between "this is a long running operation, but treat it all as a unit" versus "do something for me later"
18:52:00 <jamielennox> stevemar: this is a giant security model change - it needs to be pushed back on
18:52:11 <ayoung> this has the potential to be a disaster.
18:52:16 <jamielennox> ayoung: ++
18:52:28 <jamielennox> ayoung: but there's no other reasonable way i can see to solve it
18:52:29 <rderose> ++
18:53:02 <stevemar> well, currently things just time out, or token expirations are set stupidly high, so... this is an improvement to me
18:53:07 <jamielennox> i expect to have to expand the security implications bit to an essay
18:53:11 <samueldmq> it would be cool if policies were configured per openstack workflow (create instance and all its involved API calls) rather than per separate API operation
18:53:19 <ayoung> samueldmq, ++
18:53:28 <ayoung> and to do that we need to know what perms you need for the workflow
18:53:34 <samueldmq> that means, reflect this work in the policy bits
18:53:46 <jamielennox> yep - and if we're in the process of designing openstack 2.0 then i have other ideas as well
18:54:00 <ayoung> which leads to dynamic policy and all the heresy I've been spewing these past few years
18:54:03 <samueldmq> cool, I like it
18:54:20 <topol> jamielennox, what level of confidence do we have that this does not open some big unforeseen security hole?
18:54:36 <topol> how do we vet?
18:54:42 <amakarov> ayoung: we can do that btw ))
18:54:47 <ayoung> topol, 100% that it does.
18:54:56 <jamielennox> topol: it's certainly risky, it escalates the service user from something that is not particularly interesting to something that can emulate everyone
18:54:58 <notmorgan> topol: it is not a lot worse than we have now.
18:55:08 <ayoung> topol, because we are not going to be able to limit this to services in the service catalog
18:55:10 <jamielennox> nova tells me that there is no service user on the compute nodes
18:55:20 <ayoung> because they don't even know their own identity
18:55:23 <jamielennox> but it makes the service users a much juicier target
18:55:34 <gyee> if services are sharing the same memcached, any service could potentially modify the token validation results
18:55:39 <gyee> just saying :-)
18:56:10 <ayoung> Why did we split OpenStack into microservices again?
18:56:22 <jamielennox> ayoung: ++
18:57:12 <samueldmq> how does a service know its creds to auth against keystone ?
18:57:13 <stevemar> henrynash: i'm assuming 3 minutes isn't enough time for you?
18:57:17 <jamielennox> anyway - not much time left, but these questions are why i would like to debate it a lot on the spec
18:57:21 <henrynash> probably not
18:57:24 <ayoung> notmorgan, want to float "signed requests" again?
18:57:36 <jamielennox> samueldmq: same as now, they all have it
18:57:42 <notmorgan> oauth
18:57:47 <ayoung> samueldmq, they are autogenerated and stuck in the config file by Puppet etc
18:57:48 <stevemar> henrynash: we can chat in -keystone, cc ayoung
18:57:49 <samueldmq> jamielennox: config files ?
18:57:56 <samueldmq> oh
18:57:56 <ayoung> samueldmq, ye
18:58:01 <jamielennox> i spent a bunch of time looking at oauth, it's not going to work for us
18:58:02 <henrynash> stevemar: yep
18:58:16 <samueldmq> ayoung: isn't that dangerous ?
18:58:35 <ayoung> samueldmq, yes
18:59:02 <samueldmq> jamielennox: if we're going to certs in the future, let's look at the effort of going with it now vs a 2-step approach
18:59:14 <ayoung> samueldmq, the old saying about laws and sausages goes double for software and treble for security
18:59:32 <samueldmq> jamielennox: (not against at all, just want to put all the options on the table)
18:59:36 <jamielennox> samueldmq: certs are never something we are going to be able to enforce globally
18:59:46 <stevemar> lets roll out
18:59:48 <gyee> say what?
18:59:54 <stevemar> #endmeeting