17:02:15 <lbragstad> #startmeeting keystone-office-hours
17:02:16 <openstack> Meeting started Tue May 29 17:02:15 2018 UTC and is due to finish in 60 minutes.  The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:02:17 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:02:20 <openstack> The meeting name has been set to 'keystone_office_hours'
17:02:57 * knikolla goes to grab lunch
17:03:52 * gagehugo ditto
17:05:08 <hrybacki> tritto
17:17:29 <lbragstad> quaditto
17:55:02 <openstackgerrit> Harry Rybacki proposed openstack/keystone-specs master: Follow-up -- replace 'auditor' role with 'reader'  https://review.openstack.org/570990
17:55:25 <hrybacki> lbragstad: ^^
17:55:29 <lbragstad> sweet
18:22:34 <ayoung> knikolla, I think your proxy and Istio are covering similar ground.  What I am wondering is what the API would look like for Proxy to consume
18:23:25 <ayoung> lbragstad, did you go to https://www.youtube.com/watch?time_continue=143&v=x9PhSDg4k6M  ?  Its pretty much Dynamic Policy reborn...how many years ago was that?
18:23:49 <lbragstad> i didn't go to that one
18:23:54 <lbragstad> i had a conflict with something else i think
18:24:45 <lbragstad> it was on my schedule to watch later though
18:25:43 <ayoung> lbragstad, just watched through it.  Basically, a service in front of Keystone that updates multiple un-synced keystones
18:25:46 <knikolla> ayoung: what API are you referring to?
18:25:49 <ayoung> hub and spoke model
18:26:00 <ayoung> knikolla, the cross-project access thing
18:26:21 <ayoung> if a user from one project needs to access a resource in another and has to get a new token, its kinda yucky
18:26:28 <knikolla> ayoung: the normal openstack APIs. the proxy is transparent.
18:26:53 <ayoung> knikolla, right now it is K2K, but using the users creds
18:27:17 <knikolla> ayoung: the proxy just goes through all the projects the user has access to
18:27:26 <ayoung> I guess that would be more like: get the resource, find what project it's in, and request a token for that project... all done by the proxy?
18:27:37 <knikolla> ayoung: yes.
18:28:00 <ayoung> might have some scale issues there.  I would rather know which project a priori... somehow
18:28:42 <knikolla> ayoung: caching works
18:28:48 <knikolla> go where it was last time
18:29:13 <knikolla> or there might be a push model, by listening on the message bus for notifications of creations
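A minimal sketch of the "go where it was last time" caching idea above, with the exhaustive search as the fallback; the helper names (find_in_project, the list of user projects) are placeholders, not anything from mixmatch or keystone.

```python
# Cache the project a resource was last found in; only fall back to
# iterating over every project the user can access on a cache miss.
resource_location_cache = {}  # resource_id -> project_id


def locate_resource(resource_id, user_projects, find_in_project):
    """Return the project currently holding resource_id, or None.

    find_in_project(project_id, resource_id) -> bool is assumed to be
    provided by the proxy's existing per-service lookup code.
    """
    cached = resource_location_cache.get(resource_id)
    if cached is not None and find_in_project(cached, resource_id):
        return cached
    # Cache miss, or the resource moved: search every project the user
    # has access to, as the proxy does today.
    for project_id in user_projects:
        if find_in_project(project_id, resource_id):
            resource_location_cache[resource_id] = project_id
            return project_id
    return None
```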
18:29:31 <ayoung> knikolla, like a symlink
18:29:46 <ayoung> knikolla, lets use the volume mount as the example
18:29:51 <ayoung> P1 holds the Vm
18:29:56 <ayoung> P2 holds the volume
18:30:06 <ayoung> Ideally, I would add a symlink in P1 to the volume
18:30:24 <ayoung> a placeholder that says "when you get this resource, go to P2 to get it"
18:30:42 <knikolla> so explicit instead of implicit by searching for it?
18:30:47 <ayoung> but...it should be at the keystone level
18:30:54 <ayoung> knikolla, what if we tagged the P1 project itself
18:31:04 <ayoung> "additional resources located in P2"
18:31:52 <knikolla> ayoung: maybe do this at the level above in the project hierarchy
18:32:11 <ayoung> knikolla, its not a strict hierarchy thing
18:32:25 <ayoung> should be a hint: not enforcing RBAC,
18:33:29 <ayoung> its almost like a shadow service catalog
18:33:32 <knikolla> ayoung: but it makes things easier to understand. and provides a cleaner way to implement granularity by subdividing a project.
18:33:42 <ayoung> "get Network from PN, Storage from PS, IMage from PI"
18:34:02 <ayoung> and...yes, you should be able to tag that on a parent project and have it inherited down
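A rough sketch of what a project-level hint could look like if it were stored with keystone's project tags API (PUT /v3/projects/{project_id}/tags/{tag}); the "hint.<service>:<project>" convention is invented here purely for illustration and is not something keystone or the proxy understands today.

```python
import requests

KEYSTONE_URL = "http://keystone.example.com/v3"  # placeholder auth URL
TOKEN = "gAAAA..."                               # placeholder admin token


def add_resource_hint(project_id, service, target_project_id):
    """Tag project_id with a hint that <service> resources live elsewhere."""
    tag = "hint.{}:{}".format(service, target_project_id)
    resp = requests.put(
        "{}/projects/{}/tags/{}".format(KEYSTONE_URL, project_id, tag),
        headers={"X-Auth-Token": TOKEN},
    )
    resp.raise_for_status()


# e.g. "when you need a volume on P1, go to P2 to get it"
# add_resource_hint(p1_id, "volume", p2_id)
```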
18:34:27 <knikolla> ayoung: same thing but with different clouds and you have the open cloud exchange we want.
18:34:52 <ayoung> knikolla, ooooooh
18:35:11 <ayoung> so...part of it could be the Auth URL for the remote project
18:35:41 <knikolla> ayoung: it's in the keystone service catalog. all service providers are there.
18:35:59 <ayoung> knikolla, but in this case it would be a pointer to the SP
18:36:22 <ayoung> like "on this project, for network, use SP1:PN"
18:36:39 <ayoung> project level hints
18:36:40 <knikolla> like a local project symlinking to a remote cloud's project?
18:36:47 <ayoung> 'zactly!
18:37:15 <knikolla> i've called these sister-projects during presentations.
18:38:54 <ayoung> knikolla, do you have a formal proposal for how to annotate the sister-projects?
18:40:02 <knikolla> ayoung: no I don't. In my notes I have "scope to a project with the same name as the local one, on the domain assigned to the IdP".
18:40:16 <ayoung> knikolla, OK...starting another etherpad for this
18:40:29 <ayoung> https://etherpad.openstack.org/p/sister-projects
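For the cross-cloud case, a hedged sketch of the "sister project" scoping rule from knikolla's notes, using keystoneauth1's K2K plugin; the service provider id, project, and domain names are assumptions, and the exact scope keywords may need adjusting for your keystoneauth version.

```python
from keystoneauth1 import session
from keystoneauth1.identity import v3

# Local credentials, scoped to the local project P1.
local_auth = v3.Password(
    auth_url="http://local.example.com/identity/v3",
    username="demo", password="secret",
    user_domain_name="Default",
    project_name="P1", project_domain_name="Default",
)

# Exchange them via K2K for a token on service provider "sp1", scoped to
# the project with the same name on the domain assigned to this IdP.
remote_auth = v3.Keystone2Keystone(
    local_auth, "sp1",
    project_name="P1",
    project_domain_name="local-idp-domain",
)

remote = session.Session(auth=remote_auth)
```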
18:44:38 <knikolla> ayoung: minus the annotation stuff (proxy goes everywhere searching for stuff), the cross-attaching thing works already.
18:45:29 <ayoung> knikolla, ++
18:45:59 <ayoung> knikolla, this could be big
18:46:37 <ayoung> knikolla, I think we have the topic for our Berlin presentation
18:46:42 <knikolla> ayoung: what's different this time than the other times I proposed this?
18:46:56 <ayoung> "We've done unspeakable things with Keystone"
18:47:11 <ayoung> knikolla, the fact that we can use it inside a single openstack deployment for one
18:47:17 <ayoung> the annotations for second
18:47:34 <ayoung> and constant repetition to beat it through people's heads, of course
18:47:45 <ayoung> we call it keystone-istio to get people's attention, too
18:47:58 <ayoung> its real service mesh type stuff
19:13:27 <knikolla> ayoung: istio is more about connecting apps though, right?
19:14:47 <ayoung> knikolla, it's about any app-to-app communication, and used for multiple use cases.  pretty much all cross-cutting concerns
19:15:14 <ayoung> access control, denial-of-service control, blue/green deployments
19:15:31 <ayoung> it is a proxy layer.  those are typically used for 3 things
19:15:43 <ayoung> security, lazy load, remote access
19:16:01 <ayoung> https://en.wikipedia.org/wiki/Proxy_pattern#Possible_Usage_Scenarios
19:16:23 <ayoung> logging is often done that way, too
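A generic illustration of the proxy-pattern roles listed above (security check, logging, delegation to a real or remote subject); nothing here is keystone- or Istio-specific.

```python
import logging

LOG = logging.getLogger(__name__)


class ServiceProxy(object):
    """Wraps a backend and layers cross-cutting concerns around each call."""

    def __init__(self, backend, is_authorized):
        self._backend = backend              # the real (possibly remote) subject
        self._is_authorized = is_authorized  # injected access-control check

    def get(self, user, resource_id):
        if not self._is_authorized(user, resource_id):   # security
            raise PermissionError("access denied")
        LOG.info("%s fetched %s", user, resource_id)     # logging
        return self._backend.get(resource_id)            # delegate to the real subject
```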
19:17:53 <knikolla> i have concerns about performance for a generic app proxy in python. the openstack-service to openstack-service use case is slightly different since those are terribly slow anyway.
19:18:14 <ayoung> knikolla, Istio is in Go
19:18:28 <ayoung> kmalloc, who makes your 1/4 rack?
19:19:02 <knikolla> ayoung: you want to adopt istio or make what we have more similar to istio?
19:19:20 <kmalloc> ayoung: startach
19:19:27 <kmalloc> ayoung: or something like that, sec
19:19:43 <ayoung> https://www.amazon.com/12U-4-Post-Open-Rack/dp/B0037ECAJA  kmalloc
19:19:49 <kmalloc> ayoung: https://www.amazon.com/gp/product/B00P1RJ9LS/ref=oh_aui_search_detailpage?ie=UTF8&psc=1
19:20:11 <kmalloc> same thing, different seller
19:20:16 <ayoung> kmalloc, ah even better price tho
19:20:20 <kmalloc> yup
19:20:42 <kmalloc> they make a few options, up to 42U
19:21:11 <kmalloc> do not get the 2-post or the 2-post-HD. wont work for you
19:21:43 <ayoung> kmalloc, are these the shelf rails?
19:21:45 <ayoung> https://www.amazon.com/NavePoint-Adjustable-Mount-Server-Shelves/dp/B0060RUVBA/ref=pd_lutyp_sspa_dk_typ_pt_comp_1_6?_encoding=UTF8&pd_rd_i=B0060RUVBA&pd_rd_r=736717d5-d9cf-40f1-a796-f73d9ba525bc&pd_rd_w=4OmZr&pd_rd_wg=wiOng&pf_rd_i=desktop-typ-carousels&pf_rd_m=ATVPDKIKX0DER&pf_rd_p=8337014667200814173&pf_rd_r=8M47S57ND2AEMDDDBMQF&pf_rd_s=desktop-typ-carousels&pf_rd_t=40701&psc=1&refRID=8M47S57ND2AEMDDDBMQF
19:22:49 <kmalloc> ayoung: i used https://www.amazon.com/gp/product/B00TCELZTK for the UPS, you can also get https://www.amazon.com/gp/product/B0013KCLQC for heavier items
19:22:59 <kmalloc> the full shelf is VERY nice.
19:23:39 <ayoung> I think for the poweredges I want the rail version
19:24:15 <kmalloc> sure, be wary though, some of the rail versions don't play well with server cases, they consume just enough (~1-2mm) space that the servers scrape
19:24:35 <kmalloc> so measure your servers and make sure you have a few mm on either side where the rails would normally go
19:24:57 <kmalloc> shouldn't really be an issue with any "real" server with rail mount points
19:24:58 <kmalloc> but....
19:25:01 <kmalloc> ymmv
19:25:07 <ayoung> understood
19:25:36 <ayoung> what about these:
19:25:42 <ayoung> https://www.amazon.com/dp/B00JQYUI7G/ref=sspa_dk_detail_6?psc=1&pd_rd_i=B00JQYUI7G&pd_rd_wg=yrH6s&pd_rd_r=XHT079H16NRJYSZAQ9ER&pd_rd_w=hzj5S
19:26:08 <kmalloc> i don't see how those would work for anything
19:26:18 <kmalloc> not sure what the heck those even are
19:26:23 <ayoung> yeah...thought they were rails at first
19:31:25 <knikolla> ayoung: ping again, you are thinking of adopting istio or morphing what we already have in mixmatch to be more like istio?
19:31:42 <ayoung> knikolla, I'm still digesting what I saw at the summit
19:31:48 <ayoung> I think we need something like Istio
19:31:59 <ayoung> whether that is Istio or your proxy or something else yet is unclear
19:32:27 <knikolla> ack
19:33:48 <ayoung> knikolla, I think that the proxy technology is one question, and what APIs Keystone needs to support it is a second, related one
19:34:41 <knikolla> ayoung: it depends on how many birds you're trying to hit
19:34:49 <knikolla> i have something that fits the openstack-service to openstack-service
19:35:13 <knikolla> which probably won't work with app to app.
19:36:14 <ayoung> knikolla, take some time to look at Istio, and tell me if it is an effort you could support.
19:37:29 <knikolla> ayoung: i'll play around with it.
19:37:37 <ayoung> knikolla, TYVM
19:45:50 <knikolla> it was about time i learned Go. :/
20:42:15 <rm_work> keystone seems to do hard-deletes on projects in the DB -- is that a correct assessment? and if so, is there any way to make it do soft-deletes, or any specific reason it wasn't done that way?
20:42:55 <lbragstad> rm_work: we support disabling projects, which does just about the same thing you'd expect a soft delete to do
20:42:59 <rm_work> ok
20:43:06 <rm_work> so it may just be a "using it wrong" issue
20:43:22 <lbragstad> if you disable a project, users can't authenticate to it, use it, etc...
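A minimal sketch of the disable-instead-of-delete flow, using python-keystoneclient; the auth URL, credentials, and project id are placeholders.

```python
from keystoneauth1 import session
from keystoneauth1.identity import v3
from keystoneclient.v3 import client

auth = v3.Password(
    auth_url="http://keystone.example.com/v3",
    username="admin", password="secret",
    user_domain_name="Default",
    project_name="admin", project_domain_name="Default",
)
keystone = client.Client(session=session.Session(auth=auth))

# Disable rather than delete: nobody can scope a token to the project any
# more, but its record (and the ownership trail) is preserved.
keystone.projects.update("PROJECT_ID", enabled=False)
```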
20:43:34 <rm_work> k
21:13:18 <rm_work> lbragstad: the issue we're trying to solve is around orphaned objects -- keystone projects get deleted and we're left with servers and stuff where we can't see who owned them
21:13:45 <lbragstad> yeah - that's a problem
21:13:48 <rm_work> but if we can't control exactly what users do -- i feel like we should be able to enforce soft-delete (disable) only
21:13:59 <lbragstad> one thing that might help
21:14:10 <rm_work> like i'd be tempted to locally patch the delete call to just set the disabled flag instead
21:14:20 <rm_work> if `soft_delete = True` or something in config
21:14:41 <lbragstad> what if your delete flow does a disable first?
21:14:57 <rm_work> i mean this is like
21:15:02 <rm_work> end-users delete a project
21:15:15 <rm_work> it's not really something we control, unless we refuse project deletes based on policy
21:15:17 <lbragstad> then consume the notification from keystone about the disabled project and clean things up before you delete it
21:15:20 <rm_work> which is just confusing for everyone involved
21:16:05 <lbragstad> that was one of the main reasons we implemented notification support in keystone
21:16:15 <rm_work> ok well isn't that still a patch to keystone we'd have to do?
21:16:21 <rm_work> to change the "delete" call to do a disable first?
21:16:35 <lbragstad> no - more like horizon, but still a patch somewhere, yes
21:16:48 <rm_work> I can't control what John Doe CloudUser does with his projects
21:16:49 <rm_work> we don't use horizon, just API
21:17:13 <rm_work> and the issue is when random end-users create projects, use them, and then delete them with resources still on them
21:17:17 <rm_work> via the API
21:17:19 <lbragstad> the idea was that keystone would emit notifications about state changes for projects, then other services would subscribe to the queue
21:17:47 <lbragstad> it could see the notification come in via the message bus (which still isn't ideal... but)
21:17:58 <lbragstad> pull the project id out of the payload
21:18:06 <lbragstad> and clean up instances/volumes accordingly
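A hedged sketch of that notification flow with oslo.messaging, assuming keystone's default "basic" notification format (where the payload carries the project id in resource_info) and a rabbit transport URL you would supply; the cleanup itself is left as a stub.

```python
import oslo_messaging
from oslo_config import cfg


class ProjectEventHandler(object):
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        if event_type in ("identity.project.disabled",
                          "identity.project.deleted"):
            project_id = payload.get("resource_info")
            # e.g. archive the ownership record, or kick off cleanup of
            # instances/volumes owned by project_id
            print("project %s -> %s" % (project_id, event_type))


transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url="rabbit://guest:guest@controller:5672/")
listener = oslo_messaging.get_notification_listener(
    transport,
    [oslo_messaging.Target(topic="notifications")],
    [ProjectEventHandler()],
    executor="threading",
)
listener.start()
listener.wait()
```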
21:18:07 <rm_work> so we should be listening to the keystone notifications and deleting everything that exists for projects based on their ID? (this sounds like a Reaper related thing)
21:18:27 <rm_work> but that's ... really not what we want, I think. what we want is just a soft-delete <_<
21:19:02 <lbragstad> even if you have a soft delete, something has to do the clean up
21:19:05 <rm_work> I guess we could have something listen to the notifications, and for each deleted project it sees, just archive that to another table or something
21:19:06 <lbragstad> right?
21:19:10 <rm_work> not necessarily
21:19:31 <rm_work> sometimes it's because someone left the company and we need to reassign their stuff to another project, or deal with it intelligently at least
21:19:36 <rm_work> rather than blindly wipe everything out
21:19:54 <rm_work> or just someone does something dumb
21:19:59 <rm_work> and we need to undo it
21:20:15 <rm_work> and it's a lot easier to undo an accidental project delete, than wiping out all resources in the cloud for that project :P
21:20:48 <rm_work> or rather
21:21:04 <rm_work> it's a lot easier to undo an accidental project delete *when all it did is remove one DB record*, as opposed to issuing cascading deletes to all services in the cloud for all objects
21:21:52 <lbragstad> i'm hearing two different use cases here
21:22:02 <rm_work> you're not wrong i guess
21:22:09 <lbragstad> 1.) you want to clean up orphaned objects in certain cases
21:22:16 <lbragstad> 2.) and transfer of ownership
21:22:18 <rm_work> well, we don't want it automated in ANY case
21:22:24 <rm_work> we want to be able to deal with it later
21:22:27 <rm_work> in all cases
21:22:37 <lbragstad> sure
21:22:47 <rm_work> just that the way projects get deleted might be different
21:22:54 <rm_work> but in all cases, what we want is them to be soft-deleted
21:23:11 <rm_work> and not clean up anything
21:23:16 <rm_work> the issue is not that the orphans exist
21:23:23 <rm_work> it's that we can't tell who they used to belong to
21:23:40 <rm_work> for auditing purposes, or making a decision on cleanup
21:24:22 <lbragstad> kmalloc: has opinions on this, and we were going to discuss it in YVR but i'm not sure we did
21:25:07 <rm_work> just seems like soft-delete is done in most places, except keystone (and maybe neutron?)
21:25:36 <lbragstad> if you had a soft delete capability in keystone, how would you expect it to work differently from disable?
21:25:46 <rm_work> i'm not sure i would
21:26:07 <rm_work> i mean i would probably literally implement it as "if CONF.soft_delete: disable; else: delete"
21:26:57 <rm_work> you COULD go a little further and have a deleted flag... and just use that as a sort of explicit filter (?show_deleted=true)
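A purely hypothetical sketch of the behavior being described; CONF.soft_delete, the is_deleted flag, and the show_deleted query parameter do not exist in keystone today and only illustrate the idea under discussion.

```python
def delete_project(project_id, conf, backend):
    """Delete, or merely mark deleted, depending on configuration."""
    if conf.soft_delete:
        # Keep the row so auditing and undelete remain possible.
        backend.update_project(project_id, enabled=False, is_deleted=True)
    else:
        backend.delete_project(project_id)


def list_projects(backend, show_deleted=False):
    """Hide soft-deleted projects unless the caller explicitly asks."""
    projects = backend.list_projects()
    if show_deleted:
        return projects
    return [p for p in projects if not p.get("is_deleted")]
```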
21:27:00 <lbragstad> so - why not restrict project deletion to system administrators and just leave disable available to customers
21:27:05 <rm_work> but i don't know if that's necessary
21:27:18 <rm_work> lbragstad: that's what i mentioned earlier as the only solution i could think of
21:27:27 <lbragstad> right
21:27:44 <rm_work> but it seems like a bad solution just because as an outlier it is very confusing to people
21:27:57 <rm_work> but yes, we could do that
21:27:59 <lbragstad> if your users can disable/enable and not delete - then you can manually do whatever you need to as a system admin
21:28:04 <rm_work> not sure how many thousands of workflows we'd break
21:28:39 <lbragstad> would those workflows still break if you had CONF.soft_delete?
21:28:47 <rm_work> which seems like the main blocker, because if we did that there's a good chance whoever ok'd it would be fired :P
21:28:48 <rm_work> no
21:28:57 <rm_work> because it would still say "204 OK" or whatever
21:29:06 <rm_work> and then ideally be filtered from API lists
21:29:17 <rm_work> (by default)
21:29:28 <rm_work> the same as how every other soft-delete that i'm aware of works
21:29:43 <rm_work> basically it just pretends to delete, unless you really go digging
21:30:48 <rm_work> so from a typical user's perspective, they couldn't tell the difference
21:30:51 <rm_work> but it doesn't remove the DB entry and throw a wrench in auditing
21:31:50 <rm_work> a quick fix for us could be like, throw a delete-trigger on the project table and have it archive -- at least we could look them up later if we HAD to <_< right now even that isn't possible. sometimes we get lucky looking through backups if the project was long-lived...
21:32:05 <rm_work> ^^ but that is dumb and i would never actually do that (it's just an example)
21:32:28 <rm_work> I'm honestly surprised this hasn't come up frequently
21:33:25 <lbragstad> it has
21:33:36 <lbragstad> very often actually
21:33:37 <lbragstad> https://www.lbragstad.com/blog/improving-auditing-in-keystone
21:35:45 <rm_work> k
21:35:50 <rm_work> basically yes, that seems right
21:36:04 <rm_work> but I wouldn't say it's *too* heavy handed
21:38:46 <lbragstad> it would be a lot of work on our API
21:39:05 <rm_work> it seems like the work would be more on the backends side
21:39:20 <rm_work> for the API wouldn't you just have to add another query param?
21:39:27 <rm_work> like "show_deleted"?
21:39:41 <lbragstad> yeah - we'd probably need to support something like that
21:39:53 <lbragstad> and implement soft deletes for all keystone resources, mainly for consistency
21:40:10 <rm_work> yeah that expands the scope of things a little, but i don't think you're wrong
21:40:12 <lbragstad> (i can imagine it being frustrating to have projects soft delete but not something else like users or groups)
21:40:48 <rm_work> i still think it's something that's needed.
21:40:52 <lbragstad> we'd also need to double check the api with HMT
21:41:02 <rm_work> but i guess maybe there aren't enough people that agree with my opinion for it to have happened
21:41:29 <rm_work> which means it probably won't any time soon, unless I go do it :P (and then get agreement from enough cores to accept the patches)
21:41:30 <lbragstad> i don't think people is disagreeing with you, but no one has really stepped up to do the work
21:41:46 <lbragstad> s/is/are/
21:41:49 <rm_work> so you think if it was done, no one would object to merging?
21:42:19 <lbragstad> the last time i discussed it around the Newton time frame, people were only opposed to the dev resource aspect of it
21:42:26 <rm_work> k
21:42:32 <lbragstad> and making sure if we did it, it was done consistently
21:42:39 <lbragstad> afaik
21:42:48 <rm_work> noted
21:42:59 <lbragstad> i don't think people had super strong opinions on saying absolutely not to soft-deletes
21:43:07 <lbragstad> s/not/no/
21:43:15 <lbragstad> wow - typing is really hard
21:43:27 <rm_work> it can be, yes :P
21:43:33 <lbragstad> that was the main purpose of the post that i wrote
21:43:52 <lbragstad> i think the use case for auditing is important, but at the time those were the three options that were clear to me
21:44:01 <lbragstad> based on my discussions with various people
21:46:12 <lbragstad> but - yeah... it's an important use case and I get it, but i also know kmalloc and ayoung have a bunch of thoughts on this
21:47:10 <lbragstad> i wouldn't be opposed to discussing it again, and seeing if we can do something in Stein or T
21:47:22 <lbragstad> discussing it as a larger group*
21:47:48 <rm_work> yeah, I mean, I'll be in Denver
21:47:55 <lbragstad> for the PTG?
21:48:01 <rm_work> yeah
21:48:06 <rm_work> if we want to discuss it then
21:48:09 <lbragstad> sure
21:48:20 <lbragstad> we can throw it on the meeting agenda for next week
21:48:39 <lbragstad> if you feel like getting more feedback sooner than september
21:50:48 <rm_work> what time are your meetings?
21:51:15 <lbragstad> https://etherpad.openstack.org/p/keystone-weekly-meeting
21:51:24 <lbragstad> 1600 UTC on tuesdays
21:51:32 <lbragstad> so - 11:00 AM central
21:51:59 <lbragstad> rm_work: are you based in texas?
21:52:10 <rm_work> not anymore
21:52:16 <rm_work> kinda ... nomadic
21:52:19 <lbragstad> ack - i wasn't sure
21:52:28 <rm_work> yeah after I left castle, I go all over :P
21:52:36 <lbragstad> cool
21:53:16 <lbragstad> well - we can throw it on the agenda for next week if you'll be around
21:53:29 <lbragstad> otherwise, the use case seems straight-forward enough to kickstart on the mailing list
21:57:13 <rm_work> yeah we could do a quick topic on it I suppose -- I can try to show up for that
21:57:25 <gyee> lbragstad, I suppose we don't support directly mapping a federated user into a domain admin (domain-scoped token), do we? It's been awhile since I looked at that piece of code. Just curious if anything has changed.
21:57:36 <rm_work> just for feedback purposes -- though whether or not it is important enough to us to get resources on it anytime soon is another question
21:57:46 <rm_work> which is why i figured PTG would be easier timing
21:58:40 <lbragstad> gyee: ummm
21:59:03 <lbragstad> you could map a user into a group with an admin role assignment on a domain
21:59:21 <lbragstad> but are you asking if trading a SAML assertion for a domain-scoped token works?
21:59:29 <gyee> but do we directly issue a domain-scoped token as a result of that?
21:59:33 <gyee> right
21:59:40 <lbragstad> hnmmm
21:59:47 <gyee> I don't remember us ever supporting that
22:01:25 <lbragstad> gyee: https://github.com/openstack/keystone/blob/master/keystone/tests/unit/test_v3_federation.py#L3861 ?
22:01:39 <lbragstad> oh - wait...
22:01:40 <lbragstad> nevermind
22:01:44 <lbragstad> that's an IDP test case
22:02:34 <gyee> yeah
22:02:58 <lbragstad> all these tests seem to authenticate for an unscoped token before trading it for a domain-scoped token
22:02:59 <lbragstad> https://github.com/openstack/keystone/blob/master/keystone/tests/unit/test_v3_federation.py#L3147
22:03:29 <gyee> right, that's what I thought
22:03:45 <lbragstad> but part of that flow with horizon is asking which project you want
22:03:47 <lbragstad> to work on
22:04:01 <lbragstad> so if it lists domains, horizon might support building a domain-scoped authentication request
22:04:28 <gyee> let me dive into that code again; someone told me today you can get a domain-scoped token for a federated user
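A sketch of the two-step flow those tests use, assuming you already hold an unscoped token from the federated auth step and trade it for a domain-scoped token with keystoneauth1; the token value and domain name are placeholders, and whether horizon builds this request for federated users is the open question here.

```python
from keystoneauth1 import session
from keystoneauth1.identity import v3

unscoped_token = "gAAAA..."  # unscoped token returned by the federation flow

rescope = v3.Token(
    auth_url="http://keystone.example.com/v3",
    token=unscoped_token,
    domain_name="federated_domain",   # domain scope instead of project scope
)
sess = session.Session(auth=rescope)
domain_scoped_token = sess.get_token()
```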
22:04:31 <lbragstad> i feel like this was on the list of things we wanted to improve with horizon a few releases back
22:05:03 <gyee> but I don't remember ever seeing that functionality
22:05:27 <lbragstad> cmurphy: _might_ know off the top of her head?
22:05:45 <lbragstad> i remember she was working on some of that stuff during those joint team meetings between keystone and horizon
22:06:03 <gyee> k, let me check with her as well
22:06:04 <gyee> thanks man
22:06:15 <lbragstad> gyee: no problem, let me know if you hit anything weird
22:06:32 <lbragstad> #endmeeting