16:00:19 #startmeeting keystone
16:00:19 Meeting started Tue Jun 5 16:00:19 2018 UTC and is due to finish in 60 minutes. The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:22 The meeting name has been set to 'keystone'
16:00:26 #link https://etherpad.openstack.org/p/keystone-weekly-meeting
16:00:28 agenda ^
16:00:34 ping ayoung, breton, cmurphy, dstanek, gagehugo, hrybacki, knikolla, lamt, lbragstad, lwanderley, kmalloc, rodrigods, samueldmq, spilla, aselius, dpar, jdennis, ruan_he, wxy, sonuk
16:00:52 o/
16:01:04 o/
16:01:29 o/
16:02:23 o/
16:03:21 o/
16:04:01 #topic proxy calls between projects
16:04:03 knikolla: o/
16:04:06 o/
16:04:34 what i'm working on, and what pays my bills, is resource federation in openstack
16:04:52 currently i have a proxy called mixmatch which proxies api calls between services in different deployments
16:05:10 handling the k2k saml exchange, and scoping the token to the correct project on the remote deployment
16:05:30 o/
16:05:35 was thinking: if we forget about k2k, the proxy could be used to allow nova to access things in other projects
16:05:46 for example, attaching a volume in a different project
16:05:59 essentially allowing granular ACLs, since you can divide things across many projects.
16:06:14 this is the elevator pitch.
16:06:43 * kmalloc holds the "door close button" to give knikolla more time to pitch.
16:07:07 thanks, haha.
16:07:55 so basically you'd have a project with subprojects. people who should have access to everything are granted a role on the parent; people who just need specific objects get one or more subprojects.
16:08:22 you'd still be able to use the stuff as if it were in a single project though
16:08:33 this was brought up in a discussion with ayoung
16:08:40 if he's around
16:08:44 so the proxy only affects a single hierarchy
16:08:45 ?
16:09:07 as in, the resource must be in a peer project or the local project.
16:09:14 not "any project configured"
16:09:21 potentially.
16:09:32 currently it's less restricted (it looks at all the projects you have access to)
16:09:36 ah
16:09:41 ok, so it could be any project
16:09:44 since it uses your token to get a list of projects
16:10:01 but realistically, it could be limited with enhancements to mixmatch
16:10:14 yes
16:10:15 ok cool, thanks, was curious about the "current state of the world"
16:10:26 * kmalloc is looking at the mixmatch code >.>
16:12:07 thoughts, opinions, questions?
16:13:04 knikolla: would you be using this internally for k2k and resource sharing across projects?
16:13:36 lbragstad: its main use case right now is cross-attaching a volume between multiple clouds
16:13:43 the proxy is registered as the cinder endpoint
16:13:48 o/
16:13:49 nova makes an attach call
16:14:06 and the proxy figures out where the volume is, does k2k, gets a token, and forwards the request to the appropriate cloud
16:14:16 and sends the response back
16:14:23 nova is happy without knowing it went to another cloud
16:14:34 potentially, it can be used without k2k, just rescoping inside a cloud
16:14:47 allowing usage of resources from one project you have access to in another
16:15:05 i want to say i've heard that use case before
16:15:18 but i can't remember who it was
16:15:53 it was related to the more fine-grained permissions [splitting resources in the same project]
16:16:08 same concept, but without needing to add extra bits everywhere for access control
16:16:14 that too
16:16:20 the topic today is related to fine-grained, yes.
16:16:48 would it be possible to use this to do fine-grained access within the same project?
16:17:15 lbragstad: if you make the endpoint only accessible via the proxy, yes
16:17:26 otherwise a user could bypass it and talk straight to the service.
16:17:28 might be weird.
16:17:31 sure
16:17:35 and you could still bypass the proxy
16:17:41 which defeats the purpose
16:17:47 yes.
16:18:06 ayoung had a concept (blog post) about access tags [think k8s] in openstack to solve same-project fine-grained control
16:18:23 honestly, i would be unhappy trying to wedge fine-grained control via the proxy into a single project
16:18:39 as it doesn't really buy security in any way, and is easily bypassed
16:18:40 ++
16:18:55 so the primary use case is sharing resources across projects
16:19:01 across projects is no different than across clouds [really]
16:19:12 this is more like: use keystone HMT and role assignment to half-ass access tags.
16:19:40 it's the inverse of what adam proposed
16:19:53 it's across-project, vs internal to a project
16:20:22 we would need something like mixmatch if we wanted access tags intra-project anyway [due to the model of data and RBAC in openstack in general]
16:20:27 yes, though if the projects are subprojects of a parent project, it's easier to wrap your brain around.
16:21:12 so, silly question: can you horizontally scale mixmatch, one per service?
16:21:22 yes.
16:21:23 one for cinder, one for nova, one for neutron [or a mix of those]
16:21:25 cool.
16:21:41 i worry about a single-point-of-failure that does everything(tm), from a design perspective
16:21:48 and the "recommended method of deployment"
16:22:07 you can even have multiple per service.
16:22:28 it's a flask app with a sqlite db for caching where things are, and memcached for caching credentials.
16:22:35 load balanced, or do you mean CinderX and CinderY, the same way you'd have multiple cinders today [does that even work?]
16:22:53 load balanced
16:22:56 ok.
16:23:18 so you'd have HA, but at 10000 cinders we have issues [extreme case]
16:23:26 but that is a scale issue, not a "functional" issue
16:24:01 ok, so back to the meeting topic
16:24:11 what is the pitch besides "hey i have cool tech"?
16:24:31 is this "accept this into keystone as a project owned and under openstack" [official + working with the TC], or something else?
16:25:09 the pitch is "this could solve one use case people have been screaming for, with small changes"
16:25:16 does it make sense to pursue that use case
16:25:23 * kmalloc puts down that he has a vested interest in this kind of project due to his employer.
16:25:28 full disclosure
16:25:59 there's the edge use case, where it would allow an edge cloud to fetch images from a "home" cloud
16:26:02 through k2k
16:26:06 right.
16:26:08 knikolla: would it require keystone to own the code? would it be a new feature? would it be a new project under the identity umbrella?
16:26:11 without having a glance server there
16:26:37 lbragstad: i would assume it remains a separate repo, it's isolated code, just under our umbrella if we drive down this path
16:26:47 i wouldn't want it "in" keystone.
16:27:06 ++
16:27:11 the keystone server, that is, so security concerns can be isolated
16:27:20 ok - are there docs on mixmatch?
16:27:22 it is a consumer of keystone, just like anything else.
16:27:26 like - how it works with keystone?
16:27:27 i know ayoung was excited about a general-purpose service mesh proxy like istio
16:27:34 but this is probably not THAT general purpose
16:27:53 like how it fits architecturally with everything else?
16:27:54 mixmatch.readthedocs.io
16:28:12 #link http://mixmatch.readthedocs.io/en/latest/reference.html
16:28:26 knikolla, I wasn't excited. I never get excited.
16:28:42 there he is
16:29:01 I'm not 100% sure that access tags would really be either necessary or sufficient
16:29:06 (assume the docs have faults, i have been running low on interns to update them)
16:29:17 what I am learning is that there are a lot of rules people want to be able to put on the resources
16:29:18 ayoung: right, but it was highlighted as a similar case (brain-thinking-wise)
16:29:30 regardless of whether we drive down that path
16:29:37 knikolla: has this idea been socialized anywhere besides irc?
16:29:37 one thing to evaluate is OPA
16:29:42 mailing lists or whatnot?
16:30:14 Istio + OPA is a viable approach for a microservice mesh, might be something we make optional but supported?
16:30:28 lbragstad: the proxy in general, or the fine-grained permissions use case?
16:30:38 the use case we are discussing here
16:30:39 we are in the very early stages of exploring OPA fwiw
16:31:11 we being my downstream team
16:31:35 lbragstad: no.
16:32:20 ok
16:32:35 it seems interesting, and i can see the use case
16:32:46 but it would be nice to gauge interest, too
16:32:58 especially if keystone is going to own it
16:33:27 if keystone owns it, i'm by implication almost 100% on keystone, haha
16:33:52 well - that's a good reason to own it, but i'm biased :)
16:34:43 i'd like to know a little more about how it works and how it fits in with the various pieces
16:34:46 but i'll go read the docs
16:34:52 and probably come back with questions
16:35:00 ++
16:35:04 i agree i need more time with it
16:35:06 does anyone else have questions on this?
16:35:12 please do, anything, just ping me
16:35:16 but tentatively i think it could fit within keystone's umbrella
16:35:24 knikolla: do you have anything else to share?
16:35:41 or potentially closer to KSM's umbrella.
16:35:47 but same thing
16:36:00 i was thinking about KSM and how it could interact with it.
16:36:30 * knikolla hands over the mic.
16:36:46 ping me on -keystone anytime with questions.
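[editor's note] The proxy flow described above (nova talks to what it thinks is cinder; the proxy looks up which deployment owns the volume, does a K2K exchange only when the volume is remote, and forwards the call) can be sketched roughly as below. This is a minimal illustration, not mixmatch's actual code: every name here (`ServiceProvider`, `VOLUME_LOCATIONS`, `route_attach_request`) is hypothetical, and the real project uses a Flask app with a sqlite location cache as mentioned in the meeting.

```python
# Hypothetical sketch of cross-deployment request routing, in the spirit of
# the mixmatch proxy discussed above. Not mixmatch's real API.
from dataclasses import dataclass


@dataclass
class ServiceProvider:
    name: str
    cinder_url: str  # cinder endpoint in that deployment


# Stand-in for the location cache ("sqlite db for caching where things are").
VOLUME_LOCATIONS = {
    "vol-123": ServiceProvider("local", "https://local.example/volume/v3"),
    "vol-456": ServiceProvider("edge1", "https://edge1.example/volume/v3"),
}


def route_attach_request(volume_id: str, local_sp: str = "local") -> dict:
    """Decide where to forward an attach call and whether K2K is needed.

    Only when the volume lives in a remote deployment does the proxy need a
    K2K SAML exchange to obtain a token scoped to the right project there;
    local requests pass straight through with the caller's own token.
    """
    sp = VOLUME_LOCATIONS[volume_id]
    return {
        "forward_url": f"{sp.cinder_url}/volumes/{volume_id}",
        "needs_k2k": sp.name != local_sp,
    }
```

Either way, nova never learns the request crossed a cloud boundary; it just sees a cinder response.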
16:36:50 sounds good
16:36:54 thanks for bringing it up
16:36:58 it also [might] be something that doesn't need to be a full-fledged service
16:37:17 but something we could wrap into a library if we wanted to bake it in [something around ksa]
16:37:24 i'm going to jump into the next topic since we have several other things to get through
16:37:29 but that requires code changes in the other services.
16:37:31 we can circle back in open discussion if that's cool
16:37:33 * kmalloc lets the topic change
16:37:40 #topic auditing improvements
16:37:45 i don't see rm_work around
16:37:55 yeah.
16:38:00 but the context is that he wants to propose adding soft deletes to the keystone API
16:38:07 #link http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-05-29.log.html#t2018-05-29T20:42:15
16:38:17 soft deletes good
16:38:18 in case anyone is interested or curious ^
16:38:30 but if he shows up, i'll let him make his case
16:38:31 i still hate soft deletes as a concept
16:38:44 i would accept the code/spec if someone wants to move forward with it
16:39:14 i pinged him in -keystone, but we'll see if we can catch him later
16:39:15 is this related to resource cleanup?
16:39:24 cmurphy: i think it is.
16:39:25 kmalloc, once someone claims a project ID, that ID is a fact that we should not delete.
At least not for a long, long while
16:39:26 kind of
16:39:28 from the eavesdrop
16:39:37 i asked about that, and i had a hard time getting a straight answer
16:39:43 it sounds like it was, but it wasn't
16:39:48 think of it like the requirement that a company keep its files for X number of years
16:39:50 ayoung: i expect we will never have a project id collision with uuid4
16:40:20 FTR, i am 100% against issuing tokens for "deleted" [soft or otherwise] projects
16:40:23 not collision
16:40:25 #link http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-05-29.log.html#t2018-05-29T21:13:18
16:40:42 but i don't really care if we keep records around.
16:40:45 #link http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-05-29.log.html#t2018-05-29T21:21:52
16:40:54 it is a record that user X had project Y, one that we can recreate and, if need be, use to perform operations in the cloud based on that ID in the future
16:41:22 recreation is unlikely to be reasonably supportable in many cases [without implicit renaming]
16:41:31 well, s/recreation/undelete
16:41:36 let's not say it's being recreated
16:41:47 because it really isn't
16:42:18 if we want to add a "deleted" column to projects and make the DELETE call just set that value, i'm really not terribly unhappy with the idea.
16:42:33 I assume a "deleted" project would be considered "disabled" as well? so we wouldn't issue tokens for it
16:42:34 and let cloud ops undelete/look into history.
16:42:34 as i thought about this, the bit that felt weird is that we have enabled/disabled, and now we'd be adding soft deletes to the mix
16:42:53 lbragstad: nah, we supplant "soft delete" with our hard delete
16:43:25 and track deleted time, with a "cleanup" option to remove records for projects marked deleted before a given date
16:43:25 so what's the difference between soft delete and disabled?
16:43:34 disabled doesn't free the name
16:44:06 disabled is "the project exists but is unusable"
16:44:14 deleted is "the project effectively doesn't exist"
16:44:23 i would still purge roles/access etc
16:44:26 with deleted
16:44:41 you just maintain the project record, and we allow the cloud admin to introspect projects that are deleted.
16:44:47 [when, by whom, etc]
16:44:54 have to drop early to make an appt, sorry all!
16:45:00 hrybacki: have a good one man
16:45:02 couldn't we emit a cadf notification with that info?
16:45:17 thanks kmalloc !
16:45:17 o/
16:45:18 project deleted [who, when, other]
16:45:48 we can. but maintaining the project record could be useful
16:46:06 cadf notifications are ... let's say ephemeral, to be nice
16:46:11 they sometimes miss, or are missed
16:46:38 ++ agree with kmalloc
16:46:38 well - YMMV with notifications
16:46:52 the change we're looking at here is a db migration for a couple of new columns and a change to the internal behavior of delete
16:46:54 one of the use cases rm_work brought up was around doing something with orphaned resources
16:47:16 once something consumes a notification for a deleted project, the project is already gone
16:47:19 true
16:47:43 can't an admin token effectively just work for deleting them regardless of project deletion?
16:47:49 delete would still do everything you'd expect, except it wouldn't remove records.
16:47:59 knikolla: not in all services
16:48:15 it's hit and miss
16:48:34 i'm less worried [right now] about the resource cleanup; i'll want proposals on how to address the security concerns
16:49:03 but i'm fine with maintaining records in the db [with a purge action/api, based upon "datetime"] if that is what folks want
16:49:07 for audit/etc
16:49:18 the key is you'll be able to see the ids and whatnot
16:49:35 right now, you may not know how long VM X was supposed to be dead, because the project is gone
16:49:39 when was it deleted?
16:49:57 or if there was a network event that caused you to miss the "delete project" event
16:50:15 that's another concern with notifications
16:50:26 the way i'd accept this proposal is if we are changing the delete behavior to maintain rows
16:50:28 and records
16:50:33 not "add another form of delete"
16:50:37 that is special and magical
16:50:56 either way - i wanted to share the context
16:51:11 because it sounds like rm_work would be interested in doing this depending on our reactions
16:51:13 and if rm_work or someone else wants to do it, i'll happily review the spec/code
16:52:01 but - something to think about if you think this is something that is useful
16:52:32 #topic default roles work has begun
16:52:40 thanks hrybacki for getting this stuff rolling
16:52:56 #link https://review.openstack.org/#/c/572243/
16:53:05 nice to see implementations coming through
16:53:14 i think he just wanted to get this on people's radar
16:53:28 awesome!
16:53:41 #topic open discussion
16:53:54 we have a few minutes left if folks have questions
16:55:16 what should I eat for lunch?
16:55:29 teriyaki spicy chicken
16:55:37 that's what i just ordered for delivery
16:56:02 gagehugo: pho
16:56:12 gagehugo: it's a pho kind of day
16:56:19 both sound good
16:56:26 every day is a pho kind of day
16:56:29 lbragstad: ++
16:57:11 if we don't have anything else - we can call it a few minutes early
16:57:25 * kmalloc goes and submits a new laptop refresh to red hat... so the nag emails stop =/
16:57:30 with my few extra minutes
16:57:34 thanks for coming, i appreciate it!
16:57:44 #endmeeting
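[editor's note] The soft-delete semantics discussed in the auditing topic above (a hard DELETE replaced by "keep the row, record when and by whom, free the name, refuse tokens, purge later by datetime") can be sketched as follows. This is purely illustrative: `ProjectStore` and its methods are hypothetical names, not keystone's actual schema or API.

```python
# Hypothetical sketch of the soft-delete semantics discussed in the meeting.
from datetime import datetime, timezone


class ProjectStore:
    def __init__(self):
        self._projects = {}  # id -> {"name", "deleted_at", "deleted_by"}

    def create(self, project_id, name):
        # Unlike "disabled", "deleted" frees the name: uniqueness is
        # enforced only among live projects.
        if any(p["name"] == name and p["deleted_at"] is None
               for p in self._projects.values()):
            raise ValueError("duplicate project name")
        self._projects[project_id] = {
            "name": name, "deleted_at": None, "deleted_by": None,
        }

    def delete(self, project_id, actor):
        # Soft delete: keep the record for audit, note when and by whom.
        p = self._projects[project_id]
        p["deleted_at"] = datetime.now(timezone.utc)
        p["deleted_by"] = actor

    def can_issue_token(self, project_id):
        # Never issue tokens for deleted projects.
        p = self._projects.get(project_id)
        return p is not None and p["deleted_at"] is None

    def purge(self, before):
        # The "cleanup" option: drop records deleted before a cutoff date.
        self._projects = {
            pid: p for pid, p in self._projects.items()
            if p["deleted_at"] is None or p["deleted_at"] >= before
        }
```

Compared with emitting only a CADF notification, the row survives a missed event: an operator can still ask "when was this project deleted, and by whom?" until a purge runs.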