16:02:42 <lbragstad> #startmeeting keystone
16:02:42 <openstack> Meeting started Tue May 29 16:02:42 2018 UTC and is due to finish in 60 minutes.  The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:43 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:45 <openstack> The meeting name has been set to 'keystone'
16:02:56 <lbragstad> i knew i was forgetting something...
16:03:02 <gagehugo> lol
16:03:15 <lbragstad> #topic summit recap
16:03:37 <lbragstad> so - obviously last week was the summit and i'm working on a summary that i hope to share soon
16:04:40 <lbragstad> #topic unified limits specification follow up
16:04:52 <lbragstad> the last session for the week was on unified limits
16:05:09 <lbragstad> and i felt it was really productive, we got a bunch of good feedback on wxy|'s specification
16:05:27 <lbragstad> i spent some time friday working the latest comments from that session into the current spec
16:05:33 <lbragstad> #link https://review.openstack.org/#/c/540803/
16:05:49 <lbragstad> #link https://review.openstack.org/#/c/540803/10..11/specs/keystone/rocky/strict-two-level-enforcement-model.rst
16:05:49 <kmalloc> lbragstad: the conversation(s) ayoung and I had could simplify the lookups even in the "fluid" quota model we have proposed.
16:06:05 <kmalloc> avoiding needing to do the lookup across all children.
16:06:17 <lbragstad> cool
16:06:19 <kmalloc> i can add comments and ayoung made a post that encapsulates it somewhat
16:06:30 <lbragstad> do you have a link?
16:06:37 <kmalloc> it also would make james' needs easy.
16:06:39 <kmalloc> sec.
16:07:05 <wxy|> kmalloc: the one ayoung post in his blog? http://adam.younglogic.com/2018/05/tracking-quota/#more-5542
16:07:12 <hrybacki> o/
16:07:40 <lbragstad> oh - nice
16:07:44 <lbragstad> i'll have to read that
16:08:08 <kmalloc> yeah
16:08:12 <kmalloc> that's it.
16:08:30 <kmalloc> tl;dr of it, store claims/consumption upwards in the tree
16:08:46 <kmalloc> so a child stores its usage and propagates it to the parent
16:08:58 <kmalloc> so we do an upwards traverse of the tree to collect the usage.
16:09:12 <kmalloc> so worst-case is a _MAX_DEPTH_IN_KEYSTONE lookup
16:09:31 <lbragstad> as opposed to a horizontal calculation?
16:09:34 <kmalloc> yes
16:09:37 <lbragstad> hmm
16:09:39 <lbragstad> interesting
16:09:56 <lbragstad> so - is this going to affect what we have proposed for the two-level-strict model?
16:10:06 <kmalloc> it would make the two-level model more efficient
16:10:11 <lbragstad> ok
16:10:12 <kmalloc> as you scale children out
16:10:29 <lbragstad> i'll parse that today then and see how/if we can work it into the specification
16:10:31 <kmalloc> 10000 children is still: Check Local, Check Parent
16:10:51 <kmalloc> ayoung's post expands a bit more on it, but the very short summary i provided is the core
16:10:56 <lbragstad> i'd like to try and get that specification merged by the EOW
16:11:05 <kmalloc> it doesn't solve across-endpoint quota checks
16:11:13 <kmalloc> but that is external to <local endpoint>
16:11:18 <ayoung> yep
16:11:22 <kmalloc> unless the limit is endpoint specific
16:11:29 <kmalloc> which is also fine.
16:11:39 <lbragstad> ok
16:12:15 <rmascena> that looks like a great approach, since we probably don't have a big depth in the project tree
16:12:20 <kmalloc> the idea is you'll need to store a child record on the parent that can be aggregated [somehow; eyes glossing over specific implementation]
16:12:30 <kmalloc> rmascena: we also have a fixed max limit iirc
16:12:35 <kmalloc> which was... 8?
16:12:44 <lbragstad> yeah - i don't think it was even that high
16:13:07 <kmalloc> it still gets expensive to aggregate the data in 10000 nodes, but it's not 10000 lookups, it's _MAX_DEPTH lookups with a sum()
16:13:36 <rmascena> kmalloc, gotcha
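The upward-propagation idea kmalloc describes above (each project stores its own usage plus its children's aggregated usage, so a limit check walks at most _MAX_DEPTH parents) can be illustrated with a minimal, hypothetical Python sketch. The class and names below are illustrative only, not keystone's actual implementation:

```python
# Hypothetical sketch: consumption is propagated upward on commit, so a
# claim check is "check local, check parent, ..." up to the depth cap,
# regardless of how many sibling children exist.

class Project:
    def __init__(self, name, limit, parent=None):
        self.name = name
        self.limit = limit      # quota limit for this subtree
        self.parent = parent
        self.usage = 0          # local usage + propagated child usage

    def claim(self, amount):
        """Claim `amount` of a resource, enforcing limits up the tree."""
        # Check upward: worst case is max-depth lookups, never a scan
        # across all descendants.
        node = self
        while node is not None:
            if node.usage + amount > node.limit:
                raise ValueError("limit exceeded at %s" % node.name)
            node = node.parent
        # Commit: propagate the consumption upward so every parent
        # always holds the aggregate of its subtree.
        node = self
        while node is not None:
            node.usage += amount
            node = node.parent

root = Project("root", limit=10)
child = Project("child", limit=6, parent=root)
child.claim(5)   # child is now at 5/6, root at 5/10
```

With this shape, adding a 10000th child costs nothing extra on a claim: the check is still "check local, check parent."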
16:13:40 <kmalloc> James' use-case made me really think about the implementation.
16:13:47 <lbragstad> #link https://github.com/openstack/keystone/blob/cbc6cac4c01a4d7836a726fb30730ea603cde176/keystone/conf/default.py#L70
16:14:00 <kmalloc> crap.. we REALLY made that configurable =/
16:14:06 <lbragstad> yep
16:14:09 <kmalloc> that makes me cry
16:14:18 <kmalloc> can we deprecate that option? ;)
16:14:26 <lbragstad> that can only be 2 with the new model
16:14:38 <ayoung> Nah, leave it
16:14:38 <kmalloc> right, so lets deprecate that option while we're at it.
16:14:43 <ayoung> we can't solve this
16:14:45 <kmalloc> let the quota model dictate the max
16:14:55 <ayoung> need something like Istio or Congress running in front
16:15:06 <ayoung> you need active participation, and Keystone is too passive
16:15:09 <kmalloc> ayoung: we're not going to get that right now, so solve this the best we can.
16:15:26 <kmalloc> and if the quota model allows storage to something like etcd, we can share the data across endpoints
16:15:35 <ayoung> depth of the project tree is irrelevant
16:15:48 <kmalloc> depth is relevant even with congress or istio
16:16:04 <kmalloc> because you still need to look at the hierarchy
16:16:15 <lbragstad> the bummer is that that option affects the API, and so does the enforcement model that you use
16:16:15 <kmalloc> at 1000 deep [we don't set a max], it won't work well.
16:16:27 <kmalloc> lbragstad: so lets deprecate that option.
16:16:40 <kmalloc> minimize options that impact API
16:16:47 * kmalloc gets off soapbox
16:17:24 <lbragstad> ok - looks like we have some things to work into the specification then this week
16:17:31 <kmalloc> anyway, the other takeaway is the quota-model needs to be very opinionated on what data is stored.
16:17:45 <kmalloc> it is on the quota model to define what we need to store, not "nova"
16:17:45 <lbragstad> but - i also took a stab at trying to workout the context manager stuff for the oslo.limit library
16:17:57 <lbragstad> curious if anyone would be interested in double checking my work there
16:18:00 <kmalloc> since nova will be moving to the generic-resource type labels
16:18:12 <kmalloc> lbragstad: i'll review/look it over, ping me when you need it
16:18:25 <lbragstad> it's ready for review and it's in the code examples in the current spec
16:18:28 <lbragstad> ps 11
16:18:31 <kmalloc> ok
16:18:37 <kmalloc> will poke at it today
16:18:52 <lbragstad> we had a list of things to change wrt integrating oslo.limit
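For readers catching up, the context-manager pattern being discussed for oslo.limit might look roughly like the following. This is a hedged sketch of the shape of the idea only, not the real oslo.limit API; `Enforcer` and `usage_callback` are invented names, and rollback of a failed claim is out of scope here:

```python
# Hypothetical sketch: a service opens an enforcement context, provisions
# resources inside it, and the claim is verified against the limit when
# the context exits.

class Enforcer:
    def __init__(self, resource, limit, usage_callback):
        self.resource = resource
        self.limit = limit
        # The consuming service supplies the callback that counts usage;
        # the limit library stays ignorant of service-specific resources.
        self.usage_callback = usage_callback

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            return False  # don't mask the service's own errors
        if self.usage_callback(self.resource) > self.limit:
            raise RuntimeError("%s over limit" % self.resource)
        return False

usage = {"cores": 0}

def count_cores(resource):
    return usage[resource]

with Enforcer("cores", limit=4, usage_callback=count_cores):
    usage["cores"] += 3   # service provisions resources inside the context
```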
16:19:03 <kmalloc> but i warn you, unless your example says "HEEEEEEREEEEEEEE's JOHNNY" i'm going to -1billion it :P
16:19:04 <kmalloc> >>
16:19:06 <kmalloc> <<
16:19:21 <kmalloc> ^_^
16:19:43 <lbragstad> :)
16:19:43 <kmalloc> i think the changes for oslo.limit are very positive
16:20:08 <lbragstad> i do too... anything else on the unified limit stuff?
16:20:30 <kmalloc> i think thats really it.
16:20:51 <lbragstad> does anyone have questions? this is probably a fast-moving conversation for the folks who weren't there
16:22:06 * cmurphy will try to catch up with the spec and the blog post
16:22:59 <ayoung> I think the problem is not depth of tree, but rather the number of projects that span from the initial project
16:23:16 <ayoung> a wide flat tree would have the same effective cost
16:23:41 <lbragstad> that was penick's concern
16:24:26 <ayoung> there might be a linear constant for the deep tree (on average twice as many comparisons) but not N! more
16:24:49 <kmalloc> with penick's concern it was both depth and breadth of tree
16:24:58 <kmalloc> because multi-depth was a reality
16:25:21 <kmalloc> but i think we have a workable direction to go
16:25:28 <lbragstad> ack
16:25:30 <ayoung> yeah, but the number of comparisons is based on the number of subprojects/splits of the initial quota
16:25:53 <kmalloc> ayoung: ++
16:25:54 <ayoung> so if a wide flat tree has 100s of sub projects, the effective cost will be pretty much the same
16:26:00 <kmalloc> yes.
16:26:02 <ayoung> coo
16:26:40 <kmalloc> and we can work to address cross-endpoint stuff [kicking the can a little] as we move towards an implementation as long as we don't write ourselves into a corner
16:27:00 <ayoung> let me know when we are done.  I have a couple other things to share
16:27:02 <kmalloc> lbragstad: i have a few open-topic things to seed.
16:27:05 <kmalloc> as well
16:27:10 <ayoung> you first
16:27:17 <lbragstad> #topic open discussion
16:27:22 <kmalloc> 2 quick things
16:27:23 <lbragstad> rmascena:  has a topic too
16:27:49 <rmascena> kmalloc, go ahead
16:27:51 <kmalloc> 1) I'm looking at ETCD for some cross-endpoint storage, notably policy data (loaded via KSM). Please let me know if you really hate this idea.
16:28:28 <kmalloc> since we have a central source, we can start looking at distributing via watches (yes, i know, eventlet weirdness sometimes), vs pure rest
16:28:31 <ayoung> fine by me, but you knew that already
16:28:56 <kmalloc> no direct answer needed here, just stew on it/bug me. other things are a bit higher prio for me atm than that future bit
16:29:18 <lbragstad> i wouldn't mind discussing it a bit more to get a better understanding of it
16:29:28 <lbragstad> i don't know enough now to have an opinion
16:29:41 <ayoung> lbragstad, its kindof like a more effective memcache
16:29:49 <kmalloc> lbragstad: right, so lets start that offline and formulate a better understanding
16:29:55 <kmalloc> it's too complex for meeting time right now.
16:29:57 <kmalloc> and too early
16:29:57 <ayoung> ++
16:30:05 <lbragstad> ack - sounds good, thanks kmalloc
16:30:15 <ayoung> used very heavily in Kubernetes, and effectively
16:30:47 <kmalloc> and #2, i want to start looking at moving (next cycle) Keystone Specs and testing to be heavily focused on Behavior Driven [discuss the behaviors/encode it] vs "intent"
16:30:58 <kmalloc> again, early on discussion and things to think about
16:31:09 <ayoung> kmalloc, is that along the lines of the move to the new planning tool?
16:31:19 <kmalloc> ayoung: it was in the same vein.
16:31:28 <wxy|> kmalloc: +1 for Etcd proposal. I like the watch mechanism
16:31:28 <kmalloc> but more on our process side
16:31:38 <ayoung> Keystone is on the short list for testing it out, right?
16:31:54 <kmalloc> ayoung: yes. and once we're on storyboard we can start discussing other method changes
16:31:59 * hrybacki looks at kmalloc with intrigue
16:32:08 <ayoung> ++
16:32:26 <kmalloc> i want to start moving us in better directions on how we process things, but nothing until we get moving on SB and other community-wide initiatives
16:32:26 <hrybacki> speaking of, lbragstad when is a good time to ping diablo_rojo about that again? I wanna poke at things
16:32:44 * knikolla is late
16:32:52 <lbragstad> i'm sure she'll reach out once they have the migration ready to go
16:33:00 <hrybacki> ack
16:33:03 <kmalloc> but, tl;dr behavior driven development is focusing on the user-story style of things, but also more specific expected behaviors.
16:33:12 * ayoung likes
16:33:14 <kmalloc> I post a USER object to /users and X happens
16:33:23 <lbragstad> i'm not expecting to hear anything this week, with the summit wrap up and things like that
16:33:36 <kmalloc> in informal sentences which can be expanded to the more rigid spec
16:34:06 <kmalloc> trying to encompass ideas -> specs -> code in a better way
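The behavior-driven style kmalloc sketches ("I post a USER object to /users and X happens") can be made concrete with a hypothetical executable expectation. The `fake_post` helper below is a stand-in for a keystone-like API, not a real endpoint:

```python
# Hypothetical sketch: an informal behavior sentence encoded as a small
# executable check, the kind of thing a spec's expected behaviors could
# be expanded into.

def fake_post(path, body, db):
    """Stand-in for POSTing to a keystone-like API."""
    if path == "/users" and "name" in body:
        user = {"id": "u-%d" % (len(db) + 1), "name": body["name"]}
        db.append(user)      # persist the new user
        return 201, user
    return 400, {"error": "invalid request"}

# Behavior: "When I POST a USER object to /users,
#            Then I get a 201 and the user is persisted."
db = []
status, user = fake_post("/users", {"name": "alice"}, db)
```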
16:34:15 <hrybacki> One more question: We landed default roles spec noting a folow-up with changing of the 'auditor' role name to ... 'readonly' ? Leaving us with 'readonly', 'member', and 'admin'
16:34:27 <kmalloc> hrybacki: as i remember, yes
16:34:29 <ayoung> reader
16:34:41 <ayoung> you can have multiple roles, so reader is for reading things
16:34:43 <hrybacki> kmalloc: rope me into that when possible -- I wanna learn from your thought process
16:34:43 * lbragstad likes reader over readonly
16:34:43 * kmalloc hands the microphone to the next person... but if it's adam, turns off the mic first
16:34:58 * ayoung needs no mike
16:35:07 <hrybacki> what about reader, member, and adminer? /s
16:35:15 <ayoung> ++
16:35:16 <kmalloc> hrybacki: sure thing, I spent a chunk of the weekend looking at development methodologies and specifically in python.
16:35:17 <knikolla> reader, because member implying readonly is awkward.
16:35:19 <kmalloc> it was good.
16:35:33 <ayoung> readinator, membenator, administrator
16:35:48 <kmalloc> ayoung: what about burninator?
16:35:53 <kmalloc> cause i think we need trogdor now
16:36:02 <ayoung> reserved for Trogdor
16:36:05 <hrybacki> justreads, readsandmembers, readsandmembersandadmins ?
16:36:07 <gagehugo> kmalloc ++
16:36:08 <kmalloc> ayoung: there is your "delete only" role.
16:36:09 <ayoung> OK...me up?
16:36:20 <lbragstad> ayoung:  go for it
16:36:21 <hrybacki> kmalloc: feel free to shoot me any good reading links
16:36:26 <ayoung> a lot of people spoke to the pain of customizing policy
16:36:28 <kmalloc> hrybacki: i'll dig some up.
16:36:32 <ayoung> esp AT&T doing read only
16:36:37 <ayoung> needs some tooling support
16:36:43 <kmalloc> (hey! this is going to benefit from the etcd thing)
16:36:44 <gagehugo> we love read-only
16:36:53 <hrybacki> gagehugo: yes y'all do :)
16:37:04 <ayoung> so...I was coming up with a spec for a tool to help build policy
16:37:14 <ayoung> and...I guess I should dump to an Etherpad... 1 sec
16:38:16 <ayoung> https://etherpad.openstack.org/p/posse-policy-tool
16:38:27 <ayoung> Sorry about format, but I had it on my blog.  I'll clean up
16:38:35 <ayoung> this is pure requirements, no design at this stage
16:38:53 <hrybacki> #link https://etherpad.openstack.org/p/posse-policy-tool
16:39:07 <lbragstad> so - isn't this what we're going to be implementing with hrybacki's specification?
16:39:12 <ayoung> and the idea is that this is something for offline or install-time use first and foremost, not an API.  Might grow into one in the future.
16:39:24 <ayoung> possibly.
16:39:40 <hrybacki> lbragstad: I think he's reaching for something bigger. a unified way to add custom policy on-top-of these default roles
16:39:41 <ayoung> I was more concerned with getting the requirements clear pre-spec
16:40:07 <ayoung> and this is something that an operator should be able to use to customize policy in a sane way
16:40:16 <ayoung> as part of a larger workflow, without specifying that workflow
16:41:07 <ayoung> I was hoping to whip up a quick POC on that, but it's too big
16:41:37 <kmalloc> i think the further reach, especially something that can do the policy file editing in a clean way (programmatically; let's face it, the DSL is sucky to work with as a human) could be beneficial
16:41:44 <ayoung> we need to collect policy, in a few different forms.  We need a catalog of APIs (VERB + PATH)
16:42:01 <ayoung> we need the mapping from API to policy key
16:42:03 <ayoung> etc.
16:42:23 <ayoung> so lets gather the reqs in one place.  Feel free to hack that doc, etc.  And we can convert to a spec later
16:42:27 <ayoung> OK  second thing
16:43:00 <ayoung> Another end user was pretty vehement that Keystone RBAC was too coarse-grained, that policy enforceable only at the project/role level meant that he could not use it
16:43:07 <ayoung> also from AT&T, so real world deployment
16:43:27 <ayoung> so...I started writing down a label based enforcement scheme that could be done within a project
16:43:39 <ayoung> this would require a lot of remote project support, so long term
16:43:44 <ayoung> and I just want a sanity check on it
16:43:55 <ayoung> #link https://etherpad.openstack.org/p/access-tags-planning
16:44:01 <ayoung> the short short is this:
16:44:05 <kmalloc> it takes some notes from k8s tagging
16:44:22 <ayoung> use the roles that a user has to tag a resource, and then later match the tags when performing operations
16:44:36 <kmalloc> it's a decent concept, but needs a lot of eyes on it to be vetted before moving to something more concrete
16:44:37 <ayoung> and specifically, morphing operations: PUT, DELETE, PATCH
16:44:42 <ayoung> exactly
16:45:23 <ayoung> it takes a page from SELinux, but also tries to be much less ambitious and thus easier to understand and implement
16:45:52 <lbragstad> ok - i have post-it notes lining the bottom of my monitor to review this stuff
16:45:55 <ayoung> but, say I tag a volume with the "DBA" access tag, I need to have the "DBA" role in order to morph it
16:46:01 <ayoung> thanks
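The access-tag check ayoung describes (morphing operations require roles that cover the resource's tags) reduces to a small set comparison. A hypothetical sketch, with illustrative names rather than anything from the etherpad:

```python
# Hypothetical sketch: read operations pass through, while morphing
# operations (PUT, DELETE, PATCH) require the user's roles to cover the
# resource's access tags.

MORPHING_METHODS = {"PUT", "DELETE", "PATCH"}

def may_operate(user_roles, resource_tags, method):
    """Return True if a user with `user_roles` may perform `method`."""
    if method not in MORPHING_METHODS:
        return True  # reads are governed by the normal project scope
    return set(resource_tags) <= set(user_roles)

# A volume tagged "DBA" can only be morphed by someone holding the DBA
# role, but a plain member can still read it.
volume_tags = ["DBA"]
dba_ok = may_operate({"member", "DBA"}, volume_tags, "DELETE")
plain_denied = may_operate({"member"}, volume_tags, "DELETE")
read_ok = may_operate({"member"}, volume_tags, "GET")
```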
16:46:20 <ayoung> those two things plus quota was what I got out of the summit
16:46:48 <ayoung> the tool maybe should live in oslo-policy, so we can consider that, too
16:46:56 <lbragstad> yeah
16:47:00 <lbragstad> something to think about
16:47:07 <lbragstad> i have an idea that i was talking to hrybacki about
16:47:22 <lbragstad> and it sounds like it might have some overlap with that tool
16:47:31 <knikolla> ayoung: want me to try a simpler solution with the proxying api calls between projects?
16:47:51 <hrybacki> lbragstad: you'll have to refresh my memory
16:47:56 <ayoung> knikolla, Istio!
16:48:01 <ayoung> but, yes, there is that
16:48:14 <lbragstad> hrybacki: it was after that moon presentation
16:48:40 <ayoung> knikolla, can you talk through the proxy-between approach, and where it is now?
16:48:52 <hrybacki> lost in the ethereum lbragstad =/
16:49:27 * kmalloc makes a bad etherpad joke.
16:49:38 <ayoung> knikolla, I suspect, though, that if we proxy like you say, we would need a way to manage cross-project access control at the keystone level eventually
16:49:44 <ayoung> so there is still an admin overhead
16:49:47 <kmalloc> remember rmascena has something to talk about too. 10 left [timecheck]
16:49:52 <knikolla> github.com/openstack/mixmatch is an API proxy for service to service communication with k2k. theoretically instead of k2k, it can rescope the token for another project.
16:49:55 <kmalloc> 10m*
16:49:59 * ayoung surrenders the conch shell
16:50:09 <knikolla> ayoung: we can use HMT for that. a larger project and subprojects.
16:50:24 <ayoung> knikolla, ++
16:50:26 <rmascena> I hope that will be quick :) can I start?
16:50:31 <lbragstad> #link https://bugs.launchpad.net/keystone/+bug/1754677
16:50:32 <openstack> Launchpad bug 1754677 in OpenStack Identity (keystone) "Unable to remove an assignment from domain and project" [High,In progress] - Assigned to Raildo Mascena de Sousa Filho (raildo)
16:50:33 <ayoung> rmascena, go ahead.
16:50:44 <ayoung> knikolla, schedule time for next week to discuss that, please?
16:50:55 <knikolla> ayoung: ++
16:50:57 <raildo> so, enjoying that we are talking about HMT :D
16:51:12 <raildo> there is this bug that lbragstad pointed out
16:51:32 <raildo> and I sent this patch with a reproducer for it: https://review.openstack.org/#/c/570438/2
16:52:25 <ayoung> raildo, that commit message needs context in it
16:52:36 <raildo> so, basically we can add role assignments for the domain side and the project side of this entity, but we can't remove it right now, since the table was not prepared for having two assignments for one entity
16:52:47 <raildo> ayoung, yeah, it needs :)
16:53:06 <raildo> ayoung, https://bugs.launchpad.net/keystone/+bug/1754677/comments/1
16:53:07 <openstack> Launchpad bug 1754677 in OpenStack Identity (keystone) "Unable to remove an assignment from domain and project" [High,In progress] - Assigned to Raildo Mascena de Sousa Filho (raildo)
16:53:14 <ayoung> raildo, think you can tackle the solution?
16:54:07 <raildo> ayoung, kind of, should we just change the table schema to make this a real thing?
16:54:32 <raildo> I'm afraid of creating some invalid duplicate entries if we enabled that
16:54:43 <ayoung> should be a mod of the query, not the table, IIUC
16:54:56 <ayoung> check_grant_role_id  needs the type in it
16:55:50 <raildo> ayoung, well, what I understood is that in the role assignment table, we will have two entries for the same role_id, with the same target_id and pointing to the same user_id
16:56:12 <lbragstad> and the python code doesn't actually handle that ambiguity
16:56:14 <lbragstad> right?
16:56:15 <ayoung> raildo, we already have that
16:56:22 <raildo> lbragstad, right
16:56:28 <ayoung> that is what the error message states, we just assume the query returns one
16:56:59 <ayoung> 'type' is domain or project, I think
16:57:04 <lbragstad> so - would we need to inspect each target in the response and make sure it's what we expect it to be (a domain or a project)
16:57:27 <ayoung> or add to the up front query
16:57:46 <ayoung> check_system_grant(self, role_id, actor_id, target_id, inherited):  does not have a type in it
16:57:59 <ayoung> check_grant_role_id  neither
16:58:20 <raildo> ok, so I'll add a type check to this query before trying to remove it, that should be fine
16:58:23 <lbragstad> the manager logic for the system assignments is explicit about types
16:58:28 <lbragstad> in the actual business logic
16:58:33 <lbragstad> and calls to a separate driver
16:58:38 <ayoung> I think that if project_id is None, it's a domain assignment; if project_id is set, it is a project role assignment
16:59:00 <ayoung> should be a change limited to the sql driver
16:59:01 <ayoung> cool?
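The fix ayoung is sketching (disambiguate the grant lookup by assignment type, inferred from whether project_id is set) can be illustrated with a small, hypothetical in-memory version. This is not the actual keystone SQL driver; the row shape and `delete_grant` name are invented for illustration:

```python
# Hypothetical sketch of bug 1754677: a domain assignment and a project
# assignment can share the same role_id/actor_id/target_id, so a delete
# query that ignores the assignment type matches two rows and fails.

assignments = [
    # same role/actor/target stored twice: once for the domain side,
    # once for the project side of the entity
    {"type": "UserDomain",  "role_id": "r1", "actor_id": "u1", "target_id": "t1"},
    {"type": "UserProject", "role_id": "r1", "actor_id": "u1", "target_id": "t1"},
]

def delete_grant(rows, role_id, actor_id, target_id, project_id):
    # per the discussion: project_id unset -> domain assignment,
    # project_id set -> project assignment
    type_ = "UserProject" if project_id else "UserDomain"
    match = [r for r in rows
             if (r["type"], r["role_id"], r["actor_id"], r["target_id"])
             == (type_, role_id, actor_id, target_id)]
    if len(match) != 1:
        raise LookupError("expected exactly one assignment, got %d" % len(match))
    rows.remove(match[0])

# With the type in the filter, deleting the project-side grant leaves
# the domain-side grant untouched.
delete_grant(assignments, "r1", "u1", "t1", project_id="t1")
```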
16:59:28 * lbragstad would love to refactor the domain/project assignment backends to be more explicit and less implicit
16:59:41 * ayoung wants to just kill domains
16:59:59 <raildo> so, let's do it! :D
17:00:05 <lbragstad> raildo: i have your patch open in a tab and i'll review it today
17:00:12 <hrybacki> that hour flew by
17:00:15 <lbragstad> thanks for proposing the test first
17:00:17 <raildo> lbragstad, ok, thanks
17:00:29 <raildo> that's all from my side
17:00:30 <lbragstad> anywho, that went quick
17:00:42 <lbragstad> we're out of time, but office hours is starting in -keystone
17:00:46 <lbragstad> thanks everyone for coming
17:00:48 <lbragstad> #endmeeting