17:56:56 <lbragstad> #startmeeting keystone-office-hours
17:56:57 <openstack> Meeting started Tue Apr 24 17:56:56 2018 UTC and is due to finish in 60 minutes.  The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:56:58 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:57:01 <openstack> The meeting name has been set to 'keystone_office_hours'
17:58:13 <kmalloc> lbragstad: email sent to -dev
17:58:35 <kmalloc> re stable
17:59:03 <lbragstad> checking
17:59:16 <gagehugo> nice
17:59:32 <lbragstad> fyi - https://review.openstack.org/#/c/563974/ and https://review.openstack.org/#/c/562716/ close bugs
18:01:00 <gagehugo> lbragstad looking
18:01:23 <lbragstad> the one about enforcement limits needs some feedback on the API bits
18:01:38 <lbragstad> GET /v3/limit_model versus GET /v3/limit/model
18:01:48 <lbragstad> i left comments in review and i can respin that pretty each
18:01:50 <lbragstad> easy*
18:31:20 <openstackgerrit> Lance Bragstad proposed openstack/keystone-specs master: Add idea for alternative service catalogs  https://review.openstack.org/564042
18:31:47 <lbragstad> mnaser: ^ relevant to your discussion in -tc the other day
19:13:49 <gagehugo> lbragstad with that double token provider use-case, we should avoid doing this: https://review.openstack.org/#/c/558918/
19:13:50 <gagehugo> right?
19:15:16 <lbragstad> gagehugo: yeah - it was a far fetched use case
19:15:32 <lbragstad> i was just trying to think of things that would require the location of both repositories
19:17:03 <lbragstad> then again - if someone is rolling their own token providers, they might not have an issue just exposing new configuration values
19:17:04 * lbragstad shrugs
19:17:35 <gagehugo> hmm
19:20:44 <gagehugo> leave it up for now, we will likely discuss it once dev work begins on jwt
19:20:50 <gagehugo> I guess*
19:21:01 <lbragstad> yeah - that works
19:28:30 * knikolla finally booked flights/hotel for vancouver.
19:38:13 <gagehugo> \o/
20:26:48 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: Add conceptual overview of the service catalog  https://review.openstack.org/563974
20:36:43 <lbragstad> kmalloc: do you happen to know where the translation you speak of happens? https://review.openstack.org/#/c/530509/4/oslo_context/context.py
20:39:01 <lbragstad> i see some stuff in keystonemiddleware/audit/_api.py
20:39:12 <lbragstad> but that doesn't seem right
20:44:30 <lbragstad> oh...
20:45:05 <lbragstad> https://github.com/openstack/keystonemiddleware/blob/686f7a5b0b13a7ef4c7ce6721e6c9e601816ad45/keystonemiddleware/auth_token/_request.py#L201-L217
20:50:20 <kmalloc> Not off the top of my head
20:50:32 <kmalloc> Will look when done with lunch.
20:53:21 <lbragstad> i think i found a clue
21:17:41 <lbragstad> this is weird, i see where ksm scrubs the headers when it receives a request
21:17:58 <lbragstad> and then it sets them appropriately if the user and service tokens are valid
21:18:03 <lbragstad> which makes total sense
21:18:15 <lbragstad> the request object trucks along through middleware
21:18:23 <lbragstad> following the wsgi pipeline
21:19:15 <lbragstad> and then in the case of nova, it reaches a different piece of middleware called NovaKeystoneContext that processes the headers using oslo.context to build a context object for nova
21:20:08 <lbragstad> where it gets strange is that ksm sets the headers like X-Project-Id
21:20:39 <lbragstad> but oslo.context looks for request.headers['HTTP_X_PROJECT_ID']
21:20:59 * lbragstad goes to dig in requests
21:22:59 <lbragstad> s/requests/webob/
21:28:14 <lbragstad> baha - https://github.com/Pylons/webob/blob/4e8c7ecc20bed6ce6c64daa3dcb97cc328058e8c/src/webob/headers.py#L111-L115
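(A minimal sketch of the translation being traced here, assuming only webob: WSGI stores HTTP headers in the environ under CGI-style keys such as HTTP_X_PROJECT_ID, and webob's EnvironHeaders maps friendly header names onto those keys, which is why keystonemiddleware can set X-Project-Id while oslo.context reads HTTP_X_PROJECT_ID.)

    # Illustrative only; shows webob's header <-> environ key mapping.
    from webob import Request

    request = Request.blank('/')

    # Setting the header the way keystonemiddleware does...
    request.headers['X-Project-Id'] = 'abc123'

    # ...lands in the WSGI environ under the CGI-style key that
    # oslo.context looks for further down the pipeline.
    print(request.environ['HTTP_X_PROJECT_ID'])  # abc123
    print(request.headers['X-Project-Id'])       # abc123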
22:15:42 <openstackgerrit> Lance Bragstad proposed openstack/keystonemiddleware master: Introduce new header for system-scoped tokens  https://review.openstack.org/564072
22:43:47 <kmalloc> lbragstad: aha
03:13:03 <openstackgerrit> wangxiyuan proposed openstack/keystone master: Update IdP sql model  https://review.openstack.org/559676
03:13:04 <openstackgerrit> wangxiyuan proposed openstack/keystone master: Fix the test for unique IdP  https://review.openstack.org/563812
03:39:54 <openstackgerrit> wangxiyuan proposed openstack/keystone master: Invalidate the shadow user cache when deleting a user  https://review.openstack.org/561908
06:27:48 <openstackgerrit> Monty Taylor proposed openstack/keystoneauth master: Turn normalize_status into a class  https://review.openstack.org/564110
06:55:02 <openstackgerrit> wangxiyuan proposed openstack/keystonemiddleware master: Follow the new PTI for document build  https://review.openstack.org/562951
08:32:35 <Horrorcat> is the unified limits thing used by services already? (in pike)
08:37:07 <Horrorcat> how do unified limits interact with reselling?
08:37:16 <Horrorcat> eh, s/reselling/keystone federation/
12:40:25 <openstackgerrit> Monty Taylor proposed openstack/keystoneauth master: Turn normalize_status into a class  https://review.openstack.org/564110
12:41:19 <mordred> kmalloc, lbragstad, ^^ I think that should address the review comments
12:41:31 <mordred> kmalloc: ping me when you're up and we can talk aout your compat concern and stuff
14:01:41 <kmalloc> mordred: o/
14:01:44 <kmalloc> Here
14:02:41 <kmalloc> Drinking absurdly strong coffee
14:02:46 <kmalloc> But here.
14:04:03 <rabel> hi there. what does the keystone policy service actually do? as far as i understand, policies are handled by every openstack component individually (that is by using oslo.policy). or is the keystone architecture as described at https://docs.openstack.org/keystone/latest/getting-started/architecture.html not up-to-date?
14:05:45 <lbragstad> rabel: that was an API that was introduced shortly after the v3 API but never really fully realized
14:11:30 <mordred> kmalloc: \o/
14:12:35 <kmalloc> So, my only concern is that the returned catalog will be different between releases of ksa with this for the same unchanged OpenStack (in theory)
14:13:02 <kmalloc> And that, in itself, would be a behavior change in ksa / might break someone using it
14:13:08 <mordred> kmalloc: so - in general, the whole aliases thing is aimed at letting deployers start to deploy things with better service types without breaking their users - at the moment nobody should be doing it because it would break everyone. it SHOULD return all the same data for the existing clouds
14:13:32 <mordred> of course, I could totally be wrong
14:13:41 <kmalloc> If it was anything but KSA I would have already +2'd it
14:13:48 <kmalloc> FTR
14:13:48 <mordred> yah - totally
14:14:24 <kmalloc> I am ok with it if this should just work and not break anyone... But you get my concern :)
14:14:53 <mordred> I'm wondering what we should do to validate whether the concern is a thing ... oh, and YES - I totally agree on not wanting to do the thing you are concerned with
14:14:57 <kmalloc> But I want to be 3x sure we aren't really changing any behavior that someone might be using.
14:15:19 <kmalloc> Yeah, I don't know how to be sure.
14:15:28 <mordred> the one thing I could come up with in my head that could be different ...
14:15:32 <kmalloc> Which is why I wanted to chat ;)
14:15:53 <rabel> lbragstad: so keystone in reality only consists of 5 components instead of 6?
14:15:57 <mordred> is that if a deployer deployed cinder using service-type block-storage and didn't use volumev2 or volumev3 - and the user requested service-type volumev2
14:16:18 <kmalloc> Yep, that is the only real example I can come up with.
14:16:20 <lbragstad> rabel: i guess it depends on your use case
14:16:31 <mordred> kmalloc: in that case, today the user would get "can't find endpoint" and after this change they'd get the block storage endpoint
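(For context, a small sketch of the alias resolution being discussed, assuming the os-service-types library that the ksa work builds on; the exact alias list comes from the service-types-authority data, so the output shown in the comments is only indicative.)

    # Illustrative only; resolve an old versioned alias to the official type.
    import os_service_types

    service_types = os_service_types.ServiceTypes()
    print(service_types.get_service_type('volumev2'))   # block-storage
    print(service_types.get_aliases('block-storage'))   # e.g. ['volumev3', 'volumev2', ...]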
14:16:54 <lbragstad> but as far as the policy stuff is concerned, that direction never really panned out; the idea was that we were going to be able to pull all policies across all of openstack into a single service
14:17:24 <kmalloc> How likely is that though.
14:18:35 <kmalloc> mordred: I am inclined to approve this patch as is.
14:19:22 <mordred> kmalloc: cool. I think as a second verification, I'll write the sdk consumption patch so we can see it do all of shade/sdk functional tests too
14:19:34 <kmalloc> Heck, I kind of want to make KSA2 and just dump some stuff and clean up things. :P
14:19:40 <mordred> kmalloc: and we should make sure that's happy before we cut the release
14:19:47 <mordred> kmalloc: dude. srrsly
14:19:50 <kmalloc> Yeah, sounds good.
14:19:55 <mordred> the number of flags we've grown
14:20:01 <rabel> lbragstad: is this going to happen in the future? i mean having a central policy service? should we add a note to https://docs.openstack.org/keystone/latest/getting-started/architecture.html that keystone's policy service is not actually used and the corresponding api is deprecated?
14:20:08 <kmalloc> You know, it means we are successful with it.
14:20:23 <kmalloc> And we have been careful to make it stable.
14:20:36 <kmalloc> We did a good job (tm) on ksa
14:20:42 <lbragstad> rabel: i doubt it
14:20:58 <lbragstad> but there have been variants of the idea proposed
14:21:17 <lbragstad> that don't include a full fledged policy mapping for every service in keystone
14:21:49 <kmalloc> mordred: we can chat about ksa2 (strictly API compat, but behavioral cleanup) next release
14:22:22 <kmalloc> Let's get a sign off from lbragstad  on this change.
14:22:55 <kmalloc> And we can roll forward (sdk stuff as a dependent change)
14:23:10 * mordred nudges lbragstad
14:23:26 * lbragstad looks for a link
14:24:14 <kmalloc> lbragstad: service aliases. Sorry on mobile (+_+)
14:24:30 <lbragstad> https://review.openstack.org/564110 ?
14:25:08 <kmalloc> https://review.openstack.org/#/c/462218/4
14:25:23 <kmalloc> But yes, that stack
14:25:38 <kmalloc> See my comments and the quick chat mordred and I had here just now.
14:25:55 <kmalloc> But I'm inclined to accept this, but want other eyes first.
14:26:27 <rabel> lbragstad: thank you! what do you think about adding a note to the docs?
14:26:45 <lbragstad> rabel: yeah - i don't see an issue with that
14:27:16 <lbragstad> fwiw - the API reference has a warning https://developer.openstack.org/api-ref/identity/v3/index.html#policies
14:27:48 <lbragstad> as far as the architecture document is concerned, i wouldn't be opposed to just removing the policy section
14:28:12 <lbragstad> it has no real significance on the overall architecture of keystone
14:28:25 <lbragstad> and is mostly boilerplate code
14:28:52 <lbragstad> thinking about it from someone trying to digest that document, it's just an extra thing they have to hold in their mental map
14:28:53 <rabel> lbragstad: ok, I will create a change for deleting that section
14:29:31 <lbragstad> rabel: thanks
14:31:03 <lbragstad> kmalloc: you're good with the test added at the end of that change?
14:33:02 <mordred> lbragstad, kmalloc: cmurphy also often has good eyes for spotting when I do something crazy
14:33:16 <cmurphy> heh
14:33:53 <cmurphy> trying to finish something else up, can look later tonight if it isn't already taken care of
14:35:15 <rabel> lbragstad: further down in that same document there are some keystone internals, including more stuff about policy. i don't touch that stuff, right?
14:36:00 <lbragstad> rabel: yeah - i would leave that because it's describing the relationship we have with oslo.policy to do policy enforcement
14:38:41 <openstackgerrit> David Rabel proposed openstack/keystone master: Remove policy service from architecture.rst  https://review.openstack.org/564239
15:04:34 <openstackgerrit> David Rabel proposed openstack/keystone master: Remove policy service from architecture.rst  https://review.openstack.org/564239
15:07:01 <openstackgerrit> David Rabel proposed openstack/keystone master: Remove policy service from architecture.rst  https://review.openstack.org/564239
15:37:24 <openstackgerrit> Lance Bragstad proposed openstack/keystoneauth master: Use Status variables in tests  https://review.openstack.org/564258
15:53:23 <openstackgerrit> Lance Bragstad proposed openstack/keystoneauth master: Reference class variable in Status  https://review.openstack.org/564262
16:01:46 <lbragstad> kmalloc: for that ksm change, wouldn't we want service users who have roles on the system to also have that header populated?
16:02:22 <kmalloc> uhm...
16:02:25 <kmalloc> maybe?
16:02:39 <kmalloc> it wasn't clear to me.
16:03:56 <lbragstad> i mean - that might be a little bit into the future
16:04:16 <lbragstad> but, i figured it might be possible to have service users that have roles on the system
16:05:51 <kmalloc> trying to figure that out
16:06:45 <kmalloc> also do we want to call this: openstack-system-scope: system
16:06:51 <kmalloc> or .. openstack-scope-type ?
16:07:12 <lbragstad> that's a good question
16:07:21 <lbragstad> we already have the project-id domain-id bits
16:07:27 <lbragstad> which relay scope
16:07:37 <kmalloc> annnd i wouldn't want to call it openstack-auth-system-scope and openstack-service-system-scope
16:07:46 <kmalloc> which is why i don't think it belongs in the template
16:08:06 <kmalloc> (thats what the template does)
16:09:32 <lbragstad> got it
16:09:36 <lbragstad> that's fair
16:10:04 <openstackgerrit> Lance Bragstad proposed openstack/keystonemiddleware master: Introduce new header for system-scoped tokens  https://review.openstack.org/564072
16:10:05 <lbragstad> new version ^
16:10:08 <kmalloc> cool.
16:10:11 <lbragstad> no tests yet
16:10:18 <lbragstad> but just working through the feel of it
16:10:43 <kmalloc> yah
16:27:33 <lbragstad> i'm going to take lunch quick, but i'll get the ksm patch tested this afternoon
16:40:09 <openstackgerrit> Merged openstack/keystoneauth master: Expose version status in EndpointData  https://review.openstack.org/559125
17:31:32 <kmalloc> mordred: once lbragstad's questions are answered i'm good with the aliases change
17:37:05 <mordred> kmalloc: awesome. in exchange, I have written you a whole new patch to be excited about
17:37:13 <mordred> kmalloc: are you ready to be excited by it?
17:37:14 <kmalloc> oh no :P
17:37:43 <mordred> kmalloc: (I realized there was one last piece lurking when I went to write the sdk patch)
17:37:51 <kmalloc> hehe
17:38:12 <openstackgerrit> Monty Taylor proposed openstack/keystoneauth master: Infer version from old versioned service type aliases  https://review.openstack.org/564299
17:38:40 <mordred> you're gonna like the comment I left in this one - I at least I hope you are
17:38:45 <kmalloc> god, volume2 version=3
17:38:50 <mordred> right?
17:38:52 <kmalloc> wow, we are baaaaaaad.
17:38:53 <kmalloc> :P
17:39:11 <mordred> this is all literally because of the original impl of python-novaclient that got cargo culted everywhere
17:39:50 <mordred> since it hid the version discovery docs, there was no way for cinder to rev the version without volumev2 - and we're STILL cleaning up after it years later
17:40:04 <mordred> EVEN THOUGH openstack has been able to support doing this cleanly since basically day one
17:40:05 <kmalloc> yeah
17:40:12 <kmalloc> it looks good, going to wait for zuul
17:40:16 <mordred> it's all 100% the fault of a single library
17:40:26 <kmalloc> also... https://review.openstack.org/#/c/564110/2 needs another pass
17:40:28 * mordred stabs everyone
17:40:34 <kmalloc> you can't remove a public function
17:40:48 <kmalloc> but it's a small fix to just use the class for that function
17:41:22 <kmalloc> ah wait. that hasn't been released
17:41:29 <kmalloc> bah, i need to drink coffee before reviewing
17:41:30 <kmalloc> :P
17:42:39 <kmalloc> +2 now
17:42:47 <kmalloc> unless a release is cut between now and when it lands.
17:45:01 <mordred> I'm +2 on the followups from lbragstad ... I feel bad for not co-authoring efried though
17:45:59 <mordred> kmalloc: you saw those, yeah?
17:48:16 <lbragstad> yeah - the whole volumev2, volumev3, block-storage test case kinda makes me sad
17:48:18 <lbragstad> but...
17:48:32 <lbragstad> i assume that's a different problem
17:48:46 <mordred> such sadness
17:49:08 <mordred> lbragstad: I can make a class - and will do that real quick -if it subclasses dict it doesn't break json.dumps
17:49:19 <lbragstad> nice
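(A quick sketch of the point mordred makes above; the VersionData class here is hypothetical and only shows that subclassing dict keeps json.dumps working while still allowing attribute-style access.)

    # Illustrative only; a dict subclass stays json-serializable.
    import json

    class VersionData(dict):
        # hypothetical accessor; the real ksa class may differ
        @property
        def version(self):
            return self.get('version')

    vd = VersionData(version='3.0', status='CURRENT')
    print(vd.version)       # 3.0
    print(json.dumps(vd))   # {"version": "3.0", "status": "CURRENT"}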
18:00:32 <kmalloc> mordred: yeah +2 on those
18:29:28 <lbragstad> hmm - our ksm tests are pretty dated
18:35:02 <lbragstad> we reference two unsupported token providers
19:00:41 <openstackgerrit> Lance Bragstad proposed openstack/keystonemiddleware master: Introduce new header for system-scoped tokens  https://review.openstack.org/564072
19:00:44 <lbragstad> kmalloc: tested ^
19:58:14 <ayoung> lbragstad, does oslo-context have support for system scope?  I assume it does, just want to confirm
19:58:26 <lbragstad> ayoung: it's in the pipe
19:58:36 <lbragstad> https://review.openstack.org/#/c/530509/
19:58:46 <lbragstad> but it needs this too https://review.openstack.org/#/c/564072/
19:58:47 <ayoung> lbragstad, I think that needs to land before https://review.openstack.org/#/c/564072/ middleware will work.
19:58:59 <lbragstad> yeah
19:59:11 <ayoung> I got fooled by that doing is_admin_project.  Jamie helped straighten it out
19:59:11 <lbragstad> middleware will set the header but it won't be recognized by anything
19:59:25 <lbragstad> yeah - i tried reverse engineering the approach in that patch
19:59:26 <ayoung> and thus policy can't get enforced on it
19:59:29 <lbragstad> not sure if i got close or not
19:59:39 <ayoung> have jamielennox look at it
19:59:54 <ayoung> I really don't trust anyone else with oslo-context
20:00:07 <lbragstad> i groked at it for a while
20:00:19 <lbragstad> the ksm stuff is intense too
20:00:30 <lbragstad> the header translation bits threw me for a loop
20:00:30 <ayoung> yep
20:00:43 <ayoung> I'll keep on those reviews.  I've been beating on that beast for years now
20:00:55 <lbragstad> keep beating please
20:00:59 <lbragstad> those should be good to review
20:01:16 <lbragstad> we'll have to get both of those merged before the patch to nova will make any sense
20:36:15 <openstackgerrit> Lance Bragstad proposed openstack/oslo.policy master: Update documentation to include usage for new projects  https://review.openstack.org/564340
21:07:33 <mwhahaha> hey any thoughts on a change in about the last 3 weeks that might have affected the saml2 items?
21:08:13 <mwhahaha> the puppet tests that deploy Shibboleth/saml2 are failing now  http://logs.openstack.org/87/558887/10/check/puppet-openstack-beaker-centos-7/cc9be41/logs/keystone/keystone.txt.gz#_2018-04-25_18_29_34_339
21:08:58 <mwhahaha> i would guess it's https://review.openstack.org/#/c/350815/ but i'm not sure what we should be loading instead
21:15:45 <cmurphy> mwhahaha: i think this shouldn't be set http://git.openstack.org/cgit/openstack/puppet-keystone/tree/manifests/federation/shibboleth.pp#n102
21:16:07 <cmurphy> [auth]/methods = [...],saml2 should be enough
21:16:23 <mwhahaha> so drop the module_plugin bits
21:16:31 <cmurphy> yeah
21:16:38 <mwhahaha> k i'll give that a shot
23:19:32 <jamielennox> lbragstad: what is that header - a boolean?
08:40:13 <eEbx> Hello, I'm thinking about keystone cluster in two data centers. Can you recommend me some best practices or do you have some architectural plan?
08:43:04 <jmccarthy> Does keystone support this currently ? assert:supports-zero-downtime-upgrade
09:32:18 <cmurphy> eEbx: we don't really have a good document on that yet :/ you might try the #openstack-operators room or the openstack-operators mailing list for some best practice advice
09:33:01 <eEbx> cmurphy: ok thanks a lot
09:33:45 <cmurphy> jmccarthy: we don't currently assert that tag, but we do support rolling upgrades (we're just short of asserting the tag by some CI requirements) and zero downtime should be achievable that way
09:35:18 <jmccarthy> cmurphy: Ok - this is the best docs for this at the moment is it ? https://docs.openstack.org/keystone/pike/admin/identity-upgrading.html
09:37:09 <cmurphy> jmccarthy: yes or the queens version https://docs.openstack.org/keystone/queens/admin/identity-upgrading.html
09:37:24 <cmurphy> though i don't think it's changed
09:37:34 <jmccarthy> cmurphy: Oh yes, great - thanks ! :)
09:37:51 <cmurphy> np
11:22:18 <openstackgerrit> Monty Taylor proposed openstack/keystoneauth master: Turn normalize_status into a class  https://review.openstack.org/564110
11:22:19 <openstackgerrit> Monty Taylor proposed openstack/keystoneauth master: Infer version from old versioned service type aliases  https://review.openstack.org/564299
11:22:20 <openstackgerrit> Monty Taylor proposed openstack/keystoneauth master: Make VersionData class  https://review.openstack.org/564469
11:23:35 <mordred> cmurphy: ^^ that last patch should take care of your original review on the version data patches
11:23:57 <cmurphy> mordred: sweet
11:24:41 <mordred> cmurphy: if we don't watch out, we're going to have a workable service type story
11:25:15 <cmurphy> whoa
12:22:45 <gagehugo> o/
13:04:33 <openstackgerrit> Monty Taylor proposed openstack/keystoneauth master: Infer version from old versioned service type aliases  https://review.openstack.org/564299
13:04:34 <openstackgerrit> Monty Taylor proposed openstack/keystoneauth master: Allow tuples and sets in interface list  https://review.openstack.org/564495
13:57:28 <lbragstad> jamielennox: the system scope one?
13:57:33 <lbragstad> it's a dictionary
15:16:51 <openstackgerrit> Merged openstack/keystone master: Remove policy service from architecture.rst  https://review.openstack.org/564239
15:44:20 <openstackgerrit> Lance Bragstad proposed openstack/oslo.policy master: Update documentation to include usage for new projects  https://review.openstack.org/564340
15:56:57 <cmurphy> if we propose moving the default roles session from Thursday to Monday at 11:35 how do people feel about that?
15:57:21 <lbragstad> +1 from me
15:57:32 <gagehugo> yes please
15:59:52 <lbragstad> knikolla: have you heard of any updates about the mutable configs goal?
16:00:14 <lbragstad> i was just reading the weekly release email and it reminded me of that
16:14:06 <empty_cup> I'm trying to associate a user with a project using Keystone's REST APIs. Initially I set the default_project_id field but that seems more like a suggestion.
16:14:46 <lbragstad> empty_cup: correct
16:14:58 <lbragstad> empty_cup: you need to explicitly give that user authorization
16:15:16 <empty_cup> I'm also assigning the role to user on project and both return successfully but when I list the projects that a user belongs to, it is empty.
16:16:09 <lbragstad> empty_cup: we have a note about default_project_id here - https://developer.openstack.org/api-ref/identity/v3/index.html#users
16:16:42 <lbragstad> empty_cup: which APIs are you calling? do you have a trace?
16:17:30 <empty_cup> Ok, the confirmation helps that I need to create a role and assign it to a user on a project
16:21:05 <empty_cup> lbragstad: I'm working on a trace. Basically I create a user, then a role, assign the user to the role on an existing project.
16:21:23 <lbragstad> ok
16:21:28 <lbragstad> that sounds about right
16:21:57 <lbragstad> what api are you asking for a user's projects?
16:22:33 <empty_cup> Although when I list projects for the user there is nothing in the array. I'm using v3. I can see each in Horizon except for the role
16:23:13 <empty_cup> I am creating all of the objects in a separate domain although I can view them while logging in as the default admin account on the default domain.
16:23:23 <empty_cup> Except for the role, the role doesn't show up.
16:25:43 <lbragstad> hmm
16:25:52 <lbragstad> are you using /v3/role_assignments
16:25:58 <lbragstad> or GET /v3/auth/projects ?
16:26:12 <lbragstad> are you using openstackclient?
16:27:21 <empty_cup> yep, v3 for everything, that's my reference
16:27:50 <empty_cup> sorry, i'm using post requests for everything -- i have a python script or using curl
16:29:23 <lbragstad> ok - if you have a token for the user you're trying to list projects for, you should be able to use GET /v3/auth/projects
16:29:29 <lbragstad> and get a list of all projects you have authorization on
16:29:54 <lbragstad> otherwise, as an administrator, you should be able to call the /v3/role_assignments API and list all role assignments present in the deployment
16:31:20 <empty_cup> i'm working on it, thanks for the help
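(A rough sketch of the two calls described above, assuming a devstack-style endpoint and placeholder tokens; the paths and response keys follow the Identity v3 API reference.)

    # Illustrative only; list a user's projects and all role assignments.
    import requests

    KEYSTONE = 'http://localhost/identity'    # assumed devstack endpoint
    USER_TOKEN = 'REPLACE_ME'                 # token for the user in question
    ADMIN_TOKEN = 'REPLACE_ME'                # token with admin authorization

    # Projects the token holder has authorization on.
    resp = requests.get(KEYSTONE + '/v3/auth/projects',
                        headers={'X-Auth-Token': USER_TOKEN})
    print(resp.json()['projects'])

    # Every role assignment in the deployment, as an administrator.
    resp = requests.get(KEYSTONE + '/v3/role_assignments',
                        headers={'X-Auth-Token': ADMIN_TOKEN})
    print(resp.json()['role_assignments'])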
16:41:08 <openstackgerrit> Monty Taylor proposed openstack/keystoneauth master: Infer version from old versioned service type aliases  https://review.openstack.org/564299
16:42:25 <openstackgerrit> Monty Taylor proposed openstack/keystoneauth master: Allow tuples and sets in interface list  https://review.openstack.org/564495
16:44:47 <mordred> lbragstad: I got the class extraction done in that stack ^^ like you wanted
16:44:55 <mordred> lbragstad: https://review.openstack.org/#/c/564469/1
16:45:12 <lbragstad> oh -cool
17:19:39 <empty_cup> lbragstad[m]: I confirmed by querying roles?domain_id= that the role was created. And I have a trace of activity. Is there a pastebin of choice the channel uses?
17:19:51 <empty_cup> I feel like I'm really close to understanding how keystone works
17:21:21 <mordred> empty_cup: http://paste.openstack.org is a good one
17:21:25 <mordred> empty_cup: and yay for understanding!
17:26:36 <empty_cup> Pasted here: http://paste.openstack.org/show/719965/
17:41:24 <empty_cup> i'm starting to think it may be influenced by the scope of the admin token
17:43:56 <lbragstad[m]> empty_cup: which API are you calling?
17:47:43 <knikolla> lbragstad[m]: i haven't checked with regards to mutable config. will do.
17:47:57 <empty_cup> lbragstad: http://localhost/identity/v3
17:49:27 <lbragstad[m]> that's the root path of the v3 api
17:51:53 <empty_cup> oh sorry, i misunderstood the question, which api call? the pastebin contains the resulting text for each API call. i can include the URL data as well
17:54:44 <lbragstad[m]> lines 15 and 16 in your paste seem to be missing the call you made?
17:59:44 <empty_cup> lbragstad[m]: /roles?domain_id= using the same domain_id fed into the creation commands
18:00:30 <lbragstad[m]> oh - gotcha
18:00:57 <lbragstad[m]> so - that's only going to give you a list of roles filtered by the domain, were you looking for it to give you a list of people with authorization on that domain?
18:10:22 <empty_cup> that's a good point, i'll find the other command that will list authorized users
18:16:22 <lbragstad[m]> empty_cup: https://developer.openstack.org/api-ref/identity/v3/index.html#id594 might help
18:55:34 <empty_cup> ok i was applying the role to the domain and not on the project. that's fixed
18:56:34 <empty_cup> now through the use of policy.json i can craft a policy that says this role has the ability to create users only within this project
18:56:57 <lbragstad[m]> technically - yes...
18:57:07 <empty_cup> is there a better way?
18:57:50 <lbragstad[m]> but we do have some stuff in the works to make it so that you don't have to roll a custom policy for that kind of thing https://bugs.launchpad.net/keystone/+bug/1748027
18:57:52 <openstack> Launchpad bug 1748027 in OpenStack Identity (keystone) "The v3 users API should account for different scopes" [High,Triaged] - Assigned to sonu (sonu-bhumca11)
19:06:12 <empty_cup> neat
19:07:49 <empty_cup> now that i have the trifecta of role, project, and user. when i list projects for user the array is still empty. should it be empty or populated with the project?
19:08:39 <lbragstad> how are you listing the projects for a user?
19:08:52 <lbragstad> GET /v3/auth/projects ?
19:09:15 <empty_cup> /v3/users/{user_id}/projects
19:09:54 <empty_cup> as an admin from default
19:10:59 <lbragstad> empty_cup: and the user is in a different domain with a role assignment on a project in a different domain?
19:17:57 <lbragstad> empty_cup: i was able to do this locally - http://paste.openstack.org/raw/719967/
19:18:26 <lbragstad> i was operating as the 'admin' user from devstack which has the administrator role and is within the default domain
19:23:32 <empty_cup> cool, i'm looking at it. i've been using the 'admin' user from devstack as well
19:47:34 <mordred> lbragstad: did morgan change his nick again or is he just not in channel?
19:48:00 <lbragstad> i think kmalloc just dropped
19:48:05 <lbragstad> i'm unaware of a nick change
19:48:16 <lbragstad> but - would totally believe it if he did change nicks again :)
19:48:35 <mordred> right? tough to keep up with him on that :)
19:49:00 <lbragstad> it took me about 2 days to catch on to kmalloc
19:49:01 <mordred> lbragstad: in any case, the plan of "make an sdk patch consuming the alias patches to make sure we didn't miss anything" TOTALLY bore fruit and caught a bug
19:49:20 <lbragstad> nice!
20:10:17 <openstackgerrit> Monty Taylor proposed openstack/keystoneauth master: Infer version from old versioned service type aliases  https://review.openstack.org/564299
20:13:53 <mordred> lbragstad: ^^ this time with winning included
20:29:07 <openstackgerrit> Merged openstack/oslo.policy master: Trivial: Update pypi url to new url  https://review.openstack.org/563368
21:00:30 <openstackgerrit> Doug Hellmann proposed openstack/oslo.policy master: make the sphinxpolicygen extension handle multiple input/output files  https://review.openstack.org/564627
21:00:50 <empty_cup> lbragstad: is there the ability for a project "admin" to create users scoped to a specific project within a specific domain?
21:01:29 <openstackgerrit> Monty Taylor proposed openstack/keystoneauth master: Infer version from old versioned service type aliases  https://review.openstack.org/564299
21:02:14 <lbragstad> empty_cup: an admin can give users roles on specific projects, which essentially acts as scope
21:02:34 <lbragstad> empty_cup: are you trying to find a way to make it so that a user can only work within one project?
21:02:46 <lbragstad> otherwise users are technically scoped to domains
21:03:03 <lbragstad> which act as containers for users, groups, and projects
21:03:08 <lbragstad> and sometimes roles
21:03:38 <openstackgerrit> Doug Hellmann proposed openstack/oslo.policy master: make the sphinxpolicygen extension handle multiple input/output files  https://review.openstack.org/564627
21:07:38 <empty_cup> lbragstad: oh, i'm starting to see that's not how it is designed. the user is created in the domain and then based on roles is allowed into projects
21:11:54 <lbragstad> correct
21:12:04 <lbragstad> keystone tries to be explicit with authorization
21:12:28 <lbragstad> so even if a user's domain is set, they don't automatically get authorization on projects within that domain
21:12:48 <lbragstad> an administrator of some kind must grant them authorization explicitly
21:26:38 <empty_cup> got it
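(A minimal sketch of the explicit grant described above, with placeholder IDs; the assignment call and the user-projects listing are standard Identity v3 API calls.)

    # Illustrative only; grant a role on a project, then verify it.
    import requests

    KEYSTONE = 'http://localhost/identity'     # assumed devstack endpoint
    headers = {'X-Auth-Token': 'REPLACE_ME'}   # admin token
    project_id, user_id, role_id = 'PROJECT_ID', 'USER_ID', 'ROLE_ID'

    # A 204 response means the assignment was created.
    url = '%s/v3/projects/%s/users/%s/roles/%s' % (
        KEYSTONE, project_id, user_id, role_id)
    print(requests.put(url, headers=headers).status_code)

    # The user's project list should now include project_id.
    resp = requests.get('%s/v3/users/%s/projects' % (KEYSTONE, user_id),
                        headers=headers)
    print([p['id'] for p in resp.json()['projects']])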
22:55:58 <jamielennox> lbragstad: ok, it's kind of weird (or just not something we'd done before) passing a dict through the environment headers
22:56:20 <jamielennox> is it a known dict or something that will change frequently?
01:50:35 <lbragstad> jamielennox: that's a good question
01:51:03 <lbragstad> right now it will always be {"all": true}
01:51:30 <lbragstad> just because we don't have the ability to scope to something other than the system as a whole
01:52:23 <lbragstad> that said, it could change to something like {"compute": "7fce0720478d4595bc02970a1d467d51"}
01:52:26 <lbragstad> in the future
01:52:36 <lbragstad> if 7fce0720478d4595bc02970a1d467d51 is the service id or something like that
01:52:38 * lbragstad shrugs
01:52:48 <lbragstad> i'm not sure what the best way is to relay that information
01:55:47 <jamielennox> that's an interesting target - the concept has obviously changed a bit since i last looked
01:56:10 <jamielennox> there was talk about having system be able to be targeted to a region
01:56:19 <lbragstad> right
01:56:32 <jamielennox> but last i was involved you could do a region and then use roles to be able to restrict to compute
01:56:45 <lbragstad> before we just had to deal with an id that could belong to a project or a domain, so this is obviously more complicated than that
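(Purely illustrative, and not necessarily what the keystonemiddleware patch does: one way to carry a small dict such as {"all": true} through a WSGI header is to JSON-encode it on the way in and decode it from the environ on the service side; the X-System-Scope header name below is hypothetical.)

    # Illustrative only; JSON-encode a dict into a header, decode it later.
    import json

    token_system_scope = {'all': True}

    # middleware side: flatten the dict into a header value.
    headers = {'X-System-Scope': json.dumps(token_system_scope)}

    # service side: the header shows up in the WSGI environ as
    # HTTP_X_SYSTEM_SCOPE; decode it back into a dict.
    environ = {'HTTP_X_SYSTEM_SCOPE': headers['X-System-Scope']}
    assert json.loads(environ['HTTP_X_SYSTEM_SCOPE']) == {'all': True}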
09:49:55 <openstackgerrit> lei zhang proposed openstack/keystone master: Fix the outdated URL  https://review.openstack.org/564714
13:45:53 <lbragstad> cmurphy: this weeks bug report https://gist.github.com/lbragstad/80862a9111ff821af07e43e217c52190
13:46:15 <cmurphy> lbragstad: nice
14:48:44 <mordred> oh look. it's a jamielennox
14:57:48 <cmurphy> yay thank you to whoever started adding to the update etherpad
14:57:57 * cmurphy is scatterbrained today
15:16:07 <lbragstad> cmurphy: no worries - i just jotted down a couple things
15:16:17 <lbragstad> feel free to reword it how ever you wish
15:24:43 <cmurphy> already sent it out
15:37:22 <lbragstad> oh - fantastic
16:30:33 <lbragstad> jamielennox: i responded to your comments here - https://review.openstack.org/#/c/530509/5/oslo_context/context.py
16:30:48 <lbragstad> curious if you have any suggestions for what to do with `system` if it's a dictionary
16:54:04 <mordred> lbragstad: delete it
16:54:11 <mordred> lbragstad: delete everything
16:54:42 * lbragstad types in sudo rm -rf /
16:57:25 <gagehugo> --no-preserve-root
17:05:04 <lbragstad> lol
17:25:00 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: Introduce new TokenModel object  https://review.openstack.org/559129
17:25:13 <kmalloc> mordred: o/
17:25:18 <kmalloc> See I'm here!
17:25:29 * kmalloc kicks irccloud hard.
17:25:37 <lbragstad> kmalloc: gagehugo wxy i thought i was hitting transient issues with loading the backend drivers... but i couldn't reproduce it
17:25:41 <lbragstad> ^
17:26:10 <kmalloc> lbragstad: in what way? Oh in the token model change?
17:26:11 <lbragstad> that patch introduces a new base class that uses bootstrap instead of the massive ksfixtures loading that TestCase does today
17:26:15 <lbragstad> yeah
17:26:23 <kmalloc> Cool. Yay bootstrap
17:26:41 <lbragstad> it'd be nice to get rid of all the stuff we have that loads custom test data
17:26:46 <lbragstad> in favor of what bootstrap does
17:27:05 <lbragstad> then push the test class specific data needed by certain tests into the setup classes of those classes
17:33:14 <lbragstad> ideally - we could start converting TestCase to TestCaseWithBootstrap
17:33:38 <lbragstad> once everything is moved over, we could rename TestCaseWithBootstrap to TestCase and remove the load_fixtures method
17:33:56 <kmalloc> Hah, turns out, irccloud crashed my phone :P
17:34:07 <kmalloc> Uninstall and reinstall all fixed.
17:34:38 <gagehugo> hmm
17:36:40 <kmalloc> mordred: now let me remember what I was going to ask you.... Ugh.
17:56:55 <kmalloc> Oh right, Mordred, the question doesn't really impact you because you're a consumer of other clouds.
17:57:14 <kmalloc> Was going to ask about MFA and some defaults for, say, a whole domain.
17:58:03 <kmalloc> lbragstad: I'll implement some logic this weekend around MFA rules and such for domains and get the resource-options for projects/domains/groups done so we have a place for that to live.
17:59:00 <kmalloc> cmurphy: do you see a benefit for resource options on app creds?
17:59:26 <kmalloc> If so, since I'm adding the support code in other places, happy to do it there as well.
18:01:40 <cmurphy> kmalloc: i'm missing context
18:02:14 <kmalloc> Like the pci-dss user options, do you see a value to support such things in app creds.
18:02:33 <kmalloc> Not anything specific, but options for the app-creds that are fluid like that
18:03:13 <kmalloc> I'm writing code to support it in other subsystems (resource) and can easily hit app-creds too if that would be of benefit
18:07:53 <cmurphy> kmalloc: maybe i've been out of it this week but i'm not understanding what this would be introducing
18:08:12 <cmurphy> kmalloc: but i see value in making everything consistent so go for it
18:08:31 <kmalloc> Okie, I'll propose it, and we can discuss in review.
18:10:10 <cmurphy> that will help
18:14:54 <ayoung> kmalloc, what do you think about these servers for a DIY cluster: https://www.ebay.com/itm/Dell-PowerEdge-R610-Server-2x-2-53GHz-E5540-8-Cores-32GB-SAS6i-2x146G/191545700823?hash=item2c990371d7:g:rmQAAOSwvfZZ57~d
18:17:26 <ayoung> supported CPU on RHEL 7 etc
18:21:25 <ayoung> But I bet they have no management controllers, which means no IPMI and thus can't work for Director
18:41:12 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: Decouple bootstrap from cli module  https://review.openstack.org/558903
18:41:12 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: Introduce new TokenModel object  https://review.openstack.org/559129
18:41:50 <lbragstad> ^ kmalloc ok - those should pass now
19:01:43 <kmalloc> ayoung: hmmm
19:01:48 <kmalloc> lbragstad: thanks
19:01:58 <lbragstad> yup
19:02:06 <kmalloc> ayoung: Dell, should have idrac card compat
19:02:20 <kmalloc> ayoung: at least.
19:02:28 <kmalloc> But I'd go hp
19:07:26 <mordred> lbragstad: re: one of your questions - I don't THINK we need specific docs on aliases ... the tl;dr is "you can now just use the official type regardless of what the deployer used"
19:07:38 <mordred> but with a good dose of "it'll also do the right thing if you ignore this"
19:07:39 <kmalloc> ayoung: but otherwise those aren't too bad. I personally would worry about the sas drives vs sata. Just so $$$ to replace.
19:07:47 <mordred> what do you think?
19:08:02 <lbragstad> mordred: ok
19:08:15 <kmalloc> mordred: link to the aliases would be my recommendation, and just a simple page showing them (docs published)
19:08:30 <kmalloc> But I'd be ok with that down the line. Not needed to land this now.
19:08:50 <kmalloc> Heck, I'll even sign up to help make sure that is a thing.
19:08:58 <mordred> kmalloc: ++
19:09:15 <mordred> kmalloc, lbragstad: also, replied to the other question on https://review.openstack.org/#/c/462218/4/keystoneauth1/access/service_catalog.py inline
19:09:16 * kmalloc tosses on the top of the TODO pile
19:09:52 <kmalloc> Ah, nice.
19:10:29 <lbragstad> ok - so that's part of the service-types-authority api
19:10:35 <kmalloc> Yeah
19:10:38 <lbragstad> and we shouldn't have to worry about the sorting bit
19:10:48 <kmalloc> Exactly, it should be ordered for us.
19:10:55 <lbragstad> i saw it mentioned in the comment, but wasn't sure if that fell on our shoulders or not
19:11:04 <kmalloc> Nah. Good thing too :)
19:11:17 <mordred> yah - part of the api there (also, it should really never change because we should not add any new aliases - but you never know)
19:11:26 <kmalloc> mordred: do we have a keystone/KSA/service-types cross gate?
19:11:27 <mordred> kmalloc: oh - also - I made more patches for you
19:11:31 <kmalloc> Cauuuuuseeeeee.....
19:11:38 <mordred> kmalloc: we do not - that's a great idea
19:12:03 <kmalloc> Yah. That is something we need. ASAP imo
19:12:12 <kmalloc> Woo, more patches!
19:12:16 <mordred> honestly - a cross-gate with service-types-authority and os-service-types and keystoneauth
19:12:53 * mordred can get that cranked out
19:12:56 <kmalloc> Yep. I'd prob run keystone under it too, just to avoid mocking and make sure keystone isnt doing stupid.
19:13:23 <kmalloc> If you don't get to it, it's on my next week to poke at while I work on doc publish.
19:13:23 <mordred> kmalloc: cmon. keystone never does stupid
19:13:56 * kmalloc looks around. Looks at keystone...looks at domains....
19:14:02 <kmalloc> Yeah... Nevvvvvaaaarrrrr
19:14:12 <kmalloc> ;)
19:14:17 <mordred> hehe
19:15:35 <kmalloc> Hmm... I should buy another couple xeon-d low power machines for a full OpenStack cluster...
19:16:01 <kmalloc> (Brie might actually shove me in a hole and leave me there:p ;)
19:16:39 <kmalloc> I want an ironic managed lab >.>
19:18:47 <kmalloc> mordred: totally unrelated, any advice on a good combo smoker/grill... I ... Need one for summertime things..
19:19:13 <kmalloc> lbragstad: ^ you too, I'd ask dolphm, but you're both right here.
19:19:37 <lbragstad> funny you ask...
19:19:43 <lbragstad> i was just working on a comparison
19:19:49 * lbragstad digs
19:20:15 <lbragstad> https://github.com/lbragstad/kitchen/blob/master/smoker-comparison.md
19:21:52 <openstackgerrit> Steve Noyes proposed openstack/keystone master: [WIP] Enables MySQL Cluster support for Keystone  https://review.openstack.org/431229
19:29:29 <kmalloc> lbragstad: niiice
19:29:55 <lbragstad> those were a couple models i was looking at
19:30:33 <lbragstad> imo - if you're looking for efficiency and ease of use, go with a pellet grill
19:30:45 <lbragstad> if you're looking to go old school, get an off set
19:31:03 <lbragstad> or a kamado, but i was talking to dolph and his goes through a lot of charcoal
19:31:24 <lbragstad> but you get higher cooking temperatures than you would on most pellet setups
19:31:47 <lbragstad> so - mimicking wood-fired pizza would be easier
19:32:05 <lbragstad> the down side is that it's hard to find a smoker that does it all
19:32:44 <lbragstad> if you're into cooking from your bed, you can get a GMG which are wifi enabled and programmable
19:33:23 <lbragstad> which seems interesting, but looking at a screen is the last thing i want to do when i'm cooking outside
19:36:38 * mordred is a fan of offset
19:36:46 <mordred> agrees about finding a smoker that does it all
19:37:12 <lbragstad> it's literally impossible
19:37:18 <mordred> for my next trick I'm planning a custom offset for smoking and then a weber for charcoal grilling
19:37:57 <mordred> I currently have a cheap offset smoker that's also a charcoal grill - it's fine for smoking, but is a bit lame for the charcoaling
19:38:23 <mordred> (in that I have to use WAY more charcoal to get the heat since the shape of the enclosure isn't really helping any)
19:38:24 <lbragstad> can you open up the heat box?
19:38:46 <lbragstad> some offsets come with grates above the fire box for direct heat
19:39:14 <lbragstad> so you can grill and smoke at the same time, or use it as a hot plate https://github.com/lbragstad/kitchen/blob/master/smoker-comparison.md#yoder-cheyenne
19:40:13 <mordred> lbragstad: I'm using this right now: https://www.lowes.com/pd/Char-Griller-Duo-Black-Dual-Function-Combo-Grill/1245537?cm_mmc=SCE_PLA-_-SeasonalOutdoorLiving-_-Grills-_-1245537:Char-Griller&CAWELAID=&kpid=1245537&CAGPSPN=pla&store_code=2516&k_clickID=a2bb504e-60a1-4625-9f39-056f5bc1bc35&gclid=CjwKCAjwlIvXBRBjEiwATWAQItbnNvU9z0SWASwoL0XSqG95MqJuWmzg3qm-isXIjNGwoPoLGycmkxoCuY0QAvD_BwE
19:40:18 <mordred> wow. terrible link, sorry
19:40:26 <lbragstad> still got the job done
19:40:28 <lbragstad> :)
19:40:40 <mordred> it's mostly garbage. however - it turns out meat is VERY forgiving
19:40:56 <mordred> and I've smoked many delicious things in it
19:41:05 <lbragstad> nice
19:41:26 <mordred> all full-log - none of this pellet or fan-assisted madness
19:41:54 <lbragstad> old school ;)
19:41:55 <mordred> fans are not needed- physics creates airflow for you just via the chimney effect
19:42:38 <lbragstad> i've cooked on https://github.com/lbragstad/kitchen/blob/master/smoker-comparison.md#char-griller-acorn quite a bit (just a cheaper kamado)
19:42:57 <mordred> ++
19:42:58 <lbragstad> combined with https://bbqguru.com/storenav?categoryid=1&productid=22 and you have a pretty sweet smoking setup
19:43:28 <lbragstad> dolph uses ^ with a kamado joe and it works amazing, we did beef ribs on it at 225 and it was rock solid
19:43:52 <lbragstad> the fan just controls airflow and is hooked up to a thermostat
19:44:10 <lbragstad> it can keep temperature within a perfect range for smoking
19:44:13 <lbragstad> i was impressed
19:44:19 <mordred> yah - that's too much tech for me
19:44:35 * mordred make fire
19:44:40 * mordred ungh
19:44:54 <mordred> lbragstad: http://www.bellfab.com/ <-- I'll be getting my next smoker from that guy
19:46:25 <lbragstad> oh wow
19:47:06 <lbragstad> that's a lot of cooking surface
20:29:09 <kmalloc> Oh yes, Weber for grilling
20:29:32 <kmalloc> Damn that is a nice smoker!
20:30:08 <kmalloc> mordred: I just want to be able to use the wine cask staves. :)
20:31:05 <mordred> kmalloc: :)
20:31:26 <mordred> kmalloc: so - what I've actually done with them is use them in conjunction with lump charcoal
20:31:52 <mordred> in a two-zone setup using the barrel smoker as a charcoal grill
20:32:41 <mordred> closing the lid for some indirect cooking/smoking - then shifting the meat over the coals/wood-fire for a finishing
20:33:24 <mordred> lbragstad: yah - the thing I especially like about that guy too is that I can go custom - so that I can get the exit chimney out the side, etc
20:33:28 <kmalloc> Oooh
20:33:38 <kmalloc> Good to know! That sounds good.
20:34:01 <mordred> kmalloc: yah - that way I can use like one per cook - get the smoke from it without having to, you know, use the whole box just to get a nice fire :)
20:34:22 <kmalloc> Yes they get expensive otherwise
20:34:33 <lbragstad> mordred: that's nice, i know yoder has a build your own process for offsets, so you can get the features you want
20:35:08 * lbragstad has 0 experience on an off set
20:37:13 <lbragstad> mordred: how long does it take him to build one?
20:37:20 <lbragstad> er... on average
20:38:17 <mordred> lbragstad: not sure yet - we're just in the initial stages of discussing - I'll let you know when I get a timeframe from him
20:38:47 * mordred wants to be able to get an entire half-pig on the cook surface - so it's not gonna be a small one
20:39:09 <mordred> the transit is gonna be ... fun
20:39:20 <lbragstad> nice
20:39:46 <lbragstad> i was gonna ask - how do you plan on getting it home? ;)
20:39:53 <lbragstad> free shipping?
20:40:27 <mordred> I plan on driving a pickup truck up to oklahoma and hoping I can find enough people to help me get it off the truck and in to my yard once I'm back
20:40:56 <lbragstad> offer bbq coupons
20:41:26 <lbragstad> "redeemable for limit 1 (one) plate of bbq"
20:44:00 <mordred> ++
22:12:33 <ayoung> kmalloc, I think the lack of ipmi is going to be the killer for me after all.  I want something that does full Tripleo.
22:42:00 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: WIP: re-use bootstrap in tests  https://review.openstack.org/564905
23:13:10 <kmalloc> ayoung: it is exactly why I went super micro and considered HP for lab boxes.
23:50:44 <empty_cup> Using the openstack client i've created a new domain, a user, and a role and have made a role assignment on the domain. In the policy file I've given that role access. How come when listing roles I get 'the service catalog is empty'?
23:51:08 <empty_cup> I was imagining I would see the single role in the domain
00:57:51 <empty_cup> when operating devstack, should it run from the master branch?
00:58:02 <empty_cup> is the master branch a "develop" branch?
01:05:50 <empty_cup> i'm rerunning ./stack.sh while on the stable/queens branch. Originally it was situated on master.
01:06:48 <empty_cup> What prompted this path was the user showing up as unauthenticated even though a password was originally set, and the options showing up as empty
10:53:43 <openstackgerrit> Merged openstack/keystone master: Invalidate the shadow user cache when deleting a user  https://review.openstack.org/561908
13:24:24 <gagehugo> O/
13:55:09 <hrybacki> o/
13:56:42 <kmalloc> Zzzz
13:56:44 <kmalloc> ;)
14:00:51 <lbragstad> o/
14:11:21 <openstackgerrit> Matt Riedemann proposed openstack/oslo.policy master: make the sphinxpolicygen extension handle multiple input/output files  https://review.openstack.org/564627
15:22:25 <mordred> kmalloc, lbragstad there's a failure in the openstacksdk patch that I made to have it consume the ksa alias/discovery stuff that looks real - although it looks VERY strange
15:22:49 <kmalloc> Hmm
15:22:54 <mordred> kmalloc, lbragstad: tl;dr - it looks like cinder endpoints in devstack are being discovered incorrectly - missing the version
15:23:02 <kmalloc> Doh!
15:23:08 <mordred> I'm digging in now to figure out if it's a bug in ksa or a bug in sdk's use of ksa
15:23:24 <kmalloc> This is KSA master or a release?
15:23:44 <mordred> master
15:23:55 <mordred> this is the "test the new ksa patches with shade/sdk before landing them"
15:24:07 <kmalloc> Ok. Good hope that means no release with the bug. ;)
15:24:35 <kmalloc> Also yay for that test.
15:25:20 <mordred> http://logs.openstack.org/94/564494/1/check/openstacksdk-functional-devstack-tips/52ba4b2/testr_results.html.gz is the failed test run
15:25:58 <knikolla> o/
15:26:01 <mordred> kmalloc: yah. I mean, also it's a patch to remove discovery related logic from sdk and replace it with just passing parameters to ksa ... which is awesome - but it's entirely possible there's a nuance in that patch that's bong
15:26:02 <kmalloc> Walking the dog, those logs don't load well (too big) on mobile (any logs)
15:26:09 <kmalloc> Will look as soon as I am home.
15:26:12 <mordred> kmalloc: bah. get a bigger phone
15:26:25 <kmalloc> It is actual data volume
15:26:40 <kmalloc> This is a pixel XL 2. Chrome just crashes with our logs
15:26:54 <kmalloc> (it is also a bad phone imo)
15:27:50 <mordred> well, you could use a laptop as a phone and then you could look at logs on your phone while you walk the dog
15:28:18 <kmalloc> Oh that is the report not the full log
15:28:33 <kmalloc> That loaded ok
15:30:07 <mordred> the first failure actually looks like it's doing the right things (ignore for a sec the fact that it's a test of v2 and it's talking to v3)
15:30:30 <kmalloc> Ahh
15:30:39 <kmalloc> Hehe
16:26:37 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: Use the provider_api module in limit controller  https://review.openstack.org/562712
16:26:37 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: Add configuration option for enforcement models  https://review.openstack.org/562713
16:26:38 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: Add policy for limit model protection  https://review.openstack.org/562714
16:26:38 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: Implement enforcement model logic in Manager  https://review.openstack.org/562715
16:26:39 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: Expose endpoint to return enforcement model  https://review.openstack.org/562716
16:43:43 * lbragstad goes to take a run over lunch
18:57:06 <lbragstad> dims: just got done parsing the google doc link you added to the jwt spec
18:57:19 <lbragstad> it sounds like they are iterating on jwt to provide service tokens?
18:57:49 <dims> right, this one seems to be under discussion still
18:58:55 <lbragstad> so a workload is equivalent to a service then?
18:59:21 <lbragstad> i'm still trying to build a mapping of terms
19:03:14 <lbragstad> it appears they're looking to do nested jwts to limit the power of service scoped token?
19:08:58 <dims> @lbragstad : i am not sure :) we may have to go to one of their meetings may be in a week or so (this week is KubeconEU)
19:09:08 <lbragstad> oh - right
19:09:35 <lbragstad> dims: is there a spec for just the jwt work they did?
19:10:11 <lbragstad> this seems like it's reusing some of that to solve a different problem, so i'm just wondering if there is another document that has more context
19:14:55 <dims> @lbragstad : let me check
19:19:53 <dims> @lbragstad : https://github.com/kubernetes/community/pull/1460/commits/6e209490c441d8df84b6b5d8e352c0e2491a41bd may be?
19:20:18 <dims> @lbragstad : was looking through notes from the container identity work group - https://docs.google.com/document/d/1uH60pNr1-jBn7N2pEcddk6-6NTnmV5qepwKUJe9tMRo/edit?ts=59a03344#heading=h.n00r55m5f4gw
19:25:36 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: Implement enforcement model logic in Manager  https://review.openstack.org/562715
19:25:36 <openstackgerrit> Lance Bragstad proposed openstack/keystone master: Expose endpoint to return enforcement model  https://review.openstack.org/562716
19:25:42 <lbragstad> dims: thanks
20:56:14 <lbragstad> anyone around to talk limits?
21:35:06 <ayoung> lbragstad, I am, but I don't think I count.  Have not really thought about limits in a while.
21:35:11 <ayoung> Just need a sounding board, fire away
21:37:08 * lbragstad thanks ayoung for being his rubber duck
21:37:11 <lbragstad> ok
21:37:39 <lbragstad> i'm reviewing the current specification for CERNs limit use cases
21:37:48 <lbragstad> https://review.openstack.org/#/c/540803/9/specs/keystone/rocky/hierarchical-unified-limits.rst
21:38:23 <lbragstad> it's expected to be a "strict" model, in that it appears to lean on the side of being explicit versus implicit
21:38:33 <ayoung> Good
21:38:48 <lbragstad> in queens we implemented registered limits and project limits
21:39:09 <lbragstad> where registered limits are something that need to be created before a limit can be associated to a project
21:39:13 <ayoung> "A child may have no more quota than its parent. The sum of all **direct children** must be <= the parent. All children would have to have default quota equal to its parent's."
21:39:16 <lbragstad> they also act as a default
21:39:57 <lbragstad> so an operator could register a limit of 10 cores for the compute service for a deployment
21:40:06 <ayoung> So...management of Quota happens inside Keystone, where we have full access to all data.  Enforcement makes a call to Keystone?
21:40:32 <lbragstad> yeah - pretty much, the limit and its validation are stored within keystone
21:40:48 <ayoung> go on...I have about 5 minutes
21:40:48 <lbragstad> and exposed to services and consumed via a library to make it easier to adopt/understand
21:41:01 <lbragstad> so - given a registered limit of 10 cores
21:41:12 <lbragstad> each project without an explicit limit "override" or project limit
21:41:21 <lbragstad> will assume a default limit of 10 cores
21:41:45 <lbragstad> now - imagine you have project A, whose limit is 20 and usage is 1
21:41:56 <lbragstad> and A has two children projects
21:41:58 <lbragstad> B and C
21:42:12 <lbragstad> neither have project overrides
21:42:25 <lbragstad> so - they assume the default value for cores set by the registered limit
21:42:34 <lbragstad> (e.g. 10 a piece)
21:42:41 <ayoung> a piece?
21:42:58 <lbragstad> yeah - because of the registered limit
21:42:59 <ayoung> so the default is each project gets 10, including child projects
21:43:21 <lbragstad> let's say the parent is A in this example and it has a project limit of 20 cores
21:43:25 <lbragstad> (so overriding the default)
21:43:29 <ayoung> and that is OK, cuz there are two, and the parent project has 20
21:43:32 <lbragstad> and B and C are siblings
21:43:51 <lbragstad> ok - here's where it gets confusing
21:43:58 <lbragstad> (for me at least)
21:44:22 <lbragstad> should you be able to create project D, which is a sibling to B and C without a project limit override?
21:44:34 <ayoung> I never found a solution to this problem that I liked
21:44:49 <lbragstad> because SUM(B.limit, C.limit, D.limit) > A.limit
21:44:53 <ayoung> I would say yes
21:45:24 <lbragstad> ok - so let's say we can do that
21:45:36 <lbragstad> we now have B, C, and D under A
21:45:48 <ayoung> So...
21:45:54 <lbragstad> all children are relying on the default value of 10 cores and A's limit is still 20
21:46:23 <ayoung> If A has a quota of 20, and A creates B and assigns to it 10, A then has 10 remaining...right?
21:46:27 <ayoung> it is tracked that way?
21:46:59 <lbragstad> yeah - i think so
21:47:11 <lbragstad> so - let's assume
21:47:18 <lbragstad> A.usage = 1
21:47:22 <lbragstad> B.usage = 2
21:47:35 <lbragstad> C.usage = 10
21:47:37 <ayoung> But usage is not tracked in Keystone
21:47:41 <lbragstad> right...
21:47:43 <ayoung> only in Nova
21:47:50 <lbragstad> this is where it gets really fuzzy
21:47:58 <lbragstad> D.usage = 0
21:48:22 <lbragstad> let's say you want to use 10 cores in D
21:48:40 <lbragstad> and the interface for oslo.limit from the service handing out cores is:
21:49:01 <lbragstad> enforce(project_usage, project_id)
21:49:30 <lbragstad> so, leaving oslo.limit to use the project ID to get the hierarchy of limits from keystone and evaluate the usage to say "yes" or "no"
21:50:02 <lbragstad> don't we have a circular dependency because oslo.limit still needs more usage information from the other children in the tree to make the decision?
21:51:43 <lbragstad> the current method signature doesn't allow oslo.limit to determine if (A.usage + B.usage + C.usage + D.usage) <= A.limit (which it finds by querying keystone for limit information)
21:57:41 <lbragstad> to me, it seems like we have two problems... 1.) there is a certain level of ambiguity in the model and 2.) requiring services to pass in *all* resources and their owning projects to oslo.limit is going to be painful
22:04:51 <ayoung> lbragstad, yes, that is the problem with usage not being recorded in Keystone. That has always been the tough-to-solve problem
22:04:59 <lbragstad> right
22:05:04 <lbragstad> so - what about this
22:05:14 * lbragstad whips out the crazy idea notebook
22:05:21 <ayoung> I had thought about an idea of a resource pool
22:05:43 <ayoung> so you have a unified identifier for the quota.  Additional level of indirection
22:06:00 <ayoung> parent project ID could be the resource pool id by default
22:06:06 <lbragstad> what if all limit validation operations assumed the registered limit for all projects without an explicit override?
22:06:15 <ayoung> not sure it is a solution
22:06:22 <lbragstad> ayoung: the resource pool?
22:06:23 <ayoung> I need to parse that
22:06:46 <lbragstad> does the resource pool just push usage to the leaves of the tree?
22:06:48 <ayoung> yeah, resource pool requires the projects to keep track of what project is in what resource pool
22:07:11 <ayoung> I think that was why I discarded it last time It came up...let me turn to your statement
22:07:29 <lbragstad> ok
22:07:41 <ayoung> So...I kindof think that all quotas should be explicit, and at the project level
22:08:02 <ayoung> like, if I have 100 quota, and 10 projects, each get 10, or each get 5 and I keep 50 at the parent or something
22:08:17 <lbragstad> so - let's back up the example
22:08:22 <lbragstad> A.limit is still 20
22:08:27 <ayoung> and then push back on the user to request/authorize push-down and automate reclaim
22:08:49 <lbragstad> B and C are still children of A and are siblings
22:09:04 <ayoung> resetting to your problems initial state?
22:09:09 <lbragstad> yep
22:09:12 <ayoung> q(A)=20
22:09:16 <ayoung> no children
22:09:26 <ayoung> create project B child of A
22:09:32 <ayoung> by default, 0 quota
22:09:41 <lbragstad> if we assume A.limit to be divisible by the number of children in the tree, then it's clear that project D shouldn't be created, right?
22:09:54 <ayoung> explicitly grab 5 quota from A, and now q(A)=15, q(B)=5
22:10:01 <ayoung> B quickly burns through quota.
22:10:10 <lbragstad> well - by default, B and C get 10 each because of the registered limit
22:10:13 <ayoung> A has a policy of "allow 5 floating"
22:10:30 <ayoung> so, automated, Nova could say "request more quota for B"
22:11:04 <ayoung> and Keystone would grab 1 (or whatever) from "Floating:" and now q(A)=14 q(B)=6
22:12:06 <ayoung> now once B deletes a VM, the question is how to rebalance
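[Editor's note: the explicit-grab idea ayoung sketches above, written out as plain bookkeeping. This is purely illustrative arithmetic; keystone does not store anything like this today.]

```python
# Explicit quota grabs, as described: B takes quota from A, and an automated
# request can pull more from A's "floating" allowance.
quota = {"A": 20}

# Create B and explicitly grab 5 from A.
quota["B"] = 5
quota["A"] -= 5            # q(A)=15, q(B)=5

# B burns through its quota; automation grabs 1 more from A's floating pool.
quota["B"] += 1
quota["A"] -= 1            # q(A)=14, q(B)=6

# Open question from the discussion: when B deletes a VM, what triggers the
# rebalance of that grabbed quota back to A?
```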
22:12:18 <lbragstad> so - the interesting part is that the default limit established by the registered limit resource keeps the service from having to pass in all resources and owning projects when calculating usage
22:12:58 <ayoung> OK...so if the default is each get 10, then the question is, would we want to split A, or force the new quota to come from outside A
22:13:09 <ayoung> like...maybe A is just a placeholder project, and should never have a VM
22:13:17 <lbragstad> maybe
22:13:21 <lbragstad> it depends
22:13:42 <lbragstad> but you'd have to go to A to update A.limit to 30 in order to create another child project under A
22:13:49 <ayoung> right
22:14:12 <ayoung> OR, more likely, new projects get 0 quota, or the create-project call fails
22:14:28 <ayoung> or some other reasonable failure mode
22:14:38 <lbragstad> the outcome would be the same, in that case you'd have to supply explicit overrides
22:15:04 <ayoung> If the goal is to say "give explicit quota to A, and then let that float among all of A's children" it is one value system
22:15:16 <ayoung> if the goal is "each project should have a strict quota" it is a different one
22:15:22 <ayoung> strict quota is easier to enforce
22:15:37 <ayoung> floating...I still suspect needs to use Keystone as a clearing house
22:15:45 <lbragstad> the more i think about it - the more i think they should be separate models
22:15:50 <ayoung> and...I suspect that clearing through Keystone is OK
22:15:58 <ayoung> yes, they are def separate models
22:16:11 <lbragstad> because they affect how the service passes usage information to oslo.limit
22:16:27 <ayoung> if request/approve quota are easy and light weight, then explict quota makes sense
22:16:31 <lbragstad> and ideally, that should be consistent regardless of the enforcement model configured in keystone
22:17:05 <ayoung> the question quickly becomes one of churn
22:17:29 <ayoung> Since Keystone can't call into Nova to free up Quota, nova has to trigger the free-action when the VM is deleted
22:18:04 <lbragstad> well - it is possible for keystone to store a limit that is higher than a project's usage for a given resource
22:19:28 <lbragstad> if H.limit is 15 and H.usage is 15, you can update H.limit to be 10 if you want
22:20:13 <lbragstad> the usage enforcement check should prevent any more resources from being allocated to project H until usage is back under a limit of 10
22:20:27 <lbragstad> that can be done by an admin, or members of the project
22:21:23 <ayoung> It was this very discussion that got the Cinder folks to withdraw "have keystone record quota" the first time around...I want to say that discussion happened in Portland
22:21:47 <lbragstad> keystone is just storing the limit information though
22:21:57 <lbragstad> we're still not tracking the actual usage information
22:23:37 <lbragstad> if the usage exceeds the limit, that's fine and should be handled by the oslo.limit library during the usage calculation
22:24:03 <lbragstad> at least until the usage is back under the limit threshold
22:24:13 <ayoung> You hit my trigger word
22:24:18 <ayoung> "just"
22:24:38 <lbragstad> lol - it's *just* that easy ayoung
22:24:38 <ayoung> So, I think that we need to make all quotas explicit
22:25:39 <lbragstad> that makes the code easier to write i think
23:28:20 <kmalloc> ayoung: as in, no floating quotas (within children)?
23:30:11 <kmalloc> ayoung: remember, all nodes are strictly leaning on keystone for limits -- it is on the services to do "reserve" and "consume" and "free" for active quota usage -- we don't want to hold what is in use for every service.
02:43:44 <openstackgerrit> Merged openstack/keystone master: Use the provider_api module in limit controller  https://review.openstack.org/562712
04:35:44 <openstackgerrit> XiaojueGuan proposed openstack/keystone-specs master: Trivial: Update pypi url to new url  https://review.openstack.org/565411
04:40:40 <openstackgerrit> Lance Bragstad proposed openstack/keystone-specs master: Add scenarios to strict hierarchy enforcement model  https://review.openstack.org/565412
04:59:46 <openstackgerrit> XiaojueGuan proposed openstack/keystoneauth master: Trivial: Update pypi url to new url  https://review.openstack.org/565418
07:09:42 <openstackgerrit> OpenStack Proposal Bot proposed openstack/keystonemiddleware master: Imported Translations from Zanata  https://review.openstack.org/565455
10:43:42 <johnthetubaguy> wondering if anyone could help me better understand the heat and federated user problems: https://bugs.launchpad.net/keystone/+bug/1589993
10:43:43 <openstack> Launchpad bug 1589993 in OpenStack Identity (keystone) "cannot use trusts with federated users" [High,Triaged]
10:44:09 <johnthetubaguy> I am using a group mapping and I think I see the same problems as described in the bug (as I don't do the create role assignment at login thingy)
11:00:55 <mordred> kmalloc: the issue I reported yesterday on the new ksa stack is an issue with the devstack endpoint registration
13:39:50 <lbragstad> johnthetubaguy: ping - i reviewed wxy|'s patch for unified limits with the use cases from CERN
13:39:58 <lbragstad> was curious if I could run something by you
13:40:56 <lbragstad> i attempted to walk through the scenarios here https://review.openstack.org/#/c/565412/
13:41:34 <johnthetubaguy> lbragstad: sure, I should pick your brain about a federation bug with heat in return :p
13:41:57 <lbragstad> johnthetubaguy: that sounds fair - we can start with federation stuff
13:42:15 <johnthetubaguy> OK, basically its this bug again: https://bugs.launchpad.net/keystone/+bug/1589993
13:42:16 <openstack> Launchpad bug 1589993 in OpenStack Identity (keystone) "cannot use trusts with federated users" [High,Triaged]
13:42:17 <lbragstad> there is a lot of context behind https://bugs.launchpad.net/keystone/+bug/1589993
13:42:33 <johnthetubaguy> we seem to hit that when doing dynamic role assignment
13:42:42 <johnthetubaguy> which I think is expected?
13:42:53 <johnthetubaguy> well, known I guess
13:42:59 <lbragstad> sure
13:43:08 <lbragstad> i remember them bringing this up to us in atlanta
13:43:12 <lbragstad> during the PTG
13:43:27 <johnthetubaguy> the problem is doing the dynamic role assignment on first login doesn't work for our use case
13:44:11 <johnthetubaguy> basically because we want the group to map correctly to an assertion that can be quite dynamic (level of assurance)
13:44:44 <johnthetubaguy> not sure if I am missing a work around, or some other way of doing things
13:45:12 <lbragstad> well - we do have something that came out of newton design sessions aimed at solving a similar problem
13:45:24 <lbragstad> it was really for the "first login" case
13:46:06 <lbragstad> where the current federated flow resulted in terrible user experience, because a user would need to hit the keystone service provider to get a shadow user created
13:46:41 <lbragstad> then an admin would have to come through and manually create the assignments between that shadow user and various projects
13:47:09 <lbragstad> which isn't a great experience for users, because it's like "hurry up and wait"
13:47:12 <lbragstad> so we built - https://docs.openstack.org/keystone/latest/advanced-topics/federation/federated_identity.html#auto-provisioning
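[Editor's note: roughly the shape of an auto-provisioning mapping from the federation doc linked above, shown as a Python dict for illustration; check that doc for the exact attribute names. The "projects" entry is what creates the role assignments automatically at federated auth time instead of waiting for an admin.]

```python
# Illustrative auto-provisioning mapping body (approximate, see the doc above).
mapping = {
    "rules": [
        {
            "local": [
                {
                    "user": {"name": "{0}"},
                    "projects": [
                        {"name": "Production", "roles": [{"name": "member"}]},
                    ],
                }
            ],
            "remote": [{"type": "REMOTE_USER"}],
        }
    ]
}
```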
13:47:34 <johnthetubaguy> right, but the auto provisioning fixes the roles on first login, I thought?
13:48:19 * lbragstad double checks the code
13:49:19 <johnthetubaguy> we would need it remove role assignments and replacement on a subsequent login
13:49:58 <johnthetubaguy> (FWIW, I suspect we are about to hit application credential issues for the same reason, but its a pure guess at this point)
13:51:10 <lbragstad> so - it looks like the code will run each time federated authentication is used
13:51:40 <johnthetubaguy> maybe we didn't test this properly then
13:51:46 <lbragstad> so if the mapping changes in between user auth requests, the mapping will be applied both times
13:51:54 <johnthetubaguy> would it remove role assignments?
13:52:15 <lbragstad> it would not
13:52:21 <lbragstad> not the mapped auth bit
13:52:29 <johnthetubaguy> yeah, that is the bit the group mapping does for "free"
13:52:30 <lbragstad> but... it might if you drop the user
13:52:55 <johnthetubaguy> so the use case is where the IdP says if two factor was used
13:53:06 <johnthetubaguy> if the two factor isn't used, then they get access to less projects
13:53:49 <lbragstad> huh
13:53:52 <johnthetubaguy> I mean its not two factor, its a generic level of assurance assertion, but same difference really
13:53:53 <lbragstad> ok - that makes sense
13:54:09 <johnthetubaguy> its not something I had considered before working through this project
13:54:15 <lbragstad> right - it's elevating permissions based on attributes of the assertion
13:54:53 <johnthetubaguy> it's using EGI AAI, it's an IdP proxy, so a single user can log in with multiple identities, with varying levels of assurance, some folks need a minimum level
13:55:24 <lbragstad> couldn't you use the regular group mapping bits?
13:55:34 <johnthetubaguy> we do, but that breaks heat
13:55:39 <johnthetubaguy> or seems to
13:55:42 <lbragstad> ah
13:57:01 * lbragstad thinks
13:57:33 <johnthetubaguy> (not sure about how this maps to application credentials yet, as the openid connect access_token dance is really stupid for ansible/CLI usage patterns)
13:58:29 <lbragstad> the problem is that the trust implementation relies on the role assignment existing in keystone
14:00:12 <lbragstad> i think it would work with auto-provisioning, but then the problem becomes cleaning up stale role assignments
14:00:27 <lbragstad> right?
14:00:41 <johnthetubaguy> yeah
14:00:52 <lbragstad> so - something that would be very crude
14:01:08 <johnthetubaguy> thing is, not cleaning those up is a feature for some users, but could be a mapping option I guess force_refresh=True?
14:01:32 <lbragstad> would be to use auto-provisioning and then purge users using keystone-manage mapping purge
14:02:03 * lbragstad looks back at the code
14:02:40 <lbragstad> ah - nevermind, mapping_purge only works for ID mappings
14:02:44 <johnthetubaguy> maybe an always_purge_old_roles mapping? would that delete app creds as we modify the role assignments?
14:02:45 <lbragstad> not shadow users
14:02:57 <johnthetubaguy> ah
14:03:27 <lbragstad> so - with a refresh option, how would that get invoked?
14:03:28 <johnthetubaguy> I guess we want something to say that the mapping should delete all other role assignments, somehow
14:03:56 <johnthetubaguy> in a way that doesn't always clear out all current application credentials
14:04:34 <johnthetubaguy> (unless there were role assignments deleted, then I guess you have to invalidate app creds, which seems fine)
14:04:43 <johnthetubaguy> hmm, needs writing up I think
14:05:09 <lbragstad> yeah - that's the stance we take on app creds currently
14:05:27 <johnthetubaguy> yankcrime: are you following along with my thinking here, or have I gone crazy :)
14:05:27 <lbragstad> anytime a role assignment is removed for a user, we invalidate the application credential
14:05:42 <johnthetubaguy> lbragstad: yeah, I think I still like that, its simple
14:06:27 <johnthetubaguy> I think the action is to write up the use case, and document the missing piece
14:06:36 <johnthetubaguy> like a partial spec write up or something?
14:06:44 <johnthetubaguy> (a very short one!)
14:07:18 <lbragstad> at least documenting the use case would be good
14:07:41 <lbragstad> i hadn't really heard anything back from the heat team
14:08:26 <johnthetubaguy> OK, cool
14:09:12 <johnthetubaguy> feels like the group mapping might want to get deprecated if it really doesn't work with these features
14:09:19 <yankcrime> johnthetubaguy: not at all, i think that's a succinct summary of what we're trying to achieve
14:09:26 <johnthetubaguy> but anyways, that is for that write up
14:09:30 <johnthetubaguy> yankcrime: cool
14:09:45 <johnthetubaguy> now you may have walked into my trap, but we should work out who should write that up :)
14:10:07 <lbragstad> ok - one last question
14:10:11 <yankcrime> bah i'd hoped that by agreeing i'd avoid that trap ;)
14:10:22 <johnthetubaguy> lbragstad: sure
14:10:35 <yankcrime> the use-case is pretty clear, i don't mind getting that drafted
14:10:41 <yankcrime> (clear to us i mean)
14:10:46 <lbragstad> you always know which users need this refresh behavior, right?
14:11:02 <knikolla> o/
14:11:13 <johnthetubaguy> I think its all users in this case, so I think the answer is yes
14:11:16 <yankcrime> we can infer which users need this refresh by other attributes
14:11:25 <lbragstad> hmm
14:11:27 <yankcrime> (claims, whatever)
14:11:43 <lbragstad> ok - i was thinking about a possible implementation that might be pretty easy
14:12:00 <yankcrime> but yeah right now for mvp we can safely assume all users in this case
14:12:11 <lbragstad> but it would require using the keystone API to set an attribute on the user's reference that would get checked during federated authentication
14:12:32 <johnthetubaguy> if that can be set in the mapping? maybe
14:12:55 <ayoung> what if we are overthinking this?
14:13:10 <ayoung> what if...we really don't want dynamic role assignments or any of that
14:13:24 <ayoung> and instead...somehow use HMT to solve the problem.
14:13:26 <ayoung> like...
14:13:36 <ayoung> first login, a federated user gets put into a pool
14:13:50 <ayoung> and...we have a rule (hand wave) that says that user can create their own project
14:14:00 <ayoung> the pool is the parent project, and it has 0 quota
14:14:34 <ayoung> the child project create is triggered by the user themself, and they get an automatic role assignment for it
14:14:41 <yankcrime> ayoung: is this mercador?
14:15:18 <ayoung> yankcrime, I thought it was pseudoconical
14:15:29 * ayoung just made a map joke
14:16:05 * yankcrime googles pseudoconical
14:16:13 <ayoung> https://en.wikipedia.org/wiki/Map_projection
14:16:19 <johnthetubaguy> so while I do like that, the main use case is to get access to existing project resources based on IdP assertions, that are dynamic, i.e. revoke access if assertions change
14:16:33 <yankcrime> ayoung: lol
14:16:51 <ayoung> johnthetubaguy, I caught a bit about the groups
14:17:20 <lbragstad> keystone supports one way to do that - but things like trusts don't work with it
14:17:37 <ayoung> it seems to me that the role assignments should be what control that...the IdP data should be providing the input to the role assignments, but should not be creating anything
14:17:45 <lbragstad> if we use the other way (with auto provisioning) then the features work, but cleaning up the assignments is harder
14:17:47 <knikolla> i remember there was a proposed change to have the assignments from the mapping persist until the user logs in again through federation and the attributes are reevaluated
14:18:14 <lbragstad> knikolla: interesting, do you know where that ended up
14:18:15 <lbragstad> ?
14:18:23 <knikolla> https://review.openstack.org/#/c/415545/
14:18:29 <johnthetubaguy> knikolla: yeah, that is basically what we need here
14:18:33 * yankcrime nods
14:18:56 <ayoung> Dec 28, 2016
14:19:02 <ayoung> That is as old as some of my patches
14:19:30 <ayoung> let me take a look at that
14:20:42 <johnthetubaguy> I kinda like the mapping doing a "clean out any other assignments" thing, and just have groups treated like any other role assignment
14:21:27 <johnthetubaguy> the delete re/add thing screws with app creds, which would be bad
14:22:01 <johnthetubaguy> lbragstad: back to quotas, do you have more context to your patch, was that from an email?
14:22:47 <lbragstad> johnthetubaguy: it wasn't - but wxy took a shot at working the CERN use cases into https://review.openstack.org/#/c/540803/
14:23:17 <lbragstad> and i tried to follow it up with this - https://review.openstack.org/#/c/565412/1
14:23:43 <johnthetubaguy> lbragstad: why is it not restricted to two levels?
14:24:10 <lbragstad> johnthetubaguy: which specification?
14:24:49 <ayoung> so...it seems to me that group membership should be based solely on the data in the assertion.  If the data that triggers the group is not there, the user cannot get a token with roles based on that group
14:25:17 <johnthetubaguy> johnthetubaguy: https://review.openstack.org/#/c/540803
14:25:38 <ayoung> groups are not role assignments.  Groups are attributes from the IdP that are separate from the user herself
14:25:50 <lbragstad> johnthetubaguy: there are some things in there that need to be updated
14:26:13 <lbragstad> and that's one of them, because i don't think we actually call that out anywhere
14:26:23 <lbragstad> johnthetubaguy: so - good catch :)
14:27:05 <ayoung> so...we should not trigger any persisted data on the group based on the IdP data.  It should all be dynamic.
14:27:41 <johnthetubaguy> ayoung: technically, sure. But as a user we want to dynamically assign a collection of role assignments, based on attributes we get from the IdP that change over time, right now we are doing that via groups (... which breaks heat)
14:28:10 <lbragstad> because the role assignment isn't persistent and breaks when heat tries to use a trust
14:28:32 <johnthetubaguy> so the rub is you want to create an application credential, so you don't have to go via the IdP for automation, that gains what the federated user had at the time
14:28:46 <johnthetubaguy> and you want the heat trust to persist, as long as it can
14:29:29 <johnthetubaguy> now if the IdP info changes on a later login (or a refresh of the role assignments is triggered via some automation process out of band), sure that access gets revoked, but thats not a usual every day thing
14:30:00 <ayoung> johnthetubaguy, so what you are saying is that Heat trusts are based on attributes that came in a Federated assertion.  Either we have them re-affirmed every time (ephemeral) or we make them persistent, and then we are stuck with them for all time.
14:30:01 <johnthetubaguy> anyways, I think we need to write up this scenario, if nothing else so its very clear in our heads
14:30:11 <ayoung> Hmmm
14:30:33 <ayoung> This is a David Chadwick question.
14:30:37 <johnthetubaguy> yeah, it's in between those two... it's like the user is partially deleted...
14:31:11 <johnthetubaguy> it is a bit fishy, to be sure
14:31:26 <johnthetubaguy> its a cache of IdP assertions
14:31:32 <johnthetubaguy> but anyways
14:33:56 <lbragstad> johnthetubaguy: so - the examples i worked on last night to unified limits only account for two levels
14:34:49 <johnthetubaguy> lbragstad: they sound sensible as I read through them, I am adding a comment on the other spec a second, will see what you think about it.
14:35:05 <ayoung> OK...so the question is "how long should the data from the assertion be considered valid"
14:35:35 <ayoung> on one hand, the assertion has a time limit on it,  which is something like 8 hours
14:35:40 <lbragstad> johnthetubaguy: granted, both examples I wrote in the follow on break
14:35:42 <ayoung> and that is not long enough for most use cases
14:36:14 <ayoung> on the other hand, Keystone gets no notifications for user changes from the IdP, so if we make it any longer, those are going to be based on stale data
14:36:43 <ayoung> So having a trust based on a Federated account needs to be an elevated level of priv no matter what
14:36:59 <ayoung> same is true of any other delegation, to include app creds
14:37:01 <johnthetubaguy> same with app credentials
14:37:03 <johnthetubaguy> yeah
14:37:09 <ayoung> so, who gets to decide?
14:37:37 <ayoung> I think the answer is that the ability to create a trust should be an explicit role assignment
14:38:04 <ayoung> if you have that, then the group assignments you get via a federated token get persisted
14:38:24 <ayoung> another way to do it is at the IdP level, but that is fairly coarse-grained
14:38:33 <johnthetubaguy> it feels like this is all configuration of the mapping
14:38:38 <ayoung> and...most IdPs don't provide more than identifying info
14:38:51 <ayoung> johnthetubaguy, so, it could be
14:38:55 <johnthetubaguy> should the group membership or assignment persist or not
14:39:02 <ayoung> one part of the mapping could be "persist group assignments"
14:39:29 <ayoung> but, the question is whether you would want that for everyone from the IdP
14:40:00 <openstackgerrit> ayoung proposed openstack/keystone master: Enable trusts for federated users  https://review.openstack.org/415545
14:40:07 <kmalloc> mordred: ah good to know and phew
14:40:09 <ayoung> that is just a manual rebase knikolla
14:41:00 <johnthetubaguy> ayoung: personally I am fine with global, but I could see arguments both ways
14:41:02 <ayoung> johnthetubaguy, I see three cases
14:41:06 <mordred> kmalloc: I have a patch up to fix devstack and the sdk patch now depends on it - and correctly shows test failures in the "install ksa from release" but test successes with "install ksa from master/depends-on"
14:41:26 <mordred> kmalloc: https://review.openstack.org/#/c/564494/
14:41:41 <ayoung> 1.  Everyone from the IdP is ephemeral. Persist nothing.  2.  Everyone is persisted.  3.  Certain users get promoted to persisted after the fact.
14:41:51 <ayoung> actually, a fourth
14:42:23 <ayoung> 4. there is a property in the Assertion that allows the mapping to determine whether to persist the users groups or not
14:42:44 <ayoung> note that "everyone is persisted " does not mean that "all groups should be persisted" either
14:42:50 <kmalloc> mordred: looking now.
14:42:59 <ayoung> it merely means that a specific group mapping should be persisted
14:43:04 <ayoung> johnthetubaguy, that make sense?
14:44:05 <kmalloc> mordred: so uh... Dumb question, I thought we worked hard for versionless entries in the catalog?
14:44:17 <johnthetubaguy> ayoung: I think so... although something tells me we are missing a bit, just not sure what right now
14:44:22 <kmalloc> mordred: this is, afaict, undoing the "best practices"
14:45:31 <ayoung> johnthetubaguy, I think if we make it possible to identify on a group mapping that the group assignment should be persisted, we'll have it down.  We will also need a way to clean out group assignments, although that technically can be done today with e.g. an ansible role
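[Editor's note: one way ayoung's option 4 could look, written as a hypothetical mapping fragment. The "persist" attribute is invented for illustration; no such flag exists in keystone's mapping schema today.]

```python
# Purely hypothetical: the mapping itself marks which group assignments
# should survive beyond the federated session (so trusts/app creds keep
# working), while unmarked ones stay ephemeral.
hypothetical_mapping_local = [
    {
        "group": {"name": "two-factor-users", "domain": {"name": "Default"}},
        "persist": True,   # invented attribute: persist this group assignment
    }
]
```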
14:46:41 <johnthetubaguy> right, today we have persist roles or transient groups, what we really need is a "clear out stale" at login option, but I need to write this all down, I am getting lost
14:50:42 <ayoung> johnthetubaguy, not at login
14:50:48 <ayoung> johnthetubaguy, that may never happen
14:51:34 <johnthetubaguy> its only one way to get refreshed info, but its a way.
14:51:53 <johnthetubaguy> you would need out of band automation...
14:52:11 <kmalloc> mordred: this is sounding like a bug in ksa
14:52:27 <kmalloc> mordred: or in cinder handling versionless endpoints
14:52:36 <johnthetubaguy> ayoung: that is the fishy bit I was worried about before
14:52:48 <ayoung> limited time trusts
14:53:17 <ayoung> make them refreshable...you have to log in periodically with the right groups to keep it alive
14:53:23 <ayoung> so...limited time group assignments?
14:53:58 <ayoung> put an expiration date on some group assignments, with a way to make them permanent via the group API
14:55:15 <johnthetubaguy> ayoung: yeah, that could work, like app cred expiry I guess
14:55:15 <mordred> kmalloc: versionless endpoints in the catalog don't work for cinder - so the bug is there
14:55:38 <kmalloc> Ahhhh.
14:55:48 <mordred> kmalloc: I believe that basically we somehow missed actually getting cinder updated in the same way nova got updated
14:55:57 <kmalloc> So, how did this ever work :P ;)
14:56:08 <mordred> nobody has used the block-storage endpoint for any reason yet
14:56:18 <kmalloc> Haha
14:56:21 <kmalloc> Ok then.
14:56:23 <mordred> the other endpoints in the catalog in devstack - for volumev2 and volumev3 - are versioned
14:56:46 <johnthetubaguy> lbragstad: added a comment here: https://review.openstack.org/#/c/540803/9/specs/keystone/rocky/hierarchical-unified-limits.rst@35
14:56:55 <lbragstad> johnthetubaguy: just saw that - reading it now
14:57:47 <mordred> kmalloc: so - in short, the answer to "how did this ever work?" is "It never has" :)
14:57:55 <lbragstad> johnthetubaguy: is VO a domain?
14:58:00 <lbragstad> or a top level project?
14:58:02 <kmalloc> We need to bug cinder folks, but, annnyway, this.sounds like a clean fix for now.
14:58:19 <mordred> kmalloc: yah - ksa correctly fails with the misconfigured endpoint
14:58:35 <lbragstad> i'm struggling with the mapping of terms
15:00:04 <lbragstad> ok - so VO seems like a parent project or a domain
15:00:10 <kmalloc> mordred: yay successfully failing >.>
15:00:38 <lbragstad> and VO groups are children projects of the VO
15:02:35 <johnthetubaguy> lbragstad: yeah
15:02:50 <lbragstad> johnthetubaguy: ok
15:02:54 <johnthetubaguy> lbragstad: VO = parent project, in my head
15:03:10 <lbragstad> cool - so if we set the VO to have 20 cores
15:03:31 <lbragstad> you *always* want the VO to be within that limit, right?
15:03:50 <johnthetubaguy> lbragstad: yes
15:04:04 <lbragstad> ok
15:04:23 <lbragstad> if we set 20 cores on the VO, and the default registered limit is 10
15:04:38 <johnthetubaguy> lbragstad: basically the PI pays the bill for all resources used by anyone in the VO
15:04:49 <johnthetubaguy> (via a research grant, but whatever)
15:04:54 <lbragstad> sure
15:05:06 <lbragstad> so the VO has to be within whatever the limit is
15:05:21 <lbragstad> it can never over-extend it's limit, right?
15:05:28 <johnthetubaguy> correct
15:05:38 <lbragstad> here are where my questions come in
15:05:41 <johnthetubaguy> no exceptions, wibble races and if you care about them
15:05:53 <lbragstad> say we have a VO with 20 cores
15:05:54 <johnthetubaguy> so you say this "which defeats the purpose of not having the service understand the hierarchy."
15:06:07 <johnthetubaguy> we should get back to that bit after your questions
15:06:13 <lbragstad> yeah
15:06:20 <lbragstad> so the VO doesn't have any children
15:06:29 <lbragstad> and the default registered limit for cores is 10
15:06:34 <johnthetubaguy> yep
15:06:46 <lbragstad> so any project without a specific limit assumes the default
15:06:50 <lbragstad> right?
15:06:52 <johnthetubaguy> yep
15:06:54 <lbragstad> cool
15:07:09 <lbragstad> so - as the PI, I create a child under VO
15:07:21 <johnthetubaguy> yep, gets a limit of 10
15:07:28 <lbragstad> yep - because i didn't specify it
15:07:30 <johnthetubaguy> for all 15 sub projects I create
15:07:41 <lbragstad> then i create another child
15:07:49 <lbragstad> so - two children, each default to 10
15:07:52 <johnthetubaguy> yep
15:07:58 <lbragstad> and total resources in the tree is 20
15:08:05 <lbragstad> technically, I'm at capacity
15:08:26 <johnthetubaguy> so not in the model I was proposing
15:08:34 <johnthetubaguy> (in my head)
15:08:43 <lbragstad> say both child use all of their quota
15:08:47 <johnthetubaguy> it might be the old "garbutt" model, if memory serves me
15:09:03 <johnthetubaguy> right, so we have project A, B, C
15:09:09 <johnthetubaguy> say we also created D
15:09:16 <johnthetubaguy> B, C, D are all children of A
15:09:27 <lbragstad> A = VO, B and C are children with default limits for cores
15:09:40 <johnthetubaguy> yeah
15:09:48 <lbragstad> should you be able to create D?
15:09:50 <johnthetubaguy> let's add D, same as B and C
15:09:52 <lbragstad> without specifying a limit?
15:09:55 <johnthetubaguy> I am saying yes
15:10:01 <johnthetubaguy> (just go with it...)
15:10:08 <lbragstad> ok
15:10:11 <lbragstad> so - let's create D
15:10:21 <johnthetubaguy> so we have A (limit 20), B, C, D all with limit of 10
15:10:30 <lbragstad> yep - so the sum of the children is 30
15:10:34 <johnthetubaguy> yep
15:10:50 <johnthetubaguy> so let's say actual usage of all projects is 1 test VM
15:10:57 <johnthetubaguy> so current actual usage is now 4
15:11:05 <lbragstad> sure
15:11:20 <johnthetubaguy> project A now tries to spin up 19 VMs, what happens
15:11:34 <johnthetubaguy> well that's bad, it's only allowed 16
15:11:39 <lbragstad> right
15:11:43 <johnthetubaguy> because B, C, D are using some
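[Editor's note: the example above written out as arithmetic; an editorial illustration only. A's limit of 20 has to cover usage across the whole tree, so with one test VM in each of A, B, C and D, A only has headroom for 16 more.]

```python
limits = {"A": 20, "B": 10, "C": 10, "D": 10}
usage = {"A": 1, "B": 1, "C": 1, "D": 1}

headroom_for_A = limits["A"] - sum(usage.values())   # 20 - 4 = 16
requested = 19
allowed = requested <= headroom_for_A                # False: only 16 allowed
```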
15:11:58 * lbragstad assumes the interface for using oslo.limit is enforce(project_id, project_usage)
15:12:19 <johnthetubaguy> that is the bit I don't like
15:12:41 <lbragstad> i'm not sure i like it either, but i'm curious to hear why you don't
15:12:59 <johnthetubaguy> so my assumption was based on the new Nova code, which is terrible I know
15:13:16 <johnthetubaguy> in that world we have call back to the project:
15:13:26 <johnthetubaguy> count_usage(resource_type, project_id)
15:13:34 <lbragstad> oh...
15:13:45 <johnthetubaguy> so when you call enforce(project_id, project_usage)... you get the idea
15:14:08 <lbragstad> right - oslo.limit gets the tree A, B, C, and D from keystone
15:14:21 <lbragstad> then uses the callback to get all the usage for that tree
15:14:28 <johnthetubaguy> right, so the limit on A really means sum A, B, C and D usage
15:14:47 <johnthetubaguy> and any check on a leaf means check the parent for any limit
15:14:58 <lbragstad> hmm
15:15:04 <johnthetubaguy> or something a bit like that
15:15:15 <lbragstad> ok - i was operating yesterday under the assumption there wouldn't be a callback
15:15:18 <johnthetubaguy> now.. this could be expensive
15:15:32 <lbragstad> ^ that's why i was operating under that assumption :)
15:15:40 <johnthetubaguy> when there are lots of resources owned by a VO, but then its expensive when you are a really big project too
15:15:43 <lbragstad> i didn't think we'd actually get anyone to bite on that
15:15:52 <johnthetubaguy> so its one extra thing that is nice about the two level limit
15:16:25 <lbragstad> are you saying you wouldn't need the callback with a two-level model?
15:16:38 <johnthetubaguy> I think you need the callback still, in some form
15:16:48 <lbragstad> ok - i agree
15:16:58 <johnthetubaguy> it just you know its not going to be too expensive, as there are only two levels of hierarchy
15:18:14 <lbragstad> you could still have a parent with hundreds of children
15:18:18 <lbragstad> i suppose
15:18:35 <johnthetubaguy> you could, and it would suck
15:19:09 <lbragstad> ok - so i think we hit an important point
15:19:26 <johnthetubaguy> now we don't have any real use cases that map to the one that is efficient to implement
15:19:42 <johnthetubaguy> we can invent some, for sure, but they feel odd
15:20:03 <lbragstad> if you assume oslo.limit accepts a callback for getting usage of something per project, then you can technically create project D
15:20:39 <johnthetubaguy> so even in the other model, what if project A has already used all 20?
15:20:40 <lbragstad> if you can't assume that about oslo.limit, then creating project D results in a possible violation of the limit, because the library doesn't have a way to get all the usage it needs to make a decision with confidence
15:21:10 <johnthetubaguy> even though the children only sum to 20, it doesn't help you
15:21:52 <lbragstad> i think that needs to be a characteristic of the model
15:22:02 <johnthetubaguy> agreed
15:22:06 <lbragstad> if A.limit == SUM(B.limit, C.limit)
15:22:17 <lbragstad> then you've decided to push all resource usage to the leaves of the tree
15:22:29 <lbragstad> which we can figure out in oslo.limit
15:22:45 <lbragstad> so when you go to allocate 2 cores to A, we can say "no"
15:22:55 <johnthetubaguy> well, I think what you are looking for is where project A usage counts any resource limits in the children, before counting its own resources
15:23:17 <lbragstad> oh...
15:23:24 <johnthetubaguy> so A, B, C, (20, 10, 10) means project A can't create anything
15:23:30 <lbragstad> i guess i was assuming that A.usage = 0
15:24:07 <johnthetubaguy> this is where the three level gets nuts, as you get usages all over the three levels, etc
15:24:16 <lbragstad> yeah.....
15:24:39 <johnthetubaguy> I think we can't assume A has usage 0, generally it has all the shared resources
15:24:53 <johnthetubaguy> or the stuff that was created before someone created sub projects
15:24:59 <lbragstad> right...
15:25:01 <johnthetubaguy> same difference I guess
15:25:34 <johnthetubaguy> anyways, I think that is why I like us going down the specific two level Virtual organisation case, it makes all the decisions for us, its quite specific and real
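[Editor's note: a sketch of the stricter reading johnthetubaguy describes just above, where the children's *limits* are reserved against the parent before the parent's own usage is counted. Editorial illustration only.]

```python
# With A, B, C = (20, 10, 10), the children's limits already consume all of
# A's limit, so project A has no headroom to create anything itself.
limits = {"A": 20, "B": 10, "C": 10}
children = ["B", "C"]

reserved_by_children = sum(limits[c] for c in children)   # 20
headroom_for_A = limits["A"] - reserved_by_children       # 0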
15:25:46 <ayoung> anyone else feel like a lot of these problems come down to "we know when to do the action, but we have no event on which to undo it."
15:26:25 <openstackgerrit> XiaojueGuan proposed openstack/keystoneauth master: Trivial: Update pypi url to new url  https://review.openstack.org/565418
15:27:32 <lbragstad> johnthetubaguy: ok - so what do we need to accomplish this? the callback?
15:28:06 <johnthetubaguy> lbragstad: this code may help: https://github.com/openstack/nova/blob/c15a0139af08fd33d94a6ca48aa68c06deadbfca/nova/quota.py#L172
15:28:23 <johnthetubaguy> lbragstad: you could get the usage for each project as a callback?
15:28:57 <lbragstad> well - somewhere in the service (e.g. nova in this case) code, I'm envisioning
15:29:16 <lbragstad> limit.enforce(project_id, usage_callback)
15:29:41 <lbragstad> oslo.limit gets the limit tree
15:29:57 <lbragstad> then uses the callback to collect usage for each node in the tree
15:30:21 <lbragstad> and compares that to the request being made to return a "yes" or "no" to nova
15:30:54 <lbragstad> sorry - limit.enforce(resource_name, project_id, usage_callback)
15:30:58 <johnthetubaguy> looking for a good line in Nova to help
15:31:38 <johnthetubaguy> lbragstad: https://github.com/openstack/nova/blob/644ac5ec37903b0a08891cc403c8b3b63fc2a91c/nova/compute/api.py#L293
15:32:15 <lbragstad> ok
15:32:35 <lbragstad> part of that feels like it would belong in oslo.limit
15:34:17 <johnthetubaguy> lbragstad: my bad, here we are, this moves to an oslo.limit call, where oslo.limit has been set up with some registered callback, a bit like CONF reload callback: https://github.com/openstack/nova/blob/3800cf6ae2a1370882f39e6880b7df4ec93f4b93/nova/api/openstack/compute/server_groups.py#L157
15:34:26 <lbragstad> and it would look like limit.enforce('injected_files', 'e33f5fbbe4a4407292e7ccef49ebac0c', api._get_injected_file_usage_for_project)
15:34:33 <johnthetubaguy> note the check quota, create thing, recheck quota model
15:35:11 <johnthetubaguy> sorry, the delta check is better, check we can increase by one, etc, etc.
15:35:23 <lbragstad> ahh
15:35:53 <lbragstad> so - objects.Quotas.check_deltas() would be replaced by a call to oslo.limit
15:36:12 <johnthetubaguy> yeah
15:36:30 <lbragstad> i suppose the oslo.limit library is going to need to know the usage requested, too
15:36:34 <johnthetubaguy> more complicated example, to note how to make it more efficient: https://github.com/openstack/nova/blob/ee1d4e7bb5fbc358ed83c40842b4c08524b5fcfb/nova/compute/utils.py#L842
15:36:53 <johnthetubaguy> thats the delta bit in the above two examples
15:37:28 <johnthetubaguy> the other case is a bit odd... its a more static limit, there are a few different cases
15:37:39 <lbragstad> so limit.enforce('cores', 'e33f5fbbe4a4407292e7ccef49ebac0c', 2, api._get_injected_file_usage_for_project)
15:37:55 <johnthetubaguy> so it needs to be a dict
15:38:02 <johnthetubaguy> https://github.com/openstack/nova/blob/ee1d4e7bb5fbc358ed83c40842b4c08524b5fcfb/nova/compute/utils.py#L852
15:38:08 <johnthetubaguy> otherwise it really sucks
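[Editor's note: one possible shape for the enforce call being sketched in this exchange. The function names, argument names and the callback contract are editorial assumptions, not the final oslo.limit API; passing the deltas as a dict lets a single call cover cores, ram and instances without counting usage three separate times.]

```python
def get_limits_from_keystone(project_id, resource_names):
    # Stand-in for a unified-limits lookup; real code would call keystone.
    defaults = {"instances": 10, "cores": 20, "ram": 51200}
    return {name: defaults[name] for name in resource_names}

def enforce(project_id, deltas, usage_callback):
    """Return False if project_id cannot consume `deltas`.

    `deltas` maps resource name -> requested amount, e.g.
    {"instances": 1, "cores": 2, "ram": 2048}.
    `usage_callback(project_id, resource_names)` returns current usage for
    those resources in that project.
    """
    limits = get_limits_from_keystone(project_id, list(deltas))
    usage = usage_callback(project_id, list(deltas))
    for resource, delta in deltas.items():
        if usage[resource] + delta > limits[resource]:
            return False
    return True
```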
15:38:27 <lbragstad> hmm
15:38:41 <johnthetubaguy> I mean oslo.limits wouldn't check the user quotas of course, they are deleted
15:38:41 <lbragstad> doesn't the max count stuff come from keystone now though?
15:39:49 <lbragstad> don't we only need to know the resource name, requested units, project id, and usage callback?
15:40:30 <johnthetubaguy> so you would want an oslo limits API to check the max limits I suspect, but that could be overkill
15:41:36 <johnthetubaguy> anyways, focus on the instance case, yes that is what you need
15:41:54 <lbragstad> how is a max limit different from a project limit or registered limit?
15:42:14 <johnthetubaguy> the count of existing resources would always return zero I think
15:42:42 <johnthetubaguy> its like, how many tags can you have on an instance, its a per project setting
15:43:03 <johnthetubaguy> but the count is not per project, its per instance
15:43:16 <johnthetubaguy> I messed that up...
15:43:17 <johnthetubaguy> reset
15:43:28 <johnthetubaguy> tags on an instance is a good example
15:43:32 <johnthetubaguy> to add a new tag
15:43:37 <johnthetubaguy> check per project limit
15:43:56 <johnthetubaguy> but you count the instance you are setting it on, not loads of resources owned by that project
15:44:08 <openstackgerrit> XiaojueGuan proposed openstack/keystone-specs master: Trivial: Update pypi url to new url  https://review.openstack.org/565411
15:44:15 <johnthetubaguy> so usually we don't send a delta, we just check the actual number
15:44:24 <johnthetubaguy> but we can probably ignore all that for now
15:44:29 <lbragstad> hmm
15:44:29 <johnthetubaguy> focus on the basic case first
15:44:30 <lbragstad> ok
15:44:36 <openstackgerrit> XiaojueGuan proposed openstack/keystone-specs master: Trivial: Update pypi url to new url  https://review.openstack.org/565411
15:44:38 <lbragstad> i think i agree
15:44:40 <johnthetubaguy> core, instances, ram when doing create instance
15:44:54 * lbragstad has smoke rolling out his ears
15:45:00 <johnthetubaguy> did you get how we check quotas twice for every operation
15:45:24 <lbragstad> yeah - i think we could expose that both ways with oslo.limit
15:45:43 <lbragstad> and nova would just iterate the creation call and set each requested usage to 1?
15:45:53 <johnthetubaguy> lbragstad: based on my experience that is a good sign, it means you understand a good chunk of the problem now :)
15:45:54 <lbragstad> to maintain the check for a race condition, right?
15:46:47 <johnthetubaguy> so the second call you would just set the deltas to zero I think, I should check how we do that
15:47:18 <johnthetubaguy> yeah, this is the second check: https://github.com/openstack/nova/blob/3800cf6ae2a1370882f39e6880b7df4ec93f4b93/nova/api/openstack/compute/server_groups.py#L180
15:47:23 <lbragstad> that feels like it should be two different functions
15:47:25 <johnthetubaguy> delta of zero
15:47:35 <johnthetubaguy> I mean it could be
15:48:00 <lbragstad> the first call you want it to tell you if you can create the thing
15:48:17 <lbragstad> the second call happens after you've created said thing
15:48:24 <johnthetubaguy> yeah
15:48:51 <lbragstad> and the purpose of the second call is to check that someone didn't put us over quota in that time frame, right?
15:49:03 <johnthetubaguy> yeah
15:49:05 <lbragstad> and if they did, we need to clean up the instance
15:49:37 <johnthetubaguy> yeah
15:49:42 <lbragstad> ok
15:49:49 <johnthetubaguy> it is a very poor implementation of this: https://en.wikipedia.org/wiki/Optimistic_concurrency_control
15:50:16 <lbragstad> so the first call happens in the "begin"
15:50:30 <lbragstad> and the second is done in the "validate"
15:50:51 <johnthetubaguy> basically
15:51:01 <lbragstad> ok - i think that makes sense
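[Editor's note: a sketch of the check / create / recheck pattern being described, reusing the hypothetical enforce() from the earlier sketch. `do_create_server` and `do_delete_server` are assumed service-side helpers; this is not real nova or oslo.limit code.]

```python
def create_server(project_id, usage_callback):
    # "begin": can the project consume one more instance and its cores?
    if not enforce(project_id, {"instances": 1, "cores": 2}, usage_callback):
        raise Exception("over quota")

    server = do_create_server(project_id)     # assumed service-side helper

    # "validate": re-check with zero deltas; if a racing request pushed the
    # project over its limit in the meantime, clean up what was just built.
    if not enforce(project_id, {"instances": 0, "cores": 0}, usage_callback):
        do_delete_server(server)              # assumed service-side helper
        raise Exception("over quota (lost the race)")
    return server
```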
15:51:10 <johnthetubaguy> although its all a bit fast and loose, kill the API and you will be left over quota, possibly
15:51:38 <johnthetubaguy> it all assumes some kind of rate limiting is in place to limit damage around the edge cases
15:51:39 <lbragstad> right- because you didn't have a chance to process the roll back
15:52:14 <johnthetubaguy> we don't hide the writes from other transactions, basically
15:52:28 <johnthetubaguy> well, don't hide the writes until commit happens
15:53:02 <johnthetubaguy> it's a bit more optimistic than optimistic concurrency control, it's more concurrency damage limitation
15:53:25 * lbragstad nods
15:53:49 <johnthetubaguy> Ok, did we both burn out yet? I think I am close
15:54:05 <lbragstad> same...
15:54:24 <johnthetubaguy> I think you understand where I was thinking with all this though...?
15:54:30 <johnthetubaguy> does that make more sense to you now?
15:54:34 <lbragstad> i think so
15:54:38 * johnthetubaguy waves arms about
15:54:45 <lbragstad> i think i can rework the model
15:54:54 <lbragstad> because things change a lot with the callback bit
15:55:12 <johnthetubaguy> I don't think its too terrible to implement
15:55:33 <lbragstad> yeah - i just need to draw it all out
15:55:50 <lbragstad> but that callback lets us "over-commit" limits but maintain usage
15:55:52 <lbragstad> right?
15:55:58 <johnthetubaguy> I think it means we can implement any enforcement model in the future (if we don't care about performance)
15:56:02 <lbragstad> and that's the important bit you want regarding the VO/
15:56:08 <openstackgerrit> Merged openstack/keystone master: Fix the outdated URL  https://review.openstack.org/564714
15:56:08 <johnthetubaguy> yeah
15:56:16 <lbragstad> ok - so to recap
15:56:37 <lbragstad> 1.) make sure we include a statement about only supporting two-level hierarchies
15:56:54 <lbragstad> 2.) include the callback detail since that's important for usage calculation
15:57:10 <lbragstad> is that it?
15:57:28 <johnthetubaguy> probably...
15:57:34 <johnthetubaguy> I guess what does the callback look like
15:57:59 <johnthetubaguy> list of resources to count for a given project_id?
15:58:16 <johnthetubaguy> or a list of project_ids and a list of resources to count for each project?
15:58:25 <johnthetubaguy> maybe it doesn't matter, so simpler is better
15:58:41 <lbragstad> i would say it's a method that returns an int that represents the usage
15:58:53 <lbragstad> it must accept a project id
15:59:15 <lbragstad> because it will depend on the structure of the tree
15:59:29 <lbragstad> and just let oslo.limit iterate the tree and call for usage
16:00:02 <empty_cup> i received a 401, and looked at the keystone log and see 3 lines: mfa rules not processed for user; user has no access to project; request requires authentication
16:00:47 <johnthetubaguy> lbragstad: yeah, I think we will need it to request multiple resources in a single go
16:01:09 <wxy|> johnthetubaguy: I don't oppose two-level hierarchies. Actually I like it and I think it is simple enough for the original proposal.
16:01:18 <johnthetubaguy> lbragstad: its a stupid reason, think about nova checking three types of resources, we don't want to fetch the instances three times
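[Editor's note: a hypothetical service-side callback along the lines johnthetubaguy suggests: count several resource types for one project in a single pass, so the service doesn't fetch its instances once per resource. `list_instances` is an assumed helper standing in for one database query.]

```python
def count_usage(project_id, resource_names):
    instances = list_instances(project_id)    # assumed helper: one DB hit
    totals = {
        "instances": len(instances),
        "cores": sum(i["vcpus"] for i in instances),
        "ram": sum(i["memory_mb"] for i in instances),
    }
    return {name: totals[name] for name in resource_names}

# oslo.limit would then walk the limit tree from keystone and call
# count_usage(child_id, ["instances", "cores", "ram"]) once per project.
```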
16:02:20 <johnthetubaguy> wxy|: cool, for the record I don't oppose multi-level, its just that feels like a next step, and I want to see us make little steps in the right direction
16:02:40 <lbragstad> ++
16:02:49 <lbragstad> this is a really hard problem...
16:03:31 <johnthetubaguy> yeah, every time I think it's easy I find some new horrible hole!
16:04:10 <lbragstad> exactly...
16:04:20 <johnthetubaguy> would love to get melwitt to take a peek at this stuff, given all the Nova quota bits she has been doing, she has way more detailed knowledge than me
16:05:07 <lbragstad> yea
16:05:14 <lbragstad> i can work on documenting everything
16:05:24 <lbragstad> which should hopefully make it easier
16:05:30 <lbragstad> instead of having to read a mile of scrollback
16:06:02 <johnthetubaguy> awesome, do ping me again for a review when it's ready, I will try to make time for that, almost all our customers will need this eventually, well most of them anyways!
16:06:38 <johnthetubaguy> once the federation is working (like CERN have already) they will hit this need
16:07:18 <lbragstad> johnthetubaguy: ++
16:08:03 <johnthetubaguy> OK, so I should probably call it a day, my brain is mashed and its just past 5pm
16:08:22 <lbragstad> johnthetubaguy: sounds like a plan, thanks for the help!
16:08:22 <johnthetubaguy> now is not the time for writing ansible
16:08:43 <johnthetubaguy> no problem, happy I could help
16:09:21 <wxy|> johnthetubaguy: yeah. The only problem I'm concerned about is the implementation. In your spec, enabling the hierarchy is controlled by the API with "include_all_children". It means that in a keystone system, some projects may be hierarchical, some not. That's a little messy for a Keystone system IMO. How about the limit model driver way? So that all projects' behavior will be the same at the same time.
16:13:05 <wxy|> johnthetubaguy: some way like this: https://review.openstack.org/#/c/557696/3 Of course it can be changed to two-level easily.
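[Editor's note: an illustration of the driver/model approach wxy| describes, i.e. one deployment-wide enforcement model instead of a per-request include_all_children flag. The option name and values below are editorial assumptions; check the spec and review linked above for what actually landed.]

```ini
# Hypothetical keystone.conf snippet -- names/values are assumptions.
[unified_limit]
enforcement_model = strict_two_level
```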
16:54:39 * ayoung moves Keystone meeting one hour earlier on his schedule
16:57:44 <kmalloc> ayoung: set the meeting time in UTC/or reykjavik timezone (if your calendar doesn't have UTC)
16:58:04 <kmalloc> ayoung: means you won't have to "move" it explicitly :)
16:58:36 <kmalloc> the latter use of reykjavik was a workaround for google calendar (not sure if that has been fixed)
17:02:20 <lbragstad> the ical on eavesdrop actually has the meeting in UTC
17:02:34 <openstack> lbragstad: Error: Can't start another meeting, one is in progress.  Use #endmeeting first.
17:02:39 <lbragstad> ugh
17:02:43 <lbragstad> #endmeeting