Tuesday, 2018-05-01

02:43 <openstackgerrit> Merged openstack/keystone master: Use the provider_api module in limit controller  https://review.openstack.org/562712
04:35 <openstackgerrit> XiaojueGuan proposed openstack/keystone-specs master: Trivial: Update pypi url to new url  https://review.openstack.org/565411
04:40 <openstackgerrit> Lance Bragstad proposed openstack/keystone-specs master: Add scenarios to strict hierarchy enforcement model  https://review.openstack.org/565412
04:59 <openstackgerrit> XiaojueGuan proposed openstack/keystoneauth master: Trivial: Update pypi url to new url  https://review.openstack.org/565418
07:09 <openstackgerrit> OpenStack Proposal Bot proposed openstack/keystonemiddleware master: Imported Translations from Zanata  https://review.openstack.org/565455
10:43 <johnthetubaguy> wondering if anyone could help me better understand the heat and federated user problems: https://bugs.launchpad.net/keystone/+bug/1589993
10:43 <openstack> Launchpad bug 1589993 in OpenStack Identity (keystone) "cannot use trusts with federated users" [High,Triaged]
10:44 <johnthetubaguy> I am using a group mapping and I think I see the same problems as described in the bug (as I don't do the create-role-assignment-at-login thingy)
11:00 <mordred> kmalloc: the issue I reported yesterday on the new ksa stack is an issue with the devstack endpoint registration
13:39 <lbragstad> johnthetubaguy: ping - i reviewed wxy|'s patch for unified limits with the use cases from CERN
13:39 <lbragstad> was curious if I could run something by you
13:40 <lbragstad> i attempted to walk through the scenarios here https://review.openstack.org/#/c/565412/
13:41 <johnthetubaguy> lbragstad: sure, I should pick your brain about a federation bug with heat in return :p
13:41 <lbragstad> johnthetubaguy: that sounds fair - we can start with federation stuff
13:42 <johnthetubaguy> OK, basically it's this bug again: https://bugs.launchpad.net/keystone/+bug/1589993
13:42 <openstack> Launchpad bug 1589993 in OpenStack Identity (keystone) "cannot use trusts with federated users" [High,Triaged]
13:42 <lbragstad> there is a lot of context behind https://bugs.launchpad.net/keystone/+bug/1589993
13:42 <johnthetubaguy> we seem to hit that when doing dynamic role assignment
13:42 <johnthetubaguy> which I think is expected?
13:42 <johnthetubaguy> well, known I guess
13:43 <lbragstad> i remember them bringing this up to us in atlanta
13:43 <lbragstad> during the PTG
13:43 <johnthetubaguy> the problem is doing the dynamic role assignment on first login doesn't work for our use case
13:44 <johnthetubaguy> basically because we want the group to map correctly to an assertion that can be quite dynamic (level of assurance)
13:44 <johnthetubaguy> not sure if I am missing a workaround, or some other way of doing things
13:45 <lbragstad> well - we do have something that came out of the newton design sessions aimed at solving a similar problem
13:45 <lbragstad> it was really for the "first login" case
13:46 <lbragstad> where the current federated flow resulted in a terrible user experience, because a user would need to hit the keystone service provider to get a shadow user created
13:46 <lbragstad> then an admin would have to come through and manually create the assignments between that shadow user and various projects
13:47 <lbragstad> which isn't a great experience for users, because it's like "hurry up and wait"
13:47 <lbragstad> so we built - https://docs.openstack.org/keystone/latest/advanced-topics/federation/federated_identity.html#auto-provisioning
13:47 <johnthetubaguy> right, but the auto provisioning fixes the roles on first login, I thought?
13:48 * lbragstad double checks the code
13:49 <johnthetubaguy> we would need it to remove role assignments and replace them on a subsequent login
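[editor's note: the auto-provisioning behaviour lbragstad links above is driven by a `projects` key in a mapping's local rules. A minimal sketch follows; the remote attribute, project name, and role name are illustrative assumptions, not values taken from this discussion:]

```python
import json

# Sketch of a keystone federation mapping that uses auto-provisioning
# (the "projects" key in the local rules). The remote attribute
# (REMOTE_USER), project name, and role name below are illustrative
# assumptions only.
mapping = {
    "rules": [
        {
            "local": [
                {"user": {"name": "{0}"}},
                {
                    # auto-provisioning: keystone ensures the project exists
                    # and creates the role assignment when the mapped user
                    # authenticates
                    "projects": [
                        {"name": "research-vo", "roles": [{"name": "member"}]}
                    ]
                },
            ],
            "remote": [{"type": "REMOTE_USER"}],
        }
    ]
}

print(json.dumps(mapping, indent=2))
```

[as the discussion below notes, this provisions assignments on every federated authentication but never removes ones the mapping no longer produces.]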
13:49 <johnthetubaguy> (FWIW, I suspect we are about to hit application credential issues for the same reason, but it's a pure guess at this point)
13:51 <lbragstad> so - it looks like the code will run each time federated authentication is used
13:51 <johnthetubaguy> maybe we didn't test this properly then
13:51 <lbragstad> so if the mapping changes in between user auth requests, the mapping will be applied both times
13:51 <johnthetubaguy> would it remove role assignments?
13:52 <lbragstad> it would not
13:52 <lbragstad> not the mapped auth bit
13:52 <johnthetubaguy> yeah, that is the bit the group mapping does for "free"
13:52 <lbragstad> but... it might if you drop the user
13:52 <johnthetubaguy> so the use case is where the IdP says if two-factor was used
13:53 <johnthetubaguy> if two-factor isn't used, then they get access to fewer projects
13:53 <johnthetubaguy> I mean it's not two-factor, it's a generic level-of-assurance assertion, but same difference really
13:53 <lbragstad> ok - that makes sense
13:54 <johnthetubaguy> it's not something I had considered before working through this project
13:54 <lbragstad> right - it's elevating permissions based on attributes of the assertion
13:54 <johnthetubaguy> it's using EGI AAI, an IdP proxy, so a single user can log in with multiple identities, with varying levels of assurance; some folks need a minimum level
13:55 <lbragstad> couldn't you use the regular group mapping bits?
13:55 <johnthetubaguy> we do, but that breaks heat
13:55 <johnthetubaguy> or seems to
13:57 * lbragstad thinks
13:57 <johnthetubaguy> (not sure how this maps to application credentials yet, as the openid connect access_token dance is really stupid for ansible/CLI usage patterns)
13:58 <lbragstad> the problem is that the trust implementation relies on the role assignment existing in keystone
14:00 <lbragstad> i think it would work with auto-provisioning, but then the problem becomes cleaning up stale role assignments
14:00 <lbragstad> so - something that would be very crude
14:01 <johnthetubaguy> thing is, not cleaning those up is a feature for some users, but it could be a mapping option I guess, force_refresh=True?
14:01 <lbragstad> would be to use auto-provisioning and then purge users using keystone-manage mapping_purge
14:02 * lbragstad looks back at the code
14:02 <lbragstad> ah - nevermind, mapping_purge only works for ID mappings
14:02 <johnthetubaguy> maybe an always_purge_old_roles mapping? would that delete app creds as we modify the role assignments?
14:02 <lbragstad> not shadow users
14:03 <lbragstad> so - with a refresh option, how would that get invoked?
14:03 <johnthetubaguy> I guess we want something to say that the mapping should delete all other role assignments, somehow
14:03 <johnthetubaguy> in a way that doesn't always clear out all current application credentials
14:04 <johnthetubaguy> (unless there were role assignments deleted, then I guess you have to invalidate app creds, which seems fine)
14:04 <johnthetubaguy> hmm, needs writing up I think
14:05 <lbragstad> yeah - that's the stance we take on app creds currently
14:05 <johnthetubaguy> yankcrime: are you following along with my thinking here, or have I gone crazy :)
14:05 <lbragstad> anytime a role assignment is removed for a user, we invalidate the application credential
14:05 <johnthetubaguy> lbragstad: yeah, I think I still like that, it's simple
14:06 <johnthetubaguy> I think the action is to write up the use case, and document the missing piece
14:06 <johnthetubaguy> like a partial spec write up or something?
14:06 <johnthetubaguy> (a very short one!)
14:07 <lbragstad> at least documenting the use case would be good
14:07 <lbragstad> i hadn't really heard anything back from the heat team
14:08 <johnthetubaguy> OK, cool
14:09 <johnthetubaguy> feels like the group mapping might want to get deprecated if it really doesn't work with these features
14:09 <yankcrime> johnthetubaguy: not at all, i think that's a succinct summary of what we're trying to achieve
14:09 <johnthetubaguy> but anyways, that is for that write up
14:09 <johnthetubaguy> yankcrime: cool
14:09 <johnthetubaguy> now you may have walked into my trap, but we should work out who should write that up :)
14:10 <lbragstad> ok - one last question
14:10 <yankcrime> bah, i'd hoped that by agreeing i'd avoid that trap ;)
14:10 <johnthetubaguy> lbragstad: sure
14:10 <yankcrime> the use-case is pretty clear, i don't mind getting that drafted
14:10 <yankcrime> (clear to us i mean)
14:10 <lbragstad> you always know which users need this refresh behavior, right?
14:11 <johnthetubaguy> I think it's all users in this case, so I think the answer is yes
14:11 <yankcrime> we can infer which users need this refresh from other attributes
14:11 <yankcrime> (claims, whatever)
14:11 <lbragstad> ok - i was thinking about a possible implementation that might be pretty easy
14:12 <yankcrime> but yeah, right now for an mvp we can safely assume all users in this case
14:12 <lbragstad> but it would require using the keystone API to set an attribute on the user's reference that would get checked during federated authentication
14:12 <johnthetubaguy> if that can be set in the mapping? maybe
14:12 <ayoung> what if we are overthinking this?
14:13 <ayoung> what if... we really don't want dynamic role assignments or any of that
14:13 <ayoung> and instead... somehow use HMT to solve the problem
14:13 <ayoung> first login, a federated user gets put into a pool
14:13 <ayoung> and... we have a rule (hand wave) that says that user can create their own project
14:14 <ayoung> the pool is the parent project, and it has 0 quota
14:14 <ayoung> the child project create is triggered by the user themself, and they get an automatic role assignment for it
14:14 <yankcrime> ayoung: is this mercador?
14:15 <ayoung> yankcrime, I thought it was pseudoconical
14:15 * ayoung just made a map joke
14:16 * yankcrime googles pseudoconical
14:16 <johnthetubaguy> so while I do like that, the main use case is to get access to existing project resources based on IdP assertions that are dynamic, i.e. revoke access if assertions change
14:16 <yankcrime> ayoung: lol
14:16 <ayoung> johnthetubaguy, I caught a bit about the groups
14:17 <lbragstad> keystone supports one way to do that - but things like trusts don't work with it
14:17 <ayoung> it seems to me that the role assignments should be what control that... the IdP data should be providing the input to the role assignments, but should not be creating anything
14:17 <lbragstad> if we use the other way (with auto provisioning) then the features work, but cleaning up the assignments is harder
14:17 <knikolla> i remember there was a proposed change to have the assignments from the mapping persist until the user logs in again through federation and the attributes are re-evaluated
14:18 <lbragstad> knikolla: interesting, do you know where that ended up
14:18 <johnthetubaguy> knikolla: yeah, that is basically what we need here
14:18 * yankcrime nods
14:18 <ayoung> Dec 28, 2016
14:19 <ayoung> that is as old as some of my patches
14:19 <ayoung> let me take a look at that
14:20 <johnthetubaguy> I kinda like the mapping doing a "clean out any other assignments" thing, and just having groups treated like any other role assignment
14:21 <johnthetubaguy> the delete/re-add thing screws with app creds, which would be bad
14:22 <johnthetubaguy> lbragstad: back to quotas, do you have more context for your patch, was that from an email?
14:22 <lbragstad> johnthetubaguy: it wasn't - but wxy took a shot at working the CERN use cases into https://review.openstack.org/#/c/540803/
14:23 <lbragstad> and i tried to follow it up with this - https://review.openstack.org/#/c/565412/1
14:23 <johnthetubaguy> lbragstad: why is it not restricted to two levels?
14:24 <lbragstad> johnthetubaguy: which specification?
14:24 <ayoung> so... it seems to me that group membership should be based solely on the data in the assertion. If the data that triggers the group is not there, the user cannot get a token with roles based on that group
14:25 <johnthetubaguy> lbragstad: https://review.openstack.org/#/c/540803
14:25 <ayoung> groups are not role assignments. Groups are attributes from the IdP that are separate from the user herself
14:25 <lbragstad> johnthetubaguy: there are some things in there that need to be updated
14:26 <lbragstad> and that's one of them, because i don't think we actually call that out anywhere
14:26 <lbragstad> johnthetubaguy: so - good catch :)
14:27 <ayoung> so... we should not trigger any persisted data on the group based on the IdP data. It should all be dynamic.
14:27 <johnthetubaguy> ayoung: technically, sure. But as a user we want to dynamically assign a collection of role assignments, based on attributes we get from the IdP that change over time; right now we are doing that via groups (... which breaks heat)
14:28 <lbragstad> because the role assignment isn't persistent and breaks when heat tries to use a trust
14:28 <johnthetubaguy> so the rub is you want to create an application credential, so you don't have to go via the IdP for automation, that gains what the federated user had at the time
14:28 <johnthetubaguy> and you want the heat trust to persist, as long as it can
14:29 <johnthetubaguy> now if the IdP info changes on a later login (or a refresh of the role assignments is triggered via some automation process out of band), sure, that access gets revoked, but that's not a usual everyday thing
14:30 <ayoung> johnthetubaguy, so what you are saying is that heat trusts are based on attributes that came in a federated assertion. Either we have them re-affirmed every time (ephemeral) or we make them persistent, and then we are stuck with them for all time.
14:30 <johnthetubaguy> anyways, I think we need to write up this scenario, if nothing else so it's very clear in our heads
14:30 <ayoung> This is a David Chadwick question.
14:30 <johnthetubaguy> yeah, it's in between those two... it's like the user is partially deleted...
14:31 <johnthetubaguy> it is a bit fishy, to be sure
14:31 <johnthetubaguy> it's a cache of IdP assertions
14:31 <johnthetubaguy> but anyways
14:33 <lbragstad> johnthetubaguy: so - the examples i worked on last night for unified limits only account for two levels
14:34 <johnthetubaguy> lbragstad: they sound sensible as I read through them; I am adding a comment on the other spec in a second, will see what you think about it.
14:35 <ayoung> OK... so the question is "how long should the data from the assertion be considered valid"
14:35 <ayoung> on one hand, the assertion has a time limit on it, which is something like 8 hours
14:35 <lbragstad> johnthetubaguy: granted, both examples I wrote in the follow-on break
14:35 <ayoung> and that is not long enough for most use cases
14:36 <ayoung> on the other hand, keystone gets no notifications for user changes from the IdP, so if we make it any longer, those are going to be based on stale data
14:36 <ayoung> so having a trust based on a federated account needs to be an elevated level of priv no matter what
14:36 <ayoung> same is true of any other delegation, to include app creds
14:37 <johnthetubaguy> same with app credentials
14:37 <ayoung> so, who gets to decide?
14:37 <ayoung> I think the answer is that the ability to create a trust should be an explicit role assignment
14:38 <ayoung> if you have that, then the group assignments you get via a federated token get persisted
14:38 <ayoung> another way to do it is at the IdP level, but that is fairly coarse grained
14:38 <johnthetubaguy> it feels like this is all configuration of the mapping
14:38 <ayoung> and... most IdPs don't provide more than identifying info
14:38 <ayoung> johnthetubaguy, so, it could be
14:38 <johnthetubaguy> should the group membership or assignment persist or not
14:39 <ayoung> one part of the mapping could be "persist group assignments"
14:39 <ayoung> but, the question is whether you would want that for everyone from the IdP
14:40 <openstackgerrit> ayoung proposed openstack/keystone master: Enable trusts for federated users  https://review.openstack.org/415545
14:40 <kmalloc> mordred: ah, good to know, and phew
14:40 <ayoung> that is just a manual rebase, knikolla
14:41 <johnthetubaguy> ayoung: personally I am fine with global, but I could see arguments both ways
14:41 <ayoung> johnthetubaguy, I see three cases
14:41 <mordred> kmalloc: I have a patch up to fix devstack and the sdk patch now depends on it - and correctly shows test failures with "install ksa from release" but test successes with "install ksa from master/depends-on"
14:41 <mordred> kmalloc: https://review.openstack.org/#/c/564494/
14:41 <ayoung> 1. Everyone from the IdP is ephemeral; persist nothing. 2. Everyone is persisted. 3. Certain users get promoted to persisted after the fact.
14:41 <ayoung> actually, a fourth
14:42 <ayoung> 4. there is a property in the assertion that allows the mapping to determine whether to persist the user's groups or not
14:42 <ayoung> note that "everyone is persisted" does not mean that "all groups should be persisted" either
14:42 <kmalloc> mordred: looking now.
14:42 <ayoung> it merely means that a specific group mapping should be persisted
14:43 <ayoung> johnthetubaguy, does that make sense?
14:44 <kmalloc> mordred: so uh... dumb question, I thought we worked hard for versionless entries in the catalog?
14:44 <johnthetubaguy> ayoung: I think so... although something tells me we are missing a bit, just not sure what right now
14:44 <kmalloc> mordred: this is, afaict, undoing the "best practices"
14:45 <ayoung> johnthetubaguy, I think if we make it possible to identify on a group mapping that the group assignment should be persisted, we'll have it down. We will also need a way to clean out group assignments, although that technically can be done today with e.g. an ansible role
14:46 <johnthetubaguy> right, today we have persisted roles or transient groups; what we really need is a "clear out stale" at-login option, but I need to write this all down, I am getting lost
14:50 <ayoung> johnthetubaguy, not at login
14:50 <ayoung> johnthetubaguy, that may never happen
14:51 <johnthetubaguy> it's only one way to get refreshed info, but it's a way
14:51 <johnthetubaguy> you would need out-of-band automation...
14:52 <kmalloc> mordred: this is sounding like a bug in ksa
14:52 <kmalloc> mordred: or in cinder handling versionless endpoints
14:52 <johnthetubaguy> ayoung: that is the fishy bit I was worried about before
14:52 <ayoung> limited-time trusts
14:53 <ayoung> make them refreshable... you have to log in periodically with the right groups to keep it alive
14:53 <ayoung> so... limited-time group assignments?
14:53 <ayoung> put an expiration date on some group assignments, with a way to make them permanent via the group API
14:55 <johnthetubaguy> ayoung: yeah, that could work, like app cred expiry I guess
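[editor's note: ayoung's limited-time group assignment idea can be modelled as a toy refresh step. Everything below is invented for illustration; keystone stored no expiry on group assignments at the time of this discussion:]

```python
# Toy model of expiring group assignments that are re-stamped on each
# federated login: the asserted groups get a fresh expiry, everything
# else ages out. The record shape and the 8-hour TTL (echoing the
# assertion lifetime mentioned above) are assumptions.
from datetime import datetime, timedelta, timezone

def refresh(assignments, user, groups, ttl=timedelta(hours=8)):
    """Re-stamp the user's asserted groups; stale assignments drop off."""
    now = datetime.now(timezone.utc)
    for group in groups:
        assignments[(user, group)] = now + ttl
    # keep only assignments whose expiry is still in the future
    return {key: exp for key, exp in assignments.items() if exp > now}
```

[the "make them permanent via the group API" part of the idea would just mean stamping an assignment with a far-future or sentinel expiry.]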
14:55 <mordred> kmalloc: versionless endpoints in the catalog don't work for cinder - so the bug is there
14:55 <mordred> kmalloc: I believe that basically we somehow missed actually getting cinder updated in the same way nova got updated
14:55 <kmalloc> so, how did this ever work :P ;)
14:56 <mordred> nobody has used the block-storage endpoint for any reason yet
14:56 <kmalloc> ok then.
14:56 <mordred> the other endpoints in the catalog in devstack - for volumev2 and volumev3 - are versioned
14:56 <johnthetubaguy> lbragstad: added a comment here: https://review.openstack.org/#/c/540803/9/specs/keystone/rocky/hierarchical-unified-limits.rst@35
14:56 <lbragstad> johnthetubaguy: just saw that - reading it now
14:57 <mordred> kmalloc: so - in short, the answer to "how did this ever work?" is "it never has" :)
14:57 <lbragstad> johnthetubaguy: is a VO a domain?
14:58 <lbragstad> or a top-level project?
14:58 <kmalloc> we need to bug the cinder folks, but, annnyway, this sounds like a clean fix for now.
14:58 <mordred> kmalloc: yah - ksa correctly fails with the misconfigured endpoint
14:58 <lbragstad> i'm struggling with the mapping of terms
15:00 <lbragstad> ok - so a VO seems like a parent project or a domain
15:00 <kmalloc> mordred: yay, successfully failing >.>
15:00 <lbragstad> and VO groups are children projects of the VO
15:02 <johnthetubaguy> lbragstad: yeah
15:02 <lbragstad> johnthetubaguy: ok
15:02 <johnthetubaguy> lbragstad: VO = parent project, in my head
15:03 <lbragstad> cool - so if we set the VO to have 20 cores
15:03 <lbragstad> you *always* want the VO to be within that limit, right?
15:03 <johnthetubaguy> lbragstad: yes
15:04 <lbragstad> if we set 20 cores on the VO, and the default registered limit is 10
15:04 <johnthetubaguy> lbragstad: basically the PI pays the bill for all resources used by anyone in the VO
15:04 <johnthetubaguy> (via a research grant, but whatever)
15:05 <lbragstad> so the VO has to be within whatever the limit is
15:05 <lbragstad> it can never over-extend its limit, right?
15:05 <lbragstad> here is where my questions come in
15:05 <johnthetubaguy> no exceptions, wibble races and whether you care about them
15:05 <lbragstad> say we have a VO with 20 cores
15:05 <johnthetubaguy> so you say this: "which defeats the purpose of not having the service understand the hierarchy."
15:06 <johnthetubaguy> we should get back to that bit after your questions
15:06 <lbragstad> so the VO doesn't have any children
15:06 <lbragstad> and the default registered limit for cores is 10
15:06 <lbragstad> so any project without a specific limit assumes the default
15:07 <lbragstad> so - as the PI, I create a child under the VO
15:07 <johnthetubaguy> yep, gets a limit of 10
15:07 <lbragstad> yep - because i didn't specify it
15:07 <johnthetubaguy> for all 15 sub-projects I create
15:07 <lbragstad> then i create another child
15:07 <lbragstad> so - two children, each defaulting to 10
15:07 <lbragstad> and total resources in the tree is 20
15:08 <lbragstad> technically, I'm at capacity
15:08 <johnthetubaguy> so not in the model I was proposing
15:08 <johnthetubaguy> (in my head)
15:08 <lbragstad> say both children use all of their quota
15:08 <johnthetubaguy> it might be the old "garbutt" model, if memory serves me
15:09 <johnthetubaguy> right, so we have projects A, B, C
15:09 <johnthetubaguy> say we also created D
15:09 <johnthetubaguy> B, C, D are all children of A
15:09 <lbragstad> A = VO, B and C are children with default limits for cores
15:09 <lbragstad> should you be able to create D?
15:09 <johnthetubaguy> let's add D, same as B and C
15:09 <lbragstad> without specifying a limit?
15:09 <johnthetubaguy> I am saying yes
15:10 <johnthetubaguy> (just go with it...)
15:10 <lbragstad> so - let's create D
15:10 <johnthetubaguy> so we have A (limit 20), B, C, D all with a limit of 10
15:10 <lbragstad> yep - so the sum of the children is 30
15:10 <johnthetubaguy> so let's say actual usage of all projects is 1 test VM
15:10 <johnthetubaguy> so current actual usage is now 4
15:11 <johnthetubaguy> project A now tries to spin up 19 VMs, what happens
15:11 <johnthetubaguy> well, that's bad, it's only allowed 16
15:11 <johnthetubaguy> because B, C, D are using some
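[editor's note: the arithmetic in this walkthrough checks out; a quick sketch under the assumption that A's limit caps usage summed across the whole tree:]

```python
# Worked version of the numbers above: parent A has a limit of 20, its
# children B, C, D each default to 10, and every project is running one
# test VM. Under a model where A's limit caps the whole tree, A has
# headroom for only 16 more, so a request for 19 is denied.
limits = {"A": 20, "B": 10, "C": 10, "D": 10}
usage = {"A": 1, "B": 1, "C": 1, "D": 1}

tree_usage = sum(usage.values())     # 4 VMs across the tree
headroom = limits["A"] - tree_usage  # 20 - 4 = 16

print(headroom)                      # 16
assert tree_usage + 19 > limits["A"]  # A's request for 19 VMs must fail
```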
15:11 * lbragstad assumes the interface for using oslo.limit is enforce(project_id, project_usage)
15:12 <johnthetubaguy> that is the bit I don't like
15:12 <lbragstad> i'm not sure i like it either, but i'm curious to hear why you don't
15:12 <johnthetubaguy> so my assumption was based on the new nova code, which is terrible I know
15:13 <johnthetubaguy> in that world we have a callback to the project:
15:13 <johnthetubaguy> count_usage(resource_type, project_id)
15:13 <johnthetubaguy> so when you call enforce(project_id, project_usage)... you get the idea
15:14 <lbragstad> right - oslo.limit gets the tree A, B, C, and D from keystone
15:14 <lbragstad> then uses the callback to get all the usage for that tree
15:14 <johnthetubaguy> right, so the limit on A really means the sum of A, B, C and D usage
15:14 <johnthetubaguy> and any check on a leaf means check the parent for any limit
15:15 <johnthetubaguy> or something a bit like that
15:15 <lbragstad> ok - i was operating yesterday under the assumption there wouldn't be a callback
15:15 <johnthetubaguy> now... this could be expensive
15:15 <lbragstad> ^ that's why i was operating under that assumption :)
15:15 <johnthetubaguy> when there are lots of resources owned by a VO, but then it's expensive when you are a really big project too
15:15 <lbragstad> i didn't think we'd actually get anyone to bite on that
15:15 <johnthetubaguy> so it's one extra thing that is nice about the two-level limit
15:16 <lbragstad> are you saying you wouldn't need the callback with a two-level limit?
15:16 <johnthetubaguy> I think you need the callback still, in some form
15:16 <lbragstad> ok - i agree
15:16 <johnthetubaguy> it's just you know it's not going to be too expensive, as there are only two levels of hierarchy
15:18 <lbragstad> you could still have a parent with hundreds of children
15:18 <lbragstad> i suppose
15:18 <johnthetubaguy> you could, and it would suck
15:19 <lbragstad> ok - so i think we hit an important point
15:19 <johnthetubaguy> now we don't have any real use cases that map to the one that is efficient to implement
15:19 <johnthetubaguy> we can invent some, for sure, but they feel odd
15:20 <lbragstad> if you assume oslo.limit accepts a callback for getting usage of something per project, then you can technically create project D
15:20 <johnthetubaguy> so even in the other model, what if project A has already used all 20?
15:20 <lbragstad> if you can't assume that about oslo.limit, then creating project D results in a possible violation of the limit, because the library doesn't have a way to get all the usage it needs to make a decision with confidence
15:21 <johnthetubaguy> even though the children only sum to 20, it doesn't help you
15:21 <lbragstad> i think that needs to be a characteristic of the model
15:22 <lbragstad> if A.limit == SUM(B.limit, C.limit)
15:22 <lbragstad> then you've decided to push all resource usage to the leaves of the tree
15:22 <lbragstad> which we can figure out in oslo.limit
15:22 <lbragstad> so when you go to allocate 2 cores to A, we can say "no"
15:22 <johnthetubaguy> well, I think what you are looking for is where project A usage counts any resource limits in the children, before counting its own resources
15:23 <johnthetubaguy> so A, B, C, (20, 10, 10) means project A can't create anything
15:23 <lbragstad> i guess i was assuming that A.usage = 0
15:24 <johnthetubaguy> this is where the three-level gets nuts, as you get usages all over the three levels, etc
15:24 <johnthetubaguy> I think we can't assume A has usage 0; generally it has all the shared resources
15:24 <johnthetubaguy> or the stuff that was created before someone created sub-projects
15:25 <johnthetubaguy> same difference I guess
15:25 <johnthetubaguy> anyways, I think that is why I like us going down the specific two-level virtual organisation case; it makes all the decisions for us, it's quite specific and real
15:25 <ayoung> anyone else feel like a lot of these problems come down to "we know when to do the action, but we have no event on which to undo it"
15:26 <openstackgerrit> XiaojueGuan proposed openstack/keystoneauth master: Trivial: Update pypi url to new url  https://review.openstack.org/565418
15:27 <lbragstad> johnthetubaguy: ok - so what do we need to accomplish this? the callback?
johnthetubaguylbragstad: this code may help: https://github.com/openstack/nova/blob/c15a0139af08fd33d94a6ca48aa68c06deadbfca/nova/quota.py#L17215:28
johnthetubaguylbragstad: you could get the usage for each project as a callback?15:28
lbragstadwell - somewhere in the service code (e.g. nova in this case), I'm envisioning15:28
lbragstadlimit.enforce(project_id, usage_callback)15:29
lbragstadoslo.limit gets the limit tree15:29
lbragstadthen uses the callback to collect usage for each node in the tree15:29
lbragstadand compares that to the request being made to return a "yes" or "no" to nova15:30
lbragstadsorry - limit.enforce(resource_name, project_id, usage_callback)15:30
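The enforce flow being described could look roughly like this. Everything here is an assumption for illustration: the tree is stubbed as dicts, and the function, exception, and callback names are hypothetical, not the real oslo.limit interface.

```python
# Rough sketch: the library fetches the limit tree, uses the service-supplied
# callback to collect usage for each node, and compares total usage plus the
# requested delta against the project's limit.

LIMITS = {"project-a": 20, "project-b": 10, "project-c": 10}
CHILDREN = {"project-a": ["project-b", "project-c"]}

class ProjectOverLimit(Exception):
    pass

def enforce(resource_name, project_id, delta, usage_callback):
    """Raise ProjectOverLimit if granting `delta` units would exceed the limit."""
    nodes = [project_id] + CHILDREN.get(project_id, [])
    usage = sum(usage_callback(resource_name, node) for node in nodes)
    if usage + delta > LIMITS[project_id]:
        raise ProjectOverLimit(resource_name)

# The service (e.g. nova) supplies the counting callback:
def count_cores(resource_name, project_id):
    return {"project-b": 10, "project-c": 10}.get(project_id, 0)
```

With the children already at their limits, `enforce("cores", "project-a", 2, count_cores)` would say "no".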
johnthetubaguylooking for a good line in Nova to help15:30
johnthetubaguylbragstad: https://github.com/openstack/nova/blob/644ac5ec37903b0a08891cc403c8b3b63fc2a91c/nova/compute/api.py#L29315:31
lbragstadpart of that feels like it would belong in oslo.limit15:32
johnthetubaguylbragstad: my bad, here we are, this moves to an oslo.limit call, where oslo.limit has been setup with some registered callback, a bit like CONF reload callback: https://github.com/openstack/nova/blob/3800cf6ae2a1370882f39e6880b7df4ec93f4b93/nova/api/openstack/compute/server_groups.py#L15715:34
lbragstadand it would look like limit.enforce('injected_files', 'e33f5fbbe4a4407292e7ccef49ebac0c', api._get_injected_file_usage_for_project)15:34
johnthetubaguynote the check quota, create thing, recheck quota model15:34
johnthetubaguysorry, the delta check is better, check we can increase by one, etc, etc.15:35
lbragstadso - objects.Quotas.check_deltas() would be replaced by a call to oslo.limit15:35
lbragstadi suppose the oslo.limit library is going to need to know the usage requested, too15:36
johnthetubaguymore complicated example, to note how to make it more efficient: https://github.com/openstack/nova/blob/ee1d4e7bb5fbc358ed83c40842b4c08524b5fcfb/nova/compute/utils.py#L84215:36
johnthetubaguythats the delta bit in the above two examples15:36
johnthetubaguythe other case is a bit odd... its a more static limit, there are a few different cases15:37
lbragstadso limit.enforce('cores', 'e33f5fbbe4a4407292e7ccef49ebac0c', 2, api._get_injected_file_usage_for_project)15:37
johnthetubaguyso it needs to be a dict15:37
johnthetubaguyotherwise it really sucks15:38
johnthetubaguyI mean oslo.limits wouldn't check the user quotas of course, they are deleted15:38
lbragstaddoesn't the max count stuff come from keystone now though?15:38
lbragstaddon't we only need to know the resource name, requested units, project id, and usage callback?15:39
johnthetubaguyso you would want an oslo limits API to check the max limits I suspect, but that could be overkill15:40
johnthetubaguyanyways, focus on the instance case, yes that is what you need15:41
lbragstadhow is a max limit different from a project limit or registered limit?15:41
*** itlinux has joined #openstack-keystone15:42
johnthetubaguythe count of existing resources would always return zero I think15:42
johnthetubaguyits like, how many tags can you have on an instance, its a per project setting15:42
johnthetubaguybut the count is not per project, its per instance15:43
johnthetubaguyI messed that up...15:43
johnthetubaguytags on an instance is a good example15:43
johnthetubaguyto add a new tag15:43
johnthetubaguycheck per project limit15:43
johnthetubaguybut you count the instance you are setting it on, not loads of resources owned by that project15:43
openstackgerritXiaojueGuan proposed openstack/keystone-specs master: Trivial: Update pypi url to new url  https://review.openstack.org/56541115:44
johnthetubaguyso usually we don't send a delta, we just check the actual number15:44
johnthetubaguybut we can probably ignore all that for now15:44
johnthetubaguyfocus on the basic case first15:44
openstackgerritXiaojueGuan proposed openstack/keystone-specs master: Trivial: Update pypi url to new url  https://review.openstack.org/56541115:44
lbragstadi think i agree15:44
johnthetubaguycore, instances, ram when doing create instance15:44
* lbragstad has smoke rolling out his ears15:44
johnthetubaguydid you get how we check quotas twice for every operation15:45
lbragstadyeah - i think we could expose that both ways with oslo.limit15:45
lbragstadand nova would just iterate the creation call and set each requested usage to 1?15:45
johnthetubaguylbragstad: based on my experience that is a good sign, it means you understand a good chunk of the problem now :)15:45
lbragstadto maintain the check for a race condition, right?15:45
johnthetubaguyso the second call you would just set the deltas to zero I think, I should check how we do that15:46
*** r-daneel has joined #openstack-keystone15:47
johnthetubaguyyeah, this is the second check: https://github.com/openstack/nova/blob/3800cf6ae2a1370882f39e6880b7df4ec93f4b93/nova/api/openstack/compute/server_groups.py#L18015:47
lbragstadthat feels like it should be two different functions15:47
johnthetubaguydelta of zero15:47
johnthetubaguyI mean it could be15:47
lbragstadthe first call you want it to tell you if you can create the thing15:48
lbragstadthe second call happens after you've created said thing15:48
lbragstadand the purpose of the second call is to check that someone didn't put us over quota in that time frame, right?15:48
lbragstadand if they did, we need to clean up the instance15:49
johnthetubaguyit is a very poor implementation of this: https://en.wikipedia.org/wiki/Optimistic_concurrency_control15:49
lbragstadso the first call happens in the "begin"15:50
lbragstadand the second is done in the "validate"15:50
lbragstadok - i think that makes sense15:51
johnthetubaguyalthough its all a bit fast and loose, kill the API and you will be left over quota, possibly15:51
johnthetubaguyit all assumes some kind of rate limiting is in place to limit damage around the edge cases15:51
lbragstadright- because you didn't have a chance to process the roll back15:51
*** r-daneel has quit IRC15:51
johnthetubaguywe don't hide the writes from other transactions, basically15:52
johnthetubaguywell, don't hide the writes until commit happens15:52
johnthetubaguyits a bit more optimistic than optimistic concurrency control, its more concurrency damage limitation15:53
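The check/create/recheck pattern discussed above, sketched with a toy in-memory quota (hypothetical names, loosely mirroring nova's two-check model): check with the requested delta, create the resource, recheck with a delta of zero, and clean up if a racing request pushed usage over the limit.

```python
# Toy illustration of the loose optimistic-concurrency approach: the first
# check gates the create, the second check (delta of zero) catches races,
# and a failed recheck rolls the resource back.

instances = []
LIMIT = 2

def check_deltas(delta):
    if len(instances) + delta > LIMIT:
        raise RuntimeError("over quota")

def create_instance(name):
    check_deltas(1)              # first check: room for one more?
    instances.append(name)       # create the resource
    try:
        check_deltas(0)          # recheck: did a concurrent request overshoot?
    except RuntimeError:
        instances.remove(name)   # clean up and surface the failure
        raise
```

As noted in the discussion, killing the API between the create and the recheck can still leave you over quota; this only limits the damage.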
* lbragstad nods15:53
johnthetubaguyOk, did we both burn out yet? I think I am close15:53
johnthetubaguyI think you understand where I was thinking with all this though...?15:54
johnthetubaguydoes that make more sense to you now?15:54
lbragstadi think so15:54
* johnthetubaguy waves arms about15:54
lbragstadi think i can rework the model15:54
lbragstadbecause things change a lot with the callback bit15:54
johnthetubaguyI don't think its too terrible to implement15:55
lbragstadyeah - i just need to draw it all out15:55
lbragstadbut that callback lets us "over-commit" limits but maintain usage15:55
johnthetubaguyI think it means we can implement any enforcement model in the future (if we don't care about performance)15:55
lbragstadand that's the important bit you want regarding the VO/15:56
openstackgerritMerged openstack/keystone master: Fix the outdated URL  https://review.openstack.org/56471415:56
lbragstadok - so to recap15:56
lbragstad1.) make sure we include a statement about only supporting two-level hierarchies15:56
lbragstad2.) include the callback detail since that's important for usage calculation15:56
*** empty_cup has joined #openstack-keystone15:57
lbragstadis that it?15:57
*** r-daneel has joined #openstack-keystone15:57
johnthetubaguyI guess what does the callback look like15:57
*** gyee has joined #openstack-keystone15:57
johnthetubaguylist of resources to count for a given project_id?15:57
johnthetubaguyor a list of project_ids and a list of resources to count for each project?15:58
johnthetubaguymaybe it doesn't matter, so simpler is better15:58
lbragstadi would say it's a method that returns an int that represents the usage15:58
lbragstadit can accept a project id15:58
lbragstadbecause it will depend on the structure of the tree15:59
lbragstadand just let oslo.limit iterate the tree and call usage15:59
lbragstadcall for usage*15:59
empty_cupi received a 401, and looked at the keystone log and see 3 lines: mfa rules not processed for user; user has no access to project; request requires authentication16:00
johnthetubaguylbragstad: yeah, I think we will need it to request multiple resources in a single go16:00
wxy|johnthetubaguy: I don't oppose two-level hierarchies. Actually I like it and I think it is simple enough for the original proposal.16:01
johnthetubaguylbragstad: its a stupid reason, think about nova checking three types of resources, we don't want to fetch the instances three times16:01
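One possible shape for the batched callback being asked for here, counting several resource types from a single fetch so nova doesn't pull the instance list three times (all names and the stand-in data are hypothetical):

```python
# Hypothetical batched callback: given a project id and a list of resource
# names, return a dict of counts computed from one pass over the fetched rows.

def count_usage(project_id, resource_names):
    # Stand-in for a single query against the service's database.
    fetched = [
        {"project": "project-a", "cores": 4, "ram": 2048},
        {"project": "project-a", "cores": 2, "ram": 1024},
    ]
    rows = [r for r in fetched if r["project"] == project_id]
    return {name: sum(r.get(name, 0) for r in rows) for name in resource_names}

print(count_usage("project-a", ["cores", "ram"]))  # {'cores': 6, 'ram': 3072}
```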
johnthetubaguywxy|: cool, for the record I don't oppose multi-level, its just that feels like a next step, and I want to see us make little steps in the right direction16:02
*** panbalag has joined #openstack-keystone16:02
lbragstadthis is a really hard problem...16:02
johnthetubaguyyeah, everytime I think its easy I find some new horrible hole!16:03
johnthetubaguywould love to get melwitt to take a peek at this stuff, given all the Nova quota bits she has been doing, she has way more detailed knowledge than me16:04
lbragstadi can work on documenting everything16:05
lbragstadwhich should hopefully make it easier16:05
lbragstadinstead of having to read a mile of scrollback16:05
johnthetubaguyawesome, do ping me again for a review when it's ready, I will try to make time for that, almost all our customers will need this eventually, well most of them anyways!16:06
johnthetubaguyonce the federation is working (like CERN have already) they will hit this need16:06
*** mchlumsky has joined #openstack-keystone16:07
lbragstadjohnthetubaguy: ++16:07
johnthetubaguyOK, so I should probably call it a day, my brain is mashed and its just past 5pm16:08
lbragstadjohnthetubaguy: sounds like a plan, thanks for the help!16:08
johnthetubaguynow is not the time for writing ansible16:08
johnthetubaguyno problem, happy I could help16:08
wxy|johnthetubaguy: yeah. The only concern I have is about the implementation. In your spec, enabling the hierarchy is controlled by the API with "include_all_children". It means that in a keystone deployment, some projects may be hierarchical and some not. It's a little messy for a Keystone system IMO. How about the limit model driver way? That way all projects' behavior will be the same at the same time.16:09
wxy|johnthetubaguy: some way like this: https://review.openstack.org/#/c/557696/3 Of course it can be changed to two-level easily.16:13
*** panbalag has left #openstack-keystone16:39
*** wxy| has quit IRC16:39
* ayoung moves Keystone meeting one hour earlier on his schedule16:54
kmallocayoung: set the meeting time in UTC/or reykjavik timezone (if your calendar doesn't have UTC)16:57
kmallocayoung: means you won't have to "move" it explicitly :)16:58
kmallocthe latter use of reykjavik was a workaround for google calendar (not sure if that has been fixed)16:58
*** spilla has quit IRC16:58
lbragstadthe ical on eavesdrop actually has the meeting in UTC17:02
lbragstad#startmeeting keystone-office-hours17:02
openstacklbragstad: Error: Can't start another meeting, one is in progress.  Use #endmeeting first.17:02
*** openstack changes topic to "Rocky release schedule: https://releases.openstack.org/rocky/schedule.html | Meeting agenda: https://etherpad.openstack.org/p/keystone-weekly-meeting | Bugs that need triaging: http://bit.ly/2iJuN1h | Trello: https://trello.com/b/wmyzbFq5/keystone-rocky-roadmap"17:02
openstackMeeting ended Tue May  1 17:02:43 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)17:02
openstackMinutes:        http://eavesdrop.openstack.org/meetings/keystone_office_hours/2018/keystone_office_hours.2018-04-24-17.56.html17:02
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/keystone_office_hours/2018/keystone_office_hours.2018-04-24-17.56.txt17:02
openstackLog:            http://eavesdrop.openstack.org/meetings/keystone_office_hours/2018/keystone_office_hours.2018-04-24-17.56.log.html17:02
lbragstad#startmeeting keystone-office-hours17:02
openstackMeeting started Tue May  1 17:02:52 2018 UTC and is due to finish in 60 minutes.  The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot.17:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.17:02
*** openstack changes topic to " (Meeting topic: keystone-office-hours)"17:02
*** ChanServ changes topic to "Rocky release schedule: https://releases.openstack.org/rocky/schedule.html | Meeting agenda: https://etherpad.openstack.org/p/keystone-weekly-meeting | Bugs that need triaging: http://bit.ly/2iJuN1h | Trello: https://trello.com/b/wmyzbFq5/keystone-rocky-roadmap"17:02
openstackThe meeting name has been set to 'keystone_office_hours'17:02
ayoungkmalloc, UTC 12:00 right17:03
lbragstadthat's the second time the meeting bot has failed to stop office hours17:03
ayoungWhat is the goal of office hours anyway?  Answering questions from community members?17:04
lbragstadyeah, that, closing bugs, working on specs17:04
lbragstadit's really just time set aside each week to ensure other people are around17:05
*** r-daneel_ has joined #openstack-keystone17:05
*** r-daneel has quit IRC17:07
*** r-daneel has joined #openstack-keystone17:08
*** r-daneel_ has quit IRC17:10
ayounglbragstad, so...what do you think of the idea of optional time-outs on group assignments?17:16
ayoungA user comes in via Federation, gets a group assignment and it will last for a configurable amount of time, set in the mapping17:16
ayoungonce the time is up, the group assignment is no longer valid, and role assignments based on that group are also kaput17:17
ayounglog in again, bump the time forward.17:17
ayoungan admin can come in and set the time out to "never" to make it permanent.17:17
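The idea in the lines above can be sketched as (hypothetical helpers, not keystone code): each federated group assignment records when it was granted plus an optional timeout, logging in again bumps the grant time forward, and a timeout of None means the admin made it permanent.

```python
import time

def assignment_valid(granted_at, timeout, now=None):
    """A group assignment is valid until `timeout` seconds have passed;
    timeout=None means the assignment never expires."""
    if timeout is None:
        return True
    now = time.time() if now is None else now
    return now - granted_at < timeout

def refresh(assignment):
    """Logging in again bumps the grant time forward."""
    assignment["granted_at"] = time.time()
```

Role assignments derived from the group would be treated as invalid whenever `assignment_valid` returns False.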
lbragstadi need to think about that a bit more17:18
lbragstadit's interesting though17:18
ayounglbragstad, do we have an appropriate forum at the summit for that?17:18
lbragstadnot really, we have one for unified limits, default roles, and operator feedback17:19
kmallocayoung: yeah UTC (-0 offset)17:20
knikollathere is a forum session on federation stuff, but it will probably be extremely high level17:23
*** itlinux has quit IRC17:28
*** itlinux has joined #openstack-keystone17:41
*** pcichy has quit IRC17:44
*** pcichy has joined #openstack-keystone17:51
*** pcichy has quit IRC17:52
*** pcichy has joined #openstack-keystone17:53
ayoungknikolla, it might be worth discussing there, as I think this is one of the critical topics17:56
bretonayoung: that's nice17:59
bretonayoung: re: timeout for groups17:59
ayoungbreton, you like the idea?17:59
ayoungWhat about it appeals to you?17:59
bretonayoung: when we were talking about adding users to groups, users being in the group forever was the biggest concern. And i see you bumped https://review.openstack.org/#/c/415545/ already.18:02
ayoungbreton, yeah, it was a simple bump, but that was before the Time-out discussion happened18:02
ayoungI think I like the idea of timeouts for everything.  Optional, but the norm for Federation18:03
bretontimeout equal to token ttl maybe?18:04
bretoni wonder if something like ?allow_expired will be needed18:05
*** spilla has joined #openstack-keystone18:09
openstackgerritMerged openstack/keystone master: Add configuration option for enforcement models  https://review.openstack.org/56271318:11
ayoungtoken TTL should be much shorter18:12
ayoungbreton, think trusts.18:12
ayoungbreton, allowed_expired would be way too long a time for these calls.18:13
*** pcichy has quit IRC18:18
*** pcichy has joined #openstack-keystone18:18
*** sonuk has joined #openstack-keystone18:29
*** sonuk has quit IRC18:36
*** felipemonteiro__ has quit IRC18:57
*** r-daneel has quit IRC19:00
*** r-daneel_ has joined #openstack-keystone19:00
*** ayoung has quit IRC19:02
*** r-daneel_ is now known as r-daneel19:02
*** raildo has quit IRC19:44
mordredlbragstad, cmurphy: if you are bored and want an easy +3 ... https://review.openstack.org/#/c/564495/19:46
*** mchlumsky has quit IRC20:12
*** itlinux has quit IRC20:39
*** itlinux has joined #openstack-keystone20:41
*** ayoung has joined #openstack-keystone20:55
openstackgerritLance Bragstad proposed openstack/keystone-specs master: Add scenarios to strict hierarchy enforcement model  https://review.openstack.org/56541220:59
lbragstadjohnthetubaguy: yankcrime ^21:01
lbragstadi think i got most of what we talked about out of my head and on paper21:01
lbragstadthe main bits are the "model behaviors" section and the "enforcement diagrams"21:02
*** openstack changes topic to "Rocky release schedule: https://releases.openstack.org/rocky/schedule.html | Meeting agenda: https://etherpad.openstack.org/p/keystone-weekly-meeting | Bugs that need triaging: http://bit.ly/2iJuN1h | Trello: https://trello.com/b/wmyzbFq5/keystone-rocky-roadmap"21:03
openstackMeeting ended Tue May  1 21:03:02 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)21:03
openstackMinutes:        http://eavesdrop.openstack.org/meetings/keystone_office_hours/2018/keystone_office_hours.2018-05-01-17.02.html21:03
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/keystone_office_hours/2018/keystone_office_hours.2018-05-01-17.02.txt21:03
openstackLog:            http://eavesdrop.openstack.org/meetings/keystone_office_hours/2018/keystone_office_hours.2018-05-01-17.02.log.html21:03
*** dklyle has joined #openstack-keystone21:14
*** martinus__ has quit IRC21:15
*** spilla has quit IRC21:26
*** felipemonteiro__ has joined #openstack-keystone21:30
*** felipemonteiro_ has joined #openstack-keystone21:33
*** itlinux has quit IRC21:34
*** felipemonteiro__ has quit IRC21:36
*** idlemind has joined #openstack-keystone21:41
*** edmondsw has quit IRC21:48
*** pcichy has quit IRC21:49
*** pcichy has joined #openstack-keystone21:50
*** pcichy has quit IRC21:54
openstackgerritMerged openstack/keystoneauth master: Allow tuples and sets in interface list  https://review.openstack.org/56449521:58
*** jmlowe has quit IRC22:13
*** dklyle has quit IRC22:21
*** lbragstad has quit IRC22:22
*** dklyle has joined #openstack-keystone22:26
*** rcernin has joined #openstack-keystone22:36
*** jmlowe has joined #openstack-keystone22:42
*** lbragstad has joined #openstack-keystone22:46
*** ChanServ sets mode: +o lbragstad22:46
*** felipemonteiro_ has quit IRC22:58
*** r-daneel has quit IRC23:23
*** itlinux has joined #openstack-keystone23:34
*** itlinux has quit IRC23:39
*** r-daneel has joined #openstack-keystone23:50
*** dklyle has quit IRC23:51
*** dklyle has joined #openstack-keystone23:57

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!