16:00:09 <lbragstad> #startmeeting keystone
16:00:10 <openstack> Meeting started Tue Dec 18 16:00:09 2018 UTC and is due to finish in 60 minutes.  The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:12 <lbragstad> #link https://etherpad.openstack.org/p/keystone-weekly-meeting
16:00:14 <lbragstad> agenda ^
16:00:15 <openstack> The meeting name has been set to 'keystone'
16:00:16 <lbragstad> o/
16:00:21 <hrybacki> o/
16:00:22 <ileixe> o/
16:00:23 <vishakha> o/
16:00:27 <gagehugo> o/
16:00:45 <lbragstad> wow - better attendance than i was expecting :)
16:01:27 <cmurphy> o/
16:02:08 <wxy|> o/
16:02:11 <lbragstad> ok - cool
16:02:21 <lbragstad> we have quite a bit on the agenda today - so we'll go ahead and get started
16:02:38 <lbragstad> #topic Upcoming Meetings/Holidays
16:02:50 <lbragstad> the next two tuesdays fall on holidays
16:03:16 <lbragstad> so i'm not expecting to hold meetings unless folks *really* want to have one while celebrating
16:03:55 <lbragstad> otherwise - we'll just pick things back up on January 8th
16:04:10 <lbragstad> i'll send a note after the meeting with a reminder to the openstack-discuss mailing list
16:04:26 <lbragstad> #topic Oslo Releases
16:04:39 <lbragstad> kind of related to the holiday schedule
16:04:52 <lbragstad> bnemec sent a note yesterday about oslo releases
16:04:52 <lbragstad> #link http://lists.openstack.org/pipermail/openstack-discuss/2018-December/001047.html
16:05:15 <bnemec> I'm just about to propose the releases for this week.
16:05:22 <lbragstad> this is just a reminder that if anyone needs anything from an oslo library for the next few weeks, we'll have to do it soon
16:05:44 <knikolla> o/
16:05:44 <lbragstad> there isn't anything on my radar
16:05:54 <bnemec> Holding off on privsep because it makes a significant change and I don't want to deal with it over the holidays, but I don't think that will affect keystone.
16:06:04 <lbragstad> ack
16:06:09 * knikolla having a headache, but i'll lurk around.
16:06:30 <lbragstad> yeah - we don't use privsep i don't think
16:06:53 <wxy|> bnemec: does oslo have something like a feature freeze time? I wonder if we can have an oslo.limit 1.0 release in Stein.
16:07:28 <bnemec> wxy|: We do have feature freeze, and it's a bit earlier than the OpenStack-wide feature freeze.
16:07:31 <bnemec> Let me find the details.
16:07:46 <lbragstad> related to ^ - i pinged jaypipes and johnthetubaguy a few days ago about syncing up on that work
16:08:29 <wxy|> bnemec: Thanks, I'll keep an eye on the deadline.
16:08:32 <wxy|> lbragstad: ++
16:08:36 <lbragstad> prior to berlin, there was a bunch of good discussion on the interface between nova and oslo.limit, but i don't think it has moved since then
16:08:43 <bnemec> For Rocky, Oslo's feature freeze actually coincided with Keystone's: https://releases.openstack.org/rocky/schedule.html
16:09:19 <bnemec> Which reminds me I probably need to get that on the Stein schedule.
16:09:25 <bnemec> Full details are here: http://specs.openstack.org/openstack/oslo-specs/specs/policy/feature-freeze.html
16:09:27 <lbragstad> if we ask again, we might not get a response this close to the holidays, but it might be worth putting together an action item for the beginning of January to follow up with the nova team on that stuff
16:09:59 <wxy|> lbragstad: it's good to have.
16:10:07 <lbragstad> wxy| want to take that one with me?
16:10:26 <wxy|> lbragstad: sure.
16:10:34 <lbragstad> #action lbragstad and wxy| to follow up with nova after the holidays about movement on oslo.limit + nova integration
16:10:38 <lbragstad> cool
16:10:44 <lbragstad> anything else oslo library related?
16:11:00 <wxy|> no, thanks
16:11:07 <lbragstad> thanks wxy|
16:11:11 <lbragstad> #topic Previous Action Items
16:11:37 <lbragstad> i think the only previous action item we had was to get a spec up for protecting the admin role from being deleted
16:11:40 <lbragstad> which cmurphy has done
16:11:45 <lbragstad> #link https://review.openstack.org/#/c/624692/
16:11:55 <lbragstad> up for review if you're interested in taking a look ^
16:12:11 <lbragstad> #topic Reviews
16:12:18 <lbragstad> does anyone have reviews that need eyes?
16:12:31 <lbragstad> or anything in review they want to call attention to specifically?
16:12:47 <cmurphy> https://review.openstack.org/624972
16:13:07 <lbragstad> that's the last bit of all the docs work, right?
16:13:25 <cmurphy> all of the admin guide consolidation/reorg yes
16:13:31 <cmurphy> i'm still working on the federation guide
16:13:34 <cmurphy> also interested in people's thoughts on https://review.openstack.org/623928 and the related bug report
16:13:44 <lbragstad> awesome - thanks for picking up the remaining consolidation bits cmurphy
16:14:36 <lbragstad> i'll take a look at 623928 today
16:15:14 <lbragstad> any other reviews people want to bring up?
16:16:06 <lbragstad> ok - moving on
16:16:14 <lbragstad> #topic System scope upgrade cases
16:16:36 <lbragstad> cmurphy and i have been going through the system scope changes for the projects API
16:16:42 <lbragstad> and it got me thinking about another case
16:16:43 <lbragstad> #link https://review.openstack.org/#/c/625732/
16:17:05 <lbragstad> i wanted to bring this to the rest of the group to walk through the upgrade, just so we're all on the same page
16:17:38 <lbragstad> ^ that review is specific to groups (not projects), but it's applicable
16:17:50 <lbragstad> if you look at #link https://review.openstack.org/#/c/625732/1/keystone/common/policies/group.py
16:18:14 <lbragstad> you can see that I'm deprecating the previous policies and implementing the system reader role as the default
16:18:41 <lbragstad> but... that only happens if a deployment sets ``keystone.conf [oslo_policy] enforce_scope=True`` and it's False by default
16:20:17 <lbragstad> for example the policy for get_group would be '(rule:admin_required or role:reader)'
16:20:40 <lbragstad> since deprecated policies are handled gracefully by oslo.policy in order to help with upgrade
16:21:17 <lbragstad> so - if enforce_scope=False (the default), the get_group policy would be accessible by something with the `reader` role on a project
16:22:21 <cmurphy> what exactly happens when a policy is deprecated? if the operator hasn't changed any defaults and policy is in code, does the new check string take effect or the old check string?
16:22:36 <lbragstad> good question
16:22:39 <lbragstad> they are OR'd
16:23:07 <lbragstad> for example, the current policy for get_group is rule:admin_required
16:23:31 <lbragstad> and if the new policy ends up being `role:reader`, it will be OR'd with the deprecated policy.
16:24:13 <lbragstad> this allows operators a window of time to assign users roles for the new default, or make adjustments so that they can either 1. consume the new default or 2. copy/paste the old policy and maintain it as an override
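[editor's note: a minimal sketch of the deprecation pattern being described, based on oslo.policy's DeprecatedRule support as keystone uses it; the names and check strings below are illustrative rather than copied from the patch under review]

    from oslo_policy import policy

    # the old default, kept around so existing deployments keep working
    deprecated_get_group = policy.DeprecatedRule(
        name='identity:get_group',
        check_str='rule:admin_required',
    )

    group_policies = [
        policy.DocumentedRuleDefault(
            name='identity:get_group',
            # the new default: the reader role on a system-scoped token
            check_str='role:reader and system_scope:all',
            scope_types=['system'],
            description='Show group details.',
            operations=[{'path': '/v3/groups/{group_id}', 'method': 'GET'}],
            deprecated_rule=deprecated_get_group,
            deprecated_reason='Group APIs now support system scope.',
            deprecated_since='S',
        ),
    ]

    # until the operator overrides the rule or the deprecated policy is
    # removed, oslo.policy enforces the two check strings OR'd together:
    #   (role:reader and system_scope:all) or rule:admin_required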
16:25:39 <cmurphy> so both policies will be allowed - so it's essentially more permissive while it's being deprecated?
16:26:04 <lbragstad> with that specific example, it is
16:26:20 <lbragstad> but... the new policy could be something like `role:reader AND system_scope:all`
16:27:05 <lbragstad> which wouldn't allow someone with the reader role on a project to access the get_group API
16:28:07 <lbragstad> i'm not a huge fan of encoding scope checks into check strings...
16:28:38 <lbragstad> and it's redundant with scope_types... but after thinking about this for a week or so... i'm not sure there is another way to roll out new policies in a backwards-compatible way?
16:28:56 <lbragstad> at least while we have enforce_scope=False by default
16:29:10 <lbragstad> if enforce_scope=True, then `role:reader` alone would be a bit safer
16:29:27 <cmurphy> i'm not sure either
16:30:12 <lbragstad> so far, the best answer i have (which may not be the best) is...
16:31:05 <lbragstad> 1. deprecate the old policies 2. the new policies have the scope check in the check string :( 3. when we go to remove the old deprecated policies in Train we can clean up the policies to remove the scope checks from the check string
16:31:58 <bnemec> Do they need to be OR'd? In general I would expect the new rule to just take effect if the operator hasn't overridden the old one.
16:32:10 <lbragstad> step 3 would also include a change for keystone to set ``keystone.conf [oslo_policy] enforce_scope=True``
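[editor's note: the step-3 flip lives in oslo.policy's config section; a minimal sketch of the keystone.conf change being discussed]

    [oslo_policy]
    # when true, oslo.policy raises InvalidScope when a token's scope does
    # not match a rule's scope_types, instead of just logging a warning
    enforce_scope = True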
16:32:36 <lbragstad> bnemec good question
16:33:50 <lbragstad> the reason why we OR'd them is that if the new rule is less permissive, we want to make sure operators have time to adjust role assignments accordingly so that users can continue to access that API
16:34:25 <lbragstad> otherwise, it would be possible for operators to break users on upgrade if the new, more restrictive rule is used exclusively
16:34:38 <bnemec> How will they know they need to change it though? Is there a warning if it only passes the old, less restrictive rule?
16:34:58 * lbragstad grabs a link
16:36:00 <lbragstad> http://git.openstack.org/cgit/openstack/oslo.policy/tree/oslo_policy/policy.py#n678
16:36:13 <lbragstad> we run that code when we load rules in oslo.policy
16:37:33 <bnemec> Yeah, I guess that tells them it will change, but it doesn't necessarily tell them whether that's a problem.
16:38:02 <lbragstad> yeah - that gets tough since it depends on how they have roles set up?
16:38:06 <bnemec> I guess they would test by explicitly setting the new policy so the deprecated one isn't OR'd in and see if anything breaks.
16:38:19 <lbragstad> yes - exactly...
16:38:38 <lbragstad> which is how i've had to write some of the new keystone protection tests
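[editor's note: one way to run that test, as bnemec suggests: explicitly override the rule in policy.yaml, since oslo.policy only ORs in the deprecated check string for rules the operator has not overridden; a hypothetical override]

    # /etc/keystone/policy.yaml
    "identity:get_group": "role:reader and system_scope:all"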
16:39:35 <bnemec> Yeah, it would be nice if we could be smarter with the warnings, but that would make the logic even more complicated and I already have a hard enough time following it. :-)
16:39:51 <lbragstad> =/
16:40:10 <lbragstad> there certainly isn't a shortage of edge cases here
16:40:27 <lbragstad> if people want to discuss this though, we can take it to office hours, too
16:41:32 <lbragstad> my other question was about the organization of #link https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:implement-default-roles
16:41:33 <bnemec> Sounds good. I don't want to hold up the meeting any more than I already have.
16:41:49 <vishakha> lbragstad: I will soon update my patches for system scope too.
16:42:18 <lbragstad> vishakha awesome - that's another reason why i wanted to talk about this as a team, since we have multiple people doing the work
16:43:14 <wxy|> vishakha: ah, good to know, I'll review yours as well.
16:43:15 <vishakha> lbragstad: Yes will ping you for any doubts related to scopes. Thanks for the updates.
16:43:36 <lbragstad> i know we have bugs open for the majority of this work
16:43:49 <vishakha> wxy|: thanks :)
16:43:53 <lbragstad> #link https://bugs.launchpad.net/keystone/+bugs?field.tag=policy
16:44:36 <lbragstad> but - as the people who have to review this stuff... is there anything I (we) can do organizationally to manage the chaos/review queue
16:47:03 <lbragstad> or make it easier for people to review in general
16:47:20 <cmurphy> not sure there's much that can be done about sheer volume
16:47:41 <lbragstad> yeah - that's the answer i was afraid of
16:48:04 <bnemec> Maybe talk to dhellmann. He does a lot of high volume review submission.
16:48:12 <lbragstad> i wasn't sure if people wanted to team up on specific resources, or have a priority queue of some kind that applied focus to certain areas
16:48:26 <lbragstad> bnemec oh - good call
16:48:52 <cmurphy> he does but it's usually distributed across projects
16:49:25 <cmurphy> so not so much review load on one team
16:49:47 <lbragstad> i just sympathize with people looking at this and not knowing where to start - so if there is anything i can do to make that easier, i'm all ears
16:50:07 <bnemec> Yeah, but maybe he has some tricks for distributing it. I know they had a team split up the work for the python3-first stuff.
16:52:48 <lbragstad> something we can talk about after the meeting, too
16:53:22 <lbragstad> few minutes left and there are two more topics, so we can move on for now
16:53:35 <lbragstad> #topic Tokens with tag attributes
16:53:41 <lbragstad> ileixe o/
16:53:46 <ileixe> o/
16:54:28 <ileixe> It's about the RFE for returning tokens with a 'tag' attribute.
16:54:45 <ileixe> the tag comes from the project
16:55:23 <ileixe> we are using the tag in oslo.policy checks
16:55:47 <ileixe> for example, allowing get_network only for a matching tag
16:56:18 <lbragstad> so - do you have custom policy check strings that are written to check the token directly?
16:56:34 <ileixe> in credential - yes
16:56:38 <lbragstad> e.g., %(target.token.project.tag)
16:56:56 <ileixe> yes similar
16:57:52 <ileixe> I heard of system_scope for the first time here... it might work for our purpose, though I'm not sure
16:58:19 <lbragstad> do you have a more detailed example of why you need to override get_network?
16:58:36 <ileixe> We have two general scopes
16:58:40 <ileixe> 'dev' and 'prod'
16:58:48 <ileixe> every project is included in one of them
16:58:59 <ileixe> and we also have two networks, dev_net and prod_net
16:59:08 <ileixe> they are provider networks
16:59:17 <lbragstad> so 'dev' and 'prod' are not projects or domains?
16:59:22 <ileixe> yes
16:59:32 <ileixe> it's just a scheme for our in-house codebase
16:59:50 <ileixe> we want some general scope to restrict resources
16:59:56 <ileixe> and for now 'tag' is what I found for that
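[editor's note: a hypothetical check-string override illustrating the scheme ileixe describes, assuming the RFE lands and project tags become available to policy enforcement via token data; target.token.project.tag is not an attribute keystone exposes today]

    # neutron policy override (hypothetical): only tokens whose project
    # carries the 'prod' tag may show this network
    "get_network": "role:member and 'prod':%(target.token.project.tag)s"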
17:00:23 <lbragstad> sure -  are you available to discuss this after the meeting in -keystone?
17:00:34 <ileixe> yes sure
17:00:43 <lbragstad> ok - cool, meet you over there
17:00:52 <lbragstad> thanks for the time everyone
17:00:55 <lbragstad> #endmeeting