16:00:08 <cmurphy> #startmeeting keystone
16:00:09 <openstack> Meeting started Tue Apr  2 16:00:08 2019 UTC and is due to finish in 60 minutes.  The chair is cmurphy. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:13 <openstack> The meeting name has been set to 'keystone'
16:00:14 <lbragstad> o/
16:00:17 <gagehugo> o/
16:00:26 <cmurphy> good morning
16:00:28 <kmalloc> o/
16:00:34 <lbragstad> hola
16:00:38 <cmurphy> #link https://etherpad.openstack.org/p/keystone-weekly-meeting agenda
16:01:02 <vishakha> o/
16:01:28 <wxy|> o/
16:02:13 <cmurphy> let's get started
16:02:21 <cmurphy> #topic action item review
16:02:42 * cmurphy cmurphy create poll for team dinner (cmurphy, 16:31:25)
16:02:49 <cmurphy> i didn't do that yet, will do this week
16:02:58 <cmurphy> #action cmurphy create poll for team dinner
16:03:11 * cmurphy cmurphy organize PTG brainstorm into rough schedule (cmurphy, 16:31:47)
16:03:15 <cmurphy> I did do that:
16:03:29 <cmurphy> #link https://etherpad.openstack.org/p/keystone-train-ptg PTG agenda
16:04:06 <cmurphy> please look it over, feel free to comment or edit
16:05:08 <cmurphy> i thought we could frontload the cycle retrospective so that we're starting with that early and then do the cycle planning the next day after we've had some discussions
16:05:24 <lbragstad> ++
16:05:51 <cmurphy> also all of this might get moved around since we may be trying to schedule meetings with other teams
16:06:23 <cmurphy> if there are topics that you just thought of that didn't make it into the brainstorming etherpad we can still work that in too
16:07:04 <cmurphy> any thoughts/questions/concerns?
16:07:17 <vishakha> seems like a very informative PTG schedule
16:07:35 <cmurphy> :)
16:09:29 <cmurphy> okay, moving on
16:09:35 <cmurphy> #topic RC2
16:09:50 <cmurphy> #link https://etherpad.openstack.org/p/keystone-stein-rc2-tracking rc2 tracking
16:10:19 <cmurphy> we're super close, i think these last two can probably get in today and we can propose the rc2 release
16:11:18 <cmurphy> looks like just waiting for zuul mostly
16:12:30 <cmurphy> gagehugo: did you have a concern with https://review.openstack.org/#/c/622589/8/keystone/tests/unit/test_policy.py ?
16:12:46 <gagehugo> a quick question yeah
16:13:12 <lbragstad> hmmm
16:13:23 <gagehugo> wait
16:13:23 <gagehugo> nvm
16:13:49 <gagehugo> I am good
16:13:54 <cmurphy> haha okay
16:14:06 <lbragstad> gagehugo wait - what was your concern?
16:14:26 <lbragstad> https://review.openstack.org/#/c/622589/8/keystone/tests/unit/test_policy.py@250 getting popped off the expected list but it also still being in https://review.openstack.org/#/c/622589/8/etc/policy.v3cloudsample.json
16:14:40 <gagehugo> yeah, I was looking at it wrong
16:15:11 <lbragstad> how so?
16:15:22 <gagehugo> I thought it was duplicated there
16:15:26 <gagehugo> lol
16:15:36 <gagehugo> in removed_policies
16:16:03 <lbragstad> removed_policies means that it has been removed from policy.v3cloudsample.json
16:16:18 <lbragstad> so - i'm wondering why that test didn't fail locally
16:16:48 <lbragstad> because that specific policy at line 250 in test_policy hasn't been removed from policy.v3cloudsample.json (per cmurphy's comment)
16:17:14 <gagehugo> 203 was the one that you kept in right?
16:17:29 <gagehugo> diff from 2-3 ps ago
16:19:23 <lbragstad> no - i think you might be onto something
16:19:44 <lbragstad> https://review.openstack.org/#/c/622589/8/etc/policy.v3cloudsample.json@73 shows that policy is still in the file
16:19:48 <lbragstad> (as it should be)
16:20:03 <lbragstad> but https://review.openstack.org/#/c/622589/8/keystone/tests/unit/test_policy.py@250 pops the policy off the expected list of policies
16:20:24 <gagehugo> yeah, the differences there are what I was expecting, but removed has duplicates from the policy sample file
16:20:29 <lbragstad> but the diff assertion didn't fail https://review.openstack.org/#/c/622589/8/keystone/tests/unit/test_policy.py@369
16:20:31 <gagehugo> that confused me for a bit
16:20:57 <gagehugo> also I was looking at a previous ps before you put back in the get_policy so updating that helped me as well
16:22:23 <cmurphy> it does look like 'identity:create_policy_association_for_endpoint' etc shouldn't be in that list
16:22:29 <lbragstad> right...
16:22:38 <lbragstad> i wonder why that test didn't fail..
16:22:39 <cmurphy> should we un-w+1 that so we can figure it out?
16:22:42 <lbragstad> yeah
16:22:44 <lbragstad> i think so
16:22:56 <lbragstad> i'm on the struggle bus today
16:23:10 <lbragstad> i'll pick it up after the meeting
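An aside on the open question above (why the test didn't fail locally): one plausible explanation, sketched below with hypothetical names rather than keystone's actual test code, is that a one-directional diff of expected policies against the sample file passes silently, while a strict two-way (symmetric) diff would have flagged the policy that was popped from the expected list but never removed from policy.v3cloudsample.json.

```python
# Illustrative sketch only -- not keystone's real test_policy code.
# 'expected' is the test's list of policies after (incorrectly) popping
# an entry; 'actual' is what is really still in the sample policy file.

def diff_policies(expected, actual):
    """Strict two-way diff: entries present in one set but not the other."""
    return set(expected).symmetric_difference(actual)

actual = {
    'identity:get_policy',
    'identity:create_policy_association_for_endpoint',
}
expected = {'identity:get_policy'}  # after the bogus pop

# A strict diff catches the stray policy...
leftover = diff_policies(expected, actual)

# ...but a one-way check (expected minus actual) comes back empty and
# would let the mistake pass unnoticed, which could explain a green run.
one_way = set(expected) - set(actual)
```

If the assertion at test_policy.py@369 effectively only checks the one-way direction, that would account for the passing test despite the inconsistency.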
16:23:41 <cmurphy> okay, anything else to discuss re rc2?
16:23:59 <cmurphy> thanks everyone for working so hard to get everything reviewed on time
16:25:50 <cmurphy> okay, next up
16:25:58 <cmurphy> #topic meeting time
16:26:07 <cmurphy> #link https://doodle.com/poll/zxv6d2mxngmhb3vc poll results
16:26:27 <cmurphy> I was surprised at the results, i thought we would end up with something more central to APAC/Americas
16:26:42 <ayoung> I have to say that this time has worked well for me for thus far.  But I am east coast.
16:26:54 <cmurphy> but actually the top result is for the same time but on thursday
16:27:07 <cmurphy> the second best is the current time on tuesday or an hour prior on tuesday
16:27:14 <ayoung> When Jamie and Herney were on, there was nothing we could have done to make it easier for both of them.
16:27:35 <ayoung> And remember daylight saving time wreaks havoc
16:27:35 <cmurphy> knikolla: current time on tuesdays is a total no-go for you?
16:27:45 <cmurphy> yeah dst may have skewed the results
16:28:17 <kmalloc> i hope a number of states finalize the "stop the dst madness" legislation by next year
16:28:53 <cmurphy> parts of europe may be doing that soon
16:29:08 <kmalloc> yeah, it would be so good if it happened and just disappeared
16:29:25 <kmalloc> perma-savings-time is fine... perma-standard-time is fine. pick one. stick with it
16:29:44 <kmalloc> though I prefer savings time year round personally
16:30:34 <ayoung> I'm sure your preference has been noted.
16:30:43 <cmurphy> lol
16:31:19 <kmalloc> nah, i'm just lucky, my state opted for my preference.. i didn't even have to tell them :P
16:31:32 <ayoung> Thursday would probably be just as good for me as this time.
16:31:52 <cmurphy> if we're not going to change the meeting time to be more inclusive of timezones then i'd prefer to avoid the hassle and not to change it at all
16:32:07 <cmurphy> i'll check in with knikolla though, seems like he has a conflict with this time
16:34:52 <cmurphy> will check if that's a permanent or short-term conflict
16:35:05 <cmurphy> for now let's say next meeting is same time same day?
16:35:25 <lbragstad> wfm
16:35:33 <vishakha> lgtm
16:36:41 <ayoung> So... open discussion?
16:36:56 <cmurphy> yep
16:37:00 <cmurphy> #topic open discussion
16:37:13 <ayoung> I'd like to bring up multisite stuff.
16:37:36 <ayoung> I've had discussions with a couple customers, and there seems to be a real pattern of deploying lots of little openstacks
16:38:06 <ayoung> like, that seems to be the norm, and we don't really have a good way to make any sort of deployment like that work at the keystone level.
16:38:21 <ayoung> morgan kristi and I presented on some of it in Berlin
16:38:39 <ayoung> so, I'll commit to getting two changes in that I think help move in the right direction:
16:38:54 <knikolla> o/ oops, didn't look at the time, sorry.
16:39:15 <ayoung> https://review.openstack.org/#/c/605235/  Allow an explicit_domain_id parameter when creating a domain
16:39:21 <knikolla> cmurphy: current time on tuesday works for me
16:39:25 <ayoung> https://review.openstack.org/#/c/605169/ Replace UUID with id_generator for Federated users
16:39:26 <knikolla> i can make it work
16:39:55 <cmurphy> okay thanks knikolla
16:39:55 <ayoung> the first kind of sat there broken, as I had messed up the error handling, which broke tests.  I'm fixing it now
16:40:13 <ayoung> the second has been ready for a long time, but held up by an architectural discussion
16:40:31 <ayoung> I'd like to get a greement that the second one should go in as is.
16:40:46 <ayoung> Or even lots of greements. Greements are awesome
16:40:59 <cmurphy> lbragstad: can you sum up your concerns on that?
16:41:20 <lbragstad> so - my concern is that we're introducing an anti-pattern we already fixed back in mitaka
16:41:33 <lbragstad> reintroducing*
16:42:02 <lbragstad> way back in the day - we used to let backend implementations call into the managers that invoked them
16:42:50 <lbragstad> the concern is that if you have to do something like that, it's a sign the backend knows too much, or is taking on too much responsibility
16:43:04 <lbragstad> and whatever they're calling for is probably business logic that should live in higher layers
16:44:03 <lbragstad> not allowing backends to call up keeps subsystem communication at the manager layer
16:44:36 <lbragstad> and reduces inconsistencies in backend implementations (not really applicable here since we only have one backend, but... that's probably another discussion)
16:45:19 <lbragstad> in this case, the backend seems to be on the hook for figuring out an ID, instead of just getting passed an ID from the manager
16:45:42 <kmalloc> part of the issue is that the ID generation is in the wrong place
16:45:55 <kmalloc> i don't see an issue with doing explicitly this and following up with lifting the id generation
16:46:25 <kmalloc> the id generation should not be as low as it is, we worked to fix that as well. a lot of the cases we should be passing an ID in or calling ID gen at the manager layer.
16:47:18 <kmalloc> but we do need to lift the id generation up
16:47:24 <kmalloc> (in either order)
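The layering lbragstad and kmalloc describe can be sketched as follows. This is an illustrative toy with hypothetical names (IDGenerator, Backend, Manager), not keystone's real classes: the point is that the manager owns ID generation and hands the finished ID down, so the backend never has to call "up" into the manager.

```python
import uuid


class IDGenerator:
    """Produces a deterministic public ID from a seed value."""

    def generate(self, seed: str) -> str:
        # Name-based UUIDs are stable: the same seed always yields the same ID.
        return uuid.uuid5(uuid.NAMESPACE_DNS, seed).hex


class Backend:
    """Persistence only: stores exactly what it is given, never calls up."""

    def __init__(self):
        self._rows = {}

    def create_user(self, user_id: str, name: str) -> dict:
        self._rows[user_id] = {'id': user_id, 'name': name}
        return self._rows[user_id]


class Manager:
    """Business logic: decides the ID, then delegates storage to the backend."""

    def __init__(self, backend, id_generator):
        self.backend = backend
        self.id_generator = id_generator

    def create_user(self, name: str) -> dict:
        user_id = self.id_generator.generate(name)  # ID chosen at this layer
        return self.backend.create_user(user_id, name)


mgr = Manager(Backend(), IDGenerator())
alice = mgr.create_user('alice')
```

The anti-pattern under discussion is the inverse: the backend keeping a reference to its manager and invoking the ID generator itself, which pushes business logic down into the persistence layer and lets backend implementations diverge.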
16:47:38 <ayoung> Looking at the existing code for LDAP, it looks like the code I wrote for federation: the LDAP code calls into the ID generator when it needs the ID.
16:48:01 <cmurphy> the ldap code calls it in authenticate()
16:48:28 <ayoung> I looked yesterday, I think it calls it when making the shadow record.  I can link
16:49:14 <ayoung> http://git.openstack.org/cgit/openstack/keystone/tree/keystone/identity/mapping_backends/sql.py#n76
16:49:41 <cmurphy> the shadow record is only created when the user authenticates, or the id is generated in the mapping table when you explicitly call keystone-manage mapping_populate but the user isn't shadowed at that point
16:50:10 <ayoung> right.  So if you list users via LDAP, you create a bunch of shadow records at that time
16:51:00 <ayoung> But since LDAP tends to window the results, you get something like 200 records.  It is weird.
16:51:48 <ayoung> However, my point is that there is nothing wrong per-se with having a driver call in to the id generator at a specific point if we are doing driver specific logic, like we do for LDAP and Federation
16:52:16 <ayoung> And, while I would argue that, yes, it should be done for all users in a uniform manner, that is not the scope of this change
16:52:45 <ayoung> I don't really have time or justification to do a major refactoring like that, and it would take a separate spec
16:52:54 <lbragstad> sure - but we could hang this change on a change that cleans up the technical debt
16:53:05 <ayoung> you're killing me smalls
16:53:22 <lbragstad> :) sorry
16:53:36 <ayoung> So, no, this is not the time to do that
16:53:49 <ayoung> the time to do that is when we decide that ALL users should have an ID generated this way
16:54:24 <ayoung> Otherwise, we pull driver-specific logic out of the driver.
16:54:43 <lbragstad> if other maintainers don't have an issue with this, then i'll remove the -1, i just think that a couple of the signs from the implementation of the patch raise certain code "smells" that tell us we should clean some things up
16:54:56 <lbragstad> to be clear - those spells aren't necessarily due to ayoung's code either
16:55:01 <lbragstad> smells*
16:55:11 <ayoung> I think I like spells better
16:56:17 <cmurphy> 4 minutes left
16:56:19 <ayoung> lbragstad, how about this: we make this change now, and we also put in a spec for predictable user IDs across the board. In that spec impl, we pull the ID generation code up to the manager
16:56:34 <cmurphy> +1 to that
16:57:07 <lbragstad> ayoung are you going to do the refactor to clean up the ID generation code and pull it into the manager?
16:57:07 <ayoung> OK, so if someone could pull the trigger on https://review.openstack.org/#/c/605169/  the first round in Denver is on me
16:57:31 <ayoung> lbragstad, I think I can do that, yeah.  Won't necessarily be for Train, tho
16:57:51 <ayoung> I think it is the right direction anyway, and, yeah, I would feel good about doing that
16:58:00 <cmurphy> ayoung: can you also take the action to propose the spec for predictable user ids across the board?
16:58:05 <kmalloc> ++
16:58:06 <ayoung> Yes
16:58:08 <cmurphy> also not necessarily for train
16:58:15 <lbragstad> to be clear - i don't have an issue with the overall direction
16:58:29 <ayoung> I'll make one explicitly for User Ids, so we can scope it in.
16:58:44 <ayoung> Or maybe user and groups together, so we can hit the id layer
16:58:51 <lbragstad> i just have a preference to see technical debt get cleaned up before tacking on features
16:59:29 <cmurphy> #agreed move forward with https://review.openstack.org/#/c/605169/ as-is, ayoung to propose cleanup and consistency across drivers later
16:59:45 <ayoung> lbragstad, in a refactoring like this, it actually makes sense to implement in all of the distinct drivers, get the tests right, then pull up to a common base.
16:59:57 <kmalloc> much like we did flask
17:00:00 <kmalloc> minimize the test changes
17:00:15 <cmurphy> we're out of time, we can take this to -keystone
17:00:19 <cmurphy> #endmeeting