16:00:08 #startmeeting keystone
16:00:09 Meeting started Tue Apr 2 16:00:08 2019 UTC and is due to finish in 60 minutes. The chair is cmurphy. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:13 The meeting name has been set to 'keystone'
16:00:14 o/
16:00:17 o/
16:00:26 good morning
16:00:28 o/
16:00:34 hola
16:00:38 #link https://etherpad.openstack.org/p/keystone-weekly-meeting agenda
16:01:02 o/
16:01:28 o/
16:02:13 let's get started
16:02:21 #topic action item review
16:02:42 * cmurphy cmurphy create poll for team dinner (cmurphy, 16:31:25)
16:02:49 i didn't do that yet, will do this week
16:02:58 #action cmurphy create poll for team dinner
16:03:11 * cmurphy cmurphy organize PTG brainstorm into rough schedule (cmurphy, 16:31:47)
16:03:15 I did do that:
16:03:29 #link https://etherpad.openstack.org/p/keystone-train-ptg PTG agenda
16:04:06 please look it over, feel free to comment or edit
16:05:08 i thought we could frontload the cycle retrospective so that we're starting with that early and then do the cycle planning the next day after we've had some discussions
16:05:24 ++
16:05:51 also all of this might get moved around since we may be trying to schedule meetings with other teams
16:06:23 if there are topics that you just thought of that didn't make it into the brainstorming etherpad we can still work that in too
16:07:04 any thoughts/questions/concerns?
16:07:17 seems like a very informative PTG schedule
16:07:35 :)
16:09:29 okay, moving on
16:09:35 #topic RC2
16:09:50 #link https://etherpad.openstack.org/p/keystone-stein-rc2-tracking rc2 tracking
16:10:19 we're super close, i think these last two can probably get in today and we can propose the rc2 release
16:11:18 looks like just waiting for zuul mostly
16:12:30 gagehugo: did you have a concern with https://review.openstack.org/#/c/622589/8/keystone/tests/unit/test_policy.py ?
16:12:46 a quick question yeah
16:13:12 hmmm
16:13:23 wait
16:13:23 nvm
16:13:49 I am good
16:13:54 haha okay
16:14:06 gagehugo wait - what was your concern?
16:14:26 https://review.openstack.org/#/c/622589/8/keystone/tests/unit/test_policy.py@250 getting popped off the expected list but it also still being in https://review.openstack.org/#/c/622589/8/etc/policy.v3cloudsample.json
16:14:40 yeah, I was looking at it wrong
16:15:11 how so?
16:15:22 I thought it was duplicated there
16:15:26 lol
16:15:36 in removed_policies
16:16:03 removed_policies means that it has been removed from policy.v3cloudsample.json
16:16:18 so - i'm wondering why that test didn't fail locally
16:16:48 because that specific policy at line 250 in test_policy hasn't been removed from policy.v3cloudsample.json (per cmurphy's comment)
16:17:14 203 was the one that you kept in right?
16:17:29 diff from 2-3 ps ago
16:19:23 no - i think you might be onto something
16:19:44 https://review.openstack.org/#/c/622589/8/etc/policy.v3cloudsample.json@73 shows that policy is still in the file
16:19:48 (as it should be)
16:20:03 but https://review.openstack.org/#/c/622589/8/keystone/tests/unit/test_policy.py@250 pops the policy off the expected list of policies
16:20:24 yeah, the differences there are what I was expecting, but removed has duplicates from the policy sample file
16:20:29 but the diff assertion didn't fail https://review.openstack.org/#/c/622589/8/keystone/tests/unit/test_policy.py@369
16:20:31 that confused me for a bit
16:20:57 also I was looking at a previous ps before you put back in the get_policy so updating that helped me as well
16:22:23 it does look like 'identity:create_policy_association_for_endpoint' etc shouldn't be in that list
16:22:29 right...
16:22:38 i wonder why that test didn't fail..
16:22:39 should we un-w+1 that so we can figure it out?
16:22:42 yeah
16:22:44 i think so
16:22:56 i'm on the struggle bus today
16:23:10 i'll pick it up after the meeting
16:23:41 okay, anything else to discuss re rc2?
16:23:59 thanks everyone for working so hard to get everything reviewed on time
16:25:50 okay, next up
16:25:58 #topic meeting time
16:26:07 #link https://doodle.com/poll/zxv6d2mxngmhb3vc poll results
16:26:27 I was surprised at the results, i thought we would end up with something more central to APAC/Americas
16:26:42 I have to say that this time has worked well for me thus far. But I am east coast.
16:26:54 but actually the top result is for the same time but on thursday
16:27:07 the second best is the current time on tuesday or an hour prior on tuesday
16:27:14 When Jamie and Herney were on, there was nothing we could have done to make it easier for both of them.
16:27:35 And remember Daylight saving time wreaks havoc
16:27:35 knikolla: current time on tuesdays is a total no-go for you?
16:27:45 yeah dst may have skewed the results
16:28:17 i hope a number of states finalize the "stop the dst madness" legislation by next year
16:28:53 parts of europe may be doing that soon
16:29:08 yeah, it would be so good if it happened and just disappeared
16:29:25 perma-savings-time is fine... perma-standard-time is fine. pick one. stick with it
16:29:44 though I prefer savings time year round personally
16:30:34 I'm sure your preference has been noted.
16:30:43 lol
16:31:19 nah, i'm just lucky, my state opted for my preference.. i didn't even have to tell them :P
16:31:32 Thursday would probably be just as good for me as this time.
16:31:52 if we're not going to change the meeting time to be more inclusive of timezones then i'd prefer to avoid the hassle and not change it at all
16:32:07 i'll check in with knikolla though, seems like he has a conflict with this time
16:34:52 will check if that's a permanent or short-term conflict
16:35:05 for now let's say next meeting is same time same day?
16:35:25 wfm
16:35:33 lgtm
16:36:41 So... open discussion?
16:36:56 yep
16:37:00 #topic open discussion
16:37:13 I'd like to bring up multisite stuff.
16:37:36 I've had discussions with a couple customers, and there seems to be a real pattern of deploying lots of little openstacks
16:38:06 like, that seems to be the norm, and we don't really have a good way to make any sort of deployment like that work at the keystone level.
16:38:21 morgan, kristi, and I presented on some of it in Berlin
16:38:39 so, I'll commit to getting two changes in that I think help move in the right direction:
16:38:54 o/ oops, didn't look at the time, sorry.
16:39:15 https://review.openstack.org/#/c/605235/ Allow an explicit_domain_id parameter when creating a domain
16:39:21 cmurphy: current time on tuesday works for me
16:39:25 https://review.openstack.org/#/c/605169/ Replace UUID with id_generator for Federated users
16:39:26 i can make it work
16:39:55 okay thanks knikolla
16:39:55 the first kind of sat there broken as I had messed up the error handling, which broke tests. I'm fixing it now
16:40:13 the second has been ready for a long time, but held up by an architectural discussion
16:40:31 I'd like to get agreement that the second one should go in as is.
16:40:46 Or even lots of agreements. Agreements are awesome
16:40:59 lbragstad: can you sum up your concerns on that?
16:41:20 so - my concern is that we're introducing an anti-pattern we already fixed back in mitaka
16:41:33 reintroducing*
16:42:02 way back in the day - we used to let backend implementations call into the managers that invoked them
16:42:50 the concern is that if you have to do something like that, it's a sign the backend knows too much, or is taking on too much responsibility
16:43:04 and whatever they're calling for is probably business logic that should live in higher layers
16:44:03 not allowing backends to call up keeps subsystem communication at the manager layer
16:44:36 and reduces inconsistencies in backend implementations (not really applicable here since we only have one backend, but... that's probably another discussion)
16:45:19 in this case, the backend seems to be on the hook for figuring out an ID, instead of just getting passed an ID from the manager
16:45:42 part of the issue is that the ID generation is in the wrong place
16:45:55 i don't see an issue with explicitly doing this and following up with lifting the id generation
16:46:25 the id generation should not be as low as it is, we worked to fix that as well. in a lot of the cases we should be passing an ID in or calling ID gen at the manager layer.
16:47:18 but we do need to lift the id generation up
16:47:24 (in either order)
16:47:38 Looking at the existing code for LDAP, the code looks like the code I wrote for Federation: the ldap code calls into the id generator when it needs the ID.
16:48:01 the ldap code calls it in authenticate()
16:48:28 I looked yesterday, I think it calls it when making the shadow record. I can link
16:49:14 http://git.openstack.org/cgit/openstack/keystone/tree/keystone/identity/mapping_backends/sql.py#n76
16:49:41 the shadow record is only created when the user authenticates, or the id is generated in the mapping table when you explicitly call keystone-manage mapping_populate, but the user isn't shadowed at that point
16:50:10 right. So if you list users via LDAP, you create a bunch of shadow records at that time
16:51:00 But since LDAP tends to window the results, you get something like 200 records. It is weird.
16:51:48 However, my point is that there is nothing wrong per se with having a driver call in to the id generator at a specific point if we are doing driver-specific logic, like we do for LDAP and Federation
16:52:16 And, while I would argue that, yes, it should be done for all users in a uniform manner, that is not the scope of this change
16:52:45 I don't really have time or justification to do a major refactoring like that, and it would take a separate spec
16:52:54 sure - but we could hang this change on a change that cleans up the technical debt
16:53:05 you're killing me smalls
16:53:22 :) sorry
16:53:36 So, no, this is not the time to do that
16:53:49 the time to do that is when we decide that ALL users should have an ID generated this way
16:54:24 Otherwise, we pull driver-specific logic out of the driver.
16:54:43 if other maintainers don't have an issue with this, then i'll remove the -1, i just think that a couple of the signs from the implementation of the patch raise certain code "smells" that tell us we should clean some things up
16:54:56 to be clear - those spells aren't necessarily due to ayoung's code either
16:55:01 smells*
16:55:11 I think I like spells better
16:56:17 4 minutes left
16:56:19 lbragstad, how about this: we make this change now, and we also put in a spec for predictable user IDs across the board. In that spec impl, we pull up the id generation code to the manager
16:56:34 +1 to that
16:57:07 ayoung are you going to do the refactor to clean up the ID generation code and pull it into the manager?
16:57:07 OK, so if someone could pull the trigger on https://review.openstack.org/#/c/605169/ the first round in Denver is on me
16:57:31 lbragstad, I think I can do that, yeah. Won't necessarily be for Train, tho
16:57:51 I think it is the right direction anyway, and, yeah, I would feel good about doing that
16:58:00 ayoung: can you also take the action to propose the spec for predictable user ids across the board?
16:58:05 ++
16:58:06 Yes
16:58:08 also not necessarily for train
16:58:15 to be clear - i don't have an issue with the overall direction
16:58:29 I'll make one explicitly for user IDs, so we can scope it in.
16:58:44 Or maybe users and groups together, so we can hit the id layer
16:58:51 i just have a preference to see technical debt get cleaned up before tacking on features
16:59:29 #agreed move forward with https://review.openstack.org/#/c/605169/ as-is, ayoung to propose cleanup and consistency across drivers later
16:59:45 lbragstad, in a refactoring like this, it actually makes sense to implement in all of the distinct drivers, get the tests right, then pull up to a common base.
16:59:57 much like we did with flask
17:00:00 minimize the test changes
17:00:15 we're out of time, we can take this to -keystone
17:00:19 #endmeeting
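
A minimal sketch of the removed_policies check discussed under the RC2 topic above, written in Python with made-up helper names; it is not the actual test code in keystone's test_policy.py from review 622589. It only shows the general shape of the pattern under discussion: rules listed as removed are popped off the expected set, and the test then diffs that expectation against what etc/policy.v3cloudsample.json actually contains. A rule sitting in removed_policies while still present in the sample file should surface in the diff, which is roughly the failure the team expected to see.

    # Illustrative sketch only -- hypothetical names, not keystone's real test.
    import json

    # Rules that are supposed to have been deleted from policy.v3cloudsample.json.
    removed_policies = [
        'identity:create_policy_association_for_endpoint',
    ]

    def load_sample_policy_names(path='etc/policy.v3cloudsample.json'):
        # The sample file maps rule names to rule strings; only the names matter here.
        with open(path) as f:
            return set(json.load(f))

    def expected_policy_names(all_default_rules):
        expected = set(all_default_rules)
        for rule in removed_policies:
            # A rule popped here but still present in the sample file should
            # show up as an "extra" entry in the diff assertion below.
            expected.discard(rule)
        return expected

    def assert_sample_matches_defaults(all_default_rules):
        sample = load_sample_policy_names()
        expected = expected_policy_names(all_default_rules)
        extra = sample - expected
        missing = expected - sample
        assert not extra and not missing, (
            'policy.v3cloudsample.json is out of sync: extra=%s missing=%s'
            % (sorted(extra), sorted(missing)))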
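
The driver-versus-manager ID generation discussion also lends itself to a short illustration. The sketch below uses invented class names and is not keystone's actual manager, driver, or id_generator code; it only shows the shape lbragstad argued for, where the manager derives the user ID and passes it down, so the backend never has to call back up into the id generator (the pattern flagged above as a reintroduced anti-pattern).

    # Illustrative sketch only -- invented names, not keystone's real code.
    import uuid


    class DeterministicIDGenerator(object):
        """Derives a stable ID from a domain and a local/federated identity."""

        def generate(self, domain_id, local_id):
            # uuid5 gives a repeatable ID for the same (domain, local_id) pair;
            # a real generator would likely hash differently and sit behind a driver.
            return uuid.uuid5(uuid.NAMESPACE_URL,
                              '%s/%s' % (domain_id, local_id)).hex


    class UserDriver(object):
        """Backend layer: persists exactly what it is given, nothing more."""

        def __init__(self):
            self._users = {}

        def create_user(self, user_id, user_ref):
            self._users[user_id] = user_ref
            return user_ref


    class UserManager(object):
        """Manager layer: owns the business logic, including ID generation."""

        def __init__(self, driver, id_generator):
            self.driver = driver
            self.id_generator = id_generator

        def create_federated_user(self, domain_id, unique_id, display_name):
            # The ID is computed here, at the manager layer, and handed down.
            user_id = self.id_generator.generate(domain_id, unique_id)
            ref = {'id': user_id, 'domain_id': domain_id, 'name': display_name}
            return self.driver.create_user(user_id, ref)

Lifting the generation up to the manager in roughly this way is what the agreed follow-on spec for predictable user IDs would cover; the sketch is only meant to show where the call sits, not how keystone's id_generator driver is actually wired.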