17:00:53 #startmeeting keystone
17:00:54 Meeting started Tue May 12 17:00:53 2020 UTC and is due to finish in 60 minutes. The chair is knikolla. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:55 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:57 The meeting name has been set to 'keystone'
17:01:21 o/
17:01:33 o/
17:01:35 o/
17:02:50 how's everyone doing?
17:03:17 pretty good here
17:04:01 Doing fine. Its been days I haven't stepped outside
17:04:26 same here. it snowed on saturday and it rained on monday.
17:05:13 o/
17:05:16 ew snow
17:05:38 supposed to rain here now through Sunday
17:05:41 we pride ourselves in having all the seasons during each season
17:07:14 #topic Announcements
17:07:39 There were quite a few high profile bugs wrt to EC2 credentials that became public last week
17:07:52 if you work downstream or operate a cloud, make sure to keep a tab on those and update as soon as feasible
17:08:19 thanks cmurphy for getting a quick fix in place
17:08:30 and thanks gagehugo for your role on the vmt
17:08:51 ++ on thanks to cmurphy for getting those fixes out quickly
17:09:09 on that note, i disabled voting for the k2k tests for the train and stein branches because an infra problem was blocking those patches and am working with infra to get them working again
17:09:10 having ~5 of them was "fun"
17:09:33 when we have a v4 the oauth1 and ec2 apis should be the first to go >.<
17:09:58 i wish, we have users that use openstack as if it was amazon/s3
17:10:14 blegh
17:10:38 we should definitely rearchitect how they work though
17:10:51 i don't see any reason for an ec2 token to be used for getting a keystone token
17:12:52 i don't know what you would do with an ec2 token in keystone except to exchange it for a keystone token
17:13:26 i think it's so you can give it to swift/nova and pretend they're ec2/s3
17:13:33 and those are the ones using that api to validate it.
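[Editor's note: for readers following the EC2 discussion above — the flow works by having the client sign a request with its shared EC2 secret and keystone recompute and compare that signature server-side, returning a keystone token on success. A minimal sketch of that HMAC check; the function names and canonical-string shape are illustrative, not keystone's exact wire format.]

```python
import hashlib
import hmac


def ec2_signature(secret: str, string_to_sign: str) -> str:
    """Compute an AWS-style HMAC-SHA256 signature over a canonical request.

    Both the client and keystone hold the secret, so keystone can
    recompute the signature and compare it against the one submitted.
    """
    digest = hmac.new(secret.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return digest.hex()


def validate(secret: str, string_to_sign: str, claimed_signature: str) -> bool:
    # constant-time comparison to avoid leaking information via timing
    return hmac.compare_digest(ec2_signature(secret, string_to_sign),
                               claimed_signature)
```

[In the scenario discussed above, swift/nova would forward the signed request to keystone's EC2 API, which performs this style of check before issuing a keystone token.]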
17:13:42 ah
17:14:02 i think i so a very old docs page describing that, but i need to hunt it down again
17:14:05 i saw*
17:15:05 we can discuss more in depth during the ptg
17:15:10 i'll make sure there is a slot for it
17:15:20 #topic Review Requests
17:16:06 #link https://review.opendev.org/#/q/status:open+project:openstack/keystone+branch:master+topic:update-onboarding
17:16:10 since vishakha has been posting the same reviews every week, maybe we should provide some SLAs for reviewing reviews that get posted in this slot?
17:16:25 The same onboarding docs
17:16:36 i feel bad not being able to get around to them, but these weeks have been meeting hell
17:16:57 there is space to improve our process of reviewing
17:17:47 cmurphy: gagehugo any ideas?
17:18:11 sorry i will try to look at them soon
17:18:33 cmurphy: i'm not trying to put anyone on the spot, i'm just wondering how we can improve the process :)
17:18:48 Sorry as well, I've been trapped in feature work for the last month
17:19:00 cmurphy: knikolla gagehugo : No problem. I will also try to come up with some ideas for review process
17:19:03 I will also try to take a look here as well soonish
17:19:41 knikolla: not sure, it's kind of the perpetual problem of not enough time
17:19:55 at least having this slot in the meeting is a good reminder
17:20:05 it may be that, it may also be the thought that someone else is going to get to it.
17:20:06 docs are hard to review quickly though
17:20:13 cmurphy: Should I abandon this https://review.opendev.org/#/c/724915/. Since you fixed this problem with a better approach
17:20:30 vishakha: i was going to bring that up next
17:20:42 for now we can just add a counter to the side of links to see how long some reviews requested are stuck looking for reviewers
17:20:56 just as a metric
17:21:05 knikolla: more metrics sounds like a great idea
17:21:07 Yeah the purpose of putting these is for a reminder
17:21:22 If there's time during the meeting, maybe just take a few minutes and do a group review of one.
17:22:12 we can give that a shot
17:22:37 are people lurking on multiple meetings during this time slot?
17:23:11 not me these days
17:23:52 my other meeting at this time is usually food-related
17:24:06 ^that :-)
17:24:09 gagehugo: that is a good meeting to be in
17:24:24 i was in a yoga meeting before this, haha.
17:26:12 for the k2k flakiness i was able to reproduce it, it seems to be a caching issue because it was hitting this UserNotFound error which should have been unreachable https://review.opendev.org/726729
17:26:40 #action Start incrementing a counter to review requests every meeting it goes unreviewed.
17:26:56 oh, i see, that makes sense
17:27:14 i proposed https://review.opendev.org/726727 as an alternative to vishakha's proposal, though i'm not sure that reaches the heart of the issue but i think it's simpler than trying to rearrange the cleanups
17:27:36 also https://review.opendev.org/722024 has been sitting for a while, so we haven't been testing k2k groups properly for a while
17:28:44 i just +A-ed the last one
17:28:52 ty
17:32:02 any other review requests?
17:32:10 or meta discussion about it
17:32:20 Uploaded file: https://uploads.kiwiirc.com/files/018364b9929904253157a69f16f72ca2/pasted.txt
17:32:27 Hi, I have worked on the oslo_limit patch for handling region_name/service_name since last week, so I splitted it in two patches : one for the code (https://review.opendev.org/#/c/726929/)
17:32:34 and I write an onboarding guide with example for oslo_limit in an other patch : https://review.opendev.org/#/c/726930/
17:33:04 Oops, seems I messed up with my pasted message
17:33:59 awesome, really glad someone's working on that :D
17:34:14 Cool ! And BTW I have found some API parameters missing in keystone doc, so I raise a patch/bug for that too : https://review.opendev.org/#/c/726580/
17:35:03 And it lead to the missing parameters in the sdk, who was required for my oslo patch, but this one is not keystone related
17:35:45 Yes! I saw that and was going to give kudos but didn't know the correct IRC nick. :-)
17:35:49 So, kudos!
17:36:31 Ahah, thanks :)
17:37:05 Hard work will come when I will submit my spec to glance folks I think ;)
17:38:11 will try to review those soon, thanks alistarle
17:38:18 cross project work is always fun
17:38:24 thanks alistarle :)
17:38:47 #topic Open Floor
17:39:26 not ready for review yet but i started working on moving the federation tests back to ubuntu https://review.opendev.org/726995
17:40:01 i don't want to be responsible for making sure opensuse is working any more
17:40:07 oh focal, nice
17:40:17 is devstack supporting focal yet?
17:40:34 not officially but almost https://review.opendev.org/726994
17:42:14 cool
17:42:21 transition periods are always weird
17:42:37 ++
17:44:21 #link https://review.opendev.org/#/c/721267
17:45:20 I wanted to continue over this bug, but as per the last discussion resolving this changes the EC2 credential ID which according to the tempest job was failing. If this is the correct behavior then should i push some changes to tempest?
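[Editor's note: on why the tempest job notices the change at all — if memory serves, keystone derives an EC2 credential's resource ID from a hash of its access key, so PATCHing access_id implicitly mints a new resource ID. The exact derivation below is an assumption for illustration, not copied from keystone's source.]

```python
import hashlib


def ec2_credential_id(access_id: str) -> str:
    # Assumed scheme: the credential's resource ID is the SHA256 hex
    # digest of its access key, so the two can never diverge. Under
    # this scheme, updating access_id necessarily changes the ID.
    return hashlib.sha256(access_id.encode("utf-8")).hexdigest()


old_id = ec2_credential_id("old-access-key")
new_id = ec2_credential_id("new-access-key")
# this inequality is what the tempest job observes as a "changed" credential
assert old_id != new_id
```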
17:47:10 Also I will update this patch over the master
17:49:06 i feel like the fact that tempest is catching it is evidence that it's not correct behavior, at least in this case
17:49:43 if it were me i would suggest blocking patching of that attribute like we just introduced for other attributes https://opendev.org/openstack/keystone/src/branch/master/keystone/api/credentials.py#L179-L182
17:50:31 any change here is kind of api-breaking, either changing how patch works or changing the usefulness of the resource ID, not really sure there's a good option
17:50:42 knikolla: gagehugo ?
17:51:31 If blocking that attribute is okay for the team?
17:51:48 hmm actually blocking the attribute change might also break tempest :/
17:52:12 i remember in the discussion we agreed to not support changing the access_id? am i wrong?
17:52:31 I also thought we discussed not changing the id
17:54:08 we did, we didn't land on anything though http://eavesdrop.openstack.org/meetings/keystone/2020/keystone.2020-04-21-17.00.log.html#l-87
17:54:52 hmm
17:54:56 i think of all the bad options it would be more user-friendly to not accept updates to access_id
17:55:26 even if that would also potentially mean tempest needs to change its test
17:55:41 I will block the updation of "access_id" and will update the tempest test case too
17:57:03 i feel like since changing the access_id breaks the credential anyway, we're not breaking the API by not supporting that. maybe?
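[Editor's note: the "block patching of that attribute" approach discussed above amounts to rejecting the update before it reaches the backend, in the style of the immutable-attribute guard linked at 17:49:43. A hedged sketch of that pattern; the error class and function names here are illustrative, not keystone's actual code.]

```python
# attributes of an EC2 credential that a PATCH must not touch
_IMMUTABLE_EC2_ATTRS = frozenset({"access_id"})


class ImmutableAttributeError(ValueError):
    """Raised when an update tries to change a write-once attribute."""


def check_immutable_attrs(update_body: dict) -> None:
    """Reject updates touching immutable credential attributes.

    Mirrors the guard pattern linked in the discussion: inspect the
    incoming PATCH body and fail fast with a validation error instead
    of silently re-keying the credential.
    """
    blocked = _IMMUTABLE_EC2_ATTRS & update_body.keys()
    if blocked:
        raise ImmutableAttributeError(
            "Cannot update immutable attribute(s): %s"
            % ", ".join(sorted(blocked)))
```

[A PATCH body like `{"blob": "..."}` passes through untouched, while `{"access_id": "new"}` is rejected up front, which matches the "don't accept updates to access_id" option the team leans toward above.]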
17:57:31 either way, we're running out of time, and i'd like to have a volunteer for next week's bug duty
17:57:33 knikolla: yeah that sounds like a sane line of reasoning
17:57:34 :)
17:58:10 i can volunteer
17:58:42 This week we had 2 bugs registered
17:59:03 #link https://bugs.launchpad.net/keystone/+bug/1877851
17:59:03 Launchpad bug 1877851 in OpenStack Identity (keystone) "Add missing service name filter to service list api-ref" [Low,In progress] - Assigned to Victor Coutellier (alistarle)
17:59:09 awesome, cmurphy has been fixing everything since leaving the ptl role
17:59:16 #link https://bugs.launchpad.net/keystone/+bug/1877393
17:59:16 Launchpad bug 1877393 in OpenStack Identity (keystone) "train release notes link to explicit_domain_id spec wrong" [Undecided,In progress] - Assigned to Maurice Escher (maurice-escher)
18:00:22 we're out of time, we can continue on the main channel
18:00:25 #endmeeting