14:00:48 #startmeeting cinder
14:00:52 Meeting started Wed Dec 16 14:00:48 2020 UTC and is due to finish in 60 minutes. The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:53 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:55 The meeting name has been set to 'cinder'
14:01:07 #topic roll call
14:01:18 hi
14:01:21 hi
14:01:23 howdy
14:01:29 \o
14:01:32 hi
14:01:44 hi
14:01:45 hi
14:01:54 o/
14:02:07 o/
14:02:50 o/
14:02:51 ok, let's get started
14:02:54 hello everyone
14:03:01 #link https://etherpad.openstack.org/p/cinder-wallaby-meetings
14:03:06 #topic announcements
14:03:29 lseki wishes to announce that he has moved to Red Hat
14:03:48 if you have NetApp questions, the person to contact from now on is sfernand
14:04:17 yep
14:04:18 deadlines update
14:04:31 the spec freeze is on Friday this week
14:04:40 lseki: Congratulations.
14:04:41 #link https://releases.openstack.org/wallaby/schedule.html#w-cinder-spec-freeze
14:04:53 we'll look at some specs later in the meeting
14:05:06 there will NOT be a cinder meeting on 30 December
14:05:25 so, next week's meeting on 23 December will be the last meeting of the month, so we will hold it in videoconf
14:05:42 connection info will be on the agenda etherpad
14:05:49 I won't be present, i'll be on leave already, but I will have the bug report ready for you all to look at
14:05:54 great, ty
14:06:01 last announcement:
14:06:03 I will be on vacation. :-)
14:06:09 slacker
14:06:11 :)
14:06:14 Hey now!
14:06:18 :-)
14:06:31 we have one community goal on the verge of completion
14:06:41 #link https://review.opendev.org/c/openstack/cinder/+/763917
14:06:54 please review this patch so we can get it out of the way
14:07:08 i can't approve it because i am co-author
14:07:20 * jungleboyj will look
14:07:30 thank you, i withdraw my "slacker" comment
14:07:39 :-)
14:07:46 that's all from me, anyone else have an announcement?
14:07:50 * lseki sneaks in, but also in another meeting
14:08:03 jungleboyj: thanks
14:08:12 #topic New meeting time proposal
14:08:22 lseki: that's you if you can multitask
14:08:34 I'll try
14:08:43 :-) Good topic. I have the same problem.
14:08:59 ok, we used to have the cinder meeting at 1500 UTC
14:09:03 so now I have a conflicting daily meeting at 14:00-14:30 UTC
14:09:15 we moved it to be more friendly to APAC people
14:09:29 but i haven't seen evidence that it has helped
14:10:08 so we may have to think of another way to encourage APAC participation
14:10:37 unfortunately, if we move to 1500 then it conflicts with the horizon meeting
14:10:59 * jungleboyj liked it when it was 1600 :-)
14:11:23 well, that would solve the horizon problem
14:11:49 1600 works for me as well
14:12:04 but might be even worse for APAC folks
14:12:12 ok, i guess the thing to do is to take a poll
14:12:22 Works for me as it moves it out beyond all my APAC meetings.
14:12:38 which could result in 1300 for all i know
14:12:43 :D
14:13:11 ok, to be clear, next week's meeting will be at 1400 UTC as usual
14:13:36 i'll get a poll out today
14:13:49 #action rosmaita - poll for meeting time change
14:14:09 does anyone have any radical change suggestions, like changing the day of week?
14:15:09 hearing none, i'll go with some conservative options and an open space for suggestions
14:15:26 ++
14:15:45 ++
14:15:51 rosmaita: another day of week works for me
14:16:12 e0ne: would it be possible to swap time slots with horizon?
14:16:22 or would that be bad for the horizon team?
14:16:36 rosmaita: I need to ask the team
14:16:46 rosmaita: it should not be an issue
14:17:07 ok, thanks, i definitely don't want to schedule a meeting that has a conflict for you
14:17:24 #topic bug report
14:17:29 thanks rosmaita
14:17:37 we had quite a few opened this week, 11 to be exact
14:17:52 I will save time by not including the summaries here; they are all in the bug report #link https://etherpad.opendev.org/p/cinder-wallaby-r18-bug-review
14:18:13 if anyone has any comments just reply after I link the bug; I will go through them in the same order as the etherpad
14:18:18 Cinder bug #1: Target volume type is still in use #link https://bugs.launchpad.net/cinder/+bug/1907157
14:18:22 Launchpad bug 1907157 in Cinder "Target volume type is still in use" [Medium,Triaged]
14:18:23 bug 1 in Ubuntu Malaysia LoCo Team "Microsoft has a majority market share" [Critical,In progress] https://launchpad.net/bugs/1 - Assigned to MFauzilkamil Zainuddin (apogee)
14:18:58 never mind the second part about ubuntu malaysia, that's my improper formatting
14:19:08 yeah, that is not cinder's fault
14:19:29 ok, we were added to it by the submitter because they thought we might have some input or suggestions
14:19:59 i will look
14:20:10 np
14:20:14 Cinder bug 2: Attachment update API returns 500 when it should return 400 #link https://bugs.launchpad.net/cinder/+bug/1907295
14:20:16 Launchpad bug 1907295 in Cinder "attachment update API returns 500 when it should return 400" [Medium,Triaged] - Assigned to Eric Harney (eharney)
14:20:19 it may be the image cache
14:20:23 i ran into this while chasing a different issue
14:20:35 the log basically said "HTTP 500" with no useful context about why
14:20:44 i'll either chase it down or just close this if i can't reproduce it
14:21:11 (also seems like a bug in itself that we can log HTTP 500 with no info, but that's probably a whole different can of worms)
14:22:49 np thanks eharney
14:22:54 Cinder bug 3: py38 ReplicationTestCase unit test failure #link https://bugs.launchpad.net/cinder/+bug/1907672
14:22:57 Launchpad bug 1907672 in Cinder "py38 ReplicationTestCase unit test failure" [Low,Triaged]
14:22:58 bug 3 in mono (Ubuntu) "Custom information for each translation team" [Undecided,Fix committed] https://launchpad.net/bugs/3
14:23:12 this is just a flaky unit test that needs some work
14:23:23 yeah it looked fairly simple
14:23:37 Cinder bug 4: Cannot set quota for volume type #link https://bugs.launchpad.net/cinder/+bug/1907750
14:23:38 Launchpad bug 1907750 in Cinder "Cannot set quota for volume type" [Low,Triaged] - Assigned to haobing1 (haobing1)
14:25:21 this is limited to cases where the second attempt at the quota uses an explicitly upper-case variation of the first volume-type name
14:25:28 the submitter confirmed this in a follow-up comment
14:26:23 any insights/comments or will we move on?
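
For readers following bug 4: per-volume-type quotas are keyed by the type name, and volume-type names are case sensitive. A minimal sketch of the reported scenario with python-cinderclient might look like the following; the session object (sess), project id, and type name are invented for illustration and are not taken from the bug.

    # Sketch only: assumes an authenticated keystoneauth1 session 'sess'.
    from cinderclient import client

    cinder = client.Client('3', session=sess)

    # Creating a volume type also creates per-type quota keys of the
    # form volumes_<type>, gigabytes_<type>, snapshots_<type>.
    cinder.quotas.update('PROJECT_ID', **{'volumes_mytype': 10})

    # The reported failure mode: repeating the update with an
    # upper-case variation of the name refers to a different
    # (nonexistent) type, because the names are case sensitive.
    cinder.quotas.update('PROJECT_ID', **{'volumes_MYTYPE': 10})
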
14:27:03 Cinder bug 5: cinder-backup does not allow to enable 'fast-diff' feature for backup images stored in ceph #link https://bugs.launchpad.net/cinder/+bug/1907964
14:27:05 Launchpad bug 1907964 in Cinder "cinder-backup does not allow to enable 'fast-diff' feature for backup images stored in ceph" [Low,In progress] - Assigned to Christian Rohmann (christian-rohmann)
14:27:10 the suggestion that when you delete a volume type, all the associated stuff should also be deleted sounds correct
14:27:22 yeah it is a fair assumption
14:27:39 if there are no vols associated it should be fine to delete
14:28:34 fast-diff looks like an improvement we should investigate for rbd, more a perf enhancement than a bug
14:29:27 eharney: will I change this to importance: wishlist? it is already in progress by the user and changes have been proposed
14:29:38 makes sense
14:29:57 no problem, I will reply to the submitter's latest comment
14:30:15 Cinder bug 6: use md5 to check volume metadata #link https://bugs.launchpad.net/cinder/+bug/1908040
14:30:16 Launchpad bug 1908040 in Cinder "use md5 to check volume metadata" [Undecided,Invalid]
14:30:39 subsequently marked as invalid after a response from eharney
14:31:04 Cinder bug 7: "publish_service_capabilities" periodic task blocks cinder-volume #link
14:31:20 this one may be of higher importance than medium because it references issues in environments at scale
14:31:25 i just marked this one as a duplicate, it's a known ceph issue that we fixed
14:31:31 ty
14:32:01 last cinder bug is a driver one
14:32:06 Cinder bug 8: ibm_storage driver: the "OSvol:" prefix should be optional in the volume name #link https://bugs.launchpad.net/cinder/+bug/1908181
14:32:07 Launchpad bug 1908181 in Cinder "ibm_storage driver: the "OSvol:" prefix should be optional in the volume name" [Low,Triaged]
14:32:18 this is a customer request for a driver change
14:32:30 it has been tagged correctly so hopefully the IBM driver team see it and pick it up
14:32:32 anyone from ibm here?
14:33:12 well, hopefully they are watching launchpad
14:33:20 ok, thanks, michael-mcaleer
14:33:34 os-brick next
14:33:35 #topic specs
14:33:38 oops
14:33:40 haha
14:33:44 #topic bugs continued
14:33:45 pulled the trigger a bit quick
14:33:54 only three left, one is a duplicate
14:33:59 os-brick bug 1: Evacuation results in multipath residue when use fc #link https://bugs.launchpad.net/os-brick/+bug/1906768
14:34:01 Launchpad bug 1906768 in os-brick "Evacuation results in multipath residue when use fc" [Undecided,New]
14:34:10 and this one is a duplicate of the first ...
14:34:11 os-brick bug 2: fibre channel driver can not disconnet volume when VM to be evacuated #link https://bugs.launchpad.net/os-brick/+bug/1907442
14:34:13 Launchpad bug 1907442 in os-brick " fibre channel driver can not disconnet volume when VM to be evacuated" [Undecided,Invalid]
14:34:15 pretty sure a lot of multipath fixes made it back to queens
14:34:20 the answer is probably: upgrade from pike :/
14:34:58 should the same be relayed to the submitter?
14:35:28 i'll leave a note
14:35:34 ty rosmaita
14:35:38 pike is EOL
14:35:46 and the last bug for this week...
14:35:47 python-cinderclient bug 1: backup delete fail #link https://bugs.launchpad.net/python-cinderclient/+bug/1907542
14:35:50 Launchpad bug 1907542 in python-cinderclient "backup delete fail" [Low,Triaged] - Assigned to FengJiankui (fengjiankui)
14:36:45 mmm
14:36:45 seems unlikely to be a client bug
14:37:14 are you thinking more cinder?
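
Before the backup-delete thread continues below, a side note on bug 5: the fast-diff feature the submitter wants can be toggled on an existing RBD image through the python-rbd bindings, roughly as sketched here. The pool and image names are hypothetical (Cinder's ceph backup driver names base images like volume-<uuid>.backup.base), and fast-diff depends on the exclusive-lock and object-map features being enabled first.

    # Sketch only: pool and image names are hypothetical.
    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        with cluster.open_ioctx('backups') as ioctx:
            with rbd.Image(ioctx, 'volume-<uuid>.backup.base') as image:
                # fast-diff requires exclusive-lock and object-map
                image.update_features(rbd.RBD_FEATURE_EXCLUSIVE_LOCK, True)
                image.update_features(rbd.RBD_FEATURE_OBJECT_MAP |
                                      rbd.RBD_FEATURE_FAST_DIFF, True)
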
14:37:44 it sounds like an API bug
14:37:52 yeah but unless we have a backup-delete cascade option (i think we don't?) it's probably not actually a bug
14:38:12 don't users have to delete dependent backups first?
14:38:24 i forget
14:38:52 yes, but the issue is that it's reporting that there isn't a dependent backup
14:38:53 from my own work with our own driver yes that was my experience
14:38:58 when there is one in error state
14:39:11 yeah the CLI output should say has_dependent_backups = True
14:39:29 i'm not sure it should say True if the second backup failed
14:40:03 probably not, but you need to be able to find the failed backup somehow
14:40:08 right
14:40:22 the failed backup is still linked to the backup, albeit just in namespace because it failed
14:40:44 would has_failed_backups be more suitable?
14:40:53 no
14:41:24 we could have the delete failure message list the id of the second backup
14:42:09 also should probably have a backup-delete --cascade option like we do for volume snaps to make this easier
14:42:32 or auto-delete dependents that are in error state?
14:42:39 but i agree that --cascade would be useful
14:42:53 that could work too
14:43:30 well, it looks like FengJiankui wants to work on it
14:43:40 i'll leave a note for him on the bug
14:43:50 that's all from me this week, thanks everyone for the input
14:43:57 if he can't make the meeting, maybe we can discuss on the ML
14:44:02 thanks michael-mcaleer
14:44:09 #topic specs
14:44:28 anyone here with questions about their spec proposal?
14:45:37 i think i've left comments on everything but https://review.opendev.org/c/openstack/cinder-specs/+/764628
14:46:53 i vaguely remember discussing this glance ceph optimization years ago
14:47:23 it's something we should look into, not sure if it needs a full spec process or not
14:48:54 i guess a launchpad BP will do as a driver optimization
14:49:20 thanks eric
14:49:40 the proposal is basically: we currently optimize image->volume in situations where we can with ceph -- so do it in the other direction too
14:51:44 well, if no one here has questions about their specs, we can move on
14:52:07 cinder-cores, please review specs so they can meet the freeze deadline
14:52:14 #topic open discussion
14:53:11 someone added a line about mypy?
14:53:15 yeah that was me
14:53:51 I started to look at some of the code submissions for mypy from walshh_ and although it is straightforward there are still some questions on how to properly review it
14:54:18 well
14:54:20 things like when/when not to use mypy type settings, etc.; it's difficult because there is no reference for the correct way things should be done
14:54:30 running "tox -e mypy" and reading the html report will basically tell you if it's working
14:54:44 type settings?
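
To make the --cascade analogy above concrete: volume delete already supports cascading to dependent snapshots, while backup delete today only offers force. A sketch with python-cinderclient follows, where the session and ids are assumed, and the cascade argument to backups.delete() is hypothetical -- it does not exist; it is only the idea being discussed.

    # Sketch only: 'sess' and the ids are assumed for illustration.
    from cinderclient import client

    cinder = client.Client('3', session=sess)

    # Existing behavior: a volume delete can cascade to its snapshots.
    cinder.volumes.delete('VOLUME_ID', cascade=True)

    # Today, dependent backups must be removed first; a failed
    # incremental can be cleared with force=True.
    cinder.backups.delete('FAILED_INCREMENTAL_ID', force=True)
    cinder.backups.delete('BASE_BACKUP_ID')

    # Hypothetical equivalent discussed above (NOT a real argument):
    # cinder.backups.delete('BASE_BACKUP_ID', cascade=True)
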
14:55:22 even at that, the uploaded patchset runs cleanly with mypy, but I could see some methods where types are defined while others are not in different files, and I couldn't discern why one function had them and another did not
14:55:48 type settings = what the type is set to for input/return parameters for each function
14:56:01 yes, this is the sticking point for most reviewers i think -- trying to determine the goal for coverage to consider it good enough to merge
14:56:31 ok, makes sense, I was incorrectly under the assumption it was full coverage but wasn't sure
14:56:53 some people have been leaning that way, but i still question if that makes sense at the beginning of this
14:57:29 it's a lot of work to hit 100% coverage from what i've seen in the few submissions already in
14:57:50 not complicated changes, just a lot of them
14:57:52 the problem is, even if you hit 100% coverage in a file, it won't actually be fully covered unless you also hit that in every file it imports etc
14:58:21 so, we could do that, but i'm still not convinced it's the most efficient use of effort
14:59:22 no problem, that gives me a bit more direction with reviews for now anyway, I will wait for core reviewer comments to see if there is a consistent set of 'guidelines' or 'best practices' for this work
14:59:30 let's discuss this in video next week
14:59:36 thanks eharney
14:59:37 will be a bit easier to hash it out
14:59:50 ok, we need to make room for horizon
14:59:53 thanks for attending
14:59:55 * eharney is out of the office next week
14:59:57 rosmaita: thanks ;)
15:00:01 please review specs!
15:00:13 #endmeeting cinder
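
As a concrete illustration of the mypy question in open discussion above: the "type settings" are ordinary annotations, and by default mypy only type-checks annotated functions, which is why a partially annotated file can still pass "tox -e mypy" cleanly. A hypothetical example, not actual Cinder code:

    # Hypothetical functions; names are illustrative, not from Cinder.
    from typing import Optional

    def find_attachment(volume: dict,
                        host: Optional[str] = None) -> Optional[dict]:
        """Annotated: mypy checks callers and the return type."""
        for attachment in volume.get('attachments', []):
            if host is None or attachment.get('host') == host:
                return attachment
        return None

    def summarize(volume):
        """Unannotated: arguments are treated as Any, so mypy stays
        quiet here -- the file can pass while only partly typed."""
        return '%s (%s GB)' % (volume['name'], volume['size'])
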