14:00:16 #startmeeting cinder
14:00:17 Meeting started Wed Dec 2 14:00:16 2020 UTC and is due to finish in 60 minutes. The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:20 The meeting name has been set to 'cinder'
14:00:28 #topic roll call
14:00:42 hi
14:00:47 hi
14:00:54 hi
14:00:56 hi
14:01:04 hi
14:01:31 o/
14:01:51 o/
14:01:54 hello everyone
14:01:55 Hi
14:02:00 hi
14:02:03 #link https://etherpad.openstack.org/p/cinder-wallaby-meetings
14:02:08 big turnout today!
14:02:17 #topic announcements
14:02:17 hi
14:02:50 ok, as we discussed/voted on last week, the first wallaby midcycle is next week
14:02:59 #link http://lists.openstack.org/pipermail/openstack-discuss/2020-November/019038.html
14:03:31 details ^^, but the tldr is Wednesday 9 December, 1400-1600 UTC
14:03:45 and here is the planning etherpad:
14:03:49 #link https://etherpad.opendev.org/p/cinder-wallaby-mid-cycles
14:04:04 only 1 topic so far
14:04:29 if others aren't added, we may turn it into a hackfest
14:04:34 morning
14:04:36 so definitely plan to attend
14:04:53 next thing: Third Party CI updates to treat review.opendev.org better
14:05:02 #link http://lists.opendev.org/pipermail/service-discuss/2020-November/000136.html
14:05:18 apparently the new gerrit is getting hammered unmercifully by third party CI
14:05:31 they have suggestions in there for changes to make to get things to work more smoothly
14:05:55 Oh the irony. The CIs are fighting back.
:-)
14:06:04 also, please make sure that your 3rd party CI contact info is up to date on the wiki
14:06:13 #link https://wiki.openstack.org/wiki/ThirdPartySystems
14:06:34 and one more thing, the 3PCI liaison position is still open
14:06:43 #link https://wiki.openstack.org/wiki/CinderWallabyPTGSummary#proposed
14:06:58 talk to me if you're interested and want details
14:07:09 ok, last announcement
14:07:23 Tested Python Runtimes for Wallaby
14:07:31 #link https://governance.openstack.org/tc/reference/runtimes/wallaby.html
14:07:40 these are 3.6 and 3.8
14:08:05 same as victoria, because 3.7 got dropped when the CI-on-Focal goal was completed around Milestone-3
14:08:07 anyway
14:08:14 we are good except for:
14:08:24 #link https://review.opendev.org/c/openstack/python-brick-cinderclient-ext/+/763802
14:08:48 I've merged that one
14:09:03 and cinderlib, which currently checks 3.6 and 3.7 for functional tests
14:09:30 but i would let geguileo decide whether to merge that before or after stable/victoria is cut for cinderlib
14:09:44 whoami-rajat__: ty
14:10:03 ok, that's all the announcements ...
make sure you have the midcycle on your calendar for next week
14:10:15 #topic Wallaby R-20 Bug Review
14:10:19 thanks rosmaita
14:10:31 Wallaby R-20 bug review #link https://etherpad.opendev.org/p/cinder-wallaby-r20-bug-review
14:10:41 quiet week this week, 1 bug for cinder and 4 for drivers
14:10:49 Cinder first
14:10:56 Cinder-backed images occasionally fail to clone in A-A #link https://bugs.launchpad.net/cinder/+bug/1906286
14:10:58 Launchpad bug 1906286 in Cinder "Cinder-backed images occasionally fail to clone in A-A" [Medium,Triaged]
14:11:29 user is having issues with cinder-backed images inside glance when using Active/Active in clustered envs
14:11:37 I think that needs to be fixed with rosmaita's optimization patch for glance cinder store, i've left a comment on the bug
14:11:57 ok thanks Rajat
14:12:06 #action rosmaita get back to working on glance cinder store optimization patch
14:12:30 it's the same issue we face when using multiple glance cinder stores
14:12:53 thanks whoami-rajat__
14:13:02 Next bug... PowerMax Driver - Update host SRP during promotion #link https://bugs.launchpad.net/cinder/+bug/1905564
14:13:05 Launchpad bug 1905564 in Cinder "PowerMax Driver - Update host SRP during promotion" [Medium,In progress] - Assigned to Simon O'Donovan (odonos12)
14:13:33 this issue is observed after failing over to the remote array and not updating the host value if it differs from the primary
14:13:40 fix has been submitted for review
14:13:45 cool
14:13:59 Next... Tempest volume/snapshot manage cases do not work for PowerFlex cinder driver #link https://bugs.launchpad.net/cinder/+bug/1906380
14:14:01 Launchpad bug 1906380 in Cinder "tempest volume/snapshot manage cases do not work for PowerFlex cinder driver." [Low,Triaged] - Assigned to Sam Wan (sam-wan)
14:14:12 kubectl delete deploy sise-deploy
14:14:17 This one is down to how PowerFlex handles UUIDs, they need to alter their approach
14:14:17 Haha, oops.
14:14:33 ^^ np :)
14:14:45 The PowerFlex bug is assigned and being worked on
14:14:54 ok
14:15:02 Next... NetApp ONTAP: QoS policy group is deleted after migration #link https://bugs.launchpad.net/cinder/+bug/1906291
14:15:03 Launchpad bug 1906291 in Cinder "NetApp ONTAP: QoS policy group is deleted after migration" [Medium,Triaged]
14:15:37 Problem with QoS policies being deleted in certain scenarios involving migration operations
14:15:41 seems that migration was not considered at all when qos was implemented in the ontap driver
14:16:08 thanks for the update lseki
14:16:27 and lastly... Storwize: Support IOPS throttling per GB at volume level based on size #link https://bugs.launchpad.net/cinder/+bug/1905988
14:16:30 Launchpad bug 1905988 in Cinder "Storwize: Support IOPS throttling per GB at volume level based on size" [Medium,Triaged] - Assigned to Venkata krishna Thumu (venkatakt)
14:16:34 Volume IOPS is set irrespective of the volume size with the current IBM Storwize driver.
14:16:57 currently being worked
14:17:00 on
14:17:08 that's it for the bugs for R-20, thanks!
14:17:12 great, looks like we are under control this week
14:17:15 let's make sure this happens in a way that works well with the existing iops_per_gb support that we already have ^
14:17:16 thanks michael-mcaleer
14:17:37 eharney...
from their bug: Adding support
14:17:37 to calculate volume IOPS based on volume size and the value 'iops_per_gb' and update volume metadata for
14:17:37 the volume actions such as Creation, Update, Resize and Retype to avoid retype of volume for changing the
14:17:37 throttling value
14:17:37 eharney: maybe you could put a message on the bug
14:17:44 will do
14:17:53 thanks
14:18:00 #topic stable releases
14:18:07 whoami-rajat__: that's you
14:18:09 thanks rosmaita
14:18:46 Since the initial targeted patches were merged in victoria and ussuri, I've proposed the respective release patches (link on the meeting etherpad)
14:19:14 thanks for posting the patches, i will verify the hashes after this meeting
14:19:18 for train, we are experiencing gate failure on the lvm-lio-barbican job and there are still 3 patches remaining -- 1 in os-brick, 2 in cinder
14:19:36 so i will keep rechecking or look into the gate, and propose a release when all are merged
14:19:38 thanks rosmaita
14:19:40 about this: we have a not nice regression with encrypted volumes which should maybe go in soon and may deserve another review
14:19:46 s/review/release/
14:19:48 soon
14:20:04 tosky: which branch?
14:20:06 yes
14:20:10 or is it in master?
14:20:21 i see the reclone patch in victoria
14:20:23 this is the victoria backport: https://review.opendev.org/c/openstack/cinder/+/764503
14:20:39 oh, that bug
14:20:45 it is extremely un-nice
14:20:55 will that need to go to ussuri, too?
14:21:01 sorry, I forgot to remind everyone about that too
14:21:09 i can't remember when clone re-keying was implemented
14:21:15 and train
14:21:19 it needs to go into train
14:21:22 I don't remember about stein
14:21:41 ok. whoami-rajat__ this is a good reason to hold up the releases
14:22:04 ok
14:22:13 let's get the reclone patch merged into cinder and then re-propose
14:22:48 do we want a deadline for that patch to merge or just hold the release until it makes it into train?
14:22:56 rosmaita: ^
14:23:16 i think release U and V as soon as it merges, and then we will have to focus on train separately
14:23:33 ok
14:23:37 #link https://review.opendev.org/c/openstack/cinder/+/764503
14:23:43 so stable cores can take a look ^
14:23:57 that should get in asap
14:24:35 agreed
14:24:47 ok, thanks whoami-rajat__
14:25:09 anyone interested in figuring out what's up with the cinder-tempest-plugin-lvm-lio-barbican job, feel free to take a look
14:25:14 you will be a Hero of Cinder
14:25:31 #topic community goal (no JSON for policies)
14:25:48 one patch and then we are done with this
14:25:51 #link https://review.opendev.org/c/openstack/cinder/+/763917
14:26:22 please take a look when you have a chance, it's in recheck now due to an unrelated failure
14:26:49 #topic ceph-iscsi driver reviews needed
14:26:51 hemna: that's you
14:27:03 heh, the release notes for every release since queens have had a bump about converting policy files to yaml
14:27:28 anyway, the ceph-iscsi driver has been passing zuul for a while now. I was hoping to get reviews on it
14:27:32 so we can get it to land
14:27:51 thanks for your work on this, walt
14:27:57 here are the series of patches outstanding: https://review.opendev.org/q/hashtag:%22ceph-iscsi%22+(status:open%20OR%20status:merged)
14:27:58 ++
14:28:02 including the driver
14:28:05 hemna: Thanks!
14:28:09 hemna: great! will review it asap
14:28:15 I'd love for this to actually land :)
14:28:23 it's been a long time in the works.
14:28:29 i think a lot of people are interested in this
14:28:37 rosmaita: +1
14:28:54 I will make time to look at it.
14:29:11 we need the driver to merge first, then the others can follow
14:29:29 that's it from me.
14:29:34 it would be great if we can land it soon, we will have a couple of new drivers to review for Milestone-2, so let's get the ceph-iscsi one out of the way ASAP
14:29:47 thanks hemna
14:29:59 #topic Windows RBD os-brick support
14:30:05 lpetrut: you're up
14:30:10 hi
14:30:37 we've recently ported RBD to Windows and we'd like to add an os-brick connector, allowing RBD Ceph volumes to be attached to Hyper-V VMs
14:30:49 nice
14:31:05 I thought it might be worth bringing up the bp: https://blueprints.launchpad.net/cinder/+spec/os-brick-windows-rbd
14:31:14 here's the implementation: https://review.opendev.org/c/openstack/os-brick/+/718403
14:31:40 so this is only supported in the ceph pacific release?
14:31:50 yep
14:32:49 ok, so it should be disabled for < pacific then on windows.
14:33:25 well, older versions won't even compile, so I'm not sure if it's worth adding an explicit check
14:33:37 the ceph driver will think it's ok
14:33:48 it'll start up and then fail on attach
14:34:20 the user will be able to create volumes, but never attach them.
14:34:33 kinda goes along w/ our ceph driver support issue
14:35:36 i think that would happen anyway, since the check would be on the os-brick side?
14:36:30 fwiw we do have a check on the os-brick side, ensuring that rbd is installed: https://review.opendev.org/c/openstack/os-brick/+/718403/6/os_brick/initiator/windows/rbd.py#65
14:36:44 yeah, i think that's all that's needed?
14:36:44 sure, but the get connector should return something that signals the ceph driver that it's not supported at all.
14:37:02 the volume driver?
14:37:09 I think for other connectors the get connector doesn't return what's needed to allow an attach, so then the driver can say, hey, this isn't supported.
14:37:11 yah
14:37:25 why does it care? maybe c-vol is serving rbd volumes to other hosts...
14:37:45 (or maybe i don't have a clear picture of the whole deployment model here)
14:37:58 so the log can say something useful instead of just a failed attach
14:38:46 atm an os-brick exception with the "rbd.exe is not available." message would be raised if it's missing
14:38:46 we don't have to solve it here, I just wanted to raise the issue
14:38:50 the client trying to attach saying "rbd doesn't exist" seems to cover that
14:39:06 hemna thanks for bringing it up
14:39:27 let's keep this issue in mind when reviewing the patch
14:39:32 "rbd doesn't exist" isn't really the same thing as "this will never work, as it's not the release required to do this."
14:39:44 $0.02
14:40:56 oh, you're not talking only about the client-side binaries but also about the ceph version
14:40:59 cluster version*
14:41:26 well, our official position is now that we expect client/server alignment
14:41:44 makes sense
14:41:58 well, there was some pushback from operators about that
14:42:06 but i agree that it makes sense
14:42:49 there are some client compatibility modes for the server side too
14:43:28 i guess the issue is partially how much can reasonably be addressed in documentation vs. what needs to be checked in the code
14:44:08 well, since you can't even install < pacific on windows, then I guess we just need to document the driver
14:44:19 right
14:44:52 lpetrut: maybe we can discuss this at the mid-cycle
14:45:13 it will be coming up more as we try to improve the rbd driver to take advantage of newer ceph developments
14:45:27 definitely
14:45:41 ok, cool
14:46:00 meanwhile we're taking care of the os-brick CI
14:46:10 excellent!
14:46:24 is the specless bp ok or do we need a spec?
14:46:45 I don't think we need a spec for a new connector
14:47:17 i think the bp is ok for this
14:47:43 great, I guess someone will have to approve it though
14:48:07 done
14:48:14 awesome, thanks!
14:48:33 lpetrut nice job man
14:48:51 lpetrut: anything else?
14:48:53 thanks :) I hope it will be useful
14:49:00 rosmaita: that's it from my side, thanks
14:49:08 great
14:49:15 #topic open discussion
14:49:24 I would like to request reviews on the nested quota driver removal; this was agreed at the wallaby PTG: https://review.opendev.org/c/openstack/cinder/+/758913
14:50:10 one of the few patches i've seen recently with a +1 from Zuul!
14:50:51 because it hasn't been rechecked in a long time, it might fail on a recheck
14:50:58 :)
14:51:28 as there is a bit of time, do you think this fix should be backported? https://review.opendev.org/c/openstack/cinder/+/743040
14:51:40 I mean, the question is a bit biased
14:53:40 well, it's a small isolated change, and it's a bugfix
14:54:06 yes, it should
14:54:39 oook, backport incoming
14:54:41 thanks
14:55:27 anyone else?
14:55:54 i mean on a different topic ... if you have a strong opinion on the backport, you can leave a vote there
14:57:54 ok, sounds like that's all ... reviewing priorities: ceph-iscsi and mypy, hierarchical quota driver removal
14:58:06 have a good week, and see you at the midcycle next week!
14:58:12 thanks!
14:58:15 thanks!
14:58:30 thanks
14:58:46 #endmeeting