14:00:16 <rosmaita> #startmeeting cinder
14:00:17 <openstack> Meeting started Wed Dec  2 14:00:16 2020 UTC and is due to finish in 60 minutes.  The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:20 <openstack> The meeting name has been set to 'cinder'
14:00:28 <rosmaita> #topic roll call
14:00:42 <eharney> hi
14:00:47 <michael-mcaleer> hi
14:00:54 <lseki> hi
14:00:56 <enriquetaso> hi
14:01:04 <walshh_> hi
14:01:31 <smcginnis> o/
14:01:51 <tosky> o/
14:01:54 <rosmaita> hello everyone
14:01:55 <whoami-rajat__> Hi
14:02:00 <lpetrut> hi
14:02:03 <rosmaita> #link https://etherpad.openstack.org/p/cinder-wallaby-meetings
14:02:08 <rosmaita> big turnout today!
14:02:17 <rosmaita> #topic announcements
14:02:17 <e0ne> hi
14:02:50 <rosmaita> ok, as we discussed/voted on last week, the first wallaby midcycle is next week
14:02:59 <rosmaita> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-November/019038.html
14:03:31 <rosmaita> details ^^, but the tldr is Wednesday 9 December, 1400-1600 UTC
14:03:45 <rosmaita> and here is the planning etherpad:
14:03:49 <rosmaita> #link https://etherpad.opendev.org/p/cinder-wallaby-mid-cycles
14:04:04 <rosmaita> only 1 topic so far
14:04:29 <rosmaita> if others aren't added, we may turn it into a hackfest
14:04:34 <hemna> morning
14:04:36 <rosmaita> so definitely plan to attend
14:04:53 <rosmaita> next thing: Third Party CI updates to treat review.opendev.org better
14:05:02 <rosmaita> #link http://lists.opendev.org/pipermail/service-discuss/2020-November/000136.html
14:05:18 <rosmaita> apparently the new gerrit is getting hammered unmercifully by third party CI
14:05:31 <rosmaita> they have suggestions in there for changes to make to get things to work more smoothly
14:05:55 <jungleboyj> Oh the irony.  The CIs are fighting back.  :-)
14:06:04 <rosmaita> also, please make sure that your 3rd party CI contact info is up to date on the wiki
14:06:13 <rosmaita> #link https://wiki.openstack.org/wiki/ThirdPartySystems
14:06:34 <rosmaita> and one more thing, the 3PCI liaison position is still open
14:06:43 <rosmaita> #link https://wiki.openstack.org/wiki/CinderWallabyPTGSummary#proposed
14:06:58 <rosmaita> talk to me if you're interested and want details
14:07:09 <rosmaita> ok, last announcement
14:07:23 <rosmaita> Tested Python Runtimes for Wallaby
14:07:31 <rosmaita> #link https://governance.openstack.org/tc/reference/runtimes/wallaby.html
14:07:40 <rosmaita> these are 3.6 and 3.8
14:08:05 <rosmaita> same as victoria, because 3.7 got dropped when the CI-on-Focal goal was completed around Milestone-3
14:08:07 <rosmaita> anyway
14:08:14 <rosmaita> we are good except for:
14:08:24 <rosmaita> #link https://review.opendev.org/c/openstack/python-brick-cinderclient-ext/+/763802
14:08:48 <whoami-rajat__> I've merged that one
14:09:03 <rosmaita> and cinderlib, which currently checks 3.6 and 3.7 for functional tests
14:09:30 <rosmaita> but i would let geguileo decide whether to merge that before or after stable/victoria is cut for cinderlib
14:09:44 <rosmaita> whoami-rajat__: ty
14:10:03 <rosmaita> ok, that's all the announcements ... make sure you have the midcycle on your calendar for next week
14:10:15 <rosmaita> #topic Wallaby R-20 Bug Review
14:10:19 <michael-mcaleer> thanks rosmaita
14:10:31 <michael-mcaleer> Wallaby R-20 bug review #link https://etherpad.opendev.org/p/cinder-wallaby-r20-bug-review
14:10:41 <michael-mcaleer> quiet week this week, 1 bug for cinder and 4 for drivers
14:10:49 <michael-mcaleer> Cinder first
14:10:56 <michael-mcaleer> Cinder-backed images occasionally fail to clone in A-A #link https://bugs.launchpad.net/cinder/+bug/1906286
14:10:58 <openstack> Launchpad bug 1906286 in Cinder "Cinder-backed images occasionally fail to clone in A-A" [Medium,Triaged]
14:11:29 <michael-mcaleer> user is having issues with cinder-backed images in glance when using Active/Active in clustered envs
14:11:37 <whoami-rajat__> I think that needs to be fixed with rosmaita's optimization patch for glance cinder store, i've left a comment on the bug
14:11:57 <michael-mcaleer> ok thanks Rajat
14:12:06 <rosmaita> #action rosmaita get back to working on glance cinder store optimization patch
14:12:30 <whoami-rajat__> it's the same issue we face when using multiple glance cinder stores
14:12:53 <rosmaita> thanks whoami-rajat__
14:13:02 <michael-mcaleer> Next bug... PowerMax Driver - Update host SRP during promotion #link https://bugs.launchpad.net/cinder/+bug/1905564
14:13:05 <openstack> Launchpad bug 1905564 in Cinder "PowerMax Driver - Update host SRP during promotion" [Medium,In progress] - Assigned to Simon O'Donovan (odonos12)
14:13:33 <michael-mcaleer> this issue is observed after failing over to remote array and not updating the host value if it differs from the primary
14:13:40 <michael-mcaleer> fix has been submitted for review
14:13:45 <rosmaita> cool
14:13:59 <michael-mcaleer> Next... Tempest volume/snapshot manage cases do not work for PowerFlex cinder driver #link https://bugs.launchpad.net/cinder/+bug/1906380
14:14:01 <openstack> Launchpad bug 1906380 in Cinder "tempest volume/snapshot manage cases do not work for PowerFlex cinder driver." [Low,Triaged] - Assigned to Sam Wan (sam-wan)
14:14:12 <smcginnis> kubectl delete deploy sise-deploy
14:14:17 <michael-mcaleer> This one is down to how PowerFlex handles UUIDs, they need to alter their approach
14:14:17 <smcginnis> Haha, oops.
14:14:33 <michael-mcaleer> ^^ np :)
14:14:45 <michael-mcaleer> The powerflex bug is assigned and being worked on
14:14:54 <rosmaita> ok
14:15:02 <michael-mcaleer> Next... NetApp ONTAP: QoS policy group is deleted after migration #link https://bugs.launchpad.net/cinder/+bug/1906291
14:15:03 <openstack> Launchpad bug 1906291 in Cinder "NetApp ONTAP: QoS policy group is deleted after migration" [Medium,Triaged]
14:15:37 <michael-mcaleer> Problem with QoS policies being deleted in certain scenarios involving migration operations
14:15:41 <lseki> seems that migration was not considered at all when qos was implemented in ontap driver
14:16:08 <michael-mcaleer> thanks for the update lseki
14:16:27 <michael-mcaleer> and lastly..., Storwize: Support IOPS throttling per GB at volume level based on size #link https://bugs.launchpad.net/cinder/+bug/1905988
14:16:30 <openstack> Launchpad bug 1905988 in Cinder "Storwize: Support IOPS throttling per GB at volume level based on size" [Medium,Triaged] - Assigned to Venkata krishna Thumu (venkatakt)
14:16:34 <michael-mcaleer> Volume IOPS is set irrespective of the volume size with the current IBM Storwize driver.
14:16:57 <michael-mcaleer> currently being worked
14:17:00 <michael-mcaleer> on
14:17:08 <michael-mcaleer> thats it for the bugs for R-20, thanks!
14:17:12 <rosmaita> great, looks like we are under control this week
14:17:15 <eharney> let's make sure this happens in a way that works well with the existing iops_per_gb support that we already have ^
14:17:16 <rosmaita> thanks michael-mcaleer
14:17:37 <michael-mcaleer> eharney... from their bug: Adding support to calculate volume IOPS based on volume size and the value 'iops_per_gb' and update volume metadata
14:17:37 <michael-mcaleer> for the volume actions such as Creation, Update, Resize and Retype to avoid retype of volume for changing the throttling value
14:17:37 <rosmaita> eharney: maybe you could put a message on the bug
14:17:44 <eharney> will do
14:17:53 <rosmaita> thanks
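A minimal sketch of the per-GB throttling calculation being discussed (deriving a volume's IOPS cap from its size and an 'iops_per_gb' value), assuming simple linear scaling; the function name, bounds, and defaults are illustrative assumptions, not the Storwize driver's actual code:

    def scaled_iops(size_gb, iops_per_gb, min_iops=100, max_iops=None):
        """Derive an IOPS cap from volume size and a per-GB rate.

        Illustrative only: a driver would read iops_per_gb from the
        volume type's QoS/extra specs and re-apply the cap on create,
        extend, and retype so it tracks the volume's current size.
        """
        iops = size_gb * iops_per_gb
        if min_iops is not None:
            iops = max(iops, min_iops)
        if max_iops is not None:
            iops = min(iops, max_iops)
        return int(iops)

    # e.g. a 50 GiB volume with iops_per_gb=10 gets a 500 IOPS cap
    print(scaled_iops(50, 10))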
14:18:00 <rosmaita> #topic stable releases
14:18:07 <rosmaita> whoami-rajat__: that's you
14:18:09 <whoami-rajat__> thanks rosmaita
14:18:46 <whoami-rajat__> Since the initial targeted patches were merged in victoria and ussuri, I've proposed the respective release patches (link on the meeting etherpad)
14:19:14 <rosmaita> thanks for posting the patches, i will verify the hashes after this meeting
14:19:18 <whoami-rajat__> for train, we are experiencing gate failure on the lvm-lio-barbican job and there are still 3 patches remaining -- 1 in os-brick, 2 in cinder
14:19:36 <whoami-rajat__> so i will keep rechecking or look into the gate, and propose a release when all are merged
14:19:38 <whoami-rajat__> thanks rosmaita
14:19:40 <tosky> about this: we have a not-nice regression with encrypted volumes; the fix should maybe go in soon and may deserve another release soon
14:20:04 <rosmaita> tosky: which branch?
14:20:06 <eharney> yes
14:20:10 <rosmaita> or is it in master?
14:20:21 <whoami-rajat__> i see the reclone patch in victoria
14:20:23 <tosky> this is the victoria backport: https://review.opendev.org/c/openstack/cinder/+/764503
14:20:39 <rosmaita> oh, that bug
14:20:45 <rosmaita> it is extremely un-nice
14:20:55 <rosmaita> will that need to go to ussuri, too?
14:21:01 <tosky> sorry, I forgot to remind everyone about that too
14:21:09 <rosmaita> i can't remember when clone re-keying was implemented
14:21:15 <tosky> and train
14:21:19 <eharney> it needs to go into train
14:21:22 <tosky> I don't remember about stein
14:21:41 <rosmaita> ok. whoami-rajat__ this is a good reason to hold up the releases
14:22:04 <whoami-rajat__> ok
14:22:13 <rosmaita> let's get the reclone patch merged into cinder and then re-propose
14:22:48 <whoami-rajat__> do we want a deadline for that patch to merge or just hold the release until that makes into train?
14:22:56 <whoami-rajat__> rosmaita:  ^
14:23:16 <rosmaita> i think release U and V as soon as it merges, and then we will have to focus on train separately
14:23:33 <whoami-rajat__> ok
14:23:37 <whoami-rajat__> #link https://review.opendev.org/c/openstack/cinder/+/764503
14:23:43 <whoami-rajat__> so stable cores can take a look ^
14:23:57 <hemna> that should get in asap
14:24:35 <rosmaita> agreed
14:24:47 <rosmaita> ok, thanks whoami-rajat__
14:25:09 <rosmaita> anyone interested in figuring out what's up with cinder-tempest-plugin-lvm-lio-barbican job, feel free to take a look
14:25:14 <rosmaita> you will be a Hero of Cinder
14:25:31 <rosmaita> #topic community goal (no JSON for policies)
14:25:48 <rosmaita> one patch and then we are done with this
14:25:51 <rosmaita> #link https://review.opendev.org/c/openstack/cinder/+/763917
14:26:22 <rosmaita> please take a look when you have a chance, it's in recheck now due to an unrelated failure
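For deployers following this goal, oslo.policy provides an oslopolicy-convert-json-to-yaml tool for migrating existing policy overrides; below is a minimal Python sketch of just the bare conversion (file names are examples, and the real tool does more, such as keeping rules that match the defaults commented out):

    import json

    import yaml  # PyYAML

    # Read a legacy JSON policy override file and re-serialize it as YAML.
    with open("policy.json") as src:
        rules = json.load(src)
    with open("policy.yaml", "w") as dst:
        yaml.safe_dump(rules, dst, default_flow_style=False)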
14:26:49 <rosmaita> #topic ceph-iscsi driver reviews needed
14:26:51 <rosmaita> hemna: that's you
14:27:03 <hemna> heh the release notes for every release since queens have had a blurb about converting policy files to yaml
14:27:28 <hemna> anyway, the ceph-iscsi driver has been passing zuul for a while now.  I was hoping to get reviews on it
14:27:32 <hemna> so we can get it to land
14:27:51 <rosmaita> thanks for your work on this, walt
14:27:57 <hemna> here are the series of patches outstanding https://review.opendev.org/q/hashtag:%22ceph-iscsi%22+(status:open%20OR%20status:merged)
14:27:58 <jungleboyj> ++
14:28:02 <hemna> including the driver
14:28:05 <jungleboyj> hemna:  Thanks!
14:28:09 <e0ne> hemna: great! will review it asap
14:28:15 <hemna> I'd love for this to actually land :)
14:28:23 <hemna> it's been a long time in the works.
14:28:29 <rosmaita> i think a lot of people are interested in this
14:28:37 <e0ne> rosmaita: +1
14:28:54 <jungleboyj> I will make time to look at it.
14:29:11 <hemna> we need the driver to merge first, then the others can follow
14:29:29 <hemna> that's it from me.
14:29:34 <rosmaita> it would be great if we can land it soon; we will have a couple of new drivers to review for Milestone-2, so let's get the ceph-iscsi driver out of the way ASAP
14:29:47 <rosmaita> thanks hemna
14:29:59 <rosmaita> #topic Windows RBD os-brick support
14:30:05 <rosmaita> lpetrut: you're up
14:30:10 <lpetrut> hi
14:30:37 <lpetrut> we've recently ported RBD to Windows and we'd like to add an os-brick connector, allowing Ceph RBD volumes to be attached to Hyper-V VMs
14:30:49 <hemna> nice
14:31:05 <lpetrut> I thought it might be worth bringing up the bp: https://blueprints.launchpad.net/cinder/+spec/os-brick-windows-rbd
14:31:14 <lpetrut> here's the implementation: https://review.opendev.org/c/openstack/os-brick/+/718403
14:31:40 <hemna> so this is only supported in ceph pacific release ?
14:31:50 <lpetrut> yep
14:32:49 <hemna> ok so it should be disabled for < pacific then on windows.
14:33:25 <lpetrut> well, older versions won't even compile so I'm not sure if it's worth adding an explicit check
14:33:37 <hemna> the ceph driver will think it's ok
14:33:48 <hemna> it'll startup and then fail on attach
14:34:20 <hemna> the user will be able to create volumes, but never attach them.
14:34:33 <hemna> kinda goes along w/ our ceph driver support issue
14:35:36 <eharney> i think that would happen anyway, since the check would be in the os-brick side?
14:36:30 <lpetrut> fwiw we do have a check on the os-brick side, ensuring that rbd is installed https://review.opendev.org/c/openstack/os-brick/+/718403/6/os_brick/initiator/windows/rbd.py#65
14:36:44 <eharney> yeah, i think that's all that's needed?
14:36:44 <hemna> sure, but the get connector should return something that signals the ceph driver that it's not supported at all.
14:37:02 <eharney> the volume driver?
14:37:09 <hemna> I think for other connectors the get connector doesn't return what's needed to allow an attach, so then the driver can say, hey this isn't supported.
14:37:11 <hemna> yah
14:37:25 <eharney> why does it care?  maybe c-vol is serving rbd volumes to other hosts...
14:37:45 <eharney> (or maybe i don't have a clear picture of the whole deployment model here)
14:37:58 <hemna> so the log can say something useful instead of just a failed attach
14:38:46 <lpetrut> atm an os-brick exception with the message "rbd.exe is not available." would be raised if it's missing
14:38:46 <hemna> we don't have to solve it here, I just wanted to raise the issue
14:38:50 <eharney> the client trying to attach saying "rbd doesn't exist" seems to cover that
14:39:06 <lpetrut> hemna thanks for bringing it up
14:39:27 <rosmaita> let's keep this issue in mind when reviewing the patch
14:39:32 <hemna> "rbd doesn't exist" isn't really the same thing as "this will never work, since it's not the release required to do this"
14:39:44 <hemna> $0.02
14:40:56 <lpetrut> oh, you're not talking only about the client side binaries but also about the ceph version
14:40:59 <lpetrut> cluster version*
14:41:26 <rosmaita> well, our official position is now that we expect client/server alignment
14:41:44 <lpetrut> makes sense
14:41:58 <rosmaita> well, there was some pushback from operators about that
14:42:06 <rosmaita> but i agree that it makes sense
14:42:49 <hemna> there are some client compatibility modes for the server side too
14:43:28 <rosmaita> i guess the issue is partially how much can reasonably be addressed in documentation vs. what needs to be checked in the code
14:44:08 <hemna> well since you can't even install < pacific on windows, then I guess we just need to document the driver
14:44:19 <rosmaita> right
14:44:52 <rosmaita> lpetrut: maybe we can discuss this at the mid-cycle
14:45:13 <rosmaita> it will be coming up more as we try to improve the rbd driver to take advantage of newer ceph developments
14:45:27 <lpetrut> definitely
14:45:41 <rosmaita> ok, cool
14:46:00 <lpetrut> meanwhile we're taking care of the os-brick CI
14:46:10 <rosmaita> excellent!
14:46:24 <lpetrut> is the specless bp ok or do we need a spec?
14:46:45 <hemna> I don't think we need a spec for a new connector
14:47:17 <rosmaita> i think the bp is ok for this
14:47:43 <lpetrut> great, I guess someone will have to approve it though
14:48:07 <rosmaita> done
14:48:14 <lpetrut> awesome, thanks!
14:48:33 <hemna> lpetrut nice job man
14:48:51 <rosmaita> lpetrut: anything else?
14:48:53 <lpetrut> thanks :) I hope it will be useful
14:49:00 <lpetrut> rosmaita: that's it from my side, thanks
14:49:08 <rosmaita> great
14:49:15 <rosmaita> #topic open discussion
14:49:24 <whoami-rajat__> I would like to request reviews on the nested quota driver removal; this was agreed at the wallaby PTG https://review.opendev.org/c/openstack/cinder/+/758913
14:50:10 <rosmaita> one of the few patches i've seen recently with a +1 from Zuul!
14:50:51 <whoami-rajat__> because it hasn't been rechecked in a long time, it might fail on a recheck
14:50:58 <rosmaita> :)
14:51:28 <tosky> as there is a bit of time, do you think this fix should be backported? https://review.opendev.org/c/openstack/cinder/+/743040
14:51:40 <tosky> I mean, the question is a bit biased
14:53:40 <rosmaita> well, it's a small isolated change, and it's a bugfix
14:54:06 <eharney> yes, it should
14:54:39 <tosky> oook, backport incoming
14:54:41 <tosky> thanks
14:55:27 <rosmaita> anyone else?
14:55:54 <rosmaita> i mean on a different topic ... if you have a strong opinion on the backport, you can leave a vote there
14:57:54 <rosmaita> ok, sounds like that's all ... review priorities: ceph-iscsi, mypy, and hierarchical quota driver removal
14:58:06 <rosmaita> have a good week, and see you at the midcycle next week!
14:58:12 <whoami-rajat__> thanks!
14:58:15 <michael-mcaleer> thanks!
14:58:30 <lseki> thanks
14:58:46 <rosmaita> #endmeeting