14:00:04 <whoami-rajat> #startmeeting cinder
14:00:04 <opendevmeet> Meeting started Wed Jul 20 14:00:04 2022 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:04 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:04 <opendevmeet> The meeting name has been set to 'cinder'
14:00:07 <whoami-rajat> #topic roll call
14:00:14 <tosky> hi
14:00:18 <eharney> hi
14:00:22 <rosmaita> o/
14:00:22 <abishop> o/
14:00:39 <TusharTgite> hi
14:00:41 <hemna> yough
14:01:04 <geguileo> hi! o/
14:01:26 <whoami-rajat> #link https://etherpad.openstack.org/p/cinder-zed-meetings
14:01:57 <enriquetaso> hi
14:02:02 <felipe_rodrigues> o/
14:02:49 <nahimsouza[m]> o/
14:03:05 <amalashenko> hi!
14:03:21 <whoami-rajat> hello everyone
14:03:56 <whoami-rajat> we have quite a few announcements, so let's get started
14:03:57 <caiquemello[m]> o/
14:04:05 <whoami-rajat> #topic announcements
14:04:12 <whoami-rajat> first, Driver merge deadline extended
14:04:23 <whoami-rajat> this was discussed last week, but if someone didn't attend it, here's a recap
14:04:31 <whoami-rajat> the driver merge deadline has been extended by 2 weeks and the new deadline is R-10, 29th July 2022
14:04:38 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2022-July/029550.html
14:04:56 <rosmaita> hmmm ... which is coming up fast
14:05:19 <whoami-rajat> yes, everything's going fast this cycle
14:05:42 <whoami-rajat> we have the NVMe-oF os-brick patches that need to be merged first to test the proposed NVMe-oF drivers, 3 IIRC
14:06:10 <whoami-rajat> let me just quickly find the topic for it
14:06:24 <whoami-rajat> #link https://review.opendev.org/q/topic:nvme-4
14:07:21 <whoami-rajat> so please take a look at the brick patches as well as the drivers proposed
14:07:42 <whoami-rajat> these are the new drivers and current status: https://etherpad.opendev.org/p/cinder-zed-new-drivers
14:07:53 <whoami-rajat> maybe not current, I haven't looked at it in some time
14:08:19 <rosmaita> driver developers: please update the comments as you fix stuff
14:08:31 <geguileo> regarding the nvme-4 patches, a couple of improvements were made
14:08:39 <whoami-rajat> anyway I've added all these links to the meeting etherpad so please take a look
14:08:53 <geguileo> and simondodsley tested them on the CI with Pure's new RDMA-NVMe driver
14:09:20 <whoami-rajat> good news, so we know the changes work with an actual NVMe driver
14:09:42 <rosmaita> \o/
14:10:27 <whoami-rajat> so drivers are a priority for the next couple of weeks
14:10:33 <whoami-rajat> and that brings me to the next announcement
14:10:40 <whoami-rajat> next, Upcoming release deadlines
14:10:46 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2022-July/029583.html
14:10:53 <whoami-rajat> we have 3 deadlines coming up
14:10:59 <whoami-rajat> os-brick (Non-client library freeze): August 26th, 2022 (R-6 week)
14:11:05 <whoami-rajat> python-cinderclient and python-brick-cinderclient-ext (Client library freeze): September 1st, 2022 (R-5 week)
14:11:10 <whoami-rajat> Feature freeze (Zed-3 milestone): September 1st, 2022 (R-5 week)
14:11:32 <whoami-rajat> although we still have a month until the first deadline, we've seen that they come up really fast
14:11:35 <whoami-rajat> so be prepared for them
14:12:01 <rosmaita> abishop may have a change that will affect cinderclient
14:12:07 <rosmaita> i'm not sure of any others?
14:12:29 <abishop> the only impact will be a bump of the max mv
14:12:34 <whoami-rajat> do the quota changes require any change on the client side? geguileo
14:12:43 <rosmaita> abishop: cool, thanks
14:12:51 <geguileo> whoami-rajat: fortunately no
14:12:56 <whoami-rajat> great
14:13:06 <geguileo> whoami-rajat: there will be changes to the cinder-manage command, but those are released with Cinder itself
14:13:37 <whoami-rajat> yes, so looks like we won't have major changes to cinderclient
14:13:45 <whoami-rajat> but we surely do have some for os-brick
14:13:46 <rosmaita> ok, that's good, we can concentrate on os-brick and not worry about the client
14:14:04 <whoami-rajat> yes
14:14:22 <whoami-rajat> we still have time but just wanted to add this so people are aware
14:14:53 <whoami-rajat> ok, moving on
14:14:56 <whoami-rajat> next, Z-2 Releases
14:15:20 <whoami-rajat> I totally forgot about announcing this, but I guess rosmaita added the point about os-brick, which was released half an hour ago
14:15:30 <whoami-rajat> so we had 3 releases for Zed Milestone 2
14:15:38 <whoami-rajat> 20 July - os-brick 6.0.0 (zed) released today
14:15:39 <whoami-rajat> 15 July - cinderclient 9.0.0 (zed)
14:15:39 <whoami-rajat> 15 July - python-brick-cinderclient-ext 2.0.0 (zed)
14:16:39 <whoami-rajat> and all the important os-brick changes have landed, so we're good
14:17:00 <whoami-rajat> next, Vote for OpenStack release name
14:17:08 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2022-July/029608.html
14:17:21 <whoami-rajat> so voting is open for the next OpenStack release name
14:17:25 <whoami-rajat> we have 3 options: Anchovy, Anteater, Antelope
14:17:39 <rosmaita> go Anchovy!!!
14:17:43 <whoami-rajat> there are some details added about choosing these names on the ML thread
14:18:30 <whoami-rajat> I think Antelope was winning when I voted, but we still have time I guess
14:18:37 <whoami-rajat> yeah today is the last date so vote fast!
14:18:42 <rosmaita> winning by a lot
14:18:51 <rosmaita> anchovy lovers of the world unite!
14:19:22 <whoami-rajat> :D
14:19:46 <enriquetaso> cool
14:19:55 <whoami-rajat> next, Mentors Needed - Grace Hopper Open Source Day + OpenStack
14:20:14 <whoami-rajat> so we have the Grace Hopper Open Source Day scheduled on 16th Sept and they require mentors
14:20:34 <whoami-rajat> currently their focus will be on teaching how to create a dev env and assigning some OSC gaps
14:21:09 <whoami-rajat> but if someone knows about any other work items, we can propose it
14:21:15 <whoami-rajat> Date: Friday, September 16, 2022, Time: 8am to 3pm Pacific Time
14:21:27 <whoami-rajat> if anyone is interested, they can discuss with Kendall
14:22:00 <whoami-rajat> last announcement, which should actually be a topic but let's go with it
14:22:16 <whoami-rajat> Fungible team is working on setting up CI using Software Factory
14:22:24 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2022-July/029635.html
14:22:42 <whoami-rajat> so the Fungible team is still working on their CI and has sent a mail to the ML asking for help with Software Factory
14:22:55 <whoami-rajat> i think NetApp was working on it
14:23:06 <whoami-rajat> sfernand, maybe you might be interested ? ^
14:23:29 <whoami-rajat> but yeah, vendors please help them set up the CI so they can get their driver in
14:24:02 <sfernand> yes sure
14:24:10 <whoami-rajat> thanks
14:24:13 <sfernand> i've just seen his email
14:24:31 <sfernand> our team will try to help
14:24:51 <whoami-rajat> great!
14:24:59 <whoami-rajat> that's all for announcements from my side, does anyone have anything else?
14:25:51 <whoami-rajat> guess not
14:25:56 <whoami-rajat> let's move on to topics then
14:26:02 <whoami-rajat> #topic New blueprint "Support transferring encrypted volumes"
14:26:06 <whoami-rajat> abishop, that's you
14:26:10 <abishop> hi
14:26:13 <abishop> #link https://blueprints.launchpad.net/cinder/+spec/transfer-encrypted-volume
14:26:26 <abishop> I'll repeat the opening sentence here:
14:26:31 <abishop> "Currently, cinder does not support transferring an encrypted volume due to the need to transfer its (and its snapshots) encryption key. This blueprint proposes an enhancement to transfer the key along with the volume and its snapshots."
14:26:55 <abishop> I didn't write a spec because the changes don't affect the API request or response
14:27:16 <abishop> as I mention in the bp, I have a working PoC and the code change is minimal
14:27:51 <abishop> I propose adding a new mv (3.69) just to signal the fact that xfers of encrypted volumes are now possible
14:28:17 <abishop> questions?
14:29:00 <rosmaita> will the mv be required in the request to get the functionality?
14:29:21 <abishop> yes, that's how I currently have it coded
14:29:41 <rosmaita> i wonder about that for backward compatibility
14:29:52 <rosmaita> suppose you currently don't want any of your encrypted stuff transferred
14:29:54 <abishop> that's why I went that way
14:29:57 <rosmaita> you can rely on the request failing
14:30:03 <rosmaita> so that it doesn't happen
14:30:13 <rosmaita> now, it will go through
14:30:23 <abishop> yes, requests <3.69 will fail the same way they do now
14:30:46 <rosmaita> oh, ok, that's fine then
14:31:06 <rosmaita> sorry, i misread your answer earlier
14:32:04 <whoami-rajat> so it's a new feature that users will need to opt into with the new MV but also doesn't have any API impact
14:32:11 <abishop> correct
14:32:41 <whoami-rajat> I guess I'm fine with a blueprint given the scope described is minimal; a release note + docs mentioning this is now possible should be fine
14:32:57 <whoami-rajat> this = transfer of encrypted volumes
14:34:08 <whoami-rajat> does anyone have any other opinion about this? or does everyone agree this is a good idea
14:34:48 <eharney> it's a good idea
14:35:30 <whoami-rajat> cool
14:35:57 <whoami-rajat> i also think it is a good functionality and we should have it
14:36:27 <rosmaita> agree
14:36:34 <eharney> a new cinder-tempest-plugin test for it would be appreciated, of course
14:36:59 <abishop> agree, I'll work to add one
14:37:36 <whoami-rajat> great, so anything else on this topic?
14:37:47 <abishop> not from me!
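(For reference, a minimal sketch of what opting into the proposed behavior could look like once it lands. The 3.69 microversion value comes from the discussion above; the endpoint, placeholders and function name below are illustrative assumptions, not the final implementation.)

```python
# Hypothetical sketch: request a transfer of an encrypted volume by opting in
# to the proposed microversion (3.69 per the discussion above).  Endpoint,
# token and project placeholders are illustrative; requests below the new
# microversion would keep failing for encrypted volumes exactly as they do now.
import requests

CINDER_ENDPOINT = "https://cinder.example.com/v3/<project_id>"  # placeholder
TOKEN = "<keystone-token>"                                       # placeholder


def transfer_encrypted_volume(volume_id, name="encrypted-xfer"):
    headers = {
        "X-Auth-Token": TOKEN,
        "OpenStack-API-Version": "volume 3.69",  # opt in to the new behavior
        "Content-Type": "application/json",
    }
    body = {"transfer": {"volume_id": volume_id, "name": name}}
    resp = requests.post(f"{CINDER_ENDPOINT}/volume-transfers",
                         json=body, headers=headers, timeout=30)
    resp.raise_for_status()
    # The response includes the auth_key the receiving project uses to accept
    # the transfer; under the proposed change the volume's (and its snapshots')
    # encryption keys would be handed over as part of the transfer.
    return resp.json()["transfer"]
```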
14:38:17 <whoami-rajat> good, so moving on to the next topic
14:38:27 <whoami-rajat> #topic Discuss new transport types patch for NVMe
14:38:31 <whoami-rajat> simondodsley, that's you
14:38:50 <simondodsley> thanks - I have a patch in https://review.opendev.org/c/openstack/cinder/+/849690 to enhance the NVMe transport types.
14:39:09 <simondodsley> The current NVMe-oF is too generic as there are actually 3 types of NVMe, hence the patch
14:39:22 <simondodsley> I'd like opinions and agreement on these changes
14:39:39 <simondodsley> It won't affect existing NVMe drivers, e.g. LightOS
14:39:49 <simondodsley> but will give new drivers the ability to be more granular
14:40:02 <simondodsley> especially if they can support different transports
14:40:11 <simondodsley> There is a tempest test change as well to support this
14:40:28 <simondodsley> https://review.opendev.org/c/openstack/tempest/+/849841
14:41:11 <rosmaita> you couldn't resist that black formatting, could you?  :)
14:41:24 <whoami-rajat> i think all constants of a particular protocol (NVMe here) have the same functionality?
14:41:44 <simondodsley> actually they sort of don't
14:42:00 <simondodsley> FC is very different to RDMA (RoCE and TCP)
14:42:09 <simondodsley> We can't even support that yet with os-brick
14:42:49 <whoami-rajat> oh, i thought the idea of using those constants was to unify the protocol naming into a standard format
14:42:52 <simondodsley> for a vendor that supports multiple transport types (like Pure) we need to differentiate, especially for unit tests
14:43:07 <rosmaita> so the get-pools response is going to report the "canonical" name, not the variant
14:43:11 <rosmaita> is that OK?
14:43:42 <whoami-rajat> yeah, that's what i meant, cinder will treat all variants as the same
14:43:55 <simondodsley> yes - that is ok and adding the variants will just make it easier for customers to see the different types - this is especially true in the support matrix
14:44:17 <geguileo> I think the point they want to make is that an operator will not see the difference
14:44:28 <geguileo> and even the volume type won't be able to leverage it
14:44:36 <rosmaita> geguileo: ++
14:44:37 <geguileo> so then, what's the point?
14:44:44 <whoami-rajat> geguileo, +1
14:44:53 <simondodsley> the volume type absolutely should leverage it
14:44:53 <abishop> so they may need to be distinct types, and not variants?
14:45:02 <rosmaita> right, my question is do we need to rethink this whole thing
14:45:03 <geguileo> simondodsley: but it can't (with current code)
14:45:25 <simondodsley> if i have one backend with multiple NVMe types I need to differentiate
14:45:57 <geguileo> true, that's why we have to think of not adding it as variants
14:46:08 <geguileo> because all variants are treated as if they were the same thing
14:46:09 <rosmaita> yes, so these wouldn't be variants, then
14:46:16 <geguileo> rosmaita: +1
14:46:18 <rosmaita> just new identifiers
14:46:39 <geguileo> The driver could return multiple values
14:46:49 <simondodsley> but then we should have 3 NVMe variants and would have to fix the current NVMe drivers to use the correct one, not the generic one
14:46:53 <geguileo> ['NVMe-oF', 'RDMA-NVMe']
14:47:07 <geguileo> ^ you can do that in your driver
14:47:15 <geguileo> that way a volume type can leverage it
14:47:31 <geguileo> and iirc the get-pools and all other user facing APIs will return both
14:47:37 <geguileo> so admins will be able to see it
14:47:57 <geguileo> and the scheduler is ready for that
14:48:06 <simondodsley> as a side note it shouldn't be RDMA either - that is a superset of RoCE and TCP
14:49:43 <simondodsley> so we should remove the NVMe-oF variant and replace with NVMe-RoCE, NVMe-TCP and NVMe-FC ??
14:49:58 <geguileo> simondodsley: are you sure about the RDMA not being correct?
14:50:05 <simondodsley> and add those to the cacheable protocols?
14:50:18 <geguileo> I think we should have both things
14:50:21 <simondodsley> RDMA can be over either TCP or RoCE
14:50:23 <geguileo> the generic one, and specific ones
14:51:35 <simondodsley> I can certainly change the patch to be new protocols and not extend the current NVMe-oF variants
14:51:47 <geguileo> I like that idea
14:51:50 <simondodsley> if that is more palatable
14:52:09 <geguileo> and drivers can return both: the generic and the specific ones
14:52:23 <simondodsley> is that definitely a list currently?
14:52:34 <geguileo> it's polymorphic
14:52:38 <simondodsley> ok
14:52:39 <geguileo> it can be a single string or a list
14:52:46 <geguileo> though no driver is using a list afaik
14:53:01 <simondodsley> well we can be the first and test it out :-)
14:53:17 <geguileo> I did test it out a bit, at least the scheduler side of it
14:53:26 <geguileo> and iirc also the get-pools command
14:53:48 <simondodsley> OK - I'll amend my driver, the constants patch and the tempest patch
14:53:54 <geguileo> but before you do anything, maybe we should hear what eharney has to say
14:53:55 <simondodsley> If there are issues I'll reach out
14:54:03 <geguileo> because he didn't like the drivers returning a list thingy
14:54:10 <simondodsley> waiting...
14:54:43 <geguileo> simondodsley: maybe ping him afterwards, because he doesn't seem to be around
14:54:49 <eharney> protocols are returned.. where exactly?
14:55:10 <geguileo> eharney: in the storage_protocol from the capabilities
14:55:13 <geguileo> of the driver
14:55:37 <geguileo> returning 2 values, a generic and then a specific one
14:55:48 <geguileo> ['NVMe-oF', 'FC-NVMe']
14:56:34 <eharney> i'll have to study up on how that actually works in the scheduler etc
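(To make the storage_protocol idea concrete, here is a minimal sketch of a driver reporting both a generic and a specific protocol name in its stats, as geguileo describes above. The class name, backend name and constant strings are illustrative assumptions; the final identifiers depend on what the constants patch settles on.)

```python
# Hypothetical driver snippet: report both the generic and the specific NVMe
# protocol in the backend stats.  A plain string is still valid; a list lets
# get-pools show both identifiers and lets a volume type target either one.
class ExampleNVMeTCPDriver(object):  # stand-in for a real volume driver class

    def _update_volume_stats(self):
        self._stats = {
            "volume_backend_name": "example-nvme-tcp",
            "vendor_name": "Example",
            "driver_version": "1.0.0",
            # Generic identifier first, then the transport-specific one.
            # The exact strings depend on the constants patch under review.
            "storage_protocol": ["NVMe-oF", "NVMe-TCP"],
            "pools": [],
        }
```

(A volume type could then steer volumes to the specific transport with an extra spec along the lines of storage_protocol=NVMe-TCP, with the exact extra-spec key depending on how the capabilities filter matches it, while the generic 'NVMe-oF' entry keeps existing types working unchanged.)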
14:57:32 <whoami-rajat> we can continue the discussion on the patch or cinder channel but i think the general issue is addressed
14:57:42 <simondodsley> ok
14:57:54 <whoami-rajat> ok so we have 3 minutes
14:58:02 <whoami-rajat> rosmaita, would you like to discuss your topic or shift it to next week?
14:58:10 <rosmaita> next week is fine
14:58:39 <whoami-rajat> cool, i will shift it to next week and sorry about that
14:59:40 <whoami-rajat> so we're towards the end of the meeting, any final thoughts?
15:00:13 <whoami-rajat> ok we're out of time, thanks everyone for attending!
15:00:16 <whoami-rajat> #endmeeting