14:00:04 #startmeeting cinder
14:00:04 Meeting started Wed Jul 20 14:00:04 2022 UTC and is due to finish in 60 minutes. The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:04 The meeting name has been set to 'cinder'
14:00:07 #topic roll call
14:00:14 hi
14:00:18 hi
14:00:22 o/
14:00:22 o/
14:00:39 hi
14:00:41 yough
14:01:04 hi! o/
14:01:26 #link https://etherpad.openstack.org/p/cinder-zed-meetings
14:01:57 hi
14:02:02 o/
14:02:49 o/
14:03:05 hi!
14:03:21 hello everyone
14:03:56 we have quite a few announcements, so let's get started
14:03:57 o/
14:04:05 #topic announcements
14:04:12 first, Driver merge deadline extended
14:04:23 this was discussed last week, but in case someone didn't attend, here's a recap
14:04:31 the driver merge deadline has been extended by 2 weeks; the new deadline is R-10, 29th July 2022
14:04:38 #link https://lists.openstack.org/pipermail/openstack-discuss/2022-July/029550.html
14:04:56 hmmm ... which is coming up fast
14:05:19 yes, everything's going fast this cycle
14:05:42 we have the nvmeof os-brick patches that need to be merged first in order to test the proposed nvmeof drivers, 3 IIRC
14:06:10 let me just quickly find the topic for it
14:06:24 #link https://review.opendev.org/q/topic:nvme-4
14:07:21 so please take a look at the brick patches as well as the proposed drivers
14:07:42 these are the new drivers and their current status: https://etherpad.opendev.org/p/cinder-zed-new-drivers
14:07:53 maybe not current, haven't looked at it in some time
14:08:19 driver developers: please update the comments as you fix stuff
14:08:31 regarding the nvme-4 patches, a couple of improvements were made
14:08:39 anyway, I've added all these links to the meeting etherpad, so please take a look
14:08:53 and simondodsley tested them on the CI with Pure's new RDMA-NVMe driver
14:09:20 good news, so we know the changes work with an actual NVMe driver
14:09:42 \o/
14:10:27 so drivers are a priority for the next couple of weeks
14:10:33 and that brings me to the next announcement
14:10:40 next, Upcoming release deadlines
14:10:46 #link https://lists.openstack.org/pipermail/openstack-discuss/2022-July/029583.html
14:10:53 we have 3 deadlines coming up
14:10:59 os-brick (Non-client library freeze): August 26th, 2022 (R-6 week)
14:11:05 python-cinderclient and python-brick-cinderclient-ext (Client library freeze): September 1st, 2022 (R-5 week)
14:11:10 Feature freeze (Zed-3 milestone): September 1st, 2022 (R-5 week)
14:11:32 although we still have a month until the first deadline, we've seen that these come up really fast
14:11:35 so be prepared for them
14:12:01 abishop may have a change that will affect cinderclient
14:12:07 i'm not sure of any others?
14:12:29 the only impact will be a bump of the max mv
14:12:34 do the quota changes require any change on the client side, geguileo?
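[For context on the "bump of the max mv" abishop mentions above: in python-cinderclient the highest supported microversion is a module-level constant, so the client-side change is essentially a one-liner. A minimal sketch; the exact value shown is illustrative only.]

```python
# cinderclient/api_versions.py (sketch): the "max mv bump" raises the
# highest microversion the client knows how to negotiate with the API.
# The value below is illustrative, not necessarily the released one.
MAX_VERSION = "3.69"
```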
14:12:43 abishop: cool, thanks
14:12:51 whoami-rajat: fortunately no
14:12:56 great
14:13:06 whoami-rajat: there will be changes to the cinder-manage command, but that is released with Cinder itself
14:13:37 yes, so it looks like we won't have major changes to cinderclient
14:13:45 but we surely do for os-brick
14:13:46 ok, that's good, we can concentrate on os-brick and not worry about the client
14:14:04 yes
14:14:22 we still have time, but I just wanted to add this so people are aware
14:14:53 ok, moving on
14:14:56 next, Z-2 Releases
14:15:20 i totally forgot about announcing this, but I guess rosmaita added the point about os-brick, which was released half an hour ago
14:15:30 so we had 3 releases for Zed Milestone 2
14:15:38 20 July - os-brick 6.0.0 (zed) released today
14:15:39 15 July - cinderclient 9.0.0 (zed)
14:15:39 15 July - python-brick-cinderclient-ext 2.0.0 (zed)
14:16:39 and all the important os-brick changes have landed, so we're good
14:17:00 next, Vote for OpenStack release name
14:17:08 #link https://lists.openstack.org/pipermail/openstack-discuss/2022-July/029608.html
14:17:21 so voting is open for the next openstack release name
14:17:25 we have 3 options: Anchovy, Anteater, Antelope
14:17:39 go Anchovy!!!
14:17:43 there are some details on the ML thread about how these names were chosen
14:18:30 i think Antelope was winning when i voted, but we still have time i guess
14:18:37 yeah, today is the last day, so vote fast!
14:18:42 winning by a lot
14:18:51 anchovy lovers of the world unite!
14:19:22 :D
14:19:46 cool
14:19:55 next, Mentors Needed - Grace Hopper Open Source Day + OpenStack
14:20:14 so we have Grace Hopper Open Source Day scheduled for 16th September, and they need mentors
14:20:34 currently the focus will be on teaching how to create a dev env and assigning some OSC gaps
14:21:09 but if someone knows about any other work items, we can propose them
14:21:15 Date: Friday, September 16, 2022, Time: 8am to 3pm Pacific Time
14:21:27 if anyone is interested, they can discuss it with Kendall
14:22:00 last announcement, which should actually be a topic, but let's go with it
14:22:16 the Fungible team is working on setting up CI using Software Factory
14:22:24 #link https://lists.openstack.org/pipermail/openstack-discuss/2022-July/029635.html
14:22:42 so the Fungible team is still working on their CI and has sent a mail to the ML asking for help with Software Factory
14:22:55 i think NetApp was working on it
14:23:06 sfernand, maybe you might be interested? ^
14:23:29 but yeah, vendors, please help them set up the CI so they can get their driver in
14:24:02 yes, sure
14:24:10 thanks
14:24:13 i've just seen his email
14:24:31 our team will try to help
14:24:51 great!
14:24:59 that's all for announcements from my side, does anyone have anything else?
14:25:51 guess not
14:25:56 let's move on to topics then
14:26:02 #topic New blueprint "Support transferring encrypted volumes"
14:26:06 abishop, that's you
14:26:10 hi
14:26:13 #link https://blueprints.launchpad.net/cinder/+spec/transfer-encrypted-volume
14:26:26 I'll repeat the opening sentence here:
14:26:31 "Currently, cinder does not support transferring an encrypted volume due to the need to transfer its (and its snapshots') encryption key. This blueprint proposes an enhancement to transfer the key along with the volume and its snapshots."
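[To make "transfer the key along with the volume" concrete, here is a minimal sketch assuming castellan's key manager API. The helper name, the two-context flow, and the snapshot handling are this editor's assumptions, not abishop's actual PoC code.]

```python
# Illustrative sketch only -- not the PoC.  Re-owning an encryption key
# means reading the secret as the donor, storing a copy owned by the
# accepting project, and deleting the original secret.
from castellan import key_manager


def reassign_encryption_key(donor_ctxt, recipient_ctxt, volume):
    """Hypothetical helper: give the accepting user their own copy of
    the volume's encryption key and drop the donor's copy."""
    km = key_manager.API()
    # Read the secret with the donor's credentials.
    secret = km.get(donor_ctxt, volume.encryption_key_id)
    # Store a copy owned by the accepting project.
    new_key_id = km.store(recipient_ctxt, secret)
    # Remove the old secret and point the volume at the new one.
    km.delete(donor_ctxt, volume.encryption_key_id)
    volume.encryption_key_id = new_key_id
    volume.save()
    # Per the blueprint, the volume's snapshots carry keys too and would
    # need the same treatment (not shown here).
    return new_key_id
```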
14:26:55 I didn't write a spec because the changes don't affect the API request or response
14:27:16 as I mention in the bp, I have a working PoC and the code change is minimal
14:27:51 I propose adding a new mv (3.69) just to signal the fact that xfers of encrypted volumes are now possible
14:28:17 questions?
14:29:00 will the mv be required in the request to get the functionality?
14:29:21 yes, that's how I currently have it coded
14:29:41 i wonder about that for backward compatibility
14:29:52 suppose you currently don't want any of your encrypted stuff transferred
14:29:54 that's why I went that way
14:29:57 you can rely on the request failing
14:30:03 so that it doesn't happen
14:30:13 now, it will go through
14:30:23 yes, requests <3.69 will fail the same way they do now
14:30:46 oh, ok, that's fine then
14:31:06 sorry, i misread your answer earlier
14:32:04 so it's a new feature that users will need to opt into with the new MV, but it also doesn't have any API impact
14:32:11 correct
14:32:41 I guess I'm fine with a blueprint given that the scope described is minimal; a release note + docs mentioning this is now possible should be fine
14:32:57 this = transfer of encrypted volumes
14:34:08 does anyone have any other opinion about this? or does everyone agree this is a good idea
14:34:48 it's a good idea
14:35:30 cool
14:35:57 i also think it is good functionality and we should have it
14:36:27 agree
14:36:34 a new cinder-tempest-plugin test for it would be appreciated, of course
14:36:59 agree, I'll work to add one
14:37:36 great, so anything else on this topic?
14:37:47 not from me!
14:38:17 good, so moving on to the next topic
14:38:27 #topic Discuss new transport types patch for NVMe
14:38:31 simondodsley, that's you
14:38:50 thanks - I have a patch in https://review.opendev.org/c/openstack/cinder/+/849690 to enhance the NVMe transport types.
14:39:09 The current NVMe-oF is too generic, as there are actually 3 types of NVMe, hence the patch
14:39:22 I'd like opinions and agreement on these changes
14:39:39 It won't affect existing NVMe drivers, e.g. LightOS
14:39:49 but will give new drivers the ability to be more granular
14:40:02 especially if they can support different transports
14:40:11 There is a tempest test change as well to support this
14:40:28 https://review.opendev.org/c/openstack/tempest/+/849841
14:41:11 you couldn't resist that black formatting, could you? :)
14:41:24 i think all constants of a particular protocol (NVMe here) have the same functionality?
14:41:44 actually, they sort of don't
14:42:00 FC is very different to RDMA (RoCE and TCP)
14:42:09 We can't even support that yet with os-brick
14:42:49 oh, i thought the idea of using those constants was to unify the protocol naming into a standard format
14:42:52 for a vendor that supports multiple transport types (like Pure), we need to differentiate, especially for unit tests
14:43:07 so the get-pools response is going to report the "canonical" name, not the variant
14:43:11 is that OK?
14:43:42 yeah, that's what i meant, cinder will treat all variants as the same
14:43:55 yes - that is ok, and adding the variants will just make it easier for customers to see the different types - this is especially true in the support matrix
14:44:17 I think the point they want to make is that an operator will not see the difference
14:44:28 and even the volume type won't be able to leverage it
14:44:36 geguileo: ++
14:44:37 so then, what's the point?
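[Background for the exchange above: a "variant" is an alternate spelling that cinder folds into one canonical protocol name, which is why an operator, and a volume type, cannot tell variants apart. A toy sketch of that behavior; the constant names and values are illustrative, not cinder's actual code.]

```python
# Toy illustration of the "variants" problem discussed above: every
# variant spelling collapses into one canonical name, so reporting a
# transport-specific name as a mere variant is indistinguishable from
# plain 'NVMe-oF'.  Names and values are hypothetical.
NVMEOF = 'NVMe-oF'
NVMEOF_VARIANTS = {NVMEOF, 'NVMeOF', 'nvmeof', 'NVMe-RoCE'}


def canonical_protocol(reported: str) -> str:
    """Collapse any known variant spelling to the canonical name."""
    return NVMEOF if reported in NVMEOF_VARIANTS else reported


# The specific transport disappears, hence "what's the point?":
assert canonical_protocol('NVMe-RoCE') == 'NVMe-oF'
```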
14:44:44 geguileo, +1
14:44:53 the volume type absolutely should leverage it
14:44:53 so they may need to be distinct types, and not variants?
14:45:02 right, my question is do we need to rethink this whole thing
14:45:03 simondodsley: but it can't (with the current code)
14:45:25 if i have one backend with multiple NVMe types, I need to differentiate
14:45:57 true, that's why we have to think about not adding these as variants
14:46:08 because all variants are treated as if they were the same thing
14:46:09 yes, so these wouldn't be variants, then
14:46:16 rosmaita: +1
14:46:18 just new identifiers
14:46:39 The driver could return multiple values
14:46:49 but then we should have 3 NVMe variants and would have to fix the current NVMe drivers to use the correct one, not the generic one
14:46:53 ['NVMe-oF', 'RDMA-NVMe']
14:47:07 ^ you can do that in your driver
14:47:15 that way a volume type can leverage it
14:47:31 and iirc get-pools and all other user-facing APIs will return both
14:47:37 so admins will be able to see it
14:47:57 and the scheduler is ready for that
14:48:06 as a side note, it shouldn't be RDMA either - that is a superset of RoCE and TCP
14:49:43 so we should remove the NVMe-oF variant and replace it with NVMe-RoCE, NVMe-TCP and NVMe-FC??
14:49:58 simondodsley: are you sure about RDMA not being correct?
14:50:05 and add those to the cacheable protocols?
14:50:18 I think we should have both things
14:50:21 RDMA can be over either TCP or RoCE
14:50:23 the generic one, and specific ones
14:51:35 I can certainly change the patch to be new protocols and not extend the current NVMe-oF variants
14:51:47 I like that idea
14:51:50 if that is more palatable
14:52:09 and drivers can return both: the generic and the specific ones
14:52:23 is that definitely a list currently?
14:52:34 it's polymorphic
14:52:38 ok
14:52:39 it can be a single string or a list
14:52:46 though no driver is using a list afaik
14:53:01 well, we can be the first and test it out :)
14:53:17 I did test it out a bit, at least the scheduler side of it
14:53:26 and iirc also the get-pools command
14:53:48 OK - I'll amend my driver, the constants patch and the tempest patch
14:53:54 but before you do anything, maybe we should hear what eharney has to say
14:53:55 If there are issues I'll reach out
14:54:03 because he didn't like the drivers-returning-a-list thingy
14:54:10 waiting...
14:54:43 simondodsley: maybe ping him afterwards, because he doesn't seem to be around
14:54:49 protocols are returned.. where exactly?
14:55:10 eharney: in the storage_protocol from the capabilities
14:55:13 of the driver
14:55:37 returning 2 values, a generic and then a specific one
14:55:48 ['NVMe-oF', 'FC-NVMe']
14:56:34 i'll have to study up on how that actually works in the scheduler etc
14:57:32 we can continue the discussion on the patch or in the cinder channel, but i think the general issue is addressed
14:57:42 ok
14:57:54 ok, so we have 3 minutes
14:58:02 rosmaita, would you like to discuss your topic or shift it to next week?
14:58:10 next week is fine
14:58:39 cool, i will shift it to next week, sorry about that
14:59:40 so we're near the end of the meeting, any final thoughts?
15:00:13 ok, we're out of time, thanks everyone for attending!
15:00:16 #endmeeting
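[A closing illustration of the approach the NVMe discussion converged on: a driver reports storage_protocol as a list holding both the generic and the transport-specific name, so get-pools shows both and volume types can match either. This is a sketch under those assumptions, not any vendor's actual driver code; the function and backend details are hypothetical.]

```python
# Sketch of a driver's stats reporting both a generic and a specific
# protocol, per the discussion above.  All names here are hypothetical.
def build_volume_stats(backend_name: str) -> dict:
    return {
        'volume_backend_name': backend_name,
        'vendor_name': 'Example',
        'driver_version': '1.0.0',
        # storage_protocol is polymorphic (a single string or a list).
        # Returning both values lets get-pools and other user-facing
        # APIs show the specific transport, while volume types that
        # match the generic 'NVMe-oF' keep working in the scheduler.
        'storage_protocol': ['NVMe-oF', 'NVMe-RoCE'],
    }


if __name__ == '__main__':
    print(build_volume_stats('example-nvme-backend'))
```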