14:00:31 #startmeeting cinder
14:00:31 Meeting started Wed Mar 1 14:00:31 2023 UTC and is due to finish in 60 minutes. The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:31 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:31 The meeting name has been set to 'cinder'
14:00:36 #topic roll call
14:00:49 o/
14:00:50 hello
14:01:39 o/
14:01:51 o/
14:01:57 o/
14:02:13 o/
14:02:18 o/
14:02:28 #link https://etherpad.opendev.org/p/cinder-antelope-meetings
14:03:42 hello everyone
14:03:49 let's get started
14:03:54 #topic announcements
14:03:59 RC1 this week
14:04:04 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-February/032438.html
14:04:10 #link https://review.opendev.org/c/openstack/releases/+/875397
14:04:36 I've created an etherpad to prioritize patches for RC1 and RC2, please add your patches here
14:04:41 #link https://etherpad.opendev.org/p/cinder-antelope-fixes-rc
14:05:14 during RC1, stable/2023.1 will be cut and 2023.2 will be master
14:05:34 any changes merged after RC1 need to be backported to stable/2023.1 to be included in the 2023.1 release
14:05:43 o/
14:06:28 in summary, please add your patches (bug fixes) that you think are important and need to go into the 2023.1 Antelope release
14:06:48 next, Vancouver PTG attendance
14:06:55 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-February/032478.html
14:07:29 this is a followup on our discussion last week regarding the people interested in joining the physical PTG in Vancouver
14:07:44 NetApp and Pure Storage teams have shown their interest
14:08:03 if you're planning to attend, please reply to the email stating so, so we have a record of people planning to attend
14:08:53 next, TC + PTL results are out
14:09:00 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-February/032442.html
14:09:14 congratulations rosmaita for running for TC for another tenure!
14:09:28 thanks!
14:09:41 ++ Congrats!
14:10:12 will most likely be my last time, so anyone interested or curious about serving on the TC, reach out and I'll be happy to fill you in
14:10:32 congrats!
14:11:20 and just stating it formally in case people didn't notice, I will be the PTL again for 2023.2 (Bobcat)
14:11:23 rosmaita++
14:11:32 whoami-rajat++
14:11:36 whoami-rajat: congratulations!
14:12:10 Congratulations and thank you!
14:12:46 thanks!
14:12:54 and the last announcement for today: we have a new TC chair, Kristi Nikolla
14:13:00 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-February/032499.html
14:13:11 gmann stepped down as TC chair after serving 2 years
14:13:24 so gmann++ for serving as TC chair!
14:13:45 ++
14:14:24 that's all for announcements
14:14:53 would anyone like to announce something? sometimes I miss news
14:15:53 guess not
14:15:59 let's move to topics then
14:16:03 #topic Multiattach issue
14:16:08 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-February/032389.html
14:16:27 this was initially raised on the ML, so I will briefly describe the problem
14:16:47 we used to support creating multiattach volumes by providing multiattach=True in the volume create request body
14:17:30 this was discouraged as users might accidentally create MA volumes without setting up a cluster-aware FS
14:17:35 which can eventually lead to data loss
14:17:46 we switched to using multiattach volume types to create MA volumes
14:17:58 since volume types are created by admin users
14:18:25 the deprecation was done in Queens but we kept the old behavior for compatibility
14:18:27 I think there's an upstream bug for this:
14:18:29 #link https://bugs.launchpad.net/cinder/+bug/2008259
14:18:39 thanks enriquetaso
14:18:57 so I proposed 2 changes, one for cinder (to remove the support) and one for tempest (to update the test to use the new way)
14:19:41 sounds good
14:19:53 this morning I had a discussion with gmann regarding this, and his concern is that we're breaking backward compatibility with this change and it should be done with a MV
14:20:11 my argument was we don't want to keep the old behavior since that's a bug
14:20:24 i need to find logs of that discussion (i will look for that)
14:20:29 but you can find the details in the tempest patch
14:20:31 #link https://review.opendev.org/c/openstack/tempest/+/875372
14:20:38 and also on the ML
14:20:39 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-March/032502.html
14:21:17 found the discussion
14:21:18 #link https://meetings.opendev.org/irclogs/%23openstack-qa/%23openstack-qa.2023-03-01.log.html#t2023-03-01T01:25:48
14:23:30 so i wanted the cinder team's input on this: what should be the ideal way forward, 1) remove the compat code altogether, or 2) keep the compat code and handle it with a new microversion?
14:24:01 i am against the microversioning
14:24:14 this is a data loss issue, so we don't want people doing it
14:24:22 in case of 2), users will still be able to create multiattach volumes the old way (which we clearly don't want)
14:24:54 ack, I have the same thought
14:24:58 I would agree with rosmaita. If it is a data loss issue and shouldn't have been possible in the first place, then it should be fixed.
14:25:12 Not MV'ed.
14:25:21 from the API ref, it looks like 'multiattach' in the request body has been there since 3.0? https://docs.openstack.org/api-ref/block-storage/v3/?expanded=create-a-volume-detail#volumes-volumes
14:25:48 yeah, it was carried over from the v2 API
14:26:17 and we deprecated it with MV 3.50 (introducing the volume type way)
14:26:23 the text says "Note that support for multiattach volumes depends on the volume type being used."
14:26:40 so if you include --multiattach and the VT doesn't allow it, what happens?
14:27:19 i think it takes either of those values; if "multiattach" is there it doesn't consider the volume type
14:27:34 multiattach or extra_specs.get("multiattach"...
14:27:59 which is again an issue that you rightly pointed out: we're not honoring the volume type
14:28:00 well, we could say that's a bug, and change it to keep multiattach in the request body, but reject the request if the VT doesn't allow it?
14:28:15 then no API change, but correct behavior?
14:28:41 would be kind of stupid, but would be backward compatible
14:29:27 hmm, but it would still break volume creation for people passing multiattach=True
14:29:38 only sometimes
14:29:44 :D
14:29:50 I don't think users use both ways simultaneously
14:30:11 if they start using the correct volume type, multiattach automatically becomes redundant
14:30:14 but i see your point
14:30:27 me neither, probably better to just say: this is unsafe, we no longer allow it
14:30:40 we're keeping the API request consistent but changing its behavior on the backend (is that an acceptable change?)
14:31:00 well, if it doesn't break tempest, no one will notice
14:31:43 tempest will break since they create the volume with only multiattach=True
14:32:09 not providing a volume type at all (so it might take the default type, which is not MA)
14:32:22 gotcha
14:34:24 so I think the consensus is we don't want to go the microversion way, right?
14:34:46 what's supposed to happen if you have "multiattach": false on a VT that supports multiattach?
14:35:17 or maybe, what does happen currently?
14:35:36 it has an OR operator, so it takes either of those values
14:36:08 here https://review.opendev.org/c/openstack/cinder/+/874865/2/cinder/volume/flows/api/create_volume.py#b499
14:36:28 thanks
14:36:50 https://github.com/openstack/cinder/blob/master/cinder/volume/flows/api/create_volume.py#L496-L500
14:38:39 ok, so currently, if you say --multiattach=False in the request, but the VT allows it, you get multiattach ... am i right about that?
14:38:58 rosmaita, yes correct
14:38:58 line 500 in https://review.opendev.org/c/openstack/cinder/+/874865/2/cinder/volume/flows/api/create_volume.py#b499
14:39:32 ok, then we definitely need to remove 'multiattach' from the volume-create request
14:40:11 i'll try to write something coherent on your tempest patch and see if i can convince gmann
14:40:38 that would be great, thanks!
14:41:02 thanks for the discussion and explaining this
14:41:22 np, thanks for all the valuable feedback
14:41:56 so that's all the topics we had for today
14:42:00 let's move to open discussion
14:42:05 #topic open discussion
14:42:41 please review this rbd fix: https://review.opendev.org/c/openstack/cinder/+/865855
14:43:50 Hey guys, we have these Antelope Dell driver bugs and were wondering if there are any blockers: https://review.opendev.org/c/openstack/cinder/+/768105 https://review.opendev.org/c/openstack/cinder/+/797970 https://review.opendev.org/c/openstack/cinder/+/821739 https://review.opendev.org/c/openstack/cinder/+/858370
14:45:03 eharney, added a comment, it's missing a release note
14:47:26 I don't think we have much for open discussion, so let's close early
14:47:35 remember to review the bug fixes important for RC1 and RC2
14:47:41 also add your patches on the RC etherpad
14:47:46 we should really look at https://review.opendev.org/c/openstack/cinder/+/873249
14:48:00 do we use the review-priority flag?
14:48:47 Hey guys, we have these Antelope Dell driver bugs and were wondering if there are any blockers: https://review.opendev.org/c/openstack/cinder/+/768105 https://review.opendev.org/c/openstack/cinder/+/797970 https://review.opendev.org/c/openstack/cinder/+/821739 https://review.opendev.org/c/openstack/cinder/+/858370
14:49:48 eharney: ++ on 873249
14:49:51 eharney, we do, and there should be a link to track that, but please do set it
14:49:54 seems important
14:50:02 whoami-rajat: i set it weeks ago
14:51:05 oh i see, I will find the tracker link then and we can target that
14:52:40 hmm, these config values haven't been working since the beginning
14:55:42 added it to the RC tracker etherpad
14:55:56 Tony_Saad, I've left a comment on one patch
14:56:32 thanks!
14:56:57 I'll read more about the context and review 873249
15:00:04 we're out of time, thanks everyone for attending!
15:00:06 #endmeeting
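
[Editor's sketch] The multiattach resolution bug discussed in the meeting (the OR between the request-body flag and the volume-type extra spec) can be illustrated roughly as below. This is a simplified sketch, not the actual cinder code in create_volume.py; the function names are hypothetical, and the "<is> True" string is how cinder conventionally encodes boolean extra specs.

```python
def resolve_multiattach_legacy(req_multiattach: bool, extra_specs: dict) -> bool:
    """Deprecated behavior: request-body flag OR volume-type extra spec."""
    vt_multiattach = extra_specs.get("multiattach") == "<is> True"
    # The OR means multiattach=True in the request body always wins,
    # so the volume type can never veto it -- the data-loss concern
    # raised in the meeting.
    return req_multiattach or vt_multiattach


def resolve_multiattach_fixed(req_multiattach: bool, extra_specs: dict) -> bool:
    """Agreed direction: only the (admin-controlled) volume type decides."""
    # The request-body flag is ignored entirely; the old path is removed
    # rather than gated behind a new microversion.
    return extra_specs.get("multiattach") == "<is> True"


# Request asks for multiattach, but the volume type does not allow it:
print(resolve_multiattach_legacy(True, {}))  # True  (volume type ignored)
print(resolve_multiattach_fixed(True, {}))   # False (volume type honored)
```

This also covers rosmaita's question about multiattach=False in the request with a multiattach-capable type: under the legacy OR, the volume-type spec still yields True.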