15:01:31 #startmeeting cinder_bs
15:01:31 Meeting started Wed May 4 15:01:31 2022 UTC and is due to finish in 60 minutes. The chair is enriquetaso. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:31 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:31 The meeting name has been set to 'cinder_bs'
15:01:46 Hello, 5 new bugs were reported this period.
15:01:55 #link https://etherpad.opendev.org/p/cinder-bug-squad-meeting
15:01:55 #link http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028404.html
15:02:01 #topic Temporary volume accepts deletion while it is used
15:02:09 #link https://bugs.launchpad.net/cinder/+bug/1970768
15:02:09 A temporary volume is created when an attached volume is being backed up. This temporary volume can be deleted via the DELETE API because its status is 'available'.
15:02:09 The fix is already merged on master.
15:02:36 #link https://review.opendev.org/c/openstack/cinder/+/826949
15:02:36 Moving on with a related bug
15:02:43 #topic Temporary volume could be deleted with force
15:02:43 Fix proposed to master
15:03:06 #link https://bugs.launchpad.net/cinder/+bug/1971483
15:04:02 #link https://review.opendev.org/c/openstack/cinder/+/830901
15:06:32 Moving on
15:06:35 #topic Volume reset-state API validation state checking is incorrect
15:06:41 #link https://bugs.launchpad.net/cinder/+bug/1970624
15:06:56 eharney, Code from I0a53dfee "Reset state robustification for volume os-reset_status" aims to reject volume state updates to "error_deleting" and "detaching" but fails to do so due to a typo.
15:07:08 Original fix:
15:07:08 #link https://review.opendev.org/c/openstack/cinder/+/773985
15:07:08 Fix proposed to master:
15:07:08 #link https://review.opendev.org/c/openstack/cinder/+/839416
15:07:18 yeah i'm going to rework the patch for this based on rosmaita's input
15:07:29 773985 isn't an original fix, it's where the bug was introduced
15:07:39 oh, my bad
15:08:01 sounds good, please review, cinder team :)
15:08:18 moving on
15:08:41 #topic rbd_store_chunk_size in megabytes is an unwanted limitation
15:08:52 #link https://bugs.launchpad.net/cinder/+bug/1971154
15:09:01 The report requests some changes regarding rbd_store_chunk_size. The reporter proposed two alternatives.
15:09:14 I need help to understand if this makes sense or if I should ask more questions about the problem.
15:09:30 well there are restrictions on what we can set it to, since glance and cinder in many configurations need to agree on the chunk size
15:10:20 we should look into this area, there are some deficiencies we still need to address w/ RBD as far as sector sizes too (512 vs 4k)
15:10:43 but i don't think the suggestion to just add a new config value that lets deployers set whatever is necessarily the right answer
15:11:30 as usual, we don't have much concrete performance data to analyze, so more info on the actual problem would be helpful
15:12:56 OK, so this bug is not trivial then.
15:13:06 I'll ask for concrete data and point all this out
15:13:22 right, it's a good idea to improve this, but it needs a lot of thought
15:13:40 makes sense
15:14:02 i think his question #2 is a good one
15:14:49 it is
15:15:21 all i can think is it's leftover from the days before there were dedicated pools?
15:17:30 well, specifying the chunk size helps prevent situations where you end up with images that can't be moved between pools during migration, cinder<->glance, etc
15:17:43 but i don't have a nice clear answer at the moment
15:18:17 gotcha
15:20:33 OK, so this looks like something interesting to look into in the future. I can bring it back later; as Eric pointed out, it needs more thought.
15:20:45 Last but not least
15:20:50 #topic Could not find any paths for the volume
15:20:58 #link https://bugs.launchpad.net/os-brick/+bug/1961613
15:21:08 In an environment using NVMeoF with SPDK, when an instance is shut off or hard restarted, it is not able to find the volume again. The volume is visible on the node with "nvme list" but nova reports: "Could not find any paths for the volume."
15:21:19 geguileo, I'm not familiar with NVMeoF. The reporter mentioned that this could be an os-brick problem. I think the bug report makes sense.
15:21:48 it's an os-brick problem
15:22:10 it's already reported, and I will be fixing it shortly
15:22:19 \o/
15:22:19 (there's a patch proposed now, but I'm making changes)
15:22:33 sure, thanks Gorka
15:22:39 I'm saying it without looking at the bug report
15:22:49 :P
15:23:11 I mean, it can be one of 2 cases...
15:23:16 that I'm fixing...
15:23:28 let's not forget that I'm fixing close to 12 nvmeof bugs...
15:23:43 this is most likely the one where you cannot call os-brick with an already existing subsystem
15:24:03 mixed with the one about not disconnecting a subsystem on volume_disconnect
15:24:08 we should probably just reevaluate any nvmeof bugs after geguileo's current stuff lands
15:24:13 if they wait for 10 minutes it probably works lol
15:24:26 of course, well, i can bring it back later when you update the patches, sounds like the right thing to do
15:24:33 because the nvmeof timeout will kick in and remove it...
15:24:39 those bugs are great, by the time you get help from support, there is no problem
15:24:48 lol
15:25:11 ha
15:25:19 OK, that's all i have for today's meeting
15:25:25 thanks!
15:25:34 i have 2 quick questions
15:25:39 sure
15:26:00 i came across this when i was looking into eharney's patch that he mentioned earlier
15:26:04 #link https://review.opendev.org/c/openstack/cinder/+/839825
15:26:26 i didn't file a bug, but wonder if i should
15:27:07 does it have any observable behavior impacts?
15:27:41 not now, my concern is that if someone actually uses this and then backports the usage, it will break without this patch
15:27:47 (does that make sense?)
15:28:19 i think so, but i wouldn't be inclined to file a bug for it myself
15:28:30 ok, works for me
15:28:33 no bug then
15:29:06 ok, my other issue is test-only, so i won't file a bug for it either
15:29:41 ok, sorry i wasted our time
15:29:54 don't worry
15:30:02 :D
15:30:08 OK, running out of time
15:30:15 #endmeeting