14:01:00 #startmeeting cinder
14:01:00 Meeting started Wed Apr 24 14:01:00 2024 UTC and is due to finish in 60 minutes. The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:00 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:00 The meeting name has been set to 'cinder'
14:01:04 o/
14:01:05 welcome everyone
14:01:06 o/
14:01:07 hi
14:01:09 o/
14:01:10 o/
14:01:11 hi
14:01:30 o/
14:01:42 o/
14:02:19 #link https://etherpad.opendev.org/p/cinder-dalmatian-meetings
14:04:19 ok
14:04:27 welcome
14:04:53 i've just realized that we usually have video for the last meeting, my apologies
14:05:15 I missed a few meetings due to overload
14:05:20 #topic announcements
14:05:21 sorry for that
14:05:26 happystacker: no worries
14:05:45 PTG summary went out to the list
14:05:46 i think i forgot a lot of things during my first time as PTL so it should be fine
14:05:55 #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/33XWCG7YQHDKGRR5AVUO4RAJ77AOUMRW/
14:06:26 also, I captured the summary and the etherpad as a wiki page
14:06:29 #link https://wiki.openstack.org/wiki/CinderDalmatianPTGSummary
14:06:51 let me know if i missed something or if anything looks wrong or inaccurate
14:08:30 I'm working through the release cycle timeline this week
14:08:45 i need to schedule our midcycle and get that posted
14:08:59 i will propose some options in the next meeting, if not before
14:09:33 jungleboyj: if you have the videos posted, i can update our wiki to include those links
14:10:23 that's all i have for major announcements, things are pretty quiet at the moment
14:10:47 jbernard: I need to get that done. Have been sick the first part of the week.
14:10:55 Will try to get that done ASAP.
14:11:12 jungleboyj: no worries, there's no rush, i was just thinking about it
14:11:41 jungleboyj: hope you feel better, we've all been sick a ton this spring
14:11:58 Thanks.
:-) Working on getting better.
14:12:03 #topic bug on reimaging volume
14:12:07 whoami-rajat: ^ all you
14:12:14 thanks!
14:12:32 so one of the upstream contributors was testing the rebuild volume backed instance operation
14:12:39 that i worked on in the Zed cycle
14:12:52 he found a case which wasn't handled in the code
14:12:59 so it's a bug in both the nova and cinder implementations
14:13:21 Well done jbernard:
14:13:32 for cinder specific discussion, we can consider the "reimaging an available volume" scenario
14:13:34 #link https://bugs.launchpad.net/cinder/+bug/2062539
14:13:43 When reimaging the volume, if the image was created from a VM snapshot of a bootable volume, we error out since there is no actual image data, only a reference to a volume snapshot
14:14:00 When we create a volume from such an image, we trigger the workflow of create volume from snapshot
14:14:05 When reimaging the volume, that doesn't work since we are using an existing volume
14:14:14 and not creating a new one
14:14:21 good catch
14:14:34 The solution I came up with is to revert the volume to the snapshot that is backing the image, which I have implemented here
14:14:39 #link https://review.opendev.org/c/openstack/cinder/+/916408
14:14:59 what happens if the original snapshot has subsequently been deleted? Can it even be deleted, given it is still referenced by the glance image?
14:15:04 if anyone is interested, here is the nova bug and patch
14:15:05 #link https://bugs.launchpad.net/nova/+bug/2062127
14:15:10 #link https://review.opendev.org/c/openstack/nova/+/916409
14:15:59 I'm not sure about that, there shouldn't be anything preventing us from deleting that snapshot (unless that's a handled case) but the image would be useless if we delete that snapshot
14:16:15 so there might be a general recommendation somewhere that says "don't delete snapshots backing an image"
14:16:27 although TBH I haven't tried that scenario
14:17:29 what i wanted to discuss was, is revert to snapshot the right mechanism to handle this case, since we want to put the contents of the snapshot into an existing volume
14:17:34 i think it is quite likely that a snapshot will be deleted after it has been used to create a new image. This would then allow subsequent changes to the base volume to be snapshotted again
14:18:26 we are not using the snapshot to create an image, we are creating a VM snapshot of a bootable volume, so glance has an image reference and the cinder volume snapshot has the actual image contents
14:19:39 yes, i get that, but the reimage will have a problem if the snapshot used to create the volume has been deleted. We need to check that out
14:19:57 isn't that what you are saying?
14:20:46 that is not a problem for the reimage operation specifically but for the image in general, if the snapshot is deleted then we cannot do anything with that image, creating a bootable volume / launching an instance
14:20:54 the case i was discussing is
14:21:12 when we use a snapshot backed image to reimage a volume, it doesn't work
14:21:19 ok - understood
14:22:25 we cannot download that image, we need to use the snapshot to reimage the volume -- for which i found revert to snapshot (generic) the best mechanism
14:22:56 but i wanted to know what the team thinks about it before moving forward with that approach
14:25:15 what is the alternative to reverting to snapshot?
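[Editor's note: the "snapshot backed image" case discussed above can be recognized from image metadata. When Nova snapshots a volume-backed instance, the resulting Glance record carries no image data, only a block device mapping that references a Cinder volume snapshot. A minimal sketch of that detection, assuming the conventional `block_device_mapping` image property; the helper name and dict shapes are illustrative, not actual Cinder code:]

```python
import json


def backing_snapshot_id(image_meta):
    """Return the Cinder snapshot id backing a Glance image created from a
    VM snapshot of a volume-backed instance, or None for a regular image.

    Assumption: Nova stores a JSON 'block_device_mapping' property whose
    boot entry references a volume snapshot (boot_index may be a string
    in some payloads; this sketch checks the integer form only).
    """
    bdm_json = image_meta.get("properties", {}).get("block_device_mapping")
    if not bdm_json:
        return None  # ordinary image with real data; nothing to resolve
    for bdm in json.loads(bdm_json):
        if bdm.get("boot_index") == 0 and bdm.get("snapshot_id"):
            return bdm["snapshot_id"]
    return None
```

A reimage path could call a check like this first and branch to the snapshot-based flow instead of attempting to download nonexistent image data.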
14:25:45 I've no alternative atm but I'm open to ideas
14:26:50 https://review.opendev.org/c/openstack/cinder/+/916408/3/cinder/volume/manager.py
14:27:00 this is the approach I'm referring to
14:27:38 initially i was planning to do these operations manually but i found revert to snap (generic) already doing it
14:27:49 i.e. create a volume from the snap and copy the data to our existing volume
14:28:37 if only we could revert from any snapshot, rather than only from the latest, that would also help out here.
14:29:37 but we can do that in the generic mechanism, we just need to pass the snapshot backing the image and it creates a temp volume out of it
14:30:07 i don't think the snapshot needs to be the latest here
14:32:01 anyways, i don't want to take up all the meeting time, i can bring this back again
14:32:04 but if we have no concerns then i can work on completing the patch
14:32:05 if that is the case then the solution seems sound.
14:32:18 great, thanks simondodsley !
14:32:53 it might be worth looking at the links from an image to a snapshot and alerting that the snapshot backs an image if the snapshot needs to be deleted
14:33:53 yeah, maybe we can have a field in the snapshot for image_id, which, if set, checks if the associated image exists and prevents deletion of that snapshot
14:34:10 i will test that out to see if we have a bug there
14:34:36 or at the very least, we can document this behavior
14:34:53 +1
14:34:55 don't we store metadata in the volume if it is created from a snapshot? could we not determine this with what's already there?
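[Editor's note: the generic revert-to-snapshot flow whoami-rajat describes above (create a temporary volume from the backing snapshot, then copy its contents over the volume being reimaged) can be sketched as below. The `driver` methods are illustrative stand-ins for the Cinder driver interface, not the real API; the actual change lives in cinder/volume/manager.py in the linked patch:]

```python
def reimage_via_snapshot(volume, backing_snapshot, driver):
    """Reimage an existing volume from a snapshot-backed image by reusing
    the generic revert-to-snapshot mechanism: there is no image data to
    download, so stage the snapshot contents through a temporary volume
    and copy them onto the target. Sketch only; 'driver' is hypothetical.
    """
    temp = driver.create_volume_from_snapshot(backing_snapshot)
    try:
        driver.copy_volume_data(src=temp, dest=volume)
    finally:
        # The temporary volume is only a staging copy; always clean it up,
        # even if the data copy fails.
        driver.delete_volume(temp)
    return volume
```

Note that, as raised in the meeting, this flow does not require the snapshot to be the volume's latest; any snapshot can seed the temporary volume.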
14:35:57 i think we have the snapshot_id field set but we need something to differentiate between a normal snapshot of a volume vs a snapshot that is backing an image
14:36:18 ahh
14:37:31 i need to dig more into this whole feature of "VM snapshots of volume backed instances" since i feel there are more potential bugs there which i haven't tested yet
14:37:53 but for this topic, that was all from my side
14:38:04 #action whoami-rajat to look into image-backing snapshots for next meeting
14:38:15 +1
14:38:28 #topic image encryption
14:38:33 Luzi: you're up :)
14:38:50 #link https://review.opendev.org/c/openstack/glance-specs/+/915726
14:38:51 As discussed at the PTG, I wrote a spec for the new Image Encryption with LUKS-Images in Glance, can you tell if there is also a spec needed for Cinder?
14:39:47 Mainly we introduce new metadata-parameters for images - so there might be some work that has to be done in Cinder - but I doubt that there will be API-changes or something like this
14:39:50 i think it depends on the nature of the changes needed to support it on the cinder side. do you have a sense of what we will need?
14:40:29 my sense is that it wouldn't hurt to have a spec, what do others think?
14:40:51 well there needs to be a check at least when creating a volume that a volume type with an encryption type is chosen (or the default type has an encryption type)
14:41:33 a spec may facilitate discussion on the metadata parameters and how they're used
14:42:05 okay - I will write a spec
14:42:31 whoami-rajat: curious about your thoughts, i'm new here ;)
14:43:12 maybe I'm interpreting our discussion in a wrong way
14:43:28 i think maybe this will also concern one of the two bugs I linked - the 2nd one I think
14:43:29 but we decided to implement the metadef API like structure for Cinder
14:43:36 and not use the one existing in glance, right?
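[Editor's note: circling back to the safeguard floated at the end of the reimage topic, i.e. recording on the snapshot which image it backs and refusing deletion while that image exists. The `image_id` metadata key and the Glance existence check are the hypothetical pieces proposed in the meeting; neither exists in Cinder today. A minimal sketch under those assumptions:]

```python
class SnapshotBacksImageError(Exception):
    """Raised when a snapshot still backs an existing Glance image."""


def check_snapshot_deletable(snapshot_metadata, image_exists):
    """Refuse deletion of a snapshot that records the image it backs.

    'image_id' is the proposed (not yet existing) metadata key that would
    distinguish an image-backing snapshot from a normal volume snapshot;
    'image_exists' stands in for a Glance lookup. Illustrative only.
    """
    image_id = snapshot_metadata.get("image_id")
    if image_id and image_exists(image_id):
        raise SnapshotBacksImageError(
            "snapshot backs image %s; delete the image first" % image_id)
```

If the image has already been removed (or the key is absent, i.e. a normal snapshot), the check passes and deletion proceeds as today; alternatively, as suggested in the meeting, the constraint could simply be documented rather than enforced.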
14:44:01 rosmaita, you can correct me if my understanding is wrong here ^
14:44:35 #link https://etherpad.opendev.org/p/dalmatian-ptg-cinder#L113
14:44:38 i think the metadefs were for the other effort, about exposing info for volume types
14:44:53 yes, that was for user-visible volume types
14:45:13 the metadefs stay in Glance, we would just define some key,value pairs for what we need
14:45:15 yes
14:46:25 rosmaita: is that spec necessary for tracking? how do we do this?
14:46:56 well, we should go ahead and un-accept the gpg-based spec
14:47:05 agree
14:47:20 #action unaccept the gpg-based spec
14:47:50 sorry, my bad, i mixed up things here, I will add my comments to the spec and see if we need one for cinder
14:48:03 i guess i should read the new glance spec and see what its impact on cinder is
14:48:16 that would be nice rosmaita
14:48:27 if we don't need a full spec, we can use an LP blueprint to at least keep track of the work
14:49:05 #action review glance spec on image encryption, do we need a spec on our side?
14:49:23 Luzi: please ping me on monday if i haven't left comments on the glance spec by then
14:49:30 okay
14:49:38 thanks!
14:49:51 ok
14:49:59 #topic review requests
14:50:38 simondodsley has one
14:50:48 #link https://review.opendev.org/903095
14:51:02 yes - this is an upgrade of our core API version
14:51:12 needs a change in the underlying SDK.
14:51:27 Lots of changes to the tests and core code, but nothing new added really.
14:51:59 We added one piece to cater for something called safemode in Pure, which if not covered can cause volume deletions to fail as far as cinder is concerned
14:52:18 we don't have a Pure CI currently so we have to rely on Zuul and our in-house testing.
14:52:29 The CI system is en route to a new datacenter
14:52:40 not sure when it will be back up and running
14:53:12 simondodsley: how confident are you with zuul and in-house results?
14:53:28 very - i spent about 3 months trying to test every scenario
14:53:42 nice
14:53:53 when you say in-house, is it manual testing or a tempest suite or a combination of both?
14:54:05 manual testing of every function
14:55:25 ok, if possible, can we also have a confirmation from a tempest run since they have some good scenario testing there?
14:55:51 i can try and set that up.
14:55:59 great, thanks!
14:56:52 #action simondodsley will report tempest results for pure upgrade patch
14:57:22 Luzi has a spec for user-visible volume types
14:57:33 #link https://review.opendev.org/c/openstack/cinder-specs/+/909195/7
14:57:45 yeah we discussed this at the PTG and I adjusted the spec accordingly
14:57:57 awesome, thanks
14:58:14 and eharney is upgrading hacking
14:58:17 #link https://review.opendev.org/c/openstack/cinder/+/891906
14:58:37 so are we going with the approach of a metadata field in the volume type and later leveraging the metadef APIs?
15:00:41 as far as I did understand it: yes
15:01:11 seems like I'm very confused about this feature but i will go through the spec, thanks!
15:01:42 alright, that's time, thank you everyone!
15:01:49 #endmeeting