14:01:00 <jbernard> #startmeeting cinder
14:01:00 <opendevmeet> Meeting started Wed Apr 24 14:01:00 2024 UTC and is due to finish in 60 minutes.  The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:00 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:00 <opendevmeet> The meeting name has been set to 'cinder'
14:01:04 <Sai> o/
14:01:05 <jbernard> welcome everyone
14:01:06 <tosky> o/
14:01:07 <whoami-rajat> hi
14:01:09 <happystacker> o/
14:01:10 <simondodsley> o/
14:01:11 <eharney> hi
14:01:30 <rosmaita> o/
14:01:42 <Luzi> o/
14:02:19 <jbernard> #link https://etherpad.opendev.org/p/cinder-dalmatian-meetings
14:04:19 <jbernard> ok
14:04:27 <jbernard> welcome
14:04:53 <jbernard> ive just realized that we usually have video for the last meeting, my apologies
14:05:15 <happystacker> I missed a few meetings due to overload
14:05:20 <jbernard> #topic announcements
14:05:21 <happystacker> sorry for that
14:05:26 <jbernard> happystacker: no worries
14:05:45 <jbernard> PTG summary went out to the list
14:05:46 <whoami-rajat> i think i forgot a lot of things during my first time as PTL so it should be fine
14:05:55 <jbernard> #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/33XWCG7YQHDKGRR5AVUO4RAJ77AOUMRW/
14:06:26 <jbernard> also, I captured the summary and the etherpad as a wiki page
14:06:29 <jbernard> #link https://wiki.openstack.org/wiki/CinderDalmatianPTGSummary
14:06:51 <jbernard> let me know if i missed something or if anything looks wrong or inaccurate
14:08:30 <jbernard> I'm working through the release cycle timeline this week
14:08:45 <jbernard> i need to schedule our midcycle and get that posted
14:08:59 <jbernard> i will propose some options in the next meeting, if not before
14:09:33 <jbernard> jungleboyj: if you have the videos posted, i can update our wiki to include those links
14:10:23 <jbernard> that's all i have for major announcements, things are pretty quiet at the moment
14:10:47 <jungleboyj> jbernard: I need to get that done.  Have been sick the first part of the week.
14:10:55 <jungleboyj> Will try to get that done ASAP.
14:11:12 <jbernard> jungleboyj: no worries, there's no rush, i was just thinking about it
14:11:41 <jbernard> jungleboyj: hope you feel better, we've all been sick a ton this spring
14:11:58 <jungleboyj> Thanks.  :-)  Working on getting better.
14:12:03 <jbernard> #topic bug on reimaging volume
14:12:07 <jbernard> whoami-rajat: ^ all you
14:12:14 <whoami-rajat> thanks!
14:12:32 <whoami-rajat> so one of the upstream contributors was testing the rebuild volume backed instance operation
14:12:39 <whoami-rajat> that i worked on in the Zed cycle
14:12:52 <whoami-rajat> he found a case which wasn't handled in the code
14:12:59 <whoami-rajat> so it's a bug in both nova and cinder implementation
14:13:21 <ccokeke[m]> Well done jbernard:
14:13:32 <whoami-rajat> for cinder specific discussion, we can consider "reimaging an available volume" scenario
14:13:34 <whoami-rajat> #link https://bugs.launchpad.net/cinder/+bug/2062539
14:13:43 <whoami-rajat> When reimaging the volume, if the image is created from a VM snapshot of a bootable volume, we error out since there is no actual image but a reference to a volume snapshot
14:14:00 <whoami-rajat> When we create a volume from such image, we trigger the workflow of create volume from snapshot
14:14:05 <whoami-rajat> When reimaging the volume, that doesn't work since we are using an existing volume
14:14:14 <whoami-rajat> and not creating a new one
14:14:21 <happystacker> good catch
14:14:34 <whoami-rajat> The solution I came up with is to revert the volume to the snapshot that is backing the image which I have implemented here
14:14:39 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/916408
14:14:59 <simondodsley> what happens if the original snapshot has subsequently been deleted? Can it have been deleted given it is still referenced by the glance image?
14:15:04 <whoami-rajat> if anyone is interested, here is the nova bug and patch
14:15:05 <whoami-rajat> #link https://bugs.launchpad.net/nova/+bug/2062127
14:15:10 <whoami-rajat> #link https://review.opendev.org/c/openstack/nova/+/916409
14:15:59 <whoami-rajat> I'm not sure about that, there shouldn't be anything preventing us from deleting that snapshot (unless that's a handled case) but the image would be useless if we delete that snapshot
14:16:15 <whoami-rajat> so there might be a general recommendation somewhere that "don't delete snapshots backing up the image"
14:16:27 <whoami-rajat> although TBH I haven't tried that scenario
14:17:29 <whoami-rajat> what i wanted to discuss was, is revert to snapshot the right mechanism to handle this case, since we want to put the contents of the snapshot in an existing volume
14:17:34 <simondodsley> i think it is quite likely that a snapshot will be deleted after it has been used to create a new image. This would then allow subsequent changes to the base volume to be snapshotted again
14:18:26 <whoami-rajat> we are not using snapshot to create image, we are creating a VM snapshot of a bootable volume so glance has an image reference and cinder volume snapshot has the actual image contents
14:19:39 <simondodsley> yes, i get that but the reimage will have a problem if the snapshot used to create the volume has been deleted. We need to check that out
14:19:57 <simondodsley> isn't that what you are saying?
14:20:46 <whoami-rajat> that is not the problem for the reimage operation but for the image in general, if the snapshot is deleted then we cannot do anything with that image, bootable volume/launching instance
14:20:54 <whoami-rajat> the case i was discussing is
14:21:12 <whoami-rajat> when we use a snapshot backed image to reimage a volume, it doesn't work
14:21:19 <simondodsley> ok - understood
14:22:25 <whoami-rajat> we cannot download that image, we need to use the snapshot to reimage the volume -- for which i found revert to snapshot (generic) the best mechanism
14:22:56 <whoami-rajat> but i wanted to know what the team thinks about it before moving forward with that approach
14:25:15 <jbernard> what is the alternative to reverting to snapshot?
14:25:45 <whoami-rajat> I've no alternative atm but I'm open to ideas
14:26:50 <whoami-rajat> https://review.opendev.org/c/openstack/cinder/+/916408/3/cinder/volume/manager.py
14:27:00 <whoami-rajat> this is the approach I'm referring to
14:27:38 <whoami-rajat> initially i was planning to do these operations manually but i found revert to snap (generic) already doing it
14:27:49 <whoami-rajat> i.e. create volume from snap and copy data to our existing volume
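(Editor's note: a minimal sketch of the generic revert flow described above. All names here are hypothetical illustrations, not actual Cinder code; the real implementation lives in cinder/volume/manager.py, linked earlier.)

```python
from dataclasses import dataclass


@dataclass
class Volume:
    """Stand-in for a Cinder volume; only the payload matters here."""
    id: str
    data: bytes = b""


@dataclass
class Snapshot:
    """Stand-in for the volume snapshot that backs the Glance image."""
    id: str
    data: bytes


def revert_volume_to_snapshot(volume: Volume, snapshot: Snapshot) -> None:
    """Generic revert, as described in the discussion:
    1. create a temporary volume from the snapshot,
    2. copy its contents over the existing volume in place,
    3. discard the temporary volume.
    """
    # 1. temp volume built from the snapshot backing the image
    temp = Volume(id=f"temp-{snapshot.id}", data=snapshot.data)
    # 2. copy data into the existing volume (no new volume is kept)
    volume.data = temp.data
    # 3. temp volume is discarded (goes out of scope here)


vol = Volume("vol-1", data=b"current contents")
snap = Snapshot("snap-1", data=b"image-backed snapshot contents")
revert_volume_to_snapshot(vol, snap)
```

The point of the sketch is that the snapshot passed in need not be the volume's latest snapshot, which matches the remark below that the generic mechanism builds a temp volume from whichever snapshot backs the image.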
14:28:37 <simondodsley> if only we could revert from any snapshot, rather than only from the latest, that would also help out here.
14:29:37 <whoami-rajat> but we can do that in the generic mechanism, we just need to pass the snapshot backing the image and it creates a temp volume out of it
14:30:07 <whoami-rajat> i don't think the snapshot needs to be latest here
14:32:01 <whoami-rajat> anyways, i don't want to take up all the meeting time, i can bring this back again
14:32:04 <whoami-rajat> but if we have no concerns then i can work on completing the patch
14:32:05 <simondodsley> if that is the case then the solution seems sound.
14:32:18 <whoami-rajat> great, thanks simondodsley !
14:32:53 <simondodsley> it might be worth looking at the links from an image to a snapshot and alerting that the snapshot backs an image if the snapshot needs to be deleted
14:33:53 <whoami-rajat> yeah, maybe we can have a field in the snapshot for image_id, which if set, checks if the associated image exists and prevent deletion of that snapshot
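(Editor's note: a sketch of the deletion guard proposed in the line above, under the assumption that snapshots would grow a hypothetical image_id field; this is an idea being floated, not existing Cinder behavior.)

```python
def can_delete_snapshot(snapshot_image_id, existing_image_ids):
    """Hypothetical guard: a snapshot that backs a Glance image
    (image_id is set and that image still exists) must not be
    deleted, or the image becomes unusable."""
    if snapshot_image_id is None:
        # plain volume snapshot, not backing any image
        return True
    # backing an image: only deletable once the image itself is gone
    return snapshot_image_id not in existing_image_ids
```

As noted right after, the fallback if this check is never implemented would be to simply document that snapshots backing images must not be deleted.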
14:34:10 <whoami-rajat> i will test that out to see if we have a bug there
14:34:36 <whoami-rajat> or the very least, we can document this behavior
14:34:53 <simondodsley> +1
14:34:55 <jbernard> don't we store metadata in the volume if it is created from a snapshot? could we not determine this with what's already there?
14:35:57 <whoami-rajat> i think we have the snapshot_id field set but we need something to differentiate between a normal snapshot from a volume vs a snapshot that is backing an image
14:36:18 <jbernard> ahh
14:37:31 <whoami-rajat> i need to dig more into this whole feature of "VM snapshots of volume backed instance" since i feel there are more potential bugs there which i haven't tested yet
14:37:53 <whoami-rajat> but for this topic, that was all from my side
14:38:04 <jbernard> #action whoami-rajat to look into image-backing snapshots for next meeting

14:38:15 <whoami-rajat> +1
14:38:28 <jbernard> #topic image encryption
14:38:33 <jbernard> Luzi: you're up :)
14:38:50 <jbernard> #link https://review.opendev.org/c/openstack/glance-specs/+/915726
14:38:51 <Luzi> As discussed at the PTG, I wrote a spec for the new Image Encryption with LUKS-Images in Glance, can you tell if there is also a spec needed for Cinder?
14:39:47 <Luzi> Mainly we introduce new metadata-parameters for images - so there might be some work that has to be done in Cinder - but I doubt that there will be API-changes or something like this
14:39:50 <jbernard> i think it depends on the nature of the changes needed to support it on the cinder side.  do you have a sense of what we will need?
14:40:29 <jbernard> my sense is that it wouldn't hurt to have a spec, what do others think?
14:40:51 <Luzi> well there needs to be a check at least when creating a volume that a volume type with an encryption type is chosen (or the default type has an encryption type)
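(Editor's note: a sketch of the pre-create check Luzi describes above. The function and exception names are hypothetical illustrations of the idea, not Cinder or Glance API.)

```python
class EncryptionTypeRequired(Exception):
    """Raised when a LUKS-encrypted image would land on an
    unencrypted volume type."""


def check_encrypted_image_volume_type(image_is_luks, volume_type_encryption):
    """Hypothetical check at volume-create time: a LUKS image needs
    a volume type carrying an encryption type, whether chosen
    explicitly or inherited from the default type."""
    if image_is_luks and volume_type_encryption is None:
        raise EncryptionTypeRequired(
            "a LUKS-encrypted image requires a volume type "
            "with an encryption type")


# acceptable combinations raise nothing
check_encrypted_image_volume_type(True, {"provider": "luks"})
check_encrypted_image_volume_type(False, None)
```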
14:41:33 <jbernard> a spec may facilitate discussion on the metadata parameters and how they're used
14:42:05 <Luzi> okay - I will write a spec
14:42:31 <jbernard> whoami-rajat: curious your thoughts, im new here ;)
14:43:12 <whoami-rajat> maybe I'm interpreting our discussion in a wrong way
14:43:28 <Luzi> i think maybe this will also concern one of the two bugs I linked - the 2nd one I think
14:43:29 <whoami-rajat> but we decided to implement the metadef API like structure for Cinder
14:43:36 <whoami-rajat> and not use the one existing in glance right?
14:44:01 <whoami-rajat> rosmaita, you can correct me if my understanding is wrong here ^
14:44:35 <jbernard> #link https://etherpad.opendev.org/p/dalmatian-ptg-cinder#L113
14:44:38 <rosmaita> i think the metadefs was for the other effort, about exposing info for volume types
14:44:53 <jbernard> yes, that was for user-visible volume types
14:45:13 <rosmaita> the metadefs stay in Glance, we would just define some key,value pairs for what we need
14:45:15 <Luzi> yes
14:46:25 <jbernard> rosmaita: is that spec-necessary for tracking? how do we do this?
14:46:56 <rosmaita> well, we should go ahead and un-accept the gpg-based spec
14:47:05 <jbernard> agree
14:47:20 <jbernard> #action unaccept the gpg-based spec
14:47:50 <whoami-rajat> sorry, my bad, i mixed up things here, I will add my comments to the spec and see if we need one for cinder
14:48:03 <rosmaita> i guess i should read the new glance spec and see what its impact on cinder is
14:48:16 <Luzi> that would be nice rosmaita
14:48:27 <rosmaita> if we don't need a full spec, we can use an LP blueprint to at least keep track of the work
14:49:05 <jbernard> #action review glance spec on image encryption, do we need a spec on our side?
14:49:23 <rosmaita> Luzi: please ping me on monday if i haven't left comments on the glance spec by then
14:49:30 <Luzi> okay
14:49:38 <rosmaita> thanks!
14:49:51 <jbernard> ok
14:49:59 <jbernard> #topic review requests
14:50:38 <jbernard> simondodsley has one
14:50:48 <jbernard> #link https://review.opendev.org/903095
14:51:02 <simondodsley> yes - this is an upgrade of our core API version
14:51:12 <simondodsley> needs a change in the underlying SDK.
14:51:27 <simondodsley> Lots of changes to the tests and core code, but nothing new added really.
14:51:59 <simondodsley> We added one piece to cater for something called safemode in Pure, which if not covered can cause volume deletions to fail as far as cinder is concerned
14:52:18 <simondodsley> we don't have a Pure CI currently so we have to rely on Zuul and our in-house testing.
14:52:29 <simondodsley> The CI system is en route to a new datacenter
14:52:40 <simondodsley> not sure when it will be back up and running
14:53:12 <jbernard> simondodsley: how confident are you with zuul and in-house results?
14:53:28 <simondodsley> very- i spent about 3 months trying to test every scenario
14:53:42 <jbernard> nice
14:53:53 <whoami-rajat> when you say in-house, is it manual testing or a tempest suite or a combination of both?
14:54:05 <simondodsley> manual testing of every function
14:55:25 <whoami-rajat> ok, if possible, can we also have a confirmation from a tempest run since they have some good scenario testing there?
14:55:51 <simondodsley> i can try and set that up.
14:55:59 <whoami-rajat> great, thanks!
14:56:52 <jbernard> #action simondodsley will report tempest results for pure upgrade patch
14:57:22 <jbernard> Luzi has a spec for user-visible volume types
14:57:33 <jbernard> #link https://review.opendev.org/c/openstack/cinder-specs/+/909195/7
14:57:45 <Luzi> yeah we discussed this at the PTG and I adjusted the spec accordingly
14:57:57 <jbernard> awesome, thanks
14:58:14 <jbernard> and eharney is upgrading hacking
14:58:17 <jbernard> #link https://review.opendev.org/c/openstack/cinder/+/891906
14:58:37 <whoami-rajat> so we are going with the approach of a metadata field in volume type and later leveraging the metadef APIs?
15:00:41 <Luzi> as far as I did understand it: yes
15:01:11 <whoami-rajat> seems like I'm very confused about this feature but i will go through the spec, thanks!
15:01:42 <jbernard> alright, that's time, thank you everyone!
15:01:49 <jbernard> #endmeeting