14:00:17 <rosmaita> #startmeeting cinder
14:00:17 <opendevmeet> Meeting started Wed Jan 12 14:00:17 2022 UTC and is due to finish in 60 minutes.  The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:17 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:17 <opendevmeet> The meeting name has been set to 'cinder'
14:00:24 <rosmaita> #topic roll call
14:00:26 <jungleboyj> o/
14:00:27 <tosky> o/
14:00:30 <eharney> hi
14:00:33 <enriquetaso> hi
14:00:36 <SachinMore> hi
14:00:49 <simondodsley> hi
14:00:53 <fabiooliveira> hi
14:01:05 <felipe_rodrigues> hi
14:01:17 <nahimsouza[m]> o/
14:01:32 <LeoCampelo> o/
14:02:05 <walshh_> hi
14:04:03 <rosmaita> hello everyone
14:04:11 <rosmaita> #link https://etherpad.opendev.org/p/cinder-yoga-meetings
14:04:16 <rosmaita> #topic announcements
14:04:45 <rosmaita> first, a reminder about the Open Infrastructure Foundation election/polling happening this week
14:04:56 <rosmaita> it closes on Friday 14 Jan at 1800 UTC
14:05:25 <rosmaita> i think that as openstack contributors, you are all also signed up as individual foundation members
14:05:29 <rosmaita> (not sure about that though)
14:05:54 <rosmaita> anyway, you should have received a customized email with a personalized link to the voting website
14:06:04 <rosmaita> look for email with subject "2022 Individual Director Election and Bylaws Amendments"
14:06:29 <rosmaita> second announcement: new driver merge deadline is 21 January
14:06:35 <rosmaita> which is really soon
14:06:43 <rosmaita> we are tracking new driver patches here:
14:06:51 <rosmaita> #link https://etherpad.opendev.org/p/cinder-yoga-new-drivers
14:07:16 <rosmaita> obviously, the review priority until 21 January is new drivers
14:07:52 <rosmaita> so people who have other patches, if you help review new drivers, we can get that done faster and can get back to reviewing other stuff
14:08:19 <rosmaita> and people who have new drivers, you can speed things up by checking the review guidelines and make sure your CI is working properly
14:08:28 <rosmaita> #link https://docs.openstack.org/cinder/latest/contributor/new_driver_checklist.html
14:08:53 <rosmaita> that link is also helpful for people doing the reviews, of course
14:09:07 <rosmaita> we have a variety of drivers of various complexity this cycle
14:09:32 <rosmaita> some simple wrappers, and one that requires a new os-brick connector, plus the driver, plus a driver for nova
14:09:48 <rosmaita> ok, other upcoming stuff:
14:10:10 <rosmaita> festival of xs reviews next friday
14:10:28 <rosmaita> (which is 21 January)
14:10:31 <rosmaita> yoga midcycle-2 on 26 January
14:10:40 <rosmaita> yoga os-brick release week of 7 February
14:10:48 <rosmaita> yoga feature freeze week of 21 February
14:11:14 <rosmaita> that's all the announcements I have ... anyone else have something to share?
14:12:14 <rosmaita> ok, moving on
14:12:28 <rosmaita> #topic Discuss revert-to-snapshot potential issue
14:12:31 <rosmaita> simondodsley: that's you
14:12:33 <simondodsley> Thanks
14:12:52 <simondodsley> Reading the spec for revert from any snapshot (https://review.opendev.org/c/openstack/cinder-specs/+/736111) I noticed that the main revert from snapshot functionality doesn't set the volume creation date to the creation date of the snapshot after reversion. Without this there is no way to tell if a volume has actually been reverted. Not only does this stop the revert from any snapshot spec moving forward, it seems
14:12:52 <simondodsley> to me like a big hole for operators.
14:12:57 <simondodsley> Discuss...
14:13:27 <simondodsley> Am I missing something obvious here, or is that actually a failing of the feature?
14:13:46 <rosmaita> well, the idea of the feature is that as far as the end user is concerned, it's the same volume
14:13:59 <rosmaita> so i think the creation date is supposed to stay the same?
14:14:09 <eharney> that's my first thought as well
14:14:20 <simondodsley> Yes, but they have no idea if it has been reverted. It makes sense that the volume creation time should be the time of the snapshot
14:14:57 <simondodsley> How do 3rd party backends deal with this? I know Pure will reset the volume creation time to the snapshot creation time on reversion
14:15:21 <simondodsley> If the revert to any snapshot feature is to move forward there has to be a way to tell if a volume has been reset
14:15:50 <rosmaita> so from the Pure point of view, the reverted volume actually is a different volume, is that correct?
14:15:58 <eharney> i'm not sure we have anything defining details of how the volume creation timestamp works in general and the details of what it would mean in this situation
14:16:08 <simondodsley> Not really, it is just overwritten with the snapshot
14:16:22 <rosmaita> ok
14:16:53 <eharney> resetting the creation timestamp on revert would also mean you can end up with snapshots that have a creation time earlier than the volume's creation time
14:16:57 <simondodsley> if you have a DB running on the volume, you would potentially need to know that a snapshot reversion has taken place so you know where your data is
14:17:41 <simondodsley> eharney, exactly, so that would tell you that those snapshots are no longer valid or are available for reversion to an earlier timestamp if the initial reversion didn't go far enough back
14:18:02 <simondodsley> think about DB recover from corruption etc
14:18:08 <eharney> well they should still be valid
14:18:39 <simondodsley> but the operator/user has no idea which are valid if they weren't the person who did the original reversion.
14:19:01 <simondodsley> you are assuming only one operator who has a perfect memory
14:19:54 <rosmaita> this sounds like a case for volume metadata
14:20:02 <simondodsley> without knowing the timestamp of the reverted snapshot onto the underlying volume you can leave yourself open to data corruption
14:20:29 <simondodsley> yes - metadata would be a good use for this, but there would have to be a standard name for it that all drivers used.
14:20:29 <eharney> why data corruption?
14:20:44 <simondodsley> As it is I think every driver would have to be patched to fix this
14:22:03 <jungleboyj> :-(
14:22:17 <rosmaita> well, the spec hasn't been approved yet (I hope), so we can work out a standard way to record this
14:22:39 <simondodsley> eharney, because you could revert to snapshot 3, then do work and create new snapshots (4 and 5), but sometime in the future there is a need to go back to snapshot 4, and a user who can't see snapshot timestamps could pick snapshot 2 not knowing about the snapshot 3 reversion
14:22:44 <rosmaita> i am kind of leery of messing with the volume creation timestamp though
14:24:06 <rosmaita> eharney has been pointing out on that spec review that some kind of tracking needs to happen, but the proposers haven't really addressed that, i don't think
14:25:05 <simondodsley> I would be OK with metadata being used for this, but a regular user, using Horizon, would not see the metadata (unless they do additional actions) and as I say it would need to be a standard name that all drivers had to adopt.
14:25:36 <jungleboyj> Does sound like this requires additional thought/design.
14:25:36 <simondodsley> I would like to know what other vendors do with the underlying volume timestamp on snapshot recovery. Anyone care to comment?
14:26:04 <simondodsley> this would inform the way forward for revert and this spec
14:26:33 <rosmaita> kind of quiet in here today, maybe we need to take this to the mailing list
14:27:13 <rosmaita> what i mean is the question of whether changing the semantics of the volume create timestamp is a good way forward here, or whether we need something else
14:27:20 <jungleboyj> Or a topic for the next mid-cycle?
14:27:32 <eharney> i think it would be better to use something else
14:27:32 <rosmaita> or both!
14:27:51 <eharney> the volume timestamp still represents when the volume was created, that isn't really related to snapshots being reverted to etc...
14:28:08 <simondodsley> but it sort of is, as the volume is recreated
14:28:31 <simondodsley> based on the timestamp of the snapshot
14:28:49 <eharney> the volume isn't recreated from a cinder point of view (and presumably not on a good number of backends either?)
14:29:29 <rosmaita> yeah, i am inclined to agree with eharney, but if a lot of backends do what Pure does, maybe user expectations are different from my intuition
14:29:48 <simondodsley> that is what i'm trying to find out - even in Pure we do not recreate the volume, just apply a bunch of changes that make the volume look like it did at the time of the snapshot
14:29:54 <rosmaita> i think simondodsley definitely makes a good case that we want some kind of tracking here
14:30:05 <jungleboyj> rosmaita: ++
14:30:40 <simondodsley> maybe a direct mail to the driver maintainers as not all of them read the mailing list
14:30:51 <rosmaita> ok, simondodsley how about we draft out something to the ML on an etherpad
14:31:08 <simondodsley> sounds good
14:31:19 <rosmaita> well, they should read the ML, so i would prefer conversation on the ML, with a direct email informing them about the discussion
14:31:58 <rosmaita> #action simondodsley, rosmaita - draft email to the ML about volume revert tracking
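(A minimal sketch of the metadata-based tracking discussed above, assuming the revert flow stamps the volume with a standard metadata key after a successful reversion; the key names and the db helper usage here are illustrative assumptions, not an agreed convention or existing Cinder behaviour.)

```python
# Illustrative only: record a revert-to-snapshot event in volume metadata so
# operators can tell that a volume has been reverted, without touching the
# volume's created_at.  The key names are hypothetical; a real implementation
# would need a standard name agreed across all drivers (and Horizon exposure).
from datetime import datetime, timezone

REVERTED_TO_SNAPSHOT_KEY = 'reverted_to_snapshot'   # hypothetical key
REVERTED_AT_KEY = 'reverted_at'                     # hypothetical key


def record_revert(db_api, context, volume_id, snapshot):
    """Stamp the volume with the snapshot it was last reverted to."""
    metadata = {
        REVERTED_TO_SNAPSHOT_KEY: snapshot['id'],
        REVERTED_AT_KEY: datetime.now(timezone.utc).isoformat(),
    }
    # volume_metadata_update(context, volume_id, metadata, delete) merges the
    # given keys into the volume's existing metadata; the db API is passed in
    # as an argument to keep this sketch self-contained.
    db_api.volume_metadata_update(context, volume_id, metadata, False)
```

With something along these lines an operator (or Horizon, once the key is exposed) could check whether and when a volume was last reverted, without changing the semantics of created_at.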
14:32:43 <rosmaita> ok thanks simondodsley, let's move on
14:32:55 <rosmaita> #topic removing contrib/black-box from the code tree
14:33:09 <rosmaita> this is prompted by a comment from hemna on an eharney patch
14:33:24 <rosmaita> we have this contrib/black-box/* in the code tree
14:33:38 <rosmaita> it has been marked as 'unsupported' since 20 Jan 2020
14:33:45 <rosmaita> #link https://review.opendev.org/c/openstack/cinder/+/701974
14:33:59 <rosmaita> looks like it has definitely not been maintained
14:34:17 <rosmaita> it looks like it only builds properly with stable/ocata (and that branch no longer exists)
14:34:25 <rosmaita> so i think we should remove it
14:34:40 <rosmaita> is anyone horrified by that suggestion?
14:35:05 <eharney> i think it's a good idea
14:35:10 <rosmaita> i just wanted to get an initial reaction before proposing this on the ML
14:35:25 <rosmaita> sounds like we have no black-box fans present today
14:35:42 <jungleboyj> I agree.
14:35:48 <simondodsley> ++
14:35:54 <rosmaita> #action rosmaita - proposal to remove blackbox support this cycle to the ML
14:36:03 <rosmaita> ok, thanks for the feedback!
14:36:05 <eharney> blockbox :)
14:36:27 <rosmaita> 3topci coordination and strategy for the changes required in devstack-plugin-ceph
14:36:58 <tosky> missed a # and topic spelling
14:37:00 <rosmaita> oops, block-box
14:37:16 <jungleboyj> He he.
14:37:16 <rosmaita> yes, let me try again
14:37:28 <rosmaita> #topic coordination and strategy for the changes required in devstack-plugin-ceph
14:37:34 <rosmaita> ok tosky has the floor
14:37:38 <tosky> hi, you are probably aware that the new way to install ceph is cephadm, and talking about this with manila people, they are experimenting with it (hi vkmc :)
14:37:42 <tosky> the experiments are being done outside devstack-plugin-ceph , but I think the final code should live there.
14:37:45 <tosky> Now, there are several consumers for devstack-plugin-ceph (cinder, glance, manila, nova, and generally QA), so maybe it would make sense to set up a pop-up team to handle the cephadm migration?
14:37:53 <tosky> That would mean (if the TC and all the affected groups approve it) a few more meetings and some volunteers to handle it. I can help with it, but it would be better if I weren't the only one.
14:37:57 <tosky> Any thoughts in general?
14:38:10 * vkmc sneaks in
14:38:11 <vkmc> o/
14:38:13 <rosmaita> i agree with everything you have said
14:38:18 <rosmaita> hello vkmc
14:38:30 <eharney> makes sense to add it as a new mode/option to devstack-plugin-ceph
14:38:55 <jungleboyj> vkmc:  Long time no see!
14:39:06 <vkmc> jungleboyj, indeed :D
14:39:08 <eharney> i can help with reviews etc at least
14:39:36 <tosky> or can we just solve it without an additional team and meetings?
14:40:09 <rosmaita> well, the popup team doesn't actually have to hold meetings, just an etherpad to keep people informed
14:40:34 <rosmaita> but quick meetings are probably good to make people focus on the work
14:41:15 <rosmaita> though we don't want to add unnecessary overhead to this work
14:42:06 <rosmaita> but since vkmc is actually doing work on this, her opinion is important here
14:42:40 <rosmaita> i guess the question is, what kind of coordination would be useful?
14:43:16 <vkmc> a weekly sync up would be useful for those interested
14:43:35 <rosmaita> that sounds reasonable
14:44:08 <rosmaita> tosky: do you want to represent cinder, or were you saying earlier that you are interested in helping, but not specifically representing cinder?
14:44:20 <vkmc> right now the goal is to build a plugin that can be used externally and make sure that main functionality for all services work as expected
14:44:41 <vkmc> perhaps we would need to set up environments with devstack, cephadm and run tempest there
14:44:52 <vkmc> (debug and fix whatever comes from there)
14:45:08 <rosmaita> that sounds right
14:45:17 <tosky> rosmaita: whatever is needed to move this forward, keeping in mind I can help more with testing and maybe not much with the code
14:45:22 <vkmc> once we've got that working, we could see how to integrate that into what we have
14:45:45 <tosky> vkmc: wouldn't it be more work to write an external code and integrate it back later?
14:47:28 <vkmc> tosky, it's ok for me in any case
14:47:51 <rosmaita> vkmc: so are you thinking of changing the devstack ceph plugin architecture so that it would take a plugin for cephadm-type services?
14:48:16 <rosmaita> actually, i guess architecture would be a good topic for your first meeting
14:48:32 <rosmaita> vkmc: are you willing to start a pop-up team for this?
14:49:14 <vkmc> rosmaita, that's what we need to figure out... I don't know if it makes sense to do that if we are going to start using cephadm as the reference tool for ceph deployments
14:49:42 <vkmc> yep, discussing how we are going to tackle this would be a great topic for the first meeting
14:49:47 <vkmc> rosmaita, of course, I can do that
14:50:00 <rosmaita> vkmc: my impression is that the ceph project is moving to cephadm, so i'm assuming everyone else should too
14:50:09 <vkmc> rosmaita, my impression too
14:50:14 <rosmaita> ok, cool
14:50:24 <tosky> rosmaita: that may have an impact on the version supported by cinder, not sure about the services
14:50:45 <tosky> the other services
14:50:54 <rosmaita> well, we have a support statement somewhere, we only support officially like 1 year older than ceph itself
14:51:00 <vkmc> we can keep the devstack-plugin-ceph for testing in stable branches, and slowly adopt a new plugin over time... but we should discuss this in a specific meeting, I don't want to hijack this meeting :)
14:51:17 <tosky> devstack-plugin-ceph is branched
14:51:46 <rosmaita> vkmc: sounds good, we will look for your announcement on the ML about the pop-up team, please include [cinder] in the subject
14:51:54 <vkmc> yes, I'm aware... but little of what we have in that repo would be useful for us if we start using cephadm as the deployment tool
14:51:59 <enriquetaso> Cephadm is new in the Octopus v15.2.0 release and does not support older versions of Ceph.
14:52:00 <vkmc> rosmaita, sure
14:52:06 <rosmaita> and thanks tosky for pushing this
14:52:40 <enriquetaso> #link https://docs.ceph.com/en/octopus/cephadm/
14:53:04 <rosmaita> well, as long as devstack-plugin-ceph is branched, backcompat shouldn't be a problem (i don't think)
14:53:16 <enriquetaso> sure
14:53:16 <vkmc> enriquetaso, thanks, that's helpful to decide what we end up doing :)
14:53:31 <enriquetaso> thanks vkmc
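(For context, a rough sketch of what local testing could look like once devstack-plugin-ceph grows a cephadm mode: the enable_plugin line is how the plugin is consumed today, while the CEPHADM_DEPLOY toggle is purely hypothetical, standing in for whatever option the pop-up team ends up defining.)

```
[[local|localrc]]
# Real today: pull in the ceph devstack plugin
enable_plugin devstack-plugin-ceph https://opendev.org/openstack/devstack-plugin-ceph

# Hypothetical: deploy ceph via cephadm instead of the current package-based
# install (actual option name/shape to be decided by the pop-up team)
CEPHADM_DEPLOY=True
```

After ./stack.sh completes, pointing tempest at the resulting cloud would surface whatever needs debugging and fixing, as vkmc described.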
14:53:35 <rosmaita> ok thanks
14:53:54 <rosmaita> SachinMore: do you have a specific comment about those patches you listed on the agenda?
14:54:00 <SachinMore> what can we do to help get patches 806687, 811717 and 811718 in upstream? These patches fix bugs in KIOXIA's code.
14:54:31 <simondodsley> these are all os-brick patches, correct?
14:54:38 <SachinMore> yes
14:55:47 <tosky> rosmaita: topic change ?
14:56:01 <rosmaita> #topic aging os-brick patches
14:56:10 <rosmaita> #link https://review.opendev.org/c/openstack/os-brick/+/811718
14:56:18 <rosmaita> #link https://review.opendev.org/c/openstack/os-brick/+/811717
14:56:29 <rosmaita> #link https://review.opendev.org/c/openstack/os-brick/+/806687
14:58:03 <rosmaita> ok, one thing is that it looks like some of them need a rebase (since the patches are stacked)
14:58:14 <rosmaita> but we do need to review these
14:58:57 <rosmaita> since the os-brick release has to happen shortly after the new driver merge deadline, i should revise my earlier statement
14:59:19 <rosmaita> review priorities over the next week should be: os-brick + new drivers
14:59:51 <rosmaita> SachinMore: thanks for bringing these to our attention
15:00:01 <SachinMore> thanks, rosmaita
15:00:24 <SachinMore> please let us know if something is needed from our side
15:00:59 <rosmaita> will do
15:01:09 <rosmaita> ok, we are out of time ... thanks everyone!
15:01:15 <SachinMore> thanks everyone!
15:01:17 <geguileo> what is the replica_count?
15:01:25 <geguileo> is a new field on the conn info?
15:01:49 <jungleboyj> Thank you!
15:02:34 <rosmaita> #endmeeting