14:00:17 #startmeeting cinder
14:00:17 Meeting started Wed Jan 12 14:00:17 2022 UTC and is due to finish in 60 minutes. The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:17 The meeting name has been set to 'cinder'
14:00:24 #topic roll call
14:00:26 o/
14:00:27 o/
14:00:30 hi
14:00:33 hi
14:00:36 hi
14:00:49 hi
14:00:53 hi
14:01:05 hi
14:01:17 o/
14:01:32 o/
14:02:05 hi
14:04:03 hello everyone
14:04:11 #link https://etherpad.opendev.org/p/cinder-yoga-meetings
14:04:16 #topic announcements
14:04:45 first, a reminder about the Open Infrastructure Foundation election/polling happening this week
14:04:56 it closes on Friday 14 Jan at 1800 UTC
14:05:25 i think that as openstack contributors, you are all also signed up as individual foundation members
14:05:29 (not sure about that though)
14:05:54 anyway, you should have received a customized email with a personalized link to the voting website
14:06:04 look for email with subject "2022 Individual Director Election and Bylaws Amendments"
14:06:29 second announcement: new driver merge deadline is 21 January
14:06:35 which is really soon
14:06:43 we are tracking new driver patches here:
14:06:51 #link https://etherpad.opendev.org/p/cinder-yoga-new-drivers
14:07:16 obviously, the review priority until 21 January is new drivers
14:07:52 so people who have other patches: if you help review new drivers, we can get that done faster and get back to reviewing other stuff
14:08:19 and people who have new drivers, you can speed things up by checking the review guidelines and making sure your CI is working properly
14:08:28 #link https://docs.openstack.org/cinder/latest/contributor/new_driver_checklist.html
14:08:53 that link is also helpful for people doing the reviews, of course
14:09:07 we have a variety of drivers of various complexity this cycle
14:09:32 some simple wrappers, and one that requires a new os-brick connector, plus the driver, plus a driver for nova
14:09:48 ok, other upcoming stuff:
14:10:10 festival of xs reviews next friday
14:10:28 (which is 21 January)
14:10:31 yoga midcycle-2 on 26 January
14:10:40 yoga os-brick release week of 7 February
14:10:48 yoga feature freeze week of 21 February
14:11:14 that's all the announcements I have ... anyone else have something to share?
14:12:14 ok, moving on
14:12:28 #topic Discuss revert-to-snapshot potential issue
14:12:31 simondodsley: that's you
14:12:33 Thanks
14:12:52 Reading the spec for revert from any snapshot (https://review.opendev.org/c/openstack/cinder-specs/+/736111) I noticed that the main revert from snapshot functionality doesn't set the volume creation time to the creation date of the snapshot after reversion. Without this there is no way to tell if a volume has actually been reverted. Not only does this stop the revert from any snapshot spec moving forward, it seems to me like a big hole for operators.
14:12:57 Discuss...
14:13:27 Am I missing something obvious here, or is that actually a failing of the feature?
14:13:46 well, the idea of the feature is that as far as the end user is concerned, it's the same volume
14:13:59 so i think the creation date is supposed to stay the same?
14:14:09 that's my first thought as well
14:14:20 Yes, but they have no idea if it has been reverted. It makes sense that the volume creation time should be the time of the snapshot
14:14:57 How do 3rd party backends deal with this?
I know Pure will reset the volume creation time to the snapshot creation time on reversion
14:15:21 If the revert to any snapshot feature is to move forward there has to be a way to tell if a volume has been reset
14:15:50 so from the Pure point of view, the reverted volume actually is a different volume, is that correct?
14:15:58 i'm not sure we have anything defining how the volume creation timestamp works in general, or what it would mean in this situation
14:16:08 Not really, it is just overwritten with the snapshot
14:16:22 ok
14:16:53 resetting the creation timestamp on revert would also mean you can end up with snapshots that have a creation time earlier than the volume's creation time
14:16:57 if you have a DB running on the volume you would potentially need to know that a snapshot reversion has taken place so you know where your data is at
14:17:41 eharney, exactly, so that would tell you that those snapshots are no longer valid, or are available for reversion to an earlier timestamp if the initial reversion didn't go far enough back
14:18:02 think about DB recovery from corruption etc
14:18:08 well they should still be valid
14:18:39 but the operator/user has no idea which are valid if they weren't the person who did the original reversion.
14:19:01 you are assuming only one operator who has a perfect memory
14:19:54 this sounds like a case for volume metadata
14:20:02 without knowing the timestamp of the snapshot that was reverted onto the underlying volume you can leave yourself open to data corruption
14:20:29 yes - metadata would be a good use for this, but there would have to be a standard name for it that all drivers used.
14:20:29 why data corruption?
14:20:44 As it is I think every driver would have to be patched to fix this
14:22:03 :-(
14:22:17 well, the spec hasn't been approved yet (I hope), so we can work out a standard way to record this
14:22:39 eharney, because you could revert to snapshot 3, then do work and create new snapshots (4 and 5); then in the future there is a need to go back to snapshot 4, but a user who can't see snapshot timestamps could pick snapshot 2, not knowing about the snapshot 3 reversion
14:22:44 i am kind of leery of messing with the volume creation timestamp though
14:24:06 eharney has been pointing out on that spec review that some kind of tracking needs to happen, but the proposers haven't really addressed that, i don't think
14:25:05 I would be OK with metadata being used for this, but a regular user using Horizon would not see the metadata (unless they do additional actions), and as I say it would need to be a standard name that all drivers had to adopt.
14:25:36 Does sound like this requires additional thought/design.
14:25:36 I would like to know what other vendors do with the underlying volume timestamp on snapshot recovery. Anyone care to comment?
14:26:04 this would inform the way forward for revert and this spec
14:26:33 kind of quiet in here today, maybe we need to take this to the mailing list
14:27:13 what i mean is the question of whether changing the semantics of the volume create timestamp is a good way forward here, or whether we need something else
14:27:20 Or a topic for the next mid-cycle?
14:27:32 i think it would be better to use something else
14:27:32 or both!
14:27:51 the volume timestamp still represents when the volume was created, that isn't really related to snapshots being reverted to, etc...
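
To make the volume-metadata suggestion above concrete, here is a minimal sketch of what recording a reversion could look like. It is an illustration only: the metadata key names are invented, and as the discussion notes, a real implementation would need a standard name agreed across all drivers.

```python
# Minimal sketch only -- the metadata keys below are hypothetical, not an
# agreed Cinder standard. The idea: stamp the volume when a revert happens
# so users and operators can tell which snapshots predate the reversion.
from datetime import datetime, timezone

def record_revert(volume, snapshot):
    """Record in volume metadata that 'volume' was reverted to 'snapshot'."""
    volume.metadata['revert:last_snapshot_id'] = snapshot.id
    # Recording the snapshot's creation time here (rather than resetting the
    # volume's created_at) avoids changing the timestamp's semantics, which
    # was the concern raised above.
    volume.metadata['revert:snapshot_created_at'] = (
        snapshot.created_at.isoformat())
    volume.metadata['revert:reverted_at'] = (
        datetime.now(timezone.utc).isoformat())
    volume.save()
```
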
14:28:08 but it sort of is, as the volume is recreated
14:28:31 based on the timestamp of the snapshot
14:28:49 the volume isn't recreated from a cinder point of view (and presumably not on a good number of backends either?)
14:29:29 yeah, i am inclined to agree with eharney, but if a lot of backends do what Pure does, maybe user expectations are different from my intuition
14:29:48 that is what i'm trying to find out - even in Pure we do not recreate the volume, we just apply a bunch of changes that make the volume look like it did at the time of the snapshot
14:29:54 i think simondodsley definitely makes a good case that we want some kind of tracking here
14:30:05 rosmaita: ++
14:30:40 maybe a direct mail to the driver maintainers, as not all of them read the mailing list
14:30:51 ok, simondodsley how about we draft out something to the ML on an etherpad
14:31:08 sounds good
14:31:19 well, they should read the ML, so i would prefer conversation on the ML, with a direct email informing them about the discussion
14:31:58 #action simondodsley, rosmaita - draft email to the ML about volume revert tracking
14:32:43 ok thanks simondodsley, let's move on
14:32:55 #topic removing contrib/black-box from the code tree
14:33:09 this is prompted by a comment from hemna on an eharney patch
14:33:24 we have this contrib/black-box/* in the code tree
14:33:38 it has been marked as 'unsupported' since 20 Jan 2020
14:33:45 #link https://review.opendev.org/c/openstack/cinder/+/701974
14:33:59 looks like it has definitely not been maintained
14:34:17 it looks like it only builds properly with stable/ocata (and that branch no longer exists)
14:34:25 so i think we should remove it
14:34:40 is anyone horrified by that suggestion?
14:35:05 i think it's a good idea
14:35:10 i just wanted to get an initial reaction before proposing this on the ML
14:35:25 sounds like we have no black-box fans present today
14:35:42 I agree.
14:35:48 ++
14:35:54 #action rosmaita - proposal to remove blackbox support this cycle to the ML
14:36:03 ok, thanks for the feedback!
14:36:05 blockbox :)
14:36:27 3topci coordination and strategy for the changes required in devstack-plugin-ceph
14:36:58 missed a # and topic spelling
14:37:00 oops, block-box
14:37:16 He he.
14:37:16 yes, let me try again
14:37:28 #topic coordination and strategy for the changes required in devstack-plugin-ceph
14:37:34 ok tosky has the floor
14:37:38 hi, you are probably aware that the new way to install ceph is cephadm, and talking about this with manila people, they are experimenting with it (hi vkmc :)
14:37:42 the experiments are being done outside devstack-plugin-ceph, but I think the final code should live there.
14:37:45 Now, there are several consumers of devstack-plugin-ceph (cinder, glance, manila, nova, and generally QA), so maybe it would make sense to set up a popup team to handle the cephadm migration?
14:37:53 That would mean (if the TC and all the affected groups approve it) a few more meetings and some volunteers to handle it. I can help with it but it would be better if I weren't the only one.
14:37:57 Any thoughts in general?
14:38:10 * vkmc sneaks in
14:38:11 o/
14:38:13 i agree with everything you have said
14:38:18 hello vkmc
14:38:30 makes sense to add it as a new mode/option to devstack-plugin-ceph
14:38:55 vkmc: Long time no see!
14:39:06 jungleboyj, indeed :D
14:39:08 i can help with reviews etc at least
14:39:36 or can we just solve it without an additional team and meetings?
14:40:09 well, the popup team doesn't actually have to hold meetings, just an etherpad to keep people informed
14:40:34 but quick meetings are probably good to make people focus on the work
14:41:15 though we don't want to add unnecessary overhead to this work
14:42:06 but since vkmc is actually doing work on this, her opinion is important here
14:42:40 i guess the question is, what kind of coordination would be useful?
14:43:16 a weekly sync up would be useful for those interested
14:43:35 that sounds reasonable
14:44:08 tosky: do you want to represent cinder, or were you saying earlier that you are interested in helping, but not specifically representing cinder?
14:44:20 right now the goal is to build a plugin that can be used externally and make sure that the main functionality of all services works as expected
14:44:41 perhaps we would need to set up environments with devstack and cephadm and run tempest there
14:44:52 (debug and fix whatever comes from there)
14:45:08 that sounds right
14:45:17 rosmaita: whatever is needed to move this forward, keeping in mind I can help more with testing and maybe not so much with the code
14:45:22 once we get that working, we could see how to integrate it with what we have
14:45:45 vkmc: wouldn't it be more work to write the code externally and integrate it back later?
14:47:28 tosky, it's ok for me in any case
14:47:51 vkmc: so are you thinking of changing the devstack ceph plugin architecture so that it would take a plugin for cephadm-type services?
14:48:16 actually, i guess architecture would be a good topic for your first meeting
14:48:32 vkmc: are you willing to start a pop-up team for this?
14:49:14 rosmaita, that's what we need to figure out... I don't know if it makes sense to do that if we are going to start using cephadm as the reference tool for ceph deployments
14:49:42 yep, discussing how we are going to tackle this would be a great topic for the first meeting
14:49:47 rosmaita, of course, I can do that
14:50:00 vkmc: my impression is that the ceph project is moving to cephadm, so i'm assuming everyone else should too
14:50:09 rosmaita, my impression too
14:50:14 ok, cool
14:50:24 rosmaita: that may have an impact on the ceph version supported by cinder, not sure about the services
14:50:45 the other services
14:50:54 well, we have a support statement somewhere, we only officially support like 1 year older than ceph itself
14:51:00 we can keep devstack-plugin-ceph for testing in stable branches, and slowly adopt a new plugin over time... but we should discuss this in a specific meeting, I don't want to hijack this meeting :)
14:51:17 devstack-plugin-ceph is branched
14:51:46 vkmc: sounds good, we will look for your announcement on the ML about the pop-up team, please include [cinder] in the subject
14:51:54 yes, I'm aware... but little of what we have in that repo would be useful for us if we start using cephadm as the deployment tool
14:51:59 Cephadm is new in the Octopus v15.2.0 release and does not support older versions of Ceph.
14:52:00 rosmaita, sure
14:52:06 and thanks tosky for pushing this
14:52:40 #link https://docs.ceph.com/en/octopus/cephadm/
14:53:04 well, as long as devstack-plugin-ceph is branched, backcompat shouldn't be a problem (i don't think)
14:53:16 sure
14:53:16 enriquetaso, thanks, that's helpful for deciding what we end up doing :)
14:53:31 thanks vkmc
14:53:35 ok thanks
14:53:54 SachinMore: do you have a specific comment about those patches you listed on the agenda?
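
For context on the cephadm workflow discussed above, here is a rough single-node sketch of the bootstrap sequence a cephadm-based devstack-plugin-ceph mode would have to drive. The CLI commands are documented cephadm/ceph commands, but the sequence, pool name, and the idea of driving it from Python are illustrative choices, not the plugin's actual design.

```python
# Rough sketch of a single-node cephadm bootstrap for a test environment.
# Not devstack-plugin-ceph code; the pool name 'volumes' and the sequence
# here are placeholders for whatever the popup team settles on.
import subprocess

def run(cmd):
    print('+ ' + ' '.join(cmd))
    subprocess.run(cmd, check=True)

def bootstrap(mon_ip):
    # Create a minimal one-host cluster (monitor + manager on this host).
    run(['cephadm', 'bootstrap', '--mon-ip', mon_ip])
    # Turn every unused disk on the host into an OSD.
    run(['ceph', 'orch', 'apply', 'osd', '--all-available-devices'])
    # Create and initialize an RBD pool for cinder volumes.
    run(['ceph', 'osd', 'pool', 'create', 'volumes'])
    run(['rbd', 'pool', 'init', 'volumes'])
```
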
14:54:00 what can we do to help get patches 806687, 811717 and 811718 merged upstream? These patches fix bugs in KIOXIA's code.
14:54:31 these are all os-brick patches, correct?
14:54:38 yes
14:55:47 rosmaita: topic change?
14:56:01 #topic aging os-brick patches
14:56:10 #link https://review.opendev.org/c/openstack/os-brick/+/811718
14:56:18 #link https://review.opendev.org/c/openstack/os-brick/+/811717
14:56:29 #link https://review.opendev.org/c/openstack/os-brick/+/806687
14:58:03 ok, one thing is that it looks like some of them need a rebase (since the patches are stacked)
14:58:14 but we do need to review these
14:58:57 since the os-brick release has to happen shortly after the new driver merge deadline, i should revise my earlier statement
14:59:19 review priorities over the next week should be: os-brick + new drivers
14:59:51 SachinMore: thanks for bringing these to our attention
15:00:01 thanks, rosmaita
15:00:24 please let us know if something is needed from our side
15:00:59 will do
15:01:09 ok, we are out of time ... thanks everyone!
15:01:15 thanks everyone!
15:01:17 what is the replica_count?
15:01:25 is it a new field on the conn info?
15:01:49 Thank you!
15:02:34 #endmeeting
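
On the replica_count question at the end: it presumably refers to a field in the connection info for replicated NVMe-oF volumes in the KIOXIA patches. The sketch below is only a guess at the general shape of such connection properties; the real field layout is whatever patches 806687/811717/811718 actually define.

```python
# Purely illustrative: a guess at the shape of NVMe-oF connection info
# carrying a replica_count field, as asked about above. The exact keys and
# values are defined by the os-brick patches under review, not this sketch.
connection_info = {
    'driver_volume_type': 'nvmeof',
    'data': {
        'vol_uuid': 'f0c6...',                    # backend volume id (placeholder)
        'volume_replicas': [                      # hypothetical: one entry per replica
            {'portals': [('10.0.0.5', '4420', 'tcp')]},
            {'portals': [('10.0.0.6', '4420', 'tcp')]},
        ],
        'replica_count': 2,  # hypothetical: number of replicas to assemble
    },
}
```
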