14:00:03 #startmeeting cinder
14:00:03 Meeting started Wed Jun 22 14:00:03 2022 UTC and is due to finish in 60 minutes. The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:03 The meeting name has been set to 'cinder'
14:00:06 #topic roll call
14:00:43 hi
14:00:43 doink
14:00:49 hi
14:01:02 o/
14:02:08 hi
14:02:22 #link https://etherpad.openstack.org/p/cinder-zed-meetings
14:03:00 hi
14:03:20 let's wait for a few more minutes
14:03:24 hi! o/
14:04:13 looks like we have all the usual people here, so let's get started
14:04:21 #topic announcements
14:04:25 hi
14:04:38 first, Cinder spec freeze this week
14:04:59 so we're in the week of spec freeze, and we have a bunch of specs open
14:05:21 they're listed on the meetings etherpad page, so please take a look
14:05:38 o/
14:05:51 some need revision from when I last looked, but it would be good to review them so the authors can address all comments at once
14:06:32 also note that we are not going to implement system scope in this cycle, so most of the work will be cleaning things up; no SRBAC spec is needed
14:06:50 next, Cinderlib yoga release this week
14:07:07 we are also going to release cinderlib yoga (as it's cycle trailing) this week
14:07:16 I have fixed the privsep issue
14:07:32 yes, and i think it would be good to get that in before the release
14:07:37 that affects all branches, but it's currently breaking only wallaby (iirc)
14:07:37 will add a comment to the release patch
14:07:47 #link https://review.opendev.org/c/openstack/releases/+/842105
14:08:00 I just updated my spec
14:08:25 geguileo, yep, only wallaby as of now (didn't look before that since victoria is EM now)
14:08:33 yoga and xena look fine
14:08:45 whoami-rajat: apparently we had a bug from Ussuri https://bugs.launchpad.net/cinderlib/+bug/1958159
14:08:54 with the same issue :-(
14:09:03 hemna, great, will have a look
14:09:18 I'll try to update my spec asap, but the cinderlib fix was harder than expected to reproduce and fix
14:09:21 oh :(
14:09:38 geguileo can you add a link to the fix?
14:09:52 https://review.opendev.org/c/openstack/cinderlib/+/847170
14:09:55 tnx
14:10:07 geguileo, yeah that one was also important, let me know if you require a spec freeze exception as you've been busy with a bunch of tasks
14:10:14 #link https://review.opendev.org/c/openstack/cinderlib/+/847170
14:11:06 at least we finally got the motivation to fix the cinderlib issue, so we're good
14:11:09 whoami-rajat: thanks, I'll probably need it because people will need time to review the new spec
14:11:56 geguileo, ack, i will send out a mail after the spec freeze where you can apply for a spec freeze exception
14:12:00 but we can discuss that later
14:12:04 thanks
14:12:11 thanks for fixing the cinderlib issues!
14:12:45 moving on, October PTG Survey
14:13:03 so i came across this survey, and it has a question regarding whether it is going to be an in-person team meetup
14:13:14 i was curious about it, so i thought i'd ask here
14:13:38 is anyone planning to go to the October PTG in Ohio?
14:13:50 I'm planning to fill out the survey, and your responses might be helpful
14:14:03 #link https://openinfrafoundation.formstack.com/forms/oct2022_ptg_team_signup
14:14:39 I haven't considered it yet
14:15:04 same here
14:15:38 same
14:16:37 I guess this will be a better topic for the video + IRC meeting next week, so everyone has some time to think and discuss, and can express themselves better (in the video meeting)
14:17:28 Of course, if most of the team doesn't plan to go, we can conduct it virtually as we do now, but let's see
14:17:46 whoami-rajat: when do you have to reply?
14:17:55 geguileo, before 31st July
14:18:10 we have time, but i wanted to bring it up early
14:18:19 ok, I'll start talking with my manager
14:18:44 oh? I wasn't aware of a meetup
14:19:23 #link https://openinfra.dev/ptg/
14:19:30 hemna, there's one happening in October; look for a mail with the subject: Save the Date: PTG October 2022
14:19:54 or the page geguileo provided
14:20:01 ok, as soon as that's sent I can ping my mgr. I'm in VA, so it shouldn't cost too much to travel
14:20:44 NetApp shall send one dev for Cinder
14:21:15 cool, looks like people are going to attend
14:21:40 sfernand: that sounded like a command lol
14:21:48 so let's discuss this again next week in the video meeting; maybe people will have some concrete responses as well
14:21:51 hahaha!
14:22:05 but that was not my intention :P
14:22:23 sfernand: you probably started learning English before it changed to will ;-)
14:22:38 (except for specific cases)
14:22:49 Maybe I will be focused on Manila for the next PTG, but we should have another person for Cinder
14:24:21 so moving on to topics now
14:24:24 #topic reviews needed for https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/826484
14:24:26 tosky, that's you
14:24:50 so, not much to say: please review it!
14:25:07 ack!
14:25:10 :D short and simple
14:25:26 the review has been up for a while, but there may be some fixes needed - I'd say if there are no structural changes, they can be addressed later
14:25:30 :)
14:25:34 i will just paste the statement from the etherpad as it gives good reasoning for it to be reviewed
14:25:36 this patch adds support for cephadm (the official ceph deployment method); it has already gone through several rounds of reviews, so it's probably better to merge it sooner rather than later
14:25:48 does that work with ceph-iscsi ?
14:25:50 the change should allow us to enable the newer ceph using the official deployment method (cephadm)
14:26:42 hemna: that reminds me there is an unmerged job for ceph-iscsi, so we don't even have information on whether it works now
14:26:54 :(
14:27:19 is there something I can do to help? I wrote the initial ceph-iscsi support for devstack and the cinder driver
14:28:15 test it; but personally, if the current patch doesn't require structural changes to properly support ceph-iscsi (assuming it doesn't support it already), I would move the additional fixes to another patch
14:28:32 and we definitely need to come back to the ceph-iscsi job, because we don't have a baseline for comparison
14:28:42 ok sounds good
14:29:22 also remember the new method is not enabled by default
14:29:28 and manila people would like to start using it
14:30:16 so we are not breaking anything by merging the change - of course it would be nice if a newly merged patch worked without additional fixes, but luckily that seems to be the case here
14:30:44 unless there are other questions, please review, and $next_topic
14:31:20 cool, thanks for the verbose explanation as well tosky
14:31:23 #link https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/826484
14:31:27 moving on
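For reference, opting in to the new deployment method in a devstack local.conf would look roughly like the fragment below. This is only a sketch: enable_plugin is the standard devstack plugin mechanism, but the CEPHADM_DEPLOY toggle is an assumption taken from the patch under review and could change before it merges.

    [[local|localrc]]
    # Pull in the ceph devstack plugin (standard devstack plugin mechanism)
    enable_plugin devstack-plugin-ceph https://opendev.org/openstack/devstack-plugin-ceph
    # Opt in to the cephadm-based deployment; as noted in the discussion it
    # is off by default, so existing jobs keep the old deployment path.
    # (variable name assumed from the patch under review)
    CEPHADM_DEPLOY=True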
14:31:37 #topic Idea https://lists.openstack.org/pipermail/openstack-discuss/2022-June/029145.html
14:31:42 jsmdk, that's you
14:32:04 Yes, as a supplement to the existing backup drivers, I looked into hooking benji backup into the cinder-backup service. I did some early development and testing, but I do not know how to proceed to hopefully make this part of cinder upstream in the future.
14:32:30 i was looking at the benji driver, and it looks like it can back up lvm and rbd volumes; not sure why no one has proposed it for the cinder codebase
14:33:21 my code is here https://github.com/jsm222/cinder-backup-benji-driver is there a driver already?
14:33:35 jsmdk, looks like you had the same doubt as i did; I'm not sure if we have proper documentation for contributing a backup driver
14:33:53 so it can only back up lvm and rbd ?
14:33:53 but i guess you can take the previously contributed drivers as a reference
14:34:05 I'm not sure, but that's what i read ^
14:34:43 how would that work in a multi-backend deployment ?
14:35:06 I have only tested backing up from lvm and ceph backends; it looks at the volume type's extra specs in multi-backend setups
14:35:20 jsmdk: some time ago I wrote a blog post on how to write a backup driver: https://gorka.eguileor.com/write-a-cinder-backup-driver/
14:35:29 nice
14:35:53 jsmdk: your driver needs to support ALL the cinder volume backends
14:36:02 jsmdk: though it may be optimized for some of them
14:36:15 Hello everyone! my name is alexander and i am a software engineer at dell. our team develops and supports openstack drivers for dell storage.
14:36:36 geguileo, it would be good to include that in our documentation as well
14:37:06 amalashenko: welcome! If you have topics you want to discuss, you can either add them to the etherpad https://etherpad.opendev.org/p/cinder-zed-meetings
14:37:07 wow, it dates back to 2016
14:37:15 okay, I will keep that in mind; is there a link with the list of supported backends?
14:37:20 amalashenko: or wait until the open discussion time
14:37:43 jsmdk, https://docs.openstack.org/cinder/latest/reference/support-matrix.html
14:37:50 jsmdk: I recommend looking at the chunked backup driver, which is the basis for most backup drivers
14:38:02 ok thanks
14:38:06 jsmdk: because there you can see how it attaches the source volume using os-brick
14:38:19 that way, for most cases, you don't have to worry what the source is
14:38:55 which I think I changed fairly recently to allow for asynchronous operations between the cinder backup and cinder volume processes for getting the volume to back up (re: long clone operations in cinder volume)
14:39:00 Yes, that was also my impression, that volume_file gives you the source of the backup.volume ?
14:39:38 hemna, i think the request goes through the scheduler now, right?
14:39:58 but doesn't os-brick return an rbd handle instead of a file path on disk for iscsi ?
14:40:22 it depends on the volume's backend
14:40:40 iirc we always get a file-like object (maybe except RBD)
14:40:56 whoami-rajat: no, not really. backup does an rpc cast to cinder-volume to clone, and that can go through the scheduler, but the clone operation can take ages; once that clone is done, cinder volume does a cast back to backup to continue
14:40:59 but it should be possible to do something similar for the RBD driver as well
14:41:00 in RBD we get a custom RBD file wrapper (faced the issue in glance store)
14:41:33 whoami-rajat: but we are talking about backup drivers, so it's different
14:41:39 https://github.com/openstack/os-brick/blob/924af884db5797092e16e6176e9a70feddc9c892/os_brick/initiator/connectors/rbd.py#L130-L131
14:41:57 whoami-rajat: the backup method should be receiving the volume_file parameter, which is file-like in all cases (iirc)
14:42:24 Yes it is, even rbd is a file-like object.
14:42:25 hemna, ack, got it
14:42:57 https://github.com/openstack/cinder/blob/2774c2537e8afabe8e46f1e5c9b08e4ff2641743/cinder/backup/manager.py#L478-L492
14:43:24 geguileo, i was referring to os-brick returning the volume path, but maybe that's different from the current discussion
14:43:27 https://github.com/openstack/cinder/blob/master/cinder/backup/manager.py#L448-L467
14:43:41 whoami-rajat: yeah, this is only relative to backups
14:43:52 ok
14:43:57 but I think I may have misled everyone when I mentioned os-brick
14:44:05 since that is managed by the backup.manager code
14:44:07 not the driver
14:44:14 my bad
14:44:23 yeah, it is managed by the manager. no problem
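To make the shape of this concrete: a cinder-backup driver essentially implements backup/restore/delete against the file-like volume_file that the backup manager hands it after attaching the source. The following is a minimal sketch only; cinder.backup.driver.BackupDriver is the real base class, and the method signatures are paraphrased from cinder/backup/driver.py as of this era, but the local-file storage below is a stand-in for illustration, NOT benji's API.

    # Sketch of a minimal cinder-backup driver, under the assumptions above.
    import os

    from cinder.backup import driver

    CHUNK_SIZE = 4 * 1024 * 1024  # read the attached source in 4 MiB chunks


    class SketchBackupDriver(driver.BackupDriver):
        """Illustrative only: stores backups as flat files under /tmp/backups."""

        def _path(self, backup):
            return os.path.join('/tmp/backups', backup.id)

        def backup(self, backup, volume_file, backup_metadata=False):
            # volume_file is the file-like object the backup manager provides
            # after attaching the source volume (per the discussion, even RBD
            # arrives wrapped as a file-like object), so we just read from it.
            os.makedirs('/tmp/backups', exist_ok=True)
            with open(self._path(backup), 'wb') as out:
                while True:
                    chunk = volume_file.read(CHUNK_SIZE)
                    if not chunk:
                        break
                    out.write(chunk)

        def restore(self, backup, volume_id, volume_file):
            # Write the stored data back to the (file-like) destination volume.
            with open(self._path(backup), 'rb') as stored:
                while True:
                    chunk = stored.read(CHUNK_SIZE)
                    if not chunk:
                        break
                    volume_file.write(chunk)

        def delete_backup(self, backup):
            # Remove the stored data; cinder itself handles the DB record.
            try:
                os.remove(self._path(backup))
            except FileNotFoundError:
                pass

As suggested above, most in-tree drivers instead subclass cinder.backup.chunkeddriver.ChunkedBackupDriver, which already handles chunking, compression, and incremental bookkeeping and only asks subclasses for object-store primitives.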
14:45:01 iirc we are not requiring backup drivers to have a CI, right?
14:45:35 afaik we only test swift
14:45:50 geguileo, we currently don't have specific CIs for backup drivers
14:46:12 oh! and maybe we also test the ceph one a bit on the ceph job
14:46:29 whoami-rajat: so they wouldn't need to provide a CI for that new driver
14:47:08 geguileo, right, i don't think so, but some validation would be good to confirm their driver works
14:47:26 i will discuss with rosmaita how we've dealt with that in the past
14:48:01 Anything else I should take into consideration?
14:48:23 jsmdk: I don't think Benji can back up FC volumes (from the page https://benji-backup.me/)
14:48:55 oh, sorry, it says it can
14:49:08 it just says that it's better for LVM and Ceph
14:49:40 okay
14:49:40 jsmdk: where are the backups stored?
14:50:13 Oh, you can choose from a number of backends, NFS or even s3
14:50:25 for storing the backups, that is
14:50:53 jsmdk: mmmm, we already have NFS and S3 backup drivers...
14:52:12 I know, but do they take advantage of rbd diffs for incrementals? benji also does zstd compression
14:53:45 jsmdk: not for ceph to NFS
14:54:07 so OK, we have a reason there why the driver is useful
14:54:16 no more complaints from me for now ;-)
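For context on how a new backup driver gets wired in once merged: the operator selects it with the existing backup_driver option in cinder.conf. In the sketch below the commented NFS line is a real in-tree driver, while the benji module path is hypothetical, since no patch has been proposed yet.

    [DEFAULT]
    # Real, existing in-tree driver:
    # backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver
    # Hypothetical path for the proposed benji driver:
    backup_driver = cinder.backup.drivers.benji.BenjiBackupDriver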
14:55:26 jsmdk, so you can go ahead and propose a patch for the benji driver; we're very close to M2 for Zed, but let's see if we can make progress on getting it in
14:55:53 whoami-rajat: and we should be able to test the driver with NFS
14:56:09 to verify that it works with devstack
14:57:23 I will do some more testing, then I will post a patch for review, and some instructions on testing
14:57:36 jsmdk: ok
14:57:38 whoami-rajat: I think we have 2 minutes for open discussion
14:57:40 lol
14:57:51 yep, let's move to open discussion
14:57:55 thanks jsmdk
14:57:59 #topic open discussion
14:58:02 thank you
14:58:30 amalashenko, hello and welcome!
15:00:05 whoami-rajat, nice to be here!
15:00:12 :)
15:00:14 we're out of time now, thanks everyone for attending!
15:00:17 #endmeeting