14:00:01 #startmeeting cinder
14:00:01 Meeting started Wed May 10 14:00:01 2023 UTC and is due to finish in 60 minutes. The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:01 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:01 The meeting name has been set to 'cinder'
14:00:06 #topic roll call
14:00:15 hi
14:01:27 hi
14:01:47 hi
14:01:48 o/
14:01:52 o/
14:02:07 o/
14:03:08 #link https://etherpad.opendev.org/p/cinder-bobcat-meetings
14:03:12 Hello!
14:03:49 o/
14:04:19 hi
14:05:22 hello
14:05:24 let's get started
14:05:29 #topic announcements
14:05:40 first, Cinderlib 2023.1 Antelope (5.1.0) released
14:05:53 we've released cinderlib for 2023.1 Antelope with tag 5.1.0
14:06:05 #link https://pypi.org/project/cinderlib/5.1.0/
14:06:20 hello
14:06:34 i think the next step is to modify the zuul and tox files to open cinderlib for 2023.2 Bobcat development
14:07:38 next, Runtime update for 2023.2: Test libraries against py38
14:07:46 #link https://review.opendev.org/c/openstack/governance/+/882165
14:08:11 there was a problem with some jobs breaking when we removed py38 support
14:08:23 the supported runtimes for 2023.2 are py39 and py310
14:08:47 but there is a patch up in the governance project to make py38 a runtime for libraries
14:09:20 does that mean cinder too, to support cinderlib?
14:10:14 good question, since cinderlib depends on cinder's requirements we might need to
14:10:20 but there is still ongoing discussion so things might change
14:10:27 i think that's a "yes"
14:10:33 * jungleboyj sneaks in late.
14:12:01 yes, i don't think there is any harm in supporting py38 for another release, the jobs don't consume a lot of gate resources and we can ensure py38 compatibility
14:13:14 anyway, let's see how this will be finalized, maybe we will end up having py38 for all projects
14:13:38 next, Query on A/A
14:14:05 so raghavendra, who works on the HPE driver, sent a query on the ML regarding A/A support for their driver
14:14:13 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-May/033577.html
14:14:43 and i don't think we have a proper docs page that describes the changes required by drivers to enable A/A
14:15:34 i was able to find this doc, but it might have a lot of info irrelevant to the implementation of active/active support itself
14:15:35 #link https://docs.openstack.org/cinder/latest/contributor/high_availability.html
14:16:30 just wanted to bring this up for attention; if more and more vendors want to enable A/A then this is something we need to consider
14:16:46 agreed! ;-)
14:17:38 much of the effort is in understanding how your driver works and whether it is designed in a way where running multiple instances of it in HA will break its assumptions (local state, etc.)
14:20:35 that's a good point, maybe we can document all the generic things common to all drivers and add points like this ^ for driver vendors to test out scenarios
14:21:03 the next vendor implementing this can also consider documenting the steps
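For context, the code-side opt-in that the high-availability doc describes is a single class attribute; the real effort is the audit discussed above. A minimal sketch, assuming a hypothetical MyVendorDriver class (not the actual HPE driver):

    # Minimal sketch of a driver opting into active/active support.
    # "MyVendorDriver" is a hypothetical example class.
    from cinder.volume import driver


    class MyVendorDriver(driver.VolumeDriver):
        # Tells the volume manager this driver has been audited for
        # clustered (active/active) operation; BaseVD defaults this to False.
        SUPPORTS_ACTIVE_ACTIVE = True

    # Operators then enable clustered operation per backend in cinder.conf:
    #   [DEFAULT]
    #   cluster = mycluster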
14:22:21 next, Bobcat Midcycle - 1
14:22:26 #link https://etherpad.opendev.org/p/cinder-bobcat-midcycles
14:22:34 please add topics for the midcycle
14:23:32 also i was thinking, if there are enough people attending the Vancouver summit, we can use the midcycle topics for the PTG there
14:23:52 i mean skip the midcycle and discuss the topics in vancouver
14:24:41 in any case, do add topics so we will have something to discuss :D
14:25:43 next, Upcoming events
14:25:50 Bobcat-1 milestone: May 11th, 2023
14:26:15 i think we have client and library releases proposed for M-1
14:26:46 I will go through them to see if we've merged patches that need a release
14:26:53 OpenInfra Summit Vancouver (including PTG): June 13-15, 2023
14:27:08 Vancouver summit+PTG is next month
14:27:25 please plan accordingly if you're going to travel there
14:28:01 that's all for announcements
14:28:14 let's move to topics
14:28:17 #topic Add the next job to periodic jobs
14:28:19 enriquetaso, that's you
14:28:25 hello
14:28:41 Hello, I'm currently working on fixing the Ceph backup driver. You can find my work here:
14:28:49 #link https://review.opendev.org/c/openstack/cinder/+/880965
14:28:54 I've updated the commit message ^ because eharney identified the patch that introduced the bug. If you have some spare time, I would greatly appreciate a review.
14:29:01 Once the patch in the master branch is merged, I plan to propose the backports.
14:29:08 In addition to the fix, I've proposed a non-voting job to display results,
14:29:14 which you can find here:
14:29:18 #link https://review.opendev.org/c/openstack/cinder/+/881032
14:29:41 However, I believe we're at our limit for adding additional Ceph jobs to the CI. Consequently, I'm considering adding this job to the periodic jobs, allowing it to run at least once a week.
14:29:52 Is this feasible?
14:30:45 If so, I could use some guidance on adding jobs to the periodic queue.
14:31:22 i can help you there
14:32:03 here's an example: https://opendev.org/openstack/glance/src/branch/master/.zuul.yaml
14:32:12 yay
14:32:13 glance runs a bunch of periodic jobs
14:32:44 it's basically the same as a normal job, you just put it in the 'periodic' section
14:32:53 i think by default they run once a day
14:33:43 thanks rosmaita, i'll check where the periodic section is in cinder
14:34:00 i don't think we have one, you can add it
14:34:25 excellent
14:35:24 thanks!
14:35:25 if the idea of the new job is to only test backup/restore, we can use a regex to run only the backup/restore tests
14:36:08 right now i think it's doing some redundant testing, same as its parent job cinder-plugin-ceph-tempest
14:36:09 The goal is to only have LVM as the volume backend and Ceph as the backup driver
14:36:34 i think I need to add a new job, otherwise restore will be ceph to ceph
14:37:16 I'm not saying to use the cinder-plugin-ceph-tempest job
14:37:34 what I'm trying to say is we can limit the tests running in the new job
14:37:53 since the volume tests would run the same as the LVM job i guess
14:38:15 my idea is that we can use a regex to limit this job to run only backup/restore tests
14:38:23 aahh, i understand
14:39:21 i have no problem with limiting the number of tests with a regex if that's a better alternative
14:39:53 i don't know if it's something we want to run on every patch tho
14:40:19 but sure, i can update the patch to add the regex
14:41:33 here I added a new job to run a single test because i required two different images for that test
14:41:34 https://review.opendev.org/c/openstack/tempest/+/831018/25/tox.ini
14:41:59 might not be the best example to reference but just to give an idea
14:42:17 looks good! I'll update my job then
14:42:28 great, thanks enriquetaso
14:42:48 @all Please review the main fix when possible!
14:42:55 thanks!
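A minimal sketch of what a periodic, regex-limited job stanza could look like in cinder's .zuul.yaml; the job name, parent, and regex below are illustrative placeholders, not the actual proposed change:

    # Hypothetical sketch; names and regex are assumptions, not the real patch.
    - job:
        name: cinder-tempest-lvm-ceph-backup
        parent: devstack-tempest
        description: |
          LVM as the volume backend with Ceph as the backup driver,
          limited to backup/restore tests via a regex.
        vars:
          # tempest_test_regex is how devstack-tempest-based jobs narrow
          # the test run to a subset of tests.
          tempest_test_regex: 'backup|restore'

    - project:
        # Jobs listed under a 'periodic' pipeline run on a timer
        # (daily by default) instead of on every patch.
        periodic:
          jobs:
            - cinder-tempest-lvm-ceph-backup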
14:43:25 ok, moving on to the next topic
14:43:28 #topic Cinder-Backup very slow / inefficient when using chunked drivers, e.g. S3
14:43:40 crohmann, that's you
14:44:22 I was going to look into it too. Do we have a bug number?
14:45:05 zaitcev, i think it's this one
14:45:07 #link https://bugs.launchpad.net/cinder/+bug/1918119
14:45:33 reading the etherpad, i think crohmann couldn't make it today
14:45:54 let's discuss this again when he's around
14:46:13 moving on
14:46:25 #topic Need help with zuul errors on https://review.opendev.org/c/openstack/cinder/+/868485
14:46:30 drencrom, that's you
14:46:42 Hi, we discussed this issue previously
14:47:20 I did a patch to check the length in bytes of the metadata values
14:47:36 But now zuul is failing and it does not seem to be related to my patch
14:50:53 it seems that cinder-plugin-ceph-tempest and tempest-integrated-storage-ubuntu-focal are causing the failure; I don't feel this is connected to your change
14:51:17 2023-04-24 13:01:11.268256 | controller | ERROR: No matching distribution found for tooz===4.0.0 (from -c /opt/stack/requirements/upper-constraints.txt (line 550))
14:51:28 i remember a tooz issue relating to py38
14:51:37 Yes, I'm asking in case you have already seen this in other patches.
14:52:52 Also, I would like you to validate that my patch to validators.py is sound
14:53:14 hmm, tooz 4.0.0 doesn't support py38
14:58:54 maybe they were pinning tooz for py38 compatibility
14:59:06 but we need to wait for the TC to come up with a resolution
15:00:10 we're out of time
15:00:15 thanks everyone for joining
15:00:18 #endmeeting
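On the metadata patch discussed above: checking "length in bytes" rather than characters matters because multi-byte UTF-8 values can exceed a database column's size even when the character count looks fine. A minimal sketch of that idea, assuming a hypothetical helper (not the actual validators.py change under review):

    # Hypothetical sketch of byte-length validation for metadata values;
    # the function name and limit are illustrative assumptions.
    def validate_metadata_value(value, max_bytes=255):
        # len() on a str counts characters; encode first so multi-byte
        # UTF-8 characters are measured by their stored size.
        if len(value.encode('utf-8')) > max_bytes:
            raise ValueError('metadata value exceeds %d bytes' % max_bytes)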