14:00:01 <whoami-rajat> #startmeeting cinder
14:00:01 <opendevmeet> Meeting started Wed May 10 14:00:01 2023 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:01 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:01 <opendevmeet> The meeting name has been set to 'cinder'
14:00:06 <whoami-rajat> #topic roll call
14:00:15 <eharney> hi
14:01:27 <felipe_rodrigues> hi
14:01:47 <enriquetaso> hi
14:01:48 <thiagoalvoravel> o/
14:01:52 <MatheusAndrade[m]> o/
14:02:07 <rosmaita> o/
14:03:08 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-bobcat-meetings
14:03:12 <happystacker> Hello!
14:03:49 <helenadantas[m]> o/
14:04:19 <caiquemello[m]> hi
14:05:22 <whoami-rajat> hello
14:05:24 <whoami-rajat> let's get started
14:05:29 <whoami-rajat> #topic announcements
14:05:40 <whoami-rajat> first, Cinderlib 2023.1 Antelope (5.1.0) released
14:05:53 <whoami-rajat> we've released cinderlib for 2023.1 Antelope with tag 5.1.0
14:06:05 <whoami-rajat> #link https://pypi.org/project/cinderlib/5.1.0/
14:06:20 <yuval> hello
14:06:34 <whoami-rajat> i think the next step is to modify the zuul and tox files to open cinderlib for 2023.2 Bobcat development
14:07:38 <whoami-rajat> next, Runtime update for 2023.2: Test libraries against py38
14:07:46 <whoami-rajat> #link https://review.opendev.org/c/openstack/governance/+/882165
14:08:11 <whoami-rajat> there was a problem with some jobs breaking when we removed py38 support
14:08:23 <whoami-rajat> the supported runtimes for 2023.2 are py39 and py310
14:08:47 <whoami-rajat> but there is a patch up in governance project to make py38 a runtime for libraries
14:09:20 <eharney> does that mean cinder too, to support cinderlib?
14:10:14 <whoami-rajat> good question, since cinderlib depends on cinder's requirements we might need to
14:10:20 <whoami-rajat> but there is still ongoing discussion so things might change
14:10:27 <rosmaita> i think that's a "yes"
14:10:33 * jungleboyj sneaks in late.
14:12:01 <whoami-rajat> yes, i don't think there is any harm in supporting py38 for another release; the jobs don't consume a lot of gate resources and we can ensure py38 compatibility
14:13:14 <whoami-rajat> anyway, let's see how this will be finalized, maybe we will end up having py38 for all projects
14:13:38 <whoami-rajat> next, Query on A/A
14:14:05 <whoami-rajat> so raghavendra, who works on the HPE driver, sent a query on the ML regarding A/A support for their driver
14:14:13 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-May/033577.html
14:14:43 <whoami-rajat> and i don't think we have a proper docs page that describes the changes required by drivers to enable A/A
14:15:34 <whoami-rajat> i was able to find this doc, but it might contain a lot of info irrelevant to implementing active/active support itself
14:15:35 <whoami-rajat> #link https://docs.openstack.org/cinder/latest/contributor/high_availability.html
14:16:30 <whoami-rajat> just wanted to bring this up for attention: if more and more vendors want to enable A/A, this is something we need to consider
14:16:46 <happystacker> agreed! ;-)
14:17:38 <eharney> much of the effort is in understanding how your driver works and whether it is designed in a way where running multiple instances of it in HA will break its assumptions (local state, etc)
14:20:35 <whoami-rajat> that's a good point, maybe we can document all the generic things common to all drivers and add points like this ^ for driver vendors to test out scenarios
14:21:03 <whoami-rajat> the next vendor implementing this can also consider documenting the steps
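A minimal sketch of the driver-side change under discussion, assuming the pattern used by existing in-tree drivers: a driver opts into A/A by setting the SUPPORTS_ACTIVE_ACTIVE class attribute, and the operator then groups backends via the cluster option in cinder.conf. The vendor driver class below is hypothetical.

    # Hypothetical vendor driver opting into active/active support.
    from cinder.volume import driver

    class MyVendorDriver(driver.VolumeDriver):
        # Cinder checks this flag before letting the backend run
        # clustered; only set it after auditing the driver for
        # node-local state (caches, file locks, etc.) that would break
        # with multiple concurrent instances, as eharney notes above.
        SUPPORTS_ACTIVE_ACTIVE = True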
14:22:21 <whoami-rajat> next, Bobcat Midcycle - 1
14:22:26 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-bobcat-midcycles
14:22:34 <whoami-rajat> please add topics for the midcycle
14:23:32 <whoami-rajat> also i was thinking if there are enough people attending the Vancouver summit, we can use the midcycle topics for the PTG there
14:23:52 <whoami-rajat> i mean skip the midcycle and discuss the topics in vancouver
14:24:41 <whoami-rajat> in any case, do add topics so we will have something to discuss :D
14:25:43 <whoami-rajat> next, Upcoming events
14:25:50 <whoami-rajat> Bobcat-1 milestone: May 11th, 2023
14:26:15 <whoami-rajat> i think we have client and library releases proposed for M-1
14:26:46 <whoami-rajat> I will go through them to see if we've merged patches that need a release
14:26:53 <whoami-rajat> OpenInfra Summit Vancouver (including PTG): June 13-15, 2023
14:27:08 <whoami-rajat> Vancouver summit+PTG is next month
14:27:25 <whoami-rajat> please plan accordingly if you're going to travel there
14:28:01 <whoami-rajat> that's all for announcements
14:28:14 <whoami-rajat> let's move to topics
14:28:17 <whoami-rajat> #topic Add the next job to periodic jobs
14:28:19 <whoami-rajat> enriquetaso, that's you
14:28:25 <enriquetaso> hello
14:28:41 <enriquetaso> Hello, I'm currently working on fixing the Ceph backup driver. You can find my work here:
14:28:49 <enriquetaso> #link https://review.opendev.org/c/openstack/cinder/+/880965
14:28:54 <enriquetaso> I've updated the commit message ^ because eharney identified the patch that introduced the bug. (I've added that info to the commit message.) If you have some spare time, I would greatly appreciate a review.
14:29:01 <enriquetaso> Once the patch in the master branch is merged, I plan to propose the backports.
14:29:08 <enriquetaso> In addition to the fix, I've proposed a non-voting job
14:29:14 <enriquetaso> which you can find here:
14:29:18 <enriquetaso> #link https://review.opendev.org/c/openstack/cinder/+/881032
14:29:41 <enriquetaso> to display results. However, I believe we're at our limit for adding additional Ceph jobs to the CI. Consequently, I'm considering adding this job to the periodic jobs, allowing it to run at least once a week.
14:29:52 <enriquetaso> Is this feasible?
14:30:45 <enriquetaso> If so, I could use some guidance on adding jobs to the periodic queue.
14:31:22 <rosmaita> i can help you there
14:32:03 <rosmaita> here's an example: https://opendev.org/openstack/glance/src/branch/master/.zuul.yaml
14:32:12 <enriquetaso> yay
14:32:13 <rosmaita> glance runs a bunch of periodic jobs
14:32:44 <rosmaita> it's basically the same as a normal job, you just put it in the 'periodic' section
14:32:53 <rosmaita> i think by default they run once a day
14:33:43 <enriquetaso> thanks rosmaita, i'll check where the periodic section is in cinder
14:34:00 <rosmaita> i don't think we have one, you can add it
14:34:25 <enriquetaso> excellent
14:35:24 <enriquetaso> thanks!
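For reference, a hedged sketch of what a periodic section in cinder's .zuul.yaml could look like, following the glance example linked above; the job name is hypothetical, and zuul's periodic pipeline runs daily by default.

    - project:
        periodic:
          jobs:
            # Same job definition as a check/gate job, just attached
            # to the periodic pipeline so it runs on a schedule
            # instead of on every patch.
            - cinder-tempest-lvm-ceph-backup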
14:35:25 <whoami-rajat> if the idea of the new job is to only test backup/restore, we can use a regex to only run backup/restore tests
14:36:08 <whoami-rajat> right now i think it's doing some redundant testing, the same as its parent job cinder-plugin-ceph-tempest
14:36:09 <enriquetaso> The goal is to have only LVM as the volume backend and ceph as the backup driver
14:36:34 <enriquetaso> i think I need to add a new job; otherwise, restore will be ceph to ceph
14:37:16 <whoami-rajat> I'm not saying to use the cinder-plugin-ceph-tempest job
14:37:34 <whoami-rajat> what I'm trying to say is we can limit the tests running in the new job
14:37:53 <whoami-rajat> since the volume tests would run the same as in the LVM job, i guess
14:38:15 <whoami-rajat> my idea is to use a regex to limit this job to run only backup/restore tests
14:38:23 <enriquetaso> aahh, i understand
14:39:21 <enriquetaso> i have no problem with limiting the number of tests with a regex if that's a better alternative
14:39:53 <enriquetaso> i don't know if it's something we want to run on every patch though
14:40:19 <enriquetaso> but sure, i can update the patch to add the regex
14:41:33 <whoami-rajat> here I added a new job to run a single test because i required two different images for that test
14:41:34 <whoami-rajat> https://review.opendev.org/c/openstack/tempest/+/831018/25/tox.ini
14:41:59 <whoami-rajat> might not be the best example to reference but just to give an idea
14:42:17 <enriquetaso> looks good! I'll update my job then
14:42:28 <whoami-rajat> great, thanks enriquetaso
14:42:48 <enriquetaso> @all Please review the main fix when possible!
14:42:55 <enriquetaso> thanks!
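A sketch of the regex idea just discussed, assuming the new job derives from the tempest-based zuul jobs mentioned above; tempest_test_regex is the standard variable those jobs expose, and the pattern is only an illustrative guess at matching the backup/restore tests.

    - job:
        name: cinder-tempest-lvm-ceph-backup   # hypothetical name
        parent: cinder-plugin-ceph-tempest     # assumed parent job
        vars:
          # Limit the run to volume backup/restore tests instead of
          # repeating the full volume suite covered by other jobs.
          tempest_test_regex: 'tempest\.api\.volume.*backup'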
14:43:25 <whoami-rajat> ok, moving on to the next topic
14:43:28 <whoami-rajat> #topic Cinder-Backup very slow / inefficient when using chunked drivers, e.g. S3
14:43:40 <whoami-rajat> crohmann, that's you
14:44:22 <zaitcev> I was going to look into it too. Do we have a bug number?
14:45:05 <whoami-rajat> zaitcev, i think it's this one
14:45:07 <whoami-rajat> #link https://bugs.launchpad.net/cinder/+bug/1918119
14:45:33 <whoami-rajat> reading the etherpad, i think crohmann couldn't make it today
14:45:54 <whoami-rajat> let's discuss this again when he's around
14:46:13 <whoami-rajat> moving on
14:46:25 <whoami-rajat> #topic Need help with zuul errors on https://review.opendev.org/c/openstack/cinder/+/868485
14:46:30 <whoami-rajat> drencrom, that's you
14:46:42 <drencrom> Hi, we discussed this issue previously
14:47:20 <drencrom> I did a patch to check for the length in bytes of the metadata values
14:47:36 <drencrom> But now zuul is failing and it does not seem to be related to my patch
14:50:53 <happystacker> it seems that cinder-plugin-ceph-tempest and tempest-integrated-storage-ubuntu-focal are causing the failure; I don't feel this is connected to your change
14:51:17 <whoami-rajat> 2023-04-24 13:01:11.268256 | controller | ERROR: No matching distribution found for tooz===4.0.0 (from -c /opt/stack/requirements/upper-constraints.txt (line 550))
14:51:28 <whoami-rajat> i remember tooz issue relating to py38
14:51:37 <drencrom> Yes, I'm asking in case you have already seen this in other patches.
14:52:52 <drencrom> Also would like you to validate that my patch to validators.py is sound
14:53:14 <eharney> hmm, tooz 4.0.0 doesn't support py38
14:58:54 <whoami-rajat> maybe they were pinning tooz for py38 compatibility
14:59:06 <whoami-rajat> but we need to wait for TC to come up with a resolution
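For context, a hedged sketch of how a python-version-conditional pin looks in the requirements repo's upper-constraints.txt, using standard PEP 508 environment markers; the 3.2.0 version below is illustrative, not an actual proposal.

    # Keep an older tooz for py38 environments while py39+ gets 4.0.0.
    tooz===3.2.0;python_version=='3.8'
    tooz===4.0.0;python_version>='3.9'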
15:00:10 <whoami-rajat> we're out of time
15:00:15 <whoami-rajat> thanks everyone for joining
15:00:18 <whoami-rajat> #endmeeting