14:01:04 <whoami-rajat> #startmeeting cinder
14:01:04 <opendevmeet> Meeting started Wed Sep  6 14:01:04 2023 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:04 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:04 <opendevmeet> The meeting name has been set to 'cinder'
14:01:08 <Saikumar> o/
14:01:12 <whoami-rajat> #topic roll call
14:01:16 <jungleboyj> o/
14:01:20 <eharney> hi
14:01:24 <rosmaita> o/
14:01:43 <msaravan> hi
14:01:45 <felipe_rodrigues> o/
14:01:56 <akawai> o/
14:02:05 <toheeb> o/
14:02:39 <jbernard> o/
14:02:45 <thiagoalvoravel> o/
14:03:09 <caiquemello[m]> o/
14:03:24 <jayaanand> hi
14:04:03 <Dessira_> o/
14:04:29 <geguileo> o/
14:04:31 <simondodsley> o/
14:05:52 <whoami-rajat> hello everyone
14:05:58 <whoami-rajat> let's get started
14:06:03 <whoami-rajat> #topic announcements
14:06:08 <whoami-rajat> first, Midcycle 2 Summary
14:06:13 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-September/034946.html
14:06:28 <whoami-rajat> Midcycle 2 summary is available at the Cinder wiki
14:06:36 <whoami-rajat> #link https://wiki.openstack.org/wiki/CinderBobcatMidCycleSummary
14:06:59 <whoami-rajat> next, TC Election Results
14:07:05 <whoami-rajat> #link https://civs1.civs.us/cgi-bin/results.pl?id=E_41d42603087bcf58
14:07:18 <whoami-rajat> following are the 4 candidates that got selected as TC this time
14:07:19 <whoami-rajat> Ghanshyam Mann (gmann)
14:07:20 <whoami-rajat> Dan Smith (dansmith)
14:07:20 <whoami-rajat> Jay Faulkner (JayF)
14:07:20 <whoami-rajat> Dmitriy Rabotyagov (noonedeadpunk)
14:08:02 <whoami-rajat> next, Recheck state (past week)
14:08:07 <whoami-rajat> #link https://etherpad.opendev.org/p/recheck-weekly-summary
14:08:27 <whoami-rajat> last week we had 2 bare rechecks out of 22 total rechecks
14:08:28 <whoami-rajat> | Team               | Bare rechecks | All Rechecks | Bare rechecks [%] |
14:08:28 <whoami-rajat> | cinder             | 2             | 22           | 9.09              |
14:08:35 <whoami-rajat> which is a good number
14:09:33 <whoami-rajat> just to reiterate, if the gate fails, it's always good to check the reason, even if it's a random failure, and put a recheck comment with that particular reason
14:10:09 <whoami-rajat> for example: recheck cinder-barbican-lvm-lio job failed because of test X failing with an SSH timeout
14:10:32 <whoami-rajat> another thing is the 90 days number
14:10:33 <whoami-rajat> | Team               | Bare rechecks | All Rechecks | Bare rechecks [%] |
14:10:33 <whoami-rajat> | cinder             | 112           | 356          | 31.46             |
14:10:43 <whoami-rajat> 112 bare rechecks out of 356 total
14:11:00 <whoami-rajat> which isn't bad percentage-wise (31.46%) but it's still good to improve on
14:11:02 <eharney> are we finding any particular patterns in the rechecks?
14:11:32 <rosmaita> no, i don't think anyone is analyzing that
14:12:19 <whoami-rajat> for my patches, cinder-tempest-plugin-lvm-lio-barbican fails with SSHTimeout in some test
14:12:31 <whoami-rajat> the test is random
14:12:39 <whoami-rajat> but I haven't dug much deeper into it
14:13:47 <happystacker> I have issues with cinder-tempest-plugin-lvm-lio-barbican and devstack-plugin-nfs-tempest-full from time to time
14:14:11 <whoami-rajat> it's better if we follow up on the recommendations discussed during the midcycle
14:14:19 <whoami-rajat> and see if they make any difference
14:18:35 <whoami-rajat> i see this patch from rosmaita where the ceph tempest job is passing after applying the mysql-reduce-memory change, but we need more evidence to be certain https://review.opendev.org/c/openstack/cinder/+/893798
14:19:29 <rosmaita> yeah, there was some discussion that the mysql-reduce-memory thing was turned on by default, so that patch may be unnecessary
14:19:51 <whoami-rajat> oh ok
14:19:56 <whoami-rajat> we can check on that
14:19:59 <rosmaita> but i don't think it's on by default for the parents of those jobs
14:21:57 <whoami-rajat> ok
14:22:44 <whoami-rajat> i remember it was enabled in some tempest/devstack base jobs but it's good to check
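(For context, a minimal sketch of what such an opt-in could look like in a job definition, assuming the devstack base job exposes a mysql_reduce_memory variable; the job name, parent, and variable name here are illustrative assumptions, not taken from the actual patch:

    - job:
        name: cinder-plugin-ceph-tempest
        parent: devstack-plugin-ceph-tempest-py3
        vars:
          # assumed devstack knob that swaps in a reduced-memory MySQL configuration
          mysql_reduce_memory: true

If the parent devstack/tempest jobs already set such a variable to true by default, a child job would simply inherit it and the extra patch becomes unnecessary, which is the open question above.)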
14:23:08 <whoami-rajat> last announcement, Devstack dropped support for Focal
14:23:28 <rosmaita> (only in master, though)
14:23:51 <whoami-rajat> good correction
14:24:20 <whoami-rajat> the mail says it was planned for Caracal, but nova bumped the libvirt version
14:24:24 <whoami-rajat> so they had to remove the job
14:24:30 <whoami-rajat> even tempest removed its focal job
14:24:36 <rosmaita> i haven't looked, i don't think we had any focal jobs? except maybe rbd-iscsi-client?
14:24:47 <whoami-rajat> with a quick search i couldn't find any usage of those jobs
14:24:48 <whoami-rajat> devstack-platform-ubuntu-focal or tempest-full-ubuntu-focal jobs
14:25:06 <whoami-rajat> rosmaita, i couldn't find us using those jobs anywhere ^
14:25:28 <rosmaita> sometimes we define a nodeset for our jobs, though
14:26:14 <rosmaita> ok, no nodeset specified in rbd-iscsi-client .zuul.yaml
14:27:16 <whoami-rajat> i can see that nodeset (openstack-single-node-focal) used in cinder-tempest-plugin for stable branch jobs, so we should be good?
14:27:40 <whoami-rajat> https://opendev.org/openstack/cinder-tempest-plugin/src/branch/master/.zuul.yaml
14:27:57 <rosmaita> yes, i think the problem is only if you use devstack master with focal
14:28:51 <whoami-rajat> because the libvirt version bump is only in master, so we are good
14:28:53 <whoami-rajat> thanks for confirming
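(For reference, a rough sketch of the kind of stable-branch job definition being discussed in cinder-tempest-plugin's .zuul.yaml; the job name, parent, and branch are illustrative assumptions, only the nodeset value comes from the discussion above:

    - job:
        name: cinder-tempest-plugin-basic-zed
        parent: cinder-tempest-plugin-basic
        nodeset: openstack-single-node-focal
        override-checkout: stable/zed

A job like this pins its nodeset explicitly and checks out a stable devstack branch, so it keeps running on focal and is unaffected by devstack master dropping focal support.)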
14:29:32 <whoami-rajat> so, that's all for the announcements
14:29:40 <whoami-rajat> and i made a mistake in one of the announcements
14:29:48 <whoami-rajat> regarding the TC elections
14:30:18 <whoami-rajat> rosmaita can correct me and explain the details better
14:30:43 <rosmaita> well, what is happening is that the election is *starting* now
14:30:55 <rosmaita> but something has changed with the way you register to vote
14:31:03 <rosmaita> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-September/034981.html
14:31:19 <rosmaita> it used to be that the election coordinator gave a list to the voting website
14:31:39 <rosmaita> now, it's a list + you personally have to opt in to be able to vote
14:31:50 <rosmaita> so, you only have 9 hours to do that
14:32:03 <rosmaita> just to be clear
14:32:16 <rosmaita> you won't be able to vote unless you follow the instructions in the email
14:32:26 <rosmaita> before 23:45 UTC *today*
14:33:31 <whoami-rajat> thanks rosmaita !
14:34:39 <whoami-rajat> so please register and vote for the TC candidates of your choice
14:35:33 <whoami-rajat> now that's ACTUALLY all for the announcements
14:35:41 <whoami-rajat> let's move to topics
14:35:54 <whoami-rajat> #topic Feature Reviews
14:36:05 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-September/034948.html
14:36:44 <whoami-rajat> FFEs were granted for 6 features, out of which none have merged so far
14:37:01 <whoami-rajat> some features have dependencies on other patches which need to be reviewed first
14:37:10 <whoami-rajat> let's go through them one by one
14:37:18 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-2023.2-bobcat-features
14:37:23 <whoami-rajat> first, Fujitsu Driver: Add QoS support
14:37:28 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/847730
14:37:39 <inori> Here I am,
14:37:57 <whoami-rajat> reviews on this feature have been requested from time to time, I've taken a look twice and it looks good
14:38:10 <whoami-rajat> I would like another core to volunteer to take a look at it
14:38:13 <whoami-rajat> inori, hey
14:38:31 <inori> Thanks for your code-review +2 and review-priority, rajat.
14:39:27 <whoami-rajat> np, it's a review priority since we won't merge any feature after this week!
14:39:35 <inori> This feature is crucial for us, so we need another core reviewer to review it.
14:39:46 <rosmaita> ok, i will sign up
14:39:54 <jbernard> i've finished my stable stuff, will try to help out on some of these now
14:40:09 <inori> Thank you rosmaita
14:40:10 <whoami-rajat> great, thanks rosmaita
14:40:56 <whoami-rajat> jbernard, thanks, we have more features that can benefit from reviews
14:41:11 <whoami-rajat> ok next, NetApp ONTAP: Added support to Active/Active mode in NFS driver
14:41:20 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/889826
14:41:27 <whoami-rajat> there were 3 patches for this feature
14:41:35 <whoami-rajat> 1 is merged and another already has 2 +2s
14:41:43 <whoami-rajat> this one requires another review and we are good to go here
14:42:59 <jungleboyj> Looking.
14:43:01 <whoami-rajat> again, we need a volunteer to sign up for this review https://etherpad.opendev.org/p/cinder-2023.2-bobcat-features#L22
14:43:05 <whoami-rajat> it's a small change actually
14:43:26 <whoami-rajat> jungleboyj, thanks!
14:44:20 <whoami-rajat> next, [NetApp] LUN space-allocation support for iSCSI
14:44:28 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/893106
14:45:05 <whoami-rajat> as per my last discussion with geguileo, the support they are trying to add still comes under thin provisioning
14:45:10 <whoami-rajat> specifically this part: It enables ONTAP to reclaim space automatically when host deletes data.
14:45:41 <geguileo> in my opinion that's thin provisioning
14:45:45 <whoami-rajat> when the host reads/deletes data and it supports thin provisioning, then NetApp should be able to allocate or reclaim space based on that
14:46:04 <geguileo> without the possibility of reclaiming space with the trim/discard/unmap commands, then it's not really thin
14:46:33 <geguileo> what I don't know is if they should do that automatically when the pool is thin
14:48:07 <whoami-rajat> i think this feature can use some more discussion and is a good topic for the PTG; for now it doesn't seem straightforward to include it in the release
14:49:30 <whoami-rajat> jayaanand, thanks for your efforts but the cinder team is still not convinced that the *proposed* way is the correct way to implement this feature
14:50:12 <whoami-rajat> let's continue discussion on it and try to target it for the Caracal release
14:50:37 <whoami-rajat> ok moving on
14:50:39 <whoami-rajat> next, [Pure Storage] Replication-Enabled and Snapshot Consistency Groups
14:50:55 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/891234
14:51:09 <whoami-rajat> so the feature looks good, the problem is i couldn't find UTs for the new code added
14:51:16 <jayaanand> ok, thank you! we will take up in PTG
14:51:33 <whoami-rajat> i talked to simondodsley but he said the dev who works on UTs is out this week
14:52:05 <simondodsley> yep - sorry my mock-fu is not good
14:52:05 <whoami-rajat> so should we allow this feature and agree to do the UTs in a followup, or block the feature due to the missing UTs?
14:53:08 <whoami-rajat> I'm assuming the code path is properly tested, but a syntax error anywhere could break the operation (past experience)
14:53:23 <whoami-rajat> so wanted to know the team's opinion on it
14:56:09 * whoami-rajat hears crickets
14:57:52 <jbernard> i think (personally) since simon has been with us for quite some time, that it's okay
14:58:20 <whoami-rajat> jbernard, cool, thanks for your input
14:58:55 <whoami-rajat> I'm OK then to +2 it if simondodsley can reply to my comment saying UTs will be added as a followup (just to keep a record of it)
14:59:16 <whoami-rajat> jbernard, would you be OK being a second reviewer on that patch?
14:59:23 <jbernard> whoami-rajat: can do
14:59:34 <whoami-rajat> thanks!
14:59:56 <whoami-rajat> finally this is the last feature
15:00:02 <whoami-rajat> but we have no time left for this discussion
15:00:06 <whoami-rajat> [HPE XP] Support HA and data deduplication
15:00:13 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/892608
15:00:21 <whoami-rajat> I've left a comment that 2 features shouldn't be part of the same patch
15:00:34 <whoami-rajat> we can continue discussion on the patch itself
15:00:48 <whoami-rajat> its dependent patches all have +2
15:00:55 <whoami-rajat> need another reviewer to take a look
15:01:00 <whoami-rajat> we're out of time
15:01:06 <whoami-rajat> i will move the other topics to the next meeting
15:01:11 <whoami-rajat> thanks everyone for joining!
15:01:14 <whoami-rajat> #endmeeting