Wednesday, 2023-09-06

12:01 *** bauzas_ is now known as bauzas
12:27 *** dviroel_ is now known as dviroel
13:07 *** bauzas_ is now known as bauzas
14:01 <whoami-rajat> #startmeeting cinder
14:01 <opendevmeet> Meeting started Wed Sep  6 14:01:04 2023 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01 <opendevmeet> The meeting name has been set to 'cinder'
14:01 <Saikumar> o/
14:01 <whoami-rajat> #topic roll call
14:01 <jungleboyj> o/
14:01 <eharney> hi
14:01 <rosmaita> o/
14:01 <msaravan> hi
14:01 <felipe_rodrigues> o/
14:01 <akawai> o/
14:02 <toheeb> o/
14:02 <jbernard> o/
14:02 <thiagoalvoravel> o/
14:03 <caiquemello[m]> o/
14:03 <jayaanand> hi
14:04 <Dessira_> o/
14:04 <geguileo> o/
14:04 <simondodsley> o/
14:05 <whoami-rajat> hello everyone
14:05 <whoami-rajat> let's get started
14:06 <whoami-rajat> #topic announcements
14:06 <whoami-rajat> first, Midcycle 2 Summary
14:06 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-September/034946.html
14:06 <whoami-rajat> the Midcycle 2 summary is available on the Cinder wiki
14:06 <whoami-rajat> #link https://wiki.openstack.org/wiki/CinderBobcatMidCycleSummary
14:06 <whoami-rajat> next, TC Election Results
14:07 <whoami-rajat> #link https://civs1.civs.us/cgi-bin/results.pl?id=E_41d42603087bcf58
14:07 <whoami-rajat> the following are the 4 candidates that got selected for the TC this time
14:07 <whoami-rajat> Ghanshyam Mann (gmann)
14:07 <whoami-rajat> Dan Smith (dansmith)
14:07 <whoami-rajat> Jay Faulkner (JayF)
14:07 <whoami-rajat> Dmitriy Rabotyagov (noonedeadpunk)
14:08 <whoami-rajat> next, Recheck state (past week)
14:08 <whoami-rajat> #link https://etherpad.opendev.org/p/recheck-weekly-summary
14:08 <whoami-rajat> last week we had 2 bare rechecks out of 22 total rechecks
14:08 <whoami-rajat> | Team               | Bare rechecks | All Rechecks | Bare rechecks [%] |
14:08 <whoami-rajat> | cinder             | 2             | 22           | 9.09              |
14:08 <whoami-rajat> which is a good number
14:09 <whoami-rajat> just to reiterate: if the gate fails, it's always good to check the reason, even if it's a random failure, and put a recheck comment with that particular reason
14:10 <whoami-rajat> for example: "recheck cinder-barbican-lvm-lio job failed because of X test failure SSH Timeout"
14:10 <whoami-rajat> another thing is the 90-day number
14:10 <whoami-rajat> | Team               | Bare rechecks | All Rechecks | Bare rechecks [%] |
14:10 <whoami-rajat> | cinder             | 112           | 356          | 31.46             |
14:10 <whoami-rajat> 112 bare rechecks out of 356 total
14:11 <whoami-rajat> which isn't bad percentage-wise (31.46) but it's still good to improve on it
14:11 <eharney> are we finding any particular patterns in the rechecks?
14:11 <rosmaita> no, i don't think anyone is analyzing that
14:12 <whoami-rajat> for my patches, cinder-tempest-plugin-lvm-lio-barbican fails with SSHTimeout in some test
14:12 <whoami-rajat> the test is random
14:12 <whoami-rajat> but I haven't dug much deeper into it
14:13 <happystacker> I have issues with cinder-tempest-plugin-lvm-lio-barbican and devstack-plugin-nfs-tempest-full from time to time
14:14 <whoami-rajat> it's better if we follow up on the recommendations discussed during the midcycle
14:14 <whoami-rajat> and see if it makes any difference
14:18 <whoami-rajat> i see this patch from rosmaita where the ceph tempest job is passing after applying the mysql thing, but we need more evidence to be certain https://review.opendev.org/c/openstack/cinder/+/893798
14:19 <rosmaita> yeah, there was some discussion that the mysql-reduce-memory thing was turned on by default, so that patch may be unnecessary
14:19 <whoami-rajat> oh ok
14:19 <whoami-rajat> we can check on that
14:19 <rosmaita> but i don't think it's on by default for the parents of those jobs
14:21 <whoami-rajat> ok
14:22 <whoami-rajat> i remember it was enabled in some tempest/devstack base jobs but it's good to check
14:23 <whoami-rajat> last announcement: Devstack dropped support for Focal
14:23 <rosmaita> (only in master, though)
14:23 <whoami-rajat> good correction
14:24 <whoami-rajat> the mail says it was planned for Caracal, but nova bumped the libvirt version
14:24 <whoami-rajat> so they had to remove the job
14:24 <whoami-rajat> even tempest removed its focal job
14:24 <rosmaita> i haven't looked, but i don't think we had any focal jobs? except maybe rbd-iscsi-client?
14:24 <whoami-rajat> with a quick search i couldn't find any usage of those jobs
14:24 <whoami-rajat> the devstack-platform-ubuntu-focal or tempest-full-ubuntu-focal jobs
14:25 <whoami-rajat> rosmaita, i couldn't find us using those jobs anywhere ^
14:25 <rosmaita> sometimes we define a nodeset for our jobs, though
14:26 <rosmaita> ok, no nodeset specified in the rbd-iscsi-client .zuul.yaml
14:27 <whoami-rajat> i can see that nodeset (openstack-single-node-focal) used in cinder-tempest-plugin for stable branch jobs, so we should be good?
14:27 <whoami-rajat> https://opendev.org/openstack/cinder-tempest-plugin/src/branch/master/.zuul.yaml
14:27 <rosmaita> yes, i think the problem is only if you use devstack master with focal
14:28 <whoami-rajat> because the libvirt version bump is only in master, so we are good
14:28 <whoami-rajat> thanks for confirming
14:29 <whoami-rajat> so, that's all for the announcements
14:29 <whoami-rajat> and i made a mistake in one of the announcements
14:29 <whoami-rajat> regarding the TC elections
14:30 <whoami-rajat> rosmaita can correct it and better explain the details
14:30 <rosmaita> well, what is happening is that the election is *starting* now
14:30 <rosmaita> but something has changed with the way you register to vote
14:31 <rosmaita> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-September/034981.html
14:31 <rosmaita> it used to be that the election coordinator gave a list to the voting website
14:31 <rosmaita> now, it's a list + you personally have to opt in to be able to vote
14:31 <rosmaita> so, you only have 9 hours to do that
14:32 <rosmaita> just to be clear
14:32 <rosmaita> you won't be able to vote unless you follow the instructions in the email
14:32 <rosmaita> before 23:45 UTC *today*
14:33 <whoami-rajat> thanks rosmaita!
14:34 <whoami-rajat> so please do the registration and vote for the TC members of your choice
14:35 <whoami-rajat> now that's ACTUALLY all for the announcements
14:35 <whoami-rajat> let's move to topics
14:35 <whoami-rajat> #topic Feature Reviews
14:36 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-September/034948.html
14:36 <whoami-rajat> FFE was granted for 6 features, out of which none have merged till now
14:37 <whoami-rajat> some features have dependencies on other patches which need to be reviewed first
14:37 <whoami-rajat> let's go through them one by one
14:37 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-2023.2-bobcat-features
14:37 <whoami-rajat> first, Fujitsu Driver: Add QoS support
14:37 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/847730
14:37 <inori> Here I am,
14:37 <whoami-rajat> reviews on this feature have been requested from time to time, I've taken a look twice and it looks good
14:38 <whoami-rajat> I would like another core to volunteer to take a look at it
14:38 <whoami-rajat> inori, hey
14:38 <inori> Thanks for your code-review +2 and review-priority, rajat.
14:39 <whoami-rajat> np, it's a review priority since we won't merge any feature after this week!
14:39 <inori> This feature is crucial for us, so we need another core reviewer to review it.
14:39 <rosmaita> ok, i will sign up
14:39 <jbernard> ive finished my stable stuff, will try to help out on some of these now
14:40 <inori> Thank you rosmaita
14:40 <whoami-rajat> great, thanks rosmaita
14:40 <whoami-rajat> jbernard, thanks, we've more features that can benefit from reviews
14:41 <whoami-rajat> ok next, NetApp ONTAP: Added support for Active/Active mode in the NFS driver
14:41 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/889826
14:41 <whoami-rajat> there were 3 patches for this feature
14:41 <whoami-rajat> 1 is merged and another already has 2 +2s
14:41 <whoami-rajat> this one requires another review and we are good to go here
14:42 <jungleboyj> Looking.
14:43 <whoami-rajat> again, we need a volunteer to sign up for this review https://etherpad.opendev.org/p/cinder-2023.2-bobcat-features#L22
14:43 <whoami-rajat> it's a small change actually
14:43 <whoami-rajat> jungleboyj, thanks!
14:44 <whoami-rajat> next, [NetApp] LUN space-allocation support for iSCSI
14:44 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/893106
14:45 <whoami-rajat> as per my last discussion with geguileo, the support they are trying to add still comes under thin provisioning
14:45 <whoami-rajat> specifically this part: "It enables ONTAP to reclaim space automatically when the host deletes data."
14:45 <geguileo> in my opinion that's thin provisioning
14:45 <whoami-rajat> when the host reads/deletes data and it supports thin provisioning, then NetApp should be able to allocate or reclaim space based on that
14:46 <geguileo> without the possibility of reclaiming space with the trim/discard/unmap commands, it's not really thin
14:46 <geguileo> what I don't know is if they should do that automatically when the pool is thin
14:48 <whoami-rajat> i think this feature could use some more discussion and is a good topic for the PTG; for now it doesn't seem straightforward to include it in the release
14:49 <whoami-rajat> jayaanand, thanks for your efforts, but the cinder team is still not convinced that the *proposed* way is the correct way to implement this feature
14:50 <whoami-rajat> let's continue the discussion on it and try to target it for the Caracal release
14:50 <whoami-rajat> ok moving on
14:50 <whoami-rajat> next, [Pure Storage] Replication-Enabled and Snapshot Consistency Groups
14:50 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/891234
14:51 <whoami-rajat> so the feature looks good, the problem is i couldn't find UTs for the new code added
14:51 <jayaanand> ok, thank you! we will take it up at the PTG
14:51 <whoami-rajat> i talked to simondodsley but he said the dev who works on UTs is out this week
14:52 <simondodsley> yep - sorry my mock-fu is not good
14:52 <whoami-rajat> so should we allow this feature and agree to do the UTs in a followup, or block the feature due to the UTs?
14:53 <whoami-rajat> I'm assuming the code path is properly tested, but a syntax error anywhere can still break the operation (past experience)
14:53 <whoami-rajat> so wanted to know the team's opinion on it
14:56 * whoami-rajat hears crickets
14:57 <jbernard> i think (personally) since simon has been with us for quite some time, that it's okay
14:58 <whoami-rajat> jbernard, cool, thanks for your input
14:58 <whoami-rajat> I'm OK then to +2 it if simondodsley can reply to my comment saying UTs will be added as a followup (just to keep a record of it)
14:59 <whoami-rajat> jbernard, would you be OK being a second reviewer on that patch?
14:59 <jbernard> whoami-rajat: can do
14:59 <whoami-rajat> thanks!
14:59 <whoami-rajat> finally, this is the last feature
15:00 <whoami-rajat> but we have no time left for this discussion
15:00 <whoami-rajat> [HPE XP] Support HA and data deduplication
15:00 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/892608
15:00 <whoami-rajat> I've left a comment that 2 features shouldn't be part of the same patch
15:00 <whoami-rajat> we can continue the discussion on the patch itself
15:00 <whoami-rajat> its dependent patches all have +2
15:00 <whoami-rajat> need another reviewer to take a look
15:01 <whoami-rajat> we're out of time
15:01 <whoami-rajat> i will move the other topics to the next meeting
15:01 <whoami-rajat> thanks everyone for joining!
15:01 <whoami-rajat> #endmeeting
15:01 <opendevmeet> Meeting ended Wed Sep  6 15:01:14 2023 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:01 <opendevmeet> Minutes:        https://meetings.opendev.org/meetings/cinder/2023/cinder.2023-09-06-14.01.html
15:01 <opendevmeet> Minutes (text): https://meetings.opendev.org/meetings/cinder/2023/cinder.2023-09-06-14.01.txt
15:01 <opendevmeet> Log:            https://meetings.opendev.org/meetings/cinder/2023/cinder.2023-09-06-14.01.log.html
15:01 <jungleboyj> ++  Thanks!
17:09 *** bauzas_ is now known as bauzas
19:12 *** bauzas_ is now known as bauzas
19:36 *** bauzas_ is now known as bauzas
20:23 *** bauzas_ is now known as bauzas
21:16 *** bauzas_ is now known as bauzas
21:24 *** bauzas_ is now known as bauzas
23:24 *** bauzas_ is now known as bauzas

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!