Wednesday, 2024-01-24

whoami-rajat#startmeeting cinder14:01
opendevmeetMeeting started Wed Jan 24 14:01:45 2024 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.14:01
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.14:01
opendevmeetThe meeting name has been set to 'cinder'14:01
whoami-rajat#topic roll call14:01
rosmaitao/14:01
jungleboyjo/14:01
simondodsleyo/14:01
akawaio/14:02
inorio/14:02
gireeshhi14:02
Saikumaro/14:03
msaravanHi14:03
whoami-rajat#link https://etherpad.opendev.org/p/cinder-caracal-meetings14:03
crohmanno/14:03
whoami-rajatlet's get started14:04
whoami-rajat#topic announcements14:05
whoami-rajatfirst, Dalmatian release schedule14:05
whoami-rajat#link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/KS2WP242E4EQZ2GANROQMS4OHPQQ3ZJZ/14:05
whoami-rajat#link https://review.opendev.org/c/openstack/releases/+/90605014:05
jayaanando/14:06
whoami-rajatthe D release schedule is proposed, If you are interested you can take a look and plan accordingly14:06
whoami-rajatnext, RDO at CentOS Connect and FOSDEM14:06
whoami-rajat#link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/JVBVRUVZDEGQYSIBWVWQI4CBOYKHNQ3U/14:07
whoami-rajatOpportunities to learn more about RDO in centos connect and FOSDEM in Brussels, BE14:07
whoami-rajatcentos connect: January 31st and February 1st14:07
whoami-rajatFOSDEM: February 2nd and 3rd14:07
whoami-rajatthese are in-person events, so people who are attending can keep a lookout for RDO-related sessions if you are eager to learn more14:07
whoami-rajatfinally we have some upcoming milestones14:08
whoami-rajatNon-client library freeze:     February 22nd, 2024 (R-6 week)14:08
whoami-rajatClient library freeze:         February 29th, 2024 (R-5 week)14:08
whoami-rajatCaracal-3 milestone:           February 29th, 2024 (R-5 week)14:08
whoami-rajat2024.1 Caracal final release:  April 3rd, 202414:08
whoami-rajat2024.2 'D' virtual PTG ( https://openinfra.dev/ptg/ ): April 8th-12th, 202414:08
whoami-rajatthat's all for announcements14:10
whoami-rajatthere are some cinder related questions on the ML14:10
whoami-rajatwhich to my understanding were answered14:10
whoami-rajatbut good to see people raising deployment specific issues and other members helping them out14:11
rosmaita\o/14:12
whoami-rajatdoes anyone have anything else for announcements?14:13
whoami-rajatlooks like not14:16
whoami-rajatlet's get to the topics14:16
whoami-rajat#topic Ceph caps for Cinder / Glance14:16
whoami-rajat#link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/message/5SVYDYOXWMG4TJKWEA6BFMPZZGC3Q5CS/14:16
whoami-rajatI've been busy for the past couple of days but i will make a note to reply to it today14:17
whoami-rajatand please ping me tomorrow if i don't14:17
whoami-rajatsorry for the delay here14:17
crohmannthanks a bunch whoami-rajat! I will do the patches, but I'd rather have the proper caps down before I even push anything14:18
whoami-rajatI can check and provide the info about cinder caps, but do we have info from glance and nova team about their caps?14:18
whoami-rajator you are planning to just do a cinder specific doc effort14:19
crohmannno. I'd rather have them all down. For operators it's just "a list of users + caps"14:19
whoami-rajatokay, i will add a reply from cinder side on it (and the team can correct me if something is not accurate)14:20
crohmannCould you maybe drag someone from Glance / Nova into the conversation on the ML to have this done all in one go?14:20
crohmannnot one reply ... I mean in one effort / a few days.14:21
rosmaitacrohmann: i will put something on the glance agenda, though i think the meeting may be cancelled this week14:22
whoami-rajatI can ask the teams for their input and mention the benefit it will have for operators14:22
rosmaitacrohmann: it would probably help to raise a bug and add all appropriate projects to it14:23
crohmannI will do that then. whoami-rajat do you want to hold off your reply until there is a bug then, or use the ML first?14:24
whoami-rajatcrohmann, sure, let me know once it's filed14:24
crohmannk.14:24
whoami-rajatthanks crohmann for keeping on top of it, anything else on this topic?14:25
crohmannnot from me.14:25
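[For context on the caps being discussed: the upstream Ceph documentation's recommended caps for the OpenStack client users look roughly like the following. The client names and pool names (`images`, `volumes`, `vms`, `backups`) are the documentation's examples and are deployment-specific; actual caps for each service are exactly what the ML thread aims to pin down.]

```shell
ceph auth get-or-create client.glance mon 'profile rbd' \
    osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
ceph auth get-or-create client.cinder mon 'profile rbd' \
    osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images' \
    mgr 'profile rbd pool=volumes, profile rbd pool=vms'
ceph auth get-or-create client.cinder-backup mon 'profile rbd' \
    osd 'profile rbd pool=backups' mgr 'profile rbd pool=backups'
```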
whoami-rajatokay, so we can move to the next topic14:26
whoami-rajat#topic Any update on the performance issues with cinder-backup and the chunked driver (e.g. S3)14:26
whoami-rajat#link https://bugs.launchpad.net/cinder/+bug/191811914:26
whoami-rajatcrohmann, that's you again14:27
crohmannThis is me again asking for a follow up / current state.14:27
whoami-rajati don't like to ping zaitcev again and again but only he can answer it14:27
crohmannI'd like to know if this is treated as a real issue (S3 not being usably fast for production).14:28
rosmaitawell, it's not not being treated as a real issue14:28
rosmaitapete is working on a few different backup items, and one of them has caused some slowdown due to database differences14:28
rosmaitaso it's still on his radar, just probably not much progress this week14:29
whoami-rajatI haven't heard of a lot of people using S3 for backups, most of the user survey answers mention Ceph, but happy to be corrected about it14:29
rosmaitahe's in utc-6 time zone14:29
crohmannI did not mean to put this on him personally. The question was more of a general nature, about getting cinder-backup using the chunked driver to a different performance level14:29
crohmannwhoami-rajat: Yeah. But if you look at the referenced but, Cern wanted to use S3, but it's simply too slow due to implementation inefficiencies.14:30
crohmanns/but/bug/14:30
crohmannThey even spoke about this in their talk ;-)14:30
rosmaitacrohmann: slow because of the chunked driver, or slow on the S3 side?14:31
crohmannAnd it's not just "S3", but all non-rbd drivers that are based on the chunked driver14:31
crohmannSince rbd is just an "rbd export | rbd import" with no data hitting Python at all, it's a whole different ball game14:32
crohmannthe "chunked driver" on the other hand actively reads "chunks" from the source volume and mangles them until they end up being uploaded to some (remote) storage. Be it S3, GCP or others14:33
crohmanns/GCP/GCS/14:33
crohmannsee https://bugs.launchpad.net/cinder/+bug/1918119 for some of the analysis we did to narrow down where the time is spent14:34
whoami-rajatso the improvement is not specific to S3 but any driver inheriting from chunked driver, this makes the effort more generalized14:34
crohmannYes whoami-rajat 14:34
zaitcevThe performance of the chunkdriver is in my QC goals and literally is my only over-arching bug. So it's either that or my neck.14:35
crohmannThe S3 driver is simply the last bit of the processing (the upload bit) to remote storage. The heavy lifting was already done to the data at that point14:35
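[The read-mangle-upload flow crohmann describes can be sketched roughly as below. `FakeObjectStore`, the 4 MiB chunk size, and the zlib compression step are illustrative assumptions, not Cinder's actual chunked-driver code; the point is that every byte passes through the Python interpreter, unlike the rbd driver's `rbd export | rbd import` path.]

```python
import io
import zlib

# Hypothetical chunk size; the real chunked driver makes this configurable.
CHUNK_SIZE = 4 * 1024 * 1024


class FakeObjectStore:
    """Illustrative stand-in for a remote object store (S3, GCS, ...)."""

    def __init__(self):
        self.objects = {}

    def put(self, name, data):
        self.objects[name] = data


def backup_volume(volume, store):
    """Read the source volume chunk by chunk, compress each chunk in
    Python, and upload it to the store; return the chunk count."""
    index = 0
    while True:
        chunk = volume.read(CHUNK_SIZE)
        if not chunk:
            break
        store.put("backup-%05d" % index, zlib.compress(chunk))
        index += 1
    return index


# A 10 MiB all-zero "volume" splits into three chunks (4 + 4 + 2 MiB).
vol = io.BytesIO(b"\x00" * (10 * 1024 * 1024))
store = FakeObjectStore()
uploaded = backup_volume(vol, store)
```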
rosmaitasounds like zaitcev is motivated to work on this!14:36
crohmannYeah. I did not mean to pull on the grass to make it grow any faster (as they say in German).14:36
rosmaitai have not heard that expression before, but i like it!14:37
crohmannIn any case - zaitcev please let me know if you require any more input, testing, ideas ... 14:37
crohmannThat's all I wanted to discuss regarding this topic.14:37
zaitcevcrohmann: sure, thanks14:37
simondodsleynew Cinder motto - we don't pull on the grass...14:37
zaitcevSpeaking of backup, we need to drum up +2 for https://review.opendev.org/c/openstack/cinder/+/88658414:38
jungleboyjLOL14:38
crohmann(Yeah. I put that one and some more cinder-backup related quick fixes in the review list of this meeting)14:38
zaitcevoh14:39
rosmaitai will look at 886584 today14:39
whoami-rajati think i reviewed it earlier, will take a look at the update14:40
whoami-rajatokay, anything else on this topic?14:40
whoami-rajati will take the silence as no, let's move on to the next one then14:42
whoami-rajat#topic some CI updates14:42
whoami-rajatrosmaita, that's you14:42
rosmaitahello14:42
rosmaitacurrent word on the street is that CI jobs are having odd timeouts14:43
rosmaitai guess one theory is that swapping is causing slowdowns14:43
rosmaitaso, a patch merged recently to enable zswap on devstack runs14:43
rosmaita#link https://review.opendev.org/c/openstack/devstack/+/89069314:44
rosmaita(that patch had a typo in the flag name, so there's a followup in the gate now: https://review.opendev.org/c/openstack/devstack/+/906504 )14:44
rosmaitai think the plan is that nova will enable this on a few jobs to see what happens14:44
rosmaitai don't know if we want to get in on that action or not?14:45
rosmaitathere are some limitations noted on the original patch's commit message14:45
whoami-rajatif it can make the situation worse, then maybe better to wait for nova to publish their findings14:46
rosmaitaanyway, just wanted to bring this up in case anyone is interested in this14:46
whoami-rajatif it can only improve the situation, i don't have any issues14:46
whoami-rajatbut I need to do a bit of reading on the concepts used here14:46
simondodsleyseems to me zswap could only make things better14:46
rosmaitawell, it's a bit weird because we're using RAM to store swap pages that are swapped out because of lack of RAM, it's just that they're compressed14:47
rosmaitaso i think it's going to depend on what the pages are like, i.e., how compressible they are14:48
rosmaitabut maybe i am being too naive14:48
simondodsleythe nice thing about ZSWAP is that it can dynamically shrink when not needed, unlike SWAP14:48
whoami-rajatzswap basically trades CPU cycles for potentially reduced swap I/O. This trade-off can also result in a significant performance improvement if reads from the compressed cache are faster than reads from a swap device.14:48
whoami-rajatthat's what i got from the docs14:49
rosmaitait will be interesting to see what happens14:49
whoami-rajati don't know how to determine the performance of reading from compressed cache14:50
whoami-rajatbut let's see14:50
rosmaitaclarkb has said that a lot of time is wasted in swap preparation (writing out zeros), so this should be faster14:50
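[As a reference point for the discussion above: zswap is controlled through standard kernel sysfs knobs, so its state on a test node can be inspected or toggled directly. The paths below are the stock kernel interface; default values vary by distro and kernel.]

```shell
# Show whether zswap is enabled (Y/N)
cat /sys/module/zswap/parameters/enabled
# Dump all zswap tunables: compressor, max_pool_percent, zpool, ...
grep -H . /sys/module/zswap/parameters/*
# Enable it at runtime
echo 1 | sudo tee /sys/module/zswap/parameters/enabled
```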
rosmaitaanyway, that's all from me14:51
whoami-rajatthanks for bringing this up rosmaita , we can keep an eye on it for any improvements in the gate situation (we surely need it)14:52
whoami-rajatthat's all the topics we had for today14:53
whoami-rajatlet's move to open discussion14:53
whoami-rajat#topic open discussion14:53
inoriHi, I'd like to highlight a patch I've been working on: https://review.opendev.org/c/openstack/cinder/+/89607714:54
inoriIt received a +2 from whoami-rajat last week, and I'm now seeking additional reviews from other core reviewers to merge it. Your input would be highly valuable.14:54
inoriSo could you please take a moment to review the patch at your earliest convenience?14:56
whoami-rajatit's a Fujitsu feature sitting for a long time, good to get that in14:57
whoami-rajats/for/from14:57
rosmaitai'll take a look, though i share zaitcev's surprise that CLI gives you a speedup14:58
rosmaitabut if that's what Fujitsu is seeing in their testing, what the heck14:58
inoriThx for your time.14:58
rosmaitawe value empirical evidence in the cinder project14:58
zaitcevI'll look at it again.15:00
inorithank you zaitcev15:00
whoami-rajati didn't see any performance numbers on the patch but if Fujitsu claims it for their driver, i can believe that15:00
whoami-rajatokay we are out of time15:00
whoami-rajatplease take a look at the review request section15:01
whoami-rajatthanks everyone for joining15:01
whoami-rajat#endmeeting15:01
opendevmeetMeeting ended Wed Jan 24 15:01:07 2024 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)15:01
opendevmeetMinutes:        https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-01-24-14.01.html15:01
opendevmeetMinutes (text): https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-01-24-14.01.txt15:01
opendevmeetLog:            https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-01-24-14.01.log.html15:01

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!