14:00:02 <whoami-rajat> #startmeeting cinder
14:00:02 <opendevmeet> Meeting started Wed Apr  5 14:00:02 2023 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:02 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:02 <opendevmeet> The meeting name has been set to 'cinder'
14:00:10 <whoami-rajat> #topic roll call
14:00:33 <rosmaita> o/
14:00:38 <felipe_rodrigues> hi
14:00:41 <Mounika> Hi
14:00:58 <nahimsouza[m]> hi
14:01:15 <thiagoalvoravel> hi
14:01:27 <whoami-rajat> we've new etherpad!
14:01:29 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-bobcat-meetings
14:01:40 <crohmann> whoami-rajat: I already abused that :-)
14:02:19 <HelenaDantas[m]> hi
14:02:39 <simondodsley> o/
14:02:45 <whoami-rajat> starting the cycle with a topic this big should be a good sign (at least I'm optimistic about it)
14:02:45 <lucasmoliveira059> o/
14:02:50 <keerthivasansuresh> o/
14:04:38 <whoami-rajat> we've enough people, let's start the first meeting of 2023.2 Bobcat
14:04:42 <whoami-rajat> #topic announcements
14:04:57 <whoami-rajat> these are some announcements from PTG, in case anyone missed it
14:04:59 <whoami-rajat> 2023.1 (Antelope) is released!
14:05:04 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-March/032872.html
14:05:17 <jungleboyj> o/
14:05:20 <whoami-rajat> again thanks everyone for your contributions and efforts!
14:05:49 <whoami-rajat> next, 2023.1 (Antelope) Project update in OpenInfra live
14:05:51 <caiquemello[m]> o/
14:05:54 <whoami-rajat> #link https://www.youtube.com/watch?v=YdLTUTyJ1eU
14:06:18 <whoami-rajat> I, along with other PTLs, provided project updates for 2023.1 Antelope, if you're interested please take a look at the YouTube video
14:06:42 <whoami-rajat> next, Welcome to first meeting of 2023.2 Bobcat cycle
14:06:49 <whoami-rajat> Add your name in the Courtesy reminder section if you would like to be notified about cinder meeting (Wednesday, 1400 UTC)
14:07:30 <whoami-rajat> it's on L#27
14:07:32 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-bobcat-meetings#L27
14:08:14 <rosmaita> we've never figured out a way to tell people who aren't here and don't want reminders any more to remove their names
14:08:30 <whoami-rajat> we had 2023.2 Bobcat PTG last week from tuesday to friday 1300-1700 UTC, here is the etherpad containing all the discussion minutes
14:08:32 <whoami-rajat> #link https://etherpad.opendev.org/p/bobcat-ptg-cinder
14:09:23 <whoami-rajat> rosmaita, true, but we ping them on #openstack-cinder so they can reply there or reach out to me if they don't want to be notified anymore (that is my assumption)
14:09:37 <jungleboyj> ++
14:09:57 <jungleboyj> I haven't seen anyone complaining about still getting pinged.  So, I don't think that is a big issue.
14:10:10 <whoami-rajat> welcome back jungleboyj , we missed you at the PTG
14:10:35 <jungleboyj> Thank you!  Sorry I had to miss it.
14:11:01 <whoami-rajat> no problem, I'm sure there were valid reasons for it :)
14:11:26 <whoami-rajat> so continuing with the announcements
14:11:43 <whoami-rajat> we've recordings of sessions for all 4 days in the cinder etherpad
14:12:01 <whoami-rajat> It's currently in BlueJeans but will be uploaded to the Cinder YouTube channel soon
14:12:21 <jungleboyj> Working on that as we speak.
14:12:23 <whoami-rajat> but i don't think that should be a problem if you would like to watch a session you're interested in
14:12:33 <whoami-rajat> great, thanks Jay!
14:13:48 <whoami-rajat> that's all the announcements I had
14:14:20 <whoami-rajat> I'm currently working on the summary, it will be out before the end of the week
14:15:12 <whoami-rajat> let's move to the topics then
14:15:25 <whoami-rajat> #topic Cinder backup "improvements"
14:15:29 <whoami-rajat> crohmann, that's you
14:15:54 <whoami-rajat> just to mention we discussed your topics and there should be action items/discussion points in the etherpad
14:16:55 <crohmann> Yes. Thanks again. I took off the specs and now have the remaining "improvements" listed here
14:17:55 <crohmann> Overall I am simply interested in getting cinder-backup work really well with object storage (via chunked driver). So my "little" list here is about the issues I currently see.
14:18:54 <crohmann> If you already talked about anything I have on today's list, please mention that again and I'll gladly look at the recording / notes
14:19:06 <crohmann> (notes I did look at though)
14:20:58 <whoami-rajat> regarding the (How) to reduce memory footprint for "all" deployment tools, as done here
14:21:09 <whoami-rajat> we discussed that in the operator hour https://etherpad.opendev.org/p/march2023-ptg-operator-hour-cinder
14:21:15 <whoami-rajat> #link https://etherpad.opendev.org/p/march2023-ptg-operator-hour-cinder
14:22:17 <crohmann> but that is more of a workaround
14:23:13 <crohmann> to simply reduce the chunk size. I was more talking about the proposed and merged changes using MALLOC_ARENA_MAX malloc tuning.
14:23:57 <crohmann> The referenced changes already set this for devstack and tripleo. I was just wondering how to have other deployment tools apply the same.
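[For context: the chunk-size workaround mentioned above boils down to a cinder.conf setting on the backup service. A sketch, assuming the Swift chunked backup driver — the option names are cinder's, the values shown are illustrative, not recommendations:]

```ini
# cinder.conf on the cinder-backup host (Swift chunked backup driver)
[DEFAULT]
# Per-chunk object size in bytes; lowering it reduces peak memory per worker
backup_swift_object_size = 52428800
# Granularity used for tracking changes in incremental backups
backup_swift_block_size = 32768
```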
14:25:04 <whoami-rajat> oh, for that you will need to contact other deployment projects, probably you can write a mail to openstack-discuss mentioning all the deployment tools you would like to include these changes
14:25:38 <whoami-rajat> something like, [kolla-ansible][deployment-project-2][...]... Improve cinder backup performance
14:26:11 <crohmann> Yes. I certainly could do that. I was just wondering whether this rather vital tuning parameter was something the cinder team would like to promote / package more prominently somehow.
14:27:46 <whoami-rajat> ack, maybe geguileo can take this question but as per my understanding, it did bring down the memory consumption in his testing
14:28:37 <geguileo> crohmann: you could always submit a patch to the other deployment tools
14:28:42 <geguileo> crohmann: or ask them to add it
14:29:15 <geguileo> there are too many deployment tools out there
14:29:33 <crohmann> geguileo: good point. I shall do so for at least openstack-ansible then. I was just wondering (not anymore) if these malloc tunings are the way to go. But that seems to be the case. Maybe somebody has any thoughts about the next bullet point (streaming IO)?
14:29:55 <geguileo> crohmann: for Cinder that is the way to go, I don't know about other projects
14:30:19 <geguileo> the reason to have multiple memory arenas in glibc is to remove bottlenecks in multithreaded applications
14:30:29 <geguileo> that do a lot of memory allocation/free operations
14:30:41 <geguileo> but we are running on Python, which already has its own memory management system
14:31:03 <geguileo> so having additional per native thread memory arenas ends up creating memory problems
14:31:10 <geguileo> higher peak consumption
14:31:19 <geguileo> high watermark issues
14:31:51 <geguileo> considering the way cinder works, we wouldn't be really benefiting from those arenas, and they just create problems for us
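[geguileo's reasoning above translates into a one-line environment tuning. A sketch of the MALLOC_ARENA_MAX approach applied by the referenced devstack/tripleo patches — the value 2 here is illustrative, and in a real deployment this would go into the cinder-backup service's environment (e.g. a systemd drop-in) rather than an interactive shell; check the linked patches for the exact value and mechanism they use:]

```shell
# Cap the number of glibc malloc arenas for the cinder-backup process.
# Python does its own memory management, so extra per-thread arenas only
# raise peak consumption and cause high-watermark issues (see discussion).
export MALLOC_ARENA_MAX=2
echo "MALLOC_ARENA_MAX is $MALLOC_ARENA_MAX"
```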
14:32:00 <crohmann> Yeah. But thanks for clarifying this again. I did not want to promote adding these parameters without another consultation here.
14:32:07 <geguileo> so I doubt openstack-ansible will be opposed to your patch
14:32:24 <crohmann> I shall point them to this weekly meeting then :-)
14:32:31 <geguileo> crohmann: if you have trouble with the reviewers you can ping me on IRC and I'll chime in that review
14:33:08 <geguileo> crohmann: I recommend you pointing out how this has already been done in devstack and tripleo and show the links to the patches
14:33:22 <geguileo> that usually helps to convince people
14:34:00 <whoami-rajat> example: *nova does it so we should too*
14:35:17 <whoami-rajat> so that should address the second point
14:35:24 <crohmann> I'll get to it soon. It's just unfortunate that options like these sometimes need to be set by deployment tooling and there is no easy way for you to simply include that in your release.
14:36:26 <whoami-rajat> regarding the 3rd point, i think zaitcev brought up the discussion of streaming IO rather than using chunks
14:36:29 <whoami-rajat> but i might be wrong
14:37:05 <rosmaita> i think you're right, it was Pete
14:37:25 <whoami-rajat> ok, good to know
14:37:47 <crohmann> Maybe yes. But I never talked to him directly about it, but sent a few emails regarding cinder-backup improvement ideas to geguileo and zaitcev.
14:38:40 <crohmann> Currently the throughput of chunked-driver-powered cinder-backup drivers is a little sub-par. So maybe there is only so much one can do tuning the current design
14:39:17 <zaitcev> The large size of the uncompressed chunk was something that jumped out, but I thought that I might be out of touch with memory sizes on typical modern systems.
14:40:17 <crohmann> But doing streaming IO (including compression, ...) the memory footprint should be much lower, right?
14:41:34 <crohmann> zaitcev: You also said that doing multi-threading on the current chunking driver (https://review.opendev.org/c/openstack/cinder/+/779233) was not helping (to improve throughput). Maybe that change should be rejected altogether then?
14:43:48 <zaitcev> crohmann: I highly doubt that I could say something like that. I don't have enough field experience to make such claims.
14:43:56 <zaitcev> but let's see
14:45:26 <crohmann> All I am saying is that cinder-backup via the chunked driver is currently memory-heavy and rather slow. That's unfortunate, because it hinders all the drivers built on the abstract chunking layer from being a really good replacement. So I am simply urging us to find the bottlenecks to be able to use object storage for volume backups :-)
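[To make the memory concern concrete, a minimal Python sketch of the chunk-at-a-time pattern under discussion — this is an illustration, not cinder's actual chunked-driver code, and the 50 MiB figure is the assumed default object size:]

```python
import io
import zlib

def backup_chunks(src, write_object, chunk_size=50 * 1024 * 1024):
    """Read a volume-like stream one chunk at a time, compressing each
    chunk fully in memory before handing it off for upload.  Peak memory
    therefore scales with chunk_size (times any concurrent workers),
    which is why reducing the chunk size lowers the footprint, and why
    a streaming design could bound memory independently of chunk size."""
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        write_object(zlib.compress(chunk))

# Example with a small chunk size: 1000 bytes in 256-byte chunks
# produce 4 compressed objects.
objects = []
backup_chunks(io.BytesIO(b"a" * 1000), objects.append, chunk_size=256)
```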
14:46:55 <Tony_Saad> Hey guys, I have a couple for review https://review.opendev.org/c/openstack/cinder/+/879067 https://review.opendev.org/c/openstack/os-brick/+/876284 https://review.opendev.org/c/openstack/cinder/+/874813/3
14:47:41 <whoami-rajat> Tony_Saad, please add it to the review request section on the etherpad
14:48:15 <Tony_Saad> Also I have a question about https://bugs.launchpad.net/cinder/+bug/2003179 Password appears in plain text. This is a security vulnerability. Is there a way to merge security issues faster than bobcat?
14:48:55 <zaitcev> I made a note to look into the performance of backups. Having a bug is a great help, because it adds credibility and a data point that is not an artefact of my test VMs.
14:49:09 <whoami-rajat> Tony_Saad, please wait for the open discussion or add topics on etherpad before the cinder meeting if you would like something to be discussed, we're currently discussing an ongoing topic
14:49:33 <Tony_Saad> oops sorry, thought we were in open discussion
14:49:46 <crohmann> zaitcev: Thanks. I shall retest what I sent via email a few weeks ago and raise a bug with some data
14:50:03 <whoami-rajat> Tony_Saad, no problem
14:50:24 <whoami-rajat> crohmann, regarding the last point, "Whatever happened to 'multi-backend' support for cinder-backup?"
14:50:44 <whoami-rajat> I can see the spec was proposed by Ivan and he's not currently active anymore (though he shows up from time to time)
14:51:11 <whoami-rajat> I don't remember if he implemented that functionality but that should be open for taking up if anyone would like to work on it
14:52:03 <crohmann> So is this still valid to start working on and then propose for e.g. Bobcat or Bobcat++ ?
14:52:40 <crohmann> Or would this need further discussion / a resubmission of the spec?
14:53:03 <whoami-rajat> resubmitting the spec should be a good start
14:53:38 <crohmann> May I simply do that (because I believe allowing for multiple backends is a good idea) or do you want to try and reach Ivan?
14:54:14 <zaitcev> Do you guys have a link to that spec for "multi-backend"? The title sounds dubious because we certainly have many back-ends today.
14:54:24 <zaitcev> Maybe needs renaming for clarity.
14:54:37 <zaitcev> Oh
14:54:39 <zaitcev> Ooooh
14:54:51 <whoami-rajat> crohmann, i think you can pursue it, I haven't seen Ivan around for last 2 cycles
14:54:58 <zaitcev> He probably means many back-ends simultaneously on the same Cinder backup service.
14:54:59 <whoami-rajat> zaitcev, it's for backups, https://review.opendev.org/c/openstack/cinder-specs/+/712301
14:55:01 <crohmann> zaitcev: Link is in the etherpad
14:57:11 <whoami-rajat> crohmann, the other topic you've added seems relevant for the glance team, "Glance support for multiple checksum types" or maybe i missed something there and we can discuss it?
14:57:18 <crohmann> Having this is not only about having some sort of a migration path ... but actually an invitation to "try" other backends or to offer e.g. "local" and "offsite" backups ..
14:57:45 <crohmann> whoami-rajat: Yes. Sorry - that should have gone to the glance list of topics.
14:57:57 <whoami-rajat> cool
14:58:36 <whoami-rajat> let's quickly do open discussion to discuss Tony's topic
14:58:41 <whoami-rajat> #topic open discussion
14:58:50 <whoami-rajat> Tony_Saad, do we have a fix proposed for that bug?
14:58:59 <Tony_Saad> not yet
14:59:31 <rosmaita> eharney left some suggestions in the bug
14:59:59 <whoami-rajat> ack, so I think it's up to the Dell team to propose a fix first, then the cinder team can actively review->merge->release it
15:00:31 <Tony_Saad> is it possible to do this before bobcat?
15:00:31 <whoami-rajat> reading eharney's comment, I think geguileo already proposed a patch for it and it merged last cycle
15:01:09 <whoami-rajat> #link https://review.opendev.org/c/openstack/os-brick/+/871835
15:01:28 <whoami-rajat> ok we're out of time, let's discuss this later in #openstack-cinder
15:01:32 <whoami-rajat> thanks everyone for joining!
15:01:35 <whoami-rajat> #endmeeting