14:00:02 #startmeeting cinder
14:00:02 Meeting started Wed Apr 5 14:00:02 2023 UTC and is due to finish in 60 minutes. The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:02 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:02 The meeting name has been set to 'cinder'
14:00:10 #topic roll call
14:00:33 o/
14:00:38 hi
14:00:41 Hi
14:00:58 hi
14:01:15 hi
14:01:27 we've a new etherpad!
14:01:29 #link https://etherpad.opendev.org/p/cinder-bobcat-meetings
14:01:40 whoami-rajat: I already abused that :-)
14:02:19 hi
14:02:39 o/
14:02:45 starting the cycle with a topic this big should be a good sign (at least I'm optimistic about it)
14:02:45 o/
14:02:50 o/
14:04:38 we've enough people, let's start the first meeting of 2023.2 Bobcat
14:04:42 #topic announcements
14:04:57 these are some announcements from the PTG, in case anyone missed them
14:04:59 2023.1 (Antelope) is released!
14:05:04 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-March/032872.html
14:05:17 o/
14:05:20 again, thanks everyone for your contributions and efforts!
14:05:49 next, the 2023.1 (Antelope) project update on OpenInfra Live
14:05:51 o/
14:05:54 #link https://www.youtube.com/watch?v=YdLTUTyJ1eU
14:06:18 I, along with the other PTLs, provided project updates for 2023.1 Antelope; if you're interested, please take a look at the YouTube video
14:06:42 next, welcome to the first meeting of the 2023.2 Bobcat cycle
14:06:49 add your name in the Courtesy reminder section if you would like to be notified about the cinder meeting (Wednesday, 1400 UTC)
14:07:30 it's on L#27
14:07:32 #link https://etherpad.opendev.org/p/cinder-bobcat-meetings#L27
14:08:14 we've never figured out a way to tell people who aren't here and don't want reminders anymore to remove their names
14:08:30 we had the 2023.2 Bobcat PTG last week from Tuesday to Friday, 1300-1700 UTC; here is the etherpad containing all the discussion minutes
14:08:32 #link https://etherpad.opendev.org/p/bobcat-ptg-cinder
14:09:23 rosmaita, true, but we ping them on #openstack-cinder so they can reply there or reach out to me if they don't want to be notified anymore (that is my assumption)
14:09:37 ++
14:09:57 I haven't seen anyone complaining about still getting pinged. So I don't think that is a big issue.
14:10:10 welcome back jungleboyj, we missed you at the PTG
14:10:35 Thank you! Sorry I had to miss it.
14:11:01 no problem, I'm sure there were valid reasons for it :)
14:11:26 so, continuing with the announcements
14:11:43 we've recordings of the sessions for all 4 days in the cinder etherpad
14:12:01 they're currently in BlueJeans but will be uploaded to the Cinder YouTube channel soon
14:12:21 Working on that as we speak.
14:12:23 but I don't think that should be a problem if you would like to watch a session you're interested in
14:12:33 great, thanks Jay!
14:13:48 that's all the announcements I had
14:14:20 I'm currently working on the summary, it will be out before the end of the week
14:15:12 let's move to the topics then
14:15:25 #topic Cinder backup "improvements"
14:15:29 crohmann, that's you
14:15:54 just to mention, we discussed your topics and there should be action items/discussion points in the etherpad
14:16:55 Yes. Thanks again. I took off the specs and now have the remaining "improvements" listed here
14:17:55 Overall I am simply interested in getting cinder-backup to work really well with object storage (via the chunked driver). So my "little" list here is about the issues I currently see.
14:18:54 If you already talked about anything I have on today's list, please mention that again and I'll gladly look at the recording / notes
14:19:06 (the notes I did look at, though)
14:20:58 regarding "(How) to reduce memory footprint for 'all' deployment tools, as done here"
14:21:09 we discussed that in the operator hour
14:21:15 #link https://etherpad.opendev.org/p/march2023-ptg-operator-hour-cinder
14:22:17 but that is more of a workaround
14:23:13 to simply reduce the chunk size. I was more talking about the proposed and merged changes using MALLOC_ARENA_MAX malloc tuning.
14:23:57 The referenced changes already set this for devstack and tripleo. I was just wondering how to have other deployment tools apply the same.
14:25:04 oh, for that you will need to contact the other deployment projects; you can probably write a mail to openstack-discuss mentioning all the deployment tools you would like to include these changes
14:25:38 something like, [kolla-ansible][deployment-project-2][...] Improve cinder backup performance
14:26:11 Yes. I certainly could do that. I was just wondering whether this rather vital tuning parameter was something the cinder team would like to promote / package more prominently somehow.
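For context, the chunk-size workaround mentioned above is a cinder.conf setting. A hedged sketch for the Swift chunked backup driver (option names differ per backup driver, and the values here are illustrative, not recommendations):

```ini
[DEFAULT]
# Size in bytes of each object the chunked driver writes to Swift.
# Lowering it reduces the per-backup memory footprint at the cost
# of more objects (and more API round-trips) per backup.
backup_swift_object_size = 26214400
# Block size used for tracking changes for incremental backups.
backup_swift_block_size = 32768
```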
14:27:46 ack, maybe geguileo can take this question, but as per my understanding it did bring down the memory consumption in his testing
14:28:37 crohmann: you could always submit a patch to the other deployment tools
14:28:42 crohmann: or ask them to add it
14:29:15 there are too many deployment tools out there
14:29:33 geguileo: good point. I shall do so for at least openstack-ansible then. I was just wondering (not anymore) if these malloc tunings are the way to go. But that seems to be the case. Does anybody have any thoughts about the next bullet point (streaming IO)?
14:29:55 crohmann: for Cinder that is the way to go, I don't know about other projects
14:30:19 the reason to have multiple memory arenas in glibc is to remove bottlenecks in multithreaded applications
14:30:29 that do a lot of memory allocation/free operations
14:30:41 but we are running on Python, which already has its own memory management system
14:31:03 so having additional per-native-thread memory arenas ends up creating memory problems
14:31:10 higher peak consumption
14:31:19 high watermark issues
14:31:51 considering the way cinder works, we wouldn't really be benefiting from those arenas, and they just create problems for us
14:32:00 Yeah. But thanks for clarifying this again. I did not want to promote adding these parameters without another consultation here.
14:32:07 so I doubt openstack-ansible will be opposed to your patch
14:32:24 I shall point them to this weekly meeting then :-)
14:32:31 crohmann: if you have trouble with the reviewers you can ping me on IRC and I'll chime in on that review
14:33:08 crohmann: I recommend pointing out how this has already been done in devstack and tripleo and showing the links to the patches
14:33:22 that usually helps to convince people
14:34:00 example: *nova does it so we should too*
14:35:17 so that should address the second point
14:35:24 I'll get to it soon.
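For deployment tools wanting to apply the same tuning, one possible shape is a systemd drop-in for the backup service. This is only a sketch; the unit name and the arena count are assumptions, and the devstack/tripleo patches referenced above are the authoritative examples:

```ini
# /etc/systemd/system/cinder-backup.service.d/malloc.conf
# Cap the number of glibc malloc arenas. Python has its own
# memory management, so extra per-thread arenas only raise
# peak and high-watermark RSS for cinder-backup.
[Service]
Environment=MALLOC_ARENA_MAX=2
```

After dropping the file in, `systemctl daemon-reload` and a restart of the service are needed for the environment variable to take effect.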
It's just unfortunate that options like these sometimes need to be set by deployment tooling and there is no easy way to simply include them in your release. 14:36:26 regarding the 3rd point, I think zaitcev brought up the discussion of streaming IO rather than using chunks
14:36:29 but I might be wrong
14:37:05 I think you're right, it was Pete
14:37:25 ok, good to know
14:37:47 Maybe, yes. But I never talked to him directly about it; I sent a few emails regarding cinder-backup improvement ideas to geguileo and zaitcev.
14:38:40 Currently the throughput of chunked-driver-powered cinder-backup drivers is a little sub-par. So maybe there is only so much one can do tuning the current design
14:39:17 The large size of an uncompressed chunk was something that jumped out, but I thought that I might be out of touch with memory sizes on typical modern systems.
14:40:17 But doing streaming IO (including compression, ...) the memory footprint should be much lower, right?
14:41:34 zaitcev: You also said that doing multi-threading on the current chunking driver (https://review.opendev.org/c/openstack/cinder/+/779233) was not helping (to improve throughput). Maybe that change should be rejected altogether then?
14:43:48 crohmann: I highly doubt that I could say something like that. I don't have enough field experience to make such claims.
14:43:56 but let's see
14:45:26 All I am saying is that cinder-backup via the chunked driver is currently memory-heavy and rather slow. That's unfortunate, because it hinders all the drivers built on the abstract chunking driver from being a really good replacement.
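The streaming-IO idea discussed above can be illustrated with a small Python sketch. This is not cinder code, just a hypothetical illustration: instead of holding a whole uncompressed chunk in memory before compressing it, data is read and compressed in small pieces, so peak memory tracks the piece size rather than the chunk size.

```python
import io
import zlib

def backup_streaming(src, dst, piece_size=64 * 1024):
    """Compress src into dst piece by piece.

    Peak memory stays around piece_size instead of the full
    chunk size that a read-whole-chunk-then-compress design needs.
    """
    comp = zlib.compressobj()
    while True:
        piece = src.read(piece_size)
        if not piece:
            break
        dst.write(comp.compress(piece))
    dst.write(comp.flush())

# Round-trip check with an in-memory "volume" and "object store".
volume = io.BytesIO(b"cinder" * 100000)
obj = io.BytesIO()
backup_streaming(volume, obj)
assert zlib.decompress(obj.getvalue()) == b"cinder" * 100000
```

The same structure applies to hashing and encryption, which are also incremental in Python's standard library, so a streaming pipeline need not buffer a full chunk for any of those steps.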
So I am simply urging us to find the bottlenecks, to be able to use object storage for volume backups :-) 14:46:55 Hey guys, I have a couple for review: https://review.opendev.org/c/openstack/cinder/+/879067 https://review.opendev.org/c/openstack/os-brick/+/876284 https://review.opendev.org/c/openstack/cinder/+/874813/3
14:47:41 Tony_Saad, please add them to the review request section on the etherpad
14:48:15 Also, I have a question about https://bugs.launchpad.net/cinder/+bug/2003179 (password appears in plain text). This is a security vulnerability. Is there a way to merge security issues faster than Bobcat?
14:48:55 I made a note to look into the performance of backups. Having a bug is a great help, because it adds credibility and a data point that is not an artefact of my test VMs.
14:49:09 Tony_Saad, please wait for the open discussion, or add topics to the etherpad before the cinder meeting if you would like something to be discussed; we're currently discussing an ongoing topic
14:49:33 oops, sorry, I thought we were in open discussion
14:49:46 zaitcev: Thanks. I shall retest what I sent via email a few weeks ago and raise a bug with some data
14:50:03 Tony_Saad, no problem
14:50:24 crohmann, regarding the last point, "Whatever happened to 'multi-backend' support for cinder-backup?"
14:50:44 I can see the spec was proposed by Ivan and he's not currently active anymore (though he shows up from time to time)
14:51:11 I don't remember if he implemented that functionality, but it should be open for taking up if anyone would like to work on it
14:52:03 So is this still valid to start working on and then propose for e.g. Bobcat or Bobcat++?
14:52:40 Or would this need further discussion / a resubmission of the spec?
14:53:03 resubmitting the spec should be a good start
14:53:38 May I simply do that (because I believe allowing multiple backends is a good idea) or do you want to try and reach Ivan?
14:54:14 Do you guys have a link to that spec for "multi-backend"?
The title sounds dubious because we certainly have many back-ends today. 14:54:24 Maybe it needs renaming for clarity.
14:54:37 Oh
14:54:39 Ooooh
14:54:51 crohmann, I think you can pursue it, I haven't seen Ivan around for the last 2 cycles
14:54:58 He probably means many back-ends simultaneously on the same Cinder backup service.
14:54:59 zaitcev, it's for backups, https://review.opendev.org/c/openstack/cinder-specs/+/712301
14:55:01 zaitcev: Link is in the etherpad
14:57:11 crohmann, the other topic you've added, "Glance support for multiple checksum types", seems relevant for the glance team, or maybe I missed something there and we can discuss it?
14:57:18 Having this is not only about having some sort of a migration path... but actually an invitation to "try" other backends or to offer e.g. "local" backups and "offsite" backups
14:57:45 whoami-rajat: Yes. Sorry, that should have gone to the glance list of topics.
14:57:57 cool
14:58:36 let's quickly do open discussion to discuss Tony's topic
14:58:41 #topic open discussion
14:58:50 Tony_Saad, do we have a fix proposed for that bug?
14:58:59 not yet
14:59:31 eharney left some suggestions in the bug
14:59:59 ack, so I think it's up to the Dell team to propose a fix first, then the cinder team can actively review->merge->release it
15:00:31 is it possible to do this before bobcat?
15:00:31 reading eharney's comment, I think geguileo already proposed a patch for it and it merged last cycle
15:01:09 #link https://review.opendev.org/c/openstack/os-brick/+/871835
15:01:28 ok we're out of time, let's discuss this later in #openstack-cinder
15:01:32 thanks everyone for joining!
15:01:35 #endmeeting