14:00:02 <whoami-rajat> #startmeeting cinder
14:00:02 <opendevmeet> Meeting started Wed May 17 14:00:02 2023 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:02 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:02 <opendevmeet> The meeting name has been set to 'cinder'
14:00:04 <whoami-rajat> #topic roll call
14:00:49 <raghavendrat> hi
14:00:55 <harsh> hi
14:01:09 <geguileo> hi! o/
14:01:16 <eharney> hi
14:01:20 <helenadantas[m]> hi
14:01:21 <nahimsouza[m]> o/
14:01:23 <crohmann> hi there!
14:01:34 <MatheusAndrade[m]> o/
14:02:15 <keerthivasansuresh> o/
14:02:17 <jungleboyj> o/
14:02:18 <caiquemello[m]> o/
14:02:21 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-bobcat-meetings
14:02:45 <rosmaita> o/
14:03:06 <simondodsley> o/
14:04:26 <yuval> o/
14:04:28 <thiagoalvoravel> o/
14:04:53 <whoami-rajat> good number of people around today
14:04:56 <whoami-rajat> let's get started
14:05:03 <whoami-rajat> #topic announcements
14:05:11 <whoami-rajat> first, CVE-2023-2088
14:05:18 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-May/033614.html
14:05:40 <whoami-rajat> you can go through the verbose mail but i will summarize it here
14:06:02 <whoami-rajat> we have a security vulnerability (now fixed) that allowed unauthorized access to volumes
14:06:29 <whoami-rajat> it could be *accidental* and also *intentional*, so we should be more careful about the intentional case, where a user with malicious intent tries it
14:06:51 <whoami-rajat> The fixes spanned the cinder, os-brick, glance and nova projects, and to my knowledge everything is merged now
14:07:03 <whoami-rajat> from master to all active stable branches (till yoga) -- xena is EM now
14:07:41 <crohmann> whoami-rajat: I believe there are required config changes for those who did not yet configure service_users / service_tokens / roles correctly right?
14:07:47 <whoami-rajat> Thanks to geguileo , rosmaita , dansmith and melwitt for fixing the cinder, glance and nova side of things respectively! (Also a lot of other people were involved)
14:08:03 <whoami-rajat> crohmann, yes correct, it has a deployment impact on upgrade
14:08:04 <geguileo> crohmann: correct, there are configuration changes necessary in the deployments
14:08:17 <whoami-rajat> which reminds me geguileo put up a doc patch related to this
14:08:25 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/883360
14:08:30 <rosmaita> i'm reviewing it now!
14:08:44 <yuval> me also
14:08:46 <geguileo> here's the current doc https://docs.openstack.org/cinder/latest/configuration/block-storage/service-token.html
14:08:46 <whoami-rajat> crohmann, you can refer to it for the changes required ^
14:08:57 <yuval> it's an improvement
14:09:33 <yuval> I still didn't manage to configure my system...
14:09:49 <yuval> but I'm missing core knowledge of keystone
14:10:06 <geguileo> yuval: not much should need to be done on keystone...
14:10:16 <geguileo> just make sure cinder and nova users have the service role
14:10:27 <geguileo> then make nova send the service token by changing its configuration
14:10:35 <geguileo> configure cinder to accept the token and validate it
14:10:44 <yuval> yes yes
14:10:46 <geguileo> at a high level that should be all
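[Editor's sketch of geguileo's three steps as concrete config changes. This is a hedged example based on the general service-token documentation linked above, not text from the meeting; the auth URL, domain names and password are placeholders for your deployment:]

```ini
# 1) Outside the config files, give the nova user the "service" role, e.g.:
#      openstack role add --user nova --project service service
#    (repeat for cinder if it calls other services with a service token)

# 2) nova.conf -- make nova send a service token along with the user's token
[service_user]
send_service_user_token = true
auth_type = password
auth_url = http://keystone.example.com/identity  # placeholder
project_name = service
project_domain_name = Default
username = nova
user_domain_name = Default
password = <nova-service-password>               # placeholder

# 3) cinder.conf -- make cinder validate the token and require the role
[keystone_authtoken]
service_token_roles = service
service_token_roles_required = true
```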
14:11:14 <whoami-rajat> this should also be a brief doc for people in a hurry ^
14:11:32 <yuval> anybody else here using master and managed to make it work?
14:12:10 <geguileo> in devstack nova should already be configured
14:12:28 <yuval> every third party CI will have to be updated once the patchset rebases
14:12:37 <geguileo> because devstack configures the service role in nova
14:12:46 <geguileo> I don't think 3rd party needs to be updated
14:12:54 <geguileo> as long as they do normal devstack deployment
14:13:11 <yuval> kolla-ansible?
14:13:15 <geguileo> I made the cinder patch work even if cinder is not configured to accept the token
14:13:26 <geguileo> yuval: no idea what kolla does...
14:14:29 <harsh> I use devstack. Will go through the doc and see how to integrate.
14:14:51 <yuval> geguileo: can you share the devstack patch you added to support it?
14:14:59 <geguileo> harsh: with devstack it should work out of the box without any changes
14:15:12 <harsh> ok thanks :)
14:15:27 <geguileo> yuval: I didn't have to add anything, devstack has been configuring nova to send tokens for a very long time
14:15:40 <yuval> I see
14:16:03 <geguileo> it just adds the service role to the nova user and configures it, let me see if I can find the patch
14:17:21 <whoami-rajat> does any of our third party CIs use deployment tools other than devstack?
14:17:43 <simondodsley> they shouldn't do
14:17:49 <geguileo> yuval: I think this is the area where the role is created https://github.com/openstack/devstack/blob/34afa91fc9f830fc8e1fdc4d76e7aa6d4248eaaa/lib/keystone#L325
14:18:47 <drencrom> I made a patch to nova for the service token stuff
14:18:59 <geguileo> the code that adds the service user to nova: https://github.com/openstack/devstack/blob/34afa91fc9f830fc8e1fdc4d76e7aa6d4248eaaa/lib/nova#L816
14:19:30 <geguileo> drencrom: you mean to configure it by default instead of relying on devstack or the deployment tool?
14:20:11 <whoami-rajat> "The ResellerAdmin role is used by Nova and Ceilometer" -- interesting
14:20:16 <geguileo> https://github.com/openstack/nova/commit/41c64b94b0af333845e998f6cc195e72ca5ab6bc
14:20:24 <geguileo> ^ I think that's the other nova patch
14:20:30 <drencrom> Sorry I messed up, I did a patch to configure it automatically on juju charms
14:20:49 <geguileo> drencrom: nice!
14:20:52 <drencrom> but it is already working in nova and cinder by my tests
14:21:15 <drencrom> I'm working on the same thing for cinder charms now
14:21:17 <geguileo> drencrom++
14:21:33 <whoami-rajat> drencrom++
14:22:40 <geguileo> yuval: Did you change your nova config for ALL the nova services and confirm on boot with debug that they are using that config?
14:22:51 <geguileo> I'm saying it because it has happened to me in the past
14:23:08 <geguileo> I update a .conf file, but that's not the one that the service is actually using, or I forget to restart the service
14:23:23 <crohmann> whoami-rajat: I know I asked about the required config change myself. But could we maybe continue with the weekly and postpone the discussion about the CVE related config changes?
14:23:39 <geguileo> It would be embarrassing if people knew how many times that has happened to me
14:23:40 <yuval> currently I tested just nova-compute, nova-api, cinder-volume, cinder-api
14:24:27 <yuval> crohmann: agree we can continue the conf issue on the cinder chat
14:24:33 <yuval> after the meeting
14:25:25 <whoami-rajat> crohmann, agreed, i just didn't want to interrupt the flow of discussions but we've a lot more to discuss
14:25:38 <whoami-rajat> let's discuss this after the meeting
14:25:52 <whoami-rajat> in the meantime you can go through the email
14:26:01 <whoami-rajat> next announcement, Festival of XS reviews
14:26:07 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-festival-of-reviews
14:26:29 <whoami-rajat> this is the third week of the month so we will have festival of XS reviews this friday (19 May)
14:26:55 <whoami-rajat> since bluejeans is out and meetpad sucks (at least for me), we will use google meet for which i will create a meeting link before the festival starts
14:27:09 <whoami-rajat> so stay alert on the cinder channel if you would like to join
14:27:31 <whoami-rajat> next, Forum session for Vancouver summit
14:27:37 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-May/033625.html
14:27:50 <whoami-rajat> the date for forum sessions was extended and now the deadline is tomorrow
14:28:07 <whoami-rajat> The date is extended to Thursday May, 18th at 7:00 UTC
14:28:23 <whoami-rajat> if you are planning to attend, you can submit a forum session here
14:28:24 <whoami-rajat> #link https://openinfrafoundation.formstack.com/forms/forum_expansion
14:28:27 <rosmaita> i completely missed that, thanks for mentioning it
14:28:49 <zaitcev> By that you mean the festival or forum? But either way both are interesting.
14:28:57 <whoami-rajat> np, would be good to have more people joining
14:29:15 <whoami-rajat> zaitcev, the forum, we don't require form filling for festival, it's open for all :D
14:30:12 <whoami-rajat> finally the Upcoming events
14:30:34 <whoami-rajat> M-1 just passed, we released os-brick with the new CVE fix
14:31:09 <whoami-rajat> python-cinderclient and python-brick-cinderclient-ext didn't have functional changes, so we abandoned those releases
14:31:31 <whoami-rajat> os-brick 6.3.0 should contain the CVE fix
14:31:33 <whoami-rajat> #link https://pypi.org/project/os-brick/6.3.0/
14:31:49 <whoami-rajat> now on to the future events
14:31:55 <whoami-rajat> 1) OpenInfra Summit & PTG in Vancouver: June 13-15, 2023
14:32:01 <whoami-rajat> 2) Bobcat-2 Milestone: July 6th, 2023
14:32:03 <simondodsley> anyone here going to Vancouver? Would be good to meet in person again.
14:32:50 <whoami-rajat> M-2 will include driver freeze (volume + target)
14:33:04 <whoami-rajat> also forgot to mention we've spec freeze before that
14:33:04 <whoami-rajat> Spec freeze 23 June, 2023
14:34:06 <whoami-rajat> simondodsley, I'm planning to, but the process is complicated for me so I can't guarantee it
14:34:17 <zaitcev> simondodsley: I'm going, although I haven't even registered and booked a hotel yet.
14:34:45 <nahimsouza[m]> simondodsley: from netapp, me and caiquemello are planning to go
14:34:59 <eharney> simondodsley: i'll be there
14:36:26 <jungleboyj> simondodsley:  Wish I could be but it conflicts with a workshop in Shanghai that I need to be at.
14:37:09 <simondodsley> cool - we'll have to arrange a meetup for those that are there. Pure is having a Happy Hour in the Brass Fish on the Tuesday evening - so you can all come to that if you want
14:37:40 <simondodsley> you can register here: https://forms.gle/J9m6N3h6WGLguDLT6
14:38:09 <simondodsley> marketing went a bit mad so take the title of the event with a pinch of salt
14:39:03 <whoami-rajat> everyone will meet at PTG but i understand you're referring to something unofficial
14:39:34 <whoami-rajat> good to see a lot of people joining
14:39:52 <whoami-rajat> anyway, we've less time so let's move to topics -- because i see a big one
14:39:57 <whoami-rajat> #topic Cinder-Backup very slow / inefficient when using chunked drivers, e.g. S3
14:40:00 <whoami-rajat> crohmann, that's you
14:40:27 <crohmann> Yes. Sorry about me not attending the last two meetings. Life got in the way.
14:40:52 <whoami-rajat> no worries, good that you could make it today
14:41:32 <crohmann> I'd really love to see "the" alternative to RBD as cinder-backup driver to reach usable performance levels. I am asking for someone to dive into the bottlenecks here. I added some measurements to the referenced bug (not mine BTW)
14:43:30 <crohmann> Basically, the drivers based on the chunked approach are currently simply not fast enough to be usable. And I believe a real deep-dive into the issue and possible performance gains is required to make this fly.
14:44:18 <crohmann> Be it multi-threading the read -> hash -> compress -> upload pipeline, using streaming IO, or whatever
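[The multi-threaded pipeline crohmann describes could look roughly like the following; a hypothetical Python sketch, not the actual cinder-backup chunked-driver code — `process_chunk`, `backup_volume` and the `upload` callback are invented names:]

```python
# Hypothetical sketch of parallelizing the per-chunk
# read -> hash -> compress -> upload pipeline with a thread pool,
# instead of a serial per-chunk loop.
# NOT the real cinder-backup code; names are invented.
import hashlib
import zlib
from concurrent.futures import ThreadPoolExecutor

def process_chunk(index, data, upload):
    """Hash, compress and upload one chunk; return its metadata."""
    sha = hashlib.sha256(data).hexdigest()
    compressed = zlib.compress(data)
    upload(index, compressed)  # e.g. an object-store PUT
    return {"index": index, "sha256": sha, "length": len(data)}

def backup_volume(chunks, upload, workers=8):
    """Fan chunks out to a pool so uploads overlap with hashing/compression.

    A real implementation would bound the number of in-flight chunks to
    cap memory use; this sketch submits everything at once.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(process_chunk, i, data, upload)
                   for i, data in enumerate(chunks)]
        return [f.result() for f in futures]
```

[Whether threads, streaming IO, or larger chunk sizes actually win is exactly the kind of measurement the deep-dive would need to produce.]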
14:44:39 <crohmann> zaitcev: You did offer to look into this in the past, right?
14:45:28 <zaitcev> crohmann: I promised but I did not. Sorry about that. It is assigned to me semi-officially.
14:47:00 <crohmann> I did not want to pin you personally to this issue. I'd rather have some sort of agreement that having alternatives in the form of object storage as a backup target would be good, and that the current performance is too slow for larger volumes.
14:47:32 <crohmann> To quote myself: "Consider a not crazy big 8TiB volume being backed up in full.
14:47:36 <crohmann> At 1 GiB/s the volume backup will still take ~2.5 hrs to complete"
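[The quoted back-of-the-envelope figure can be checked directly; the 1 GiB/s sustained rate is the assumption from the quote:]

```python
# Sanity check of the quoted figure: full backup of an 8 TiB volume at 1 GiB/s.
size_gib = 8 * 1024        # 8 TiB expressed in GiB
throughput_gib_s = 1.0     # assumed sustained rate from the quote
hours = size_gib / throughput_gib_s / 3600
# hours ≈ 2.28, so the "~2.5 hrs" quoted above is the right order of magnitude
```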
14:49:00 <crohmann> With no core support for this issue I am really afraid to invest any time in more testing or even restructuring the data flow there.
14:52:21 <simondodsley> sorry for the interrupt - the form link I sent earlier for the Pure Vancouver event had a permissions error - this has now been fixed
14:53:33 <crohmann> whoami-rajat: that's all I have on this issue really. I am simply seeking clarity on whether using object storage for cinder backups is viable
14:55:13 <whoami-rajat> crohmann, thanks for bringing this up, i understand the concern but unless we've someone to commit to working on it, we can't do much here
14:55:38 <whoami-rajat> i don't think it's trivial and will require quite a bit of testing
14:56:00 <whoami-rajat> thereby consuming a lot of cycles
14:56:58 <whoami-rajat> anyway, if anyone plans on taking this up, you can contact crohmann
14:57:06 <crohmann> sadly yes. But what good is a driver that does not perform well enough to be usable? There had to be some performance targets when this was added, right? Backing up a test volume of 1GiB is not helping.
14:57:40 <jbernard> o/ i might be interested, im also looking at the s3 spec, crohmann maybe we can catch up later
14:58:16 <whoami-rajat> great!
14:58:38 <crohmann> jbernard: gladly. Do you have my email? Just drop me a line christian.rohmann@inovex.de. I might have a working student who could also take this apart and rework the data flow. But in the end this needs to go through review.
14:58:50 <jbernard> crohmann: jobernar@redhat.com
14:58:58 <jbernard> crohmann: will do
14:59:09 <crohmann> thanks!
14:59:41 <whoami-rajat> we've another topic but not much time to discuss
14:59:56 <whoami-rajat> if anyone is aware about the tooz situation, please leave a comment here
14:59:58 <whoami-rajat> #link https://review.opendev.org/c/openstack/os-brick/+/873100
15:00:20 <whoami-rajat> i think gates should be working fine by now so a recheck should be good to try
15:00:26 <whoami-rajat> we're out of time
15:00:30 <whoami-rajat> take a look at review requests
15:00:34 <whoami-rajat> thanks everyone for attending
15:00:35 <whoami-rajat> #endmeeting