14:00:02 #startmeeting cinder
14:00:02 Meeting started Wed May 17 14:00:02 2023 UTC and is due to finish in 60 minutes. The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:02 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:02 The meeting name has been set to 'cinder'
14:00:04 #topic roll call
14:00:49 hi
14:00:55 hi
14:01:09 hi! o/
14:01:16 hi
14:01:20 hi
14:01:21 o/
14:01:23 hi there!
14:01:34 o/
14:02:15 o/
14:02:17 o/
14:02:18 o/
14:02:21 #link https://etherpad.opendev.org/p/cinder-bobcat-meetings
14:02:45 o/
14:03:06 o/
14:04:26 o/
14:04:28 o/
14:04:53 good number of people around today
14:04:56 let's get started
14:05:03 #topic announcements
14:05:11 first, CVE-2023-2088
14:05:18 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-May/033614.html
14:05:40 you can go through the verbose mail but i will summarize it here
14:06:02 we have a security vulnerability (fixed now) which allowed unauthorized access to volumes
14:06:29 it could be triggered *accidentally* and also *intentionally*, so we should be more careful about the intentional case, where a user with malicious intent tries it
14:06:51 the fixes span the cinder, os-brick, glance and nova projects, and to my knowledge everything is merged now
14:07:03 from master to all active stable branches (till yoga) -- xena is EM now
14:07:41 whoami-rajat: I believe there are required config changes for those who did not yet configure service_users / service_tokens / roles correctly, right?
14:07:47 Thanks to geguileo, rosmaita, dansmith and melwitt for fixing the cinder, glance and nova side of things respectively!
(Also a lot of other people were involved)
14:08:03 crohmann, yes correct, it has a deployment impact on upgrade
14:08:04 crohmann: correct, there are configuration changes necessary in the deployments
14:08:17 which reminds me, geguileo put up a doc patch related to this
14:08:25 #link https://review.opendev.org/c/openstack/cinder/+/883360
14:08:30 i'm reviewing it now!
14:08:44 me too
14:08:46 here's the current doc https://docs.openstack.org/cinder/latest/configuration/block-storage/service-token.html
14:08:46 crohmann, you can refer to it for the changes required ^
14:08:57 it's an improvement
14:09:33 I still didn't manage to configure my system...
14:09:49 but I'm missing core knowledge on keystone
14:10:06 yuval: not much needs to be done on keystone...
14:10:16 just make sure the cinder and nova users have the service role
14:10:27 then make nova send the service token by changing its configuration
14:10:35 configure cinder to accept the token and validate it
14:10:44 yes yes
14:10:46 at a high level that should be all
14:11:14 this should also be a brief doc for people in a hurry ^
14:11:32 anybody else here using master and managed to make it work?
14:12:10 in devstack nova should already be configured
14:12:28 every third party CI will have to be updated once the patchset is rebased
14:12:37 because devstack configures the service role in nova
14:12:46 I don't think 3rd party CIs need to be updated
14:12:54 as long as they do a normal devstack deployment
14:13:11 kolla-ansible?
14:13:15 I made the cinder patch work even if cinder is not configured to accept the token
14:13:26 yuval: no idea what kolla does...
14:14:29 I use devstack. Will go through the doc and see how to integrate.
14:14:51 geguileo can you share the devstack patch you added to support it?
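[Editor's note: the three steps geguileo lists (service role, nova sends a token, cinder validates it) map onto configuration roughly as sketched below. This is an illustration only; the auth credentials are placeholders, and the exact option names and values should be verified against the service-token doc linked above. The service role itself can be granted with `openstack role add --user nova --project service service` (and likewise for the cinder user).]

```ini
# nova.conf -- make nova send a service token alongside the user token
# (placeholders: auth_url and password are deployment-specific)
[service_user]
send_service_user_token = true
auth_type = password
auth_url = http://keystone.example.com/identity
username = nova
password = secret
project_name = service
user_domain_name = Default
project_domain_name = Default

# cinder.conf -- make cinder require and validate the incoming service token
[keystone_authtoken]
service_token_roles = service
service_token_roles_required = true
```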
14:14:59 harsh: with devstack it should work out of the box without any changes
14:15:12 ok thanks :)
14:15:27 yuval: I didn't have to add anything, devstack has been configuring nova to send tokens for a very long time
14:15:40 I see
14:16:03 it just adds the service role to the nova user and configures it, let me see if I can find the patch
14:17:21 do any of our third party CIs use deployment tools other than devstack?
14:17:43 they shouldn't do
14:17:49 yuval: I think this is the area where the role is created https://github.com/openstack/devstack/blob/34afa91fc9f830fc8e1fdc4d76e7aa6d4248eaaa/lib/keystone#L325
14:18:47 I made a patch to nova for the service token stuff
14:18:59 the code that adds the service user to nova: https://github.com/openstack/devstack/blob/34afa91fc9f830fc8e1fdc4d76e7aa6d4248eaaa/lib/nova#L816
14:19:30 drencrom: you mean to configure it by default instead of relying on devstack or the deployment tool?
14:20:11 "The ResellerAdmin role is used by Nova and Ceilometer" -- interesting
14:20:16 https://github.com/openstack/nova/commit/41c64b94b0af333845e998f6cc195e72ca5ab6bc
14:20:24 ^ I think that's the other nova patch
14:20:30 Sorry I messed up, I did a patch to configure it automatically on juju charms
14:20:49 drencrom: nice!
14:20:52 but it is already working in nova and cinder in my tests
14:21:15 I'm working on the same thing for cinder charms now
14:21:17 drencrom++
14:21:33 drencrom++
14:22:40 yuval: Did you change your nova config for ALL the nova services and confirm on boot with debug that they are using that config?
14:22:51 I'm asking because it has happened to me in the past
14:23:08 I update a .conf file, but that's not the one that the service is actually using, or I forget to restart the service
14:23:23 whoami-rajat: I know I asked about the required config change myself. But could we maybe continue with the weekly and postpone the discussion about the CVE related config changes?
14:23:39 It would be embarrassing if people knew how many times that has happened to me
14:23:40 currently I tested just nova-compute, nova-api, cinder-volume, cinder-api
14:24:27 crohmann: agree, we can continue the conf issue on the cinder chat
14:24:33 after the meeting
14:25:25 crohmann, agreed, i just didn't want to interrupt the flow of discussion but we have a lot more to discuss
14:25:38 let's discuss this after the meeting
14:25:52 in the meantime you can go through the email
14:26:01 next announcement, Festival of XS reviews
14:26:07 #link https://etherpad.opendev.org/p/cinder-festival-of-reviews
14:26:29 this is the third week of the month so we will have the festival of XS reviews this friday (19 May)
14:26:55 since bluejeans is out and meetpad sucks (at least for me), we will use google meet, for which i will create a meeting link before the festival starts
14:27:09 so stay alert on the cinder channel if you would like to join
14:27:31 next, Forum sessions for the Vancouver summit
14:27:37 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-May/033625.html
14:27:50 the date for forum sessions was extended and now the deadline is tomorrow
14:28:07 The date is extended to Thursday May 18th at 7:00 UTC
14:28:23 if you are planning to attend, you can submit a forum session here
14:28:24 #link https://openinfrafoundation.formstack.com/forms/forum_expansion
14:28:27 i completely missed that, thanks for mentioning it
14:28:49 By that do you mean the festival or the forum? Either way, both are interesting.
14:28:57 np, would be good to have more people joining
14:29:15 zaitcev, the forum, we don't require form filling for the festival, it's open for all :D
14:30:12 finally, the upcoming events
14:30:34 M-1 just passed, we released os-brick with the new CVE fix
14:31:09 python-cinderclient and python-brick-cinderclient-ext didn't have functional changes so we abandoned those releases
14:31:31 os-brick 6.3.0 should contain the CVE fix
14:31:33 #link https://pypi.org/project/os-brick/6.3.0/
14:31:49 now on to the future events
14:31:55 1) OpenInfra Summit & PTG in Vancouver: June 13-15, 2023
14:32:01 2) Bobcat-2 Milestone: July 6th, 2023
14:32:03 anyone here going to Vancouver? Would be good to meet in person again.
14:32:50 M-2 will include the driver freeze (volume + target)
14:33:04 also forgot to mention we have a spec freeze before that
14:33:04 Spec freeze: 23 June, 2023
14:34:06 simondodsley, I'm planning to but the process is complicated for me so I can't guarantee it
14:34:17 simondodsley: I'm going, although I haven't even registered or booked a hotel yet.
14:34:45 simondodsley: from netapp, me and caiquemello are planning to go
14:34:59 simondodsley: i'll be there
14:36:26 simondodsley: Wish I could be, but it conflicts with a workshop in Shanghai that I need to be at.
14:37:09 cool - we'll have to arrange a meetup for those that are there. Pure is having a Happy Hour in the Brass Fish on the Tuesday evening - so you can all come to that if you want
14:37:40 you can register here: https://forms.gle/J9m6N3h6WGLguDLT6
14:38:09 marketing went a bit mad so take the title of the event with a pinch of salt
14:39:03 everyone will meet at the PTG but i understand you're referring to something unofficial
14:39:34 good to see a lot of people joining
14:39:52 anyway, we're short on time so let's move to the topics -- because i see a big one
14:39:57 #topic Cinder-Backup very slow / inefficient when using chunked drivers, e.g. S3
14:40:00 crohmann, that's you
14:40:27 Yes.
Sorry about not attending the last two meetings. Life got in the way.
14:40:52 no worries, good that you could make it today
14:41:32 I'd really love to see "the" alternative to RBD as a cinder-backup driver reach usable performance levels. I am asking for someone to dive into the bottlenecks here. I added some measurements to the referenced bug (not mine BTW)
14:43:30 Basically the drivers based on the chunked approach are currently simply not fast enough to be usable. And I believe a real deep-dive into the issue and possible performance gains is required to make this fly.
14:44:18 Be it multi-threading the read -> hash -> compress -> upload pipeline, using streaming IO, or whatever
14:44:39 zaitcev: Didn't you offer to look into this in the past?
14:45:28 crohmann: I promised but I did not. Sorry about that. It is assigned to me semi-officially.
14:47:00 I did not want to pin you personally to this issue. I'd rather have some sort of agreement that having alternatives in the form of object storage as a backup target would be good, and that the current performance is too slow for larger volumes.
14:47:32 To quote myself: "Consider a not crazy big 8TiB volume being backed up in full.
14:47:36 At 1 GiB/s the volume backup will still take ~2.5 hrs to complete"
14:49:00 With no core support for this issue I am really afraid to invest any time in more testing or even restructuring the data flow there.
14:52:21 sorry for the interruption - the form link I sent earlier for the Pure Vancouver event had a permissions error - this has now been fixed
14:53:33 whoami-rajat: that's all I have on this issue really.
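[Editor's note: the multi-threaded read -> hash -> compress -> upload pipeline crohmann mentions could be sketched as below. This is a hypothetical illustration, not cinder's actual chunked-driver code; the chunk size, worker count, and the `upload` callback are all assumptions. The point is to keep the sequential volume read in the main thread while overlapping hash/compress/upload work across chunks.]

```python
import hashlib
import io
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 32 * 1024 * 1024  # hypothetical; real drivers make this configurable

def _process_chunk(index, data, upload):
    """Hash, compress and upload a single chunk; runs in a worker thread."""
    digest = hashlib.sha256(data).hexdigest()
    compressed = zlib.compress(data)
    upload(index, compressed)
    return index, digest, len(compressed)

def backup_volume(volume_file, upload, workers=4):
    """Read the volume sequentially, but process chunks concurrently
    instead of the strict read -> hash -> compress -> upload loop."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = []
        index = 0
        while True:
            data = volume_file.read(CHUNK_SIZE)
            if not data:
                break
            futures.append(pool.submit(_process_chunk, index, data, upload))
            index += 1
        # Collect per-chunk metadata (index, sha256, compressed size) in order.
        return [f.result() for f in futures]

# Example with an in-memory "volume" and a dummy object-store uploader:
store = {}
meta = backup_volume(io.BytesIO(b"x" * (CHUNK_SIZE + 10)),
                     lambda i, blob: store.__setitem__(i, blob))
```

A real rework would also need bounded memory (limit chunks in flight) and error handling, which are omitted here for brevity.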
I am simply seeking clarity on whether using object storage for cinder backups is viable
14:55:13 crohmann, thanks for bringing this up, i understand the concern, but unless we have someone committed to working on it, we can't do much here
14:55:38 i don't think it's trivial and it will require quite a bit of testing
14:56:00 thereby consuming a lot of cycles
14:56:58 anyway, if anyone plans on taking this up, you can contact crohmann
14:57:06 sadly yes. But what good is a driver that does not perform well enough to be usable? There had to be some performance targets when this was added, right? Backing up a test volume of 1GiB is not helping.
14:57:40 o/ i might be interested, i'm also looking at the s3 spec, crohmann maybe we can catch up later
14:58:16 great!
14:58:38 jbernard: gladly. Do you have my email? Just drop me a line: christian.rohmann@inovex.de. I might have a working student who could also take this apart and rework the data flow. But in the end this needs to go through review.
14:58:50 crohmann: jobernar@redhat.com
14:58:58 crohmann: will do
14:59:09 thanks!
14:59:41 we have another topic but not much time to discuss it
14:59:56 if anyone is aware of the tooz situation, please leave a comment here
14:59:58 #link https://review.opendev.org/c/openstack/os-brick/+/873100
15:00:20 i think the gates should be working fine by now so a recheck should be good to try
15:00:26 we're out of time
15:00:30 take a look at the review requests
15:00:34 thanks everyone for attending
15:00:35 #endmeeting