14:00:20 #startmeeting cinder
14:00:20 Meeting started Wed Jun 14 14:00:20 2023 UTC and is due to finish in 60 minutes. The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:20 The meeting name has been set to 'cinder'
14:00:24 hi
14:00:27 #topic roll call
14:00:32 yo
14:00:54 hi
14:01:43 #link https://etherpad.opendev.org/p/cinder-bobcat-meetings
14:02:01 some of the folks are at the Vancouver PTG so we might have lower attendance
14:02:17 o/
14:02:22 o/
14:02:29 o/
14:03:18 o/
14:03:40 o/
14:04:44 good attendance
14:04:46 let's get started
14:04:50 o/
14:04:53 Hello
14:05:01 #topic announcements
14:05:16 first, Cinder PTG Schedule
14:05:21 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034056.html
14:05:42 unfortunately i wasn't able to travel to Vancouver but Brian is taking care of the PTG which is really great
14:05:47 he sent out the schedule to the ML
14:06:04 cinder is going to have 3 sessions at the PTG given the smaller number of people attending
14:06:11 Wednesday
14:06:12 10:20-10:50 Support for NVMe-oF in os-brick
14:06:12 15:50-16:20 Cinder Operator Half-Hour
14:06:12 Thursday
14:06:12 16:40-17:10 Open Discussion with the Cinder project team
14:06:34 since Vancouver is UTC-7, i think it will start later today
14:06:58 I don't have info on whether there is a virtual thing planned, but let's see if we get a summary or notes out of the sessions
14:07:00 good
14:07:25 next, Cinder Operators event
14:07:30 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034057.html
14:07:33 yeah it'd be great if we can have the minutes
14:08:06 happystacker, yep, i can ask Brian if he can write those down from the sessions (if I'm able to make contact with him)
14:08:12 maybe just drop a mail
14:08:28 ok cool, thanks rajat
14:08:50 np
14:08:54 so the same sessions are also good for operators to attend
14:08:59 but we also have an additional forum session
14:09:01 Forum session:
14:09:02 Cinder, the OpenStack Block Storage service ... how are we doing?
14:09:02 Looking for feedback from operators, vendors, and end-users
14:09:08 #link https://etherpad.opendev.org/p/cinder-vancouver-forum-2023
14:09:15 Timing: 1840 UTC - 1910 UTC
14:10:29 that's all about the Vancouver summit from the Cinder perspective
14:10:50 next, Spec freeze (22nd June)
14:10:55 #link https://etherpad.opendev.org/p/cinder-2023-2-bobcat-specs
14:11:06 we have the cinder spec deadline upcoming
14:11:24 i.e. 22nd June
14:11:33 I've created the above etherpad to track the specs
14:11:41 mainly the first two need reviews
14:12:21 Considering the bandwidth we have and the current date, I don't think we'll make it for https://review.opendev.org/c/openstack/cinder-specs/+/872019
14:12:36 it should be postponed to the C release
14:13:38 happystacker, do you mean the developer bandwidth or the reviewer bandwidth? I'm assuming the former
14:13:53 dev perspective
14:13:53 if you feel it is hard to complete this cycle, we can surely push for next cycle
14:14:08 yeah that's what I meant to say
14:14:17 it's a good chunk of work
14:14:36 sure, I will add a W-1 stating our meeting discussion and we can come back to it next cycle
14:15:31 thanks for the heads up happystacker
14:15:47 np, sorry for that
14:16:15 no worries
14:16:18 I've added a comment
14:16:26 so we only have 1 spec to review now
14:16:33 the other one is just a reproposal
14:17:24 #link https://review.opendev.org/c/openstack/cinder-specs/+/868761
14:17:30 #link https://review.opendev.org/c/openstack/cinder-specs/+/877230
14:17:34 if anyone needs a quick link ^
14:17:46 next, Milestone-2 (06 July)
14:17:55 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034062.html
14:18:01 #link https://releases.openstack.org/bobcat/schedule.html#b-mf
14:18:07 we have Milestone 2 upcoming
14:18:16 along with which we have the volume and target driver merge deadline
14:18:39 I've created an etherpad to track the drivers for this cycle
14:18:43 #link https://etherpad.opendev.org/p/cinder-2023-2-bobcat-drivers
14:18:56 so far I've added the Yadro FC driver and the Lustre driver
14:19:14 but if you are planning to propose or have proposed any driver for the 2023.2 Bobcat cycle, please add it to the list
14:19:29 new driver you mean to say?
14:19:59 yes
14:20:05 new volume and target drivers
14:20:14 ok nothing new from our side for cinder
14:20:19 thanks
14:20:24 ack, good to know
14:21:17 ok, last announcement, Cinder incremental backup working
14:21:23 just for general awareness
14:21:40 if anyone has doubts about how the drivers inheriting from ChunkedBackupDriver do incremental backups
14:21:49 Gorka has written an article about it
14:22:13 some of the info might be dated but the incremental mechanism should still be the same (i just took a quick glance and it looks the same)
14:22:26 it came up on the ML so I thought others might have questions about this
14:22:28 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034098.html
14:22:38 oh cool, will have a look
14:22:38 link to gorka's article
14:22:40 #link https://web.archive.org/web/20160407151329/http://gorka.eguileor.com/inside-cinders-incremental-backup
14:23:28 ++
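(Side note on the incremental backup mechanism, for anyone skimming the log: drivers based on ChunkedBackupDriver read the volume in fixed-size chunks, hash each chunk with SHA-256, and an incremental backup uploads only the chunks whose hashes differ from the parent backup, keeping the new hash list for the next run. Gorka's article linked above covers the details; the Python sketch below only illustrates the idea - CHUNK_SIZE, parent_hashes and store_chunk are hypothetical stand-ins, not Cinder's actual API.)

    import hashlib

    CHUNK_SIZE = 32 * 1024 * 1024  # illustrative; real drivers make this configurable

    def incremental_backup(volume_file, parent_hashes, store_chunk):
        # parent_hashes: per-chunk SHA-256 hex digests saved with the parent backup.
        # store_chunk(index, data): hypothetical callback that persists one chunk.
        new_hashes = []
        index = 0
        while True:
            data = volume_file.read(CHUNK_SIZE)
            if not data:
                break
            digest = hashlib.sha256(data).hexdigest()
            new_hashes.append(digest)
            # Upload only chunks whose hash differs from the parent backup;
            # unchanged chunks are taken from the parent chain at restore time.
            if index >= len(parent_hashes) or parent_hashes[index] != digest:
                store_chunk(index, data)
            index += 1
        # The new hash list is kept with this backup so the next incremental
        # backup can diff against it.
        return new_hashes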
14:25:04 that's all for announcements
14:25:25 we also don't have any topic today
14:25:30 let's move to open discussion
14:25:34 #topic open discussion
14:25:48 reminder to take a look at the review request patches
14:26:47 I have one question if you don't mind
14:27:28 sure
14:27:57 i would like to discuss https://review.opendev.org/c/openstack/oslo.privsep/+/884344
14:28:41 Well, my question is described in https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034042.html
14:30:23 I'm able to provide additional comments if needed. Main question - is it a known issue and are there any plans to work on it
14:30:45 #link HPE STORAGE CI
14:30:53 sorry
14:30:54 https://bugs.launchpad.net/cinder/+bug/2003179/comments/7
14:31:07 Tony_Saad, sure, what's your question
14:31:29 for https://review.opendev.org/c/openstack/oslo.privsep/+/884344 my patch works. I tried the way Eric described and it did not hide the password. The only way i got the password hidden is with that patch
14:31:31 IPO_, is that related to cinder A/A or the scheduler reporting wrong stats? would like to know how many schedulers there are in your deployment
14:33:53 Tony_Saad, will that set all privsep logs to logging level ERROR? that would be problematic when debugging issues in deployments ...
14:33:56 out of curiosity: why was the logging error added to `oslo_privsep/daemon.py` and not `os_brick/privileged/__init__.py`?
14:33:57 whoami-rajat, this issue is related to A/A cinder volume and does not depend on the number of cinder-scheduler instances.
14:35:11 oh sorry, because it's using the logger from oslo_log
14:35:42 whoami-rajat, no it only sets that one log line to error, but because it is set to debug it pretty much skips that logger. I am open to discussing and testing other ways but not sure how exactly eric wanted it done
14:37:10 IPO_, ack, the volume driver reports the capabilities at a periodic interval (i think the default is 60 seconds) to the scheduler and the get pools call returns info from the scheduler cache
14:37:26 though I'm not an expert in A/A and Gorka isn't around today
14:37:34 this bug doesn't need to be fixed in privsep, it can be fixed from brick (i put some notes about how in the lp bug)
14:38:31 eharney, i saw your notes and tried them but the password was still getting leaked. Possible that i did something wrong or missed something?
14:39:29 i'd have to look at what you tried and dig into it, may be able to look at that next week
14:39:43 the patch posted to privsep i think will disable some logging that is expected to be there
14:40:30 Sure, i can push a patch with the changes that you suggested for review
14:41:17 but from my testing https://review.opendev.org/c/openstack/oslo.privsep/+/884344 only disabled that one log line that i changed
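(Side note on the password-leak discussion: one way to keep secrets out of debug logs without touching oslo.privsep is to scrub them before the value is logged, and oslo.utils ships helpers for that. The sketch below is only an illustration and may not be the approach eharney suggested in the Launchpad bug - the function and the idea of logging a connection-properties dict are assumptions, though strutils.mask_dict_password is a real oslo.utils helper.)

    from oslo_log import log as logging
    from oslo_utils import strutils

    LOG = logging.getLogger(__name__)

    def log_connection_properties(conn_props):
        # mask_dict_password returns a copy of the dict with the values of
        # password-like keys (e.g. 'password', 'auth_password') replaced by '***',
        # so the secret never reaches the log handler.
        LOG.debug("Connection properties: %s",
                  strutils.mask_dict_password(conn_props))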
14:43:46 rajat, thanks for the comment. Yes - it leads to incorrect reporting of allocated_capacity_gb for a pool - so 1. we have trouble understanding how much cinder has allocated for pools and 2. some features like capacity reservation in cinder don't work either
14:45:03 And it isn't clear why the related https://bugs.launchpad.net/cinder/+bug/1927186 is marked Incomplete
14:46:35 I see a similar comment from Gorka about the number of schedulers being > 1, which is also my observation in some deployments
14:46:44 if that's not the case with you, we can discuss this issue further
14:47:54 It was `incomplete` because I left a question a while ago and nobody changed it after that
14:47:58 I've updated it
14:48:12 No, that isn't the case, as it reproduces with one cinder-scheduler too. With multiple instances of cinder-scheduler it gets even worse :)
14:49:48 ack got it, I think even with multiple cinder volume services in A/A, there should only be one that gets data from the backend and reports it to the scheduler periodically (again, not an A/A expert)
14:49:53 enriquetaso, thanks for the comment - so should we reopen it or should I report a new one
14:50:07 can you try lowering the reporting time interval and see if the issue persists?
14:51:12 https://github.com/openstack/cinder/blob/d7ae9610d765919660a9f7a8769478f0b6e0aadf/cinder/volume/manager.py#L135-L142
14:51:22 i mean setting backend_stats_polling_interval to a value lower than 60 seconds
14:51:24 1927186 is open
14:52:16 rajat, each cinder-volume keeps a local allocated_capacity_gb and periodically reports it to the scheduler. Each time cinder-volume gets a new task to create a volume, it increases the local value and reports it back to the scheduler
14:52:19 I'm not sure if it's the same bug or not... open a new bug report if it's not related to 1927186
14:53:15 that shouldn't be the case, only the scheduler keeps the pool data in cache, cinder-volume's purpose is to get the pool data and send it to the scheduler
14:53:18 I need to drop guys, thanks for everything
14:53:32 and happy summit to the lucky ones
14:53:47 also the allocated_capacity_gb is increased/decreased by the scheduler only
14:53:56 so I have even seen negative values of allocated capacity
14:54:09 cinder volume shouldn't be performing any calculations on the backend stats
14:54:56 Looks like it does - when it starts and when it gets a task to create or delete a volume
14:55:38 can you show the place where you think it's performing calculations on the backend stats?
14:57:58 https://github.com/openstack/cinder/blob/master/cinder/volume/manager.py#L403
14:59:19 IPO_, that is only done when we initialize the cinder volume host, it doesn't happen on every cinder volume create/delete operation
15:00:10 Sure, that is why when we restart cinder-volume it recalculates capacity and shows the correct value for a while
15:00:31 yes
15:00:37 otherwise c-vol shouldn't be interfering with those values
15:00:40 anyways we are out of time
15:00:47 would be good to discuss this again next week
15:00:52 when we have better team bandwidth
15:01:00 right now many team members are at the Vancouver summit
15:01:01 ok, thank you!
15:01:23 thanks for bringing this up
15:01:26 and thanks everyone for joining
15:01:29 #endmeeting
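(Side note on the interval whoami-rajat mentions at 14:51:22: backend_stats_polling_interval is defined in the manager.py lines linked there, with a 60-second default. A minimal cinder.conf excerpt follows, assuming the option is set in a backend section named [lvmdriver-1] - the section name is only an example; check where the option is registered in your release before relying on this.)

    [lvmdriver-1]
    # Poll the driver for usage/capacity stats every 30 seconds instead of the
    # default 60, so the scheduler cache is refreshed more often.
    backend_stats_polling_interval = 30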