Wednesday, 2023-06-14

whoami-rajat#startmeeting cinder14:00
opendevmeetMeeting started Wed Jun 14 14:00:20 2023 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.14:00
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.14:00
opendevmeetThe meeting name has been set to 'cinder'14:00
enriquetasohi14:00
whoami-rajat#topic roll call14:00
yuvalyo14:00
IPO_hi14:00
whoami-rajat#link https://etherpad.opendev.org/p/cinder-bobcat-meetings14:01
whoami-rajatsome of the folks are at Vancouver PTG so we might have less attendance14:02
MatheusAndrade[m]o/14:02
helenadantas[m]o/14:02
luizsantos[m]o/14:02
toskyo/14:03
thiagoalvoravelo/14:03
whoami-rajatgood attendance14:04
whoami-rajatlet's get started14:04
Tony_Saado/14:04
happystackerHello14:04
whoami-rajat#topic announcements14:05
whoami-rajatfirst, Cinder PTG Schedule14:05
whoami-rajat#link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034056.html14:05
whoami-rajatunfortunately i wasn't able to travel to vancouver but Brian is taking care of the PTG which is really great14:05
whoami-rajathe sent out the schedule to ML14:05
whoami-rajatcinder is going to have 3 sessions for PTG given the smaller number of people attending14:06
whoami-rajatWednesday14:06
whoami-rajat10:20-10:50  Support for NVMe-OF in os-brick14:06
whoami-rajat15:50-16:20  Cinder Operator Half-Hour14:06
whoami-rajatThursday14:06
whoami-rajat16:40-17:10  Open Discussion with the Cinder project team14:06
whoami-rajatsince Vancouver is UTC-7, i think it will start later today14:06
whoami-rajatI don't have info if there is a virtual thing planned but let's see if we get a summary or notes out of the sessions14:06
happystackergood14:07
whoami-rajatnext, Cinder Operators event14:07
whoami-rajat#link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034057.html14:07
happystackeryeah it'd be great if we can have the minutes14:07
whoami-rajathappystacker, yep, i can ask Brian if he can write those down from the sessions (if I'm able to make contact with him)14:08
whoami-rajatmaybe just drop a mail14:08
happystackerok cool, thanks rajat14:08
whoami-rajatnp14:08
whoami-rajatso the same sessions are also good for operators to attend14:08
whoami-rajatbut we also have additional forum session14:08
whoami-rajatForum session:14:09
whoami-rajatCinder, the OpenStack Block Storage service ... how are we doing?14:09
whoami-rajatLooking for feedback from operators, vendors, and end-users14:09
whoami-rajat#link https://etherpad.opendev.org/p/cinder-vancouver-forum-202314:09
whoami-rajatTiming: 1840 UTC - 1910 UTC14:09
whoami-rajatthat's all about the Vancouver summit from Cinder perspective14:10
whoami-rajatnext, Spec freeze (22nd June)14:10
whoami-rajat#link https://etherpad.opendev.org/p/cinder-2023-2-bobcat-specs14:10
whoami-rajatwe have the cinder spec deadline coming up14:11
whoami-rajati.e. 22nd June14:11
whoami-rajatI've created the above etherpad to track the specs14:11
whoami-rajatmajorly the first two need reviews14:11
happystackerConsidering the bandwidth we have and the current date, I don't think we'll make it for https://review.opendev.org/c/openstack/cinder-specs/+/87201914:12
happystackershould be postponed to C release14:12
whoami-rajathappystacker, do you mean the developer bandwidth or the reviewer bandwidth? I'm assuming the former14:13
happystackerdev perspective14:13
whoami-rajatif you feel it is hard to complete this cycle, we can surely push for next cycle14:13
happystackeryeah that's what I mean to say14:14
happystackerthis is a good chunk of work14:14
whoami-rajatsure, I will add a W-1 stating our meeting discussion and we can come back to it next cycle14:14
whoami-rajatthanks for the heads up happystacker 14:15
happystackernp, sorry for that14:15
whoami-rajatno worries14:16
whoami-rajatI've added a comment14:16
whoami-rajatso we only have 1 spec to review now14:16
whoami-rajatother one is just a reproposal14:16
whoami-rajat#link https://review.opendev.org/c/openstack/cinder-specs/+/86876114:17
whoami-rajat#link https://review.opendev.org/c/openstack/cinder-specs/+/87723014:17
whoami-rajatif anyone needs a quick link ^14:17
whoami-rajatnext, Milestone-2 (06 July)14:17
whoami-rajat#link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034062.html14:17
whoami-rajat#link https://releases.openstack.org/bobcat/schedule.html#b-mf14:18
whoami-rajatwe have Milestone 2 upcoming14:18
whoami-rajatalong with which we have the volume and target driver merge deadline14:18
whoami-rajatI've created an etherpad to track the drivers for this cycle14:18
whoami-rajat#link https://etherpad.opendev.org/p/cinder-2023-2-bobcat-drivers14:18
whoami-rajatso far I've added the Yadro FC driver and the Lustre driver14:18
whoami-rajatbut if you are planning to propose or have proposed any driver for the 2023.2 Bobcat cycle, please add it to the list14:19
happystackernew driver you mean to say?14:19
whoami-rajatyes14:19
whoami-rajatnew volume and target drivers14:20
happystackerok nothing new from our side for cinder14:20
happystackerthks14:20
whoami-rajatack, good to know14:20
whoami-rajatok, last announcement, Cinder Incremental backup working14:21
whoami-rajatjust for general awareness14:21
whoami-rajatif anyone has doubts about how the drivers inheriting from ChunkedBackupDriver do incremental backups14:21
whoami-rajatGorka has an article written about it14:21
whoami-rajatsome of the info might be dated but the incremental mechanism should still be the same (i just took a quick glance and it looks the same)14:22
whoami-rajatit came up on ML so thought others might have a doubt about this14:22
whoami-rajat#link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034098.html14:22
happystackeroh cool, will have a look14:22
whoami-rajatlink to gorka's article14:22
whoami-rajat#link https://web.archive.org/web/20160407151329/http://gorka.eguileor.com/inside-cinders-incremental-backup14:22
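For readers following along outside the meeting: the incremental mechanism described in Gorka's article can be sketched roughly as comparing per-block SHA-256 digests against the previous backup and uploading only the blocks that changed. This is only an illustration under that assumption — `sha_blocks` and `changed_blocks` are invented names, not cinder's actual ChunkedBackupDriver code:

```python
import hashlib

BLOCK_SIZE = 4  # bytes, tiny for the demo; the real driver uses a much larger block size

def sha_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Per-block SHA-256 digests of a volume image."""
    return [
        hashlib.sha256(data[i:i + block_size]).hexdigest()
        for i in range(0, len(data), block_size)
    ]

def changed_blocks(old_shas, new_data: bytes, block_size: int = BLOCK_SIZE):
    """Indices of blocks whose digest differs from the previous backup.

    Only these blocks would be uploaded in an incremental backup."""
    new_shas = sha_blocks(new_data, block_size)
    return [
        i for i, sha in enumerate(new_shas)
        if i >= len(old_shas) or old_shas[i] != sha
    ]

base = b"aaaabbbbcccc"
incr = b"aaaaXXXXcccc"
print(changed_blocks(sha_blocks(base), incr))  # only the middle block changed -> [1]
```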
enriquetaso++14:23
whoami-rajatthat's all for announcements14:25
whoami-rajatwe also don't have any topic today14:25
whoami-rajatlet's move to open discussion14:25
whoami-rajat#topic open discussion14:25
whoami-rajatreminder to take a look at the review request patches14:25
IPO_I have one question if you don't mind14:26
whoami-rajatsure14:27
Tony_Saadi would like to discuss https://review.opendev.org/c/openstack/oslo.privsep/+/88434414:27
IPO_Well, my question is described in https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034042.html14:28
IPO_I'm able to provide additional comments if needed. Main question - is it a known issue and are there any plans to work on it14:30
enriquetaso#link HPE STORAGE CI14:30
enriquetasosorry 14:30
enriquetasohttps://bugs.launchpad.net/cinder/+bug/2003179/comments/714:30
whoami-rajatTony_Saad, sure, what's your question14:31
Tony_Saadfor https://review.opendev.org/c/openstack/oslo.privsep/+/884344 my patch works. I tried the way Eric described and it did not hide the password. The only way i got the password hidden is with that patch14:31
whoami-rajatIPO_, is that related to cinder A/A or the scheduler reporting wrong stats? would like to know how many schedulers are there in your deployment?14:31
whoami-rajatTony_Saad, will that set all privsep logs to logging level ERROR? that would be problematic when debugging issues in deployments ...14:33
enriquetasoout of curiosity: why the logging error was added to `oslo_privsep/daemon.py` and not `os_brick/privileged/__init__.py`?14:33
IPO_whoami-rajat, this issue related to A/A cinder volume and not depending on number of cinder-scheduler instances.14:33
enriquetasooh sorry, because it's using the logger from oslo_log14:35
Tony_Saadwhoami-rajat, no it only sets that one log line to error but because it is set to debug it pretty much skips that logger. I am open to discuss and test other ways but not sure how exactly eric wanted it done14:35
whoami-rajatIPO_, ack, the volume driver reports the capabilities in a periodic interval (i think default is 60 seconds) to the scheduler and the get pools call returns info from scheduler cache14:37
whoami-rajatthough I'm not an expert in A/A and Gorka isn't around today14:37
eharneythis bug doesn't need to be fixed in privsep, it can be fixed from brick  (i put some notes about how in the lp bug)14:37
Tony_Saadeharney, i saw your notes and tried them but the password was still getting leaked. Possible that i did something wrong or missed something?14:38
eharneyi'd have to look at what you tried and dig into it, may be able to look at that next week14:39
eharneythe patch posted to privsep i think will disable some logging that is expected to be there14:39
Tony_SaadSure i can push a patch with the changes that you suggested and review14:40
Tony_Saadbut from my testing https://review.opendev.org/c/openstack/oslo.privsep/+/884344 only disabled that one log line that i changed14:41
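As background for this thread: a common pattern for keeping secrets out of log output without silencing a whole logger (the concern raised about setting the privsep log line to ERROR) is a `logging.Filter` that masks the sensitive token before the record is emitted. This is only a generic illustration — `RedactPasswords` is an invented name here, not the os-brick/oslo.privsep fix being discussed:

```python
import logging

class RedactPasswords(logging.Filter):
    """Mask password-like tokens in a log record's message.

    Illustrative only; real code would use a careful regex over known
    secret keys rather than this naive split."""
    def filter(self, record):
        msg = record.getMessage()
        if "password=" in msg:
            record.msg = msg.split("password=")[0] + "password=***"
            record.args = ()  # message is already fully formatted
        return True  # keep the record, just redacted

logger = logging.getLogger("demo.privsep")
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)
logger.addFilter(RedactPasswords())
logger.debug("running command with password=s3cret")  # emitted with password=***
```

The advantage over lowering the log level is that other DEBUG messages from the same logger still reach operators debugging a deployment.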
IPO_rajat, thanks for the comment. Yes - it leads to incorrect reporting of allocated_capacity_gb for the pool - so 1. we have trouble understanding cinder's allocated amount for pools and 2. some features like capacity reservation in cinder don't work either14:43
IPO_And it isn't clear, why related https://bugs.launchpad.net/cinder/+bug/1927186 is incomplete14:45
whoami-rajatI see a similar comment from Gorka about number of schedulers being > 1 which is also my observation in some deployments14:46
whoami-rajatif that's not the case with you, we can discuss this issue further14:46
enriquetasoWas `incomplete` because I left a question a while ago and nobody change it after that 14:47
enriquetasoI've updated it14:47
IPO_No, it isn't the case, as it is reproduced with one cinder-scheduler too. In the case of multiple cinder-scheduler instances it gets even worse :)14:48
whoami-rajatack got it, I think even with multiple cinder volume services in A/A, there should only be one that gets data from backend and reports it to the scheduler in a periodic time (again not A/A expert)14:49
IPO_enriquetaso, thanks for the comment - so should we reopen it, or should I report a new one?14:49
whoami-rajatcan you try lowering the time interval of reporting ? and see if the issue persists14:50
whoami-rajathttps://github.com/openstack/cinder/blob/d7ae9610d765919660a9f7a8769478f0b6e0aadf/cinder/volume/manager.py#L135-L14214:51
whoami-rajati mean setting backend_stats_polling_interval to a value lower than 60 seconds14:51
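For reference, that option is set in cinder.conf; a minimal sketch, assuming a backend section named `[lvm]` (the actual section name depends on your `enabled_backends` configuration):

```ini
[lvm]
# Seconds between backend stats reports to the scheduler (default 60);
# lowering it makes the scheduler's cached pool data refresh sooner.
backend_stats_polling_interval = 30
```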
enriquetaso1927186 is open14:51
IPO_rajat, each cinder-volume keeps a local allocated_capacity_gb and periodically reports it to the scheduler. Each time cinder-volume gets a new task to create a volume, it increases the local value and reports it back to the scheduler14:52
enriquetasoI'm not sure if it's the same bug or not... open a new bug report if it's not related to 192718614:52
whoami-rajatthat shouldn't be the case, only scheduler keeps the pool data in cache, cinder-volume's purpose is to get the pool data and send it to scheduler14:53
happystackerI need to drop guys, thanks for all14:53
happystackerand happy summit for the lucky ones14:53
whoami-rajatalso the allocated_capacity_gb is increased/decreased by the scheduler only14:53
IPO_I even saw a negative value of allocated capacity14:53
whoami-rajatcinder volume shouldn't be performing any calculations on the backend stats14:54
IPO_Looks like it does - when it starts and when it gets a task to create or delete a volume14:54
whoami-rajatcan you show the place where you think it's performing calculations on the backend stats?14:55
IPO_https://github.com/openstack/cinder/blob/master/cinder/volume/manager.py#L40314:57
whoami-rajatIPO_, that is only done when we initialize the cinder volume host, it doesn't happen in every cinder volume create/delete operation14:59
IPO_Sure, that is why when we restart cinder-volume it recalculates the capacity and shows the correct value for a while15:00
whoami-rajatyes15:00
whoami-rajatelse c-vol shouldn't be interfering with those values15:00
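To make the startup behavior linked from manager.py concrete: at service init, cinder-volume recomputes per-pool allocated capacity by summing the sizes of existing volumes. A simplified, hypothetical sketch of that idea (`init_allocated_capacity` is an invented name; the real logic in `VolumeManager` is considerably more involved):

```python
def init_allocated_capacity(volumes):
    """Sum per-pool allocated capacity (GB) from existing volumes.

    Mimics the one-time recalculation done at cinder-volume startup,
    which is why a service restart temporarily shows correct values."""
    pools = {}
    for vol in volumes:
        pool = vol.get("pool", "default")
        pools[pool] = pools.get(pool, 0) + vol["size"]
    return pools

existing = [
    {"pool": "pool-a", "size": 10},
    {"pool": "pool-a", "size": 5},
    {"pool": "pool-b", "size": 20},
]
print(init_allocated_capacity(existing))  # {'pool-a': 15, 'pool-b': 20}
```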
whoami-rajatanyways we are out of time15:00
whoami-rajatwould be good to discuss this again next week15:00
whoami-rajatwhen we have better team bandwidth15:00
whoami-rajatright now many team members are in the vancouver summit15:01
IPO_ok, thank you !15:01
whoami-rajatthanks for bringing this up15:01
whoami-rajatand thanks everyone for joining15:01
whoami-rajat#endmeeting15:01
opendevmeetMeeting ended Wed Jun 14 15:01:29 2023 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)15:01
opendevmeetMinutes:        https://meetings.opendev.org/meetings/cinder/2023/cinder.2023-06-14-14.00.html15:01
opendevmeetMinutes (text): https://meetings.opendev.org/meetings/cinder/2023/cinder.2023-06-14-14.00.txt15:01
opendevmeetLog:            https://meetings.opendev.org/meetings/cinder/2023/cinder.2023-06-14-14.00.log.html15:01
IPO_enriquetaso, thank you for reopening 192718615:06
enriquetasoIPO_, it was never closed lol15:55
enriquetasoi've just moved the status to `new`15:55

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!