14:00:13 <whoami-rajat> #startmeeting cinder
14:00:13 <opendevmeet> Meeting started Wed Jun 21 14:00:13 2023 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:13 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:13 <opendevmeet> The meeting name has been set to 'cinder'
14:00:17 <whoami-rajat> #topic roll call
14:00:28 <simondodsley> o/
14:00:30 <thiagoalvoravel> o/
14:00:35 <eharney> hi
14:00:40 <enriquetaso> hi
14:00:45 <IPO> hi
14:00:56 <felipe_rodrigues> hi
14:01:00 <keerthivasansuresh> o/
14:01:11 <tosky> hi
14:02:17 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-bobcat-meetings
14:04:37 <whoami-rajat> hello everyone
14:04:51 <whoami-rajat> and folks that are back from Vancouver PTG
14:04:57 <helenadantas[m]> o/
14:05:09 <whoami-rajat> let's get started
14:05:12 <whoami-rajat> #topic announcements
14:05:24 <whoami-rajat> first, Spec freeze
14:05:36 <whoami-rajat> The deadline for spec freeze is tomorrow i.e. 22nd June
14:05:44 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-2023-2-bobcat-specs
14:06:32 <nahimsouza[m]> o/
14:06:41 <whoami-rajat> we only have 1 remaining spec for this cycle
14:06:43 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder-specs/+/868761
14:07:37 <whoami-rajat> i see Christian has addressed the recent comments
14:07:45 <whoami-rajat> i will review the spec again
14:08:03 <whoami-rajat> but other cores, if you get some time, please take a look at the spec ^
14:08:06 <whoami-rajat> since we only have 1 day
14:08:21 <whoami-rajat> we might extend the deadline for this spec if it misses but it's always good to be on time
14:09:02 <whoami-rajat> also don't want to have conflicts during implementation since it adds a new field to the volume response
14:09:13 <whoami-rajat> better to have the discussion complete on the spec itself
14:09:59 <whoami-rajat> moving on
14:10:16 <whoami-rajat> we have M-2 coming up in 2 weeks
14:10:26 <whoami-rajat> i.e. 6th July
14:10:38 <whoami-rajat> we have the driver merge deadline on M-2
14:11:00 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-2023-2-bobcat-drivers
14:11:11 <whoami-rajat> here is the list of drivers I'm currently tracking
14:11:27 <whoami-rajat> if you are planning to propose a driver or have already proposed it, please add it to the above etherpad ^
14:13:11 <whoami-rajat> on a related note, it's a good time to propose stable releases
14:13:32 <whoami-rajat> jbernard doesn't seem to be around at this time but i can have a chat with him later
14:13:38 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034062.html
14:14:32 <whoami-rajat> Thierry mentioned a script that can provide a list of unreleased changes
14:14:37 <whoami-rajat> tools/list_stable_unreleased_changes.sh
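[editor's note: a minimal sketch of how that script might be invoked, assuming it lives in the openstack/releases repository and takes a stable branch plus one or more repository names; the location and arguments are assumptions, so check the script's help text before relying on it]

    # hypothetical invocation; branch/repo arguments are assumptions
    ./tools/list_stable_unreleased_changes.sh stable/2023.1 openstack/cinder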
14:16:42 <whoami-rajat> that's all the announcements i had
14:16:54 <whoami-rajat> let's move to topics
14:17:38 <whoami-rajat> #topic Vancouver 2023 Summit/PTG/Forum report
14:17:56 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-vancouver-2023-followup
14:18:16 <whoami-rajat> rosmaita wrote a great summary of the PTG
14:18:37 <whoami-rajat> for people who are not aware, we had an in-person PTG in Vancouver from 13-15 June and some of the team members attended
14:18:53 <whoami-rajat> you can read through the summary but I have created a summary of the summary
14:19:12 <whoami-rajat> i.e. just the highlights of what rosmaita mentioned (according to me)
14:19:24 <whoami-rajat> TC & Community Leaders Interaction
14:19:24 <whoami-rajat> we need to better communicate the available job troubleshooting tools that we have available
14:19:24 <whoami-rajat> services are encouraged to add sqlalchemy 2.0 jobs (as non-voting) to catch problems (and then fix them)
14:19:56 <whoami-rajat> S-RBAC Operator Feedback
14:19:56 <whoami-rajat> we're sticking with only project scope for now
14:19:56 <whoami-rajat> operators would like some more default personas, but we need to get the current goal completed before thinking about these:
14:19:56 <whoami-rajat> - global reader
14:19:56 <whoami-rajat> - L2-support-admin
14:19:58 <whoami-rajat> - domain scope support
14:20:23 <whoami-rajat> Managing OSC and SDK as multi-project deliverables
14:20:23 <whoami-rajat> cinder-core will be added to osc-service-core and sdk-service-core as +2 (and not +W)
14:20:46 <whoami-rajat> this is really good news, we will get faster code merges in osc and sdk ^
14:20:57 <whoami-rajat> The OpenStack Block Storage service ... how are we doing?
14:20:58 <whoami-rajat> operators would like multiple backup backends
14:21:42 <whoami-rajat> How do we end the Extended Maintenance "experiment"?
14:21:43 <whoami-rajat> There was agreement that the name "extended maintenance" is not accurate, and we need to come up with something different/update the docs
14:21:43 <whoami-rajat> Several operators argued that keeping the branches available (even if CVE fixes are not merged into them) is useful for collaboration
14:22:31 <enriquetaso> ++
14:22:31 <whoami-rajat> though we didn't have any opposition on the ML, some operators seem to have issues with EOLing the EM branches ^
14:23:02 <IPO> +
14:23:19 <enriquetaso> ++ to +2 to osc and sdk
14:24:15 <whoami-rajat> yeah, we have been trying to get more involvement in those projects, good to see progress being made there
14:24:43 <whoami-rajat> i would still recommend going through Brian's summary but hopefully the information was helpful
14:25:16 <enriquetaso> i will, thanks rajat
14:26:18 <whoami-rajat> thanks enriquetaso
14:26:32 <whoami-rajat> that was all for today
14:26:36 <whoami-rajat> let's move to open discussion
14:26:39 <whoami-rajat> #topic open discussion
14:27:17 <zaitcev> Is there a LP bug for multiple backup back-ends? I am not sure if I saw one or not.
14:28:41 <enriquetaso> are you talking about https://review.opendev.org/c/openstack/cinder-specs/+/868761 zaitcev ?
14:29:28 <whoami-rajat> enriquetaso, i think that is different
14:29:44 <whoami-rajat> we are referring to being able to configure multiple backup backends
14:29:49 <whoami-rajat> like we do for volume backends
14:29:50 <enriquetaso> oh sorry
14:30:00 <zaitcev> enriquetaso: no, unless having multiple back-ends requires the separate status. I don't think it does.
14:30:04 <whoami-rajat> the spec you referred to is for allowing other operations, like live migration, while a backup is in progress
14:30:22 <whoami-rajat> zaitcev, i remember we had discussions about it in the past, and maybe an old spec for it
14:30:23 <whoami-rajat> let me look
14:30:32 <zaitcev> I seem to remember some kind of spec for the syntax... Like [backup-1] or something?
14:31:30 <whoami-rajat> zaitcev, https://review.opendev.org/c/openstack/cinder-specs/+/712301
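[editor's note: for context, multiple volume backends are configured today with per-backend sections selected via enabled_backends, and the multi-backup-backend idea referenced above would presumably follow a similar pattern. Below is a sketch showing the existing volume-backend style plus a purely hypothetical backup analogue; the backup option and section names are assumptions, not what any spec has settled on]

    # existing cinder.conf pattern for multiple volume backends
    [DEFAULT]
    enabled_backends = lvm1,ceph1

    [lvm1]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_backend_name = lvm1

    [ceph1]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph1

    # hypothetical analogue for backup backends (option/section names are assumptions)
    #enabled_backup_backends = backup-1,backup-2
    #[backup-1]
    #backup_driver = cinder.backup.drivers.ceph.CephBackupDriver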
14:31:51 <IPO> I'd like to sort out the Active/Active cinder cluster and allocated_capacity_gb issue - https://bugs.launchpad.net/cinder/+bug/1927186 - if there is some time and people for it
14:32:59 <zaitcev> whoami-rajat: perfect, thanks. I got that. So, do we have a bug in LaunchPad to track the progress and issues of this?
14:34:29 <enriquetaso> eharney, found an issue that may be related to configuring multiple backup backends with the ceph/rbd backup driver. Currently it's possible to select a different backend/pool/container using the `--container` argument when creating the backup
14:35:03 <enriquetaso> #link https://bugs.launchpad.net/cinder/+bug/2024484
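[editor's note: for reference, the `--container` argument mentioned above is supplied at backup creation time, and with the ceph backup driver it effectively picks the destination pool; a typical invocation looks like the following, where the volume and container names are placeholders]

    openstack volume backup create --container my-other-pool my-volume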
14:37:33 <eharney> hmm
14:37:44 <eharney> looks like a good thing to fix
14:42:26 <whoami-rajat> if that's all for the discussion, we can end the meeting early today
14:42:42 <IPO> I'd like to sort out the Active/Active cinder cluster and allocated_capacity_gb issue - https://bugs.launchpad.net/cinder/+bug/1927186 - if there is some time and people for it
14:43:52 <whoami-rajat> IPO, i think geguileo might be able to help with Active/Active related things, though not sure if he is around since i don't see him in the meeting
14:44:55 <geguileo> whoami-rajat: I'm around
14:45:09 <IPO> :)
14:45:17 <whoami-rajat> great!
14:45:23 <eharney> didn't hemna do some work that improved capacity tracking w/ multiple services running?
14:45:49 <geguileo> eharney: I believe his work was around properly counting in all the missing cases
14:46:04 <geguileo> The problem with A/A is a different pain  :'-(
14:46:09 <eharney> ah
14:46:31 <geguileo> Basically all schedulers have different accounting, and also all the different volume services will have different numbers
14:46:36 <geguileo> So it's a literal mess
14:48:05 <IPO> geguileo: So you confirm that it is a bug and not some misconfiguration, etc...
14:48:07 <whoami-rajat> geguileo, but i think IPO tried with a single scheduler and still sees issues with counting
14:48:30 <whoami-rajat> s/counting/allocated_capacity_gb
14:48:48 <IPO> Thx, rajat. Single scheduler, multiple cinder-volumes in cluster
14:48:49 <geguileo> whoami-rajat: yeah, because in A/A you also have different values on the different cinder-volume nodes
14:49:16 <geguileo> The problem is that our current code does absolute accounting everywhere
14:49:30 <geguileo> So everyone thinks their value is "the right one"
14:49:36 <geguileo> It's a mess
14:49:41 <whoami-rajat> geguileo, hmm, i thought c-vol only sends the capabilities to the scheduler and scheduler caches it
14:49:44 <whoami-rajat> i could be wrong though
14:49:59 <geguileo> cinder-volume also changes those values during some operations
14:50:10 <geguileo> iirc that's where hemna did changes
14:50:26 <whoami-rajat> it's during host initialization IIRC
14:50:56 <IPO> so it looks like it is a real bug... As I wrote, maybe usage of memcached will help
14:50:58 <whoami-rajat> but IPO says the values get corrected during that but then become inconsistent again after a while
14:51:33 <IPO> Yes, they get corrected after a restart of all cinder-volume services
14:51:57 <IPO> and then after some time they go back to incorrect (even negative) values
14:52:48 <whoami-rajat> yeah, so I'm not sure we change the values anywhere else apart from host initialization
14:53:28 <whoami-rajat> in c-vol i mean
14:53:54 <IPO> Instead of listening for RPC notifications to correct allocated_capacity_gb for each cinder-volume and using memcached to share this value, I think there is one way to correct it
14:54:26 <IPO> less complicated... We can add one more periodic task for recalculation of allocated_capacity_gb
14:54:27 <geguileo> IPO: the problem with that approach is that we'd be adding a new service dependency
14:55:10 <IPO> I even created a POC with chunks of the volume init code
14:56:46 <geguileo> IPO does your POC make atomic modifications when cinder-volume or the scheduler makes changes to those values?
14:57:45 <IPO> No, of course not :) no locks etc.
14:58:51 <IPO> But it looks better than scheduling cinder-volume service restarts on a periodic basis
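[editor's note: to make the periodic-task idea above concrete, here is a minimal sketch, not IPO's actual POC and not Cinder's real code, of what a periodic recalculation of allocated_capacity_gb could look like on the volume manager; it assumes Cinder's oslo_service periodic task machinery and the manager's existing self.host / self.stats attributes, and the mixin and method names are invented for illustration]

    # Hypothetical sketch of a periodic allocated_capacity_gb refresh.
    from oslo_service import periodic_task

    from cinder import objects
    from cinder.volume import volume_utils


    class AllocatedCapacityRefreshMixin(object):
        """Assumed to be mixed into a manager that has self.host and self.stats."""

        @periodic_task.periodic_task(spacing=600)  # spacing is an arbitrary choice
        def _refresh_allocated_capacity(self, context):
            # Recompute the per-pool totals from the database instead of
            # trusting the incrementally updated in-memory counters, which
            # can drift when several cinder-volume services share a cluster.
            totals = {}
            volumes = objects.VolumeList.get_all_by_host(context, self.host)
            for volume in volumes:
                pool = volume_utils.extract_host(volume.host, 'pool', True)
                totals[pool] = totals.get(pool, 0) + (volume.size or 0)

            for pool, allocated in totals.items():
                pool_stats = self.stats.setdefault('pools', {}).setdefault(
                    pool, {'allocated_capacity_gb': 0})
                pool_stats['allocated_capacity_gb'] = allocated

[editor's note: even with something like this, each cinder-volume service still only recomputes what it can see, so per the discussion above it mitigates the drift rather than fixing the shared accounting problem]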
14:59:25 <geguileo> IPO I think the whole "stats" model needs a rethink
14:59:32 <IPO> ++
15:00:04 <IPO> But we need to live somehow till that bright future comes
15:00:08 <whoami-rajat> we're out of time
15:00:15 <whoami-rajat> we can continue discussion in #openstack-cinder
15:00:19 <whoami-rajat> after the Block Storage meeting, of course
15:00:25 <whoami-rajat> thanks everyone for joining
15:00:27 <whoami-rajat> #endmeeting