14:00:07 <whoami-rajat> #startmeeting cinder
14:00:07 <opendevmeet> Meeting started Wed Jun 15 14:00:07 2022 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:07 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:07 <opendevmeet> The meeting name has been set to 'cinder'
14:00:17 <whoami-rajat> #topic roll call
14:00:29 <fabiooliveira> hi o/
14:00:45 <felipe_rodrigues> o;
14:00:52 <felipe_rodrigues> o/
14:01:03 <Roamer`> o/
14:01:03 <jungleboyj> o/
14:01:07 <simondodsley> hi
14:01:17 <rosmaita> o/
14:01:18 <aneeeshp1> hi
14:01:35 <caiquemello[m]> o/
14:02:11 <nahimsouza[m]> o/
14:02:22 <whoami-rajat> #link https://etherpad.openstack.org/p/cinder-zed-meetings
14:02:36 <luizsantos[m]> o/
14:02:48 <enriquetaso> hi
14:04:17 <whoami-rajat> looks like a good turnout
14:04:23 <whoami-rajat> let's get started
14:04:31 <tosky> hi
14:04:36 <whoami-rajat> #topic announcements
14:04:52 <whoami-rajat> SRBAC Berlin discussion
14:05:24 <whoami-rajat> so I'm not sure about the exact dates or times of the sessions, but i believe the one related to the service role was held on thursday and the one on operator feedback on friday
14:05:39 <yuval> hey
14:05:49 <whoami-rajat> anyway, we had an ops meetup in berlin which had a topic to discuss our issues with the current SRBAC strategy
14:06:38 <whoami-rajat> mainly the scopes, since the design has changed quite a few times and the current one is still not satisfactory
14:07:01 <whoami-rajat> gmann, has described this in a ML thread
14:07:03 <whoami-rajat> #link http://lists.openstack.org/pipermail/openstack-discuss/2022-June/028878.html
14:07:28 <whoami-rajat> the feedback etherpad for SRBAC in ops meetup is here
14:07:29 <whoami-rajat> #link https://etherpad.opendev.org/p/rbac-operator-feedback
14:07:47 <whoami-rajat> honestly, i wasn't able to derive any conclusions or major points to highlight here
14:08:04 <whoami-rajat> so feel free to take a look at the etherpad discussion section
14:08:21 <whoami-rajat> another session related to service role was held on thursday last week
14:08:31 <whoami-rajat> #link https://etherpad.opendev.org/p/deprivilization-of-service-accounts
14:09:05 <whoami-rajat> again, the etherpad says very little, and until a recording is available I can't derive many concrete items from it
14:09:32 <rosmaita> me neither
14:09:57 <whoami-rajat> i was hoping rosmaita would but hard luck :/
14:10:06 <whoami-rajat> let's see if TC posts an update about it
14:10:20 <rosmaita> the policy pop-up team did not meet yesterday, so hoping to get some info on thursday at the TC meeting
14:10:46 <gmann> yeah, I am consolidating all the feedback from the various places, ops meetup/forums etc
14:11:09 <whoami-rajat> gmann, great thanks
14:11:12 <gmann> and also will send the meeting schedule soon to decide the next step
14:11:54 <whoami-rajat> sounds like a plan so let's wait for further discussions
14:12:01 <whoami-rajat> thanks gmann for the update
14:12:02 <rosmaita> gmann: my impression is that ironic absolutely wants scope, but maybe it can be optional for other projects?
14:12:44 <rosmaita> (if it's too complicated to answer, we can discuss at the tc meeting)
14:12:54 <gmann> rosmaita: maybe, ironic can be an exception, but other inter-dependent projects should not have scope individually. but let's discuss in the policy popup meeting
14:12:55 <gmann> yeah
14:13:37 <whoami-rajat> thanks again, let's move on to our next announcement
14:13:44 <whoami-rajat> next, cinderlib for yoga must be released by 23 June
14:13:51 <whoami-rajat> rosmaita, that's you
14:14:19 <rosmaita> just wanted to remind everyone, i will talk about some issues later
14:14:48 <whoami-rajat> ack, I received one reminder from herve about the cinderlib release
14:14:51 <whoami-rajat> they've proposed a patch
14:14:54 <whoami-rajat> #link https://review.opendev.org/c/openstack/releases/+/845701
14:15:15 <rosmaita> yeah, please put a -1 on that
14:15:28 <whoami-rajat> and I've told him that after discussing all the issues we've in cinderlib, me or rosmaita will add a comment to the patch
14:15:51 <rosmaita> also, there's https://review.opendev.org/c/openstack/releases/+/842105
14:16:21 <whoami-rajat> ah, this one is old ...
14:16:35 <whoami-rajat> I will ask him to abandon the new one in favor of this one then
14:17:01 <rosmaita> sounds good, just -1 both so that there's no confusion
14:17:20 <whoami-rajat> yep, will do, thanks
14:17:35 <whoami-rajat> next announcement, spec freeze is 24 June
14:18:43 <whoami-rajat> so i see the quota spec by geguileo still needs an update
14:18:55 <rosmaita> has anyone looked at the task status field proposal?
14:18:58 <simondodsley> also this one needs to be addressed: https://review.opendev.org/c/openstack/cinder-specs/+/818551
14:19:06 <simondodsley> that's the one
14:19:08 <rosmaita> that's the one
14:19:13 <rosmaita> jinx!!!
14:19:27 <whoami-rajat> also the SRBAC one needs to be updated after we have all the discussion points from the ops meetup (maybe it will get a spec freeze exception)
14:19:28 <simondodsley> Walt has an issue with it
14:19:48 <simondodsley> hemna are you here?
14:20:17 <geguileo> whoami-rajat: yeah, sorry I was busy with the nvmeof and backup memory usage stuff
14:20:23 * fabiooliveira oh no, you're jinxed -- kidding
14:20:28 <geguileo> whoami-rajat: those are ready now, so I should be able to go back to it
14:21:07 <simondodsley> the task status one really needs to be either approved or not so they can start the coding. This has been hanging around since Yoga
14:21:17 <whoami-rajat> geguileo, np, just wanted to know we're on track as next week is spec freeze
14:21:20 <rosmaita> next time someone sees hemna in #openstack-cinder, please ask him to go back to https://review.opendev.org/c/openstack/cinder-specs/+/818551 and respond to their responses
14:21:25 <whoami-rajat> geguileo, great
14:21:41 <geguileo> whoami-rajat: who needs sleep!!
14:22:19 <whoami-rajat> :D sorry about all the work items you've got in this cycle ... and everything is IMPORTANT!!
14:22:52 <geguileo> lol
14:23:06 <simondodsley> there are a lot of old specs out there - we need to either kill them or retarget to zed
14:23:07 <whoami-rajat> as rosmaita said, please followup with hemna on the spec and i will also take a look by this week
14:23:18 <rosmaita> speaking of the memory usage stuff, geguileo has an interesting patch up: https://review.opendev.org/c/openstack/devstack/+/845805
14:24:19 <whoami-rajat> the numbers look fascinating, 1GB -> 130 MB
14:24:37 <rosmaita> plus, it's not cinder-backup's fault!
14:24:43 <rosmaita> (that's the best part)
14:24:44 <enriquetaso> ++
14:24:46 <geguileo> lol
14:24:54 <geguileo> yeah, not our fault for once
14:25:11 <whoami-rajat> yeah, another not a cinder issue
14:25:12 <jungleboyj> \o/
14:26:33 <whoami-rajat> simondodsley, good idea, i will take a look at specs that are not relevant for Zed and ask them to be retargeted or abandoned
14:26:54 <whoami-rajat> ok so let's move on to topics
14:26:55 <simondodsley> i've already asked for retargets but there have been no responses
14:27:24 <whoami-rajat> oh, then we probably should abandon them after a certain amount of time but not too sure about it
14:27:32 <whoami-rajat> rosmaita, what do you think ? ^
14:28:15 <rosmaita> whoami-rajat: yes, we should abandon anything maybe older than 2 cycles
14:28:23 <jungleboyj> rosmaita: ++
14:28:30 <rosmaita> with a note, "feel free to restore if you want to keep working on this"
14:28:47 <whoami-rajat> we've got 2 PTLs approval on this so let's move on with this strategy ^
14:29:16 <whoami-rajat> ok moving on to topics
14:29:18 <whoami-rajat> #topic Reviews request
14:29:22 <whoami-rajat> enriquetaso, that's you
14:29:32 <enriquetaso> Hey
14:29:33 <whoami-rajat> #link https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/782624/
14:29:50 <enriquetaso> Just asking for reviews again :P
14:30:16 <enriquetaso> I really need some reviews on this to continue working on more tempest patches.
14:30:33 <enriquetaso> I'd like to see the mimic client on the CI before submitting more tempest tests
14:30:20 <rosmaita> reminder: all cinder-core are also devstack-plugin-ceph cores
14:30:33 <enriquetaso> thanks! That's all for me!
14:30:45 <enriquetaso> I think we discussed this patch a while ago in this meeting
14:30:52 <enriquetaso> and it could be ready to merge
14:31:20 <whoami-rajat> the comment i read in the patch might not be too accurate
14:31:21 <whoami-rajat> # Enables new features such as Clone v2 API, which allows proper handling of
14:31:21 <whoami-rajat> # deleting snapshots with child clone images.
14:31:33 <whoami-rajat> but will add a comment to the patch
14:31:45 <whoami-rajat> so cores, please take a look as it's blocking work ^
14:31:59 <enriquetaso> whoami-rajat++
14:32:08 <enriquetaso> thanks, I'll update the commit msg after your review
14:32:23 <whoami-rajat> ack thanks
14:32:29 <whoami-rajat> moving on
14:32:31 <whoami-rajat> #topic Four issues with cinderlib
14:32:35 <whoami-rajat> rosmaita, that's you
14:32:39 <rosmaita> lot to talk about here, but mostly informative (i believe everything is close to a solution if we agree)
14:32:47 <rosmaita> so, the previous PTL seems to have left cinderlib in a heck of a state
14:32:52 <rosmaita> issue #1: cinderlib CI
14:32:59 <rosmaita> cinderlib is a cycle-trailing release, so master is still yoga development
14:33:05 <rosmaita> hasn't been a problem in the past, but zed doesn't support py36 anymore, while yoga does
14:33:15 <rosmaita> so we started hitting gate failures due to testing cinderlib (yoga development) with master (zed development) upper-constraints
14:33:23 <rosmaita> fixed by including overrides in .zuul.yaml
14:33:30 <rosmaita> part one (merged): https://review.opendev.org/c/openstack/cinderlib/+/845170
14:33:37 <rosmaita> part two: https://review.opendev.org/c/openstack/cinderlib/+/845272
14:33:44 <rosmaita> (job wasn't failing, but I think that was luck, and we should be consistent about these changes)
14:34:02 <rosmaita> so, just need reviews on part two ^^
14:34:10 <rosmaita> and the issue will be solved!
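[editor's note: as a rough illustration of the kind of .zuul.yaml override rosmaita describes for issue #1, a cycle-trailing project can pin its CI jobs to the yoga requirements branch. The job name below is hypothetical and this is not the contents of the two linked patches, just a sketch of the mechanism:]

```yaml
# Hypothetical sketch (see the linked patches for the real changes):
# check out stable/yoga of openstack/requirements so that cinderlib,
# whose master is still yoga development, is tested against yoga
# upper-constraints instead of master (zed) upper-constraints.
- job:
    name: cinderlib-functional
    parent: openstack-tox
    required-projects:
      - name: openstack/requirements
        override-checkout: stable/yoga
```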
14:34:17 <rosmaita> issue #2: unconstrained builds
14:34:22 <rosmaita> tox.ini is set up so that we install cinder and os-brick from source
14:34:29 <rosmaita> os-brick is in upper-constraints, so if we use upper-constraints, we can't install it
14:34:35 <rosmaita> (because the development version exceeds what's in upper-constraints)
14:34:44 <rosmaita> at the same time, if we don't constrain cinderlib for testing, we really don't know what library versions are actually being used
14:34:51 <rosmaita> so we really do want to constrain it
14:34:58 <rosmaita> proposed solution is https://review.opendev.org/c/openstack/cinderlib/+/845607
14:35:04 <rosmaita> (which needs reviews)
14:35:11 <rosmaita> it creates a local-upper-constraints.txt that doesn't include os-brick
14:35:16 <rosmaita> it gets generated as part of the tox install_command
14:35:24 <rosmaita> you can override the file that's used with the CINDERLIB_CONSTRAINTS_FILE environment var
14:35:30 <rosmaita> I decided not to hide local-upper-constraints.txt in the tox temp dir so you can see exactly what's being used
14:35:36 <rosmaita> that filename is added to .gitignore, so it shouldn't bother you at all
14:35:43 <rosmaita> the reason why we're not using the standard TOX_CONSTRAINTS_FILE environment var is to make this change work with zuul
14:35:50 <rosmaita> zuul always overrides TOX_CONSTRAINTS_FILE to use upper-constraints directly from its install of openstack/requirements
14:35:57 <rosmaita> (this makes it possible to use Depends-on when testing upper-constraints changes)
14:36:05 <rosmaita> and it overrides it aggressively, can't change this in our .zuul.yaml
14:36:11 <rosmaita> so we have to use CINDERLIB_CONSTRAINTS_FILE
14:36:16 <rosmaita> the downside is that we can't use Depends-on to test upper-constraints changes in the gate
14:36:22 <rosmaita> but i don't think this is a big deal because cinderlib is a cycle-trailing release
14:36:30 <rosmaita> so any problems will most likely be caught earlier by cinder
14:36:38 <rosmaita> and you can always test locally by downloading the patched u-c file and setting CINDERLIB_CONSTRAINTS_FILE
14:36:44 <rosmaita> (though you have to remember to remove os-brick from the patched file)
14:36:55 <rosmaita> so that's how the patch works, please review and leave questions etc:
14:37:00 <rosmaita> https://review.opendev.org/c/openstack/cinderlib/+/845607
14:37:30 <rosmaita> the nice thing is that that patch works for both tox and zuul
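[editor's note: a minimal sketch of the filtering step described above — strip the os-brick pin out of upper-constraints (since os-brick is installed from source) and point tox at the result via the CINDERLIB_CONSTRAINTS_FILE variable. The sample constraint entries are made up for illustration; see the linked patch for the actual install_command:]

```shell
# Fabricate a tiny upper-constraints file (illustration only;
# the real one comes from openstack/requirements).
cat > upper-constraints.txt <<'EOF'
os-brick===5.2.0
oslo.config===8.8.0
EOF

# Drop the os-brick pin so the from-source install isn't blocked,
# keeping every other constraint in force.
grep -v '^os-brick' upper-constraints.txt > local-upper-constraints.txt

# tox would then be pointed at the local file, e.g.:
#   CINDERLIB_CONSTRAINTS_FILE=local-upper-constraints.txt tox -e py3
cat local-upper-constraints.txt
```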
14:37:50 <rosmaita> issue #3: not running requirements-check job
14:37:55 <rosmaita> the standard requirements check template is not set up to handle trailing releases
14:38:04 <rosmaita> but we only have 3 requirements:
14:38:10 <rosmaita> https://opendev.org/openstack/cinderlib/src/branch/master/requirements.txt
14:38:25 <rosmaita> so, this file ^^ is interesting because it contains cinder
14:38:30 <rosmaita> it's needed for when someone installs cinderlib from pypi
14:38:37 <rosmaita> we don't actually use the requirements.txt in the cinderlib tox.ini
14:38:43 <rosmaita> because we install cinder and os-brick from source
14:38:49 <rosmaita> and importlib-metadata is used by cinder, so we get it that way
14:38:59 <rosmaita> so my proposal is that the PTL just check manually to make sure that requirements.txt is correct
14:39:09 <rosmaita> os-brick and importlib-metadata are in global-requirements
14:39:14 <rosmaita> but ... cinder is not
14:39:19 <rosmaita> and it's not allowed in there
14:39:30 <rosmaita> for info about this if you care, see this discussion in #openstack-requirements:
14:39:37 <rosmaita> https://meetings.opendev.org/irclogs/%23openstack-requirements/%23openstack-requirements.2022-06-08.log.html#t2022-06-08T15:36:55
14:39:44 <rosmaita> if cinderlib starts using more requirements, we can revisit this
14:39:50 <rosmaita> but for now, I propose we do nothing
14:39:59 <rosmaita> (just wanted to make sure everyone understands what's going on)
14:40:11 <rosmaita> ok, finally
14:40:18 <rosmaita> issue #4: not using released versions of cinder, os-brick in CI
14:40:25 <rosmaita> we probably had this discussion when cinderlib CI was first set up for the train release, but I don't remember the reasons
14:40:31 <rosmaita> so the issue is that all our CI is using cinder and os-brick source, so possibly using unreleased changes
14:40:39 <rosmaita> we could add more jobs
14:40:46 <rosmaita> or, we could just make sure that when we release yoga cinderlib
14:40:53 <rosmaita> we also release yoga cinder and os-brick (if they contain any unreleased changes)
14:41:00 <rosmaita> then we'll know that cinderlib has not been relying on any unreleased code to pass its CI
14:41:26 <rosmaita> that's it ... so to summarize
14:41:33 <rosmaita> issue #1 pretty much solved
14:41:45 <rosmaita> issue #2 probably solved?
14:42:00 <rosmaita> issues #3, #4 ... i propose we do nothing
14:42:18 <rosmaita> sorry that was a lot of text to dump in here
14:42:26 <rosmaita> any questions?
14:42:32 <tosky> (I voted -1 on the second patch of #1 but it's either easily solvable or I'm plainly wrong)
14:42:34 <geguileo> rosmaita: regarding the cinderlib requirements, it should *never* have any more than what we currently have
14:42:49 <geguileo> so it should be ok leaving it as it is (like you propose)
14:42:56 <rosmaita> works for me!
14:42:57 <whoami-rajat> for #3, i think it's OK to manually check if right versions of cinder and os-brick are mentioned in requirements.txt unless someone thinks otherwise
14:43:35 <rosmaita> yeah, the alternative is to hack the requirements file like i did with upper-constraints ... don't think it's worth it, though
14:43:56 <geguileo> rosmaita: I don't see the issue for #4
14:44:34 <whoami-rajat> for 3 requirements? don't think so and doesn't add much burden on me as well so no problem at all
14:44:50 <geguileo> rosmaita: when both are working on the same release it makes sense to run it against master, since we want them to keep in sync and not find surprises
14:45:25 <geguileo> rosmaita: once os-brick releases we pin it to in cinderlib tox.ini to the stable branch, and the same thing when cinder releases
14:45:40 <geguileo> and then once cinderlib releases we can unpin those 2
14:45:50 <rosmaita> right, it's just that it could be possible that some stuff has been merged to cinder or os-brick stable/yoga that we are testing with
14:45:57 <geguileo> I believe that's mostly what we've been doing
14:46:19 <rosmaita> and if someone installs cinderlib from pypi, they get released versions of cinder, os-brick
14:46:37 <geguileo> true
14:46:44 <rosmaita> it's pretty unlikely
14:46:57 <geguileo> I don't anticipate many issues there though
14:46:59 <rosmaita> but to be safe, we can just release new cinder and os-brick at the same time
14:47:09 <rosmaita> yeah, i am just being over-cautious
14:47:31 <geguileo> yeah, I just feel bad giving extra work with those 2 additional releases
14:47:43 <whoami-rajat> rosmaita, do you mean releasing stable/yoga of cinder and os-brick?
14:47:47 <rosmaita> releases are cheap ... testing is hard!
14:47:50 <rosmaita> whoami-rajat: yes, exactly
14:47:56 <jungleboyj> :-)
14:48:25 <rosmaita> by the way, i meant to say something about cinderlib for people here who are unfamiliar with it
14:48:26 <whoami-rajat> we can do it but if we think we're good with what we currently have then maybe not required
14:49:34 <whoami-rajat> ok that doesn't seem like a big issue to discuss right now, i think the focus should be more on #1 and #2
14:49:47 <rosmaita> cinderlib is used by Ember-CSI which is a container storage interface for kubernetes
14:49:50 <whoami-rajat> and thanks rosmaita for finding out the issues and providing a verbose summary
14:50:03 <rosmaita> so you can use the cinder drivers without having to run cinder as a service
14:50:17 <rosmaita> yeah, verbosity is my middle name
14:50:25 <rosmaita> that's all from me
14:50:30 <whoami-rajat> lol
14:50:51 <whoami-rajat> great, so i guess that's all we had for topics
14:50:56 <whoami-rajat> let's move to open discussion
14:51:00 <simondodsley> Anyone else seeing tempest test `tempest.api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap` failing with `ValueError: Multiple pingable or sshable servers not supported at this stage`? Seems to be ever since https://review.opendev.org/c/openstack/tempest/+/842921 merged
14:51:12 <whoami-rajat> #topic open discussion
14:51:45 <aneeeshp1> Hi, I am representing Fungible (https://www.fungible.com/product/nvme-over-tcp-fungible-storage-cluster/) and attending this meeting for the first time.
14:51:54 <aneeeshp1> Just wanted to talk about a new driver for Fungible storage backend.
14:52:02 <aneeeshp1> I have submitted a blueprint for the new driver (https://blueprints.launchpad.net/cinder/+spec/fungible-volume-driver)
14:52:09 <aneeeshp1> CI setup in progress. Expected to be ready by end of this month. Planning to submit a patch once the CI is ready.
14:52:17 <aneeeshp1> Can this be targeted for Zed?
14:52:27 <rosmaita> aneeeshp1: welcome!
14:52:42 <whoami-rajat> aneeeshp1, Welcome!
14:52:44 <geguileo> aneeeshp1: welcome to the cinder meetings!
14:52:55 <aneeeshp1> Thank you!
14:52:55 <jungleboyj> Welcome.  :-)
14:52:59 <fabiooliveira> welcome \o/
14:53:05 <whoami-rajat> aneeeshp1, since you've filed a blueprint, you're already on the right track. one question, do you have a patch up for the new driver?
14:53:32 <rosmaita> aneeeshp1:  https://releases.openstack.org/zed/schedule.html#cinder-new-driver-merge-deadline
14:53:36 <whoami-rajat> ah you already said it will be pushed once CI is ready, my bad
14:53:36 <aneeeshp1> whoami-rajat: not yet. Will be ready by end of this month.
14:54:38 <whoami-rajat> aneeeshp1, so currently our deadline for merging drivers is 15th July, which may be enough time to review the change, but it would be good if you can try to get it done earlier
14:54:47 <whoami-rajat> #link https://releases.openstack.org/zed/schedule.html#z-cinder-driver-deadline
14:55:15 <rosmaita> how many drivers are proposed at this point? i am losing count
14:55:17 <whoami-rajat> ah, rosmaita shared this already, I'm skipping some messages ...
14:55:30 <whoami-rajat> more than we can review?
14:55:32 <geguileo> aneeeshp1: you can push the patch before upstream CI is ready
14:55:36 <whoami-rajat> 6-7 probably
14:55:38 <aneeeshp1> Can the patch review start before the CI is ready. I might be able to submit the patch earlier, but CI will take some time (end of the month)
14:55:46 <geguileo> yes it can
14:55:59 <aneeeshp1> Thank you geguileo. I will do that
14:56:09 <whoami-rajat> yes, it won't be merged unless the CI is reporting but that doesn't block reviewing the driver patch
14:56:19 <aneeeshp1> Okay thanks
14:56:28 <aneeeshp1> I will create patch ASAP.
14:56:50 <whoami-rajat> great, thanks for your contribution
14:56:59 <aneeeshp1> thank you all
14:57:03 <rosmaita> looks like 8 new drivers
14:57:07 <whoami-rajat> make sure to add it to the work items section in your blueprint
14:57:13 <aneeeshp1> sure
14:57:41 <whoami-rajat> rosmaita, wow, maybe the highest I've seen in a cycle
14:57:51 <jungleboyj> Yes.
14:58:07 <jungleboyj> Since the old days when we were the hot new thing in town.
14:58:15 <whoami-rajat> I will create etherpad for spec and drivers to prioritize them
14:58:21 <enriquetaso> cool
14:58:29 <Roamer`> hi, so real quick (I hope... unless there are any objections and maybe I should have put this in the schedule)... so you may remember that in the May 25th video meeting I brought up a problem with the StorPool driver keeping Glance images in a different Cinder pool (underlying StorPool template) than the volumes the users wish to create, and there seemed to be some consensus that instead of
14:58:35 <Roamer`> every driver reimplementing half of the workflow for creating an image out of a volume, it might be easier to add a driver capability "I know how to clone volumes efficiently even into a different pool"... so today I filed https://blueprints.launchpad.net/cinder/+spec/clone-across-pools and what do people think about the name of the capability?
14:59:09 <Roamer`> I have already started working on it (we have to do something like this at a customer installation and this option, a driver capability, will be *much* cleaner than what we have now), I guess I will have something ready for review in a day or two
14:59:11 <whoami-rajat> also one announcement i forgot ... we have the festival of XS reviews this Friday, but i will be sending a reminder to the ML anyway
15:00:27 <enriquetaso> oh, i won't attend this XS review festival because i'm on AR Holiday :(
15:00:39 <rosmaita> wow, third friday of the month has arrived really fast!
15:01:07 <jungleboyj> rosmaita:  Yeah, how did that happen already?  :-)
15:01:08 <whoami-rajat> Roamer`, thanks for providing the update
15:01:23 <whoami-rajat> enriquetaso, ah shoot, but no problem
15:01:44 <whoami-rajat> rosmaita, yeah really, i thought it was the second one this Friday
15:01:57 <whoami-rajat> we've passed the time limit for the meeting
15:02:03 <whoami-rajat> so let's wrap it up
15:02:08 <whoami-rajat> thanks everyone
15:02:09 <whoami-rajat> #endmeeting