14:00:04 <whoami-rajat> #startmeeting cinder
14:00:04 <opendevmeet> Meeting started Wed May 18 14:00:04 2022 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:04 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:04 <opendevmeet> The meeting name has been set to 'cinder'
14:00:11 <whoami-rajat> #topic roll call
14:00:17 <rosmaita> o/
14:00:22 <abishop> o/
14:00:29 <TusharTgite> Hi
14:00:30 <eharney> hi
14:01:02 <mnaser> o/
14:01:23 <ricolin> o/
14:01:23 <caiquemello[m]> o/
14:02:06 <fabiooliveira> o/
14:02:17 <whoami-rajat> good turnout, let's wait a few more minutes for others to join
14:02:57 <jungleboyj> o/
14:03:10 <sfernand> hi
14:03:18 <whoami-rajat> guess that's all, let's get started
14:03:26 <whoami-rajat> #topic announcements
14:03:56 <whoami-rajat> there are a bunch of announcements so i won't explain anything in depth, but there are links if people want to know more
14:04:04 <whoami-rajat> Updates from TC Meeting
14:04:10 <whoami-rajat> #link http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028541.html
14:04:43 <whoami-rajat> most of the announcements are from the TC updates which i thought would be good for awareness of the cinder team
14:04:47 <whoami-rajat> first, the FIPS goal is now a selected community-wide goal
14:04:59 <whoami-rajat> #link https://governance.openstack.org/tc/goals/selected/fips.html
14:05:11 <whoami-rajat> second, Release naming change
14:05:18 <whoami-rajat> #link https://review.opendev.org/c/openstack/governance/+/841800
14:05:45 <whoami-rajat> We are moving away from release names and will use a release number, which is the year of release followed by 1 or 2 (6-month release model)
14:06:00 <whoami-rajat> Eg: 2022.2 for AA, 2023.1 for BB
14:06:20 <whoami-rajat> Release naming will be handed over to the OpenInfra Foundation and will be used for marketing purposes (the TC won't be involved anymore)
14:06:35 <whoami-rajat> ^ that's what is written in the proposal and TC updates
14:06:51 <whoami-rajat> next, SLURP (Skip Level Upgrade Release Process)
14:06:57 <whoami-rajat> #link https://review.opendev.org/c/openstack/governance/+/840354
14:07:07 <whoami-rajat> As per the OpenStack legal team, it is not OK to use tick-tock naming, so we will be shifting to SLURP (previously tick) and not-SLURP (previously tock) names
14:07:46 <whoami-rajat> again, this is not a major change but you can visit the review of the change proposed to know more
14:08:05 <whoami-rajat> next, release note handling in the new release cadence (SLURP now)
14:08:17 <whoami-rajat> Brian has a patch up proposing 3 approaches for how to handle it
14:08:25 <whoami-rajat> the details of the 3 approaches are in the commit msg
14:08:31 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/840996
14:08:50 <whoami-rajat> i think the TC will settle on one approach and that's what we will follow with the new SLURP release model
14:09:15 <whoami-rajat> last announcement is regarding cinderlib release
14:09:22 <whoami-rajat> Cycle trailing release for cinderlib (yoga)
14:09:29 <whoami-rajat> #link http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028522.html
14:09:59 <whoami-rajat> The deadline is 23rd June, which is far away, but i already discussed this once with geguileor
14:10:09 <whoami-rajat> and the release team sent a mail regarding it, so it's good to mention
14:10:37 <whoami-rajat> that's all the announcements from my side, anything else from anyone?
14:11:22 <tobias-urdin> forgot to wave o/ everybody
14:11:33 <whoami-rajat> hey
14:11:38 <whoami-rajat> ok, we can move on to topics then
14:11:43 <whoami-rajat> #topic Add image_conversion_disable config
14:11:52 <whoami-rajat> rosmaita, mnaser ricolin that's you
14:12:03 <mnaser> henlo
14:12:17 <mnaser> so the main reason behind this is that we have environments where we allow both qcow2 and raw images being uploaded
14:12:34 <mnaser> reason being that if a user uses qcow2, they can use local storage on the hvs and it's easier to use qcow2
14:12:54 <mnaser> however, if they use qcow2 + volume, then bad times for your control plane
14:13:13 <mnaser> which is why we would like to allow disabling conversion for these environments
14:13:24 <mnaser> and the user can get a note saying they are using a bad image and they need to upload the right image
14:13:41 <mnaser> aka a raw format image in order to do bfv
14:14:17 <rosmaita> just to have it in the meeting log, here's the links to what we're talking about:
14:14:17 <rosmaita> #link https://review.opendev.org/c/openstack/cinder/+/839793
14:14:17 <rosmaita> #link https://launchpad.net/bugs/1970115
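For context, a minimal sketch of how the proposed option might look in cinder.conf, assuming the option name matches the patch topic (illustrative only, not necessarily the merged interface):

    [DEFAULT]
    # Reject requests that would need image/volume format conversion instead
    # of converting; the patch keeps the default at False, so existing
    # deployments are unaffected unless an operator opts in.
    image_conversion_disable = true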
14:14:28 <whoami-rajat> thanks rosmaita
14:14:42 <whoami-rajat> i went through the replies in the patch and i think this is a good use case
14:15:06 <whoami-rajat> currently what we're doing is rejecting a format that we think is bad (not matching the volume) and asking users to manually upload a new image satisfying the criteria
14:15:29 <whoami-rajat> i was thinking if we could do something in glance to automate this to avoid the manual work
14:15:40 <mnaser> yep, that's the idea/goal.  it also closes a potential way for an end user to effectively dos a cloud's control plane
14:15:52 <rosmaita> yeah, i don't know that we can push it to glance
14:16:09 <mnaser> my "bigger" long term goal would be to optimize the conversion to be backend specific
14:16:14 <rosmaita> you can restrict what disk_format glance will allow, but mnaser's use case is that you want multiple formats
14:16:18 <mnaser> for example, qemu-img can write directly to the ceph cluster
14:16:22 <ricolin> consider it as a two step plan?:)
14:16:26 <rosmaita> you just don't want to convert them
14:16:50 <mnaser> so instead of converting to local disk, then uploading to ceph -- you can convert straight to ceph, which could potentially allow us to re-enable this
14:17:00 <mnaser> since the disk i/o is really what hurts our control planes really bad
14:17:17 <mnaser> but that is a _faaaar_ bigger project, and this is a good interim option
14:17:20 <mnaser> (imho)
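As an illustration of the longer-term idea mnaser describes, qemu-img (when built with RBD support) can write a converted image directly to Ceph instead of staging a raw copy on local disk; the pool and image names here are placeholders:

    # convert a qcow2 image straight into an RBD image in raw format,
    # skipping the local raw copy and the extra disk I/O it causes
    qemu-img convert -f qcow2 -O raw image.qcow2 rbd:volumes/volume-<uuid>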
14:18:18 <whoami-rajat> ack, i wasn't aware of the idea of handling this better in the future (hence this discussion is useful)
14:18:40 <rosmaita> so as far as end users go, they'll get a 400 if they upload a volume as an image requesting a format that would require conversion
14:18:58 <rosmaita> on the download side, we have create-volume and reimage-volume
14:19:18 <rosmaita> for both of those, we don't know the volume format until after the REST API has returned a 202
14:19:28 <mnaser> #link https://bugs.launchpad.net/cinder/+bug/1970114
14:19:38 <mnaser> ^ this is the 'bug' for the more 'efficient' way of doing things
14:19:39 <rosmaita> but the patch adds user messages for when the volume goes to error
14:22:53 <whoami-rajat> was just going through the bug report, looks like point 2 is something we should discuss at the midcycle
14:23:19 <rosmaita> i agree
14:23:27 <whoami-rajat> but i also remember a discussion that concurrent requests wait for the image cache to be created by the first one and then use the cache
14:23:33 <whoami-rajat> or maybe not
14:23:59 <abishop> yes, it does wait for the cache to be seeded
14:24:01 <whoami-rajat> so I don't have much objections given this is a short term solution/workaround for a problem that is being worked upon
14:24:05 <rosmaita> yeah, i thought we had addressed that, not sure if the conversion means it's handled differently
14:25:06 <whoami-rajat> ok, then point 2 in the bug should not be true since the image cache is the first volume entry we create (after conversion) and we clone from it
14:26:43 <rosmaita> i guess my feeling is that it's a good idea not to kill your cinder-volume nodes, and as long as you communicate to your users what their expectations should be about volume-image format relations, this is ok
14:27:55 <whoami-rajat> agreed, and since the default is False (the current behavior) this won't harm anyone not opting in
14:30:03 <whoami-rajat> any other concerns or observations on this?
14:30:59 <rosmaita> i guess for the record, for the upload use case you could push this to glance using image import, but then you could kill the glance node instead of cinder-volume
14:31:13 <rosmaita> and we'd have to modify cinder to use image import
14:31:29 <rosmaita> on the download side, i don't think this could be handed off to glance
14:32:06 <rosmaita> because the use case is to have multiple formats stored in glance, and only a subset of those would be accepted by cinder for create-volume or reimage-volume
14:32:25 <mnaser> rosmaita: could be an idea for glance to have multiple types the same way it has multiple backends
14:32:27 <rosmaita> so the end user could always pick the wrong format image by mistake
14:32:54 <mnaser> so then cinder requests a specific format and glance provides it, but yeah, that's also a 'perfect world' scenario
14:33:00 <whoami-rajat> that's a good motivation to adopt the import plugin but we might not have enough bandwidth to work on it
14:33:30 <rosmaita> mnaser: that was what Glare was for (remember that?)
14:33:41 <mnaser> openstack lore :)
14:33:56 <mnaser> glare was a bit ahead of its time
14:34:04 <mnaser> now we have OCI
14:35:53 <whoami-rajat> maybe I'm wrong, but isn't one use case of a glance image carrying multiple locations to have different formats of the same image linked to a single image?
14:36:10 <whoami-rajat> or not, maybe it's more of a redundancy feature
14:36:17 <mnaser> multistore is multiple locations for the same format for the same image
14:36:18 <rosmaita> whoami-rajat: well, that was what Glare was for
14:36:32 <mnaser> i.e.: edge cloud where you have 1 image identifier, but it can be stored in a ceph cluster in each site
14:36:33 <rosmaita> glance is supposed to work with immutable images
14:36:53 <mnaser> so there would be multiple locations, but all hosting the same exact data
14:37:07 <whoami-rajat> ack, thanks for the clarification
14:37:26 <rosmaita> ok, as far as the patch goes
14:37:59 <rosmaita> i would feel better with functional tests, but not sure we can do those with something that involves the image service
14:38:02 <tobias-urdin> which fwiw is a real world issue when storing for example glance images in a ceph cluster and then needing to copy that to another ceph cluster in another availability-zone
14:38:12 <tobias-urdin> sorry kinda semi-unrelated to cinder tho
14:39:08 <rosmaita> whoami-rajat: i am thinking that we don't have tempest coverage for reimage yet?
14:40:09 <whoami-rajat> rosmaita, I've proposed a patch in tempest which I'm working on
14:40:29 <whoami-rajat> it's failing for an unknown reason as of now; I've already tested the flow manually multiple times and it works
14:40:35 <rosmaita> mnaser: are you already running a version of this patch?
14:40:37 <whoami-rajat> but i will try to get that fixed and merged in Zed
14:41:08 <rosmaita> whoami-rajat: yeah, you definitely need more stuff to work on :)
14:41:14 <mnaser> rosmaita: we are not, no
14:41:23 <mnaser> ricolin has been putting in most of the work
14:41:51 <rosmaita> ok, i think the unit test coverage is good, i'm just worried about it blowing up in an unexpected way
14:42:00 <ricolin> the test was only run against my local test environment
14:42:05 <rosmaita> for no particular reason (my being worried, i mean)
14:42:52 <rosmaita> well, it will ship with the option "off"
14:43:09 <rosmaita> so that makes me feel a bit better
14:43:11 <whoami-rajat> would be good to get assurance that it runs in a functional cluster (but maybe that's a lot to expect)
14:43:24 <ricolin> rosmaita: yeah, so it should not make a difference to current use cases
14:44:00 <mnaser> If the team is +2 on this
14:44:10 <mnaser> Then we can run it as a patch on top of your env and see if it blows up
14:44:24 <mnaser> Since we were gonna cherry pick it back internally anyways
14:44:24 <rosmaita> yeah i think we have enough test coverage that we can be confident of no regression when the option is "off"
14:44:40 <mnaser> That way we don’t have to land follow ups out of the box
14:45:58 <rosmaita> so as far as the discussion has gone, it sounds like there's no objection to the concept?
14:46:14 <rosmaita> eharney: you should like this patch, it adds a new user message
14:46:20 <eharney> i like the concept
14:46:26 <whoami-rajat> not from my side
14:46:53 <whoami-rajat> looks like the major concerns on this are resolved and we are short on time so let's move on to the next topic
14:47:11 <rosmaita> thanks mnaser and ricolin
14:47:19 <whoami-rajat> thanks mnaser ricolin and rosmaita  for the discussion
14:47:24 <whoami-rajat> #topic Getting pylint job into shape
14:47:25 <ricolin> thanks rosmaita whoami-rajat
14:47:33 <whoami-rajat> eharney, that's you
14:47:46 <eharney> well, i looked at our pylint results, and found broken code with it
14:48:06 <eharney> i think it would be useful to get this integrated back into the review process, but the job is kind of noisy
14:48:19 <eharney> so i'm submitting patches to clean up all the code it complains about
14:48:30 <eharney> review them and look at the pylint output, especially on large patches
14:48:37 <eharney> https://review.opendev.org/c/openstack/cinder/+/842149
14:48:38 <eharney> that's about it
14:49:04 <eharney> (there's more to clean up yet, but it's a start)
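For reviewers who want to reproduce the job output locally, a rough sketch (the tox environment name is an assumption; invoking pylint directly also works if it is installed):

    # run pylint over a cinder subtree; the module path is illustrative
    python -m pylint cinder/volume/
    # or, if the repo provides a pylint tox environment:
    # tox -e pylint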
14:49:09 <rosmaita> eharney: is it the pylint job or the mypy job that doesn't post html output?
14:49:20 <rosmaita> (i forgot to follow up on that)
14:49:26 <eharney> mypy
14:49:30 <rosmaita> ok
14:49:37 <eharney> the pylint job shows a bunch of red errors: https://zuul.opendev.org/t/openstack/build/ccc4a1165ab94f689984f49b417a24ed
14:49:52 <whoami-rajat> cool, thanks eharney for keeping an eye on the pylint job
14:51:04 <whoami-rajat> moving on
14:51:07 <whoami-rajat> #topic Open discussion
14:51:39 <whoami-rajat> we have a bunch of review requests on the etherpad (looks like it's becoming a trend for open discussion)
14:53:22 <whoami-rajat> just a mention that we won't have a bug squad meeting today since Sofia is not around but she has posted a mail covering the bugs this week
14:54:18 <whoami-rajat> anything else for open discussion?
14:55:53 <whoami-rajat> looks like everyone is busy reviewing the patches so we can end 5 minutes early
14:55:58 <whoami-rajat> Thanks everyone!
14:56:02 <whoami-rajat> #endmeeting