14:00:00 <whoami-rajat> #startmeeting cinder
14:00:00 <opendevmeet> Meeting started Wed Jan 25 14:00:00 2023 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:00 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:00 <opendevmeet> The meeting name has been set to 'cinder'
14:00:03 <whoami-rajat> #topic roll call
14:00:53 <simondodsley> o/
14:01:06 <tobias-urdin> o/
14:01:21 <rosmaita> o/
14:01:41 <eharney> hi
14:01:47 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-antelope-meetings
14:02:03 <nahimsouza[m]> o/
14:02:48 <happystacker> Hello!
14:03:16 <keerthivasansuresh> o/
14:03:16 <enriquetaso> hi
14:03:30 <geguileo> hi!
14:03:41 <whoami-rajat> hello everyone
14:03:44 <abdi> Hi
14:03:53 <whoami-rajat> let's get started
14:03:57 <felipe_rodrigues> hi
14:03:59 <thiagoalvoravel> o/
14:04:04 <gksk> HI
14:04:07 <ganso> o/
14:04:29 <whoami-rajat> #topic announcements
14:04:35 <whoami-rajat> Security issue with VMDK format
14:04:40 <whoami-rajat> #link https://security.openstack.org/ossa/OSSA-2023-002.html
14:04:54 <whoami-rajat> rosmaita, will cover this since he's been working on it
14:05:02 <rosmaita> sure
14:05:43 <rosmaita> i see Rajat already posted the link to the patches
14:05:59 <rosmaita> we are contractually obligated to fix xena, yoga, and zed
14:06:05 <happystacker> https://review.opendev.org/q/I3c60ee4c0795aadf03108ed9b5a46ecd116894af
14:06:10 <rosmaita> the rest are "courtesy" patches
14:06:30 <rosmaita> because people are still using those branches even though we don't release from them any more
14:06:36 <rosmaita> i stopped at train though
14:07:11 <happystacker> makes sense not to go beyond
14:07:13 <rosmaita> anyway, as we get to the older branches, it is harder to actually run tests because of old dependencies
14:07:14 <whoami-rajat> #link https://review.opendev.org/q/I3c60ee4c0795aadf03108ed9b5a46ecd116894af
14:07:20 <happystacker> Train is pretty old
14:07:47 <rosmaita> i had to mess with the tox.ini and requirements.txt and upper constraints to get things to run locally
14:08:03 <rosmaita> just mentioning that in case you are in a similar situation
14:08:47 <rosmaita> one thing is that somewhere around ussuri, you need to set basepython=python3.6 to get pep8 and the docs jobs to work
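For context, the tox.ini tweak rosmaita mentions looks roughly like the following — a hypothetical sketch, since the exact sections and versions vary per branch:

```ini
; tox.ini on an older EM branch (illustrative only)
[testenv:pep8]
; force an interpreter old enough for the branch's dependencies
basepython = python3.6

[testenv:docs]
basepython = python3.6
```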
14:08:58 <rosmaita> not sure if it's worth backporting that though
14:09:26 <rosmaita> i think the gate gets around this because they are using older distros when they run the zuul jobs
14:10:20 <rosmaita> there is one change in the patch that may affect some drivers
14:10:38 <simondodsley> happystacker: i have customers stuck on Queens as they are using Mirantis OpenStack... it makes me cry
14:11:04 <roquej> wow, that's terrible! it would make me cry too ;-)
14:11:06 <rosmaita> simondodsley: someone attached a queens backport to the bug, though i haven't looked at it
14:11:56 <rosmaita> i did write more detailed than usual commit messages about conflicts during cherry-pick to explain what changes i made
14:12:14 <whoami-rajat> i think we EOLed queens and deleted the branch?
14:12:14 <rosmaita> so if you look at train, you get the whole story
14:12:38 <rosmaita> whoami-rajat: you are correct
14:13:10 <rosmaita> yeah, if you do need to work with queens, you can check out the queens-eol tag and work from there
14:13:55 <simondodsley> rosmaita: story... that's a novel :)
14:14:01 <rosmaita> :D
14:14:10 <rosmaita> ok, as far as the drivers go ...
14:14:32 <rosmaita> some drivers call cinder.image.image_utils.convert_image() directly
14:15:03 <rosmaita> that function now checks to verify that the image data matches the image format that the caller says it is
14:15:21 <rosmaita> it does this by calling out to 'qemu-img info'
14:16:22 <rosmaita> anyway, the main point is that you can pass in an image_id and the qemu-img info stuff
14:16:26 <rosmaita> but that's optional
14:16:47 <rosmaita> if you don't pass an image_id, if a problem is encountered, the message will say 'internal image'
14:16:55 <rosmaita> which for some drivers it is
14:17:14 <rosmaita> but there are some drivers who do have a glance image id when they make the call
14:17:29 <rosmaita> so the log/error message will be slightly misleading, but only if there's a problem
14:17:44 <rosmaita> anyway, i think it affects 3 or so drivers
14:18:12 <rosmaita> i'll put up patches for master for those drivers, and rely on the driver maintainers to do testing and backports
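The format check rosmaita describes can be sketched roughly as below. This is an illustration only, not the actual cinder.image.image_utils code; the helper and message wording are hypothetical, but it shows why the error says "internal image" when no image_id is passed:

```python
import json
import subprocess


def qemu_img_info(path):
    """Run 'qemu-img info' and return the parsed JSON result.

    Hypothetical wrapper; the real cinder image_utils helper differs.
    """
    out = subprocess.check_output(
        ["qemu-img", "info", "--output=json", path])
    return json.loads(out)


def format_mismatch_message(info, expected_format, image_id=None):
    """Return an error message if the format reported by qemu-img does
    not match the claimed one, else None."""
    actual = info.get("format")
    if actual == expected_format:
        return None
    # Without an image_id the message can only say "internal image",
    # which is slightly misleading for drivers that do have a glance
    # image id when they call convert_image().
    source = image_id if image_id is not None else "internal image"
    return ("image %s claims format %s but qemu-img reports %s"
            % (source, expected_format, actual))
```

A driver calling convert_image() directly would hit this check on the path it passes in, which is why passing the glance image id through (when the driver has one) makes the eventual log message more useful.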
14:18:17 <happystacker> It'd be worth doing a quick test on our drivers as well
14:18:31 <rosmaita> happystacker: that's a good point
14:18:50 <enriquetaso> i'll check the generic-nfs scenario because it's affected by this
14:18:51 <rosmaita> so the VMDK problem is like the backing_file issue with qcow2 images
14:19:25 <rosmaita> right now, we do check for backing_file reported by qemu-img info
14:20:02 <rosmaita> this VMDK thing happened because the "backing file" (called a "named extent" by vmware) doesn't show up in that field
14:20:18 <rosmaita> it shows up in the format specific data in the qemu-img info response
14:20:31 <rosmaita> (if you look at the unit tests on the patch you can see an example)
14:20:58 <rosmaita> anyway, my point is that I don't know that we have caught this issue for all possible image formats we support
14:21:14 <rosmaita> (which depends on what the glance disk_formats config option is)
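The kind of inspection rosmaita is describing — looking past the top-level backing-filename field into the format-specific data where VMDK "named extents" hide — can be sketched like this. A rough illustration over the parsed `qemu-img info --output=json` dict, not the check from the actual patch:

```python
def backing_file_refs(info):
    """Collect backing-file references from a parsed
    'qemu-img info --output=json' result.

    Includes VMDK extents ("named extents" in vmware terms), which do
    NOT appear in the top-level backing-filename field. Sketch only;
    the real fix in cinder differs.
    """
    refs = []
    # the classic qcow2-style backing file check
    if info.get("backing-filename"):
        refs.append(info["backing-filename"])
    # VMDK hides its extent files in the format-specific data
    data = info.get("format-specific", {}).get("data", {})
    for extent in data.get("extents", []):
        filename = extent.get("filename")
        if filename:
            refs.append(filename)
    return refs
```

Any non-empty result here for a user-supplied image would be grounds to reject it, since a crafted extent filename is exactly how the VMDK exploit points at a host file.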
14:21:25 <happystacker> complex to handle
14:21:42 <rosmaita> so if you deal with other format images, please look for this kind of exploit
14:22:01 <rosmaita> because it doesn't matter if your cloud doesn't support format X on the hypervisors
14:22:16 <rosmaita> it's still possible for someone to put a X format image into glance, and then create a volume from it
14:22:33 <rosmaita> at which point cinder will courteously convert the format to raw to write it to disk
14:23:01 <rosmaita> which is nice in general, but can be bad in some cases!
14:23:25 <rosmaita> ok, i will shut up now ... ping me if you have any questions
14:23:30 <happystacker> I have a question around it but we talk in our bug review
14:23:38 <rosmaita> sure
14:23:44 <rosmaita> one thing i should say , though
14:24:06 <rosmaita> if you do discover this issue with another format, please file it as a security bug in Launchpad
14:24:18 <rosmaita> that will give us time to fix it before it's announced
14:24:38 <rosmaita> though the bad guys will already know about it, i guess
14:24:45 <rosmaita> (at least some of them)
14:24:52 <happystacker> probably...
14:24:54 <rosmaita> ok, now i will shut up for real
14:25:04 <whoami-rajat> thanks rosmaita for working on the fix and the detailed summary!
14:25:29 <rosmaita> np, it didn't look too bad, i just didn't anticipate 50 backports
14:25:40 <whoami-rajat> current status is we're waiting for xena patch to merge and then we will release all active stable branches i.e. xena, yoga and zed
14:25:50 <rosmaita> and the tempest and grenade jobs in xena being so uncooperative
14:26:15 <whoami-rajat> yeah, gate fails for important changes or when a deadline is around
14:26:53 <whoami-rajat> anyway, thanks again and let's move to the next announcement
14:27:01 <whoami-rajat> Driver status
14:27:17 <whoami-rajat> 1) HPE XP
14:27:26 <whoami-rajat> #link  https://review.opendev.org/c/openstack/cinder/+/815582
14:27:42 <whoami-rajat> we've approved the HPE driver but i see it is dependent on one hitachi feature
14:27:48 <whoami-rajat> this https://review.opendev.org/c/openstack/cinder/+/846977
14:27:52 <whoami-rajat> is abdi around?
14:27:59 <abdi> Yes
14:28:03 <whoami-rajat> hey
14:28:09 <abdi> Hi
14:28:27 <whoami-rajat> so do we need that hitachi feature for this driver or can it be done later? later = M3, which is the driver feature merge deadline
14:29:53 <abdi> I believe that feature can be later but I will double check today and get back to you.
14:30:48 <whoami-rajat> sure, since it is blocking the driver merge, would be good if it can be removed as a dependency
14:31:12 <whoami-rajat> if not needed, rebase the driver on master (removing this dependency) and let me know, i will review again
14:31:16 <abdi> Ok.  I will have that addressed today.
14:31:24 <whoami-rajat> great thanks
14:31:30 <abdi> Thank you.
14:31:48 <whoami-rajat> np
14:31:54 <whoami-rajat> 2) Fungible NVMe TCP
14:32:00 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/849143
14:32:04 <whoami-rajat> this driver is merged
14:32:12 <whoami-rajat> 3) Lustre
14:32:19 <whoami-rajat> #link https://review.opendev.org/q/topic:bp%252Fadd-lustre-driver
14:32:27 <whoami-rajat> we've decided to move this to next cycle
14:32:34 <whoami-rajat> since it doesn't have a working CI yet
14:32:43 <whoami-rajat> i will add it as a review comment on the patch
14:32:54 <whoami-rajat> 4) NetApp NVME TCP
14:33:03 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/870004
14:33:14 <whoami-rajat> I've reviewed it and it mostly resembles the iSCSI driver
14:33:34 <whoami-rajat> it's missing a releasenote currently and a code section which says it should've been removed in Mitaka
14:33:55 <whoami-rajat> i checked the same against netapp iSCSI driver so i think it needs to be removed from both drivers
14:34:01 <whoami-rajat> i.e. iSCSI and new NVMe TCP
14:34:17 <felipe_rodrigues> thank you whoami-rajat for the review. I saw the comments, I am fixing the issues.. submitting a new patch very soon
14:34:41 <whoami-rajat> great, thanks felipe_rodrigues
14:34:58 <whoami-rajat> that's all for the drivers
14:35:04 <whoami-rajat> next, Upcoming release deadlines
14:35:13 <whoami-rajat> os-brick (February 9)
14:35:13 <whoami-rajat> cinderclient (February 16)
14:35:30 <whoami-rajat> i don't see anything major in both projects this cycle but good to get some open patches merged
14:35:39 <whoami-rajat> we still have time so things are looking good as of now
14:36:03 <whoami-rajat> next, 2023.2 (Bobcat) schedule
14:36:17 <whoami-rajat> i don't remember if i announced it but the next release name is Bobcat
14:36:32 <whoami-rajat> #link https://review.opendev.org/c/openstack/releases/+/869976
14:36:37 <whoami-rajat> the schedule is proposed
14:36:46 <whoami-rajat> this is the HTML render
14:36:48 <whoami-rajat> #link https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_280/869976/1/check/openstack-tox-docs/28016f8/docs/bobcat/schedule.html
14:36:48 <enriquetaso> Lince rojo in Spanish
14:38:17 <whoami-rajat> it doesn't mention the Bobcat virtual PTG which i think it should
14:38:20 <whoami-rajat> but i will add that to the review
14:38:49 <whoami-rajat> finally we've outreachy announcement
14:38:55 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031869.html
14:39:01 <whoami-rajat> enriquetaso, would you like to cover this?
14:39:06 <enriquetaso> sure
14:39:11 <whoami-rajat> great
14:39:24 <enriquetaso> (1)Looking for project/goals: Looking for potential intern projects ideas. Please let me know if you have an upstream bug or implementation that could be fixed by an intern (probably basic/medium tasks)
14:39:34 <enriquetaso> A project could be fixing something that has been broken for a long time and could be fixed in a 3 month period. (small to medium size bug). Or a new partial implementation.
14:39:58 <enriquetaso> (2)Looking for mentors: I think jbernard is interested in mentoring this round, let me know if anyone is interested as well.
14:40:08 <enriquetaso> (3)Finally, if you know of any low-hanging-fruit bugs that are not tagged yet, please let me know, I'm going to make a list of easy bugs.
14:40:13 <jbernard> i am, if needed
14:40:18 <enriquetaso> This list is going to be used as a tech test before the internship selection.
14:40:34 <enriquetaso> jbernard, :P
14:40:53 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031869.html
14:41:01 <enriquetaso> that's all for me, thanks whoami-rajat
14:41:44 <whoami-rajat> Sofia has been actively working on the outreachy part for the past several cycles and if you've any doubts about it she can clear them up (as she cleared mine)
14:41:47 <whoami-rajat> thanks enriquetaso
14:42:20 <whoami-rajat> does anyone have anything else to announce?
14:42:50 <Roamer`> about the driver status: I guess I should have mentioned this earlier, but I pushed new revisions for a couple of patches for the StorPool driver these days
14:43:03 <Roamer`> the last one (of the pure bugfixes) is https://review.opendev.org/c/openstack/cinder/+/843277
14:43:47 <Roamer`> the first couple of patches, e.g. https://review.opendev.org/c/openstack/cinder/+/845178 were discussed with, I believe, whoami-rajat a couple of months ago (and yes, it did take me some time to refresh the patches, we had some issues redeploying our CI)
14:44:37 <Roamer`> and then there is also https://review.opendev.org/c/openstack/cinder/+/847536 - add iSCSI export support to the StorPool driver - which, I believe, in a meeting some months ago people said that it could be seen as a change to the driver, not as a whole new driver
14:45:16 <Roamer`> so... yeah... basically I guess we would be grateful for some reviews at least for the bugfixes, and preferably also for the iSCSI export functionality
14:45:46 <whoami-rajat> totally missed that one, i think i avoided review because i was the co-author but i just discussed the idea and really didn't work on the code changes, i think it's good to go
14:45:59 <whoami-rajat> Roamer`, thanks, you can add the patches in review request section
14:46:18 <whoami-rajat> though honestly i haven't been actively looking at them, i will try to clear the backlog every week from now on
14:46:30 <roquej> thank you
14:46:40 <Roamer`> yeah, thanks a lot
14:46:53 <whoami-rajat> np
14:46:58 <Roamer`> so yeah, nothing more from me
14:47:05 <whoami-rajat> cool, let's move to topics then
14:47:22 <whoami-rajat> #topic Do we still support stable/victoria  and stable/ussuri jobs?
14:47:30 <whoami-rajat> enriquetaso, that's you
14:47:41 <enriquetaso> We need to add  zed and yoga jobs for NFS CI. My question is if we still need to support the victoria and ussuri jobs or if it's okay to remove them.
14:47:48 <whoami-rajat> i think you mean in devstack-plugin-nfs and not for every cinder project?
14:47:52 <enriquetaso> #link https://review.opendev.org/c/openstack/devstack-plugin-nfs/+/871072
14:48:15 <enriquetaso> whoami-rajat, honestly, the question is general? I'm a bit lost
14:48:37 <enriquetaso> but yes, its mainly for nfs
14:48:38 <eharney> i don't see a reason to remove them from devstack-plugin-nfs?
14:48:47 <whoami-rajat> since those branches are EM, we do want to keep the gate active
14:48:58 <whoami-rajat> until they're EOL but yeah others can comment on it
14:49:30 <rosmaita> well, i believe that for EM, you only need to run unit tests and pep8
14:49:40 <eharney> they still work, might as well keep them there, it's not like they run very often
14:49:54 <eharney> rosmaita: this is for the devstack plugin itself
14:50:09 <rosmaita> ok, sorry
14:50:59 <enriquetaso> okay, so we keep them
14:51:12 <enriquetaso> I'll leave a comment on the patch
14:51:36 <whoami-rajat> yeah i think that would be good, as eharney said they don't run often and don't take much of gate resources
14:51:43 <enriquetaso> ++
14:51:45 <enriquetaso> thanks
14:52:20 <whoami-rajat> thanks enriquetaso
14:52:25 <whoami-rajat> so we don't have any other topics
14:52:28 <whoami-rajat> let's move to open discussion
14:52:31 <whoami-rajat> #topic open discussion
14:52:54 <roquej> What's the process to update documentation?
14:53:12 <roquej> typically on openstack.org
14:53:18 <whoami-rajat> in the cinder repo, there's a doc folder
14:53:36 <whoami-rajat> you can find cinder related documentation there
14:53:39 <roquej> Ok so basically the same process of proposing code changes
14:53:47 <whoami-rajat> yes
14:53:54 <roquej> copy that
14:53:56 <roquej> thks
14:53:56 <whoami-rajat> you will need to propose a gerrit patch as normally you would
14:53:58 <whoami-rajat> np
14:54:21 <roquej> makes sense, that's what I thought to be honest
14:57:58 <whoami-rajat> we've a bunch of patches in review request section, if you get time please take a look at them
14:58:41 <whoami-rajat> also I've been seeing a lot of review activity, thanks everyone for the reviews
14:58:59 <whoami-rajat> this is a doc that would be helpful https://docs.openstack.org/cinder/latest/contributor/gerrit.html#efficient-review-guidelines
14:59:12 <roquej> will look at those shortly
14:59:28 <whoami-rajat> cool
14:59:47 <roquej> considering my skill level, I'll try to do my best ;-)
14:59:56 <whoami-rajat> would be good if everyone can leave a comment with the review stating the part they've looked at and looks good to them so others can save some time
15:00:06 <whoami-rajat> sure, every review is appreciated :)
15:00:09 <whoami-rajat> we're out of time
15:00:14 <whoami-rajat> thanks everyone!
15:00:16 <whoami-rajat> #endmeeting