14:00:00 #startmeeting cinder
14:00:00 Meeting started Wed Jan 25 14:00:00 2023 UTC and is due to finish in 60 minutes. The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:00 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:00 The meeting name has been set to 'cinder'
14:00:03 #topic roll call
14:00:53 o/
14:01:06 o/
14:01:21 o/
14:01:41 hi
14:01:47 #link https://etherpad.opendev.org/p/cinder-antelope-meetings
14:02:03 o/
14:02:48 Hello!
14:03:16 o/
14:03:16 hi
14:03:30 hi!
14:03:41 hello everyone
14:03:44 Hi
14:03:53 let's get started
14:03:57 hi
14:03:59 o/
14:04:04 HI
14:04:07 o/
14:04:29 #topic announcements
14:04:35 Security issue with VMDK format
14:04:40 #link https://security.openstack.org/ossa/OSSA-2023-002.html
14:04:54 rosmaita will cover this since he's been working on it
14:05:02 sure
14:05:43 i see Rajat already posted the link to the patches
14:05:59 we are contractually obligated to fix xena, yoga, and zed
14:06:05 https://review.opendev.org/q/I3c60ee4c0795aadf03108ed9b5a46ecd116894af
14:06:10 the rest are "courtesy" patches
14:06:30 because people are still using those branches even though we don't release from them any more
14:06:36 i stopped at train though
14:07:11 makes sense not to go beyond
14:07:13 anyway, as we get to the older branches, it is harder to actually run tests because of old dependencies
14:07:14 #link https://review.opendev.org/q/I3c60ee4c0795aadf03108ed9b5a46ecd116894af
14:07:20 Train is pretty old
14:07:47 i had to mess with the tox.ini and requirements.txt and upper constraints to get things to run locally
14:08:03 just mentioning that in case you are in a similar situation
14:08:47 one thing is that somewhere around ussuri, you need to set basepython=python3.6 to get the pep8 and docs jobs to work
14:08:58 not sure if it's worth backporting that though
14:09:26 i think the gate gets around this because they are using older distros when they run the zuul jobs
14:10:20 there is one change in the patch that may affect some drivers
14:10:38 happystacker: i have customers stuck on Queens as they are using Mirantis OpenStack... it makes me cry
14:11:04 wow, that's terrible! it would make me cry too ;-)
14:11:06 simondodsley: someone attached a queens backport to the bug, though i haven't looked at it
14:11:56 i did write more detailed than usual commit messages about conflicts during cherry-pick, to explain what changes i made
14:12:14 i think we EOLed queens and deleted the branch?
14:12:14 so if you look at train, you get the whole story
14:12:38 whoami-rajat: you are correct
14:13:10 yeah, if you do need to work with queens, you can check out the queens-eol tag and work from there
14:13:55 rosmaita: story... that's a novel :)
14:14:01 :D
14:14:10 ok, as far as the drivers go ...
14:14:32 some drivers call cinder.image.image_utils.convert_image() directly
14:15:03 that function now checks to verify that the image data matches the image format that the caller says it is
14:15:21 it does this by calling out to 'qemu-img info'
14:16:22 anyway, the main point is that you can pass an image_id in to the qemu-img info stuff
14:16:26 but that's optional
14:16:47 if you don't pass an image_id and a problem is encountered, the message will say 'internal image'
14:16:55 which for some drivers it is
14:17:14 but there are some drivers who do have a glance image id when they make the call
14:17:29 so the log/error message will be slightly misleading, but only if there's a problem
14:17:44 anyway, i think it affects 3 or so drivers
14:18:12 i'll put up patches for master for those drivers, and rely on the driver maintainers to do testing and backports
14:18:17 It'd be worth doing a quick test on our drivers as well
14:18:31 happystacker: that's a good point
14:18:50 i'll check the generic-nfs scenario because it is affected by this
14:18:51 so the VMDK problem is like the backing_file issue with qcow2 images
14:19:25 right now, we do check for backing_file reported by qemu-img info
14:20:02 this VMDK thing happened because the "backing file" (called a "named extent" by vmware) doesn't show up in that field
14:20:18 it shows up in the format specific data in the qemu-img info response
14:20:31 (if you look at the unit tests on the patch you can see an example)
14:20:58 anyway, my point is that I don't know that we have caught this issue for all possible image formats we support
14:21:14 (which depends on what the glance disk_formats config option is)
14:21:25 complex to handle
14:21:42 so if you deal with other format images, please look for this kind of exploit
14:22:01 because it doesn't matter if your cloud doesn't support format X on the hypervisors
14:22:16 it's still possible for someone to put an X format image into glance, and then create a volume from it
14:22:33 at which point cinder will courteously convert the format to raw to write it to disk
14:23:01 which is nice in general, but can be bad in some cases!
14:23:25 ok, i will shut up now ... ping me if you have any questions
14:23:30 I have a question around it but we can talk in our bug review
14:23:38 sure
14:23:44 one thing i should say, though
14:24:06 if you do discover this issue with another format, please file it as a security bug in Launchpad
14:24:18 that will give us time to fix it before it's announced
14:24:38 though the bad guys will already know about it, i guess
14:24:45 (at least some of them)
14:24:52 probably...
14:24:54 ok, now i will shut up for real
14:25:04 thanks rosmaita for working on the fix and the detailed summary!
14:25:29 np, it didn't look too bad, i just didn't anticipate 50 backports
14:25:40 current status is we're waiting for the xena patch to merge and then we will release all active stable branches, i.e. xena, yoga and zed
14:25:50 and the tempest and grenade jobs in xena being so uncooperative
14:26:15 yeah, gate fails for important changes or when a deadline is around
14:26:53 anyway, thanks again and let's move to the next announcement
14:27:01 Driver status
14:27:17 1) HPE XP
14:27:26 #link https://review.opendev.org/c/openstack/cinder/+/815582
14:27:42 we've approved the HPE driver but i see it is dependent on one hitachi feature
14:27:48 this https://review.opendev.org/c/openstack/cinder/+/846977
14:27:52 is abdi around?
14:27:59 Yes
14:28:03 hey
14:28:09 Hi
14:28:27 so do we need that hitachi feature for this driver, or can it be done later? later = M3, which is the driver feature merge deadline
14:29:53 I believe that feature can be later but I will double check today and get back to you.
14:30:48 sure, since it is blocking the driver merge, it would be good if it can be removed as a dependency
14:31:12 if not needed, rebase the driver on master (removing this dependency) and let me know, i will review again
14:31:16 Ok. I will have that addressed today.
14:31:24 great thanks
14:31:30 Thank you.
14:31:48 np
14:31:54 2) Fungible NVMe TCP
14:32:00 #link https://review.opendev.org/c/openstack/cinder/+/849143
14:32:04 this driver is merged
14:32:12 3) Lustre
14:32:19 #link https://review.opendev.org/q/topic:bp%252Fadd-lustre-driver
14:32:27 we've decided to move this to next cycle
14:32:34 since it doesn't have a working CI yet
14:32:43 i will add it as a review comment on the patch
14:32:54 4) NetApp NVMe TCP
14:33:03 #link https://review.opendev.org/c/openstack/cinder/+/870004
14:33:14 I've reviewed it and it mostly resembles the iSCSI driver
14:33:34 it's currently missing a release note, and it has a code section which says it should've been removed in Mitaka
14:33:55 i checked the same against the netapp iSCSI driver so i think it needs to be removed from both drivers
14:34:01 i.e. iSCSI and the new NVMe TCP
14:34:17 thank you whoami-rajat for the review. I saw the comments, I am fixing the issues... submitting a new patch very soon
14:34:41 great, thanks felipe_rodrigues
14:34:58 that's all for the drivers
14:35:04 next, Upcoming release deadlines
14:35:13 os-brick (February 9)
14:35:13 cinderclient (February 16)
14:35:30 i don't see anything major in either project this cycle, but it would be good to get some open patches merged
14:35:39 we still have time so things are looking good as of now
14:36:03 next, 2023.2 (Bobcat) schedule
14:36:17 i don't remember if i announced it, but the next release name is Bobcat
14:36:32 #link https://review.opendev.org/c/openstack/releases/+/869976
14:36:37 the schedule is proposed
14:36:46 this is the HTML render
14:36:48 #link https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_280/869976/1/check/openstack-tox-docs/28016f8/docs/bobcat/schedule.html
14:36:48 Lince rojo in Spanish
14:38:17 it doesn't mention the Bobcat virtual PTG, which i think it should
14:38:20 but i will add that to the review
14:38:49 finally, we have an outreachy announcement
14:38:55 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031869.html
14:39:01 enriquetaso, would you like to cover this?
14:39:06 sure
14:39:11 great
14:39:24 (1) Looking for project/goals: Looking for potential intern project ideas. Please let me know if you have an upstream bug or implementation that could be fixed by an intern (probably basic/medium tasks)
14:39:34 A project could be fixing something that has been broken for a long time and could be fixed in a 3 month period (small to medium size bug). Or a new partial implementation.
14:39:58 (2) Looking for mentors: I think jbernard is interested in mentoring this round, let me know if anyone else is interested as well.
14:40:08 (3) Finally, if you know of any low-hanging-fruit bugs that are not tagged yet, please let me know, I'm going to make a list of easy bugs.
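[Editor's sketch of the VMDK check discussed above: the idea is to compare the format the caller claims against what qemu-img reports, and to look for VMDK "named extents" hidden in the format-specific data rather than in the top-level backing-file field. This is an illustration, not the actual cinder patch; the helper names are invented, and the JSON layout follows qemu-img's `--output=json` format as best I know it.]

```python
import json


def hidden_backing_refs(info):
    """Collect external file references from parsed `qemu-img info
    --output=json` output.

    Field names follow qemu-img's JSON output; the VMDK "named extent"
    location (format-specific data) is per the discussion above.
    """
    refs = []
    # The classic qcow2-style case: a top-level backing file.
    if info.get("backing-filename"):
        refs.append(info["backing-filename"])
    # The VMDK case: "named extents" hide in the format-specific data.
    fmt = info.get("format-specific") or {}
    if fmt.get("type") == "vmdk":
        for extent in (fmt.get("data") or {}).get("extents", []):
            if extent.get("filename"):
                refs.append(extent["filename"])
    return refs


def check_image(info, claimed_format):
    """Reject an image whose inspected format doesn't match the caller's
    claim, or that references external files."""
    actual = info.get("format")
    if actual != claimed_format:
        raise ValueError(
            f"claimed {claimed_format!r} but qemu-img reports {actual!r}")
    refs = hidden_backing_refs(info)
    if refs:
        raise ValueError(f"image references external files: {refs}")


# Illustrative (hand-written) qemu-img info JSON for a malicious
# monolithicFlat VMDK whose extent descriptor points at a host file:
malicious = json.loads("""
{"filename": "evil.vmdk", "format": "vmdk",
 "format-specific": {"type": "vmdk",
   "data": {"create-type": "monolithicFlat",
            "extents": [{"filename": "/etc/passwd"}]}}}
""")
```

As rosmaita notes, a per-format check like the vmdk branch here has to be written for each format whose metadata can smuggle a file reference, which is why he asks driver and format folks to look for the same pattern elsewhere.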
14:40:13 i am, if needed
14:40:18 This list is going to be used as a tech test before the internship selection.
14:40:34 jbernard, :P
14:40:53 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031869.html
14:41:01 that's all for me, thanks whoami-rajat
14:41:44 Sofia has been actively working on the outreachy part for the past several cycles, and if you have any doubts about it she can clear them up (as she cleared mine)
14:41:47 thanks enriquetaso
14:42:20 does anyone have anything else to announce?
14:42:50 about the driver status: I guess I should have mentioned this earlier, but I pushed new revisions for a couple of patches for the StorPool driver these days
14:43:03 the last one (of the pure bugfixes) is https://review.opendev.org/c/openstack/cinder/+/843277
14:43:47 the first couple of patches, e.g. https://review.opendev.org/c/openstack/cinder/+/845178, were discussed with, I believe, whoami-rajat a couple of months ago (and yes, it did take me some time to refresh the patches, we had some issues redeploying our CI)
14:44:37 and then there is also https://review.opendev.org/c/openstack/cinder/+/847536 - add iSCSI export support to the StorPool driver - which, I believe, in a meeting some months ago people said could be seen as a change to the driver, not as a whole new driver
14:45:16 so... yeah... basically I guess we would be grateful for some reviews, at least for the bugfixes, and preferably also for the iSCSI export functionality
14:45:46 totally missed that one, i think i avoided reviewing because i was the co-author, but i just discussed the idea and didn't really work on the code changes, i think it's good to go
14:45:59 Roamer`, thanks, you can add the patches in the review request section
14:46:18 though honestly i haven't been actively looking at them, i will try to clear the backlog every week from now on
14:46:30 thank you
14:46:40 yeah, thanks a lot
14:46:53 np
14:46:58 so yeah, nothing more from me
14:47:05 cool, let's move to topics then
14:47:22 #topic Do we still support stable/victoria and stable/ussuri jobs?
14:47:30 enriquetaso, that's you
14:47:41 We need to add zed and yoga jobs for the NFS CI. My question is whether we still need to support the victoria and ussuri jobs or if it's okay to remove them.
14:47:48 i think you mean in devstack-plugin-nfs and not for every cinder project?
14:47:52 #link https://review.opendev.org/c/openstack/devstack-plugin-nfs/+/871072
14:48:15 whoami-rajat, honestly, the question is general? I'm a bit lost
14:48:37 but yes, it's mainly for nfs
14:48:38 i don't see a reason to remove them from devstack-plugin-nfs?
14:48:47 since those branches are EM, we do want to keep the gate active
14:48:58 until they're EOL, but yeah, others can comment on it
14:49:30 well, i believe that for EM, you only need to run unit tests and pep8
14:49:40 they still work, might as well keep them there, it's not like they run very often
14:49:54 rosmaita: this is for the devstack plugin itself
14:50:09 ok, sorry
14:50:59 okay, so we keep them then
14:51:12 I'll leave a comment on the patch
14:51:36 yeah, i think that would be good; as eharney said, they don't run often and don't take much of the gate resources
14:51:43 ++
14:51:45 thanks
14:52:20 thanks enriquetaso
14:52:25 so we don't have any other topics
14:52:28 let's move to open discussion
14:52:31 #topic open discussion
14:52:54 What's the process to update documentation?
14:53:12 typically on openstack.org
14:53:18 in the cinder repo, there's a doc folder
14:53:36 you can find cinder related documentation there
14:53:39 Ok so basically the same process as proposing code changes
14:53:47 yes
14:53:54 copy that
14:53:56 thks
14:53:56 you will need to propose a gerrit patch as you normally would
14:53:58 np
14:54:21 makes sense, that's what I thought to be honest
14:57:58 we've a bunch of patches in the review request section, if you get time please take a look at them
14:58:41 also I've been seeing a lot of review activity, thanks everyone for the reviews
14:58:59 this is a doc that would be helpful: https://docs.openstack.org/cinder/latest/contributor/gerrit.html#efficient-review-guidelines
14:59:12 will look at those shortly
14:59:28 cool
14:59:47 considering my skill level, I'll try to do my best ;-)
14:59:56 it would be good if everyone could leave a comment with their review stating the part they've looked at and that looks good to them, so others can save some time
15:00:06 sure, every review is appreciated :)
15:00:09 we're out of time
15:00:14 thanks everyone!
15:00:16 #endmeeting
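[Editor's note: for anyone in the situation rosmaita described earlier, running pep8/docs tox jobs locally on roughly ussuri-era stable branches, the fix he mentioned amounts to a local tox.ini tweak along these lines. This is a sketch only: the exact section names vary by branch, it was not backported, and the rest of his workaround (adjusting requirements.txt and upper constraints) is branch-specific.]

```ini
# Local-only tweak for ~ussuri-era branches: pin the envs that break on
# newer interpreters to python3.6, as mentioned in the meeting above.
[testenv:pep8]
basepython = python3.6

[testenv:docs]
basepython = python3.6
```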