14:00:01 #startmeeting cinder
14:00:01 Meeting started Wed Jan 4 14:00:01 2023 UTC and is due to finish in 60 minutes. The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:01 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:01 The meeting name has been set to 'cinder'
14:00:03 #topic roll call
14:00:09 hi
14:00:09 o/
14:00:30 hi
14:00:32 hi
14:00:33 hi
14:00:34 hi
14:01:13 #link https://etherpad.opendev.org/p/cinder-antelope-meetings
14:01:43 Happy new year everyone!
14:01:44 o/
14:01:51 Happy New Year!
14:02:21 o/
14:02:46 Happy New Year :)
14:03:01 Happy New Year!
14:03:01 a good amount of people are around after the break
14:03:18 let's get started
14:03:24 #topic announcements
14:03:37 first, Status of Specs (deadline 23rd December, 2022)
14:03:44 so we're past the spec deadline
14:04:01 there were 3 specs proposed out of which 1 merged
14:04:10 which is Extend in-use FS volumes
14:04:25 the next one is, Encrypted Backup support
14:04:43 but that currently has 2 -1s and I think Gorka is still not back from break
14:05:16 last one is New backup field for allowing backups during live migration
14:05:26 which was recently proposed i think during year end
14:05:56 Well, that is well past spec freeze so it should wait
14:06:14 yes correct
14:06:27 we could've considered them but they're way far from merging right now
14:06:27 ++
14:06:49 and extending it would only conflict with our next deadlines
14:06:56 still I'm open to suggestions if the team thinks otherwise
14:06:57 iirc we still need to assess whether fernet is a good solution for encrypted backups
14:07:17 that's a good point
14:07:39 there were some good concerns raised that haven't really been analyzed yet
14:07:56 hey guys! Happy new year to all the community
14:08:15 and especially my loved core reviewers ;-)
14:09:36 so it makes sense to push them to next cycle, I will do the procedural -2 later today
14:09:43 thanks for the discussion
14:09:44 happystacker, happy new year!
14:10:10 let's move to the next announcement
14:10:13 Driver Merge deadline 20th Jan, 2023
14:10:42 the deadline would've been this week but I shifted it based on past cycle experience so we've time to review them
14:10:59 ok, let's discuss the drivers quickly
14:11:01 1) HPE XP
14:11:07 #link https://review.opendev.org/c/openstack/cinder/+/815582
14:11:20 I did a CI check yesterday and it hasn't responded yet
14:11:40 I used the same comment they did in os-brick
14:11:48 so not sure what's wrong
14:12:05 is abdi: here?
14:12:17 regarding the code part, it has inherited everything from the hitachi driver, even supports the same features as the parent driver
14:12:50 it's the same device just rebadbed
14:12:56 rebadged
14:13:16 ah, that makes sense now
14:14:23 ok, I've left a comment for the author to check the CI
14:15:16 rosmaita do you have your CI checklist somewhere?
14:15:19 in the meantime, feel free to review the driver, the tests are a good chunk of the LOC so that could be reviewed
14:15:29 slightly off-topic, but whoami-rajat it would be a good idea to send something to the ML reminding vendors that we are requiring CI on os-brick changes for antelope
14:15:43 #link https://lists.openstack.org/pipermail/openstack-discuss/2022-August/030014.html
14:15:49 follow-up to that ^^
14:16:14 simondodsley: yes, i should put that up in an etherpad or something
14:16:24 btw, I have a few code changes I'd expect to be merged into Antelope, can someone take a look at them if I give you the list?
14:16:27 "that" == CI checklist
14:16:43 rosmaita, good idea, i will reply to that thread
14:17:05 sounds good
14:17:13 happystacker, we can discuss that towards the end in the open discussion
14:17:28 sounds good, preparing the code change IDs
14:17:44 happystacker, put them in the etherpad
14:17:55 will do
14:18:05 happystacker, better, I've added a review request section at the end, you can put the list there
14:18:11 all ^
14:18:22 ok Rajat, working on it
14:18:39 great
14:18:45 so moving on to the next driver
14:18:58 2) Fungible NVMe TCP
14:19:03 #link https://review.opendev.org/c/openstack/cinder/+/849143
14:19:24 I've reviewed it today, they seem to be doing something with a default volume type that they've defined
14:19:39 I'm not sure what's the idea there but probably they will clarify
14:20:04 also I don't know if we support NVMe with TCP or not
14:20:17 from os-brick perspective
14:20:29 yes - TCP and RoCE are supported. FC is not
14:20:54 oh that's good
14:21:06 what's the etherpad link again?
14:21:15 happystacker, https://etherpad.opendev.org/p/cinder-antelope-meetings
14:21:15 FC is delayed as Red Hat don't want Gorka working on it as they don't see their customers wanting it
14:21:48 which is a shame for other vendors that do, but don't have the os-brick skillset...
14:22:24 ok, i think it also depends on vendors what protocol they use with nvme? do we have drivers (existing/upcoming) that would want nvme with FC?
14:22:41 Pure want it
14:23:12 simondodsley: just out of curiosity, is that for customers who already have FC, or completely new customers?
14:23:38 usually for existing customers. New ones tend to be greenfield and will go TCP or RoCE
14:23:57 thanks, that was my intuition, but i wanted a reality check
14:24:37 good to know we've a use case for NVMe FC support
14:25:09 ok, that's all for this driver as well
14:25:14 Hi guys
14:25:42 NetApp wants to deliver a NVMe/TCP driver for the next release
14:25:58 these are the 2 drivers we've for this cycle, reviewers please take a look
14:25:59 We are working downstream on it.. probably sending the patch upstream at the end of this week
14:26:17 is it possible yet?
14:26:33 felipe_rodrigues, if it's for next release, sounds good to me
14:26:55 for this cycle, it needs to have a working CI and all the driver guidelines satisfied
14:28:02 They are not available yet, because it is private.. The patch is a medium one.. because it is just about connection (initialize and terminate connection)..
14:28:54 I mean.. The CI/patch would be available at the end of this week, is it possible to have the review and merge to Antelope or is it too late?
14:29:18 the deadline is 20th Jan so we've enough time
14:29:43 I see..
14:29:59 Let's see if it's possible, thank you so much!
14:30:14 remember that we are requiring CI on os-brick changes for antelope
14:30:50 the idea is to have the driver and CI in a working state and respond quickly to review comments to have a better chance to make it to the cycle
14:31:00 enriquetaso++
14:31:29 ok, the last announcement I've is for midcycle 2
14:31:34 Midcycle-2 Planning
14:31:40 #link https://etherpad.opendev.org/p/cinder-antelope-midcycles#L33
14:31:57 we hadn't finalized the date in the beginning since there were conflicts
14:32:09 currently I'm proposing the date 18th Jan which is the week after next
14:32:19 do we have any known conflicts for that date?
14:32:56 it is a 2 hour video meet
14:32:57 it will be on wednesday and will overlap with 1 hour of the cinder upstream meeting
14:33:20 i'm not available then, but that shouldn't stop you
14:34:36 no conflicts from me, and at the risk of insulting someone, doesn't look like there's a national holiday in any of the major locations on that day
14:34:58 yea no conflict with me as well
14:35:21 no conflicts from me either (but i'm also available any other day)
14:35:40 cool, let's fix this date for now and discuss this again next week
14:35:48 in the meantime, please add topics
14:36:26 that's all i had for announcements
14:36:31 does anyone have anything else?
14:37:50 looks like not, let's move to topics then
14:37:55 #topic tox jobs failing in the stable branches
14:37:58 rosmaita, that's you
14:38:35 yeah, we are having tox 4 failures in all stable branches
14:38:37 #link https://review.opendev.org/q/topic:tox-4-postponed
14:38:51 those are the patches ^^ that should fix it
14:39:11 at least for now, they don't use tox 4
14:39:32 ok so we'll continue to use tox 3 for now?
14:39:41 yes, in the stable branches
14:39:48 ok
14:39:53 you can't use tox4 with python 3.6
14:39:54 rosmaita, one question i had, i only see os-brick and cinder patches, don't we require it in cinderclient or other cinder projects?
14:40:35 yeah, i was waiting until the cinderclient patch to master was working
14:40:40 Do we have an estimate of when the switch to tox4 will happen?
14:40:41 which doesn't make sense, now that i think about it
14:40:51 happystacker: december 23, 2022
14:41:06 rosmaita, ah we still have the cinderclient change open?
14:41:08 i can put up patches for the cinderclient stable branches too
14:41:13 i mean the master one
14:41:29 december 23 has passed
14:41:31 yeah, i think i figured out what was happening, and put up a new patch
14:41:50 ok, i remember, it was failing gate
14:41:57 take a look at this real quick: https://zuul.openstack.org/stream/6997323310894f48855e4b9139b26f10?logfile=console.log
14:41:57 yes
14:42:12 the functional-py38 tests are passing now, but it's going to timeout
14:42:20 gate was failing but that was due to openstacksdk failures
14:42:22 happystacker, the tox4 migration has happened and that's why we're seeing gate breaking in stable branches, for master rosmaita already fixed it
14:42:38 harsh: this is a different issue (i think)
14:42:43 oh ok
14:42:45 harsh, that's a different issue which is fixed now
14:42:51 but related to tox4
14:42:55 yea
14:43:02 mmh, ok
14:43:02 maybe fungi is around?
14:43:19 i am
14:43:24 the func-py38 has been sitting for >20 min
14:43:43 fungi: happy new year! can you take a look at https://zuul.openstack.org/stream/6997323310894f48855e4b9139b26f10?logfile=console.log
14:44:14 looks like the job is stuck? hopefully it's not something on the cinder side?
14:45:12 this is the patch being checked: https://review.opendev.org/868317
14:45:16 what does a run of that job normally look like after that point? is it maybe collecting files or compressing something?
14:45:45 fungi: should look just like the func-py39 job, i think
14:45:52 https://zuul.opendev.org/t/openstack/build/9ba34208232b4db49c48ceac72716c98
14:45:57 subunit file analysis?
14:46:32 it proceeded
14:46:51 fungi: maybe, guess i should wait until it actually reports back to zuul
14:47:29 anyway, that's all from me ...
i'll put up the cinderclient stable branch patches later today; in the meantime, we need to merge the cinder/os-brick stable patches
14:47:48 yeah, would be interesting to see what it was doing during that quiet period, but often it will be something like the executed commands generated waaaaay more logs than expected or tons more subunit attachments or something
14:48:13 also whether subsequent builds of the same job pause in the same place
14:48:15 i'll put something on the midcycle agenda about my reasons for not backporting the master branch changes, and we can discuss
14:48:50 sounds good, always up for topics
14:49:03 fungi: thanks, i'll put up a patch that removes the tempest test so we can get a quicker response
14:49:50 I've reviewed all the changes to pin tox<4, it is a straightforward 2 line change which the TC also suggested, so it should be easy to review and fix our gate
14:49:53 other cores ^
14:50:13 posting the link again to the patches
14:50:15 #link https://review.opendev.org/q/topic:tox-4-postponed
14:50:47 rosmaita, anything else on this topic?
14:50:58 nothing from me
14:51:17 great, thanks for bringing this up
14:51:24 next topic
14:51:32 #topic Unit test will fail with python 3.11
14:51:34 enriquetaso, that's you
14:51:38 hello
14:51:40 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031655.html
14:51:44 Quoting the official bug: "An unfortunately common pattern over large codebases of Python tests is for spec'd Mock instances to be provided with Mock objects as their specs. This gives the false sense that a spec constraint is being applied when, in fact, nothing will be disallowed."
14:51:50 #link https://github.com/python/cpython/issues/87644
14:52:08 Just mentioning it because this would affect our future python 3.11 CI job.
14:52:12 do we have a debian job somewhere so we can reproduce this in gate?
14:52:12 i worked on this some before the winter break, have at least one patch posted
14:52:22 just run tox -e py311 to repro
14:52:23 We need to update at least 250 tests from different drivers.
14:52:38 ouch
14:52:40 i've reproduced this with docker (python3.11 debian image)
14:52:46 Thomas Goirand discovered this and opened a bug report to track the work:
14:52:47 ah so it fails even in other distros
14:52:51 it has nothing to do with Debian...
14:52:52 #link https://bugs.launchpad.net/cinder/+bug/2000436
14:53:22 it's related to python 3.11+
14:53:23 this is a change in Python
14:53:48 anyway, fixing it is not particularly hard, but it will involve shuffling around a lot of mocks in unit tests, so it's laborious
14:54:10 ok, i got confused by the bug report
14:54:13 yes.. i think we don't have plans to have a 3.11 job yet
14:54:29 the plans are: we definitely need it to work at some point, so we should fix it :)
14:54:34 for antelope, the runtime is 3.8 and 3.10 but we need to be ready for next cycle runtimes
14:55:21 the new restriction is, basically, you can't mock a mock now, so make the mock once in the unit tests
14:55:46 https://review.opendev.org/c/openstack/cinder/+/867824 is the first fix i submitted for this
14:56:14 cool, i think we can use the bug number to track all the fixes or use a tag if needed
14:56:43 we should probably add a non-voting py3.11 job
14:56:48 the bug number is less important than a patch that turns on 3.11 and depends-on: patches
14:56:55 right
14:56:56 rosmaita++
14:57:00 ++
14:57:04 i thought that was going to happen as part of the antelope template
14:57:05 okay!
14:57:22 but there were other issues that came up ...
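To make the restriction described above a bit more concrete, here is a minimal sketch of the pattern that starts failing on Python 3.11; the FakeDriver class and its method are invented for illustration and are not taken from an actual Cinder test.

    from unittest import mock


    class FakeDriver:
        def do_setup(self, context):
            pass


    # Fine on any Python version: the spec comes from the real class.
    ok = mock.create_autospec(FakeDriver)

    # Breaks on Python 3.11+: the "spec" is itself a Mock.  Older Pythons
    # silently accepted this (so no real spec checking happened); 3.11
    # raises an InvalidSpecError instead.
    try:
        mock.create_autospec(mock.Mock())
    except Exception as exc:
        print(type(exc).__name__, exc)

The same failure shows up when a target that is already patched out (and is therefore a Mock) gets patched again with spec=True or autospec=True; the usual fix, as noted above, is to create the mock once and configure that single mock rather than stacking spec'd patches on top of it.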
i can ask at the TC meeting later today
14:57:40 in any case, we can do it ourselves in cinder, i think
14:58:00 maybe they plan to keep 3.10 for another cycle and add the n-v template next cycle, but not sure
14:58:23 also, fwiw, i just ran "tox -e py38" this morning and am seeing 3.11 failures in there
14:58:25 rosmaita, yep, even a DNM should be good to track the failing tests
14:58:33 maybe some tox4 weirdness?
14:58:52 i hope not
14:59:04 but probably so
14:59:40 it appears to be running the wrong version of python in that env :/
15:00:26 topics are not available in gerrit anymore?
15:00:41 i think they are
15:00:46 we're out of time, let's continue this next week, and in the meantime hoping we will get some fixes in
15:00:54 also want to mention the review request section
15:01:03 there are a bunch of review requests, so please take a look at them
15:01:10 #link https://etherpad.opendev.org/p/cinder-antelope-meetings#L121
15:01:19 thanks everyone!
15:01:22 #endmeeting
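As a rough illustration of the NetApp NVMe/TCP work discussed under the driver announcements above (described as being "just about connection (initialize and terminate connection)"), this is broadly what the connection-handling side of such a driver looks like; the class name and every key and value in the returned dict are invented for illustration, and the exact connection properties os-brick's NVMe-oF connector expects vary between releases, so the real patch remains the authoritative reference.

    # Illustrative sketch only, not the NetApp implementation.
    class FakeNVMeTCPDriver(object):

        def initialize_connection(self, volume, connector):
            # A real driver would first export the volume/namespace to the
            # host NQN found in the connector dict, then describe the target.
            return {
                'driver_volume_type': 'nvmeof',
                'data': {
                    # Example keys and values only.
                    'target_nqn': 'nqn.2023-01.org.example:subsystem-1',
                    'portals': [('192.0.2.10', 4420, 'tcp')],
                    'volume_nguid': '6e1f2d3c4b5a69788796a5b4c3d2e1f0',
                },
            }

        def terminate_connection(self, volume, connector, **kwargs):
            # A real driver would remove the host's access to the namespace.
            pass

Whatever shape the code takes, the points raised in the meeting still apply: the driver needs a working third-party CI (including on os-brick changes) and has to meet the 20th Jan driver merge deadline.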