14:00:02 #startmeeting cinder
14:00:02 Meeting started Wed Nov 9 14:00:02 2022 UTC and is due to finish in 60 minutes. The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:02 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:02 The meeting name has been set to 'cinder'
14:00:04 #topic roll call
14:00:17 o/
14:00:26 o/
14:00:36 o/
14:00:36 o/
14:00:38 o/
14:00:43 o/
14:00:46 hi
14:01:33 for US and EMEA regions, the meeting might have shifted 1 hour earlier since this is UTC time
14:01:39 #link https://etherpad.opendev.org/p/cinder-antelope-meetings
14:02:24 If it's too early for anyone and they would like to join, let me know
14:03:18 hi
14:03:31 hi
14:03:53 hi
14:04:04 good turnout we have, let's get started
14:04:10 #topic announcements
14:04:23 first, cinderlib Zed release deadline is 16 Dec 2022
14:04:46 i was going to add it before the meeting but i think rosmaita added it earlier
14:04:53 :)
14:05:00 #link https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031095.html
14:05:09 so we have a cinderlib release coming up
14:05:35 rosmaita, would you like to give a current overview and elaborate on the gate situation?
14:05:43 sure
14:05:58 the short version is that cinderlib is completely broken in zed
14:06:24 :(
14:06:30 * jungleboyj sneaks in late
14:06:30 oh no
14:06:41 i'm pretty sure the problem is the db code changes in zed, the moving of the session onto the context, that's what has done it
14:06:54 so, shouldn't be too bad to fix, but someone needs to do it
14:07:28 and then we need to get the gate fixed (current gate is still testing yoga, i think)
14:07:48 i'm pretty sure there haven't been many zed cinderlib patches, it's been kind of quiet
14:07:58 do we have a bug for this to keep track somewhere, or do we not need one?
14:07:59 i think you had a patch that changes it but it never merged because of the gate failures
14:08:12 so once the db stuff is fixed, i am hopeful that it will be smooth sailing
14:08:27 enriquetaso: not sure, but you are right, we should be tracking it
14:08:36 whoami-rajat: yes, i will dig up a link to the review
14:08:41 +1 for tracking it
14:09:07 https://review.opendev.org/c/openstack/cinderlib/+/848846
14:09:26 #link https://review.opendev.org/c/openstack/cinderlib/+/848846
14:09:58 i should get that patch out of merge conflict and re-run it to get a fresh set of logs
14:10:09 OK, i will open a bug
14:10:42 title: "cinderlib gate is horribly broken"
14:10:48 lol
14:11:29 i think it's just the unit tests having DB issues
14:11:29 so i think we're looking good but some progress would be good as well
14:11:34 16 december is not far ...
14:11:48 rosmaita, let me know if i can help anywhere
14:12:20 yeah, let's coordinate with geguileo when he gets back next week
14:12:34 +1
14:12:49 #action: enriquetaso to open a bug for the cinderlib gate situation
14:12:59 #action: rosmaita, geguileo, and whoami-rajat to coordinate on fixing it
14:13:04 and maybe see if it's possible to prevent future issues
14:13:49 cinderlib depends a lot on cinder, so unless there is some major change in cinder, cinderlib should be fine
14:14:11 but we should consider cinderlib while reviewing major changes in cinder
14:15:18 yeah, we run cinderlib functional tests on cinder changes, not sure why we didn't see any breakage
14:15:31 i guess because the functional jobs were actually passing
14:15:44 is it n-v in the cinder gate?
14:15:47 * whoami-rajat checks
14:16:12 i think cinderlib only needs to use the DB for certain drivers that (quite improperly) access the cinder db
14:16:30 so the drivers used in our gate aren't going to hit that problem
14:17:43 I'm unable to find a job running cinderlib tests, rosmaita do you know which one runs it?
14:18:11 maybe in cinder-tempest-plugin, there are some jobs that run cinder and then also run cinderlib functional
14:18:15 (i will look)
14:18:31 ok, found it in cinder-tempest-plugin-lvm-lio-barbican
14:18:45 https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_8ad/860997/3/check/cinder-tempest-plugin-lvm-lio-barbican/8ad5892/testr_results.html
14:18:55 total of 17 tests we run here
14:19:01 cinder-plugin-ceph-tempest also
14:19:23 hmm, I'm unable to find it there https://b5fbbfefaf709464375e-0f35cc5e18a5457e4954b3967fd49fc8.ssl.cf2.rackcdn.com/860997/3/check/cinder-plugin-ceph-tempest/f01de21/testr_results.html
14:20:22 yeah, that's weird, because the job is defined to use it: https://opendev.org/openstack/cinder/src/branch/master/.zuul.yaml#L153
14:20:35 yep, was checking that
14:20:39 maybe something changed
14:21:49 anyway, we can take a look later
14:23:08 actually, that job did run the tests, looks like there was an error later when uploading logs: https://zuul.opendev.org/t/openstack/build/f01de213d6794d7b89e4521da94c6e3b/log/job-output.txt#26211
14:23:39 ah ok, looks good then
14:23:45 (at least i think that's the same job, those urls are crazy)
14:24:08 I think it's a different job
14:24:22 oh, no, it's that one
14:24:50 so both the lvm-lio-barbican job (defined inside cinder-tempest-plugin) and the ceph one (defined in cinder.git) should run those tests
14:25:49 ok, looks like that's working
14:26:12 actually the tests are defined to run in cinder-tempest-plugin-lvm-barbican-base
14:26:19 so any job inheriting from that should be running ti
14:26:22 s/ti/it
14:26:39 so there are a few, but mostly we're concerned with lvm-lio-barbican and ceph-tempest
14:26:43 https://github.com/openstack/cinder-tempest-plugin/blob/master/.zuul.yaml#L125
14:26:44 that was the plan, yes
14:26:59 ack
14:27:28 looks like we could use some kind of canary tests that hit the db
14:27:39 we can see if geguileo has any ideas
14:27:56 let's discuss this again next week when geguileo is back
14:28:52 ok, moving on
14:28:59 next announcement, Milestone 1 next week
14:29:08 we had a few targets for M-1
14:29:17 * rosmaita hides
14:29:23 :D
14:29:30 I've reviewed one change in TusharTgite's reset state series
14:29:36 he is working on those changes
14:29:54 apart from that, the keystone team said they might be implementing the service role by M-1
14:30:01 but I'm not sure about the status
14:30:41 that's all i can remember for now
14:31:08 if anyone has a patch that is a priority for M-1, add it to the etherpad under this topic
14:31:47 shoot, the service role spec is still unapproved
14:32:03 that's bad ...
14:32:25 i'll bring it up at the TC meeting today, get some visibility
14:32:59 rosmaita++
14:33:45 moving on then
14:33:50 next, Midcycle - 1 planning
14:34:00 so we have the midcycle coming up on 30th November
14:34:30 which seems far off, but the deadlines come up very quickly
14:34:42 so I've created an etherpad to add topics for it
14:34:48 #link https://etherpad.opendev.org/p/cinder-antelope-midcycles
14:35:26 so I encourage everyone to add topics to the etherpad
14:36:06 I've already added 2, so you don't have to be the first one
14:37:14 let's move to the last announcement
14:37:20 Bug Deputy and Stable release manager
14:37:23 first, Bug deputy
14:37:38 enriquetaso has been doing a great job for the past few cycles
14:38:05 ++
14:38:38 although I would like her to continue the work, I would also like to ask whether she is willing to continue or has other plans
14:39:08 ++ for enriquetaso from me too
14:39:43 it doesn't have to be a prompt response, we can discuss it
14:40:08 sure
14:40:17 lol
14:40:27 great, maybe next week you can update us with your response
14:40:35 if anyone else would like to take the position, that is also fine
14:40:55 i don't mind continuing
14:41:01 let's be clear, we are concerned about over-working you, not about the job you are doing, which is fantastic
14:41:15 exactly what rosmaita said!
14:41:22 \o/
14:41:32 :-)
14:41:42 Giving you the option to step out if you need a break.
14:42:40 rosmaita and jungleboyj explain it better than i do, but that's what i wanted to convey
14:43:03 whoami-rajat: :-)
14:43:06 thanks
14:43:29 so let's discuss this again next week
14:43:42 coming to the Stable release manager
14:44:04 Jon doesn't seem to be around today, but he became stable release maintainer last cycle
14:44:13 and has done an excellent job
14:44:29 he was able to carry out stable releases for all active branches
14:44:55 and also recently did the final wallaby release before moving it to EM
14:45:24 since he's not around, we can discuss this next week as well
14:45:47 but he's doing a good job and it would be good if he continues
14:46:04 anyway, moving on to the topics now
14:46:07 #topic Request reviews for new Pure Storage replication feature
14:46:10 simondodsley, that's you
14:46:33 #link https://review.opendev.org/c/openstack/cinder/+/862365
14:46:50 Yep - new replication feature. Passes all Pure CI and Zuul. Just needs some core eyes on it
14:48:38 since this is a driver feature, the deadline is M-3, so i would keep it a little lower on my priority list (we have a lot for M-1 and M-2)
14:48:52 but don't want to discourage anyone from reviewing it ^
14:48:56 OK - move on then
14:48:58 please take a look
14:49:45 if you are a driver vendor, we would appreciate your reviews on driver changes like these ^
14:50:09 next topic
14:50:11 #topic Request for re-review on new patchset
14:50:17 ganso, that's you
14:50:27 o/
14:50:36 so just a request for re-review on that patch
14:50:38 I addressed the comments
14:50:42 whoami-rajat: thanks for the review btw!
14:50:48 ack, will take a look
14:50:50 np
14:50:54 #link https://review.opendev.org/c/openstack/cinder/+/812685
14:51:11 if other core reviewers could chime in, that would be awesome! thanks in advance!
14:51:13 i will take a look too
14:51:21 (for realz this time)
14:51:29 it's been sitting there for a long time and is important to fix for the glance multi-store case
14:51:47 great
14:51:52 last topic then
14:51:56 #topic using cinderclient with older Block Storage API versions
14:51:59 rosmaita, that's you
14:52:36 yeah, walt found a bug earlier this week when using the zed cinderclient with wallaby
14:53:02 i think i figured out what's going on, but we don't really test that scenario at all
14:53:40 anyway, the commit message gives my theory of what's happening, and i added a unit test for it
14:53:57 #link https://review.opendev.org/c/openstack/python-cinderclient/+/864027
14:54:02 thanks!
14:54:28 so, please review, and there may be some other cases where we will hit this issue
14:55:00 though it may not be worth worrying much about if we move to the openstackclient for the CLI
14:55:38 that's all, and thanks to walt for testing the patch in his environment
14:55:51 yeah, that reminds me i need to work on that ^
14:56:15 thanks hemna- and rosmaita for fixing this
14:57:00 we're done with topics, so let's move to open discussion for 4 minutes
14:57:04 #topic open discussion
14:59:23 guess there's nothing else to discuss
14:59:32 thanks everyone for joining
14:59:36 #endmeeting
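
Editor's note on the breakage discussed at 14:06:41: the suspected cause is the Zed-cycle change that moved the DB session from a module-level factory onto the request context, which breaks out-of-tree consumers (like cinderlib) that call the DB API without building a full context. The sketch below is illustrative only; the names (`get_session`, `RequestContext`, `volume_get_*`) are hypothetical stand-ins, not cinder's actual internals.

```python
class Session:
    """Stand-in for a SQLAlchemy session (illustrative only)."""

    def __init__(self, name):
        self.name = name

    def query(self, table):
        # Pretend to run a query; return a string so the flow is visible.
        return f"{self.name} queries {table}"


# Old style: DB API functions pull a session from a module-level factory,
# so callers never need to supply one.
_GLOBAL_SESSION = Session("global")


def get_session():
    return _GLOBAL_SESSION


def volume_get_old(volume_id):
    session = get_session()
    return session.query(f"volumes/{volume_id}")


# New style: the session rides on the request context. Any caller that
# invokes the DB API without a properly constructed context now fails,
# which matches the "unit tests having DB issues" symptom above.
class RequestContext:
    def __init__(self):
        self.session = Session("per-context")


def volume_get_new(context, volume_id):
    return context.session.query(f"volumes/{volume_id}")
```

Under this pattern, fixing cinderlib means making it construct (or mock) a context that carries a session before calling into cinder's DB layer, rather than relying on the old global factory.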