14:00:06 #startmeeting cinder
14:00:06 Meeting started Wed Nov 23 14:00:06 2022 UTC and is due to finish in 60 minutes. The chair is whoami-rajat__. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:07 The meeting name has been set to 'cinder'
14:00:11 #topic roll call
14:00:42 hi
14:01:11 o/
14:01:47 hey all.
14:01:55 is there a holiday today in the US/Europe region?
14:02:24 #link https://etherpad.opendev.org/p/cinder-antelope-meetings
14:02:45 hi
14:02:52 tomorrow is a USA holiday, so some people may be taking an extra day
14:03:36 ah yes, i know about tomorrow, but people might be extending it, right
14:03:45 so we have few people around but a lot on the agenda
14:03:47 so let's get started
14:03:55 #topic announcements
14:04:01 first, Midcycle-1 next week (Please add topics!) (30th Nov)
14:04:15 we have midcycle-1 next week, so I request everyone to add topics and attend it
14:04:28 it will be from 1400-1600 UTC (1 hour overlapping our current meeting time)
14:04:39 #link https://etherpad.opendev.org/p/cinder-antelope-midcycles
14:04:57 o/
14:05:08 I will send a mail this week with a reminder and details
14:05:20 next, Runtimes for 2023.1
14:05:27 #link https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031229.html
14:05:35 so there are 2 points in the mail regarding runtimes
14:05:47 1) we need to propose at least one tempest job which runs on the old ubuntu, i.e. focal
14:05:50 I've proposed it
14:06:01 #link https://review.opendev.org/c/openstack/cinder/+/865427
14:06:07 it is the tempest integrated storage job
14:06:28 the purpose is to verify a smooth upgrade from past releases
14:06:51 like ubuntu focal Zed to ubuntu focal 2023.1
14:07:09 2) the python runtimes are 3.8 and 3.10
14:07:17 unit test jobs are automatically updated by the templates
14:07:23 #link https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/864464
14:07:30 for functional tests, I've proposed a patch
14:07:39 #link https://review.opendev.org/c/openstack/cinder/+/865429
14:07:49 it's just changing py39 to py310
14:08:04 next, OpenInfra Board + OpenStack Syncup Call
14:08:10 #link https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031242.html
14:08:19 if you go to the syncup call session in the above tc summary
14:08:27 there are some good points raised that i wanted to highlight
14:08:41 we want to make contribution easy for new contributors
14:09:03 we want to encourage platinum openstack members to contribute to openstack
14:09:20 you can read it for more details, but i liked the initiative
14:09:26 it will allow us to have more diversity
14:09:47 that's all the announcements I had for today
14:09:50 does anyone have anything else?
14:11:00 or any doubts/clarifications about the above announcements?
14:11:15 If I may comment on the Runtimes issue - I actually thought there was only one release sharing two Ubuntu versions ... Yoga, and then one had to upgrade to 22.04 prior to going to Zed.
14:11:33 (https://wiki.ubuntu.com/OpenStack/CloudArchive)
14:12:16 sorry, I meant to show the graphic here:
14:12:18 https://ubuntu.com/about/release-cycle#openstack-release-cycle
14:13:23 they might be supplying their distro with Zed for deployment, but our current gate jobs are running with ubuntu focal
14:13:34 the above effort is for migrating those jobs to ubuntu 22.04
14:14:03 we don't test Zed with 22.04 but will be testing 2023.1 with 22.04
14:14:09 in upstream jobs ^
14:14:25 alright - thanks for the clarification
14:14:55 yep, it is more of an upstream CI thing
14:15:29 but thanks for pointing to that
14:15:58 looks like we can move to topics then
14:16:09 #topic Have backups happen independently from volume status field to allow e.g. live migrations to happen during potentially long running backups (crohmann)
14:16:14 crohmann, that's you
14:17:14 Yes. I'd like to bring up this topic once again: moving cinder-backup out of the volume state machine so it runs independently, e.g. to enable live migrations and other things to happen independently.
14:19:53 crohmann, do you have a fully detailed plan? or a WIP patch to show?
14:20:19 Did you see the details I placed below the topic on the Etherpad?
14:20:29 i think we have a spec and also discussed this during the PTG
14:20:34 oh sorry, my bad, i hadn't opened the etherpad
14:20:39 need to refresh my memory on what we concluded
14:20:42 * senrique facepalms
14:20:47 #link https://etherpad.opendev.org/p/cinder-antelope-meetings
14:21:00 thanks!
14:21:55 #action: ask Christian about other cases where this feature would be useful, because it seems like a large feature just for 1 use case.
14:22:04 this is one of the action items crohmann ^
14:22:47 #link https://etherpad.opendev.org/p/antelope-ptg-cinder#L302
14:22:48 in short: we tried with a spec to introduce a new task_status (https://review.opendev.org/c/openstack/cinder-specs/+/818551), but then concluded that this is way too heavy when maintaining backwards compatibility, and likely only backups would benefit from it for the foreseeable future.
14:22:54 please see: https://review.opendev.org/c/openstack/cinder-specs/+/818551/comments/6ade3ca0_d95e489d
14:23:34 I will follow up on the discussion there
14:23:36 My question is: does it make sense to "just" externalize the backup status from the volume status, as this is the actual issue / use case?
14:24:26 Thanks. We'd gladly start on a new spec, but that only makes sense if you agree that this is the conclusion of the discussion on the previous one.
14:24:34 it does make sense, since we create another temp volume/snapshot from the original volume to back it up
14:24:59 so we are not really doing anything to the main volume other than changing its state to backing-up
14:25:20 geguileo also had some ideas to use the attachment API for internal attachments
14:25:21 yes, my argument exactly. The backup status does NOT matter to the volume status (attaching, in-use, ...)
14:25:37 but I can't remember exactly how that would benefit this effort
14:26:07 And the recent bug / discussion I referenced in the Etherpad, about a race condition in restoring the actual volume state after a backup has happened, only makes this more valid if you ask me.
14:26:18 That field is simply over-(ab)used.
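[A minimal sketch of the separation crohmann is arguing for, assuming a hypothetical dedicated backup_status field on the volume; all names below are illustrative, not actual Cinder code:]

    # Illustrative only: hypothetical field and helper names, not Cinder's API.
    from dataclasses import dataclass

    @dataclass
    class Volume:
        status: str                  # e.g. 'available', 'in-use', 'migrating'
        backup_status: str = 'idle'  # hypothetical dedicated field

    def can_live_migrate(volume: Volume) -> bool:
        # With backup state tracked separately, a long-running backup no
        # longer blocks operations that only care about the volume itself.
        return volume.status in ('available', 'in-use')

    def start_backup(volume: Volume) -> None:
        # Today Cinder flips volume.status to 'backing-up', which locks the
        # whole state machine for the duration of the backup; the proposal
        # would track that in a separate field instead.
        volume.backup_status = 'backing-up'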
14:27:18 yeah, other operations already affect the volume state
14:27:47 let's discuss this during the midcycle next week when the whole team will be around
14:27:56 and it's video, so it's easier to have discussions
14:28:11 ++
14:28:16 crohmann, can you add a topic here? https://etherpad.opendev.org/p/cinder-antelope-midcycles
14:28:24 on it
14:28:45 thanks
14:29:24 this is another benefit of midcycles: following up on ptg discussions!
14:30:30 ok, guess we can move to the next topic then? crohmann
14:30:37 certainly. Thanks.
14:31:16 great
14:31:19 #topic Encrypted backups
14:31:22 crohmann, that's again you
14:32:06 (sorry) - I just wanted to check with you how this spec could move forward.
14:33:08 I was in the middle of reviewing it when I got hit by other tasks
14:33:15 I will complete the review this week
14:33:17 After the last operator hour at the PTG, Gorka wrote this up. We would love to have encrypted off-site backups (using e.g. the S3 drivers).
14:33:38 we have spec freeze on 16th december, but we will try to get that in earlier
14:33:52 Awesome whoami-rajat__! See my comment about using fernet keys to allow key rollovers ...
14:34:03 crohmann: this looks like another good topic for the PTG
14:34:11 ack
14:34:19 yeah, good to follow up on this as well
14:34:22 rosmaita++
14:34:39 So that one goes to the Midcycle list as well?
14:35:12 yeah, the main thing is for you to explain how you see the key alignment with keystone
14:35:22 i mean, how that would work exactly
14:35:32 not at all.
14:35:42 There is no relation to Keystone.
14:36:12 I just proposed to do it "like" keystone, via Fernet keys
14:36:44 sure, but you can explain why that's better than what gorka proposed
14:37:23 we can review the spec in the meantime, so we don't have to wait for the midcycle
14:39:20 I would not store keys inside config files but as dedicated files: https://docs.openstack.org/keystone/zed/admin/fernet-token-faq.html#where-do-i-put-my-key-repository
14:40:46 And then allow for a switch of keys / rollover: https://docs.openstack.org/keystone/zed/admin/fernet-token-faq.html#what-are-the-different-types-of-keys
14:41:56 In short: allow the operator to introduce a new key for all new data, but allow existing backups to still be restored / decrypted.
14:42:38 And since most operators might already have code to deal with keystone fernet keys, I thought it would be a nice touch to just reuse the mechanisms and terminology there.
14:43:10 i think it's a good idea, it just needs to be thought through on our side
14:44:40 we can follow up with the discussion on the spec and at the midcycle
14:44:51 cool. Thanks once again.
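[A minimal sketch of the keystone-style rollover crohmann describes, using MultiFernet from the Python cryptography package; how Cinder would actually load and store the key repository is an assumption here, not something the spec has settled:]

    # Sketch only: MultiFernet encrypts with the first key and tries every
    # key on decrypt, which is the rollover behaviour described above.
    from cryptography.fernet import Fernet, MultiFernet

    new_key = Fernet.generate_key()  # key used for all new backups
    old_key = Fernet.generate_key()  # stand-in for a previous key loaded
                                     # from the operator's key repository

    keys = MultiFernet([Fernet(new_key), Fernet(old_key)])

    token = keys.encrypt(b"backup data")  # always encrypted with new_key
    plain = keys.decrypt(token)           # succeeds for data encrypted
                                          # with either key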
14:45:15 thank you for following up on the topics crohmann
14:45:21 moving on
14:45:27 #topic OSC size vs name discussion
14:45:56 so we had a discussion in last week's meeting regarding making size positional and name optional in openstackclient
14:46:09 Stephen disagrees with that and has some concerns
14:46:19 #link https://review.opendev.org/c/openstack/python-openstackclient/+/865377
14:46:31 1) it will break existing scripts -- which every major change does
14:46:39 2) it is inconsistent with other OSC commands
14:46:55 during the meeting, he also sent out a mail to the ML
14:47:08 #link https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031284.html
14:47:32 just wanted to bring it to everyone's attention; we can follow up on this on the patch or the ML or both
14:48:24 that's all from me, moving on to the next topic
14:48:26 #topic Update on review and acceptance of HPE Cinder Driver
14:48:30 abdi, that's you
14:48:33 Yes.
14:48:56 Just wanted to get a quick update on if/when we get this approved and merged.
14:49:40 Curious if rosmaita had a chance to review.
14:49:45 nope
14:50:03 we have the driver merge deadline on 20th January
14:50:17 it was extended from the original 6th Jan to give people time to review
14:50:24 since that's year-end holiday time
14:50:39 do we have CI running and passing on the driver? abdi
14:50:58 Ok. I just want to avoid something coming up last minute and missing Antelope as we missed Zed.
14:51:28 Yes, CI is running and passing. 2 iSCSI errors on CI are consistent and root-caused to a race condition in nova/os-brick.
14:51:50 That's why it is important to get your review to agree/disagree with the root cause
14:51:57 abdi: i will take a look at the CI and update my CI check comment
14:52:06 Thank you.
14:52:21 are the errors specific to the HPE CI job, or do they show up for other CIs as well?
14:53:14 in any case, i will take a look at the patch
14:53:15 Not sure if anyone has reported similar issues. But the bug I filed about the race condition is linked in the CI comment. Nova/os-brick folks reviewed and agreed.
14:53:42 ack, will check it
14:53:53 it could be that my environment exposes the issues. Thank you for the review.
14:54:06 Just trying to get ahead of this in case I need to take action, so we don't miss the merge.
14:54:27 if CI is working at this time, don't worry about missing the deadline :)
14:54:36 ack.
14:54:50 anything else on this?
14:54:59 no. that's all.
14:55:12 thanks
14:55:14 next topic
14:55:16 #topic Requesting review for backport to Zed
14:55:22 tobias-urdin, that's you
14:55:35 #link https://review.opendev.org/c/openstack/cinder/+/864701
14:55:39 yes o/
14:55:58 I would like some review on that backport; I would like to have it backported even further back if it's accepted
14:56:08 hopefully that will work :)
14:56:24 will take a look at the backport
14:57:03 thanks
14:57:35 np
14:57:47 that's all the topics we had for today
14:57:53 let's move to open discussion
14:57:56 #topic open discussion
14:59:54 looks like we've nothing else to discuss, so we can end here
14:59:56 If I may bring up something else ... quota inconsistencies. We see quite a lot of those, especially for backups. We run cinder-backup on 3 nodes. Is there anything we could look at?
15:00:19 crohmann, geguileo was working on a quota effort
15:00:54 crohmann, https://review.opendev.org/c/openstack/cinder-specs/+/819693
15:01:01 Neutron had quite a few inconsistencies until Xena, but with the NoLock driver this seems to be gone.
15:01:20 uh, did not see that one ... "Cinder quotas have been a constant pain for operators and cloud users."
"Cinder quotas have been a constant pain for operators and cloud users." 15:01:26 That's me - thanks. 15:01:37 yep, it's been an issue for very long 15:01:46 thanks for the pointer. 15:01:53 hopefully geguileo will complete the work and we've consistent quotas 15:01:57 anyway, we're out of time 15:02:02 thanks everyone for joining 15:02:05 and happy holidays! 15:02:11 Thank you. Good day. happy holidays. 15:02:12 #endmeeting