Wednesday, 2022-11-23

11:15 *** dviroel|afk is now known as dviroel
13:59 *** dasm|off is now known as dasm
14:00 <whoami-rajat__> #startmeeting cinder
14:00 <opendevmeet> Meeting started Wed Nov 23 14:00:06 2022 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat__. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00 <opendevmeet> The meeting name has been set to 'cinder'
14:00 <whoami-rajat__> #topic roll call
14:00 <tosky> hi
14:01 <rosmaita> o/
14:01 <crohmann> hey all.
14:01 <whoami-rajat__> is there a holiday today in the US/Europe region?
14:02 <whoami-rajat__> #link https://etherpad.opendev.org/p/cinder-antelope-meetings
14:02 <senrique> hi
14:02 <rosmaita> tomorrow is a USA holiday, so some people may be taking an extra day
14:03 <whoami-rajat__> ah yes, i know about tomorrow, but people might be extending it, right
14:03 <whoami-rajat__> so we have few people around but a lot on the agenda
14:03 <whoami-rajat__> so let's get started
14:03 <whoami-rajat__> #topic announcements
14:04 <whoami-rajat__> first, Midcycle-1 next week (Please add topics!) (30th Nov)
14:04 <whoami-rajat__> we have midcycle-1 next week, so I request everyone to add topics and attend
14:04 <whoami-rajat__> it will be from 1400-1600 UTC (overlapping our current meeting time by 1 hour)
14:04 <whoami-rajat__> #link https://etherpad.opendev.org/p/cinder-antelope-midcycles
14:04 <simondodsley> o/
14:05 <whoami-rajat__> I will send a reminder mail with details this week
14:05 <whoami-rajat__> next, Runtimes for 2023.1
14:05 <whoami-rajat__> #link https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031229.html
14:05 <whoami-rajat__> so there are 2 points in the mail regarding runtimes
14:05 <whoami-rajat__> 1) we need to propose at least one tempest job which runs on the old Ubuntu, i.e. Focal
14:05 <whoami-rajat__> I've proposed it
14:06 <whoami-rajat__> #link https://review.opendev.org/c/openstack/cinder/+/865427
14:06 <whoami-rajat__> it is the tempest integrated storage job
14:06 <whoami-rajat__> the purpose is to verify a smooth upgrade from past releases
14:06 <whoami-rajat__> like Zed on Ubuntu Focal to 2023.1 on Ubuntu Focal
14:07 <whoami-rajat__> 2) the python runtimes are 3.8 and 3.10
14:07 <whoami-rajat__> unit test jobs are automatically updated by the templates
14:07 <whoami-rajat__> #link https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/864464
14:07 <whoami-rajat__> for functional tests, I've proposed a patch
14:07 <whoami-rajat__> #link https://review.opendev.org/c/openstack/cinder/+/865429
14:07 <whoami-rajat__> it just changes py39 to py310
14:08 <whoami-rajat__> next, OpenInfra Board + OpenStack Syncup Call
14:08 <whoami-rajat__> #link https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031242.html
14:08 <whoami-rajat__> if you go to the syncup call section in the above TC summary
14:08 <whoami-rajat__> there are some good points raised there that i wanted to highlight
14:08 <whoami-rajat__> we want to make contribution easy for new contributors
14:09 <whoami-rajat__> we want to encourage Platinum OpenStack members to contribute to openstack
14:09 <whoami-rajat__> you can read it for more details, but i liked the initiative
14:09 <whoami-rajat__> it will allow us to have more diversity
14:09 <whoami-rajat__> that's all the announcements I had for today
14:09 <whoami-rajat__> does anyone have anything else?
14:11 <whoami-rajat__> or any doubts/clarifications about the above announcements?
14:11 <crohmann> If I may comment on the Runtimes issue - I actually thought there was only one release sharing two Ubuntu versions ... Yoga, and then one had to upgrade to 22.04 prior to going to Zed.
14:11 <crohmann> (https://wiki.ubuntu.com/OpenStack/CloudArchive)
14:12 <crohmann> sorry, I meant to show the graphic here:
14:12 <crohmann> https://ubuntu.com/about/release-cycle#openstack-release-cycle
14:13 <whoami-rajat__> they might be supplying their distro with Zed for deployment, but our current gate jobs are running with Ubuntu Focal
14:13 <whoami-rajat__> the above effort is for migrating those jobs to Ubuntu 22.04
14:14 <whoami-rajat__> we don't test Zed with 22.04 but will be testing 2023.1 with 22.04
14:14 <whoami-rajat__> in upstream jobs ^
14:14 <crohmann> alright - thanks for the clarification.
14:14 <whoami-rajat__> yep, it is more of an upstream CI thing
14:15 <whoami-rajat__> but thanks for pointing that out
14:16 <whoami-rajat__> #topic Have backups happen independently from volume status field to allow e.g. live migrations to happen during potentially long running backups (crohmann)
14:16 <whoami-rajat__> crohmann, that's you
14:17 <crohmann> Yes. I'd like to bring this topic up once again: making cinder-backup move out of the volume state machine and run independently, e.g. to enable live migrations and other things to happen independently.
14:19 <senrique> crohmann, do you have a fully detailed plan? or a WIP patch to show?
14:20 <crohmann> Did you see the details I placed below the topic on the Etherpad?
14:20 <whoami-rajat__> i think we have a spec and also discussed this during the PTG
14:20 <senrique> oh sorry, my bad, i haven't opened the etherpad
14:20 <whoami-rajat__> need to refresh my memory on what we concluded
14:20 * senrique facepalms
14:20 <whoami-rajat__> #link https://etherpad.opendev.org/p/cinder-antelope-meetings
14:21 <senrique> thanks!
14:21 <whoami-rajat__> #action: ask Christian about other cases where this feature would be useful, because it seems like a large feature just for 1 use case.
14:22 <whoami-rajat__> this is one of the action items crohmann ^
14:22 *** senrique is now known as enriquetaso
14:22 <whoami-rajat__> #link https://etherpad.opendev.org/p/antelope-ptg-cinder#L302
14:22 <crohmann> in short: we tried with a spec to introduce a new task_status (https://review.opendev.org/c/openstack/cinder-specs/+/818551) but then concluded that this is way too heavy when maintaining backwards compatibility, and likely only backups would benefit from it for the foreseeable future.
14:22 <crohmann> please see: https://review.opendev.org/c/openstack/cinder-specs/+/818551/comments/6ade3ca0_d95e489d
14:23 <whoami-rajat__> I will follow up on the discussion there
14:23 <crohmann> My question is: does it make sense to "just" externalize the backup status from the volume status, as this is the actual issue / use-case?
14:24 <crohmann> Thanks. We'd gladly start on a new spec, but that only makes sense if you agree that this is the conclusion of the discussion of the previous one.
14:24 <whoami-rajat__> it does make sense, since we create another temp volume/snapshot from the original volume to back it up
14:24 <whoami-rajat__> so we are not exactly doing anything on the main volume other than changing its state to backing-up
14:25 <whoami-rajat__> geguileo also had some ideas to use the attachment API for internal attachments
14:25 <crohmann> yes, my argument exactly. The backup status does NOT matter to the volume status (attaching, in-use, ...)
14:25 <whoami-rajat__> but I can't remember exactly how that would benefit this effort
14:26 <crohmann> And the recent bug / discussion I referenced in the Etherpad, about a race condition when restoring the actual volume state after a backup has happened, only makes this more valid if you ask me
14:26 <crohmann> That field is simply over-(ab)used.
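To make the proposal concrete: a minimal Python sketch of the decoupling crohmann describes. The class and field names here (backup_status, can_live_migrate) are hypothetical illustrations, not Cinder's actual schema or code; the point is that tracking backup progress in a dedicated field leaves the volume's own status free for operations like live migration.

    from dataclasses import dataclass

    @dataclass
    class Volume:
        status: str = "in-use"          # stays untouched during a backup
        backup_status: str = "idle"     # hypothetical dedicated field

    def start_backup(vol: Volume) -> None:
        # Today (simplified): vol.status would flip to "backing-up",
        # blocking any operation that requires status == "in-use".
        # With a dedicated field, only backup_status changes:
        vol.backup_status = "backing-up"

    def can_live_migrate(vol: Volume) -> bool:
        # Live migration no longer cares whether a backup is running.
        return vol.status in ("available", "in-use")

    vol = Volume()
    start_backup(vol)
    assert can_live_migrate(vol)  # True even while the backup runs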
14:27 <whoami-rajat__> yeah, other operations already affect the volume state
14:27 <whoami-rajat__> let's discuss this during the midcycle next week when the whole team will be around
14:27 <whoami-rajat__> and it's video, so it's easier to have discussions
14:28 <rosmaita> ++
14:28 <whoami-rajat__> crohmann, can you add a topic here? https://etherpad.opendev.org/p/cinder-antelope-midcycles
14:28 <crohmann> on it
14:28 <whoami-rajat__> thanks
14:29 <whoami-rajat__> this is another benefit of midcycles: following up on PTG discussions!
14:30 <whoami-rajat__> ok, guess we can move to the next topic then? crohmann
14:30 <crohmann> certainly. Thanks.
14:31 <whoami-rajat__> great
14:31 <whoami-rajat__> #topic Encrypted backups
14:31 <whoami-rajat__> crohmann, that's again you
14:32 <crohmann> (sorry) - I just wanted to check with you how this spec could move forward.
14:33 <whoami-rajat__> I was in the middle of reviewing it when I got hit by other tasks
14:33 <whoami-rajat__> I will complete the review this week
14:33 <crohmann> After the last operator hour at the PTG, Gorka wrote this up. We would love to have encrypted off-site backups (using e.g. S3 drivers).
14:33 <whoami-rajat__> we have spec freeze on 16th December, but we will try to get that in earlier
14:33 <crohmann> Awesome whoami-rajat__! See my comment about using fernet keys to allow key rollovers ...
14:34 <rosmaita> crohmann: this looks like another good topic for the PTG
14:34 <whoami-rajat__> ack
14:34 <whoami-rajat__> yeah, good to follow up on this as well
14:34 <whoami-rajat__> rosmaita++
14:34 <crohmann> So that one goes on the Midcycle list as well?
14:35 <rosmaita> yeah, the main thing is for you to explain how you see the key alignment with keystone
14:35 <rosmaita> i mean, how that would work exactly
14:35 <crohmann> not at all.
14:35 <crohmann> There is no relation to Keystone.
14:36 <crohmann> I just proposed to do it "like" keystone, via Fernet keys
14:36 <rosmaita> sure, but you can explain why that's better than what gorka proposed
14:37 <whoami-rajat__> we can review the spec in the meantime, so we don't have to wait for the midcycle
14:39 <crohmann> I would not store keys inside config files but as dedicated files: https://docs.openstack.org/keystone/zed/admin/fernet-token-faq.html#where-do-i-put-my-key-repository
14:40 <crohmann> And then allow for a switch of keys / rollover: https://docs.openstack.org/keystone/zed/admin/fernet-token-faq.html#what-are-the-different-types-of-keys
14:41 <crohmann> In short: allow the operator to introduce a new key for all new data, but allow for existing backups to still be restored / decrypted.
14:42 <crohmann> And since most operators likely have code to deal with keystone fernet keys, I thought it would be a nice touch to just reuse the mechanisms and terminology there.
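As a concrete illustration of the rollover behaviour crohmann describes, the cryptography library's MultiFernet already implements it: the first listed key encrypts all new data, while older keys are still tried for decryption. A minimal sketch of that primitive only; this is not necessarily the mechanism the spec would ship.

    from cryptography.fernet import Fernet, MultiFernet

    old_key = Fernet.generate_key()   # key used for existing backups
    new_key = Fernet.generate_key()   # key introduced by the operator

    # Before rotation: everything was encrypted with old_key.
    legacy_token = Fernet(old_key).encrypt(b"existing backup data")

    # After rotation: new data uses new_key (listed first), but old
    # tokens still decrypt because old_key remains in the key list.
    keys = MultiFernet([Fernet(new_key), Fernet(old_key)])
    fresh_token = keys.encrypt(b"new backup data")

    assert keys.decrypt(fresh_token) == b"new backup data"
    assert keys.decrypt(legacy_token) == b"existing backup data"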
14:43 <rosmaita> i think it's a good idea, it just needs to be thought through on our side
14:44 <whoami-rajat__> we can follow up with the discussion on the spec and at the midcycle
14:44 <crohmann> cool. Thanks once again.
14:45 <whoami-rajat__> thank you for following up on the topics crohmann
14:45 <whoami-rajat__> moving on
14:45 <whoami-rajat__> #topic OSC size vs name discussion
14:45 <whoami-rajat__> so we had a discussion in last week's meeting about making size positional and name optional in openstackclient
14:46 <whoami-rajat__> Stephen disagrees with that and has some concerns
14:46 <whoami-rajat__> #link https://review.opendev.org/c/openstack/python-openstackclient/+/865377
14:46 <whoami-rajat__> 1) it will break existing scripts -- which every major change does
14:46 <whoami-rajat__> 2) it is inconsistent with other OSC commands
14:46 <whoami-rajat__> during the meeting, he also sent out a mail to the ML
14:47 <whoami-rajat__> #link https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031284.html
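For readers outside the review, a hedged sketch of the interface change at stake, using hypothetical argparse stand-ins rather than openstackclient's actual parser code; it illustrates why concern 1) above holds.

    import argparse

    # current-style interface: name is positional, size is an option
    current = argparse.ArgumentParser(prog="volume create")
    current.add_argument("name")
    current.add_argument("--size", type=int)

    # proposed-style interface: size is positional, name is an option
    proposed = argparse.ArgumentParser(prog="volume create")
    proposed.add_argument("size", type=int)
    proposed.add_argument("--name")

    print(current.parse_args(["--size", "10", "myvol"]))   # size=10, name='myvol'
    print(proposed.parse_args(["10", "--name", "myvol"]))  # size=10, name='myvol'

    # An existing script written for the current scheme breaks under the
    # proposed one -- this call would exit with a parse error:
    # proposed.parse_args(["--size", "10", "myvol"])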
14:47 <whoami-rajat__> just wanted to bring it to everyone's attention; we can follow up on this on the patch or the ML or both
14:48 <whoami-rajat__> that's all from me, moving on to the next topic
14:48 <whoami-rajat__> #topic Update on review and acceptance of HPE Cinder Driver
14:48 <whoami-rajat__> abdi, that's you
14:48 <abdi> Yes.
14:48 <abdi> Just wanted to get a quick update on if/when we get this approved and merged.
14:49 <abdi> Curious if rosmaita had a chance to review.
14:49 <rosmaita> nope
14:50 <whoami-rajat__> we have the driver merge deadline on 20th January
14:50 <whoami-rajat__> it was extended from the original 6th Jan to give people time to review
14:50 <whoami-rajat__> since that's year-end holiday time
14:50 <whoami-rajat__> do we have CI running and passing on the driver? abdi
14:50 <abdi> Ok. I just want to avoid something coming up last minute and missing Antelope, as we missed Zed.
14:51 <abdi> Yes, CI is running and passing. 2 iSCSI errors on CI are consistent and root-caused to a race condition in nova/os-brick.
14:51 <abdi> That's why it is important to get your review, to agree/disagree with the root cause
14:51 <rosmaita> abdi: i will take a look at the CI and update my CI check comment
14:52 <abdi> Thank you.
14:52 <whoami-rajat__> are the errors specific to the HPE CI job or do they show up for other CIs as well?
14:53 <whoami-rajat__> in any case, i will take a look at the patch
14:53 <abdi> Not sure if anyone has reported similar issues. But the bug I filed about the race condition is linked in the CI comment. Nova/os-brick folks reviewed and agreed
14:53 <whoami-rajat__> ack, will check it
14:53 <abdi> it could be that my environment exposes the issues. Thank you for the review.
14:54 <abdi> Just trying to get ahead of this in case I need to take action, so as not to miss the merge.
14:54 <whoami-rajat__> if CI is working at this time, don't worry about missing the deadline :)
14:54 <abdi> ack.
14:54 <whoami-rajat__> anything else on this?
14:54 <abdi> no. that's all.
14:55 <whoami-rajat__> thanks
14:55 <whoami-rajat__> next topic
14:55 <whoami-rajat__> #topic Requesting review for backport to Zed
14:55 <whoami-rajat__> tobias-urdin, that's you
14:55 <whoami-rajat__> #link https://review.opendev.org/c/openstack/cinder/+/864701
14:55 <tobias-urdin> yes o/
14:55 <tobias-urdin> I would like some review on that backport; I would like to have it backported even further if it's accepted
14:56 <tobias-urdin> hopefully that will work :)
14:56 <whoami-rajat__> will take a look at the backport
14:57 <tobias-urdin> thanks
14:57 <whoami-rajat__> np
14:57 <whoami-rajat__> that's all the topics we had for today
14:57 <whoami-rajat__> let's move to open discussion
14:57 <whoami-rajat__> #topic open discussion
14:59 <whoami-rajat__> looks like we've nothing else to discuss, so we can end here
14:59 <crohmann> If I may bring up something else ... quota inconsistencies. We see quite a lot of those, especially for backups. We run cinder-backup on 3 nodes. Is there anything we could look at?
15:00 <whoami-rajat__> crohmann, geguileo was working on a quota effort
15:00 <whoami-rajat__> crohmann, https://review.opendev.org/c/openstack/cinder-specs/+/819693
15:01 <crohmann> Neutron had quite a few inconsistencies until Xena, but with the NoLock driver this seems to be gone.
15:01 <crohmann> uh, I had not seen that one ... "Cinder quotas have been a constant pain for operators and cloud users."
15:01 <crohmann> That's me - thanks.
15:01 <whoami-rajat__> yep, it's been an issue for a very long time
15:01 <crohmann> thanks for the pointer.
15:01 <whoami-rajat__> hopefully geguileo will complete the work and we'll have consistent quotas
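For background on how such drift can arise when several cinder-backup nodes update shared counters, a purely schematic sketch of a lost-update race on a usage counter — an assumption about the failure mode for illustration, not Cinder's actual quota code:

    # Two workers reserve quota concurrently without an atomic update.
    usage = {"backups": 5}

    # Both workers read the counter before either writes back:
    read_a = usage["backups"]        # worker A reads 5
    read_b = usage["backups"]        # worker B reads 5
    usage["backups"] = read_a + 1    # worker A writes 6
    usage["backups"] = read_b + 1    # worker B also writes 6 (lost update)

    assert usage["backups"] == 6     # real count is 7: usage has drifted

Approaches like atomic compare-and-swap updates or computing usage dynamically (the direction discussed in the linked spec) avoid this class of race.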
15:01 <whoami-rajat__> anyway, we're out of time
15:02 <whoami-rajat__> thanks everyone for joining
15:02 <whoami-rajat__> and happy holidays!
15:02 <abdi> Thank you. Good day. happy holidays.
15:02 <whoami-rajat__> #endmeeting
15:02 <opendevmeet> Meeting ended Wed Nov 23 15:02:12 2022 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:02 <opendevmeet> Minutes:        https://meetings.opendev.org/meetings/cinder/2022/cinder.2022-11-23-14.00.html
15:02 <opendevmeet> Minutes (text): https://meetings.opendev.org/meetings/cinder/2022/cinder.2022-11-23-14.00.txt
15:02 <opendevmeet> Log:            https://meetings.opendev.org/meetings/cinder/2022/cinder.2022-11-23-14.00.log.html
15:07 *** dviroel is now known as dviroel|lunch
16:15 *** dviroel_ is now known as dviroel
21:25 *** dviroel is now known as dviroel|afk
23:49 *** dasm is now known as dasm|off
