14:00:04 #startmeeting cinder
14:00:04 Meeting started Wed Dec 13 14:00:04 2023 UTC and is due to finish in 60 minutes. The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:04 The meeting name has been set to 'cinder'
14:00:07 #topic roll call
14:00:28 hi
14:00:31 hello!
14:00:53 o/
14:01:18 Hi
14:01:41 o/
14:01:41 #link https://etherpad.opendev.org/p/cinder-caracal-meetings
14:01:43 hi
14:02:08 o/
14:02:14 o/
14:02:39 o/
14:03:39 we've a lot on the agenda
14:03:41 let's get started
14:03:44 #topic announcements
14:03:44 hi
14:04:17 first, OpenInfra Live: PTG Recap
14:04:27 #link https://www.youtube.com/watch?v=thidlQGX29M
14:04:41 last week we discussed the PTG recap on OpenInfra Live
14:04:51 you can watch the recording for an overall OpenStack update
14:05:01 there was also an update from the StarlingX community
14:05:39 next, Festival of XS reviews
14:05:45 #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/RPXZR5HKVJUYAHD3I6ESN6AMWFQEQMFJ/
14:05:57 we will be conducting the festival of XS reviews this Friday
14:06:03 Details:
14:06:04 Date: 15th December
14:06:04 Time: 1400-1600 UTC
14:06:04 Etherpad: https://etherpad.opendev.org/p/cinder-festival-of-reviews
14:06:04 Meeting link: https://meet.google.com/jqg-eigw-rku
14:08:04 next, Upcoming deadlines
14:08:07 Cinder spec freeze - 22nd December
14:08:15 we discussed specs 2 weeks ago
14:08:36 out of the 4, there is only one that needs attention
14:08:55 Encrypted backups
14:09:01 #link https://review.opendev.org/c/openstack/cinder-specs/+/862601
14:09:19 i think Jon updated the patch and will continue to work on it
14:09:29 and i don't want to turn the announcement into a topic
14:09:48 any other announcements?
14:10:57 okay, let's go to topics
14:11:04 #topic cinderlib deprecation
14:11:06 rosmaita, that's you
14:11:15 hello
14:11:26 announcement was sent to the ML earlier this week:
14:11:34 #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/L6HJW55SEUL4NYQVOESJ22KFDW5SGZAE/
14:11:49 the governance patch to actually do the deprecation has been posted:
14:11:57 #link https://review.opendev.org/c/openstack/governance/+/903259
14:12:28 it would be good if whoami-rajat as PTL, jbernard as release manager, and maybe gegulio as primary contributor could +1 it
14:12:54 (just so it's clear that the entire project is behind this)
14:12:55 * jungleboyj sneaks in late
14:13:08 that's all
14:13:40 is Gorka going to reply to the email about Ember?
14:14:01 i would hope so
14:14:21 rosmaita, done, thanks for driving the effort
14:14:24 is there an email about ember, or do you mean reply to my email mentioning that the ember community is OK?
14:14:55 i think Gorka already replied regarding Ember and oVirt that they are happy with older cinderlib releases
14:15:05 just noticed what whoami-rajat said
14:15:09 i didn't see it - i'll have a look
14:15:30 #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/message/TCLC7KN4XFULWZ5IYIXVF23TD6GDULLR/
14:15:35 yep - so he did - no worries
14:16:13 ok, that's all
14:16:18 great
14:16:24 we can move to the next topic then
14:16:28 #topic train, ussuri EOL
14:16:32 and that's you again rosmaita !
14:16:53 yeah, the key thing is that we can't delete the branches if there are open patches
14:16:58 this is what we have:
14:17:07 #link https://review.opendev.org/dashboard/?title=Train,+Ussuri+Open+Patches&foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext%29%0Astatus%3Aopen&Ussuri=branch%3A%5Estable%2Fussuri&Train=branch%3A%5Estable%2Ftrain
14:17:16 (hopefully that link works)
14:17:30 it does
14:17:57 i think back in June when we first discussed EOLing everything, we informally agreed "no more merges"?
14:18:27 is there any benefit to merging those patches? since we are not going to release those branches
14:18:43 i am inclined to say "no benefit"
14:18:58 i can abandon them all with a note
14:19:33 sounds good to me, also most of them have negative votes with unaddressed comments
14:20:03 +1 to abandon them (unless the team thinks otherwise)
14:20:21 +1 to abandon from me
14:20:48 jungleboyj: ?
14:20:54 +1 for me too
14:20:54 (i think train may have been jungleboyj's last release)
14:21:13 I am ok with that decision.
14:21:20 thanks!
14:21:31 ok, that closes out this topic
14:21:51 Welcome. :-)
14:22:48 thanks rosmaita
14:22:59 moving on to the next topic
14:23:02 #topic Glance Image Encryption
14:23:19 this was originally proposed for the midcycle but we ran out of time, so it was added here
14:23:40 I don't think the author is here
14:23:53 there were some questions added to the topic
14:23:56 1. openstackclient now depends on and pulls in os-brick, because it needs the same encryption/decryption functions that Cinder uses to offer encrypted image upload/download to/from Glance
14:24:27 I need to check but OSC using os-brick seems wrong to me
14:24:56 i guess that's kind of heavyweight, but the original idea behind the encryption being in os-brick is that all the services that need encryption already use os-brick
14:25:43 no usage of brick in OSC
14:25:52 well, you win some and you lose some
14:26:08 OSC pretty much imports everything else, so why not os-brick, too?
14:26:20 :D
14:26:34 i think you are right, the main idea is to have common code in os-brick to allow all services to consume it
14:26:41 i don't think there's a technical reason why the encryption *must* be in os-brick, though
14:27:15 i'm pretty sure it was just so that there wouldn't have to be a new project with a new library
14:27:56 i guess the key thing here would be to get feedback from the OSC team
14:29:23 okay, i can check with stephen regarding that
14:29:46 but i don't see any use case for OSC (a client) needing to interact with brick for any operation
14:29:47 that would be good, if he is like really negative, then we may have to reconsider
14:29:56 it should ideally be the service owning the resource
14:30:35 unless someone plans to add the os-brick-cinderclient-ext functionality to OSC?
14:31:37 maybe, i think the idea of brickclient was testing and debugging but i might be completely wrong on this
14:31:50 but i will check with them
14:32:38 next question, do we need a new microversion in Cinder and cinderclient for the new API params?
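As background for the microversion question above, here is a minimal, hypothetical sketch of how an API-side microversion gate for newly added request parameters typically works. The version number `(3, 71)`, the parameter name `encryption_key_id`, and the function name are placeholders for illustration, not actual Cinder code.

```python
# Hypothetical sketch (not actual Cinder code) of gating new request
# parameters behind an API microversion: only clients requesting the
# introducing microversion or later get the new fields honored.
NEW_PARAMS_MICROVERSION = (3, 71)  # placeholder version number

def parse_upload_params(body: dict, client_mv: tuple) -> dict:
    """Extract upload-to-image parameters, honoring the client microversion."""
    params = {"image_name": body["image_name"]}
    # The hypothetical encryption-related parameter is only honored at or
    # above the microversion that introduced it.
    if client_mv >= NEW_PARAMS_MICROVERSION and "encryption_key_id" in body:
        params["encryption_key_id"] = body["encryption_key_id"]
    return params

# A client at an older microversion does not get the new field honored...
old = parse_upload_params({"image_name": "img", "encryption_key_id": "k"}, (3, 70))
# ...while a new-enough client has it passed through.
new = parse_upload_params({"image_name": "img", "encryption_key_id": "k"}, (3, 71))
```

In the real API a too-old client would typically get the unknown parameter rejected rather than silently dropped; the sketch only shows the version comparison itself.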
14:33:28 #link https://review.opendev.org/c/openstack/python-cinderclient/+/902652
14:34:33 i think there will be a microversion change on the cinder side
14:34:57 since we will be sending new parameters to the existing volume upload-to-image API
14:35:16 so cinderclient will follow with a similar microversion bump
14:35:45 that sounds correct
14:36:21 thanks for confirming
14:37:06 there is a notice
14:37:06 notice: Josephine Seifert (Luzi) will take over / continue the topic in January
14:37:12 and that's pretty much it for this topic
14:37:24 added our discussion points to the topic so the author can check it later
14:37:34 moving on to the next topic
14:37:37 #topic Email to ML regarding increasing size of volume image metadata values had only one response suggesting you should accept my patch
14:37:40 drencrom, that's you
14:37:46 #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/B7UET4JKHQU5SHH44KLSKHFBMFN3ZZYV/
14:39:58 maybe he's not around
14:40:18 strange as he just added this to the etherpad
14:40:19 but i guess he was pointing to Erno's response that nova and cinder should accept the large metadata fields
14:40:29 sorry
14:40:31 and not restrict it
14:41:07 yes, it had only one response commenting that we should accept the long metadata patch
14:41:30 and that nova should not decide the metadata length
14:41:36 so other than Erno, there seem to be no objections
14:42:09 I commented on the fact that different metadata sizes per service could lead to strange behaviour for users
14:42:25 i think you are correct about that
14:42:41 seems to me that Nova needs to sort out their issues with this rather than dictating to other projects
14:43:29 I would agree with how the glance team wants it to be, since the image is their resource
14:44:20 But it seems that only nova is having performance issues with long values (at least I understand that is the reason for the reduced size on nova)
14:44:55 i think it's because on list-server-details, they include the image metadata
14:44:56 so maybe cinder and glance adopt this change and nova has to fix their side - maybe someone raises a bug against them once cinder and glance adopt.
14:45:08 but in cinder, i think it is a different call?
14:45:48 no, i am wrong about that
14:47:05 if we do volume list and the volume is created from an image, it will load the metadata values every time, right?
14:47:43 sorry, volume show
14:47:52 what is the actual impact on list by removing the 255 limit?
14:48:25 s/list/show/
14:48:44 well, you may be stuffing a bunch of 65535-char fields into a bunch of volume responses
14:48:52 i think the discussion from the nova side was theoretical and there are no performance numbers for it
14:49:25 remember that the 255 limit is just for API changes right now, when the volume is created from an image we get the full 65535-byte values
14:49:41 drencrom: good point
14:49:51 so why not go with 65535 and see if it causes problems down the line. If it does we revert back to 255
14:50:43 that may be the way to go, we can say that we are making the image-vol-metadata update command consistent with what happens when you create a volume from an image
14:51:02 and if something bad happens, we will blame simondodsley
14:51:05 :D
14:51:19 not to get political, but is this a nova dictatorship or are we a democracy?
14:51:31 If you want to go that way I just need a WF+1 on my patch
14:53:04 i think the nova team had good points for blocking the change but given the cinder team's perspective, it doesn't seem to be that bad for us
14:53:15 eharney: you had already given a WF+1...
14:53:22 and as simondodsley said, we can always revert it to 255 and backport
14:53:28 if it causes an issue
14:54:08 i can W+1, but just want to confirm there are no objections from the cinder team regarding this
14:54:48 The WF+1 was removed later after Sean Mooney's comments
14:54:51 https://review.opendev.org/c/openstack/cinder/+/868485
14:55:09 well, it only has 1 +2 right now
14:56:01 whoami-rajat: i think add a comment that this really doesn't change anything for cinder because you can have the extra-long values already, and nova apparently deals with those and no one has complained
14:56:06 i will review it after the meeting and also add our discussion logs to it
14:56:15 we're just making our own API consistent
14:56:30 rosmaita, sure, i can do that
14:56:36 Thanks whoami-rajat
14:56:54 and like simon says, if it causes problems, we can reconsider
14:57:09 great, anything else on this topic?
14:57:18 Not for me
14:57:39 thanks drencrom
14:57:45 we don't have enough time to discuss any other topic
14:57:52 so i will move them to next week
14:58:04 #topic open discussion
14:58:07 2 minutes for open discussion
14:59:21 what about the failing CI?
14:59:48 Andrei-1, rosmaita is onto the swap space fix and I'm looking into the concurrency thing
14:59:59 before that we need to figure out the s3 backup job failing
15:00:02 was there any progress? I had 3 sequential failures on my patch from various jobs all related to connectivity
15:00:05 so that these changes can be merged
15:00:29 Andrei-1, that usually happens when the OOM killer kills some services like mysql
15:00:45 we will update once we have something
15:00:50 okay, we're out of time
15:00:54 thanks everyone for attending
15:00:56 #endmeeting
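A closing note on the metadata-size discussion above: as drencrom pointed out, the 255-character limit applies only at the API validation layer, while values written during create-from-image already reach the full 65535 bytes the backing column holds. A minimal sketch of that distinction, with hypothetical names (this is not Cinder's actual validation code):

```python
# Hypothetical sketch of the distinction discussed above: the backing
# DB column already stores up to 65535 bytes, while the old API-layer
# check capped user-supplied values at 255 characters. Relaxing the API
# check to the column limit makes metadata updates consistent with what
# create-from-image already produces.
DB_COLUMN_MAX = 65535   # what the backing column can hold
OLD_API_LIMIT = 255     # the previous API-layer cap

def validate_metadata_value(value: str, limit: int = DB_COLUMN_MAX) -> str:
    """Reject a metadata value longer than the given limit."""
    if len(value) > limit:
        raise ValueError(f"metadata value exceeds {limit} characters")
    return value

# A 1000-char value fails the old 255-char check but passes the relaxed one.
long_value = "x" * 1000
accepted = validate_metadata_value(long_value)        # passes at 65535
try:
    validate_metadata_value(long_value, OLD_API_LIMIT)
    rejected = False
except ValueError:
    rejected = True                                   # old limit rejects it
```

This is also why the team felt the change was low risk for Cinder: the storage layer already accepts the long values, and the API check can be reverted to 255 (and backported) if problems appear.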