14:00:06 #startmeeting cinder
14:00:06 Meeting started Wed Jul 6 14:00:06 2022 UTC and is due to finish in 60 minutes. The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:06 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:06 The meeting name has been set to 'cinder'
14:00:13 hi! o/
14:00:17 #topic roll call
14:00:22 hi
14:00:33 hi
14:00:40 o/
14:00:46 o/
14:01:04 o/
14:01:04 o/
14:01:10 o/
14:01:13 #link https://etherpad.openstack.org/p/cinder-zed-meetings
14:01:42 hi
14:01:54 hi
14:02:04 o/
14:02:09 o/
14:03:14 good turnout today
14:03:24 hi
14:04:05 we don't have many things on the agenda today, but anyway, let's get started
14:04:11 #topic announcements
14:04:35 first, spec freeze exception week
14:04:52 so we had the spec freeze in the R-15 week
14:04:58 hi
14:05:17 then we provided the R-14 week to request a spec freeze exception
14:05:27 and this is the week where the spec freeze exception deadline is
14:05:38 there was only one request, for the quota system spec from geguileo
14:05:48 #link https://lists.openstack.org/pipermail/openstack-discuss/2022-July/029374.html
14:06:00 so please review that spec on priority
14:06:19 sorry about the size of the spec
14:06:39 ack
14:06:39 no worries, verbose = good (at least for me)
14:07:10 please review and get it merged before 8th July!
14:07:17 what is supposed to be the merge freeze for Zed? next week also?
14:07:47 HappyStacker, you mean the feature freeze? that's R-5, milestone 3
14:07:56 HappyStacker, here's the whole schedule https://releases.openstack.org/zed/schedule.html
14:08:40 for any patchsets which are pending, I think yes, it's feature freeze
14:08:59 so M-3 is for features and then we've time till RC-1 for bug fixes
14:10:25 moving on then, next announcement: October 2022 PTG Dates & Registration
14:10:56 I've brought this topic up before but I'm doing it again just to get a realistic idea of the number of people visiting
14:11:32 from Red Hat's perspective, it is not final but might be 3-5 people visiting
14:11:38 It clashes with 2 other conferences so I have to see which takes priority
14:11:49 ok
14:11:58 I won't be able to make it. no travel budget
14:12:04 I haven't started investigating yet but I think it is unlikely that I will be in person.
14:12:27 AnsibleFest always seems to clash - Red Hat should take note of this
14:12:27 oh ok, good to know
14:13:07 Wow, that is a clash for sure.
14:13:13 simondodsley: we'll pass on the feedback
14:13:24 simondodsley: will that be a problem for you?
14:13:29 thanks for bringing it up simondodsley
14:13:43 Yes - I am a senior Ansible contributor as well.
14:13:55 ouch :-(
14:13:56 Plus I have an Observability conference in NYC the same week
14:14:11 All my things at the same time
14:14:25 simondodsley: I find you very observable. So, all good there. ;-)
14:14:38 he he
14:14:47 :-)
14:15:23 simondodsley: at least the Observability one is not RH's fault, right?
14:15:38 True - that is DataDog
14:15:57 so, is anybody actually going?
14:16:02 so if not a lot of people will be there in person then we can also continue with our current method of a virtual PTG
14:16:10 geguileo, good question lol
14:16:22 i think sfernand said they might send someone for cinder
14:16:24 well, i don't want to go if no one is going to be there!
14:16:39 hi folks I will be there :)
14:17:02 probably not - I think either Ansible (Chicago) or DataDog will take precedence currently
14:17:11 I did go to Berlin though...
14:17:22 oh, good, sfernand and I can find a conference room and zoom with the rest of the team
14:17:40 :-)
14:17:47 I'm too much of a newbie to attend ;-)
14:18:09 Nahim will be with me and we will be splitting between Cinder and Manila
14:18:10 never too new to attend...
14:18:29 hemna: will you be attending?
14:18:55 I won't be able to make it. no travel budget
14:19:06 :-(
14:19:14 yah :(
14:19:54 this isn't looking too good if only NetApp and RH plan to attend for Cinder...
14:20:07 geguileo: True.
14:20:17 yep
14:20:17 SAP has locked down travel currently. Otherwise I'd be there.
14:20:18 no, and i really don't want to go to Columbus to have virtual meetings
14:20:19 agreed
14:20:22 Damn COVID. It's been a great excuse for companies to slash travel budgets
14:20:46 rosmaita: Oh come on, that sounds like a blast. ;-)
14:20:49 rosmaita: yeah, that's why whoami-rajat is asking
14:21:01 hahaha rosmaita I'm already paying for the US visa so please do not give up on attending
14:21:05 :P
14:21:15 simondodsley: true, because "virtual meetings are the same thing" :-(
14:21:20 i've been to columbus ... would rather go to Hoboken
14:21:29 lol - so true
14:21:32 * jungleboyj laughs
14:21:35 sfernand: to be honest it's not looking good
14:21:38 worse than Atlanta though?
14:21:40 nothing wrong with Hoboken
14:21:51 hemna: It is Ohio.
14:22:14 Lots of good bars and restaurants in Hoboken now
14:22:20 never been, so I have no idea. lots of good racing there afaik
14:22:24 simondodsley: ++
14:23:26 geguileo: are you planning to attend?
14:24:05 thanks everyone for the feedback, it is helpful for planning the cinder PTG further
14:24:26 which, by the looks of it, won't have many people there for cinder
14:24:41 so how does the count stand? sounds like sfernand?
14:24:44 sfernand: not by the looks of it, because I'm not sure RH will consider it necessary if only NetApp is going to be there
14:25:15 I've been told by management that there is budget and that RH would be sending Cinder people
14:25:22 rosmaita, looks like it
14:25:36 but it's going to be hard to justify the expense if it's going to be mostly redhatters
14:26:16 geguileo: makes sense
14:26:32 which sucks :-(
14:26:35 as geguileo said, it is true for most/all of the red hatters
14:26:47 in case I'm the only Cinder folk there I could bring some content to Cinder newcomers or something
14:27:00 I don't know
14:27:16 sfernand, that would be great, and we will conduct the virtual PTG so you can join from there as well
14:27:31 but i will leave that planning for the future and just proceed with the feedback received
14:27:43 sfernand: you will get a much better internet connection in Columbus!
14:27:43 yes that would actually be very sad
14:27:51 to join a virtual PTG in person
14:27:56 haha
14:28:05 lol true
14:28:22 new concept
14:28:49 yeah, the worst of both worlds
14:28:56 well, this is a bummer, i was really looking forward to seeing people again for realz
14:29:09 me too :-(
14:29:58 so i think we're done with the discussion, or does anyone have anything else to add to it?
14:30:27 well, on the plus side, i don't have to go to columbus
14:30:39 ROFL
14:30:50 :D
14:30:54 rosmaita: ++
14:31:43 feedback for the next PTG - make it in Hawaii
14:31:53 Bwah ha ha!
14:32:00 hawaii++
14:32:19 a newbie in Hawaii, that's a good starting point haha
14:32:23 that actually sounds great, but unlikely
14:32:44 bahamas then
14:32:59 or Hoboken
14:33:27 ok, moving on to the next announcement or else we will never finish the agenda (sorry guys): new project added, Dell EMC PowerStore Charm
14:33:39 #link https://review.opendev.org/c/openstack/governance/+/846890
14:33:58 I'm not sure how much that affects us as a project but thought it would be good to mention
14:34:14 the upstream repo is https://github.com/nobuto-m/cinder-dell-emc-powerstore.git (which is external)
14:34:35 good luck to them - i did the Pure charm
14:35:06 cool, so we've a charm repo for each driver?
14:35:20 only if the vendor decides to do one
14:35:51 ok, I'm not very familiar with the project
14:36:04 next announcement, SRBAC goals for Zed
14:36:10 #link https://review.opendev.org/c/openstack/governance/+/847418
14:36:14 it's basically for Canonical OpenStack deployments using Juju
14:36:39 ack
14:37:21 so we've discussed the RBAC work for Zed; it's not much, but we have to do a bunch of cleanup in the policy files
14:37:42 but the link points to an OpenStack-wide goal, so feel free to go through it
14:37:57 i won't recommend reviewing it until it's merged, due to its constantly changing nature
14:38:36 but yeah, there's a patch with the revised goals based on the ops feedback at Berlin
14:39:12 that's all the announcements from my side, does anyone have anything else?
14:40:35 okay, moving on to topics then
14:40:38 #topic RBD backend QoS
14:40:45 #link https://lists.openstack.org/pipermail/openstack-discuss/2022-July/029403.html
14:41:08 so there's a mail thread asking for reviews on a patch implementing backend QoS for the RBD driver
14:41:20 In my opinion that didn't need a spec
14:41:26 and the author has also proposed a spec, which we traditionally don't require
14:41:36 but anyway i thought it would be good for documentation
14:41:38 it's just a cinder base feature implementation
14:41:52 it does need a BP though
14:41:57 geguileo, yep, same thoughts
14:42:10 simondodsley: true, I forgot to say it in my reply
14:42:12 simondodsley, yes, they've registered one, in the introduction part of the spec
14:42:36 oh, then I didn't forget, I saw it! I saw it!
14:42:52 geguileo, i said in my reply that they require one, but later noticed they already have one :D
14:43:02 OK - I really don't care if they want to raise a spec but I get that it isn't required
14:43:24 it does need your review comments addressed though, whoami-rajat
14:44:01 so my point in bringing this up is that it doesn't have to follow the spec deadlines and can be merged even in the exception phase, at least those are my thoughts
14:44:13 agreed
14:44:18 simondodsley, yes, i left some comments, if they can address those then we should be good
14:45:08 okay, so i don't think we've much conflict there
14:45:11 as soon as they send a blueprint in time for the driver feature proposal deadline they should be fine, right? I would say to ask them to abandon the spec..
14:45:24 let's see if they can revise it in time
14:46:04 sfernand, so I did abandon it since it wasn't worked on for > 4 months, but recently they brought it up and i think the spec is a good point of documentation for the backend QoS
14:46:17 but maybe it's just me
14:47:18 whoami-rajat: I prefer they update the RBD driver docs, though ;-)
14:47:30 and I noticed that our QoS docs are a bit of a mess
14:47:30 that is a given
14:47:50 we have it split in one without anything explaining how everything works together
14:47:52 the main code patch will have to have that. It is in merge conflict as well at the moment
14:47:59 split in 2
14:49:32 geguileo, yeah, i was referring to something apart from the driver docs; since they've already written the spec, it at least provides the reasoning for why we are introducing it
14:50:01 anyway, let's see if there's an update by Friday
14:50:18 but the spec shouldn't stop anyone from reviewing the actual code patch
14:51:28 so i don't see any other topics, let's move to open discussion
14:51:33 #topic open discussion
14:53:23 I have a question for driver maintainers
14:53:47 Do any of you use 2 synchronized decorators on a single method?
14:54:06 for example in a migration, to lock on the source and on the destination?
14:54:21 not in Pure that I'm aware of
14:54:43 remotefs does
14:54:59 eharney: thanks, I'll see if I can find it
14:55:12 (create_cloned_volume)
14:55:57 ok, doesn't look problematic
14:56:16 since it locks on the volume id and not on the host or some other storage thingy
14:57:30 I'm asking because I've noticed a driver that can have a deadlock
14:58:00 and I'm working on some improvements to coordination.synchronized to make it easier for drivers
14:58:24 and wondering if this could be happening for other drivers
15:00:04 we're out of time, thanks everyone for attending!
15:00:05 could you share the line with the possible deadlock you've found?
15:00:09 #endmeeting
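[Editor's note] For context on the deadlock discussed in open discussion: stacking two single-name lock decorators acquires the locks in a fixed textual order, so a migration A->B (lock A, then B) racing a migration B->A (lock B, then A) can deadlock. Below is a minimal, process-local sketch using plain `threading.Lock` — Cinder's real `coordination.synchronized` uses tooz-backed locks, and `synchronized_multi`, `_get_lock`, and the host names here are invented for illustration — showing the usual fix of acquiring all locks in one global (sorted) order:

```python
import threading

# Process-local stand-in for a named-lock registry (real Cinder coordination
# uses tooz locks shared across processes).
_locks = {}
_locks_registry_guard = threading.Lock()

def _get_lock(name):
    """Return the single Lock object associated with a lock name."""
    with _locks_registry_guard:
        return _locks.setdefault(name, threading.Lock())

def synchronized_multi(*name_templates):
    """Acquire several named locks in sorted order to avoid ABBA deadlocks.

    Two stacked @synchronized decorators lock in decorator order, so opposite
    -direction calls can each hold one lock and wait on the other forever.
    Sorting the resolved names imposes one global acquisition order.
    """
    def decorator(fn):
        def wrapper(**kwargs):  # keyword-only, so templates can reference args
            names = sorted(t.format(**kwargs) for t in name_templates)
            locks = [_get_lock(n) for n in names]
            for lock in locks:
                lock.acquire()
            try:
                return fn(**kwargs)
            finally:
                for lock in reversed(locks):
                    lock.release()
        return wrapper
    return decorator

migrations_done = []

@synchronized_multi('{src}', '{dst}')
def migrate(src, dst):
    migrations_done.append((src, dst))

# Two opposite-direction migrations racing: with sorted acquisition both
# complete; with naive stacked per-name decorators they could deadlock.
t1 = threading.Thread(target=migrate, kwargs={'src': 'host-A', 'dst': 'host-B'})
t2 = threading.Thread(target=migrate, kwargs={'src': 'host-B', 'dst': 'host-A'})
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(migrations_done))  # → [('host-A', 'host-B'), ('host-B', 'host-A')]
```

This is only a sketch of the lock-ordering technique; the actual improvements geguileo mentions to `coordination.synchronized` are not shown in the log.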