14:00:45 #startmeeting cinder
14:00:46 Meeting started Wed Nov 18 14:00:45 2020 UTC and is due to finish in 60 minutes. The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:47 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:49 The meeting name has been set to 'cinder'
14:00:52 #topic roll call
14:00:57 \o
14:01:07 o/
14:01:08 hi
14:01:14 hi
14:01:14 hi
14:01:24 Hi
14:01:30 hi
14:01:31 o/
14:01:39 Hi
14:02:01 o/
14:02:06 hi! o/
14:02:32 o/
14:03:13 looks like a good turnout
14:03:14 #link https://etherpad.openstack.org/p/cinder-victoria-meetings
14:03:18 ok, let's get started
14:03:21 Topic: a couple of KIOXIA dev team questions
14:03:32 #topic announcements
14:03:44 hi
14:03:45 ChuckPiercey: you are on the agenda for later
14:03:56 hi sofia!
14:04:02 hope your exams went well
14:04:03 o/ :)
14:04:11 thanks!
14:04:16 reminder: next week's meeting (last meeting of the month) is videoconference
14:04:31 it will be recorded in bluejeans as usual, unless there are any objections?
14:04:49 i will send out connection info to the ML (though it has not changed from last time)
14:05:08 #topic announcements - upcoming events
14:05:17 this week is R-21
14:05:25 Wallaby Milestone-1 is at R-19 (yes, in 2 weeks)
14:05:34 i want to set some priorities for the next 2 weeks
14:05:47 priority for M-1: adding mypy type-checking
14:05:54 here are the patches
14:06:08 #link https://review.opendev.org/#/q/topic:mypy+(status:open+OR+status:merged)+project:openstack/cinder
14:06:20 this is an important code quality initiative
14:06:30 and it has been languishing despite the best efforts of eharney
14:06:47 so, i propose that each cinder core will take over one of these patches
14:07:03 except eharney, of course, but he will be available to answer questions that come up
14:07:18 i'll bump those in merge conflict today
14:07:40 i was going to propose that whoever takes over the patch should fix the merge conflict
14:07:59 that also works
14:08:00 so this will be slightly weird, a reviewer will also be a committer
14:08:14 but i think that is the best way to handle minor changes
14:08:42 and if a reviewer decides to make a major change, then they can't +2 the patch, but they can still be "in charge" of getting it merged
14:08:50 so we will need to use common sense here
14:09:08 so, to save people the embarrassment of volunteering, here's my proposal:
14:09:20 mypy: driver.py https://review.opendev.org/736856 hemna
14:09:20 mypy: annotate volume manager https://review.opendev.org/733621 whoami-rajat
14:09:20 mypy: annotate volume_utils / utils / exc https://review.opendev.org/736855 rosmaita
14:09:20 mypy: annotate remotefs https://review.opendev.org/733623 lseki
14:09:20 mypy: annotate lvm https://review.opendev.org/733624 geguileo
14:09:20 mypy: annotate api.py https://review.opendev.org/733622 rosmaita
14:09:51 ok, e0ne don't feel left out, you are heading up the v2 removal effort
14:10:02 and you will likely be bugged for reviews
14:10:18 rosmaita: I'm sorry, I was busy with our internal release last week
14:10:24 is everyone clear on what i am asking for here?
14:10:29 e0ne: no need to apologize
14:10:32 so there was no community activity from my side
14:11:15 ummmm ... i could use an 'ack' from hemna whoami-rajat__ lseki geguileo ^^
14:11:20 rosmaita: yes, to do a massive downvote of eharney's patches, right?
14:11:32 ack
14:11:47 ack 👍
14:11:55 geguileo: no need to frown, just turn that downvote upside-down
14:12:04 there you go
14:12:06 eharney: rofl
14:12:11 :-)
14:12:30 jungleboyj and smcginnis, i suspect you will be bugged for reviews
14:12:39 hello guys, are we going to have an open floor topic? I would like to ask your guidance on a patch that has been open for quite a while for Cinder
14:12:40 rosmaita: ++ Can do.
14:13:00 rafaelweingartne: hopefully later in the meeting
14:13:29 community goals are looking OK, details are on the agenda
14:13:40 rosmaita: thanks :)
14:14:00 e0ne is heading up v2 removal, which we would like to have completed before M-2
14:14:09 he can maybe talk about that next week?
14:14:24 rosmaita: sounds good to me
14:14:40 need to figure out what is going on with grenade without v2
14:14:40 great!
14:14:40 #topic announcements - midcycle
14:14:57 yeah, grenade will be ... interesting
14:15:21 ok the midcycle is supposed to be before specs freeze, which is coming up fast also
14:15:26 here is my proposal:
14:15:38 hold it on Wednesday 9 December, 2 hours either 1300-1500 or 1400-1600
14:15:54 that way it overlaps with a time everyone has reserved on wednesday already
14:16:07 so i expect most people will at least be able to attend for an hour
14:16:11 +1
14:16:35 I would vote for 14:00 to 16:00.
14:16:48 anyway, please check your calendars and we can verify/vote on a time next week
14:16:57 ++
14:17:09 ++
14:17:20 ok and just to have it in the minutes:
14:17:29 "All Cinder Specs for features to be implemented in Wallaby must be approved by Friday 18 December 2020 (23:59 UTC)."
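The mypy patches discussed above add type annotations that the mypy static checker can verify. A minimal sketch of the style of change, with invented names (not taken from the actual cinder reviews):

```python
# Hypothetical example of mypy-style annotations; the function and field
# names here are invented for illustration and are not from the cinder patches.
from typing import Optional


def build_volume_ref(volume_id: str,
                     size_gb: int = 1,
                     image_ref: Optional[str] = None) -> dict:
    """Return a volume-like dict; mypy checks that callers pass these types."""
    return {
        'id': volume_id,
        'size': size_gb,
        'source_image': image_ref,
    }


# A call like build_volume_ref(123) would be flagged by mypy:
#   error: Argument 1 has incompatible type "int"; expected "str"
print(build_volume_ref('vol-1', size_gb=10))
```

Running `mypy` over annotated modules catches type mismatches at review time rather than at runtime, which is the code-quality payoff being discussed.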
14:17:33 and finally
14:17:41 Wallaby Milestone-2 is at R-12 (week of 18 January)
14:18:02 ok, but the big effort is getting mypy merged before M-1
14:18:12 and i forgot to mention the other part of my proposal about that
14:18:36 anyone who asks for reviews in #openstack-cinder and has not reviewed one of the mypy patches will be shunned
14:18:45 #link https://review.opendev.org/#/q/topic:mypy+(status:open+OR+status:merged)+project:openstack/cinder
14:19:01 the mypy patches for reference ^^
14:19:10 ok, that's all for announcements
14:19:17 #topic bug review
14:19:22 michael-mcaleer: that's you
14:19:31 thanks rosmaita
14:19:31 6 bugs this week, all in cinder (3 x cinder, 1 x enhancement, 1 x doc, 1 x driver)
14:19:34 cinder first
14:19:46 Cinder backup error using Erasure Code Pool #link https://bugs.launchpad.net/cinder/+bug/1903713
14:19:47 Launchpad bug 1903713 in Cinder "Cinder backup error using Erasure Code Pool" [Undecided,New]
14:20:11 Not immediately obvious what is going on here, asked for more info from submitter
14:20:41 importance also not set, anyone have a suggestion on that?
14:21:32 e0ne: can you look at this one? ceph + backup, kind of in your domain there
14:22:00 rosmaita: sure, will take a look at it asap
14:22:06 ty!
14:22:08 ok next bug
14:22:14 cinder backup-import command fails, there is no way to repeat it #link https://bugs.launchpad.net/cinder/+bug/1903697
14:22:15 Launchpad bug 1903697 in Cinder "cinder backup-import command fails, there is no way to repeat it" [Undecided,Incomplete]
14:22:33 again not a lot of info provided here, but it's possible to discern what is going on in advance of logs coming back from the submitter
14:23:01 assume low importance because the user can just change the name of the backup
14:23:22 sounds correct
14:23:38 ok, last cinder bug
14:23:43 I can check it with the latest master
14:23:54 Cinder backup - Volume metadata inconsistency #link https://bugs.launchpad.net/cinder/+bug/1903839
14:23:55 Launchpad bug 1903839 in Cinder "Cinder backup - Volume metadata inconsistency" [Medium,Confirmed]
14:24:15 opened by the powermax team, our QE team spotted this in April and we are only getting round to investigating now
14:24:26 this driver might be making invalid assumptions about how volume metadata works
14:24:44 metadata is assumed to be unique to each volume here
14:24:47 it's a Backup Bug Day
14:25:00 i don't think it's a bug that restoring a volume from backup restores its metadata
14:25:01 :)
14:25:13 also, this is end-user metadata, and info used by the driver probably shouldn't be in that field
14:25:47 ok, i'm going to stay on the topic of bugs and we can discuss the ins and outs of this one in a bug review, to save some time
14:25:57 yeah
14:26:10 next up is an enhancement
14:26:19 Cinder - volume show for GVG #link https://bugs.launchpad.net/cinder/+bug/1903845
14:26:20 Launchpad bug 1903845 in Cinder "Cinder - volume show for GVG" [Wishlist,Triaged]
14:26:31 Consider updating volume properties when it is added to a Generic Volume Group.
14:26:42 makes sense to consider since we do it w/ CGs
14:26:47 there is a consistency group ID in there but no GVGs
14:26:49 yeah
14:27:05 set to enhancement/wish
14:27:15 sounds good
14:27:34 last cinder-related bug is docs... Rocky Series Release Notes in Cinder Release Notes #link https://bugs.launchpad.net/cinder/+bug/1904219
14:27:35 Launchpad bug 1904219 in Cinder "Rocky Series Release Notes in Cinder Release Notes" [Low,Confirmed]
14:27:47 simple fix here, incorrect designation of a code snippet in the release notes
14:28:11 and lastly the driver bug opened by ibm... os_type changed to 'linux' in python3 #link https://bugs.launchpad.net/cinder/+bug/1903648
14:28:13 Launchpad bug 1903648 in Cinder "os_type changed to 'linux' in python3" [Low,Triaged]
14:28:47 again a small fix here, i've set the tags for ibm so hopefully they see it if they didn't open it themselves
14:28:54 We can't update old release notes, otherwise they show up as new. There's also an outstanding bug in reno that may be causing that one.
14:29:51 smcginnis: thanks, the moral of the story is to make sure command line text is wrapped in `` in the releasenotes
14:30:03 (for future reviewers)
14:30:12 ok, thank you michael-mcaleer
14:30:18 will I close that bug out and advise the submitter on updating of release notes?
14:30:34 no prob, that's all from me this week, thanks rosmaita
14:30:42 michael-mcaleer: yes, that sounds good
14:30:50 #topic stable releases
14:30:58 whoami-rajat__: that's you
14:31:21 Hi, just to provide an update on the patches to be merged before the stable release, which we decided would be on 26th Nov
14:31:31 #link https://etherpad.opendev.org/p/stable-releases-review-tracker
14:31:38 oops, forgot to mention that in the updates
14:31:52 a lot of patches were merged in the past week, thanks to all the reviewers
14:32:17 victoria looks good to go, ussuri has 1 patch remaining, train has 4 patches, one of which is hemna's patch with a -1 from smcginnis
14:33:33 hopefully all patches will be merged before the next cinder meeting to meet the release deadline on time
14:33:38 that's all from my side
14:33:47 ok, stable cores: ^^
14:33:51 thanks, whoami-rajat__
14:34:09 #topic marking config options as 'advanced'
14:34:21 just want to bring to people's attention that this is possible
14:34:40 useful for config options that should only have their values changed for very specific reasons
14:34:52 anyway, take a look at this patch and read my comment
14:35:01 #link https://review.opendev.org/#/c/744746/
14:35:17 #topic v2 support removal from cinderclient and last call for comments
14:35:29 i sent an announcement to the ML
14:35:38 #link http://lists.openstack.org/pipermail/openstack-discuss/2020-November/018697.html
14:36:01 and, there has been no response
14:36:16 so we have the green light to remove v2 support from the python-cinderclient this release cycle
14:36:22 hooray!
14:36:39 Onward!
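The 'advanced' marking raised in the topic above is a flag available on oslo.config option definitions. A minimal sketch, with an invented option name and help text (not from the patch under review):

```python
# Sketch of oslo.config's 'advanced' flag, as referred to in the meeting;
# the option name and help text below are invented for illustration.
from oslo_config import cfg

_opts = [
    cfg.IntOpt('backend_retry_interval',   # hypothetical option name
               default=10,
               advanced=True,              # mark the option as advanced
               help='Interval in seconds between backend retries. Change '
                    'this only for very specific tuning reasons.'),
]

CONF = cfg.ConfigOpts()
CONF.register_opts(_opts)
print(CONF.backend_retry_interval)
```

In the sample config produced by oslo-config-generator, options flagged this way are annotated as intended for advanced users, which matches the "only change for very specific reasons" intent above.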
14:36:43 #topic restricting supported Ceph versions
14:36:49 I know there are patches submitted to puppet-cinder and tripleo to prepare for v2 removal
14:37:11 ok, i also sent an email out about this to the ML asking for comments on the review
14:37:20 #link https://review.opendev.org/#/c/762592/
14:37:45 but hemna raised an interesting question
14:38:00 btw, thanks to geguileo for also responding to a comment about this on the ML
14:38:25 so, i was looking at this like an announcement for how people should behave
14:38:41 that is, do not use unsupported ceph versions and expect it to work
14:38:58 but, hemna pointed out that if we don't enforce it in code, they could do it anyway
14:39:02 i think the main goal is in fact to just say "don't do that"
14:39:19 eharney: +1
14:39:26 that was my thought, too, but i wanted to check
14:39:30 which is the current state of affairs today -- and i don't recall us ever having issues that came up because someone was using an older version of ceph that we don't test
14:39:37 however
14:39:44 Then if someone comes along and says "hey, this doesn't work", we can point to the docs.
14:39:52 but I'm ok adding a patch to check it
14:39:57 we are adding some advisory messages around the v2 clone API that say "since you are using this old version, your performance is worse"
14:40:00 RTFM. ;-)
14:40:09 lol
14:40:45 eharney: once that patch merges we can have another patch that removes those and does a check on the setup, right?
14:41:12 ok, i think advisory is good for now, and if something needs code validation, then we can still add it at the appropriate time
14:41:17 geguileo: not sure, that one is less straightforward since it also requires backend configuration
14:41:54 eharney: can't we just check the cluster version?
14:42:00 no
14:42:03 eharney: isn't that enough?
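A code-level gate of the kind hemna was suggesting could be as simple as parsing the version string the cluster reports and refusing versions below a minimum. A minimal, hypothetical sketch (the function names and the minimum version are invented; the real policy is on the review):

```python
# Hypothetical sketch of a Ceph cluster-version gate, as discussed in the
# meeting. Names and the minimum version here are illustrative only.
import re

MIN_SUPPORTED = (14, 2)  # invented minimum; the real policy is in the review


def parse_ceph_version(version_str):
    """Extract (major, minor) from a 'ceph version ...' string."""
    match = re.search(r'ceph version (\d+)\.(\d+)', version_str)
    if not match:
        raise ValueError('unrecognized ceph version string: %s' % version_str)
    return (int(match.group(1)), int(match.group(2)))


def check_supported(version_str):
    """Return True if the cluster meets the minimum supported version."""
    return parse_ceph_version(version_str) >= MIN_SUPPORTED


print(check_supported('ceph version 14.2.11 (abc) nautilus (stable)'))
print(check_supported('ceph version 12.2.0 (def) luminous (stable)'))
```

As eharney notes next, a version check alone is not sufficient for the clone-performance case, since that also depends on backend configuration.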
14:42:20 because they still have lower perf if they don't manually set some options
14:42:51 and even if they set the options, it won't be applied retroactively to old images, etc
14:43:24 eharney: ok, we cannot remove it, but we could barf if the ceph cluster version is not the right one
14:43:36 (which is what hemna was asking for)
14:44:06 i don't buy that failing because they are using an old ceph version really accomplishes much for us
14:44:11 ok, let's continue discussion of this on the patch
14:44:24 #topic replication support for nvme connector - code structure
14:44:32 zoharm: that's you
14:44:38 hi all
14:44:59 so we are working on upgrading the NVMe connector to also do replication
14:45:24 the main question at the high level is, how do we structure it in regards to the existing NVMe connector?
14:46:13 for example, is it ok to tack the raid functionality onto the connect/disconnect methods, if the connection properties specify replication?
14:47:08 in a sense, this means expanding the connector to have two modes, and doing one depending on whether the nvme volumes to connect to are replicated
14:47:35 does this sound like the right way?
14:47:48 zoharm: sorry, I'm not quite following
14:47:49 are there any other suggestions or issues with this?
14:48:04 zoharm: do you have an example of the connection_info dict?
14:48:25 it's probably easier for me to understand what you are actually proposing like that
14:48:39 yes, so basically, if the connection_info dict has a field "volume_replicas" with connection info for each nvme device in it
14:48:43 then, put them into raid
14:49:05 for bare nvme it's:
14:49:09 connection_info:
14:49:13 nqn
14:49:18 portal
14:49:18 It might be easier to write down a spec and propose it there. Then, the discussion can happen within a context, and maybe with some examples of what you want to accomplish
14:49:37 zoharm: That idea does sound good to me
14:49:43 ok, just to finish this part
14:49:48 if connection info looks like this:
14:49:52 connection_info:
14:49:52 zoharm: and it would not conflict with adding multipathing to the nvme connector later on
14:49:56 volume_replicas:
14:50:00 nqn, etc...
14:50:10 yep geguileo
14:50:24 so multipathing will be handled under a "portals" field for each nvme device
14:50:25 is the question whether it's better to complicate the connect_volume function in the current connector, or subclass it with a new connector that does replication?
14:50:29 which can have multiple portals
14:50:32 yeah, volume_replicas would be a list of connection_info dicts that just don't have the volume_replicas field in them
14:51:17 rosmaita, in a way yes, and if subclassing is better, how to do it right?
14:52:05 I think subclassing is probably better for readability
14:52:17 the base nvme-of class can just do simple connections
14:52:39 true, though the biggest benefit for readability is breaking it up into re-usable functions (which are usable by both bare nvme and replicated)
14:52:46 and the raid version inherits from it and handles the replicas with specific code, but calls the base nvme-of class for individual connections
14:53:06 right, however, if a single volume backend supports both simple and replicated volumes, this complicates things
14:53:24 additionally, a new connector and libvirt volume driver need to be named and registered
14:53:41 zoharm: why a new libvirt volume driver?
14:53:56 zoharm: can't we just create a new os-brick connector?
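The subclassing structure discussed above might look like the following sketch. All class and field names here are illustrative only; as geguileo says, the real layout belongs in the spec:

```python
# Hypothetical sketch of the subclassing approach discussed above; class,
# method, and path names are invented -- the real design belongs in the spec.


class NVMeOFConnector:
    """Handles a single bare NVMe-oF connection."""

    def connect_volume(self, connection_info):
        # Bare-volume connection_info per the discussion:
        #   {'nqn': ..., 'portals': [...]}   ('portals' covers multipathing)
        return {'path': '/dev/nvme-fake-%s' % connection_info['nqn']}


class NVMeOFRAIDConnector(NVMeOFConnector):
    """Replicated volumes: connect each replica, then assemble the raid."""

    def connect_volume(self, connection_info):
        replicas = connection_info.get('volume_replicas')
        if not replicas:
            # Not replicated: fall back to the simple base behaviour, so a
            # single backend can serve both simple and replicated volumes.
            return super().connect_volume(connection_info)
        # Each replica is itself a bare connection_info dict (no
        # 'volume_replicas' field), as described in the meeting.
        devices = []
        for replica_info in replicas:
            devices.append(super().connect_volume(replica_info))
        return {'path': '/dev/md-fake', 'members': devices}
```

The fall-through in the subclass is one way to address the concern raised about a single backend supporting both simple and replicated volumes.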
14:54:18 yes, and then we will need to tell nova to use it somehow
14:54:37 zoharm: the best thing would be to write the way you think is correct in the spec, and say in the alternatives section why you don't like the other option
14:54:45 then we'll have a better basis for discussion
14:54:53 as far as i saw, this means adding it to libvirt/drivers
14:55:12 zoharm: you are correct
14:55:15 because the current nvme connector is referenced in the same way in its own libvirt driver
14:55:32 so to make this work we would need to just duplicate this whole wrapping, and just change one name
14:55:42 this is another tradeoff for this approach
14:55:49 also it requires a code submission to nova :)
14:56:33 zoharm: you don't need to replicate the wrapping, you can just reference the same class in nova/virt/libvirt/driver.py
14:56:41 but with the name of the new connector
14:56:58 in libvirt_volume_drivers
14:57:21 'nvmeof-raid=nova.virt.libvirt.volume.nvme.LibvirtNVMEVolumeDriver'
14:57:37 zoharm: you also had another question?
14:57:56 yes, the problem is in that LibvirtNVMEVolumeDriver
14:58:00 abishop: i think we will not get to your topic this week, but i will put you first for next week
14:58:11 it hardcodes which connector to use, it seems
14:58:12 story of my life :-/
14:58:28 rosmaita my other question is very quick
14:58:37 folks can read the etherpad and check out the review in the meantime
14:58:39 but anyway i will take this offline in favor of time
14:58:48 ok, thanks!
14:58:54 other question is simple
14:58:55 #topic open discussion
14:59:06 rosmaita: I just have a question regarding https://review.opendev.org/#/c/666886/. It is a patch that has been in the making for quite a while.
14:59:14 zoharm: on second thought, I think it's probably best to use the existing connector
14:59:32 Before solving the conflicts, we would like to understand what we are missing there, for instance, a formal specification
15:00:09 rafaelweingartne: i completely lost track of that one
15:00:19 let me look it over and leave some comments
15:00:25 check back in a few days?
15:00:29 ok
15:00:31 thanks
15:00:35 and we are out of time ...
15:00:40 thanks, everyone
15:00:41 no problem, take your time
15:00:42 #endmeeting