14:00:45 <rosmaita> #startmeeting cinder
14:00:46 <openstack> Meeting started Wed Nov 18 14:00:45 2020 UTC and is due to finish in 60 minutes.  The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:47 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:49 <openstack> The meeting name has been set to 'cinder'
14:00:52 <rosmaita> #topic roll call
14:00:57 <rafaelweingartne> \o
14:01:07 <abishop> o/
14:01:08 <eharney> hi
14:01:14 <michael-mcaleer> hi
14:01:14 <sfernand> hi
14:01:24 <whoami-rajat__> Hi
14:01:30 <e0ne> hi
14:01:31 <jungleboyj> o/
14:01:39 <ChuckPiercey> Hi
14:02:01 <zoharm> o/
14:02:06 <geguileo> hi! o/
14:02:32 <lseki> o/
14:03:13 <rosmaita> looks like a good turnout
14:03:14 <rosmaita> #link https://etherpad.openstack.org/p/cinder-victoria-meetings
14:03:18 <rosmaita> ok, let's get started
14:03:21 <ChuckPiercey> Topic: a couple of KIOXIA dev team questions
14:03:32 <rosmaita> #topic announcements
14:03:44 <enriquetaso> hi
14:03:45 <rosmaita> ChuckPiercey: you are on the agenda for later
14:03:56 <rosmaita> hi sofia!
14:04:02 <rosmaita> hope your exams went well
14:04:03 <enriquetaso> o/ :)
14:04:11 <enriquetaso> thanks!
14:04:16 <rosmaita> reminder: next week's meeting (last meeting of the month) is videoconference
14:04:31 <rosmaita> it will be recorded in bluejeans as usual, unless there are any objections?
14:04:49 <rosmaita> i will send out connection info to the ML (though it has not changed from last time)
14:05:08 <rosmaita> #topic announcements - upcoming events
14:05:17 <rosmaita> this week is R-21
14:05:25 <rosmaita> Wallaby Milestone-1 is at R-19 (yes, in 2 weeks)
14:05:34 <rosmaita> i want to set some priorities for the next 2 weeks
14:05:47 <rosmaita> priority for M-1: adding mypy type-checking
14:05:54 <rosmaita> here are the patches
14:06:08 <rosmaita> #link https://review.opendev.org/#/q/topic:mypy+(status:open+OR+status:merged)+project:openstack/cinder
14:06:20 <rosmaita> this is an important code quality initiative
14:06:30 <rosmaita> and it has been languishing despite the best efforts of eharney
14:06:47 <rosmaita> so, i propose that each cinder core will take over one of these patches
14:07:03 <rosmaita> except eharney, of course, but he will be available to answer questions that come up
14:07:18 <eharney> i'll bump those in merge conflict today
14:07:40 <rosmaita> i was going to propose that whoever takes over the patch should fix the merge conflict
14:07:59 <eharney> that also works
14:08:00 <rosmaita> so this will be slightly weird, a reviewer will also be a committer
14:08:14 <rosmaita> but i think that is the best way to handle minor changes
14:08:42 <rosmaita> and if a reviewer decides to make a major change, then they can't +2 the patch, but they can still be "in charge" of getting it merged
14:08:50 <rosmaita> so we will need to use common sense here
14:09:08 <rosmaita> so, to save people the embarrassment of volunteering, here's my proposal:
14:09:20 <rosmaita> mypy: driver.py   https://review.opendev.org/736856   hemna
14:09:20 <rosmaita> mypy: annotate volume manager   https://review.opendev.org/733621      whoami-rajat
14:09:20 <rosmaita> mypy: annotate volume_utils / utils / exc   https://review.opendev.org/736855   rosmaita
14:09:20 <rosmaita> mypy: annotate remotefs   https://review.opendev.org/733623   lseki
14:09:20 <rosmaita> mypy: annotate lvm   https://review.opendev.org/733624   geguileo
14:09:20 <rosmaita> mypy: annotate api.py    https://review.opendev.org/733622    rosmaita
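For context, a minimal sketch of the kind of type annotations these patches add; the function and types below are hypothetical, not taken from any of the listed patches:

    # Hypothetical illustration of mypy-style annotations, not actual
    # cinder code from the patches above.
    from typing import Optional

    def create_snapshot(volume_id: str,
                        name: Optional[str] = None,
                        force: bool = False) -> dict:
        """Create a snapshot and return its metadata as a dict."""
        return {'volume_id': volume_id, 'name': name or 'snap', 'force': force}

Running mypy over annotated modules then catches type mismatches at review time rather than at runtime.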
14:09:51 <rosmaita> ok, e0ne don't feel left out, you are heading up the v2 removal effort
14:10:02 <rosmaita> and you will likely be bugged for reviews
14:10:18 <e0ne> rosmaita: I'm sorry, I was busy with our internal release last week
14:10:24 <rosmaita> is everyone clear on what i am asking for here?
14:10:29 <rosmaita> e0ne: no need to apologize
14:10:32 <e0ne> so there was no community activity from my side
14:11:15 <rosmaita> ummmm ... i could use an 'ack' from hemna whoami-rajat__ lseki geguileo ^^
14:11:20 <geguileo> rosmaita: yes, to do a massive downvote of eharney's patches, right?
14:11:32 <whoami-rajat__> ack
14:11:47 <lseki> ack 👍
14:11:55 <eharney> geguileo: no need to frown, just turn that downvote upside-down
14:12:04 <rosmaita> there you go
14:12:06 <geguileo> eharney: rofl
14:12:11 <jungleboyj> :-)
14:12:30 <rosmaita> jungleboyj and smcginnis, i suspect you will be bugged for reviews
14:12:39 <rafaelweingartne> hello guys, are we going to have an open floor topic? I would like to ask your guidance on a patch that has been open for quite a while for Cinder
14:12:40 <jungleboyj> rosmaita:  ++ Can do.
14:13:00 <rosmaita> rafaelweingartne: hopefully later in the meeting
14:13:29 <rosmaita> community goals are looking OK, details are on the agenda
14:13:40 <rafaelweingartne> rosmaita: thanks :)
14:14:00 <rosmaita> e0ne is heading up v2 removal, which we would like to have completed before M-2
14:14:09 <rosmaita> he can maybe talk about that next week?
14:14:24 <e0ne> rosmaita: sounds good to me
14:14:40 <e0ne> need to figure out what is going on with grenade without v2
14:14:40 <rosmaita> great!
14:14:40 <rosmaita> #topic announcements - midcycle
14:14:57 <rosmaita> yeah, grenade will be ... interesting
14:15:21 <rosmaita> ok the midcycle is supposed to be before specs freeze, which is coming up fast also
14:15:26 <rosmaita> here is my proposal:
14:15:38 <rosmaita> hold it on Wednesday 9 December, 2 hours either 1300-1500 or 1400-1600
14:15:54 <rosmaita> that way it overlaps with a time everyone has reserved on wednesday already
14:16:07 <rosmaita> so i expect most people will at least be able to attend for an hour
14:16:11 <e0ne> +1
14:16:35 <jungleboyj> I would vote for 14:00 to 16:00.
14:16:48 <rosmaita> anyway, please check your calendars and we can verify/vote on a time next week
14:16:57 <jungleboyj> ++
14:17:09 <lseki> ++
14:17:20 <rosmaita> ok and just to have it in the minutes:
14:17:29 <rosmaita> "All Cinder Specs for features to be implemented in Wallaby must be approved by Friday 18 December 2020 (23:59 UTC)."
14:17:33 <rosmaita> and finally
14:17:41 <rosmaita> Wallaby Milestone-2 is at R-12  (week of 18 January)
14:18:02 <rosmaita> ok, but the big effort is getting mypy merged before M-1
14:18:12 <rosmaita> and i forgot to mention the other part of my proposal about that
14:18:36 <rosmaita> anyone who asks for reviews in #openstack-cinder and has not reviewed one of the mypy patches will be shunned
14:18:45 <rosmaita> #link https://review.opendev.org/#/q/topic:mypy+(status:open+OR+status:merged)+project:openstack/cinder
14:19:01 <rosmaita> the mypy patches for reference ^^
14:19:10 <rosmaita> ok, that's all for announcements
14:19:17 <rosmaita> #topic bug review
14:19:22 <rosmaita> michael-mcaleer: that's you
14:19:31 <michael-mcaleer> thanks rosmaita
14:19:31 <michael-mcaleer> 6 bugs this week, all in cinder (3 x cinder, 1 x enhancement, 1 x doc, 1 x driver)
14:19:34 <michael-mcaleer> cinder first
14:19:46 <michael-mcaleer> Cinder backup error using Erasure Code Pool #link https://bugs.launchpad.net/cinder/+bug/1903713
14:19:47 <openstack> Launchpad bug 1903713 in Cinder "Cinder backup error using Erasure Code Pool" [Undecided,New]
14:20:11 <michael-mcaleer> Not immediately obvious what is going on here, asked for more info from submitter
14:20:41 <michael-mcaleer> importance also not set, anyone have a suggestion on that?
14:21:32 <rosmaita> e0ne: can you look at this one? ceph + backup, kind of in your domain there
14:22:00 <e0ne> rosmaita: sure, will take a look on it asap
14:22:06 <rosmaita> ty!
14:22:08 <michael-mcaleer> ok next bug
14:22:14 <michael-mcaleer> cinder backup-import command fails, there is no way to repeat it #link https://bugs.launchpad.net/cinder/+bug/1903697
14:22:15 <openstack> Launchpad bug 1903697 in Cinder "cinder backup-import command fails, there is no way to repeat it" [Undecided,Incomplete]
14:22:33 <michael-mcaleer> again not a lot of info provided here, but it's possible to discern what is going on in advance of logs coming back from the submitter
14:23:01 <michael-mcaleer> assuming low importance because the user can just change the name of the backup
14:23:22 <rosmaita> sounds correct
14:23:38 <michael-mcaleer> ok, last cinder bug
14:23:43 <e0ne> I can check it with the latest master
14:23:54 <michael-mcaleer> Cinder backup - Volume metadata inconsistency #link  https://bugs.launchpad.net/cinder/+bug/1903839
14:23:55 <openstack> Launchpad bug 1903839 in Cinder "Cinder backup - Volume metadata inconsistency" [Medium,Confirmed]
14:24:15 <michael-mcaleer> opened by powermax team, our QE team spotted this in April and we are only getting round to investigating now
14:24:26 <eharney> this driver might be making invalid assumptions about how volume metadata works
14:24:44 <michael-mcaleer> metadata is assumed to be unique to each volume here
14:24:47 <e0ne> it's a Backup Bug Day
14:25:00 <eharney> i don't think it's a bug that restoring a volume from backup restores its metadata
14:25:01 <rosmaita> :)
14:25:13 <eharney> also, this is end-user metadata and info used by the driver probably shouldn't be in that field
14:25:47 <michael-mcaleer> ok, I'm going to stay on the topic of bugs; we can discuss the ins and outs of this one in a bug review to save some time
14:25:57 <eharney> yeah
14:26:10 <michael-mcaleer> next up is an enhancement
14:26:19 <michael-mcaleer> Cinder - volume show for GVG #link https://bugs.launchpad.net/cinder/+bug/1903845
14:26:20 <openstack> Launchpad bug 1903845 in Cinder "Cinder - volume show for GVG" [Wishlist,Triaged]
14:26:31 <michael-mcaleer> Consider updating volume properties when it's added to a Generic Volume Group.
14:26:42 <eharney> makes sense to consider since we do it w/ CGs
14:26:47 <michael-mcaleer> there is consistency group ID in there but no GVGs
14:26:49 <michael-mcaleer> yeah
14:27:05 <michael-mcaleer> set to enhancement/wish
14:27:15 <rosmaita> sounds good
14:27:34 <michael-mcaleer> last cinder-related bug is docs... Rocky Series Release Notes in Cinder Release Notes #link https://bugs.launchpad.net/cinder/+bug/1904219
14:27:35 <openstack> Launchpad bug 1904219 in Cinder "Rocky Series Release Notes in Cinder Release Notes" [Low,Confirmed]
14:27:47 <michael-mcaleer> simple fix here, incorrect designation of code snippet in the release notes
14:28:11 <michael-mcaleer> and lastly the driver bug opened by ibm... os_type changed to 'linux' in python3 #link https://bugs.launchpad.net/cinder/+bug/1903648
14:28:13 <openstack> Launchpad bug 1903648 in Cinder "os_type changed to 'linux' in python3" [Low,Triaged]
14:28:47 <michael-mcaleer> again a small fix here, I've set the tags for IBM so hopefully they see it if they didn't open it themselves
14:28:54 <smcginnis> We can't update old release notes, otherwise they show up as new. There's also an outstanding bug in reno that may be causing that one.
14:29:51 <rosmaita> smcginnis: thanks, the moral of the story is to make sure command line text is wrapped in `` in the releasenotes
14:30:03 <rosmaita> (for future reviewers)
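For future reviewers, a reno note with the command wrapped as a reST inline literal would look something like this (filename and wording are made up):

    # releasenotes/notes/example-fix-abcdef123456.yaml (hypothetical)
    fixes:
      - |
        Fixed an issue where the ``cinder backup-import`` command could
        not be retried after a failure.

Without the double backticks, the command text can be mis-rendered, which seems to be what happened in bug 1904219.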
14:30:12 <rosmaita> ok, thank you  michael-mcaleer
14:30:18 <michael-mcaleer> will I close that bug out and advise submitter on updating of release notes?
14:30:34 <michael-mcaleer> no prob, that's all from me this week, thanks rosmaita
14:30:42 <rosmaita> michael-mcaleer: yes, that sounds good
14:30:50 <rosmaita> #topic stable releases
14:30:58 <rosmaita> whoami-rajat__: that's you
14:31:21 <whoami-rajat__> Hi, just to provide an update on the patches to be merged before the stable release, which we decided would be on 26 Nov
14:31:31 <whoami-rajat__> #link https://etherpad.opendev.org/p/stable-releases-review-tracker
14:31:38 <rosmaita> oops, forgot to mention that in the updates
14:31:52 <whoami-rajat__> a lot of patches were merged in the past week, thanks to all the reviewers
14:32:17 <whoami-rajat__> victoria looks good to go, ussuri has 1 patch remaining, train has 4 patches in which there's a -1 on hemna 's patch from smcginnis
14:33:33 <whoami-rajat__> hopefully all patches will be merged before next cinder meeting to meet the release deadline on time
14:33:38 <whoami-rajat__> that's all from my side
14:33:47 <rosmaita> ok, stable cores: ^^
14:33:51 <rosmaita> thanks, whoami-rajat__
14:34:09 <rosmaita> #topic marking config options as 'advanced'
14:34:21 <rosmaita> just want to bring to people's attention that this is possible
14:34:40 <rosmaita> useful for config options that should only have their values changed for very specific reasons
14:34:52 <rosmaita> anyway, take a look at this patch and read my comment
14:35:01 <rosmaita> #link https://review.opendev.org/#/c/744746/
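For reference, this is just a flag on the oslo.config option definition; a minimal sketch (the option name below is made up):

    # Sketch of an 'advanced' option with oslo.config; the option name is
    # hypothetical. Advanced options are set apart in the generated sample
    # config so operators know the default is almost always correct.
    from oslo_config import cfg

    opts = [
        cfg.IntOpt('backend_internal_tunable',
                   default=30,
                   advanced=True,
                   help='Change this only for very specific reasons.'),
    ]

    cfg.CONF.register_opts(opts)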
14:35:17 <rosmaita> #topic v2 support removal from cinderclient and last call for comments
14:35:29 <rosmaita> i sent an announcement to the ML
14:35:38 <rosmaita> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-November/018697.html
14:36:01 <rosmaita> and, there has been no response
14:36:16 <rosmaita> so we have the green light to remove v2 support from the python-cinderclient this release cycle
14:36:22 <rosmaita> hooray!
14:36:39 <jungleboyj> Onward!
14:36:43 <rosmaita> #topic restricting supported Ceph versions
14:36:49 <abishop> I know there are patches submitted to puppet-cinder and tripleo to prepare for v2 removal
14:37:11 <rosmaita> ok, i also sent an email out about this to the ML asking for comments on the review
14:37:20 <rosmaita> #link https://review.opendev.org/#/c/762592/
14:37:45 <rosmaita> but hemna raised an interesting question
14:38:00 <rosmaita> btw, thanks to geguileo for also responding to a comment about this on the ML
14:38:25 <rosmaita> so, i was looking at this like an announcement for how people should behave
14:38:41 <rosmaita> that is, do not use unsupported ceph versions and expect it to work
14:38:58 <rosmaita> but, hemna pointed out that if we don't enforce it in code, they could do it anyway
14:39:02 <eharney> i think the main goal is in fact to just say "don't do that"
14:39:19 <geguileo> eharney: +1
14:39:26 <rosmaita> that was my thought, too, but i wanted to check
14:39:30 <eharney> which is the current state of affairs today -- and i don't recall us ever having issues that came up because someone was using an older version of ceph that we don't test
14:39:37 <eharney> however
14:39:44 <smcginnis> Then if someone comes along and says "hey, this doesn't work", we can point to the docs.
14:39:52 <geguileo> but I'm ok adding a patch to check it
14:39:57 <eharney> we are adding some advisory messages around the v2 clone API that says "since you are using this old version, your performance is worse"
14:40:00 <jungleboyj> RTFM.  ;-)
14:40:09 <geguileo> lol
14:40:45 <geguileo> eharney: once that patch merges we can have another patch that removes those and does a check on the setup, right?
14:41:12 <rosmaita> ok, i think advisory is good for now, and if something needs code validation, then we can still add it at the appropriate time
14:41:17 <eharney> geguileo: not sure, that one is less straightforward since it also requires backend configuration
14:41:54 <geguileo> eharney: can't we just check the cluster version?
14:42:00 <eharney> no
14:42:03 <geguileo> eharney: isn't that enough?
14:42:20 <eharney> because they still have lower perf if they don't manually set some options
14:42:51 <eharney> and even if they set the options, it won't be applied retroactively to old images, etc
14:43:24 <geguileo> eharney: ok, we cannot remove it, but we could barf if the ceph cluster version is not the right one
14:43:36 <geguileo> (which is what hemna was asking for)
14:44:06 <eharney> i don't buy that failing because they are using an old ceph version really accomplishes much for us
14:44:11 <rosmaita> ok, let's continue discussion of this on the patch
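If enforcement is ever added, the check being discussed could live in the driver's setup path; a rough sketch only (the version-fetching helper and the minimum version are hypothetical placeholders, not from the patch under review):

    # Rough sketch; _get_ceph_version() is a hypothetical helper that
    # would query the cluster, and the floor here is a placeholder.
    MIN_CEPH_VERSION = (14, 2, 0)

    def check_for_setup_error(self):
        version = self._get_ceph_version()  # e.g. (major, minor, patch)
        if version < MIN_CEPH_VERSION:
            raise RuntimeError('Ceph cluster version %s is below the minimum '
                               'supported version %s'
                               % (version, MIN_CEPH_VERSION))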
14:44:24 <rosmaita> #topic replication support for nvme connector - code structure
14:44:32 <rosmaita> zoharm: that's you
14:44:38 <zoharm> hi all
14:44:59 <zoharm> so we are working on upgrading the NVMe connector to also do replication
14:45:24 <zoharm> the main question in the high level, how do we structure it in regards to the existing NVMe connector?
14:46:13 <zoharm> for example, is it ok to tack the raid functionality onto the connect/disconnect methods, if the connection properties specify replication?
14:47:08 <zoharm> in a sense, this means expanding the connector to have two modes, choosing one depending on whether the nvme volumes to connect to are replicated
14:47:35 <zoharm> does this sound like the right way?
14:47:48 <geguileo> zoharm: sorry, I'm not quite following
14:47:49 <zoharm> are there any other suggestions or issues with this?
14:48:04 <geguileo> zoharm: do you have an example of the connection_info dict?
14:48:25 <geguileo> it's probably easier for me to understand what you are actually proposing like that
14:48:39 <zoharm> yes, so basically, if the connection_info dict has a field "volume_replicas" with connection info for each nvme device in it
14:48:43 <zoharm> then, put them into raid
14:49:05 <zoharm> for bare nvme it's:
14:49:09 <zoharm> connection_info:
14:49:13 <zoharm> nqn
14:49:18 <zoharm> portal
14:49:18 <rafaelweingartne> It might be easier to write down a spec and propose it there. Then, the discussion can happen within a context, and maybe with some examples of what you want to accomplish
14:49:37 <geguileo> zoharm: That idea does sound good to me
14:49:43 <zoharm> ok, just to finish this part
14:49:48 <zoharm> if connection info looks like this:
14:49:52 <zoharm> connection_info:
14:49:52 <geguileo> zoharm: and it would not conflict with adding multipathing to the nvme connector later on
14:49:56 <zoharm> volume_replicas:
14:50:00 <zoharm> nqn, etc...
14:50:10 <zoharm> yep geguileo
14:50:24 <zoharm> so multipathing will be handled under a "portals" field for each nvme device
14:50:25 <rosmaita> is the question whether it's better to complicate the connect_volume function in the current connector, or subclass it with a new connector that does replication?
14:50:29 <zoharm> which can have multiple portals
14:50:32 <geguileo> yeah, volume_replicas would be a list of connection_info that just doesn't have the volume_replicas in it
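Assembling the flattened lines above, the two connection_info shapes being discussed look roughly like this (all values are made up):

    # Bare NVMe volume:
    connection_info = {
        'nqn': 'nqn.2020-11.org.example:vol-1',
        'portal': '10.0.0.1:4420',
    }

    # Replicated volume: volume_replicas is a list of per-device
    # connection info, each entry without a volume_replicas of its own.
    connection_info = {
        'volume_replicas': [
            {'nqn': 'nqn.2020-11.org.example:vol-1a', 'portal': '10.0.0.1:4420'},
            {'nqn': 'nqn.2020-11.org.example:vol-1b', 'portal': '10.0.0.2:4420'},
        ],
    }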
14:51:17 <zoharm> rosmaita, in a way yes, and if subclassing is better, how to do it right?
14:52:05 <geguileo> I think subclassing is probably better for readability
14:52:17 <geguileo> the base nvme-of class can just do simple connections
14:52:39 <zoharm> true, though the biggest benefit for readability is breaking things up into re-usable functions (usable by both bare and replicated nvme)
14:52:46 <geguileo> and the raid version inherits from it and handles the replicas with specific code, but calls the base nvme-of class for individual connections
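A sketch of the structure geguileo is describing; the class and method names are illustrative, not the actual os-brick code:

    # Illustrative only, not the real os-brick classes.
    class NVMeOFConnector(object):
        """Base connector: simple single-device NVMe-oF connections."""
        def connect_volume(self, connection_properties):
            # connect to one nqn/portal and return the local device info
            ...

    class NVMeOFRAIDConnector(NVMeOFConnector):
        """RAID variant: connect each replica, then assemble the array."""
        def connect_volume(self, connection_properties):
            devices = []
            for replica in connection_properties['volume_replicas']:
                devices.append(super().connect_volume(replica))
            return self._assemble_raid(devices)  # hypothetical helper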
14:53:06 <zoharm> right, however, if a single volume backend supports both simple and replicated volumes, this complicates things
14:53:24 <zoharm> additionally, a new connector and libvirt volume driver needs to be named and registered
14:53:41 <geguileo> zoharm: why a new libvirt volume driver?
14:53:56 <geguileo> zoharm: can't we just create a new os-brick connector?
14:54:18 <zoharm> yes, and then we will need to tell nova to use it somehow
14:54:37 <rosmaita> zoharm: best thing would be to write the way you think is correct in the spec, and say in the alternatives section why you don't like the other option
14:54:45 <rosmaita> then we'll have a better basis for discussion
14:54:53 <zoharm> as far as i saw, this means adding it to libvirt/drivers
14:55:12 <geguileo> zoharm: you are correct
14:55:15 <zoharm> because current nvme connector is referenced in the same way in its own libvirt driver
14:55:32 <zoharm> so to make this work we would need to duplicate this whole wrapping and just change one name
14:55:42 <zoharm> this is another tradeoff for this approach
14:55:49 <zoharm> also it requires a code submission to nova :)
14:56:33 <geguileo> zoharm: you don't need to replicate the wrapping, you can just reference the same class in nova/virt/libvirt/driver.py
14:56:41 <geguileo> but with the name of the new connector
14:56:58 <geguileo> in libvirt_volume_drivers
14:57:21 <geguileo> 'nvmeof-raid=nova.virt.libvirt.volume.nvme.LibvirtNVMEVolumeDriver'
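i.e., the registration would be one new entry in nova's libvirt_volume_drivers list, reusing the existing class (abridged sketch; the 'nvmeof-raid' line is the hypothetical addition):

    # nova/virt/libvirt/driver.py (abridged)
    libvirt_volume_drivers = [
        # ... existing entries, including the current nvme one ...
        'nvmeof-raid=nova.virt.libvirt.volume.nvme.LibvirtNVMEVolumeDriver',
    ]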
14:57:37 <rosmaita> zoharm: you also had another question?
14:57:56 <zoharm> yes, the problem is in that LibvirtNVMEVolumeDriver
14:58:00 <rosmaita> abishop: i think we will not get to your topic this week, but i will put you first for next week
14:58:11 <zoharm> it hardcodes which connector to use it seems
14:58:12 <abishop> story of my life :-/
14:58:28 <zoharm> rosmaita my other question is very quick
14:58:37 <abishop> folks can read etherpad and check out review in meantime
14:58:39 <zoharm> but anyway i will take this offline in favor of time
14:58:48 <rosmaita> ok, thanks!
14:58:54 <zoharm> other question is simple
14:58:55 <rosmaita> #topic open discussion
14:59:06 <rafaelweingartne> rosmaita: I just have a question regarding https://review.opendev.org/#/c/666886/. It is a patch that has been on the making for quite a while.
14:59:14 <geguileo> zoharm: on second thought, I think it's probably best to use the existing connector
14:59:32 <rafaelweingartne> Before solving the conflicts, we would like to understand what we are missing there. For instance, is a formal specification needed?
15:00:09 <rosmaita> rafaelweingartne: i completely lost track of that one
15:00:19 <rosmaita> let me look it over and leave some comments
15:00:25 <rosmaita> check back in a few days?
15:00:29 <rafaelweingartne> ok
15:00:31 <rafaelweingartne> thanks
15:00:35 <rosmaita> and we are out of time ...
15:00:40 <rosmaita> thanks, everyone
15:00:41 <rafaelweingartne> no problem, take your time
15:00:42 <rosmaita> #endmeeting