14:00:04 <rosmaita> #startmeeting cinder
14:00:05 <openstack> Meeting started Wed Jan 15 14:00:04 2020 UTC and is due to finish in 60 minutes.  The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:09 <openstack> The meeting name has been set to 'cinder'
14:00:22 <rosmaita> #link https://etherpad.openstack.org/p/cinder-ussuri-meetings
14:00:27 <rosmaita> #topic roll call
14:00:34 <tosky> o/
14:00:40 <whoami-rajat> hi
14:00:43 <whfnst17> hi
14:00:43 <enriquetaso> hi
14:01:00 <m5z> hi :)
14:01:12 <lseki> hi
14:01:19 <jungleboyj> O/
14:01:23 <dviroel> hi
14:01:37 <e0ne> hi
14:01:39 <smcginnis> o/
14:01:43 <geguileo> hi! o/
14:01:44 <rosmaita> looks like a good turnout!
14:03:05 <rosmaita> #topic announcements
14:03:15 <rosmaita> some upcoming deadlines
14:03:21 <LiangFang> hi
14:03:23 <rosmaita> spec freeze: 31 Jan 2020 (23:59 UTC)
14:03:36 <rosmaita> new driver/new target driver must be merged by 13 February 2020 (23:59 UTC)
14:03:48 <rosmaita> i should probably send out a reminder to the ML about that one
14:04:01 <rosmaita> #action rosmaita send email about driver merge deadline
14:04:18 <rosmaita> that same week we have Ussuri milestone-2: 13 February 2020
14:04:41 <rosmaita> other news
14:04:49 <rosmaita> virtual mid-cycle part 1 next week on Tuesday 1300-1500 UTC
14:05:04 <rosmaita> thanks to everyone who participated in the poll to select the time/date
14:05:33 <rosmaita> we'll be holding the meeting in bluejeans like we did for the virtual PTG
14:05:48 <rosmaita> info is here:
14:05:48 <rosmaita> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-January/011968.html
14:06:13 <rosmaita> also, if you have specific items you want to discuss, please add them to the etherpad:
14:06:21 <rosmaita> #link https://etherpad.openstack.org/p/cinder-ussuri-mid-cycle-planning
14:06:42 <rosmaita> ok, final announcement is a reminder
14:06:52 <rosmaita> Voting for the 2020 Individual Directors of the OpenStack Foundation Board of Directors is now open and will remain open until Friday, January 17th, 2020 at 11:00am CST/1700 UTC
14:07:11 <rosmaita> to vote, you have to use the link that is sent to you in your email
14:07:20 <rosmaita> so i can't put a link here
14:07:32 <rosmaita> and that's all the announcements
14:07:39 <rosmaita> on to real business
14:07:41 <smcginnis> Thanks for the reminder. ;)
14:07:49 <jungleboyj> :-)
14:07:52 <rosmaita> np
14:08:21 <rosmaita> #topic Spec: Volume local cache
14:08:21 <rosmaita> #link https://review.opendev.org/#/c/684556/
14:08:31 <rosmaita> LiangFang: that's you!
14:08:49 <rosmaita> i see you pushed an update, i haven't had time to look at it yet
14:09:08 <geguileo> I've added more comments on the previous patch
14:09:08 <LiangFang> yes, Gorka gave lots of comments
14:09:20 <geguileo> in response to LiangFang's responses
14:09:26 <LiangFang> mostly about the scheduling things
14:10:16 <LiangFang> currently in the flavor there's no space for the volume type
14:10:43 <geguileo> so they cannot schedule it correctly?
14:10:51 <LiangFang> yes
14:10:57 <LiangFang> there are some workarounds
14:11:01 <geguileo> then if os-brick just fails, it will be a nightmare
14:11:19 <geguileo> users attaching volumes to nodes without cache won't know why they can't attach
14:11:44 <geguileo> boot from cinder vol will fail until it lands on a node with cache (if there is one)
14:12:07 <LiangFang> they can create the VM on servers with cache capability
14:12:30 <rosmaita> so they could define a flavor that has a cache?
14:12:48 <LiangFang> I got a new laptop, and no dictionary installed yet :(
14:13:16 <LiangFang> gibi said we can design the scheduling later
14:13:48 <rosmaita> you may answer this question on the current version, but i will ask it anyway
14:14:16 <LiangFang> currently use zone
14:14:23 <LiangFang> or aggregate
14:14:40 <geguileo> my concern is that this is going to be complicated for customers
14:14:41 <LiangFang> so the VM can be scheduled to the correct zone
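For context, one way to steer VMs onto cache-capable computes today is a host aggregate plus the AggregateInstanceExtraSpecsFilter; a minimal sketch, where the aggregate name and the "cache" property key are assumptions for illustration:

    openstack aggregate create cache-hosts
    openstack aggregate set --property cache=true cache-hosts
    openstack aggregate add host cache-hosts compute-1
    # flavors are matched to the aggregate via the filter's namespaced key
    openstack flavor set --property aggregate_instance_extra_specs:cache=true m1.cached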
14:15:34 <rosmaita> is there a reason why a cacheable volume type *must* be cached? could we have it be cached if it lands on a compute that supports it (user picks the flavor for the VM), and not cached if user picks non-cache flavor?
14:16:01 <geguileo> I think that's what's best right now
14:16:21 <smcginnis> I would think if someone requested a volume to be cached, it could be confusing to them to end up with a volume that is not cached.
14:16:34 <smcginnis> Especially if their service provider charges more for the cached volume types.
14:16:48 <LiangFang> I talked with Alex (Nova core), he suggested failing the request if not supported.
14:17:01 <rosmaita> i was thinking of the caching as an attribute of the compute, not the volume
14:17:08 <geguileo> LiangFang: do they have a way to report to the users why it failed?
14:17:15 <geguileo> like we do with the user messages?
14:17:18 <rosmaita> so the provider would charge more for a flavor with a cache
14:17:31 <geguileo> rosmaita: makes sense to me
14:17:33 <smcginnis> Not sure, but I could see that being a "premium" option.
14:18:00 <rosmaita> because the nice thing about LiangFang's choice of open-cas (as opposed to bcache) is that it doesn't have to modify the volume at all
14:18:11 <LiangFang> geguileo: we plan to throw an exception
14:18:14 <rosmaita> so the volume itself could be attached anywhere
14:18:32 <geguileo> LiangFang: that's useless for users
14:18:43 <geguileo> only useful for admins
14:19:26 <geguileo> rosmaita: I think you were onto something when you said cacheable should be a Nova attribute or something
14:19:26 <eharney> i think some basics are missing about how this interacts w/ cinder features -- if using a write-back cache, how do you reliably use cinder snapshots, as a user?
14:19:50 <geguileo> eharney: and live migrations must be disabled somehow
14:20:03 <eharney> it would also affect consistency groups, backups, etc..
14:20:15 <smcginnis> eharney: Good point, we would need a hook for flushing cache.
14:20:21 <eharney> but for snapshots, you don't have a way to even know if the data you want to snapshot was written
14:21:11 <geguileo> eharney: I believe that's the case for migrations as well, because the hypervisor would just flush and expect that to do what it's meant to do
14:21:14 <LiangFang> write-back indeed has data integrity issues
14:21:18 <eharney> geguileo: right
14:22:01 <LiangFang> for data integrity scenarios, they should choose write-through
14:22:29 <eharney> i think we have to assume that data integrity is a priority in general
14:23:51 <smcginnis> Kind of a base requirement for a storage service.
14:24:23 <LiangFang> yes, so in most cases, write-through is selected by default
14:24:26 <rosmaita> well, if you use a cache, you are sometimes willing to trade some data integrity for speed
14:25:02 <LiangFang> write-through is also the default cache mode of open-cas
14:25:04 <rosmaita> but we could decide to support only those modes that ensure data integrity
14:25:45 <rosmaita> so the speed boost would be mostly for subsequent reads
14:25:52 <rosmaita> which is still a plus
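For reference, with open-cas the cache mode is chosen when the cache is started on the compute node; a minimal sketch using the casadm CLI (device paths are placeholders, flags as documented in the Open CAS admin guide):

    # start a cache instance on a local fast device in write-through mode
    casadm --start-cache --cache-id 1 --cache-device /dev/nvme0n1p1 --cache-mode wt
    # add the attached volume's block device as a core device behind the cache
    casadm --add-core --cache-id 1 --core-device /dev/sdb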
14:25:55 <eharney> rosmaita: sure, but to enable that trade-off we need a mechanism to make sure that things like live migration don't fall apart
14:26:15 <rosmaita> eharney: ++
14:26:41 <eharney> and i think the spec doesn't cover such concerns yet
14:27:47 <LiangFang> should we write this note in the manual page?
14:28:15 <LiangFang> e.g. tell the operator: if you choose write-back, then you cannot live migrate
14:28:49 <geguileo> LiangFang: that's not good enough, because users and admins are different people
14:28:52 <rosmaita> either that, or we don't allow anything other than write-through for now
14:28:57 <rosmaita> geguileo: ++
14:29:30 <rosmaita> LiangFang: if we only allowed write-through mode, would that make this feature useless?
14:29:49 <LiangFang> rosmaita: then only read I/O could be boosted
14:29:57 <geguileo> rosmaita: we wouldn't know the mode they have actually selected
14:30:08 <geguileo> because it's configured outside of openstack
14:30:16 <rosmaita> geguileo: that's true
14:30:28 <geguileo> but we can document why those modes are not supported
14:30:38 <rosmaita> yes, that's what i was going to say
14:31:01 <rosmaita> so we leave it up to the operator how to handle this
14:31:08 <geguileo> if we document it then admins can ignore it and just be careful in those situations (I'm fine with that)
14:31:22 <rosmaita> because, if they do it with flavors, they could have one flavor with cache in xxx-mode, another flavor with cache in yyy-mode
14:31:40 <rosmaita> and then the user picks which flavor to use depending on how risk-averse they are
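As a sketch of that per-flavor idea, two flavors with different cache modes might look like the following (the cache:mode extra-spec key is hypothetical, not something agreed in the spec):

    openstack flavor set --property cache:mode=write-through m1.cached-safe
    openstack flavor set --property cache:mode=write-back m1.cached-fast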
14:32:27 <geguileo> that changes the approach we had
14:32:32 <geguileo> but I like it
14:32:43 <geguileo> not being a Cinder volume thing
14:32:56 <geguileo> except for the cacheable <is> True part
14:32:58 <rosmaita> right, we are back to just 'cacheable' for volume-type
14:33:06 <geguileo> because there are backends that don't support caching in OpenStack
14:33:53 <rosmaita> I really think the way to go is that 'cacheable' means that this volume *could* be cached if it lands on the correct hypervisor
14:33:57 <geguileo> so in the end Nova decides where to schedule, what mode it wants, etc
14:34:13 <geguileo> rosmaita: on Cinder side yes
14:34:25 <geguileo> rosmaita: it would be True by default and we would return False on RBD and RemoteFS
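On the Cinder side, marking a volume type with the extra spec under discussion might look like this (a sketch, assuming the 'cacheable' key from the spec and an illustrative type name):

    openstack volume type create cached-type
    openstack volume type set --property cacheable='<is> True' cached-type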
14:34:30 <eharney> having "cacheable <is> True" seems like it assumes we could swap a different caching tool in later when we support a second one, do we know if that's the case?  (it's not if part of your data is stored in an open-cas writeback cache..)
14:35:34 <rosmaita> it really seems like we need some "live" discussion of this
14:35:40 <geguileo> eharney: you are talking about in-use retype? migrate?
14:36:02 <eharney> geguileo: talking about making sure we don't get stuck when we try to add another cache system into Cinder later
14:36:04 <smcginnis> Definitely a midcycle topic I think.
14:36:15 <geguileo> eharney: because cacheable only means, from Cinder perspective, that there is a /dev/xyz device that can be used by the cache system
14:36:50 <geguileo> eharney: I don't think we would get stuck on a second one
14:36:52 <eharney> geguileo: ok, so the implication for the future is that even if we have multiple cache drivers, you can only use one in a deployment, then?
14:37:16 <geguileo> eharney: wouldn't that be a Nova thing?
14:37:23 <eharney> geguileo: i'm not sure it would be
14:37:30 <geguileo> how so?
14:37:51 <eharney> are we using this cache during reimage?  image->volume?  backup?
14:38:17 <LiangFang> if the cache mode is hard-coded to write-through, then even if in the future there's different cache software, it still works
14:38:27 <rosmaita> LiangFang: when is the nova spec deadline?
14:38:32 <eharney> i think the spec needs to spell out more about the interactions between cinder/nova and the cache
14:38:56 <geguileo> eharney: +1
14:39:01 <LiangFang> rosmaita: sorry, I know it's very near, but I don't know the date
14:39:06 <jungleboyj> eharney: ++
14:39:12 <rosmaita> i couldn't find the date either
14:39:48 <rosmaita> i think we really need to discuss this at the midcycle so we can determine exactly which specific things need to be addressed, and give you a concrete list
14:40:03 <LiangFang> eharney: the cache is only on the compute node
14:40:19 <eharney> LiangFang: if so, that has implications for how backups work
14:40:25 <jungleboyj> rosmaita: ++
14:40:31 <LiangFang> rosmaita: ok. thanks
14:40:36 <eharney> and reimage, etc
14:41:02 <geguileo> Feb 10 - Feb 14: Ussuri-2 milestone, nova spec freeze
14:41:09 <geguileo> https://wiki.openstack.org/wiki/Nova/Ussuri_Release_Schedule
14:41:15 <rosmaita> i think eric is bringing up some good points that we need to think about whether they impact your spec or not
14:41:17 <rosmaita> geguileo: thanks
14:41:47 <LiangFang> if the cache mode is write-through, then from cinder's point of view it seems we can ignore it, since all the data is in the backend volume
14:41:58 <rosmaita> so it looks like if we can get this discussed on Tuesday, we can get it hammered out next week and there will still be some time on the nova side
14:42:30 <rosmaita> especially if we push the idea of putting the cache in a flavor instead of into the scheduler
14:42:38 <rosmaita> but we can discuss on Tuesday
14:42:55 <rosmaita> LiangFang: i believe you will be available for the midcycle on tuesday?
14:43:07 <LiangFang> yes:)
14:43:11 <rosmaita> great!
14:43:19 <LiangFang> it's my first time joining
14:43:30 <LiangFang> I tried this afternoon:)
14:43:46 <rosmaita> were you able to access bluejeans?
14:44:02 <LiangFang> https://bluejeans.com/3228528973
14:44:06 <LiangFang> yes, as guest
14:44:15 <rosmaita> ok, great
14:44:30 <LiangFang> but I don't know how to register an account
14:44:48 <rosmaita> LiangFang: i think you have to be a guest
14:44:56 <LiangFang> OK
14:44:57 <rosmaita> it's a paid service
14:45:05 <LiangFang> ok
14:45:10 <rosmaita> (our use is courtesy of Red Hat)
14:45:57 <rosmaita> ok, let's wrap up with everyone please read through the spec before the midcycle so we all are ready to discuss
14:46:05 <LiangFang> thanks for the discussion today, I will read the chat log later, because I may not have gotten all the points
14:46:25 <rosmaita> ok, great, and you can ask in IRC also if things aren't clear
14:46:32 <LiangFang> ok
14:46:51 <rosmaita> #topic Update on community goal "Drop Python 2.7 Support"
14:47:11 <rosmaita> as is my wont, i put together an etherpad for this:
14:47:14 <rosmaita> #link https://etherpad.openstack.org/p/cinder-ussuri-community-goal-drop-py27-support
14:47:43 <rosmaita> is xuanyandong here by any chance?
14:48:14 <rosmaita> anyway, if you look at the etherpad, we are mostly OK except for cinderclient
14:48:38 <eharney> the cinder-tempest-plugin question is interesting
14:48:51 <rosmaita> eharney: yes, thanks for the reminder about that
14:49:02 <rosmaita> i was hoping tosky might know
14:49:11 <smcginnis> Tempest is dropping support too.
14:49:23 <smcginnis> So though it's branchless, that's part of why they are still tagged.
14:49:26 <eharney> so do we mark the plugin to only work with sufficiently new versions of tempest, or, what?
14:49:51 <eharney> i guess we can just do that in requirements.txt
14:50:16 <smcginnis> Yeah, since we don't have a lower-constraints file, I guess it would be there.
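That would amount to a single line in the plugin's requirements.txt, along these lines (the version number is a placeholder, not the actual py2 cutoff):

    tempest>=23.0.0  # placeholder: first tempest release without py2 support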
14:50:25 <eharney> but the plugin is branchless, so then you have to pin jobs for stable branches too i guess?
14:50:29 <smcginnis> But I'm not sure it needs to be strictly enforced in code.
14:50:52 <smcginnis> It's a little on the consumer to get the right tempest plugin for the given tempest version.
14:51:07 <smcginnis> Tagging of those is done to be able to match up which versions go together.
14:51:15 <eharney> let's say we're the consumer in the stable/stein gate jobs :)
14:51:46 <smcginnis> It *should* be using the stein plugin with the stein tempest, but I'm not entirely sure there.
14:51:55 <tosky> eharney: no, tempest is going to run with python3 even on older branches
14:52:03 <tosky> and the plugins are going to be used from master as well
14:52:04 <eharney> oh
14:52:19 <smcginnis> Oh right, because the tempest runtime doesn't matter.
14:52:20 <tosky> tempest runs in its own venv, and the plugins are installed there
14:52:25 <smcginnis> That's separate from the service runtime.
14:52:26 <eharney> that works then
14:53:08 <tosky> but yeah, the support for py2 should be removed as soon as all the consumers of cinder-tempest-plugin are not testing with py2 anymore
14:53:31 <smcginnis> The python-cinderclient functional failure is odd. Looks like the project isn't being installed so it's not finding the cinder command.
14:53:50 <rosmaita> yeah, ignore that result
14:54:01 <rosmaita> i am having duelling patch sets with xuanyandong
14:54:05 <tosky> re python-cinderclient: if we run out of time, just revert to the in-tree playbook for now and we can figure it out later
14:54:27 <rosmaita> yeah, i am going to do what tosky suggests this afternoon
14:55:19 <rosmaita> smcginnis: i fixed the missing /bin/cinder
14:56:22 <rosmaita> ok, so it looks like the cinder-tempest-plugin is already using py3 as its basepython
14:57:03 <rosmaita> i think we are ok there
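For reference, "using py3 as its basepython" means a tox.ini stanza roughly like the following (a sketch of the usual pattern, not the plugin's exact file):

    [testenv]
    basepython = python3
    usedevelop = True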
14:57:23 <whfnst17> Does this mean that drivers should also start abandoning py2?
14:57:42 <smcginnis> whfnst17: They can, but there isn't a priority to remove compatibility code.
14:57:54 <whfnst17> ok
14:58:09 <rosmaita> only remaining issue is Thing 5 on that etherpad
14:58:20 <rosmaita> update setup.cfg to *require* py36
14:58:43 <rosmaita> that's the thing nova did in november that broke everything
14:58:53 <rosmaita> but we should be in a better place now
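For reference, the setup.cfg change in question is the usual pbr metadata update (a sketch of the pattern the community goal used, not cinder's exact file):

    [metadata]
    name = cinder
    python-requires = >=3.6
    classifier =
        Programming Language :: Python :: 3
        Programming Language :: Python :: 3.6
        Programming Language :: Python :: 3.7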
14:59:13 <eharney> it does sound like the right thing to do, but i don't know a lot about possible side effects
14:59:17 <rosmaita> but i imagine that will be one of those bot-proposed items for all repos
14:59:27 <rosmaita> so i don't think we need to do it?
14:59:36 <smcginnis> I don't think so.
14:59:43 <rosmaita> ok, cool
14:59:52 <rosmaita> well, sorry that we are out of time for open discussion
15:00:01 <rosmaita> see everyone on Tuesday at the virtual mid-cycle!
15:00:03 <smcginnis> I think there's a difference between saying we don't support py2, and actually enforcing in code that it's absolutely disallowed.
15:00:09 <smcginnis> Or even for py35.
15:00:23 <rosmaita> smcginnis: that makes sense
15:00:27 <rosmaita> #endmeeting