15:59:01 <smcginnis> #startmeeting Cinder
15:59:02 <openstack> Meeting started Wed Dec  9 15:59:01 2015 UTC and is due to finish in 60 minutes.  The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:59:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:59:07 <openstack> The meeting name has been set to 'cinder'
15:59:14 <smcginnis> Hey everyone.
15:59:15 <jgriffith> 0/
15:59:17 <ntpttr> hello o/
15:59:18 <tbarron> hi
15:59:29 <diablo_rojo> hello
15:59:31 <jgregor> Hello!
15:59:32 <smcginnis> Courtesy ping: duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang tbarron
15:59:33 <scottda> hi
15:59:47 <e0ne> hi
15:59:48 <flip214> hi
15:59:50 <geguileo> Hi
15:59:51 <mtanino> hi
15:59:51 <eharney> hi
15:59:53 <jungleboyj> Present!
15:59:56 <jungleboyj> You are early.  ;-)
16:00:04 <DuncanT> Hi
16:00:06 <smcginnis> jungleboyj: I'll delay.
16:00:07 <baumann> Hello
16:00:15 <bswartz> hi
16:00:16 <smcginnis> jungleboyj: OK, now it's time. :)
16:00:18 <rhedlind> hi
16:00:31 <adrianofr> hi
16:00:34 <xyang1> hi
16:00:35 <smcginnis> Agenda: https://wiki.openstack.org/wiki/CinderMeetings#Next_meeting
16:00:51 <smcginnis> #topic Release Status
16:00:59 <smcginnis> #info M-1 went out last week
16:01:04 <jgriffith> WooT
16:01:28 <erlon> hi!
16:01:32 <smcginnis> We're making some progress on specs. Would be great to get a few more of those through.
16:01:32 <jose-falavinha> hi
16:01:35 <smcginnis> #link https://etherpad.openstack.org/p/mitaka-cinder-spec-review-tracking
16:01:54 <nibalizer> ss
16:01:54 <smcginnis> Please add any details to that etherpad to help bring attention to specs.
16:02:07 <smcginnis> #link http://ci-watch.tintri.com/project?project=cinder&time=7+days
16:02:27 <smcginnis> CIs are continuing to be less than ideal. I've pinged a few folks for the big ones.
16:03:03 <smcginnis> If everyone could take a look at last week's results there and check your CI. We really need to get more consistent results there.
16:03:32 <jgriffith> smcginnis: +1
16:03:45 <smcginnis> #info Bug stats: cinder: 474, cinderclient: 46, os-brick: 12
16:03:49 <jungleboyj> smcginnis: +1 ... been pinging people internally.
16:03:55 <smcginnis> Great!
16:04:07 <smcginnis> Thanks to those spending some time on bug management.
16:04:15 <smcginnis> I've noticed a few folks taking care of things there.
16:04:42 <smcginnis> My assumption is still that a lot of them are already fixed, just either not updated automatically when it merged, or fixed separately.
16:05:05 <smcginnis> And the usual reminder - Nova volume bugs can always use our help:
16:05:09 <smcginnis> #link https://bugs.launchpad.net/nova/+bugs?field.status:list=NEW&field.tag=volumes
16:05:24 <smcginnis> #topic Reno Release Notes
16:05:34 <smcginnis> This is just to raise awareness.
16:05:50 <smcginnis> Projects are switching over to using reno for capturing release notes.
16:06:01 <smcginnis> It's a new thing we should watch for in reviews.
16:06:12 <smcginnis> Or submit as follow up patches if needed.
16:06:20 <smcginnis> I added some notes about it to our wiki.
16:06:44 <smcginnis> But basically, any time a new feature, major bug fix, new driver, etc. gets added, it should include a release note so that it's captured.
16:06:46 <bswartz> smcginnis: is it settled that reno note updates can go into a separate patch?
16:07:08 <bswartz> smcginnis: I thought some people still argued for the same patch (to make backporting suck less)
16:07:23 <smcginnis> bswartz: I think I would prefer them to be with the patch making the change, but it can still be added after the fact if it is missed.
16:07:34 <smcginnis> bswartz: Backporting is a good point.
16:07:46 <xyang1> bswartz: backport new feature?
16:07:47 <bswartz> okay there is an active ML discussion on that topic
16:07:48 <geguileo> smcginnis: +1
16:07:48 <smcginnis> It would be kind of annoying to have to do two backports.
16:08:40 <smcginnis> I think s^H^H^Htuff happens, so I'm sure regardless of policy we will have to add release notes separately.
16:08:52 <smcginnis> But it definitely would be better to have it with the code change.
16:08:52 <e0ne> xyang1: sometimes it's helpful to get a list of critical bugs in release notes
16:09:09 <mtanino> +1
16:09:40 <xyang1> e0ne: right, makes sense for bug fixes
16:09:43 <smcginnis> e0ne: Yeah, knowing something has been fixed could really help operators I think.
16:09:54 <e0ne> smcginnis: +1
16:10:04 <smcginnis> #info Reviewers watch for inclusion of release notes in reviews
16:10:16 <smcginnis> #info Code submitters make sure to add release notes when needed.
16:10:19 <jungleboyj> smcginnis: Is it a -1 if they aren't included or not?
16:10:34 <erlon> smcginnis: is there any specific format or tag for the message?
16:10:39 <geguileo> smcginnis: So only bugs that are marked as critical must contain release notes, right?
16:10:39 <smcginnis> And I'm sure we'll have some gray areas where there's some debate as to whether something is RN worthy or not.
16:10:51 <smcginnis> jungleboyj: Yeah, probably.
16:11:08 <smcginnis> erlon: There is a format that gets generated when you run the reno command.
16:11:09 <e0ne> erlon: IMO, the DocImpact flag is a good candidate to have release notes
16:11:22 <smcginnis> Just follow that format. I'm sure after a while we'll all get used to it.
16:11:32 <erlon> e0ne: +1
16:11:40 <smcginnis> geguileo: I think definitely critical, but could also be other significant bugs.
16:11:47 <geguileo> Ok
16:11:50 <smcginnis> e0ne: Yes, good point.
16:11:52 <smcginnis> Again. ;)
16:12:02 <e0ne> NOTE: I didn't say each DocImpact change requires a release notes change
16:12:03 <geguileo> So not limited to those, good to know
16:12:05 <e0ne> :)
16:12:18 <smcginnis> One note of warning for new users of reno.
16:12:38 <smcginnis> You need to commit the new release note before running tox to verify it.
16:12:44 <dulek> IMO in general it's better to have too many RNs than not enough. :)
16:12:45 <smcginnis> It will not run against uncommitted release notes.
16:12:54 <smcginnis> A little confusing the first time I did it.
16:13:02 <smcginnis> dulek: +1
16:13:22 <smcginnis> Moving on...
16:13:23 <jungleboyj> If they have the release notes do we still need the DocImpact flag? I am thinking yes but just want to document that.
16:13:32 <smcginnis> jungleboyj: I think so.
16:13:50 <smcginnis> DocImpact should end up in docs.openstack.org, release notes are a shorter list of changes.
16:13:54 <e0ne> jungleboyj: +1 on DocImpact
16:14:03 <dulek> #link http://docs.openstack.org/project-team-guide/release-management.html#managing-release-notes
16:14:10 <jungleboyj> smcginnis: e0ne That was my thought.  Thanks for confirming.
16:14:14 <dulek> ^ general guidelines from dhellmann
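[Editor's note: for anyone who hasn't used reno yet, a note is generated with `reno new <slug>` (typically run via tox) and lands as a YAML file under `releasenotes/notes/`. A minimal sketch of what one might look like — the filename and wording below are made up for illustration, not from a real Cinder patch — and remember to commit the file before running tox, since reno only scans committed notes:]

```yaml
# releasenotes/notes/example-feature-abc123.yaml (hypothetical filename;
# reno appends a random suffix to the slug you give it)
features:
  - Added an example new feature or driver.
upgrade:
  - Noted an example behaviour change operators should know about.
fixes:
  - Fixed an example critical bug.
```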
16:14:14 <smcginnis> #topic Backend behaviour re snapshots
16:14:19 <scottda> This is a lot of info about Reno/release notes. Should this go on the Cinder wiki somewhere?
16:14:29 <smcginnis> scottda: I did add a little there.
16:14:34 <smcginnis> scottda: It's very light though.
16:14:36 <scottda> cool
16:14:43 <smcginnis> Feel free to expand on what I put there.
16:15:02 <smcginnis> Oh, side note on that - for the new driver section I added reno information and py3 unit test info.
16:15:26 <DuncanT> smcginnis: Sorry, I didn't sign this one on the agenda. More just pulling something in from the mailing list to get some answers quickly
16:15:30 <smcginnis> I think as we're switching over to py3 compat, we should make sure new drivers are compatible so we don't have to go back and fix them.
16:15:42 <smcginnis> DuncanT: Oh good, wasn't sure who did that.
16:15:48 <smcginnis> DuncanT: The floor is yours...
16:16:16 <DuncanT> So there's a mail on the mailing list once again asking about dependent snapshots
16:16:36 <DuncanT> Among things asking about whether extend volume breaks a snapshot
16:17:09 <DuncanT> I think it would be worth writing down what our snap rules are, and pushing some tempest tests to check that they are actually being followed
16:17:19 <DuncanT> Anybody think this is a bad idea?
16:17:31 <dulek> DuncanT: +1, great idea to have it formalized and tested.
16:17:43 <DuncanT> (e.g. we recently found VSA on kilo is different to LVM - I think it is not fixed)
16:18:04 <tbarron> It's a good idea.  I'm actually somewhat confused on this subject.
16:18:20 <DuncanT> This is probably going to result in driver bugs - are people prepared to work on the bugs if filed?
16:18:30 <smcginnis> tbarron: Me too. What are the snap rules?
16:18:37 <e0ne> DuncanT: +1, especially for tests
16:18:43 <smcginnis> My guess is different backends have different restrictions.
16:18:56 <smcginnis> Probably a good reason to get this written down.
16:19:04 <jungleboyj> smcginnis: +1
16:19:07 <mtanino> DuncanT: +1
16:19:18 <DuncanT> smcginnis: Indeed, at one point we said they have to copy the reference behaviour by hook or crook, but that seems to have faded
16:19:31 <e0ne> it's confusing that different drivers have different snapshots implementation
16:19:41 <smcginnis> DuncanT: much to jgriffith's dismay. ;)
16:19:49 <jgriffith> :(
16:19:58 <flip214> DuncanT: I know that the DRBD driver currently won't like removing a volume if it still has snapshots.
16:20:01 <DuncanT> I think if I push a tempest test then push an empty cinder change that has a depends-on tag then the new test will run against CI
16:20:03 <kmartin> DuncanT, you have a bug for the VSA issue in Kilo? if so please PM me with the id
16:20:06 <flip214> should the driver just return an error in that case?
16:20:11 <bswartz> +1 for enforcing snapshot semantics with tempest tests
16:20:18 <flip214> we plan to support that, but it'll take some time.
16:20:24 <smcginnis> I think at least with snapshots though, it's just the unfortunate reality that snapshotting is different. :/
16:20:27 <jgriffith> anybody have a link to the ML post?
16:20:39 <e0ne> #link http://permalink.gmane.org/gmane.comp.cloud.openstack.devel/71350
16:20:40 <jgriffith> never mind.. .found it
16:20:52 <DuncanT> flip214: I think that matches LVM, if it doesn't then returning a driver error is not acceptable IMO
16:21:28 <flip214> so, would that suddenly make the driver invalid?
16:21:48 <DuncanT> flip214:  It would be a bug
16:22:03 <smcginnis> I'm still a little unclear here. Are we saying we should not allow extending a volume with snapshots?
16:22:19 <DuncanT> smcginnis: I'm saying we need an answer to that question
16:22:22 <e0ne> can we handle it like a deprecation policy: file a bug in M and remove the driver in N if it isn't fixed?
16:22:48 <DuncanT> e0ne: I'm going for 'file bugs and chase people' initially
16:22:59 <smcginnis> e0ne: I think first we need to identify what needs to be "fixed".
16:22:59 <flip214> sorry, I'm confused. wasn't the ML post about removing a volume that still has snapshots? was it about extending?
16:23:02 <patrickeast> seems like creating the tempest test(s) might be a good start to see how much impact (read: how many drivers are broken) and we could evaluate whether or not to enforce the rules
16:23:06 <smcginnis> I really don't understand yet.
16:23:12 <e0ne> smcginnis: sure
16:23:25 <DuncanT> flip214: Both I think. And any other unclear semantics
16:23:27 <smcginnis> flip214: They mention extending as being a different constraint than deleting.
16:23:37 <smcginnis> flip214: But to me the main point was deleting.
16:24:09 <xyang> smcginnis: sounds like the email wants the behavior to be the same for delete and extend
16:24:20 <smcginnis> xyang: Yeah, that's my take.
16:24:32 <smcginnis> But I know some arrays can't delete a volume that has snapshots.
16:24:45 <e0ne> lvm can't do it
16:24:53 <DuncanT> I think the behaviour should start at 'whatever works in LVM' - that's the point of a reference implementation IMO
16:24:54 <smcginnis> And preventing extending a volume just because there are snapshots sounds like a poor user experience IMO.
16:24:54 <eharney> allowing deleting a volume that has snapshots breaks the Cinder model of what volumes and snapshots are
16:24:58 <xyang> smcginnis: yes, we have dependencies in db too
16:25:08 <eharney> snapshots are children of volumes, that's part of the concept
16:25:20 <e0ne> eharney: +1
16:25:25 <smcginnis> eharney: +1
16:25:40 <xyang> eharney: +1
16:25:53 <geguileo> eharney: +1
16:25:58 <smcginnis> My opinion, FWIW, these are two different concepts, so I don't see the need to enforce commonality between them.
16:26:14 <mtanino> eharney: +1
16:26:25 <eharney> i agree, the idea that the behavior needs to be the same doesn't make sense to me
16:26:50 <erlon> smcginnis: +1
16:26:55 <smcginnis> DuncanT: Showing my LVM ignorance again - what is the behavior there?
16:26:58 <xyang> smcginnis: maybe she was saying the metadata changed after extend, but the old snapshot cannot use the new metadata?
16:26:58 <DuncanT> I think what is being said here is 'our array needs these behaviours to be the same' - not a good argument
16:27:07 <DuncanT> smcginnis: I need to test carefully
16:27:18 <eharney> iirc LVM allows extending when snapshots exist
16:27:33 <DuncanT> smcginnis: I think extend is allowed and the snaps are fine
16:27:47 <jgriffith> Please read my response to the ML posting
16:28:00 <smcginnis> eharney, DuncanT: OK, that's consistent with how my backend works at least. ;)
16:28:06 <jgriffith> I'm not going to subject you to my rants again here on IRC but I'm telling you it's an awful idea
16:28:07 <smcginnis> jgriffith: link?
16:28:27 <smcginnis> jgriffith: No rants? What fun is that?
16:28:33 <jgriffith> smcginnis: :)
16:28:52 <hemna> I'm here, let the rants begin.
16:29:11 <smcginnis> #link http://lists.openstack.org/pipermail/openstack-dev/2015-December/081875.html
16:29:11 <DuncanT> jgriffith: What is an awful idea?
16:29:50 <jgriffith> smcginnis: DuncanT http://goo.gl/WSsXzF
16:30:05 <smcginnis> jgriffith: Quick read through your response. I agree, makes sense to me.
16:30:27 <DuncanT> jgriffith: Ah, I agree with your email. API behaviour consistency is rather the point of our existence
16:30:56 <jgriffith> I also don't see the current behavior between extend and snapshots as inconsistent at all
16:31:00 <jgriffith> quite the opposite
16:31:32 <DuncanT> jgriffith: +1
16:32:20 <hemna> jgriffith, +1
16:32:20 <smcginnis> So... should we try to get unit tests that verify behavior of allowing extending with snapshots present?
16:32:27 <smcginnis> Anyone have a backend that does not support that?
16:33:17 <flip214> smcginnis: if I extend a volume, and go back to a snapshot, the old size is used again, right?
16:33:18 <eharney> for some context, the glusterfs driver used to block that as not supported, it's been fixed to support it now
16:33:26 * DuncanT is on vacation for all of next week, but I'll start pushing tests asap - anybody wants to start, email me
16:33:27 <e0ne> does lvm support extend snapshot?
16:33:28 <smcginnis> flip214: That would be my expectation.
16:33:33 <DuncanT> flip214: Yes
16:33:36 <flip214> smcginnis: okay, sounds good
16:33:55 <jgriffith> e0ne: no... and to my point
16:33:59 <flip214> is there a consistency-group snapshot already?
16:34:09 <jgriffith> e0ne: extend snapshot doesn't make any sense in our context
16:34:11 <xyang> flip214: yes
16:34:18 <xyang> flip214: cgsnapshot
16:34:18 <jgriffith> e0ne: even if LVM itself supports it
16:34:25 <e0ne> jgriffith: agree
16:34:28 <smcginnis> jgriffith: Not extend snapshot, but extend a volume that has snapshots.
16:34:29 <DuncanT> eharney: I'd have considered that a bug in the gluster driver - they happen, sounds like you considered it a bug too, so everything works as expected process-wise
16:34:34 <jungleboyj> jgriffith: Why would someone extend a snapshot?
16:34:52 <jgriffith> smcginnis: yeah... sorry, I thought e0ne was specifically asking about extending snapshots
16:34:53 <e0ne> jgriffith: I misunderstood smcginnis's question
16:34:58 <jungleboyj> smcginnis: I don't think we have any issues with that test.  Will make sure.
16:34:59 <jgriffith> ahhh :)
16:35:02 <smcginnis> jgriffith: Maybe he was. :)
16:35:04 <jgriffith> ok... carry on then :)
16:35:40 <smcginnis> So, to be clear, we should be able to extend a volume and its snapshots should not be affected by it.
16:35:42 <xyang> e0ne: snapshot is taken at a point in time.  it's kind of frozen.
16:35:43 <smcginnis> Right?
16:35:56 <DuncanT> smcginnis: Yes
16:35:58 <jgriffith> smcginnis: correct
16:35:58 <e0ne> smcginnis: +2
16:36:10 <smcginnis> OK, good. I'm not totally off the reservation. :)
16:36:15 <DuncanT> Sounds like plenty of agreement, tempest tests to follow, maybe some documentation too
16:36:21 <smcginnis> DuncanT: +1
16:36:34 <flip214> smcginnis: if it's in a consistency group, the same holds for it, too?
16:36:40 <jgriffith> I think this comes up because there are some backends that do everything as a linked object and this creates some challenges for them
16:36:45 <wanghao> DuncanT: +1
16:36:51 <xyang> flip214: extend is not allowed with CG
16:36:52 <DuncanT> flip214: Yes
16:36:56 <jgriffith> anyway
16:36:58 <smcginnis> This could be good info to capture in eharney's effort to capture expected behavior documentation.
16:36:59 <flip214> ie. snapshotting a CG, and then restoring, would only restore the volumes in the CG at the time of snapshot?
16:37:07 <DuncanT> smcginnis: +1
16:37:09 <eharney> yes
16:37:16 <xyang> flip214: yes
16:37:25 <flip214> okay, understood. At least I believe I do ;)
16:37:27 <smcginnis> eharney: I still have that open in a tab by the way, and intend to contribute. Some day. :]
16:37:35 <eharney> smcginnis: me too :)
16:37:41 <smcginnis> Hah :)
16:37:53 * DuncanT doesn't - anybody got a link?
16:38:08 <smcginnis> https://github.com/eharney/driver_doc/blob/master/driver.rst
16:38:17 <DuncanT> Thanks
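[Editor's note: the semantics agreed above — snapshots are children of their volume, extending a volume with snapshots is allowed and leaves the snapshots untouched, deleting a volume with snapshots is not — can be sketched as a toy Python model. This is illustrative only, not Cinder or tempest code; the class and exception names are invented:]

```python
class SnapshotExists(Exception):
    """Raised when deleting a volume that still has snapshots."""


class Volume:
    def __init__(self, size_gb):
        self.size_gb = size_gb
        self.snapshots = []

    def create_snapshot(self):
        # A snapshot is frozen at the volume's size at creation time.
        snap = {"size_gb": self.size_gb}
        self.snapshots.append(snap)
        return snap

    def extend(self, new_size_gb):
        # Extending is allowed even when snapshots exist; existing
        # snapshots keep the size they were taken at.
        if new_size_gb <= self.size_gb:
            raise ValueError("can only grow a volume")
        self.size_gb = new_size_gb

    def delete(self):
        # Snapshots are children of the volume, so deleting a volume
        # with dependent snapshots is rejected (matching LVM).
        if self.snapshots:
            raise SnapshotExists("volume has dependent snapshots")
```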
16:38:26 <smcginnis> #topic Open Discussion
16:38:39 <ntpttr> Hi, I have a quick question
16:38:41 <jgriffith> smcginnis: do we have to discuss "open" again ?
16:38:42 <jgriffith> :)
16:38:47 <smcginnis> jgriffith: LOL
16:38:57 <smcginnis> ntpttr: Go for it!
16:39:00 <winston-d> FWIW, Li Yan used to work on Storwize but she's working for Intel now, so it's not that she has some backend to support or something.
16:39:02 <ntpttr> So I've been working on a fix for this bug for a little while https://bugs.launchpad.net/cinder/+bug/1508249 and we've discovered that right now any exceptions raised when checking quotas in delete calls (delete_volume, delete_snapshot etc.) are being silenced, so moving those checks to the API as they are rather than having them in the manager probably wouldn't make a difference because all that happens on failure is a message is written to the log
16:39:03 <openstack> Launchpad bug 1508249 in Cinder "Quota checks are inconsistent" [Undecided,In progress] - Assigned to Nate Potter (ntpttr)
16:39:25 <ntpttr> I'm wondering if there was a reason for those exceptions not getting raised, and if it would be a good thing to raise them in the API to tell the user that the quota update failed when it happens and maybe make it possible to delete it anyways with the force flag or something else, or if those exceptions should stay silent.
16:39:27 <jgriffith> ntpttr: oh... I'm glad you mentioned that!
16:39:45 <ntpttr> Here are links to the patches I have up for this right now, this question mostly applies to the last two of them because the first patch is updating retype and it was already raising the exception - https://review.openstack.org/#/c/254980/ https://review.openstack.org/#/c/249388/ https://review.openstack.org/#/c/249441/
16:39:48 <dulek> jgriffith: :) Hopefully you'll shed some light of the bug intentions. :)
16:40:08 <jgriffith> dulek: I'm afraid I may make things worse....
16:40:31 <ntpttr> jgriffith: oh boy.. :)
16:40:41 <jgriffith> ntpttr: dulek so the exceptions piece aside for now....
16:41:07 <jgriffith> ntpttr: dulek I found that checking quotas "after" the rpc call to be kinda "silly"
16:41:32 <jgriffith> ntpttr: dulek I thought it would be good to check at the API on entry and just be quick about it if/where we could
16:41:41 <jgriffith> ntpttr: dulek this also had the benefit of consistency in the code
16:41:59 <jgriffith> as opposed to just willy-nilly "I'll add a quota check here or there"
16:42:38 <dulek> jgriffith: My main problem with that is that this changes RPC API a lot and we're supposed to try to keep L->M compatibility here.
16:42:38 <smcginnis> So don't even try if you already know you can't. Makes sense.
16:43:01 <jgriffith> dulek: hmm...
16:43:02 <dulek> jgriffith: I'm not sure if code consistency is worth the risk.
16:43:20 <dulek> (I mean - to be able to do rolling upgrades - we need compatibility)
16:43:21 <jgriffith> dulek: in other words, we screwed up, now we just live with it because of rpc versioning ?
16:43:30 <ntpttr> yes, the rpcapi change is to add reservations parameters to pass them along to the manager
16:43:34 <jgriffith> dulek: not saying that's not a valid point
16:43:37 <ntpttr> so they can be committed/rolled back there
16:43:53 <winston-d> RPC API change is not backwards compatible?
16:43:55 <dulek> jgriffith: If the only gain here is code consistency, then I would say yes, let's live with that/
16:44:04 <ntpttr> it is backwards compatible
16:44:06 <jgriffith> dulek: ntpttr ok... well if it's not possible in the name of the elusive unicorn of rolling upgrades I guess that's the way it goes
16:44:08 <wanghao> jgriffith: in some case, we need to check the quota after rpc call, like manage volume/snapshot.
16:44:09 <DuncanT> The other option is to add the request id to the reservations table in the DB, right?
16:44:17 <dulek> jgriffith: If there's more - like quick user feedback - then I'm okay with it.
16:44:28 <jgriffith> dulek: yeah... I can see the point
16:44:34 <DuncanT> Then you can look it up on the far side
16:44:56 <jgriffith> DuncanT: you mean in essence do "both" for the old versions?
16:44:56 <wanghao> jgriffith: since we need to get the volume size from driver and then check the quota.
16:45:06 <dulek> jgriffith, ntpttr: I'm not saying it's not possible to make it backward compatible - it is, but the more changes, the more chances of introducing a bug.
16:45:12 <jgriffith> wanghao: say whhaaaat?
16:45:25 <jgriffith> wanghao: why do you have to get volume-size from backend?  That's what the DB is for
16:45:46 <wanghao> jgriffith: no, I said manage existing volume
16:45:46 <winston-d> jgriffith: he was talking about managing (importing) a vol into cinder.
16:45:47 * jgriffith smells a rat
16:46:05 <jgriffith> wanghao: oh... :)  Ok, I'm confused on that... let's circle back
16:46:08 <jgriffith> after this topic
16:46:14 <ntpttr> jgriffith: right now I've updated the patches to raise exceptions if quota checks fail in the API and it can be overridden with force, to give the user the quick feedback, but we aren't sure if that's a change that should be made
16:46:32 <ntpttr> jgriffith: so was there a reason then for the exceptions being silenced on the delete calls?
16:46:34 <DuncanT> dulek: There's a bunch of ways of doing this without needing an incompatible RPC I think
16:46:41 <jgriffith> Ok... so if moving the checks is a problem for rpc versioning... that's life; maybe we can look at DuncanT's idea and come up with a solution there
16:46:43 <DuncanT> ntpttr: Why is force allowed?
16:46:46 <dulek> (we're back with exceptions on quota errors in delete calls)
16:47:04 <jgriffith> ntpttr: the exception thing is actually problematic it turns out
16:47:20 <ntpttr> DuncanT: it was allowed in case the quota checks fail but the volume/snapshot must be deleted anyway, just experimenting with possibilities
16:47:30 <jgriffith> ntpttr: a lot of those were added recently in the name of "giving user feedback other than 503 or whatever it was"
16:47:33 <jgriffith> BUT
16:47:40 <dulek> DuncanT: Yeah, with ntpttr we've managed to make the code compatible. That's fine, but why even bother if there are no clear benefits of changing it?
16:47:45 <jgriffith> a side effect that  I noticed that kinda sucks is this:  http://goo.gl/FaKDl8
16:47:45 <DuncanT> ntpttr: How can a delete fail with a quota problem?
16:48:05 <jgriffith> DuncanT: good question :)
16:48:23 <DuncanT> dulek: The user feedback changes ('you're over quota' rather than not telling you) is definitely a big improvement
16:48:31 <winston-d> dulek: well, depends on our definition of 'clear benefit'
16:48:32 <dulek> DuncanT: If quota is out of sync it can, because we cannot reserve -1 quota on 0 volumes.
16:49:02 <DuncanT> dulek: Ah, we should just eat that exception - quota is out of sync so we aren't making things worse
16:49:11 <ntpttr> DuncanT: Hmm that I'm not sure, the exception is there in case any errors occur in add_volume_type_opts or reserve, it just wasn't being raised
16:49:14 <dulek> DuncanT: I'm totally for making such change for these calls. But for delete calls we're silencing the exception.
16:49:41 <dulek> So initial question probably should be:
16:49:51 <Swanson> hello
16:49:52 <dulek> Why are we silencing the quota exceptions for deletes?
16:50:11 <dulek> We're doing that consistently for delete_volume, snapshot, cg, cgsnapshot.
16:50:39 <DuncanT> dulek: Because they aren't something the tenant cares about or can fix
16:50:44 <winston-d> git blame?
16:51:44 <dulek> DuncanT: That was my intuition. So if I'm not missing anything - there's no point in moving quota checks from c-vol to API.
16:52:05 <dulek> DuncanT: We will just silence an exception - who cares where it's done?
16:52:19 <DuncanT> dulek: For delete? Not so much, no
16:52:23 <winston-d> dulek: that is for delete, only, right?
16:52:25 <dulek> (I know, code consistency is of value. ;))
16:52:40 <ntpttr> mostly delete, but the other case was retype
16:52:40 <DuncanT> dulek: I wasn't aware we were only discussing delete, sorry
16:52:45 <dulek> winston-d: Yes. For example for retype - gain from quick user feedback is understandable. :)
16:52:50 <ntpttr> which was raising the exception previously in the manager
16:52:56 <dulek> quick feedback to the user-
16:53:06 <DuncanT> retype should give the user feedback ideally I think
16:53:29 <dulek> DuncanT: +1 :)
16:53:45 <ntpttr> Okay - yesterday I separated retype into its own patch for this reason :)
16:53:59 <ntpttr> dulek: thanks for the feedback/help in the reviews
16:54:28 <dulek> jgriffith: You're okay with that outcome?
16:54:35 <jungleboyj> DuncanT: +1
16:55:37 <winston-d> ntpttr: is that all?
16:55:45 <dulek> Summing up - if we're silencing quota exceptions - there's no point in moving them. If not - there's value in giving quick feedback to the user and we should go and do checks in API. :)
16:55:50 <ntpttr> winston-d: yes I think that clears it up
16:55:57 <smcginnis> dulek: +1
16:56:14 <ntpttr> thanks everyone
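[Editor's note: the outcome dulek sums up below can be sketched as two small Python patterns — swallow-and-log a quota failure on delete (the tenant can't fix an out-of-sync quota), raise it at the API on retype (the tenant can act on it). This is a hand-written illustration, not actual Cinder code; `_reserve`, `OverQuota`, and the function names are invented:]

```python
import logging

LOG = logging.getLogger(__name__)


class OverQuota(Exception):
    pass


def _reserve(deltas, quota_ok):
    # Stand-in for a quota reservation; raises when the check fails.
    if not quota_ok:
        raise OverQuota(deltas)


def delete_volume(quota_ok=True):
    # On delete, a failed reservation usually means quota is already
    # out of sync; raising would block a delete the tenant can't fix,
    # so the error is logged and swallowed.
    try:
        _reserve({"volumes": -1}, quota_ok)
    except OverQuota:
        LOG.exception("Failed to update quota while deleting volume")
    return "deleted"


def retype_volume(quota_ok=True):
    # On retype, the user can act on the failure (free up quota),
    # so the exception propagates for quick feedback at the API.
    _reserve({"volumes_new_type": 1}, quota_ok)
    return "retyped"
```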
16:56:24 <smcginnis> 4 minute warning
16:56:26 <DuncanT> I've got one more thing to bring up in the last couple of minutes then
16:56:29 <winston-d> ok, I have a quick question about Cinder Ironic interaction.
16:56:37 <smcginnis> Arm wrestle for it!
16:56:42 <hemna> winston-d, sup
16:56:57 <winston-d> we discussed this during the working session and I'd like to know how we will follow up on that.
16:57:05 <DuncanT> A Toshiba engineer emailed me about where a driver can put things that don't fit in provider_info, provider_id etc
16:57:25 <smcginnis> Let's try to quick get winston-d's, if that's OK.
16:57:26 <hemna> winston-d, e0ne is working on a cinderclient patch that adds the ability to attach to the host where the cinderclient command is executed
16:57:29 <e0ne> winston-d: I'm working on it https://review.openstack.org/224124 and https://review.openstack.org/254878
16:57:40 <smcginnis> winston-d: Good enough?
16:57:46 <smcginnis> :)
16:57:47 <DuncanT> Specifically they need to store some of the volume_type attributes, like whether there are mirrors, so they can correctly clean up on delete even if the type has been edited
16:57:47 <winston-d> hemna, e0ne: thx!
16:57:52 <winston-d> smcginnis: yes
16:58:01 <smcginnis> OK DuncanT .
16:58:03 <hemna> winston-d, just need help testing and reviewing at this point I think.
16:58:16 <hemna> it's on my list of things TODO/test
16:58:22 <winston-d> hemna: sure, glad to help with review at least.
16:58:24 <DuncanT> I suggested admin_metadata was the only sane place for this
16:58:28 <hemna> winston-d, ok thanks :)
16:58:29 <smcginnis> DuncanT: They don't know from their backend if there are mirrors?
16:58:48 <DuncanT> Doing some auditing I discovered that the hitachi driver uses volume_metadata for this
16:58:51 <winston-d> hopefully we can test it soon with our Ironic AZ.
16:58:57 <DuncanT> smcginnis: Apparently not
16:59:02 <e0ne> winston-d: I didn't add attach/detach stuff into the latest code. will do it later this week
16:59:20 <hemna> if anyone is interested, os-brick has a WIP/POC for extending an attached volume  https://review.openstack.org/#/c/243776/
16:59:22 <DuncanT> hitachi driver looks like a malicious tenant can do bad things
16:59:24 <e0ne> winston-d: sounds great!
16:59:29 <smcginnis> DuncanT: So the question is should they all be using admin or volume metadata?
16:59:48 <smcginnis> hemna: Awesome!
16:59:49 <e0ne> hemna: added to my review inbox
16:59:52 <DuncanT> smcginnis: Definitely not volume_metadata - the tennant can change or delete that at will
17:00:03 <smcginnis> DuncanT: Yeah, that seems like a Bad Thing.
17:00:07 <hemna> I'd like to get that tested by various folks, so I can get it to land.
17:00:10 <jungleboyj> hemna: Awesome!
17:00:14 <smcginnis> DuncanT: Maybe file a bug against them to change that?
17:00:25 <DuncanT> smcginnis: A few drivers touch the volume_metadata, still auditing for what the bad effects are
17:00:40 <hemna> diablo_rojo, has been working on testing it for me for IBM.  I'd like to get some other vendors to test it as well if possible.
17:00:42 <DuncanT> smcginnis: Bug on the way, just trying to figure out if it should be security tagged
17:00:53 <jungleboyj> hemna: Do we need changes on the Nova side still though?
17:01:01 <hemna> but it has to be tested manually at the moment.
17:01:09 <hemna> jungleboyj, cinder and nova
17:01:14 <hemna> but os-brick has to have this first
17:01:15 <wanghao> smcginnis: jgriffith We can continue to talk about the get replica volume ref in #cinder channel. :)
17:01:17 <smcginnis> DuncanT: Hmm, good question. I would lean towards no.
17:01:20 <DuncanT> IBM writes to the admin metadata during replication.... should we formally allow drivers to change admin_metadata?
17:01:22 <smcginnis> OK, we're over.
17:01:33 <smcginnis> Let's go back to our channel for anything else.
17:01:35 <smcginnis> Thanks everyone.
17:01:43 <smcginnis> #endmeeting