16:00:42 <smcginnis> #startmeeting Cinder
16:00:43 <openstack> Meeting started Wed Jun 15 16:00:42 2016 UTC and is due to finish in 60 minutes.  The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:45 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:47 <openstack> The meeting name has been set to 'cinder'
16:00:53 <smcginnis> ping dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang tbarron scottda erlon rhedlind jbernard _alastor_ vincent_hou kmartin patrickeast sheel dongwenjuan JaniceLee cFouts Thelo vivekd adrianofr mtanino yuriy_n17 karlamrhein diablo_rojo jay.xu jgregor baumann rajinir wilson-l reduxio
16:00:55 <dulek> o/
16:00:56 <yuriy_n17> hi
16:00:56 <jgregor> Hello!
16:00:57 <eharney> hi
16:00:58 <ntpttr> hi
16:00:59 <aimeeu> 0/
16:00:59 <adrianofr> hey
16:01:00 <jseiler__> hi
16:01:00 <jgriffith> o/
16:01:02 <e0ne> hi
16:01:02 <smcginnis> Hey everyone
16:01:04 <erlon> hey
16:01:08 <DuncanT> Hi
16:01:09 <scottda> hi
16:01:12 <mtanino> hi
16:01:15 <xyang1> hi
16:01:17 <andrei_perepiolk> Hello
16:01:19 <patrickeast> hi
16:01:19 <smcginnis> #topic Announcements
16:01:21 <fernnest_> hi
16:01:25 <thingee> o/
16:01:32 <smcginnis> Just the usual stuff.
16:01:52 <smcginnis> #link https://etherpad.openstack.org/p/cinder-spec-review-tracking Review tracking
16:01:53 <jungleboyj> o/
16:01:58 <_alastor_> o/
16:02:03 <diablo_rojo> Hello :)
16:02:23 <bswartz1> .o/
16:02:33 <andrei_perepiolk> About contributing new driver
16:02:36 <smcginnis> Still a couple new drivers there I would like to get two cores signed up to make sure they are on track.
16:02:49 <andrei_perepiolk> yes
16:03:00 <smcginnis> andrei_perepiolk: Did you have a question about that?
16:03:01 <andrei_perepiolk> Open-E JovianDSS --- new one
16:03:13 <erlon> smcginnis: I added the HNAS driver to the bottom of the list, it's a refactor
16:03:19 <smcginnis> andrei_perepiolk: Right, do you have a link for the review you can add to that list?
16:03:27 <smcginnis> erlon: Great, thanks
16:03:31 <andrei_perepiolk> smcginnis: I think I need some guidelines
16:03:38 <andrei_perepiolk> not yet
16:03:48 <andrei_perepiolk> I am in the process of setting up CI
16:04:02 <erlon> scottda: you have done some reviews on that, can you sign up?
16:04:16 <e0ne> andrei_perepiolk: can we discuss it after the meeting in the #openstack-cinder channel?
16:04:23 <smcginnis> andrei_perepiolk: OK, as soon as you have a patch submitted, add it there and we can track it.
16:04:24 <andrei_perepiolk> yes
16:04:27 <scottda> erlon: The HNAS stuff? I'll review but I am not core
16:04:35 <smcginnis> #link https://etherpad.openstack.org/p/newton-cinder-midcycle Midcycle planning
16:04:47 <smcginnis> Please add midcycle topics to the etherpad.
16:04:48 <erlon> scottda: yes, humm no?
16:05:02 <erlon> scottda: I was pretty sure you were :)
16:05:02 <smcginnis> And sign up there if you plan on coming so we can plan accordingly.
16:05:16 <e0ne> erlon: are you talking about that patch with more than 2k LoC?
16:05:17 <andrei_perepiolk> e0ne: yes, lets discuss it there
16:05:32 <erlon> e0ne: yes
16:05:41 <smcginnis> scottda: I think we were good last week, but anything to add about the midcycle?
16:05:51 <e0ne> erlon: it will be very hard to review it :(
16:05:55 * dulek will be counting on the Google Hangout!
16:06:04 <scottda> smcginnis: No, nothing new. We are good to go
16:06:06 <erlon> e0ne: it counts almost as a new driver, so I put it in the new driver list
16:06:27 <e0ne> erlon: it makes sense
16:06:28 <smcginnis> scottda: Great, looking forward to seeing folks in Ft Collins.
16:06:29 <adrianofr> e0ne: a big part of it is to increase the test coverage. I think you'll like it :)
16:06:41 <smcginnis> #topic blueprints to discuss
16:06:46 <e0ne> adrianofr: I do:)
16:06:49 <smcginnis> So I have a few bps marked as Discussion.
16:06:58 <erlon> e0ne: there was a lot of things we needed to change/add into the driver. We tried to keep the changes in this patch minimal
16:07:01 <smcginnis> A few of them have been asked to submit a spec for more detail.
16:07:02 <andrei_perepiolk> smcginnis: will you share your opinions with me after meeting in openstack-cinder chat?
16:07:06 <e0ne> smcginnis: IMO, we can just drop them
16:07:11 <erlon> e0ne: and sent other changes in separate patches
16:07:20 <smcginnis> e0ne: The ones I listed here?
16:07:35 <e0ne> smcginnis: yes
16:07:43 <e0ne> #link https://blueprints.launchpad.net/cinder/+spec/create-volume-from-image-file
16:07:44 <erlon> e0ne: thanks!
16:07:56 <smcginnis> andrei_perepiolk: yep
16:08:08 <e0ne> smcginnis: I don't understand why we should re-implement glance features
16:08:23 <smcginnis> e0ne: I definitely agree on that one.
16:08:24 <eharney> i don't like the idea of this blueprint at all
16:08:33 <e0ne> smcginnis: and have more bugs that are not about cinder, but about uploading stuff
16:08:47 <smcginnis> OK good. I wanted to bring up here just to make sure I wasn't missing a compelling reason for this.
16:08:50 <smcginnis> I will reject that one.
16:08:51 <eharney> and file format vulnerabilities which have been an issue in every service handling these things
16:08:56 <erlon> eharney: +1
16:08:58 <smcginnis> The second one has a little more interest for me.
16:09:02 <e0ne> eharney: +1
16:09:07 <smcginnis> #link https://blueprints.launchpad.net/cinder/+spec/revert-volume-to-snapshot
16:09:10 <DuncanT> Putting large data transfers into cinder is, if we do it, going to need a significant re-architecture... probably a separate http server for data transfers
16:09:11 <e0ne> #link https://blueprints.launchpad.net/cinder/+spec/revert-volume-to-snapshot
16:09:12 <e0ne> :)
16:09:17 <smcginnis> But not sure if it would be something broadly supported.
16:09:39 <smcginnis> It could be useful. Then again, it could be dangerous and add complexity.
16:09:44 <erlon> smcginnis: I was discussing with him about that in the ML
16:09:48 <smcginnis> So interested in thoughts on that too.
16:09:50 <eharney> i'm fairly sure this isn't the first proposal for this feature, right?
16:09:56 <e0ne> smcginnis: again, it's only IMHO: this feature/flow is good for heat or mistral, not for cinder
16:10:05 <erlon> smcginnis: there are good reasons to have that,
16:10:08 <smcginnis> erlon: Oh that's right! I forgot about that. I need to go back and read that thread.
16:10:16 <DuncanT> Personally I'm against it, I think we've got a set of semantics and this doesn't add much value
16:10:32 <e0ne> smcginnis: and if some *cloud* software depends on volume UUIDs, it's a bad design
16:10:41 <xyang> smcginnis: regarding create volume from image file, how is this bp different from copy image to volume which we already have?
16:10:43 <erlon> DuncanT: I think you were in the discusstion as well
16:10:49 <e0ne> DuncanT: +1
16:11:01 <DuncanT> I've left comments on the spec about making it universal via implementing it with existing primitives if we do go for it
16:11:01 <smcginnis> xyang: Bypassing glance and sending an image directly to a volume.
16:11:02 <jgriffith> eharney: correct, it's not
16:11:28 <smcginnis> jgriffith, eharney: So rejected already in the past?
16:11:32 <patrickeast> so.. for the revert-volume-to-snapshot i thought the decision in the past has always been that you can, by creating a new volume from the snapshot
16:11:38 <eharney> i think rollback makes a lot of sense from a usability point of view, but i'd have to spend some time on the spec figuring out how practical it is
16:11:45 <e0ne> jgriffith: you don't want to add more APIs, do you? :)
16:11:46 <DuncanT> jgriffith has definitely expressed sane opinions on this in the past
16:11:48 <jgriffith> Honestly given some of the other things we're adding this would seem like the more useful and less problematic of all of them
16:11:56 <eharney> smcginnis: i assumed it was more that it's just hard and nobody drove it all the way through, but not sure
16:11:58 <jgriffith> e0ne: that ship has sailed :)
16:11:58 <bswartz> I know of users who do create-volume from image file (without glance) using existing cinder -- you don't need a new feature for it
16:12:26 <bswartz> the workflow is: create volume, attach volume, dd, detach volume
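For reference, the manual workflow bswartz describes would look roughly like the sketch below, driven through the standard openstack CLI from Python. This is only an illustration of the existing path (no new API needed): the volume name, server name, image path, and device path are placeholders, and the dd step is assumed to run inside the instance the volume is attached to.

```python
# Sketch of the existing "image file to volume without glance" workflow:
# create volume -> attach volume -> dd -> detach volume.
# SERVER, IMAGE_FILE and DEVICE are placeholders; OS_* credentials are assumed
# to already be set in the environment.
import subprocess

SERVER = "my-instance"          # instance the volume is temporarily attached to
IMAGE_FILE = "/tmp/image.raw"   # raw image file already present on that instance
DEVICE = "/dev/vdb"             # device the new volume appears as in the instance

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

run("openstack", "volume", "create", "--size", "10", "image-target")    # 1. create
run("openstack", "server", "add", "volume", SERVER, "image-target")     # 2. attach
# 3. copy the image data onto the block device, from inside the instance
run("ssh", SERVER, "sudo dd if=%s of=%s bs=1M conv=fsync" % (IMAGE_FILE, DEVICE))
run("openstack", "server", "remove", "volume", SERVER, "image-target")  # 4. detach
```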
16:12:29 <jgriffith> we've pretty much given up on the "be cloudy" thing... so we should either go all in or not :)
16:12:40 <patrickeast> jgriffith: haha, fair point
16:12:41 <smcginnis> bswartz: Right, certainly possible.
16:12:45 <eharney> i can't tell who is talking about which blueprint here
16:12:51 <smcginnis> jgriffith: :)
16:12:53 <e0ne> bswartz: looks like a mistral workflow
16:13:16 <smcginnis> eharney: Seems to be both, but I'm trying to talk about being able to revert to a snapshot.
16:13:18 <_alastor_> what was the "be cloudy" thing that was given up?
16:13:18 <jgriffith> eharney: sorry, I thought we were just talking about adding restore from snap
16:13:41 * bswartz apologizes for going back to the last topic
16:13:51 <smcginnis> _alastor_: That's a bigger philosophical discussion.
16:13:54 <jgriffith> _alastor_: things like "I don't like the name of the resource, so create a new one"
16:13:57 <_alastor_> ok, outside meeting then
16:14:25 <jgriffith> smcginnis: true.. nice save!
16:14:31 <smcginnis> ;)
16:14:33 <e0ne> jgriffith, _alastor_: the same for UUIDs
16:14:42 <Swanson> Hello
16:14:45 <patrickeast> how does the like 're-build' thing for instances work? thats similar to this revert idea, right?
16:14:50 <jgriffith> e0ne: you're trying to get me into trouble here :)
16:14:55 <patrickeast> does the instance get the same id?
16:15:06 <smcginnis> So are folks saying they are against the idea of rolling back to snapshot?
16:15:08 <DuncanT> re-build is a mess...
16:15:17 <jgriffith> so... what about this.  You allow reset to snapshot, but you don't allow it when the volume is attached
16:15:21 <e0ne> jgriffith: no, I'm just saying that we don't need to depend on UUIDs
16:15:27 <DuncanT> smcginnis: I'm mildly against
16:15:32 <smcginnis> Revert would be the same ID. Nothing changes but the data on disk, I would think.
16:15:42 <jgriffith> I frankly don't think that even if we do this we should ever do the whole thing of handling attached volumes
16:15:45 <DuncanT> jgriffith: Attached revert for sure we shouldn't allow
16:15:56 <erlon> smcginnis: yes
16:15:58 <smcginnis> Definitely no attached.
16:16:02 <DuncanT> jgriffith: That's crazy town
16:16:04 <smcginnis> That's just asking for data corruption.
16:16:08 <jgriffith> I would be curious how you're going to deal with the various corner cases of CG's etc
16:16:25 <eharney> wouldn't rollback perform better, and use less space, than having to clone etc?
16:16:26 <smcginnis> jgriffith: True, it would have to revert the entire group.
16:16:43 <smcginnis> eharney: That's the thing I like about the idea.
16:17:02 <eharney> smcginnis: the spec lists basically nothing under "Performance Impacts", but that's one of the interesting parts...
16:17:06 * DuncanT is also strongly against it unless it can be implemented generically on top of existing driver semantics (but I'm pretty sure it can be)
16:17:35 <erlon> smcginnis: for some backends like mine, reverting a CG is much better performance-wise than having to duplicate all volumes in the CG
16:17:37 <smcginnis> DuncanT: That's the sticking point.
16:17:50 <smcginnis> erlon: Same for a lot of us I think.
16:17:51 <_alastor_> If the driver supports rollback natively, will they be able to implement their own rollback rather than relying on cinder's generic implementation?
16:18:02 <smcginnis> But then there are some where it definitely wouldn't work.
16:18:02 <patrickeast> DuncanT: i don't think we can, it would change some assumptions for the driver methods... surely someone will break :(
16:18:08 <jgriffith> I'm not sure why clones would be used in this context as opposed to rollback/merge
16:18:18 <bswartz> we are doing revert-to-snapshot in Manila currently and one of the surprising sticking points is that the vast majority of backends which can implement it efficiently can only revert to the most recent snapshot -- reverting to older snapshots requires deleting the more recent ones -- so if you pursue this in cinder you might want to consider that issue
16:18:22 <erlon> _alastor_: they should
16:18:26 <Swanson> smcginnis, that isn't true for us is it?
16:18:34 <eharney> bswartz: good thing to know
16:18:37 <smcginnis> bswartz: Oh, interesting.
16:18:41 <jgriffith> _alastor_: yup
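As a rough illustration of that split (an optional native hook plus a generic fallback), it could look something like the sketch below. This is purely illustrative: the class and method names are invented here and were not part of Cinder at the time of this discussion.

```python
# Illustrative sketch only: an optional driver hook for native revert-to-snapshot,
# with a generic fallback when the backend does not implement it. None of these
# names are existing Cinder interfaces.

class BaseDriverSketch(object):
    def revert_to_snapshot(self, context, volume, snapshot):
        # drivers with native rollback override this
        raise NotImplementedError()

class VolumeManagerSketch(object):
    def __init__(self, driver):
        self.driver = driver

    def revert_volume_to_snapshot(self, context, volume, snapshot):
        # attached revert was ruled out above, so require a detached volume
        if volume['status'] != 'available':
            raise ValueError("revert requires a detached (available) volume")
        try:
            # backend-native rollback, if the driver provides one
            self.driver.revert_to_snapshot(context, volume, snapshot)
        except NotImplementedError:
            # generic fallback: copy the snapshot's data back onto the volume
            self._copy_snapshot_data_to_volume(context, snapshot, volume)

    def _copy_snapshot_data_to_volume(self, context, snapshot, volume):
        # placeholder for a brick/dd style data copy; details intentionally elided
        raise NotImplementedError("generic fallback not sketched here")
```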
16:18:44 <smcginnis> Swanson: We can go to any snapshot.
16:18:53 <jgriffith> bswartz: huh... wouldn't have thought of that
16:18:54 <DuncanT> patrickeast: I think it's doable, I'm a strong -2 if it can't be. Having this only work on some backends would be a disaster
16:18:58 <jgriffith> bswartz: that's a deal breaker IMO
16:19:12 <DuncanT> jgriffith: +1 on the deal breaker
16:19:16 <jungleboyj> I don't understand why a person wouldn't just create a new volume from the snapshot.
16:19:21 <smcginnis> Agreed
16:19:23 <jgriffith> jungleboyj: :)
16:19:25 <patrickeast> DuncanT: i guess the really generic one would be blow away the volume and give it a new fake id name like we do with migrations
16:19:25 <jungleboyj> Isn't that how snapshots are supposed to be used?
16:19:28 <smcginnis> jungleboyj: Performance.
16:19:28 <bswartz> jgriffith: it has to do with the dependencies of the snapshots on disk in most implementations
16:19:29 <DuncanT> jungleboyj: Performance, quota
16:19:31 <jgriffith> jungleboyj: you're getting "cloudy" on me :)
16:19:31 <eharney> jungleboyj: well for one, then you end up with two volumes, when you wanted one.
16:19:41 <smcginnis> We can revert to a snapshot instantly.
16:19:43 <jgriffith> DuncanT: performance?
16:19:56 <patrickeast> smcginnis: depends on the backend... some of us can clone instantly
16:20:03 <patrickeast> smcginnis: some will be slow either way
16:20:03 <jungleboyj> DuncanT: Ok.
16:20:04 <eharney> jungleboyj: "supposed to" as in "that's all Cinder lets you do"...
16:20:11 <jgriffith> DuncanT: ahh.. those that might have to dd or something
16:20:15 <jungleboyj> eharney: :-)
16:20:18 <jgriffith> Well... that's life IMO :)
16:20:22 <jgriffith> You can't have it all
16:20:23 <DuncanT> jgriffith: Revert is apparently much faster than create-from-snap on some backends, so the spec writers assure me
16:20:24 <smcginnis> patrickeast: Clone may be instant, but then reattach, rescan etc can take some time.
16:20:32 <bswartz> jungleboyj: the theory is that cutting out the middle step is faster in many cases -- it's why cinder has a create volume from volume API which is technically duplicating existing functionality
16:20:37 <smcginnis> But if we can't do attached (which we shouldn't) that does limit the advantage.
16:20:37 <jgriffith> It is... that's a true statement
16:20:44 <patrickeast> smcginnis: true, but we said it would have to be detached :(
16:20:50 <patrickeast> smcginnis: so we lose that part
16:20:55 <smcginnis> yeah
16:21:16 <jungleboyj> Ok, thanks all for the explanation.
16:21:26 <smcginnis> OK, great points. I'll not approve that one based on this feedback.
16:21:38 <eharney> if we don't put in the limitation that bswartz mentioned, doesn't that mean we'd end up with non-linear trees of snapshots?
16:21:41 <jgriffith> Now I remember why we've always said no on this :)
16:22:08 <bswartz> reverting to a snapshot can be dramatically more efficient than creating a new volume from a snapshot, so if the user wants to discard the current volume and go back to the snapshot, it could be worthwhile
16:22:32 <jgriffith> bswartz: but to your point, that also means snaps may be deleted
16:22:44 <jgriffith> bswartz: lvm is one of those cases
16:22:48 <DuncanT> eharney: If you think of snaps like that, then yes
16:22:53 <bswartz> jgriffith: we simply throw an error if the snapshot isn't the most recent one
16:23:00 <jgriffith> the merge will delete the snap you merged to
16:23:07 <bswartz> thus forcing you to delete them yourself if that's what you want
16:23:13 <smcginnis> Eew, bad juju
16:23:20 <patrickeast> part of the problem here is that we kind of start to get into the device-specific details of what a snapshot is... whereas we don't really touch that right now
16:23:26 <jgriffith> bswartz: no, you're missing my point
16:23:35 <DuncanT> Sounds like this is getting more and more ugly and special case
16:23:41 <jgriffith> bswartz: some backends (lvm) auto-delete when you merge back to a snap
16:24:03 <bswartz> jgriffith: yeah the manager would need to perform the check and veto the operation before it got to the backend
16:24:05 <DuncanT> jgriffith: Since it is an offline (unattached) operation, you can always immediately re-create the snap
16:24:08 <bswartz> or the API rather
16:24:24 <jgriffith> DuncanT: omg
16:24:40 <jgriffith> DuncanT: just because you *can* do something doesn't mean you should
16:24:53 <smcginnis> :)
16:25:08 <smcginnis> That's what grown ups always told me.
16:25:13 * jgriffith is not a fan of the device swapping magic behind the curtain
16:25:17 <DuncanT> jgriffith: I think all backends are going to have to work around semantic weirdnesses like this if we implement this feature
16:25:18 <jgriffith> smcginnis: LOL
16:25:24 <jungleboyj> So, I see how this is useful but I think I am in the camp with DuncanT that we need to be able to have a general implementation that will work for everyone.
16:25:38 <smcginnis> jungleboyj: I agree. Too many caveats here.
16:25:39 <jgriffith> DuncanT: yeah.. I'm checked out on this one yet again and saying "don't do it" :)
16:26:00 <bswartz> the utility of a special revert-to-snapshot API is certainly limited
16:26:04 <DuncanT> jgriffith: Fair.
16:26:06 <smcginnis> OK, that's all for that. There are a few other bps but I'll try to get them through in future meetings.
16:26:12 <smcginnis> Thanks for the feedback on these.
16:26:17 <jgriffith> smcginnis: that was easy :)
16:26:23 <smcginnis> reduxio_: You're all set now, right?
16:26:26 <smcginnis> jgriffith: ;)
16:26:36 <e0ne> jgriffith: :)
16:26:40 <reduxio_> smcginnis: yes I am thanks
16:26:41 <_alastor_> I'm in favor provided we get tempest tests to verify that the driver behaves correctly
16:26:47 <smcginnis> reduxio_: Great!
16:27:04 <smcginnis> #topic Open Discussion
16:27:25 <scottda> We've been meeting to talk about Cinder testing in the cinder channel
16:27:35 <scottda> At 1500 UTC Wed (all are welcome)
16:27:45 <dulek> smcginnis: scottda had another item I think.
16:27:49 <e0ne> scottda: one comment from my side
16:27:53 <scottda> It was brought up that maybe we should book time on a meeting channel
16:28:03 <scottda> e0ne: ?
16:28:22 <bswartz> +1 for booking time in an official meeting channel
16:28:33 <e0ne> scottda: if we do it in some -meeting- channel, we can use features like 'action items', 'topics', etc
16:28:54 <e0ne> scottda: it's not a big deal right now, but it would be useful
16:28:56 <scottda> e0ne: I think we can do that in cinder. I was able to "startmeeting" and get the bot
16:28:57 <dulek> e0ne: Turns out meetbot works also in #openstack-cinder.
16:29:14 <e0ne> scottda, dulek: cool!
16:29:16 <dulek> e0ne: http://eavesdrop.openstack.org/meetings/test/2016/test.2016-06-15-15.43.log.html
16:29:22 <smcginnis> My only concern is if it would be a distraction in channel.
16:29:28 <e0ne> I've missed meeting today:(
16:29:29 <scottda> The counter-argument was that it made the efforts more visible by being in the cinder channel
16:29:35 <smcginnis> I don't think so, and I like the visibility there, but I'm flexible
16:29:42 <scottda> smcginnis: Yes, last week there was some cross-conversation
16:29:52 <smcginnis> scottda: Yeah
16:30:11 <xyang> I like it in the cinder channel so I won’t forget about it
16:30:17 <smcginnis> Usually pretty quiet around then in channel. Kinda.
16:30:45 <smcginnis> But I'm absolutely fine getting into one of the #openstack-meeting* channels.
16:30:56 <scottda> Should we keep it in Cinder for now, and wait and see? Always easiest to do nothing.
16:31:02 <DuncanT> We can stay on the channel now and move it if it actually causes a problem?
16:31:14 <eharney> scottda: yes, since nobody has any strong argument
16:31:19 <scottda> BTW, with either choice I'll book the meeting time with infra.
16:31:31 <scottda> OK, good enough for me. Thanks.
16:31:49 <e0ne> scottda, smcginnis: at lease, we have to add this info to https://wiki.openstack.org/wiki/CinderMeetings#Next_meeting page
16:31:53 <xyang> scottda: if we move to another channel, please ping people in cinder channel when it starts
16:32:14 <scottda> xyang: No move yet, but I'll ping if we do change.
16:32:19 <smcginnis> xyang: Good idea
16:32:21 <xyang> sure
16:32:21 <e0ne> s/lease/least
16:32:28 <scottda> e0ne: Yes, I'll do that as well.
16:32:34 <e0ne> scottda: thanks
16:32:54 <smcginnis> scottda: I'm just glad the discussion is happening and it's getting attention. Thanks for driving this.
16:33:17 <scottda> smcginnis: np. I love tests.
16:33:22 <smcginnis> :)
16:33:23 <e0ne> scottda: noted!
16:33:25 <scottda> just like eharney
16:33:26 <smcginnis> You and eharney
16:33:28 <eharney> :)
16:33:30 <smcginnis> :D
16:33:40 <xyang> I have a topic if we are done with tests
16:33:42 <hemna> since we are talking about testing, I think we should probably add grenade tests to os-brick patches
16:33:55 <smcginnis> hemna: That is a good idea, given recent events.
16:33:57 <eharney> hemna: good idea
16:33:58 <scottda> hemna: I guess so
16:34:05 <e0ne> hemna: +1. I was thinking about it last night too
16:34:11 <diablo_rojo> hemna: +1
16:34:16 <smcginnis> xyang: Take it away
16:34:17 <hemna> https://review.openstack.org/#/c/329586/
16:34:24 <xyang> np
16:34:30 <hemna> FYI, reviews on that would be helpful.  gus has helped, and I'll make some changes
16:34:42 <hemna> xyang, go for it.
16:34:42 <e0ne> hemna, smcginnis: maybe we would even want to get grenade for cinderclient too
16:34:43 <diablo_rojo> hemna:  I think the patch looks good. Was glad to see Angus's comments
16:34:49 <xyang> so I am working on generic volume groups.  There are a few patches up for review
16:34:55 <xyang> I have one issue now
16:35:17 <smcginnis> e0ne: Not sure how that works
16:35:23 <xyang> that is how to use the new group APIs for CG stuff
16:35:25 <smcginnis> e0ne: We should discuss later though.
16:35:37 <xyang> one problem is that we have a group type now which was not there in CG
16:35:50 <e0ne> smcginnis: sure, it's good item for the next testing-related meeting
16:35:50 <xyang> group types: https://review.openstack.org/#/c/320165/
16:36:01 <xyang> groups: https://review.openstack.org/#/c/322459/
16:36:15 <xyang> group snapshots: https://review.openstack.org/#/c/328052/
16:36:34 <xyang> so should we create a default group type for existing CGs
16:36:42 <DuncanT> xyang: Make a magic group type?
16:36:51 <xyang> DuncanT: right:)
16:36:57 <eharney> is group snapshot covered in a spec?
16:37:09 <DuncanT> xyang: It's ugly but I can't think of anything between
16:37:23 <DuncanT> xyang: Is there any retype functionality to fix it up later?
16:37:27 <bswartz> is deprecating/dropping the old APIs a possibility?
16:37:37 <xyang> eharney: that is another thing. I have a spec for generic volume groups but didn’t realize we also need a group snapshot table
16:37:54 <xyang> so now I can add those in the existing spec or create a new spec for group snapshots
16:37:58 <DuncanT> bswartz: Officially? No
16:38:01 * bswartz ducks
16:38:13 <jgriffith> sigh
16:38:14 <smcginnis> I thought we were just going to internally redirect the existing CG apis to use the new group mechanism, right?
16:38:19 <xyang> DuncanT: retype for group type?
16:38:22 <eharney> well the problem is, when i look at the patch for group snapshots, my main thought is "this seems like it shouldn't exist"
16:39:00 <xyang> DuncanT: we could have a default group type for now and provide a way to change type later.  maybe as a cinder manage command instead of a retype?
16:39:01 <eharney> smcginnis: i thought so
16:39:18 <xyang> smcginnis: yes, but without a default group type, I can’t do that
16:39:27 <DuncanT> xyang: Sure, I'm not bothered over the form, it just seems like something you might want to do
16:39:28 <smcginnis> ok
16:39:44 <xyang> smcginnis: the existing CGs and CG table does not have group type, so I can’t really use the new one for the old one
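To make the "default group type" idea concrete: the thought is to seed one well-known group type that every pre-existing CG maps onto, so the old rows can live in the new tables. A minimal sketch of that mapping follows; the type name and group-spec key are placeholders here, since the real names were still up for review in the patches above.

```python
# Placeholder sketch of mapping existing consistency groups onto one well-known
# "default" group type in the new generic-group model. The type name and the
# group-spec key are illustrative, not the final implementation.
DEFAULT_CG_GROUP_TYPE = {
    'name': 'default_cgsnapshot_type',
    # a group of this type behaves exactly like today's consistency groups
    'group_specs': {'consistent_group_snapshot_enabled': '<is> True'},
}

def group_type_for_existing_cg(cg_row):
    """Every pre-existing CG row gets the one well-known group type."""
    return DEFAULT_CG_GROUP_TYPE['name']
```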
16:39:45 <eharney> but how did that end up with an API where a user creates a group snapshot?
16:39:53 <jgriffith> xyang: I would like to simplify this a bit if we can maybe
16:40:12 <jgriffith> xyang: I know you're not going to like this... but honestly; what about going back to just the basics here
16:40:23 <jgriffith> xyang: just offering the ability to create a group of volumes... period
16:40:40 <xyang> jgriffith: so that is there
16:40:44 <jgriffith> xyang: I don't understand all of this business of grouped types, snapshots etc
16:40:45 <eharney> i don't think that's an API feature that was supposed to be a feature on its own
16:41:00 <jgriffith> xyang: I know it's there.. my problem is all the baggage and extra stuff that came along with it
16:41:14 <xyang> jgriffith: so you mean take one step at a time?
16:41:15 <jungleboyj> jgriffith: Starting simple definitely would be best.
16:41:25 <dulek> jgriffith: How do you want to differentiate between replication and consistency group?
16:41:27 <jgriffith> xyang: partially, but I also mean the rest of that stuff isn't necessary
16:41:32 <jgriffith> xyang: to eharney 's point....
16:41:36 <DuncanT> jgriffith: the point of the group types is that we're looking at having some APIs that can be called on some groups but not others, and that needs to be discoverable /somehow/
16:41:49 <jgriffith> xyang: if you call a snapshot on a volume in a group, then it just does the right thing and snaps all of them
16:41:51 <eharney> it's not that it's not necessary or that it's a step at a time thing... it's that that's a feature that we probably specifically don't want
16:41:52 <DuncanT> jgriffith: We have the same issue with volume types now
16:41:53 <jgriffith> keep things simple
16:42:00 <hemna> jgriffith, +1
16:42:15 <jgriffith> DuncanT: I don't understand why that's necessary?
16:42:29 <hemna> I completely agree.  I think we've been adding way too many complicated APIs recently and it's turning into a mess IMHO
16:42:29 <DuncanT> jgriffith: But if it can't do a consistent snapshot, and I as a user can't tell that, there's a problem
16:42:30 <jgriffith> DuncanT: so you put something in extra-specs that says "this works with groups:xyz"
16:42:31 <jgriffith> done
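As a concrete sketch of the type-based approach jgriffith is describing, an admin could expose CG support purely through a volume type and its extra specs, roughly as below using python-cinderclient. The type name is illustrative, and the extra-spec key mirrors the consistencygroup_support capability the scheduler already filters on; credentials are read from the environment.

```python
# Sketch: expressing CG support via a volume type + extra spec instead of a
# separate user-facing API. The type name 'gold-cg' is illustrative.
import os
from keystoneauth1 import loading, session
from cinderclient import client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url=os.environ['OS_AUTH_URL'],
    username=os.environ['OS_USERNAME'],
    password=os.environ['OS_PASSWORD'],
    project_name=os.environ['OS_PROJECT_NAME'])
cinder = client.Client('2', session=session.Session(auth=auth))

# admin: create a type that promises consistency-group semantics
vtype = cinder.volume_types.create('gold-cg')
vtype.set_keys({'consistencygroup_support': '<is> True'})

# user: nothing group-specific to discover beyond picking the right type
vol = cinder.volumes.create(size=10, name='member-1', volume_type='gold-cg')
```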
16:43:01 <DuncanT> jgriffith: But a tenant can't see extra specs, so can't discover programmatically what will or won't work
16:43:03 <jgriffith> DuncanT: As a user, pick a volume type that supports CG for your group
16:43:21 <jgriffith> DuncanT: sigh
16:43:33 <DuncanT> jgriffith: For things like smaug, that means problems and manual per-cloud configs and such
16:43:34 <jgriffith> DuncanT: I'm saying this is a bad direction
16:43:57 <DuncanT> jgriffith: programmatic discovery is a bad direction?
16:44:08 <jgriffith> look... here's the thing.  Some clouds will support CG's (for example) some won't
16:44:09 <jgriffith> that's fine
16:44:10 <jgriffith> BUT
16:44:27 <jgriffith> in order to do that safely, IMO it should be abstracted via types and extra-specs
16:44:38 <ameade> not to be that guys again, but manila is working on share groups right now and doing what jgriffith is saying
16:44:40 <DuncanT> I'm fine with that, BUT
16:44:40 <jgriffith> none of that should be in a user facing API
16:44:41 <ameade> guy*
16:45:07 <xyang> ameade: manila will have group_snapshots too.  that is in the spec
16:45:07 <DuncanT> I disagree strongly that a user shouldn't be able to call some API and find out if this cloud supports CGs
16:45:22 <jgriffith> DuncanT: well we're never going to agree here
16:45:33 <bswartz> DuncanT: +1
16:45:38 <jgriffith> DuncanT: that's a value add a cloud provider may or may not want to give
16:45:42 <ameade> i think they should be able to discover capabilities
16:45:54 <ameade> but that capability doesnt have to be a separate api
16:45:57 <jgriffith> ameade: DuncanT they can... it's called documentation!!!!
16:46:13 <DuncanT> jgriffith: Documentation is a terrible solution
16:46:20 <e0ne> :)
16:46:28 <ameade> yeah that's not great
16:46:29 <jgriffith> I just want to remind everyone of something... we're not building cloud software for Mirantis, HP, Rax or others
16:46:29 <dulek> DuncanT: +1
16:46:37 <jgriffith> We're building a general platform
16:46:38 <DuncanT> Can we push this out to a user group or something? They're the people who're hurting
16:46:47 <jgriffith> alright.. I'm out
16:46:52 <dulek> jgriffith: The whole of OpenStack is driving toward interoperability. That also means discoverability.
16:46:55 <jgriffith> I can't push this rope any more
16:47:12 <Swanson> rabbit goes around the hole five or six times..
16:47:16 <jgriffith> dulek: ummm   interoperability != discoverability
16:47:31 <jgriffith> dulek: in fact... interop is WHY I'm saying what I'm saying
16:47:44 <DuncanT> jgriffith: It has to be as soon as you move beyond lowest common denominator
16:47:46 <jgriffith> dulek: what you all are proposing actually is the antithesis of interop
16:47:49 <dulek> jgriffith: To do interoperability between different public clouds you need to be able to discover what any of them offer.
16:48:10 <jgriffith> dulek: no, quite the opposite
16:48:18 <jgriffith> dulek: interop means shit just works
16:48:21 <eharney> not having an API to discover functionality only gets us interoperability if we can ensure that all drivers support the same features... which we know we can't
16:48:22 <jgriffith> dulek: they're interoperable
16:48:31 <jgriffith> eharney: thank you!!!
16:48:42 <hemna> if the API is solid, there is no reason to discover, it just works.
16:48:50 <eharney> it will never be "solid" in that way
16:48:50 <ameade> why not make it so that when you create a group snapshot, whether it is consistent or not depends on the extra spec? And if you don't care whether it is consistent then you don't specify it
16:48:54 <smcginnis> I think discoverability is a good thing. We need a base level of expected functionality that anyone can assume on any cloud. But extra functionality can be available. Whether that is known via documentation or code, I guess that's up to the cloud.
16:48:57 <dulek> hemna: Point is - it isn't.
16:49:01 <jgriffith> DuncanT: dulek what you're suggesting is actually harming interop more than anything else
16:49:09 <smcginnis> If we can do discoverability without over complicating things, then I think it's a good thing.
16:49:23 <eharney> not doing discoverability is more complicated than doing it
16:49:27 <DuncanT> We have a tenant facing API, right now, with no way to discover if it actually works or not. We should fix that before adding any more at all IMO
16:49:34 <ameade> for me, i want a consistent api that works between clouds but if one cloud has better capabilities, i want that one
16:49:34 <jgriffith> ameade: that's almost my point
16:49:43 <jgriffith> ameade: I'm saying get rid of the CG API's altogether
16:49:46 <ameade> yeah i'm totally arguing in the middle of you guys
16:49:48 <ameade> agreed
16:49:52 <hemna> jgriffith, +1
16:50:00 <xyang> group_snapshot will be implemented for LVM, every one can support it
16:50:16 <jgriffith> ameade: if a provider wants to offer that up, then they have a volume-type that says "group:cg-blahblahwhateverthehelltheyneed"
16:50:16 <DuncanT> jgriffith: I could get behind removing CGs as a solution
16:50:20 <eharney> xyang: so i still have major issues there as i was saying earlier...
16:50:40 <jgriffith> xyang: I'm saying that shouldn't exist either
16:50:48 <dulek> Are we able to move that discussion to ML to get broader attention and be able to share more detailed thoughts?
16:50:50 <jgriffith> xyang: I'm saying that it should just *work*
16:51:21 <jgriffith> xyang: if I say "snapshot volume xyz" and when cinder looks (or the backend) at that type info and sees it's part of a group (cg or whatever) it does a cg snapshot
16:51:22 <jgriffith> done
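In other words, roughly this control flow on the snapshot path. This is pure pseudocode of the behaviour being described, with made-up helper names; it is not existing Cinder code.

```python
# Pseudocode for "snapshot just does the right thing": a plain snapshot request
# checks group membership and, if the type promises consistency, snapshots the
# whole group together. All helper names here are illustrative.

def get_group_for_volume(volume):
    return volume.get('group')  # None for a standalone volume

def type_supports_consistent_snaps(group):
    # in the type-based scheme this would come from the group's type specs
    return group.get('specs', {}).get('consistent_snapshots') == '<is> True'

def snapshot_group(group):
    return [{'volume_id': v['id'], 'grouped': True} for v in group['volumes']]

def snapshot_single_volume(volume):
    return [{'volume_id': volume['id'], 'grouped': False}]

def create_snapshot(volume):
    group = get_group_for_volume(volume)
    if group and type_supports_consistent_snaps(group):
        # the type promises consistency, so snap every member of the group
        return snapshot_group(group)
    return snapshot_single_volume(volume)
```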
16:51:32 <smcginnis> dulek: It would be good to get operator feedback on this.
16:51:45 <DuncanT> jgriffith: But what if you call it on something that can't do a CG snap?
16:51:50 <jgriffith> we shouldn't be putting in a billion API calls with all kinds of corner cases, special behaviors etc
16:51:53 <dulek> smcginnis: I was also thinking about API-WG.
16:51:59 <jgriffith> DuncanT: then it's a snapshot
16:52:06 <dulek> jgriffith: I can agree on that! :)
16:52:28 <DuncanT> jgriffith: That's broken. If I can't tell in code whether they're consistent, it's broken
16:52:29 <jgriffith> DuncanT: and you're not doing CG's then so this whole argument is irrelevant
16:52:44 <jgriffith> DuncanT: consistency is defined by the snapshots
16:52:55 <DuncanT> jgriffith: If we remove CGs from cinder, then sure, it becomes easy and we've nothing to argue about
16:53:14 <jgriffith> so if you can't do consistent snaps then you're not offering cgs anyway so you already lied to your user
16:54:19 <xyang> jgriffith: I think what you suggested could work too.  driver can still decide how to create the snapshot if the volume is in a generic group
16:54:22 <DuncanT> jgriffith: consistency is a feature of a group of snapshots... If I'm writing a DB service that can be deployed on any cloud (e.g. a heat template) but I need CGs, silent corruption is not a good answer at all
16:54:26 <jgriffith> DuncanT: on the other hand, if you have a type that says "CG"  you damn sure better have it set up to work
16:54:44 <dulek> jgriffith: The point is the user should be able to check that universally.
16:54:53 <jgriffith> DuncanT: then that cloud shouldn't offer you a type that says "CG's"
16:55:02 <jgriffith> ok, I can't argue this any more
16:55:08 <DuncanT> jgriffith: And expecting somebody deploying a heat template to manually check for CGs overestimates your users
16:55:30 <smcginnis> jgriffith: OK, I think I missed your earlier point. Definitely if type == CG it absolutely has to have consistency.
16:55:30 <jgriffith> DuncanT: dulek your proposals don't solve the problem IMO
16:55:40 <jgriffith> smcginnis: yes
16:56:01 <jgriffith> smcginnis: so back in Austin however long ago we talked about "well-known/defined" types
16:56:02 <smcginnis> If it's type != CG then just individual snapshots.
16:56:03 <DuncanT> Says "type == CG" *where*?
16:56:09 <dulek> jgriffith: To be honest I haven't had a proposal, just seen this issue in yours.
16:56:10 <eharney> handling it all with types sounds like we fix problems for users but just push similar problems to admins instead
16:56:24 <smcginnis> Isn't that one of the types of groups a user will be able to create?
16:56:30 <DuncanT> The name of the type?
16:56:37 <jgriffith> DuncanT: yes, the name
16:56:54 <jgriffith> DuncanT: and details in the description
16:57:13 <dulek> jgriffith: Can you automate such discovery?
16:57:27 <jgriffith> dulek: just as well as you can capabilities
16:57:38 <ameade> in manila we are making those capabilities/extra specs public
16:57:42 <jgriffith> dulek: assuming you get providers to agree on what those defs should be
16:57:44 <dulek> jgriffith: Aren't caps having well-defined names?
16:57:57 <jgriffith> dulek: not currently... that work was never completed
16:58:20 <dulek> jgriffith: Oh. Anyway that was a step in good direction, wasn't it?
16:58:26 <smcginnis> ameade: Public extra specs?
16:58:30 <jgriffith> dulek: yes, for sure
16:58:38 <jgriffith> ameade: please no...
16:58:52 <jgriffith> volume-backend-name among others should never be exported
16:58:53 <DuncanT> jgriffith: the names are explicitly documented as freeform.
16:58:53 <DuncanT> jgriffith: And requiring e.g. a heat template user to read and understand the type description overestimates many of these users
16:58:53 <DuncanT> jgriffith: If we can add a simple API that does "Does volume type X support CGs? yes/no" then I'm happy with the rest of your proposal
16:58:53 <ameade> smcginnis: yeah, but i remember we can never agree on semantics
16:58:55 <dulek> jgriffith: So if we could achieve something like that with volume groups - it would be it.
16:59:03 <DuncanT> Without that API I think we've a problem
16:59:09 * smcginnis shudders
16:59:12 <ameade> smcginnis: at least not explicitly
16:59:13 <jgriffith> DuncanT: alright, just crate more API's
16:59:17 <jgriffith> create
16:59:23 <smcginnis> Sorry, we gotta go. We should continue this though.
16:59:32 <jgriffith> build something nobody can use, maintain or understand
16:59:32 <smcginnis> Thanks everyone.
16:59:54 <bswartz> remember to bring asbestos underwear to the next meeting
16:59:58 <smcginnis> #endmeeting