16:00:52 <thingee> #startmeeting cinder
16:00:53 <openstack> Meeting started Wed Dec 17 16:00:52 2014 UTC and is due to finish in 60 minutes.  The chair is thingee. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:54 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:58 <openstack> The meeting name has been set to 'cinder'
16:00:59 <smcginnis> o/
16:01:04 <e0ne> hi
16:01:07 <thangp> o/
16:01:08 <cknight> Hi
16:01:10 <lpabon> hi
16:01:10 <jordanP> hi
16:01:11 <dulek> o/
16:01:11 <xyang1> hi
16:01:14 <thingee> hi all
16:01:19 <erlon> o/
16:01:21 <tbarron> hi
16:01:23 <rushiagr> o/
16:01:24 <rushil> hi
16:01:32 <mtanino> hi
16:01:33 <e0ne> meeting agenda https://wiki.openstack.org/wiki/CinderMeetings#Next_meeting ;)
16:01:38 <bswartz> hi
16:01:38 <thingee> first of all thanks to everyone who has been helping with last minute k-1 merges
16:01:44 <thingee> here's the current priority list https://launchpad.net/cinder/+milestone/kilo-1
16:01:51 <thingee> https://etherpad.openstack.org/p/cinder-kilo-priorities
16:02:00 <jungleboyj> o/
16:02:24 <ameade_> o/
16:02:29 <thingee> We need the reviews in "ready" merged today.
16:02:45 <jungleboyj> thingee: aye aye captain!
16:02:50 <thingee> jaypipes: here?
16:03:13 <thingee> or jgriffith
16:03:18 <jgriffith> thingee: I'm here
16:03:35 <flip214> thingee: the DRBD driver is ready as of today.
16:03:37 <jaypipes> thingee: yup
16:03:43 <flip214> is that too late?
16:03:53 <thingee> jgriffith: lets quickly talk about removing that CI wiki page or updating it
16:03:54 <thingee> flip214: thanks, will take a look and retarget
16:03:57 <eikke> hello all
16:04:02 <flip214> I ask because it was in "abandoned"; I moved it up.
16:04:04 <jgriffith> thingee: sure
16:04:13 <jgriffith> thingee: so there are multiple wiki pages for CI right now :(
16:04:21 <jgriffith> thingee: neither is fully up to date
16:04:46 <asselin_> o/
16:04:46 * smcginnis Highlander: There can be only one!
16:04:53 <jgriffith> thingee: IMO we should pick one (or neither) and have one point of truth that's actually maintained
16:04:54 <thingee> there's information here https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver
16:05:06 <jgriffith> thingee: whether that's market-place or other I don't care
16:05:07 <thingee> and here https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
16:05:13 <jgriffith> but we shouldn't have multiple copies
16:05:23 <thingee> jgriffith: I agree
16:05:48 <dulek> thingee: Quick question - did you review the HA etherpad as last week's AR? I couldn't find anything on the ML about this.
16:05:52 <thingee> I spent the time to move a lot of stuff out of the how-to-contribute-a-driver page to external wiki pages since it was getting out of date itself
16:05:57 <thingee> dulek: no
16:06:03 <thingee> end of milestone problem atm ;)
16:06:08 <dulek> Okay, thanks!
16:06:12 <thingee> everyone wants me
16:06:35 <thingee> jgriffith: ok, I can do the same for this page and move stuff to point to the real third party wiki
16:06:46 <thingee> and give some suggested forks of jaypipes's work that may or may not work
16:06:50 <jgriffith> thingee: so are you stating which is the "real" page ?
16:06:51 <jgriffith> :)
16:07:00 <thingee> do you mind people referring to your basic ci repo?
16:07:20 <thingee> well I don't think ours is the real one honestly. I don't think we should maintain one either.
16:07:28 <jgriffith> thingee: agreed
16:07:31 <thingee> it's bound to get out of date again
16:07:42 <jgriffith> thingee: I think that was mostly set up for our own internal tracking early on
16:07:45 <jungleboyj> Are we talking about the 3rd party CI page?
16:07:54 <thingee> jungleboyj: https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
16:08:13 <smcginnis> I think infra's should be the one: https://wiki.openstack.org/wiki/ThirdPartySystems
16:08:26 <thingee> ok, seems like no one is opposed. I'll start moving that out
16:08:30 <thingee> ok lets start today's agenda
16:08:39 <jungleboyj> thingee: Ok, because there was also the question about the accuracy of https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
16:08:49 <thingee> #link https://wiki.openstack.org/wiki/CinderMeetings
16:08:50 <thingee> jungleboyj: that's what started this
16:08:51 <jungleboyj> Should I make a call for people to get that updated with current status?
16:09:00 <asselin_> one more quick comment on 3rd party ci
16:09:05 <asselin_> #link Third-party self service account creation - http://ci.openstack.org/third_party.html#creating-a-service-account
16:09:33 <asselin_> for anyone who doesn't have an account yet ^^^
16:11:07 <xyang1> there is another one https://wiki.openstack.org/wiki/Cinder/third-party-ci-status
16:11:56 <jgriffith> xyang1: yeah, that was my point earlier about there being two :)
16:12:03 <jungleboyj> xyang1: Yeah, that was the one I was asking about.
16:12:25 <jgriffith> xyang1: there's another one under the infra wikis as well
16:12:44 <smcginnis> Do we need anything more than the contact info on infra?
16:12:50 <jgriffith> xyang1: which I can't seem to find again at the moment :(
16:12:55 <smcginnis> https://wiki.openstack.org/wiki/ThirdPartySystems
16:13:09 <smcginnis> jgriffith: ^^ linked via infra
16:13:09 <jgriffith> smcginnis: thank you kind sir
16:13:13 <smcginnis> np
16:14:21 <jgriffith> so my point earlier was that the Cinder free-form wiki should probably go away
16:14:34 <jgriffith> and this infra page should be the "source of truth"
16:14:51 <jgriffith> albeit I'd love to add a change to separate based on project
16:15:16 <erlon> jgriffith: how about all the other information that was in the first wiki?
16:15:18 <thingee> #topic Volume deletion while snapshots exist
16:15:18 <thingee> smatzek: here?
16:15:18 <thingee> jungleboyj: I'll update the wiki and respond to the ML post
16:15:35 <smatzek> thingee: here.
16:16:09 <jungleboyj> thingee: Ok, won't insert myself there then.
16:16:13 <jgriffith> no offense but why are we talking about this again?
16:16:25 <bswartz> we lost thingee
16:16:27 <smatzek> The blueprint https://review.openstack.org/#/c/133822/ has -1s from some Cinder cores and a +1 from another.
16:16:55 <jgriffith> smatzek: this topic comes up every few months, and every few months we end up saying no
16:17:03 <thingee> I'm back
16:17:09 <thingee> sorry folks...irc issues
16:17:10 <jgriffith> smatzek: maybe this time will be different...
16:17:35 <thingee> jgriffith: are we done with this topic? :P
16:17:42 <smcginnis> Why is it a Bad Thing?
16:17:44 <jgriffith> thingee: :)  yes
16:17:46 <e0ne> :)
16:17:49 <hemna> :)
16:17:50 <smatzek> Some volume drivers have implemented snapshots in their backend by calling common methods to do volume clones.
16:17:53 <jungleboyj> smcginnis: +1
16:17:59 <jgriffith> smcginnis: because there are a number of drivers where the snapshots are linked and they can't do this
16:18:02 <eharney> deleting a volume and leaving hanging snapshots doesn't really make sense in our model of how things work.
16:18:04 <thingee> smatzek: sorry I'll have to review the logs after...just had irc issues
16:18:05 <jgriffith> IIRC Ceph is one, LVM is another
16:18:06 <jungleboyj> I don't understand the resistance to this.
16:18:14 <eharney> if this were about deleting a volume and removing the snapshots associated with it, it would make sense
16:18:24 <jgriffith> We've intentionally designed a use model that avoids this
16:18:32 <hemna> and other backends can't delete volumes that have snapshots.
16:18:37 <e0ne> eharney: +1
16:18:41 <jgriffith> hemna: that's what I just said :)
16:18:45 <jgriffith> hemna: linked :)
16:18:52 <e0ne> imo, it should be only for admin
16:18:55 <jungleboyj> thingee: We aren't actually done.  You didn't miss much.
16:18:56 <bswartz> I don't have a problem with allowing deletion of volumes with existing snapshots, but the restriction that you can't do that is buried deep in cinder's DNA, so changing that will require a really strong use case
16:18:58 <hemna> ours for one cannot delete the volume and leave the snapshot.
16:18:58 <smatzek> EMC VMAX and SVC are two examples.  In these Cinder volume drivers it would allow space savings by letting the volume be deleted with Nova instances in this flow: snapshot the Nova instance, delete the Nova instance, but the user wants to keep the image of the instance.
16:19:01 <hemna> we would break
16:19:07 <eharney> jungleboyj: a proposal involving some kind of "promote this snapshot to a volume" might make sense, so you didn't have hanging leaves
16:19:10 <jgriffith> smcginnis: smatzek so the problem is you end up with goofy mixed behaviors based on driver
16:19:15 <jgriffith> which is what we strive to NOT have
16:19:25 <thingee> jgriffith: +1
16:19:26 <jgriffith> eharney: jungleboyj there already is!
16:19:30 <eharney> imo different driver behaviors is not the problem
16:19:32 <jgriffith> eharney: jungleboyj create volume from snapshot
16:19:34 <thingee> we're not going back to a matrix
16:19:45 <jgriffith> eharney: jungleboyj if you implement that as a fancy promote, that's great
16:20:00 <thingee> was the use case already mentioned for this?
16:20:04 <thingee> heh sorry
16:20:13 <winston-d> Instance snapshot is totally different from volume snapshot
16:20:30 <hemna> yup
16:20:32 <rushiagr> yeah. I remember we defined what a 'snapshot' means in Cinder terms, and it's totally valid
16:20:52 <thingee> use case?
16:21:02 <smatzek> The use case is Nova boot from volume.  Nova snapshot instance, which produces a Glance image which references Cinder snapshots.  Nova delete instance.  The Cinder volumes backing the instance are not deleted as they have related snapshots.
16:21:20 <hemna> smatzek, you are comparing apples and oranges
16:21:21 <tbarron> smatzek: does promote snapshot to volume followed by delete of the original volume address your use case?
16:21:51 <hemna> images are not snapshots, they are self contained separate entities.   volume snapshots are not, for some backends.
16:21:55 <smatzek> tbarron:  it does not since the Glance image metadata contains block device mappings which point to the snapshot using a snapshot_id field.
16:22:08 <winston-d> smatzek: instance snapshot is a glance image, it can be completely unrelated to cinder.
16:22:47 <jgriffith> winston-d: yeah, but Bootable Volumes or attached Volumes create a Cinder Volume snap under the covers
16:22:51 <eharney> the BDMs contain references to cinder snapshots?
16:22:57 <jgriffith> winston-d: honestly it's kind of a lame implementation IMO
16:22:58 <smatzek> winston-d:  they are related.  1. Nova boot from volume.  2. Nova snapshot instance.  In this case Nova API will call Cinder snapshot on the volumes behind the VM, producing Cinder snapshots, which are then pointed to by Glance metadata
16:22:59 <thingee> does it make sense for a flag in delete volume to also delete snapshots? I think that's all smatzek wants here
16:23:20 <jgriffith> thingee: that does, but I don't think that's what he wants
16:23:25 <eharney> thingee: so that was how i originally read this, and i like that idea, but that wasn't what he wanted
16:23:27 <hemna> thingee, I thought he wanted to keep the snapshot around and delete the volume.
16:23:33 <jgriffith> thingee: so currently when you boot you can say "delete on term"
16:23:39 <jgriffith> but that pukes if you make a snapshot
16:23:51 <jgriffith> which is "ok" IMO
16:24:04 <jgriffith> all the tools are there; the volume, the snap etc
16:24:05 <jungleboyj> hemna: I think that is what he wants.
16:24:07 <thingee> if that's not what smatzek wants, the use case should be rephrased :)
16:24:11 <jgriffith> just create volume from snapshot
16:24:12 <jgriffith> done
16:24:15 <bswartz> cinder volume delete -f ?
16:24:15 <hemna> jungleboyj, yah and that's bad IMO
16:24:16 <jgriffith> moving on :)
16:24:18 <thingee> "The use case is Nova boot from volume.  Nova snapshot instance, which produces a Glance image which references Cinder snapshots.  Nova delete instance.  The Cinder volumes backing the instance are not deleted as they have related snapshots."
16:24:19 <smatzek> what I want is an extra capability that can be set on a per-volume-driver basis which will let volume drivers delete Cinder volumes in Cinder and in the backend while leaving the Cinder snapshot.
16:24:23 <jgriffith> bswartz: nope
16:24:27 <flip214> thingee: you'd need such a flag on "delete consistency group" then too...
16:24:28 <hemna> jungleboyj, if all he wants is a cascade delete, then that's ok.
16:24:33 <jgriffith> bswartz: you CANT delete volumes with snapshots
16:24:39 <jgriffith> bswartz: period, full stop
16:24:40 <jgriffith> :)
16:24:45 <hemna> jgriffith, +1
16:24:48 <bswartz> cinder volume delete -rf ?
16:24:53 <jgriffith> bswartz: LOL
16:25:01 <eharney> there's still an alternate proposal to delete volumes and snapshots
16:25:04 <bswartz> the implication would be delete the snaps first and the volume last
16:25:06 <jgriffith> bswartz: seems that's what might be proposed
16:25:15 <thingee> smatzek: for your first use, sure. I'm not sure we want your second use case.
16:25:19 <hemna> smatzek, yah that's what I was afraid you were proposing.  That fundamentally won't work on many of Cinder's arrays.
16:25:22 <thingee> reasons already expressed by others.
16:25:23 <smatzek> jgriffith:  That's not what's proposed.
16:25:23 <winston-d> bswartz: you need root privilege to do that
16:25:41 * flip214 .oO(  bikeshed painting ...  )
16:25:44 <jgriffith> smatzek: it's not in your proposal, but it was just suggested in the rolling delete option
16:25:55 <smatzek> What's being proposed won't work for many Cinder volume drivers.  But it would work for some.
16:26:03 <hemna> eharney, that would be ok.  I think of it as a cascade delete.  :)  delete the volume and delete all its snapshots as well.
16:26:07 <jgriffith> smatzek: which makes it unacceptable
16:26:08 <eharney> hemna: right
16:26:18 <hemna> smatzek, and that's exactly what we are trying to avoid in Cinder.
16:26:21 <jgriffith> smatzek: ^^  see suggestion of cascading delete
16:26:23 <eharney> forget the driver differences.  The concept of "a snapshot that refers to no volume" doesn't make sense.
16:26:32 <jgriffith> eharney: +1
16:26:32 <rushiagr> eharney: +1
16:26:40 <thingee> ok folks I'm moving on. There has been expressed concern with this feature due to it having different behavior across drivers, which we don't want
16:26:43 <jgriffith> smatzek: eharney that's why we introduced clone :)
16:26:47 <eharney> maybe Glance should be using clones instead of snaps for this..
16:27:01 <e0ne> eharney: +1
16:27:10 <jgriffith> eharney: yeah... but as a side note; that's not so hot when using LVM
16:27:11 <hemna> next song.
16:27:12 <hemna> next
16:27:13 <flip214> eharney: +1
16:27:15 <jgriffith> hemna: +1
16:27:20 <winston-d> eharney: you mean Nova when creating an instance snapshot for BFV?
16:27:40 <jgriffith> So just to be clear....
16:27:53 <jgriffith> snapshots of instances running BFV is kinda ridiculous anyway
16:28:00 <jgriffith> you're using a persistent store
16:28:01 <eharney> winston-d: dunno,  but it feels like it's doing something with a snapshot id that is questionable.  I don't really know, so, we should probably sort out that design elsewhere...
16:28:05 <smatzek> eharney:  I would like that Nova implementation better as well, but Nova's BDM specification doesn't currently allow it.
16:28:06 <jgriffith> you already have an image
16:28:08 <jgriffith> per se
16:28:22 <jgriffith> if you want to upload it to glance as a regular image... good enough
16:28:30 <jgriffith> convert to image
16:28:31 <jgriffith> done
16:28:33 <jgriffith> moving on
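[The closest thing to agreement above is the "cascading delete" alternative (eharney/hemna/bswartz): delete a volume's snapshots first, then the volume, so nothing is left orphaned. A rough sketch of that idea only; the function and the volume_api method names are hypothetical, loosely modeled on cinder.volume.api, and nothing like this was actually specced in the meeting.]

    def cascade_delete(volume_api, context, volume):
        # Hypothetical helper illustrating the cascading-delete idea: remove a
        # volume's snapshots first, then the volume itself, so no snapshot is
        # ever left without a parent volume. Method names are illustrative.
        snapshots = volume_api.get_all_snapshots(
            context, search_opts={'volume_id': volume['id']})
        for snapshot in snapshots:
            # In practice each delete is asynchronous, so a real implementation
            # would wait for the snapshots to be gone before deleting the volume.
            volume_api.delete_snapshot(context, snapshot)
        volume_api.delete(context, volume)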
16:28:45 <thingee> #topic Return request ID to caller
16:28:54 <thingee> abhijeetm: here?
16:29:00 <abhijeetm> yes
16:29:12 <thingee> #link http://lists.openstack.org/pipermail/openstack-dev/2014-December/052822.html
16:29:41 <abhijeetm> which is a good solution for returning the request ID back to the caller?
16:30:03 <flip214> abhijeetm: +1 to solution 1 ....
16:30:27 <jgriffith> abhijeetm: to be clear... by caller you mean "internal code"
16:30:32 <flip214> if there's no split made now, it might be necessary in the future. better do it now while there are fewer users.
16:30:45 <jgriffith> abhijeetm: not  like a response to "nova volume-attach xxxx" on the cmd line
16:30:48 <jgriffith> client
16:30:50 <eharney> caller of a method in the client lib
16:30:59 <eharney> i think?
16:31:10 <jgriffith> eharney: me too, but that's why I'm asking
16:31:23 <jgriffith> eharney: because depending on the answer my opinion changes :)
16:31:47 <abhijeetm> But solution #1 is not compatible with services which are using the old cinder client
16:32:14 <thingee> abhijeetm: but it doesn't break them
16:32:23 <abhijeetm> and solution # 2 is already merged in glance : https://review.openstack.org/#/c/68524/7
16:32:28 <thingee> I think I already decided this on some patch a while ago and it was solution #1
16:32:34 <thingee> not sure why this is being brought up again
16:33:06 <thingee> abhijeetm: is there anything else?
16:33:34 <thingee> #topic Reopen discussion of "RemoteFS configuration improvements"
16:33:39 <abhijeetm> no, I will submit spec for sol # 1
16:33:43 <thingee> erlon: here?
16:33:46 <erlon> hi
16:33:48 <abhijeetm> thanks
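[For context on the request-ID topic just closed: the API already returns the request ID in the x-openstack-request-id response header; the ML thread is about how python-cinderclient should surface it to callers. A minimal, hypothetical illustration of the general idea only, mirroring neither solution #1 nor #2; the function name and return shape are made up.]

    import requests

    def create_volume(endpoint, token, size_gb):
        # Hypothetical sketch: call the Cinder API directly and hand both the
        # result and the request ID back to the caller.
        resp = requests.post(
            '%s/volumes' % endpoint,
            json={'volume': {'size': size_gb}},
            headers={'X-Auth-Token': token})
        resp.raise_for_status()
        return resp.json()['volume'], resp.headers.get('x-openstack-request-id')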
16:33:58 <thingee> #link https://review.openstack.org/#/c/133173/
16:34:06 <thingee> spec^
16:34:11 <thingee> summary of problem:
16:34:14 <thingee> #link https://etherpad.openstack.org/p/remotefs-improvments
16:34:33 <erlon> so,  basically the discussion about this is
16:34:45 <jgriffith> erlon: how about just moving all of these to Manila :)
16:34:52 <bswartz> gah!
16:34:59 <eharney> the main question this arrived at, i think, is whether NFS/GlusterFS/RemoteFS drivers should support pools
16:35:04 <eharney> please don't do the manila thing
16:35:09 <hemna> lol
16:35:10 <erlon> spec is that we could use pool-aware scheduling and get the same benefits as proposed
16:35:19 * jgriffith shuts up
16:35:30 <hemna> why not move this to Manila?
16:35:36 <eharney> i was planning to eliminate the pseudo-scheduler code in the drivers in favor of pushing toward multi-backend in Kilo
16:35:40 <bswartz> hemna: it's not relevant
16:35:44 <eharney> because Manila is not for serving volumes to instances.
16:35:54 <bswartz> blocks over NFS belongs in cinder
16:35:59 <winston-d> what? isn't the pool support strongly requested by NetApp's NFS driver?
16:36:05 <bswartz> blocks over any filesystem is a cinder thing
16:36:10 <jgriffith> hemna: they're right... just the architecture is broken
16:36:14 <erlon> using the pool aware scheduler we eliminate the pseudo scheduler
16:36:16 <akerr> we support pools on our NFS driver already
16:36:18 <eharney> This proposal pushes more toward pool support instead
16:36:20 <winston-d> s/requested/demanded/
16:36:30 <eharney> this is not relevant for hardware NFS platforms/drivers for the most part
16:36:38 <eharney> afaict
16:36:47 <jgriffith> winston-d: +1  Certainly was "DEMANDED"
16:37:05 <jgriffith> and I'm puzzled after being called "unfair" to Netapp for initially not liking it
16:37:19 <hemna> ok so what's the problem with enabling the NFS drivers w/ pools ?
16:37:29 <bswartz> I'm not sure what the problem is either
16:37:32 <bswartz> the spec looked good to me
16:37:37 <hemna> bswartz, +1
16:37:44 <eharney> which spec?
16:37:46 <bswartz> it solves an existing problem
16:37:58 <bswartz> the initial NFS driver was implemented in a bad way, and this tries to fix it
16:38:01 <eharney> there may not be a big problem, this proposal just popped up and conflicts somewhat with the spec we already reviewed
16:38:12 <eharney> and i don't know how they mesh or whether they should yet
16:38:15 <erlon> bswartz: but that conflicts with the pool-aware design direction that most drivers are going in
16:38:20 <thingee> eharney: https://review.openstack.org/#/c/133173/
16:38:30 <hemna> eharney, ok, so it sounds like we need another spec to introduce pools to the NFS drivers, and remove the merged spec ?
16:38:39 <jgriffith> bswartz: read line 11 of the etherpad
16:38:41 <bswartz> erlon: I disagree, it just requires drivers to handle the pools case themselves if they want
16:38:44 <eharney> it's not clear that the NFS drivers should support pools
16:38:50 <eharney> or what bswartz said as well
16:39:15 <jgriffith> sigh... now I'm utterly unsure what to make of this
16:39:29 <jgriffith> Pools were proposed for NFS to begin with, weren't they?
16:39:32 <bswartz> if an NFS-based driver wants to export multiple pools, it needs to handle that internally -- not by relying on the parent class's broken approach
16:39:33 <hemna> if a driver wants to support pools, great.   if not, great.  next.
16:39:35 <eharney> let me rephrase
16:39:41 <erlon> bswartz: it makes it impossible for NFS drivers to use pools
16:39:42 <jgriffith> eharney: and now you're pointing out they shouldn't be there?
16:39:57 <eharney> everyone is mixing up two kinds of NFS drivers here.
16:39:59 <akerr> it's not impossible to support pools on NFS, NetApp does that today
16:39:59 <bswartz> this proposal fixes the parent class without harming all the drivers that inherit from it, I believe
16:40:09 <tbarron> bswartz: +1
16:40:13 <jgriffith> hemna: the proposed changes to Shares might make that "not possible"
16:40:18 <eharney> there is a basic NFS driver where you have a list of shares that it mounts and it throws volumes across them, the basic software NFS driver
16:40:31 <eharney> there are other NFS drivers that are attached to hardware that has pools from the hardware side
16:40:40 <bswartz> eharney: yes, and that's the broken thing, which should be fixed
16:40:42 <eharney> this proposal is, we should use pools on the software basic NFS driver as well
16:41:04 <eharney> this proposal = the one erlon just posted
16:41:12 <bswartz> I wouldn't block enhancements to the generic NFS driver as long as they can be overridden by child classes
16:41:14 <eharney> https://review.openstack.org/#/c/141850/
16:41:23 <erlon> akerr: it is possible if we make a small change to the RemoteFS driver
16:41:33 <eharney> so the question of blocking is: we are redesigning the configuration for the remotefs/nfs drivers
16:41:47 <eharney> and we need to decide if we are redesigning it the way i proposed, or in a different way
16:41:49 <thingee> eharney: thank you :)
16:42:24 <bswartz> +1 for the way eharney proposed
16:42:31 <eharney> having just looked at this proposal <24h ago, i haven't thought through the design enough to know all the details
16:43:39 <bswartz> at the very least, we need to do what eharney proposes to fix an existing annoying config issue
16:43:49 <erlon> eharney: we don't need to redesign the configuration, only the scheduling support part
16:44:04 <bswartz> if someone wants to subclass the generic NFS driver with a pool-supporting-NFS-driver that's cool, and they can configure that however they want
16:44:20 <erlon> bswartz: I only disagree with the 1 pool limitation
16:44:33 <erlon> the configuration problem is totally ok
16:44:33 <thingee> bswartz: makes sense
16:44:34 <bswartz> erlon: fix that limitation in a subclass
16:44:47 <akerr> why not just make the share config option a list instead of a single share, and support pools in the generic nfs driver
16:45:05 <eharney> akerr: that'd be possible, but i thought that's what multi-backend was for
16:45:09 <bswartz> don't force everyone who inherits from remotefs to have the added complexity
16:45:12 <eharney> so why add the complexity
16:45:21 <tbarron> +1
16:45:28 <bswartz> eharney: +1
16:45:33 <erlon> bswartz: it does not make sense to write a whole subclass for a driver if I can change only a few lines in the base class
16:45:39 <akerr> if I have multiple exports on a single share server it makes more sense to allow that to be set up as a single backend with multiple pools
16:45:46 <kvidvans_> eharney: +1
16:45:59 <erlon> akerr: +1
16:46:06 <cknight> akerr: +1
16:46:09 <bswartz> erlon: if those few lines are going to be inherited by a bunch of classes that don't want them, then it does
16:46:21 <thingee> erlon: seems like folks are more leaning towards not even having it in the base class.
16:46:31 <ameade_> akerr: +1
16:47:31 <winston-d> do we need a vote here?
16:47:41 <thingee> I think people already voted
16:47:50 <thingee> it seems like people don't want this in the base class
16:47:59 <eharney> honestly i'm still kind of undecided based on evaluating the actual impact
16:48:02 <erlon> thingee: did you count?
16:48:25 <eharney> because i'm not sure i saw in the patch the same thing we were discussing here
16:48:51 <winston-d> akerr got quite a few votes too, IIUC, he'd prefer having this in the base class?
16:49:17 <thingee> ok, so we're split there.
16:49:29 <thingee> eharney: can you finish reviewing and weigh in once you're done?
16:49:39 <eharney> i'm not sure this is the whole thing, though?
16:49:50 <eharney> the patch covers passing more info into the driver but not the configuration implications to meet the stated goal
16:50:20 <eharney> so i'm having trouble seeing the big picture, will need to review more i suppose
16:50:22 <erlon> thingee: eharney the stated goal for configuration doesn't need to change at all
16:50:39 <eharney> well... it has to not change, when we were planning to change it
16:51:04 <eharney> imo <24h of thought and review is not enough to reach the end of this
16:51:31 <thingee> ok, I'll let eharney continue to weigh in and defer to that. Otherwise I'll go with the rest of the votes here.
16:51:42 <erlon> we could defer this for more time to review?
16:51:46 * flip214 time flies
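[Background for the pools debate above: "pool support" means the driver advertising each NFS share as a pool in the stats it reports to the pool-aware scheduler, instead of picking a share itself via the in-driver pseudo-scheduler. A minimal sketch of such a report; the function name, backend name, and the share_capacities mapping are made up for illustration.]

    def build_nfs_pool_stats(share_capacities):
        # share_capacities: hypothetical mapping of NFS share ->
        # (total_gb, free_gb), e.g. {'host:/export/vol1': (100, 80)}.
        # The 'pools' list follows the format the pool-aware scheduler
        # consumes from a driver's get_volume_stats().
        return {
            'volume_backend_name': 'GenericNFS',
            'vendor_name': 'Open Source',
            'driver_version': '1.0',
            'storage_protocol': 'nfs',
            'pools': [
                {
                    'pool_name': share,
                    'total_capacity_gb': total_gb,
                    'free_capacity_gb': free_gb,
                    'reserved_percentage': 0,
                    'QoS_support': False,
                }
                for share, (total_gb, free_gb) in share_capacities.items()
            ],
        }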
16:51:54 <thingee> #topic Cinder unit test coverage improvement
16:51:59 <thingee> winston-d: you're up
16:52:10 <winston-d> really quick
16:52:46 <erlon> got 2 go
16:52:52 <winston-d> I did some UT coverage tests against Cinder and other openstack projects (Nova, Neutron, Swift) as well
16:53:17 <winston-d> and Cinder has the lowest coverage, ~74% on the master branch.
16:53:37 <flip214> winston-d: I guess that's partly because there are many drivers that never get used?
16:53:44 <rushiagr> winston-d: how much of it was core + LVM code?
16:53:48 <jgriffith> flip214: unit tests should cover drivers as well
16:54:00 <winston-d> I'd like to encourage everybody to do a better job writing UT
16:54:11 <rushiagr> jgriffith: agree
16:54:17 <cknight> winston-d: +1
16:54:19 <winston-d> flip214: some drivers from big vendors also have low coverage
16:54:23 <flip214> jgriffith: yes, but don't count code for *all* drivers if only a single one is run.
16:54:24 <jgriffith> winston-d: I'd second that, but I'd also add that I think our architecture of the unit test code needs some work
16:54:36 <jgriffith> flip214: it doesn't work that way...
16:54:36 <winston-d> jgriffith: agreed
16:54:48 <thingee> winston-d: no disagreements there. I think the general response from people when asked for more coverage, and I've experienced this myself when I've asked in reviews, is that we don't need to cover everything.
16:54:51 <e0ne> jgriffith: so, will we force moving to mock?
16:55:10 <jgriffith> e0ne: that's not even my point, but I think we're slowly doing that
16:55:12 <flip214> jgriffith: if it only counts driver code toward the total when the driver is in use, then okay. I thought the LOC of all drivers was summed up.
16:55:28 <jgriffith> e0ne: my point is more around how unit tests are written and what they're checking
16:55:37 <winston-d> I'll start contributing some UT to the Cinder framework, but driver developers, please take a look at the coverage rate for your driver.
16:55:39 <jgriffith> flip214: this isn't the time to debate this, we can talk in #cinder
16:55:42 <rhe00_> winston-d: how did you check the coverage?
16:55:45 <e0ne> jgriffith: got it
16:56:10 <winston-d> rhe00_: ./run_tests.sh -c and open the generated html
16:56:23 <rhe00_> ok, thanks
16:56:24 <akerr> can we also start moving the files to mimic the code's directory structure?  The flat test directory is getting a git large and unwieldy
16:56:24 <thingee> ok, thanks winston-d
16:56:35 <thingee> #topic open topic
16:56:59 <rushiagr> I've seen many new drivers just testing the 'default' case, which gives roughly 3/4ths of coverage easily
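[On rushiagr's point about drivers only exercising the default case: the missing coverage usually lives in the error paths. A self-contained toy example (not real Cinder code) of covering a failure path with unittest and mock:]

    import unittest
    from unittest import mock

    class FakeDriver(object):
        # Toy stand-in for a volume driver; not real Cinder code.
        def __init__(self, execute):
            self._execute = execute

        def delete_volume(self, volume):
            self._execute('rm', volume['name'])

    class TestFakeDriverErrorPath(unittest.TestCase):
        def test_delete_volume_propagates_backend_error(self):
            # Exercise the failure path, not just the default/happy path.
            execute = mock.Mock(side_effect=OSError('backend gone'))
            driver = FakeDriver(execute)
            self.assertRaises(OSError, driver.delete_volume, {'name': 'vol-1'})
            execute.assert_called_once_with('rm', 'vol-1')

    if __name__ == '__main__':
        unittest.main()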
16:57:05 <thingee> so I'm going to cancel the meeting next week
16:57:05 <jungleboyj> akerr: a git large huh ?  ;-)
16:57:10 <akerr> bit*
16:57:14 <thingee> holidays for some folks
16:57:19 <bswartz> thingee: will there be meetings next week/the following week?
16:57:26 <bswartz> oh nm
16:57:47 <thingee> 31st...are people going to be back?
16:57:56 <e0ne> thingee: it makes sense
16:57:56 <jgriffith> ummm  new years?
16:57:58 <jgriffith> not me :)
16:57:58 <jungleboyj> thingee: No.
16:58:00 <akerr> no
16:58:01 <eharney> no
16:58:02 <cknight> akerr: +1
16:58:11 <rhe00_> no
16:58:19 <thingee> ok, so the 7th we'll meet again!
16:58:32 <rushiagr> https://review.openstack.org/#/c/136980/ any strong exceptions on this patch?
16:58:34 <thingee> I'll update the wiki accordingly
16:58:46 <winston-d> merry xmas and a happy new year. :)
16:58:50 <rushiagr> (sorry for abruptly adding it)
16:58:51 <e0ne> oh... the 7th is Christmas for some of us...
16:58:54 <thingee> reminder to folks, especially core..today is the last day for us to help drivers get in
16:58:57 <thingee> https://etherpad.openstack.org/p/cinder-kilo-priorities
16:58:59 <rushil> is the deadline for blueprints not related to drivers tomorrow?
16:59:03 <jungleboyj> Oh, one thing.  from me.
16:59:11 <flip214> thingee: +1 ;)
16:59:27 <eikke> review time! :-)
16:59:30 <jungleboyj> I have hotel information in the mid-cycle meet-up wiki:  https://etherpad.openstack.org/p/cinder-kilo-midcycle-meetup
16:59:36 <jgriffith> rushil: ?  deadline for non-driver related bp's?
16:59:42 <thingee> I encourage cores to sign up for what they want to help get through so we don't spend time on the same patches/burn out etc.
16:59:42 <jungleboyj> Please update the list if you are planning to come!
16:59:43 <jgriffith> thingee: can you clarify that?
16:59:52 <rushil> jgriffith: Yes
16:59:58 <thingee> jgriffith: we haven't discussed it
17:00:01 <jgriffith> thingee: seems problematic to me
17:00:06 <jgriffith> thingee: ok... I didn't think so
17:00:18 <thingee> jgriffith: I
17:00:27 <jgriffith> Seems like there's a lot of FUD going around about deadlines, features etc
17:00:36 <eikke> jgriffith: +1
17:00:39 <jgriffith> maybe we should write something up that's *clear*
17:00:47 <jungleboyj> jgriffith: +1
17:00:47 <thingee> jgriffith: not sure what you mean
17:00:51 <thingee> we'll talk in the cinder room
17:00:53 <tbarron> so backup driver BP spec already submitted but not yet approved isn't excluded b/c of this deadline, right?
17:00:53 <thingee> #endmeeting