16:04:59 <jgriffith> #startmeeting cinder
16:05:00 <openstack> Meeting started Wed Jul 24 16:04:59 2013 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:05:01 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:05:03 <openstack> The meeting name has been set to 'cinder'
16:05:06 <jgriffith> geesh!
16:05:09 <thingee_> o/
16:05:11 <avishay> :)
16:05:14 <winston-1> \o
16:05:15 <eharney> hi
16:05:18 <jgriffith> Ok... hey everyone
16:05:20 <JM1> hi
16:05:20 <zhiyan> hi
16:05:21 * bswartz is not on a plane or a train
16:05:23 <xyang_> hi
16:05:29 <kmartin> hello
16:05:29 <jgriffith> So to start I have a request....
16:05:39 <DuncanT-> Hey
16:05:58 <jgriffith> I've updated the meetings wiki to include an *ask* that if you post a topic also please put your name or irc nick so we know who the interested party is :)
16:06:31 <jgriffith> so on that note, is the owner of topic 1 around?
16:07:04 <jgriffith> ok
16:07:08 <jgriffith> onward
16:07:09 <avishay> (1. optional iscsi support for non-iscsi drivers)
16:07:21 <winston-1> dosboy here?
16:07:32 <winston-1> dosaboy: ping
16:07:48 <jgriffith> Ok... we'll circle back
16:07:56 <jgriffith> the VMWare topic?
16:08:04 <jgriffith> kartikaditya: that you?
16:08:26 <kartikaditya> Yep, I'm here for the vmware plugin
16:08:35 <kartikaditya> but I am working on the code so nothing much to ask
16:08:52 <jgriffith> kartikaditya: cool
16:08:53 <kartikaditya> jgriffith: Removed the topic, since work is in progress
16:08:59 <jgriffith> so more of a heads up
16:09:01 <jgriffith> :)
16:09:11 <jgriffith> kartikaditya: looking forward to seeing it
16:09:23 <DuncanT-> dosaboy is on his way
16:09:31 <kartikaditya> jgriffith: Yep, having an internal round before sending it out
16:09:41 <jgriffith> snapshotting generic block dev?
16:09:44 <dosaboy> i'm here
16:09:45 <YorikSar> Here
16:09:46 <jgriffith> anybody here for that?
16:09:49 <jgriffith> Ok...
16:09:54 <dosaboy> sorry was in another meeting
16:09:56 <jgriffith> YorikSar: let's start with you
16:10:09 <jgriffith> #topic generic block dev driver snapshot support
16:10:32 <YorikSar> The problem is that we have snapshotting in the minimum requirements, but it's not feasible to implement it for the generic block device driver.
16:10:51 * thingee_ added driver dev doc to agenda
16:10:57 <YorikSar> So we should either agree on an exception for this driver or... something else
16:11:15 <jgriffith> YorikSar: I'm not convinced we can't come up with something on this
16:11:29 <dosaboy> winston-1: wassup?
16:11:53 <winston-1> dosaboy: i guess you were the owner of today's topic 1?
16:12:01 <jgriffith> YorikSar: however I think the use cases for the local disk one would warrant an exception
16:12:17 <YorikSar> jgriffith: Yes.
16:12:19 <jgriffith> YorikSar: do you have a proposal of some sort?
16:13:10 <YorikSar> It came from Savanna and they don't need snapshotting. They don't care if it's lost or corrupted - Hadoop will just forget about that node and move on.
16:13:30 <dosaboy> winston-1: i did not add that but it is my bp
16:13:49 <bswartz> YorikSar: that sounds like ephemeral storage -- similar to what nova does
16:13:57 <YorikSar> So I don't see how some generic snapshotting mechanism could be done here.
16:14:25 <jgriffith> YorikSar: yeah, TBH it's been low on my list anyway
16:14:27 <avishay> bswartz: it uses a full disk partition as a volume
16:14:30 <YorikSar> bswartz: They need block devices not occupied by other IO. They need all the IOPS they can get for HDFS
16:14:47 <jgriffith> YorikSar: I did want to change the patch the last time I looked though
16:14:54 <winston-1> YorikSar: so it's hadoop block device driver instead of generic ?
16:15:03 <jgriffith> YorikSar: IIRC it's currently using an entire disk and I proposed it should do partitions
16:15:13 * thingee is switching to bus, bbl
16:15:15 <jgriffith> but anyway... that's an entire different conversation
16:15:22 <YorikSar> winston-1: No, it's generic. Hadoop and Savanna are just the first use cases for it.
16:15:33 <jgriffith> speaking of this and thingee
16:15:36 <jgriffith> https://review.openstack.org/#/c/38383/
16:15:51 <jgriffith> We've put min driver requirements in the docs now :)
16:15:57 <eharney> can you not pass in partitions as the available_devices?
16:16:28 <YorikSar> jgriffith: I'm not sure if Nova can attach multiple HDs to one instance the way Savanna wants it.
16:16:44 <winston-1> YorikSar: if it's generic, maybe snapshot is needed for other use cases.
16:17:10 <YorikSar> winston-1: Yes, but I don't see any generic way to do snapshots in this case.
16:17:21 <avishay> is implementing snapshots using 'dd' useful to anyone?
16:17:42 <eharney> avishay: that would just result in offline clones here, right?
16:18:19 <avishay> eharney: yes.  i personally don't think it's useful, but maybe others disagree.  i can't think of any other solution off-hand.
16:18:29 <zhiyan> YorikSar: dm-snapshot?
16:18:35 <eharney> avishay: well it already has clone support so i'm not sure i see the value..
16:18:39 <YorikSar> avishay: I don't believe it will be. We'd have to use one more HD for the 'snapshot', and it won't be consistent...
16:19:08 <avishay> the block device driver has another limitation here: it can't set sizes, so it would need to snapshot to a partition of >= size
16:19:08 <dosaboy> winston-1: i did add that agenda item ;)
16:19:22 <med_> wiki history rarely lies...
16:20:16 <avishay> eharney: agreed
16:20:54 <YorikSar> We can (almost) alias snapshot to clone just to pass that check in the list of drivers...
16:21:07 <JM1> is there a clear definition of clones somewhere?
16:21:17 <jgriffith> JM1: yeah
16:21:30 <jgriffith> JM1: an independent copy of a volume that is a new volume
16:21:42 <JM1> ok
16:21:49 <eharney> jgriffith: a note on that...
16:21:52 <dosaboy> jgriffith: what are ramifications on min driver reqs?
16:21:53 <jgriffith> eharney: haha
16:22:03 <eharney> i haven't seen it specified whether offline-only counts or if the minimum is to support online
16:22:04 <dosaboy> ...if drivers do not meet min
16:22:06 <jgriffith> let's have a topic on that so as not to get confused :)
16:22:12 <jgriffith> #topic min driver reqs
16:22:33 <YorikSar> But what's the decision on block device driver?
16:22:59 <jgriffith> YorikSar: TBH I'm not overly concerned about it right now and it may be a special case exception
16:23:12 <JM1> for a driver that has no specific support, snapshots and clones are just copies, and can be slow, right?
16:23:13 <YorikSar> jgriffith: ok
16:23:21 <jgriffith> YorikSar: I have no problem with it being an exception as it's not a "volume" device per se
16:23:29 <jgriffith> YorikSar: it's just raw disk
16:23:32 <YorikSar> JM1: yes
16:23:44 <jgriffith> YorikSar: but if that becomes an excuse or a problem we'll have to change it
16:23:44 <JM1> YorikSar: ok :)
16:23:56 <kmartin> dosaboy: thingee will send you an email with the missing feature(s) and the driver is at risk of being pulled out of cinder
16:23:57 <jgriffith> YorikSar: and as silly as it might seem we'll just use the clone
16:24:02 <jgriffith> ie the clone coe
16:24:04 <jgriffith> code
16:24:07 <YorikSar> jgriffith: Ok, great.
16:24:30 <jgriffith> as we've all said before, we don't care how it's implemented as long as the expected behavior is achieved
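To make the "just use the clone code" idea concrete, here is a minimal sketch of how a dd-style driver could satisfy the snapshot calls by reusing its full-copy path. The function names and explicit device-path arguments are illustrative assumptions; the real BlockDeviceDriver plumbing (free-device selection, DB records) is omitted.

```python
# Hypothetical sketch: "snapshot == offline clone" for a dd-style driver.
import subprocess


def copy_device(src_path, dst_path, size_gb):
    # Full offline copy with dd; only sensible while the source volume
    # is detached, since there is no way to quiesce in-flight I/O here.
    subprocess.check_call(['dd', 'if=%s' % src_path, 'of=%s' % dst_path,
                           'bs=1M', 'count=%d' % (size_gb * 1024),
                           'oflag=direct'])


def create_snapshot(volume_path, snapshot_path, size_gb):
    # A "snapshot" is just a clone onto another free device/partition.
    copy_device(volume_path, snapshot_path, size_gb)


def create_volume_from_snapshot(snapshot_path, new_volume_path, size_gb):
    # Restoring is the same copy in the other direction.
    copy_device(snapshot_path, new_volume_path, size_gb)
```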
16:25:02 <jgriffith> So... back to min driver reqas
16:25:06 <jgriffith> requirements
16:25:14 <jgriffith> geesh... can't type this am
16:25:18 <dosaboy> kmartin: sheesh
16:25:52 <dosaboy> so, current topic?
16:26:03 <dosaboy> i guess it has been answered
16:26:05 <jgriffith> dosaboy: min drier reqs
16:26:10 <jgriffith> driver!!
16:26:12 <jgriffith> bahhh
16:26:14 <eharney> so... offline or online clone is required?
16:26:16 * jgriffith is going to give up
16:26:33 <jgriffith> eharney: I don't really know what that distinction means
16:26:37 <dosaboy> jgriffith: deep breath, shot of espresso
16:26:40 <jgriffith> eharney: sorry... could you explain?
16:26:53 <jgriffith> dosaboy: ahhh... that's it, no coffee yet :)
16:27:02 <eharney> yeah, and this will kind of segue into my current work which maybe should be another topic
16:27:07 <avishay> i guess online means instantaneous crash-consistent snapshot, which is not required?
16:27:21 <eharney> so a driver like generic block dev driver (or gluster) can easily do offline clones just by dd'ing data around
16:27:40 <jgriffith> eharney: ahhh... got ya
16:27:48 <eharney> but it falls down w/o snapshot capabilities for online clones
16:28:44 * med_ walks a new keyboard and coffee over to jgriffith
16:28:50 <jgriffith> so the problem is that say you have two volumes on a multi-backend system
16:28:53 <DuncanT-> I'd suggest online snapshots need not be in the minimum spec
16:28:54 <eharney> (i'd like to go over gluster work a bit once we decide we're done w/ the current topic)
16:28:58 <jgriffith> both vols are in-use
16:29:11 <jgriffith> "cinder create-snapshot vol-1" succeeds
16:29:23 <jgriffith> "cinder create-snapshot vol-2" fails because it's in-use
16:29:35 <jgriffith> from a user perspective that sucks
16:29:57 <jgriffith> DuncanT-: wow... really?
16:30:20 <bswartz> jgriffith, what if it didn't fail, and just made a snapshot that was non-crash-consistent?
16:30:24 <avishay> agreed, as long as the user doesn't pass the -force flag and gets a useless snapshot - we need to make sure that's documented well
16:30:38 <DuncanT-> jgriffith: I'm not that bothered TBH
16:30:44 <jgriffith> bswartz: that seems fine to me
16:30:56 <eharney> why would you want a non crash-consistent snapshot?
16:31:04 <jgriffith> DuncanT-: TBH me neither :)
16:31:06 * thingee is back
16:31:23 <JM1> eharney: I don't see how it would be useful either
16:31:27 <avishay> that's not too fine for me, but it's with the -force flag today...i think we need to find a way to fix this long-term
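For reference, the in-use/force behavior being discussed is visible from the client API: snapshotting an attached volume is rejected unless the caller explicitly forces it. A rough illustration via python-cinderclient, with placeholder credentials and IDs (not an endorsement of forced snapshots):

```python
# Rough illustration of the --force behavior discussed above;
# credentials, endpoint, and volume IDs are placeholders.
from cinderclient.v1 import client

cinder = client.Client('user', 'password', 'tenant',
                       'http://keystone.example.com:5000/v2.0')

# Rejected while the volume is attached ("in-use"):
#   cinder.volume_snapshots.create('volume-0001')

# Accepted with force, but the result is at best crash-consistent:
snap = cinder.volume_snapshots.create('volume-0001', force=True,
                                      display_name='forced-snap')
```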
16:31:30 <bswartz> eharney, you might be able to arrange on the guest VM for all the I/O to that block device to be quiesced
16:31:46 <jgriffith> eharney: JM1 the only thing that might be useful is that it's implemented
16:31:53 <jgriffith> eharney: JM1 meaning consistency in the API
16:32:05 <JM1> bswartz: well then what you are doing is an offline snapshot
16:32:23 <bswartz> yeah but from cinder's perspective it's online
16:32:30 <DuncanT-> jgriffith: Unless you're using an instantaneous snapshot, the semantics are a bit useless anyway
16:32:38 <thingee> kmartin: emails for drivers missing some features have been sent
16:32:48 <jgriffith> DuncanT-: don't necessarily disagree
16:32:52 <thingee> I also sent an email to the openstack dev ML
16:33:01 <YorikSar> thingee: I have one question about it, btw
16:33:03 <jgriffith> DuncanT-: sadly I thought much of this sort of discussion was behind us
16:33:07 <jgriffith> sadly it seems it is not
16:33:19 <avishay> I think we all agree that dd'ing an attached volume is useless, but i don't see us solving that for havana
16:33:21 <thingee> YorikSar: sure
16:33:21 <DuncanT-> jgriffith: I suspect it will pop up at least once per release
16:33:47 <jgriffith> DuncanT-: indeed
16:33:51 <kmartin> thingee: yep, I got mine :) A question was raised regarding what would happen if a driver did not meet the min driver features
16:33:51 <YorikSar> thingee: I can't actively support Nexenta driver now. So we've forwarded your email to Nexenta people.
16:34:14 <eharney> thingee: so... who is on the hook for the NFS driver?
16:34:15 <YorikSar> thingee: What would happen if they can't provide the missing features?
16:34:19 <avishay> kmartin: it will be shot from a cannon :)
16:34:39 <jgriffith> Ok, this is less than productive
16:34:50 <jgriffith> we're reviewing previous discussions
16:35:07 <thingee> YorikSar, kmartin: so far, it has been agreed that a driver missing its minimum features wouldn't be in the release.
16:35:32 <DuncanT-> YorikSar: Potentially the driver would be removed before the final cut
16:35:51 <dosaboy> thingee: so existing drivers could be pulled from H?
16:35:52 <YorikSar> thingee, DuncanT-: thanks. I'll rush them then.
16:35:52 <thingee> there have been positive responses from driver maintainers so far on getting these requirements fulfilled in time, which was my main concern
16:36:05 <bswartz> eharney, thingee: if there are issues w/ the NFS driver send the nastygrams to me
16:36:05 <DuncanT-> dosaboy: Yes. See old minutes
16:36:26 <eharney> bswartz: well.  i have some plans i'm scheming up for it w/ my current work.  lemme go over that in a minute
16:36:26 <YorikSar> I probably can find some people who were doing NFS driver as well...
16:37:42 <jgriffith> do we need to keep on this topic or should we move along?
16:37:48 <DuncanT-> Move along
16:38:09 <thingee> I say move along. this can be discussed anytime with core on #openstack-cinder
16:38:12 <winston-1> yeah, what's next?
16:38:16 <jgriffith> kk
16:38:21 <dosaboy> shall I do my original topic?
16:38:27 <dosaboy> optional iscsi?
16:38:45 <jgriffith> #topic optional iscsi-support for non-iscsi drivers
16:38:50 <jgriffith> dosaboy: k
16:38:52 <dosaboy> yay
16:39:07 <dosaboy> ok so this was already discussed somewhat after the last meeting
16:39:14 <dosaboy> basically the idea
16:39:21 <dosaboy> (not heavily thought through yet)
16:39:33 <dosaboy> is to add optional iscsi support to non-iscsi drivers
16:39:39 <dosaboy> e.g. rbd driver
16:39:44 <dosaboy> or gluster
16:39:58 <avishay> non-iscsi == file system , right?
16:40:01 <dosaboy> so that hypervisors that do not support those protocols natively
16:40:08 <dosaboy> can still use those backends
16:40:22 <winston-1> avishay: not really
16:40:36 <dosaboy> avishay: it is simple to allow e.g. the rbd driver to export rbd as an iscsi volume
16:40:38 <jdurgin1> avishay: no, e.g. rbd and sheepdog use their own protocols
16:41:03 <dosaboy> so yeah this would apply to, off the top of my head
16:41:12 <jgriffith> dosaboy: so an optional iscsi wrapper around non-iscsi drivers
16:41:15 <dosaboy> rbd, gluster,
16:41:22 <dosaboy> excato
16:41:24 <jgriffith> dosaboy: is something I think we've talked about over the years
16:41:27 <dosaboy> exacto
16:41:30 <zhiyan> dosaboy: i agree with giving iscsi support to those drivers for maximum hypervisor compatibility, but for the long term, i think adding native drivers on the nova side for non-libvirt hypervisors will be better.
16:41:40 <jgriffith> dosaboy: that was actually a recommendation for doing Fibre Channel at one point :)
16:41:51 <dosaboy> it would not necessarily be performant
16:41:53 <eharney> so cinder would end up in the data path between a remote storage node and a remote compute node, serving iSCSI?
16:41:54 <jgriffith> dosaboy: so FC to a cinder node and export as iSCSI
16:42:02 <jgriffith> folks hated me for suggesting it IIRC
16:42:19 <dosaboy> jgriffith: yep thats one possible option
16:42:20 <avishay> interesting
16:42:27 <dosaboy> idea is to make it as generic as possible
16:42:33 <dosaboy> so e.g.
16:42:51 <dosaboy> nova now supports the vmware hv
16:43:01 <dosaboy> but vmdk support is not there yet
16:43:17 <dosaboy> so for the interim, an iscsi option could be provided for non-iscsi backends
16:43:19 * med_ walks a new keyboard and coffee over the atlantic to dosaboy too
16:43:34 <hemna> huh
16:43:43 <hemna> mount an FC volume and export it as iSCSI ?
16:43:45 <hemna> that's interesting
16:43:54 <winston-1> anything that can be officially supported by tgt/iet as shipped in ubuntu/RHEL/CentOS/Fedora is fine
16:44:00 <avishay> dosaboy: will this code go into hemna's brick work?
16:44:02 <jgriffith> let's not get distracted
16:44:07 <hemna> :P
16:44:12 <jgriffith> dosaboy:  think you have something pretty specific in mind
16:44:16 <guitarzan> dosaboy: what is generic about it?
16:44:39 <dosaboy> it would be a generic option for all non-iscsi drivers
16:44:53 <dosaboy> i'm taking rbd as example
16:45:01 <dosaboy> but there are others of course
16:45:17 <jgriffith> dosaboy: so IMO this is more a call for jdurgin1 and folks that have to support RBD
16:45:26 <med_> ie, make iscsi an even playing field ...
16:45:26 <med_> (for any hypervisor, any storage)
16:45:35 <dosaboy> jgriffith: there are 2 options here
16:45:44 <dosaboy> 1. implement this for rbd driver only
16:45:46 <jgriffith> dosaboy: I mean, personally like I said this was something I thought would be an option a while back
16:46:06 <dosaboy> 2. implement this as more common option for anyone who wants to use it
16:46:41 <dosaboy> it is easy enough to implement for rbd since tgt now has native support
16:46:45 <jdurgin1> jgriffith: I'm fine with it as long as it's clear that it's not meant to be the best for performance or HA. I agree with zhiyan that it's a short term solution
16:46:45 <DuncanT-> If you can get a common version working, seems daft to do it any other way
16:46:45 <zhiyan> med_: IMO iscsi can give maximum compatibility on the consumer side, but I don't think it's a good enough idea. adding a native driver to the hypervisor will be better.
16:46:53 <avishay> dosaboy: instead of doing rbd only, maybe add it to brick which can already connect, and then export brick connections?
16:47:11 <winston-1> i prefer 2. otherwise, that code will have to be heavily refactored to work with others
16:47:23 <dosaboy> avishay: yep, i need to familiarise myself with brick stuff tbh
16:47:29 <thingee> +1 we'll have copypasta later I'm sure
16:47:53 <jgriffith> dosaboy: sounds cool
16:48:04 <jgriffith> dosaboy: I've done it in our lab with FC to iscsi to openstack
16:48:17 <jgriffith> dosaboy: it's a decent model IMO
16:48:18 <med_> where is "brick" documented avishay
16:48:32 <bswartz> jgriffith: what kind of performance did you see?
16:48:34 <jgriffith> dosaboy: agree with jdurgin1 though that we need to point out it may not be the ideal option
16:48:47 <dosaboy> jgriffith: totally agree
16:48:48 <avishay> med_: i'm not sure the specific code i'm talking about is documented
16:48:57 <jgriffith> bswartz: wasn't too far off from what FC throughputs were
16:49:06 <med_> avishay, nod, that's kind of what I thought.
16:49:07 <thingee> avishay: It's not...yet :)
16:49:09 <dosaboy> it is just to sort out people using hypervisors that don't yet pair up
16:49:09 <winston-1> bswartz: bad, in my experience with sheepdog
16:49:16 <jgriffith> bswartz: in most cases it was the same, but I needed some tweaking
16:49:19 <avishay> thingee: :)
16:49:37 <jgriffith> bswartz: and I was using a dedicated 10G network for iSCSI data
16:49:40 <dosaboy> ok i'll try to get a POC done
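A rough sketch of what such a PoC could look like for the rbd case, assuming a tgt built with the rbd backing-store mentioned above; the target name, pool, and function names are placeholders for illustration, not an agreed design:

```python
# Hypothetical sketch: exporting an RBD image as an iSCSI LUN through tgt,
# assuming tgt has rbd backing-store support. All names are placeholders.
import subprocess


def tgtadm(*args):
    subprocess.check_call(['tgtadm', '--lld', 'iscsi'] + list(args))


def export_rbd_over_iscsi(tid, iqn, pool, image):
    # 1. Create the iSCSI target.
    tgtadm('--mode', 'target', '--op', 'new',
           '--tid', str(tid), '--targetname', iqn)
    # 2. Attach the RBD image as LUN 1 using the rbd backing-store type.
    tgtadm('--mode', 'logicalunit', '--op', 'new',
           '--tid', str(tid), '--lun', '1',
           '--backing-store', '%s/%s' % (pool, image),
           '--bstype', 'rbd')
    # 3. Allow initiators to connect (wide open here; real code would
    #    restrict by address and use CHAP).
    tgtadm('--mode', 'target', '--op', 'bind',
           '--tid', str(tid), '--initiator-address', 'ALL')


# Example invocation (placeholder names):
# export_rbd_over_iscsi(1, 'iqn.2010-10.org.openstack:volume-0001',
#                       'volumes', 'volume-0001')
```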
16:50:08 <jgriffith> Ok... anything else?
16:50:16 <dosaboy> not from me
16:50:16 <thingee> 10 min warning
16:50:26 <eharney> i'd like to touch on assisted snaps for a minute
16:50:32 <jgriffith> thingee: :) I'm actually going to try and wrap early
16:50:47 <jgriffith> #topic assisted snaps
16:50:50 <jgriffith> eharney: have at it
16:51:03 <eharney> so i posted the cinder side of this work
16:51:05 <eharney> https://review.openstack.org/#/q/topic:bp/qemu-assisted-snapshots,n,z
16:51:20 <eharney> this is about supporting snapshots for drivers like GlusterFS that don't have storage backend snapshot support
16:51:52 <eharney> snapshotting is done via qcow2 files on the file system, and is handled by Cinder in the online case and Nova (coordinating with Cinder) in the online case
16:52:26 <DuncanT-> eharney: I've not read the code, but how does cinder ask nova to do the assistance?
16:52:27 <JM1> eharney: maybe you mean "by cinder in the offline case" ?
16:52:50 <eharney> DuncanT-: currently, the snapshot process is initiated by Nova
16:53:14 <DuncanT-> eharney: So this isn't a normal cinder snapshot-create --force??
16:53:15 <eharney> DuncanT-: there will be follow-up to have Cinder initiate Nova's snapshotting since this is required to do a live snap for online clone
16:53:35 <eharney> no
16:53:56 <eharney> when the volume is attached, Nova/libvirt will quiesce and create a snapshot via libvirt/qemu
16:53:59 <winston-1> eharney: that nova-initiated snapshot sounds like an instance snapshot instead of a volume snapshot?
16:54:14 <eharney> this is coordinated with cinder by creating a snapshot record w/o calling the driver snapshot code
16:54:24 <eharney> JM1: not attached
16:54:35 <JM1> eharney: ok
16:55:01 <eharney> winston-1: no, it is volume snapshots, but you get features like a) the VM is snapped so all volume snapshots are taken at the same time b) the VM is quiesced
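For context, the offline half of this mechanism (the part Cinder can drive on its own) roughly amounts to creating a new qcow2 overlay whose backing file is the current volume file; when the volume is attached, Nova/libvirt does the equivalent step online with quiescing. A sketch with invented paths, not the actual patch under review:

```python
# Hypothetical sketch of an offline qcow2 "external snapshot" for a
# file-backed (e.g. GlusterFS/NFS-mounted) volume. Paths are placeholders.
import subprocess


def create_offline_snapshot(volume_path, snapshot_id):
    # New overlay file that records only writes made after the snapshot.
    overlay = '%s.%s' % (volume_path, snapshot_id)
    subprocess.check_call(
        ['qemu-img', 'create', '-f', 'qcow2',
         '-o', 'backing_file=%s' % volume_path, overlay])
    # The overlay becomes the file that gets attached and written to;
    # the original file now holds the read-only snapshot content.
    return overlay
```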
16:55:21 <jgriffith> winston-1: +1
16:56:01 <eharney> not sure i follow
16:56:07 <winston-1> eharney: curious how does cinder notify nova to quiesce all IO on that volume
16:56:25 <eharney> winston-1: currently, Nova initiates the snapshot process and calls cinderclient to create the snaps
16:56:39 <eharney> winston-1: in the future Cinder will need to be able to call into Nova via a new API
16:56:51 <eharney> since we need cinder to notify nova for cases like cinder volume clone
16:56:58 <winston-1> eharney: no, volume snapshot is initiated by cinder
16:56:59 <jgriffith> eharney: so I have an off question I'd like to ask folks here
16:57:24 <jgriffith> eharney: this is really something specific to Gluster and possibly NFS / shared-fs systems, no?
16:57:33 <DuncanT-> winston-1: I think he's doing it the other way as a PoC
16:57:38 <eharney> DuncanT-: right
16:57:49 <eharney> jgriffith: it's specific to remote file systems that you mount
16:57:53 <eharney> jgriffith: so, i'm starting on gluster
16:58:01 <eharney> but this should be ported to the NFS driver and other similar ones as well
16:58:10 <bswartz> jgriffith: VMware/VMFS too perhaps
16:58:18 <jgriffith> eharney: so...
16:58:24 <eharney> which is why i was asking earlier who we think is on the hook for NFS minimum feature reqs
16:58:27 <jgriffith> bswartz: what's the deal with shares project?
16:58:49 <bswartz> jgriffith: we're still working on launching it
16:58:56 <jgriffith> sighh
16:59:01 <bswartz> we haven't been able to choose a name yet and that's blocking some things
16:59:07 <jgriffith> haha!
16:59:17 <jgriffith> bswartz: are you guys working with anybody else on this
16:59:18 <hemna> :)
16:59:19 <thingee> bswartz: caring - sharing is caring
16:59:22 <jgriffith> bswartz: ie like eharney ?
16:59:25 <bswartz> right now it's mostly RedHat and NetApp
16:59:26 <eharney> i'd like for anyone interested to check out https://review.openstack.org/#/c/38225/ and see if i'm crazy
16:59:28 <jgriffith> thingee: nice!
16:59:34 <JM1> and as we all know, naming things is a tough problem in CS
16:59:36 <bswartz> we're trying to get IBM involved
16:59:48 <jgriffith> eharney: sorry... wasn't intending to derail your topic
16:59:51 <bswartz> once we have a name we can get an IRC channel and start holding meetings and all those nice things
16:59:51 <hemna> that's why I usually end up with variable names like 'ass'  for a while.
17:00:05 <thingee> time's up
17:00:20 <jgriffith> eharney: real quick
17:00:22 <eharney> but yes it is on my plate to work on shares service w.r.t. gluster support, but i haven't been active on it lately
17:00:33 <jgriffith> eharney: so I'm ok with moving forward obviously (we've talked)
17:00:43 <jgriffith> eharney: but we need to figure something out long term
17:00:58 <jgriffith> eharney: more and more of these things are going to come up
17:01:06 <med_> "Sharing is Caring"
17:01:10 <jgriffith> eharney: for now I say extensions for them are probably fine etc
17:01:30 <jgriffith> ok
17:01:34 <jgriffith> guess we'll end
17:01:38 <eharney> jgriffith: nothing about this is gluster-specific really. i think it makes sense for the class of remote-mounted file system drivers
17:01:42 <jgriffith> #end meeting cinder
17:01:48 <esker> thingee:  do you indemnify us from any trademarks you hold on "caring" if we go w/ that?
17:01:51 <jgriffith> eharney: ohhh... I agree with that
17:02:09 <avishay> bye all!
17:02:09 <jgriffith> eharney: I mean shared, not Gluster specific
17:02:14 <jgriffith> #endmeeting cinder