16:04:59 #startmeeting cinder
16:05:00 Meeting started Wed Jul 24 16:04:59 2013 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:05:01 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:05:03 The meeting name has been set to 'cinder'
16:05:06 geesh!
16:05:09 o/
16:05:11 :)
16:05:14 \o
16:05:15 hi
16:05:18 Ok... hey everyone
16:05:20 hi
16:05:20 hi
16:05:21 * bswartz is not on a plane or a train
16:05:23 hi
16:05:29 hello
16:05:29 So to start I have a request....
16:05:39 Hey
16:05:58 I've updated the meetings wiki to include an *ask* that if you post a topic, please also put your name or irc nick so we know who the interested party is :)
16:06:31 so on that note, is the owner of topic 1 around?
16:07:04 ok
16:07:08 onward
16:07:09 (1. optional iscsi support for non-iscsi drivers)
16:07:21 dosaboy here?
16:07:32 dosaboy: ping
16:07:48 Ok... we'll circle back
16:07:56 the VMWare topic?
16:08:04 kartikaditya: that you?
16:08:26 Yep, I'm here for the vmware plugin
16:08:35 but I am working on the code so nothing much to ask
16:08:52 kartikaditya: cool
16:08:53 jgriffith: Removed the topic, since work is in progress
16:08:59 so more of a heads up
16:09:01 :)
16:09:11 kartikaditya: looking forward to seeing it
16:09:23 dosaboy is on his way
16:09:31 jgriffith: Yep, having an internal round before sending it out
16:09:41 snapshotting generic block dev?
16:09:44 i'm here
16:09:45 Here
16:09:46 anybody here for that?
16:09:49 Ok...
16:09:54 sorry was in another meeting
16:09:56 YorikSar: let's start with you
16:10:09 #topic generic block dev driver snapshot support
16:10:32 The problem is that we have snapshotting in the minimum requirements but it's not feasible to implement it for the generic block device driver.
16:10:51 * thingee_ added driver dev doc to agenda
16:10:57 So we should either agree on an exception for this driver or... something else
16:11:15 YorikSar: I'm not convinced we can't come up with something on this
16:11:29 winston-1: wassup?
16:11:53 dosaboy: i guess you were the owner of today's topic 1?
16:12:01 YorikSar: however I think the use cases for the local disk one would warrant an exception
16:12:17 jgriffith: Yes.
16:12:19 YorikSar: do you have a proposal of some sort?
16:13:10 It came from Savanna and they don't need snapshotting. They don't care if it's lost or corrupted - Hadoop will just forget about that node and move on.
16:13:30 winston-1: i did not add that but it is my bp
16:13:49 YorikSar: that sounds like ephemeral storage -- similar to what nova does
16:13:57 So I don't see why a generic snapshotting mechanism should be done here.
16:14:25 YorikSar: yeah TBH it's been low on my list anyway
16:14:27 bswartz: it uses a full disk partition as a volume
16:14:30 bswartz: They need block devices not occupied by other IO. They need all the iops they can get for HDFS
16:14:47 YorikSar: I did want to change the patch the last time I looked though
16:14:54 YorikSar: so it's a hadoop block device driver instead of generic?
16:15:03 YorikSar: IIRC it's currently using an entire disk and I proposed it should do partitions
16:15:13 * thingee is switching to bus, bbl
16:15:15 but anyway... that's an entirely different conversation
16:15:22 winston-1: No, it's generic. Hadoop and Savanna are just the first use cases for it.
16:15:33 speaking of this and thingee
16:15:36 https://review.openstack.org/#/c/38383/
16:15:51 We've put min driver requirements in the docs now :)
16:15:57 can you not pass in partitions as the available_devices?
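For reference, passing partitions rather than whole disks to the block device driver would amount to something like the cinder.conf sketch below. This is illustrative only: the available_devices option name comes from the discussion above, while the driver path and the comma-separated list format are assumptions.

    volume_driver = cinder.volume.drivers.block_device.BlockDeviceDriver
    # hand the driver individual partitions instead of whole disks
    available_devices = /dev/sdb1,/dev/sdb2,/dev/sdc1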
16:16:28 jgriffith: I'm not sure if Nova can attach multiple HDs to one instance the way Savanna wants it.
16:16:44 YorikSar: if it's generic, maybe snapshot is needed for other use cases.
16:17:10 winston-1: Yes, but I don't see any generic way to do snapshots in this case.
16:17:21 is implementing snapshots using 'dd' useful to anyone?
16:17:42 avishay: that would just result in offline clones here, right?
16:18:19 eharney: yes. i personally don't think it's useful, but maybe others disagree. i can't think of any other solution off-hand.
16:18:29 YorikSar: dm-snapshot?
16:18:35 avishay: well it already has clone support so i'm not sure i see the value..
16:18:39 avishay: I don't believe it will. Since we'll have to either use one more HD for a 'snapshot' that won't be consistent...
16:19:08 the block device driver has another limitation here: it can't set sizes, so it will need to snapshot to a partition of >= size
16:19:08 winston-1: i did add that agenda item ;)
16:19:22 wiki history rarely lies...
16:20:16 eharney: agreed
16:20:54 We can (almost) alias snapshot and clone just to make that check in the list of drivers...
16:21:07 is there a clear definition of clones somewhere?
16:21:17 JM1: yeah
16:21:30 JM1: an independent copy of a volume that is a new volume
16:21:42 ok
16:21:49 jgriffith: a note on that...
16:21:52 jgriffith: what are the ramifications on min driver reqs?
16:21:53 eharney: haha
16:22:03 i haven't seen it specified whether offline only counts or if the minimum is to support online
16:22:04 ...if drivers do not meet min
16:22:06 let's have a topic on that so as not to get confused :)
16:22:12 #topic min driver reqs
16:22:33 But what's the decision on the block device driver?
16:22:59 YorikSar: TBH I'm not overly concerned about it right now and it may be a special case exception
16:23:12 for a driver that has no specific support, snapshots and clones are just copies, and can be slow, right?
16:23:13 jgriffith: ok
16:23:21 YorikSar: I have no problem with it being an exception as it's not a "volume" device per se
16:23:29 YorikSar: it's just raw disk
16:23:32 JM1: yes
16:23:44 YorikSar: but if that becomes an excuse or a problem we'll have to change it
16:23:44 YorikSar: ok :)
16:23:56 dosaboy: thingee will send you an email with the missing feature(s) and the driver is at risk of being pulled out of cinder
16:23:57 YorikSar: and as silly as it might seem we'll just use the clone
16:24:02 ie the clone coe
16:24:04 code
16:24:07 jgriffith: Ok, great.
16:24:30 as we've all said before we don't care how it's implemented just so that the expected behavior is achieved
16:25:02 So... back to min driver reqas
16:25:06 requirements
16:25:14 geesh... can't type this am
16:25:18 kmartin: sheesh
16:25:52 so, current topic?
16:26:03 i guess it has been answered
16:26:05 dosaboy: min drier reqs
16:26:10 driver!!
16:26:12 bahhh
16:26:14 so... offline or online clone is required?
16:26:16 * jgriffith is going to give up
16:26:33 eharney: I don't really know what that distinction means
16:26:37 jgriffith: deep breath, shot of espresso
16:26:40 eharney: sorry... could you explain?
16:26:53 dosaboy: ahhh... that's it, no coffee yet :)
16:27:02 yeah, and this will kind of segue into my current work which maybe should be another topic
16:27:07 i guess online means instantaneous crash-consistent snapshot, which is not required?
16:27:21 so a driver like the generic block dev driver (or gluster) can easily do offline clones just by dd'ing data around
16:27:40 eharney: ahhh... got ya
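As a concrete illustration of the "just dd the data around" point above: an offline clone of a raw device is nothing more than a block-for-block copy taken while the source is unattached. The sketch below is illustrative only; the helper name and the use of subprocess are assumptions, not the actual driver code.

    import subprocess

    def copy_volume_offline(src_dev, dst_dev, block_size_mb=1):
        """Block-for-block copy of an unattached raw device.

        As long as nothing is writing to src_dev, the copy is consistent.
        Note the limitation mentioned above: dst_dev must be at least as
        large as src_dev, since the driver can't size partitions itself.
        """
        subprocess.check_call([
            'dd',
            'if=%s' % src_dev,
            'of=%s' % dst_dev,
            'bs=%dM' % block_size_mb,
            'conv=fsync',  # flush the destination before returning
        ])

    # e.g. copy_volume_offline('/dev/sdb1', '/dev/sdc1')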
16:27:48 but it falls down w/o snapshot capabilities for online clones
16:28:44 * med_ walks a new keyboard and coffee over to jgriffith
16:28:50 so the problem is that say you have two volumes on a multi-backend system
16:28:53 I'd suggest online snapshots need not be in the minimum spec
16:28:54 (i'd like to go over gluster work a bit once we decide we're done w/ the current topic)
16:28:58 both vols are in-use
16:29:11 "cinder create-snapshot vol-1" succeeds
16:29:23 "cinder create-snapshot vol-2" fails because it's in-use
16:29:35 from a user perspective that sucks
16:29:57 DuncanT-: wow... really?
16:30:20 jgriffith, what if it didn't fail, it just made a snapshot that was not crash-consistent?
16:30:24 agreed, as long as the user doesn't pass the -force flag and gets a useless snapshot - we need to make sure that's documented well
16:30:38 jgriffith: I'm not that bothered TBH
16:30:44 bswartz: that seems fine to me
16:30:56 why would you want a non crash-consistent snapshot?
16:31:04 DuncanT-: TBH me neither :)
16:31:06 * thingee is back
16:31:23 eharney: I don't see how it would be useful either
16:31:27 that's not too fine for me, but it's with the -force flag today...i think we need to find a way to fix this long-term
16:31:30 eharney, you might be able to arrange on the guest VM for all the I/O to that block device to be quiesced
16:31:46 eharney: JM1 the only thing that might be useful is that it's implemented
16:31:53 eharney: JM1 meaning consistency in the API
16:32:05 bswartz: well then what you are doing is an offline snapshot
16:32:23 yeah but from cinder's perspective it's online
16:32:30 jgriffith: Unless you're using an instantaneous snapshot, the semantics are a bit useless anyway
16:32:38 kmartin: emails for drivers missing some features have been sent
16:32:48 DuncanT-: don't necessarily disagree
16:32:52 I also sent an email to the openstack dev ML
16:33:01 thingee: I have one question about it, btw
16:33:03 DuncanT-: sadly I thought much of this sort of discussion was behind us
16:33:07 sadly it seems it is not
16:33:19 I think we all agree that dd'ing an attached volume is useless, but i don't see us solving that for havana
16:33:21 YorikSar: sure
16:33:21 jgriffith: I suspect it will pop up at least once per release
16:33:47 DuncanT-: indeed
16:33:51 thingee: yep, I got mine :) A question was raised regarding what would happen if a driver did not meet the min driver features
16:33:51 thingee: I can't actively support the Nexenta driver now. So we've forwarded your email to Nexenta people.
16:34:14 thingee: so... who is on the hook for the NFS driver?
16:34:15 thingee: What would happen if they can't provide the missing features?
16:34:19 kmartin: it will be shot from a cannon :)
16:34:39 Ok, this is less than productive
16:34:50 we're reviewing previous discussions
16:35:07 YorikSar, kmartin: so far, it has been agreed the driver wouldn't be in the release that's missing its minimum features.
16:35:32 YorikSar: Potentially the driver would be removed before the final cut
16:35:51 thingee: so existing drivers could be pulled from H?
16:35:52 thingee, DuncanT-: thanks. I'll rush them then.
16:35:52 there have been positive responses from driver maintainers so far on getting these requirements fulfilled in time, which was my main concern
16:36:05 eharney, thingee: if there are issues w/ the NFS driver send the nastygrams to me
16:36:05 dosaboy: Yes. See old minutes
16:36:26 bswartz: well. i have some plans i'm scheming up for it w/ my current work. lemme go over that in a minute
16:36:26 I probably can find some people who were doing the NFS driver as well...
16:37:42 do we need to keep on this topic or should we move along?
16:37:48 Move along
16:38:09 I say move along. this can be discussed anytime with core on #openstack-cinder
16:38:12 yeah, what's next?
16:38:16 kk
16:38:21 shall I do my original topic?
16:38:27 optional iscsi?
16:38:45 #topic optional iscsi-support for non-iscsi drivers
16:38:50 dosaboy: k
16:38:52 yay
16:39:07 ok so this was already discussed somewhat after last meeting
16:39:14 basically the idea
16:39:21 (not heavily thought through yet)
16:39:33 is to add optional iscsi support to non-iscsi drivers
16:39:39 e.g. rbd driver
16:39:44 or gluster
16:39:58 non-iscsi == file system, right?
16:40:01 so that hypervisors that do not support those protocols natively
16:40:08 can still use those backends
16:40:22 avishay: not really
16:40:36 avishay: it is simple to allow e.g. the rbd driver to export rbd as an iscsi volume
16:40:38 avishay: no, e.g. rbd and sheepdog use their own protocols
16:41:03 so yeah this would apply to, off the top of my head
16:41:12 dosaboy: so an optional iscsi wrapper around non-iscsi drivers
16:41:15 rbd, gluster,
16:41:22 excato
16:41:24 dosaboy: is something I think we've talked about over the years
16:41:27 exacto
16:41:30 dosaboy: i agree with you on giving iscsi support to those drivers for maximum compatibility with hypervisors, but for the long term, i think adding native drivers on the nova side for non-libvirt hypervisors will be better.
16:41:40 dosaboy: that was actually a recommendation for doing Fibre Channel at one point :)
16:41:51 it would not necessarily be performant
16:41:53 so cinder would end up in the data path between a remote storage node and a remote compute node, serving iSCSI?
16:41:54 dosaboy: so FC to a cinder node and export as iSCSI
16:42:02 folks hated me for suggesting it IIRC
16:42:19 jgriffith: yep that's one possible option
16:42:20 interesting
16:42:27 the idea is to make it as generic as possible
16:42:33 so e.g.
16:42:51 nova now supports the vmware hv
16:43:01 but vmdk support is not there yet
16:43:17 so for the interim, an iscsi option could be provided for non-iscsi backends
16:43:19 * med_ walks a new keyboard and coffee over the atlantic to dosaboy too
16:43:34 huh
16:43:43 mount an FC volume and export it as iSCSI ?
16:43:45 that's interesting
16:43:54 anything that can be officially supported by the tgt/iet that comes with ubuntu/RHEL/CentOS/Fedora is fine
16:44:00 dosaboy: will this code go into hemna's brick work?
16:44:02 let's not get distracted
16:44:07 :P
16:44:12 dosaboy: think you have something pretty specific in mind
16:44:16 dosaboy: what is generic about it?
16:44:39 it would be a generic option for all non-iscsi drivers
16:44:53 i'm taking rbd as an example
16:45:01 but there are others of course
16:45:17 dosaboy: so IMO this is more a call for jdurgin1 and folks that have to support RBD
16:45:26 ie, make iscsi an even playing field ...
16:45:26 (for any hypervisor, any storage)
16:45:35 jgriffith: there are 2 options here
16:45:44 1. implement this for the rbd driver only
16:45:46 dosaboy: I mean, personally like I said this was something I thought would be an option a while back
16:46:06 2. implement this as a more common option for anyone who wants to use it
16:46:41 it is easy enough to implement for rbd since tgt now has native support
16:46:45 jgriffith: I'm fine with it as long as it's clear that it's not meant to be the best for performance or HA. I agree with zhiyan that it's a short term solution
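A very rough sketch of what "export rbd via tgt" could look like for option 2, the common wrapper. Everything here is illustrative rather than the eventual brick/driver code; in particular the --bstype rbd flag assumes a tgt build with the rbd backing store, and exact flags may differ by version.

    import subprocess

    def tgtadm(*args):
        # thin wrapper; a real driver would go through cinder's execute()
        subprocess.check_call(('tgtadm', '--lld', 'iscsi') + args)

    def export_rbd_over_iscsi(pool, image, tid, iqn):
        """Expose an RBD image as an iSCSI LUN via tgt's rbd backing store."""
        tgtadm('--op', 'new', '--mode', 'target', '--tid', str(tid), '-T', iqn)
        tgtadm('--op', 'new', '--mode', 'logicalunit', '--tid', str(tid),
               '--lun', '1', '--bstype', 'rbd',
               '--backing-store', '%s/%s' % (pool, image))
        tgtadm('--op', 'bind', '--mode', 'target', '--tid', str(tid), '-I', 'ALL')

    # e.g. export_rbd_over_iscsi('volumes', 'volume-1234', 1,
    #                            'iqn.2013-07.org.openstack:volume-1234')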
16:46:45 If you can get a common version working, seems daft to do it any other way
16:46:45 med_: IMO iscsi can give maximum compatibility for the consumer side, but I don't think it's a good enough idea. adding a native driver to the hypervisor will be better.
16:46:53 dosaboy: instead of doing rbd only, maybe add it to brick which can already connect, and then export brick connections?
16:47:11 i prefer 2. otherwise, that code will have to be heavily refactored to work with others
16:47:23 avishay: yep, i need to familiarise myself with the brick stuff tbh
16:47:29 +1 we'll have copypasta later I'm sure
16:47:53 dosaboy: sounds cool
16:48:04 dosaboy: I've done it in our lab with FC to iscsi to openstack
16:48:17 dosaboy: it's a decent model IMO
16:48:18 where is "brick" documented avishay
16:48:32 jgriffith: what kind of performance did you see?
16:48:34 dosaboy: agree with jdurgin1 though that we need to point out it may not be the ideal option
16:48:47 jgriffith: totally agree
16:48:48 med_: i'm not sure the specific code i'm talking about is documented
16:48:57 bswartz: wasn't too far off from what FC throughputs were
16:49:06 avishay, nod, that's kind of what I thought.
16:49:07 avishay: It's not...yet :)
16:49:09 it is just to sort out people using hypervisors that don't yet pair up
16:49:09 bswartz: bad, per my experience with sheepdog
16:49:16 bswartz: in most cases it was the same, but I needed some tweaking
16:49:19 thingee: :)
16:49:37 bswartz: and I was using a dedicated 10G network for iSCSI data
16:49:40 ok i'll try to get a POC done
16:50:08 Ok... anything else?
16:50:16 not from me
16:50:16 10 min warning
16:50:26 i'd like to touch on assisted snaps for a minute
16:50:32 thingee: :) I'm actually going to try and wrap early
16:50:47 #topic assisted snaps
16:50:50 eharney: have at it
16:51:03 so i posted the cinder side of this work
16:51:05 https://review.openstack.org/#/q/topic:bp/qemu-assisted-snapshots,n,z
16:51:20 this is about supporting snapshots for drivers like GlusterFS that don't have storage backend snapshot support
16:51:52 snapshotting is done via qcow2 files on the file system, and is handled by Cinder in the online case and Nova (coordinating with Cinder) in the online case
16:52:26 eharney: I've not read the code, but how does cinder ask nova to do the assistance?
16:52:27 eharney: maybe you mean "by cinder in the offline case" ?
16:52:50 DuncanT-: currently, the snapshot process is initiated by Nova
16:53:14 eharney: So this isn't a normal cinder snapshot-create --force??
16:53:15 DuncanT-: there will be follow-up to have Cinder initiate Nova's snapshotting since this is required to do a live snap for online clone
16:53:35 no
16:53:56 when the volume is attached, Nova/libvirt will quiesce and create a snapshot via libvirt/qemu
16:53:59 eharney: that nova-initiated snapshot sounds like instance-snapshot instead of volume snapshot?
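To make the qcow2 mechanism described above concrete: in the unattached (offline) case the snapshot boils down to creating a new qcow2 overlay whose backing file is the current volume file, so later writes land in the overlay. A rough sketch, with the file naming and helper name assumed rather than taken from the patch under review.

    import subprocess

    def create_offline_snapshot(active_path, snapshot_id):
        """Offline qcow2-assisted snapshot for a file-backed (e.g. GlusterFS) volume.

        The existing file becomes the read-only backing file of a new
        overlay; the overlay becomes the active file the volume uses
        from now on, and the old file holds the snapshot's data.
        """
        new_top = '%s.%s' % (active_path, snapshot_id)  # naming is illustrative
        subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2',
                               '-b', active_path, new_top])
        return new_top  # the caller records this as the new active file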
16:54:14 this is coordinated with cinder by creating a snapshot record w/o calling the driver snapshot code
16:54:24 JM1: not attached
16:54:35 eharney: ok
16:55:01 winston-1: no, it is volume snapshots, but you get features like a) the VM is snapped so all volume snapshots are at the same time b) the VM is quiesced
16:55:21 winston-1: +1
16:56:01 not sure i follow
16:56:07 eharney: curious how does cinder notify nova to quiesce all IO on that volume
16:56:25 winston-1: currently, Nova initiates the snapshot process and calls cinderclient to create the snaps
16:56:39 winston-1: in the future Cinder will need to be able to call into Nova via a new API
16:56:51 since we need cinder to notify nova for cases like cinder volume clone
16:56:58 eharney: no, volume snapshot is initiated by cinder
16:56:59 eharney: so I have an off-topic question I'd like to ask folks here
16:57:24 eharney: this is really something specific to Gluster and possibly NFS shared/fs systems no?
16:57:33 winston-1: I think he's doing it the other way as a PoC
16:57:38 DuncanT-: right
16:57:49 jgriffith: it's specific to remote file systems that you mount
16:57:53 jgriffith: so, i'm starting on gluster
16:58:01 but this should be ported to the NFS driver and other similar ones as well
16:58:10 jgriffith: VMware/VMFS too perhaps
16:58:18 eharney: so...
16:58:24 which is why i was asking earlier who we think is on the hook for NFS minimum feature reqs
16:58:27 bswartz: what's the deal with the shares project?
16:58:49 jgriffith: we're still working on launching it
16:58:56 sighh
16:59:01 we haven't been able to choose a name yet and that's blocking some things
16:59:07 haha!
16:59:17 bswartz: are you guys working with anybody else on this
16:59:18 :)
16:59:19 bswartz: caring - sharing is caring
16:59:22 bswartz: ie like eharney ?
16:59:25 right now it's mostly RedHat and NetApp
16:59:26 i'd like for anyone interested to check out https://review.openstack.org/#/c/38225/ and see if i'm crazy
16:59:28 thingee: nice!
16:59:34 and as we all know, naming things is a tough problem in CS
16:59:36 we're trying to get IBM involved
16:59:48 eharney: sorry... wasn't intending to derail your topic
16:59:51 once we have a name we can get an IRC channel and start holding meetings and all those nice things
16:59:51 that's why I usually end up with variable names like 'ass' for a while.
17:00:05 time's up
17:00:20 eharney: real quick
17:00:22 but yes it is on my plate to work on the shares service w.r.t. gluster support, but i haven't been active on it lately
17:00:33 eharney: so I'm ok with moving forward obviously (we've talked)
17:00:43 eharney: but we need to figure something out long term
17:00:58 eharney: more and more of these things are going to come up
17:01:06 "Sharing is Caring"
17:01:10 eharney: for now I say extensions for them are probably fine etc
17:01:30 ok
17:01:34 guess we'll end
17:01:38 jgriffith: nothing about this is gluster-specific really. i think it makes sense for the class of remote-mounted file system drivers
17:01:42 #end meeting cinder
17:01:48 thingee: do you indemnify us from any trademarks you hold on "caring" if we go w/ that?
17:01:51 eharney: ohhh... I agree with that
17:02:09 bye all!
17:02:09 eharney: I mean shared, not Gluster specific
17:02:14 #endmeeting cinder