16:01:38 #startmeeting cinder
16:01:39 Meeting started Wed May 8 16:01:38 2013 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:40 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:42 The meeting name has been set to 'cinder'
16:01:44 Hey everyone
16:01:48 hello
16:01:48 * BobBall runs and hides
16:01:52 hi
16:01:52 hi
16:01:55 o/
16:01:56 * jungleboyj sounds the trumpet
16:02:02 Howdy
16:02:03 hi
16:02:14 o/
16:02:17 Ok so here's the agenda
16:02:20 https://wiki.openstack.org/wiki/CinderMeetings
16:02:27 This should make for an interesting meeting :)
16:02:42 Looks like thingee may be delayed
16:02:54 No winston-d either
16:02:55 hi
16:02:59 bswartz: howdy
16:03:09 Perhaps we'll go out of order a bit here
16:03:35 #topic core team expectations
16:03:45 I just wanted to touch on some things real quick
16:04:00 I get a lot of emails from folks asking to be core, or asking for somebody they represent to be core
16:04:19 First, anybody on the team can nominate somebody for core
16:04:25 You can even nominate yourself
16:04:33 The process is typically via the ML
16:04:49 That being said... I wanted to point out what the responsibilities are:
16:05:31 of course I lost the link :)
16:06:38 https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
16:06:40 jgriffith: ^
16:07:12 haha
16:07:14 rushiagr: thanks
16:07:20 sorry, I had to step out for a second
16:07:30 So there's the outline on things
16:07:48 But I also want to point out that there's a lot of work expected to go along with it once you're nominated
16:07:57 Reviews are the biggest thing
16:08:21 and the way I see it, if you're not reviewing on a regular basis to begin with then you probably don't want to be core
16:08:42 My expectation here would be that you'd be very active in the review process at least a couple of days a week minimum
16:08:54 Also... for those that are core
16:09:07 Keep in mind that you're allowed to do +2 and Approve
16:09:24 There are a number of patches I see from time to time with 10 +1's on them
16:09:31 some of which are from core team members
16:09:48 I'd like to keep the backlog of reviews down as much as possible
16:09:59 Also... wondering if anybody is interested in assigned core member days?
16:10:11 That's something they used to do in Nova, might be useful for us?
16:10:16 thoughts?
16:10:26 Or is this horribly boring for everyone?
16:10:28 what is an "assigned core member day"?
16:10:43 ?
16:10:48 ya, what are the guys expected to do exactly on such days?
16:10:53 guaranteed available?
16:11:04 eharney: so the idea would be a published page that states which core members are available on which days
16:11:13 thingee: ok, guaranteed is a bit extreme :)
16:11:14 ah, ok
16:11:27 but the idea being they're on the hook for +2/A reviews on that day
16:11:35 jgriffith: Not boring for those aspiring to be more involved. :-)
16:11:37 and also hopefully around IRC for questions if they come up
16:11:43 jungleboyj: +1 ;)
16:11:44 ok
16:12:03 jgriffith: is there a problem now that warrants this?
16:12:19 thingee: not necessarily, no
16:12:21 Some of us are around most days but can't give much if any warning for being away...
16:12:28 (By that I mean me)
16:12:33 thingee: DuncanT k... maybe that doesn't fly
16:12:37 it was just an idea
16:13:15 Ok, sounds like there's not much interest here so let's move along :)
16:13:30 #topic Defined core API features
16:13:41 I thought people (core/non-core both) are around on IRC at least all the days of the week
16:13:59 rushiagr: yeah, IRC isn't a problem at all
16:14:00 :)
16:14:17 rushiagr: don't sweat it, doesn't seem it's worth taking up any more time on
16:14:35 Back to the core API features
16:14:38 jgriffith: sure
16:14:43 the problem is the time zone difference.
16:15:02 vincent_hou: indeed for IRC but not reviews
16:15:06 anyway.. moving along
16:15:09 * thingee still on phone waiting for laptop to boot
16:15:15 haha!
16:15:45 So I'll segue here
16:15:59 I don't think this topic is any sort of a surprise to anyone
16:16:04 we've talked about it on and off
16:16:06 much better
16:16:16 and we have a wiki page out there that took a crack at it
16:16:24 * guitarzan imagines jgriffith zipping around on a Segway
16:16:30 haha!
16:16:40 jgriffith: so I guess speak about what came of the TC meeting
16:16:51 thingee: right
16:17:02 So, one of the things that I brought up at the last TC meeting
16:17:17 was how to deal with the issue we're starting to see
16:17:34 with each vendor/backend wanting to implement some feature of the API their own way
16:17:49 I'll use Duncan's trivial snapshot example because it's my favorite :)
16:18:02 So HP has a back-end that they like to use snapshots for backups
16:18:13 They charge the end user a different rate and all is good
16:18:15 mep
16:18:34 The difference is they allow the parent to be deleted while the snap is still in existence
16:18:43 I believe RBD has a similar model
16:19:01 There were discussions around letting each backend do it "how they want"
16:19:04 afaik the 3PAR can't delete the parent volume of a snap
16:19:09 To me this seemed very very bad
16:19:28 different behaviors based on what backend is in use and the user has no idea what to expect
16:19:32 or they have a road-map to read
16:19:57 regardless... the TC for the most part agreed that the whole point of the OpenStack APIs is software defined X
16:20:12 and that those deltas should be extracted out
16:20:31 enhanced features above and beyond can be exposed in different ways
16:20:46 but there needs to be a common set of behaviors across the board
16:20:58 for us, that should be the reference implementation (LVM)
16:21:15 So for example if I can do "create/snapshot/delete/clone/copy-image"
16:21:25 those should all be expectations for every backend in the system
16:21:36 quantum for example has already dealt with this at the summit. They defined a set of features and vendors are expected to follow it.
16:21:38 jgriffith: I liked the idea of additional flags to do custom behavior for custom backends, I guess DuncanT proposed it
16:21:57 wait, everything we can do in LVM should be expected of every backend?
16:22:11 uh oh... this discussion again? :P
16:22:18 I think we were all in agreement that's the approach we wanted to take, but it's the reference that we follow that made this controversial.
16:22:33 not every array can behave exactly like LVM though
16:22:46 guitarzan: IMO yes, if we want to choose a different reference or make one up based on a subset of the ref implementation that's fine
16:22:55 I just think equating the reference implementation with the base functionality is not the right move to make
16:23:03 guitarzan: But really, *reference* implementations are supposed to be just that
16:23:15 guitarzan: but that's what a reference implementation is
16:23:16 I think it's only the reference because it's the easiest
16:23:25 haha! I disagree on that
16:23:31 so we should remove the extra stuff from the reference
16:23:35 SolidFire is WAAAAYYYYY easier than LVM :)
16:23:35 and have an extended LVM backend
16:23:38 So the question then becomes how do things evolve? e.g. I can make the reference implementation do 'allow parent volumes to be deleted' easily enough
16:23:40 :)
16:23:47 guitarzan: the idea would be you can go beyond the reference implementation
16:23:54 guitarzan: possibly, but what features is the real question I think
16:23:56 thingee: I understand
16:23:57 guitarzan +1
16:24:17 guitarzan: bswartz so the only thing we should require is create/delete volumes?
16:24:22 lvm has support for lots of things that some of us consider extended
16:24:30 guitarzan: such as?
16:24:37 clone?
16:25:07 heck, even snapshots maybe :)
16:25:13 but I won't go there today
16:25:14 but some features can be emulated by the arrays, in which case it gives the impression that it still supports feature X to the end user.
16:25:23 hemna: exactly
16:25:28 fwiw, I don't consider anything in the current LVM driver to be exotic, but if we use it as the bar for all other drivers, there will be a huge incentive to NOT add functionality to the LVM driver, even if it would be easy to do so, if there are other drivers that can't match the same feature
16:25:37 guitarzan: +1
16:25:49 And at some point you have to say if an array doesn't have features X, Y and Z implemented or emulated, it doesn't work with cinder
16:25:50 bswartz: I think that's the tricky point here
16:25:50 bswartz: I disagree with that completely
16:25:56 in fact it's just the opposite IMO
16:26:31 guitarzan: bswartz so what do you propose?
16:26:31 guitarzan: with the proposal, it would have to be faked to support the idea that the user can request whatever without a matrix of the base features
16:26:39 guitarzan: bswartz create and delete are the only requirements?
16:26:46 that seems really lame to me
16:27:05 jgriffith: I just agree with the idea of separating the "reference LVM driver" from the "full feature LVM driver"
16:27:20 jgriffith, have to include attach/detach though, no?
16:27:21 jgriffith: I think there's a big span between just create/delete and you have to do everything lvm can do
16:27:22 So far, I'm pretty happy with the features we have in LVM all being core, even if I think online clone should require a force flag
16:27:25 bswartz: ok, but can you explain why? And what the delta is?
16:27:44 i have to agree with bswartz... so far NFS/Gluster seem useful but don't support much
16:27:48 guitarzan: not really
16:27:53 * guitarzan shrugs
16:27:57 there's snapshots and clones
16:27:58 maybe the full feature LVM driver should use awesome BTRFS features
16:28:04 both of which you suggested shouldn't be required
16:28:14 bswartz: ?
16:28:25 jgriffith: It seems that the definition of core needs to be separated from LVM given that others have strong opinions based on their drivers. Sounds like something needs to be defined independently and voted upon.
16:28:28 My worry is that the fact that a feature is easy to do on LVM doesn't mean it is easy on a different architecture
16:28:43 we do have a wiki page now that describes the minimum driver requirements
16:28:48 DuncanT: fair
16:28:52 So let me back up a second
16:29:02 here's my view on what Cinder is:
16:29:08 Cinder is Software Defined Storage
16:29:13 https://wiki.openstack.org/wiki/Cinder#Minimum_Driver_Features
16:29:18 I'm not very interested in this, just agreeing that we might need to reconsider the low bar
16:29:22 It abstracts out a pool of backend storage devices for use by a consumer
16:29:28 thingee: Thanks.
16:29:36 jgriffith, not just block storage now?
16:29:48 bswartz: well, now that we all know bswartz isn't interested in the project :)
16:29:57 hemna: yes, block storage
16:30:05 but I'm not fighting that fight this morning
16:30:08 jgriffith, lol
16:30:16 :)
16:30:27 A reasonably high bar encourages more buy-in from those that choose to meet it
16:30:49 DuncanT: yes, and the other thing is if you're happy with just a piece of crap then we're done
16:30:51 DuncanT, and if it's too high? we risk not allowing drivers in that could be useful
16:31:01 Personally I think there's a lot of potential for Cinder
16:31:05 A low bar encourages code-and-dump drive-by contributions that aren't necessarily healthy for the project as a whole
16:31:11 but it requires people to actually agree on what its purpose is
16:31:13 DuncanT: +1
16:31:19 jgriffith, +1
16:31:22 DuncanT: +1000
16:31:32 So we need a middle ground IMO
16:31:44 I'm just saying I don't feel the need to take us further down this rathole. NetApp doesn't have an issue with the current minimum feature set. I'll let the people who do have issues make the arguments.
16:31:59 So looking at the wiki thingee referenced: https://wiki.openstack.org/wiki/Cinder#Minimum_Driver_Features
16:32:06 Hema: So we've got a list on the wiki... is that a reasonable start or would you change it?
16:32:19 Is there anything on there that just makes people scream and kick?
16:32:32 snapshot and clone seem to be what has been brought up.
16:32:46 DuncanT, I think that's a great start and isn't an unreasonably high bar
16:32:52 clone was proposed to be faked to avoid a matrix of what's supported by drivers
16:32:56 thingee: yeah, and funny enough we included "volume from snap, but NOT snap" haha
16:32:57 Online clone not needing a force flag makes me mutter darkly, but I'll raise a bug & patch for that and collect opinions that way
16:33:17 what backends can't do a snap?
16:33:22 Either way, I feel if drivers begin to diverge in going beyond the base features, we're going to end up with a matrix of some sort =/
16:33:30 DuncanT: I don't have a problem with it not being allowed online as the base
16:33:35 this minimum features list didn't say how delete snapshot should work
16:33:39 create/delete snap is there
16:33:46 DuncanT: IMO that's a perfect example of where differentiation is good
16:33:50 jgriffith: Fantastic :-)
16:34:01 thingee: Sorry... yeah, I see it now :(
16:34:13 errr.. guitarzan ^^
16:34:21 thanks, I just missed it before somehow
16:34:32 xyang_, do we really care how snapshot is implemented by a backend?
16:34:39 So the key here is that we're not saying everybody has to do it the same way
16:34:40 as long as it is...
16:34:49 jgriffith: +1
16:34:55 jgriffith: +1
16:35:03 They just have to provide something in the interface that at least gives the expected end-result
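As a rough illustration of what the minimum feature list above asks of a backend, the sketch below spells the expectation out as a driver interface. It is only an approximation assembled from this discussion; the method names mirror the Havana-era VolumeDriver conventions as best as recalled, and the wiki page remains the authoritative list.

# Illustrative sketch only -- not the actual cinder.volume.driver module.
# Method names approximate the minimum driver feature list discussed above.


class MinimalBackendDriver(object):
    """The rough contract a backend driver is expected to meet.

    The agreement above is behavioral: a concrete driver must override
    every one of these with something that produces the expected
    end-result for the user, however the backend implements it.
    Leaving any of them raising is what reviews will now reject.
    """

    def create_volume(self, volume):
        raise NotImplementedError()

    def delete_volume(self, volume):
        raise NotImplementedError()

    def create_snapshot(self, snapshot):
        raise NotImplementedError()

    def delete_snapshot(self, snapshot):
        raise NotImplementedError()

    def create_volume_from_snapshot(self, volume, snapshot):
        raise NotImplementedError()

    def create_cloned_volume(self, volume, src_vref):
        raise NotImplementedError()

    def copy_image_to_volume(self, context, volume, image_service, image_id):
        raise NotImplementedError()

    def copy_volume_to_image(self, context, volume, image_service, image_meta):
        raise NotImplementedError()

    # Attach/detach plumbing also has to be there for anything Nova can
    # consume, per the "have to include attach/detach" comment above.
    def initialize_connection(self, volume, connector):
        raise NotImplementedError()

    def terminate_connection(self, volume, connector, **kwargs):
        raise NotImplementedError()

The behavioral point from the discussion is the important part: a backend may implement each call however it likes (COW magic, dd, qcow files), as long as the user-visible result is the same and the call does not simply raise.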
16:35:08 hemna: I thought we started the discussion with whether the parent can be deleted while snapshots are still there
16:35:17 I'm quite interested in turning the current feature set into a test suite that can give a yes/no answer to "is that what you mean by snapshot"...
16:35:19 xyang_: that was my lame example
16:35:28 DuncanT: +1
16:35:32 DuncanT, +1
16:35:50 So back to the proposal
16:35:59 xyang_, I don't think we can require how the backend is implemented for snaps, for example. Not all arrays allow parent volumes to be deleted.
16:36:07 Note there's no implementation details or rules regarding online/offline
16:36:10 hemna: agree
16:36:15 We can add details around that, keeping the bar low
16:36:33 is there anything really that controversial, other than snapshots and clones for guitarzan?
16:36:52 Which BTW guitarzan I'm not sure why you care, you guys don't have a driver submitted anyway :)
16:36:53 hah
16:36:55 Just like HP
16:37:15 I'm just playing devil's advocate for lvm
16:37:17 or against maybe
16:37:26 guitarzan: haha... I'd say it's against
16:37:32 guitarzan: it does those things :)
16:37:37 even if it isn't pretty :)
16:37:42 which is my whole point on this
16:37:43 So did we decide anything or just argue for fun?
16:37:51 it doesn't have to be efficient/pretty or elegant
16:37:54 hemna: I haven't seen what we're voting on yet :)
16:37:58 :)
16:38:13 Looks like we generally agree on the current minimum feature list? And on the principle of having the list?
16:38:23 DuncanT: +1
16:38:27 guitarzan: hemna I would like an agreement on the proposed list
16:38:44 guitarzan: hemna and I'd like to warn folks that if we agree I'm going to enforce it via reviews
16:38:51 so where does the current minimum list leave drivers that already exist but fall short?
16:38:54 so the current requirements + the new ones for Havana?
16:38:55 in fact even if we don't agree I still might :)
16:39:04 the proposal is https://wiki.openstack.org/wiki/Cinder#Minimum_Driver_Features and that we don't care how you implement it, just don't raise an exception
16:39:06 eharney: the idea is they're supposed to be fixed in H
16:39:26 I don't think anyone has disagreed with the proposed list
16:39:45 Ok.. Yippeee!
16:39:50 jgriffith: ok. i'm scheming something in that area but i'll get to that later
16:40:00 the discussion started with something completely different, independent snapshots
16:40:02 I suggest we start submitting driver removal patches the day after H3 is cut.... that should get people's attention :-)
16:40:03 eharney: :) care to give a preview?
16:40:06 yah I'm ok w/ it. I have my TODO list to add copy image to volume/volume to image for Fibre Channel already, so the FC drivers should get that capability for Havana
16:40:13 jgriffith: https://blueprints.launchpad.net/cinder/+spec/qemu-assisted-snapshots
16:40:17 DuncanT: we agreed to split
16:40:30 thingee ;-)
16:40:32 I am after all "that guy"
16:40:44 jgriffith: ideas for snap support on gluster and maybe things like NFS too... still sketching out details to post soon
16:41:09 eharney: cool
16:41:13 jgriffith: https://review.openstack.org/#/c/25888/
16:41:27 y
16:41:30 it changes things up a bit, but looks workable
16:42:08 it is related to https://bugs.launchpad.net/cinder/+bug/1148597
16:42:09 Launchpad bug 1148597 in cinder "Snapshot a volume on a different cinder-volume node" [Wishlist,In progress]
16:42:52 thingee: vincent_hou you guys wanna talk about that one?
16:43:01 it's next topic :)
16:43:02 yes
16:43:10 #topic https://review.openstack.org/#/c/25888/
16:43:28 vincent_hou: go for it
16:43:52 Operations people in my company actually asked for this functionality. They wanted to put the snapshots on different machines from where the volumes are located.
16:44:13 vincent_hou: understood, but I have two issues:
16:44:15 They want to prevent the loss of both the volume and the snapshot on the same machine
16:44:21 vincent_hou: snapshots are not backups
16:44:26 vincent_hou: we now have backups
16:44:28 right
16:44:33 vincent_hou: if you want a backup, do a backup :)
16:44:36 :)
16:44:39 they said snapshot
16:44:50 vincent_hou: the other thing is, the concept of a snapshot we've pretty much made backend specific
16:45:08 backend specific?
16:45:21 ohhhhh
16:45:25 Some backends already do snapshot as a copy to swift....
16:45:26 vincent_hou: sure... on LVM it may be a qcow
16:45:31 dependent on the parent
16:45:33 Some do it as COW magic
16:45:36 on Ceph it's something else
16:45:38 etc etc
16:45:39 some do dd
16:45:57 I see.
16:46:17 not all backends support snapshots now anyway. You avoid that problem by using the cinder-backup service... although that has recently changed in the latest proposal.
16:46:42 I have one more question.
16:46:59 I've slightly lost track of all the directions people want to take backup in, looking forward to seeing code though
16:47:18 Can we back up directly to a different machine?
16:47:32 sure, have a separate swift machine
16:47:43 you'd better have more than one swift machine :)
16:47:51 vincent_hou: I'm also hoping that somebody finds the time to do backups to block etc
16:47:53 Mmmmm.......
16:48:30 so what are we agreeing on here?
16:48:32 jgriffith: exactly
16:48:36 vincent_hou: the problem is ceph wouldn't know what to do with a SF snap, which wouldn't know what to do with an HP snap, which wouldn't know what to do with.....
16:48:41 The whole idea behind backup is that you can put half your swift cluster in a different firecell and your data is safe even if your cinder & nova machines all catch fire
16:49:40 what are the disadvantages of having zoned snaps?
16:49:52 is it going to hurt anything if the option is there?
16:50:30 thingee: the problem I see is there's no way for another backend device to know what to do with it
16:50:36 You need to add a whole second code path to do remote snaps
16:50:57 jgriffith: ok, and that's not addressed by that patch
16:51:01 and in general not how cinder works
16:51:28 open discussion?
16:51:36 thingee: correct, there's really no way to address it outside of the device itself
16:51:57 #topic open discussion
16:52:44 w00t
16:52:46 short meeting
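For the "snapshots are not backups; if you want a backup, do a backup" point above, here is a minimal sketch of what that alternative looks like from the client side, assuming a Grizzly/Havana-era python-cinderclient; the backups manager and the exact Client arguments are recalled from that era and may differ in other versions, and all credentials and IDs below are placeholders.

# Rough sketch -- verify against the python-cinderclient version installed.
from cinderclient.v1 import client

# Placeholder credentials and Keystone endpoint.
cinder = client.Client('myuser', 'mypassword', 'myproject',
                       'http://keystone.example.com:5000/v2.0/')

# Back up an existing volume; the data lands in the configured backup
# store (Swift by default), independent of the volume's backend, so it
# survives the loss of the cinder-volume node that owns the volume.
backup = cinder.backups.create('MY-VOLUME-UUID')

print(cinder.backups.list())

The design point from the discussion: a snapshot stays tied to whatever backend-specific mechanism created it (qcow, COW, dd), while a backup is a copy placed outside the backend, which is why "put the snapshot on a different machine" maps to the backup service rather than to zoned snaps.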
16:52:47 i have a question about multipath/ALUA support in cinder -- do we have it?
16:52:55 jgriffith, so I need to start pulling in the FC attach code, and was going to put it in "brick"
16:53:01 where should I put it?
16:53:10 hemna: hmmm....
16:53:17 hemna: I *think* what I'm going to do is
16:53:27 put the dir structure in cinder
16:53:32 ie cinder/brick/
16:53:38 populate and use it in Cinder first
16:53:50 ok
16:53:54 then create a separate lib that we can use elsewhere, similar to what we do with common
16:54:09 * thingee has to cut out a bit early for a meeting
16:54:09 bswartz: we don't have it in the code currently, no
16:54:11 bye all
16:54:16 thingee: cya
16:54:20 thingee: thanks!
16:54:26 anyone interested in multipath support for iSCSI?
16:54:42 bswartz, didn't that land in nova Grizzly?
16:54:50 I have multipath support in my FC attach code in Nova already
16:54:52 bswartz: me
16:54:55 nova has code for iscsi multipath
16:55:11 bswartz: oh... multipath or multiattach?
16:55:11 xyang_: what does nova use it for if cinder doesn't support it?
16:55:24 bswartz: sorry
16:55:27 bswartz: I misread
16:55:46 jgriffith: does that mean not interested? lol
16:55:55 xyang_: I think we just need to enable a flag, I am going to try it but haven't had a chance
16:55:57 bswartz: sorry... no, doesn't mean that at all
16:56:20 xyang_: that would be awesome if you have time to do that and could let us know what you find
16:56:21 I think the attach call should probably evolve to return multiple targets
16:56:45 bswartz: just like FC multipath support in nova, I don't think we need to change cinder
16:57:03 okay, perhaps I need to look into it
16:57:12 I find it hard to believe that it would work with no support from the drivers
16:57:31 we didn't have to do anything in cinder to support multipath for FC
16:57:36 bswartz: I looked at the code. seems that we don't need to change anything other than the flag. I'll need to test it myself
16:57:54 jgriffith: sure, will do
16:58:14 bswartz: I think the iscsi layer does a lot of that magic for you
16:58:28 okay thanks
16:58:36 I'll just do some testing then
16:58:51 bswartz: cool, let folks know how it goes
16:58:56 anything else from anybody?
16:59:10 I'll ask about the volume type stuff in #openstack-cinder
16:59:21 seems like DuncanT doesn't want it to be configurable, but always on
16:59:21 guitarzan: ohh, yes on the quotas
16:59:33 guitarzan: folks should join in so we can get that hammered out
16:59:42 alright, see ya in the channel
16:59:44 thanks everyone
16:59:48 #endmeeting
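On the closing iSCSI multipath thread, the Nova-side flag being alluded to is, assuming the Grizzly-era option name, libvirt_iscsi_use_multipath; treat the name as a recollection to be checked against the release in use. Enabling it on the compute node is the change xyang_ offered to test, with no Cinder-side change expected. A minimal sketch of that nova.conf change:

# nova.conf on the compute node -- sketch only; option name assumed from
# the Grizzly-era libvirt volume code and may differ in other releases.
[DEFAULT]
libvirt_iscsi_use_multipath = True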