16:02:13 #startmeeting
16:02:14 Meeting started Wed Aug 8 16:02:13 2012 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:15 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:38 So first, I apologize for not updating the agenda this week and sending out a reminder
16:02:41 :)
16:03:21 Let's do things backwards today since we always run out of time...
16:03:37 Does anybody have anything that they need help with or want to discuss?
16:03:47 I'm here
16:03:55 dricco: Hey there
16:04:42 cool... so nobody has anything?
16:04:47 hey there. i wouldn't mind discussing boot-from-volume.
16:04:48 This should be a quick meeting for a change
16:04:56 I have some submissions coming
16:05:03 dtynan: sure..
16:05:04 nothing to talk about necessarily, just fyi
16:05:06 #topic boot from volume
16:05:23 dtynan: go for it
16:05:36 i'm working for HP cloud services, over the wall from Duncan & co.
16:05:54 i've been looking @ the create-volume stuff using an image_id.
16:06:02 (on Diablo)
16:06:25 just noticed, about 10 mins ago, that someone else has put some changes into Cinder for (what looks like) the same thing
16:06:27 ?
16:06:57 Yeah, it's part of changes for bfv
16:07:37 i was curious if the bfv plans fold the create-volume-from-image stuff in with the boot-instance-from-bootable-volume.
16:08:16 dtynan: It's separate patches as I recall
16:08:36 to me, they're two separate steps - anyone else agree/disagree?
16:08:52 * jgriffith agrees
16:08:54 I agree
16:09:21 is thingee here?
16:09:33 am here
16:09:42 dtynan: I'm confused... what's the question/issue?
16:10:09 is that your patch: "Change in openstack/cinder[master]: Added copy image to volume & create image of vol"
16:10:14 no
16:10:21 ah. ok.
16:10:29 dtynan: It's Unmesh
16:10:46 jgriffith: trying to figure out who else is working on this.
16:10:49 dtynan: He took the blueprint and ran with it after it sat for a very long time
16:10:58 ah. ok.
16:11:01 Unmesh Gurjar...
16:11:11 I think boot-instance-from-bootable-volume copies the image from swift to the volume, after the volume is attached to an instance. Can we do the copying without depending on an instance? I think it is necessary.
16:11:11 https://blueprints.launchpad.net/openstack/?searchtext=create-volume-from-image
16:12:20 that's what i've been looking at. i have something working in Diablo which creates a new volume (independent of an instance) which is initialized from Glance.
16:13:20 dtynan: Don't think anybody *knew* you were working on something
16:13:28 this looks good to me as well
16:13:36 dtynan: Also, you have it in diablo, are you working on it for Folsom and going to release something?
16:13:58 yes
16:14:13 dtynan: Ok, well there was no way for anybody to know that
16:14:25 haven't been working on it for long... :)
16:14:26 dtynan: Also, just FYI feature freeze is next week
16:14:35 no pressure, then
16:14:39 :)
16:15:05 Are there significant differences between your work and what Unmesh has proposed?
16:15:25 i only caught it today, so i'm still reviewing.
16:15:31 Ok
16:15:54 dtynan: So the best thing to do would be to see if it lines up with what you had planned
16:16:07 dtynan: Feel free to suggest additions/mods
16:16:08 yeah, agreed.
16:16:21 dtynan: Also, see if you can ping Unmesh and maybe work together with him
16:16:22 any1 else working on this?
16:16:41 jgriffith: will do.
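For context, a minimal sketch of the flow under discussion: create a volume, then initialize it from a Glance image, with no instance attach step in between. The helper names here (create_volume_from_image, image_service.download, local_path) are illustrative assumptions, not the actual patch being reviewed.

```python
# A hypothetical sketch only -- not the Cinder patch under review.
# Create a blank volume, then stream the Glance image onto its block
# device, entirely on the volume node, with no instance involved.

def create_volume_from_image(driver, image_service, context, volume, image_id):
    driver.create_volume(volume)             # normal blank-volume path
    device_path = driver.local_path(volume)  # e.g. an LVM device node
    with open(device_path, 'wb') as vol_file:
        # Assumed download helper: streams image chunks into a
        # file-like object. The real Glance client call may differ.
        image_service.download(context, image_id, vol_file)
```

The point dtynan and Vincent_Hou are making is that nothing in this path requires an attached instance; the copy can happen on the volume node itself.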
16:16:51 * bswartz is not working on it but is interested
16:16:59 I'm working on https://blueprints.launchpad.net/cinder/+spec/effecient-volumes-from-images, which depends on that
16:17:05 dtynan: Unmesh was the only person I knew of working on this, and jdurgin
16:17:09 nm... there ya go
16:17:11 :)
16:17:16 I don't seem to be able to assign myself to the blueprint though
16:17:51 jdurgin: Done
16:17:57 thanks
16:18:24 jdurgin: If you have something in progress make sure you update status etc
16:18:37 will do
16:19:30 dtynan: Anything else?
16:19:44 nope. ta muchly.
16:19:56 i'll get in touch with you guys directly to sync up.
16:20:02 sounds good
16:20:10 #topic netapp questions
16:20:16 bswartz: You're up
16:20:26 hi bswartz
16:20:30 I don't have any questions
16:20:36 but I can answer them
16:20:46 Oh, thought you said you had something to discuss :)
16:20:48 if anyone else wants to ask about what we're doing
16:21:06 bswartz: Ok... I'll bite. "what are you guys doing?"
16:21:08 :)
16:21:39 I'm planning to submit a driver that lets cinder use ordinary files on an NFS server as block devices
16:21:57 with additional enhancements when that NFS server is a NetApp device
16:22:06 bswartz: Ummmmm
16:22:24 bswartz: Did you talk to anybody about this beforehand?
16:22:30 bswartz: Did you submit a blueprint?
16:22:44 not yet
16:23:01 it's a self-contained driver at this point
16:23:22 Ok, I'd suggest getting a blueprint out asap
16:23:26 shouldn't impact architecture much if at all
16:23:52 bswartz: Also there were some lengthy discussions at the summit regarding NFS
16:23:56 I know, my apologies
16:24:09 bswartz: Rob was present and I think there was a consensus regarding NFS support
16:24:36 yeah, we have some ideas for significant new NFS-related features but we plan to wait for the next release to address those
16:25:32 So this brings up the question again... is Cinder the right place for NFS?
16:25:58 jdurgin: thoughts on this?
16:26:03 There are arguments for and against
16:26:25 i think it should be, personally
16:26:30 the main argument for is that customers want to manage storage in 1 place if possible
16:26:55 if it's being used to store image files accessed as block devices, I think it makes sense, just like a local storage driver would make sense
16:27:15 if it's about exporting a filesystem directly to the guest, that's a bit different
16:27:35 jdurgin: Agreed. It is filesystem-as-a-service that is contentious
16:27:41 I don't want anything to do with the second
16:28:20 bswartz: So what you're saying is a driver that presents NFS as a block storage backend?
16:28:24 I realize that the second one is controversial
16:28:34 yes, that's what we have planned for Folsom
16:28:45 bswartz: Well you're running out of time
16:28:50 I don't expect our Folsom submission to be controversial
16:28:54 jgriffith: FYI because of SM, xenserver supports NFS as a part of n-vol/cinder
16:29:20 also netapp
16:29:58 also, we have customers who are already using NFS files as virtual block devices -- albeit without n-vol/cinder
16:30:07 renuka: Sorry, I don't know what you mean "supports NFS as a part of n-vol/cinder"
16:30:31 NFS storing image files sounds fine...
16:31:18 jgriffith: The storage manager driver uses xenserver storage manager, which already has support for talking to NFS, equallogic, netapp, iscsi, etc. So we can support NFS block devices on xenserver today. And that is a part of nova volume
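To make the proposal concrete, here is a minimal sketch of the kind of driver bswartz describes: each volume is an ordinary file on an NFS export, later handed to the hypervisor as a virtual block device. The class, mount layout, and helper names are assumptions for illustration, not the planned NetApp submission.

```python
# Hypothetical illustration of files-on-NFS-as-block-devices; not the
# planned NetApp driver.
import os
import subprocess

class SimpleNfsVolumes(object):
    def __init__(self, nfs_share, mount_base='/var/lib/cinder/nfs'):
        self.nfs_share = nfs_share      # e.g. 'filer:/export/cinder'
        self.mount_base = mount_base

    def ensure_mounted(self):
        # Mount the export once; volumes live as files underneath it.
        if not os.path.ismount(self.mount_base):
            os.makedirs(self.mount_base, exist_ok=True)
            subprocess.check_call(
                ['mount', '-t', 'nfs', self.nfs_share, self.mount_base])

    def create_volume(self, name, size_gb):
        # A sparse file of the requested size backs the volume; the
        # hypervisor attaches it as a virtual block device.
        path = os.path.join(self.mount_base, name)
        with open(path, 'wb') as backing_file:
            backing_file.truncate(size_gb * 1024 ** 3)
        return path
```

This is the uncontroversial half of the NFS question: files as block storage. Exporting filesystems directly to guests (filesystem-as-a-service) is the contentious part, and is deferred past Folsom.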
16:31:36 renuka: got it...thanks
16:32:05 Alright, sounds good then
16:32:21 bswartz: You really *need* to get a blueprint and get something submitted quickly though
16:32:33 bswartz: like you said, no pressure
16:32:57 bswartz: If it's a contained driver then it shouldn't be a big deal to get it in
16:33:16 jgriffith: that is the plan
16:33:30 bswartz: I would expect that there will be support for non-NetApp NFS in some fashion as well, no?
16:33:48 bswartz: Or is this just a netapp driver of sorts?
16:34:15 jgriffith: Yes, although I'm not sure who would actually use that. It would be like a reference implementation.
16:35:07 jgriffith: That "yes" was to the first question. We will support non-NetApp devices as NFS servers as well.
16:35:17 bswartz: got ya
16:35:29 bswartz: Ok, sounds good. I'll look forward to a blueprint and some code
16:36:17 #topic https://blueprints.launchpad.net/openstack/?searchtext=create-volume-from-image
16:36:26 oops
16:36:40 #topic https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1028718
16:36:41 Launchpad bug 1028718 in nova "nova volumes are inappropriately clingy for ceph" [Medium,Triaged]
16:36:49 DuncanT: care to share
16:37:27 Basically I just want to add that we don't care which volume node gets requests; like ceph, we only run multiple volume nodes for HA and load reasons
16:38:02 The recent fix for steering volume-from-snapshot requests to a specific volume node is a regression for us
16:38:27 I was wondering how much work in that direction people are willing to consider?
16:38:58 I'm interested in fixing it, but doubt I'll get to it this week
16:39:23 Are we limited to hacks like --host or can we promote this to a first-class configuration?
16:39:59 I'm not that familiar with the scheduler layer, but I'd like it to be first-class configuration
16:40:09 DuncanT: We can make it first class
16:40:17 DuncanT: Just need to have the resources and time
16:40:19 i.e. a host_agnostic option or something
16:40:30 Awesome... Expect bugs / patches soon :-)
16:40:35 DuncanT: :)
16:40:47 As long as the bugs come with patches that's great :)
16:41:36 (That's the extent of my questions on the subject)
16:41:43 DuncanT: Ok, thanks
16:41:49 I have a question
16:42:23 I am surprised that we don't have multiple volume/cinder instances behaving only as control planes
16:42:37 i don't understand this dependence on a particular volume service instance
16:42:59 renuka: That's exactly the config I want to support
16:43:19 +1
16:43:20 you mean like how it was in nova-volume? with the blockdevicemapper or whatever it was called?
16:43:22 Does it not work today because of a particular driver?
16:43:25 renuka: It's not something we *didn't* want, we just didn't get to it
16:43:32 renuka: Hopefully via a scheduler plugin so it is a simple config option
16:44:00 do you mean nova-volume has it and not cinder?
16:44:09 renuka: scheduler is the missing piece, combined with folks using lvm/local disks
16:44:24 nova-volume does not have it except via a hack
16:44:28 renuka: No, nova-volume does not have it either
16:44:32 i didn't actually understand what you asked i suppose, based on other people's feedback. ignore me please :P
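A sketch of the host_agnostic idea floated above, as it might look at the scheduler layer: only pin a create-from-snapshot request back to the snapshot's host when the backend actually requires co-location, as LVM does. The function name and flag are assumptions; no such option exists in Cinder at this point.

```python
# Hypothetical sketch of host-agnostic scheduling; not an existing
# Cinder interface.
import random

def pick_volume_host(active_hosts, snapshot_host=None, host_agnostic=False):
    if snapshot_host and not host_agnostic:
        # LVM-style backends: the snapshot lives on exactly one node,
        # so create-from-snapshot must be steered back to that node.
        return snapshot_host
    # Shared backends (Ceph, SolidFire, NetApp, ...): any volume node
    # can serve the request, so spread the casts for HA / load.
    return random.choice(active_hosts)
```

With the flag on, the multiple-volume-nodes-for-HA deployment DuncanT describes gets requests spread across all nodes instead of pinned to one.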
16:45:02 renuka: I aim to make it an option you can turn on if you know your driver supports it
16:45:06 chalupaul: no worries, the bdm is for BDs attached to instances etc
16:45:15 wait, we end up casting a request to anything that could be nova-volume, right
16:45:15 yeah
16:45:29 which means any of the service instances could pick it up
16:45:45 renuka: create-from-snapshot now goes direct to a named volume node
16:45:59 renuka: This I aim to fix
16:46:07 DuncanT: renuka: because it has to if they're using LVM
16:46:58 jgriffith: Yup, hence making it an option. If I can make the LVM driver log errors or something if the two are combined, then I will
16:47:07 So I feel we need to make cinder drivers strictly control planes. This will help us with HA, upgrades and make things easier
16:47:18 is that doable?
16:47:27 renuka: I think it depends on the driver
16:47:28 renuka: That does not work for the LVM driver
16:47:40 renuka: Not sure about other drivers
16:47:42 renuka: For some of us (SolidFire, NetApp etc) yes, that works
16:47:51 It should work for iscsi
16:48:06 couldn't LVM be LVM over iscsi?
16:48:24 renuka: Problem comes in with snapshots/deletes etc
16:48:49 renuka: Although what DuncanT is proposing I believe will address this
16:49:12 LVM is presented via iscsi to the compute nodes anyway
16:49:16 yes
16:49:36 Right, so are we saying that after DuncanT's fix, we should be in a position to enforce drivers being only control planes?
16:50:10 Possibly, but LVM volumes may be a sticking point
16:50:16 This will also make things easier when we introduce more drivers into play for multiple backends, right
16:50:22 Have to see what DuncanT has in mind
16:50:44 renuka: is there a blueprint for the multiple backends thing?
16:50:56 renuka: I don't know that it will make the multi-backends case much different
16:51:07 renuka: No, I don't have that in mind
16:51:13 not sure if there is today. We always spoke of this as a future feature
16:51:19 bswartz: rnirmal is working on that, I think he's going to have a BP today or tomorrow
16:51:32 rnirmal: You around?
16:51:37 renuka: Just a config option you can turn on if you *know* your driver supports it
16:52:07 DuncanT: i see
16:53:56 Ok, sounds like everybody is going to be extremely busy the next week :)
16:54:03 Anything else before I wrap up?
16:54:27 anyone find vol-acl useful?
16:54:30 i have a futuristic question if there's time
16:54:49 vol-acl? Never heard of it
16:54:58 jgriffith, i can give some status update on AZ
16:55:07 i registered a BP https://blueprints.launchpad.net/cinder/+spec/volume-acl
16:55:45 what are you guys' thoughts on shared volumes? like, being able to mount a volume on multiple instances for clustering stuff.
16:55:57 Vincent_Hou: I haven't decided what I think of that one yet
16:56:14 ok
16:56:20 chalupaul: true SAN, sounds great but I don't think it's something for Folsom at all
16:56:22 Vincent_Hou: Any more details? Can we re-use the swift acl rules?
16:56:51 chalupaul: Also not sure how you'd make that work w/iscsi
16:56:55 jgriffith: def not for folsom! But i've heard that there are polarized opinions on the matter :P
16:57:01 not that much yet. just a thought on it.
16:57:07 yeah it would have to be driver specific
16:57:08 chalupaul: Mounting in several instances gets really hard regarding data consistency and ordering...
16:57:19 chalupaul: yes, FC shops especially... so now we're talking adding FC support etc
16:57:28 chalupaul: the netapp driver already supports connecting a volume/lun to multiple instances
16:57:43 The problem isn't the storage device
16:57:59 That's easy to work around
16:58:01 yeah, but cinder will not let you attach to 2 places either
16:58:25 chalupaul: yeah, the limitation is in the management layer / DB schema
16:58:30 not without changes to the database and how it figures out if volumes are "available"
16:58:34 I think it's something to consider for Grizzly
16:58:39 cool
16:58:49 i'll make a g1 blueprint with some thoughts
16:58:54 when the time is right
16:58:56 sounds good
16:59:06 thanks, sorry for the grenade toss :P
16:59:21 chalupaul: LOL... nah, it's a good thing to have on the horizon
16:59:24 Vincent_Hou: Would be interested in seeing some use cases for vol_acl, but it looks interesting
16:59:25 chalupaul: i like the idea of multiple, read-only boot volumes... :)
16:59:53 me too :D
16:59:54 i will figure out something more.
17:00:29 ok, anything else real quick?
17:00:46 alright, thanks everyone
17:00:49 #endmeeting
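On the shared-volumes point at the end: the "will not let you attach to 2 places" refusal that chalupaul and jdurgin mention comes down to a single status field on the volume record, independent of what the backend can do. A minimal sketch of that kind of gate, with illustrative names only:

```python
# Hypothetical sketch of the single-status gate that rules out
# multi-attach today; field and function names are illustrative.

class VolumeBusy(Exception):
    pass

def begin_attach(volume):
    if volume['status'] != 'available':
        # One status field, so a volume attached anywhere is 'in-use'
        # and a second attach request is refused outright. Supporting
        # shared volumes means reworking this schema, not the backends.
        raise VolumeBusy('volume %s is not available' % volume['id'])
    volume['status'] = 'attaching'
```

That is why the fix is framed as a management-layer / DB schema change for Grizzly rather than driver work.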