16:02:13 <jgriffith> #startmeeting
16:02:14 <openstack> Meeting started Wed Aug  8 16:02:13 2012 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:15 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:38 <jgriffith> So first, I apologize for not updating the agenda this week and sending out a reminder
16:02:41 <jgriffith> :)
16:03:21 <jgriffith> Let's do things backwards today since we always run out of time...
16:03:37 <jgriffith> Does anybody have anything that they need help with or want to discuss?
16:03:47 <dricco> I'm here
16:03:55 <jgriffith> dricco: Hey there
16:04:42 <jgriffith> cool... so nobody has anything?
16:04:47 <dtynan> hey there. i wouldn't mind discussing boot-from-volume.
16:04:48 <jgriffith> This should be a quick meeting for a change
16:04:56 <bswartz> I have some submissions coming
16:05:03 <jgriffith> dtynan: sure..
16:05:04 <bswartz> nothing to talk about necessarily, just fyi
16:05:06 <jgriffith> #topic boot from volume
16:05:23 <jgriffith> dtynan: go for it
16:05:36 <dtynan> i'm working for HP cloud services, over the wall from Duncan & co.
16:05:54 <dtynan> i've been looking @ the create-volume stuff using an image_id.
16:06:02 <dtynan> (on Diablo)
16:06:25 <dtynan> just noticed, about 10 mins ago, that someone else has put some changes into Cinder for (what looks like) the same thing?
16:06:57 <jgriffith> Yeah, it's part of changes for bfv
16:07:37 <dtynan> i was curious if the bfv plans folded the create-volume-from-image stuff together with the boot-instance-from-bootable-volume.
16:08:16 <jgriffith> dtynan: It's separate patches as I recall
16:08:36 <dtynan> to me, they're two separate steps - anyone else agree/disagree?
16:08:52 * jgriffith agrees
16:08:54 <jdurgin> I agree
16:09:21 <dtynan> is thingee here?
16:09:33 <thingee> am here
16:09:42 <jgriffith> dtynan: I'm confused... what's the question/issue?
16:10:09 <dtynan> is that your patch: "Change in openstack/cinder[master]: Added copy image to volume & create image of vol"
16:10:14 <thingee> no
16:10:21 <dtynan> ah. ok.
16:10:29 <jgriffith> dtynan: It's Unmesh
16:10:46 <dtynan> jgriffith: trying to figure out who else is working on this.
16:10:49 <jgriffith> dtynan: He took the blueprint and ran with it after it sat for a very long time
16:10:58 <dtynan> ah. ok.
16:11:01 <jgriffith> Unmesh Gurjar...
16:11:11 <Vincent_Hou> I think boot-instance-from-bootable-volume copies the image from swift to the volume, after the volume is attached to an instance. Can we do the copying without depending on an instance? I think it is necessary.
16:11:11 <jgriffith> https://blueprints.launchpad.net/openstack/?searchtext=create-volume-from-image
16:12:20 <dtynan> that's what i've been looking at. i have something working in Diablo which creates a new volume (independent of an instance) which is initialized from Glance.
16:13:20 <jgriffith> dtynan: Don't think anybody *knew* you were working on something
16:13:28 <bswartz> this looks good to me as well
16:13:36 <jgriffith> dtynan: Also, you have it in Diablo; are you working on it for Folsom and going to release something?
16:13:58 <dtynan> yes
16:14:13 <jgriffith> dtynan: Ok, well there was no way for anybody to know that
16:14:25 <dtynan> haven't been working on it for long... :)
16:14:26 <jgriffith> dtynan: Also, just FYI feature freeze is next week
16:14:35 <dtynan> no pressure, then
16:14:39 <jgriffith> :)
16:15:05 <jgriffith> Are there significant differences between your work and what Unmesh has proposed?
16:15:25 <dtynan> i only caught it today, so i'm still reviewing.
16:15:31 <jgriffith> Ok
16:15:54 <jgriffith> dtynan: So the best thing to do would be to see if it lines up with what you had planned
16:16:07 <jgriffith> dtynan: Feel free to suggest additions/mods
16:16:08 <dtynan> yeah, agreed.
16:16:21 <jgriffith> dtynan: Also, see if you can ping Unmesh and maybe work together with him
16:16:22 <dtynan> any1 else working on this?
16:16:41 <dtynan> jgriffith: will do.
16:16:51 * bswartz is not working on it but is interested
16:16:59 <jdurgin> I'm working on https://blueprints.launchpad.net/cinder/+spec/effecient-volumes-from-images, which depends on that
16:17:05 <jgriffith> dtynan: Unmesh was the only person I knew of working on this, and jdurgin
16:17:09 <jgriffith> nm... there ya go
16:17:11 <jgriffith> :)
16:17:16 <jdurgin> I don't seem to be able to assign myself to the blueprint though
16:17:51 <jgriffith> jdurgin: Done
16:17:57 <jdurgin> thanks
16:18:24 <jgriffith> jdurgin: If you have something in progress make sure you update status etc
16:18:37 <jdurgin> will do
16:19:30 <jgriffith> dtynan: Anything else?
16:19:44 <dtynan> nope. ta muchly.
16:19:56 <dtynan> i'll get in touch with you guys directly to sync up.
16:20:02 <jgriffith> sounds good
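The create-volume-from-image work discussed above reduces to streaming image bytes out of Glance onto the new volume's block device, with no instance involved. A minimal sketch of that idea, where `copy_image_to_volume`, `device_path`, and `image_chunks` are illustrative stand-ins rather than code from the actual patches:

```python
def copy_image_to_volume(device_path, image_chunks):
    """Stream image bytes (e.g. fetched from Glance) onto a volume's
    backing block device. Illustrative sketch only -- the real patches
    also handle image formats, size checks, and error cleanup.
    """
    with open(device_path, 'wb') as vol_file:
        # image_chunks is an iterable of byte strings from the image service
        for chunk in image_chunks:
            vol_file.write(chunk)
```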
16:20:10 <jgriffith> #topic netapp questions
16:20:16 <jgriffith> bswartz: You're up
16:20:26 <chalupaul> hi bswartz
16:20:30 <bswartz> I don't have any questions
16:20:36 <bswartz> but I can answer them
16:20:46 <jgriffith> Oh, thought you said you had something to discuss :)
16:20:48 <bswartz> if anyone else wants to ask about what we're doing
16:21:06 <jgriffith> bswartz: Ok... I'll bite.  "what are you guys doing?"
16:21:08 <jgriffith> :)
16:21:39 <bswartz> I'm planning to submit a driver that lets cinder use ordinary files on an NFS server as block devices
16:21:57 <bswartz> with additional enhancements when that NFS server is a NetApp device
16:22:06 <jgriffith> bswartz: Ummmmm
16:22:24 <jgriffith> bswartz: Did you talk to anybody about this beforehand?
16:22:30 <jgriffith> bswartz: Did you submit a blueprint?
16:22:44 <bswartz> not yet
16:23:01 <bswartz> it's a self-contained driver at this point
16:23:22 <jgriffith> Ok, I'd suggest getting a blueprint out asap
16:23:26 <bswartz> shouldn't impact architecture much if at all
16:23:52 <jgriffith> bswartz: Also there were some lengthy discussions at the summit regarding NFS
16:23:56 <bswartz> I know, my apologies
16:24:09 <jgriffith> bswartz: Rob was present and I think there was a consensus regarding NFS support
16:24:36 <bswartz> yeah, we have some ideas for significant new NFS-related features but we plan to wait for the next release to address those
16:25:32 <jgriffith> So this brings up the question again... is Cinder the right place for NFS?
16:25:58 <jgriffith> jdurgin: thoughts on this?
16:26:03 <bswartz> There are arguments for and against
16:26:25 <chalupaul> i think it should be personally
16:26:30 <bswartz> the main argument for is that customers want to manage storage in 1 place if possible
16:26:55 <jdurgin> if it's being used to store image files accessed as block devices, I think it makes sense, just like a local storage driver would make sense
16:27:15 <jdurgin> if it's about exporting a filesystem directly to the guest, that's a bit different
16:27:35 <DuncanT> jdurgin: Agreed. It is filesystem-as-a-service that is contentious
16:27:41 <jgriffith> I don't want anything to do with the second
16:28:20 <jgriffith> bswartz: So what you're saying is a driver to present NFS as a block storage backend?
16:28:24 <bswartz> I realize that the second one is controversial
16:28:34 <bswartz> yes, that's what we have planned for Folsom
16:28:45 <jgriffith> bswartz: Well you're running out of time
16:28:50 <bswartz> I don't expect our Folsom submission to be controversial
16:28:54 <renuka> jgriffith: FYI because of SM, xenserver supports NFS as a part of n-vol/cinder
16:29:20 <renuka> also netapp
16:29:58 <bswartz> also, we have customers who are already using NFS files as virtual block devices -- albeit without n-vol/cinder
16:30:07 <jgriffith> renuka: Sorry, I don't know what you mean "supports NFS as a part of n-vol/cinder"
16:30:31 <DuncanT> NFS storing image files sounds fine...
16:31:18 <renuka> jgriffith: The storage manager driver uses xenserver storage manager, which already has support for talking to NFS, equallogic, netapp, iscsi, etc. So we can support NFS block devices on xenserver today. And that is a part of nova volume
16:31:36 <jgriffith> renuka: got it...thanks
16:32:05 <jgriffith> Alright, sounds good then
16:32:21 <jgriffith> bswartz: You really *need* to get a blueprint and get something submitted quickly though
16:32:33 <jgriffith> bswartz: like you said, no pressure
16:32:57 <jgriffith> bswartz: If it's a contained driver then it shouldn't be a big deal to get it in
16:33:16 <bswartz> jgriffith: that is the plan
16:33:30 <jgriffith> bswartz: I would expect that there will be support for non-NetApp NFS in some fashion as well no?
16:33:48 <jgriffith> bswartz: Or is this just a netapp driver of sorts?
16:34:15 <bswartz> jgriffith: Yes, although I'm not sure who would actually use that. It would be like a reference implementation.
16:35:07 <bswartz> jgriffith: That "yes" was to the first question. We will support non-NetApp devices as NFS servers as well.
16:35:17 <jgriffith> bswartz: got ya
16:35:29 <jgriffith> bswartz: Ok, sounds good.  I'll look forward to a blueprint and some code
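The driver bswartz describes treats ordinary files on an NFS export as volumes: a sparse file of the requested size is created on the mount, and the hypervisor attaches it to a guest as a virtual disk. A rough sketch of the core step, assuming hypothetical names (`create_volume_file`, `nfs_mount_point`); the real driver would also manage mounting the export, permissions, and the NetApp-specific enhancements:

```python
import os

def create_volume_file(nfs_mount_point, volume_name, size_gb):
    """Create a sparse backing file on an NFS mount to act as a
    virtual block device. Sparse creation reserves the logical size
    without allocating data blocks until the guest writes."""
    path = os.path.join(nfs_mount_point, volume_name)
    with open(path, 'wb') as backing_file:
        backing_file.truncate(size_gb * 1024 ** 3)
    return path
```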
16:36:17 <jgriffith> #topic https://blueprints.launchpad.net/openstack/?searchtext=create-volume-from-image
16:36:26 <jgriffith> oops
16:36:40 <jgriffith> #topic https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1028718
16:36:41 <uvirtbot> Launchpad bug 1028718 in nova "nova volumes are inappropriately clingy for ceph" [Medium,Triaged]
16:36:49 <jgriffith> DuncanT: care to share
16:37:27 <DuncanT> Basically I just want to add that we don't care which volume node gets requests; like ceph, we only run multiple volume nodes for HA and load reasons
16:38:02 <DuncanT> The recent fix for steering volume-from-snapshot requests to a specific volume node is a regression for us
16:38:27 <DuncanT> I was wondering how much work in that direction people are willing to consider?
16:38:58 <jdurgin> I'm interested in fixing it, but doubt I'll get to it this week
16:39:23 <DuncanT> Are we limited to hacks like --host or can we promote this to a first-class configuration?
16:39:59 <jdurgin> I'm not that familiar with the scheduler layer, but I'd like it to be first class configuration
16:40:09 <jgriffith> DuncanT: We can make it first class
16:40:17 <jgriffith> DuncanT: Just need to have the resources and time
16:40:19 <jdurgin> i.e. a host_agnostic option or something
16:40:30 <DuncanT> Awesome... Expect bugs / patches soon :-)
16:40:35 <jgriffith> DuncanT: :)
16:40:47 <jgriffith> As long as the bugs come with patches that's great :)
16:41:36 <DuncanT> (That's the extent of my questions on the subject)
16:41:43 <jgriffith> DuncanT: Ok, thanks
16:41:49 <renuka> I have a question
16:42:23 <renuka> I am surprised that we don't have multiple volume/cinder instances behaving only as control planes
16:42:37 <renuka> i don't understand this dependence on a particular volume service instance
16:42:59 <DuncanT> renuka: That's exactly the config I want to support
16:43:19 <jdurgin> +1
16:43:20 <chalupaul> you mean like how it was in nova-volume? with the blockdevicemapper or whatever it was called?
16:43:22 <renuka> Does it not work today because of a particular driver?
16:43:25 <jgriffith> renuka: It's not something we *didn't* want, we just didn't get to it
16:43:32 <DuncanT> renuka: Hopefully via a scheduler plugin so it is a simple config option
16:44:00 <renuka> do you mean nova-volume has it and not cinder?
16:44:09 <jgriffith> renuka: scheduler is the missing piece, combined with folks using lvm/local disks
16:44:24 <DuncanT> nova-volume does not have it except via a hack
16:44:28 <jgriffith> renuka: No, nova-volume does not have it either
16:44:32 <chalupaul> i didn't actually understand what you asked i suppose, based on other people's feedback. ignore me please :P
16:45:02 <DuncanT> renuka: I aim to make it an option you can turn on if you know your driver supports it
16:45:06 <jgriffith> chalupaul: no worries the bdm is for attached BD's to instances etc
16:45:15 <renuka> wait, we end up casting a request to anything that could be nova-volume, right
16:45:15 <chalupaul> yeah
16:45:29 <renuka> which means any of the service instances could pick it up
16:45:45 <DuncanT> renuka: create-from-snapshot now goes direct to a named volume node
16:45:59 <DuncanT> renuka: This I aim to fix
16:46:07 <jgriffith> DuncanT: renuka: because it has to if they're using LVM
16:46:58 <DuncanT> jgriffith: Yup, hence making it an option. If I can make the LVM driver log errors or something if the two are combined then I will
16:47:07 <renuka> So I feel we need to make cinder drivers strictly control planes. This will help us with HA, upgrades and make things easier
16:47:18 <renuka> is that doable?
16:47:27 <jgriffith> renuka: I think it depends on the driver
16:47:28 <DuncanT> renuka: That does not work for LVM driver
16:47:40 <DuncanT> renuka: Not sure about other drivers
16:47:42 <jgriffith> renuka: For some of us (SolidFire, Netapp etc) yes that works
16:47:51 <renuka> It should work for iscsi
16:48:06 <renuka> couldn't LVM be LVM over iscsi?
16:48:24 <jgriffith> renuka: Problem comes in with snapshots/deletes etc
16:48:49 <jgriffith> renuka: Although what DuncanT is proposing I believe will address this
16:49:12 <jgriffith> LVM is presented via iscsi to the compute nodes anyway
16:49:16 <jgriffith> yes
16:49:36 <renuka> Right, so are we saying that after DuncanT's fix, we should be in a position to enforce drivers being only control planes?
16:50:10 <jgriffith> Possibly, but LVM volumes may be a sticking point
16:50:16 <renuka> This will also make things easier when we introduce more drivers into play for multiple backends, right
16:50:22 <jgriffith> Have to see what DuncanT has in mind
16:50:44 <bswartz> renuka: is there a blueprint for the multiple backends thing?
16:50:56 <jgriffith> renuka: I don't know that it will make the multi back-ends much different
16:51:07 <DuncanT> renuka: No, I don't have that in mind
16:51:13 <renuka> not sure if there is today. We always spoke of this as a future feature
16:51:19 <jgriffith> bswartz: rnirmal is working on that, I think he's going to have a BP today or tomorrow
16:51:32 <jgriffith> rnirmal: You around?
16:51:37 <DuncanT> renuka: Just a config option you can turn on if you *know* your driver supports it
16:52:07 <renuka> DuncanT: i see
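The option DuncanT has in mind would let deployments with shared backends (ceph, SAN arrays) declare that any volume node can service a request, rather than pinning create-from-snapshot to the node that owns the snapshot. A hedged illustration of that scheduling decision; `host_agnostic` and the function name are hypothetical, not an actual Cinder config option:

```python
import random

def pick_volume_host(snapshot_host, active_hosts, host_agnostic=False):
    """Choose which volume service handles a create-from-snapshot
    request. Illustrative only."""
    if host_agnostic and active_hosts:
        # Shared backend: every node sees the same storage, so any
        # live volume service will do -- spread the load.
        return random.choice(active_hosts)
    # Default (LVM-style local disks): the snapshot's data lives on
    # exactly one node, so the request must go back there.
    return snapshot_host
```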
16:53:56 <jgriffith> Ok, sounds like everybody is going to be extremely busy the next week :)
16:54:03 <jgriffith> Anything else before I wrap up?
16:54:27 <Vincent_Hou> does anyone find vol-acl useful?
16:54:30 <chalupaul> i have a futuristic question if there's time
16:54:49 <DuncanT> vol-acl? Never heard of it
16:54:58 <winston-d> jgriffith, i can give a status update on AZ
16:55:07 <Vincent_Hou> i registered a BP https://blueprints.launchpad.net/cinder/+spec/volume-acl
16:55:45 <chalupaul> what are you guys' thoughts on shared volumes? like, being able to mount a volume on multiple instances for clustering stuff.
16:55:57 <jgriffith> Vincent_Hou: I haven't decided what I think of that one yet
16:56:14 <Vincent_Hou> ok
16:56:20 <jgriffith> chalupaul: true SAN, sounds great but I don't think it's something for Folsom at all
16:56:22 <DuncanT> Vincent_Hou: Any more details? Can we re-use the swift acl rules?
16:56:51 <jgriffith> chalupaul: Also not sure how you'd make that work w/iscsi
16:56:55 <chalupaul> jgriffith: def not for folsom! But i've heard that there are polarized opinions on the matter :P
16:57:01 <Vincent_Hou> not that much yet. just a thought on it.
16:57:07 <chalupaul> yeah it would have to be driver specific
16:57:08 <DuncanT> chalupaul: Mounting in several instances gets really hard regarding data consistency and ordering...
16:57:19 <jgriffith> chalupaul: yes, FC shops especially... so now we're talking adding FC support etc
16:57:28 <bswartz> chalupaul: the netapp driver already supports connecting a volume/lun to multiple instances
16:57:43 <jgriffith> The problem isn't the storage device
16:57:59 <jgriffith> That's easy to work around
16:58:01 <chalupaul> yeah, but cinder will not let you attach to 2 places either
16:58:25 <bswartz> chalupaul: yeah, the limitation is in the management layer / DB schema
16:58:30 <chalupaul> not without changes to the database and how it figures out if volumes are "available"
16:58:34 <jgriffith> I think it's something to consider for Grizzly
16:58:39 <chalupaul> cool
16:58:49 <chalupaul> i'll make a g1 blueprint with some thoughts
16:58:54 <chalupaul> when the time is right
16:58:56 <jgriffith> sounds good
16:59:06 <chalupaul> thanks, sorry for the grenade toss :P
16:59:21 <jgriffith> chalupaul: LOL... nah, it's a good thing to have on the horizon
16:59:24 <DuncanT> Vincent_Hou: Would be interested in seeing some use cases for vol_acl, but it looks interesting
16:59:25 <dtynan> chalupaul: i like the idea of multiple, read-only boot volumes... :)
16:59:53 <chalupaul> me too :D
16:59:54 <Vincent_Hou> i will figure out something more.
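The multi-attach limitation chalupaul and bswartz point at lives in the management layer: a volume row carries a single status, so once it flips from 'available' to 'in-use' a second attach request is refused regardless of what the backend could support. A toy illustration of that check (the flag and field names are simplified, not the actual schema):

```python
def can_attach(volume_status, allow_multiattach=False):
    """Return True if a new attachment is permitted. With a
    single-status schema only 'available' volumes may attach; real
    multi-attach would need a separate attachments table."""
    if volume_status == 'available':
        return True
    # Already attached somewhere: allowed only if multi-attach is on.
    return allow_multiattach and volume_status == 'in-use'
```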
17:00:29 <jgriffith> ok, anything else real quick?
17:00:46 <jgriffith> alright, thanks everyone
17:00:49 <jgriffith> #endmeeting