15:01:09 <bswartz> #startmeeting manila
15:01:10 <openstack> Meeting started Thu Feb  6 15:01:09 2014 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:13 <openstack> The meeting name has been set to 'manila'
15:01:15 <ndipanov> garyk, ah yes
15:01:28 <bswartz> hello guys
15:01:35 <scottda> Hi
15:01:41 <ndipanov> well those are all easy to fix so no big deal
15:01:43 <aostapenko> Hello
15:01:44 <vponomaryov> Hello
15:01:46 <xyang2> hi
15:01:46 <csaba|afk> hello
15:01:47 <ndn9797> Hi..
15:01:47 <bill_az> Hi
15:01:57 <yportnova> hi
15:02:10 <achirko> hello
15:02:21 <bswartz> wow lots of people here today
15:02:24 <bswartz> that's good
15:02:39 <bswartz> and I'm glad freenode got over the DDoS attack from last weekend -- that was annoying
15:03:35 <bswartz> I don't suppose we have rraja here today
15:03:48 <bswartz> did all of you see his email?
15:03:57 <scottda> yes
15:04:10 <bswartz> I'd like to spend some time talking about that
15:04:12 <csaba> #link http://thread.gmane.org/gmane.comp.cloud.openstack.devel/15983
15:04:23 <bswartz> csaba: ty!
15:05:08 <bswartz> I'd also like to revisit the neutron/nova/service VM networking stuff
15:05:39 <bswartz> it will matter even more for the gateway-mediated stuff than for the generic driver I think
15:05:43 <bswartz> but first
15:05:51 <bswartz> #topic dev status
15:06:07 <bswartz> vponomaryov: do you have updates like usual?
15:06:13 <vponomaryov> yes
15:06:18 <vponomaryov> Dev status:
15:06:27 <vponomaryov> 1) Bugfixing.
15:06:27 <vponomaryov> Main effort was directed to bugfixing this week. Bugs for share networks are available on Launchpad.
15:06:37 <vponomaryov> 2) BP https://blueprints.launchpad.net/manila/+spec/share-network-activation-api
15:06:37 <vponomaryov> gerrit: https://review.openstack.org/#/c/71497/ (client)
15:06:37 <vponomaryov> TODO: server side implementation
15:07:01 <vponomaryov> Generic driver - https://review.openstack.org/#/c/67182/
15:07:01 <vponomaryov> Some improvements. It now works much faster, using the Python paramiko module for SSH instead of a venv with an ssh client...
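As a rough illustration of the paramiko approach mentioned above, a driver might run commands on the service VM along these lines (a minimal sketch; the host, user, and key path are placeholders, not Manila's actual code):

    import paramiko

    def run_on_service_vm(host, user, command, key_path=None, password=None):
        """Run a shell command on the share service VM over SSH."""
        ssh = paramiko.SSHClient()
        # Assumption: host keys are not pre-seeded, so accept them on first use.
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect(host, username=user, key_filename=key_path, password=password)
        try:
            _stdin, stdout, stderr = ssh.exec_command(command)
            exit_status = stdout.channel.recv_exit_status()
            return exit_status, stdout.read(), stderr.read()
        finally:
            ssh.close()

    # Hypothetical usage: refresh NFS exports on the service VM.
    # run_on_service_vm('10.254.0.2', 'manila', 'sudo exportfs -ra',
    #                   key_path='/etc/manila/ssh/id_rsa')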
15:07:23 <bswartz> let's talk about (2) briefly
15:08:18 <bswartz> the idea behind the share network activation API is that under the original design, we didn't actually create a vserver until the first share was created
15:08:19 <vponomaryov> ok, anyone have questions about bp https://blueprints.launchpad.net/manila/+spec/share-network-activation-api ?
15:08:58 <bswartz> this API allows us to create it early -- which is good for stuff like validating the parameters that were passed in when the share network was created
15:09:23 <bswartz> Can I assume everyone thinks that's a good thing?
15:09:59 <bswartz> okay silence is consent
15:10:38 <bswartz> anyone who wants to provide review input to the generic driver, now is the time
15:10:45 <bswartz> I want to merge this in the next week
15:11:03 <vponomaryov> bswartz: we do too
15:11:27 <bswartz> note that we may still modify it in the future -- in particular we may do some of the things rraja suggests
15:11:47 <aostapenko> unittests are in progress, and there will be some minor changes
15:11:58 <bswartz> but with I-3 coming I want to have feature completeness for at least a few drivers
15:12:04 <vponomaryov> also, I want to remind everyone that the generic driver still requires a lightweight image with NFS and Samba services
15:12:27 <bswartz> vponomaryov: where is that list of requirements documented?
15:12:47 <csaba> vponomaryov: I'm doing some work in that direction
15:13:04 <aostapenko> bswartz: I think that we should merge what we have now, and then make some changes
15:13:05 <vponomaryov> bswartz: we haven't documented such stuff
15:13:13 <csaba> hopefully I can present next week
15:13:23 <vponomaryov> csaba: thanks
15:13:26 <bswartz> let's write down a list of all of the things the generic driver will depend on from the glance image
15:13:43 <bswartz> obviously an SSH server is required, as well as nfs-kernel-server and samba
15:14:03 <bswartz> does it matter if it's samba3 or samba4?
15:14:28 <bswartz> are there any other subtle requirements? does the image need cloud-init?
15:15:01 <vponomaryov> image should have server and client sides for NFS and Samba
15:15:19 <bswartz> why would the image need NFS/samba clients?
15:15:26 <vponomaryov> because this image can be used as a client VM image for mounting shares
15:15:42 <bswartz> what shares would it mount?
15:15:42 <vponomaryov> unified for both purposes
15:15:50 <vponomaryov> manila's shares
15:15:57 <bswartz> oh you mean rraja's ideas?
15:15:57 <aostapenko> it requires cloud-init for key injection, but we have alternative auth through a password
15:16:31 <bswartz> let's hold off on the gateway stuff -- I just want to document what's required for the generic driver
15:16:35 <aostapenko> we do not need samba/nfs clients
15:16:38 <vponomaryov> bswartz: I mean the use case of not only creating a share, but also using it, on a client's VM
15:16:47 <bswartz> there will be additional requirements if we also use these images as gateways
15:17:06 <bswartz> yes but the client VMs will be some other glance image
15:17:19 <bswartz> and those images will be tenant-owned
15:17:23 <bswartz> this image will be admin-owned
15:17:27 <vponomaryov> bswartz: yes, it is in wishlist
15:18:26 <bswartz> okay
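To make the image requirements above concrete, here is a sketch of how the driver might boot the admin-owned service VM with key injection (which is why cloud-init is needed), with password auth as the fallback mentioned earlier. All names, credentials, and IDs are placeholders, and the exact client signature may differ from Manila's actual code:

    from novaclient import client as nova_client

    # Assumption: placeholder credentials; in Manila this would come from service config.
    nova = nova_client.Client('2', 'admin', 'secret', 'service',
                              'http://keystone:5000/v2.0')

    # Boot the admin-owned service VM image; cloud-init inside the image picks up
    # the injected keypair, with password auth as the fallback the team mentioned.
    server = nova.servers.create(
        name='manila-service-vm',
        image=nova.images.find(name='manila-service-image'),  # hypothetical image name
        flavor=nova.flavors.find(name='m1.small'),
        key_name='manila-service-key',        # requires cloud-init in the image
        nics=[{'net-id': 'SERVICE_NET_ID'}],  # the per-share-network service net, see below
    )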
15:18:42 <bswartz> #topic networking
15:19:43 <bswartz> so we were able to make the generic driver work with separate networks last week
15:20:15 <bswartz> each service VM gets its own service network, and the network is joined to the tenant network with a virtual router
15:20:43 <bswartz> I'm pretty satisfied that this approach works, but the downside is that we use up 8 IPs (a /29 CIDR) for every service VM
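A sketch of the plumbing just described, using python-neutronclient: create a per-service-VM network with a /29 subnet (8 addresses, 6 usable hosts) and join it to the tenant's router. Credentials, CIDR, and the router ID are placeholders, not the generic driver's actual values:

    from neutronclient.v2_0 import client as neutron_client

    # Assumption: placeholder credentials; in Manila this comes from the service config.
    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='service',
                                    auth_url='http://keystone:5000/v2.0')

    # One service network per service VM; a /29 holds 8 addresses (6 usable hosts).
    net = neutron.create_network({'network': {'name': 'manila-service-net'}})
    subnet = neutron.create_subnet({'subnet': {
        'network_id': net['network']['id'],
        'cidr': '10.254.0.8/29',          # hypothetical allocation
        'ip_version': 4,
        'name': 'manila-service-subnet',
    }})

    # Join the service network to the tenant's router so tenant VMs can reach the share server.
    neutron.add_interface_router('TENANT_ROUTER_ID',
                                 {'subnet_id': subnet['subnet']['id']})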
15:21:02 <aostapenko> bswartz: we could even run clusters in the service subnet in the future
15:21:08 <bswartz> also there's a small chance that the IP we choose for any given service VM conflicts with something in the tenant's universe
15:22:14 <bswartz> I'm still interested in putting the service VMs directly on the tenant networks if/when we can solve the issues currently preventing that
15:22:30 <bswartz> scottda: did you discover anything new since last week?
15:22:39 <scottda> Nothing earth-shattering...
15:22:54 <bswartz> scotta: do you just want to share with the team what we discussed last week
15:22:59 <scottda> The Neutron people have the idea of a Distributed Virtual Router (DVR)
15:23:10 <scottda> https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
15:23:12 <aostapenko> bswartz: not only the tenant's universe, but the cloud's universe, because tenants' networks can be connected
15:23:38 <scottda> They are actively working on this, but don't expect it to be in in Icehouse. Probably will be this summer.
15:24:03 <bswartz> aostapenko: yes but openstack already manages IPs for the whole cloud -- what it doesn't control is what the tenant connects in from the outside world
15:24:19 <scottda> It will do VM-to-VM intra-tenant routing, but for inter-tenant it will still go out to a Network node. It will have slightly better performance, but not quite what manila wants.
15:24:58 <bswartz> performance is one motivation
15:25:25 <scottda> With the proper champion to write the code and push the blueprint, the DVR can, and probably some day will, be enhanced to have a VM-to-VM intra-tenant connectivity option. But that is in the future.
15:26:05 <bswartz> but for me the main thing is that hardware-based drivers will actually be able to directly join tenant networks, and it seems better for the generic driver to have the same behavior -- if only for consistency and common testing
15:26:46 <scottda> That is the synopsis
15:27:00 <bswartz> thanks scottda
15:27:50 <bswartz> so the plan here is -- we're going to continue with aostapenko's approach of creating a /29 for each instance of the generic driver
15:28:06 <bswartz> however we're going to monitor neutron to see if they give us a better alternative in the future
15:28:58 <achirko> people in Neutron are also working on Service VM infrastructure - https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
15:29:05 <bswartz> #topic gateway-mediated
15:29:05 <aostapenko> bswartz: we can even extend to /28 if we want to launch clusters of service vms
15:29:47 <bswartz> achirko: that's also very interesting to me
15:29:59 <bswartz> I should probably join some neutron meetings and +1000 that BP
15:30:04 <achirko> we can get some feedback from them on our approach, but it probably will slow down Generic Driver delivery
15:30:31 <bswartz> achirko: we don't need to slow down anything -- we can go forward with the current approach
15:30:35 <vbellur> bswartz: all of us could probably +1000 that :)
15:30:37 <bswartz> achirko: if this BP happens we can go back and update
15:30:59 <bswartz> okay we're on a new topic though
15:31:08 <bswartz> and since we still don't have rraja, I'll drive
15:31:35 <bswartz> so I'll repost the link
15:31:35 <bswartz> #link http://thread.gmane.org/gmane.comp.cloud.openstack.devel/15983
15:31:44 <csaba> I can talk on behalf of rraja
15:31:50 <bswartz> oh okay
15:32:00 <bswartz> csaba: take it away!
15:32:43 <csaba> well the basic idea is that if we think of various storage backends
15:33:27 <csaba> i.e. there is Cinder as with the generic driver, and there could be lvm, ganesha, gluster-nfs, whatnot...
15:34:03 <csaba> which are implemented or WIP as single-tenant drivers
15:34:18 <csaba> running on the hypervisor
15:34:37 <csaba> now what they would do in a generic-driver-like architecture
15:34:40 <csaba> is not much different
15:35:01 <csaba> just their activity would have to be lifted to the service VM
15:36:02 <csaba> so what we thought is, if the architecture could be split into a backend exporter and a network plumbing component...
15:36:24 <bswartz> okay so let me summarize and see if I'm off base
15:36:38 <csaba> then it would be easy to leverage those other efforts and use them in a multi-tenant way
15:36:52 <bswartz> we could implement gateway-mediated access with the following:
15:37:12 <bswartz> 1) add a network connection from the generic driver's service VM to the backend storage network
15:37:41 <bswartz> 2) add filesystem clients to the service VM
15:38:19 <bswartz> 3) implement the backend to just serve filesystems to a single storage network
15:38:52 <bswartz> 4) bridge the backend filesystem onto the tenant network using an NFS server in the service VM, either ganesha-nfs or nfs-kernel-server
15:39:37 <bswartz> is that it?
15:40:35 <csaba> what do you mean by 3)?
15:40:52 <csaba> "single storage network"?
15:40:57 <vbellur> that sounds right to me
15:41:03 <bswartz> I think the same thing you meant by (10:33:59 AM) csaba: which are implemented or WIP as single-tenant drivers
15:41:09 <csaba> ah OK
15:41:27 <csaba> fine
15:41:37 <bswartz> the backends for the gateway-mediated mode wouldn't need to understand tenants really
15:41:46 <bswartz> because the VMs would do that translation
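A rough sketch of steps (2)-(4) as summarized above, assuming the nfs-kernel-server variant of step (4) and a helper like run_on_service_vm() from the earlier paramiko sketch. The backend address, mountpoint, and export options are hypothetical, and the mount type would depend on the actual backend filesystem:

    # Assumes the service VM already has a second NIC on the storage network.
    BACKEND = 'storage-backend.example:/export/tenant_share'   # hypothetical backend export
    MOUNTPOINT = '/mnt/tenant_share'

    def bridge_share_to_tenant(vm_ip, tenant_cidr):
        # (2)/(3): mount the backend filesystem inside the service VM.
        run_on_service_vm(vm_ip, 'manila', 'sudo mkdir -p %s' % MOUNTPOINT)
        run_on_service_vm(vm_ip, 'manila',
                          'sudo mount -t nfs %s %s' % (BACKEND, MOUNTPOINT))
        # (4): re-export it to the tenant network with nfs-kernel-server.
        export_line = '%s %s(rw,sync,no_subtree_check)' % (MOUNTPOINT, tenant_cidr)
        run_on_service_vm(vm_ip, 'manila',
                          "echo '%s' | sudo tee -a /etc/exports" % export_line)
        run_on_service_vm(vm_ip, 'manila', 'sudo exportfs -ra')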
15:42:28 <bswartz> okay so I'd like to have some discussion on the difference between ganesha-nfs and nfs-kernel-server in this context
15:42:48 <bswartz> does redhat have a preference?
15:43:13 <scottda> +1 to flexibility in choice of NFS server
15:43:24 <csaba> well the point of ganesha is having pluggable storage backends
15:44:01 <csaba> but then kernel nfs is more mature / known ... so it's really good to allow a choice
15:44:50 <bswartz> hmm
15:45:04 <bswartz> I kind of don't like giving users an option here
15:45:16 <bswartz> it seems like supporting both will double the testing effort and chances for bugs
15:45:31 <bswartz> it would be better to agree on one and implement that i think
15:45:52 <bswartz> ofc that wouldn't stop someone from also adding a patch to support the other -- but I feel like we should have a recommended approach
15:46:00 <csaba> well we don't need to support both
15:46:34 <csaba> one can be chosen as supported... down the road
15:46:57 <bswartz> so what I'm asking is, do you have a preference at this time?
15:46:59 <vbellur> bswartz:  how do we have nfs-kernel-server reach out to various storage backends?
15:47:11 <csaba> the point is to ease development effort... for various multi-tenant PoCs
15:47:44 <bswartz> vbellur: I think nfs-kernel-server layers on top of VFS inside the Linux kernel, so any filesystem that has a kernel-mode driver will work underneath it
15:48:20 <bswartz> if you count FUSE as a kernel-mode driver then I think literally anything will work
15:48:45 <vbellur> bswartz: gluster does not have a kernel mode driver and the performance implications of a fuse mount being an export would be pretty bad
15:48:52 <bswartz> ah
15:49:06 <bswartz> well that's a fairly good argument for preferring ganesha-nfs then
15:49:17 <bswartz> esp if redhat wants to be the first working implementation of this mode
15:49:49 <bswartz> anyone see a serious downside to ganesha-nfs?
15:49:50 <vbellur> bswartz: the very reason we implemented our own NFS server was the severe performance hit we experienced when exporting a fuse mount point.
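For context on the ganesha option, a driver might render an NFS-Ganesha export block for the GlusterFS FSAL roughly like the sketch below. This is illustrative only: the exact option names and the reload mechanism vary across Ganesha versions, and none of these names come from Manila code:

    # Hypothetical template for an NFS-Ganesha export backed by the GLUSTER FSAL.
    GANESHA_EXPORT_TEMPLATE = """
    EXPORT {
        Export_Id = %(export_id)d;
        Path = "/%(volume)s";
        Pseudo = "/%(volume)s";
        Access_Type = RW;
        FSAL {
            Name = GLUSTER;
            Hostname = "%(gluster_host)s";
            Volume = "%(volume)s";
        }
    }
    """

    def render_ganesha_export(export_id, gluster_host, volume):
        """Fill in the export block for one share; caller writes it to the service VM."""
        return GANESHA_EXPORT_TEMPLATE % {'export_id': export_id,
                                          'gluster_host': gluster_host,
                                          'volume': volume}

    # Hypothetical usage: push the rendered block to the service VM's ganesha config
    # and reload, e.g. run_on_service_vm(vm_ip, 'manila', 'sudo service nfs-ganesha restart')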
15:49:51 <vponomaryov> an additional +1 for ganesha could be the possibility of using it with Linux containers
15:50:06 <bswartz> vponomaryov: yes I was going to mention that too
15:51:18 <vbellur> so can we have nfs-ganesha as the default for v1?
15:52:19 <vponomaryov> vbellur: does ganesha have stable releases for most of distros?
15:52:20 <aostapenko> if we use lxc provided by nova, we could not launch VMs on other hypervisors in the case of a single-node installation
15:52:28 <bswartz> vbellur: yeah that works for me -- although that approach diverges a little more from the generic driver than nfs-kernel-server would
15:53:38 <vbellur> vponomaryov: we have packages for CentOS & Fedora. We also can work with Ganesha community for other distros.
15:53:39 <bswartz> aostapenko: I don't understand your comment
15:55:00 <vponomaryov> bswartz: he meant that if we use LXC VMs, we cannot use VMs with other hypervisors
15:55:07 <bswartz> regarding packages and distros, I see it as an administrator's job to provide the glance image that will become the service VM -- if RH-based distros are better suited to running gluster and ganesha then administrators will probably choose those
15:55:43 <bswartz> yeah LXC has various downsides which we discussed a few months ago
15:55:46 <aostapenko> bswartz: but we should be able to launch other VMs in the cloud, not just service VMs
15:56:14 <bswartz> but LXC has some very interesting advantages too -- I want to come back and look at LXC sometime
15:57:15 <bswartz> #topic open discussion
15:57:22 <bswartz> okay anything else before our time is up?
15:57:23 <aostapenko> bswartz: I had many exciting nights with lxc; it would be great for performance if we used it
15:58:09 <bswartz> aostapenko: lol
15:58:34 <bswartz> aostapenko: I know it has much lower overhead which is good for scalability in low-load situations
15:59:33 <bswartz> okay thanks everyone
15:59:38 <vbellur> bswartz: thanks!
15:59:41 <aostapenko> thanks, bye
15:59:41 <vponomaryov> thanks
15:59:42 <bswartz> see you next week
15:59:49 <scottda> bye
15:59:56 <vbellur> bye all
15:59:57 <bswartz> #endmeeting