15:01:16 <bswartz> #startmeeting manila
15:01:17 <openstack> Meeting started Thu Jan 30 15:01:16 2014 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:20 <openstack> The meeting name has been set to 'manila'
15:01:26 <bswartz> hello folks
15:01:27 <aostapenko> hello, everyone
15:01:29 <yportnova> hi
15:01:32 <caitlin56> hello
15:01:34 <vponomaryov> Hello
15:01:35 <scottda> Hi
15:01:39 <gregsfortytwo1> hi
15:01:47 <bswartz> #link https://wiki.openstack.org/wiki/ManilaMeetings
15:01:49 <shamail> Hi
15:01:54 <rraja> hi
15:01:56 <csaba> hello
15:02:16 <bswartz> some of us were having a discussion earlier this week that I think the whole group would find interesting
15:02:28 <bswartz> (also I'm hoping someone has a genius idea that will solve the problem)
15:02:45 <bill_az_> Hi everyone
15:03:04 <bswartz> so the issue is in the generic driver design, with how it connects to the tenant network
15:03:58 <bswartz> Our initial hope was that we could simply create a network port on the tenant's network and assign that to the service VM
15:04:32 <bswartz> however we don't want to make the service VM owned by the tenant for a variety of reasons
15:04:45 <ndn9797> Hi, I'm Nagesh, from Bangalore, joining manila meeting for first time..
15:04:47 <bswartz> 1) We don't want it to count against the tenant's quotas
15:04:54 <bswartz> 2) We don't want the tenant to accidentally delete it
15:05:41 <yportnova> ndn9797: Hi
15:05:42 <bswartz> 3) We want to hide manila backend details from the tenant to the extent possible
15:05:52 <bswartz> ndn9797: hello and welcome!
15:06:19 <bswartz> so it's preferable for the service VMs created by the generic driver to be owned by some special service tenant
15:06:48 <bswartz> however it seems that either nova or neutron or both won't let us connect a VM owned by one tenant to a network owned by another tenant
15:07:21 <bswartz> one potential workaround is to allow the service VMs to have their own separate network and to simply provide routing between that network and tenant networks
15:07:34 <bswartz> I find that workaround to be complex and error prone
15:07:56 <gregsfortytwo1> there isn't an infrastructure for "admin" nodes or whatever to be connected to anybody they want?
15:08:03 <bswartz> but without making changes to nova or neutron or both, it seems to be the only option
15:08:29 <vponomaryov> gregsfortytwo1: the problem is in assigning a user tenant port to a service tenant vm
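For concreteness, a minimal sketch of the flow under discussion, assuming pre-authenticated python-neutronclient and python-novaclient handles (`neutron`, `nova`) for the service tenant; all IDs are illustrative:

```python
# Illustrative sketch only: `neutron` and `nova` are assumed to be
# python-neutronclient / python-novaclient clients already authenticated
# as the service tenant (with admin rights); the IDs are made up.

TENANT_NET_ID = 'net-owned-by-user-tenant'
SERVICE_VM_ID = 'vm-owned-by-service-tenant'

# Step 1 works: as admin we can create a port on the tenant's network.
port = neutron.create_port({'port': {
    'network_id': TENANT_NET_ID,
    'name': 'manila-service-port',
}})['port']

# Step 2 is where it breaks down: plugging a port from one tenant's network
# into an instance owned by a different tenant is rejected.
nova.servers.interface_attach(SERVICE_VM_ID, port['id'], None, None)
```

The port creation itself succeeds; it is the attach step that the ownership checks reject, which is what forces the routed workaround discussed next.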
15:08:50 <bswartz> oops I forgot to set the topic
15:08:59 <bswartz> #topic Generic Driver Networking
15:09:21 <shamail> Would assigning a static IP to the service VM bypass the separate tenant/network issue?  Assuming there's a way for us to do this.
15:09:23 <bswartz> okay so hopefully most of you understand the problem
15:09:50 <aostapenko> I have news since yesterday: it's quite easy to configure routers, so we will not need any additional route rules in the vm in our private network
15:10:00 <bswartz> shamail: the IP will end up getting assigned by neutron and will be effectively static
15:10:02 <ndn9797> I understand the problem now. Thanks <bswartz>
15:10:03 <shamail> The issue is with floating IPs and being assigned cross-tenants, no?  Might be completely mistaken since I wasn't in the discussions earlier.
15:10:07 <bswartz> but it will be an IP from a separate network
15:10:12 <gregsfortytwo1> has anybody talked to people in the other groups yet? I'm surprised there's not an accepted model already
15:10:41 <bswartz> gregsfortytwo1: I think that nobody is doing anything that requires what we want
15:10:50 <bswartz> since you always have the option of setting up virtual routes between tenants
15:10:51 <shamail> But the assignment mechanism (e.g. Neutron) is the issue, not the address space itself so couldn't we find an alternative way to assign?
15:11:00 <gregsfortytwo1> everybody else is still just doing things through the hypervisor? really?
15:11:05 <bswartz> no it's not actually an IP issue
15:11:16 <shamail> Okay.
15:11:34 <bswartz> we can allocate a port on the tenant's network, we just can't assign it to our VM through nova
15:11:59 <bswartz> we could still fake it and use the IP within the VM -- but without actual bridging of the packets that's useless
15:12:49 <bswartz> gregsfortytwo1: I'm not sure what you mean
15:12:57 <aostapenko> we'd like to share docs about how generic driver is implemented now
15:13:12 <bswartz> gregsfortytwo1: I think people are using their own tenant networks currently, and when they need connectivity they're setting it up through neutron routes
15:13:15 <aostapenko> https://docs.google.com/a/mirantis.com/drawings/d/1sDPO9ExTb3zn-GkPwbl1jiCZVl_3wTGVy6G8GcWifcc/edit
15:13:25 <aostapenko> https://docs.google.com/a/mirantis.com/drawings/d/1Fw9RPUxUCh42VNk0smQiyCW2HGOGwxeWtdVHBB5J1Rw/edit
15:13:35 <aostapenko> https://docs.google.com/a/mirantis.com/document/d/1WBjOq0GiejCcM1XKo7EmRBkOdfe4f5IU_Hw1ImPmDRU/edit
15:13:57 <bswartz> the ideal case for me is that we can create a VM that is right on the tenant's network but the VM is hidden from them and doesn't count against their quotas
15:14:10 <caitlin56> Could we do this the opposite way? Rather than giving each machine in each tenant network access to the management network, could we give the management machine a floating IP on each tenant network?
15:14:29 <bswartz> caitlin56: yes that's an option, but it has some problems
15:15:08 <bswartz> in particular, the security services that the storage server needs to communicate with will be inside the tenant network
15:15:25 <bswartz> so the "service VM" will need routes back into the tenant network
15:15:34 <caitlin56> There is an even better option, but I do not think neutron supports it: you place the management computer in a "DMZ VLAN" and allow routing to the DMZ from any tenant network.
15:15:43 <bswartz> we may be able to get that to work by just setting up the routes
15:16:05 <bswartz> caitlin56: we need 2-way routing though
15:16:13 <caitlin56> The gotcha is that all tenant networks have to be NATted to a unique address on the tenant network.
15:16:20 <gregsfortytwo1> I'm not a network engineer so this is all a bit beyond me, but what's error-prone about setting up the routes?
15:16:57 <aostapenko> bswartz: yes, we can do so, and it's not difficult
15:16:57 <caitlin56> gregsfortytwo: the problem is that the tenant networks can all think they are 10.0.0.0/8
15:17:10 <bswartz> gregsfortytwo1: perhaps error-prone was the wrong phrase -- what I meant was that there's a lot of complexity with network routes and testing all the corner cases seems like a huge problem
15:17:49 <bswartz> aostapenko thinks he can make it work so it's what we're going to try
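For reference, a hedged sketch of that routed workaround, assuming an admin-authenticated python-neutronclient handle `neutron` and made-up subnet IDs:

```python
# Sketch (not the actual driver code) of the routed approach between the
# service network and a tenant network; IDs are illustrative.

SERVICE_SUBNET_ID = 'subnet-with-service-vms'
TENANT_SUBNET_ID = 'subnet-of-the-user-tenant'

router = neutron.create_router(
    {'router': {'name': 'manila-service-router'}})['router']

# Plugging the router into both subnets gives two-way reachability between
# the service VM and the tenant network without NAT and without adding any
# route rules inside the guest.
neutron.add_interface_router(router['id'], {'subnet_id': SERVICE_SUBNET_ID})
neutron.add_interface_router(router['id'], {'subnet_id': TENANT_SUBNET_ID})
```

The address-collision concern raised below still applies: this only works while the service subnet's CIDR does not overlap anything the tenant already routes.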
15:17:52 <gregsfortytwo1> isn't that Somebody Else's Problem? (where Somebody Else is neutron)
15:17:59 <scottda> If the ideal solution involves work from Neutron (and perhaps Nova), it might be worth trying to get a blueprint together for those team. Neutron doesn't seem to be too locked down for new features.
15:18:09 <bswartz> I just wanted to throw this problem out to the larger group to see if we could find a better way
15:18:15 <gregsfortytwo1> we don't just say "connect this IP and this network" or similar?
15:19:19 <bswartz> gregsfortytwo1: it's a question of the routing rules -- which packets go to which gateways
15:19:28 <caitlin56> The trick is that you need very specific firewall and NAT rules to be set up properly. You want to allow each tenant network to use the same addresses, and you do not want to allow general traffic from the tenant networks.
15:20:08 <bswartz> consider this: the service network will have some CIDR range, and that range will need to be mapped into every tenant's routing tables
15:20:20 <caitlin56> In the setup I have seen, each tenant is NATted to the DMZ, and routing is limited to the DMZ subnet (i.e. no packets transit the DMZ).
15:20:33 <bswartz> if any tenant is using those same addresses for something (perhaps a VPN connection to a corporate network) we will get an address collision
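A small illustration of that collision, using Python's ipaddress module with made-up CIDRs:

```python
# Tiny illustration of the collision bswartz describes: if the service
# network's CIDR overlaps anything a tenant already routes (for example a
# VPN to a corporate network), the injected route steals that traffic.
import ipaddress

service_net = ipaddress.ip_network('10.254.0.0/28')   # hypothetical service CIDR
tenant_routes = [
    ipaddress.ip_network('10.0.0.0/24'),              # tenant's own subnet
    ipaddress.ip_network('10.254.0.0/16'),            # VPN to a corporate network
]

for net in tenant_routes:
    if service_net.overlaps(net):
        print('collision: service CIDR %s overlaps tenant route %s'
              % (service_net, net))
```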
15:20:50 <caitlin56> bswartz: it's on the DMZ that we need to do the routing.
15:21:39 <shamail> Wouldn't that still require something to maintain the NAT entries and provision the addresses?  How would this be from a maintenance perspective?
15:21:46 <caitlin56> The assumption is that the DMZ has a public IPv4 or IPv6 subnet - something a tenant should not be using.
15:22:14 <bswartz> caitlin56: you're proposing that service VMs be given public addresses?
15:22:31 <caitlin56> You allow a route from each tenant network to that specific public subnet, and you NAT-translate every packet when routed.
15:22:57 <bswartz> caitlin56: how do the service VMs tunnel back into the tenant networks?
15:23:00 <caitlin56> It can be PNAT or block NAT translation, but here PNAT seems more appropriate.
15:23:22 <aostapenko> we'd like to avoid using NAT at all because of nfs limitations
15:23:28 <caitlin56> That's where you need NAT rather than PNAT.
15:24:01 <bswartz> yeah the whole idea is to deliver a driver that works similarly to a VLAN-based multitenancy solution which requires full 2-way network connectivity
15:24:19 <bswartz> NAT causes problems
15:24:26 <caitlin56> The only way to avoid NAT is to have the management server run lots of network namespaces and essentially "route" to the correct process before doing IP routing.
15:24:37 <bswartz> if NAT is going to be required then I'm inclined to use the gateway-mediated multitenancy model
15:25:47 <scottda> Sorry I may have missed this, but why not a Neutron admin subnet?
15:26:02 <bswartz> scottda: you may need to explain what that is...
15:26:16 <aostapenko> We're using one service network and multiple subnets in it with different CIDRs
15:26:53 <scottda> Some feature, yet to be implemented, that allows an admin to setup neutron networking across tenants.
15:27:07 <scottda> and subnets
15:27:26 <caitlin_56> Sorry about the drop. Anyway, there are ultimately only two choices: you create a set of unique IP addresses visible to the manager (via NAT or PNAT), or you create slave processes which have unique network namespaces and hence can join the tenant network without reservation.
15:27:56 <bswartz> scottda: that sounds like exactly what we need
15:28:09 <bswartz> scottda: if it's not implemented though then we're still left without a solution
15:28:21 <csaba> bswartz: as for the gateway-mediated model, in the original concept the gateway was equal to the hypervisor, but for the sake of network separation we thought we'd need service VMs for the gateway role; as long as that view holds, I don't see how it is simpler in terms of network plumbing
15:28:25 <scottda> Well, if the ideal place for this is in Neutron, we could attempt to get it in Neutron.
15:29:02 <scottda> It might take a bit more time, but we will be living with this solution for a long time. Years.
15:29:05 <caitlin_56> But if our plan is to get Neutron improved first we will need an interim workaround.
15:29:08 <bswartz> I do think the ideal place is in neutron
15:29:21 <bswartz> Or nova
15:29:39 <bswartz> some way to create special vms that span tenants
15:30:02 <bswartz> scottda: are you in a position to get something like that implemented?
15:30:08 <scottda> I'm not certain, but my instinct is that it will be easier to get a change through Neutron than Nova. Just from a standpoint that Neutron is newer, and changing faster.
15:30:30 <scottda> I have teammates who work on Neutron. I can certainly fly the idea past them to get some feedback.
15:30:42 <caitlin_56> Linux namespaces provide the tools needed. Trying to bypass neutron to use them will be very tricky.
15:31:05 <scottda> They might have already discussed such a thing, or have a quick answer like "sure, sounds doable" or "no way, already thought of that" I really don't know.
15:31:27 <bswartz> caitlin_56: there's nothing technically complicated about allowing a service VM to be directly on a tenant's network
15:31:44 <bswartz> it's an administrative limitation that prevents us from doing what we want
15:32:04 <bswartz> possibly there are security implications
15:32:26 <caitlin_56> How do you know which tenant you are talking to?
15:32:53 <bswartz> there's one VM per tenant
15:33:12 <aostapenko> actually one vm per share-network
15:33:14 <bswartz> so you have a 1-to-1 mapping and you can track that
15:33:21 <caitlin_56> How do you talk with tenant X? You have to select the  namespace before the IP address is any good.
15:33:28 <bswartz> well yeah, thanks aostapenko
15:33:52 <bswartz> caitlin_56: it's no different than if tenant X had created a new VM himself
15:34:06 <bswartz> however the owner of that VM needs to be us
15:34:24 <bswartz> or the "service tenant" as we've been saying
15:34:53 <bswartz> there is the issue of how the manila-share service talks to these special VMs, but that's also a solvable problem
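A rough sketch of the bookkeeping that 1-to-1 mapping implies (illustrative only; in the real driver this state would live in the Manila database rather than a dict):

```python
# Sketch of the "one service VM per share-network" lookup; _create_service_vm
# is a placeholder for the generic driver's boot-and-plug logic.

service_vms = {}  # share_network_id -> service VM record


def _create_service_vm(share_network_id):
    # Placeholder: the driver would boot a VM in the service tenant here
    # and wire it to the given share-network.
    return {'instance_id': 'service-vm-for-%s' % share_network_id}


def get_service_vm(share_network_id):
    """Return the service VM serving this share-network, creating it on
    first use -- one VM per share-network, as aostapenko notes."""
    if share_network_id not in service_vms:
        service_vms[share_network_id] = _create_service_vm(share_network_id)
    return service_vms[share_network_id]
```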
15:35:35 <scottda> I'll start a dialogue with some Neutron devs about feasibility of an admin network. There's still the issue of the service VM.
15:36:05 <bswartz> scottda: thx -- I'll follow up w/ you after this meeting because I'm very interested in that approach
15:36:12 <caitlin_56> scottda: this is a general problem, not unique to NFS. So neutron should be willing to listen.
15:36:43 <scottda> Yes, I'll need to get some more info as I'm not as up to speed on Manila as I'd like to be
15:36:53 <bswartz> okay enough on that
15:36:57 <scottda> caitlin_56: agreed. I think I can sell it :)
15:37:00 <bswartz> let's jump to dev status
15:37:05 <bswartz> #topic dev status
15:37:20 <vponomaryov> i will update about it
15:37:32 <bswartz> vponomaryov: ty
15:37:40 <vponomaryov> Dev status:
15:37:40 <vponomaryov> 1) Generic driver - https://review.openstack.org/#/c/67182/
15:37:50 <vponomaryov> The lion's share of the work is done
15:37:50 <vponomaryov> TODO:
15:37:50 <vponomaryov> a) Finalize work on routers and routes between the VM in the service tenant and the VM in the user tenant
15:37:50 <vponomaryov> b) Write unit tests
15:38:17 <vponomaryov> Info, mentioned before:
15:38:17 <vponomaryov> https://docs.google.com/a/mirantis.com/drawings/d/1sDPO9ExTb3zn-GkPwbl1jiCZVl_3wTGVy6G8GcWifcc/edit
15:38:17 <vponomaryov> https://docs.google.com/a/mirantis.com/drawings/d/1Fw9RPUxUCh42VNk0smQiyCW2HGOGwxeWtdVHBB5J1Rw/edit
15:38:17 <vponomaryov> https://docs.google.com/a/mirantis.com/document/d/1WBjOq0GiejCcM1XKo7EmRBkOdfe4f5IU_Hw1ImPmDRU/edit
15:38:44 <vponomaryov> 2) NetApp Cmode driver - https://review.openstack.org/#/c/59100/
15:38:45 <vponomaryov> TODO: bugfixing and retesting
15:39:07 <vponomaryov> 3) We have an open item about share-networks that should be discussed and, if accepted, will take some time to implement.
15:39:34 <bswartz> vponomaryov:  what open issue?
15:39:38 <vponomaryov> Should I begin with open item?
15:39:44 <bswartz> vponomaryov: yes pls
15:39:57 <vponomaryov> Open item: With the current code, the Vserver (VM) will be created (with data from the share-network) only on the first share creation call.
15:40:13 <vponomaryov> it's true for both drivers
15:40:19 <vponomaryov> Problem:
15:40:19 <vponomaryov> The current implementation assumes creating the share and the Vserver for it together, and that can fail due to improper share-network data. So the user would like to use already activated share-networks and wait much less time.
15:40:19 <vponomaryov> Also, due to the mechanism of mapping a security service to a share network, we shouldn't try to create the Vserver (VM) on share-network creation.
15:40:45 <vponomaryov> Proposal:
15:40:45 <vponomaryov> We can have a command for the share-network like "initialize" or "run", to do it when we (the admin) are ready.
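A purely hypothetical sketch of what such an activation step might look like (none of these names exist in Manila; they only illustrate the proposal):

```python
# Hypothetical sketch -- no such call exists in Manila today; it only
# illustrates the proposed explicit activation step for a share-network.

def activate_share_network(db, driver, context, share_network_id):
    """Eagerly create the backing Vserver / service VM so that bad
    share-network data fails here rather than on the first share-create."""
    share_network = db.share_network_get(context, share_network_id)
    driver.setup_network(share_network)  # boots the Vserver / service VM
    db.share_network_update(context, share_network_id, {'status': 'ACTIVE'})
    return share_network
```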
15:41:06 <bswartz> vponomaryov: you're saying we don't error check the parameters used for share-network-create until long after the API call succeeds?
15:41:46 <vponomaryov> we check this data on creation of the Vserver, and we do this on the first share creation call
15:42:01 <bswartz> well here's the thing
15:42:26 <bswartz> even if we create a vserver before we need to, we may need to create another one at some point in the future
15:42:44 <bswartz> any share-create call can result in a new vserver getting created
15:43:03 <vponomaryov> yes, the user will have a choice between active share-networks
15:43:12 <vponomaryov> which one to use
15:43:17 <bswartz> now I agree that it's probably worthwhile to at least validate the params passed in one time at the beginning
15:43:45 <bswartz> but that won't prevent us from having issues if something changes behind our back
15:44:13 <vponomaryov> so, we should not only check, we should create the Vserver
15:44:41 <vponomaryov> if its creation is successful, we have an active share-network
15:44:42 <bswartz> anyone else have an opinion here?
15:44:58 <caitlin_56> bswartz: that is an unsolvable problem. The fact that you cannot launch a VM right now does not mean that the configuration is incorrect - only that it probably is.
15:44:59 <bswartz> I think I can live with creating an empty vserver, just to validate the parameters passed in
15:45:47 <vponomaryov> caitlin_56: +1, a lot of issues won't be caused by improper data itself
15:46:03 <vponomaryov> we should know we have a Vserver
15:46:11 <vponomaryov> and can create a share
15:46:28 <caitlin_56> You either allow "speculative" networks or you prevent some legitimate configuration from being accepted due to a temporary network glitch.
15:47:32 <bswartz> caitlin_56: are you against the proposal?
15:47:49 <bswartz> I think it's fine to do it -- the biggest downside is wasted resources
15:48:57 <bswartz> okay if noone opposed then I say go ahead with it vponomaryov
15:49:09 <bswartz> #topic open discussion
15:49:16 <vponomaryov> ok
15:49:23 <caitlin_56> bswartz: I'd raise an error, but allow an operator to override: "no, this config is correct even if you can't do it right now."
15:49:26 <bswartz> any other topics for this week?
15:49:55 <bswartz> caitlin_56: if it won't work right now the tenant can try again later when it will work
15:49:56 <vponomaryov> A couple with launchpad BPs
15:50:15 <bswartz> caitlin_56: there's little value in setting something up early if that thing is useless until a later time
15:51:33 <vponomaryov> bswartz: BP https://blueprints.launchpad.net/manila/+spec/join-tenant-network can be marked as implemented;
15:51:33 <vponomaryov> bugs remain according to its changes, plus the approved change for share-networks
15:51:34 <bswartz> vponomaryov: I'm here
15:51:55 <bswartz> I'm there rather
15:52:22 <vponomaryov> and second
15:52:33 <vponomaryov> If we drop XML support, BP https://blueprints.launchpad.net/manila/+spec/xml-support-to-client should be marked as invalid.
15:52:40 <vponomaryov> Do we drop XML support?
15:52:44 <bswartz> ack my browser is crashing
15:53:20 <bswartz> oh yes
15:53:39 <bswartz> it came to my attention this week that the Nova project is dropping support for XML in their v3 API
15:53:54 <bswartz> so I see no reason to support XML in any version of our API
15:54:04 <ndn9797_> fine with XML drop..
15:55:07 <vponomaryov> I have no questions
15:55:13 <vponomaryov> thanks
15:55:26 <bswartz> okay
15:55:29 <bswartz> thanks everyone
15:55:52 <aostapenko> thanks, bye
15:55:54 <bswartz> #endmeeting