15:08:22 <Swami_> #startmeeting Distributed-virtual-router
15:08:23 <openstack> Meeting started Wed Dec 18 15:08:22 2013 UTC and is due to finish in 60 minutes.  The chair is Swami_. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:08:24 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:08:26 <openstack> The meeting name has been set to 'distributed_virtual_router'
15:08:41 <Swami_> #info Meeting-agenda
15:09:03 <Swami_> I want to go over the Google doc for the Distributed Router
15:09:27 <Swami_> I am not sure if any of you got a chance to review the doc, but I have been updating it constantly.
15:10:13 <Swami_> #link https://docs.google.com/document/d/1iXMAyVMf42FTahExmGdYNGOBFyeA4e74sAO3pvr_RjA/edit
15:11:26 <Swami_> jamiec: Did you get a chance to review the doc
15:11:39 <Swami_> safchain: did you get a chance to review the doc
15:11:59 <safchain> Swami_, sorry, I didn't but I will for sure
15:12:02 <jamiec_> yes, I have had a look through earlier today
15:12:52 <Swami_> So I think we are close to finalizing the requirements for the DVR. This design doc addresses both East-West and North-South traffic.
15:13:38 <Swami_> In terms of providing the services, I have been having discussions with Sumit and Nachi on what services should be distributed and what should be centralized.
15:14:27 <Swami_> Right now we would be supporting a hybrid mode where tenants can have a logical network node and regular compute nodes with distributed routers.
15:15:11 <Swami_> There is a slight difference between the current gateway model and North-South gateway model in our proposal.
15:15:33 <jamiec_> That would make sense to keep the dvr implementation light (hybrid)
15:16:26 <jamiec_> so, just floating ip and gateway distributed?
15:16:28 <Swami_> In order to reduce the number of IPs consumed by the External Gateway residing on the Compute Nodes, we have made a decision to split the gateway functionality from the router.
15:17:10 <Swami_> This would not affect the existing gateway; in addition to it, we will have an extra field on the router object holding the "id" of the Distributed-router-gateway.
15:18:06 <Swami_> jamiec: yes, just the floating IP and gateway functionality will be distributed. If required, Firewall as a Service can also be distributed; I checked with Sumit and he mentioned that there should not be any issue with that.
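A minimal sketch of the extra router attribute mentioned above, assuming it would be exposed as a Neutron API extension; the attribute name distributed_gateway_id, the defaults, and the validator are illustrative only, not the actual proposal:

    # Hypothetical extension attribute adding a distributed-router-gateway
    # reference to the router resource; all names here are illustrative.
    EXTENDED_ATTRIBUTES_2_0 = {
        'routers': {
            'distributed_gateway_id': {
                'allow_post': True,    # may be set when the router is created
                'allow_put': True,     # or attached to an existing router later
                'default': None,       # None -> share the node-wide gateway
                'is_visible': True,
                'validate': {'type:uuid_or_none': None},
            }
        }
    }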
15:18:29 <jamiec_> that's reducing IPs consumed for ext gateway IP on internal networks?
15:19:06 <Swami_> Reducing the IPs for the external gateway (public IPs).
15:20:02 <Swami_> We still have a flexible option: if a tenant wants their own gateway, they can add one. If not, they can share it.
15:20:18 <safchain> Swami_, does it mean that the routers for the ext. gateway are not fully distributed, I mean on each compute node?
15:20:56 <Swami_> safchain: It is fully distributed.
15:21:19 <Swami_> On Each compute node, there will be at least one "distributed-router-gateway".
15:21:33 <jamiec_> ok. So, for internal networks, the distributed gw IP can be shared across all compute nodes and handled with a local ARP response from OVS?
15:21:59 <Swami_> If each tenant wants to have their own Distributed-router-gateway, that is also possible.
15:22:34 <Swami_> We also did a proof of concept with overlapping IPs on the same compute node still going through a single Distributed-router-gateway on that compute node.
15:23:32 <hemanthravi> distr-router-gw is realized only through flows in OVS or is there a namespace for fwding on each compute node?
15:23:59 <Swami_> safchain: To show you how the Distributed-router-gateways are distributed, you can take a look at the picture in the google doc, at the end of the doc, "Bridge between network node and compute node".
15:24:37 <Swami_> That shows tenant A using the network node and distributed router combo, and tenant B using the distributed router feature on all compute nodes.
15:25:16 <safchain> Swami_, Yes I'm doing it
15:25:17 <Swami_> hemanthravi: There is a namespace for forwarding on each compute node; it is exactly like the legacy gw functionality, but we have split it apart.
15:26:05 <hemanthravi> Is the namespace in the data-path for the east-west traffic too?
15:27:15 <Swami_> hemanthravi: For east-west traffic there is no requirement for the "distributed-router-gateway". All we need is to populate some rules in the OVS flow table to route the traffic and prevent ARPs from going out, so our DVR agent will populate the router ARP table.
15:27:22 <hemanthravi> swami: when you get a chance can you label the namespaces if they are shown in the diagram?
15:27:36 <hemanthravi> swami: got it
15:27:42 <Swami_> hemanthravi: yes I will do.
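A minimal sketch of the east-west ARP handling described above, assuming the DVR agent shells out to ip and ovs-ofctl; the namespace, device, bridge names, addresses, and the exact flow match are illustrative assumptions, not the final design:

    import subprocess

    def seed_router_arp(namespace, qr_device, vm_ip, vm_mac):
        """Pre-populate a permanent ARP entry in the router namespace so the
        router never has to ARP across the overlay for this VM (illustrative)."""
        subprocess.check_call([
            'ip', 'netns', 'exec', namespace,
            'ip', 'neigh', 'replace', vm_ip,
            'lladdr', vm_mac, 'dev', qr_device, 'nud', 'permanent'])

    def suppress_gateway_arp(bridge, gateway_ip):
        """Drop ARP requests for the locally answered gateway IP so they do
        not leave the node through the tunnel bridge (illustrative match)."""
        flow = 'table=0,priority=100,arp,arp_tpa=%s,actions=drop' % gateway_ip
        subprocess.check_call(['ovs-ofctl', 'add-flow', bridge, flow])

    # Illustrative values only.
    seed_router_arp('qrouter-1234', 'qr-abcd', '10.1.0.5', 'fa:16:3e:11:22:33')
    suppress_gateway_arp('br-tun', '10.1.0.1')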
15:28:05 <safchain> Swami_, You said that you reduced the number of IPs consumed by the ext. gw; could two compute nodes per tenant share the same ext. gw?
15:28:29 <Swami_> Just for other people's info, currently the "Internal Router" (IR) and the "EGA" each run in their own namespace.
15:29:22 <Swami_> safchain: Sorry if I have misstated it; each compute node will have at least one "ext-gw". Any two tenants residing on the compute node can share the same "ext-gw".
15:29:39 <safchain> Swami_, ok
15:30:03 <safchain> Swami_, just to be sure, one ip per compute node at least
15:30:19 <hemanthravi> swami: won't this create an explosion of ext-net addresses required?
15:30:22 <Swami_> safchain: Yes you are right, one ip per compute at least.
15:30:47 <safchain> Swami_, ok
15:31:10 <Swami_> hemanthravi: I don't think so, there will be only one external-net-id per ext-gw.
15:31:38 <Swami_> If a tenant requires a different external-net-id, then they will have to create an additional ext-gw to connect to that ext-net.
15:31:49 <hemanthravi> will each compute node need an address on the external net?
15:32:30 <Swami_> hemanthravi: Yes.
15:33:24 <hemanthravi> swami: is the IR's func to be a proxy ARP for east-west router IP?
15:33:37 <hemanthravi> I meant the IR namespace
15:34:03 <Swami_> hemanthravi: if you look at the db model, you can see that each GW will have a different port on each compute node, so each port will have its own IP address and MAC unique to its compute node.
15:34:58 <safchain> Swami_, would the firewall (FWaaS) be on the distributed-router-gateway?
15:35:25 <Swami_> hemanthravi: Yes, it would act as the proxy ARP. Also, any east-west traffic that moves out of the compute node will have a unique MAC as the source address. This unique MAC is set by a br-int rule when the packet leaves br-int.
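A minimal sketch of that br-int source-MAC rewrite, assuming one unique MAC per compute node; the MAC values, priority, and patch-port number are illustrative assumptions:

    import subprocess

    def add_dvr_src_mac_rewrite(bridge, router_port_mac, node_unique_mac, patch_ofport):
        """Rewrite the router interface MAC to the per-node unique MAC on
        east-west traffic leaving br-int toward the tunnel bridge."""
        flow = ('priority=20,dl_src=%s,actions=mod_dl_src:%s,output:%s'
                % (router_port_mac, node_unique_mac, patch_ofport))
        subprocess.check_call(['ovs-ofctl', 'add-flow', bridge, flow])

    # Illustrative values: router port MAC, node-unique MAC, patch port to br-tun.
    add_dvr_src_mac_rewrite('br-int', 'fa:16:3e:aa:bb:cc', 'fa:16:3f:00:00:01', 1)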
15:36:09 <Swami_> safchain: I discussed the firewall with Sumit and he seems comfortable distributing it, but I still need to take a look at the firewall and see if any change is required.
15:36:44 <Swami_> Right now the firewall agent is integrated with the L3-agent, so if we are creating a "dvr-agent", then we might also include the firewall agent as part of the dvr-agent.
15:39:00 <Swami_> Do you guys have any other questions
15:39:11 <hemanthravi> Is the floating-ip function handled in the EGA namespace?
15:39:28 <Swami_> hemanthravi: yes it is handled in EGA.
15:40:22 <jamiec_> Swami_, in the case of a tenant using only basic distributed L3 (no advanced service) is a network node still needed for DHCP?
15:41:15 <Swami_> jamiec: Good question. Our current assumption is yes, it is still required; since we were not planning to distribute the DHCP service, a network node is needed for it.
15:42:16 <jamiec_> understood.
15:43:23 <Swami_> jamiec: I think we may need to address the DHCP distribution after we complete our existing DVR work.
15:43:52 <Swami_> jamiec: if you have any thoughts on also distributing DHCP, it would be great if you could send them to me.
15:44:07 <jamiec_> yes, it is lightweight - but the current implementations aren't well suited to distributing.
15:44:08 <hemanthravi> does the floating-ip range have to be statically distributed among the EGAs?
15:44:50 <jamiec_> swami, yes, that's worth thinking about later (DHCP)
15:45:06 <Swami_> hemanthravi: I don't think you need to distribute the floating-ip range among the EGAs. You can still have a single pool and can assign from that pool.
15:46:28 <hemanthravi> thx, so the pool is managed by the DB, as on the network node
15:46:44 <Swami_> hemanthravi: yes that is our thought right now.
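A small usage sketch of the shared-pool idea above: the floating IP is requested against the external network only, with nothing in the call tied to a particular compute node or EGA. The endpoint, credentials, and EXT_NET_ID below are placeholders:

    # Floating IPs keep coming from one shared pool on the external network;
    # the allocation is handled by the DB, not per EGA.
    from neutronclient.v2_0 import client

    EXT_NET_ID = '<external-network-uuid>'  # placeholder

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')
    fip = neutron.create_floatingip(
        {'floatingip': {'floating_network_id': EXT_NET_ID}})
    print(fip['floatingip']['floating_ip_address'])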
15:47:54 <Swami_> #action All to go through the Google doc and provide feedback
15:48:33 <Swami_> Also, if possible, we can have a meeting (phone call) to go over the doc and the detailed items, if you are ok with that.
15:48:40 <safchain> for traffic from a VM to the ext. net, does the traffic go directly from the VM through the distributed-router-gateway?
15:48:47 <hemanthravi> swami: added some comments but most of them are resolved today
15:49:31 <safchain> no traffic through the IR in that case ?
15:49:48 <Swami_> safchain: The traffic from the VM hits br-int, then reaches the "IR"; the IR makes a routing decision and, since the destination is an outside network, routes the traffic to the EG, so the traffic flows through the EG
15:50:04 <safchain> ok, thx
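A minimal sketch of that north-south hop, assuming the IR and EG namespaces are joined by an internal link; the namespace names, link address, and external interface/IP are illustrative assumptions, not the documented design:

    import subprocess

    def run(*cmd):
        subprocess.check_call(cmd)

    IR_NS, EG_NS = 'qrouter-1234', 'dvr-gw-5678'  # illustrative names

    # In the IR, point the default route at the EG over the internal link so
    # traffic for outside networks is handed to the EG namespace.
    run('ip', 'netns', 'exec', IR_NS,
        'ip', 'route', 'replace', 'default', 'via', '169.254.31.29')

    # In the EG, SNAT the traffic out of the external interface (qg-*),
    # much like the legacy gateway does.
    run('ip', 'netns', 'exec', EG_NS,
        'iptables', '-t', 'nat', '-A', 'POSTROUTING',
        '-o', 'qg-ext0', '-j', 'SNAT', '--to-source', '172.24.4.10')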
15:51:02 <Swami_> safchain: If it is East-West, the router forwards the traffic to br-int; br-int then replaces the source MAC with the unique MAC assigned to br-int on that compute node and forwards the packet to br-tun.
15:51:21 <safchain> I will have some questions about the overlapping, I will add them to the gdoc
15:52:17 <hemanthravi> Is it possible to make the IR and EG be the same namespace like in the network node
15:52:19 <Swami_> safchain: Yes, go ahead and add your comments. Right now, with a single EG and overlapping IPs, we are using Linux conntrack utils in the EG namespace to distinguish which interface the traffic came in on.
15:53:20 <Swami_> hemanthravi: Yes, we can do that and it is simpler, but then each IR would consume an EG and you might need multiple IPs per compute node; that was the reason we moved to a shared EG model.
15:53:22 <safchain> ok, thx
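A minimal sketch of the conntrack approach Swami_ mentioned for overlapping IPs behind a single EG, assuming conntrack zones keyed on the tenant-facing interface; the namespace, device names, and zone numbers are illustrative:

    import subprocess

    def assign_ct_zone(namespace, internal_dev, zone_id):
        """Tag traffic arriving on a given internal interface of the shared EG
        namespace with its own conntrack zone, so tenants with overlapping
        IP ranges keep separate connection-tracking state."""
        subprocess.check_call([
            'ip', 'netns', 'exec', namespace,
            'iptables', '-t', 'raw', '-A', 'PREROUTING',
            '-i', internal_dev, '-j', 'CT', '--zone', str(zone_id)])

    # Illustrative: one zone per tenant-facing interface in the EG namespace.
    assign_ct_zone('dvr-gw-5678', 'qr-tenantA', 1)
    assign_ct_zone('dvr-gw-5678', 'qr-tenantB', 2)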
15:54:12 <Swami_> ok guys.
15:54:22 <Swami_> we will meet next week.
15:54:45 <jamiec_> thanks Swami, I'll also add any comments to gdoc later this morning..
15:54:50 <hemanthravi> thx
15:54:51 <Swami_> Meanwhile, if we want to have a voice call to discuss, we can have one.
15:54:56 <Swami_> What do you guys think?
15:55:15 <Swami_> Next week, can we have a voice call, so that everyone can provide their thoughts and feedback?
15:55:22 <hemanthravi> +1 for the voice call
15:55:26 <jamiec_> works for me
15:55:35 <safchain> ok for me
15:55:53 <Swami_> Ok, then I will send a meeting invite to all. Will the same time work for everyone?
15:56:23 <Swami_> Next week we have a holiday here, so I am fine with any timing.
15:56:55 <safchain> time is fine for me
15:56:58 <Swami_> #action Swami to send a meeting invite with bridge numbers to the sub-team.
15:57:05 <hemanthravi> works for me
15:57:20 <jamiec_> also ok
15:57:41 <Swami_> #action Swami will also update the Google doc with more flow diagrams to show the L2-L3 communication.
15:57:49 <Swami_> Thanks guys for your time.
15:57:57 <hemanthravi> swami: thx
15:58:01 <Swami_> Talk to you all next week same time.
15:58:05 <jamiec_> thank you
15:58:05 <Swami_> bye for now.
15:58:09 <safchain> Swami_, thanks for the meeting, bye all, talk next week
15:58:15 <Swami_> #endmeeting