Thursday, 2015-06-04

*** armax has quit IRC  01:00
*** openstack has joined #openstack-neutron-ovn  01:21
*** openstack has joined #openstack-neutron-ovn  01:36
*** openstack has quit IRC  01:52
*** openstack has joined #openstack-neutron-ovn  01:53
*** armax has joined #openstack-neutron-ovn  05:36
*** armax has quit IRC  08:11
<gsagie> russellb: i am looking at the security groups implementation, and we will probably need to hold all the ports and security rules in memory of the ML2  12:45
<gsagie> mech driver  12:45
<russellb> ok  12:46
<gsagie> because when a security rule changes, we need to update all relevant ports  12:46
<russellb> right, but i'd expect at that point you'd go pull the port info you need from the db  12:47
<russellb> i don't think we want to have a copy of the db in memory and have to be sure we keep it up to date properly  12:48
<gsagie> what do you mean? i know that a security rule changed, now i need to know all the ports that have that security group set as their security group  12:48
<gsagie> will need to iterate the entire db  12:48
<gsagie> unless you have another idea  12:49
<russellb> i haven't looked at the db api  12:51
<russellb> if there's not already a query written, i'd write a query that can give you all the ports with that security group  12:52
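A minimal sketch of the query being suggested here, assuming the Kilo-era Neutron models (Port in neutron.db.models_v2, SecurityGroupPortBinding in neutron.db.securitygroups_db) — not the driver's actual code:

```python
# Hedged sketch: look up all ports bound to a security group instead of
# caching the whole db in the mech driver. Model and attribute names
# assume the Kilo-era Neutron schema.
from neutron.db.models_v2 import Port
from neutron.db.securitygroups_db import SecurityGroupPortBinding

def ports_for_security_group(session, sg_id):
    """Return every Port bound to the given security group."""
    return (session.query(Port)
            .join(SecurityGroupPortBinding,
                  SecurityGroupPortBinding.port_id == Port.id)
            .filter(SecurityGroupPortBinding.security_group_id == sg_id)
            .all())
```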
<gsagie> ok will do :) or try to  13:27
<gsagie> haven't touched the db layer much, but guess it's time to learn  13:27
<russellb> :)  13:28
<russellb> i haven't touched it in neutron yet  13:28
<russellb> gsagie: are you familiar with provider networks in neutron?  i'm writing up something about provider networks and OVN, curious if you'd like to review  13:33
<gsagie> i can look if you want  13:34
<russellb> ok  13:34
<russellb> maybe i should just post to our gerrit  13:34
<russellb> yeah, i'll do that ...  13:34
<openstackgerrit> Russell Bryant proposed stackforge/networking-ovn: docs: separate design docs from general docs  https://review.openstack.org/188393  13:38
*** shettyg has joined #openstack-neutron-ovn  14:01
<mestery> russellb: Curious about you using the DCO in your recent commits. :)  14:17
<russellb> kind of a habit, and we clarified in the dev docs what it means, and that it's welcome in openstack commits since it doesn't hurt: http://docs.openstack.org/infra/manual/developers.html#using-signed-off-by  14:25
<shettyg> Question for OpenStack guys. Kubernetes has a feature called "services". To summarize the feature, it provides multiple public IP addresses that all point to a single private IP address. I am trying to do a 1:1 mapping between Kubernetes features and OpenStack features and was wondering whether it is possible.  14:28
<russellb> i guess that would be "floating IPs" in OpenStack  14:28
<russellb> you can allocate floating IPs, which are generally public, and have them mapped to existing ports  14:29
*** armax has joined #openstack-neutron-ovn  14:31
<openstackgerrit> Merged stackforge/networking-ovn: docs: separate design docs from general docs  https://review.openstack.org/188393  14:34
<shettyg> russellb: thanks. can one update a floating ip to point to a different IP address, via some openstack api?  14:36
<russellb> yes  14:36
<russellb> that's the "floating" part, it can be dynamically moved around to be mapped to different addresses  14:36
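As a concrete sketch of the above, using python-neutronclient; the credentials and UUIDs are placeholders, not values from this conversation:

```python
# Hedged sketch: allocate a floating IP on an external network, map it
# to one port, then "float" it to another. All IDs and credentials
# below are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(auth_url='http://controller:5000/v2.0',
                        username='demo', password='secret',
                        tenant_name='demo')

fip = neutron.create_floatingip({'floatingip': {
    'floating_network_id': 'EXT-NET-UUID',   # the public network
    'port_id': 'PORT-A-UUID',                # initial mapping
}})['floatingip']

# The "floating" part: re-associate the same IP with a different port.
neutron.update_floatingip(fip['id'],
                          {'floatingip': {'port_id': 'PORT-B-UUID'}})
```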
<gsagie> shettyg: do you happen to know if any work has started regarding the L3 design? or the distributed ovsdb?  14:37
<shettyg> gsagie: Work has started on distributed ovsdb. blp had done some work on that front using the Raft algorithm. Andy Zhou looks to have taken over from him to see how to take it to a practical conclusion, I think.  14:39
<shettyg> L3 design: There is talk on the approach, but I haven't asked questions to know the thoughts  14:40
<gsagie> shettyg: thanks  14:41
<russellb> would be great to see more of that on the ovs-dev list  15:27
<russellb> completely opaque to us  15:27
<shettyg> russellb: I think it is pretty much still in the 'thought' phase, or the right-approach phase. I am sure people will post it to the dev mailing list once there is something concrete.  15:43
<shettyg> Btw, I am looking at Kubernetes integration with OpenStack OVN. If you are familiar with Kubernetes, would you be interested in designing it?  15:45
<russellb> I'm not familiar with it  16:12
<russellb> not enough to design that  16:12
<russellb> i know what it is, but that's about it :)  16:12
<shettyg> I have only started looking at it. I can write a github summary sometime today. Providing per-container networking for Kubernetes containers with OVN is easier to do. But the additional features like floating ips, load balancers, etc. are more OpenStack-involved and will likely need OpenStack experts involved.  16:16
<russellb> definitely happy to contribute!  16:18
<russellb> probably others that would be interested too, i'll let you know  16:20
*** marun has joined #openstack-neutron-ovn  16:22
<marun> russellb: hi!  16:22
<russellb> shettyg: marun (just joined) is much more experienced with Neutron than me, and has also been looking at kubernetes.  I would definitely ping him with your ideas  16:22
<russellb> marun: backlog here http://eavesdrop.openstack.org/irclogs/%23openstack-neutron-ovn/%23openstack-neutron-ovn.2015-06-04.log.html  16:24
* marun looking  16:24
<russellb> marun: shettyg was also the one who drove the OVN design that led to http://networking-ovn.readthedocs.org/en/latest/containers.html  16:25
<shettyg> marun: I don't know how familiar you are with the OVN+OpenStack+containers design. To summarize, OVN is capable of providing per-container networking for containers running in VMs via neutron.  16:25
<marun> shettyg: I'm going to review that link real quick.  16:25
<russellb> marun: which i implemented for now using a data bag ;-p  16:25
<marun> heh  16:26
<russellb> but i think the VLAN-aware VMs proposal can be made to fit this use case  16:26
<marun> russellb: hmmm  16:27
<marun> russellb: I'm not sure I understand the use case  16:28
<russellb> k, i'll try to clarify  16:28
<russellb> start with an openstack cloud using OVN as the neutron backend  16:28
<marun> russellb: I remember you sent me a ml link, I didn't get a chance to read it in detail.  Should I do so to spare you some trouble?  16:28
<russellb> i can boot VMs on it, i have tenant networks, the usual  16:28
<russellb> i don't mind trying a tl;dr  16:29
<marun> ok :)  16:29
<russellb> of course, you can also run containers in those VMs  16:29
<russellb> it's also common to create overlay networks among containers (flannel, etc)  16:29
<russellb> OVN can be used inside those VMs to do the same thing  16:29
<russellb> but there's a possible optimization here  16:30
<russellb> what we're proposing is that you tell Neutron about the networks you want for your containers  16:30
<russellb> and let them be implemented by OVN providing networking for Neutron  16:30
<russellb> which should provide better performance  16:31
<russellb> and digging into the details of how that works, you tell neutron that traffic from each container will be tagged with a VLAN tag  16:31
<russellb> so the hypervisor can differentiate the VM's own traffic from each container's traffic  16:31
<russellb> that's really not a very good tl;dr, because it's still kind of long  16:32
<russellb> https://github.com/openvswitch/ovs/blob/ovn/ovn/CONTAINERS.OpenStack.md  16:32
<marun> So the idea is not to have l2 segmentation at both the vm level and the container level  16:33
<marun> since there would be performance and complexity costs  16:33
<russellb> logically you still have that segmentation  16:33
<russellb> just that we let the underlying network implement it  16:34
<russellb> instead of as another layer of overlay  16:34
<marun> I've almost got it (I'm slow, apologies)  16:35
<marun> so logically there is nesting  16:35
<russellb> no worries!  16:35
<marun> russellb: coming at it from the other side, how would kub communicate with neutron?  16:37
* russellb has no idea  16:38
<russellb> i haven't thought that far  16:38
<russellb> shettyg was looking at that though  16:38
<marun> russellb: at least today, the kubelet process would be in the vm and that would be where port creation would originate from  16:38
<russellb> i've only really thought it through from a connectivity point of view  16:38
<russellb> ok, so i guess the kubelet would need some neutron credentials and know where the API is  16:39
<russellb> also assumes the neutron API is accessible from the VM  16:39
<marun> that's reasonable I think  16:39
<marun> I'm less sure how the 'vif plug' would work  16:39
<russellb> so for that, we were assuming the VM would run OVS  16:40
<russellb> and each container would get hooked up to OVS  16:40
<russellb> and have its traffic tagged with a VLAN id  16:40
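A rough sketch of that in-VM wiring; the bridge and interface names are illustrative, and this is the shape of the setup rather than the project's actual code:

```python
# Hedged sketch: inside the VM, each container's interface is attached
# to an OVS bridge with a VLAN tag, so the hypervisor can tell each
# container's traffic apart from the VM's own. Names are illustrative.
import subprocess

def plug_container(bridge, iface, vlan_tag):
    """Attach a container's interface to OVS, tagging its traffic."""
    subprocess.check_call(
        ['ovs-vsctl', 'add-port', bridge, iface, 'tag=%d' % vlan_tag])

plug_container('br-int', 'veth-c1', 42)  # container 1 -> VLAN 42
plug_container('br-int', 'veth-c2', 43)  # container 2 -> VLAN 43
```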
<marun> that's a bit messier for a generic neutron solution  16:40
<marun> but maybe I'm overthinking  16:40
<russellb> well, yeah  16:40
<russellb> this concept doesn't exist in neutron today  16:40
<russellb> but there's a proposal for "VLAN aware VMs"  16:40
<russellb> where you define a port, and then create sub-ports  16:41
<russellb> which basically matches what we're trying to achieve  16:41
<russellb> i haven't really followed up on the spec yet, i just found out it existed at summit  16:41
<russellb> but in general, we need to be able to create child ports in neutron  16:41
<marun> right  16:41
<marun> It is good food for thought  16:41
<russellb> so the VM has to know its own port  16:42
<russellb> and then it can create ports for containers that run in the VM, listing the VM's port as the parent  16:42
<marun> When I've thought about vm-hosted kub with neutron-managed networking, I was confused about how we would allow something like the ovs agent on the vm to talk to the server via rpc  16:42
<marun> But if the vm simply has a consistent ovs setup, then the underlying neutron implementation is completely separate  16:43
<marun> That's a good approach, I think.  16:43
<russellb> awesome, that's what we were hoping  16:43
<russellb> some of the complexity is left to the VM to implement  16:43
<russellb> but not sure how else to do it  16:43
<marun> someone running kub wouldn't care if ovs was set up locally so long as they could use the cloud networking backend  16:43
<shettyg> marun: I was thinking of it this way. Today in each minion, you can place a network plugin. So when a pod gets created, the network plugin gets called. But other than the pod id, the network plugin does not have any context, so it will contact a daemon running in the kubernetes master with an unused vlan in the minion + the vif id of the minion. The daemon in the master queries the api server to get the networking context and makes a call to Neutron to create the port. N  16:43
<marun> so as you say, the trick is to enable logical child ports so that the vm vlans could be handled by the compute host properly  16:44
<russellb> marun: yep, and for the OVN ML2 mech driver today, we do it with binding:profile ... which made me feel dirty  16:44
<russellb> binding:profile includes the parent and vlan tag  16:45
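Roughly, per the OVN containers doc linked earlier, that means the container's port is created with a binding:profile naming the VM's port as parent plus the VLAN tag; a sketch, with placeholder IDs and credentials:

```python
# Hedged sketch: create the container's Neutron port with a
# binding:profile carrying the parent (VM) port and the VLAN tag, the
# shape the OVN ML2 mech driver consumes today. IDs are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(auth_url='http://controller:5000/v2.0',
                        username='demo', password='secret',
                        tenant_name='demo')

container_port = neutron.create_port({'port': {
    'network_id': 'CONTAINER-NET-UUID',
    'binding:profile': {'parent_name': 'VM-PORT-UUID',  # the VM's port
                        'tag': 42},                     # container VLAN
}})['port']
```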
<marun> shettyg: hmmm  16:45
<shettyg> The USP is that you can now have a kubernetes pod talk to a VM in OpenStack for a service. You can also use OpenStack's load balancers. The bigger USP is that you can now apply security policies to your containers in the hypervisors.  16:45
<marun> USP?  16:46
<shettyg> USP = unique selling point.  16:47
<marun> shettyg: haven't heard that one before :)  16:47
<russellb> i haven't either  16:48
<marun> shettyg: I have a poc for kub/neutron integration that just uses hard-coded values for network etc. for now  16:48
<marun> shettyg: but yeah, it makes sense to have it looked up via a centralized service  16:49
<shettyg> With a centralized service, you only need to store neutron credentials in one place.  16:49
<shettyg> But if the master goes down, you are screwed. Then again, as I see it, in the case of Kubernetes the master going down is a problem anyway  16:50
<shettyg> marun: What about Kubernetes services? They provide a public ip for a pod. And when the pod goes down and gets created on a different host, that public ip should still reach the pod. Are you using Neutron floating ips for that?  16:52
<marun> shettyg: floating ips are a bit of a mess  16:56
<marun> shettyg: at least by default  16:56
<shettyg> So what is your approach towards the Kubernetes services concept?  16:58
<marun> shettyg: I haven't thought that through, to be honest.  16:59
<marun> shettyg: In the scheme we're discussing, neutron would be responsible for private address assignment?  17:02
<shettyg> marun: yes  17:03
<marun> shettyg: Is there a reason not to have kub 'public ips' be neutron private ips?  17:03
<marun> shettyg: and then associate a floating ip with the kub service ips?  17:03
<marun> shettyg: that wouldn't require modifying any neutron abstraction  17:03
<shettyg> marun: I see what you mean. Would that not mean that traffic going to a pod will always need to go through the kube master?  17:04
<marun> shettyg: If a pod moves to a different host, does it retain the same ip?  17:04
<marun> shettyg: It would mean that pod ips would be private by default  17:05
<marun> shettyg: and service ips could optionally be made public by associating a floating ip  17:06
<shettyg> marun: It need not retain the same ip. If it gets a different ip, the idea was that we need an api in Neutron that will now point the public ip to that.  17:06
<marun> shettyg: change the floating ip association, right  17:07
<marun> shettyg: that already exists  17:07
<marun> shettyg: So it would probably be a matter of watching for kub events that signified a pod move and updating the networking accordingly  17:07
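A purely hypothetical sketch of that watcher; the event shape and the lookup helpers are invented for illustration, and only update_floatingip is a real Neutron call:

```python
# Hypothetical sketch: react to pod-move events by re-pointing the
# service's floating IP at the pod's new Neutron port. 'events',
# 'fip_for_service', and 'port_for_pod' are assumed helpers, not a
# real Kubernetes API; neutron.update_floatingip() is the real call.
def sync_service_fips(neutron, events, fip_for_service, port_for_pod):
    for ev in events:                        # e.g. from a k8s watch loop
        if ev['type'] == 'pod-rescheduled':  # illustrative event shape
            neutron.update_floatingip(
                fip_for_service(ev['service']),
                {'floatingip': {'port_id': port_for_pod(ev['pod'])}})
```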
<marun> shettyg: as I understand it, there isn't much desire to integrate this capability directly into kub itself  17:08
<shettyg> I see what you mean.  17:10
<shettyg> I like your approach too. I will have to think about it.  17:10
<marun> Cool, me too.  17:11
<marun> If you're interested I'd suggest touching base periodically.  There aren't many people focused on this problem and I appreciate having other perspectives.  17:12
*** marun has quit IRC  17:12
<shettyg> marun: me too. Always good to talk things out loud, as you get to know other possibilities.  17:12
*** marun has joined #openstack-neutron-ovn  17:13
<marun> shettyg: I'm having trouble wrapping my head around how to support both vm and non-vm deployed scenarios in a reasonable way.  17:14
<marun> shettyg: It's not clear to me which use case is most important.  17:14
<marun> shettyg: In either case, there is the suggestion that segmentation is desirable, but I find that pretty confusing given that kub doesn't really have the concept of 'users'.  17:16
<marun> Not in the way openstack does, at least.  17:16
<marun> Which for me calls into question how segmentation would be managed.  17:16
<openstackgerrit> Russell Bryant proposed stackforge/networking-ovn: Document ideas for supporting provider networks  https://review.openstack.org/188519  17:17
<marun> shettyg: Have you seen calico's coreosfest presentation?  17:17
<shettyg> marun: No, I haven't. Is it interesting?  17:17
<marun> shettyg: I think so: https://www.youtube.com/watch?list=PLlh6TqkU8kg8Ld0Zu1aRWATiqBkxseZ9g&v=44wOK9ObAzk  17:17
<marun> shettyg: It gets away from l2 entirely, but I think it's interesting to think about providing isolation with edge-only filtering.  17:18
<marun> shettyg: Even in the case of l2, though, their suggestion of augmenting the kub pod config to support 'intent' could be useful.  17:18
<shettyg> marun: listening  17:19
<marun> shettyg: Such that it would allow app developers to indicate who should be able to talk to whom without necessarily specifying the implementation  17:19
<marun> shettyg: such that either l2 segmentation or l3 isolation could accomplish the configured state  17:20
<marun> shettyg: I'm not sure the k8s team would be amenable to supporting that kind of configuration, though.  17:20
<marun> shettyg: since my understanding is they want core kub to closely match what they want to support on gke  17:21
<shettyg> marun: In OVN, we will have a distributed firewall too, wherein the firewall follows the container interface.  17:21
<marun> shettyg: does that require that the ovs firewall be ready?  17:21
<shettyg> marun: yes.  17:22
<marun> shettyg: cool  17:23
<shettyg> I am not able to digest the idea that only developers add the firewall rules via the pod json, though. I would imagine that it would also be the cloud manager's job.  17:24
<marun> shettyg: I'm sure there would have to be a review process for config that was going to be released to production, and non-production would be limited by default.  17:25
<marun> shettyg: But I think my desire to trust developers by default may be at odds with how some organizations work.  17:26
<marun> (I'm just glad I don't work at those places)  17:26
<russellb> +1  17:26
<russellb> and it's kind of anti-cloud  17:26
<marun> russellb: I'm starting to realize just how limited we are in the networking world by established practice and convention.  17:27
<russellb> don't crush my spirit so soon  17:27
<marun> russellb: hah  17:27
<marun> russellb: ovn/ovs has the advantage of using l2 precepts that are generally accepted  17:27
<marun> russellb: it's not revolutionary in any sense of the word, it's basically virtualized l2  17:28
<marun> russellb: in the same way that virtualized computers are way easier to sell than something like containers, virtualized l2 is an easy sell  17:28
* russellb nods  17:28
<marun> russellb: but when people talk about ditching l2 because we can do more interesting things at the l3-only layer (calico, contrail, facebook, etc), I don't think it's going to become commonplace anytime soon  17:29
<marun> russellb: the people in charge will have to retire first  17:29
<marun> (or the org has to have competitive pressures and talent to justify the radical moves)  17:30
<russellb> and then we've got NFV folks wanting to use this stuff for non-IP traffic  17:31
<marun> russellb: yeah, that will definitely persist  17:31
<russellb> so at least the virtual L2 solutions serve them too  17:32
<marun> russellb: for sure. legacy business will continue for some time  17:32
<russellb> but like you said, just part of how the existing world limits things  17:32
<russellb> interesting to think about  17:32
<marun> russellb: I'm a software engineer, though, not a network engineer.  I'm a bit frustrated at how slowly things move in the ops-y world.  17:33
<russellb> same  17:33
<marun> russellb, shettyg: interesting conversation, thank you.  I'm off to lunch!  17:34
<russellb> thanks marun!  hope we can stay in touch on all of this  17:35
<marun> russellb: for sure :)  17:35
*** marun has quit IRC  17:39
*** yapeng has joined #openstack-neutron-ovn  17:47
*** yapeng has quit IRC  18:39
*** ajo has quit IRC  18:47
*** hitalia has joined #openstack-neutron-ovn  18:58
*** hitalia has quit IRC  19:00
*** marun has joined #openstack-neutron-ovn  20:00
*** shettyg has quit IRC  23:53
