14:02:11 <apuimedo> #startmeeting kuryr
14:02:12 <openstack> Meeting started Mon Nov  7 14:02:11 2016 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:13 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:16 <openstack> The meeting name has been set to 'kuryr'
14:02:27 <apuimedo> Hello everybody and welcome to another Kuryr weekly IRC meeting
14:02:35 <apuimedo> who's here to chat?
14:02:40 <vikasc> o/
14:02:44 <garyloug> o/
14:02:48 <lmdaly> o/
14:02:48 <limao_> o/
14:02:50 <yedongcan> o/
14:02:56 <ivc_> o/
14:03:04 * pc_m lurking
14:03:24 <Drago1> o/
14:03:35 <apuimedo> Welcome everybody!
14:03:45 <ltomasbo> o/
14:03:52 <apuimedo> #topic kuryr-lib
14:04:21 <apuimedo> Today I'll be pushing the new version of the kuryr-lib CNI driver
14:04:48 <apuimedo> still missing the OVO binding :/
14:05:39 <mchiappero> o/
14:05:55 <apuimedo> anything else on kuryr-lib?
14:06:37 <apuimedo> #topic kuryr-libnetwork
14:07:12 <apuimedo> #info mchiappero and lmdaly reported a bug in how we handle the creation and deletion of interfaces in libnetwork that impacts us especially when working in container-in-vm mode
14:07:21 <irenab> sorry for joining late
14:07:54 <apuimedo> The issue is about when we delete and create the virtual devices for the container
14:08:32 <apuimedo> they report that libnetwork expects deletion on deleteendpoint, whereas we were doing it in 'leave'
14:08:32 <irenab> apuimedo, link?
14:08:35 <apuimedo> sure
14:08:55 <mchiappero> maybe bug is a misleading term
14:09:20 <mchiappero> if you have good contacts with any folk in docker let's check with them
14:09:23 <apuimedo> darn, can't find the link now
14:09:34 <apuimedo> mchiappero: banix does, but he didn't join today
14:09:42 <mchiappero> https://github.com/docker/libnetwork/issues/1520
14:09:45 <limao_> https://bugs.launchpad.net/neutron/+bug/1639186
14:09:45 <openstack> Launchpad bug 1639186 in neutron "qos max bandwidth rules not working for neutron trunk ports" [Low,Confirmed] - Assigned to Luis Tomas Bolivar (ltomasbo)
14:09:58 <apuimedo> limao_: that's a different one :P
14:10:04 <apuimedo> plenty of bugs to go around
14:10:06 <apuimedo> :P
14:10:08 <apuimedo> we
14:10:09 <limao_> yeah..
14:10:11 <apuimedo> we'll get to it
14:10:14 <limao_> just found it..
14:10:18 <apuimedo> anyway. I'll find it later
14:10:25 <mchiappero> I tried once again to ping someone in #docker-network
14:10:31 <mchiappero> without success
14:10:45 <mchiappero> tried pinging mrjana
14:10:47 <mchiappero> nothing
14:10:56 <apuimedo> the fact of the matter is that libnetwork attempts to move the device back after the 'leave', which is good because we are supposed to delete the ipvlan/macvlan devices
14:11:08 <apuimedo> and for that they have to be in the host namespace (so we find them)
14:11:14 <apuimedo> with veths we do not have the problem
14:11:27 <apuimedo> due to the fact that if we delete the host side veth
14:11:34 <apuimedo> the other one gets removed as well
14:11:44 <apuimedo> so we got away with doing things earlier
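(A minimal sketch of the deletion-ordering point above, assuming pyroute2 and hypothetical device/netns names; not the actual kuryr code. Deleting the host-side veth also removes its peer in the container, while an ipvlan/macvlan device has to be back in the host namespace before the driver can find and delete it, which is why that cleanup belongs in DeleteEndpoint rather than Leave.)

```python
# Illustration only: hypothetical device and netns names, assuming pyroute2.
from pyroute2 import IPRoute, NetNS

HOST_VETH = 'veth-host0'        # hypothetical host-side veth name
IPVLAN_DEV = 'ipvlan0'          # hypothetical ipvlan/macvlan device name
CONTAINER_NETNS = 'container1'  # hypothetical named netns (/var/run/netns/container1)

ipr = IPRoute()

# veth case: deleting the host-side end removes the container-side peer too,
# so doing the cleanup at 'leave' happened to work.
idx = ipr.link_lookup(ifname=HOST_VETH)
if idx:
    ipr.link('del', index=idx[0])

# ipvlan/macvlan case: the device lives inside the container namespace and is
# invisible to the host-side IPRoute. It has to be moved back to the host
# namespace (which libnetwork only does after 'leave') before deletion, hence
# the delete belongs in DeleteEndpoint.
ns = NetNS(CONTAINER_NETNS)
idx = ns.link_lookup(ifname=IPVLAN_DEV)
if idx:
    ns.link('set', index=idx[0], net_ns_pid=1)  # move it back to the host netns
ns.close()

idx = ipr.link_lookup(ifname=IPVLAN_DEV)
if idx:
    ipr.link('del', index=idx[0])
ipr.close()
```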
14:11:56 <janonymous> o/
14:12:19 <janonymous> sorry got late
14:12:21 <apuimedo> I believe that mchiappero and lmdaly have a patch in the works for solving this
14:12:24 <apuimedo> janonymous: no worries
14:12:38 <apuimedo> so I wait eagerly to get it in ;-)
14:13:01 <mchiappero> we'll be pushing shortly
14:13:03 <apuimedo> #action mchiappero lmdaly to push the libnetwork fix for container device deletion ordering
14:13:13 <mchiappero> the missing piece is the doc change
14:13:26 <apuimedo> mchiappero: you can definitely put that on a follow up patch
14:13:33 <apuimedo> now, on to the bug limao mentioned
14:13:43 <apuimedo> #link https://bugs.launchpad.net/neutron/+bug/1639186
14:13:43 <openstack> Launchpad bug 1639186 in neutron "qos max bandwidth rules not working for neutron trunk ports" [Low,Confirmed] - Assigned to Luis Tomas Bolivar (ltomasbo)
14:14:12 <apuimedo> #info ltomasbo has been checking the vlan-aware-vms neutron reference implementation for completeness
14:14:36 <apuimedo> #info ltomasbo has found out that Neutron QoS does not get applied to the container subports
14:14:41 <ltomasbo> yep, and found out that QoS cannot be applied on trunk ports
14:14:48 <ltomasbo> neither parent nor subports
14:15:07 <irenab> DSCP marking as well?
14:15:13 <apuimedo> ltomasbo: I'd rather say 'cannot be applied on trunk ports with the current plumbing'
14:15:15 <apuimedo> :P
14:15:19 <ltomasbo> didn't try, but perhaps that will work
14:15:39 <apuimedo> irenab: that will only affect real hw, won't it?
14:16:09 <irenab> I just think saying QoS cannot be applied to trunk ports is too generic
14:16:12 <apuimedo> (and possibly customized tc rules at the host's egress, of course)
14:16:21 <apuimedo> irenab: I agree with that sentiment
14:16:23 <apuimedo> :-)
14:16:38 <apuimedo> ltomasbo is looking at other ways to enable it
14:16:45 <irenab> apuimedo, seems its in neutron domain to fix, right?
14:16:49 <apuimedo> so subports and parent ports get the same level of QoS
14:16:52 <apuimedo> irenab: it is
14:16:59 <apuimedo> ltomasbo works on both sides ;-)
14:17:07 <ltomasbo> yep, it is in vlan-aware-vm part
14:17:15 <irenab> great, thanks
14:17:43 <apuimedo> ltomasbo: maybe you can explain a bit the two ways that you are looking at (container per vlan and networking per vlan)
14:17:49 <ltomasbo> apuimedo: actually it is not that they get the same QoS
14:17:49 <apuimedo> s/networking/network/
14:17:53 <ltomasbo> it's that it is not enforced at all
14:18:23 <ltomasbo> due to the way the VMs are connected to the br-int when in vlan-aware-vm mode
14:18:36 <apuimedo> ltomasbo: I meant that they should get the get QoS applied as if they were unrelated ports
14:18:44 <irenab> apuimedo, can you please give a quick update on the nested case progress?
14:18:45 <apuimedo> s/get QoS/QoS/
14:19:03 <ltomasbo> and yes, I can explain what I've been trying for the containers in vlan-aware-vms
14:19:18 <apuimedo> irenab: that is what we are doing. ltomasbo will now explain a bit about his experiments
14:19:34 <apuimedo> for kuryr-libnetwork we covered the bug in interface deletion earlier
14:19:44 <ltomasbo> yep, I've been trying two different ways of providing vlan networks to nested containers
14:19:46 <apuimedo> when serving the container-in-vm case
14:20:09 <ltomasbo> the first scenario is when we have one subport (one vlan) per container
14:20:22 <ltomasbo> independently of whether they are on the same neutron network or not
14:20:54 <irenab> ltomasbo, vlan maps to the network of type vlan?
14:21:16 <irenab> or just vlan for Container separation on host?
14:21:46 <apuimedo> this means that connectivity between containers on the same machine always goes down to br-int on the host. So security groups are applied, QoS may get applied there as well once it is fixed, but you can only have 4096 containers on the host
14:21:47 <ltomasbo> it is vlan just up to the trunk port
14:21:56 <ltomasbo> trunk bridge, sorry
14:22:02 <apuimedo> well, less than that, but in the ballpark
14:22:04 <apuimedo> :P
14:22:15 <ltomasbo> and then it will be encapsulated as the neutron network (vlan, vxlan, ...)
14:22:24 <apuimedo> irenab: vlan to separate inside the VM
14:22:28 <irenab> ltomasbo, got it, thanks
14:22:45 <ltomasbo> the other way is also using ipvlan
14:22:51 <ltomasbo> and have one subport per network
14:23:34 <apuimedo> irenab: in the previous way, you basically have the VM eth0 and kuryr creates eth.X vlan devices and moves them into the containers
14:23:43 <ltomasbo> and then, inside the VM, all the containers belonging to the same network will get connected through the same subport
14:24:09 <ltomasbo> but they need to create a port and include it in the allowed address pairs, as in the current ipvlan implementation
14:24:14 <apuimedo> so there are two calls to make to neutron: create a port and make it a subport of the VM trunk port
14:24:27 <apuimedo> (in the previous way)
14:25:11 <ltomasbo> apuimedo: yes
14:25:25 <irenab> apuimedo, so the kuryr part that does it is already wip?
14:25:32 <ltomasbo> and in the second case, there is one to get the port and one to include it into the allowed pairs of the subport
14:25:43 <apuimedo> in this other way (one vlan per network used for containers in the VM), there are also two calls: one to create the port to reserve the IP in the subnet, the other to update the port that is actually a subport of the VM so that it has the new IP as an allowed one
14:25:50 <apuimedo> right
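(A minimal sketch of the two call patterns just described, assuming python-neutronclient; the function names, IDs and VLAN id are hypothetical, and this is not the actual kuryr implementation.)

```python
# Illustration only: hypothetical names/IDs, assuming python-neutronclient.
# 'neutron' is a neutronclient.v2_0.client.Client instance.

def attach_vlan_per_container(neutron, container_net_id, trunk_id, vlan_id):
    """One subport (one vlan) per container: create a port, then make it a
    subport of the VM's trunk with the chosen in-VM VLAN id."""
    port = neutron.create_port(
        {'port': {'network_id': container_net_id}})['port']
    neutron.trunk_add_subports(
        trunk_id,
        {'sub_ports': [{'port_id': port['id'],
                        'segmentation_type': 'vlan',
                        'segmentation_id': vlan_id}]})
    return port

def attach_vlan_per_network(neutron, container_net_id, subport_id):
    """One subport per network: create a port just to reserve an IP in the
    subnet, then allow that IP on the subport shared by the network's
    containers (ipvlan-style allowed address pairs)."""
    port = neutron.create_port(
        {'port': {'network_id': container_net_id}})['port']
    ip = port['fixed_ips'][0]['ip_address']
    subport = neutron.show_port(subport_id)['port']
    pairs = subport.get('allowed_address_pairs', []) + [{'ip_address': ip}]
    neutron.update_port(subport_id,
                        {'port': {'allowed_address_pairs': pairs}})
    return port
```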
14:25:55 <vikasc> https://review.openstack.org/#/c/361993/
14:26:06 <apuimedo> vikasc: :-)
14:26:12 <vikasc> apuimedo, irenab i started this part :)
14:26:24 <irenab> vikasc, cool!
14:26:25 <apuimedo> vikasc: we'd need to make it configurable for the two modes
14:26:55 <vikasc> apuimedo, yeah makes sense, for ipval as well
14:27:00 <vikasc> *ipvlan
14:27:01 <irenab> so back to the QoS problem
14:27:06 <apuimedo> :-)
14:27:30 <irenab> what is the use case, any libnetwork API or more looking forward case?
14:27:41 <apuimedo> #action vikasc to work with ltomasbo to put the vlan management in sync with the two ways ltomasbo is experimenting with
14:28:18 <apuimedo> irenab: well, it's mostly for when k8s adds QoS support
14:28:26 <apuimedo> so we can map it to Neutron
14:28:46 <irenab> apuimedo, is it planned for 1.6?
14:28:49 <apuimedo> as part of the work to check that vlan aware VMs are 'ready' ™
14:28:57 <apuimedo> haven't checked
14:29:16 <irenab> I see, so more checking if it works as expected
14:29:19 <apuimedo> so it is more of a priority for Neutron to get this fixed than for us :P
14:29:24 <vikasc> apuimedo, was wondering what pros vlan has over ipvlan; the con is the lower limit of 4096?
14:29:24 <apuimedo> we have time
14:29:43 <irenab> there is also a proposal for CoS support (vlan prio)
14:30:02 <apuimedo> vikasc: the pros of using vlan per container are that you are getting full neutron networking to the containers, including SG
14:30:18 <apuimedo> for Container network per vlan
14:30:19 <vikasc> apuimedo, ahh , got it. thanks
14:30:42 <apuimedo> the advantage is that you are not limited to <4095 containers on the VM
14:31:18 <vikasc> apuimedo, scalability vs control
14:31:23 <apuimedo> the disadvantage is that if you wanted to have different security/policy applied to containers of the same container network
14:31:34 <apuimedo> you would not be able to let Neutron on the host handle it
14:31:49 <apuimedo> vikasc: that's a way to put it
14:31:51 <apuimedo> ;-)
14:31:52 <vikasc> apuimedo, makes sense
14:32:33 <apuimedo> #info limao has been working on revamping the rally tests, to test the cost we incur on container creation going to neutron
14:32:48 <apuimedo> it will probably get merged this week
14:33:20 <apuimedo> so we'll have better data to take into account when deciding default networking modes
14:33:30 <apuimedo> (and we can track perf regressions hopefully)
14:35:17 <irenab> apuimedo, short question regarding k8s implementation
14:35:27 <apuimedo> irenab: let's move over to the topic then
14:35:29 <apuimedo> :-)
14:35:33 <apuimedo> #topic kuryr-kubernetes
14:35:39 <apuimedo> irenab: go ahead!
14:35:42 <apuimedo> :-)
14:36:06 <irenab> is there any list of work items or a trello board to track the work you, vikasc and ivc_ are currently doing?
14:36:50 * apuimedo is ashamed
14:36:57 <apuimedo> I have the trello board
14:37:07 <apuimedo> but I have not updated it since the week before the summit
14:37:13 <apuimedo> I'll put it up to date again today
14:37:28 <apuimedo> let me put the link
14:37:31 * vikasc was a bit idle for some time and will be catching up on reviewing ivc_ patches
14:37:46 <irenab> thanks a lot! it will be very helpful for reviews
14:37:54 <ivc_> too late vikasc, toni already merged those :P
14:38:14 <vikasc> ivc_, :D
14:38:19 <ivc_> but i've got 2-3 more on the way, just need the cni driver in kuryr-lib
14:38:41 <vikasc> ivc_, will try out merged code then
14:39:25 <apuimedo> #link https://trello.com/b/1Ij919E8/networking
14:39:36 <apuimedo> if anybody is missing access, let me know and I'll add you
14:39:51 <apuimedo> ivc_: you need to rebase the handler patch
14:40:12 <apuimedo> https://review.openstack.org/#/c/391329/
14:40:22 <ivc_> apuimedo, you mean namespace?
14:40:25 <apuimedo> yup
14:40:33 <ivc_> oh we wont need it for some time now
14:40:34 <apuimedo> but IIRC it also needs other changes
14:40:55 <ivc_> it will just lurk there with 'wip' status
14:41:23 <apuimedo> since you changed the approach from having a watcher per namespace resource to the one of the prototype (one watcher per resource, letting the handlers care, if necessary, about the namespaces)
14:41:27 <apuimedo> ivc_: got it
14:41:50 <apuimedo> anybody can feel free to take items from that board
14:42:01 <ivc_> yup. it will get used once we get to sec-groups/network per namespace
14:42:04 <apuimedo> but if there's already somebody on it, do talk to each other
14:42:10 <apuimedo> ivc_: right
14:42:32 <apuimedo> irenab: did you have some other question on the k8s front?
14:42:48 <apuimedo> the summary is, ivc_ is waiting for my kuryr-lib CNI patch
14:43:03 <apuimedo> and in the meantime we can prepare the WIP of the other handlers
14:43:22 <irenab> I just wanted to see the plan, since had silly questions on the patch ivc_ posted last week :-)
14:43:23 <apuimedo> and prototype the cluster membership, since it is quite orthogonal
14:43:31 <irenab> I will check the board
14:43:38 <apuimedo> there's rarely silly questions
14:44:12 <apuimedo> except whether salmiak is better than chocolate. That would be silly
14:44:22 <irenab> :-)
14:44:28 <vikasc> there is a saying.. questions are never silly, answers can be :)
14:44:32 <apuimedo> :-)
14:44:36 <ivc_> apuimedo, what other handlers do we expect besides podport and service?
14:44:52 <irenab> endpoints?
14:45:02 <apuimedo> ivc_: I was implicitly referring to the service one (which we read from endpoints)
14:45:02 <ivc_> irenab, thats service
14:45:24 <irenab> network policy
14:45:28 <apuimedo> I think it can be put as WIP more or less like the namespaces one is
14:45:31 <apuimedo> irenab: also
14:45:46 <ivc_> network policy we'll probably need to handle in podport mostly
14:46:06 <apuimedo> I'd suggest that whoever wants to do that, take a look at how the ovn k8s implementation did it, so that we can have some reference
14:46:13 <ivc_> https://review.openstack.org/#/c/376045/
14:46:20 <ivc_> ^ service/endpoints
14:46:21 <apuimedo> ivc_: we will need to watch the policy objects
14:46:46 <irenab> apuimedo, do you have a reference to ovn?
14:46:51 <apuimedo> sure
14:47:06 <ivc_> apuimedo, yes, but my point is that we prolly cant start on policies before we finish port bindings
14:47:08 <apuimedo> irenab: https://github.com/openvswitch/ovn-kubernetes
14:47:19 <irenab> ivc_, agree
14:47:25 <irenab> apuimedo, thanks
14:47:30 <apuimedo> ivc_: it can't be tested. But one can start checking how others mapped it
14:47:39 <apuimedo> and start prototyping
14:48:03 <apuimedo> or at least breaking down things to do in the trello board
14:48:17 <irenab> sorry, have to leave. Will catch up on meeting log
14:48:17 <apuimedo> irenab: do you think you could take up the checking part?
14:48:22 <apuimedo> ok irenab
14:48:25 <apuimedo> thanks for joining
14:48:43 <irenab> apuimedo, will check if I have cycles to spend on it
14:48:50 <apuimedo> ivc_: thanks for the link to the services patch, it escaped my eye
14:48:55 <apuimedo> thanks irenab
14:49:07 <apuimedo> any other topic on k8s?
14:49:11 <apuimedo> ivc_:
14:49:56 <ivc_> got some ideas about net. policies and labels: that we can have a sec. group per label
14:50:12 <ivc_> i.e. multiple labels -> multiple sec.groups
14:50:39 <ivc_> but need to experiment with that
14:50:46 <apuimedo> same here
14:51:03 <apuimedo> I have some ideas, but need to check if it will work out with the selectors
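(A rough sketch of the "security group per label" idea mentioned above, assuming python-neutronclient; the naming scheme and helper are hypothetical, not an agreed design, and selector handling is left out.)

```python
# Illustration only: hypothetical helper and naming scheme.
# 'neutron' is a neutronclient.v2_0.client.Client instance.

def security_groups_for_labels(neutron, pod_labels):
    """Map each pod label to one Neutron security group (creating it if
    missing), so a pod with several labels ends up with several groups."""
    sg_ids = []
    for key, value in sorted(pod_labels.items()):
        sg_name = 'k8s-label-%s-%s' % (key, value)
        existing = neutron.list_security_groups(
            name=sg_name)['security_groups']
        if existing:
            sg_ids.append(existing[0]['id'])
        else:
            sg = neutron.create_security_group(
                {'security_group': {
                    'name': sg_name,
                    'description': 'k8s label %s=%s' % (key, value)}})
            sg_ids.append(sg['security_group']['id'])
    return sg_ids

# the pod's port would then carry all of those groups, e.g.:
# neutron.update_port(port_id, {'port': {'security_groups': sg_ids}})
```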
14:52:11 <apuimedo> #topic open discussion
14:52:21 <apuimedo> anybody else has other topics?
14:53:57 <apuimedo> #action apuimedo to update the trello board
14:54:01 <apuimedo> alright then. Thank you all for joining!
14:54:05 <apuimedo> #endmeeting