14:02:11 #startmeeting kuryr
14:02:12 Meeting started Mon Nov 7 14:02:11 2016 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:13 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:16 The meeting name has been set to 'kuryr'
14:02:27 Hello everybody and welcome to another Kuryr weekly IRC meeting
14:02:35 who's here to chat?
14:02:40 o/
14:02:44 o/
14:02:48 o/
14:02:48 o/
14:02:50 o/
14:02:56 o/
14:03:04 * pc_m lurking
14:03:24 o/
14:03:35 Welcome everybody!
14:03:45 o/
14:03:52 #topic kuryr-lib
14:04:21 Today I'll be pushing the new version of the kuryr-lib CNI driver
14:04:48 still missing the OVO binding :/
14:05:39 o/
14:05:55 anything else on kuryr-lib?
14:06:37 #topic kuryr-libnetwork
14:07:12 #info mchiappero and lmdaly reported a bug in how we handle the creation and deletion of interfaces in libnetwork that impacts us especially when working in container-in-vm mode
14:07:21 sorry for joining late
14:07:54 The issue is about when we delete and create the virtual devices for the container
14:08:32 they report that libnetwork expects deletion on deleteendpoint, whereas we were doing it in 'leave'
14:08:32 apuimedo, link?
14:08:35 sure
14:08:55 maybe bug is a misleading term
14:09:20 if you have good contacts with any folk in docker let's check with them
14:09:23 darn, can't find the link now
14:09:34 mchiappero: banix does, but he didn't join today
14:09:42 https://github.com/docker/libnetwork/issues/1520
14:09:45 https://bugs.launchpad.net/neutron/+bug/1639186
14:09:45 Launchpad bug 1639186 in neutron "qos max bandwidth rules not working for neutron trunk ports" [Low,Confirmed] - Assigned to Luis Tomas Bolivar (ltomasbo)
14:09:58 limao_: that's a different one :P
14:10:04 plenty of bugs to go around
14:10:06 :P
14:10:08 we
14:10:09 yeah..
14:10:11 we'll get to it
14:10:14 just find..
14:10:18 anyway. I'll find it later
14:10:25 I tried once again to ping someone in #docker-network
14:10:31 without success
14:10:45 tried pinging mrjana
14:10:47 nothing
14:10:56 the fact of the matter is that libnetwork attempts to move the device after the 'leave'. Which is good because we are supposed to delete the ipvlan/macvlan devices
14:11:08 and for that they have to be in the host namespace (so we find them)
14:11:14 with veths we do not have the problem
14:11:27 due to the fact that if we delete the host side veth
14:11:34 the other one gets removed as well
14:11:44 so we got a pass with doing things earlier
14:11:56 o/
14:12:19 sorry, got late
14:12:21 I believe that mchiappero and lmdaly have a patch in the works for solving this
14:12:24 janonymous: no worries
14:12:38 so I wait eagerly to get it in ;-)
14:13:01 we'll be pushing shortly
14:13:03 #action mchiappero lmdaly to push the libnetwork fix for container device deletion ordering
14:13:13 the missing piece is the doc change
14:13:26 mchiappero: you can definitely put that on a follow-up patch
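
A minimal sketch of the ordering change being discussed, assuming a Flask-based remote driver (the style kuryr-libnetwork uses) and pyroute2; the endpoint paths come from the libnetwork remote driver API, but the helper and the interface naming scheme are hypothetical, not the actual kuryr-libnetwork code. Deletion moves to DeleteEndpoint, by which time libnetwork has put the device back in the host namespace, instead of Leave:

    import flask
    from pyroute2 import IPRoute

    app = flask.Flask(__name__)


    def _remove_container_device(endpoint_id):
        # Hypothetical naming scheme; a real driver derives the name differently.
        ifname = 'tap' + endpoint_id[:8]
        with IPRoute() as ipr:
            # At DeleteEndpoint time libnetwork has already moved the device
            # back to the host namespace (after Leave), so we can find it by name.
            indices = ipr.link_lookup(ifname=ifname)
            if indices:
                ipr.link('del', index=indices[0])


    @app.route('/NetworkDriver.DeleteEndpoint', methods=['POST'])
    def delete_endpoint():
        data = flask.request.get_json(force=True)
        _remove_container_device(data['EndpointID'])
        return flask.jsonify({})


    @app.route('/NetworkDriver.Leave', methods=['POST'])
    def leave():
        # No device deletion here any more: an ipvlan/macvlan device is still in
        # the container namespace at this point, and deleting the host-side veth
        # early only worked because it took the peer down with it.
        return flask.jsonify({})
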
14:13:33 now, on to the bug limao mentioned
14:13:43 #link https://bugs.launchpad.net/neutron/+bug/1639186
14:13:43 Launchpad bug 1639186 in neutron "qos max bandwidth rules not working for neutron trunk ports" [Low,Confirmed] - Assigned to Luis Tomas Bolivar (ltomasbo)
14:14:12 #info ltomasbo has been checking the vlan-aware-vms neutron reference implementation for completeness
14:14:36 #info ltomasbo has found out that Neutron QoS does not get applied to the container subports
14:14:41 yep, and found out that QoS cannot be applied on trunk ports
14:14:48 neither parent nor subports
14:15:07 DSCP marking as well?
14:15:13 ltomasbo: I'd rather say 'cannot be applied on trunk ports with the current plumbing'
14:15:15 :P
14:15:19 didn't try, but perhaps that will work
14:15:39 irenab: that will only affect real hw, won't it?
14:16:09 I just think saying QoS cannot be applied to trunk ports is too generic
14:16:12 (and possibly customized tc rules at the host's egress, of course)
14:16:21 irenab: I agree with that sentiment
14:16:23 :-)
14:16:38 ltomasbo is looking at other ways to enable it
14:16:45 apuimedo, seems it's in the neutron domain to fix, right?
14:16:49 so subports and parent ports get the same level of QoS
14:16:52 irenab: it is
14:16:59 ltomasbo works on both sides ;-)
14:17:07 yep, it is in the vlan-aware-vm part
14:17:15 great, thanks
14:17:43 ltomasbo: maybe you can explain a bit the two ways that you are looking at (container per vlan and networking per vlan)
14:17:49 apuimedo: actually it is not that they get the same QoS
14:17:49 s/networking/network/
14:17:53 it is that it is not enforced at all
14:18:23 due to the way the VMs are connected to the br-int when in vlan-aware-vm mode
14:18:36 ltomasbo: I meant that they should get the get QoS applied as if they were unrelated ports
14:18:44 apuimedo, can you please give a quick update on the nested case progress?
14:18:45 s/get QoS/QoS/
14:19:03 and yes, I can explain what I've been trying for the containers in vlan-aware-vms
14:19:18 irenab: that is what we are doing. ltomasbo will now explain a bit about his experiments
14:19:34 for kuryr-libnetwork we covered the bug in interface deletion earlier
14:19:44 yep, I've been trying two different ways of providing vlan networks to nested containers
14:19:46 when serving the container-in-vm case
14:20:09 the first scenario is when we have one subport (one vlan) per container
14:20:22 independently of whether they are on the same neutron network or not
14:20:54 ltomasbo, vlan maps to the network of type vlan?
14:21:16 or just vlan for Container separation on host?
14:21:46 this means that connectivity between containers on the same machine always goes down to br-int on the host. So security groups are applied, QoS may get applied there as well once it is fixed, but you can only have 4096 containers on the host
14:21:47 it is vlan just up to the trunk port
14:21:56 trunk bridge, sorry
14:22:02 well, less than that, but in the ballpark
14:22:04 :P
14:22:15 and then it will be encapsulated as the neutron network (vlan, vxlan, ...)
14:22:24 irenab: vlan to separate inside the VM
14:22:28 ltomasbo, got it, thanks
14:22:45 the other way is also using ipvlan
14:22:51 and having one subport per network
14:23:34 irenab: in the previous way, you basically have the VM eth0 and kuryr creates eth.X vlan devices and moves them into the containers
14:23:43 and then, inside the VM, all the containers belonging to the same network will get connected through the same subport
14:24:09 but they need to create a port and include it into the allowed address pairs, as in the current ipvlan implementation
14:24:14 so there's two calls to do to neutron. Create a port and make it a subport of the VM trunk port
14:24:27 (in the previous way)
14:25:11 apuimedo: yes
14:25:25 apuimedo, so the kuryr part that does it is already wip?
14:25:32 and in the second case, there is one to get the port and one to include it into the allowed pairs of the subport
14:25:43 in this other way (one vlan per network used for containers in the VM), there's also two calls. One to create the port to reserve the IP in the subnet, the other to update the port that is actually a subport of the VM so that it has the new IP as an allowed one
14:25:50 right
14:25:55 https://review.openstack.org/#/c/361993/
14:26:06 vikasc: :-)
14:26:12 apuimedo, irenab i started this part :)
14:26:24 vikasc, cool!
14:26:25 vikasc: we'd need to make it configurable for the two modes
14:26:55 apuimedo, yeah makes sense, for ipval as well
14:27:00 *ipvlan
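
A rough sketch of the two provisioning flows just described, assuming python-neutronclient and the Neutron trunk extension; the UUID placeholders, port name and VLAN id are illustrative and not taken from the WIP patch:

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='admin',
                                    auth_url='http://controller:5000/v2.0')

    container_net_id = 'NET_UUID'         # neutron network the containers use
    vm_trunk_id = 'TRUNK_UUID'            # trunk on the VM's parent port
    existing_subport_id = 'SUBPORT_UUID'  # subport already serving this network

    # Mode 1: one subport (one vlan) per container.
    # Call 1: create a port on the container's neutron network.
    port = neutron.create_port({'port': {'network_id': container_net_id,
                                         'name': 'nested-container'}})['port']
    # Call 2: attach it as a subport of the VM trunk, tagged with the vlan id
    # that the eth0.X device inside the VM will use.
    neutron.trunk_add_subports(vm_trunk_id,
                               {'sub_ports': [{'port_id': port['id'],
                                               'segmentation_type': 'vlan',
                                               'segmentation_id': 102}]})

    # Mode 2: one subport per network, ipvlan inside the VM.
    # Call 1: create a port just to reserve an IP on the subnet.
    port = neutron.create_port({'port': {'network_id': container_net_id}})['port']
    # Call 2: add that IP to the allowed address pairs of the subport the VM
    # already has for this network (a real driver would append to the existing
    # list rather than overwrite it, since the update replaces the whole list).
    neutron.update_port(
        existing_subport_id,
        {'port': {'allowed_address_pairs': [
            {'ip_address': port['fixed_ips'][0]['ip_address']}]}})
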
14:27:01 so back to the QoS problem
14:27:06 :-)
14:27:30 what is the use case, any libnetwork API or more looking forward case?
14:27:41 #action vikasc to work with ltomasbo to put the vlan management in sync with the two ways ltomasbo is experimenting with
14:28:18 irenab: well, it's mostly for when k8s adds QoS support
14:28:26 so we can map it to Neutron
14:28:46 apuimedo, is it planned for 1.6?
14:28:49 as part of the work to check that vlan aware VMs are 'ready' ™
14:28:57 haven't checked
14:29:16 I see, so more checking if it works as expected
14:29:19 so it is more of a priority for Neutron to get this fixed than for us :P
14:29:24 apuimedo, was wondering what pros vlan has over ipvlan; cons is the lower limit of 4096?
14:29:24 we have time
14:29:43 there is also a proposal for CoS support (vlan prio)
14:30:02 vikasc: the pros of using vlan per container are that you are getting full neutron networking to the containers, including SG
14:30:18 for Container network per vlan
14:30:19 apuimedo, ahh, got it. thanks
14:30:42 the advantage is that you are not limited to <4095 containers on the VM
14:31:18 apuimedo, scalability vs control
14:31:23 the disadvantage is that if you wanted to have different security/policy applied to containers of the same container network
14:31:34 you would not be able to let Neutron on the host handle it
14:31:49 vikasc: that's a way to put it
14:31:51 ;-)
14:31:52 apuimedo, makes sense
14:32:33 #info limao has been working on revamping the rally tests, to test the cost we incur on container creation going to neutron
14:32:48 it will probably get merged this week
14:33:20 so we'll have better data to take into account when deciding default networking modes
14:33:30 (and we can track perf regressions hopefully)
14:35:17 apuimedo, short question regarding k8s implementation
14:35:27 irenab: let's move over to the topic then
14:35:29 :-)
14:35:33 #topic kuryr-kubernetes
14:35:39 irenab: go ahead!
14:35:42 :-)
14:36:06 is there any list of work items or a trello board to track the work you, vikasc and ivc_ are currently doing?
14:36:50 * apuimedo is ashamed
14:36:57 I have the trello board
14:37:07 but I have not updated it since the week before the summit
14:37:13 I'll put it up to date again today
14:37:28 let me put the link
14:37:31 * vikasc was a bit idle for some time and will be catching up on reviewing ivc_ patches
14:37:46 thanks a lot! it will be very helpful for reviews
14:37:54 too late vikasc, toni already merged those :P
14:38:14 ivc_, :D
14:38:19 but i've got 2-3 more on the way, just need the cni driver in kuryr-lib
14:38:41 ivc_, will try out merged code then
14:39:25 #link https://trello.com/b/1Ij919E8/networking
14:39:36 if anybody is missing access, let me know and I'll add you
14:39:51 ivc_: you need to rebase the handler patch
14:40:12 https://review.openstack.org/#/c/391329/
14:40:22 apuimedo, you mean namespace?
14:40:25 yup
14:40:33 oh we won't need it for some time now
14:40:34 but IIRC it also needs other changes
14:40:55 it will just lurk there with 'wip' status
14:41:23 since you changed the approach of having a watcher per namespace resource to the one of the prototype (one watcher per resource, and let the handlers care, if necessary, about the namespaces)
14:41:27 ivc_: got it
14:41:50 anybody can feel free to take items from that board
14:42:01 yup. it will get used once we get to sec-groups/network per namespace
14:42:04 but if there's already somebody on it, do talk to each other
14:42:10 ivc_: right
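
A rough sketch of the 'one watcher per resource' approach mentioned above, assuming an unauthenticated local API server and an illustrative handler; this is just the shape of the design, not the kuryr-kubernetes code: a single watcher streams one resource endpoint across all namespaces and the handlers filter on namespace themselves if they need to.

    import json

    import requests

    K8S_API = 'http://localhost:8080/api/v1'  # assumed local, unauthenticated


    class Watcher(object):
        """Watches one K8s resource endpoint and fans events out to handlers."""

        def __init__(self, resource, handlers):
            self._resource = resource  # e.g. 'pods', watched across all namespaces
            self._handlers = handlers

        def watch(self):
            url = '%s/%s?watch=true' % (K8S_API, self._resource)
            resp = requests.get(url, stream=True)
            for line in resp.iter_lines():
                if not line:
                    continue
                event = json.loads(line)
                for handler in self._handlers:
                    # A handler that only cares about some namespaces can check
                    # event['object']['metadata']['namespace'] itself.
                    handler(event)


    def log_pod_event(event):
        meta = event['object']['metadata']
        print(event['type'], meta.get('namespace'), meta['name'])


    if __name__ == '__main__':
        Watcher('pods', [log_pod_event]).watch()
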
14:42:32 irenab: did you have some other question on the k8s front?
14:42:48 the summary is, ivc_ is waiting for my kuryr-lib CNI patch
14:43:03 and in the meantime we can prepare the WIP of the other handlers
14:43:22 I just wanted to see the plan, since I had silly questions on the patch ivc_ posted last week :-)
14:43:23 and prototype the cluster membership, since it is quite orthogonal
14:43:31 I will check the board
14:43:38 there's rarely silly questions
14:44:12 except whether salmiak is better than chocolate. That would be silly
14:44:22 :-)
14:44:28 there is a saying.. questions are never silly, answers can be :)
14:44:32 :-)
14:44:36 apuimedo, what other handlers do we expect besides podport and service?
14:44:52 endpoints?
14:45:02 ivc_: I was implicitly referring to the service one (which we read from endpoints)
14:45:02 irenab, that's service
14:45:24 network policy
14:45:28 I think it can be put as WIP more or less like the namespaces one is
14:45:31 irenab: also
14:45:46 network policy we'll probably need to handle in podport mostly
14:46:06 I'd suggest that whoever wants to do that, take a look at how the ovn k8s implementation did it, so that we can have some reference
14:46:13 https://review.openstack.org/#/c/376045/
14:46:20 ^ service/endpoints
14:46:21 ivc_: we will need to watch the policy objects
14:46:46 apuimedo, do you have a reference to ovn?
14:46:51 sure
14:47:06 apuimedo, yes, but my point is that we prolly can't start on policies before we finish port bindings
14:47:08 irenab: https://github.com/openvswitch/ovn-kubernetes
14:47:19 ivc_, agree
14:47:25 apuimedo, thanks
14:47:30 ivc_: it can't be tested. But one can start checking how others mapped it
14:47:39 and start prototyping
14:48:03 or at least breaking down things to do in the trello board
14:48:17 sorry, have to leave. Will catch up on meeting log
14:48:17 irenab: do you think you could take up the checking part?
14:48:22 ok irenab
14:48:25 thanks for joining
14:48:43 apuimedo, will check if I have cycles to spend on it
14:48:50 ivc_: thanks for the link to the services patch, it escaped my eye
14:48:55 thanks irenab
14:49:07 any other topic on k8s?
14:49:11 ivc_:
14:49:56 got some ideas about net.policies and labels, that we can have a sec.group per label
14:50:12 i.e. multiple labels -> multiple sec.groups
14:50:39 but need to experiment with that
14:50:46 same here
14:51:03 I have some ideas, but need to check if it will work out with the selectors
14:52:11 #topic open discussion
14:52:21 anybody else have other topics?
14:53:57 #action apuimedo to update the trello board
14:54:01 alright then. Thank you all for joining!
14:54:05 #endmeeting