14:02:07 #startmeeting kuryr
14:02:08 Meeting started Mon Oct 31 14:02:07 2016 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:11 The meeting name has been set to 'kuryr'
14:02:32 Hello everybody and welcome to the first post Ocata Summit Kuryr weekly IRC meeting
14:02:45 I expect most people to be on holiday or still traveling back
14:02:53 hi!
14:02:55 o/
14:02:56 but let's see if we get somebody to join
14:03:51 ivc_: irenab: you there?
14:04:01 o/
14:04:47 very well, let's get started
14:05:02 first of all, I want to thank you for joining the work sessions
14:05:30 #topic kuryr-libnetwork
14:06:11 #action need to get the ipvlan/macvlan driver that was used for the demo committed upstream
14:06:45 limao: you wanted to submit a vlan driver and some network QoS options too, right?
14:07:02 Yes
14:07:13 QoS options
14:07:28 limiting max rate?
14:07:55 there are three kinds of QoS
14:07:56 the idea would be to define it as one of the libnetwork option key-values
14:08:00 currently in neutron
14:08:16 and then it would apply to the ports of that docker neutron-backed network
14:08:20 right
14:08:31 yes
14:09:00 would that be needed for the trunk port use case?
14:09:10 wouldn't it be enough to apply the QoS on the subport?
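[Editor's note: for context, the Neutron QoS workflow under discussion looks roughly like this with the OpenStack client. The policy, network, and port names are placeholders; whether a policy attached to a trunk subport actually takes effect was exactly the open question in this exchange.]

```
# Create a QoS policy and a bandwidth-limit rule (one of the three
# Neutron rule types: bandwidth-limit, DSCP marking, minimum-bandwidth).
openstack network qos policy create bw-limit
openstack network qos rule create --type bandwidth-limit --max-kbps 3000 bw-limit

# Network-level attachment: every port on the network inherits the policy.
openstack network set --qos-policy bw-limit demo-net

# Port-level attachment: what subport-level QoS would rely on.
openstack port set --qos-policy bw-limit demo-subport
```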
14:10:04 Currently it can only be done at the network level
14:10:07 ltomasbo: I'm unsure of how it behaves in neutron for subports
14:10:32 never tried it either
14:10:54 Yeah, I can try it and see what happens
14:11:26 that would be great
14:11:37 apuimedo, unless something changed drastically, subports are very similar to regular ports, so qos should work for them
14:11:42 #action limao to check how network QoS applies in neutron to ports and subports
14:12:04 ivc_: yes, but it's a bit of a question whether trunk port restrictions apply to subports
14:12:08 at least in my mind
14:12:16 (I did not read the ref impl)
14:12:46 apuimedo, do we want qos at the trunk or subport level?
14:13:02 or both?
14:13:07 I guess subport level, right?
14:13:46 well, if it were possible to support both that would be great, but our biggest concern is to have it working at the subport level
14:14:28 it might be interesting to be able to prioritise some subports over others
14:15:17 that would probably require qos on the trunk port
14:16:06 Yeah, container level qos
14:16:12 aye
14:16:28 well, subport QoS means that the host must be able to classify by subport origin
14:16:42 and apply different htb/hfsc classes
14:16:52 that's how I expect it to work
14:17:01 with ipvlan/macvlan, neutron will not support it
14:17:01 but again, no idea how it was actually implemented
14:17:14 limao: right. We'll have to add that
14:17:35 with the ovs driver, neutron applies qos in ovs
14:17:50 #action talk with Neutron folks about other segmentation types
14:18:14 limao: you wanted to add the vlan driver to kuryr-lib, right?
14:18:49 apuimedo: vlan driver?
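[Editor's note: a hypothetical sketch of what "classify by subport origin and apply different htb classes" could look like with plain tc on the trunk parent interface. The interface name and VLAN ids are placeholders, and this is not how the Neutron reference implementation does it; the OVS driver applies QoS on the br-int port instead.]

```
# One HTB class per subport, selected by 802.1Q tag.
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1: classid 1:10 htb rate 10mbit   # subport with vlan 101
tc class add dev eth0 parent 1: classid 1:20 htb rate 5mbit    # subport with vlan 102
tc filter add dev eth0 parent 1: protocol 802.1Q flower vlan_id 101 flowid 1:10
tc filter add dev eth0 parent 1: protocol 802.1Q flower vlan_id 102 flowid 1:20
```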
14:19:10 I'm not sure which part you are talking about
14:20:31 limao: well, we have ipvlan and macvlan, this was about adding vlan
14:20:34 for container-in-vm
14:20:54 but I think you mentioned using it for having vlan with bare metal also
14:21:12 No need to add a vlan driver in kuryr
14:21:33 macvlan and ipvlan can work on bare metal
14:21:37 well, you need the container device to come out tagged
14:21:40 the vlan is on the neutron side
14:21:44 for neutron default segmentation
14:21:44 aren't they going to be directly connected to the ethX.vlan?
14:22:02 No, the container device does not need to go out tagged
14:22:04 basically when you join an endpoint to the container
14:22:06 each container on its own vlan? or may there be containers in the same vlan?
14:22:20 you need to create a vlan device linked to eth0 of the VM
14:22:30 same baremetal host with same vlan
14:22:34 and then, on the host side, the neutron agent will handle it for you
14:23:53 apuimedo: if you are talking about making macvlan and ipvlan work on a baremetal server created by ironic, that does not need a vlan driver in kuryr
14:24:45 it (a baremetal server) can work with the current vm-nested macvlan and ipvlan
14:25:20 ok
14:25:30 so not for the baremetal case
14:25:38 but we still need vlan for the nested case
14:25:50 so that we can use the default neutron subport segmentation
14:26:00 ltomasbo: will you look into that?
14:26:09 btw, I asked jlibosva and he says: the qos is applied on the port on br-int, which in this case is an spi- port
14:26:15 apuimedo, you mean vlan-aware-vms?
14:26:44 I will test it, yes, vlan-aware-vms plus kuryr
14:26:53 ltomasbo: great
14:26:54 and automate the binding
14:27:03 ivc_: yup
14:27:14 oh... you mean vlan-aware-vms, sorry for the misunderstanding
14:27:41 limao: no, no. When I first asked you, I was asking about both cases ;-)
14:28:14 https://review.openstack.org/361993
14:29:06 Has vikas choudhary started some work on this?
14:29:45 * apuimedo checking
14:30:29 right.
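[Editor's note: a hypothetical sketch, inside the VM, of "create a vlan device linked to eth0 of the VM" for the nested vlan-aware-vms case. The id 102 must match the segmentation-id of the subport attached to the VM's trunk port; names, ids, and the `$CONTAINER_NETNS` variable are placeholders. The neutron agent on the host side then handles the tagged traffic.]

```
ip link add link eth0 name eth0.102 type vlan id 102
ip link set eth0.102 netns "$CONTAINER_NETNS"   # hand the tagged device to the container
ip netns exec "$CONTAINER_NETNS" ip link set eth0.102 up
```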
This work needs to be resumed
14:30:58 but, IIRC, the vlan will be passed as part of the vif oslo versioned object
14:31:05 so not much management will be needed
14:31:21 Yeah, then we need to sync with vikasc about the status
14:31:52 #action apuimedo to sync with vikasc about the vlan manager patch
14:31:58 limao, looking at that review, i'm not sure if "For each container, a subport will be created in neutron and a vlan will be allocated" is right (vlan is per-network/subnet, not per port), i'll take a closer look at that patch
14:32:18 #topic kuryr-lib
14:32:32 now that the summit is over, I can resume the cni work
14:32:48 the other item is the oslo versioned object binding interface
14:32:56 ivc_: thanks, it is a patch by vikasc
14:33:15 #action apuimedo to work on a binding interface that supports oslo versioned vif objects
14:33:33 well, and the vlan stuff we just talked about
14:33:40 anything else about kuryr-lib?
14:36:17 #topic kuryr-kubernetes
14:37:04 #info Today we got a couple of devstack fixes merged https://review.openstack.org/390820 https://review.openstack.org/391116
14:37:31 #info we also merged more handler patches https://review.openstack.org/388658 https://review.openstack.org/388506 https://review.openstack.org/386192
14:37:46 #info and a watcher patch https://review.openstack.org/376043
14:37:55 we are missing the kuryr-lib cni part
14:38:13 and then we'll be able to test the port translation handlers
14:38:17 and have CI for that
14:40:19 anything else about kuryr-kubernetes?
14:41:57 only that we decided to use plugins
14:42:28 ivc_: can you expand on that
14:42:35 so I can put it in an #info?
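[Editor's note: an illustrative sketch, not the code in review 361993, of the per-container vlan allocation ivc_ questions above: each container gets a distinct segmentation id scoped to one trunk port. All class and variable names here are invented for illustration.]

```python
class VlanAllocator:
    """Hands out 802.1Q ids (1-4094) scoped to a single trunk port."""

    def __init__(self, vlan_range=range(1, 4095)):
        self._free = set(vlan_range)
        self._by_container = {}

    def allocate(self, container_id):
        # Idempotent: the same container keeps its id.
        if container_id in self._by_container:
            return self._by_container[container_id]
        vlan = min(self._free)          # deterministic pick for the sketch
        self._free.remove(vlan)
        self._by_container[container_id] = vlan
        return vlan

    def release(self, container_id):
        # Return the id to the pool when the container's subport goes away.
        vlan = self._by_container.pop(container_id)
        self._free.add(vlan)

alloc = VlanAllocator()
a = alloc.allocate("container-a")   # -> 1
b = alloc.allocate("container-b")   # -> 2
alloc.release("container-a")
c = alloc.allocate("container-c")   # -> 1, reuses the freed id
```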
14:43:27 well, we discussed during the summit work sessions that we want to make it configurable which handlers are enabled
14:43:40 ah, right
14:43:51 it's already in the architecture, we just need to implement the framework
14:43:58 #action apuimedo to send the summit meeting notes
14:45:17 thanks ivc_ for the reminder
14:45:24 i've posted some notes on etherpads during the sessions
14:45:31 it will help me with sending the notes :-)
14:45:34 :-)
14:45:35 maybe you could salvage something from there :)
14:45:44 I'll definitely do that
14:45:53 otherwise I'll forget even more
14:46:02 #topic open discussion
14:46:13 does anybody have another topic to talk about?
14:46:59 did we get some attention from other teams/communities during the summit?
14:47:33 we did have a joint session with Magnum
14:47:48 and they helped us understand their requirements better
14:47:50 I think someone mentioned the opendaylight guys have some interest in kuryr
14:48:08 yes, I think we'll get some odl people to work on kuryr
14:48:13 also nfv people
14:48:22 and kolla-kubernetes
14:49:16 portdirect showed us a prototype he has https://github.com/portdirect/harbor for deploying openstack on top of kubernetes with kuryr-kubernetes (he uses a modified midokura PoC)
14:50:21 I think he uses it with OVN instead of the ref impl
14:50:23 we are also working on having ost on top of k8s at mirantis :)
14:50:42 ivc_: You should definitely check out his PoC
14:50:51 maybe he can give us an online demo on bluejeans
14:51:16 the idea is that harbor will need less code if we can get its pieces into kuryr and kolla-kubernetes
14:51:17 https://www.openstack.org/videos/video/mirantis-evolve-or-die-enterprise-ready-openstack-upgrades-with-kubernetes
14:51:25 i think that's the session
14:51:36 ivc_: is that the work of the tcpcloud people?
14:51:41 aye
14:51:48 cool
14:52:05 I wonder if mirantis has considered joining kolla-kubernetes
14:53:38 can't say for sure
14:54:48 ;-)
14:54:59 alright.
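[Editor's note: a hypothetical sketch of the "configurable handlers" idea discussed earlier in the session: a registry maps handler names to classes, and an enabled-handlers option picks which ones the watcher dispatches to. None of these names come from the actual kuryr-kubernetes code.]

```python
class VIFHandler:
    def on_event(self, event):
        return ("vif", event["kind"])

class LBaaSHandler:
    def on_event(self, event):
        return ("lbaas", event["kind"])

# Registry of all available handlers, keyed by a short config name.
REGISTRY = {"vif": VIFHandler, "lbaas": LBaaSHandler}

def load_enabled(enabled_handlers):
    """Instantiate only the handlers named in the config option."""
    return [REGISTRY[name]() for name in enabled_handlers]

# e.g. enabled_handlers = vif in a config file
handlers = load_enabled(["vif"])
results = [h.on_event({"kind": "Pod"}) for h in handlers]
# results == [("vif", "Pod")]
```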
Anything else before we close the meeting?
14:56:17 very well then
14:56:25 thank you all for joining
14:56:30 #endmeeting