14:02:07 <apuimedo> #startmeeting kuryr
14:02:08 <openstack> Meeting started Mon Oct 31 14:02:07 2016 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:09 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:11 <openstack> The meeting name has been set to 'kuryr'
14:02:32 <apuimedo> Hello everybody and welcome to the first post-Ocata-Summit Kuryr weekly IRC meeting
14:02:45 <apuimedo> I expect most of the people to be on holiday or still traveling back
14:02:53 <ltomasbo> hi!
14:02:55 <limao> o/
14:02:56 <apuimedo> but let's see if we get somebody to join
14:03:51 <apuimedo> ivc_: irenab: you there?
14:04:01 <ivc_> o/
14:04:47 <apuimedo> very well, let's get started
14:05:02 <apuimedo> first of all, I want to thank you for joining in the work sessions
14:05:30 <apuimedo> #topic kuryr-libnetwork
14:06:11 <apuimedo> #action need to get the ipvlan/macvlan that was used for the demo committed upstream
14:06:45 <apuimedo> limao: you wanted to submit a vlan driver and some network QoS options too, right?
14:07:02 <limao> Yes
14:07:13 <limao> QoS option
14:07:28 <ltomasbo> limiting max rate?
14:07:55 <limao> there are three kinds of qos
14:07:56 <apuimedo> the idea would be to define it as one of the libnetwork option key-values
14:08:00 <limao> currently in neutron
14:08:16 <apuimedo> and that then it would apply to the ports of that docker neutron-backed network
14:08:20 <apuimedo> right
14:08:31 <limao> yes
14:09:00 <ltomasbo> would that be needed for the trunk port use case?
14:09:10 <ltomasbo> wouldn't it be enough to apply the QoS on the subport?
14:10:04 <limao> Currently it can only be done at the network level
14:10:07 <apuimedo> ltomasbo: I'm unsure of how it behaves in neutron in subports
14:10:32 <ltomasbo> never tried it either
14:10:54 <limao> Yeah, I can try it to see what happens
14:11:26 <apuimedo> that would be great
14:11:37 <ivc_> apuimedo, unless something changed drastically, subports are very similar to regular ports, so qos should work for them
14:11:42 <apuimedo> #action limao to check how network QoS applies in neutron to ports and subports
14:12:04 <apuimedo> ivc_: yes, but it's a bit of a question whether trunk port restrictions apply to subports
14:12:08 <apuimedo> at least in my mind
14:12:16 <apuimedo> (I did not read the ref impl)
14:12:46 <ivc_> apuimedo, do we want qos on trunk or subport level?
14:13:02 <ivc_> or both?
14:13:07 <ltomasbo> I guess subport level, right?
14:13:46 <apuimedo> well, if it would be possible to support it in both it would be great, but our biggest concern is to have it working at the subport level
14:14:28 <ivc_> it might be interesting to be able to prioritise one subport over others
14:15:17 <ivc_> that would probably require qos on trunk port
14:16:06 <limao> Yeah, container level qos
14:16:12 <ivc_> aye
14:16:28 <apuimedo> well, subport QoS means that the host must be able to classify by subport origin
14:16:42 <apuimedo> and apply different htb/hfsc classes
14:16:52 <apuimedo> that's how I expect it to work
14:17:01 <limao> with ipvlan/macvlan, neutron will not support it
14:17:01 <apuimedo> but again, no idea how it was actually implemented
14:17:14 <apuimedo> limao: right. We'll have to add that
14:17:35 <limao> with ovs driver, neutron use qos in ovs
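(A sketch to go with the discussion above: the three Neutron QoS rule types limao refers to, keyed the way the QoS API names them. This is an illustrative model for the meeting notes, not a Neutron client; the parameter sets mirror the API but should be checked against the release in use.)

```python
# Illustrative model of Neutron's three QoS rule types and the
# parameters each one accepts (names follow the Neutron QoS API).
QOS_RULE_TYPES = {
    "bandwidth_limit": {"max_kbps", "max_burst_kbps"},
    "dscp_marking": {"dscp_mark"},
    "minimum_bandwidth": {"min_kbps", "direction"},
}


def validate_rule(rule_type, params):
    """Check a rule uses a known type and only that type's parameters."""
    if rule_type not in QOS_RULE_TYPES:
        raise ValueError("unknown QoS rule type: %s" % rule_type)
    unknown = set(params) - QOS_RULE_TYPES[rule_type]
    if unknown:
        raise ValueError("unexpected parameters: %s" % sorted(unknown))
    return {"type": rule_type, **params}
```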
14:17:50 <apuimedo> #action talk with Neutron folks about other segmentation types
14:18:14 <apuimedo> limao: you wanted to add the vlan driver to kuryr-lib, right?
14:18:49 <limao> apuimedo: vlan driver?
14:19:10 <limao> I'm not sure which part you are talking about
14:20:31 <apuimedo> limao: well, we have ipvlan and macvlan, this was about adding vlan
14:20:34 <apuimedo> for container-in-vm
14:20:54 <apuimedo> but I think you mentioned using it for having vlan with bare-metal also
14:21:12 <limao> No need to add a vlan driver in kuryr
14:21:33 <limao> macvlan and ipvlan can work in baremetal
14:21:37 <apuimedo> well, you need the container device to come out tagged
14:21:40 <limao> the vlan is in neutron side
14:21:44 <apuimedo> for neutron default segmentation
14:21:44 <ltomasbo> aren't they going to be directly connected to the ethX.vlan?
14:22:02 <limao> No, the container device does not need to go out tagged
14:22:04 <apuimedo> basically when you join an endpoint to the container
14:22:06 <ltomasbo> one vlan per container? or can there be containers in the same vlan?
14:22:20 <apuimedo> you need to create a vlan device linked to eth0 of the VM
14:22:30 <limao> same baremetal host with same vlan
14:22:34 <apuimedo> and then, on the host side, neutron agent will handle it for you
14:23:53 <limao> apuimedo: if you are talking about macvlan and ipvlan working on a baremetal server created by ironic, that does not need a vlan driver in kuryr
14:24:45 <limao> it (the baremetal server) can work with the current vm-nested macvlan and ipvlan
14:25:20 <apuimedo> ok
14:25:30 <apuimedo> so not for the baremetal case
14:25:38 <apuimedo> but we still need vlan for the nested case
14:25:50 <apuimedo> so that we can use default neutron subport segmentation
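(A minimal sketch of the nested container-in-vm binding step just described: when an endpoint joins, a vlan device tagged with the subport's segmentation id is created on top of the VM's eth0, and the host-side neutron agent handles the rest. The helper names and command strings below are illustrative, not kuryr's actual driver.)

```python
# Sketch of the vlan sub-device a nested kuryr driver would create on
# the VM's eth0 for each subport (illustrative, not the real binding code).

def vlan_device_name(parent, segmentation_id):
    """eth0 + vlan 102 -> 'eth0.102', the conventional 802.1q sub-device name."""
    name = "%s.%d" % (parent, segmentation_id)
    if len(name) > 15:  # IFNAMSIZ - 1: Linux interface name length limit
        raise ValueError("interface name too long: %s" % name)
    return name


def binding_commands(parent, segmentation_id):
    """The shell steps the driver would perform, shown as data for clarity."""
    vlan_dev = vlan_device_name(parent, segmentation_id)
    return [
        "ip link add link %s name %s type vlan id %d"
        % (parent, vlan_dev, segmentation_id),
        "ip link set %s up" % vlan_dev,
    ]
```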
14:26:00 <apuimedo> ltomasbo: will you look into that?
14:26:09 <ltomasbo> btw, I asked jlibosva and he says: the qos is applied on port on br-int, which in this case is an spi- port
14:26:15 <ivc_> apuimedo, you mean vlan-aware-vms?
14:26:44 <ltomasbo> I will test it, yes vlan-aware-vm plus kuryr
14:26:53 <apuimedo> ltomasbo: great
14:26:54 <ltomasbo> and automate the binding
14:27:03 <apuimedo> ivc_: yup
14:27:14 <limao> oh... you mean vlan-aware-vms, sorry for the misunderstanding
14:27:41 <apuimedo> limao: no, no. When I first asked you, I was asking about both cases ;-)
14:28:14 <limao> https://review.openstack.org/361993
14:29:06 <limao> Has vikas choudhary started some work on this?
14:29:45 * apuimedo checking
14:30:29 <apuimedo> right. This work needs to be resumed
14:30:58 <apuimedo> but, IIRC, the vlan will be passed as part of the vif oslo versioned object
14:31:05 <apuimedo> so not much management will be needed
14:31:21 <limao> Yeah, then we need to sync with vikasc about the status
14:31:52 <apuimedo> #action apuimedo to sync with vikasc about vlan manager patch
14:31:58 <ivc_> limao, looking at that review, i'm not sure if "For each container, a subport will be created in neutron and a vlan will
14:31:58 <ivc_> be allocated" is right (vlan is per-network/subnet, not per port), i'll take a closer look at that patch
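(A sketch of the vlan manager idea under discussion in that review: for vlan-aware-vms, subport segmentation ids only need to be unique per trunk (parent port), so ids can repeat across trunks. The class below is an illustrative allocator under that assumption, not the actual patch.)

```python
# Illustrative per-trunk vlan id allocator: each trunk (parent port)
# gets its own pool, so the same vlan id can be reused on other trunks.

class VlanAllocator:
    def __init__(self, vlan_range=range(1, 4095)):
        self._range = vlan_range
        self._used = {}  # trunk_id -> set of vlan ids allocated on that trunk

    def allocate(self, trunk_id):
        used = self._used.setdefault(trunk_id, set())
        for vlan in self._range:
            if vlan not in used:
                used.add(vlan)
                return vlan
        raise RuntimeError("no free vlan id on trunk %s" % trunk_id)

    def release(self, trunk_id, vlan):
        self._used.get(trunk_id, set()).discard(vlan)
```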
14:32:18 <apuimedo> #topic kuryr-lib
14:32:32 <apuimedo> now that the summit is over, I can resume the cni work
14:32:48 <apuimedo> the other item is the oslo versioned object binding interface
14:32:56 <limao> ivc_ : thanks , it is a patch by vikasc
14:33:15 <apuimedo> #action apuimedo to work on a binding interface that supports oslo versioned vif objects
14:33:33 <apuimedo> well, and the vlan stuff we just talked about
14:33:40 <apuimedo> anything else about kuryr-lib?
14:36:17 <apuimedo> #topic kuryr-kubernetes
14:37:04 <apuimedo> #info Today we got a couple devstack fixes merged https://review.openstack.org/390820 https://review.openstack.org/391116
14:37:31 <apuimedo> #info we also merged more handler patches https://review.openstack.org/388658 https://review.openstack.org/388506 https://review.openstack.org/386192
14:37:46 <apuimedo> #info and a watcher patch https://review.openstack.org/376043
14:37:55 <apuimedo> we are missing the kuryr-lib cni part
14:38:13 <apuimedo> and we'll be able to test the port translation handlers
14:38:17 <apuimedo> and have CI for that
14:40:19 <apuimedo> anything else about kuryr-kubernetes?
14:41:57 <ivc_> only that we decided to use plugins
14:42:28 <apuimedo> ivc_: can you expand on that
14:42:35 <apuimedo> so I can put it on an #info ?
14:43:27 <ivc_> well, we discussed during the summit work sessions that we want to make which handlers are enabled configurable
14:43:40 <apuimedo> ah, right
14:43:51 <ivc_> it's already in the architecture, just need to implement the framework
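(A sketch of that configurable-handlers idea: the controller holds a registry of event handlers and activates only those named in configuration, e.g. an enabled-handlers option. The handler names and registry shape here are hypothetical; a real framework would more likely load them as stevedore plugins.)

```python
# Illustrative handler registry with a config-driven enable filter
# (handler names are made up for the example).

HANDLER_REGISTRY = {
    "vif": "watch pods, create Neutron ports",
    "lb": "watch services, manage load balancers",
    "namespace": "watch namespaces, manage subnets",
}


def enabled_handlers(registry, enabled):
    """Return only the handlers named in the configuration, validating names."""
    unknown = set(enabled) - set(registry)
    if unknown:
        raise ValueError("unknown handlers: %s" % sorted(unknown))
    return {name: registry[name] for name in enabled}
```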
14:43:58 <apuimedo> #action apuimedo to send the summit meeting notes
14:45:17 <apuimedo> thanks ivc_ for the reminder
14:45:24 <ivc_> i've posted some notes on etherpads during sessions
14:45:31 <apuimedo> it will help me with sending the notes :-0
14:45:34 <apuimedo> :-)
14:45:35 <ivc_> maybe you could salvage something from there :)
14:45:44 <apuimedo> I'll definitely do
14:45:53 <apuimedo> otherwise I'll forget even more
14:46:02 <apuimedo> #topic open discussion
14:46:13 <apuimedo> does anybody have another topic to talk about?
14:46:59 <ivc_> did we get some attention from other teams/communities during summit?
14:47:33 <apuimedo> we did have a joint session with Magnum
14:47:48 <apuimedo> and they helped us understand better their requirements
14:47:50 <ivc_> like that someone mentioned opendaylight guys have some interest in kuryr
14:48:08 <apuimedo> yes, I think we'll get some odl people to work on kuryr
14:48:13 <apuimedo> also nfv people
14:48:22 <apuimedo> and kolla-kubernetes
14:49:16 <apuimedo> portdirect showed us a prototype he has https://github.com/portdirect/harbor for deploying openstack on top of kubernetes with kuryr-kubernetes (he uses a modified midokura PoC)
14:50:21 <apuimedo> I think he uses it with OVN instead of the ref impl
14:50:23 <ivc_> we're also working on having ost on top of k8s in mirantis :)
14:50:42 <apuimedo> ivc_: You should definitely check out his PoC
14:50:51 <apuimedo> maybe he can make us an online demo on bluejeans
14:51:16 <apuimedo> the idea is that harbor will need less code if we can get its pieces into kuryr and kolla kubernetes
14:51:17 <ivc_> https://www.openstack.org/videos/video/mirantis-evolve-or-die-enterprise-ready-openstack-upgrades-with-kubernetes
14:51:25 <ivc_> i think thats the session
14:51:36 <apuimedo> ivc_: is that the work of the tcpcloud people?
14:51:41 <ivc_> aye
14:51:48 <apuimedo> cool
14:52:05 <apuimedo> I wonder if mirantis has considered joining in kolla-kubernetes
14:53:38 <ivc_> can't say for sure
14:54:48 <apuimedo> ;-)
14:54:59 <apuimedo> alright. Anything else before we close the meeting?
14:56:17 <apuimedo> very well then
14:56:25 <apuimedo> thank you all for joining
14:56:30 <apuimedo> #endmeeting