14:05:09 <apuimedo> #startmeeting kuryr
14:05:09 <janonymous> o/ Hi xD
14:05:10 <openstack> Meeting started Mon Jun 12 14:05:09 2017 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:05:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:05:12 <irenab> hi
14:05:13 <openstack> The meeting name has been set to 'kuryr'
14:05:17 <garyloug> o/
14:05:21 <mchiappero> o/
14:05:22 <limao> o/
14:05:22 <apuimedo> Who's here for the show?
14:05:23 <janonymous> o/
14:05:27 <kevinz> o/
14:06:06 <apuimedo> Thank you all for showing up
14:06:09 <apuimedo> and sorry for the delay
14:06:14 <apuimedo> #topic kuryr-libnetwork
14:07:21 <apuimedo> We got some new patches on kuryr-libnetwork land last week
14:07:59 <apuimedo> #info kuryr-libnetwork moved to use the etcd devstack plugin
14:08:31 <apuimedo> it now uses the etcd configured port from the plugin
14:08:47 <apuimedo> the tag extension check got fixed
14:09:11 <apuimedo> #info there's some code submitted for a bugfix
14:09:15 <apuimedo> #link https://review.openstack.org/#/c/470773/
14:09:27 <apuimedo> #action limao and apuimedo to review
14:09:43 <apuimedo> also egonzalez reported some problem with the zun integration
14:09:48 <apuimedo> we'll have to take a look
14:09:51 <apuimedo> anything else?
14:10:07 <irenab> apuimedo: any bp for zun integration?
14:10:27 <irenab> maybe on zun project
14:11:01 <apuimedo> irenab: probably on zun
14:11:16 <apuimedo> irenab: the problem is with the binding
14:11:23 <apuimedo> it is not detecting the type apparently
14:11:37 <irenab> vif type => binding type?
14:11:56 <apuimedo> right
14:11:57 <irenab> ok, I guess we can deal having bug reported
14:12:06 <dmellado> A binding script for this type can't be found
14:12:09 <dmellado> irenab: ^^
14:12:34 <irenab> dmellado: a new type that the libnetwork integration does not support?
14:12:43 <egonzalez> not sure about any bp in zun, this is the main change https://review.openstack.org/#/c/453387/
14:12:56 <irenab> egonzalez: thanks
14:14:04 <irenab> good to see its coming
14:14:07 <apuimedo> irenab: no, it is ovs
14:15:00 <irenab> got it
14:15:17 <apuimedo> #topic kuryr-kubernetes
14:15:38 <janonymous> Sorry, a bit busy these days. Minor update on client migration: I am working on the migration, reporting and updating possible issues upstream in the k8s repo that might affect kuryr, like https://github.com/kubernetes-incubator/client-python/issues/240 and others
14:15:43 <apuimedo> #info Initial port pool support has been finally merged. Expect performance gains
14:16:21 <apuimedo> janonymous: thanks for the update!
14:16:37 <apuimedo> #info macvlan support for pod-in-VM has been finally merged!
14:16:41 <janonymous> apuimedo: :)
14:17:05 <apuimedo> #info bearer token support has been merged too. This clears the way for containerized kuryr controller
14:17:31 <apuimedo> also, because it is a damn bother to generate key and cert for the controller :P
14:18:09 <apuimedo> #info genericvif is now neutronvif
14:18:15 <apuimedo> just a rename
14:18:38 <apuimedo> #info SR-IOV pike spec has been merged as well
14:19:02 <apuimedo> kzaitsev_pi: you'll have some rebases to do though
14:19:59 <apuimedo> #info OpenDaylight is working on integrating with kuryr kubernetes, they sent a sample devstack local.conf https://review.openstack.org/#/c/471012/
14:21:09 <apuimedo> #info work is lining up for functional and full stack tests. A new repo is being created for the tempest plugin. It will use the same k8s incubator python client janonymous is trying to port kuryr-kubernetes to
14:21:09 <dmellado> apuimedo: is that the proper link?
14:21:18 <dmellado> I don't see any local.conf there
14:21:27 <apuimedo> dmellado: nope. Obviously I fucked up
14:21:30 <apuimedo> xD
14:21:32 <dmellado> xD
14:21:35 <dmellado> -1 xD
14:21:48 <apuimedo> #link https://review.openstack.org/449309
14:21:52 <dmellado> thanks!
14:22:37 <apuimedo> Anything else on kuryr-kubernetes?
14:23:00 <kevinz> I have one
14:23:01 <apuimedo> If anybody wants to know, the next big thing to tackle, now that we merged so much stuff today
14:23:10 <apuimedo> should be the cni daemon split
14:23:19 <janonymous> \o/ :D
14:23:23 <apuimedo> and the containerization that vikasc started a week or two ago
14:23:28 <apuimedo> kevinz: go ahead
14:23:40 <kevinz> I'm working on integrating kuryr-kubernetes with magnum
14:23:45 <apuimedo> ah!
14:23:47 <apuimedo> That's great
14:23:49 <dmellado> awesome ;)
14:23:53 <kevinz> Thx
14:23:54 <apuimedo> how may we help you?
14:23:57 <kevinz> :-)
14:24:05 <apuimedo> kevinz: should it use packages or source?
14:24:19 <dmellado> and the next question is, do you have any review around? xD
14:24:42 <kevinz> From magnum side they want kuryr-kubernetes in container
14:24:50 <apuimedo> kevinz: that's good
14:25:03 <kevinz> Yeah I have one, but still in investigation
14:25:05 <apuimedo> kevinz: both the controller and the cni side?
14:25:27 <kevinz> Yes, I think having both in containers would be good
14:26:04 <kevinz> https://blueprints.launchpad.net/magnum/+spec/integrate-kuryr-kubernetes
14:26:10 <kevinz> Here is the link in magnum
14:26:28 <apuimedo> thanks
14:26:34 <apuimedo> #link https://blueprints.launchpad.net/magnum/+spec/integrate-kuryr-kubernetes
14:27:05 <apuimedo> kevinz: note that you should probably configure kuryr controller in macvlan mode
14:27:15 <apuimedo> at least until heat has the support for trunk ports merged
14:27:34 <apuimedo> oh, I see that you have that in the BP already!
14:27:46 <kevinz> apuimedo: yes, I have used the heat patch to test: https://review.openstack.org/442496
14:28:29 <apuimedo> kevinz: well, the good thing is that you can do without it, thanks to the macvlan support merged today
14:28:34 <apuimedo> kevinz: another question
14:28:47 <apuimedo> where should magnum be pulling the container from?
14:28:59 <apuimedo> and what do you use to deploy kubernetes? Kubeadm?
14:29:01 <kevinz> Yesterday I filed a bug: https://bugs.launchpad.net/bugs/1697279
14:29:01 <openstack> Launchpad bug 1697279 in kuryr-kubernetes "devstack fail when create_k8s_router_fake_service in overcloud" [Undecided,New]
14:29:12 <apuimedo> ah, yes
14:29:17 <apuimedo> I saw it
14:29:42 <kevinz> apuimedo: did I do something wrong here?
14:30:10 <kevinz> apuimedo: Magnum use hyperkube to deploy k8s:-)
14:30:42 <apuimedo> like us then
14:30:47 <kevinz> apuimedo: and pulling the hyperkube from gcr.io
14:30:53 <apuimedo> kevinz: if you had the ovs probe disabled
14:30:57 <apuimedo> it should not have failed
14:31:10 <apuimedo> I'm right now testing the exact same you are doing
14:31:22 <kevinz> apuimedo: Thx very much
14:31:22 <apuimedo> for https://review.openstack.org/#/c/472763/
14:31:53 <apuimedo> kevinz: I'm hoping to get my patch to work between tonight and tomorrow, so I will find out any issues that there could be
14:32:07 <kevinz> apuimedo: I'm very happy to see this devstack heat script
14:32:08 <apuimedo> if you want to stick around #openstack-kuryr we can keep looking at the issues
14:32:19 <apuimedo> kevinz: we already have it for baremetal
14:32:25 <kevinz> apuimedo: OK thx a lot
14:32:28 <apuimedo> and it has been a lifesaver for me
14:32:33 <apuimedo> so much time saved
14:32:34 <Irenab_> I will check this patch tomorrow as well
14:32:44 <apuimedo> I hope this overcloud one will prove as useful
14:32:44 <kevinz> haha  It's cool
14:32:53 <apuimedo> at least until we have the magnum integration finished
14:33:03 <apuimedo> thanks Irenab_
14:33:22 <apuimedo> kevinz: to summarize. We need to:
14:33:31 <apuimedo> -  address this overcloud router issue
14:33:41 <apuimedo> - finish containerizing the controller and cni
14:34:31 <apuimedo> alright, anything else on kuryr-k8s land?
14:34:34 <kevinz> apuimedo: Thx! Thanks a lot for your help
14:34:58 <kevinz> apuimedo: I think that will be easier for kuryr with magnum
14:35:27 <apuimedo> kevinz: I'm looking forward to that a lot
14:35:31 <apuimedo> #topic fuxi
14:35:36 <apuimedo> #chair hongbin
14:35:37 <openstack> Current chairs: apuimedo hongbin
14:35:46 <apuimedo> hongbin: you have the floor
14:36:26 <zengchen1> it seems that hongbin is not online.
14:36:35 <ltomasbo> hi all, sorry I'm late, was giving a presentation... Will read the logs
14:37:36 <apuimedo> alright then. I'll give the updates
14:37:44 <apuimedo> #info fuxi-golang is about to merge
14:37:57 <zengchen1> the repository of fuxi-golang is set up. we can start work on it.
14:38:25 <apuimedo> #info fuxi-kubernetes initial patch is approved as well
14:38:32 <apuimedo> ah. That is great!
14:38:53 <zengchen1> apuimedo:I have a question about fuxi-k8s.
14:38:57 <apuimedo> go ahead
14:39:17 <hongbin> o/
14:39:18 <zengchen1> apuimedo: when mounting a volume on the node that runs the kubelet process, the volume driver needs to know whether the node is a VM or a bare metal machine. If the node is a bare metal machine, then just mount the volume to it. However, if it is a VM, then it has to mount the volume via Nova. How does Kuryr do this when it supplies networking for K8S?
14:39:54 <apuimedo> zengchen1: with trunk ports or allowed IP addresses
14:40:22 <Irenab_> It's a matter of proper driver configuration
14:40:45 <Irenab_> Nested versus bare metal
14:41:40 <apuimedo> zengchen1: I would just target baremetal at first
14:42:23 <zengchen1> apuimedo: does kuryr just supply network for container which runs on baremetal?
14:42:49 <apuimedo> zengchen1: no. It does both
14:42:52 <Irenab_> I thought the use case for Fuxi-k8s is bare metal only
14:42:56 <apuimedo> but first we started with baremetal
14:43:12 <apuimedo> Irenab_: I assume for pod-in-vm they could provide manila as a novelty
14:44:01 <apuimedo> zengchen1: in your case, you'd just need to do the following
14:44:14 <apuimedo> fuxi-kubernetes handler would be configured as pod-in-vm
14:44:26 <apuimedo> s/handler/driver/
14:44:35 <apuimedo> then the driver would first attach the nova
14:44:51 <apuimedo> then the flexvolume would put into container
14:44:56 <apuimedo> zengchen1: right?
14:45:20 <apuimedo> s/would first attach the nova/would first attach using nova/
14:45:21 <zengchen1> apuimedo:you mean driver runs in pod?
14:45:47 <apuimedo> zengchen1: controller can run on a pod optionally, but that's not what I meant
14:46:00 <zengchen1> apuimedo:got it.
14:46:25 <apuimedo> what I meant is that fuxi-kubernetes can see where the pod is scheduled, if the node is a nova instance, it can call nova to attach the volume, right?
14:46:46 <zengchen1> apuimedo:yes
14:47:08 <apuimedo> good
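The attach flow zengchen1 and apuimedo agree on above (attach via Nova when the kubelet node is a VM, mount directly on bare metal, then let the flexvolume driver expose it to the container) can be illustrated as a minimal sketch. All names here (`attach_volume`, the node dict shape, the `nova`/`mounter` helpers) are hypothetical, not the actual fuxi-kubernetes API:

```python
# Hedged sketch of the pod-in-VM vs. bare-metal volume attach flow.
# Every name below is a hypothetical illustration of the logic discussed,
# not real fuxi-kubernetes code.

def attach_volume(node, volume_id, nova, mounter):
    """Attach a Cinder volume to the node a pod is scheduled on.

    For pod-in-VM the node is a Nova instance, so the volume must first
    be attached to that instance via Nova; only then can the flexvolume
    driver mount it for the container. On bare metal it mounts directly.
    """
    steps = []
    if node.get("is_vm"):
        # The hypervisor owns the block device: attach to the VM first.
        nova.attach(node["instance_id"], volume_id)
        steps.append("nova-attach")
    # Direct mount on the host (bare metal, or inside the VM after attach).
    mounter.mount(volume_id, node["name"])
    steps.append("mount")
    return steps
```

The driver only needs to consult where the pod is scheduled (a Nova instance or not), which matches apuimedo's point that fuxi-kubernetes can make this call per node.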
14:47:14 <apuimedo> anything else on fuxi land?
14:47:23 <hongbin> nothing else from me
14:47:45 <apuimedo> #action irenab to review https://review.openstack.org/#/c/470923/4
14:47:53 <apuimedo> #topic open discussion
14:48:03 <apuimedo> any other topic before we close shop?
14:48:23 <hongbin> i have one for libnetwork
14:49:01 <hongbin> apuimedo: could i go ahead right now?
14:49:51 <apuimedo> hongbin: sure
14:50:06 <hongbin> apuimedo: i wanted to discuss this proposal: https://blueprints.launchpad.net/kuryr-libnetwork/+spec/existing-subnet
14:50:27 <hongbin> want to get some early feedback first, do you think if it is a good idea?
14:51:14 <apuimedo> when would these options be passed
14:51:34 <hongbin> apuimedo: on docker run
14:51:45 <hongbin> apuimedo: sorry, on docker network create
14:51:49 <apuimedo> hongbin: ah
14:51:58 <apuimedo> I was gonna say. I don't recall docker run taking options
14:52:01 <apuimedo> :P
14:52:03 <apuimedo> that would be great
14:52:06 <apuimedo> though
14:52:29 <hongbin> apuimedo: it will pass a driver-specific option (i.e. kuryr.subnet.uuid)
14:52:46 <hongbin> apuimedo: it will be the same way we passed network/subnetpool
14:52:50 <apuimedo> hongbin: and how will you store it for the ipam to know which subnets should it use?
14:53:30 <hongbin> apuimedo: there are several options, one is store it as a tag
14:53:53 <hongbin> apuimedo: then, when the ipam is searching for a subnet, it searches for that specific tag first
14:54:02 <Irenab_> Do you expect network to exist as well?
14:54:40 <hongbin> Irenab_: i think yes, if there is an existing subnet, the network has to pre-exist
14:55:14 <Irenab_> I wonder if this is not redundant with the network option
14:55:40 <apuimedo> hongbin: I'm fine with it. But I'd really like the BP to have a description of the possibilities for mapping which subnets should ipam use
14:55:54 <hongbin> Irenab_: the problem this proposal is going to solve is overlapping CIDRs, so the network option couldn't resolve this problem
14:56:22 <hongbin> apuimedo: sure, i can write the details into the whiteboard
14:56:59 <Irenab_> Please address the use case it serves, overlapping IPs
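The lookup hongbin outlines (pass an existing subnet's UUID as a driver option such as `kuryr.subnet.uuid` on `docker network create`, store it as a tag, and have IPAM search by tag before falling back to a CIDR match) could look roughly like this. The helper name and subnet dict shape are assumptions for illustration, not the design the blueprint will merge:

```python
# Hedged sketch of the tag-first subnet lookup from hongbin's proposal.
# The subnet UUID passed via a driver-specific option (e.g.
# "kuryr.subnet.uuid") is recorded as a tag; IPAM prefers the tagged
# subnet and only falls back to CIDR matching, which is ambiguous when
# CIDRs overlap -- the problem motivating the proposal.

def find_subnet(subnets, network_tag, cidr):
    """Return the id of the subnet tagged for this network, else a CIDR match."""
    for subnet in subnets:
        if network_tag in subnet.get("tags", []):
            return subnet["id"]  # explicit binding, safe with overlapping CIDRs
    for subnet in subnets:
        if subnet["cidr"] == cidr:
            return subnet["id"]  # fallback: first CIDR match, possibly ambiguous
    return None
```

With two subnets sharing a CIDR, only the tag distinguishes them, which is why a plain CIDR lookup cannot serve the overlapping-IPs use case Irenab_ raises.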
14:57:21 <apuimedo> thanks hongbin!
14:57:29 <apuimedo> alright. Closing up
14:57:33 <apuimedo> Thank you all for joining!
14:57:35 <apuimedo> #endmeeting