14:01:30 <apuimedo> #startmeeting kuryr
14:01:31 <openstack> Meeting started Mon Sep 19 14:01:30 2016 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:33 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:35 <openstack> The meeting name has been set to 'kuryr'
14:01:45 <apuimedo> Hello! Who's here for the kuryr meeting?
14:01:50 <vikasc> o/
14:01:53 <limao_> o/
14:01:54 <tonanhngo> o/
14:01:56 <lmdaly> o/
14:02:21 <ivc_> o/
14:03:06 <apuimedo> #info vikasc, limao_, tonanhngo, lmdaly, ivc_ and apuimedo present
14:03:15 <apuimedo> Thank you for joining
14:03:23 <apuimedo> #topic kuryr-lib
14:03:42 <apuimedo> https://wiki.openstack.org/wiki/Meetings/Kuryr#Meeting_September_19th.2C_2016
14:04:02 <apuimedo> #topic kuryr-lib: REST/rpc
14:04:34 <apuimedo> #info vikasc has some patches for it already
14:04:38 <irenab> apuimedo: will partially attend
14:05:02 <apuimedo> lmdaly: I would like to understand if the approach vikasc is pushing works for the PoC you have
14:05:02 <vikasc> apuimedo, https://review.openstack.org/#/c/342624/
14:05:37 <vikasc> apuimedo, this is the first patch to be reviewed
14:05:44 <janonymous> o/
14:06:24 <apuimedo> welcome janonymous
14:06:34 <apuimedo> #link https://review.openstack.org/#/c/342624/
14:06:40 <pablochacin> o/
14:06:47 <apuimedo> welcome pablochacin
14:07:19 <apuimedo> vikasc: lmdaly: we have to make it in a way that is configurable with both approaches, address/pairs and trunk/subport
14:07:33 <apuimedo> vikasc: did you introduce such a config option already?
14:07:43 <vikasc> apuimedo, not yet
14:08:38 <vikasc> apuimedo, i have introduced a configuration option to tell if baremetal or nested-containers
14:08:48 <apuimedo> I'm aware
14:08:54 <apuimedo> I think we can merge that one
14:09:13 <vikasc> apuimedo, it is like one more variable applicable with nested-containers
14:09:25 <apuimedo> #action apuimedo to review and possibly merge https://review.openstack.org/362023
14:09:41 <apuimedo> #action vikasc to add a second config patch for selecting the container-in-vm approach
14:10:14 <apuimedo> the same setting should be used for both the host side and the VM side
14:10:15 <vikasc> apuimedo, so to answer your question: this new configuration option is not added yet for the nested-containers case.
14:11:09 <apuimedo> very well
14:11:11 <vikasc> apuimedo, by host side you mean on master node?
14:11:12 <apuimedo> we'll get to that
14:11:18 <apuimedo> vikasc: right
14:11:31 <vikasc> apuimedo, yeah..right
14:11:42 <hongbin> o/
14:11:59 <apuimedo> vikasc: possibly we can run more than one master node and have them HAed with corosync-pacemaker
14:12:07 <apuimedo> welcome hongbin
14:12:09 <vikasc> apuimedo, in my current understanding rpc patches will work for the ip/macvlan approach as well
14:12:21 <apuimedo> good
14:12:36 <apuimedo> #action apuimedo irenab to review all the rpc patches
14:12:54 <apuimedo> #topic kuryr-lib: specs/devref
14:13:04 <vikasc> apuimedo, i need to rebase rpc patches.. i was updating rest patches only by now
14:13:26 <vikasc> apuimedo, will do
14:13:43 <vikasc> apuimedo, we should review first kuryr-lib repo patches
14:13:46 <apuimedo> #info after going through the mailing list for a week, I think the option that generates the larger consensus is to keep the specs on openstack/kuryr and to put devrefs close to the code, i.e. in the repo where the implementation will go
14:13:57 <apuimedo> vikasc: absolutely
14:14:56 <vikasc> apuimedo, i will ask you about the HA scenario offline
14:15:13 <apuimedo> vikasc: sure, we can discuss it later
14:15:17 <apuimedo> #topic kuryr-libnetwork: credential selection
14:15:54 <apuimedo> #info: Keystone v3 support was merged, but Kuryr will still prefer clouds.yaml auth options over kuryr.conf, as it was doing up until now
14:16:04 <apuimedo> banix brought my attention to it
14:16:21 <apuimedo> and we should try reverting that preference in a way that doesn't break fullstack/rally tests.
14:16:53 <banix> apuimedo: how is this done? i mean where is this being done?
14:17:38 <apuimedo> banix: let me link it
14:18:21 <apuimedo> #link https://github.com/openstack/kuryr-libnetwork/blob/master/kuryr_libnetwork/controllers.py#L74-L76
14:18:34 <apuimedo> first it goes to the cloud yaml in line 74
14:18:46 <apuimedo> otherwise it goes to getting it from kuryr.conf in L76
14:18:58 <apuimedo> it's possible that just swapping those lines fixes it
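The swap discussed above can be sketched as a preference order over credential sources. This is an illustrative stand-in, not the actual kuryr-libnetwork code: the loader functions here are hypothetical placeholders for the real clouds.yaml and kuryr.conf readers.

```python
# Hypothetical sketch of the credential-selection swap: prefer kuryr.conf
# over clouds.yaml. load_clouds_yaml/load_kuryr_conf are illustrative
# stand-ins, not the real kuryr-libnetwork helpers.

def load_clouds_yaml():
    # Stand-in for reading auth options from clouds.yaml; None = not found.
    return None

def load_kuryr_conf():
    # Stand-in for reading Keystone auth options from kuryr.conf.
    return {'auth_url': 'http://keystone:5000/v3'}

def select_auth(prefer_kuryr_conf=True):
    """Try each credential source in order; the first one that yields wins."""
    sources = [load_kuryr_conf, load_clouds_yaml]
    if not prefer_kuryr_conf:
        sources.reverse()  # pre-swap behaviour: clouds.yaml checked first
    for source in sources:
        auth = source()
        if auth is not None:
            return auth
    raise RuntimeError('no Keystone auth configuration found')
```

The fix banix will test amounts to reversing the order in which the two sources are consulted, without changing either loader.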
14:19:28 <banix> i see; now i remember; it was done for the gate job at the time i think
14:20:01 <banix> will try after the call and let you know; maybe we can fix it before the release?
14:20:49 <apuimedo> very well
14:20:54 <apuimedo> banix: that would be ideal
14:21:06 <banix> sure
14:21:11 <apuimedo> #action banix to test swapping the lines
14:21:29 <apuimedo> #info there is a new trello board to track work on Kuryr https://trello.com/b/1Ij919E8
14:21:49 <apuimedo> #info committers to kuryr can ping me for being added as members to the trello team
14:22:18 <apuimedo> #topic: kuryr-libnetwork: release
14:22:56 <apuimedo> #info apuimedo sent a patch to get the kuryr-lib requirement in requirements.txt but it was rejected due to global requirements freeze
14:23:15 <apuimedo> #action apuimedo to request an exception. Which should be granted since nobody else depends on it.
14:24:08 <apuimedo> #topic: kuryr-libnetwork: container-in-vm
14:24:23 <apuimedo> well, we covered most of it in the kuryr-lib topic
14:24:39 <apuimedo> vikasc: anything to add that is kuryr-libnetwork specific?
14:25:02 <vikasc> apuimedo, i think thats pretty much all
14:25:07 <apuimedo> very well
14:25:16 <apuimedo> #topic kuryr-libnetwork:general
14:25:37 <apuimedo> #info dongcanye got a patch merged for adding endpoint operation info
14:26:03 <apuimedo> banix: have you tried kuryr with the new swarm?
14:26:20 <banix> won’t work
14:26:29 <apuimedo> :/
14:26:39 <banix> new swarm is tied to Docker Overlay driver
14:26:50 <banix> they say this will be fixed in 1.13
14:26:52 <apuimedo> banix: it can't use any remote/ipam drivers?
14:26:58 <banix> no
14:27:02 <banix> as in 1.12
14:27:02 <vikasc> :D
14:27:06 <apuimedo> good Lord
14:27:10 <apuimedo> that is disappointing
14:27:17 <apuimedo> banix: do you know when 1.13 is coming?
14:27:23 <banix> if you use the new swarm; not sure if you can disable it
14:27:38 <apuimedo> #info Docker 1.12 swarm mode is not compatible with Kuryr, expected to be fixed on 1.13
14:28:08 <banix> not sure; in a couple of months? :)
14:28:28 <vikasc> apuimedo, would it make sense to add this under "limitations"?
14:28:33 <apuimedo> (屮゚Д゚)屮
14:28:42 <banix> i think so
14:28:50 <apuimedo> yes, please
14:28:52 <apuimedo> let's do that
14:28:57 <banix> sure
14:29:11 <apuimedo> #action banix to update *limitations* with the swarm issue
14:29:11 <vikasc> apuimedo, i can do this
14:29:15 <apuimedo> oops
14:29:18 <vikasc> np
14:29:37 <apuimedo> banix: vikasc: whoever feels like doing it. I put banix since he can maybe add the backstory links
14:29:48 <vikasc> sure
14:30:06 <apuimedo> Finally, I want to draw attention to the multitenancy story for kuryr-libnetwork
14:30:29 <apuimedo> banix: do I remember it properly that we do not receive any env var on the endpoint joining request?
14:31:03 <apuimedo> but we can on the other hand receive network operation options
14:31:11 <apuimedo> on network commands
14:31:19 <banix> yes
14:31:44 <banix> but the options are limited to network create, if my info is not outdated
14:32:05 <apuimedo> that's what I remember as well
14:32:15 <banix> which may be good enough
14:32:34 <banix> we had done some work around multi tenancy; let me revisit and get back to you all
14:32:38 <apuimedo> banix: what I was thinking was, that when users create the networks, they provide keystone auth info
14:33:10 <apuimedo> and then, when joining a network, we do not check, but match on Docker's network uuid
14:33:27 <apuimedo> we find the network that has the docker network uuid tag
14:33:38 <apuimedo> and create the port there, and bind
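The join-time lookup apuimedo describes can be sketched as: instead of checking credentials on join, find the Neutron network that carries the Docker network UUID as a tag and create the port there. The dict shapes below are simplified stand-ins, not actual Neutron API objects.

```python
# Illustrative sketch of matching on the Docker network UUID tag at
# endpoint-join time. Data shapes are simplified, not the Neutron API.

def find_tagged_network(networks, docker_net_id):
    """Return the Neutron network tagged with the Docker network UUID."""
    for net in networks:
        if docker_net_id in net.get('tags', []):
            return net
    return None

def build_port_request(networks, docker_net_id, mac_address):
    """Build a port-create request bound to the matched network."""
    net = find_tagged_network(networks, docker_net_id)
    if net is None:
        raise LookupError('no Neutron network tagged %s' % docker_net_id)
    return {'network_id': net['id'], 'mac_address': mac_address}
```

The Keystone auth provided at network-create time determines which tenant owns the network; the join path only needs the tag match.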
14:34:17 <banix> yes thats an option
14:34:21 <apuimedo> banix: please, check that workaround and let us know on #openstack-kuryr ;-)
14:34:25 <apuimedo> moving on
14:34:30 <apuimedo> #topic kubernetes: devstack
14:34:49 <apuimedo> #info apuimedo started WIP patch for kubernetes devstack
14:35:01 <apuimedo> #link https://review.openstack.org/371432
14:35:27 <apuimedo> it consists of running etcd3, kubernetes-api, kubernetes-controller-manager and kubernetes-scheduler all in docker containers
14:35:43 <apuimedo> raven, kubelet and CNI on the bare metal
14:36:05 <apuimedo> I still have some issues with run_process and docker run, but I think I'll be able to solve it soon enough
14:36:27 <apuimedo> after that, we can start having gates when we add new kubernetes integration code
14:36:37 <vikasc> apuimedo, thanks for the great effort!!!
14:36:46 <apuimedo> you're welcome
14:36:55 <devvesa> +1
14:36:58 <apuimedo> any question on the kubernetes devstack?
14:37:03 <apuimedo> devvesa: nice to see you
14:37:17 <devvesa> thanks. I've been late today
14:37:23 <apuimedo> no problem
14:37:30 <apuimedo> you made it in time for your section
14:37:42 <devvesa> So the -1 Workflow on the devstack patch means that it fails?
14:37:46 <apuimedo> yes
14:37:51 <vikasc> why raven in bm?
14:37:55 <apuimedo> it's for me to tell people that I need to fix stuff
14:38:00 <pablochacin> apuimedo: the usual concern about the future of hyperkube
14:38:05 <devvesa> I prefer a WIP on message commit
14:38:12 <apuimedo> vikasc: so that you can edit code while developing and you don't need to regenerate a container for it
14:38:16 <devvesa> because I understood that it was not meant to be merged
14:38:29 <vikasc> apuimedo, cool
14:38:30 <apuimedo> pablochacin: hyperkube doesn't seem to be going away, if it does, though
14:38:50 <apuimedo> we'll use an all in one binary
14:38:55 <devvesa> last time i downloaded the latest version released of the hyperkube container (1.3.7)
14:38:58 <apuimedo> I'll set up a job somewhere that compiles it
14:39:01 <apuimedo> and downloads it
14:39:02 <devvesa> so they kind of maintain it
14:39:16 <hongbin> apuimedo: i think you want to make the version of hyperkube configurable
14:39:19 <apuimedo> devvesa: both gcr.io and quay.io are maintaining their own versions of it
14:39:27 <apuimedo> hongbin: I made it configurable ;-)
14:39:41 <apuimedo> https://review.openstack.org/#/c/371432/3/devstack/settings
14:39:50 <apuimedo> the devstack design is
14:40:08 <hongbin> apuimedo: looks good
14:40:10 <apuimedo> you can enable/disable each service and point your local.conf to existing services if you have them
14:40:22 <apuimedo> and you can also specify etcd/hyperkube versions and origin
14:40:38 <apuimedo> (in case somebody prefers quay's over gcr's)
14:40:40 <apuimedo> for example
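The devstack design apuimedo outlines could look roughly like this in a local.conf. The variable names below are assumptions based on the description in the meeting, not necessarily the ones in the WIP patch:

```shell
# Illustrative local.conf fragment; variable names are guesses based on
# the design described above, not taken from the actual patch.
enable_service kubernetes-api kubernetes-controller-manager kubernetes-scheduler
disable_service etcd3                        # point at an existing etcd instead
ETCD_ENDPOINT=http://10.0.0.5:2379           # hypothetical external etcd
HYPERKUBE_VERSION=v1.3.7                     # pin the kube components version
HYPERKUBE_IMAGE=quay.io/coreos/hyperkube     # use quay's image instead of gcr's
```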
14:40:53 <irenab> apuimedo: it requires a bit of documentation, since too many options can be confusing
14:41:02 <apuimedo> irenab: good point
14:41:24 <apuimedo> #action apuimedo to make a local.conf.sample with a lot of comments that explain the options
14:41:28 <irenab> apuimedo: or blog :-)
14:41:32 <vikasc> apuimedo, +1
14:41:46 <apuimedo> irenab: if you can, review the patch reminding me to add that to local.conf.sample
14:41:50 <pablochacin> apuimedo: do you know what the magnum project is doing? they also need kubernetes, right?
14:41:54 <irenab> apuimedo: sure
14:42:03 <apuimedo> pablochacin: you have here the best person to answer that
14:42:06 <apuimedo> hongbin: ^^
14:42:12 <pablochacin> :-/
14:42:24 <vikasc> :)
14:42:42 <hongbin> magnum does something similar (pull docker images for the kube components)
14:43:05 <hongbin> kubelet is provided by the os
14:43:07 <apuimedo> Ideally, we should have a container registry in infra
14:43:21 <apuimedo> that can be used for jobs, kolla, official OpenStack containers, etc
14:43:29 <apuimedo> but I think that's not ready yet
14:43:53 <apuimedo> #topic py3 asyncio progress
14:44:00 <apuimedo> devvesa: pablochacin: any news on that front?
14:44:17 <devvesa> apuimedo: the expected ones this week: 0
14:44:30 <banix> by the way, i think we will have kuryr in kolla by the next release
14:44:44 <devvesa> sorry but we have been busy in other stuff
14:45:02 <irenab> banix: Kolla?
14:45:06 <apuimedo> devvesa: very well. That's perfectly understandable
14:45:10 <apuimedo> yes
14:45:13 <banix> yes
14:45:25 <apuimedo> I'll put an #info in open discussion about it :-)
14:45:37 <apuimedo> I should have done it in the kuryr-libnetwork topic
14:45:41 <apuimedo> my head is in the clouds
14:45:46 <banix> sorry for digression
14:45:52 <apuimedo> #topic kubernetes: py2/3 eventlet
14:46:23 <devvesa> apuimedo: sorry if you already said it, but... any news on the CNI driver?
14:46:44 <apuimedo> devvesa: nope, finishing devstack first, so I can have it gated
14:46:55 <apuimedo> #info ivc_ pushed a devref that designs a very different approach than the one apuimedo's devref was pushing for
14:46:56 <devvesa> perfectly understandable :)
14:47:06 <apuimedo> We had a videochat discussion (video available on demand)
14:47:35 <banix> link please
14:48:02 <apuimedo> #action devvesa vikasc pablochacin irenab to review https://review.openstack.org/#/c/369105/
14:48:56 <apuimedo> also, ivc_ started implementing one of its first pieces, the port provider
14:48:57 <ivc_> just a note, https://review.openstack.org/#/c/369105/ is wip and i'm hoping to update it soon with more recent information
14:49:17 <apuimedo> #link https://review.openstack.org/#/c/370284/
14:49:22 <apuimedo> thanks ivc_
14:49:33 <apuimedo> ivc_: any other news?
14:49:44 <ivc_> yup cleaning up other parts
14:50:01 <ivc_> and sort of refactoring the service/endpoints dependencies issue
14:50:13 <apuimedo> ivc_: can't wait to see about that in the devref
14:50:15 <apuimedo> :-)
14:50:29 <ivc_> actually there's no more dependency issue :)
14:50:45 <ivc_> so i'm removing all the junk code related to it now
14:51:26 <ivc_> got it sorted out by applying lbaas pool annotation to endpoints instead of service
14:51:27 <apuimedo> ivc_: now I'm even looking forward to seeing it more
14:51:50 <apuimedo> interesting
14:52:21 <irenab> ivc_: using lbaasV2?
14:52:27 <ivc_> irenab yes
14:52:31 <apuimedo> so until you see endpoint event with annotation, you send it to /dev/null
14:52:35 <irenab> great
14:52:50 <apuimedo> nice idea
14:52:50 <ivc_> apuimedo yup
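The filtering ivc_ and apuimedo agree on can be sketched as: ignore Endpoints events until the LBaaS pool annotation (applied to the Endpoints object rather than the Service) appears. The annotation key below is hypothetical, not the one used in ivc_'s actual patch.

```python
# Sketch of dropping Endpoints events until the lbaas pool annotation
# shows up on the Endpoints object. The annotation key is an assumed
# placeholder, not the real one.

LBAAS_POOL_ANNOTATION = 'openstack.org/kuryr-lbaas-pool'  # hypothetical key

def should_process(endpoints_event):
    """True once the Endpoints object carries the lbaas pool annotation."""
    annotations = endpoints_event.get('metadata', {}).get('annotations', {})
    return LBAAS_POOL_ANNOTATION in annotations
```

Annotating the Endpoints object directly removes the ordering dependency between Service and Endpoints events that the old code had to work around.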
14:53:12 <apuimedo> ivc_: but it pisses me off. Why didn't I think about doing it like that....
14:53:21 <apuimedo> anyway, let's move on
14:53:24 <ivc_> apuimedo ")
14:53:27 <apuimedo> #topic open discussion
14:54:24 <banix> here are related patches regarding Kuryr integration in Kolla: https://review.openstack.org/#/c/298894/  https://review.openstack.org/#/c/364662/  https://review.openstack.org/#/c/370382/
14:54:58 <apuimedo> #info huikang scored a big buzzer beater for Kuryr getting it inside Kolla's newton release just as the clock was expiring. I can't thank enough all the efforts from huikang, sdake, inc0 and Jeffrey
14:55:20 <banix> +1
14:55:23 <apuimedo> it will be very nice for kuryr-libnetwork development to be able to use kolla instead of devstack :-)
14:55:48 <vikasc> macvlan v/s ipvlan
14:56:11 <apuimedo> #info In order to have more consumer-ready kolla usage, we should have RPM/deb packages and add kolla code to consume those as well
14:56:21 <apuimedo> right
14:56:41 <apuimedo> limao_: vikasc: lmdaly please, explain about the macvlan/ipvlan. I gotta run
14:56:47 <apuimedo> #chair banix
14:56:48 <openstack> Current chairs: apuimedo banix
14:56:58 <apuimedo> banix: please finish the meeting
14:57:03 <banix> k
14:57:09 <irenab> 3 mins left
14:57:30 <apuimedo> #info limao made a very nice document manually testing the ipvlan proposal with macvlan https://lipingmao.github.io/2016/09/18/kuryr_macvlan_ipvlan_datapath_poc.html
14:57:36 <apuimedo> (even using ipam)
14:58:02 <vikasc> The only pro of ipvlan seems to be that it avoids the limit on the number of virtual interfaces before the parent interface enters promiscuous mode
14:58:53 <limao_> ipvlan needs a newer kernel version
14:58:55 <limao_> http://hicu.be/macvlan-vs-ipvlan
14:59:11 <limao_> a blog about pro and con
14:59:28 <hongbin> yes, i would say the newer kernel version requirement will cause problems
14:59:34 <irenab> limao_: thanks for sharing
14:59:37 <limao_> maybe we can consider supporting both ipvlan and macvlan
14:59:45 <vikasc> IMO, promiscuous mode should not be as major a concern as the newer kernel requirement
14:59:50 <banix> #info a blog about pro and con : http://hicu.be/macvlan-vs-ipvlan
15:00:07 <lmdaly> I see no issue with the possibility of introducing a macvlan option
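The trade-off discussed above boils down to a kernel-version check: ipvlan avoids pushing the parent NIC into promiscuous mode but needs a recent kernel, while macvlan works on older ones. The minimum version used below is an assumption for illustration only; the real floor should be checked before relying on it.

```python
# Sketch of choosing between ipvlan and macvlan based on kernel version.
# The (4, 2) floor is illustrative, not the verified ipvlan requirement.
import platform

def pick_vlan_driver(kernel_release=None, min_ipvlan=(4, 2)):
    """Choose ipvlan when the kernel is new enough, else fall back to macvlan."""
    release = kernel_release or platform.release()
    # '4.4.0-generic' -> (4, 4); ignore patch level and distro suffixes
    version = tuple(int(p) for p in release.split('-')[0].split('.')[:2])
    return 'ipvlan' if version >= min_ipvlan else 'macvlan'
```

Supporting both drivers, as limao_ suggests, would let deployments on older kernels keep macvlan while newer ones opt into ipvlan.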
15:00:14 <banix> running out of time; should perhaps end now and continue on kuryr channel if need be
15:00:19 <banix> #endmeeting