14:01:30 #startmeeting kuryr
14:01:31 Meeting started Mon Sep 19 14:01:30 2016 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:33 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:35 The meeting name has been set to 'kuryr'
14:01:45 Hello! Who's here for the kuryr meeting?
14:01:50 o/
14:01:53 o/
14:01:54 o/
14:01:56 o/
14:02:21 o/
14:03:06 #info vikasc, limao_, tonanhngo, lmdaly, ivc_ and apuimedo present
14:03:15 Thank you for joining
14:03:23 #topic kuryr-lib
14:03:42 https://wiki.openstack.org/wiki/Meetings/Kuryr#Meeting_September_19th.2C_2016
14:04:02 #topic kuryr-lib: REST/rpc
14:04:34 #info vikasc has some patches for it already
14:04:38 apuimedo: will partially attend
14:05:02 lmdaly: I would like to understand if the approach vikasc is pushing works for the PoC you have
14:05:02 apuimedo, https://review.openstack.org/#/c/342624/
14:05:37 apuimedo, this is the first patch to be reviewed
14:05:44 o/
14:06:24 welcome janonymous
14:06:34 #link https://review.openstack.org/#/c/342624/
14:06:40 o/
14:06:47 welcome pablochacin
14:07:19 vikasc: lmdaly: we have to make it in a way that is configurable with both approaches, address pairs and trunk/subport
14:07:33 vikasc: did you introduce such a config option already?
14:07:43 apuimedo, not yet
14:08:38 apuimedo, I have introduced a configuration option to tell if it is baremetal or nested-containers
14:08:48 I'm aware
14:08:54 I think we can merge that one
14:09:13 apuimedo, it is like one more variable applicable with nested-containers
14:09:25 #action apuimedo to review and possibly merge https://review.openstack.org/362023
14:09:41 #action vikasc to add a second config patch for selecting the container-in-vm approach
14:10:14 the same setting should be used for both the host side and the VM side
14:10:15 apuimedo, so to answer your question: this new configuration option for the nested-containers case has not been added yet.
14:11:09 very well
14:11:11 apuimedo, by host side you mean on the master node?
14:11:12 we'll get to that
14:11:18 vikasc: right
14:11:31 apuimedo, yeah, right
14:11:42 o/
14:11:59 vikasc: possibly we can run more than one master node and have them HAed with corosync/pacemaker
14:12:07 welcome hongbin
14:12:09 apuimedo, in my current understanding the rpc patches will work for the ip/macvlan approach as well
14:12:21 good
14:12:36 #action apuimedo irenab to review all the rpc patches
14:12:54 #topic kuryr-lib: specs/devref
14:13:04 apuimedo, I need to rebase the rpc patches; I have only been updating the REST patches so far
14:13:26 apuimedo, will do
14:13:43 apuimedo, we should review the kuryr-lib repo patches first
14:13:46 #info after going through the mailing list for a week, I think the option that generates the largest consensus is to keep the specs on openstack/kuryr and to put devrefs close to the code, i.e. in the repo where the implementation will go
14:13:57 vikasc: absolutely
14:14:56 apuimedo, I will get the details of the HA scenario from you offline
14:15:13 vikasc: sure, we can discuss it later
14:15:17 #topic kuryr-libnetwork: credential selection
14:15:54 #info Keystone v3 support was merged, but Kuryr will still prefer cloud.yaml auth options over kuryr.conf, as it was doing up until now
14:16:04 banix brought it to my attention
14:16:21 and we should try reverting that preference in a way that doesn't break the fullstack/rally tests.
14:16:53 apuimedo: how is this done? I mean, where is this being done?
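To illustrate the kind of switch discussed in the kuryr-lib topic above (one option saying whether Kuryr runs on baremetal or in a nested-containers setup, plus the follow-up option for picking the container-in-vm mechanism), here is a minimal oslo.config sketch. All option, group and choice names (`deployment_type`, `nested_vif_driver`, `trunk-subport`, `macvlan`, `ipvlan`) are illustrative assumptions, not the names used in vikasc's actual patches.

```python
# Hypothetical sketch of the configuration options discussed above;
# names are assumptions, not the ones in the real kuryr-lib patches.
from oslo_config import cfg

deployment_opts = [
    cfg.StrOpt('deployment_type',
               default='baremetal',
               choices=['baremetal', 'nested-containers'],
               help='Whether Kuryr binds ports directly on baremetal hosts '
                    'or inside VMs that host the containers.'),
    cfg.StrOpt('nested_vif_driver',
               default='trunk-subport',
               choices=['trunk-subport', 'macvlan', 'ipvlan'],
               help='Only relevant for nested-containers: which mechanism '
                    'wires container interfaces into the VM port.'),
]

CONF = cfg.CONF
CONF.register_opts(deployment_opts, group='binding')


def is_nested():
    """Both the host (master) side and the VM side should read the same
    setting, as noted in the discussion above."""
    return CONF.binding.deployment_type == 'nested-containers'
```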
14:17:38 banix: let me link it
14:18:21 #link https://github.com/openstack/kuryr-libnetwork/blob/master/kuryr_libnetwork/controllers.py#L74-L76
14:18:34 first it goes to the cloud yaml in line 74
14:18:46 otherwise it goes to getting it from kuryr.conf in L76
14:18:58 it's possible that just swapping those lines fixes it
14:19:28 I see; now I remember; it was done for the gate job at the time, I think
14:20:01 I will try after the call and let you know; maybe we can fix it before the release?
14:20:49 very well
14:20:54 banix: that would be ideal
14:21:06 sure
14:21:11 #action banix to test swapping the lines
14:21:29 #info there is a new trello board to track work on Kuryr https://trello.com/b/1Ij919E8
14:21:49 #info committers to kuryr can ping me to be added as members to the trello team
14:22:18 #topic kuryr-libnetwork: release
14:22:56 #info apuimedo sent a patch to get the kuryr-lib requirement into requirements.txt but it was rejected due to the global requirements freeze
14:23:15 #action apuimedo to request an exception, which should be granted since nobody else depends on it.
14:24:08 #topic kuryr-libnetwork: container-in-vm
14:24:23 well, we covered most of it in the kuryr-lib topic
14:24:39 vikasc: anything to add that is kuryr-libnetwork specific?
14:25:02 apuimedo, I think that's pretty much all
14:25:07 very well
14:25:16 #topic kuryr-libnetwork: general
14:25:37 #info dongcanye got a patch merged for adding endpoint operation info
14:26:03 banix: have you tried kuryr with the new swarm?
14:26:20 won't work
14:26:29 :/
14:26:39 the new swarm is tied to the Docker overlay driver
14:26:50 they say this will be fixed in 1.13
14:26:52 banix: it can't use any remote/ipam drivers?
14:26:58 no
14:27:02 as in 1.12
14:27:02 :D
14:27:06 good Lord
14:27:10 that is disappointing
14:27:17 banix: do you know when 1.13 is coming?
14:27:23 if you use the new swarm; not sure if you can disable it
14:27:38 #info Docker 1.12 swarm mode is not compatible with Kuryr; expected to be fixed in 1.13
14:28:08 not sure; in a couple of months? :)
14:28:28 apuimedo, would it make sense to add this under "limitations"?
14:28:33 (屮゚Д゚)屮
14:28:42 I think so
14:28:50 yes, please
14:28:52 let's do that
14:28:57 sure
14:29:11 #action banix to update *limitations* with the swarm issue
14:29:11 apuimedo, I can do this
14:29:15 oops
14:29:18 np
14:29:37 banix: vikasc: whoever feels like doing it. I put banix since he can maybe add the backstory links
14:29:48 sure
14:30:06 Finally, I want to draw attention to the multitenancy story for kuryr-libnetwork
14:30:29 banix: do I remember it properly that we do not receive any env var on the endpoint joining request?
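As a rough illustration of the preference swap discussed above (prefer the kuryr.conf auth options and only fall back to the cloud.yaml/os-client-config path), here is a minimal Python sketch. The helper names and the exception are hypothetical stand-ins; the real logic lives around L74-L76 of kuryr_libnetwork/controllers.py and may be structured differently.

```python
# Hypothetical sketch of the desired credential-selection order. The loader
# callables and the exception are made up for illustration; they stand in for
# whatever controllers.py actually does around L74-L76.

class MissingAuthOptions(Exception):
    """Raised by a loader when its source carries no usable auth options."""


def get_neutron_client(load_from_kuryr_conf, load_from_cloud_config):
    """Return a Neutron client, preferring kuryr.conf over cloud.yaml."""
    try:
        return load_from_kuryr_conf()      # preferred after the swap
    except MissingAuthOptions:
        return load_from_cloud_config()    # previous first choice, now the fallback
```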
14:31:03 but we can, on the other hand, receive network operation options
14:31:11 on network commands
14:31:19 yes
14:31:44 but the options are limited to network create only, if my info is not outdated
14:32:05 that's what I remember as well
14:32:15 which may be good enough
14:32:34 we had done some work around multitenancy; let me revisit and get back to you all
14:32:38 banix: what I was thinking was that when users create the networks, they provide keystone auth info
14:33:10 and then, when joining a network, we do not check, but match on Docker's network uuid
14:33:27 we find the network that has the docker network uuid tag
14:33:38 and create the port there, and bind
14:34:17 yes, that's an option
14:34:21 banix: please, check that workaround and let us know on #openstack-kuryr ;-)
14:34:25 moving on
14:34:30 #topic kubernetes: devstack
14:34:49 #info apuimedo started a WIP patch for the kubernetes devstack
14:35:01 #link https://review.openstack.org/371432
14:35:27 it consists of running etcd3, kubernetes-api, kubernetes-controller-manager and kubernetes-scheduler all in docker containers
14:35:43 raven, kubelet and CNI on the bare metal
14:36:05 I still have some issues with run_process and docker run, but I think I'll be able to solve it soon enough
14:36:27 after that, we can start having gates when we add new kubernetes integration code
14:36:37 apuimedo, thanks for the great effort!!!
14:36:46 you're welcome
14:36:55 +1
14:36:58 any question on the kubernetes devstack?
14:37:03 devvesa: nice to see you
14:37:17 thanks. I've been late today
14:37:23 no problem
14:37:30 you made it in time for your section
14:37:42 So the -1 Workflow on the devstack patch means that it fails?
14:37:46 yes
14:37:51 why raven on bm?
14:37:55 it's for me to tell people that I need to fix stuff
14:38:00 apuimedo: the usual concern about the future of hyperkube
14:38:05 I prefer a WIP in the commit message
14:38:12 vikasc: so that you can edit code while developing and you don't need to regenerate a container for it
14:38:16 because I understood that it was not meant to be merged
14:38:29 apuimedo, cool
14:38:30 pablochacin: hyperkube doesn't seem to be going away; if it does, though,
14:38:50 we'll use an all-in-one binary
14:38:55 last time I downloaded the latest version released of the hyperkube container (1.3.7)
14:38:58 I'll set up a job somewhere that compiles it
14:39:01 and downloads it
14:39:02 so they kind of maintain it
14:39:16 apuimedo: I think you want to make the version of hyperkube configurable
14:39:19 devvesa: both gcr.io and quay.io are maintaining their own versions of it
14:39:27 hongbin: I made it configurable ;-)
14:39:41 https://review.openstack.org/#/c/371432/3/devstack/settings
14:39:50 the devstack design is
14:40:08 apuimedo: looks good
14:40:10 you can enable/disable each service and point your local.conf to existing services if you have them
14:40:22 and you can also specify etcd/hyperkube versions and origin
14:40:38 (in case somebody prefers quay's over gcr's,
14:40:40 for example)
14:40:53 apuimedo: it requires a bit of documentation, since too many options can be confusing
14:41:02 irenab: good point
14:41:24 #action apuimedo to make a local.conf.sample with a lot of comments that explain the options
14:41:28 apuimedo: or a blog :-)
14:41:32 apuimedo, +1
14:41:46 irenab: if you can, review the patch reminding me to add that to local.conf.sample
14:41:50 apuimedo: do you know what the magnum project is doing? they also need kubernetes, right?
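A minimal sketch of the multitenancy workaround discussed just before the kubernetes devstack topic: tag the Neutron network with Docker's network uuid at creation time, then, when an endpoint joins, look the network up by that tag and create the port there instead of re-checking credentials. The tag format and helper names below are assumptions for illustration, and the `tags` filter assumes Neutron's tag extension is enabled; only the python-neutronclient `list_networks`/`create_port` calls are standard.

```python
# Hypothetical sketch of matching on the Docker network uuid tag.
# The tag prefix and function names are made up for illustration.

DOCKER_NET_TAG_PREFIX = 'kuryr.net.uuid:'  # assumed tag format


def tag_for(docker_net_id):
    return DOCKER_NET_TAG_PREFIX + docker_net_id


def create_port_for_endpoint(neutron, docker_net_id, endpoint_id):
    """Find the Neutron network tagged with the Docker network uuid and
    create the endpoint's port on it, without re-validating credentials."""
    nets = neutron.list_networks(tags=tag_for(docker_net_id))['networks']
    if not nets:
        raise RuntimeError('No Neutron network tagged for %s' % docker_net_id)
    return neutron.create_port({'port': {
        'network_id': nets[0]['id'],
        'name': endpoint_id,
    }})['port']
```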
14:41:54 apuimedo: sure
14:42:03 pablochacin: you have here the best person to answer that
14:42:06 hongbin: ^^
14:42:12 :-/
14:42:24 :)
14:42:42 magnum does something similar (pulls docker images for the kube components)
14:43:05 kubelet is provided by the OS
14:43:07 Ideally, we should have a container registry in infra
14:43:21 that can be used for jobs, kolla, official OpenStack containers, etc.
14:43:29 but I think that's not ready yet
14:43:53 #topic py3 asyncio progress
14:44:00 devvesa: pablochacin: any news on that front?
14:44:17 apuimedo: the expected ones this week: 0
14:44:30 by the way, I think we will have kuryr in kolla by the next release
14:44:44 sorry, but we have been busy with other stuff
14:45:02 banix: Kolla?
14:45:06 devvesa: very well. That's perfectly understandable
14:45:10 yes
14:45:13 yes
14:45:25 I'll put an #info in open discussion about it :-)
14:45:37 I should have done it in the kuryr-libnetwork topic
14:45:41 my mind is elsewhere today
14:45:46 sorry for the digression
14:45:52 #topic kubernetes: py2/3 eventlet
14:46:23 apuimedo: sorry if you already said it, but... any news on the CNI plugin?
14:46:30 s/plugin/driver
14:46:44 devvesa: nope, finishing devstack first, so I can have it gated
14:46:55 #info ivc_ pushed a devref that designs a very different approach than the one apuimedo's devref was pushing for
14:46:56 perfectly understandable :)
14:47:06 We had a videochat discussion (video available on demand)
14:47:35 link please
14:48:02 #action devvesa vikasc pablochacin irenab to review https://review.openstack.org/#/c/369105/
14:48:56 also, ivc_ started implementing one of its first pieces, the port provider
14:48:57 just a note, https://review.openstack.org/#/c/369105/ is WIP and I'm hoping to update it soon with more recent information
14:49:17 #link https://review.openstack.org/#/c/370284/
14:49:22 thanks ivc_
14:49:33 ivc_: any other news?
14:49:44 yup, cleaning up other parts
14:50:01 and sort of refactoring the service/endpoints dependencies issue
14:50:13 ivc_: can't wait to see about that in the devref
14:50:15 :-)
14:50:29 actually there's no more dependency issue :)
14:50:45 so I'm removing all the junk code related to it now
14:51:26 got it sorted out by applying the lbaas pool annotation to the endpoints instead of the service
14:51:27 ivc_: now I'm looking forward to seeing it even more
14:51:50 interesting
14:52:21 ivc_: using lbaasV2?
14:52:27 irenab yes
14:52:31 so until you see an endpoints event with the annotation, you send it to /dev/null
14:52:35 great
14:52:50 nice idea
14:52:50 apuimedo yup
14:53:12 ivc_: but it pisses me off. Why didn't I think about doing it like that...
14:53:21 anyway, let's move on
14:53:24 apuimedo ")
14:53:27 #topic open discussion
14:54:24 here are the related patches regarding Kuryr integration in Kolla: https://review.openstack.org/#/c/298894/ https://review.openstack.org/#/c/364662/ https://review.openstack.org/#/c/370382/
14:54:58 #info huikang scored a big buzzer beater for Kuryr, getting it inside Kolla's newton release just as the clock was expiring. I can't thank enough all the efforts from huikang, sdake, inc0 and Jeffrey
14:55:20 +1
14:55:23 it will be very nice for kuryr-libnetwork development to be able to use kolla instead of devstack :-)
14:55:48 macvlan vs ipvlan
14:56:11 #info In order to have more consumer-ready kolla usage, we should have RPM/deb packages and add kolla code to consume those as well
14:56:21 right
14:56:41 limao_: vikasc: lmdaly: please, explain about the macvlan/ipvlan. I gotta run
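A rough sketch of the endpoints-annotation idea ivc_ describes above: once the service handler stores the LBaaS v2 pool as an annotation on the Kubernetes Endpoints object, the endpoints handler can simply drop ("send to /dev/null") any endpoints event that does not yet carry the annotation, which removes the ordering dependency between service and endpoints events. The annotation key and function names below are assumptions, not the ones in the actual devref or patches.

```python
# Hypothetical sketch of dropping endpoints events until the lbaas pool
# annotation shows up; names are illustrative, not from the real patches.

POOL_ANNOTATION = 'openstack.org/kuryr-lbaas-pool'  # made-up annotation key


def on_endpoints_event(endpoints):
    """Handle a k8s Endpoints event only once the service handler has
    annotated it with the LBaaS v2 pool; otherwise ignore it."""
    annotations = endpoints['metadata'].get('annotations', {})
    pool = annotations.get(POOL_ANNOTATION)
    if pool is None:
        return  # "/dev/null": a re-annotated event will arrive later
    _sync_lbaas_members(pool, endpoints.get('subsets', []))


def _sync_lbaas_members(pool, subsets):
    # Placeholder for reconciling pool members with the endpoints' addresses.
    pass
```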
14:56:47 #chair banix
14:56:48 Current chairs: apuimedo banix
14:56:58 banix: please finish the meeting
14:57:03 k
14:57:09 3 mins left
14:57:30 #info limao made a very nice document manually testing the ipvlan proposal with macvlan https://lipingmao.github.io/2016/09/18/kuryr_macvlan_ipvlan_datapath_poc.html
14:57:36 (even using ipam)
14:58:02 The only pro of using ipvlan seems to be avoiding the limit on the number of virtual interfaces before the interface enters promiscuous mode
14:58:53 ipvlan needs a newer kernel version
14:58:55 http://hicu.be/macvlan-vs-ipvlan
14:59:11 a blog about the pros and cons
14:59:28 yes, I would say the newer kernel version requirement will cause problems
14:59:34 limao_: thanks for sharing
14:59:37 maybe we can consider supporting both ipvlan and macvlan
14:59:45 IMO, promiscuous mode should not be as major a concern as the latest kernel requirement
14:59:50 #info a blog about the pros and cons: http://hicu.be/macvlan-vs-ipvlan
15:00:07 I see no issue with the possibility of introducing a macvlan option
15:00:14 running out of time; we should perhaps end now and continue on the kuryr channel if need be
15:00:19 #endmeeting
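For context on the datapath being compared above, the sketch below shows roughly how the two sub-interface types could be created on a VM's interface in a container-in-VM setup, similar in spirit to limao's manual PoC linked in the log. The interface names are made up and the Neutron/Docker IPAM side is omitted; the trade-off is that macvlan gives each sub-interface its own MAC (so the VM port needs allowed-address-pairs and the parent may hit MAC limits), while ipvlan reuses the parent's MAC but needs a newer kernel (roughly 3.19 or later).

```python
# Hypothetical sketch comparing the two sub-interface types on a VM NIC.
# Interface names and modes are assumptions for illustration only; the real
# PoC also drives Neutron and Docker/ipam, which is not shown here.
import subprocess


def sh(*cmd):
    subprocess.run(cmd, check=True)


def add_macvlan(parent='eth0', name='mv0'):
    # Each macvlan sub-interface gets its own MAC address, so the Neutron VM
    # port needs allowed-address-pairs (or similar) for the container MAC/IP.
    sh('ip', 'link', 'add', 'link', parent, 'name', name,
       'type', 'macvlan', 'mode', 'bridge')
    sh('ip', 'link', 'set', name, 'up')


def add_ipvlan(parent='eth0', name='ipvl0'):
    # ipvlan sub-interfaces share the parent's MAC (only the IP differs),
    # but ipvlan support requires a newer kernel (roughly 3.19+).
    sh('ip', 'link', 'add', 'link', parent, 'name', name,
       'type', 'ipvlan', 'mode', 'l2')
    sh('ip', 'link', 'set', name, 'up')
```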