14:01:40 #startmeeting kuryr
14:01:41 Meeting started Mon Mar 13 14:01:40 2017 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:42 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:44 The meeting name has been set to 'kuryr'
14:02:01 Hello everybody and welcome to the weekly kuryr IRC meeting
14:02:09 o/
14:02:11 Who's here?
14:02:17 ltomasboz: what's with the 'z'?
14:02:21 hi
14:02:25 are you luis' evil twin?
14:02:28 o/
14:02:29 o/
14:02:31 o/
14:02:37 upss, didn't realize about that!
14:02:40 o/
14:03:17 :D
14:03:19 fixed!
14:03:44 :-)
14:03:56 alright. Let's get the party started
14:04:05 #topic kuryr-libnetwork
14:04:38 yedongcan: what's the status on the IPv6 support on the latest Docker releases? Is the right info making it into the ipam server?
14:05:22 apuimedo: I haven't checked it
14:06:45 http://paste.openstack.org/show/602463/
14:06:59 this is what I got playing with yedongcan patch
14:07:26 ok
14:07:40 irenab_: version?
14:08:14 the one that comes with devstack
14:08:35 irenab_: do you have it handy?
14:08:39 to run docker --verison
14:08:41 *version?
14:08:55 o/
14:08:55 do not have the setup currently, can update later
14:08:56 anyway. irenab_: it seems it worked, right?
14:09:10 ok
14:09:46 I had the 2 port issue, since it is fixed in a separate patch, but overall it seems the network and subnets were created properly
14:09:57 very well
14:09:59 :-)
14:10:07 you probably got the Docker 17.x version then
14:10:13 alright a +2 for me
14:10:38 #action limao to approve and w+1 if he agrees https://review.openstack.org/#/c/442294/6
14:10:41 apuimedo: I will complete the verification and vote +2 as well
14:11:06 irenab_: ah, perfect
14:11:10 :-)
14:11:31 irenab_: I think with the progress this made in the last week, we are safe to mark this for release
14:11:41 s/this/ipv6 support/
14:11:41 apuimedo: I agree
14:11:45 very well
14:12:01 o/
14:12:17 #info IPv6 slated for inclusion in the upcoming kuryr-libnetwork release
14:12:27 thanks for the hard work yedongcan and limao
14:12:39 there are also hongbin's patches to include
14:12:46 yes
14:13:12 #action apuimedo: irenab_: limao: to review https://review.openstack.org/#/c/442525/
14:13:12 #link https://review.openstack.org/#/c/439932/
14:13:20 irenab_: let's get to those
14:15:12 ok, I put to merge the dual stack port one
14:15:37 +1, the dual stack port one looks good to me as well
14:16:15 lets try to merge the fullstack patch as well
14:16:29 #action apuimedo to review https://review.openstack.org/444661 https://review.openstack.org/444601 https://review.openstack.org/424889
14:16:37 thanks a lot hongbin for all these patches
14:16:58 my pleasure
14:17:46 anything else on kuryr-libnetwork?
14:18:57 alright, let's get all these that were mentioned merged in and cut a release
14:19:35 then we'll gather reqs for a maybe shorter pike release (if the community decides to have a nominal Pike release)
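For anyone reproducing irenab_'s dual-stack check above, a minimal sketch using docker-py (the same library whose fate is debated later in the meeting). The "kuryr" network and IPAM driver names match what kuryr-libnetwork registers, but the network name and subnet values are placeholders for illustration, not taken from the paste.

```python
import docker
from docker.types import IPAMConfig, IPAMPool

client = docker.from_env()
# Equivalent of the "docker --version" check discussed above; IPv6 IPAM
# behaviour differs between Docker 1.12/1.13 and the 17.x releases.
print(client.version()["Version"])

# Dual-stack network backed by the kuryr remote driver and kuryr IPAM.
ipam = IPAMConfig(
    driver="kuryr",
    pool_configs=[
        IPAMPool(subnet="10.10.0.0/24"),   # placeholder v4 subnet
        IPAMPool(subnet="fd00:10::/64"),   # placeholder v6 subnet
    ],
)
net = client.networks.create(
    "ipv6-smoke-test",                     # placeholder network name
    driver="kuryr",
    enable_ipv6=True,
    ipam=ipam,
)
# Success looks like one Neutron network carrying both a v4 and a v6
# subnet, which matches what irenab_ reported seeing.
print(net.attrs["EnableIPv6"], net.attrs["IPAM"])
```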
14:19:46 #topic fuxi
14:19:50 #chair hongbin
14:19:50 Current chairs: apuimedo hongbin
14:19:52 hi
14:19:56 you have the floor, hongbin
14:19:57 :-)
14:20:17 i don't have too much to update for last week, it seems the China team was working on fuxi-kubernetes
14:20:48 :O
14:20:49 nice
14:20:53 any news on that front?
14:21:00 i relayed the VTG feedback to them, and it seems they have no problem adopting the advice on reusing kuryr-kubernetes
14:21:03 did you discuss the VTG feedback?
14:21:09 ah, good!
14:21:22 apuimedo: that is all from me
14:21:35 hongbin: should we expect them to send the patch to kuryr-kubernetes to make storage handlers configurable?
14:21:53 apuimedo: yes, i think so
14:22:03 very well :-)
14:22:07 looking forward to that!
14:22:14 #topic kuryr-kubernetes
14:22:44 #info We set up three milestones and a Pike series for kuryr-kubernetes
14:23:13 we're still release independent
14:23:59 but we'll track the development better this way
14:24:21 ltomasbo's port pooling bp was the first to be processed this way
14:24:41 https://blueprints.launchpad.net/kuryr-kubernetes/+spec/ports-pool
14:25:01 #link https://blueprints.launchpad.net/kuryr-kubernetes/+spec/ports-pool
14:25:06 apuimedo: Can you please elaborate on 3 milestones?
14:25:27 the typical pike1, pike2, pike3 that official projects follow
14:25:37 dates?
14:25:41 to estimate for which milestone we expect features
14:26:07 #link https://launchpad.net/kuryr-kubernetes/pike
14:26:09 irenab_: ^^
14:26:49 #info irenab updated the wiki with the contribution guidelines https://wiki.openstack.org/wiki/Kuryr#Kuryr_Policies
14:26:52 thanks irenab_!
14:26:55 apuimedo: another alternative release model is cycle-with-intermediary, official projects could follow this model as well; the main difference is there are no 3 milestones, you can release as many as you want as long as there is a final release at the end of the cycle
14:27:19 apuimedo: just fyi
14:27:21 hongbin: yes, that's more accurate to what we want actually
14:27:45 we will make releases more or less when we decide to, but we'll try to come up with releases at the end of cycle
14:28:13 so that they are requirements aligned, get announcement together, etc
14:28:23 hongbin: thanks for bringing it up
14:28:28 apuimedo: related to wiki update, https://review.openstack.org/#/c/443598/
14:28:34 oh!
14:28:36 I missed it
14:28:56 ivc_: please merge https://review.openstack.org/#/c/443598/1
14:28:59 :-)
14:29:31 apuimedo: the proposed release model is also for kuryr-libnetwork?
14:30:02 irenab_: not put into effect yet, we should vote on it
14:30:26 it depends on the process I mentioned before of gathering goals for next release
14:30:29 it makes sense to keep similar models
14:30:35 and see if we have something for pike timeline
14:30:50 otherwise I'd propose we start on the queens release
14:31:54 ivc_: mind if I update https://review.openstack.org/#/c/433359/ to have _ensure and the docstrings?
14:33:39 apuimedo i'll do that :) was planning to update it once pushing the other patch (to have a link as irenab_ suggested)
14:33:40 since it's the last blocking thing
14:33:50 ah, ok, didn't realize that
14:34:00 ivc_: let me know if I can lend a hand anyway
14:34:02 ;_)
14:35:25 apuimedo i could use a time machine or a cloning vat
14:35:26 ltomasbo: can you please share the measurements you showed during VTG?
14:35:37 ivc_: we can also merge https://review.openstack.org/#/c/442866/
14:35:42 sure
14:36:13 ivc_: or the Dragon Ball Hyperbolic Time Chamber
14:36:15 :-)
14:36:48 I will add it to the card in the trello board
14:37:32 I would like to ask for reviews of the updated version of the devref
14:37:32 ltomasbo: good
14:37:38 https://review.openstack.org/#/c/427681/
14:37:42 ltomasbo: please, ask ask
14:37:44 :-)
14:37:45 and the patch series
14:37:50 apuimedo about docker-py, i'm thinking if we might need it for fullstack/functional tests. but i'm ok dropping it in any case for now
14:38:17 ivc_: if we can't make do with client-python or kubectl we'll add it again
14:38:27 ltomasbo: right
14:38:33 that's the biggest todo
14:38:52 #action apuimedo ivc_ irenab_: review https://review.openstack.org/#/q/status:open+project:openstack/kuryr-kubernetes+branch:master+topic:bp/ports-pool
14:38:59 and the reworked spec
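As context for the ports-pool series under review, a rough sketch of the batch pre-creation idea it revolves around, assuming python-neutronclient and Neutron's bulk port-create API. The credentials, network id, and batch size are placeholders; the patches linked above remain the authoritative implementation.

```python
from keystoneauth1 import identity, session
from neutronclient.v2_0 import client as neutron_client

# Placeholder credentials; any keystoneauth1 session works here.
sess = session.Session(auth=identity.Password(
    auth_url="http://controller:5000/v3", username="admin",
    password="secret", project_name="admin",
    user_domain_id="default", project_domain_id="default"))
neutron = neutron_client.Client(session=sess)

POD_NETWORK_ID = "..."  # hypothetical pod network UUID
BATCH_SIZE = 10

# One bulk request pre-creates a batch of ports so pod creation can take
# one from the pool instead of paying a per-pod Neutron round-trip.
# device_owner carries the compute:kuryr tag discussed just below.
body = {"ports": [{"network_id": POD_NETWORK_ID,
                   "device_owner": "compute:kuryr",
                   "admin_state_up": True}
                  for _ in range(BATCH_SIZE)]}
pool = neutron.create_port(body)["ports"]
print("pre-created %d ports" % len(pool))
```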
14:39:26 I'll be working on the port discovery based on what we discussed on the VTG
14:39:36 but I found a couple of issues I would like to discuss here
14:39:41 good
14:39:45 go ahead
14:40:06 we decided to check for device_owner to find kuryr:compute
14:40:22 and that will work for the baremetal case
14:40:30 ltomasbo: compute:kuryr
14:40:41 but how will we know about the nested one?
14:40:56 finding subports in the trunk is easy, but how to know which trunk ports belong to kuryr
14:41:05 and which ones do not
14:41:08 we know the trunk port, so all its subports should be kuryr owned
14:41:25 we now the trunk port of the kuryr-controller
14:41:34 not of the workers, right?
14:41:44 s/now/know
14:41:53 ltomasbo: trunk ports should be tagged
14:41:55 ltomasbo why not the workers?
14:41:57 of each worker node
14:42:14 the trunk port is not created by kuryr
14:42:36 so that should be a requirement? the entity creating the trunk port should tag it as kuryr?
14:42:52 but it is the one that is specified in the config file, right?
14:43:10 not sure why tag is required?
14:43:44 irenab_: you mean that kuryr controller will do a get for kubernetes nodes and find the trunks from that?
14:43:49 we have worker_node_subnets in the config
14:43:50 right?
14:43:55 not the worker nodes
14:43:57 ltomasbo we know worker trunk port because thats how we know where to add sub-ports, dont we?
14:44:15 ivc_: we do it on demand
14:44:33 IIRC
14:44:45 ivc_, once the kubernetes decide on the VM/trunk port
14:44:49 we do know where the pod is
14:44:49 apuimedo does it matter? we need to know where to add subport
14:45:01 but for the pool, we don't have the pod
14:45:29 we just need the ip of the VM to discover where to add the subport
14:45:45 ip of the VM == IP of the trunk port
14:45:51 yep
14:45:52 ltomasbo ^^
14:45:54 that is the assumption
14:46:20 ltomasbo it is assumption, but thats what the sub-port creation logic is based on
14:46:28 yes
14:46:34 ltomasbo so cleanup logic should follow the same
14:46:48 maybe I did not explain the problem very well
14:46:53 not creation, not deletion problem
14:47:05 the problem is when rebooting the kuryr-controller
14:47:15 and we need to discover the ports that we created for the pool
14:47:36 as we need to get them by asking neutron
14:47:48 so you have worker nodes IPs => get trunk ports => get sub ports
14:47:49 and the trunk ports are not created by kuryr
14:47:50 query IPs of the nodes and get related trunk ports?
14:48:16 so, you propose to include a new config info including the worker nodes ips?
14:48:18 ivc_: you mean using the GET on the nodes resource, right?
14:48:42 ltomasbo: node ips can be retrieved from k8s API server
14:48:44 we don't have info about the worker nodes IPs AFAIK for now
14:49:19 ltomasbo: when you GET nodes in k8s API
14:49:21 you get the IPs
14:49:23 IIRC
14:49:28 irenab_, ok
14:49:30 correct
14:49:34 that should work
14:49:48 apuimedo thats one way to do it. there might be complications where you have mixed nodes (both baremetal and vms) in the same cluster
14:49:51 I'll work on that! thanks!
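A sketch of the discovery flow ltomasbo just agreed to try: GET the nodes from the K8s API, collect their InternalIPs, and match them against trunk parent ports in Neutron. It assumes client-python and python-neutronclient with the trunk extension available; the keystone session is the same placeholder as in the earlier sketch.

```python
from keystoneauth1 import identity, session
from kubernetes import client as k8s, config as k8s_config
from neutronclient.v2_0 import client as neutron_client

k8s_config.load_kube_config()  # or load_incluster_config() inside a pod
v1 = k8s.CoreV1Api()

sess = session.Session(auth=identity.Password(
    auth_url="http://controller:5000/v3", username="admin",
    password="secret", project_name="admin",
    user_domain_id="default", project_domain_id="default"))
neutron = neutron_client.Client(session=sess)

# Worker node IPs straight from the K8s API, as irenab_ suggested,
# instead of adding a new worker-IP config option.
node_ips = {addr.address
            for node in v1.list_node().items
            for addr in node.status.addresses
            if addr.type == "InternalIP"}

# A trunk whose parent port carries a node IP belongs to a worker VM,
# so all of its subports can be treated as kuryr-managed.
kuryr_trunks = []
for trunk in neutron.list_trunks()["trunks"]:
    parent = neutron.show_port(trunk["port_id"])["port"]
    if {f["ip_address"] for f in parent["fixed_ips"]} & node_ips:
        kuryr_trunks.append(trunk)
print("found %d kuryr trunks" % len(kuryr_trunks))
```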
14:49:57 we just assume that all nodes are managed by kuryr network
14:50:21 ivc_, the kuryr-controller driver is different for baremetal and nested
14:50:22 I think it's a legitimate assumption for now
14:50:47 ltomasbo it is now, but does not mean it will be that way forever :)
14:51:07 driver for what?
14:51:09 :D
14:51:14 ivc_, true
14:51:27 on a related note, for the baremetal case
14:51:35 ltomasbo i'm pretty certain eventually we'll get to a driver that could support both (e.g. based on K8s Node annotation or smth)
14:51:52 vif driver?
14:52:07 when discovering pre-created ports, we can use the device_owner, but how do we know where they were bond (once the CNI pool is in place)
14:52:38 irenab_ might be a bit more complicated than just vif driver
14:52:42 ivc_, agree
14:53:27 so future driver, I just was not sure if I followed the discussion
14:53:37 ltomasbo: bond?
14:53:53 bind
14:54:13 what server is supposed to be?
14:54:22 ltomasbo it will probably depend on the tech used. the logic could be different for container-in-vm (where you could use K8s Node's IP to figure out the trunk port) and for bare-metal
14:55:00 ivc_, yes, definitely. I will use the K8s Node for the nested
14:55:05 neutron port should have host_id set
14:55:25 and, for now, until CNI pool is there, for the baremetal we don't need to discover where they are
14:55:39 irenab_ yes, but for bare-metal we could probably just use device_owner
14:56:27 why is the CNI pool changing it?
14:56:43 once CNI pool is there, the ports are also related to specific servers
14:56:49 last two minutes of meeting
14:56:50 and we need to know where they were
14:56:51 :-)
14:57:04 lets move that discussion to the channel
14:57:08 if there is no cni pool, we don't mind
14:57:14 sure
14:57:26 very well
14:57:32 Thank you all for joining!
14:57:43 see you in #openstack-kuryr
14:57:47 #endmeeting
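On the point that moved to the channel, recovering pool ports after a kuryr-controller restart, a sketch of the bare-metal side using the device_owner filter discussed above. binding:host_id reflects irenab_'s host_id suggestion and is typically only visible to privileged callers; the session values are placeholders as before.

```python
from keystoneauth1 import identity, session
from neutronclient.v2_0 import client as neutron_client

sess = session.Session(auth=identity.Password(
    auth_url="http://controller:5000/v3", username="admin",
    password="secret", project_name="admin",
    user_domain_id="default", project_domain_id="default"))
neutron = neutron_client.Client(session=sess)

# After a controller reboot, kuryr-created ports are rediscovered by the
# device_owner tag; binding:host_id says which server a bound port ended
# up on, which starts to matter once the CNI-side pool exists.
ports = neutron.list_ports(device_owner="compute:kuryr")["ports"]
pools = {}
for port in ports:
    host = port.get("binding:host_id") or "unbound"
    pools.setdefault(host, []).append(port["id"])
for host, ids in pools.items():
    print("%s: %d pooled ports" % (host, len(ids)))
```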