14:01:40 <apuimedo> #startmeeting kuryr
14:01:41 <openstack> Meeting started Mon Mar 13 14:01:40 2017 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:42 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:44 <openstack> The meeting name has been set to 'kuryr'
14:02:01 <apuimedo> Hello everybody and welcome to the weekly kuryr IRC meeting
14:02:09 <ltomasboz> o/
14:02:11 <apuimedo> Who's here?
14:02:17 <apuimedo> ltomasboz: what's with the 'z'?
14:02:21 <irenab_> hi
14:02:25 <apuimedo> are you luis' evil twin?
14:02:28 <alraddarla> o/
14:02:29 <hongbin> o/
14:02:31 <yedongcan> o/
14:02:37 <ltomasboz> oops, didn't realize that!
14:02:40 <garyloug> o/
14:03:17 <ltomasbo> :D
14:03:19 <ltomasbo> fixed!
14:03:44 <apuimedo> :-)
14:03:56 <apuimedo> alright. Let's get the party started
14:04:05 <apuimedo> #topic kuryr-libnetwork
14:04:38 <apuimedo> yedongcan: what's the status on the IPv6 support on the latest Docker releases? Is the right info making it into the ipam server?
14:05:22 <yedongcan> apuimedo: I haven't checked it
14:06:45 <irenab_> http://paste.openstack.org/show/602463/
14:06:59 <irenab_> this is what I got playing with yedongcan patch
14:07:26 <apuimedo> ok
14:07:40 <apuimedo> irenab_: version?
14:08:14 <irenab_> the one that comes with devstack
14:08:35 <apuimedo> irenab_: do you have it handy?
14:08:39 <apuimedo> to run docker --version?
14:08:55 <mchiappero> o/
14:08:55 <irenab_> do not have the setup currently, can update later
14:08:56 <apuimedo> anyway. irenab_: it seems it worked, right?
14:09:10 <apuimedo> ok
14:09:46 <irenab_> I had the 2 port issue, since it is fixed in a separate patch, but overall it seems the network and subnets were created properly
14:09:57 <apuimedo> very well
14:09:59 <apuimedo> :-)
14:10:07 <apuimedo> you probably got the Docker 17.x version then
14:10:13 <apuimedo> alright a +2 for me
14:10:38 <apuimedo> #action limao to approve and w+1 if he agrees https://review.openstack.org/#/c/442294/6
14:10:41 <irenab_> apuimedo: I will complete the verification and vote +2 as well
14:11:06 <apuimedo> irenab_: ah, perfect
14:11:10 <apuimedo> :-)
14:11:31 <apuimedo> irenab_: I think with the progress ipv6 support made in the last week, we are safe to mark it for release
14:11:41 <irenab_> apuimedo: I agree
14:11:45 <apuimedo> very well
14:12:01 <ivc_> o/
14:12:17 <apuimedo> #info IPv6 slated for inclusion in the upcoming kuryr-libnetwork release
14:12:27 <apuimedo> thanks for the hard work yedongcan and limao
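For reference, a minimal docker-py sketch of the kind of dual-stack verification discussed above. The network name and subnets are made-up examples, and it assumes 'kuryr' as both the libnetwork and IPAM driver name:

    import docker

    client = docker.from_env()

    # IPv4 and IPv6 pools served by the kuryr IPAM driver (example subnets).
    ipam = docker.types.IPAMConfig(
        driver='kuryr',
        pool_configs=[docker.types.IPAMPool(subnet='10.10.0.0/24'),
                      docker.types.IPAMPool(subnet='fd00:10::/64')])

    # kuryr-libnetwork should turn this into a Neutron network with both subnets.
    net = client.networks.create('dual-stack-test', driver='kuryr',
                                 enable_ipv6=True, ipam=ipam)
    print(net.attrs['IPAM']['Config'])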
14:12:39 <irenab_> there are also hongbin’s patches to include
14:12:46 <apuimedo> yes
14:13:12 <apuimedo> #action apuimedo: irenab_: limao: to review https://review.openstack.org/#/c/442525/
14:13:12 <hongbin> #link https://review.openstack.org/#/c/439932/
14:13:20 <apuimedo> irenab_: let's get to those
14:15:12 <apuimedo> ok, I put the dual stack port one up to merge
14:15:37 <hongbin> +1, the dual stack port one looks good to me as well
14:16:15 <irenab_> lets try to merge the fullstack patch as well
14:16:29 <apuimedo> #action apuimedo to review https://review.openstack.org/444661 https://review.openstack.org/444601 https://review.openstack.org/424889
14:16:37 <apuimedo> thanks a lot hongbin for all these patches
14:16:58 <hongbin> my pleasure
14:17:46 <apuimedo> anything else on kuryr-libnetwork?
14:18:57 <apuimedo> alright, let's get all these that were mentioned in and cut a release
14:19:35 <apuimedo> then we'll gather reqs for a possibly shorter Pike release (if the community decides to have a nominal Pike release)
14:19:46 <apuimedo> #topic fuxi
14:19:50 <apuimedo> #chair hongbin
14:19:50 <openstack> Current chairs: apuimedo hongbin
14:19:52 <hongbin> hi
14:19:56 <apuimedo> you have the floor, hongbin
14:19:57 <apuimedo> :-)
14:20:17 <hongbin> i don't have too much to update from last week; it seems the China team was working on fuxi-kubernetes
14:20:48 <apuimedo> :O
14:20:49 <apuimedo> nice
14:20:53 <apuimedo> any news on that front?
14:21:00 <hongbin> i relayed the VTG feedback to them, and it seems they have no problem adopting the advice on reusing kuryr-kubernetes
14:21:03 <apuimedo> did you discuss the VTG feedback?
14:21:09 <apuimedo> ah, good!
14:21:22 <hongbin> apuimedo: that is all from me
14:21:35 <apuimedo> hongbin: should we expect them to send the patch to kuryr-kubernetes to make storage handlers configurable?
14:21:53 <hongbin> apuimedo: yes, i think so
14:22:03 <apuimedo> very well :-)
14:22:07 <apuimedo> looking forward to that!
14:22:14 <apuimedo> #topic kuryr-kubernetes
14:22:44 <apuimedo> #info We set up three milestones and a Pike series for kuryr-kubernetes
14:23:13 <apuimedo> we're still release independent
14:23:59 <apuimedo> but we'll track the development better this way
14:24:21 <apuimedo> ltomasbo's port pooling bp was the first to be processed this way
14:24:41 <ltomasbo> https://blueprints.launchpad.net/kuryr-kubernetes/+spec/ports-pool
14:25:01 <apuimedo> #link https://blueprints.launchpad.net/kuryr-kubernetes/+spec/ports-pool
14:25:06 <irenab_> apuimedo: Can you please elaborate on 3 milestones?
14:25:27 <apuimedo> the typical pike1, pike2, pike3 that official projects follow
14:25:37 <irenab_> dates?
14:25:41 <apuimedo> to estimate for which milestone we expect features
14:26:07 <apuimedo> #link https://launchpad.net/kuryr-kubernetes/pike
14:26:09 <apuimedo> irenab_: ^^
14:26:49 <apuimedo> #info irenab updated the wiki with the contribution guidelines https://wiki.openstack.org/wiki/Kuryr#Kuryr_Policies
14:26:52 <apuimedo> thanks irenab_!
14:26:55 <hongbin> apuimedo: another alternative release model is cycle-with-intermediary; official projects can follow this model as well. the main difference is that there are no 3 milestones: you can release as often as you want, as long as there is a final release at the end of the cycle
14:27:19 <hongbin> apuimedo: just fyi
14:27:21 <apuimedo> hongbin: yes, that's actually closer to what we want
14:27:45 <apuimedo> we will make releases more or less when we decide to, but we'll try to come up with releases at the end of cycle
14:28:13 <apuimedo> so that they are requirements aligned, get announcement together, etc
14:28:23 <apuimedo> hongbin: thanks for bringing it up
14:28:28 <irenab_> apuimedo: related to wiki update, https://review.openstack.org/#/c/443598/
14:28:34 <apuimedo> oh!
14:28:36 <apuimedo> I missed it
14:28:56 <apuimedo> ivc_: please merge https://review.openstack.org/#/c/443598/1
14:28:59 <apuimedo> :-)
14:29:31 <irenab_> apuimedo: is the proposed release model also for kuryr-libnetwork?
14:30:02 <apuimedo> irenab_: not put into effect yet, we should vote on it
14:30:26 <apuimedo> it depends on the process I mentioned before of gathering goals for next release
14:30:29 <irenab_> it makes sense to keep similar models
14:30:35 <apuimedo> and see if we have something for pike timeline
14:30:50 <apuimedo> otherwise I'd propose we start on the queens release
14:31:54 <apuimedo> ivc_: mind if I update https://review.openstack.org/#/c/433359/ to have _ensure and the docstrings?
14:33:39 <ivc_> apuimedo i'll do that :) i was planning to update it when pushing the other patch (to have a link as irenab_ suggested)
14:33:40 <apuimedo> since it's the last blocking thing
14:33:50 <apuimedo> ah, ok, didn't realize that
14:34:00 <apuimedo> ivc_: let me know if I can lend a hand anyway
14:34:02 <apuimedo> ;-)
14:35:25 <ivc_> apuimedo i could use a time machine or a cloning vat
14:35:26 <irenab_> ltomasbo: can you please share the measurements you showed during VTG?
14:35:37 <apuimedo> ivc_: we can also merge https://review.openstack.org/#/c/442866/
14:35:42 <ltomasbo> sure
14:36:13 <apuimedo> ivc_: or the Dragon ball Hyperbolic Time Chamber
14:36:15 <apuimedo> :-)
14:36:48 <ltomasbo> I will add it to the card in the trello board
14:37:32 <ltomasbo> I would like to ask for reviews of the updated version of the devref
14:37:32 <apuimedo> ltomasbo: good
14:37:38 <ltomasbo> https://review.openstack.org/#/c/427681/
14:37:42 <apuimedo> ltomasbo: please, ask ask
14:37:44 <apuimedo> :-)
14:37:45 <ltomasbo> and the patch series
14:37:50 <ivc_> apuimedo about docker-py, i'm wondering if we might need it for fullstack/functional tests, but i'm ok with dropping it for now in any case
14:38:17 <apuimedo> ivc_: if we can't make do with client-python or kubectl we'll add it again
14:38:27 <apuimedo> ltomasbo: right
14:38:33 <apuimedo> that's the biggest todo
14:38:52 <apuimedo> #action apuimedo ivc_ irenab_: review https://review.openstack.org/#/q/status:open+project:openstack/kuryr-kubernetes+branch:master+topic:bp/ports-pool
14:38:59 <apuimedo> and the reworked spec
14:39:26 <ltomasbo> I'll be working on the port discovery based on what we discussed on the VTG
14:39:36 <ltomasbo> but I found a couple of issues I would like to discuss here
14:39:41 <apuimedo> good
14:39:45 <apuimedo> go ahead
14:40:06 <ltomasbo> we decided to check for device_owner to find kuryr:compute
14:40:22 <ltomasbo> and that will work for the baremetal case
14:40:30 <irenab_> ltomasbo: compute:kuryr
14:40:41 <ltomasbo> but how will we know about the nested one?
14:40:56 <ltomasbo> finding subports in the trunk is easy, but how to know which trunk ports belong to kuryr
14:41:05 <ltomasbo> and which ones do not
14:41:08 <irenab_> we know the trunk port, so all its subports should be kuryr owned
14:41:25 <ltomasbo> we know the trunk port of the kuryr-controller
14:41:34 <ltomasbo> not of the workers, right?
14:41:53 <apuimedo> ltomasbo: trunk ports should be tagged
14:41:55 <ivc_> ltomasbo why not the workers?
14:41:57 <irenab_> of each worker node
14:42:14 <ltomasbo> the trunk port is not created by kuryr
14:42:36 <ltomasbo> so that should be a requirement? the entity creating the trunk port should tag it as kuryr?
14:42:52 <irenab_> but it is the one that is specified in the config file, right?
14:43:10 <irenab_> not sure why tag is required?
14:43:44 <apuimedo> irenab_: you mean that kuryr controller will do a get for kubernetes nodes and find the trunks from that?
14:43:49 <ltomasbo> we have worker_node_subnets in the config
14:43:50 <ltomasbo> right?
14:43:55 <ltomasbo> not the worker nodes
14:43:57 <ivc_> ltomasbo we know the worker trunk port because that's how we know where to add sub-ports, don't we?
14:44:15 <apuimedo> ivc_: we do it on demand
14:44:33 <apuimedo> IIRC
14:44:45 <ltomasbo> ivc_, once kubernetes decides on the VM/trunk port
14:44:49 <ltomasbo> we do know where the pod is
14:44:49 <ivc_> apuimedo does it matter? we need to know where to add subport
14:45:01 <ltomasbo> but for the pool, we don't have the pod
14:45:29 <ltomasbo> we just need the ip of the VM to discover where to add the subport
14:45:45 <irenab_> ip of the VM == IP of the trunk port
14:45:51 <ltomasbo> yep
14:45:52 <ivc_> ltomasbo  ^^
14:45:54 <ltomasbo> that is the assumption
14:46:20 <ivc_> ltomasbo it is an assumption, but that's what the sub-port creation logic is based on
14:46:28 <ltomasbo> yes
14:46:34 <ivc_> ltomasbo so cleanup logic should follow the same
14:46:48 <ltomasbo> maybe I did not explain the problem very well
14:46:53 <ltomasbo> it's not a creation or deletion problem
14:47:05 <ltomasbo> the problem is when rebooting the kuryr-controller
14:47:15 <ltomasbo> and we need to discover the ports that we created for the pool
14:47:36 <ltomasbo> as we need to get them by asking neutron
14:47:48 <irenab_> so you have worker node IPs => get trunk ports => get subports
14:47:49 <ltomasbo> and the trunk ports are not created by kuryr
14:47:50 <ivc_> query IPs of the nodes and get related trunk ports?
14:48:16 <ltomasbo> so, you propose to include new config info with the worker nodes' IPs?
14:48:18 <apuimedo> ivc_: you mean using the GET on the nodes resource, right?
14:48:42 <irenab_> ltomasbo: node ips can be retrieved from k8s API server
14:48:44 <ltomasbo> we don't have info about the worker nodes IPs AFAIK for now
14:49:19 <apuimedo> ltomasbo: when you GET nodes in k8s API
14:49:21 <apuimedo> you get the IPs
14:49:23 <apuimedo> IIRC
14:49:28 <ltomasbo> irenab_, ok
14:49:30 <irenab_> correct
14:49:34 <ltomasbo> that should work
14:49:48 <ivc_> apuimedo that's one way to do it. there might be complications where you have mixed nodes (both baremetal and VMs) in the same cluster
14:49:51 <ltomasbo> I'll work on that! thanks!
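A rough sketch (not the actual kuryr-kubernetes code) of the discovery flow just agreed for the nested case: worker node IPs from the K8s API, then the trunks whose parent ports carry those IPs, then their subports. The auth URL and credentials below are placeholders:

    from kubernetes import client as k8s_client, config as k8s_config
    from keystoneauth1 import identity, session
    from neutronclient.v2_0 import client as neutron_client

    # Client setup; endpoints and credentials are placeholders.
    k8s_config.load_kube_config()
    core_v1 = k8s_client.CoreV1Api()
    auth = identity.Password(auth_url='http://controller:5000/v3',
                             username='admin', password='secret',
                             project_name='admin',
                             user_domain_id='default', project_domain_id='default')
    neutron = neutron_client.Client(session=session.Session(auth=auth))

    # 1. Worker node IPs come from the K8s API (GET /api/v1/nodes).
    node_ips = {addr.address
                for node in core_v1.list_node().items
                for addr in node.status.addresses
                if addr.type == 'InternalIP'}

    # 2. Trunks whose parent port carries one of those IPs belong to worker VMs.
    kuryr_subport_ids = set()
    for trunk in neutron.list_trunks()['trunks']:
        parent = neutron.show_port(trunk['port_id'])['port']
        parent_ips = {fip['ip_address'] for fip in parent['fixed_ips']}
        if parent_ips & node_ips:
            # 3. All subports on a worker's trunk are assumed to be kuryr-managed.
            kuryr_subport_ids.update(sp['port_id'] for sp in trunk['sub_ports'])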
14:49:57 <irenab_> we just assume that all nodes are managed by kuryr network
14:50:21 <ltomasbo> ivc_, the kuryr-controller driver is different for baremetal and nested
14:50:22 <irenab_> I think it's a legitimate assumption for now
14:50:47 <ivc_> ltomasbo it is now, but that does not mean it will be that way forever :)
14:51:07 <irenab_> driver for what?
14:51:09 <ltomasbo> :D
14:51:14 <ltomasbo> ivc_, true
14:51:27 <ltomasbo> on a related note, for the baremetal case
14:51:35 <ivc_> ltomasbo i'm pretty certain eventually we'll get to a driver that could support both (e.g. based on K8s Node annotation or smth)
14:51:52 <irenab_> vif driver?
14:52:07 <ltomasbo> when discovering pre-created ports, we can use the device_owner, but how do we know where they were bond (once the CNI pool is in place)
14:52:38 <ivc_> irenab_ might be a bit more complicated than just the vif driver
14:52:42 <ltomasbo> ivc_, agree
14:53:27 <irenab_> so a future driver; I just was not sure if I followed the discussion
14:53:37 <apuimedo> ltomasbo: bond?
14:53:53 <ltomasbo> bind
14:54:13 <ltomasbo> i.e., which server it is supposed to be bound to
14:54:22 <ivc_> ltomasbo it will probably depend on a tech used. the logic could be different for container-in-vm (where you could use K8s Node's IP to figure out the trunk port) and for bare-metal
14:55:00 <ltomasbo> ivc_, yes, definitely. I will use the K8s Node for the nested
14:55:05 <irenab_> neutron port should have host_id set
14:55:25 <ltomasbo> and, for now, until CNI pool is there, for the baremetal we don't need to discover where they are
14:55:39 <ivc_> irenab_ yes, but for bare-metal we could probably just use device_owner
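And a hedged sketch of the baremetal side of the same discovery: filter pre-created ports by device_owner ('compute:kuryr' is the value mentioned earlier, assumed here, not a confirmed constant) and, once a per-host CNI pool needs it, group them by binding:host_id. Credentials are placeholders again:

    from keystoneauth1 import identity, session
    from neutronclient.v2_0 import client as neutron_client

    auth = identity.Password(auth_url='http://controller:5000/v3',
                             username='admin', password='secret',
                             project_name='admin',
                             user_domain_id='default', project_domain_id='default')
    neutron = neutron_client.Client(session=session.Session(auth=auth))

    # Ports pre-created by kuryr are found via their device_owner.
    kuryr_ports = neutron.list_ports(device_owner='compute:kuryr')['ports']

    # binding:host_id (when populated) records where a port was bound, which is
    # what a per-host pool would need to rebuild its state after a restart.
    ports_by_host = {}
    for port in kuryr_ports:
        host = port.get('binding:host_id') or 'unbound'
        ports_by_host.setdefault(host, []).append(port['id'])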
14:56:27 <irenab_> why is the CNI pool changing it?
14:56:43 <ltomasbo> once CNI pool is there, the ports are also related to specific servers
14:56:49 <apuimedo> last two minutes of meeting
14:56:50 <ltomasbo> and we need to know where they were
14:56:51 <apuimedo> :-)
14:57:04 <ivc_> lets move that discussion to the channel
14:57:08 <ltomasbo> if there is no cni pool, we don't mind
14:57:14 <ltomasbo> sure
14:57:26 <apuimedo> very well
14:57:32 <apuimedo> Thank you all for joining!
14:57:43 <apuimedo> see you in #openstack-kuryr
14:57:47 <apuimedo> #endmeeting