14:00:26 <apuimedo> #startmeeting kuryr
14:00:26 <openstack> Meeting started Mon Feb  6 14:00:26 2017 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:27 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:30 <openstack> The meeting name has been set to 'kuryr'
14:00:42 <apuimedo> Hello everybody and welcome to another Kuryr weekly IRC meeting
14:00:47 <apuimedo> Who's here for the meeting?
14:00:54 <ivc_> o7
14:00:59 <limao> o/
14:01:03 <alraddarla> o/
14:01:04 <yedongcan> o/
14:01:10 <mchiappero> o/
14:01:39 <irenab> hi
14:02:17 <apuimedo> good. Nice showing!
14:02:22 <apuimedo> #topic kuryr-lib
14:02:37 <ltomasbo> o/
14:03:22 <apuimedo> #info hongbin has sent a patch for fixing the kuryr-libnetwork devref https://review.openstack.org/#/c/426644/
14:03:30 <garyloug> o/
14:03:51 <apuimedo> As you can see, that patch is to openstack/kuryr, where the devref used to live. I think it is overdue that we move it to openstack/kuryr-libnetwork
14:04:00 <apuimedo> Does anybody oppose that?
14:04:13 <apuimedo> (my proposal is to merge this patch and then move the content)
14:04:57 <irenab> +1
14:05:18 <janonymous> o/
14:05:19 <limao> +1
14:05:21 <mchiappero> +1
14:05:34 <yedongcan> agree
14:06:02 <apuimedo> good
14:06:09 <apuimedo> anybody volunteers?
14:06:11 <apuimedo> :P
14:06:25 <mchiappero> -1
14:06:27 <mchiappero> :P
14:07:14 <alraddarla> I can do it
14:07:26 <apuimedo> great!
14:07:29 <apuimedo> thanks
14:08:04 <apuimedo> #action alraddarla to move the kuryr-libnetwork devref to openstack/kuryr-libnetwork
14:08:07 <apuimedo> good
14:08:49 <apuimedo> #action apuimedo, limao, irenab to review https://review.openstack.org/#/c/427533/
14:09:09 <apuimedo> Anything else about kuryr-lib?
14:10:17 <apuimedo> good. Moving on!
14:10:23 <apuimedo> #topic kuryr-libnetwork
14:12:03 <apuimedo> #info There's been some work on optimizing the subnetpool handling https://review.openstack.org/420610 and https://review.openstack.org/427923
14:12:09 <apuimedo> It's a really good thing
14:13:17 <apuimedo> There's a good number of patches to improve kuryr-libnetwork this week. I think that once the posted subnetpool and tag patches land we should cut a release and branch out Ocata
14:13:41 <apuimedo> #action apuimedo limao vikas and irenab to review https://review.openstack.org/#/q/project:openstack/kuryr-libnetwork
14:14:11 <apuimedo> Do you agree on having these in for branching ocata and cutting a release?
14:14:39 * apuimedo looking for feedback on things that we may need to wait for or things we should delay
14:14:40 <irenab> apuimedo: I am not sure, but it seems we may have missed the proper dates: http://git.net/ml/openstack-dev/2017-01/msg00317.html
14:15:20 <apuimedo> irenab: AFAIK we are not bound to official release dates since we are release-independent
14:15:36 <irenab> apuimedo: great, then its ok
14:15:50 <apuimedo> #action apuimedo to check with openstack/release if we can cut an ocata branch at a later date
14:16:18 <apuimedo> This, of course, would mark the first time that we cut a release branch and backport fixes
14:17:20 <apuimedo> If anybody wants to volunteer to handle the first line of reviews for kuryr-libnetwork backports, that would be great
14:17:36 <apuimedo> Anything else on kuryr-libnetwork?
14:18:49 <apuimedo> very well. Moving on!
14:18:58 <apuimedo> I can feel people are waiting for the coming section
14:19:03 <apuimedo> #topic kuryr-kubernetes
14:20:02 <apuimedo> #info the first patch for Kubernetes ClusterIP services support has been approved and is being merged https://review.openstack.org/#/c/427440/
14:20:58 <apuimedo> #info One or two more patches are still expected for having functional Kubernetes ClusterIP services backed by neutron-lbaasv2
14:21:21 <apuimedo> ivc_: can you describe a bit the remaining patches that are coming up?
14:21:44 <ivc_> apuimedo sure
14:21:59 <apuimedo> thanks
14:21:59 <ivc_> there are 2 parts left from https://review.openstack.org/#/c/376045/
14:22:11 <apuimedo> #link https://review.openstack.org/#/c/376045/
14:22:21 <ivc_> the driver and the Endpoints handler
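[A rough sketch of what the Endpoints handler has to do, for readers following along: keep the LBaaS v2 pool members in sync with the pod IPs listed in the Service's Endpoints object. This assumes python-neutronclient's LBaaS v2 calls and is illustrative only, not the actual kuryr-kubernetes handler code.]

    def sync_endpoints(neutron, pool_id, subnet_id, endpoint_ips, port):
        """Keep LBaaS v2 pool members in step with a Service's Endpoints.

        A minimal sketch: 'endpoint_ips' is the list of pod IPs taken from
        the Kubernetes Endpoints object; 'pool_id' and 'subnet_id' come from
        the loadbalancer created for the Service.
        """
        existing = {m['address']: m['id']
                    for m in neutron.list_lbaas_members(pool_id)['members']}
        # Add members for newly appeared pod IPs.
        for ip in endpoint_ips:
            if ip not in existing:
                neutron.create_lbaas_member(
                    pool_id,
                    {'member': {'address': ip,
                                'protocol_port': port,
                                'subnet_id': subnet_id}})
        # Drop members whose pods are gone.
        for ip, member_id in existing.items():
            if ip not in endpoint_ips:
                neutron.delete_lbaas_member(member_id, pool_id)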
14:23:26 <apuimedo> very well
14:23:35 <apuimedo> looking forward to them
14:23:35 <ivc_> for clarity it would probably make sense to keep them separate
14:23:40 <apuimedo> Agreed
14:23:44 <ivc_> to avoid really huge patches ofc
14:24:07 <apuimedo> yes, let's be benevolent towards reviewers :-)
14:24:09 <ivc_> but the problem with that is that you wont be able to verify the first patch
14:24:48 <apuimedo> of course
14:25:20 <apuimedo> it's the price to pay. Gerrit should allow merging a whole branch at once
14:25:32 <apuimedo> but oh well. Maybe some day
14:25:55 <irenab> ivc_: you can keep patches dependent
14:26:35 <apuimedo> #info ltomasbo has been driving work on resource pools with a spec https://review.openstack.org/427681 and a basic implementation of Port resource reutilization https://review.openstack.org/426687
14:26:38 <ivc_> irenab yes but what im saying is if i add driver first there's no code that would use it
14:26:44 <apuimedo> irenab: I meant more for testing and CI
14:27:45 <apuimedo> there was some discussion on the channel on the merit of the resource management approach between ivc_ and ltomasbo
14:28:02 <ltomasbo> apuimedo, I'm still working on that patch
14:28:14 <ivc_> ltomasbo the patch or the devref?
14:28:23 <ltomasbo> and I believe the advantage for the nested case will be larger
14:28:24 <apuimedo> ivc_: was arguing for an increased focus on reutilization of the already bound and set up ovs devices
14:28:36 <apuimedo> while the patch currently is optimizing for Neutron interactions
14:28:55 <ltomasbo> as the port is already plugged into the VM and attached as a subport, so it will only be a matter of linking the veth to the VM vNIC
14:29:01 * apuimedo trying to fill in onlookers, please correct if I misrepresented it
14:29:02 <ltomasbo> on the patch, I will also update the devref
14:29:19 <ivc_> i'm still confident we need to delay this optimisation until we get daemon/exec split
14:29:41 <apuimedo> ltomasbo: IIUC in the nested case, it will only be about creating a new vlan device (and then updating the subport name), is that right?
14:30:11 <ltomasbo> my idea is to have a pool of subports, already with their own vlan
14:30:25 <ltomasbo> and then it is just a matter of linking the container to an already available subport/vlan
14:30:38 <ltomasbo> so, it will be just changing subport name
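[A toy sketch of the subport pool ltomasbo describes: subports are pre-created and pre-attached to the VM's trunk with their own VLAN, so handing one to a container is just taking it from the pool. Names and data layout are illustrative, not the proposed patch.]

    import collections
    import threading

    class SubportPool(object):
        """Toy pool of pre-created trunk subports, one VLAN each.

        Subports here are plain dicts with 'port_id' and 'vlan_id'; the real
        driver would have already attached them as trunk subports.
        """

        def __init__(self, subports):
            self._lock = threading.Lock()
            self._available = collections.deque(subports)

        def acquire(self):
            # Taking a ready subport avoids a port create plus trunk attach
            # on the pod creation path.
            with self._lock:
                if not self._available:
                    raise RuntimeError('subport pool exhausted')
                return self._available.popleft()

        def release(self, subport):
            with self._lock:
                self._available.append(subport)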
14:30:40 <hongbin> o/
14:30:41 <ivc_> ltomasbo have you performed any benchmarks?
14:30:59 <ltomasbo> trying to, but I need a bigger machine
14:31:08 <apuimedo> ivc_: that may be. I wonder if even without the split we can reuse the bound devices and save the ovs agent dance
14:31:16 <ltomasbo> but I'll definitely do
14:31:32 <ivc_> ltomasbo it would be nice to have some profiling details before we start optimising
14:31:43 <irenab> ivc_: did you do some profiling?
14:31:46 <apuimedo> hongbin: nice to see you. We'll do the fuxi section after this one (thanks for the kuryr-libnetwork work, we already talked about it earlier)
14:31:50 <ivc_> irenab a bit
14:32:00 <ltomasbo> yes, but it will also depend on the concurrency of port creation
14:32:00 <hongbin> apuimedo: ack
14:32:04 <ivc_> on a devstack env running inside vm
14:32:28 <ltomasbo> we have measured delays of up to 10 seconds when creating several ports in parallel, around 10
14:32:30 <irenab> ivc_: so you see the RPC part is more meaningful than neutron api calls?
14:32:31 <apuimedo> which brings me to
14:32:53 <apuimedo> we should really get this in https://review.openstack.org/#/c/422946/
14:33:16 <ivc_> irenab from what i've seen the call to neutron's port_create takes much less time than the polling for status==ACTIVE
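[For context, the polling ivc_ refers to looks roughly like the following; each pod creation pays this wait on top of the port_create call itself. A sketch using python-neutronclient, not the kuryr code.]

    import time

    def wait_for_port_active(neutron, port_id, timeout=60, interval=1):
        """Poll Neutron until the port reports status ACTIVE (illustrative)."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            port = neutron.show_port(port_id)['port']
            if port['status'] == 'ACTIVE':
                return port
            time.sleep(interval)
        raise RuntimeError('port %s did not become ACTIVE in time' % port_id)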
14:33:38 <apuimedo> ivc_: Since I prefer you to work on the driver and handler service patches, mind if I take over that patch and address irenab's comments?
14:33:50 <ivc_> irenab but port_create also degrades as you increase the number of concurrent requests
14:34:03 <apuimedo> ivc_: it degrades a lot
14:34:12 <apuimedo> that's why batch creation will be useful
14:34:17 <ivc_> yup
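[The batch creation mentioned above builds on Neutron's bulk port create, i.e. a single POST carrying a list of ports instead of one request per port. A minimal sketch using a raw HTTP call; the endpoint URL and token handling are placeholders rather than how the patch under review does it.]

    import requests

    def bulk_create_ports(neutron_url, token, network_id, count):
        """Create several ports in one Neutron API call (bulk create)."""
        body = {'ports': [{'network_id': network_id,
                           'admin_state_up': True} for _ in range(count)]}
        resp = requests.post('%s/v2.0/ports' % neutron_url,
                             json=body,
                             headers={'X-Auth-Token': token})
        resp.raise_for_status()
        return resp.json()['ports']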
14:34:19 <irenab> sounds like both optimizations are useful
14:34:21 <ltomasbo> and even more for subports...
14:34:28 <ltomasbo> yep, I agree on that
14:35:08 <apuimedo> ltomasbo: I'd say even more for the non-vlan-aware-VMs pod-in-VM case; IIRC the worst performance was when doing a lot of calls to add allowed-address-pairs
14:35:24 <apuimedo> irenab: they are. I agree though that the order matters
14:35:31 <apuimedo> and we should be looking to perform the split soon
14:35:33 <ivc_> but optimising vif binding later will require a somewhat different approach than optimising port_create with pooling now
14:36:08 <ivc_> so my point is generally about not wasting time on the pool now and waiting until we can do both optimisations
14:36:15 <apuimedo> ivc_: I favor working on both parts of the spec
14:36:15 <ivc_> which requires the split
14:36:33 <irenab> I suggest finalizing the next optimization step over the devref that ltomasbo proposed
14:36:56 <ltomasbo> apuimedo, in pod in vms you need to call allowed-address-pairs, but in vlan-aware-vms you need to call attach_subport
14:36:56 <apuimedo> ivc_: I don't think working on the devref is wasting time
14:36:56 <ivc_> irenab yes, but the patch could wait
14:37:01 <ltomasbo> so, similar I assume
14:37:06 <apuimedo> and the patch is sort of a PoC
14:37:14 <ivc_> apuimedo irenab ltomasbo also i think that should not be a devref, but rather a spec
14:37:39 <irenab> ivc_: for split, optimization?
14:37:39 <ivc_> followed by bp
14:37:46 <apuimedo> ltomasbo: I'm more comfortable batching attach_subport than I am with allowed-address-pairs
14:37:46 <ivc_> irenab for optimisation
14:38:18 <ltomasbo> apuimedo, sure!
14:38:46 <irenab> apuimedo: ivc_ ltomasbo do you want to set up a chat to converge on the optimization?
14:39:38 <ivc_> irenab sure
14:39:45 <ltomasbo> ok
14:39:52 <apuimedo> I think we all agree on what needs to be done. It's a bit of a matter of setting priorities
14:40:05 <irenab> of course anyone else who is willing to join is more than welcome
14:40:06 <apuimedo> split before or split after
14:40:09 <apuimedo> basically
14:40:12 <apuimedo> I'll send an invite
14:40:27 <irenab> apuimedo: thanks
14:40:36 <ivc_> we had that planned long ago, i just did not expect someone to start working on it that early :)
14:40:57 <apuimedo> ivc_: people scratch their own itch when it itches
14:41:00 <apuimedo> :P
14:41:01 <irenab> let's just make sure there is alignment
14:41:08 <ivc_> true
14:41:23 <apuimedo> #action apuimedo to send an invite to a video meeting about resource management to mailing list
14:41:50 <apuimedo> changing topics a bit
14:42:02 <apuimedo> I've been researching external access to services
14:42:14 <apuimedo> particularly the load balancer type
14:42:54 <apuimedo> I'll send a proposal soon. The idea is to make ivc_'s service handler allocate a FIP and annotate the service with it
14:43:30 <apuimedo> and have the existing k8s openstack cloudprovider add an option to use kuryr networking, which means it will just wait for the annotation and report it
14:43:31 <irenab> apuimedo: fip for vip?
14:43:48 <apuimedo> irenab: fip, the vip is already handled by ivc_'s code
14:44:06 <irenab> allocate fip for vip?
14:44:15 <apuimedo> I've been trying to reach Angus Lees on #openstack-containers without success, I'll try email
14:44:23 <apuimedo> irenab: that's right
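[Roughly, the proposed handler step would be: create a floating IP bound to the load balancer's VIP port and record it on the Service so the cloud provider can report it. A sketch assuming python-neutronclient; the annotation key is made up for illustration.]

    def expose_service_with_fip(neutron, ext_net_id, vip_port_id):
        """Allocate a floating IP for the service VIP port (sketch only).

        'ext_net_id' and 'vip_port_id' are placeholders; the real handler
        would take them from configuration and from the LBaaS loadbalancer
        it created for the Service.
        """
        fip = neutron.create_floatingip(
            {'floatingip': {'floating_network_id': ext_net_id,
                            'port_id': vip_port_id}})['floatingip']
        # The handler would then annotate the Kubernetes Service with this
        # address so the cloud provider can expose it as the external IP.
        return {'openstack.org/kuryr-lb-fip': fip['floating_ip_address']}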
14:44:29 <irenab> apuimedo: try k8s slack
14:44:41 <apuimedo> irenab: right. I'll check if he's there
14:44:43 <apuimedo> thanks
14:44:48 <irenab> https://kubernetes.slack.com/messages/sig-openstack/
14:45:15 <apuimedo> I've also been investigating the openshift router pod, which gives routed access to services
14:45:38 <apuimedo> I think it gives a nice alternative (or at least some food for thought)
14:46:06 <apuimedo> basically what it does is have a pod that loadbalances the access to the endpoints
14:46:33 <apuimedo> this way (although not with openshift's impl) one can work around the udp limitations
14:46:35 <ivc_> apuimedo loadbalances how?
14:46:48 <apuimedo> ivc_: openshift uses haproxy, so it's a no for UDP
14:46:54 <apuimedo> but it checks the endpoints itself
14:47:07 <apuimedo> the one they have is just for http and https
14:47:15 <mchiappero> irenab: cannot access
14:47:18 <apuimedo> so you only need one fip for all the services
14:47:23 <apuimedo> it resolves based on service name
14:47:25 <apuimedo> fqdn
14:47:29 <ivc_> apuimedo that reminds me of haproxy we have in LBaaS
14:47:32 <apuimedo> which is interesting
14:47:42 <apuimedo> (and could be useful for ingress controllers)
14:47:55 <apuimedo> anyway, for udp I suppose you'd do something like ipvs on a pod
14:47:56 <irenab> mchiappero: you should register from here: http://slack.k8s.io/
14:48:10 <mchiappero> irenab: thanks
14:48:41 <irenab> sorry, need to leave
14:49:05 <apuimedo> irenab: thanks for joining
14:49:27 <apuimedo> ivc_: anyway, this was basically a food for thought
14:49:58 <apuimedo> since we may have users that don't want a full FIP for a service, and sharing load balancers brings some complications as we already discussed other times
14:49:58 <ivc_> apuimedo i'd love to see k8s driver in octavia
14:50:05 <apuimedo> ivc_: me too
14:50:17 <apuimedo> but AFAIK nobody is driving it yet
14:50:36 <apuimedo> ivc_: I'll try to inquire about that in Atlanta
14:50:47 <apuimedo> Anything else about kuryr-kubernetes?
14:51:14 <ivc_> apuimedo doesn't seem as if there's a lot of work to make that driver
14:51:31 <apuimedo> ivc_: with haproxy? Probably not
14:51:36 <ivc_> yep
14:51:44 <apuimedo> it's more about HOW
14:51:59 <ivc_> i'd say more about WHO :)
14:52:16 <apuimedo> in other words, I don't suppose they'd like to tie it to Kuryr, so...
14:52:57 <apuimedo> alright
14:52:58 <apuimedo> moving on
14:53:01 <apuimedo> #topic fuxi
14:53:07 <hongbin> hi
14:53:11 <apuimedo> hongbin: sorry for the short time
14:53:21 <apuimedo> #chair hongbin
14:53:21 <openstack> Current chairs: apuimedo hongbin
14:53:22 <hongbin> np , i don't have too much to update this week
14:53:43 <hongbin> just a note that the kubernetes-fuxi proposal has been approved: https://review.openstack.org/#/c/423791/
14:54:25 <hongbin> also, last week contributors submitted several fixes: https://review.openstack.org/#/q/project:openstack/fuxi
14:54:46 <hongbin> i personally mainly worked on kuryr-libnetwork, so i don't have much on fuxi from last week
14:54:56 <hongbin> apuimedo: ^^
14:54:59 <apuimedo> thanks again for the kuryr-libnetwork work
14:55:02 <apuimedo> :-)
14:55:11 <hongbin> my pleasure
14:55:13 <apuimedo> limao: please help me review the fuxi patches
14:55:26 <apuimedo> #topic general discussion
14:55:32 <limao> apuimedo: sure
14:55:55 <apuimedo> #info I will be posting the calendar for the VTG this week
14:56:12 <apuimedo> please check the mailing list and let me know if there are any incompatibilities
14:57:33 <alraddarla> Won't be able to join any of the PTG sessions....they are way too early in my time zone :P
14:58:19 <apuimedo> Please remember to vote for the sessions you want to see on https://etherpad.openstack.org/p/kuryr_virtual_gathering_2017h1
14:58:29 <apuimedo> alraddarla: you mean VTG
14:58:32 <apuimedo> :-)
14:58:37 <alraddarla> yes, sorry VTG*
14:58:39 <apuimedo> alraddarla: East coast?
14:58:40 <alraddarla> :)
14:58:47 <alraddarla> central
14:59:00 <apuimedo> like hongbin maybe
14:59:08 <apuimedo> ok, I'll move it to later time
14:59:16 <apuimedo> alraddarla: but do vote
14:59:18 <apuimedo> :-)
14:59:28 <apuimedo> Very well, if there is anything else, bring it up on the channel
14:59:37 <apuimedo> Thanks to all of you for joining in the meeting!
14:59:39 <apuimedo> #endmeeting