14:00:26 #startmeeting kuryr
14:00:26 Meeting started Mon Feb 6 14:00:26 2017 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:27 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:30 The meeting name has been set to 'kuryr'
14:00:42 Hello everybody and welcome to another Kuryr weekly IRC meeting
14:00:47 Who's here for the meeting?
14:00:54 o7
14:00:59 o/
14:01:03 o/
14:01:04 o/
14:01:10 o/
14:01:39 hi
14:02:17 good. Nice showing!
14:02:22 #topic kuryr-lib
14:02:37 o/
14:03:22 #info hongbin has sent a patch for fixing the kuryr-libnetwork devref https://review.openstack.org/#/c/426644/
14:03:30 o/
14:03:51 As you can see, that patch is to openstack/kuryr, where the devref used to live. I think it is overdue that we move it to openstack/kuryr-libnetwork
14:04:00 Does anybody oppose that?
14:04:13 (my proposal is to merge this patch and then move the content)
14:04:57 +1
14:05:18 o/
14:05:19 +1
14:05:21 +1
14:05:34 agree
14:06:02 good
14:06:09 any volunteers?
14:06:11 :P
14:06:25 -1
14:06:27 :P
14:07:14 I can do it
14:07:26 great!
14:07:29 thanks
14:08:04 #action alraddarla to move the kuryr-libnetwork devref to openstack/kuryr-libnetwork
14:08:07 good
14:08:49 #action apuimedo, limao, irenab to review https://review.openstack.org/#/c/427533/
14:09:09 Anything else about kuryr-lib?
14:10:17 good. Moving on!
14:10:23 #topic kuryr-libnetwork
14:12:03 #info There's been some work on optimizing the subnetpool handling https://review.openstack.org/420610 and https://review.openstack.org/427923
14:12:09 It's a really good thing
14:13:17 There's a good number of patches to improve kuryr-libnetwork this week. I think that after these subnetpool and tag patches that are posted we should cut a release and branch out Ocata
14:13:41 #action apuimedo limao vikas and irenab to review https://review.openstack.org/#/q/project:openstack/kuryr-libnetwork
14:14:11 Do you agree on having these in for branching Ocata and cutting a release?
14:14:39 * apuimedo looking for feedback on things that we may need to wait for or things we should delay
14:14:40 apuimedo: I am not sure, but it seems we may have missed the proper dates: http://git.net/ml/openstack-dev/2017-01/msg00317.html
14:15:20 irenab: AFAIK we are not bound to the official release dates since we are release-independent
14:15:36 apuimedo: great, then it's ok
14:15:50 #action apuimedo to check with openstack/release if we can cut an Ocata branch at a later date
14:16:18 This, of course, would mark the first time that we cut a release branch and backport fixes
14:17:20 If anybody wants to volunteer to handle the first line of reviews for kuryr-libnetwork backports, that would be great
14:17:36 Anything else on kuryr-libnetwork?
14:18:49 very well. Moving on!
14:18:58 I can feel people are waiting for the coming section
14:19:03 #topic kuryr-kubernetes
14:20:02 #info the first patch for Kubernetes ClusterIP services support has been approved and is being merged https://review.openstack.org/#/c/427440/
14:20:58 #info One or two more patches are still expected for having functional Kubernetes ClusterIP services backed by neutron-lbaasv2
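For readers following the ClusterIP work: a rough, illustrative sketch of how a ClusterIP service could be mapped onto neutron-lbaasv2 objects with python-neutronclient. This is not the kuryr-kubernetes driver code; the function, credentials, names and member handling are assumptions made for the example.

    # Illustrative sketch only: map a Kubernetes ClusterIP service onto
    # neutron-lbaasv2 objects (load balancer -> listener -> pool -> members).
    from neutronclient.v2_0 import client as neutron_client

    # Placeholder credentials; a real deployment would take these from config.
    neutron = neutron_client.Client(username='demo', password='secret',
                                    tenant_name='demo',
                                    auth_url='http://controller:5000/v2.0')

    def ensure_lb_for_service(service_name, cluster_ip, port, subnet_id, members):
        """Create a load balancer whose VIP is the service ClusterIP and
        add one member per endpoint address (hypothetical helper)."""
        lb = neutron.create_loadbalancer({'loadbalancer': {
            'name': 'svc-%s' % service_name,
            'vip_subnet_id': subnet_id,
            'vip_address': cluster_ip}})['loadbalancer']
        # A real driver waits for provisioning_status == ACTIVE between calls.
        listener = neutron.create_listener({'listener': {
            'name': 'svc-%s-listener' % service_name,
            'loadbalancer_id': lb['id'],
            'protocol': 'TCP',
            'protocol_port': port}})['listener']
        pool = neutron.create_lbaas_pool({'pool': {
            'name': 'svc-%s-pool' % service_name,
            'listener_id': listener['id'],
            'protocol': 'TCP',
            'lb_algorithm': 'ROUND_ROBIN'}})['pool']
        for address, target_port in members:
            neutron.create_lbaas_member(pool['id'], {'member': {
                'address': address,
                'protocol_port': target_port,
                'subnet_id': subnet_id}})
        return lb

A real driver also has to react to Endpoints changes and keep the member list in sync, which is what the remaining driver and Endpoints handler patches discussed next are about.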
14:21:21 ivc_: can you describe a bit the remaining patches that are coming up?
14:21:44 apuimedo sure
14:21:59 thanks
14:21:59 there are 2 parts left from https://review.openstack.org/#/c/376045/
14:22:11 #link https://review.openstack.org/#/c/376045/
14:22:21 the driver and the Endpoints handler
14:23:26 very well
14:23:35 looking forward to them
14:23:35 for clarity it would probably make sense to keep them separate
14:23:40 Agreed
14:23:44 to avoid really huge patches ofc
14:24:07 yes, let's be benevolent towards reviewers :-)
14:24:09 but the problem with that is that you won't be able to verify the first patch
14:24:48 of course
14:25:20 it's the price to pay. Gerrit should have whole-branch merge at once
14:25:32 but oh well. Maybe some day
14:25:55 ivc_: you can keep patches dependent
14:26:35 #info ltomasbo has been driving work on resource pools with a spec https://review.openstack.org/427681 and a basic implementation of Port resource reutilization https://review.openstack.org/426687
14:26:38 irenab yes, but what I'm saying is if I add the driver first there's no code that would use it
14:26:44 irenab: I meant more for testing and CI
14:27:45 there was some discussion on the channel on the merit of the resource management approach between ivc_ and ltomasbo
14:28:02 apuimedo, I'm still working on that patch
14:28:14 ltomasbo the patch or the devref?
14:28:23 and I believe the advantage for the nested case will be larger
14:28:24 ivc_ was arguing for an increased focus on reutilization of the already bound and set up OVS devices
14:28:36 while the patch currently is optimizing for Neutron interactions
14:28:55 as the port is plugged into the VM and attached as a subport, so it will only be a matter of linking the veth to the VM vNIC
14:29:01 * apuimedo trying to fill in onlookers, please correct if I misrepresented it
14:29:02 on the patch, I will also update the devref
14:29:19 I'm still confident we need to delay this optimisation until we get the daemon/exec split
14:29:41 ltomasbo: IIUC in the nested case, it will only be about creating a new vlan device (and then updating the subport name), is that right?
14:30:11 my idea is to have a pool of subports, already with their own vlan
14:30:25 and then it is just a matter of linking the container to an already available subport/vlan
14:30:38 so it will just be changing the subport name
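A minimal sketch of the pool idea ltomasbo describes above, assuming the vlan-aware-VMs (trunk) case and a recent python-neutronclient with the trunk helpers. The class, helper names and the way the free set is tracked are hypothetical, not the code from the reviews linked above.

    # Hypothetical subport pool: subports are created and attached to the VM's
    # trunk ahead of time, each with its own VLAN, so giving a port to a new
    # container is just picking a free subport and renaming it.
    import collections

    class SubportPool(object):
        def __init__(self, neutron, trunk_id):
            self._neutron = neutron          # python-neutronclient Client
            self._trunk_id = trunk_id
            self._available = collections.deque()   # (port_id, vlan_id)

        def populate(self, network_id, count, first_vlan=100):
            """Pre-create ports and attach them to the trunk as subports."""
            for i in range(count):
                vlan = first_vlan + i
                port = self._neutron.create_port(
                    {'port': {'network_id': network_id,
                              'name': 'pool-available'}})['port']
                self._neutron.trunk_add_subports(
                    self._trunk_id,
                    {'sub_ports': [{'port_id': port['id'],
                                    'segmentation_type': 'vlan',
                                    'segmentation_id': vlan}]})
                self._available.append((port['id'], vlan))

        def acquire(self, pod_name):
            """Hand an already attached subport to a pod: only a rename."""
            port_id, vlan = self._available.popleft()
            self._neutron.update_port(port_id, {'port': {'name': pod_name}})
            return port_id, vlan

        def release(self, port_id, vlan):
            """Return the subport to the pool instead of deleting it."""
            self._neutron.update_port(port_id,
                                      {'port': {'name': 'pool-available'}})
            self._available.append((port_id, vlan))

Whether pre-populating like this actually pays off is exactly what the benchmarking and profiling discussion that follows is about.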
14:30:40 o/
14:30:41 ltomasbo have you performed any benchmarks?
14:30:59 trying to, but I need a bigger machine
14:31:08 ivc_: that may be. I wonder if even without the split we can reuse the bound devices and save the ovs agent dance
14:31:16 but I'll definitely do them
14:31:32 ltomasbo it would be nice to have some profiling details before we start optimising
14:31:43 ivc_: did you do some profiling?
14:31:46 hongbin: nice to see you. We'll do the fuxi section after this one (thanks for the kuryr-libnetwork work, we already talked about it earlier)
14:31:50 irenab a bit
14:32:00 yes, but it will also depend on the concurrency of port creation
14:32:00 apuimedo: ack
14:32:04 on a devstack env running inside a VM
14:32:28 we have measured delays of up to 10 seconds when creating several ports in parallel, around 10 of them
14:32:30 ivc_: so you see the RPC part is more meaningful than the neutron API calls?
14:32:31 which brings me to
14:32:53 we should really get this in https://review.openstack.org/#/c/422946/
14:33:16 irenab from what I've seen the call to neutron's port_create takes much less time than the polling for status==ACTIVE
14:33:38 ivc_: Since I prefer you to work on the driver and handler service patches, mind if I take over that patch and address irenab's comments?
14:33:50 irenab but port_create also degrades as you increase the number of concurrent requests
14:34:03 ivc_: it degrades a lot
14:34:12 that's why batch creation will be useful
14:34:17 yup
14:34:19 sounds like both optimizations are useful
14:34:21 and even more for subports...
14:34:28 yep, I agree on that
14:35:08 ltomasbo: I'd say even more for the non vlan-aware-VMs pod-in-VM case, IIRC the worst perf was when you do a lot of calls to add allowed-address-pairs
14:35:24 irenab: they are. I agree though that the order matters
14:35:31 and we should be looking to perform the split soon
14:35:33 but optimising vif binding later will require a bit different approach than optimising port_create with pooling now
14:36:08 so my point is generally about not spending time on the pool now and waiting until we can do both optimisations
14:36:15 ivc_: I favor working on both parts of the spec
14:36:15 which requires the split
14:36:33 I suggest we finalize the next optimization step over the devref that ltomasbo proposed
14:36:56 apuimedo, in pod-in-VMs you need to call allowed-address-pairs, but in vlan-aware-VMs you need to call attach_subport
14:36:56 ivc_: I don't think working on the devref is wasting time
14:36:56 irenab yes, but the patch could wait
14:37:01 so, similar I assume
14:37:06 and the patch is sorta a PoC
14:37:14 apuimedo irenab ltomasbo also I think that should not be a devref, but rather a spec
14:37:39 ivc_: for the split, or the optimization?
14:37:39 followed by a bp
14:37:46 ltomasbo: I'm more comfortable batching attach_subport than I am with allowed-address-pairs
14:37:46 irenab for the optimisation
14:38:18 apuimedo, sure!
14:38:46 apuimedo: ivc_ ltomasbo do you want to set up some chat to converge on the optimization?
14:39:38 irenab sure
14:39:45 ok
14:39:52 I think we all agree on what needs to be done. It's a bit of a matter of setting priorities
14:40:05 of course anyone else who is willing to join is more than welcome
14:40:06 split before or split after
14:40:09 basically
14:40:12 I'll send an invite
14:40:27 apuimedo: thanks
14:40:36 we had that planned long ago, I just did not expect someone to start working on it that early :)
14:40:57 ivc_: people scratch their own itch when it itches
14:41:00 :P
14:41:01 let's just make sure there is alignment
14:41:08 true
14:41:23 #action apuimedo to send an invite to a video meeting about resource management to the mailing list
14:41:50 changing topics a bit
14:42:02 I've been researching external access to services
14:42:14 particularly the load balancer type
14:42:54 I'll send a proposal soon. The idea is to make ivc_'s service handler allocate a fip and annotate for it
14:43:30 and have the existing k8s openstack cloudprovider add an option to use kuryr networking, which means it will just wait for the annotation and report it
14:43:31 apuimedo: fip for vip?
14:43:48 irenab: fip, the vip is already handled by ivc_'s code
14:44:06 allocate fip for vip?
14:44:15 I've been trying to reach Angus Lees on #openstack-containers without success, I'll try email
14:44:23 irenab: that's right
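To make the proposal above a bit more concrete, here is a rough sketch of what the service-handler side could look like: allocate a floating IP pointing at the load balancer's VIP port and record it as an annotation on the Kubernetes Service. The annotation key, the helper name and the use of the official kubernetes Python client are assumptions for illustration, not the actual kuryr-kubernetes code.

    # Rough sketch, not actual kuryr-kubernetes code: allocate a FIP for the
    # LBaaS VIP port and publish it as a Service annotation that an external
    # cloud-provider integration could pick up.  The annotation key is made up.
    from kubernetes import client as k8s_client, config as k8s_config
    from neutronclient.v2_0 import client as neutron_client

    ANNOTATION = 'openstack.org/kuryr-lb-fip'   # hypothetical key

    def expose_service(neutron, service_name, namespace,
                       vip_port_id, public_net_id):
        # Create the floating IP and associate it with the VIP port.
        fip = neutron.create_floatingip(
            {'floatingip': {'floating_network_id': public_net_id,
                            'port_id': vip_port_id}})['floatingip']
        # Annotate the Service so whoever watches it (e.g. the external
        # cloud provider) can report the address in status.loadBalancer.
        k8s_config.load_kube_config()
        core = k8s_client.CoreV1Api()
        patch = {'metadata': {'annotations': {
            ANNOTATION: fip['floating_ip_address']}}}
        core.patch_namespaced_service(service_name, namespace, patch)
        return fip['floating_ip_address']

The point of going through an annotation is that the cloud-provider side only has to watch the Service object and report what it finds; it never needs to talk to Neutron itself.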
14:44:29 apuimedo: try k8s slack
14:44:41 irenab: right. I'll check if he's there
14:44:43 thanks
14:44:48 https://kubernetes.slack.com/messages/sig-openstack/
14:45:15 I've also been investigating the openshift router pod, which gives routed access to services
14:45:38 I think it gives a nice alternative (or at least some food for thought)
14:46:06 basically what it does is have a pod that load balances the access to the endpoints
14:46:33 this way (although not with openshift's impl) one can work around the UDP limitations
14:46:35 apuimedo load balances how?
14:46:48 ivc_: openshift uses haproxy, so it's a no for UDP
14:46:54 but it checks the endpoints itself
14:47:07 the one they have is just for http and https
14:47:15 irenab: cannot access
14:47:18 so you only need one fip for all the services
14:47:23 it resolves based on service name
14:47:25 fqdn
14:47:29 apuimedo that reminds me of the haproxy we have in LBaaS
14:47:32 which is interesting
14:47:42 (and could be useful for ingress controllers)
14:47:55 anyway, for UDP I suppose you'd do something like ipvs on a pod
14:47:56 mchiappero: you should register from here: http://slack.k8s.io/
14:48:10 irenab: thanks
14:48:41 sorry, need to leave
14:49:05 irenab: thanks for joining
14:49:27 ivc_: anyway, this was basically food for thought
14:49:58 since we may have users that don't want a full FIP for a service, and sharing load balancers brings some complications, as we already discussed other times
14:49:58 apuimedo I'd love to see a k8s driver in octavia
14:50:05 ivc_: me too
14:50:17 but AFAIK nobody is driving it yet
14:50:36 ivc_: I'll try to inquire about that in Atlanta
14:50:47 Anything else about kuryr-kubernetes?
14:51:14 apuimedo doesn't seem as if there's a lot of work to make that driver
14:51:31 ivc_: with haproxy? Probably not
14:51:36 yep
14:51:44 it's more about HOW
14:51:59 I'd say more about WHO :)
14:52:16 in other words, I don't suppose they'd like to tie it to Kuryr, so...
14:52:57 alright
14:52:58 moving on
14:53:01 #topic fuxi
14:53:07 hi
14:53:11 hongbin: sorry for the short time
14:53:21 #chair hongbin
14:53:21 Current chairs: apuimedo hongbin
14:53:22 np, I don't have too much to update this week
14:53:43 just a note that the kubernetes-fuxi proposal has been approved: https://review.openstack.org/#/c/423791/
14:54:25 also, last week contributors submitted several fixes: https://review.openstack.org/#/q/project:openstack/fuxi
14:54:46 I personally mainly worked on kuryr-libnetwork, so I don't have too much on fuxi last week
14:54:56 apuimedo: ^^
14:54:59 thanks again for the kuryr-libnetwork work
14:55:02 :-)
14:55:11 my pleasure
14:55:13 limao: please help me review the fuxi patches
14:55:26 #topic general discussion
14:55:32 apuimedo: sure
14:55:55 #info I will be posting the calendar for the VTG this week
14:56:12 please check the mailing list and let me know if there are any conflicts
14:57:33 Won't be able to join any of the PTG sessions... they are way too early in my time zone :P
14:58:19 Please remember to vote for the sessions you want to see on https://etherpad.openstack.org/p/kuryr_virtual_gathering_2017h1
14:58:29 alraddarla: you mean VTG
14:58:32 :-)
14:58:37 yes, sorry, VTG*
14:58:39 alraddarla: East coast?
14:58:40 :)
14:58:47 central
14:59:00 like hongbin maybe
14:59:08 ok, I'll move it to a later time
14:59:16 alraddarla: but do vote
14:59:18 :-)
14:59:28 Very well, if there is anything else, bring it up on the channel
14:59:37 Thanks to all of you for joining the meeting!
14:59:39 #endmeeting