14:00:29 #startmeeting kuryr
14:00:30 Meeting started Mon Dec 4 14:00:29 2017 UTC and is due to finish in 60 minutes. The chair is dmellado. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:31 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:33 The meeting name has been set to 'kuryr'
14:00:43 Hi folks, who's here today for the kuryr meeting? ;)
14:01:04 ltomasbo: dulek irenab ?
14:01:17 #chair ltomasbo dulek irenab
14:01:18 Current chairs: dmellado dulek irenab ltomasbo
14:01:27 hi
14:01:57 hi leyal ;)
14:02:22 #topic kuryr-kubernetes
14:02:33 o/
14:02:57 Anything on kuryr-kubernetes today? ;)
14:03:19 o/
14:04:38 o/
14:04:53 maybe I can just say something about the scale testing I did last week
14:04:56 so, from my side: updates! After having some issues, I'm making progress on the gating. I'll start sending patches and I'll be using the gerrit topic for them
14:05:04 so I'd appreciate reviews on them
14:05:08 go for it, ltomasbo
14:05:38 just wanted to say that performance was OK, I created a lot of containers with ODL as backend
14:05:57 and the results were similar to the ones apuimedo obtained before for OVN
14:06:18 but this time we tested with dulek's patches for containerized kuryr
14:06:44 also, recovering the pre-created ports (around 1100 ports) now takes just 7 seconds after the last modifications, instead of 10-15 minutes
14:06:52 so, that was nice to see!
14:07:03 awesome!
14:07:04 and that is pretty much it from my side
14:07:11 does anyone have anything else to share?
14:08:08 dmellado: libnetwork?
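[Editor's note] The ports-recovery speedup mentioned above (around 1100 pre-created ports onboarded in 7 seconds instead of 10-15 minutes) comes from replacing per-port Neutron round trips with one bulk listing that is grouped locally. A minimal Python sketch of that idea, with hypothetical names (`recover_pools`, the `list_ports` callable, the grouping key) that are illustrative rather than the actual kuryr-kubernetes code:

```python
from collections import defaultdict

def recover_pools(list_ports):
    """Rebuild ports pools from pre-created Neutron ports.

    `list_ports` is a hypothetical callable standing in for a single
    bulk Neutron "list ports" API call; one round trip plus local
    grouping replaces one API call per port.
    """
    pools = defaultdict(list)
    for port in list_ports():  # one API round trip for all ports
        if port.get("device_owner") != "compute:kuryr":
            continue  # not a kuryr pre-created port (assumed convention)
        # Group by the attributes that make ports interchangeable
        # (an assumption about what a pool key would look like).
        key = (port["project_id"],
               tuple(sorted(port["security_groups"])),
               port["binding:host_id"])
        pools[key].append(port["id"])
    return dict(pools)

# Usage sketch with fake data instead of a real Neutron client:
fake_ports = [
    {"id": "p1", "device_owner": "compute:kuryr", "project_id": "t1",
     "security_groups": ["sg1"], "binding:host_id": "node1"},
    {"id": "p2", "device_owner": "compute:kuryr", "project_id": "t1",
     "security_groups": ["sg1"], "binding:host_id": "node1"},
    {"id": "p3", "device_owner": "network:dhcp", "project_id": "t1",
     "security_groups": [], "binding:host_id": "node1"},
]
pools = recover_pools(lambda: fake_ports)
```

Here the two kuryr-owned ports land in one pool and the DHCP port is ignored; the cost stays one listing call no matter how many ports exist.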
14:08:24 #topic kuryr-libnetwork
14:08:30 hongbin: shoot ;)
14:08:33 just one thing from me
14:08:44 there is an SR-IOV patch that has been up for review
14:08:47 #link https://review.openstack.org/#/c/500436/
14:09:03 I have had another person verify that this patch is working
14:09:07 I'll take a look and also add a few 'stakeholders'
14:09:11 thanks hongbin
14:09:16 and there is a demo for it, appreciate reviews :)
14:09:26 dmellado: thanks, that is all from my side
14:09:48 hi, sorry for being late
14:10:00 #topic fuxi/fuxi-kubernetes
14:10:03 Hi irenab ;)
14:11:20 #topic Open Discussion
14:11:31 so, anything else to share, anyone? ;)
14:11:57 ltomasbo: maybe you can share a bit more detail on the scale testing?
14:12:26 irenab, sure
14:12:34 it was not as big as the one Toni did
14:12:46 but I tested with 8 compute nodes, with around 30 VMs
14:12:55 I'm also not sure we got any details on the one that Toni did
14:13:02 and made some tests with 100, 240, 480 and 800 pods
14:13:35 I basically tested the amount of time it takes to get all the pods up and running
14:13:37 so the objective is to check bulk creation/deletion?
14:13:47 with ports-pool
14:14:05 well, actually I already had the pools populated
14:14:21 so, the objective was also to double-check the time to actually onboard the existing ports into the pools
14:14:43 the case for the controller restart?
14:14:52 yep
14:15:16 or even installation; we have a playbook to install it and we pre-create the ports as part of the provisioning
14:15:29 and then the kuryr-controller will load them right away
14:15:55 I think we should consider making the pools driver the default one
14:16:01 but I actually tested it by killing the controller and seeing how much time it took
14:16:20 there is a huge difference with pools actually
14:16:23 ltomasbo: did you have any further discussion with dulek regarding the pool + cni daemon?
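[Editor's note] The ports-pool mechanism discussed above hands pods a pre-created Neutron port instead of making them wait on a port-create call, and recycles ports on pod deletion. A toy Python sketch of that behaviour, under the assumption of a simple pool per interchangeable-port group; the class and the `create_port` callback are hypothetical, not the actual kuryr-kubernetes driver API:

```python
import collections

class PortsPool:
    """Toy sketch of a ports pool: serve pods from pre-created ports,
    fall back to the slow path only when the pool runs dry."""

    def __init__(self, create_port):
        self._create_port = create_port  # slow path: a real Neutron call
        self._available = collections.deque()

    def populate(self, n):
        """Pre-create n ports up front (e.g. from an install playbook)."""
        for _ in range(n):
            self._available.append(self._create_port())

    def request_port(self):
        """Fast path on pod creation: reuse a pooled port if possible."""
        if self._available:
            return self._available.popleft()
        return self._create_port()

    def release_port(self, port):
        """On pod deletion, recycle the port instead of deleting it."""
        self._available.append(port)

# Usage sketch: count how many "Neutron calls" the pool saves.
calls = 0
def fake_create():
    global calls
    calls += 1
    return {"id": "port-%d" % calls}

pool = PortsPool(fake_create)
pool.populate(2)          # 2 create calls up front
p = pool.request_port()   # served from the pool, no new call
pool.release_port(p)      # recycled
pool.request_port()       # still served from the pool
```

After the two up-front creations, both pod requests are served without touching Neutron, which is why the restart/onboard numbers above differ so much with and without pools.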
14:16:40 and it will make neutron behave better too (as otherwise creating 900 ports is kind of like a DDoS on it)
14:17:00 irenab: Not really. I'm still exploring how this works and how it is supposed to work when the CNI daemon is choosing the VIF.
14:17:01 folks, I have a home emergency here, could you please take care of the meeting?
14:17:03 yeah, totally agree
14:17:07 #chair irenab
14:17:07 Current chairs: dmellado dulek irenab ltomasbo
14:17:09 thanks!
14:17:11 I got a bit stuck with the testing and other stuff last week, so no really in-depth discussions
14:17:35 irenab: I prefer to write a PoC myself first to understand the problems that can be in the design.
14:17:45 And it's going pretty slowly, unfortunately. :(
14:18:00 dulek: sure, it is not trivial
14:18:35 irenab: Yeah, even the k8s API is against me (500 error message, see #openstack-kubernetes).
14:19:32 dulek: tomorrow will be another day :-)
14:19:48 irenab: I hope to deal with that today. :D
14:20:01 any other topic to discuss?
14:20:25 short update
14:20:34 sent the first draft of the openshift L7 router: https://review.openstack.org/#/c/523900/ - appreciate reviews - got a review from irena :-)
14:20:50 Do you know if k8s native services (no kuryr backend, using kube-proxy & iptables) are HA'ed?
14:21:07 or is there a single point of failure at the VIP?
14:21:27 yboaron_: I think it is managed as a k8s service, so it should be sort of HA'ed
14:22:24 #action everyone please review https://review.openstack.org/#/c/523900/
14:22:25 OK, that means neutron LBaaS should be HA also, I guess
14:22:33 yboaron_: And it isn't?
14:22:41 yboaron_: Even if you have multiple controllers?
14:22:49 Octavia's default configuration is non-HA
14:23:37 so it is basically the Octavia configuration
14:23:40 yboaron_: Sure, so is any OpenStack service.
14:23:52 I'm talking about the amphora VMs
14:23:55 yboaron_: I'm talking about the possibility of running HA.
14:24:08 we have a single point of failure
14:24:17 yboaron_: Is it possible to run multiple amphora VMs?
14:24:42 yes, it's possible to run 2 if we set the octavia configuration
14:25:14 I think this is a must for production, but it would be too heavy for the default devstack
14:25:43 irenab: Yup, my thoughts as well.
14:25:53 I agree, it's worth adding it to our documentation
14:26:05 yboaron_: good idea
14:26:18 I'll take care of it
14:26:31 great, thanks
14:26:41 any other topic?
14:28:03 I guess we can save 30 mins to hack on more kuryr stuff
14:28:15 +1 on that :D
14:28:22 thank you all for joining
14:28:29 #endmeeting
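[Editor's note] The "run 2 amphora VMs" setting discussed above is Octavia's load balancer topology option: the default `SINGLE` boots one amphora per load balancer (the single point of failure mentioned), while `ACTIVE_STANDBY` boots an amphora pair with failover. A sketch of the relevant `octavia.conf` fragment; the file path is the typical one and may differ per deployment:

```ini
# /etc/octavia/octavia.conf (typical path; deployment-specific)
[controller_worker]
# Default is SINGLE: one amphora VM per load balancer.
# ACTIVE_STANDBY creates an amphora pair, removing the SPOF at the
# cost of an extra VM per load balancer (heavy for default devstack).
loadbalancer_topology = ACTIVE_STANDBY
```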