14:00:29 <dmellado> #startmeeting kuryr
14:00:30 <openstack> Meeting started Mon Dec  4 14:00:29 2017 UTC and is due to finish in 60 minutes.  The chair is dmellado. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:31 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:33 <openstack> The meeting name has been set to 'kuryr'
14:00:43 <dmellado> Hi folks, who's here today for the kuryr meeting? ;)
14:01:04 <dmellado> ltomasbo: dulek irenab ?
14:01:17 <dmellado> #chair ltomasbo dulek irenab
14:01:18 <openstack> Current chairs: dmellado dulek irenab ltomasbo
14:01:27 <leyal> hi
14:01:57 <dmellado> hi leyal ;)
14:02:22 <dmellado> #topic kuryr-kubernetes
14:02:33 <ltomasbo> o/
14:02:57 <dmellado> Anything on kuryr-kubernetes today? ;)
14:03:19 <yboaron_> o/
14:04:38 <hongbin> o/
14:04:53 <ltomasbo> maybe I can just say something about the scale testing I did last week
14:04:56 <dmellado> so, from my side: updates! After some issues, I'm making progress on the gating; I'll start sending patches and I'll be using the gerrit topic for it
14:05:04 <dmellado> so I'd appreciate reviews on them
14:05:08 <dmellado> go for it ltomasbo
14:05:38 <ltomasbo> just wanted to say that performance was ok, I created a lot of containers with ODL as backend
14:05:57 <ltomasbo> and the results were similar to the ones apuimedo obtained before for OVN
14:06:18 <ltomasbo> but this time we tested with dulek's patches regarding kuryr-containerized
14:06:44 <ltomasbo> also, recovering the pre-created ports (around 1100 ports) took just 7 seconds instead of 10-15 minutes after the latest modifications
14:06:52 <ltomasbo> so, that was nice to see!
14:07:03 <dmellado> awesome!
14:07:04 <ltomasbo> and that is pretty much it from my side
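As a rough sketch of the kind of bulk-pod timing test ltomasbo describes, using the official kubernetes Python client; the namespace, replica count, and helper name are illustrative, not his actual harness:

    import time
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    def wait_all_running(namespace, expected, poll=5):
        # Poll until `expected` pods report phase Running; return elapsed seconds.
        start = time.time()
        while True:
            pods = v1.list_namespaced_pod(namespace).items
            running = sum(1 for p in pods if p.status.phase == 'Running')
            if running >= expected:
                return time.time() - start
            time.sleep(poll)

    # e.g. after scaling a test deployment to 800 replicas:
    print('all pods running after %.1fs' % wait_all_running('default', 800))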
14:07:11 <dmellado> does anyone have anything else to share?
14:08:08 <hongbin> dmellado: libnetwork?
14:08:24 <dmellado> #topic kuryr-libnetwork
14:08:30 <dmellado> hongbin: shoot ;)
14:08:33 <hongbin> just one thing from me
14:08:44 <hongbin> there is an SR-IOV patch up for review
14:08:47 <hongbin> #link https://review.openstack.org/#/c/500436/
14:09:03 <hongbin> i've had another person verify this patch is working
14:09:07 <dmellado> I'll take a look and also add a few 'stakeholders'
14:09:11 <dmellado> thanks hongbin
14:09:16 <hongbin> and there is a demo for it, appreciate reviews :)
14:09:26 <hongbin> dmellado: thanks, that is all from my side
14:09:48 <irenab> hi, sorry for being late
14:10:00 <dmellado> #topic fuxi/fuxi-kubernetes
14:10:03 <dmellado> Hi irenab ;)
14:11:20 <dmellado> #topic Open Discussion
14:11:31 <dmellado> so, anything else to share, anyone? ;)
14:11:57 <irenab> ltomasbo: maybe you can share a bit more detail on the scale testing?
14:12:26 <ltomasbo> irenab, sure
14:12:34 <ltomasbo> it was not as big as the one Toni did
14:12:46 <ltomasbo> but I tested with 8 compute nodes, on around 30 VMs
14:12:55 <irenab> I'm also not sure we got any details on the one that Toni did
14:13:02 <ltomasbo> and ran some tests with 100, 240, 480 and 800 pods
14:13:35 <ltomasbo> I basically tested the amount of time it takes to get all the pods up and running
14:13:37 <irenab> so the objective is to check bulk creation/deletion?
14:13:47 <ltomasbo> with ports-pool
14:14:05 <ltomasbo> well, actually I already had the pools populated
14:14:21 <ltomasbo> so, the objective was also to double check the time to actually onboard the existing ports into the pools
14:14:43 <irenab> the case for the controller restart?
14:14:52 <ltomasbo> yep
14:15:16 <ltomasbo> or even installation, we have a playbook to install it and we precreate the ports as part of the provisioning
14:15:29 <ltomasbo> and then the kuryr-controller will load them right away
14:15:55 <irenab> I think we should consider making the pools driver the default one
14:16:01 <ltomasbo> but I actually tested it by killing the controller and seeing how much time it took
14:16:20 <ltomasbo> there is a huge difference with pools actually
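For context on making pools the default: the pooled drivers are currently opt-in. A sketch of the switch in kuryr.conf, assuming the option and section names from the kuryr-kubernetes ports-pool documentation ('noop' being the default irenab is arguing against):

    [kubernetes]
    # 'noop' is the current default; 'nested' pools subports for pods in VMs,
    # 'neutron' pools ports for bare-metal deployments.
    vif_pool_driver = nested

    [vif_pool]
    ports_pool_min = 5
    ports_pool_max = 0
    ports_pool_batch = 10
    ports_pool_update_frequency = 20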
14:16:23 <irenab> ltomasbo: did you have any further discussion with dulek regarding the pool + cni daemon?
14:16:40 <ltomasbo> and it will make neutron behave better too (as otherwise creating 900 ports at once is kind of like a DDoS on it)
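A minimal sketch of why pool recovery scales: grouping the pre-created ports with one bulk Neutron query, assuming openstacksdk and kuryr's 'compute:kuryr' device_owner convention; the real kuryr-kubernetes pool-recovery code differs in detail:

    import collections
    import openstack

    conn = openstack.connect(cloud='devstack-admin')

    # One bulk list call instead of ~1100 per-port requests is what turns
    # minutes of Neutron round-trips into a few seconds.
    pools = collections.defaultdict(list)
    for port in conn.network.ports(device_owner='compute:kuryr'):
        key = (port.binding_host_id, port.project_id,
               tuple(sorted(port.security_group_ids or [])))
        pools[key].append(port.id)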
14:17:00 <dulek> irenab: Not really. I'm still exploring how this is working and how it is supposed to work when CNI daemon is choosing the VIF.
14:17:01 <dmellado> folks, I've a home emergency here, could you please take care of the meeting?
14:17:03 <irenab> yea, totally agree
14:17:07 <dmellado> #chair irenab
14:17:07 <openstack> Current chairs: dmellado dulek irenab ltomasbo
14:17:09 <dmellado> thanks!
14:17:11 <ltomasbo> I got a bit stuck with the testing and other stuff last week, so no really in-depth discussions
14:17:35 <dulek> irenab: I prefer to write a PoC myself first to understand the problems that can be in the design.
14:17:45 <dulek> And it's going pretty slowly unfortunately. :(
14:18:00 <irenab> dulek: sure, it is not trivial
14:18:35 <dulek> irenab: Yeah, even k8s API is against me (500 error message on #openstack-kubernetes).
14:19:32 <irenab> dulek: tomorrow will be another day :-)
14:19:48 <dulek> irenab: I hope to deal with that today. :D
14:20:01 <irenab> any other topic to discuss?
14:20:25 <yboaron_> short update
14:20:34 <yboaron_> sent a first draft of the OpenShift L7 router https://review.openstack.org/#/c/523900/ - I'd appreciate reviews - already got a review from irena :-)
14:20:50 <yboaron_> Do you know if k8s native services (no kuryr backend, using kube-proxy & iptables) are HA'ed?
14:21:07 <yboaron_> or is there a single point of failure at the VIP?
14:21:27 <irenab> yboaron_: I think it is managed as a k8s service, so it should be sort of HA'ed
14:22:24 <irenab> #action everyone please review https://review.openstack.org/#/c/523900/
14:22:25 <yboaron_> OK, that means neutron LBaaS should be HA too, I guess
14:22:33 <dulek> yboaron_: And it isn't?
14:22:41 <dulek> yboaron_: Even if you have multiple controllers?
14:22:49 <yboaron_> Octavia's default configuration is non-HA
14:23:37 <irenab> so it is basically down to the Octavia configuration
14:23:40 <dulek> yboaron_: Sure, so is any OpenStack service.
14:23:52 <yboaron_> I'm talking about the amphora VMs
14:23:55 <dulek> yboaron_: I'm talking about possibility of running HA.
14:24:08 <yboaron_> we have a single point of failure
14:24:17 <dulek> yboaron_: Is it possible to run multiple amphora VMs?
14:24:42 <yboaron_> yes, it's possible to run 2 if we set the Octavia configuration accordingly
14:25:14 <irenab> I think this is a must for production, but it will be too heavy for the default devstack
14:25:43 <dulek> irenab: Yup, my thoughts as well.
14:25:53 <yboaron_> I agree, it's worth adding to our documentation
14:26:05 <irenab> yboaron_: good idea
14:26:18 <yboaron_> I'll take care of it
14:26:31 <irenab> great, thanks
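For the documentation note yboaron_ volunteers to write, the relevant knob is presumably Octavia's load balancer topology; a sketch of the octavia.conf change, assuming the documented ACTIVE_STANDBY option (the default, SINGLE, runs one amphora per load balancer):

    [controller_worker]
    # Run an active/standby amphora pair per load balancer instead of a
    # single amphora, removing the single point of failure.
    loadbalancer_topology = ACTIVE_STANDBY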
14:26:41 <irenab> any other topic?
14:28:03 <irenab> I guess we can save 30 mins to hack on more kuryr stuff
14:28:15 <ltomasbo> +1 on that :D
14:28:22 <irenab> thank you all for joining
14:28:29 <irenab> #endmeeting