14:04:54 <irenab> #startmeeting kuryr
14:04:54 <openstack> Meeting started Mon Nov 20 14:04:54 2017 UTC and is due to finish in 60 minutes.  The chair is irenab. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:04:55 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:04:57 <openstack> The meeting name has been set to 'kuryr'
14:05:14 <irenab> hello everyone
14:05:22 <dulek> o/
14:05:28 <ltomasbo> o/
14:06:25 <irenab> hi guys, let's wait one more minute for people to join
14:07:10 <irenab> well, let's start
14:07:30 <irenab> #topic kuryr-libnetwork
14:08:11 <irenab> a few patches were proposed and merged recently
14:08:30 <irenab> mostly driven by zoom team
14:08:43 <yboaron__> Hi folks, sorry, I must leave early today, will catch up later
14:09:21 <irenab> #topic kuryr-kubernetes
14:09:30 <irenab> yboaron__: do you want to update?
14:10:10 <yboaron__> irenab, nothing special - I'm debugging the route feature
14:10:29 <yboaron__> irenab, thanks
14:10:47 <ltomasbo> I have something on kuryr-kubernetes, irenab perhaps you can take a look at: https://review.openstack.org/#/c/519704/
14:11:05 <irenab> yboaron__: Do you plan to submit an rst spec, or do you prefer people to review the gdoc?
14:11:27 <ltomasbo> apuimedo and dulek already reviewed it. It is about skipping calls to neutron during pod boot-up time (addressing ivc's oslo.cache TODO)
14:11:34 <irenab> ltomasbo: sure, will take a look
14:11:50 <ltomasbo> irenab, thanks!
14:12:19 <irenab> ltomasbo: you can add me to the list of patch reviewers, then I won’t miss it
14:12:36 <ltomasbo> ohh, sorry, I forgot about that! will do in a sec.
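The Neutron-call skipping ltomasbo describes amounts to memoizing lookups whose answers rarely change between pod creations. A minimal sketch of the idea (hypothetical names; the actual patch builds on oslo.cache, and `FakeNeutronClient` here only stands in for python-neutronclient to make the saving visible):

```python
import functools


class FakeNeutronClient:
    """Stand-in for python-neutronclient; counts GETs to show what caching saves."""

    def __init__(self):
        self.calls = 0

    def show_subnet(self, subnet_id):
        self.calls += 1
        return {"subnet": {"id": subnet_id, "cidr": "10.0.0.0/24"}}


neutron = FakeNeutronClient()


@functools.lru_cache(maxsize=128)
def get_subnet_cidr(subnet_id):
    # Subnet attributes rarely change, so repeated pod creations on the
    # same network can reuse the first answer instead of calling Neutron.
    return neutron.show_subnet(subnet_id)["subnet"]["cidr"]


# Two pods on the same subnet -> only one Neutron round trip.
print(get_subnet_cidr("subnet-a"), get_subnet_cidr("subnet-a"), neutron.calls)
```

The real code would also need invalidation (or a TTL, which oslo.cache provides) for the rare case where the cached resource does change.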
14:12:47 <apuimedo> \o/
14:12:58 <irenab> apuimedo: you are alive!!!
14:13:07 <irenab> #chair apuimedo
14:13:08 <openstack> Current chairs: apuimedo irenab
14:13:16 <apuimedo> to some degree
14:13:19 <apuimedo> :-)
14:13:22 <apuimedo> where are we at?
14:13:38 <irenab> switched to kuryr-kubernetes updates
14:13:49 <apuimedo> ah good
14:14:11 <apuimedo> I discovered that my openshift support patch has the openshift api lb member wrong
14:14:21 <apuimedo> I had only tested pod communication :P
14:14:51 <apuimedo> irenab: we should try to merge the cni daemon patch
14:15:01 <apuimedo> and start thinking about cutting an intermediary release
14:15:06 <apuimedo> what do you all think?
14:15:11 <irenab> apuimedo: agreed
14:15:13 <dulek> :)
14:15:25 <dulek> So here's the status copy paste:
14:15:26 <dulek> I'm still working on CNI daemon. The first two patches in the chain are fine and ready to be merged. I'm currently debugging the daemon in containerized mode - there's an issue ltomasbo found.
14:15:28 <apuimedo> dulek: any update on the openshift gate?
14:16:10 <dulek> apuimedo: Aww, I'd forgotten about that, need to check out why it's failing.
14:16:12 <apuimedo> dulek: running it with k8s or with openshift? my openshift pods start faster somehow
14:16:22 <dulek> k8s.
14:16:28 <apuimedo> we should really update k8s to use kubeadm or binaries at some point
14:16:46 <dulek> I agree.
14:16:57 <apuimedo> I'll take a look at it today
14:17:08 <apuimedo> (the new k8s deployment)
14:17:21 <apuimedo> otherwise we can't work on crd
14:17:43 <dulek> Yup, that's right.
14:17:50 <apuimedo> irenab: any update on network policy?
14:17:51 <irenab> crd for what?
14:17:59 <irenab> what feature?
14:18:06 <apuimedo> irenab: in case we need them for cni side vif assignment
14:18:25 <irenab> apuimedo: network policy spec is up for review, please take a look
14:18:55 <irenab> https://review.openstack.org/#/c/519239/
14:19:14 <apuimedo> #info vif handler and driver design document merged https://review.openstack.org/#/c/513715/ This should open the door for multiple concurrent vif driver operation
14:19:25 <apuimedo> irenab: link?
14:19:39 <irenab> ^^
14:19:45 <apuimedo> damn
14:19:47 <apuimedo> I'm blind
14:19:49 <apuimedo> xD
14:19:59 <apuimedo> #link https://review.openstack.org/#/c/519239/
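For context on the spec under review: a Kubernetes NetworkPolicy that a Neutron-backed implementation would have to translate into security group rules looks roughly like this (a generic upstream-style example, not taken from the spec itself):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 5432
```

The pod selectors are label-based and dynamic, which is the interesting part of the mapping: membership of the matching Neutron security groups has to be kept in sync as pods come and go.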
14:20:31 <irenab> apuimedo: openshift additions are tracked via kuryr-k8s launchpad bugs/bps?
14:20:39 <apuimedo> #action apuimedo to review networkpolicy spec
14:20:51 <apuimedo> irenab: I think so
14:21:14 <apuimedo> https://blueprints.launchpad.net/kuryr-kubernetes/+spec/devstack-openshift-support
14:21:33 <apuimedo> I think yboaron's also has a bp
14:21:36 <irenab> apuimedo: there is also the route stuff
14:22:16 <irenab> apuimedo: please take a look if it has one, I couldn't find it
14:22:33 <apuimedo> #link https://blueprints.launchpad.net/kuryr-kubernetes/+spec/openshift-router-support
14:22:59 <irenab> :-)
14:23:18 <apuimedo> btw, for those testing with Octavia, I resized my vagrant env vars to 32GiB and 4 cores and now stacking is not so painful
14:23:48 <irenab> apuimedo: maybe we need to update that in the repo?
14:23:49 <apuimedo> maybe also because I'm using an lvm pool for the disk
14:23:51 <apuimedo> :P
14:24:16 <apuimedo> I feel like defaulting to a mem amount not present in most developer laptops would be a bit mean
14:24:23 <apuimedo> I can only run that on my desktop
14:24:42 <apuimedo> irenab: but having some comment on the vagrant file about recommended size
14:24:45 <apuimedo> would be good
14:25:20 <apuimedo> irenab: do you know if there's been any movement on the port-behind-port thing?
14:25:23 <irenab> indeed, especially as it is the default option in devstack for the reference implementation
14:25:50 <irenab> I know oanson plans to propose it in neutron
14:26:05 <irenab> and have implementation in Dragonflow
14:26:20 <apuimedo> has any of you tried kuryr with provider networks?
14:26:42 <irenab> apuimedo: can you please elaborate?
14:26:43 <apuimedo> irenab: do you think they'll demand a ml2/ovs impl?
14:27:05 <apuimedo> well, I would like to know if we use ironic to provision bare metal
14:27:28 <irenab> apuimedo: probably, but I think it is mostly the way to define topology
14:27:29 <apuimedo> if we can then use kuryr to create ports in the same provider network the baremetal host uses and bind them
14:27:44 <apuimedo> ltomasbo is going to investigate
14:28:15 <ltomasbo> :)
14:28:33 <apuimedo> ltomasbo: you have to put this in background https://www.youtube.com/watch?v=Jne9t8sHpUc
14:28:35 <apuimedo> xD
14:28:46 <irenab> what triggered the port-behind-port work is actually the Octavia integration, which just sets an allowed address pair for the VIP port inside the amphora VM
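The mechanism irenab refers to is Neutron's `allowed_address_pairs` port attribute: the amphora's port is told to also accept traffic for the VIP address. A minimal sketch of the request body (the values are made up for illustration; a real call would go through python-neutronclient's `update_port`):

```python
# Hypothetical values; in a real deployment these come from Octavia/Neutron.
vip_address = "10.0.0.50"  # the VIP held inside the amphora VM
amphora_port_id = "11111111-2222-3333-4444-555555555555"

# Neutron's anti-spoofing rules drop traffic for addresses that are not a
# port's own fixed IPs, so the VIP has to be whitelisted on the amphora's
# port explicitly.
update_body = {
    "port": {
        "allowed_address_pairs": [{"ip_address": vip_address}],
    }
}

# With python-neutronclient this would be roughly:
#   neutron.update_port(amphora_port_id, update_body)
print(update_body["port"]["allowed_address_pairs"])
```

The drawback discussed here is that this only relaxes filtering; unlike a first-class port-behind-port model, Neutron has no real object for the VIP's placement, which is what the proposed work would fix.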
14:29:08 <ltomasbo> apuimedo, xD
14:29:22 <apuimedo> irenab: oh yeah, I know it's unrelated
14:29:24 <apuimedo> :-)
14:29:31 <apuimedo> It just popped in my mind at the same time xD
14:29:52 <apuimedo> actually port-behind-port popped in my head for a different reason
14:30:18 <apuimedo> let's say somebody wants to use kuryr with ipvlan and have the services provided by kube-proxy
14:30:25 <apuimedo> that would make it work beautifully
14:30:31 <irenab> apuimedo: the opposite, it is very related, since support for the LB type of k8s service exposed issues that 'port behind port' should fix
14:31:05 <apuimedo> irenab: no, I meant unrelated to my ironic/provider-network question
14:31:24 <irenab> apuimedo: too many threads you are running at the same time :-)
14:31:40 <ltomasbo> xD
14:32:22 <apuimedo> irenab: like the kuryr-controller
14:32:26 <apuimedo> I'll end up crashing
14:33:07 <irenab> apuimedo: back to the case you raised
14:33:21 <apuimedo> irenab: what do you think of this idea of port-behind-port for maclan/ipvlan based deployment (with lbaasv2 or kubeproxy optional services)
14:33:23 <apuimedo> ?
14:33:31 <irenab> what would be the use case to have kuryr bind ports and use kube-proxy for services?
14:34:07 <irenab> apuimedo: I am not sure I understand what you have in mind
14:34:07 <apuimedo> well, as I understand, there's plenty of people that use OSt without octavia
14:34:47 <irenab> apuimedo: agree, it would be a more lightweight approach
14:35:06 <apuimedo> actually the easystack.cn people modified kuryr to work like that
14:35:15 <apuimedo> but I suppose that they use allowed address pairs
14:35:17 <apuimedo> atm
14:35:30 <apuimedo> which is way way clunkier than port-behind-port
14:35:34 <irenab> apuimedo any pointers to the details?
14:35:42 <apuimedo> sure
14:35:53 <apuimedo> ping me later and I'll send you the slides
14:35:58 <irenab> thanks
14:36:07 <apuimedo> unfortunately I couldn't find their modified code
14:36:13 <irenab> do you mind also giving an update regarding the summit?
14:36:30 <apuimedo> sure
14:36:34 <irenab> and what did you do to dmellado?
14:36:39 <apuimedo> #Info in the summit there was a workshop with kuryr-kubernetes
14:36:53 <apuimedo> irenab: jet lag killed him brain, we're trying to find another one for him
14:37:03 <irenab> apuimedo: :-)
14:37:03 <apuimedo> s/him/his/
14:37:35 <apuimedo> #Info in the summit easystack presented their k8s cloud based on kuryr-kubernetes and their own cinder volume driver
14:37:52 <apuimedo> they talked about 4 different modifications they did to kuryr
14:38:18 <apuimedo> their cloud did not support trunk ports
14:38:35 <apuimedo> so they use macvlan or ipvlan with allowed address pairs
14:38:43 <apuimedo> they opted to use kube proxy for control plane speed
14:38:49 <apuimedo> and to support nodeports and such
14:38:51 <apuimedo> I think
14:40:01 <apuimedo> irenab: nothing else of note in the summit from what my jet lagged self could attend to
14:40:14 <apuimedo> (apart from the interesting dragonflow workshop)
14:40:25 <irenab> how many people attended the kuryr workshop?
14:40:46 <apuimedo> 30 or so I'd say, which was about 80% of the room
14:40:49 <irenab> hope the project will gain more contributors
14:41:03 <apuimedo> an NSX folk said he'd try to use it
14:41:13 <apuimedo> I wonder if it will need some patch to work
14:41:30 <apuimedo> (in kvm mode, not in vsphere)
14:41:54 <irenab> we also want to try using kuryr to integrate directly with dragonflow, without neutron
14:42:28 <apuimedo> irenab: do you have bulk ops? Or do you think you'd be fast enough that pooling would not be necessary?
14:42:42 <ltomasbo> irenab, there is a similar work being done for ODL
14:43:00 <irenab> ltomasbo: amazing how both these projects aligned :-)
14:43:02 <apuimedo> ltomasbo: irenab: right. In odl it's golang based
14:43:19 <ltomasbo> irenab, yep! same paths, same bugs :D
14:43:42 <irenab> apuimedo: not sure yet about performance
14:44:18 <irenab> but it is just a thought for now, no work has started
14:44:50 <apuimedo> ok
14:45:14 <apuimedo> anything else for the meeting?
14:46:29 <irenab> nope
14:47:59 <apuimedo> alrighty!
14:48:06 <apuimedo> thank you all for joining!
14:48:08 <apuimedo> #endmeeting