14:00:03 <gsagie> #startmeeting kuryr
14:00:04 <openstack> Meeting started Mon Aug 15 14:00:03 2016 UTC and is due to finish in 60 minutes.  The chair is gsagie. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:08 <openstack> The meeting name has been set to 'kuryr'
14:00:28 <gsagie> Hello everyone, who is here for the meeting?
14:00:32 <vikasc> o/
14:00:40 <limao> o/
14:00:42 <gsagie> i assume we are going to have a quick one as some people couldn't come today
14:00:53 <gsagie> #info vikasc, limao, gsagie in the meeting
14:01:10 <gsagie> and since we are currently in kind of a transition phase
14:01:23 <gsagie> #topic kuryr-libnetwork refactor
14:01:31 <gsagie> vikasc, the stage is yours
14:01:44 <vikasc> thanks gsagie
14:02:14 <vikasc> the split is done, but the issue is that it is using kuryr-lib from an external repo
14:02:19 <gsagie> devvesa joining the meeting?
14:02:34 <devvesa> gsagie: yes.
14:02:42 <gsagie> #info devvesa in meeting as well
14:02:52 <vikasc> functional tests need to be fixed yet
14:02:54 <gsagie> vikasc: what is the problem?
14:03:21 <apuimedo> I finally could make it
14:03:24 <vikasc> i talked to Toni
14:03:29 <apuimedo> thanks gsagie for chairing it though :P
14:03:38 <gsagie> #chair apuimedo
14:03:39 <openstack> Current chairs: apuimedo gsagie
14:03:42 <apuimedo> :-)
14:03:50 <gsagie> apuimedo.. :) take the lead
14:03:52 <vikasc> apuimedo suggested to make a release first
14:03:55 <apuimedo> very well
14:03:59 <apuimedo> vikasc: indeed
14:04:05 <vikasc> yeah
14:04:15 <gsagie> release for Kuryr?
14:04:19 <vikasc> yes
14:04:20 <apuimedo> I think we're very close to that yes.
14:04:27 <apuimedo> release for kuryr-lib (openstack/kuryr)
14:04:40 <apuimedo> then kuryr-libnetwork and kuryr-kubernetes can depend on it
14:04:50 <apuimedo> (more on kuryr-kubernetes on the next topic)
14:05:10 <gsagie> okie, let me know which patch you want as HEAD and i can do it
14:05:56 <gsagie> even though i don't see why this is blocking us from just cloning and installing it in devstack (as long as we use devstack)
14:06:16 <apuimedo> gsagie: you actually have a point with that
14:06:38 <vikasc> i also would like to discuss the process for merging patches in kuryr-libnetwork which might have a dependency on a patch in kuryr-lib
14:06:41 <apuimedo> devstack should probably get kuryr-lib master and kuryr-libnetwork master
14:06:46 <vikasc> gsagie, devstack should be working
14:07:02 <apuimedo> vikasc: the policy should be that it always goes first to kuryr-lib
14:07:11 <apuimedo> so that the devstack tests can work
14:07:41 <apuimedo> but yes, it requires devstack to use master of both
14:07:54 <vikasc> tests on kuryr-libnetwork will keep failing until the kuryr-lib patch merges
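For the cross-repo dependency raised here, OpenStack CI supports a "Depends-On:" commit-message footer, which lets the kuryr-libnetwork change be tested together with the not-yet-merged kuryr-lib change instead of failing until it lands. A sketch of such a footer (the Change-Id values are illustrative):

    Use the new rest driver from kuryr-lib

    Depends-On: I3f5a0e7c9d8b6a4f2e1d0c9b8a7f6e5d4c3b210f
    Change-Id: Iabc...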
14:08:30 <gsagie> vikasc: do you have links for the reviews? so we can give them priority
14:08:35 <gsagie> and review them faster
14:08:43 <apuimedo> yes, it would be nice to solve this today
14:08:45 <vikasc> gsagie, i would appreciate that
14:09:01 * vikasc getting link
14:09:43 <vikasc> #link https://review.openstack.org/#/c/342624/
14:10:23 <vikasc> #link https://review.openstack.org/#/c/340147/
14:10:54 <vikasc> these are the patches for the rest_driver and rpc_driver in kuryr-lib
14:11:23 <vikasc> there is a corresponding patch for each in kuryr-libnetwork
14:11:32 <apuimedo> vikasc: you want to have all kuryr-libnetwork usage go through rest?!
14:12:16 <vikasc> apuimedo,  i would prefer both options, rest and rpc
14:13:23 <vikasc> apuimedo, changes for making it so are there in rpc_driver patch
14:13:35 <apuimedo> can you expand a bit on both options, for reference of all
14:13:43 <vikasc> sure
14:13:44 <apuimedo> a request to libnetwork arrives
14:13:52 <apuimedo> then, how the two flows differ
14:14:18 <vikasc> if the configuration is set to use the rest_driver
14:14:54 <vikasc> neutron rest apis will be invoked
14:15:32 <vikasc> which is similar to what we do today.. just a driver layer is introduced
14:15:49 <vikasc> if the configuration is set to use rpc
14:16:34 <vikasc> the neutron client on the libnetwork side will be an rpc client
14:17:07 <vikasc> requests will be sent over an amqp channel to the rpc_server
14:17:10 <apuimedo> you mean that we'll have another server that will receive the calls
14:17:15 <apuimedo> and will forward them to neutron
14:17:17 <apuimedo> ?
14:17:35 <vikasc> apuimedo, exactly
14:18:05 <vikasc> and this rpc_server will then be the client to neutron
14:18:17 <vikasc> not libnetwork
14:18:32 <gsagie> i guess it's done mostly for the case where we want to have "Ravens" even for libnetwork
14:18:38 <apuimedo> and where will the rpc server live?
14:18:43 <gsagie> so not every node has access to Neutron, right?
14:18:44 <apuimedo> openstack/kuryr?
14:18:50 <apuimedo> gsagie: exactly
14:19:01 <apuimedo> the rpc server should be in an admin-owned VM
14:19:04 <vikasc> apuimedo, yes
14:19:07 <vikasc> since it is common
14:19:28 <vikasc> kuryr-k8s will also use it
14:19:37 <apuimedo> cool
14:19:41 <apuimedo> alright
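A minimal sketch of the two driver flows just described, assuming a neutronclient-based rest path and an oslo.messaging-based rpc path; the real interface is whatever the patches under review define, and every name below is hypothetical:

    # Illustrative sketch of the rest/rpc split; names are hypothetical.
    import abc

    from neutronclient.v2_0 import client as neutron_client
    from oslo_config import cfg
    import oslo_messaging


    class BaseNeutronDriver(abc.ABC):
        """What kuryr-libnetwork calls; how it reaches Neutron varies."""

        @abc.abstractmethod
        def create_port(self, port_request):
            raise NotImplementedError


    class RestDriver(BaseNeutronDriver):
        """Status quo: call the Neutron REST API directly from the node."""

        def __init__(self, **session_args):
            self._neutron = neutron_client.Client(**session_args)

        def create_port(self, port_request):
            return self._neutron.create_port({'port': port_request})


    class RpcDriver(BaseNeutronDriver):
        """Forward the request over AMQP to an rpc_server in an
        admin-owned VM, so worker nodes need no Neutron access."""

        def __init__(self, transport_url, topic='kuryr'):
            transport = oslo_messaging.get_transport(cfg.CONF,
                                                     url=transport_url)
            target = oslo_messaging.Target(topic=topic)
            self._client = oslo_messaging.RPCClient(transport, target)

        def create_port(self, port_request):
            # The rpc_server side holds the actual Neutron client and
            # performs the REST call on our behalf.
            return self._client.call({}, 'create_port',
                                     port_request=port_request)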
14:20:19 <vikasc> rpc_server will be enhanced to support the nested container use case as well
14:20:27 <apuimedo> ok
14:20:38 <apuimedo> #action apuimedo gsagie to review the rpc/rest patches
14:20:48 <apuimedo> they probably need better commit messages :P
14:20:57 <apuimedo> anything else about libnetwork?
14:20:59 <vikasc> apuimedo, sure will do
14:21:03 <banix> hi; sorry for being late; will review too
14:21:09 <apuimedo> thanks banix
14:21:10 <vikasc> thanks banix
14:21:26 <vikasc> apuimedo, nothing for now
14:21:28 <apuimedo> #topic kuryr-kubernetes
14:22:19 <devvesa> hi apuimedo
14:22:46 <vikasc> i am setting up kuryr-k8s on a two node k8s cluster
14:23:05 <vikasc> ATM raven is listening to events
14:23:15 <vikasc> cni driver is yet to be configured
14:23:25 <apuimedo> devvesa: has a proposal to make
14:23:38 <apuimedo> vikasc: how did you deploy?
14:23:44 <apuimedo> is irenab here?
14:23:50 <devvesa> Yep. I am willing to push the code we have on midonet/kuryr to openstack/kuryr-kubernetes
14:23:57 <devvesa> apuimedo: no. she is at the doctor now
14:24:01 <vikasc> apuimedo,  k8s cluster i deployed manually on vms
14:24:23 <apuimedo> ok
14:24:44 <vikasc> and then for openstack i set up the controller on the k8s-master and additional neutron services on the nodes
14:25:00 <apuimedo> devvesa: The problem with that is that the repository already exists, so that's a lot of patches that will have to be pushed and reviewed one by one
14:25:05 <gsagie> irenab couldn't come
14:25:48 <vikasc> apuimedo, that might need to be restructured according to the recent split
14:26:24 <apuimedo> vikasc: yes, at the very least, it should be made to use kuryr-lib
14:26:32 <devvesa> my proposal is to move our fork (restructured) to a midonet/kuryr-kubernetes fork and then push it all at once
14:26:36 <apuimedo> though I suspect it will be a very small change
14:26:40 <devvesa> we know it's not ready for production
14:26:44 <vikasc> apuimedo, yes i realised that when i was setting it up
14:26:49 <apuimedo> devvesa: you mean a single patch?
14:27:09 <devvesa> yes. and start working from it
14:27:11 <apuimedo> it would probably be better to make at least one patch per component
14:27:23 <devvesa> what do you mean 'component'?
14:27:32 <devvesa> watcher?
14:27:35 <apuimedo> watcher and each translator
14:28:13 <devvesa> As the community decides
14:28:51 <apuimedo> gsagie: banix: vikasc: what do you think? Single patch or one per component?
14:29:02 <banix> the more patches the better
14:29:13 <limao> +1
14:29:15 <vikasc> banix,  +1
14:29:53 <apuimedo> banix: well, more is not always better; if we did it with all the patches there are in midonet/kuryr it would never end :P
14:30:03 <apuimedo> devvesa: is that okay for you?
14:30:29 <gsagie> yeah, faster review
14:30:46 <devvesa> I know my proposal of a single patch is not very canonical, but splitting it up leads to a situation where you'll depend on my speed
14:30:52 <vikasc> apuimedo, some bigger logical patches will be fine
14:31:17 <vikasc> apuimedo, separated by functionality
14:31:23 <devvesa> delivering it in fewer patches will let people get hands-on and modify it (there is a lot to modify) faster
14:31:44 <apuimedo> devvesa: it's just the watcher and deps
14:31:51 <apuimedo> and then one patch per translator
14:32:00 <apuimedo> that should be like 4 or 5 patches, right?
14:32:29 <devvesa> apuimedo: sounds good
14:32:41 <apuimedo> thanks
14:32:58 <devvesa> I will be on PTO for most of the rest of the week... so expect it by the week of the 22nd
14:33:01 <apuimedo> looking forward to those a lot
14:33:10 <apuimedo> cool. I'll still be on holidays
14:33:15 <apuimedo> but we'll try to review
14:33:32 <apuimedo> vikasc: banix: I count on you for those reviews :P
14:33:44 <banix> :))
14:33:58 <vikasc> apuimedo,  :)
14:34:01 <devvesa> Yeah, and don't be too hard, we know it is not production-ready code :)
14:34:29 <devvesa> BTW, I worked on a lot of documentation last week, so it may help to get more contributors...
14:34:38 <apuimedo> yes, we have to do a lot of refactoring
14:34:42 <apuimedo> but after those patches
14:35:06 <vikasc> apuimedo, anyway, as i am setting it up i am already reviewing
14:35:07 <vikasc> :)
14:35:28 <apuimedo> cool
14:35:31 <apuimedo> alright
14:35:35 <devvesa> vikasc: I'll try to be more reachable from now on (not this week :) ) If you have questions about deployment, please ask
14:36:10 <apuimedo> anything else about k8s?
14:36:30 <vikasc> devvesa, that will be really helpful
14:36:53 <vikasc> devvesa, i might need your help in setting up the driver
14:37:20 <devvesa> vikasc: i can help you now on this :)
14:37:25 <vikasc> devvesa, i will give it a try myself first.. it looks straightforward
14:37:26 <devvesa> well, after the meeting
14:37:50 <vikasc> devvesa, i am not planning to work today :P
14:37:59 <apuimedo> good
14:38:07 <devvesa> vikasc: then tomorrow I will be available too
14:38:27 <vikasc> devvesa, great!!
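A rough skeleton of the watcher-plus-translators layout those patches would carry over; the real code lives in midonet/kuryr, and the API address, class names, and event handling below are illustrative only:

    # Hypothetical watcher skeleton: stream k8s API events and dispatch
    # them to per-resource translators (one review patch per translator).
    import json

    import requests

    K8S_API = 'http://127.0.0.1:8080/api/v1'  # assumed kube-apiserver


    class PodTranslator(object):
        """Would turn pod events into Neutron port operations (elided)."""

        def translate(self, event):
            print('pod %s: %s' % (event['type'],
                                  event['object']['metadata']['name']))


    TRANSLATORS = {'pods': PodTranslator()}


    def watch(resource):
        # ?watch=true keeps the HTTP response open; each line is one
        # JSON-encoded ADDED/MODIFIED/DELETED event.
        resp = requests.get('%s/%s?watch=true' % (K8S_API, resource),
                            stream=True)
        for line in resp.iter_lines():
            if line:
                TRANSLATORS[resource].translate(json.loads(line))


    if __name__ == '__main__':
        watch('pods')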
14:38:42 <apuimedo> #topic open discussion
14:38:55 <apuimedo> does anybody else have any other topic?
14:39:04 <limao> Back to the kuryr-lib release
14:39:40 <limao> I remember last time apuimedo said it needs the keystone v3 patch merged
14:39:45 <limao> then release it
14:39:47 <apuimedo> limao: right
14:40:12 <apuimedo> I'm currently on holidays, but I hope I can get to finish it today/tomorrow
14:40:16 <apuimedo> so then reviews
14:40:18 <limao> so will the rpc_driver / rest_driver also be in this scope?
14:40:22 <apuimedo> and we can get to merging it
14:40:51 <limao> cool!
14:41:17 <apuimedo> vikasc: limao: I think it would be possible to have the kuryr-libnetwork rest_driver (which is the status quo) not need any change in kuryr-lib
14:41:18 <limao> just want to understand the scope of the first release
14:41:22 <apuimedo> make a 1.0.0 release
14:41:43 <apuimedo> then 1.1.0
14:41:51 <limao> agree
14:42:42 <vikasc> apuimedo, changes are needed. This is the patch for that: https://review.openstack.org/#/c/342624/
14:43:18 <vikasc> apuimedo, i mean changes for rest driver in kuryr-lib
14:43:58 * apuimedo looking
14:44:50 <apuimedo> vikasc: ok. I understand
14:45:08 <apuimedo> limao: vikasc: ok. Let's get those in for 1.0.0 too
14:45:42 <vikasc> apuimedo, thanks.. I will keep addressing review comments as quickly as possible
14:46:00 <apuimedo> very well
14:46:05 <apuimedo> any other topic?
14:46:24 <apuimedo> #info kuryr-lib and kuryr-libnetwork to get the initial rest driver for 1.0.0
14:47:10 <limao> another quick question about MTU, did anyone try to modify the MTU in the remote driver?
14:47:23 <apuimedo> limao: I did not
14:47:33 <limao> the neutron network may be vlan or overlay
14:47:43 <apuimedo> we'd need some patch to allow modification on the neutron network, right?
14:48:20 <limao> I think it should be in docker
14:48:36 <limao> docker should load the mtu of the neutron network
14:48:37 <apuimedo> limao: I think it would have to be in both
14:48:58 <apuimedo> kuryr should request the mtu size from neutron
14:49:04 <apuimedo> and docker should set the mtu for the veth
14:49:24 <apuimedo> with CNI and k8s I think we'll be able to do it all without docker help
14:50:32 <limao> neutron mtu is calculated automatically based on what segment type it uses
14:51:01 <apuimedo> I thought that depended on which ml2 driver / plugin you use
14:51:07 <apuimedo> but I have never tried it :P
14:52:42 <apuimedo> if that is the case, we'll need to ask docker for a patch that can read the mtu that we'd have to report
14:52:48 <apuimedo> and have them set it on the veth
14:52:50 <apuimedo> limao: ^^
14:53:09 <limao> yeah, for the overlay driver in docker, it was fixed recently
14:53:21 <vikasc> apuimedo, docker rejected all my patches :(
14:53:24 <limao> but there is no similar thing in the remote driver
14:53:58 <vikasc> apuimedo, which were needed for overlapping cidrs
14:54:24 <vikasc> apuimedo, we will have to keep managing with the current short-term fix
14:54:58 <apuimedo> limao: we'll push for the changes in the remote driver
14:55:23 <apuimedo> limao: could you send an email to the mailing list describing the issue, with links to the pull requests that fixed it for the overlay driver
14:55:29 <limao> cool, thanks, no question from me
14:55:34 <apuimedo> so that we can push or maybe make a pull request?
14:55:41 <limao> I sent one last week
14:56:15 <limao> I will update later in the thread
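A rough sketch of the CNI-side MTU handling apuimedo alludes to, assuming the network's mtu field can be read from Neutron and applied with pyroute2; the function and variable names are hypothetical:

    # Hypothetical CNI-side MTU handling, with no docker involvement:
    # read the mtu Neutron computed for the network and set it on the
    # container-side interface ourselves.
    from pyroute2 import IPDB


    def apply_neutron_mtu(neutron, network_id, ifname):
        # 'neutron' is assumed to be a neutronclient Client instance;
        # Neutron derives 'mtu' from the segment type (vlan, vxlan, ...)
        net = neutron.show_network(network_id)['network']
        mtu = net.get('mtu')
        if not mtu:
            return
        with IPDB() as ipdb:
            # IPDB transactions commit on context exit
            with ipdb.interfaces[ifname] as iface:
                iface['mtu'] = mtu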
14:57:08 <apuimedo> ok, thanks
14:57:14 <apuimedo> I somehow missed it
14:57:16 <apuimedo> sorry
14:57:23 <apuimedo> time to close the meeting
14:57:41 <apuimedo> thank you all for joining, devvesa gsagie vikasc banix!
14:57:45 <apuimedo> #endmeeting