14:00:03 #startmeeting kuryr
14:00:04 Meeting started Mon Aug 15 14:00:03 2016 UTC and is due to finish in 60 minutes. The chair is gsagie. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:06 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:08 The meeting name has been set to 'kuryr'
14:00:28 Hello everyone, who is here for the meeting?
14:00:32 o/
14:00:40 o/
14:00:42 I assume we are going to have a quick one, as some people couldn't come today
14:00:53 #info vikasc, limao, gsagie in the meeting
14:01:10 and since we are currently in kind of a transition phase
14:01:23 #topic kuryr-libnetwork refactor
14:01:31 vikasc, the stage is yours
14:01:44 thanks gsagie
14:02:14 the split is done, but the issue is that it is using kuryr-lib from an external repo
14:02:19 devvesa joining the meeting?
14:02:34 gsagie: yes.
14:02:42 #info devvesa in meeting as well
14:02:52 the functional tests still need to be fixed
14:02:54 vikasc: what is the problem?
14:03:21 I finally could make it
14:03:24 I talked to Toni
14:03:29 thanks gsagie for chairing it though :P
14:03:38 #chair apuimedo
14:03:39 Current chairs: apuimedo gsagie
14:03:42 :-)
14:03:50 apuimedo.. :) take the lead
14:03:52 apuimedo suggested to make a release first
14:03:55 very well
14:03:59 vikasc: indeed
14:04:05 yeah
14:04:15 a release for Kuryr?
14:04:19 yes
14:04:20 I think we're very close to that, yes.
14:04:27 a release for kuryr-lib (openstack/kuryr)
14:04:40 then kuryr-libnetwork and kuryr-kubernetes can depend on it
14:04:50 (more on kuryr-kubernetes in the next topic)
14:05:10 okie, let me know which patch you want as HEAD and I can do it
14:05:56 even so, I don't see why this is blocking us from just cloning and installing it in devstack (as long as we use devstack)
14:06:16 gsagie: you actually have a point with that
14:06:38 I would also like to discuss the process for merging patches in kuryr-libnetwork which have a dependency on a patch in kuryr-lib
14:06:41 devstack should probably get kuryr-lib master and kuryr-libnetwork master
14:06:46 gsagie, devstack should be working
14:07:02 vikasc: the policy should be that it always goes first to kuryr-lib
14:07:11 so that the devstack tests can work
14:07:41 but yes, it requires devstack to use master of both
14:07:54 tests on kuryr-libnetwork will keep failing until the kuryr-lib patch merges
14:08:30 vikasc: do you have links for the reviews? so we can give them priority
14:08:35 and review them faster
14:08:43 yes, it would be nice to solve this today
14:08:45 gsagie, I will appreciate that
14:09:01 * vikasc getting link
14:09:43 #link https://review.openstack.org/#/c/342624/
14:10:23 https://review.openstack.org/#/c/340147/
14:10:54 these are the patches for the rest driver and rpc_driver for kuryr-lib
14:11:23 there is a corresponding patch for each in kuryr-libnetwork
14:11:32 vikasc: you want to have all kuryr-libnetwork usage go through rest?!
14:12:16 apuimedo, I would prefer both options, rest and rpc
14:13:23 apuimedo, the changes for making it so are in the rpc_driver patch
14:13:35 can you expand a bit on both options, for reference of all
14:13:43 sure
14:13:44 a request to libnetwork arrives
14:13:52 then, how do the two flows differ?
14:14:18 if it is configured to use the rest_driver
14:14:54 the neutron rest apis will be invoked
14:15:32 which is similar to what we do today.. just a driver layer is introduced
14:15:49 if it is configured to use rpc
14:16:34 the neutron client on the libnetwork side will be an rpc client
14:17:07 requests will be sent over an amqp channel to the rpc_server
14:17:10 you mean that we'll have another server that will receive the calls
14:17:15 and will forward them to neutron
14:17:17 ?
14:17:35 apuimedo, exactly
14:18:05 and this rpc_server will then be the client to neutron
14:18:17 not the libnetwork
14:18:32 I guess it's done mostly for the case where we want to have "Ravens" even for libnetwork
14:18:38 and where will the rpc server live?
14:18:43 so not every node has access to Neutron, right?
14:18:44 openstack/kuryr?
14:18:50 gsagie: exactly
14:19:01 the rpc server should be in an admin-owned VM
14:19:04 apuimedo, yes
14:19:07 since it is common
14:19:28 kuryr-k8s will also use it
14:19:37 cool
14:19:41 alright
14:20:19 the rpc_server will be enhanced to support the nested container use case as well
14:20:27 ok
14:20:38 #action apuimedo gsagie to review the rpc/rest patches
14:20:48 they probably need better commit messages :P
14:20:57 anything else about libnetwork?
14:20:59 apuimedo, sure will do
14:21:03 hi; sorry for being late; will review too
14:21:09 thanks banix
14:21:10 thanks banix
14:21:26 apuimedo, nothing for now
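For reference, a minimal sketch of the driver layer vikasc describes above: the libnetwork handlers program against one interface, and configuration picks either direct neutron REST calls (the status quo) or RPC over AMQP to a server that owns the neutron client. Class names, the `driver` option, and the topic are illustrative assumptions, not the actual kuryr-lib API under review.

```python
# Sketch only: not the kuryr-lib code from the patches above.
import abc

from neutronclient.v2_0 import client as neutron_client
from oslo_config import cfg
import oslo_messaging as messaging


class NeutronDriver(metaclass=abc.ABCMeta):
    """What the libnetwork request handlers call into."""

    @abc.abstractmethod
    def create_network(self, name):
        raise NotImplementedError


class RestDriver(NeutronDriver):
    """Status quo: invoke the neutron REST API directly."""

    def __init__(self, conf):
        # Credentials would come from kuryr.conf in real code.
        self.neutron = neutron_client.Client(
            username=conf.username, password=conf.password,
            tenant_name=conf.tenant_name, auth_url=conf.auth_url)

    def create_network(self, name):
        return self.neutron.create_network({'network': {'name': name}})


class RpcDriver(NeutronDriver):
    """Forward the request over AMQP; the remote rpc_server is the
    actual neutron client, so worker nodes need no neutron access."""

    def __init__(self):
        transport = messaging.get_transport(cfg.CONF)
        target = messaging.Target(topic='kuryr')
        self.client = messaging.RPCClient(transport, target)

    def create_network(self, name):
        # The rpc_server exposes a matching endpoint method.
        return self.client.call({}, 'create_network', name=name)


def load_driver(conf):
    # e.g. driver = rest|rpc in kuryr.conf (option name is made up here)
    return RestDriver(conf) if conf.driver == 'rest' else RpcDriver()
```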
14:21:28 #topic kuryr-kubernetes
14:22:19 hi apuimedo
14:22:46 I am setting up kuryr-k8s on a two-node k8s cluster
14:23:05 ATM raven is listening to events
14:23:15 the cni driver is yet to be configured
14:23:25 devvesa: has a proposal to make
14:23:38 vikasc: how did you deploy?
14:23:44 is irenab here?
14:23:50 Yep. I am willing to push the code we have on midonet/kuryr to openstack/kuryr-kubernetes
14:23:57 apuimedo: no. she is at the doctor now
14:24:01 apuimedo, the k8s cluster I deployed manually on vms
14:24:23 ok
14:24:44 and then I set up openstack, running the controller on the k8s-master and additional neutron services on the nodes
14:25:00 devvesa: The problem with that is that the repository already exists, so that's a lot of patches that will have to be pushed and reviewed one by one
14:25:05 irenab couldn't come
14:25:48 apuimedo, that might need to be restructured according to the recent split
14:26:24 vikasc: yes, at the very least it should be made to use kuryr-lib
14:26:32 my proposal is to move our fork (restructured) to a midonet/kuryr-kubernetes fork and then push it all at once
14:26:36 though I suspect it will be a very small change
14:26:40 we know it is not ready for production
14:26:44 apuimedo, yes, I realised that when I was setting it up
14:26:49 devvesa: you mean a single patch?
14:27:09 yes. and start working from it
14:27:11 it would probably be better to make at least one patch per component
14:27:23 what do you mean by 'component'?
14:27:32 watcher?
14:27:35 the watcher and each translator
14:28:13 As the community decides
14:28:51 gsagie: banix: vikasc: what do you think? Single patch or one per component?
14:29:02 the more patches the better
14:29:13 +1
14:29:15 banix, +1
14:29:53 banix: well, "the more the better" doesn't always hold; if we did it with all the patches there are in midonet/kuryr it would never end :P
14:30:03 devvesa: is that okay for you?
14:30:29 yeah, faster review
14:30:46 I know my proposal of a single patch is not very canonical, but the alternative leads to a situation where you'll depend on my speed
14:30:52 apuimedo, some bigger logical patches will be fine
14:31:17 apuimedo, separated by functionality
14:31:23 going in with fewer patches will let people get hands-on and modify (there is a lot to modify) faster
14:31:44 devvesa: it's just the watcher and deps
14:31:51 and then one patch per translator
14:32:00 that should be like 4 or 5 patches, right?
14:32:29 apuimedo: sounds good
14:32:41 thanks
14:32:58 I will be on PTO most of the rest of the week... so expect it by the week of the 22nd
14:33:01 looking forward to those a lot
14:33:10 cool. I'll still be on holidays
14:33:15 but we'll try to review
14:33:32 vikasc: banix: I count on you for those reviews :P
14:33:44 :))
14:33:58 apuimedo, :)
14:34:01 Yeah, and don't be too hard on it, we know it is not production-ready code :)
14:34:29 BTW, I've been working on a lot of documentation last week, so it may help to get more contributors...
14:34:38 yes, we have to do a lot of refactoring
14:34:42 but after those patches
14:35:06 apuimedo, anyway, as I am setting it up I am already reviewing
14:35:07 :)
14:35:28 cool
14:35:31 alright
14:35:35 vikasc: I'll try to be more reachable from now on (not this week :) ) If you have questions about deployment, please ask
14:36:10 anything else about k8s?
14:36:30 devvesa, that will be really helpful
14:36:53 devvesa, I might need your help setting up the driver
14:37:20 vikasc: I can help you now on this :)
14:37:25 devvesa, I will give it a try myself first.. it looks straightforward
14:37:26 well, after the meeting
14:37:50 devvesa, I am not planning to work today :P
14:37:59 good
14:38:07 vikasc: then tomorrow I will be available too
14:38:27 devvesa, great!!
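A rough sketch of the watcher-plus-translators split agreed above: one component streams Kubernetes API events and hands each one to a per-resource translator that would map it to Neutron calls. The endpoint paths and translator names are illustrative assumptions; the actual Raven code in midonet/kuryr is structured differently.

```python
# Sketch only: pods would map to neutron ports, services to VIPs, etc.
import json

import requests

K8S_API = 'http://127.0.0.1:8080/api/v1'


def translate_pod(event):
    # A real translator would create/delete a neutron port here.
    pod = event['object']['metadata']['name']
    print('pod %s: %s' % (event['type'], pod))


def translate_service(event):
    # A real translator would manage a neutron load balancer VIP here.
    print('service event: %s' % event['type'])


TRANSLATORS = {'/pods': translate_pod, '/services': translate_service}


def watch(path, translator):
    # With ?watch=true the k8s API keeps the connection open and
    # streams one JSON-encoded event per line.
    resp = requests.get(K8S_API + path, params={'watch': 'true'},
                        stream=True)
    for line in resp.iter_lines():
        if line:
            translator(json.loads(line.decode('utf-8')))
```

Each `watch()` call here blocks on its stream; Raven itself runs its watchers concurrently on an asyncio event loop, but the event-to-translator dispatch is the same idea.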
14:38:42 #topic open discussion
14:38:55 does anybody have any other topic?
14:39:04 Back to releasing kuryr-lib
14:39:40 I remember last time apuimedo said it needs the keystone v3 patch merged
14:39:45 and then we release it
14:39:47 limao: right
14:40:12 I'm currently on holidays, but I hope I can get to finish it today/tomorrow
14:40:16 so then reviews
14:40:18 so will the rpc_driver / rest_driver also be in this scope?
14:40:22 and we can get to merging it
14:40:51 cool!
14:41:17 vikasc: limao: I think it would be possible to have the kuryr-libnetwork rest_driver (which is the status quo) not need any change in kuryr-lib
14:41:18 just want to understand the scope of the first release
14:41:22 make a 1.0.0 release
14:41:27 then make a 1.1.0 release
14:41:51 agree
14:42:42 apuimedo, changes are needed. This is the patch for that: https://review.openstack.org/#/c/342624/
14:43:18 apuimedo, I mean changes for the rest driver in kuryr-lib
14:43:58 * apuimedo looking
14:44:50 vikasc: ok. I understand
14:45:08 limao: vikasc: ok. Let's get those in for 1.0.0 too
14:45:42 apuimedo, thanks.. I will keep addressing review comments as quickly as possible
14:46:00 very well
14:46:05 any other topic?
14:46:24 #info kuryr-lib and kuryr-libnetwork to get the initial rest driver for 1.0.0
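Once that 1.0.0 release exists, the consuming repos can depend on the published package instead of cloning openstack/kuryr, roughly like this in each consumer's requirements.txt (the version floor is illustrative; kuryr-lib is the actual package name published from openstack/kuryr):

```
# requirements.txt in kuryr-libnetwork / kuryr-kubernetes
kuryr-lib>=1.0.0
```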
14:47:10 another quick question about MTU: did anyone try to modify the MTU in the remote driver?
14:47:23 limao: I did not
14:47:33 the neutron network may be vlan or overlay
14:47:43 we'd need some patch to allow modification on the neutron network, right?
14:48:20 I think it should be in docker
14:48:36 docker should load the mtu of the neutron network
14:48:37 limao: I think it has to be on both
14:48:58 kuryr should request the mtu size from neutron
14:49:04 and docker should set the mtu on the veth
14:49:24 with CNI and k8s I think we'll be able to do it all without docker's help
14:50:32 the neutron mtu is calculated automatically based on what segment it uses
14:51:01 I thought that depended on which ml2 driver / plugin you use
14:51:07 but I have never tried it :P
14:52:42 if that is the case, we'll need to ask docker for a patch that can read the mtu that we'd have to report
14:52:48 and have them set it on the veth
14:52:50 limao: ^^
14:53:09 yeah, for the overlay driver in docker it was fixed recently
14:53:21 apuimedo, docker rejected all my patches :(
14:53:24 but there is no similar thing in the remote driver
14:53:58 apuimedo, which were needed for overlapping cidrs
14:54:24 apuimedo, we will have to keep managing with the current short-term fix
14:54:58 limao: we'll push for the changes in the remote driver
14:55:23 limao: could you send an email to the mailing list describing the issue, with links to the pull requests that fixed it for the overlay driver
14:55:29 cool, thanks, no question from me
14:55:34 so that we can push or maybe make a pull request?
14:55:41 I have sent one last week
14:56:15 I will update later in the thread
14:57:08 ok, thanks
14:57:14 I somehow missed it
14:57:16 sorry
14:57:23 time to close the meeting
14:57:41 thank you all for joining, devvesa gsagie vikasc banix!
14:57:45 #endmeeting
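As a footnote to the MTU discussion: the flow sketched above, where kuryr asks neutron for the network MTU and the binding applies it to the container-side veth (which a CNI driver could do without docker's help), might look roughly like this. The helper name and pyroute2 usage are illustrative assumptions, not kuryr code.

```python
# Sketch only: read the MTU neutron reports for a network and apply it
# to the container-side veth interface.
from pyroute2 import IPDB


def apply_network_mtu(neutron, network_id, veth_name):
    # neutron computes the network MTU from its segment type
    # (vlan vs overlay); some deployments may not expose it.
    network = neutron.show_network(network_id)['network']
    mtu = network.get('mtu')
    if not mtu:
        return
    # Set the MTU on the veth via a netlink transaction.
    with IPDB() as ipdb:
        with ipdb.interfaces[veth_name] as iface:
            iface.mtu = mtu
```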