03:02:02 #startmeeting kuryr
03:02:03 Meeting started Tue Jun 28 03:02:02 2016 UTC and is due to finish in 60 minutes. The chair is tfukushima. Information about MeetBot at http://wiki.debian.org/MeetBot.
03:02:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
03:02:06 The meeting name has been set to 'kuryr'
03:02:48 Who's up for the Kuryr weekly meeting?
03:03:12 o/
03:03:21 o/
03:04:56 tfukushima, ping
03:05:15 #info tango, limao, vikasc and tfukushima are present
03:05:56 Hi folks, let me start with the overlapping IPAM issue
03:06:25 #topic Overlapping IPAM
03:07:18 So I can see the patches submitted by vikasc. I reviewed some.
03:07:41 #link kuryr patches https://review.openstack.org/#/q/project:openstack/kuryr
03:08:09 tfukushima, due to banix's recently merged "neutron port active" patch, some of the unit tests are failing
03:08:47 vikasc: Was his patch failing, or did his patch make your patches fail?
03:08:50 tfukushima, I am working on fixing those
03:09:13 tfukushima, his patch made my patches' test cases fail :)
03:09:21 tfukushima, which were under review
03:09:34 Ok, that's unfortunate. :-/
03:09:44 tfukushima, to be precise, he added one more parameter to the _get_fake_port api
03:09:57 I see.
03:10:05 tfukushima, i suspect that is causing the trouble, looking into it
03:10:43 Good. I'll continue reviewing your patches as well.
03:10:50 thanks tfukushima
03:11:11 vikasc: Any update on the pull request against Docker upstream?
03:11:26 tfukushima, unfortunately no
03:11:53 It seems they were busy adding new features like embedded Swarm to 1.12. :-p
03:12:19 tfukushima, probably yes
03:13:02 Anyway, take it easy. Making their CI happy would make the merge more likely, I guess.
03:13:51 tfukushima, i will work on fixing the test cases. initially i was hoping for some signal from them, but your point about fixing CI makes more sense
03:14:27 tfukushima, once my current patches in Kuryr are in good shape, i will work on the docker PR test cases
03:15:09 vikasc: Yeah, at that point they'd take your pull requests seriously, because they're (probably) too busy to look at every patch.
03:15:44 tfukushima, agreed, broken CI may also be one more reason for the extra delay
03:16:36 tfukushima, I too was thinking along similar lines, and now after your suggestion i am adding this to my TODO list
03:17:10 vikasc: Thanks. Do you have any other updates you might want to share?
03:17:38 tfukushima, not on ipam, I want a discussion on nested-vm/refactoring
03:18:06 Ok, let's move to that topic.
03:18:12 tfukushima, thanks
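[Editor's note: the unit-test breakage vikasc describes above is a common Python pitfall: a shared test helper gains a new parameter and every existing call site breaks. Below is a minimal sketch of the failure mode and the usual fix; the signature and fields are illustrative assumptions, not Kuryr's actual _get_fake_port code.]

```python
# Illustrative only: Kuryr's real _get_fake_port differs. The point is
# the failure mode: adding a new required parameter to a shared test
# helper breaks every existing caller with a TypeError.

def _get_fake_port(docker_endpoint_id, neutron_network_id,
                   neutron_port_id, status='ACTIVE'):
    """Return a fake Neutron port dict for unit tests (hypothetical)."""
    # Giving the new 'status' parameter a default keeps the old
    # three-argument call sites working, while new tests can override it.
    return {
        'port': {
            'id': neutron_port_id,
            'network_id': neutron_network_id,
            'status': status,
            'name': docker_endpoint_id,
        }
    }

# Old-style calls keep passing:
port = _get_fake_port('ep-1', 'net-1', 'port-1')
# New tests can exercise the added behavior explicitly:
down_port = _get_fake_port('ep-1', 'net-1', 'port-1', status='DOWN')
```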
03:18:21 #topic Nested VM/containers
03:18:37 vikasc: Please go ahead.
03:19:14 tfukushima, i was discussing this with some of the teammates over irc as well as on the ml.
03:19:30 tfukushima, please give me a moment, i will share the ml link
03:20:08 http://lists.openstack.org/pipermail/openstack-dev/2016-June/098209.html
03:20:08 Please take your time. :-)
03:20:45 #link [openstack-dev] [Kuryr] Refactoring into common library and kuryr-libnetwork + Nested_VMs http://lists.openstack.org/pipermail/openstack-dev/2016-June/098209.html
03:21:23 tfukushima, shall we have a very small break, so that everyone can get in sync by going through the ml link?
03:22:05 Sure. Let's have a 5 min break then.
03:22:12 tfukushima, +1
03:26:09 the nested-vm spec did not have implementation-level details, so i want to share, discuss and get my thinking validated with all team members
03:27:24 the logic in the current libnetwork driver which makes neutron calls can be moved to kuryr-controller.
03:28:09 the current kuryr will be modified to make calls to kuryr-controller for any network-related services
03:28:27 kuryr-controller is responsible for:
03:28:40 1. making calls to other openstack services
03:28:58 2. allocating segmentation ids for containers on each vm
03:29:00 vikasc: What kind of neutron calls would move? Like _get_networks_by_attrs and others?
03:29:36 tfukushima, yes.. all neutron apis like get_port, create_subnet etc.
03:30:09 tfukushima, otherwise we will have to store credentials on each vm
03:31:47 Sorry for interrupting. Please go on if you didn't finish it yet.
03:32:38 tfukushima, the kuryr part running on each vm will be responsible only for:
03:33:28 1. configuring the local (inside-vm) backend (ovs or midonet) for tagging container traffic
03:33:55 2. vif binding
03:34:44 anything related to neutron or the allocation of tags will be done by the central kuryr-controller
03:36:02 the central kuryr-controller, apart from talking to neutron and allocating per-vm tags, will also be responsible for configuring ovs on the compute machine for traffic untagging
03:36:23 tfukushima, my first iteration is done :)
03:37:02 tfukushima, not sure how clear i was :)
03:38:28 vikasc: Ok, so how does the component on each VM (kuryr-agent?) talk to the centralized kuryr-controller?
03:38:52 tfukushima, how about rest calls?
03:39:05 tfukushima, just like a client
03:39:27 Oh, I guessed it'd be done through RPC or something.
03:39:38 tfukushima, since it has to react to calls from libnetwork
03:40:41 Ok, anyway I got the point.
03:40:42 tfukushima, initially i was thinking of rpc but then rest calls made more sense
03:41:24 tfukushima, so if i get some improvement suggestions, i will update the current nested-vm spec
03:42:30 vikasc: is the agent talking to the controller now?
03:42:48 tfukushima, you mean in the current code?
03:42:57 right
03:43:08 tfukushima, no..
03:43:11 vikasc: tango is not me. :-)
03:43:20 oh, sorry
03:43:42 np :)
03:43:57 tango, no.. currently the agent/client/driver talks directly to neutron
03:44:28 ok, so what does the controller do now?
03:44:40 tango, there is no controller :D
03:45:01 ah ok, me learning Kuryr :)
03:45:06 tango, everything is in the libnetwork driver only
03:45:15 tango, np :)
03:45:19 I guess he meant kuryr/controller.py.
03:45:59 tango, the current controller.py has all the libnetwork api handling methods
03:46:35 It'd delegate the actual Neutron API calls to the centralized "kuryr-controller". That's the idea I got from the discussion.
03:46:53 tfukushima, +1
03:47:14 ok, I think I get the idea
03:47:48 tfukushima, would you suggest updating the nested-vm spec with the approach discussed here, or should i wait for more inputs on the ml
03:47:53 or irc or meetings
03:48:35 tfukushima, i discussed with gal also. He was also a bit positive about the approach. What would you suggest?
03:48:50 libnetwork requires only that the remote driver provide the HTTP APIs, and its concern is just the interface of the input/output from/to Docker. Currently kuryr/controller.py calls the Neutron API directly by itself, but there'd be more nodes and you'd get more responsibility for their configuration. So we want to have the centralized "kuryr-controller" handle all the Neutron API calls.
03:49:32 tfukushima, exactly, nicely put
03:49:56 Thanks Taku, that's very helpful
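[Editor's note: to make the delegation model summarized above concrete, here is a minimal sketch of a per-VM agent that keeps serving libnetwork's remote-driver HTTP API but forwards the Neutron work to a central kuryr-controller over REST, so no Neutron credentials live on the VM. The controller URL, the /ports resource, and the payload fields are illustrative assumptions, not an actual Kuryr API.]

```python
# Hypothetical sketch of the agent-to-controller delegation discussed
# above; controller endpoint and payloads are assumptions, not Kuryr's
# actual interface. Requires Flask and requests.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
CONTROLLER_URL = 'http://kuryr-controller:9876'  # assumed endpoint


@app.route('/NetworkDriver.CreateEndpoint', methods=['POST'])
def create_endpoint():
    """Handle libnetwork's remote-driver call on the VM."""
    data = request.get_json(force=True)
    # Delegate the Neutron port creation to the central controller,
    # so the agent never needs Neutron credentials on the VM.
    resp = requests.post(CONTROLLER_URL + '/ports',
                         json={'endpoint_id': data['EndpointID'],
                               'network_id': data['NetworkID']})
    resp.raise_for_status()
    port = resp.json()
    # What remains local to the VM: vif binding and configuring the
    # in-VM backend (ovs or midonet) to tag this container's traffic.
    return jsonify({'Interface': port.get('interface', {})})


if __name__ == '__main__':
    app.run(port=23750)  # port number is illustrative
```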
03:50:24 vikasc: if you are going to update the spec, please also update the part about Magnum's API for containers, which has been removed.
03:50:47 tango, sure.. I will take care of it
03:51:40 tfukushima, do you see any big hole in the approach?
03:51:46 So internally either an RPC or a REST API call works anyway as the way to communicate with kuryr-controller.
03:52:13 tfukushima, +1
03:53:41 My personal preference is to do the communication through RPC with Protocol Buffers/gRPC, but I'm not sure if that's possible in the OpenStack world.
03:54:09 I don't know how other OpenStack projects are handling their RPC needs.
03:54:09 tfukushima, let's add these as options in the spec
03:54:46 At least, I can't find a protobuf library in global-requirements.txt: https://github.com/openstack/requirements/blob/master/global-requirements.txt
03:55:13 Maybe they're just using AMQP for that.
03:55:21 tfukushima, i also think so
03:56:02 tfukushima, only 5 mins left
03:56:40 Anyway I totally agree on the big idea. We need to investigate and think about the communication methods. I prefer RPC for the internal communications since they're considered private.
03:56:58 But a REST/HTTP API works as well.
03:57:06 Ok, let's wrap it up.
03:57:17 tfukushima, I would add these as alternatives
03:57:27 tfukushima, one more quick point
03:58:23 tfukushima, all this work will be applicable to the k8s integration also; whatever we have today in the kuryr-k8s-integration spec is control path only.
03:59:09 tfukushima, should we add one more section there for the data path, referring to the nested-container spec?
03:59:49 vikasc: Yes, exactly. We can reuse this mechanism for future-added components as well.
04:00:05 Yes, I will add the description for that.
04:00:14 I have a really quick question, just to clarify: is the openvswitch agent required on all nodes in the Swarm cluster?
04:00:27 tfukushima, exactly, for any COEs.. please provide your opinion on the ml also.
04:00:34 tango, you also please :)
04:01:06 vikasc: certainly
04:01:19 tango: Yes, if you're using Open vSwitch under your controller.
04:01:20 tfukushima, tango, there is a lot of work left. :)
04:01:39 ok thanks
04:01:56 Ok, guys, we're running out of time.
04:02:09 tango, the ovs-agent is required on all compute machines for configuring the untagging part
04:02:47 Thanks for confirming
04:03:10 tango, please note that traffic on the vm's trunk port will come with different tags; the tag-to-subport mapping will be handled/configured by the ovs-agent
04:03:52 tfukushima, done :)
04:04:11 Good. So let's finish our meeting.
04:04:23 Sorry for my disorganization this time. :-/
04:04:43 tfukushima, np :)
04:04:51 Good thing nobody is trying to kick us out :)
04:05:02 :D
04:05:09 Yeah.
04:05:24 limao: If you want to discuss something, we can do that in the #openstack-kuryr channel.
04:05:39 yeah, thanks
04:05:50 Alright, thanks for attending, guys.
04:06:03 #endmeeting
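[Editor's note: on the RPC-versus-REST question raised in the meeting, OpenStack projects typically handle internal RPC with oslo.messaging over an AMQP transport such as RabbitMQ rather than gRPC, which matches tfukushima's guess above. Below is a minimal sketch of what a kuryr-agent-to-kuryr-controller RPC could look like; the topic name, method, and arguments are illustrative assumptions, not an existing Kuryr interface.]

```python
# Minimal oslo.messaging sketch, assuming an AMQP transport (e.g.
# RabbitMQ) as is typical in OpenStack. Topic, server name, method and
# arguments are hypothetical, not an existing Kuryr interface.
import oslo_messaging
from oslo_config import cfg

transport = oslo_messaging.get_transport(cfg.CONF)
target = oslo_messaging.Target(topic='kuryr', server='kuryr-controller')


class ControllerEndpoint(object):
    """RPC methods the central kuryr-controller would expose."""

    def create_port(self, ctxt, network_id, endpoint_id):
        # Here the controller would call Neutron with its own
        # credentials and return the resulting port to the agent,
        # so no credentials ever live on the VMs.
        return {'network_id': network_id, 'endpoint_id': endpoint_id}


# Controller side: serve the endpoint over the message bus.
server = oslo_messaging.get_rpc_server(
    transport, target, [ControllerEndpoint()], executor='blocking')
# server.start() would block-serve; omitted so the sketch stays inert.

# Agent side (on each VM): a synchronous call over the same target.
client = oslo_messaging.RPCClient(transport, target)
# port = client.call({}, 'create_port',
#                    network_id='net-1', endpoint_id='ep-1')
```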