15:00:02 #startmeeting kuryr
15:00:03 Meeting started Mon Aug 10 15:00:02 2015 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:06 The meeting name has been set to 'kuryr'
15:00:12 Hi!
15:00:17 o/
15:00:20 Hello everyone
15:00:23 Welcome to the second meeting of Kuryr!
15:00:37 So, who's here for the meeting?
15:00:51 I'm here.
15:00:58 same
15:01:14 diga?
15:01:22 hey
15:02:31 #info: banix, gsagie, tfukushima, irenab and apuimedo in the meeting
15:02:53 It's probably a bit late for diga
15:03:18 Alright, let's get started
15:03:39 According to the agenda today (thanks banix for updating it)
15:03:57 apuimedo: link?
15:03:59 #link https://wiki.openstack.org/wiki/Meetings/Kuryr#Agenda
15:04:05 #info first topic is #vif-binding-unbinding
15:04:31 we talked with diga earlier and he said he will put up an etherpad with the TODOs on that front
15:04:35 diga, gsagie_ and tfukushima are working on this afaik
15:04:44 very well
15:05:15 apuimedo: how do you see this going forward? has anyone verified that the current vif-binding mechanism is good enough for our use case?
15:05:22 or is it something that still needs to be verified?
15:05:32 Hello daneyon
15:05:36 welcome
15:05:56 The base idea is that Kuryr will use pyroute2 to create the veth pair, create the namespace, put one end into the namespace and then hand over the other veth to the vif-binding infrastructure (port-type dependent)
15:06:12 gsagie: for OVS, which I have tried, it works fine
15:06:50 for midonet, getting the vif device name, namespace and port uuid is enough too, like for OVS
15:07:26 but who will forward the API call?
15:07:37 which API call?
15:07:43 Kuryr will be located at each compute node?
15:07:47 yes
15:07:58 it is a requirement of libnetwork
15:08:03 it is a local thing
15:08:31 ok, so each one at each compute node only gets the local port-creation API calls from the remote driver?
15:08:44 exactly
15:09:12 and the vif_binding code is something that will be in the altered OVS agent code
15:09:15 container
15:09:24 exactly
15:09:30 ok
15:09:35 understood :)
15:09:50 how about having kuryr running on every node and essentially being the agent as well?
15:09:56 kuryr can be configured to support and have binding code for different port types on different compute nodes
15:10:01 are there any issues in doing that?
15:10:16 banix: it is as you say
15:10:34 got it. thx. misunderstood at first.
15:10:38 if the port type supports it
15:10:42 banix: it's basically the agent, but it's running as a container
15:10:47 on the compute node
15:10:49 apuimedo: so it's similar to the workflow of nova-docker to some extent
15:10:54 yup
15:10:57 irenab: indeed
15:11:05 for example, AFAIK, OVS will still have the agent seeing the port created with the uuid
15:11:26 and then the agent will request info for setting up the flows
15:11:34 now, I expect that in Kuryr deployments
15:12:18 whoever packages kuryr:ovs will include both the vif-binding and the agent in the deployment, in containers, with the dependency handled by compose or something
15:12:25 (if you use something like Kolla)
15:12:45 (I may have triggered sdake by mentioning kolla :P )
15:13:25 tfukushima: gsagie_: please sync up with diga to get the etherpad started for this
15:13:36 Ok.
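
To illustrate the flow apuimedo describes at 15:05:56, here is a minimal Python sketch using pyroute2: create a veth pair, move one end into a fresh network namespace, and return the host end for the port-type-specific binding code. The device and namespace naming scheme and the helper name are illustrative assumptions, not the actual Kuryr implementation.

# A rough sketch, not the actual Kuryr code: the device and namespace
# names are made-up conventions, and the namespace stands in for the
# sandbox that libnetwork would normally hand to the driver.
from pyroute2 import IPDB, netns


def plug_endpoint(endpoint_id):
    """Create a veth pair, push one end into a network namespace and
    return the host-side device name for the port-type binding code."""
    host_ifname = 'tap' + endpoint_id[:8]        # assumed naming scheme
    container_ifname = 'cont' + endpoint_id[:8]  # assumed naming scheme
    ns_name = 'kuryr-' + endpoint_id[:8]

    netns.create(ns_name)                        # namespace for the container side

    ipdb = IPDB()
    try:
        # veth pair: one end stays on the host, the peer goes into the netns
        ipdb.create(ifname=host_ifname, kind='veth',
                    peer=container_ifname).commit()
        with ipdb.interfaces[container_ifname] as peer:
            peer.net_ns_fd = ns_name             # move the peer into the netns
        with ipdb.interfaces[host_ifname] as host:
            host.up()
    finally:
        ipdb.release()

    # The host end is then handed over to the vif-binding infrastructure
    # (OVS, midonet, ...), which is port-type dependent and out of scope here.
    return host_ifname
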
15:13:42 and let's see if we can have a meaningful draft for next Monday
15:13:46 apuimedo: ok, i will ping him tomorrow so we can start sharing tasks on it
15:13:50 one thing I wanted to point out
15:14:21 is that you will probably have noticed that I said that Kuryr will plug one veth into the container
15:14:53 however, if and when we support containers on VMs (like ovn has), that part will need to be pluggable as well
15:15:06 I expect to tackle that issue farther down the line though
15:15:12 apuimedo: one veth per port?
15:15:14 apuimedo: that was actually my next question,
15:15:49 apuimedo can you expand on "support containers on VMs".
15:15:52 apuimedo: how does that model work where the agent (Kuryr) on the compute node interacts with container management systems which use nested VMs?
15:15:56 irenab: that is something that we will probably have to revisit
15:16:05 will kuryr be unable to support containers in vm's out of the gate?
15:16:14 apuimedo: "support containers on VMs" is relevant to Magnum as well
15:16:19 as in that model there would probably not be one veth per port
15:17:06 daneyon: well, the first milestone is only for containers running on the same OS as where Kuryr and the network agent run
15:17:34 apuimedo ok
15:17:38 apuimedo: but that is no longer in Neutron scope; for example in OVN we can configure it since OVN has a specific solution for this, but other plugins/mech drivers might not have that support
15:18:12 but the goal is to be able to facilitate the task for OVN/midonet etc when they can multiplex/demultiplex (with a small agent on the VM) to provide networking for containers running on a VM
15:18:13 and what if one way of deploying is running those agents in the VM and coordinating between the host and vm agents?
15:18:22 diga is coming
15:18:30 I think some of these require changes to Neutron that we need to follow and that those interested should contribute to. I put a couple of links in the next section of the Agenda.
15:18:38 Hi Guys
15:18:42 hi diga!
15:18:55 sorry, got delayed
15:19:04 daneyon: yes, a multi-agent solution is the base assumption
15:19:12 diga: thanks for joining so late
15:19:20 :)
15:19:32 #info first milestone scope: networking for containers running on the same OS
15:20:12 daneyon: What is Magnum's plan for nested VMs, are you running an agent per VM?
15:20:25 #action (mid term) spot Neutron changes needed for VM container networking
15:20:55 also, do we need in this case a local (to the VM) remote driver to catch the API calls?
15:21:00 gsagie_: currently neutron provides a network to the VMs and then the VMs set up overlay networking with flannel in vxlan mode
15:21:20 I would like us to be able to provide that once we can accommodate a solution like ovn's
15:21:34 (and get rid of the double overlay)
15:21:49 apuimedo: yes
15:21:50 gsagie_ the libnet remote driver runs in the vm and manages the container net between the vm's running containers/pods. other than requesting a neutron net to join, the container net does nothing with the cloud net.
15:21:52 sounds good
15:22:52 #action diga to create the vif-binding-unbinding etherpad
15:23:09 yes apuimedo
15:23:14 ;-)
15:23:30 daneyon: can you elaborate on "requesting a neutron net to join"?
15:23:38 daneyon, apuimedo: the reason i am asking is that basically now the remote driver which sits in the VM needs to call the Neutron API and also understand the VM port in neutron (to support cases like OVN) <-- hope it's clear, so now it's syncing between the compute host Kuryr and the one in the VM
15:23:49 unless i am missing something, but we can tackle this when we get there
15:24:13 when you create a bay (i.e. a cluster of docker/k8s vm's) you specify the uuid of the neutron net to attach the bay nodes to
15:24:42 same thing as specifying a neutron net id when instantiating a nova vm from the cli
15:24:54 so each bay will consist of a single /16 network?
15:25:31 daneyon: (/16 neutron network)
15:25:57 daneyon: so if i understand correctly, you add the VM that hosts the containers to all the possible networks (depending on the nested container clusters)
15:26:39 gsagie_: I don't think so, a VM belongs to a single bay, so a VM only needs to be on a single network
15:26:49 gsagie_ understood
15:26:50 and provide as many ports on that network as it has pods
15:26:53 ahh ok, that makes sense
15:27:23 well daneyon is that the case? one bay, one network?
15:27:25 gsagie_ what about specifying some type of container port extension?
15:27:43 daneyon: container port extension?
15:27:45 daneyon: that's actually work that we are doing in OVN, you have a parent port
15:27:52 magnum supports adding bay nodes to a single network.
15:28:30 daneyon: so the container port has the VM network as parent port; we use the Neutron binding_profile for that, but it's hopefully going to be part of Neutron's API in the future
15:28:40 there is a link on the agenda to the spec for trunk ports
15:28:41 the VM port (not network)
15:28:54 #link https://wiki.openstack.org/wiki/Neutron/TrunkPort
15:29:08 daneyon: and with that you can attach the nested containers to different Neutron networks
15:29:24 gsagie_ ok, thx for the info
15:29:27 while the VM is still on one network (which will probably be the container orchestration management network)
15:29:53 it doesn't seem close to complete but some patches are expected to be up for review soon
15:30:29 banix: yeah, thanks for the link, the current solution in OVN is to use the binding_profile for that until the API is finalized
15:30:39 so that can be used as a temporary solution as well
15:30:41 banix: will it be released in the L cycle?
15:30:56 banix yes, each bay is constructed from a baymodel (essentially a template of the bay). When you spawn a bay from the baymodel, all the nodes in the bay are on the same neutron net.
15:30:58 yalie: considering the state as of now, i do not think so
15:31:06 banix: thanks
15:32:26 daneyon: got it. thx
15:32:52 gsagie_: please keep in mind the VM container trunking when drafting the vif-binding-unbinding with diga and tfukushima
15:32:52 so we have two use cases to consider for future work: OVN and Magnum
15:33:14 to see if it will pose problems for that second milestone
15:33:31 apuimedo: sure will do
15:33:34 ;-)
15:33:37 thanks
15:33:48 apuimedo: YAGNI? :-)
15:33:59 yagni?
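
A hedged sketch of the parent-port idea discussed around 15:27:45-15:30:39: a Neutron port for the container whose binding profile points at the VM's own Neutron port. The profile keys shown (parent_name, tag) follow the OVN convention mentioned in the log; other backends may expect different keys, and the trunk-port spec linked above is meant to supersede this workaround. Names and credentials are placeholders.

# Hedged sketch of the parent-port idea: a Neutron port for the container
# whose binding profile points at the VM's own Neutron port. The profile
# keys (parent_name, tag) follow the OVN convention mentioned above and
# may differ for other backends; the trunk-port API is meant to replace this.
from neutronclient.v2_0 import client as neutron_client


def create_container_port(neutron, container_net_id, vm_port_id, vlan_tag):
    port_req = {
        'port': {
            'network_id': container_net_id,
            'name': 'kuryr-container-port',   # illustrative name
            'admin_state_up': True,
            'binding:profile': {
                'parent_name': vm_port_id,    # the parent (VM) Neutron port
                'tag': vlan_tag,              # VLAN used to mux/demux in the VM
            },
        }
    }
    return neutron.create_port(port_req)['port']


# Usage (credentials are placeholders):
# neutron = neutron_client.Client(username='admin', password='secret',
#                                 tenant_name='admin',
#                                 auth_url='http://127.0.0.1:5000/v2.0')
# port = create_container_port(neutron, net_id, vm_port_id, vlan_tag=101)
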
15:34:14 let's get to the first milestone :-)
15:34:55 irenab: I just want to make vif-binding-unbinding as minimal as possible
15:35:05 apuimedo: +1
15:35:14 for the first milestone
15:35:30 yes, and probably as unintrusive as possible to the plugins
15:35:33 and, if it is cheap to do, without needing to change that part for the following milestone
15:35:40 that's right
15:35:48 alright, let's move on
15:35:57 and keep this discussion on the etherpad and the ML
15:36:18 #link https://etherpad.openstack.org/p/kuryr-configuration
15:36:29 banix created the etherpad for configuration
15:36:45 this is essentially just a place to discuss. nothing substantial there yet
15:37:29 please update it and note your name so we can follow up
15:37:31 and rightly spotted the issue: two kinds of configuration
15:37:37 docker side and neutron side
15:37:55 whether we keep each thing on its respective side
15:38:03 or we have it together
15:38:46 What would be the Docker configuration? I can imagine the port number could be one.
15:38:55 a second point: whether we want it in /etc/kuryr, env or docker libkv
15:39:23 tfukushima: that is the minimal config indeed
15:39:28 whether to use a socket
15:39:33 tfukushima: yes, that's the main one
15:39:37 *socket file
15:39:43 and if not, which port to listen on
15:40:13 apuimedo: REST endpoint?
15:40:17 banix: let's list the config options in that file
15:40:36 apuimedo: sure
15:40:41 irenab: we must implement all the REST endpoints that the plugin interface determines
15:40:45 tfukushima: right?
15:41:03 yes.
15:41:10 considering how libnetwork plugins are organized, i think these can all be specified in one place
15:41:41 ok
15:41:47 at least for the things we have in mind right now; maybe there will be others that require docker
15:42:17 my initial position is that it would be best if we could keep all the config together
15:42:47 tfukushima: how hard would it be to read this info from whichever kv store Docker uses?
15:42:47 let us put all possible options we can think of on the etherpad and make a decision.
15:42:52 apuimedo: i agree
15:43:16 obviously the easiest way is to have a /etc/kuryr/kuryr.conf
15:43:31 +1
15:43:34 that can be overridden by env variables
15:43:44 apuimedo: and this is the traditional openstack way
15:44:09 irenab: we are the bridge between two communities though, so I want to consider both options
15:44:18 which reminds me
15:44:20 i think that's easiest and makes the most sense, but I may be missing something; at the end of the day docker shouldn't be concerned with one of its plugins' configuration
15:44:43 banix: agree
15:44:48 #action tfukushima to talk to mrjana in #docker-network about retrieving network extra data from the libnetwork kv store
15:44:48 apuimedo: It wouldn't be so hard to read that info from the key-value store, but we need to add the config for that. However, libkv abstracts the backend and we can't know if it's etcd, Consul or ZooKeeper.
15:45:21 tfukushima: the multi-backend thing is actually what makes me lean towards /etc/kuryr/kuryr.conf
15:45:25 good point
15:45:56 do we all agree on /etc/kuryr/kuryr.conf?
15:46:04 +1 from me
15:46:05 apuimedo: +1
15:46:14 apuimedo: /etc/kuryr/kuryr.conf can have sections such as [default], [neutron], [docker] to hold the various pieces of config
15:46:28 SourabhP: good point
15:46:32 +1
15:46:35 +1
15:46:35 nice to see you in the meeting as well
15:46:43 yes
15:46:45 you too yalie
15:46:48 /etc/kuryr/kuryr.conf is also good for integrating with config management systems such as puppet to deploy kuryr
15:46:52 ok then
15:47:11 #info Kuryr will have its config in /etc/kuryr/kuryr.conf
15:47:27 #info with sections
15:47:57 about the action point I put above for tfukushima
15:48:09 it is so that we are able to retrieve the network name
15:48:30 and be able to use it for creating the network in neutron and have user-friendly names
15:48:35 apuimedo: i saw your comment on the review; did not understand it but probably best to follow up there
15:48:50 banix: irenab didn't get it either
15:48:56 so the fault is mine :P
15:49:02 apuimedo: ahhh user friendliness
15:49:19 I'll try to put it in a hopefully clearer way
15:49:42 cool :)
15:49:54 So I'm just wondering whether names are so essential. Users interact with the "docker network" commands, and those provide names to users but not to the remote drivers.
15:50:03 Did we consider having kuryr use a DB? Or was it decided to be stateless?
15:50:26 irenab: it will hurt my heart if we have to use a DB
15:50:51 apuimedo: using a DB could be helpful to keep the network mappings
15:50:57 it would be Docker (KV store/DB) -> Kuryr (DB) -> Neutron (DB)
15:51:09 I don't like DBs enough to have three layers of them :P
15:51:20 apuimedo: i see your point; will discuss on the review
15:51:20 remote drivers need to store info in the k/v store... correct?
15:51:25 but if we can't find a good way after talking to the docker people
15:51:30 we may be forced to
15:51:30 apuimedo: lol, let's try to find a way to leverage the 2 others
15:52:04 daneyon: they should be able to use it, yes
15:52:06 yeah, we should avoid it at all costs
15:52:12 it can be whatever though
15:52:30 so probably we should initially restrict which libkv backends we support
15:52:43 as I'm not aware of a libkv-like library in Python
15:53:23 apuimedo: i think the question irenab asks is the important one, will kuryr be stateless or not
15:53:23 apuimedo if the libkv backend store adheres to the libkv api, why should it matter?
15:54:06 apuimedo i can understand using one of the libkv stores as a reference for the work being done
15:54:14 daneyon: libkv is in Go
15:54:26 so we can't use it to store information in etcd
15:54:34 we would have to use an etcd client for that
15:54:45 why do we need a DB at all? to store what? i am sure i am missing something here
15:54:48 or is there some interface I'm missing with the remote api?
15:54:56 banix: my goal is fully stateless
15:54:59 so is docker, so what is the problem if libkv is in go? All we would be doing is making api calls.
15:55:11 apuimedo: excellent. +1
15:55:12 but if we bump into too many issues, we have to keep the door open
15:55:21 sure
15:55:27 I guess we should consider restart/upgrade and see how it plays out and whether kuryr can be truly stateless
15:55:34 I want to keep Kuryr as simple as possible and thus I want it to be stateless. I think it can be stateless but I'm not sure if there'd be some need for state information.
15:55:50 daneyon: what are the API calls for remote drivers to ask for storing things into libkv?
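
A sketch of how the /etc/kuryr/kuryr.conf agreed at 15:47:11 could be consumed with oslo.config, using the [neutron] and [docker] sections SourabhP suggested. The option names and defaults are illustrative assumptions, not a settled schema.

# Sketch of consuming the agreed /etc/kuryr/kuryr.conf with oslo.config,
# using the [neutron] and [docker] sections suggested above. Option names
# and defaults are illustrative, not a settled schema.
import sys

from oslo_config import cfg

docker_opts = [
    cfg.StrOpt('bind_socket',
               default='/run/docker/plugins/kuryr.sock',
               help='Unix socket the libnetwork remote driver listens on.'),
    cfg.IntOpt('bind_port',
               default=23750,
               help='TCP port to listen on if no socket file is used.'),
]

neutron_opts = [
    cfg.StrOpt('neutron_uri',
               default='http://127.0.0.1:9696',
               help='Neutron API endpoint Kuryr talks to.'),
]

cfg.CONF.register_opts(docker_opts, group='docker')
cfg.CONF.register_opts(neutron_opts, group='neutron')

if __name__ == '__main__':
    # Environment variables could still override these values by checking
    # os.environ before falling back to CONF, as discussed above.
    cfg.CONF(sys.argv[1:], default_config_files=['/etc/kuryr/kuryr.conf'])
    print(cfg.CONF.docker.bind_socket, cfg.CONF.neutron.neutron_uri)
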
15:56:13 if there is any need for state it should be in libkv IMO
15:56:15 irenab: unless we could sit these out and let docker and Neutron do the work…
15:56:29 apuimedo i need to look it up. let me find a ref and i'll post it
15:56:36 daneyon: thanks
15:56:46 to the neutron irc channel
15:56:53 thanks
15:57:06 #info The goal is a stateless daemon
15:57:18 apuimedo: this may require storing some bits in neutron, or trying to keep them only in the docker kv?
15:57:57 irenab: hopefully nowhere; if we see the need, in the kv, and if we are forced to, in the neutron db
15:58:13 let's try to avoid it
15:58:17 i don't see how it will be stateless, the driver will need to store info in some backend. the backend can be libkv, or the driver can use neutron's db backend
15:58:18 the simpler the better
15:58:36 daneyon: which info? the mappings?
15:59:21 daneyon: networks/endpoints will be entities in neutron, but we are missing a clean way to map names/ids
15:59:23 maybe it's just the net name/id info being stored... if that's the case then i guess kuryr will just leverage neutron for that
15:59:37 banix: did you get in contact with the Kolla people? I see the blueprint, thanks for that!
16:00:06 i need to leave the meeting
16:00:11 i think we have run out of time
16:00:19 apuimedo: I simply created a blueprint (does it show it is August? :) ) and it has now moved from the new to the discuss state
16:00:21 daneyon: thanks a lot for attending. It is much appreciated
16:00:33 thanks banix
16:00:39 same, gotta go, thanks everyone for the good points
16:00:40 #link https://blueprints.launchpad.net/kolla/+spec/kuryr-docker-plugin
16:01:06 will sync with diga and tfukushima tomorrow about the vif-binding
16:01:07 #action tfukushima to email the mailing list with the findings about the kv store and the mappings
16:01:19 the item about Magnum was already discussed so we don't need to cover it
16:01:23 thanks a lot to everybody for attending!
16:01:30 #endmeeting
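
One possible way to keep Kuryr stateless, sketching the "leverage neutron for that" idea from 15:59:23: encode the Docker network ID in the Neutron network name and look it up on demand, so no Kuryr-side DB or kv entry is needed. This is a speculative illustration, not something decided in the meeting; the prefix and the use of the name field are assumptions.

# Speculative illustration of the "leverage neutron for that" idea: encode
# the Docker network ID in the Neutron network name and look it up on
# demand, so no Kuryr-side DB or kv entry is needed. The prefix and the
# use of the name field are assumptions, not a decision from the meeting.
NET_NAME_PREFIX = 'kuryr-'


def neutron_net_name(docker_network_id):
    return NET_NAME_PREFIX + docker_network_id


def create_network(neutron, docker_network_id):
    # 'neutron' is a python-neutronclient v2.0 Client instance
    body = {'network': {'name': neutron_net_name(docker_network_id),
                        'admin_state_up': True}}
    return neutron.create_network(body)['network']


def lookup_network(neutron, docker_network_id):
    # list_networks filters server-side by name, so Kuryr keeps no local state
    nets = neutron.list_networks(
        name=neutron_net_name(docker_network_id))['networks']
    return nets[0] if nets else None
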