15:00:02 <apuimedo> #startmeeting kuryr
15:00:03 <openstack> Meeting started Mon Aug 10 15:00:02 2015 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:06 <openstack> The meeting name has been set to 'kuryr'
15:00:12 <apuimedo> Hi!
15:00:17 <banix> o/
15:00:20 <gsagie_> Hello everyone
15:00:23 <apuimedo> Welcome to the second meeting of Kuryr!
15:00:37 <apuimedo> So, who's here for the meeting?
15:00:51 <tfukushima> I'm here.
15:00:58 <gsagie_> same
15:01:14 <gsagie_> diga?
15:01:22 <irenab> hey
15:02:31 <apuimedo> #info banix, gsagie, tfukushima, irenab and apuimedo in the meeting
15:02:53 <apuimedo> It's probably a bit late for diga
15:03:18 <apuimedo> Alright, let's get started
15:03:39 <apuimedo> According to the agenda today (thanks banix for updating it)
15:03:57 <irenab> apuimedo: link?
15:03:59 <banix> #link https://wiki.openstack.org/wiki/Meetings/Kuryr#Agenda
15:04:05 <apuimedo> #info first topic is #vif-binding-unbinding
15:04:31 <gsagie_> we talked with diga earlier and he said he will put up an etherpad with the TODOs on that front
15:04:35 <apuimedo> diga, gsagie_ and tfukushima are working on this afaik
15:04:44 <apuimedo> very well
15:05:15 <gsagie_> apuimedo: how do you see this going forward? has anyone verified that the current vif-binding mechanism is good enough for our use case?
15:05:22 <gsagie_> or is it something that still needs to be verified?
15:05:32 <gsagie_> Hello daneyon
15:05:36 <gsagie_> welcome
15:05:56 <apuimedo> The base idea is that Kuryr will use pyroute2 to create the veth pair, will create the namespace, will put one end into the namespace and then hand over the other veth to the vif-binding infrastructure (port-type dependent)
15:06:12 <banix> gsagie: for OVS that i have tried, it works fine
15:06:50 <apuimedo> for midonet getting the vif device name, namespace and port uuid is enough too, like for OVS
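For readers following along, a rough sketch of the veth handover apuimedo describes, assuming pyroute2's IPRoute and netns helpers; the namespace and interface names are illustrative, not Kuryr's actual naming scheme:

```python
# Sketch only: create a netns, a veth pair, and move one end into the netns.
from pyroute2 import IPRoute, netns

ipr = IPRoute()

# Create the sandbox's network namespace (name is illustrative).
netns.create('kuryr-sandbox')

# Create the veth pair: one end for the container, one for vif binding.
ipr.link('add', ifname='tap-kuryr0', kind='veth', peer='t_ckuryr0')
host_idx = ipr.link_lookup(ifname='tap-kuryr0')[0]
cont_idx = ipr.link_lookup(ifname='t_ckuryr0')[0]

# Move the container end into the namespace and bring the host end up.
ipr.link('set', index=cont_idx, net_ns_fd='kuryr-sandbox')
ipr.link('set', index=host_idx, state='up')

# 'tap-kuryr0' is then handed, together with the Neutron port UUID, to the
# port-type-dependent vif-binding code (OVS, midonet, ...).
```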
15:07:26 <gsagie_> but who will forward the API call?
15:07:37 <apuimedo> which API call?
15:07:43 <gsagie_> Kuryr will be located at each compute node?
15:07:47 <apuimedo> yes
15:07:58 <apuimedo> it is a requirement of libnetwork
15:08:03 <apuimedo> it is a local thing
15:08:31 <gsagie_> ok so each one at each compute node only gets the local port creation API calls from the remote driver?
15:08:44 <apuimedo> exactly
15:09:12 <gsagie_> and the vif_binding code is something that will be in the altered OVS agent code
15:09:15 <gsagie_> container
15:09:24 <apuimedo> exactly
15:09:30 <gsagie_> ok
15:09:35 <gsagie_> understood :)
15:09:50 <banix> how about having kuryr running on every node and essentially being the agent as well?
15:09:56 <apuimedo> kuryr can be configured to support and have binding code for different port types on different compute nodes
15:10:01 <banix> are there any issues, in doing that?
15:10:16 <apuimedo> banix: it is as you say
15:10:34 <banix> got it. thx. misunderstood at first.
15:10:38 <apuimedo> if the port type supports it
15:10:42 <gsagie_> banix: it's basically the agent, but it's running as a container
15:10:47 <gsagie_> on the compute node
15:10:49 <irenab> apuimedo: so it's similar to the workflow of nova-docker to some extent
15:10:54 <banix> yup
15:10:57 <apuimedo> irenab: indeed
15:11:05 <apuimedo> for example, AFAIK, OVS will still have the agent seeing the port created with the uuid
15:11:26 <apuimedo> and then the agent will request info for setting up the flows
15:11:34 <apuimedo> now, I expect that in Kuryr deployments
15:12:18 <apuimedo> whoever packages the kuryr:ovs will include both vif-binding and the agent in the deployment, in containers with dependencies handled by compose or something
15:12:25 <apuimedo> (if you use something like Kolla)
15:12:45 <apuimedo> (I may have triggered sdake by mentioning kolla :P )
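A purely illustrative compose file for the kuryr:ovs packaging idea sketched above; the image names, mounts and options are hypothetical, not an actual Kolla or Kuryr artifact:

```yaml
# Hypothetical docker-compose (v1-era) pairing of the Kuryr driver with the
# OVS agent on a compute node; everything here is a placeholder.
kuryr:
  image: example/kuryr-ovs
  net: host
  privileged: true
  volumes:
    - /etc/kuryr:/etc/kuryr
    - /var/run/openvswitch:/var/run/openvswitch
ovs-agent:
  image: example/neutron-openvswitch-agent
  net: host
  privileged: true
  volumes:
    - /var/run/openvswitch:/var/run/openvswitch
```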
15:13:25 <apuimedo> tfukushima: gsagie_: please sync up with diga to get the etherpad started for this
15:13:36 <tfukushima> Ok.
15:13:42 <apuimedo> and let's see if we can have a meaningful draft for next Monday
15:13:46 <gsagie_> apuimedo : ok, i will ping him tomorrow so we can start sharing tasks on it
15:13:50 <apuimedo> one thing I wanted to point out
15:14:21 <apuimedo> is that you will probably have noticed that I said that Kuryr will plug one veth into the container
15:14:53 <apuimedo> however, if and when we support containers on VMs (like ovn has), that part will need to be pluggable as well
15:15:06 <apuimedo> I expect to tackle that issue farther down the line though
15:15:12 <irenab> apuimedo: one veth per port?
15:15:14 <gsagie_> apuimedo : that was actually my next question,
15:15:49 <daneyon> apuimedo can you expand on "support containers on VMs".
15:15:52 <gsagie_> apuimedo : how does that model work, where the agent (Kuryr) on the compute node interacts with container management systems which use nested VMs
15:15:56 <apuimedo> irenab: that is something that we will probably have to revisit
15:16:05 <daneyon> will kuryr be unable to support containers in VMs out of the gate?
15:16:14 <banix> apuimedo: "support containers on VMs" is relevant to Magnum as well
15:16:19 <apuimedo> as in that model there would probably not be one veth per port
15:17:06 <apuimedo> daneyon: well, the first milestone is only for containers running on the same OS as where Kuryr and the network agent run
15:17:34 <daneyon> apuimedo ok
15:17:38 <gsagie_> apuimedo : but that is no longer in Neutron scope, for example in OVN we can configure it since OVN has a specific solution for this, but other plugins/mech drivers might not have that support
15:18:12 <apuimedo> but the goal is to be able to facilitate the task for OVN/midonet etc when they can multiplex/demultiplex (with a small agent on the VM) to provide networking for containers running on a VM
15:18:13 <daneyon> and what if one way of deploying is running those agents in the VM and coordinating between the host and vm agents?
15:18:22 <gsagie_> diga is coming
15:18:30 <banix> I think some of these require changes to Neutron that we need to follow and that those interested can contribute to. I put a couple of links in the next section of the Agenda.
15:18:38 <diga> Hi Guys
15:18:42 <gsagie_> hi diga!
15:18:55 <diga> sorry, got delayed
15:19:04 <apuimedo> daneyon: yes, a multi agent solution is the base assumption
15:19:12 <apuimedo> diga: thanks for joining so late
15:19:20 <diga> :)
15:19:32 <apuimedo> #info first milestone: networking for containers running on the same OS scope
15:20:12 <gsagie_> daneyon: What is the magnum plan for nested VMs, are you running an agent per VM?
15:20:25 <apuimedo> #action (mid term) spot Neutron changes needed for VM container networking
15:20:55 <gsagie_> also, do we need in this case a local (to the VM) remote driver to catch the API calls?
15:21:00 <apuimedo> gsagie_: currently neutron provides a network to the VMs and then the VMs set up overlay networking with flannel on vxlan mode
15:21:20 <apuimedo> I would like us to be able to provide that once we can accommodate a solution like ovn's
15:21:34 <apuimedo> (and get rid of the double overlay)
15:21:49 <gsagie_> apuimedo: yes
15:21:50 <daneyon> gsagie_ the libnet remote driver runs in the vm and manages the container net between the vm's running containers/pods. other than requesting a neutron net to join, the container net does nothing with the cloud net.
15:21:52 <gsagie_> sounds good
15:22:52 <apuimedo> #action diga to create the vif-binding-unbinding etherpad
15:23:09 <diga> yes apuimedo
15:23:14 <apuimedo> ;-)
15:23:30 <apuimedo> daneyon: can you elaborate on "requesting a neutron net to join"?
15:23:38 <gsagie_> daneyon, apuimedo : the reason i am asking is that the remote driver which sits in the VM now needs to call the Neutron API and also understand the VM port in neutron (to support cases like OVN) <-- hope it's clear, so now it's syncing between the compute host Kuryr and the one in the VM
15:23:49 <gsagie_> unless i'm missing something, but we can tackle this when we get there
15:24:13 <daneyon> when you create a bay (i.e. a cluster of docker/k8s VMs) you specify the uuid of the neutron net to attach the bay nodes to
15:24:42 <daneyon> same thing as specifying a neutron net id when instantiating a nova vm from the cli
15:24:54 <apuimedo> so each bay will consist of a single /16 network?
15:25:31 <apuimedo> daneyon: (/16 neutron network)
15:25:57 <gsagie_> daneyon: so if i understand correctly, you add the VM that hosts the containers to all the possible networks (depending on the nested container clusters)
15:26:39 <apuimedo> gsagie_: I don't think so, a VM belongs to a single bay, so a VM only needs to be on a single network
15:26:49 <daneyon> gsagie_ understood
15:26:50 <apuimedo> and provide as many ports on that network as it has pods
15:26:53 <gsagie_> ahh ok, that make sense
15:27:23 <banix> well daneyon is that the case? one bay, one network?
15:27:25 <daneyon> gsagie_ what about specifying some type of container port extension?
15:27:43 <apuimedo> daneyon: container port extension?
15:27:45 <gsagie_> daneyon: that's actually work that we are doing in OVN, you have a parent port
15:27:52 <daneyon> magnum supports adding bay nodes to a single network.
15:28:30 <gsagie_> daneyon: so the container port has the VM network as its parent port, we use the Neutron binding_profile for that, but it's hopefully going to be part of Neutron's API in the future
15:28:40 <banix> there is a link on the agenda to spec for trunk ports
15:28:41 <gsagie_> the VM port (not network)
15:28:54 <banix> #link https://wiki.openstack.org/wiki/Neutron/TrunkPort
15:29:08 <gsagie_> daneyon: and with that you can attach the nested containers to different Neutron networks
15:29:24 <daneyon> gsagie_ ok, thx for the info
15:29:27 <gsagie_> while the VM is still in one network (which will probably be the container orchestration management network)
15:29:53 <banix> it doesn't seem close to complete but some patches are expected to be up for review soon
15:30:29 <gsagie_> banix: yeah, thanks for the link, the current solution in OVN is to use the binding_profile for that until the API is finalized
15:30:39 <gsagie_> so that can be used as a temporary solution as well
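A hedged sketch of the binding_profile workaround gsagie_ describes, assuming python-neutronclient; the parent_name/tag keys follow the OVN containers proposal and other plugins may not honour them, and all identifiers below are placeholders:

```python
# Sketch: attach a container port to a Neutron network, trunked through the
# parent VM port via binding:profile (OVN-style, pre trunk-port API).
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://127.0.0.1:5000/v2.0')

container_port = neutron.create_port({'port': {
    'network_id': 'CONTAINER-NET-UUID',  # Neutron network for the container
    'binding:profile': {
        'parent_name': 'VM-PORT-UUID',   # the VM's own Neutron port
        'tag': 42,                       # VLAN tag to mux/demux on the VM vif
    },
}})
```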
15:30:41 <yalie> banix: will it be released in the L cycle?
15:30:56 <daneyon> banix yes, each bay is constructed from a baymodel (essentially a template of the bay). When you spawn a bay from the baymodel, all the nodes in the bay are on the same neutron net.
15:30:58 <banix> yalie: considering the state as of now, i do not think so
15:31:06 <yalie> banix: thx
15:32:26 <banix> daneyon: got it. thx
15:32:52 <apuimedo> gsagie_: please keep in mind the VM container trunking when drafting the vif-binding-unbinding with diga and tfukushima
15:32:52 <banix> so we have two use cases to consider for future work: OVN and Magnum
15:33:14 <apuimedo> to see if it will pose problems for that second milestone
15:33:31 <gsagie_> apuimedo : sure will do
15:33:34 <apuimedo> ;-)
15:33:37 <apuimedo> thanks
15:33:48 <irenab> apuimedo: YAGNI? :-)
15:33:59 <apuimedo> yagni?
15:34:14 <irenab> lets get to the first milestone :-)
15:34:55 <apuimedo> irenab: I just want to make vif-binding-unbinding as minimal as possible
15:35:05 <irenab> apuimedo: +1
15:35:14 <apuimedo> for the first milestone
15:35:30 <gsagie_> yes, and probably as minimally intrusive to the plugins as possible
15:35:33 <apuimedo> and if it is cheap to do, without needing to change that part for the following milestone
15:35:40 <apuimedo> that's right
15:35:48 <apuimedo> alright, let's move on
15:35:57 <apuimedo> and keep this discussion on the etherpad and the ML
15:36:18 <apuimedo> #link https://etherpad.openstack.org/p/kuryr-configuration
15:36:29 <apuimedo> banix created the etherpad for configuration
15:36:45 <banix> this is essentially just a place to discuss. nothing substantial there yet
15:37:29 <banix> please update and note your name so we can follow up
15:37:31 <apuimedo> and rightly spotted the issue, two kinds of configuration
15:37:37 <apuimedo> docker side and neutron side
15:37:55 <apuimedo> whether we keep one thing on its respective side
15:38:03 <apuimedo> or we have it together
15:38:46 <tfukushima> What would be the Docker configuration? I can imagine the port number could be one.
15:38:55 <apuimedo> a second point, whether we want it in /etc/kuryr, env or docker libkv
15:39:23 <apuimedo> tfukushima: that is the minimal config indeed
15:39:28 <apuimedo> whether to use a socket
15:39:33 <banix> tfukushima: yes that's the main one
15:39:37 <apuimedo> *socket file
15:39:43 <apuimedo> and if not, which port to listen on
15:40:13 <irenab> apuimedo: REST end point?
15:40:17 <apuimedo> banix: let's list the config options in that file
15:40:36 <banix> apuimedo: sure
15:40:41 <apuimedo> irenab: we must implement all the REST endpoints that the plugin interface determines
15:40:45 <apuimedo> tfukushima: right?
15:41:03 <tfukushima> yes.
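A minimal sketch of the endpoints a libnetwork remote driver has to answer, assuming a Flask server; the handler bodies are stubs rather than Kuryr's actual logic:

```python
# Sketch: the remote-driver REST surface libnetwork's plugin interface expects.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/Plugin.Activate', methods=['POST'])
def activate():
    # Advertise which plugin interfaces this driver implements.
    return jsonify({'Implements': ['NetworkDriver']})

@app.route('/NetworkDriver.CreateNetwork', methods=['POST'])
def create_network():
    data = request.get_json(force=True)
    # data['NetworkID'] would be mapped to a Neutron network here.
    return jsonify({})

# ...and likewise DeleteNetwork, CreateEndpoint, EndpointOperInfo,
# DeleteEndpoint, Join and Leave, even if some end up as no-ops.

if __name__ == '__main__':
    app.run(port=2377)  # port is illustrative; see the config discussion
```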
15:41:10 <banix> considering how libnetwork plugins are organized, i think these can be all specified in one place
15:41:41 <irenab> ok
15:41:47 <banix> at least for the things we have in mind right now; maybe there will be others that require docker
15:42:17 <apuimedo> my initial position is that it would be best if we could keep all the config together
15:42:47 <apuimedo> tfukushima: how hard would it be to read this info from whichever kv store Docker uses?
15:42:47 <banix> let us put all possible options we can thing of on the etherpad and make a decision.
15:42:52 <banix> apuimedo: i agree
15:43:01 <banix> s/thing/think
15:43:16 <apuimedo> obviously the easiest way is to have a /etc/kuryr/kuryr.conf
15:43:31 <diga> +1
15:43:34 <apuimedo> that can be overridden by env variables
15:43:44 <irenab> apuimedo: and this is the traditional openstack way
15:44:09 <apuimedo> irenab: we are the bridge between two communities though, so I want to consider both options
15:44:18 <apuimedo> which reminds me
15:44:20 <banix> i think that's easiest and makes the most sense but i may be missing something; at the end of the day docker shouldn't be concerned with one of its plugins' configuration
15:44:43 <irenab> banix: agree
15:44:48 <apuimedo> #action tfukushima to talk to mrjana in #docker-network about retrieving network extra data from the libnetwork kv store
15:44:48 <tfukushima> apuimedo: It wouldn't be so hard to read that info from the key-value store but we'd need to add the config for that. However, libkv abstracts the backend and we can't know if it's etcd, Consul or ZooKeeper.
15:45:21 <apuimedo> tfukushima: the multi backend thing is actually what makes me lean towards /etc/kuryr/kuryr.conf
15:45:25 <apuimedo> good point
15:45:56 <apuimedo> do we all agree on /etc/kuryr/kuryr.conf?
15:46:04 <apuimedo> +1 from me
15:46:05 <irenab> apuimedo: +1
15:46:14 <SourabhP> apuimedo: /etc/kuryr/kuryr.conf can have sections such as [default], [neutron], [docker] to hold various pieces of config
15:46:28 <apuimedo> SourabhP: good point
15:46:32 <diga> +1
15:46:35 <yalie> +1
15:46:35 <apuimedo> nice to see you in the meeting as well
15:46:43 <gsagie_> yes
15:46:45 <apuimedo> you too yalie
15:46:48 <SourabhP> The /etc/kuryr/kuryr.conf is also good to integrate with config management systems such as puppet to deploy kuryr
15:46:52 <apuimedo> ok then
15:47:11 <apuimedo> #info Kuryr will have its info in /etc/kuryr/kuryr.conf
15:47:27 <apuimedo> #info with sections
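A hypothetical shape for /etc/kuryr/kuryr.conf along the lines SourabhP suggests; every option name here is illustrative, not a settled schema:

```ini
[DEFAULT]
# Socket file (or TCP port, if no socket is given) the driver listens on.
listen_socket = /var/run/kuryr/kuryr.sock
# listen_port = 2377

[neutron]
neutron_url = http://127.0.0.1:9696
auth_strategy = noauth

[docker]
docker_url = unix://var/run/docker.sock
```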
15:47:57 <apuimedo> about the action point I put above for tfukushima
15:48:09 <apuimedo> it is so that we are able to retrieve the network name
15:48:30 <apuimedo> and be able to use it for creating the network in neutron and have user friendly names
15:48:35 <banix> apuimedo: i saw your comment on review; did not understand but probably best to follow up there
15:48:50 <apuimedo> banix: irenab didn't get it either
15:48:56 <apuimedo> so the fault is mine :P
15:49:02 <banix> apuimedo: ahhh user friendliness
15:49:19 <apuimedo> I'll try to put it in a hopefully clearer way
15:49:42 <banix> cool :)
15:49:54 <tfukushima> So I'm just wondering if names are so essential. Users interact with "docker network" commands, which provide names to users, but not to the remote drivers.
15:50:03 <irenab> Did we consider kuryr using a DB? Or was it decided to be stateless?
15:50:26 <apuimedo> irenab: it will hurt my heart if we have to use a DB
15:50:51 <irenab> apuimedo: using a DB could be helpful to keep the network mappings
15:50:57 <apuimedo> it would be Docker (KV store/DB) -> Kuryr (DB) -> Neutron (DB)
15:51:09 <apuimedo> I don't like DBs enough to have three layers of them :P
15:51:20 <banix> apuimedo: i see your point; will discuss on review
15:51:20 <daneyon> remote drivers need to store info in the k/v store... correct?
15:51:25 <apuimedo> but if we can't find a good way after talking to docker people
15:51:30 <apuimedo> we may be forced to
15:51:30 <irenab> apuimedo: lol, let's try to find a way to leverage the 2 others
15:52:04 <apuimedo> daneyon: they should be able to use it, yes
15:52:06 <banix> yeah we should avoid it at all cost
15:52:12 <apuimedo> it can be whatever though
15:52:30 <apuimedo> so probably we should initially restrict which libkv backends we support
15:52:43 <apuimedo> as I'm not aware of a libkv-like library in Python
15:53:23 <banix> apuimedo: i think the question irenab asks is the important one, will kuryr be stateless or not
15:53:23 <daneyon> apuimedo if the libkv backend store adheres to the libkv api, why should it matter?
15:54:06 <daneyon> apuimedo i can understand using one of the libkv stores as a reference for the work being done
15:54:14 <apuimedo> daneyon: libkv is in Go
15:54:26 <apuimedo> so we can't use it to store information on etcd
15:54:34 <apuimedo> we would have to use an etcd client for that
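For illustration, reading libnetwork's stored data directly could look like the following, assuming Docker's libkv backend happens to be etcd and using the python-etcd client; the key prefix is an assumption about libkv's layout, which is exactly the multi-backend problem being discussed:

```python
# Sketch: peek at network records libnetwork persisted in an etcd backend.
import etcd

client = etcd.Client(host='127.0.0.1', port=2379)
result = client.read('/docker/network/v1.0/network', recursive=True)
for node in result.children:
    print(node.key, node.value)
```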
15:54:45 <banix> why do we need a DB at all? to store what? i am sure i am missing something here
15:54:48 <apuimedo> or is there some interface I'm missing with the remote api?
15:54:56 <apuimedo> banix: my goal is fully stateless
15:54:59 <daneyon> so is docker, so what is the problem if libkv is in go? All we would be doing is making api calls.
15:55:11 <banix> apuimedo: excellent. +1
15:55:12 <apuimedo> but if we bump in too many issues, we have to keep the door open
15:55:21 <banix> sure
15:55:27 <irenab> I guess we should consider restart/upgrade and see how it plays out and whether kuryr can be truly stateless
15:55:34 <tfukushima> I want to keep Kuryr as simple as possible and thus I want it to be stateless. I think it can be, but I'm not sure if there'd be some need for state information.
15:55:50 <apuimedo> daneyon: what are the API calls for remote drivers to store things in libkv?
15:56:13 <apuimedo> if there is any need for state it should be in libkv IMO
15:56:15 <banix> irenab: unless we could sit these out and let docker and Neutron do the work…
15:56:29 <daneyon> apuimedo i need to look it up. let me find a ref and i'll post it
15:56:36 <apuimedo> daneyon: thanks
15:56:46 <daneyon> to the neutron irc channel
15:56:53 <apuimedo> thanks
15:57:06 <apuimedo> #info The goal is a stateless daemon
15:57:18 <irenab> apuimedo: this may require storing some bits in neutron, or trying to keep only the docker kv?
15:57:57 <apuimedo> irenab: hopefully nowhere, if we see the need, in kv, if we are forced to, in neutron db
15:58:13 <apuimedo> let's try to avoid it
15:58:17 <daneyon> i don't see how it will be stateless, the driver will need to store info in some backend. the backend can be libkv or have the driver use neutron's db backend
15:58:18 <apuimedo> the simpler the better
15:58:36 <apuimedo> daneyon: which info? the mappings?
15:59:21 <irenab> daneyon: networks/endpoints will be entities in neutron, but we're missing a clean way to map names/ids
15:59:23 <daneyon> maybe it's just net name/id, info being stored... if that's the case then i guess kuryr will just leverage neutron for that
15:59:37 <apuimedo> banix: did you get in contact with the Kolla people? I see the blueprint, thanks for that!
16:00:06 <daneyon> i need to leave the meeting
16:00:11 <daneyon> i think we have run out of time
16:00:19 <banix> apuimedo: I simply created a blueprint (does it show it is August? :) ) and it is now moved from new to discuss state
16:00:21 <apuimedo> daneyon: thanks a lot for attending daneyon. It is much appreciated
16:00:33 <apuimedo> thanks banix
16:00:39 <gsagie_> same, gotta go , thanks everyone for the good points
16:00:40 <apuimedo> #link https://blueprints.launchpad.net/kolla/+spec/kuryr-docker-plugin
16:01:06 <gsagie_> will sync with diga and tfukushima tomorrow about the vif-binding
16:01:07 <apuimedo> #action tfukushima to email the Mailing list with the findings about the kv store and the mappings
16:01:19 <banix> item about Magnum already discussed so don't need to cover
16:01:23 <apuimedo> thanks a lot to everybody for attending!
16:01:30 <apuimedo> #endmeeting