15:03:17 <banix> #startmeeting kuryr
15:03:17 <openstack> Meeting started Tue Jan 26 15:03:17 2016 UTC and is due to finish in 60 minutes.  The chair is banix. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:03:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:03:20 <openstack> The meeting name has been set to 'kuryr'
15:03:22 <fawadkhaliq> apuimedo: lol
15:03:25 <apuimedo> it is alive!
15:03:29 <gsagie> maybe it is :)
15:03:33 <apuimedo> :-)
15:03:47 <apuimedo> banix, you are chair and table
15:03:52 <apuimedo> set the topics ;-)
15:03:59 <banix> #chair apuimedo
15:04:00 <openstack> Current chairs: apuimedo banix
15:04:04 <apuimedo> darn
15:04:12 <apuimedo> that happens for talking too much
15:04:17 <banix> :)
15:04:20 <fawadkhaliq> good one banix ;-)
15:04:25 <apuimedo> #topic kubernetes libnetwork usage
15:04:34 <apuimedo> mmm
15:04:44 <irenab> apuimedo: not sure about the topic ...
15:04:53 <apuimedo> seems it doesn't work
15:04:53 <irenab> it's either first or second ...
15:05:01 <apuimedo> well, everybody read it
15:05:19 <apuimedo> in this topic I would like to go through the current state
15:05:40 <irenab> kube already has CNI support:   https://github.com/kubernetes/kubernetes/tree/master/pkg/kubelet/network/cni
15:05:53 <irenab> and it seems to be the preferred way to go
15:05:55 <apuimedo> and in my opinion I would put to rest the idea of just holding out for eventual libnetwork support
15:06:09 <apuimedo> and relying on configuration to plug to a network
15:06:17 <apuimedo> do we all agree with that?
15:06:25 <gsagie> i do
15:06:26 <vikasc> +1
15:06:28 <irenab> +1
15:06:35 <qwebirc16548> um, what does that mean exactly?
15:06:44 <banix> +0.8
15:06:55 <fkautz> It's unlikely k8s will support libnetwork at this point
15:06:58 <qwebirc16548> I do not see much movement toward libnetwork among the kube community
15:07:00 <baohua_> +0.5
15:07:31 <qwebirc16548> If you do not care about k8s load balancers, we already have all that we need from k8s
15:07:34 <fkautz> Unless something fundamentally changes in either k8s or docker philosophy
15:07:44 <qwebirc16548> we can write a CNI plugin that invokes `docker network` to do what is needed.
15:08:11 <banix> qwebirc16548: that’s what I am wondering if we can do
15:08:18 <irenab> qwebirc16548: can you elaborate?
15:08:24 <qwebirc16548> sure...
15:08:32 <fawadkhaliq> qwebirc16548: not sure it's a straightforward mapping
15:08:34 <fawadkhaliq> irenab: +1
15:08:51 <qwebirc16548> In the CNI plugin's add-container-to-network operation, use `docker network` to disconnect from "none" and then connect to the desired network.
15:09:00 <qwebirc16548> This is assuming it can determine the desired network.
15:09:02 <apuimedo> qwebirc16548: doing a CNI plugin that calls docker network is still abandoning direct libnetwork ideas
15:09:12 <qwebirc16548> I am thinking of a simple scenario with one network per k8s Namespace
15:09:33 <apuimedo> qwebirc16548: are you Shu Tao?
15:09:49 <qwebirc16548> When the CNI plugin invokes `docker network`, this goes through libnetwork.  So no abandonment there.
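A minimal sketch (not an agreed design) of what qwebirc16548 describes: a CNI ADD handler that shells out to `docker network`, so the real work still flows through libnetwork and the existing Kuryr remote driver. The "kuryr-" naming and the use of K8S_POD_NAMESPACE from CNI_ARGS are assumptions for illustration; DEL would be the mirror image.

```python
#!/usr/bin/env python
# Hypothetical "option T" translation: a CNI ADD that drives `docker network`.
import json
import os
import subprocess
import sys


def cni_add():
    container_id = os.environ["CNI_CONTAINERID"]   # set by the CNI runtime
    conf = json.load(sys.stdin)                    # network config on stdin (unused here)
    # Assumption: the desired docker/libnetwork network is derived from the
    # k8s Namespace passed via CNI_ARGS (e.g. K8S_POD_NAMESPACE=...).
    cni_args = dict(kv.split("=", 1)
                    for kv in os.environ.get("CNI_ARGS", "").split(";") if kv)
    network = "kuryr-" + cni_args.get("K8S_POD_NAMESPACE", "default")

    # Detach from the "none" network the pod infra container started on,
    # then attach to the Neutron-backed network; both calls go through
    # libnetwork, so the existing Kuryr remote driver keeps doing the work.
    subprocess.check_call(["docker", "network", "disconnect", "none", container_id])
    subprocess.check_call(["docker", "network", "connect", network, container_id])


if __name__ == "__main__":
    if os.environ.get("CNI_COMMAND") == "ADD":
        cni_add()
```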
15:10:01 <qwebirc16548> No, I am not Shu Tao.  But I know him.
15:10:10 <vikasc> qwebirc16548: namespace maps to tenants in k8s
15:10:13 <apuimedo> Mike Spreitzer then I bet :P
15:10:18 <qwebirc16548> bingo
15:10:22 <apuimedo> anyway
15:10:40 <vikasc> qwebirc16548: then one tenant having more networks is very likely
15:10:44 <qwebirc16548> I am supposing each tenant gets one network
15:10:52 <qwebirc16548> Yes, that's the next level
15:11:00 <qwebirc16548> doing that will require more
15:11:09 <vikasc> qwebirc16548: hmm
15:11:21 <apuimedo> doing a CNI translation to the current libnetwork driver is still doing a CNI plugin
15:11:35 <qwebirc16548> so?
15:11:41 <apuimedo> so, can we establish that we are doing that before deciding in which manner to go?
15:11:59 <banix> yes, but it will be a simpler one which will be able to utilize our Kuryr plugin
15:12:03 <qwebirc16548> I will do it myself if nobody else gets to it sooner
15:12:31 <apuimedo> ok, moving on
15:12:38 <fkautz> i don't think it'll be that simple, but worth looking into
15:12:39 <gsagie> so you are basically adding lib network support to cabernets, in some way which is probably not optimal but good enough
15:12:41 <apuimedo> #topic CNI plugin
15:12:48 <gsagie> kubernetes
15:12:54 <salv-orlando> gsagie: love that autocorrect
15:12:56 <qwebirc16548> BTW I see work in the k8s networking group that will pass more info, so that a tenant can have multiple networks
15:13:01 <gsagie> :)
15:13:01 <vikasc> cni plugin takes just two args, ADD and DELETE. To utilize existing kuryr, the only way is to refactor kuryr
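For reference, the CNI calling convention vikasc refers to looks roughly like this (a sketch, not Kuryr code): one binary driven by CNI_COMMAND, with the pod's netns and interface name in the environment, the network config as JSON on stdin and a result as JSON on stdout, versus the many REST endpoints a libnetwork remote driver implements; hence the refactoring.

```python
import json
import os
import sys


def main():
    command = os.environ["CNI_COMMAND"]        # "ADD" or "DEL"
    netns = os.environ.get("CNI_NETNS")        # path to the pod's network namespace
    ifname = os.environ.get("CNI_IFNAME")      # interface name to create inside it
    conf = json.load(sys.stdin)                # network configuration document

    if command == "ADD":
        # ...call into shared kuryr/neutron code to create and bind a port...
        result = {"ip4": {"ip": "192.168.0.5/24"}}   # placeholder result shape
        json.dump(result, sys.stdout)
    elif command == "DEL":
        # ...unbind and delete the port...
        pass


if __name__ == "__main__":
    main()
```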
15:13:02 <apuimedo> xD
15:13:23 <apuimedo> qwebirc16548: for the openshift-namespaces
15:13:35 <apuimedo> I also saw that
15:13:45 <salv-orlando> vikasc: which I agree with, but what is the contributors' opinion on developing and maintaining both plugins?
15:13:48 <qwebirc16548> Unfortunately I have been away for a week, wrestling with Ursula
15:14:00 <apuimedo> alright, so basically we have two approaches
15:14:09 <banix> vikasc: I think the suggestion is to use the docker (network) API to implement the CNI API
15:14:11 <qwebirc16548> I think I want to make a counter-proposal, that keeps tenant = Namespace, but have not written it yet
15:14:14 <apuimedo> refactor kuryr and have two different frontends, libnetwork and cni
15:14:36 <apuimedo> or have cni be a translation to libnetwork with a bit of extra configuration/assumptions
15:14:50 <baohua_> like this idea
15:14:51 <vikasc> banix: yes, in that case we will not be able to leverage any of the existing work
15:15:07 <banix> vikasc: why not?
15:15:16 <apuimedo> baohua_: which idea?
15:15:33 <baohua_> your idea of two frontends but one kuryr; we should try to leverage the current kuryr code as much as possible
15:15:35 <salv-orlando> apuimedo: the latter implies you're assuming CNI will always be backed by docker.
15:15:38 <fkautz> apuimedo: are you suggesting a side channel with extra info passed to support anything libnetwork doesn't?
15:15:50 <salv-orlando> now that might be true but at least on the table docker's not the only one
15:15:59 <apuimedo> salv-orlando: well, I'm cheating actually
15:16:11 <salv-orlando> apuimedo: you're so openstack ;)
15:16:14 <apuimedo> I actually think that it is not option 1 or option 2
15:16:36 <apuimedo> I think it is option 1 for right now. Then make a proper option 2
15:16:46 <irenab> apuimedo: can you please identify option 1 and option2, so we can refer during discussion
15:16:50 <baohua_> should not wait for the decision from the k8s team, so start with option #1
15:16:54 <salv-orlando> I mean option 2 could be a short term implementation for the CNI plugin
15:17:00 <salv-orlando> and then we can iterate over it
15:17:33 <qwebirc16548> can someone please clarify which is 1 and what is 2?
15:17:34 <apuimedo> option T -> translate
15:17:40 <vikasc> banix: i meant whatever we have in kuryr today is libnetwork API handlers. calling docker network will not fit into the cni model
15:17:52 <qwebirc16548> why not?
15:17:53 <apuimedo> option F -> different frontends
15:18:05 <baohua_> option #F can keep the code cleaner, and flexible for further possible changes.
15:18:22 <apuimedo> baohua_: agreed
15:18:24 <banix> yes, the choice is whether we expand the scope of Kuryr to support container networking (docker and others) through OpenStack networking. I think the answer is yes.
15:18:25 <fkautz> Having worked on building such a bridge, I agree with vikasc
15:18:27 <baohua_> option #T requires the assumption that they finally get to libnetwork
15:18:29 <apuimedo> IMO that's the way to go for the future
15:18:58 <irenab> so option F is to have CNI to call neutron?
15:19:01 <apuimedo> option T would be something that we use short term and for no longer than the end of the Mitaka cycle
15:19:14 <apuimedo> irenab: that's right
15:19:15 <qwebirc16548> vikasc: why does calling `docker network` CLI not fit into the CNI model?
15:19:59 <fkautz> qwebirc16548: part of it is what info is owned by the user vs libnetwork, e.g. things like which network namespace is being used are not exposed in libnetwork
15:20:02 <apuimedo> qwebirc16548: I don't think that it can't fit. I think that it may not offer as much potential as a specialized frontend in the mid/long term
15:20:19 <fkautz> K8s wrote a blog post on why libnetwork doesn't work for them on their blog
15:20:24 <vikasc> apuimedo: +1
15:20:26 <fkautz> Many of the concerns will apply here
15:20:28 <vikasc> fkautz: +1
15:20:51 <qwebirc16548> Yes, I read that blog post and agree
15:20:52 <fkautz> But a lot of it is in who owns what and what gets exposed
15:20:54 <banix> #link http://blog.kubernetes.io/2016/01/why-Kubernetes-doesnt-use-libnetwork.html
15:20:55 <baohua_> oh, more concerns now.
15:20:57 <qwebirc16548> Both sides have moved on since
15:21:01 <fkautz> Can't make decisions if you don't have the info
15:21:03 <irenab> k8s is not Docker specific, so option T will limit us
15:21:18 <fawadkhaliq> irenab: +1
15:21:24 <qwebirc16548> oh, that's newer
15:21:30 <vikasc> irenab: nice point
15:21:40 <baohua_> so we have the same context now
15:21:53 <salv-orlando> I see translation only as a stopgap measure - ie: something that can work while a proper plugin not dependent on libnetwork is implemented
15:21:54 <banix> so let’s back up for a second if you all agree
15:22:06 <irenab> can we drop option T ?
15:22:12 <apuimedo> qwebirc16548: I see value right now in working on a thin CNI plugin that does a network per namespace.
15:22:24 <apuimedo> as salv-orlando says, a stopgap measure
15:22:26 <fkautz> Right, that's my primary concern with a bridge. I would love to see a generic bridge exist. Maybe a bridge specific for Kuryr is more attainable because you can make assumptions
15:22:39 <qwebirc16548> Yes, I suggest starting there and then expanding as both k8s and docker evolve
15:22:52 <banix> the main issue here is that we are saying we want to support all container runtimes and if that's the case relying on libnetwork doesn't make sense, long term.
15:22:54 <banix> right?
15:22:57 <vikasc> i would prefer a separate plugin sharing libs with the libnetwork plugin
15:23:28 <irenab> vikasc: I share same understanding
15:23:30 <apuimedo> banix: that's right. I never intended kuryr to be just a libnetwork driver, but bringing neutron to containers
15:23:30 <salv-orlando> vikasc: I think that's attainable at no cost.
15:23:54 <baohua_> banix: agree, we should expand more.
15:24:02 <vikasc> salv-orlando: i too believe so
15:24:10 <apuimedo> Do we all agree that long term, we see option F as the right one?
15:24:14 <baohua_> apuimedo:+1
15:24:18 <apuimedo> I'll put my +1
15:24:22 <vikasc> +1
15:24:25 <banix> apuimedo: yes
15:24:34 <fkautz> +1 on F
15:25:02 <fawadkhaliq> apuimedo: +1 yes, until there is one option left ;-)
15:25:05 <apuimedo> irenab: ^^, gsagie ^^
15:25:06 <irenab> +1
15:25:09 <gsagie> +1
15:25:26 <apuimedo> alright, I think we have good enough support
15:25:42 <baohua_> agile decision:)
15:25:46 <irenab> the question is if we start with option F or you still want the short term option T?
15:25:48 <apuimedo> #info The long term goal is to have different frontends for different platforms
15:26:14 <apuimedo> #topic implementation of a works-now translation layer for Kubernetes (with docker)
15:26:15 <vikasc> my opinion start with F
15:26:19 <baohua_> i suggest start with option #F directly
15:26:23 <irenab> +1
15:26:28 <gsagie> start with option F, we can work in parallel if anyone wants to start working on the T option as well
15:26:33 <apuimedo> first of all
15:26:38 <vikasc> gsagie: +1
15:26:44 <apuimedo> don't get so hasty with the votes :P
15:26:59 <irenab> gsagie: agree, it is orthogonal
15:27:00 <apuimedo> (╯° °)╯彡┻━┻
15:27:12 <fkautz> +1 chair flip :p
15:27:21 <fkautz> I assume that's a bench
15:27:29 <apuimedo> please qwebirc16548, could you give us more information about the design for the translation layer?
15:27:37 <qwebirc16548> sure
15:27:38 <apuimedo> fkautz: it's a bar stool
15:27:44 <baohua_> guangdang~
15:27:46 <qwebirc16548> I am thinking of starting simple, as I said...
15:28:08 <qwebirc16548> Actually, in my environment this is all behind another API layer anyway, which makes it easier...
15:28:27 <qwebirc16548> With a k8s NS per tenant, and a network per tenant, all not visible to the real client
15:28:32 <apuimedo> with the level of information that reaches the CNI plugin, qwebirc16548, what is missing for doing the translation?
15:28:34 <qwebirc16548> implicit isolation between tenants
15:28:59 <qwebirc16548> My current understanding is nothing is lacking for my under-the-covers isolation
15:29:10 <qwebirc16548> I can invent a network name based on the Namespace
15:29:11 <irenab> qwebirc16548: seems you assume specific app deployment use case, right?
15:29:13 <qwebirc16548> and away I go.
15:29:23 <qwebirc16548> Yes, like I said.
15:29:35 <irenab> namespace = Tenant?
15:29:36 <qwebirc16548> that is, what I said is all that I assume (that I remember right now)
15:29:40 <qwebirc16548> yes
15:30:06 <irenab> one network per Tenant?
15:30:14 <qwebirc16548> irenab: yes
15:30:30 <qwebirc16548> As I said, my use pattern is implicit isolation between tenants.
15:30:46 <irenab> qwebirc16548: does it correspond to the kube-sig-net proposal 2?
15:30:55 <salv-orlando> qwebirc16548: which is cool. What would be the corresponding topology in openstack?
15:31:06 <salv-orlando> I ask because we might have to punch holes here and there
15:31:07 <qwebirc16548> Have not had time to study those enough yet.  Been wrestling with Ursula.
15:31:18 <irenab> https://docs.google.com/document/d/1W14C03dsBbZi23FN0LmDe1MI0WLV_0-wf_mRxLr2-g4/edit
15:31:19 <apuimedo> would it be a single network, single subnet per tenant?
15:31:53 <irenab> it may be a subnet per Node
15:31:55 <qwebirc16548> Yes, I think one subnet per network
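A rough illustration, assuming python-neutronclient, of the namespace = tenant = one network / one subnet mapping being discussed here; the "kuryr-" naming convention and the CIDR are invented for the example.

```python
from neutronclient.v2_0 import client as neutron_client


def ensure_namespace_network(neutron, namespace, cidr="10.10.0.0/24"):
    """neutron is a neutron_client.Client instance; returns the tenant network."""
    name = "kuryr-%s" % namespace            # assumed naming convention
    nets = neutron.list_networks(name=name)["networks"]
    if nets:
        return nets[0]
    net = neutron.create_network({"network": {"name": name}})["network"]
    neutron.create_subnet({"subnet": {
        "network_id": net["id"],
        "ip_version": 4,
        "cidr": cidr,                        # one subnet per network, as above
        "name": name,
    }})
    return net

# Example usage (credentials are placeholders):
# neutron = neutron_client.Client(username='admin', password='secret',
#                                 tenant_name='admin',
#                                 auth_url='http://controller:5000/v2.0')
# ensure_namespace_network(neutron, 'my-namespace')
```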
15:32:24 <apuimedo> qwebirc16548: what to do with the cluster ips?
15:32:43 <apuimedo> cause I assume we'd not have kube-proxy running to set iptables redirects for those
15:32:44 <qwebirc16548> in k8s speak, "cluster IPs" are the IPs handed out to the containers, right?
15:32:59 <irenab> to services
15:33:02 <qwebirc16548> Like I said, I have a caveat: we do not care about k8s load balancers
15:33:02 <apuimedo> qwebirc16548: it's an ip that takes you to any of the pods
15:33:08 <baohua_> service layer ip
15:33:27 <apuimedo> in our prototype we use LBaaS for those
15:33:29 <qwebirc16548> Oh, right.  The terms are cluster IPs (for the hosts), pod IPs, and service IPs. IIRC
15:33:35 <baohua_> it's not binding with lb
15:33:40 <apuimedo> so we had a separate subnet for cluster ips
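A hedged sketch of how a k8s service IP could map onto Neutron LBaaS v2, in the spirit of the prototype apuimedo mentions; this is not the prototype's code, and it assumes the LBaaS v2 extension is available and that the member/VIP subnets already exist.

```python
def expose_service(neutron, service_name, vip_subnet_id, port,
                   pod_addresses, pod_subnet_id):
    lb = neutron.create_loadbalancer({"loadbalancer": {
        "name": service_name,
        "vip_subnet_id": vip_subnet_id,      # the separate "cluster IP" subnet
    }})["loadbalancer"]
    listener = neutron.create_listener({"listener": {
        "loadbalancer_id": lb["id"],
        "protocol": "TCP",
        "protocol_port": port,
    }})["listener"]
    pool = neutron.create_lbaas_pool({"pool": {
        "listener_id": listener["id"],
        "protocol": "TCP",
        "lb_algorithm": "ROUND_ROBIN",
    }})["pool"]
    for addr in pod_addresses:               # one member per backing pod
        neutron.create_lbaas_member(pool["id"], {"member": {
            "address": addr,
            "protocol_port": port,
            "subnet_id": pod_subnet_id,
        }})
    return lb["vip_address"]                 # would play the role of the service IP
```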
15:33:58 <irenab> qwebirc16548: how will the service level connectivity work without load balancers?
15:34:07 <fkautz> Wouldn't eliminating load balancers effectively eliminate k8s services?
15:34:14 <qwebirc16548> k8s has LBs for services and for ingress resources.
15:34:15 <baohua_> the cluster ip is handled by kube-proxy
15:34:39 <qwebirc16548> The caveat I said means you don't care about k8s services and ingress resources.
15:34:46 <irenab> qwebirc16548: so you plan to keep using kube-proxy as is?
15:34:47 <fkautz> There is a node port for service, but I recall that not being preferred
15:34:53 <qwebirc16548> If you want to use those, then you have to think harder.
15:35:34 <qwebirc16548> No, I mean a starting level of development that does not support k8s services and ingress resources.
15:35:40 <qwebirc16548> that makes kube-proxy irrelevant.
15:36:13 <qwebirc16548> If you want to integrate with kube-proxy then you need non-trivial changes in k8s...
15:36:24 <qwebirc16548> or maybe lots of Neutron work with FWaaS.
15:36:50 <qwebirc16548> I have not finished thinking that through.  Been wrestling with Ursula.
15:37:37 <apuimedo> qwebirc16548: Ursula?
15:37:40 <qwebirc16548> we might be able to use FWaaS to do something like what OpenShift did
15:37:52 <qwebirc16548> Ursula is ansible to install OpenStack
15:37:58 <qwebirc16548> (yet another)
15:37:59 <apuimedo> ah!
15:38:07 <apuimedo> no more installers!
15:38:09 <baohua_> yet another!
15:38:12 <irenab> apuimedo: you should have known it ;-)
15:38:17 <qwebirc16548> sorry, it has some local currency
15:38:28 <apuimedo> y=ー( ゚д゚)・∵.
15:38:40 <qwebirc16548> even though too few people seem to recognize the author that gave us the word "ansible"
15:39:01 <fkautz> Nothing wrong with long distance communication!
15:39:07 <fkautz> FTL
15:39:25 <irenab> so back to kube on kuryr, do we have a plan?
15:39:34 <fawadkhaliq> qwebirc16548: apuimedo, I think it would be great if we agree upon the use case we are trying to address in the stop-gap solution and then it will be easier to agree on a solution, if any.
15:39:39 <apuimedo> irenab: you mean an option T plan
15:39:57 <qwebirc16548> Do you guys think FWaaS is up to the task?
15:40:00 <apuimedo> fawadkhaliq: precisely. I want to set the minimums
15:40:03 <irenab> fawadkhaliq: +1 on starting with use case
15:40:07 <salv-orlando> irenab: idk it seems to me the plan is to wait for qwebirc16548 to finish his fight with Ursula.
15:40:09 <fawadkhaliq> qwebirc16548: lol nope
15:40:12 <qwebirc16548> If so then I think we can do a stop-gap that uses FWaaS in a pattern like OpenShift did.
15:40:18 <irenab> salv-orlando: :-)
15:40:30 <vikasc> lol
15:40:39 <apuimedo> FWaaS as a requirement is a bit of a non-starter
15:40:46 <apuimedo> I'd rather rely on other primitives
15:40:47 <fawadkhaliq> exactly
15:40:50 <fkautz> the load balancer in OpenShift is a userspace app, we could use it here maybe?
15:40:55 <irenab> security groups?
15:41:19 <apuimedo> fkautz: LBaaS and security groups should get option T a long way
15:41:21 <qwebirc16548> I think it could be done with SGs, but it's kinda ugly
15:41:33 <qwebirc16548> My deepest worries about SGs are these:
15:41:45 <apuimedo> for that, we should probably make kuryr let you choose the SG finally :P
15:41:52 <qwebirc16548> 1 is robustness, 2 is performance of control plane, 3 is performance of data plane
15:41:56 <fkautz> in the long run, I think LBaaS through neutron or another means is best :)
15:41:59 <apuimedo> cause the libnetwork driver still does not let you
15:42:13 <fkautz> User space packet copying will eventually affect perf
15:42:24 <salv-orlando> qwebirc16548: I assume you're referring to neutron ref impl. I wonder why that would not be a worry with fwaas though
15:42:49 <qwebirc16548> Doing the OpenShift pattern with SGs means you limit what can be sent, not what can be received.   So one sender that breaks out has complete access to everyone.
15:43:22 <qwebirc16548> salv-orlando: I am discussing worries about using SGs to get isolation between tenants
15:43:34 <qwebirc16548> when kube-proxy is being used.
15:43:51 <apuimedo> I'm not familiar enough with the reference impl to comment
15:43:53 <salv-orlando> qwebirc16548: service-2-pod networking then. ok.
15:44:06 <irenab> In my opinion we should try to start aligned with kube-sig-net proposal, probably the most basic case
15:44:23 <qwebirc16548> salv-orlando: that's why you have to limit on the sending side.  Everyone has to be able to receive from the kube-proxy
15:44:26 <qwebirc16548> proxies
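An illustration (CIDRs and naming assumed) of the pattern qwebirc16548 is worried about: isolating tenants with security groups while kube-proxy is still in play means constraining egress on each port, since every pod must keep accepting traffic from the node proxies.

```python
def tenant_isolation_sg(neutron, tenant_name, tenant_cidr, node_cidr):
    sg = neutron.create_security_group(
        {"security_group": {"name": "kuryr-%s" % tenant_name}})["security_group"]
    # Drop the default allow-all egress rules so only explicit ones remain.
    for rule in neutron.list_security_group_rules(
            security_group_id=sg["id"])["security_group_rules"]:
        if rule["direction"] == "egress":
            neutron.delete_security_group_rule(rule["id"])
    # Allow sending only to the tenant's own network and to the nodes
    # (kube-proxy); ingress stays open, which is exactly the limitation
    # discussed above: a sender that breaks out can reach everyone.
    for cidr in (tenant_cidr, node_cidr):
        neutron.create_security_group_rule({"security_group_rule": {
            "security_group_id": sg["id"],
            "direction": "egress",
            "ethertype": "IPv4",
            "remote_ip_prefix": cidr,
        }})
    return sg
```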
15:44:27 <apuimedo> but, couldn't we start working on the kube-proxy alternative for option F and option T could leverage it?
15:45:06 <qwebirc16548> working on a kube-proxy involves k8s changes...
15:45:12 <qwebirc16548> which are being discussed there.
15:45:13 <apuimedo> as in, no kube-proxy, but something that makes API calls to Neutron
15:45:15 <salv-orlando> apuimedo: that is a viable option but you might want to discuss with k8s upstream too
15:45:59 <apuimedo> salv-orlando: I was thinking pluggability at the k8s API server level actually
15:46:10 <apuimedo> I haven't checked if that requires k8s changes yet
15:46:19 <fkautz> apuimedo: right now is the best time to bring that up
15:46:32 <apuimedo> fkautz: you are completely right
15:46:34 <fkautz> K8s is talking about long term shape of networking
15:46:45 <fkautz> Thursday at I think 5pm est is a meeting for k8s
15:46:47 <fkautz> Definitely join it
15:46:48 <salv-orlando> apuimedo: pluggability in the kube-proxy so far is an if statement where one branch loads the deprecated userspace LB
15:46:53 <irenab> apuimedo: I think it can be done without changes, just watching the API objects should be enough, like kube-proxy or skyDNS do
15:47:06 <apuimedo> imho, and with having read the API server
15:47:08 <salv-orlando> it's Weds 2PM PST for this week
15:47:31 <fkautz> Oh, Wednesday this week, good to know. :x
15:47:48 <apuimedo> we (k8s integrators) could get a lot of flexibility out of being able to create entities from the API objects
15:47:59 <apuimedo> and then have the CNI layer just plug into those
15:48:18 <irenab> apuimedo: +1
15:48:22 <apuimedo> (also would help with latency elimination going all the way to neutron and beyond for some things)
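A minimal sketch of the "watch the API objects" idea (in the spirit of kube-proxy or skyDNS): stream Namespace events from the k8s API server and create the Neutron entities ahead of time, leaving the CNI layer to just plug ports. The endpoint, the lack of auth handling, and the reuse of ensure_namespace_network() from the earlier sketch are assumptions.

```python
import json

import requests


def watch_namespaces(api_server, neutron):
    # Stream watch events from the API server (auth/TLS omitted for brevity).
    resp = requests.get("%s/api/v1/namespaces" % api_server,
                        params={"watch": "true"}, stream=True)
    for line in resp.iter_lines():
        if not line:
            continue
        event = json.loads(line)
        if event["type"] == "ADDED":
            ns = event["object"]["metadata"]["name"]
            ensure_namespace_network(neutron, ns)   # from the earlier sketch
```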
15:49:02 <irenab> apuimedo: so CNI mostly for binding?
15:49:13 <apuimedo> that's one way
15:49:51 <apuimedo> banix: I don't see you around ;-)
15:49:54 <irenab> I think it is expected to call the IPAM driver too
15:50:14 <banix> apuimedo: Tables don’t talk much
15:50:46 <apuimedo> irenab: sure. A part of the IPAM would be at CNI level
15:51:14 <irenab> apuimedo: we have 10  mins left. Can you get some summary on next steps?
15:51:20 <apuimedo> in my mind we need either that, or we need to contribute to the kube-proxy pluggability significantly
15:51:54 <apuimedo> because its current level of pluggability, as salv-orlando says, is just not where we'd need it to be for option F
15:52:48 <qwebirc16548> I would like to see a long term design, aligned with k8s net sig thinking
15:52:49 <salv-orlando> apuimedo: well... you might want to consider running an os-kube-proxy
15:52:57 <qwebirc16548> That requires some thinking and writing on both sides
15:52:57 <apuimedo> Also, we probably will need somewhere to perform the translations of "series of `allowFrom`" to Neutron entities
15:53:15 <salv-orlando> at the end of the day k8s components are all loosely coupled bits that communicate with the API server
15:53:25 <qwebirc16548> I am not  quite ready to spout the design now.
15:53:29 <fkautz> I agree entirely with qwebirc16548's comment on alignment
15:53:56 <salv-orlando> neither am I - I am just saying let's not constrain ourselves into anything and keep all options on the table
15:54:16 <apuimedo> I agree with that
15:54:37 <apuimedo> Kuryr set out to work as a bridge with the communities
15:54:50 <apuimedo> and we have to be more present in the k8s community right now
15:54:57 <apuimedo> that said
15:55:39 <apuimedo> At least I need to learn more about the options for retrieving the level of information we need for creating the load balancers, modelling allowFrom to subnets, etc
15:56:10 <apuimedo> and it does not sound to me like the idea they had of pushing more and more info to CNI is the right way
15:56:26 <irenab> apuimedo: I think we should decide on the kuryr components/libs
15:56:42 <apuimedo> (it sounds a bit too much like the problems we've had with the slightly arbitrary kind of information we get from libnetwork)
15:57:10 <apuimedo> irenab: ?
15:57:39 <irenab> we may need some API watcher, which does not fit well into the current on-node kuryr service
15:57:43 <apuimedo> what I'd like to do, as it appears simpler in my mind is to first
15:58:14 <apuimedo> 1.- Take a few Kubernetes typical usages and redraw them using Neutron components
15:58:29 <apuimedo> 2. find the best places in k8s to create them
15:58:56 <apuimedo> 3. Make those possible in k8s if they are not or settle for good approximations
15:59:30 <apuimedo> 2-3 need quite a bit of participation upstream
16:00:15 <apuimedo> qwebirc16548: Do you want to replicate the OpenShift networking model or something a bit different?
16:00:48 <irenab> qwebirc16548: any link you can share on openShift network model?
16:00:49 <qwebirc16548> I'm not real enthused about that one, but it's an existing design that works in k8s without doing violence to it.
16:01:05 <qwebirc16548> I do not have a link.  I can write very briefly here, then have to go to next mtg.
16:01:15 <qwebirc16548> Each tenant has one private network
16:01:33 <qwebirc16548> each VM/container can send to any other on the private network (subject to SGs) and to services
16:01:37 <qwebirc16548> I am not sure which ones
16:01:45 <qwebirc16548> Mainly tenant's own services, I suppose
16:02:04 <qwebirc16548> Later
16:02:16 <fawadkhaliq> irenab: https://docs.openshift.com/enterprise/3.0/architecture/additional_concepts/networking.html
16:02:23 <irenab> thanks
16:02:34 <apuimedo> #link https://docs.openshift.com/enterprise/3.0/architecture/additional_concepts/networking.html
16:03:08 <irenab> I would say we need to identify the use case to support first
16:03:20 <fawadkhaliq> irenab: +1
16:03:36 <fkautz> and unfortunately, that is still a moving target, but I think we have enough now that we can approximate
16:04:07 <apuimedo> fkautz: it is a moving target indeed. But we can help moving it
16:04:14 <fkautz> +1
16:04:19 <apuimedo> if we agree on what we want it to be
16:04:25 <fawadkhaliq> fkautz: agree. we can plan for it
16:04:41 <fawadkhaliq> apuimedo: exactly
16:04:54 <apuimedo> that's why I said I wanted to translate deployment configurations into neutron primitives
16:04:58 <apuimedo> agree on those
16:05:13 <apuimedo> and then help move the k8s target to make those possible
16:05:25 <irenab> apuimedo: this is right, we need to identify the scenario we want to support
16:05:51 <apuimedo> so far it seems like they are still planning for different modes
16:05:56 <irenab> shall we start some use cases doc/etherpad?
16:05:58 <apuimedo> one where there is full isolation
16:06:08 <banix> apuimedo: time for a few action items
16:06:14 <apuimedo> and all connectivity is defined via allowFrom
16:06:23 <vikasc> irenab: can we use existing k8s etherpad?
16:06:32 <irenab> lets add a section there
16:06:38 <apuimedo> and another mode where there is connectivity between pods but services require the allowFrom
16:06:42 <apuimedo> banix: agreed
16:07:01 <fkautz> Link?
16:07:06 <irenab> vikasc: we already have use cases section :-)
16:07:11 <irenab> https://etherpad.openstack.org/p/kuryr_k8s
16:07:18 <irenab> need to add content
16:07:22 <fkautz> Thanks, I'll contribute to it as well
16:07:25 <fawadkhaliq> #link https://etherpad.openstack.org/p/kuryr_k8s
16:07:35 <vikasc> irenab: lets do :)
16:07:37 <apuimedo> #action everybody come to next monday meeting with a translation of deployments into neutron primitives
16:07:49 <apuimedo> please put a link to the diagram in the etherpad
16:08:06 <irenab> apuimedo: so we can have a few to choose from :-) ?
16:08:16 <apuimedo> irenab: or rather to converge
16:08:23 <irenab> +1
16:08:36 <apuimedo> who can make tomorrow's meeting with k8s-sig-net?
16:08:52 <fkautz> I will be there
16:08:59 <banix> i can
16:09:05 <vikasc> i will also attend
16:09:07 <apuimedo> I'll do my best to make it too
16:09:14 <irenab> same here
16:09:24 <fawadkhaliq> will try
16:09:40 <apuimedo> #action fkautz banix vikasc irenab to try to find out about pushing info to CNI versus pluggability at a higher level
16:09:56 <apuimedo> fawadkhaliq: if you can try as well even better :-)
16:10:21 <fawadkhaliq> apuimedo: will do :-)
16:10:36 <apuimedo> banix: do you think qwebirc16548 could come to the next kuryr meeting with a more defined plan for option T
16:10:40 <apuimedo> so we can decide on it
16:11:09 <apuimedo> in the worst case, I believe that option F should be developed in a way that still allows for option T to happen
16:11:10 <banix> apuimedo: will talk to him but you know Ursula is pretty merciless ;)
16:11:30 <apuimedo> ¯\_(ツ)_/¯
16:11:32 <banix> i agree
16:11:38 <fawadkhaliq> rofl
16:11:49 <apuimedo> installers are a plague
16:11:55 <apuimedo> anyway
16:12:16 <apuimedo> I'd like to have more info on option T to see the level of effort that would have to go into it
16:12:23 <apuimedo> and if we can make it for Mitaka
16:13:01 <apuimedo> banix: irenab gsagie vikasc fawadkhaliq fkautz: any action I am missing?
16:13:02 <banix> so we are going to discuss during the weekly meeting next week? and then if need be another of these meetings I suppose
16:13:31 <apuimedo> I'd rather have it in the usual slot next week. If we see that the meetings start to overflow
16:13:38 <apuimedo> we can make a kuryr-sig-k8s
16:13:40 <apuimedo> :P
16:13:43 <irenab> banix: I think we agreed to add use cases in the etherpad
16:13:54 <banix> yes
16:13:54 <fawadkhaliq> apuimedo: nope. looks good. it's a good start.
16:13:57 <apuimedo> going once
16:14:13 <apuimedo> •_•)
16:14:14 <banix> #endmeeting