15:03:54 <gsagie> #startmeeting kuryr
15:03:56 <openstack> Meeting started Mon Mar 14 15:03:54 2016 UTC and is due to finish in 60 minutes.  The chair is gsagie. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:03:57 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:03:59 <openstack> The meeting name has been set to 'kuryr'
15:04:01 <mspreitz> o/
15:04:30 <gsagie> #info fawadkhaliq, banix, salv-orlando, mspreitz, gsagie in meeting
15:04:49 <janki91> o/
15:05:11 <gsagie> Hello everyone, apuimedo and irena are not here and they have some important updates for the Kubernetes integration, but let's do a quick sync
15:05:20 <gsagie> #info janki91 in meeting too :)
15:05:26 <gsagie> Hi janki91, welcome
15:05:37 <janki91> Hi gsagie
15:05:42 <gsagie> hi irenab, having some connection problems?
15:05:49 <irenab> hi
15:05:51 <gsagie> #topic Kubernetes integration
15:05:54 <irenab> sorry, had to restart the computer ....
15:05:57 <gsagie> you came just in time :)
15:05:59 <janki91> Thank you. This is my first time at the meeting
15:06:19 <gsagie> janki91: cool, nice to see you and thanks for joining
15:06:20 <irenab> Shall I provide an update?
15:06:26 <gsagie> irenab: yep
15:06:45 <gsagie> #link kubernetes integration spec https://review.openstack.org/#/c/281132/
15:07:07 <irenab> So there is a spec for review focusing on basic (no policy support) connectivity use case
15:07:21 <irenab> we started to POC on the implementation
15:07:49 <irenab> having 2 components: an API watcher and translator (Raven), and a Kuryr CNI plugin
15:08:24 <gsagie> irenab: i think apuimedo's plan was to send a separate, more detailed spec for each of the parts (in addition to yours), is this still the case?
15:08:26 <irenab> There will be devrefs pushed for review as soon as we have a basic end to end workflow
15:08:36 <gsagie> ohh okie great
15:09:00 <irenab> we almost have it working, but no ping yet :-)
15:09:15 <irenab> Please review the spec
15:09:17 <gsagie> i reviewed your patch, i think the mapping looks good and overall the approach seems the best option we came up with
15:09:27 <gsagie> but will give a chance for others to comment as well
15:09:38 <gsagie> but good work
15:09:43 <irenab> great, so there will be more patches in the coming days for detailed design and code
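For illustration, a minimal sketch of the watcher-and-translator split irenab describes above (not the actual Raven code; the endpoint, credentials, network id and translate logic are placeholders):

    # Watch the k8s API for pod events and translate them into Neutron calls.
    import json

    import requests
    from neutronclient.v2_0 import client as neutron_client

    K8S_API = 'http://127.0.0.1:8080'        # assumed kube-apiserver URL
    POD_NETWORK_ID = 'NEUTRON-NET-UUID'      # placeholder

    neutron = neutron_client.Client(
        username='admin', password='secret', tenant_name='admin',
        auth_url='http://127.0.0.1:5000/v2.0')

    def translate(event):
        """Map one pod event to a Neutron port operation (illustrative)."""
        pod = event['object']
        name = pod['metadata']['name']
        if event['type'] == 'ADDED':
            neutron.create_port({'port': {'network_id': POD_NETWORK_ID,
                                          'name': name}})
        elif event['type'] == 'DELETED':
            for port in neutron.list_ports(name=name)['ports']:
                neutron.delete_port(port['id'])

    # The watch endpoint streams one JSON event per line.
    resp = requests.get(K8S_API + '/api/v1/pods', params={'watch': 'true'},
                        stream=True)
    for line in resp.iter_lines():
        if line:
            translate(json.loads(line))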
15:10:12 <irenab> There is also Mike’s patch with Policy mapping
15:10:20 <gsagie> this is just for the initial connectivity right? nothing in terms of services or DNS discovery yet
15:10:44 <banix> irenab: wrt the use of FIP, using a network with publicly routable IP addresses could be an option?
15:11:01 <irenab> gsagie: including services
15:11:03 <gsagie> #link https://review.openstack.org/#/c/290172/ Kubernetes integration via translation
15:11:12 <mspreitz> I also have a simple no-policy patch, https://github.com/kubernetes/kubernetes/pull/21956
15:11:29 <irenab> banix: I believe so, if you choose to have this subnet defined as the cluster service ip range
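A hedged sketch of that option (values are examples only): carve the cluster service range out of a routable Neutron subnet rather than relying on floating IPs.

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(
        username='admin', password='secret', tenant_name='admin',
        auth_url='http://127.0.0.1:5000/v2.0')

    net = neutron.create_network({'network': {'name': 'k8s-services'}})
    neutron.create_subnet({'subnet': {
        'network_id': net['network']['id'],
        'ip_version': 4,
        'cidr': '198.51.100.0/24',   # a routable range, example only
        'enable_dhcp': False,        # service VIPs are assigned by k8s
    }})
    # kube-apiserver would then run with
    # --service-cluster-ip-range=198.51.100.0/24 so VIPs land in this subnet.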
15:11:57 <irenab> mspreitz: do you think it is complementary or different from the patch I pushed?
15:12:24 <mspreitz> irenab: I was not aware of your patch, need to learn about it
15:12:27 <salv-orlando> stupid question... is 290172 in line with 281132 or is it an alternative? I've not reviewed it yet, just curious
15:12:54 <salv-orlando> irenab: even more stupidly, I think I asked the same question as you
15:13:06 * salv-orlando wins the idiot of the day prize
15:13:08 <mspreitz> I think they overlap
15:13:13 <irenab> salv-orlando: I am confused too :-)
15:13:19 <gsagie> i think it's an alternative, i haven't looked at Mike's patch yet but i believe this is what we talked about: integrating with Kubernetes directly (the watcher and CNI) or translating the Docker calls
15:13:27 <mspreitz> I do not fully understand 281132, wonder if I can ask some stupid questions
15:13:30 <gsagie> But i haven't looked at Mike's patch
15:13:44 <gsagie> mspreitz: sure :) thats the perfect time for it
15:13:56 <irenab> mspreitz: there is no such thing as stupid question
15:14:08 <mspreitz> I am really confused about the remarks in 281132 about Service Discovery...
15:14:28 <irenab> On making skyDNS work?
15:14:30 <mspreitz> k8s already has service discovery and it is orthogonal to what we are talking about, as far as I can tell
15:14:48 <mspreitz> The k8s DNS app already works, needs no help from Neutron
15:14:59 <mspreitz> The k8s DNS app reads from the k8s apiserver
15:15:05 <mspreitz> problem? What problem?
15:15:13 <irenab> As far as I understand skyDNS is deployed in the same way as the app service
15:15:30 <mspreitz> what do you mean by "the app service" ?
15:15:32 <irenab> in a pod managed by an RC (ReplicationController) and a service
15:15:49 <irenab> there is a service template and an RC template for skyDNS
15:15:54 <salv-orlando> irenab: so perhaps you just mean that we should ensure every pod reaches the skydns pods
15:16:10 <irenab> salv-orlando: yes
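In Neutron terms, that reachability requirement might look like the following sketch (the security group id and pod CIDR are placeholders; this assumes pod ports share a common security group):

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(
        username='admin', password='secret', tenant_name='admin',
        auth_url='http://127.0.0.1:5000/v2.0')

    DNS_SG_ID = 'SKYDNS-SG-UUID'   # placeholder: SG on the skyDNS pod ports
    POD_CIDR = '10.244.0.0/16'     # placeholder: the pod subnet

    # Allow DNS (UDP and TCP 53) from every pod to the skyDNS pods.
    for proto in ('udp', 'tcp'):
        neutron.create_security_group_rule({'security_group_rule': {
            'security_group_id': DNS_SG_ID,
            'direction': 'ingress',
            'protocol': proto,
            'port_range_min': 53,
            'port_range_max': 53,
            'remote_ip_prefix': POD_CIDR,
        }})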
15:16:17 <mspreitz> OK, maybe this is a good time to ask another stupid question...
15:16:25 <mspreitz> If you mean "no policy", that includes "no isolation", right?
15:16:48 <irenab> I mean not supporting the Network Policy k8s spec
15:17:13 <salv-orlando> or to put it another way: support k8s networking as it is today
15:17:26 <mspreitz> you two just disagreed, I suspect
15:17:41 <mspreitz> What do you think about this: implicit policy that isolates tenants?
15:18:05 <irenab> salv-orlando: thank you, exactly what I meant
15:18:35 <irenab> mspreitz: I think currently there is no multi tenancy in k8s
15:18:47 <mspreitz> does "no policy" include " but there is implicit isolation between tenants"?
15:19:08 <mspreitz> There are no tenants in k8s, but there are tenants in Neutron.  So let's be explicit about this.
15:19:26 <mspreitz> Is 281132 trying to introduce multi-tenancy to k8s?
15:19:48 <irenab> mspreitz: I believe it's the same approach as we took with libnetwork, start with a single tenant
15:20:19 <irenab> not in its first iteration
15:20:20 <mspreitz> So are we all agreed that "no policy" means "not even implicit isolation between tenants"?
15:21:02 <irenab> I agree
15:21:44 <irenab> Later the tenant can be provided via k8s label/annotation or separate namespace
15:22:21 <gsagie> yes
15:22:23 <irenab> but first lets make the current k8s networking work with neutron
15:22:24 <mspreitz> salv-orlando: do you agree that "no policy" means "not even implicit isolation between tenants"?
15:23:35 <mspreitz> I guess he is busy with something else...
15:23:41 <salv-orlando> mspreitz: seems a legit interpretation
15:23:41 <mspreitz> irenab: where is your patch?
15:24:00 <gsagie> mspreitz: so now that we agree on that, are there any more unclear issues?
15:24:04 <mspreitz> yes
15:24:07 <mspreitz> what is the DNS problem?
15:24:18 <gsagie> mspreitz: https://review.openstack.org/#/c/281132/12
15:24:18 <mspreitz> I see no problem
15:24:34 <mspreitz> yes, I am looking at rev 12 of that spec
15:24:43 <gsagie> the question is how traffic is forwarded to the skyDNS instance
15:24:56 <gsagie> transparently to the deployed networking solution
15:25:12 <mspreitz> If we forget about DNS for a moment, the spec is about how apps talk to each other.  Now, what additional problem does DNS bring?
15:25:12 <irenab> I stated this more as a requirement than a challenge
15:25:59 <irenab> the alternative option is to use env variables for service name resolution
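For reference, that env-variable fallback works because k8s injects {NAME}_SERVICE_HOST/_SERVICE_PORT into every pod, so an app can resolve a service without DNS (the service name here is an example):

    import os

    # Kubernetes upper-cases the service name and swaps '-' for '_'.
    host = os.environ['REDIS_MASTER_SERVICE_HOST']
    port = int(os.environ['REDIS_MASTER_SERVICE_PORT'])
    print('redis-master is at %s:%d' % (host, port))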
15:26:34 <mspreitz> Let me put it this way.  In the no-policy case, once we solve the problem ignoring DNS, we are done.  There is no additional problem in supporting the DNS app.
15:26:50 <gsagie> mspreitz: i am not sure about the skyDNS integration part directly, but for example with kube-proxy, some solutions don't want to use iptables port forwarding as done right now
15:27:14 <gsagie> so since i am unfamiliar with the DNS integration, irenab, is there any similar problem in this regard?
15:27:47 <mspreitz> gsagie: Suppose we solve the problem ignoring DNS.  Now we have solution in which apps can talk to each other.  Does DNS bring any additional problem, in the no-policy case?
15:27:56 <irenab> I think skyDNS is self-contained: it watches the API for changes, maintains the registry and answers requests
15:28:36 <irenab> If removing skyDNS from the spec makes it easier, I can remove it. It's there more for the requirements
15:28:42 <gsagie> mspreitz: i agree
15:29:02 <mspreitz> gsagie: You mean you agree that DNS brings no additional problem in that case?
15:29:13 <gsagie> i don't think it brings any problem if Kubernetes takes care of everything internally in the apps and all this is just 2 ports for Kuryr
15:29:28 <gsagie> that need connectivity
15:30:16 <mspreitz> gsagie: So I think you agree that in the no-policy case there is no need to say anything about DNS because it is just another app.
15:30:33 <mspreitz> BTW, that is true regardless of which kube-proxy is used.
15:30:38 <gsagie> mspreitz: yes i agree
15:30:42 <irenab> mspreitz: it's part of the k8s deployment that is expected to work
15:31:00 <mspreitz> irenab: Do you agree that in the no-policy case there is no need to say anything about DNS because it is just another app?
15:31:05 <gsagie> mspreitz: i only gave kube-proxy as an example, because you can think of it as similar to "an app"
15:31:19 <mspreitz> gsagie: no.  the kube-proxy is not just another app.
15:31:21 <gsagie> so by that logic it should also work out of the box
15:31:21 <irenab> mspreitz: yes
15:31:32 <gsagie> mspreitz : okie :) so we agree
15:31:44 <gsagie> was just trying to make sure the DNS is just an app
15:31:50 <irenab> it will be deployed in the kube-system namespace, but the rest is as with any other app
15:31:51 <mspreitz> irenab: do you have a patch that implements the no policy case?
15:32:01 <irenab> mspreitz: it's in progress
15:32:03 <gsagie> and not to kube-proxy, just since i haven't looked deeply at how it works
15:32:26 <mspreitz> You may want to look at https://github.com/kubernetes/kubernetes/pull/21956, it is a simple no-policy solution.
15:32:44 <mspreitz> Very simple, not even specific to Neutron.  It is a CNI plugin that connects to a configured pre-existing Docker network.
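The shape of such a plugin, as a hedged sketch (not mspreitz's actual code; the 'docker-network' config key is invented for illustration, and the CNI result is simplified):

    #!/usr/bin/env python
    # CNI plugin that attaches the container to a pre-existing Docker network.
    import json
    import os
    import subprocess
    import sys

    def main():
        conf = json.load(sys.stdin)                       # CNI network config
        container = os.environ['CNI_CONTAINERID']
        command = os.environ['CNI_COMMAND']               # ADD or DEL
        docker_net = conf.get('docker-network', 'mynet')  # invented key

        if command == 'ADD':
            subprocess.check_call(
                ['docker', 'network', 'connect', docker_net, container])
            info = json.loads(subprocess.check_output(
                ['docker', 'inspect', container]))[0]
            ip = info['NetworkSettings']['Networks'][docker_net]['IPAddress']
            # Simplified CNI result; a real plugin reports the proper prefix.
            json.dump({'ip4': {'ip': ip + '/32'}}, sys.stdout)
        elif command == 'DEL':
            subprocess.call(
                ['docker', 'network', 'disconnect', docker_net, container])

    if __name__ == '__main__':
        main()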
15:32:48 <irenab> mspreitz: Will check this. thanks
15:32:50 <gsagie> there is a POC done in another branch, i believe irenab, tfukushima, apuimedo and devvesa should send it to Kuryr upstream soon
15:33:06 <irenab> gsagie: correct
15:33:19 <mspreitz> should others look at it now, or wait for the k8s PR?
15:33:22 <gsagie> mspreitz: i think it's worth sending this upstream to Kuryr as well, i personally don't want to limit us
15:33:55 <gsagie> #link mspreitz no-policy kubernetes integration solution https://github.com/kubernetes/kubernetes/pull/21956
15:34:29 <gsagie> mspreitz: i think we should all review both patches and better understand the differences
15:34:38 <mspreitz> I have no problem with another PR.  Should we look at that POC now, or wait for the PR?
15:34:42 <gsagie> and then we can talk about it next meeting
15:35:24 <gsagie> mspreitz: i haven't looked closely at your patch yet, i will this week, along with the implementation, so i think it's too early to compare
15:35:46 <gsagie> i think we should review irenab's spec, and the implementation should be upstreamed soon
15:35:58 <gsagie> irenab: you think you will upload it for review by next meeting?
15:36:08 <irenab> gsagie: I think so
15:36:16 <gsagie> okie great
15:36:57 <gsagie> mspreitz: let's continue this discussion during the week and let's all review both solutions before next meeting, at least spec-wise
15:37:06 <mspreitz> OK but we are not done here
15:37:07 <gsagie> anything else on this topic?
15:37:13 <mspreitz> yes...
15:37:18 <gsagie> mspreitz: agreed :)
15:37:22 <mspreitz> (1) which kube-proxy; (2) with policy.
15:37:39 <mspreitz> (3) node ports; (4) ingress
15:37:52 <mspreitz> probably more
15:37:56 <mspreitz> let's start with (1)?
15:38:20 <gsagie> i think that since we won't have time to cover everything, let's arrange a specific meeting for this
15:38:36 <mspreitz> OK with me
15:38:49 <irenab> +1
15:38:50 <gsagie> mspreitz: any preferred time/date for everyone?
15:39:03 <gsagie> banix, irenab, fawadkhaliq, salv-orlando
15:39:10 <irenab> gsagie: your time preferences work for me
15:39:14 <mspreitz> I work EDT hours
15:39:14 <gsagie> :)
15:39:24 <irenab> except for next week, I am on PTO
15:39:28 <banix> same time tomorrow?
15:39:37 <gsagie> same time tomorrow works for me
15:39:44 <mspreitz> Same time tomorrow I already have another meeting
15:39:44 <salv-orlando> 15UTC tomorrow should work for me
15:39:50 <fawadkhaliq> this week similar time should be good.
15:39:52 <banix> or wednesday
15:39:53 <mspreitz> an hour earlier works for me
15:39:58 <salv-orlando> but generally speaking any day works for me
15:39:58 <irenab> 30 mins earlier will be better for me, but I can manage the current timeslot
15:40:04 <mspreitz> or 1500UTC wed also works for me
15:40:16 <gsagie> okie 1500UTC wed works for everyone?
15:40:25 <salv-orlando> gsagie: works for me
15:40:30 <banix> yes
15:40:35 <gsagie> will send an invite to everyone and hopefully apuimedo and taku can join as well
15:40:37 <irenab> better either 30 mins before or after
15:40:44 <salv-orlando> if we can finalize it now I'll add it to my calendar so there will be some chance I won't forget about it
15:40:49 <mspreitz> 1430UTC wed works fine for me
15:40:59 <gsagie> 14:30? :)
15:40:59 <salv-orlando> +1
15:41:02 <irenab> hope toni can join; taku I believe cannot, it's too late for him
15:41:05 <gsagie> banix: ?
15:41:11 <gsagie> fawadkhaliq: ?
15:41:15 <banix> good by me
15:41:21 <irenab> +1
15:41:24 <salv-orlando> is fawadkhaliq pst?
15:41:25 <fawadkhaliq> works thanks
15:41:27 <gsagie> okie great
15:41:29 <fawadkhaliq> yes, PST
15:41:34 <gsagie> so wed 14:30
15:41:47 <banix> i bet Toni is looking for goat cheese in the country today; must be back by tomorrow
15:42:13 <gsagie> #info we will do a kubernetes integration specific meeting on wed (3/16) 1430 UTC in #openstack-kuryr
15:42:17 <gsagie> okie
15:42:25 <gsagie> #topic nested containers and Magnum integration
15:42:44 <gsagie> We are running out of time :) fawadkhaliq the stage is yours
15:42:55 <fawadkhaliq> gsagie: thanks, i will be quick
15:43:02 <fawadkhaliq> spec is merged. thanks all for the reviews.
15:43:07 <gsagie> #action gsagie to send an invite about the meeting to everyone
15:43:24 <fawadkhaliq> next step, work on the design for Kuryr agent internals.
15:43:44 <fawadkhaliq> I will hold an offline discussion over the ML or #openstack-kuryr to discuss
15:43:48 <gsagie> fawadkhaliq: cool, i also have another person who can help you with the implementation
15:43:58 <fawadkhaliq> gsagie: perfect, that would be great
15:44:02 <gsagie> i will introduce you two, and feel free to share some of the load with him
15:44:05 <gsagie> okie
15:44:10 <fawadkhaliq> that's all for now.
15:44:22 <irenab> this was quick :-)
15:44:27 <gsagie> okie great, anything else on this topic from anyone?
15:44:29 <fawadkhaliq> irenab:  told you ;-)
15:44:39 <gsagie> btw, the Magnum team approved the spec and plan, right?
15:44:46 <gsagie> i saw some reviews
15:44:53 <mspreitz> Is there a go client for Neutron & Keystone?
15:45:06 <fawadkhaliq> gsagie: that's correct
15:45:15 <fawadkhaliq> we had +1 from hongbin
15:45:15 <gsagie> ok cool
15:45:28 <irenab> fawadkhaliq: does it have the VLAN-aware-VMs API as a dependency?
15:45:32 <fawadkhaliq> mspreitz: I am not aware of any
15:45:33 <mspreitz> And: is there a python client for k8s?
15:45:46 <gsagie> i would still love it if we can email Adrian/Daneyon to review the plan as well, i will forward it to them either way
15:46:14 <fawadkhaliq> irenab: that's a dependency right now; we might leverage the OVN mechanism if vlan-aware-vms is delayed.
15:46:19 <gsagie> irenab: i think we agreed that we dont want this blocking us, we can use the same trick as OVN
15:46:29 <gsagie> using the binding profile
15:46:44 <fawadkhaliq> gsagie: agreed
15:47:10 <irenab> gsagie: fawadkhaliq: meaning only a plugin/MD that supports the trick will work
15:47:32 <gsagie> irenab: yes, how do you currently configure this in Midonet?
15:47:38 <gsagie> is it supported?
15:47:42 <irenab> not yet
15:47:48 <fawadkhaliq> irenab: yes, that's even true for vlan-aware-vms support. So in either case, all plugins will have to add support.
15:48:04 <irenab> fawadkhaliq: agree
15:48:29 <gsagie> i think it's a relatively easy patch to add support, until vlan-aware-vms is fully merged
15:48:29 <irenab> the ovn trick is simple, the main work is on the backend side anyway
15:48:41 <fawadkhaliq> irenab: exactly
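A hedged sketch of that trick (ids are placeholders; the 'parent_name'/'tag' profile keys follow the networking-ovn convention, and other plugins/MDs would have to honor them):

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(
        username='admin', password='secret', tenant_name='admin',
        auth_url='http://127.0.0.1:5000/v2.0')

    VM_PARENT_PORT = 'VM-PORT-UUID'     # placeholder: the VM's own Neutron port
    CONTAINER_NET = 'NET-UUID'          # placeholder: network for the container

    # The nested container gets its own port, bound "through" the VM's port
    # via a binding profile instead of a vlan-aware-vms trunk/sub-port.
    neutron.create_port({'port': {
        'network_id': CONTAINER_NET,
        'binding:profile': {'parent_name': VM_PARENT_PORT, 'tag': 42},
    }})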
15:48:52 <gsagie> okie, lets go for the next one
15:49:01 <gsagie> #topic existing networks and use of neutron tags
15:49:07 <gsagie> banix: :)
15:49:41 <banix> #link https://review.openstack.org/#/c/288737/ is for use of tags with our current implementation
15:50:02 <gsagie> banix: cool, i just saw yesterday you asked for help regarding the tests
15:50:08 <gsagie> everything is solved or you still need it?
15:50:14 <banix> then #link https://review.openstack.org/#/c/281132/ for supporting existing networks; once approved, the patch itself is ready to go too
15:50:34 <banix> a recheck solved the fullstack fail
15:50:37 <gsagie> #link https://review.openstack.org/#/c/288737/   use of neutron tags
15:50:48 <banix> the rally check fail also seems a gate issue
15:50:53 <gsagie> #link https://review.openstack.org/#/c/281132/ support existing networks
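A sketch of what the tags approach enables (the tag string and ids are placeholders; add_tag is assumed to be the python-neutronclient helper for PUT /v2.0/networks/{id}/tags/{tag}):

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(
        username='admin', password='secret', tenant_name='admin',
        auth_url='http://127.0.0.1:5000/v2.0')

    TAG = 'kuryr.existing'          # placeholder tag
    net_id = 'NEUTRON-NET-UUID'     # placeholder: an existing network

    # Mark the network as one Kuryr may reuse, without touching its name.
    neutron.add_tag('networks', net_id, TAG)   # assumed helper, see lead-in

    # Later, find Kuryr-usable networks by tag instead of by name mangling.
    nets = neutron.list_networks(tags=TAG)['networks']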
15:51:00 <gsagie> banix: okie, will take a look
15:51:12 <gsagie> #topic packaging
15:51:19 <banix> so should be in good shape to get this done this week
15:51:38 <gsagie> banix: ok cool, good job, i will make sure to review it this week and hopefully we can get it merged
15:52:03 <gsagie> salv-orlando: not sure regarding the OVS+Kolla work
15:52:21 <salv-orlando> gsagie: I am not actively working on that unfortunately
15:52:34 <banix> Hui Kang on our side is working on it
15:52:41 <gsagie> salv-orlando: okie
15:52:49 <gsagie> banix: okie great
15:52:57 <banix> he has been delayed by other issues, will see if we can get him to move this forward
15:53:31 <gsagie> there is another thing that i was supposed to look at and didn't get a chance to, and that's the failing fullstack tests that Baohua Yang added
15:53:36 <gsagie> regarding container connect/disconnect
15:54:00 <gsagie> #action gsagie look at failing fullstack tests https://review.openstack.org/#/c/265105/
15:54:07 <gsagie> #topic open discussion
15:54:45 <gsagie> One update: it seems we have started getting more attention on the ops mailing list, so i believe there is a good chance there will be a containers session at the ops day in Austin
15:55:01 <gsagie> so obviously everyone who is interested should join, will send details when i have anything concrete
15:55:37 <mspreitz> Is there a python client for k8s?
15:55:44 <gsagie> mspreitz: yes
15:55:51 <mspreitz> where?
15:56:00 <gsagie> mspreitz: you can see it in the Magnum repo, they use Swagger to compile it
15:56:11 <mspreitz> thanks.
15:56:16 <fawadkhaliq> https://pypi.python.org/pypi/pykube/0.4.0
15:56:18 <mspreitz> Is there a go client for Neutron and Keystone?
15:56:18 <irenab> https://pypi.python.org/pypi/pykube/0.4.0
15:56:30 <gsagie> not sure about the go client, i haven't seen one
15:56:49 <gsagie> #link Kubernetes python client https://pypi.python.org/pypi/pykube/0.4.0
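A minimal pykube example against the 0.4.0 release linked above (a sketch, assuming a local kubeconfig pointing at the cluster):

    import pykube

    api = pykube.HTTPClient(pykube.KubeConfig.from_file('~/.kube/config'))
    for pod in pykube.Pod.objects(api).filter(namespace='default'):
        print(pod.name, pod.ready)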
15:56:59 <irenab> https://github.com/pquerna/go-keystone-client
15:57:00 <gsagie> maybe OpenContrail has one
15:57:12 <fawadkhaliq> mspreitz: don't think so. never saw one. maybe check on ML to double check
15:57:23 <mspreitz> "work in progress, DO NOT USE" !!
15:57:35 <gsagie> Thanks everyone for joining the meeting and see you all on wed :) will send an invite
15:57:39 <salv-orlando> mspreitz: https://github.com/rackspace/gophercloud
15:57:49 <salv-orlando> possibly tailored to the rackspace cloud though
15:57:53 <irenab> bye!
15:57:59 <salv-orlando> adieuuu
15:58:00 <fawadkhaliq> bye
15:58:03 <banix> bye
15:58:04 <mspreitz> salv-orlando: thanks!
15:58:05 <mspreitz> bye
15:58:10 <gsagie> #link go SDK for openstack https://github.com/rackspace/gophercloud
15:58:15 <gsagie> #endmeeting