15:03:54 #startmeeting kuryr
15:03:56 Meeting started Mon Mar 14 15:03:54 2016 UTC and is due to finish in 60 minutes. The chair is gsagie. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:03:57 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:03:59 The meeting name has been set to 'kuryr'
15:04:01 o/
15:04:30 #info fawadkhaliq, banix, salv-orlando, mspreitz, gsagie in meeting
15:04:49 o/
15:05:11 Hello everyone, apuimedo and irena are not here and they have some important updates for the Kubernetes integration, but let's do a quick sync
15:05:20 #info janki91 in meeting too :)
15:05:26 Hi janki91, welcome
15:05:37 Hi gsagie
15:05:42 hi irenab, having some connection problems?
15:05:49 hi
15:05:51 #topic Kubernetes integration
15:05:54 sorry, had to restart the computer ....
15:05:57 you came just in time :)
15:05:59 Thank you, this is my first time in the meeting
15:06:19 janki91: cool, nice to see you and thanks for joining
15:06:20 Shall I provide an update?
15:06:26 irenab: yep
15:06:45 #link kubernetes integration spec https://review.openstack.org/#/c/281132/
15:07:07 So there is a spec for review focusing on the basic (no policy support) connectivity use case
15:07:21 we started a PoC of the implementation
15:07:49 it has 2 components: an API watcher and translator (Raven) and a kuryr CNI plugin
15:08:24 irenab: i think apuimedo's plan was to send a separate, more detailed spec for each of the parts (in addition to yours), is this still the case?
15:08:26 There will be devrefs pushed for review as soon as we have a basic end-to-end workflow
15:08:36 ohh okie great
15:09:00 we almost have it working, but no ping yet :-)
15:09:15 Please review the spec
15:09:17 i reviewed your patch, i think the mapping looks good and overall the approach seems like the best option we came up with
15:09:27 but will give a chance for others to comment as well
15:09:38 but good work
15:09:43 great, so there will be more patches in the coming days for detailed design and code
15:10:12 There is also Mike's patch with Policy mapping
15:10:20 this is just for the initial connectivity right? nothing in terms of services or DNS discovery yet
15:10:44 irenab: wrt the use of FIP, could using a network with publicly routable IP addresses be an option?
15:11:01 gsagie: including services
15:11:03 #link https://review.openstack.org/#/c/290172/ Kubernetes integration by translate
15:11:12 I also have a simple no-policy patch, https://github.com/kubernetes/kubernetes/pull/21956
15:11:29 banix: I believe so, if you choose to have this subnet defined as the cluster service IP range
15:11:57 mspreitz: do you think it is complementary or different to the patch I pushed?
15:12:24 irenab: I was not aware of your patch, need to learn about it
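[Note: for readers unfamiliar with the Raven design mentioned at 15:07:49 (an API watcher/translator plus a kuryr CNI plugin), below is a minimal illustrative sketch of what a watcher/translator loop could look like. It is not the actual Raven code under review; the environment variables (K8S_API, KURYR_NET_ID), the pod-to-port field mapping, and the assumption of an unauthenticated k8s API and a pre-created Neutron network are all hypothetical.]

    # Illustrative sketch only -- not the Raven implementation discussed above.
    import json
    import os

    import requests
    from neutronclient.v2_0 import client as neutron_client

    K8S_API = os.environ.get('K8S_API', 'http://127.0.0.1:8080')
    NETWORK_ID = os.environ['KURYR_NET_ID']  # pre-created Neutron network (assumption)

    neutron = neutron_client.Client(
        username=os.environ['OS_USERNAME'],
        password=os.environ['OS_PASSWORD'],
        tenant_name=os.environ['OS_TENANT_NAME'],
        auth_url=os.environ['OS_AUTH_URL'])


    def watch_pods():
        """Stream pod events from the k8s watch API (one JSON object per line)."""
        resp = requests.get('%s/api/v1/pods' % K8S_API,
                            params={'watch': 'true'}, stream=True)
        for line in resp.iter_lines():
            if line:
                yield json.loads(line)


    def translate(event):
        """Translate an ADDED pod event into a Neutron port (hypothetical mapping)."""
        if event.get('type') != 'ADDED':
            return
        pod = event['object']
        port = {'port': {'network_id': NETWORK_ID,
                         'name': pod['metadata']['name'],
                         'device_owner': 'kuryr:container'}}
        created = neutron.create_port(port)['port']
        print('pod %s -> port %s' % (pod['metadata']['name'], created['id']))


    if __name__ == '__main__':
        for ev in watch_pods():
            translate(ev)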
15:12:27 stupid question... is 290172 in line with 281132 or is it an alternative? I've not reviewed yet, just curious
15:12:54 irenab: even more stupidly, I asked the same question as you, I think
15:13:06 * salv-orlando wins the idiot of the day prize
15:13:08 I think they overlap
15:13:13 salv-orlando: I am confused too :-)
15:13:19 i think it's an alternative, i haven't looked at Mike's patch yet but i believe this is what we talked about: integrating with Kubernetes directly (the watcher and CNI) or translating the Docker calls
15:13:27 I do not fully understand 281132, wonder if I can ask some stupid questions
15:13:30 But haven't looked at Mike's patch
15:13:44 mspreitz: sure :) that's the perfect time for it
15:13:56 mspreitz: there is no such thing as a stupid question
15:14:08 I am really confused about the remarks in 281132 about Service Discovery...
15:14:28 On making skyDNS work?
15:14:30 k8s already has service discovery and it is orthogonal to what we are talking about, as far as I can tell
15:14:48 The k8s DNS app already works, needs no help from Neutron
15:14:59 The k8s DNS app reads from the k8s apiserver
15:15:05 problem? What problem?
15:15:13 As far as I understand skyDNS is deployed in the same way as the app service
15:15:30 what do you mean by "the app service"?
15:15:32 in a pod managed by an RC and a service
15:15:49 there is a service template and an RC template for skyDNS
15:15:54 irenab: so perhaps you just mean that we should ensure every pod reaches the skydns pods
15:16:10 salv-orlando: yes
15:16:17 OK, maybe this is a good time to ask another stupid question...
15:16:25 If you mean "no policy", that includes "no isolation", right?
15:16:48 I mean not supporting the Network Policy k8s spec
15:17:13 or to put it another way, support k8s networking as it is today
15:17:26 you two just disagreed, I suspect
15:17:41 What do you think about this: implicit policy that isolates tenants?
15:18:05 salv-orlando: thank you, exactly what I meant
15:18:35 mspreitz: I think currently there is no multi-tenancy in k8s
15:18:47 does "no policy" include "but there is implicit isolation between tenants"?
15:19:08 There are no tenants in k8s, but there are tenants in Neutron. So let's be explicit about this.
15:19:26 Is 281132 trying to introduce multi-tenancy to k8s?
15:19:48 mspreitz: I believe it's the same approach as we took with libnetwork, start with a single tenant
15:20:19 not at its first iteration
15:20:20 So are we all agreed that "no policy" means "not even implicit isolation between tenants"?
15:21:02 I agree
15:21:44 Later the tenant can be provided via a k8s label/annotation or a separate namespace
15:22:21 yes
15:22:23 but first let's make the current k8s networking work with neutron
15:22:24 salv-orlando: do you agree that "no policy" means "not even implicit isolation between tenants"?
15:23:35 I guess he is busy with something else...
15:23:41 mspreitz: seems a legit interpretation
15:23:41 irenab: where is your patch?
15:24:00 mspreitz: so now that we agree on that, are there any more unclear issues?
15:24:04 yes
15:24:07 what is the DNS problem?
15:24:18 mspreitz: https://review.openstack.org/#/c/281132/12
15:24:18 I see no problem
15:24:34 yes, I am looking at rev 12 of that spec
15:24:43 the question is how traffic is forwarded to the skyDNS instance
15:24:56 transparently to the deployed networking solution
15:25:12 If we forget about DNS for a moment, the spec is about how apps talk to each other. Now, what additional problem does DNS bring?
15:25:12 I stated this more like a requirement than a challenge
15:25:59 an alternative option is to use env variables for service name resolution
15:26:34 Let me put it this way. In the no-policy case, once we solve the problem ignoring DNS, we are done. There is no additional problem in supporting the DNS app.
15:26:50 mspreitz: i am not sure directly for the SkyDNS integration part, but for example for kube-proxy some solutions don't want to use iptables port forwarding as done right now
15:27:14 so since i am unfamiliar with the DNS integration, irenab, is there any similar problem in this regard?
15:27:47 gsagie: Suppose we solve the problem ignoring DNS. Now we have a solution in which apps can talk to each other. Does DNS bring any additional problem, in the no-policy case?
15:27:56 I think skyDNS is self-contained, watching the API changes, maintaining the registry and answering requests
15:28:36 If removing skyDNS from the spec makes it easier, I can remove it. It's more for the requirements
15:28:42 mspreitz: i agree
15:29:02 gsagie: You mean you agree that DNS brings no additional problem in that case?
15:29:13 i don't think it brings any problem if Kubernetes takes care of everything internally in the apps and this all is just 2 ports for Kuryr
15:29:28 that need connectivity
15:30:16 gsagie: So I think you agree that in the no-policy case there is no need to say anything about DNS because it is just another app.
15:30:33 BTW, that is true regardless of which kube-proxy is used.
15:30:38 mspreitz: yes i agree
15:30:42 mspreitz: it's part of the k8s deployment that is expected to work
15:31:00 irenab: Do you agree that in the no-policy case there is no need to say anything about DNS because it is just another app?
15:31:05 mspreitz: i only gave kube-proxy as an example, because you can refer to it similarly, like "an app"
15:31:19 gsagie: no. the kube-proxy is not just another app.
15:31:21 so it should also work in that logic out of the box
15:31:21 mspreitz: yes
15:31:32 mspreitz: okie :) so we agree
15:31:44 was just trying to make sure the DNS is just an app
15:31:50 it will be deployed in the kube-system namespace, but the rest is as with any other app
15:31:51 irenab: do you have a patch that implements the no-policy case?
15:32:01 mspreitz: it's in progress
15:32:03 and not closer to kube-proxy, just since i haven't looked deeply at how it works
15:32:26 You may want to look at https://github.com/kubernetes/kubernetes/pull/21956, it is a simple no-policy solution.
15:32:44 Very simple, not even specific to Neutron. It is a CNI plugin that connects to a configured pre-existing Docker network.
15:32:48 mspreitz: Will check this. thanks
15:32:50 there is a POC done in another branch, i believe irena, tfukushima, apuimedo and devvesa should send it to Kuryr upstream soon
15:33:06 gsagie: correct
15:33:19 should others look at it now, or wait for the k8s PR?
15:33:22 mspreitz: i think it's worth sending this upstream to Kuryr just as well, i personally don't want to limit us
15:33:53 #link mspreitz no-policy kubernetes integration solution https://github.com/kubernetes/kubernetes/pull/21956,
15:33:55 #link mspreitz no-policy kubernetes integration solution https://github.com/kubernetes/kubernetes/pull/21956
15:34:29 mspreitz: i think we should all review both patches and better understand the differences
15:34:38 I have no problem with another PR. Should we look at that POC now, or wait for the PR?
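[Note: a bare-bones sketch of what a CNI plugin like the ones discussed above looks like from the kubelet's side; illustrative only, neither mspreitz's PR 21956 nor the kuryr CNI driver. CNI passes CNI_COMMAND, CNI_NETNS, CNI_IFNAME and friends in the environment, sends the network configuration JSON on stdin, and expects a result JSON on stdout; the actual wiring and IPAM are stubbed out here, and the result layout follows the early CNI result format.]

    #!/usr/bin/env python
    # Minimal CNI plugin skeleton (illustrative only).
    import json
    import os
    import sys


    def cni_add(netconf):
        netns = os.environ.get('CNI_NETNS')
        ifname = os.environ.get('CNI_IFNAME', 'eth0')
        # A real plugin would create a veth pair here, move one end into
        # `netns` as `ifname`, attach the other end to the pre-existing
        # network named in `netconf`, and ask IPAM for an address.
        address = '192.168.1.10/24'  # placeholder -- would come from IPAM
        return {'cniVersion': netconf.get('cniVersion', '0.1.0'),
                'ip4': {'ip': address},
                'dns': {}}


    def main():
        netconf = json.load(sys.stdin)   # network config JSON from the runtime
        command = os.environ.get('CNI_COMMAND')
        if command == 'ADD':
            json.dump(cni_add(netconf), sys.stdout)
        elif command == 'DEL':
            pass  # tear down whatever ADD created
        else:
            sys.exit('unsupported CNI_COMMAND: %r' % command)


    if __name__ == '__main__':
        main()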
15:34:42 and then we can talk about it next meeting
15:35:24 mspreitz: i haven't looked closely at your patch yet, i will this week, along with the implementation, so i think it's early to compare
15:35:46 i think we should review irenab's spec and the implementation should be upstreamed soon
15:35:58 irenab: you think you will upload it for review by next meeting?
15:36:08 gsagie: I think so
15:36:16 okie great
15:36:57 mspreitz: let's continue this discussion during the week and let's all review both solutions before next meeting, at least spec-wise
15:37:06 OK but we are not done here
15:37:07 anything else on this topic?
15:37:13 yes...
15:37:18 mspreitz: agreed :)
15:37:22 (1) which kube-proxy; (2) with policy.
15:37:39 (3) node ports; (4) ingress
15:37:52 probably more
15:37:56 let's start with (1)?
15:38:20 i think that since we won't have time to cover everything, let's arrange a specific meeting for this
15:38:36 OK with me
15:38:49 +1
15:38:50 mspreitz: any preferred time/date for everyone?
15:39:03 banix, irenab, fawadkhaliq, salv-orlando
15:39:10 gsagie: your time preferences work for me
15:39:14 I work EDT hours
15:39:14 :)
15:39:24 except for next week, I am on PTO
15:39:28 same time tomorrow?
15:39:37 same time tomorrow works for me
15:39:44 Same time tomorrow i have another meeting already
15:39:44 15UTC tomorrow should work for me
15:39:50 this week a similar time should be good.
15:39:52 or wednesday
15:39:53 an hour earlier works for me
15:39:58 but generally speaking works every day
15:39:58 30 mins earlier will be better for me, but I can manage the current timeslot
15:40:04 or 1500UTC wed also works for me
15:40:16 okie, 1500UTC wed works for everyone?
15:40:25 gsagie: works for me
15:40:30 yes
15:40:35 will send an invite to everyone and hopefully apuimedo and taku can join as well
15:40:37 better either 30 mins before or after
15:40:44 if we can finalize it now I'll add it to my calendar so there will be some chance I won't forget about it
15:40:49 1430UTC wed works fine for me
15:40:59 14:30? :)
15:40:59 +1
15:41:02 hope Toni can join, Taku I believe cannot, it's too late for him
15:41:05 banix: ?
15:41:11 fawadkhaliq: ?
15:41:15 good by me
15:41:21 +1
15:41:24 is fawadkhaliq PST?
15:41:25 works, thanks
15:41:27 okie great
15:41:29 yes, PST
15:41:34 so wed 14:30
15:41:47 i bet Toni is looking for goat cheese in the country today; must be back by tomorrow
15:42:13 #info we will do a kubernetes integration specific meeting on wed (3/16) 1430 UTC in #openstack-kuryr
15:42:17 okie
15:42:25 #topic nested containers and Magnum integration
15:42:44 We are running out of time :) fawadkhaliq the stage is yours
15:42:55 gsagie: thanks, i will be quick
15:43:02 spec is merged. thanks all for the reviews.
15:43:07 #action gsagie to send an invite about the meeting to everyone
15:43:24 next step, work on the design for the Kuryr agent internals.
15:43:44 I will hold an offline discussion over the ML or openstack-kuryr to discuss
15:43:48 fawadkhaliq: cool, i also have another person that can help you with the implementation
15:43:58 gsagie: perfect, that would be great
15:44:02 i will introduce you two and feel free to share some load with him
15:44:05 okie
15:44:10 that's all for now.
15:44:22 this was quick :-)
15:44:27 okie great, anything else on this topic from anyone?
15:44:29 irenab: told you ;-)
15:44:39 btw, the Magnum team approved the spec and plan, right?
15:44:46 i saw some reviews
15:44:53 Is there a go client for Neutron & Keystone?
15:45:06 gsagie: that's correct
15:45:15 we had +1 from hongbin
15:45:15 ok cool
15:45:28 fawadkhaliq: does it have the VLAN-aware API as a dependency?
15:45:32 mspreitz: I am not aware of any
15:45:33 And: is there a python client for k8s?
15:45:46 i would still love it if we can email Adrian/Daneyon to review the plan as well, i will forward it to them either way
15:46:14 irenab: that's a dependency right now, we might leverage the OVN mechanism if vlan-aware-vms is delayed.
15:46:19 irenab: i think we agreed that we don't want this blocking us, we can use the same trick as OVN
15:46:29 using the binding profile
15:46:44 gsagie: agreed
15:47:10 gsagie: fawadkhaliq: meaning only a plugin/MD that supports the trick will work
15:47:32 irenab: yes, how do you currently configure this in MidoNet?
15:47:38 is it supported?
15:47:42 not yet
15:47:48 irenab: yes, that's even true for vlan-aware-vms support. So in either case, all plugins will have to have support.
15:48:04 fawadkhaliq: agree
15:48:29 i think it's a relatively easy patch to add support, until vlan-aware-vms is fully merged
15:48:29 the ovn trick is simple, main work is on the backend side anyway
15:48:41 irenab: exactly
15:48:52 okie, let's go for the next one
15:49:01 #topic existing networks and use of neutron tags
15:49:07 banix: :)
15:49:41 #link https://review.openstack.org/#/c/288737/ is for use of tags with our current implementation
15:50:02 banix: cool, i just saw yesterday you asked for help regarding the tests
15:50:08 is everything solved or do you still need it?
15:50:14 then #link https://review.openstack.org/#/c/281132/ for supporting existing networks; once approved, the patch itself is ready to go too
15:50:34 a recheck solved the fullstack fail
15:50:37 #link https://review.openstack.org/#/c/288737/ use of neutron tags
15:50:48 the rally check fail also seems to be a gate issue
15:50:53 #link https://review.openstack.org/#/c/281132/ support existing networks
15:51:00 banix: okie, will take a look
15:51:12 #topic packaging
15:51:19 so should be in good shape to get this done this week
15:51:38 banix: ok cool, good job, i will make sure to review it this week and hopefully we can get it merged
15:52:03 salv-orlando: not sure regarding the OVS+Kolla work
15:52:21 gsagie: I am not actively working on that unfortunately
15:52:34 Hui Kang on our side is working on it
15:52:41 salv-orlando: okie
15:52:49 banix: okie great
15:52:57 he has been delayed by other issues, will see if we can get him to move this forward
15:53:31 there is another thing that i was supposed to look at and didn't get a chance, and that's the failing fullstack tests that Baohua Yang added
15:53:36 regarding container connect/disconnect
15:54:00 #action gsagie look at failing fullstack tests https://review.openstack.org/#/c/265105/
15:54:07 #topic open discussion
15:54:45 One update, it seems that we started getting more attention on the ops mailing list, so i believe there is a good chance there will be a containers session in the ops day in Austin
15:55:01 so obviously everyone that is interested should join, will send details if i have anything concrete
15:55:37 Is there a python client for k8s?
15:55:44 mspreitz: yes
15:55:51 where?
15:56:00 mspreitz: you can see it in the Magnum repo, they use Swagger to compile it
15:56:11 thanks.
15:56:16 https://pypi.python.org/pypi/pykube/0.4.0
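[Note: a minimal usage sketch for the pykube client linked just above; the kubeconfig path and the "default" namespace are assumptions, not anything stated in the meeting.]

    import pykube

    # Authenticate against the cluster described in a local kubeconfig file.
    api = pykube.HTTPClient(pykube.KubeConfig.from_file("~/.kube/config"))

    # List pods in the "default" namespace and print their names.
    for pod in pykube.Pod.objects(api).filter(namespace="default"):
        print(pod.name)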
15:56:18 https://pypi.python.org/pypi/pykube/0.4.0
15:56:18 Is there a go client for Neutron and Keystone?
15:56:30 not sure about the go client, i haven't seen one
15:56:49 #link Kubernetes python client https://pypi.python.org/pypi/pykube/0.4.0
15:56:59 https://github.com/pquerna/go-keystone-client
15:57:00 maybe for OpenContrail
15:57:12 mspreitz: don't think so. never saw one. maybe check on the ML to double check
15:57:23 "work in progress, DO NOT USE" !!
15:57:35 Thanks everyone for joining the meeting and see you all on wed :) will send an invite
15:57:39 mspreitz: https://github.com/rackspace/gophercloud
15:57:49 possibly tailored to the Rackspace cloud though
15:57:53 bye!
15:57:59 adieuuu
15:58:00 bye
15:58:03 bye
15:58:04 salv-orlando: thanks!
15:58:05 bye
15:58:10 #link go SDK for OpenStack https://github.com/rackspace/gophercloud
15:58:15 #endmeeting