03:02:00 #startmeeting kuryr
03:02:01 Meeting started Tue Apr 5 03:02:00 2016 UTC and is due to finish in 60 minutes. The chair is tfukushima. Information about MeetBot at http://wiki.debian.org/MeetBot.
03:02:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
03:02:05 The meeting name has been set to 'kuryr'
03:02:19 hi
03:02:33 Hi, I think we have the weekly Kuryr meeting today.
03:02:47 that's what my calendar says
03:03:14 Who's up for the meeting?
03:03:27 o/
03:04:21 Ok, it seems like we are the only attendees.
03:04:35 #info mspreitz and tfukushima are present
03:04:58 then maybe it's a short meeting
03:05:21 Unfortunately we don't have an agenda, so let's focus on the k8s topic.
03:06:09 OK. I have a devref up for review. It's WIP, but I'll take any comments anyone wants to offer now.
03:06:12 #topic k8s integration
03:06:41 #info mspreitz submitted his devref and it's ready for review
03:07:06 #link mspreitz's devref https://review.openstack.org/#/c/290172/
03:07:07 hey guys, sorry I am late.
03:07:24 Oh, hi fawadkhaliq. Welcome.
03:07:49 #info fawadkhaliq is present as well
03:07:50 I do have a few discussion questions.
03:08:09 mspreitz: Go ahead.
03:08:38 Using an overlay network has this bad effect: it completely bypasses the stuff that kube-proxy does in each host's main network namespace.
03:08:54 That leads us to want to put up an alternate service proxying solution.
03:09:09 Now the easiest thing to do is create a load balancer in a pod, for each service.
03:09:14 That sucks pretty hard.
03:09:40 And it raises a question: can our alternate service proxy solution set the service's "service cluster IP" to be a pod's pod IP?
03:10:05 mspreitz: Even with your proposal and without the OVS datapath?
03:11:07 I think I am speaking of an issue that applies to any overlay network that eschews the main network namespace of the hosts.
03:11:10 I thought the service cluster IP could be a VIP associated with the load balancer, and the pod IPs could be managed as the pool members.
03:12:02 My point is that if we are using an overlay network that avoids the hosts' main network namespace, then we have an issue implementing that VIP.
03:12:59 For example, one possible solution is to use a Neutron load balancer. One of those must have its VIP on the same network as the endpoints, right?
03:13:09 Or has that changed in recent releases?
03:14:11 The VIP can be an IP address on a different subnet AFAIK.
03:14:28 A different subnet, but it must be on the same network?
03:14:59 mspreitz: I'm not 100% sure but I believe so.
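
(A minimal sketch, in Python, of the "VIP plus pod-IP pool members" idea discussed above, assuming the Neutron LBaaS v2 extension and python-neutronclient; the credentials, subnet IDs, addresses, and ports are hypothetical, and whether the VIP subnet may differ from the member subnet is exactly the open question in this exchange.)

    from neutronclient.v2_0 import client

    # Hypothetical credentials and subnet IDs, for illustration only.
    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://127.0.0.1:5000/v2.0')
    SERVICE_SUBNET_ID = '<uuid-of-service-subnet>'  # hypothetical
    POD_SUBNET_ID = '<uuid-of-pod-subnet>'          # hypothetical

    # VIP allocated on the (assumed distinct) service subnet.
    lb = neutron.create_loadbalancer(
        {'loadbalancer': {'name': 'svc-frontend',
                          'vip_subnet_id': SERVICE_SUBNET_ID}})
    lb_id = lb['loadbalancer']['id']

    # Real code would wait for each resource to become ACTIVE
    # before creating the next one.
    listener = neutron.create_listener(
        {'listener': {'loadbalancer_id': lb_id,
                      'protocol': 'TCP', 'protocol_port': 80}})

    pool = neutron.create_lbaas_pool(
        {'pool': {'listener_id': listener['listener']['id'],
                  'protocol': 'TCP', 'lb_algorithm': 'ROUND_ROBIN'}})

    # One member per pod backing the service, on the pod subnet.
    for pod_ip in ('172.16.0.5', '172.16.0.6'):
        neutron.create_lbaas_member(
            pool['pool']['id'],
            {'member': {'address': pod_ip, 'protocol_port': 8080,
                        'subnet_id': POD_SUBNET_ID}})
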
03:16:13 Well, my basic question is this: what part of the system is in charge of setting a service's "service cluster IP"?
03:17:41 mspreitz: Usually it's kube-proxy. Ok, I must be missing the problem you're facing.
03:18:23 That would be a good answer. My point is that if we are not using kube-proxy, then: can our alternative set the "service cluster IP", and is there any constraint on what that is set to?
03:19:14 So services should be able to communicate even if they're in different namespaces?
03:19:55 Since we have been assuming that one tenant can have multiple namespaces, then yes, there has to be some ability to communicate between namespaces.
03:20:15 (but I do not understand the connection to my question)
03:21:57 It's really a two-part question. (1) What sets the "service cluster IP" of a service? And, if our alternate proxy can do that, (2) can our alternate set the "service cluster IP" to anything it likes?
03:23:08 fawadkhaliq: would you like some time to get your client under control?
03:23:51 mspreitz: Thanks for your clarification. So in my API watcher proposal, which I'll mention later: (1) the API watcher would take care of it, and (2) the subnet for service IPs needs to be allocated besides the pod subnets.
03:24:23 mspreitz: I am sorry. Flaky Internet, can't do much, unfortunately.
03:24:31 By "besides" do you mean (a) it has to be distinct or (b) it has to be somehow alongside or along with?
03:25:05 I'd say rather (a).
03:25:52 tfukushima: are you saying that we cannot make an alternate proxy solution that uses a Neutron load balancer whose VIP is on the same subnet as the endpoints?
03:26:18 But in the deployment script they're usually declared separately, as FLANNEL_NET=172.16.0.0/16 and SERVICE_IP_RANGE=192.168.0.0/24, for instance.
03:26:39 I know that
03:26:48 but that is only a matter of some existing practice
03:26:55 I am asking about what the code requires
03:27:31 In fact, you are quoting only some configuration of some stuff we will not use, so its relevance is not obvious.
03:28:31 Well, it's the default behaviour, so I thought it could be a good starting point.
03:28:56 But, again, that's configuration of stuff (flannel, kube-proxy) that we will not use.
03:29:27 OK, it looks like more progress on this will not happen right now...
03:29:34 Let me ask on another front.
03:30:16 I wonder about not using an overlay. But that raises questions about multi-tenancy. How does Calico handle that?
03:31:31 I don't have enough knowledge to say anything, unfortunately. https://github.com/projectcalico/calico-cni/tree/master/calico_cni
03:32:09 fawadkhaliq?
03:32:38 I am not sure if Calico handles multi-tenancy.
03:32:47 I know about the general capabilities of Calico though
03:32:51 OK, no more progress on that here now.
03:32:55 I am done.
03:34:39 #action Everyone reviews mspreitz's devref for k8s translate 1 and 2
03:35:24 Ok, so as I was saying, I wrote the devref for the API watcher.
03:35:48 #link the API watcher devref https://review.openstack.org/#/c/301426/
03:36:08 I should have marked it as WIP, and it needs a better commit message.
03:37:13 Although I'll modify it accordingly, I'd appreciate it if you could take a look at it when you have some time.
03:37:58 It's not in perfect shape, but I just wanted some early feedback to kick things off.
03:38:09 There's an implementation too, right?
03:38:59 We will make it public when it gets into good shape.
03:39:18 Actually we have some bugs and problems :-/
03:39:58 But basically I put the necessary information for the implementation in the devref.
03:40:53 Please help me remember. There is some implementation that was up for review recently.
03:41:56 Really? I must have missed it.
03:42:27 OK, maybe it was something else or from someone else.
03:43:10 I haven't seen this devref before; I will have to comment on it later.
03:43:46 Anyway, please take a look at it if you get a chance.
03:43:56 That's it from me.
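
(A minimal sketch of the watch loop such an API watcher might be built around, using the Kubernetes API's streaming watch; the apiserver URL and the Neutron translation step are illustrative assumptions, not code from the devref.)

    import json

    import requests

    K8S_API = 'http://127.0.0.1:8080'  # assumed insecure local apiserver

    def watch_services():
        # The apiserver streams one JSON event per line, e.g.
        # {"type": "ADDED", "object": {...a Service object...}}.
        resp = requests.get(K8S_API + '/api/v1/services',
                            params={'watch': 'true'}, stream=True)
        for line in resp.iter_lines():
            if not line:
                continue
            event = json.loads(line.decode('utf-8'))
            service = event['object']
            name = service['metadata']['name']
            cluster_ip = service['spec'].get('clusterIP')
            # A translator would react here, e.g. on ADDED create a
            # Neutron load balancer whose VIP is the cluster IP.
            print('%s service %s (cluster IP %s)'
                  % (event['type'], name, cluster_ip))

    if __name__ == '__main__':
        watch_services()
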
03:44:24 fawadkhaliq: We're having the meeting without an agenda. Do you have anything you want to talk about?
03:44:55 tfukushima: not really. Just that we are preparing the content for the Austin sessions.
03:44:57 thanks
03:45:21 Ok, cool.
03:45:36 mspreitz: Do you have anything else? If you don't, I'd like to end the meeting.
03:45:42 I have nothing else.
03:46:24 Good. Thank you for attending, guys. Let's discuss on #openstack-kuryr if you find anything.
03:46:39 #endmeeting