03:02:00 <tfukushima> #startmeeting kuryr
03:02:01 <openstack> Meeting started Tue Apr  5 03:02:00 2016 UTC and is due to finish in 60 minutes.  The chair is tfukushima. Information about MeetBot at http://wiki.debian.org/MeetBot.
03:02:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
03:02:05 <openstack> The meeting name has been set to 'kuryr'
03:02:19 <mspreitz> hi
03:02:33 <tfukushima> Hi, I think we have the weekly Kuryr meeting today.
03:02:47 <mspreitz> that's what my calendar says
03:03:14 <tfukushima> Who's up for the meeting?
03:03:27 <mspreitz> o/
03:04:21 <tfukushima> Ok, it seems like we are the only attendees.
03:04:35 <tfukushima> #info mspreitz and tfukushima are present
03:04:58 <mspreitz> then maybe it's a short meeting
03:05:21 <tfukushima> Unfortunately we don't have an agenda, so let's focus on the k8s topic.
03:06:09 <mspreitz> OK.  I have a devref up for review.  It's WIP, but I'll take any comments anyone wants to offer now.
03:06:12 <tfukushima> #topic k8s integration
03:06:41 <tfukushima> #info mspreitz submitted his devref and it's up for review (still WIP)
03:07:06 <tfukushima> #link mspreitz's devref https://review.openstack.org/#/c/290172/
03:07:07 <fawadkhaliq> hey guys, sorry I am late.
03:07:24 <tfukushima> Oh, hi fawadkhaliq. Welcome.
03:07:49 <tfukushima> #info fawadkhaliq is present as well
03:07:50 <mspreitz> I do have a few discussion questions.
03:08:09 <tfukushima> mspreitz: Go ahead.
03:08:38 <mspreitz> Using an overlay network has this bad effect: it completely bypasses the stuff that kube-proxy does in each host's main network namespace.
03:08:54 <mspreitz> Leading us to want to put up an alternate service proxying solution.
03:09:09 <mspreitz> Now the easiest thing to do is create a load balancer in a pod, for each service.
03:09:14 <mspreitz> That sucks pretty hard.
03:09:40 <mspreitz> And raises a question: can our alternate service proxy solution set the service's "service cluster IP" to be a pod's pod IP?
03:10:05 <tfukushima> mspreitz: Even with your proposal and without OVS datapath?
03:11:07 <mspreitz> I think I am speaking of an issue that applies with any overlay network that eschews the main network namespace of the hosts.
03:11:10 <tfukushima> I thought the service cluster IP can be a VIP associated with the load balancer and Pod IPs can be managed as the Pool members.
03:12:02 <mspreitz> My point is that if we are using an overlay network that avoids the hosts' main network namespace then we have an issue for implementing that VIP
03:12:59 <mspreitz> For example, one possible solution is to use a Neutron load balancer.  One of those must have its VIP on the same network as the endpoints, right?
03:13:09 <mspreitz> Or has that changed in recent releases?
03:14:11 <tfukushima> The VIP can be an IP address on a different subnet AFAIK.
03:14:28 <mspreitz> Different subnet but it must be the same network?
03:14:59 <tfukushima> mspreitz: I'm not 100% sure but I believe so.
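(For reference, the alternate-proxy idea under discussion would look roughly like the sketch below: a Neutron LBaaS v1 pool whose members are pod IPs and whose VIP is requested on a different subnet, which is exactly the constraint being questioned above. The `client` is anything exposing the python-neutronclient v2 `create_pool`/`create_member`/`create_vip` calls; all names and the function itself are illustrative assumptions, not a settled design, and whether Neutron accepts a VIP subnet different from the members' subnet is the open question.)

```python
def create_service_lb(client, vip_subnet_id, member_subnet_id, member_ips, port):
    """Hypothetical sketch: back one k8s service with a Neutron LBaaS v1
    load balancer. Pod IPs become pool members on member_subnet_id; the
    VIP is requested on vip_subnet_id, which may or may not be allowed
    to differ from the members' subnet (the open question above)."""
    # Pool on the pod (member) subnet.
    pool = client.create_pool({'pool': {
        'name': 'k8s-service-pool',
        'protocol': 'TCP',
        'lb_method': 'ROUND_ROBIN',
        'subnet_id': member_subnet_id}})
    pool_id = pool['pool']['id']
    # One member per pod IP backing the service.
    for ip in member_ips:
        client.create_member({'member': {
            'pool_id': pool_id,
            'address': ip,
            'protocol_port': port}})
    # VIP requested on a (possibly different) subnet.
    vip = client.create_vip({'vip': {
        'name': 'k8s-service-vip',
        'protocol': 'TCP',
        'protocol_port': port,
        'pool_id': pool_id,
        'subnet_id': vip_subnet_id}})
    return vip['vip']
```

Injecting the client keeps the sketch testable with a stub; against a real deployment, the `create_vip` call is where a same-network/same-subnet constraint, if any, would surface as an API error.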
03:16:13 <mspreitz> Well, my basic question is this: what part of the system is in charge of setting a service's "service cluster IP"?
03:17:41 <tfukushima> mspreitz: Usually it's kube-proxy.  Ok, I must be missing the problem you're facing.
03:18:23 <mspreitz> That would be a good answer.  My point is that if we are not using the kube-proxy then: can our alternative set the "service cluster IP" and is there any constraint on what that is set to?
03:19:14 <tfukushima> So services should be able to communicate even if they're in different namespaces?
03:19:55 <mspreitz> Since we have been assuming that one tenant can have multiple namespaces then yes, there has to be some ability to communicate between namespaces.
03:20:15 <mspreitz> (but I do not understand the connection to my question)
03:21:14 <tfukushima> So your question was "is there any constraint on what that is set to"?
03:21:57 <mspreitz> It's really a two part question.  (1) What sets the "service cluster IP" of a service?  And, if our alternate proxy can do that, (2) can our alternate set the "service cluster IP" to anything it likes?
03:23:08 <mspreitz> fawadkhaliq: would you like some time to get your client under control?
03:23:51 <tfukushima> mspreitz: Thanks for the clarification. So in my API watcher proposal, which I'll mention later: 1. the API watcher would take care of it, and 2. the subnet for the service IPs needs to be allocated besides the pod subnets.
03:24:23 <fawadkhaliq> mspreitz: I am sorry. Flaky Internet, can't do much, unfortunately..
03:24:31 <mspreitz> by "besides" do you mean (a) it has to be distinct or (b) it has to be somehow alongside or alongwith?
03:25:05 <tfukushima> I'd say rather (a).
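(The "distinct subnets" answer above, together with the default FLANNEL_NET/SERVICE_IP_RANGE split mentioned below, can be sketched with the stdlib `ipaddress` module: allocate service cluster IPs from a range that must not overlap the pod subnet. The function name and the overlap check are this sketch's assumptions, not anything from the devref.)

```python
import ipaddress

def allocate_cluster_ip(service_range, pod_subnet, used):
    """Allocate a service cluster IP from service_range, enforcing the
    assumption discussed above that the service IP range is distinct
    from the pod subnet. `used` is a set of already-allocated IPs."""
    svc = ipaddress.ip_network(service_range)
    pods = ipaddress.ip_network(pod_subnet)
    if svc.overlaps(pods):
        raise ValueError("service IP range must not overlap the pod subnet")
    # Hand out the first free host address in the service range.
    for host in svc.hosts():
        ip = str(host)
        if ip not in used:
            used.add(ip)
            return ip
    raise RuntimeError("service IP range exhausted")
```

With the example defaults quoted later in the meeting (pods on 172.16.0.0/16, services on 192.168.0.0/24), the two ranges are disjoint and allocation proceeds; an overlapping configuration is rejected up front.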
03:25:52 <mspreitz> tfukushima: are you saying that we can not make an alternate proxy solution that uses a Neutron load balancer whose VIP is on the same subnet as the endpoints?
03:26:18 <tfukushima> But in the deployment script usually they're declared differently as FLANNEL_NET=172.16.0.0/16 and SERVICE_IP_RANGE=192.168.0.0/24, for instance.
03:26:39 <mspreitz> I know that
03:26:48 <mspreitz> but that is only a matter of some existing practice
03:26:55 <mspreitz> I am asking about what the code requires
03:27:31 <mspreitz> In fact, you are quoting only some configuration of some stuff we will not use, so its relevance is not obvious.
03:28:31 <tfukushima> Well, it's the default behaviour, so I thought it could be a good starting point.
03:28:56 <mspreitz> But, again, that's configuration of stuff (flannel, kube-proxy) that we will not use.
03:29:27 <mspreitz> OK, it looks like more progress on this will not happen right now...
03:29:34 <mspreitz> Let me ask on another front..
03:30:16 <mspreitz> I wonder about not using an overlay.  But that raises questions about multi-tenancy.  How does Calico handle that?
03:31:31 <tfukushima> I don't have enough knowledge to say something unfortunately. https://github.com/projectcalico/calico-cni/tree/master/calico_cni
03:32:09 <mspreitz> fawadkhaliq?
03:32:38 <fawadkhaliq> I am not sure if Calico handles multi-tenancy..
03:32:47 <fawadkhaliq> I know about the general capabilities of Calico though
03:32:51 <mspreitz> OK, no more progress on that here now.
03:32:55 <mspreitz> I am done.
03:34:39 <tfukushima> #action Everyone reviews mspreitz's devref for k8s translate 1 and 2
03:35:24 <tfukushima> Ok, so as I was talking, I wrote the devref for the API watcher.
03:35:48 <tfukushima> #link the API watcher devref https://review.openstack.org/#/c/301426/
03:36:08 <tfukushima> I should have marked it as WIP, and it needs a better commit message.
03:37:13 <tfukushima> Although I'd modify it accordingly, I'd appreciate it if you could take a look at it when you have some time.
03:37:58 <tfukushima> It's not in perfect shape, but I just wanted some early feedback to kick things off.
03:38:09 <mspreitz> There's an implementation too, right?
03:38:59 <tfukushima> We will make them public when it gets into good shape.
03:39:18 <tfukushima> Actually we have some bugs and problems :-/
03:39:58 <tfukushima> But basically I put the necessary information for the implementation in the devref.
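(The core loop of an API watcher like the one in the devref above can be sketched as follows: consume newline-delimited watch events, as the Kubernetes watch endpoints emit them, and dispatch each to a per-kind handler that would perform the Neutron translation. The function name, the handler interface, and the dispatch-by-kind scheme are this sketch's assumptions, not the devref's actual design.)

```python
import json

def watch_events(stream, handlers):
    """Minimal API-watcher loop sketch: `stream` yields lines such as a
    response to GET /api/v1/watch/services would, one JSON event per
    line; `handlers` maps an object kind (e.g. "Service") to a callable
    taking (event_type, obj). Returns the kinds seen, in order."""
    seen = []
    for line in stream:
        if not line.strip():
            continue  # skip keep-alive blank lines
        event = json.loads(line)
        kind = event["object"]["kind"]
        handler = handlers.get(kind)
        if handler:
            # e.g. translate an ADDED Service into Neutron LB resources.
            handler(event["type"], event["object"])
        seen.append(kind)
    return seen
```

Because the stream and handlers are injected, the loop can be exercised against canned event lines without a running apiserver.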
03:40:53 <mspreitz> Please help me remember.  There is some implementation that was up for review recently.
03:41:56 <tfukushima> Really? I must have missed it.
03:42:27 <mspreitz> OK, maybe it was something else or from someone else.
03:43:10 <mspreitz> I haven't seen this devref before, will have to comment on it later.
03:43:46 <tfukushima> Anyway, please take a look at it if you get a chance.
03:43:56 <tfukushima> That's it from me.
03:44:24 <tfukushima> fawadkhaliq: We're having the meeting without any agenda. Do you have anything you want to talk about?
03:44:55 <fawadkhaliq> tfukushima: not really. Just that we are preparing the content for Austin sessions.
03:44:57 <fawadkhaliq> thanks
03:45:21 <tfukushima> Ok, cool.
03:45:36 <tfukushima> mspreitz: Do you have anything else? If you don't I'd like to end the meeting.
03:45:42 <mspreitz> I have nothing else.
03:46:24 <tfukushima> Good. Thank you for attending, guys. Let's discuss on #openstack-kuryr if you find anything.
03:46:39 <tfukushima> #endmeeting