03:02:54 <tfukushima> #startmeeting kuryr
03:02:55 <openstack> Meeting started Tue May 31 03:02:54 2016 UTC and is due to finish in 60 minutes.  The chair is tfukushima. Information about MeetBot at http://wiki.debian.org/MeetBot.
03:02:56 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
03:02:58 <openstack> The meeting name has been set to 'kuryr'
03:03:32 <tfukushima> Hi, who's up for the Kuryr weekly meeting?
03:03:35 <vikasc> o/
03:04:36 <tfukushima> vikasc: It seems you and me. :-)
03:04:41 <vikasc> :D
03:04:48 <vikasc> banix, was also here
03:04:49 <tfukushima> #info vikasc and tfukushima are present
03:05:23 <tfukushima> banix: Do you want to join us?
03:05:43 <banix> yes i am here. Vikas woke me up :) just kidding
03:06:23 <tfukushima> #info banix is also present
03:06:30 <vikasc> I would like to discuss address spaces/scopes when it's my turn :)
03:07:10 <tfukushima> So, we don't have an agenda today, unfortunately
03:07:29 <tfukushima> #topic IPAM driver
03:07:46 <tfukushima> Let's start off with the address spaces/scopes topic.
03:08:11 <vikasc> thanks
03:08:16 <vikasc> https://github.com/docker/docker/issues/23025
03:09:06 <vikasc> Recently banix observed this problem in the subnet-determination logic, where two Docker networks are launched with the same CIDR for their subnets
03:10:18 <vikasc> As tfukushima and banix know, we have already discussed this on the ML, so it should be fine if I skip the problem description part
03:10:39 <banix> yes pls go ahead
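(For context, an illustrative reproduction of the problem; the libnetwork and IPAM driver names are assumed to be "kuryr" here:
    docker network create -d kuryr --ipam-driver=kuryr --subnet=10.10.0.0/24 net1
    docker network create -d kuryr --ipam-driver=kuryr --subnet=10.10.0.0/24 net2
Both networks end up backed by Neutron subnets with the same CIDR, so IPAM logic that looks up the subnet by CIDR alone cannot tell which network an address request belongs to.)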
03:11:17 <vikasc> now, to solve this problem, I can see two possible solutions: one short-term and the other a long-term, permanent one
03:11:48 <vikasc> the long-term, permanent one will be mapping Docker address spaces to Neutron address scopes
03:12:10 <vikasc> I have mentioned the details in this BP: https://blueprints.launchpad.net/kuryr/+spec/address-scopes-spaces
03:12:50 <vikasc> and the problem is that many backends don't support Neutron address scopes at the moment
03:13:35 <tfukushima> Hmm, what is the short term solution?
03:13:42 <vikasc> so for that, we can fix this issue by fetching the poolId info using the Docker "network inspect" API for the particular network at network creation time
03:14:20 <vikasc> but unfortunately, at the moment the Docker inspect API doesn't give the poolId info
03:14:57 <vikasc> for that I have raised a pull request against Docker: https://github.com/docker/docker/pull/23028
03:15:11 <tfukushima> Dah, that's unfortunate.
03:15:25 <vikasc> one more problem is,
03:15:27 <banix> would using ipam-options be an option? as ugly as that may be?
03:15:51 <vikasc> though Docker IPAM internally supports address spaces, it doesn't provide an option to pass them
03:16:07 <vikasc> banix, I tried that too
03:16:25 <vikasc> but nothing from the IPAM options is passed to the network create command arguments
03:17:29 <vikasc> I have opened an issue with Docker for providing an option to set address spaces at network creation time: https://github.com/docker/docker/issues/23076
03:17:30 <banix> and at the CLI for network create, the network ID is not known yet, so it can't be passed to the IPAM driver
03:17:51 <vikasc> banix, exactly
03:18:13 <vikasc> not sure how other drivers manage this
03:19:06 <vikasc> some relevant discussion about address scopes can also be found here: https://github.com/docker/libnetwork/issues/489
03:19:31 <vikasc> Though it's very long
03:20:15 <vikasc> I went through it, and the summary is that, per aboch, address scopes are the answer for handling this mapping between pool IDs and CIDRs
03:21:02 <vikasc> At one point in this discussion, he said that in the future he will add support for address scopes in the UI/CLI, but as of now it's not there
03:21:23 <vikasc> now waiting for thoughts from banix and tfukushima
03:22:38 <tfukushima> Probably we should document this discussion since there's no complete solution for now.
03:23:29 <vikasc> tfukushima, does "document" mean something like a spec? I have tried to cover most of it in the BP
03:23:36 <banix> I think the main thing is the feedback we get from the Docker guys, so we can get an idea of what the chances of vikas’s proposal are
03:23:39 <tfukushima> Some improvements need to be made both in Docker and in our libnetwork plugin.
03:24:06 <vikasc> tfukushima, agree
03:25:02 <vikasc> so in Docker I am pushing for both approaches, the short-term one (inspect) and the long-term one (the address-space option)
03:25:11 <vikasc> but I'm not getting any responses
03:25:34 <banix> we are going through a long weekend in the us; hopefully we hear from them soon
03:25:49 <vikasc> Would appreciate it if some of our Kuryr community folks could help there and chime in
03:26:00 <tfukushima> I'd like to see some notes somewhere other than the blueprint, in the devref for instance, since the blueprint is not very visible to users in my opinion.
03:26:18 <vikasc> sure tfukushima
03:26:27 <vikasc> will add the current state there
03:26:37 <tfukushima> vikasc: Thanks.
03:27:10 <tfukushima> #link blueprint for IPAM issue https://bugs.launchpad.net/kuryr/+bug/1585572
03:27:10 <openstack> Launchpad bug 1585572 in kuryr "Incorrect logic of subnet fetching in /IpamDriver.RequestAddress handling" [High,In progress] - Assigned to vikas choudhary (choudharyvikas16)
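(Background sketch of why the handler cannot disambiguate: in the libnetwork remote IPAM protocol, /IpamDriver.RequestAddress carries only the pool ID and an optional preferred address, not the Docker network ID. The request body looks roughly like this, with placeholder values:
    POST /IpamDriver.RequestAddress
    {"PoolID": "<opaque-pool-id>", "Address": "", "Options": {}}
If the driver maps the pool back to a Neutron subnet by CIDR and two networks share that CIDR, the lookup is ambiguous.)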
03:27:26 <vikasc> tfukushima, this is related to the same discussion
03:28:46 <tfukushima> Oops, it was a bug report, wasn't it?
03:28:56 <vikasc> tfukushima, yes
03:28:58 <vikasc> :)
03:29:33 <vikasc> tfukushima, I got it.. you want me to add a spec in the devref for this address spaces/scopes mapping, mentioning the current status
03:30:14 <tfukushima> vikasc: Yes, it'd be much appreciated.
03:30:26 <vikasc> tfukushima, sure.. no problem
03:30:59 <tfukushima> Good.
03:31:44 <vikasc> my part is done
03:31:50 <tfukushima> It may take some time, and the doc would be helpful to users until we have a proper solution for the issue.
03:32:07 <vikasc> tfukushima, +1
03:32:19 <tfukushima> Ok, thanks, vikasc. Do you have anything to add, banix?
03:32:29 <banix> no i am good
03:32:53 <tfukushima> Let's go on.
03:33:03 <banix> I will follow up on vikas’s issues at libnetwork, etc.
03:33:17 <vikasc> banix, I really need your help there :)
03:33:29 <tfukushima> banix: Thanks.
03:33:34 <vikasc> these Docker folks seem very rigid and inflexible
03:35:07 <tfukushima> That's been a known issue since the beginning. :-)
03:35:11 <vikasc> :D
03:35:13 <tfukushima> I understand they have tons of tasks and don't have enough resources to handle all the issues, though.
03:35:22 <vikasc> agree
03:35:50 <vikasc> and sometimes that kind of control is required for project stability and performance
03:36:14 <tfukushima> Yes. It'd take some time, so let's not set our expectations too high.
03:36:23 <vikasc> :)
03:36:51 <tfukushima> #topic K8s integration
03:37:29 <tfukushima> So my devref was merged a while ago.
03:38:24 <tfukushima> #link Raven devref https://review.openstack.org/#/c/301426/
03:38:46 <tfukushima> As you know, it has many typos and style issues.
03:39:09 <tfukushima> I'll fix them, but that will be done separately.
03:39:14 <vikasc> incrementally we can improve it
03:40:07 <vikasc> I have one general query; will ask in "open discussion"
03:40:49 <tfukushima> Ok, I don't have much progress, so we can move there shortly.
03:41:30 <tfukushima> We'll submit the implementation patches to the upstream in the near future.
03:42:28 <tfukushima> Although we'd need some discussion on a few things, like the usage of Python 3.
03:42:50 <tfukushima> That's it from my side.
03:42:55 <vikasc> tfukushima, please consider me available in case any help is needed.
03:43:27 <tfukushima> vikasc: Sure. I will.
03:43:39 <vikasc> ok, shall I ask my query now?
03:43:57 <tfukushima> #topic Open discussion
03:44:09 <tfukushima> vikasc: Please go ahead.
03:44:15 <vikasc> I was going through the nested-containers spec: https://github.com/openstack/kuryr/blob/master/doc/source/specs/mitaka/nested_containers.rst#L87
03:44:46 <vikasc> it talks about a Kuryr agent and a Kuryr server: the agent on all compute machines and the server on the controller
03:45:25 <vikasc> I could not understand the objective of the Kuryr server.
03:46:11 <vikasc> the Kuryr agent on the compute machines can itself query Neutron for internal tags, subport IDs, etc.
03:47:10 <vikasc> Can you guys please share your understanding?
03:47:38 <vikasc> or should I post it on the ML?
03:49:12 <tfukushima> It's best to ask fawadkhaliq, but he's not here today. So probably posting to the ML is the way to go.
03:49:51 <vikasc> tfukushima, thanks.. I had that in mind, but thought maybe I could ask fawad directly in the meeting
03:50:18 <vikasc> tfukushima, what is your take on this?
03:51:34 <vikasc> tfukushima, I was thinking maybe it's too naive a question to ask.
03:51:46 <tfukushima> From this description, https://github.com/openstack/kuryr/blob/master/doc/source/specs/mitaka/nested_containers.rst#L185-L196, it seems he thought there are some cases where we need to split the functionality of Kuryr between the VM and the host.
03:52:52 <tfukushima> Or the Kuryr server is a single instance across the hosts.
03:53:23 <tfukushima> For instance, it could be an API watcher in the K8s context.
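(A rough illustration of "API watcher", assuming a kube-apiserver listening on the insecure port 8080: the watcher holds a streaming watch on the K8s API and reacts to resource events, e.g.
    curl -sN "http://127.0.0.1:8080/api/v1/pods?watch=true"
which emits one JSON event object per line as pods are added, modified, or deleted.)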
03:53:25 <vikasc> tfukushima, maybe the Kuryr server is for the host-side functionality, running on the controller
03:53:42 <vikasc> and the agent inside the VM.
03:54:26 <tfukushima> Yes, I was forgetting that this was the nested-containers proposal.
03:54:59 <vikasc> tfukushima, hmm
03:55:09 <tfukushima> vikasc: I can only give my guesses; it's best to ask him. :-)
03:55:24 <tfukushima> Or his implementation would follow.
03:55:26 <vikasc> tfukushima, thanks, will catch up with him :)
03:55:58 <tfukushima> Good.
03:56:24 <tfukushima> So let's wrap it up.
03:56:33 <tfukushima> #link Current Kuryr open patches https://review.openstack.org/#/q/project:openstack/kuryr
03:57:11 <tfukushima> Oops, that was all the patches, not just the open ones.
03:57:22 <vikasc> :)
03:57:26 <tfukushima> #link Current (real) Kuryr open patches https://review.openstack.org/#/q/project:openstack/kuryr+status:open
03:58:08 <tfukushima> We have a bunch of patches, so please look at them when you have some time, guys.
03:58:24 <banix> will do
03:58:30 <vikasc> will do
03:58:52 <tfukushima> #action vikasc asks fawadkhaliq about "Kuryr server"
03:59:03 <tfukushima> #action Everyone reviews the patches
03:59:23 <tfukushima> Ok, that's it. Thanks for attending guys.
03:59:29 <vikasc> thanks
03:59:39 <tfukushima> #endmeeting