15:01:50 #startmeeting kuryr
15:01:51 Meeting started Mon Aug 17 15:01:50 2015 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:52 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:54 The meeting name has been set to 'kuryr'
15:02:01 Hi everybody!
15:02:05 hello :)
15:02:11 Welcome to the third Kuryr meeting
15:02:17 Show of hands!
15:02:24 I'm here.
15:02:25 o/
15:02:26 o/
15:02:40 o/
15:03:07 o/
15:03:08 heh
15:03:23 #info gsagie, tfukushima, diga, banix, daneyon and apuimedo present
15:03:26 :P
15:03:32 alright
15:03:37 let's get going
15:03:44 Thank you all for joining today
15:03:49 yw
15:03:53 o/
15:03:56 o/
15:04:01 apuimedo: welcome
15:04:27 #info topic: vif-binding-unbinding
15:04:43 diga: tfukushima: what's the status?
15:04:59 apuimedo: how about using #topic :)
15:05:08 https://blueprints.launchpad.net/kuryr/+spec/vif-binding-and-unbinding-mechanism
15:05:29 #topic vif-binding-unbinding
15:05:32 good point!
15:05:45 thanks banix
15:06:05 #link https://blueprints.launchpad.net/kuryr/+spec/vif-binding-and-unbinding-mechanism
15:06:08 And we have a brief etherpad spec as well.
15:06:11 I am working on the VIF binding/unbinding work; will push the code by tomorrow, because the IT guys blocked Gerrit
15:06:12 #link https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
15:06:29 tfukushima: the blueprint comes from the etherpad
15:06:46 yes
15:06:48 Ah, OK.
15:06:59 apuimedo: I talked with diga today and we had a question: why would Kuryr need to create the namespace? Isn't that something that Docker does and sends to the driver?
15:07:43 gsagie: kuryr doesn't need to do that
15:07:48 gsagie: in the libnetwork specification they talk about creating the sandbox that is then used by the container
15:08:09 I told diga to check how other libnetwork drivers do it.
15:08:20 specifically I pointed him to the calico driver
15:08:31 apuimedo: ahh ok
15:08:35 diga: any conclusion?
15:08:39 yes, gsagie, I talked with apuimedo later on; he has given me some pointers
15:08:42 Actually, I think we can let Docker create the namespaces and we just put veth endpoints into the containers.
15:08:58 yes, that's what I had in mind, this makes more sense
15:08:59 #link https://github.com/midonet/midonet/blob/master/tools/docker/mm-docker.sh#L28-L39
15:09:13 tfukushima: do you receive the namespace with the endpoint join request?
15:09:27 gsagie: I didn't get time to look at it during the day because I am working on the code; will surely look at it tonight
15:09:32 apuimedo: yes
15:09:45 so that's it then ;-)
15:10:35 diga: tfukushima: who is working on the system for calling the different executables depending on the Neutron port type?
15:10:44 #link https://github.com/openstack/kuryr/blob/master/doc/source/design.rst#user-workflow
15:10:53 See step 4.
15:11:13 SandboxKey
15:11:17 perfect
15:11:49 #info NetworkDriver.Join gives Kuryr the network namespace location to put the veth
15:11:51 I am creating the API for the VIF binding/unbinding part
15:12:00 thanks tfukushima
15:12:07 we have not yet started on that
15:12:09 diga: any update on that?
15:12:12 ah, ok
15:12:49 Tomorrow I will start on the executable
15:12:51 #action diga to update on the executable calling via gerrit ;-)
15:13:25 :)
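(Editor's illustration of the flow agreed above: a minimal, hypothetical sketch of a /NetworkDriver.Join handler. It assumes Flask and pyroute2, which the meeting does not specify; the request/response fields follow the libnetwork remote driver API as generally documented at the time, and everything else is illustrative, not Kuryr's actual implementation.)

```python
# Hypothetical sketch, not Kuryr's code. Assumes Flask and pyroute2.
from flask import Flask, jsonify, request
from pyroute2 import IPDB

app = Flask(__name__)


@app.route('/NetworkDriver.Join', methods=['POST'])
def join():
    req = request.get_json(force=True)
    endpoint_id = req['EndpointID']
    # libnetwork hands the driver the path of the namespace Docker
    # created; Kuryr never creates the namespace itself (see above).
    sandbox_key = req['SandboxKey']

    host_ifname = 'tap' + endpoint_id[:8]
    peer_ifname = 'tpeer' + endpoint_id[:8]

    ipdb = IPDB()
    try:
        # Create the veth pair on the host and bring the host end up.
        with ipdb.create(ifname=host_ifname, kind='veth',
                         peer=peer_ifname) as host_veth:
            host_veth.up()
    finally:
        ipdb.release()

    # A port-type-specific executable would now bind host_ifname to the
    # Neutron port (e.g. plug it into the right bridge); that is the
    # piece diga is starting on.

    # libnetwork itself moves the peer end into the namespace at
    # sandbox_key and renames it using DstPrefix, so the driver never
    # has to enter the namespace.
    return jsonify({
        'InterfaceName': {'SrcName': peer_ifname, 'DstPrefix': 'eth'},
        'Gateway': '',
        'GatewayIPv6': '',
    })
```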
15:13:35 anything else about the binding/unbinding for Milestone 1 (simple bare-metal binding and unbinding)?
15:14:08 not from me
15:14:14 nothing from my side either
15:14:31 very well
15:15:03 #topic libnetwork <-> docker network mapping
15:15:51 Here, as discussed on the mailing list, we need to start mapping Docker features (for example port-mapping) and see how we address them
15:16:02 the ones we can't map directly to Neutron APIs
15:16:13 gsagie: this is now just about name mapping
15:16:14 I can start making a list
15:16:20 ahh ok
15:16:25 we'll get to that ;-)
15:16:37 tfukushima found a GitHub issue where the libnetwork maintainers specifically reject the idea of more information being provided on network creation requests
15:16:52 #link https://github.com/docker/libnetwork/issues/139
15:17:00 thanks Taku ;-)
15:17:01 so this is for the name mapping?
15:17:06 yes
15:17:17 I'm also asking how it's going in the current situation.
15:17:19 #link https://github.com/docker/libnetwork/issues/414
15:17:20 "The name is a construct that is needed only in the management layer of the solution"
15:17:52 they think that things like the network name are changeable and thus only relevant to libnetwork
15:18:03 so they should not be visible to the drivers
15:18:32 with this in mind, the clear solution is to store new networks in Neutron with the network id string we get from Docker as their name
15:18:40 can we receive the name by API call?
15:18:50 according to the id
15:18:51 gsagie: no, we can't
15:19:06 in-flight requests for the same object do not work
15:19:15 It's not available while we're calling /NetworkDriver.CreateNetwork, for instance.
15:19:19 #link https://github.com/docker/libnetwork/issues/414
15:19:29 And there's no update API so far.
15:19:56 However, even with the simplification of storing the networks in Neutron with the Docker id, there is the issue of how to connect to pre-existing Neutron networks
15:20:20 at this point defining a label neutron_net=neutron_net_id seems the most likely answer
15:20:41 what do you mean by "connect to pre-existing neutron networks"?
15:21:11 gsagie: one of the goals of Kuryr is to allow plugging containers into Neutron networks that were not created in Neutron by Kuryr
15:21:38 I see
15:21:48 you would implicitly "create" the network with the Docker API, but Kuryr would see that the network exists in Neutron and would just use it and report creation success
15:22:10 I guess it's the same for pre-created ports, for fast creation
15:22:33 yup
15:22:38 exact same thing
15:23:05 you can update a network name, however
15:23:11 in Neutron, as far as I remember
15:23:15 tfukushima is investigating whether the labels that are defined for a network at network create time are accessible at endpoint creation time
15:23:20 if that is not the case
15:23:32 we need to store the mapping somewhere
15:24:05 (maybe create a metadata field in the neutron schema or try to abuse the data storage of libnetwork)
15:24:15 apuimedo: can you elaborate? what mapping?
15:24:27 gsagie: that would be very unfriendly to operators
15:25:00 banix: sure thing. Mapping the docker_id to the real pre-existing Neutron network uuid that is backing the network (as specified in the label)
15:25:48 hmmmm
15:26:32 #action tfukushima to find out about the label persistence or libnetwork abuse
15:26:56 otherwise we need to extend Neutron or delay the connection to pre-existing networks for another milestone
15:27:23 (or do what gsagie proposes, but I find it very user-unfriendly ;-) )
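(Editor's illustration of the naming scheme just discussed: store the Docker network ID as the Neutron network name, unless a label such as neutron_net points at a pre-existing network. The label key, the handler shape, and the use of python-neutronclient are assumptions for illustration, not the agreed design.)

```python
# Illustrative sketch only; the label key and handler layout are
# assumptions, and the credentials are placeholders.
from flask import Flask, jsonify, request
from neutronclient.v2_0 import client as neutron_client

app = Flask(__name__)
neutron = neutron_client.Client(username='admin', password='pass',
                                tenant_name='admin',
                                auth_url='http://127.0.0.1:5000/v2.0')


@app.route('/NetworkDriver.CreateNetwork', methods=['POST'])
def create_network():
    req = request.get_json(force=True)
    docker_net_id = req['NetworkID']
    options = req.get('Options') or {}

    # Hypothetical label naming a pre-existing Neutron network.
    existing = options.get('neutron_net')
    if existing:
        # The network already exists in Neutron: verify it and report
        # creation success. Persisting the docker_net_id -> existing
        # mapping is exactly the open question above (labels at
        # endpoint time, a metadata field, etc.).
        neutron.show_network(existing)
    else:
        # Use the Docker network ID as the Neutron name, since
        # libnetwork never exposes the human-readable name to drivers.
        neutron.create_network({'network': {'name': docker_net_id}})

    return jsonify({})
```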
15:27:56 anything else about the name mapping?
15:27:57 apuimedo: yeah I agree; of course we can also think about a Kuryr extension that holds these mappings in Neutron
15:28:21 gsagie: that's the last resort that I'd ask you and irenab to look at
15:28:22 but I guess that's part of the label investigation
15:28:34 okie, will do apuimedo
15:28:47 gsagie: that would be developer-unfriendly
15:28:59 #action gsagie to look at putting the network resource info in Neutron as a last-resort option
15:29:19 #topic feature mapping
15:29:35 gsagie: can you give an intro?
15:29:58 I have started to write a spec for Kuryr in the Neutron specs repo for Liberty: https://review.openstack.org/#/c/213490/
15:30:16 #link https://review.openstack.org/#/c/213490/
15:30:52 basically I am trying to map all the uses and use cases that we can currently think of for Kuryr, including integration with Magnum and Kolla, and put it there
15:31:33 it's still a work in progress, but please review and comment
15:31:40 gsagie: I looked at the spec and I'll make some comments later
15:31:51 Maybe put some follow-up path for some sections
15:32:03 #action all to review the spec
15:32:17 apuimedo: yeah, I am going to write more on each of the use cases; I uploaded it in the middle
15:32:33 It looks good, although it seems I'm not allowed to give you +1.
15:32:36 apuimedo: and we can manage each one in a blueprint
15:32:58 tfukushima: you should be able to review.. it's in the neutron repository
15:33:02 gsagie: please keep it as WIP if that is the case so you don't get blamed for incompleteness, etc.
15:33:03 gsagie: I'm wondering if we should put sections for different container orchestration solutions or if they should be separate specs
15:33:11 banix: it's WIP :)
15:33:32 I think the latest patch may have removed the WIP flag
15:33:40 banix is right
15:33:41 gsagie: Oops, I found it in the form that appears when I press the Review button, indeed.
15:34:05 yeah, thanks banix, will fix
15:34:22 what do you all think about the orchestrators, separate specs or sections?
15:34:47 apuimedo: that's a good idea, will add that; it should probably be something like "Deployment setups"
15:34:50 or something like that
15:35:01 apuimedo the kuryr spec should address container cluster/orchestration engines
15:35:09 haven't had a chance to review. will do today
15:35:26 daneyon: indeed, I was just wondering if it should be sub-specs in separate files ;-)
15:35:29 I suggest using a separate section in the spec for this purpose
15:35:42 daneyon: agreed ;-)
15:35:47 thx
15:35:57 daneyon: we want to hear the Magnum team's comments on that, and any missing parts for your use cases
15:36:04 #info we'll be adding cluster/orchestration engine sections in the kuryr spec
15:36:39 gsagie for sure. I will review later today and provide my feedback. Will also share with the rest of the magnum community
15:36:44 daneyon: I have been reading your etherpad; we are planning to address some useful things, for example mapping between Kubernetes services and the Neutron load balancing API, so I think there are some interesting future possibilities
15:36:51 #info it is important to discuss the expectations/assumptions those engines have and how they are affected by Kuryr
15:37:09 gsagie that's great to hear.
15:37:23 daneyon: we'll be attending the next network meeting too ;-)
15:37:30 thx
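(Editor's illustration: the Kubernetes-services-to-Neutron-load-balancing mapping mentioned above was only an idea at this point. The following is a purely speculative sketch using the then-current LBaaS v1 calls in python-neutronclient; every name and parameter choice here is illustrative.)

```python
# Speculative illustration only; no agreed design existed at this point.
from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(username='admin', password='pass',
                                tenant_name='admin',
                                auth_url='http://127.0.0.1:5000/v2.0')


def service_to_lbaas(service, subnet_id):
    """Map a parsed Kubernetes Service manifest (a dict) to a Neutron
    LBaaS v1 pool and VIP (first port only, for brevity)."""
    name = service['metadata']['name']
    port_spec = service['spec']['ports'][0]
    pool = neutron.create_pool({'pool': {
        'name': name,
        'protocol': port_spec.get('protocol', 'TCP'),
        'subnet_id': subnet_id,
        'lb_method': 'ROUND_ROBIN',
    }})['pool']
    vip = neutron.create_vip({'vip': {
        'name': name,
        'protocol': pool['protocol'],
        'protocol_port': port_spec['port'],
        'subnet_id': subnet_id,
        'pool_id': pool['id'],
    }})['vip']
    # Each container backing the service would then be registered with
    # neutron.create_member(...) pointing at its port's IP address.
    return pool, vip
```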
15:37:41 daneyon: can you share the next time/date here? because I know you are rotating on that
15:37:47 thank you for always joining at such an inconvenient hour
15:37:51 sure, 1 sec
15:37:56 I don't think the networking one rotates
15:37:59 does it?
15:38:11 on the page it was written that it does.. but I don't know
15:38:40 #topic configuration management
15:38:52 banix: any update on this front?
15:38:52 The Magnum Container Networking Subteam will meet on 8/20 @ 1800 UTC
15:38:54 #link https://wiki.openstack.org/wiki/Meetings/Containers#Container_Networking_Subteam_Meeting
15:39:02 daneyon: thanks
15:41:29 banix?
15:41:46 btw, apuimedo, irena suggested that we consider a virtual or "semi-virtual" sprint
15:41:50 gsagie: sorry, got distracted for a minute
15:42:14 apuimedo: nothing new, will work on it this week
15:42:59 #action banix to update on configuration management for the next meeting
15:43:16 #topic sprint proposal
15:43:52 #info irenab proposed a virtual/semi-virtual sprint
15:44:07 she mentioned she will be in your office around September
15:44:15 #action irenab gsagie to propose dates and format
15:44:23 okie
15:44:34 it would be nice to tackle milestone 1 with it
15:44:52 #topic VM container networking
15:45:45 #info kuryr drivers will have a brainstorming session about VM container networking for Milestone 2 on Wednesday
15:45:58 :)
15:46:17 #action apuimedo to bring diagrams of two proposals to spark discussion
15:46:23 apuimedo where will the brainstorming session take place?
15:46:39 it is on the invite; it will be over Google Hangout
15:47:04 Nice!
15:47:04 apuimedo do you have a link to the invite?
15:47:08 apuimedo: maybe publish the link, in case anyone here wants to join
15:47:58 #link https://plus.google.com/hangouts/_/midokura.com/kuryr
15:48:07 apuimedo thx
15:48:22 if there are issues with people being unable to join, we can try some alternative
15:49:05 apuimedo that link takes me to the meeting. Do you have a link that provides the date and time of the meeting?
15:49:07 #info the goal of the meeting is to come out of it with an agreement on a plan A and B for the VM container use case
15:49:38 daneyon: unfortunately not, I'll put it here in an #info
15:49:44 ok
15:50:01 just want to make sure I have it in my calendar
15:50:05 #info August 19th, 15:00 UTC
15:50:36 #topic Open floor
15:50:59 Does anybody wish to bring up some other topic for discussion?
15:51:22 I wanted to suggest combining tfukushima's several patches into one. Does that make sense?
15:51:47 It's going to be a relatively big patch, which I wanted to avoid.
15:52:01 I'm OK with it, though.
15:52:04 #link https://review.openstack.org/#/q/status:open+project:openstack/kuryr,n,z
15:52:05 banix: can you link to the patches you'd combine?
15:52:23 I think several small patches are always better and easier to review, but that's just personal preference
15:52:33 personally I like the split
15:52:38 tfukushima: I thought one patch would be easier to review and get merged so we have a first functioning version out there
15:52:47 it seems like they are nicely split by functionality
15:52:48 just a suggestion
15:53:15 #action apuimedo to finally review and W+1 the patches that have reviews
15:53:23 +1 ;)
15:53:40 that would be another way of doing it :) apuimedo
15:53:44 anything else?
15:53:45 good day/night everyone, and thanks for the meeting!
15:53:50 I need to add more tests for validations and failures though.
15:53:52 thank you
15:53:56 thanks all
15:53:56 bye!
15:53:58 Thank you.
15:54:12 tfukushima: I think you can just do it in another patch if that's more convenient for you
15:54:15 to just get this merged now
15:54:21 agreed
15:54:36 +1
15:54:37 OK. Will do.
15:54:45 thank you all for the attendance!
15:54:50 #endmeeting