18:00:24 #startmeeting container-networking
18:00:25 Meeting started Thu Jul 30 18:00:24 2015 UTC and is due to finish in 60 minutes. The chair is daneyon_. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:29 The meeting name has been set to 'container_networking'
18:00:36 #topic roll call
18:00:39 Adrian Otto
18:00:48 Daneyon here
18:01:30 o/
18:01:34 Let's wait a couple minutes for others to join.
18:01:34 o/
18:01:59 o/
18:02:58 #topic Review Networking Spec Submission/Feedback
18:03:04 #link https://review.openstack.org/#/c/204686/
18:03:20 We had a ton of feedback, mainly from the neutron community
18:03:35 Lots of -1's from the neutron community
18:04:33 It appears that mestery (neutron PTL) would +1 the spec if we did away with the network_backend abstraction and supported libnetwork and libnetwork only
18:04:46 this would align us with the kuryr project
18:05:01 #link https://github.com/openstack/kuryr/
18:05:41 The biggest question mark is supporting flannel. flannel is not a libnetwork remote driver
18:06:05 I have found that it should be possible to have flannel work with libnetwork
18:06:10 daneyon_: I think there is a middle ground
18:06:22 flannel would use the libnetwork native bridge driver
18:06:23 we could state an intent to use libnetwork to the extent possible.
18:06:38 in the case of flannel, we only have the option of integrating using the bridge interface
18:06:41 daneyon_: how "serious" is the kuryr project? sorry, but there are no sources and no documentation
18:07:07 long-term someone could create a flannel remote driver; it would still use the native bridge driver for L2. This is because flannel is an L3-only solution
18:07:13 so that will need to be an exception until someone creates a viable solution to the absence of a libnetwork remote driver for flannel
18:08:05 adrian_otto the solution may currently exist. IMO it's just a matter of validating that flannel can work with libnetwork's native bridge driver.
18:08:18 if it does not pan out, then i agree that there needs to be an exception
18:08:51 ok, who can perform that validation, and on what timeframe?
18:09:22 eghobo i believe the kuryr project has good intentions. It's super new... my issue is that code is dropping without any detailed design specs. I don't like that approach.
18:09:57 however, i'm happy to see the neutron community addressing container networking.
18:10:09 daneyon_: does kuryr at least have regular team meetings?
18:10:10 adrian_otto i am going to validate it
18:10:47 i would be further along, but my lab was moved and i have been dealing with lab changes slowing me down, and i was also part of the kolla midcycle.
18:11:11 i would expect to finalize the validation and do a write-up by the end of next week
18:11:16 daneyon_ rocked the midcycle btw ;)
18:11:26 * sdake thanks daneyon_ heartily :)
18:11:31 it does not appear in the wiki at all
18:11:40 adrian_otto i have yet to see any details about kuryr meetings.
18:11:54 if i don't see something soon, i will contact Gal to get the details
18:12:11 well look, we can't take a nonexistent thing seriously
18:12:14 #action danehans contact Gal to get Kuryr meeting details.
18:12:24 sdake happy to help.
18:12:55 if it's not an openstack team, has no specs, has no wiki page, does nothing yet, and is just a code repo, then we can't be expected to take a dependency on it.
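
A note on the flannel validation planned above: the integration under test follows flannel's documented handoff to the Docker daemon — flannel leases each host a subnet and records it in /run/flannel/subnet.env, and libnetwork's native bridge driver is then scoped to that subnet via daemon flags. A minimal sketch, assuming flannel's default file location and variable names; the helper function itself is hypothetical:

    #!/usr/bin/env python3
    """Sketch of the flannel + libnetwork-bridge handoff under evaluation."""

    def docker_opts_from_flannel(subnet_env='/run/flannel/subnet.env'):
        """Translate flannel's environment file into docker daemon options.

        flannel records the per-host subnet it leased (and the tunnel MTU)
        in subnet.env; pointing the native bridge driver at those values
        keeps container IPAM inside flannel's address space.
        """
        env = {}
        with open(subnet_env) as f:
            for line in f:
                key, _, value = line.strip().partition('=')
                env[key] = value
        # --bip scopes the docker0 bridge to this host's flannel subnet;
        # --mtu leaves room for any encapsulation overhead flannel adds.
        return '--bip={FLANNEL_SUBNET} --mtu={FLANNEL_MTU}'.format(**env)

    if __name__ == '__main__':
        print(docker_opts_from_flannel())

Under this wiring flannel stays an L3-only layer (per-host subnets plus inter-host routing) while the native bridge driver keeps handling local L2 and IPAM, which is why a flannel remote driver may not be strictly required.
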
18:13:01 adrian_otto does the eval time period i provided work for you?
18:13:14 so my guidance is to participate as it evolves, and advance with our plans in parallel
18:13:22 we are not taking a dep on a stackforge project - I will -2 any such review imo :)
18:13:55 we can have further discussions at our normal team meeting if someone wants to try to change my mind :)
18:14:10 daneyon_: I'm satisfied with the proposed timeframe. I'm actually not concerned about when we finish. What I care most about is that we are clear about delegating the work, and tracking it to completion.
18:14:19 adrian_otto agreed. i don't plan to depend on the project anytime soon. If we can make the neutron community happy by losing the network_backend abstraction and standardizing on libnetwork, it may be worth our while to do the flannel/libnet eval
18:14:23 I also don't like depending on kuryr until it is mature
18:14:35 my issue with standardizing on libnet goes beyond neutron/kuryr
18:14:44 whether a project is mature or not is not a deal killer for me hongbin, it's the namespace
18:14:52 i don't think kuryr should standardize on libnet
18:15:07 and i will continue to let that be known in that community
18:15:09 sdake: kuryr is already under the openstack namespace
18:15:25 oh ok then, my objection is not valid then :)
18:15:34 until there is a container networking industry standard, i think it's not wise to do.
18:15:40 sdake: it was inserted without community discussion
18:16:03 especially when k8s has a pluggable networking model and I think COEs are where the long-term value is
18:16:05 ack, I just want people to know where i stand with dependencies, and a stackforge project as a dep I am -2 on
18:16:08 I can summarize the process as an executive action by the Neutron PTL, to which there was no timely objection
18:16:41 hindsight is 20/20, but there should have at least been a core reviewer vote
18:17:16 if Neutron wants to make a docker driver for OpenStack networking, that's their prerogative
18:17:17 adrian_otto understood. if the eval goes as planned, we should have a good understanding of the implementation details. I'll create bp's for each and work with others to divide up the tasks and track them to completion.
18:17:31 but they can't expect us to standardize on that without talking with us about it first.
18:17:45 or showing any form of written plan
18:18:23 sdake agreed. even if we standardize on libnetwork, we will not depend on kuryr anytime soon. btw kuryr went straight into the big tent??????
18:18:23 daneyon_: tx!
18:18:46 daneyon_: This is actually a gap in our governance process
18:18:56 daneyon_ the big tent permits a PTL to insert new repos related to their project
18:19:08 I don't think you should be allowed to just grab scope like that without any email to the dev list, or any prior discussion with stakeholders
18:19:11 I think there is probably a discussion to be had about how that should happen
18:19:41 the thing is we trust our ptls to "do the right thing"
18:19:42 yes, that's something I'm planning to raise with the TC, because it goes against our community values.
18:19:48 if we standardize on libnetwork, then we will be in alignment with kuryr w/o depending (atm) on their code. it's just making sure both projects are marching in the same direction.
18:19:51 but there is no written formality around what the right thing is
18:20:06 adrian_otto ya - it depends on whether the right thing is done or not
18:20:07 in this case, we really needed to have a discussion before that governance review was merged
18:20:14 sdake ack
18:20:36 FTR this subteam has been operating in the correct way.
18:21:13 this == containers_networking subteam
18:21:15 which subteam adrian_otto
18:21:26 thanks ;)
18:21:34 adrian_otto would you be willing to own an action of requesting that the neutron PTL create a kuryr design spec?
18:21:50 I asked in the review and got no feedback
18:21:53 daneyon_ what adrian_otto said he would do is take it up with the tc
18:21:54 from what I can tell kuryr has not organized an openstack team yet.
18:22:02 sdake ack
18:22:12 i'm bringing up a different request
18:22:14 which I think is an appropriate solution
18:23:00 IMO I think it's important that Kuryr provide technical details. As of today, the project has a few sentences describing what it is and that's it.
18:23:01 daneyon_: yes. Assign me an action item to request a kuryr design spec
18:23:59 #action adrian_otto To formally request that the Neutron/Kuryr PTL submit a Kuryr design spec.
18:24:05 tx
18:24:14 thx
18:24:46 any other details related to the magnum net spec that we should discuss?
18:25:25 ok, then let's move on
18:25:33 #topic open discussion
18:25:43 I have been diving into libnetwork, primarily the remote drivers code.
18:25:47 #link https://github.com/docker/libnetwork/tree/master/drivers/remote
18:26:16 ^ I want to make sure i understand libnet, especially the remote driver code, in great detail.
18:26:23 I am also starting to see Kuryr code drop and have been reviewing the initial commits.
18:26:24 daneyon_: just curious why you are against taking the libnetwork model
18:26:30 #link https://github.com/openstack/kuryr/commits/master
18:26:56 eghobo I am for the libnetwork model. i'm just against standardizing on it atm.
18:27:16 personally, i don't like standardizing on tech that's not a standard.
18:27:21 what do others think?
18:27:29 +1
18:27:41 again, especially since k8s has its own pluggable network model/implementation.
18:27:42 we don't need to standardize, but we can state an intent to use something
18:27:51 personally I agree with Kyle, and the libnetwork model looks right (no surprise, Docker has a strong networking team)
18:27:59 and if that turns out not to meet our needs, we will adjust expectations and change direction
18:28:06 +1 adrian_otto
18:28:47 we can merge a spec, and then have subsequent changes proposed against it.
18:28:51 we should agree that there are not so many standards in the container world right now ;)
18:29:03 we could even anticipate that and put a version number in it
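
For background on the remote driver code linked above (18:25:47): libnetwork drives a remote driver over plain HTTP with JSON bodies, POSTing to well-known endpoints such as /Plugin.Activate and /NetworkDriver.CreateNetwork. A heavily stubbed, hypothetical skeleton to illustrate the shape of the protocol; a real driver would listen on a unix socket under /run/docker/plugins/ and program the data plane in each handler:

    #!/usr/bin/env python3
    """Hypothetical skeleton of a libnetwork remote driver (illustrative only)."""
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RemoteDriverHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get('Content-Length', 0))
            request = json.loads(self.rfile.read(length).decode() or '{}')
            if self.path == '/Plugin.Activate':
                # Handshake: declare which plugin APIs this process implements.
                response = {'Implements': ['NetworkDriver']}
            elif self.path == '/NetworkDriver.CreateNetwork':
                # request['NetworkID'] names the network; a real driver would
                # create its per-network state (bridge, VLAN, routes) here.
                response = {}
            elif self.path == '/NetworkDriver.CreateEndpoint':
                # A real driver would allocate the interface/addressing here.
                response = {}
            else:
                # Remaining hooks (Join, Leave, DeleteNetwork, ...) stubbed.
                response = {}
            body = json.dumps(response).encode()
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == '__main__':
        # A TCP port is used here only to keep the sketch short and runnable.
        HTTPServer(('127.0.0.1', 9999), RemoteDriverHandler).serve_forever()
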
18:29:12 adrian_otto understood. If we intend to use libnetwork and also intend to support k8s pluggable net, then that's where things get hairy and the neutron team is unhappy.
18:29:35 because i think we would need to implement an abstraction such as network_backend
18:29:46 we don't have a compulsion to support k8s pluggable yet
18:29:55 so let's cross that bridge when we come to it
18:29:57 if we don't, then trying to support both could get messy
18:30:22 adrian_otto good point
18:30:31 think of this as a sequence
18:30:47 we state what we expect the long term vision to be, and step 1 toward it
18:31:03 and set expectations that we could revise the direction or the vision based on what we learn
18:31:28 adrian_otto until i can validate that flannel can work with libnet's native bridge driver, i want to pause the spec. then we can update it based on the results of the eval
18:31:39 that's totally appropriate
18:32:05 I suggest that you toggle the review to WIP
18:32:18 adrian_otto agreed, we'll cross that bridge later. i just want the subteam to know where my head is at.
18:32:19 and just put a comment at the end to expect a revision
18:32:43 adrian_otto I'll make the changes to the spec after our meeting
18:32:50 tx
18:33:14 #action danehans to update network spec review to WIP and add comment to expect a revision
18:33:51 I am also starting to see Kuryr code drop and have been reviewing the initial commits.
18:33:56 #link https://github.com/openstack/kuryr/commits/master
18:34:22 adrian_otto ^ is one of the reasons why i am asking for a kuryr design spec
18:34:33 It appears the Kuryr code is modeled on calico
18:34:38 #link https://github.com/Metaswitch/calico-docker
18:35:16 i'll give everyone 5 minutes for a quick review of the links
18:35:37 Let me know if you have any questions, concerns, ideas, etc.
18:35:54 daneyon_: what's your opinion of calico?
18:36:18 eghobo I really like their approach to container networking.
18:36:31 no overlays, a router on each host
18:36:40 uses bgp for routing
18:36:54 i'm a big fan of bgp since it scales
18:37:11 i've never been much for overlays
18:37:20 I was always scared of bgp in the dc, but maybe ;)
18:37:49 network engineers understand tcp/ip and routing protocols (i.e. bgp), and calico seems to hit the spot there
18:38:08 eghobo why?
18:39:36 mostly because I don't know it too well :( and one simple mistake can kill all traffic
18:40:09 but the facebook and fastly folks think it's a good idea
18:40:58 who is working on Calico?
18:41:15 FB+Fast.ly?
18:41:33 eghobo there is a fair bit of a learning curve with bgp. I think you can make a lot of different operational mistakes that can cause huge problems in a data center. fortunately bgp has been operating in dc's and on the internet for a long time and the ops folks have it down. bgp also has preventative measures for reducing mistakes.
18:41:40 the description of the approach looks pretty compelling
18:41:54 Metaswitch is behind Calico
18:42:21 Some of their team have been involved in libnetwork from early on.
18:42:43 adrian_otto I agree, I like their approach
18:43:15 and hopefully we can support calico as a libnetwork driver when we get past this magnum networking design phase
18:43:34 this approach is similar to that of distributed routing / vrouter / OpenContrail
18:44:35 suro-patz It seems to be a design approach that several vendors are starting to get behind.
18:44:58 IMO because the overlay approach has issues
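
To make the no-overlay point above concrete: in calico's model each host runs a BGP speaker (BIRD, in calico-docker) plus an agent that programs a /32 host route toward every local container; peers learn those routes over BGP, so cross-host traffic is plain routed IP with no encapsulation. A conceptual sketch only — the helper, device name, and address below are made up:

    #!/usr/bin/env python3
    """Conceptual sketch of calico-style routed (no-overlay) networking."""
    import subprocess

    def publish_container_route(container_ip, host_veth):
        # Local half: pin a /32 host route at the container's veth end.
        subprocess.check_call(
            ['ip', 'route', 'replace', container_ip + '/32', 'dev', host_veth])
        # Distribution half is the BGP speaker's job: it exports kernel
        # routes to its peers, so remote hosts learn the /32 natively.

    if __name__ == '__main__':
        publish_container_route('192.168.1.5', 'cali0ef24b1')  # made-up values
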
18:45:13 daneyon_: are you proposing calico-plugin instead of kuryr, for neutron to connect to libnetwork?
18:46:52 suro-patz I am proposing that we eval flannel with libnetwork's native bridge driver. If it works, then we can focus on libnetwork and not implement the network_backend abstraction... at least initially. as adrian_otto stated, we could end up going down that road but it's not critical atm.
18:47:20 if the eval goes as planned, i'll update the spec with my findings and modify the implementation details accordingly.
18:48:10 then create separate bp's for each implementation detail and work with the community to divide up the tasks, track them to completion, and celebrate with a bottle of wine when we complete them all ;-)
18:48:16 IMHO, what the magnum networking team wants to achieve is integration of COEs with OpenStack's networking mechanism, i.e. Neutron - now, what neutron uses to connect to libnetwork is not a real problem for magnum
18:48:39 we just want to have a default/prescribable plugin for neutron to do so
18:48:53 suro-patz Calico is a libnetwork remote driver, so vendors should be able to easily add their driver if we do this right.
18:49:41 My focus will be to make sure flannel works under this new model so we can use it for the k8s and swarm COEs.
18:49:45 daneyon_: Practically, calico is a neutron plugin, so in my view a replacement for Kuryr
18:50:01 similarly, operators can use any SDN provider
18:50:22 it can be plumgrid/contrail depending on what they have
18:50:49 one thing to note is calico is a vendor plugin
18:51:28 suro-patz atm i believe there is a separation between container and cloud infra networking. I would like our focus to be on the container networking. that's why all the debate is related to libnetwork and flannel
18:51:49 when container/cloud infra networking start to integrate, that's when the line will blur
18:52:39 My understanding was magnum was trying to provide the integration platform for bridging cloud networking and container networking
18:52:43 to your point though, with container networking in magnum, i want to focus on implementing flannel for k8s and either flannel or one of libnetwork's native drivers for swarm
18:53:08 the way we have been providing an identity/auto-scaling integration platform
18:53:09 i want to sync the container networking default with the coe.
18:53:51 looking forward to discussing the details at the MidCycle
18:54:12 while making it easy for 3rd parties to add their libnetwork remote driver, making it easy for users to run containers, and allowing advanced users to perform advanced container networking functions
18:54:13 I would like to identify a volunteer to make a presentation on libnet and calico on day 2
18:54:18 trying to strike a balance
18:54:23 possibly separate presentations on each
18:54:37 or identify a video we can watch as a team and discuss.
18:54:48 adrian_otto I could do the ppt, but i will be remote
18:54:53 I'm referring to the Midcycle now
18:55:15 instead of just calico, i would like to touch on each of the libnet remote drivers
18:55:36 I think that would be really helpful
18:55:55 we can find a way to make that work with a remote presenter
18:56:00 adrian_otto understood. i can create and deliver the ppt. Unfortunately I will not be onsite for the midcycle
18:56:15 adrian_otto it should work just fine through webex
18:56:22 i can be on video and share the ppt
18:56:55 ok, https://etherpad.openstack.org/p/magnum-liberty-midcycle-topics
18:57:08 let's juggle that around a bit to find the best time to fit that in on day 2
18:57:20 we are down to our last few minutes
18:57:32 Any parting questions, thoughts, etc?
18:58:16 OK
18:58:25 The midcycle meeting's location is not on the etherpad. Do you mind if new people interested in the project join in?
18:58:35 I really appreciate everyone's participation.
18:58:47 Have a great day!
18:58:59 #endmeeting