17:01:17 <cathy_> #startmeeting service_chaining
17:01:18 <openstack> Meeting started Thu Aug  6 17:01:17 2015 UTC and is due to finish in 60 minutes.  The chair is cathy_. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:22 <openstack> The meeting name has been set to 'service_chaining'
17:01:32 <cathy_> hello everyone
17:01:33 <johnsom> o/
17:01:38 <vikram_> Hi
17:01:38 <pcarver> hi
17:01:48 <abhisriatt> hi, this is Abhinav.
17:01:58 <abhisriatt> from AT&T Research.
17:02:13 <s3wong> abhisriatt: welcome to the meeting
17:02:23 <cathy_> abhisriatt: hi, welcome to the meeting
17:02:23 <pcarver> Thanks for joining us Abhinav
17:02:29 <LouisF> I have posted an update to the API doc: https://review.openstack.org/#/c/204695
17:02:39 <abhisriatt> Thanks Guys.
17:02:52 <pcarver> cathy_: Abhinav is one of the people I mentioned who has worked on the flow steering work internally at AT&T
17:03:04 <cathy_> pcarver: great
17:03:10 <xiaodongwang> hi
17:03:28 <Brian___> hi all
17:03:44 <cathy_> so maybe today we can have abhisriatt give an introduction to the flow steering work at AT&T?
17:03:55 <cathy_> Brian___: xiaodongwang Hi
17:04:27 <LouisF> and also https://review.openstack.org/#/c/207251 Port Chain API and DB
17:04:36 <LouisF> hi xiaodongwang
17:04:58 <MaxKlyus> hi from NetCracker research group
17:05:10 <LouisF> hi MaxKlyus
17:05:12 <cathy_> Louis has posted an update to the API doc. I saw some new comments. Could everyone please review the latest one and give all your comments so that we can get them addressed and merged?
17:05:27 <cathy_> MaxKlyus: Hi, welcome!
17:05:36 <s3wong> MaxKlyus: welcome to the meeting!
17:06:09 <LouisF> cathy_: yes, vikram had some more comments; I will post a new patch later today to address them
17:06:29 <vikram_> LouisF: I have a few more :-)
17:06:31 <cathy_> LouisF: I saw that Jenkins gave a -1 on https://review.openstack.org/#/c/207251.
17:06:49 <cathy_> LouisF: Thanks!
17:06:58 <LouisF> cathy_: yes still some pep8 issues
17:07:25 <LouisF> cathy_: will work through them today
17:07:26 <cathy_> vikram_: could you please post them today so that Louis can address them all in one shot?
17:07:39 <cathy_> LouisF: Thanks!
17:09:10 <cathy_> While Louis fixes the pep8 issues, everyone please start reviewing the "Port Chain API and DB" code.
17:09:32 <vikram_> cathy_: sure
17:10:11 <cathy_> vikram_: thanks! BTW, I remember you previously signed up for the Horizon code support for this project. How is that work going?
17:10:14 <s3wong> cathy_: yes
17:10:34 <cathy_> s3wong: thanks.
17:11:46 <vikram_> cathy_: I think we've got to finalize the APIs, and server code is needed for testing..
17:12:00 <vikram_> cathy_: Framework is done
17:12:02 <cathy_> Also, could everyone get on the OVS Driver and Agent spec and do a detailed review of it? https://review.openstack.org/#/c/208663/
17:13:21 <s3wong> cathy_: sure
17:13:33 <abhisriatt> cathy_:sure
17:13:34 <cathy_> vikram_: Agree with you. But I don't expect much change on the API side, so to speed up the coding work we can do this in parallel. I expect the API will be merged soon. What do you think?
17:14:02 <cathy_> s3wong: abhisriatt thanks.
17:14:45 <vikram_> cathy_: +1, we are doing it.. My only concern is getting the APIs finalized soon.. It's impacting both Horizon and CLI
17:15:01 <cathy_> vikram_: Sure. Agree with you.
17:15:12 <cathy_> vikram_: Thanks for starting the coding work.
17:16:22 <cathy_> vikram_: so you will post all your comments today and Louis will address all comments and post a new version for final review. Let's get the API finalized as soon as possible so that it will not impact other pieces of work
17:16:42 <vikram_> cathy_: Sure. Will do!
17:16:56 <LouisF> cathy_: will do
17:17:26 <cathy_> vikram_: LouisF Thanks, folks!
17:17:43 <cathy_> Any other topic you have in mind?
17:17:46 <abhisriatt> cathy_: if you want, I can give you a brief overview of the work we've done at AT&T on flow steering.
17:18:02 <cathy_> abhisriatt: sure, please go ahead
17:19:26 <abhisriatt> The flow steering project that we started in Research was to give cloud tenants control to deploy middleboxes of their choice.
17:21:17 <LouisF> abhisriatt: middleboxes being service functions?
17:21:25 <abhisriatt> The idea is that tenants will request some services, especially security services (firewall, IDS, IPS, etc.), that will run inside MBs (middleboxes).
17:21:52 <abhisriatt> LouisF: Yes, like  firewall, IDS, IPS, etc.
17:22:15 <LouisF> abhisriatt: ok
17:23:01 <abhisriatt> The cloud provider’s job is to accept the request from tenants and set up the networking pipes so that packets flow through these MBs.
17:23:08 <LouisF> abhisriatt: so it is possible to steer traffic to these services?
17:23:56 <abhisriatt> LouisF: Yes.
17:24:05 <Mohankumar__> sorry for joining late
17:24:15 <cathy_> Mohankumar__: it is oK
17:24:53 <cathy_> abhisriatt: This requirement is what the service chaining project can provide
17:25:13 <pcarver> If I can give a little more detail, the AT&T work originally started out doing QoS (DSCP bit manipulation) in OvS using an external service that integrated with OpenStack APIs.
17:25:45 <pcarver> Abhinav's work was to then extend that framework to do flow steering which is basically the same intent as service chaining
17:25:52 <abhisriatt> Our APIs or CLIs are simple, of the form: Source (VM or external traffic), destination VM (or any local VM), MB1, MB2,… MBn
17:26:33 <cathy_> abhisriatt: Your APIs are very similar to the API of the service chain project.
17:26:40 <abhisriatt> i.e. any traffic from source to destination should flow through this set of MBs.
17:27:11 <abhisriatt> cathy_: Yes. that’s why pcarver asked me to join this work..
17:27:18 <LouisF> abhisriatt: looks like close alignment with the port-chain apis
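To make the comparison concrete, here is a purely illustrative sketch of the two shapes being discussed. Neither is actual Tegu syntax nor the final networking-sfc syntax; all names (src_vm, FW_PAIR, FC1, PC1, the port IDs) are hypothetical, and the port-chain commands follow the draft API under review:

    # AT&T-style flow steering request: source, destination, ordered list of MBs
    steer add src_vm dst_vm mb_fw mb_ids

    # Draft port-chain shape (hypothetical names, draft CLI):
    neutron port-pair-create --ingress fw_in_port --egress fw_out_port FW_PAIR
    neutron port-pair-group-create --port-pair FW_PAIR FW_GROUP
    neutron flow-classifier-create --source-ip-prefix 10.0.0.0/24 FC1
    neutron port-chain-create --port-pair-group FW_GROUP --flow-classifier FC1 PC1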
17:27:28 <pcarver> We've worked through the prototype stage and have some experience with the necessary OvS flowmods required
17:27:44 <cathy_> pcarver: great.
17:28:09 <LouisF> pcarver: excellent please jump in on suggestions on the ovs driver and agent
17:28:13 <pcarver> The implementation wasn't conceived as a part of OpenStack, but rather sitting outside and interacting with Nova, Neutron, and Keystone, as well as interacting directly with OvS
17:28:14 <abhisriatt> LouisF: Yes, and we are designing a new set of APIs that look very similar to what you guys have on the wiki.
17:28:50 <pcarver> The networking-sfc work differs mainly in being a part of OpenStack rather than sitting slightly outside of it
17:29:27 <pcarver> But with both the QoS and flow steering we've had some experience with how to manipulate OvS flow mods without disrupting Neutron's use of OvS
17:30:00 <abhisriatt> As Paul mentioned, we extensively used OVS flow mods to route packets from one MB to another without disrupting any existing flows or traffic.
17:30:32 <LouisF> abhisriatt: that is exactly what we need to do for port-chains
17:30:41 <cathy_> pcarver: does "OVS flowmods" refer to the interface between the OVS agent and OVS on the same compute node, or between the OVS driver on the Neutron server and the OVS agent on the compute node?
17:31:01 <pcarver> My thinking is that we should model the networking-sfc interactions with OvS after the networking-qos interactions, but that we can leverage some of Abhinav's and other AT&T folks experience
17:31:29 <LouisF> abhisriatt: do you support any form of load distribution across a set of MBs in a port-group?
17:31:35 <pcarver> cathy_: yes, flowmod meaning the "magic incantations" that we need to put into OvS to make things happen
17:31:53 <cathy_> abhisriatt: Yes, our design on the data path flow steering is similar to yours: route packets from one MB to another without disrupting any existing flows or traffic.
17:32:18 <abhisriatt> cathy_: flowmods are nothing but OpenFlow rules that are inserted into OVS.
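As a minimal illustration of what such a flow mod looks like (br-int is Neutron's integration bridge; the port numbers are made up), steering traffic from a source port through a middlebox port could be:

    # Redirect traffic arriving from the source VM's port to the MB's port
    ovs-ofctl add-flow br-int "table=0,priority=100,in_port=5,actions=output:7"
    # Return leg: traffic coming back from the MB continues to the destination port
    ovs-ofctl add-flow br-int "table=0,priority=100,in_port=7,actions=output:6"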
17:32:55 <abhisriatt> LouisF: We are currently working on load balancing across MBs, and we call it scalable flow steering.
17:33:33 <pcarver> In our implementation the thing that puts the flowmods into OvS is an independent server process outside of Neutron and doesn't use any Neutron agents
17:34:10 <abhisriatt> Here again, we are using OpenFlow features such as multipath and learn rules to load balance across many MBs.
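For reference, a sketch of the multipath action abhisriatt mentions (a Nicira extension to OpenFlow; the register, table, and port numbers here are illustrative): hash each flow's L4 fields into a register, then dispatch to one of two MB instances based on the result.

    # Hash symmetric L4 fields into reg0, choosing one of 2 links (modulo_n)
    ovs-ofctl add-flow br-int "table=0,priority=100,ip,actions=multipath(symmetric_l4,1024,modulo_n,2,0,NXM_NX_REG0[0..3]),resubmit(,1)"
    # Dispatch to an MB instance based on the hash result
    ovs-ofctl add-flow br-int "table=1,reg0=0,actions=output:7"
    ovs-ofctl add-flow br-int "table=1,reg0=1,actions=output:8"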
17:34:38 <pcarver> But I think it should be adaptable, at least the underlying building blocks
17:34:54 <abhisriatt> pcarver:Yes.
17:34:55 <LouisF> abhisriatt: https://review.openstack.org/#/c/208663 describes OF group-mod for doing load balancing across a group
17:35:30 <LouisF> abhisriatt: suggestions welcome
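For anyone who has not read the spec yet: a select group is the usual OpenFlow 1.3 construct for this, presumably along these lines; a minimal sketch with made-up port numbers, not necessarily the exact rules in the spec:

    # A select group hashes each flow onto one bucket, i.e. one MB instance
    ovs-ofctl -O OpenFlow13 add-group br-int "group_id=1,type=select,bucket=output:7,bucket=output:8"
    # Steer classified traffic into the group
    ovs-ofctl -O OpenFlow13 add-flow br-int "table=0,in_port=5,actions=group:1"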
17:35:40 <cathy_> pcarver: agree with you, I think this project can leverage your OVS flow table work
17:35:53 <abhisriatt> LouisF: sure, I will take a look at it.
17:37:02 <cathy_> Instead of using an external server process to program OVS, in our project the path will be the OVS driver on the Neutron server talking with the OVS agent, and the OVS agent programming OVS via OpenFlow commands
17:37:47 <abhisriatt> cathy_: Ideally, that should be the design.
17:38:18 <cathy_> abhisriatt: Louis has posted the link which described the design. Could you get on that and give your input and comments?
17:38:22 <abhisriatt> However, we started this project with QoS and the OVS agent cannot create queues in OVS. That’s why we have to use an external process to achieve that.
17:39:20 <cathy_> abhisriatt: could you clarify what you mean by "cannot create queues in OVS"? What are the queues used for?
17:40:05 <abhisriatt> cathy_: to rate limit the flows—a functionality needed by the bandwidth reservation project.
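Some context on why an OpenFlow-speaking agent cannot do this: queues are OVSDB objects, not flow-table entries, so creating them takes ovs-vsctl (or direct OVSDB access) rather than flow mods; flows can only select an existing queue. A sketch with illustrative port name and rates:

    # Create an HTB QoS policy with one 10 Mbps queue on a port (OVSDB, not OpenFlow)
    ovs-vsctl -- set port eth1 qos=@newqos \
      -- --id=@newqos create qos type=linux-htb other-config:max-rate=100000000 queues:0=@q0 \
      -- --id=@q0 create queue other-config:max-rate=10000000
    # Flows can then only steer packets into the pre-created queue
    ovs-ofctl add-flow br0 "in_port=1,actions=set_queue:0,normal"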
17:40:49 <pcarver> cathy_: That's some of the history I referenced briefly. The AT&T flow steering work was built on top of a QoS project. That piece of it isn't especially relevant to networking-sfc project, but was just the framework that we started with
17:40:51 <cathy_> abhisriatt: so that is related to your QoS functionality, right?
17:41:13 <abhisriatt> cathy_:yes. not related to networking-sfc.
17:41:28 <cathy_> pcarver: Yes, that is what I think. This part is not relevant to the service chaining feature itself
17:41:40 <pcarver> Since the OpenStack networking-qos work is proceeding independently we don't really need to focus on that part
17:41:59 <cathy_> pcarver: cool. we are in sync
17:42:40 <pcarver> Essentially we had an existing server framework that handled Neutron, Nova, and Keystone interaction and Abhinav worked on adding flow steering via OvS flowmods into that existing QoS framework
17:42:45 <cathy_> abhisriatt: pcarver when do you think you can carve out the OVS part of the code for use in the networking-sfc project?
17:43:06 <abhisriatt> BTW, we made our controller open source.
17:43:21 <abhisriatt> https://github.com/att/tegu
17:43:44 <pcarver> cathy_: I think the first step is to get Abhinav aligned on which existing reviews are touching the OvS agent and which pieces are currently just stubbed out or don't exist at all
17:43:45 <cathy_> abhisriatt: OK, thanks.
17:43:53 <s3wong> abhisriatt: Go code!
17:44:07 <cathy_> s3wong: :-)
17:44:10 <abhisriatt> :)
17:44:30 <abhisriatt> Actually, it is written in “GO”, too.
17:44:37 <cathy_> pcarver: Ok, let's do that first. Thanks for bringing in abhisriatt !
17:45:03 <pcarver> He needs to get oriented on the structure of the networking-sfc code base and then he can start bringing in his experience from Tegu (that's the name of our server)
17:45:20 <s3wong> abhisriatt: that's what I meant: code in Go :-)
17:45:25 <abhisriatt> okay :)
17:46:08 <cathy_> pcarver: sure.
17:46:41 <pcarver> I haven't been through all the reviews yet. Has anyone started touching OvS or is it all still stubs at this time?
17:47:39 <s3wong> pcarver: I started to look at it --- but in gerrit review (not merged yet), the dummy driver (stub) is all there is
17:47:46 <cathy_> pcarver: still stubs. But we are working on the code now
17:48:10 <LouisF> pcarver: driver manager and dummy driver only
17:48:17 <pcarver> cathy_: Ok, we need to get Abhinav sync'd up so that he doesn't do duplicate work
17:48:25 <cathy_> Let's get the design reviewed and agreed first. Then we can start posting codes and review the codes.
17:48:53 <vikram_> +1
17:48:54 <cathy_> pcarver: yes, I am thinking we can divide the coding work among us.
17:49:32 <cathy_> s3wong has also signed up for the OVS part of the code development. Actually, the design posted is co-authored with s3wong
17:49:53 <s3wong> yes
17:50:14 <cathy_> s3wong: Thanks for your insight and contribution!
17:51:53 <s3wong> pcarver, abhisriatt: please review the OVS driver spec and we will iterate the design from there
17:52:10 <pcarver> s3wong: will do
17:52:13 <abhisriatt> s3wong: will do
17:52:27 <cathy_> As for the coding part, I am thinking someone should put together a basic OVS framework code, and then we can have a meeting to review the framework and divide the detailed code development work among us?
17:53:04 <s3wong> cathy_: framework? as in the OVS driver on Neutron server?
17:53:13 <cathy_> OK with this, or any other suggestions on how we avoid duplicate work, as vikram_ pointed out?
17:54:14 <cathy_> s3wong: by framework I mean the OVS driver on the Neutron server, the OVS agent on the compute node, and OVS itself on the compute node
17:54:30 <s3wong> cathy_: that's everything :-)
17:55:04 <cathy_> I think we might need a consistent framework without much coding detail, so that when each of us starts coding we do not have a big mismatch
17:55:27 <MaxKlyus> sorry, I have an open question for everybody: what do you think about OVS MPLS-based traffic chaining?
17:55:33 <cathy_> s3wong: no code detail, just framework:-)
17:56:10 <LouisF> MaxKlyus: various drivers can be used
17:56:27 <pcarver> MaxKlyus: raw MPLS or MPLS over GRE or UDP?
17:56:51 <MaxKlyus> raw MPLS, multiple label stack
17:57:00 <pcarver> We're definitely thinking about MPLS, but that's pretty much orthogonal to service chaining
17:57:11 <cathy_> MaxKlyus: we are running out of time. Maybe we can discuss this in the next meeting. I think our framework should be able to support multiple transports and encaps in the data path
17:57:31 <s3wong> or move discussion to openstack-neutron
17:58:33 <MaxKlyus> it will be great
17:58:34 <cathy_> MaxKlyus: good question. We can discuss it in the next meeting or on openstack-neutron. Actually, this question was touched on before in the original API review.
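On the raw-MPLS idea specifically, it would map to the OpenFlow push_mpls/pop_mpls actions that OVS was adding support for around this time; a minimal sketch (the label value, ports, and bridge are illustrative):

    # Push an MPLS label identifying the chain, then forward toward the next MB
    ovs-ofctl -O OpenFlow13 add-flow br-int "table=0,in_port=5,ip,actions=push_mpls:0x8847,set_field:100->mpls_label,output:7"
    # At the last hop, pop the label before delivery to the destination
    ovs-ofctl -O OpenFlow13 add-flow br-int "table=0,in_port=7,mpls,actions=pop_mpls:0x0800,output:6"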
17:59:54 <abhisriatt> One last question: how are you guys thinking of steering flows from one MB to another?
18:00:13 <cathy_> OK, folks. Thanks for joining the meeting and all the discussions. I think we are making good progress! Let's continue the discussion in next meeting.
18:00:19 <cathy_> bye now
18:00:22 <LouisF> bye
18:00:26 <s3wong> abhisriatt: a chain of two Neutron ports?
18:00:34 <abhisriatt> yes
18:00:34 <s3wong> OK. Thanks guys!
18:00:40 <abhisriatt> bye
18:00:40 <cathy_> abhisriatt: we are running out of time, let's address that in the next meeting.
18:00:46 <abhisriatt> cathy_:okay
18:00:54 <Mohankumar__> bye
18:00:58 <MaxKlyus> ok
18:01:01 <vikram_> bye
18:01:03 <MaxKlyus> thanks a lot
18:01:09 <MaxKlyus> bye
18:01:14 <cathy_> #stopmeeting
18:01:21 <s3wong> endmeeting?
18:01:25 <cathy_> #endmeeting