15:32:13 #startmeeting Networking Advanced Services
15:32:14 Meeting started Tue Oct 22 15:32:13 2013 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:32:15 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:32:17 The meeting name has been set to 'networking_advanced_services'
15:32:27 Greetings!
15:32:32 Hi
15:32:38 thanks all for joining, it's a bit inconvenient for those in PDT, we can try and change that in the future
15:32:53 one request - let's have one conversation thread so as to avoid confusion on what is being discussed
15:33:16 i have four broad topics on the agenda
15:33:33 but please feel free to suggest more as we go along
15:33:55 hi
15:34:01 let's first follow up on items we discussed last week
15:34:05 hi amotoki SridarK
15:34:13 #topic service insertion and chaining
15:34:20 #link https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering
15:34:38 can you list the 4 topics before we dive in
15:34:52 yeah sure -
15:35:00 the first one i just mentioned above
15:35:06 the other three are:
15:35:17 Service VMs - mechanism
15:35:28 Service VMs - policy (this is your blueprint)
15:35:49 Extensible APIs for advanced services (this is enikanorov's topic)
15:36:00 anything more to add geoffarnold?
15:36:28 just that the policy topic needs to embrace VMs and physical resources
15:36:38 geoffarnold: sure
15:36:44 let's discuss when we come to that
15:36:56 geoffarnold: point well taken, it's not just service VMs
15:37:13 ok, going back to "service insertion and chaining"
15:37:29 SumitNaiksatam: saw also 'Service agents' topic in your email
15:37:36 there were 5
15:37:41 quite a few folks posted comments on the google doc, and I have responded
15:37:50 obondarev: ah, sorry I missed that
15:37:52 thanks
15:38:03 I should not have relied on my memory :-)
15:38:09 yes, we will discuss that
15:38:25 ok, on "service insertion and chaining"
15:38:35 this is still WIP, does anyone have any thoughts/reservations?
15:38:48 there were some face-to-face discussions as well
15:38:52 and good feedback
15:39:40 did folks get a chance to read the google doc?
15:39:59 i know last time some folks had not yet read it
15:40:12 enikanorov: sent a couple of comments over email
15:40:17 Sumit: Is the list of use cases indicating what will be targeted first?
15:40:29 bmelande: good question
15:40:46 i don't think we can bite off that much right away
15:41:12 seems like Nachi is not here
15:41:39 Agree, seems ambitious to attempt to cover all in I
15:41:48 we were discussing earlier - for implementation we might just try to do a basic chain first - firewall and VPN
15:42:25 bmelande: hopefully the API and model can handle all other chains as well
15:42:39 bmelande: but we won't target that for the reference implementation
15:42:52 bmelande: certainly vendors can support a lot more
15:43:41 any thoughts on the single service insertion?
15:44:35 SumitNaiksatam: I am reading it now. I will leave a comment or mail you. I haven't fully understood how the service_insertion_types BITW and tap are represented in the neutron model.
15:44:40 so are we all in agreement, so far, on the proposal? ;-)
15:44:49 the question on workflow: is it going to change if we introduce insertion mode?
15:44:52 amotoki: thanks
15:45:28 amotoki: there is currently no example of BITW and Tap
15:45:31 workflow of service creation
15:45:34 i meant
15:45:46 amotoki: actually currently we have only L3-based
15:46:11 amotoki: so in that context, I think the key is to be able to capture the insertion_context correctly for these modes
15:46:32 what we have in the design spec is a suggestion, but i think we can evolve that
15:46:40 looking forward to your comments
15:47:09 I would like to see more end-to-end workflow analysis
15:47:12 enikanorov_: the workflow might not change if we use the default insertion mode, right?
15:47:37 yes, I guess the ability to use the default should remain
15:47:39 enikanorov_: or have you identified some deviation?
15:47:45 all the way from EC2 API calls to low-level actions
15:48:01 geoffarnold: will try :-)
15:48:04 that's the only question i have at the moment on insertion
15:48:35 enikanorov_: ok, let's work through the workflow for each of LBaaS, FWaaS and VPNaaS
15:48:41 not here, offline that is :-)
15:48:43 sure
15:49:07 #action SumitNaiksatam to work with enikanorov_ on workflow
15:49:20 #action geoffarnold to review :-)
15:49:26 yup ;-)
15:49:52 amotoki: more thoughts, or should we move to the next topic?
15:50:03 SumitNaiksatam: go ahead now.
15:50:05 next. tempus fugit
15:50:19 #topic common L3 agent framework
15:50:39 I believe Nachi is not here
15:50:45 obondarev: you had some thoughts?
15:51:55 i'm not fully aware of activity on this front. Have we moved to an l3 agent that loads service drivers?
15:51:57 last time we discussed: https://docs.google.com/presentation/d/1e85n2IE38XoYwlsqNvqhKFLox6O01SbguZXq7SnSSGo/edit#slide=id.p
15:52:13 enikanorov_: I don't think so
15:52:21 enikanorov_: at least not for fwaas
15:52:26 Are there additional documents/bps to the one that Nachi has made on this topic?
15:52:34 enikanorov_: and vpnaas still inherits
15:52:34 i thought that was the consensus back when the code was pushed
15:52:40 SumitNaiksatam: I see
15:53:04 bmelande: not that i am aware of, sorry i should have researched
15:53:13 my question is "do we need to implement all services on a single l3-agent?" An alternative is to chain l3-agent namespaces, though i am not sure it can cover all possible cases.
15:53:15 bmelande: i was thinking nachi was going to be here
15:53:33 amotoki: good point on chaining namespaces
15:53:46 amotoki: that makes it easier to realize service chains as well
15:54:13 amotoki: is that something you want to propose?
15:54:23 nothing from me
15:54:27 or if you have already, a pointer will help
15:54:34 amotoki: ok
15:54:45 i have no material right now. just an idea.
15:54:48 Is there any expectation on the number of chains?
15:54:51 folks, could you explain what namespace chaining is?
15:55:01 amotoki: go ahead
15:55:39 what i am thinking is to create several namespaces and create veth pairs between two namespaces.
15:55:57 ok, i see
15:56:14 to add to that, each service-specific construct will be in a different namespace
15:56:27 with namespaces we may be limited to one-host service chaining?
15:56:27 remember we have l3 agent scheduling, so in case of service chaining the whole chain should be scheduled to 1 agent
15:56:32 (just thinking aloud)
15:56:36 so for example, fwaas rules in a different namespace
15:56:45 amotoki: right?
15:57:26 what do you mean by "fwaas rules in a different namespace"?
15:57:58 what i think is VPNaaS in one ns and FWaaS in another.
15:58:04 amotoki: the fwaas functionality is realized as iptables configuration in the same namespace as the router
15:58:10 amotoki: exactly
15:58:18 amotoki: that's what i meant
15:58:23 amotoki: with namespaces we may be limited to one-host service chaining?
15:58:25 Does this assume the L3 agent is managed by a driver under an L3 plugin (to accommodate alternative L3 providers, HW and SW)?
15:58:42 ok folks, hang on
15:58:43 Same pattern as LBaaS
15:58:50 How about evolving the L3 agent so it can configure remote "entities"?
15:58:50 we have some questions in the buffer
15:58:56 one sec
15:59:04 let's answer yamahata's question first
15:59:05 we need to investigate it more...
15:59:34 yamahata: can you clarify what you mean by the number of chains?
16:00:09 yamahata: the API and model should be agnostic of the number of chains
16:00:16 I am concerned about too many netns and veths. performance degradation.
16:00:27 yamahata: ah, ok
16:00:29 yamahata: i agree.
16:00:35 Probably it can be addressed later for performance and scalability.
16:00:36 yamahata: that is implementation
16:00:52 yamahata: right, but a good point to keep in mind
16:00:52 we need to check the performance when we talk about the l3-agent implementation.
16:01:01 amotoki: agreed
16:01:04 SumitNaiksatam, agreed.
16:01:10 Won't that be part of scheduling
16:01:25 i.e. taking into account performance
16:01:42 the next question was from shivharis
16:01:56 shivharis: we are talking about this in the context of the reference implementation
16:02:26 shivharis: that already uses the namespace implementation and is limited to the host on which the L3 agent runs
16:02:41 shivharis: chaining namespaces would not change any of that
16:02:54 we should be able to chain not only with namespaces, but across hosts as well
16:02:57 the next question was from geoffarnold
16:03:09 shivharis: in the current model it may be limited to one host, but we can enhance the neutron model and implementation to connect two interfaces on different hosts with a p-to-p link.
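The namespace-chaining idea discussed above can be sketched in code — given an ordered service chain, derive the per-service namespaces and the veth pairs that would link consecutive services. This is a hypothetical illustration only: the `qsvc-` prefix and the helper names are assumptions for the sketch, not actual Neutron identifiers.

```python
# Hypothetical sketch of the chaining idea: one network namespace per
# service, with consecutive namespaces linked by a veth pair. The "qsvc-"
# prefix and these function names are illustrative assumptions, not
# Neutron code.

def chain_namespaces(chain):
    """Map an ordered service chain to per-service namespace names."""
    return ["qsvc-%s" % service for service in chain]

def veth_links(chain):
    """Return the (namespace_a, namespace_b) veth pairs that would be
    created between consecutive services in the chain."""
    ns = chain_namespaces(chain)
    return list(zip(ns, ns[1:]))

# A basic firewall + VPN chain, as suggested earlier for a first
# reference implementation, needs a single veth pair:
print(veth_links(["fw", "vpn"]))  # [('qsvc-fw', 'qsvc-vpn')]
```

On one host, each pair would be realized with `ip link add ... type veth` and the two ends moved into the respective namespaces; the performance concern raised above (many netns and veths) grows linearly with chain length under this model.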
16:03:27 ok, for now
16:03:51 make it general purpose later
16:03:54 "one host" is useless for the real world
16:03:58 geoffarnold: we are talking in the context of the reference implementation, which only deals with SW not HW
16:04:25 so am i
16:04:35 geoffarnold: :-)
16:04:41 geoffarnold: that's the reference implementation
16:04:54 geoffarnold: we can have a separate discussion on how to enhance it
16:05:11 the framework should accommodate both a simple ref impl and the real world
16:05:14 for the reference implementation it should be ok
16:05:22 bmelande: i think your question is along similar lines
16:05:29 geoffarnold: good point
16:05:41 SumitNaiksatam: Yes it was :-)
16:06:37 #action suggestion to enhance L3 agent framework, nachi to contact geoffarnold shivharis bmelande
16:06:43 happy? :-)
16:06:51 geoffarnold: Yes, thanks, I want to be part of that discussion too
16:06:53 #topic Service VMs - Mechanism
16:07:06 #link https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
16:07:12 greg_r: there?
16:07:18 yes, thanks
16:07:30 good input from the ftf last week
16:07:37 want to give a quick summary of the discussion over the last week?
16:08:07 gathering up feedback from comments and from the ftf
16:08:16 the major item is the data model
16:08:26 and the use cases
16:08:57 one point i would like to clarify is implementation
16:09:22 of the 4 use cases identified, want to understand the most common case
16:09:44 the 4 use cases are: private, shared, multi-service, and scale-out
16:10:06 my guess is that the first one, private, is the simplest and most common case
16:10:10 greg_r: i vote for starting with private
16:10:11 greg_r: I have been given time to spend on implementation and am offering to help with that.
16:10:26 and so would be most likely to be the first to implement
16:10:41 bmelande: that's great
16:11:04 it sounds like we are in agreement?
16:11:11 clarification....?
16:11:32 top priority to implement is the private use case.
16:11:33 greg_r: But nothing prevents going further than that, right?
16:11:37 Private - app configures - vs. infrastructure - Neutron configures
16:11:51 right, only time and resources
16:12:34 Let's put all of those use cases in an LBaaS context
16:12:37 geoffarnold: yes
16:12:53 rather than abstract
16:13:36 geoffarnold: lbaas implemented as a VM, right?
16:13:42 +1. It is simple. In addition an LBaaS instance can work with one port :-)
16:13:42 so the LBaaS driver cares about the distinction
16:13:47 yes
16:13:49 we could do any of LBaaS, FWaaS, VPNaaS, whichever is easier
16:14:11 ok, you mean to add in the spec?
16:14:12 enikanorov_ can correct me, but LBaaS is not a VM
16:14:18 yes
16:14:22 right. the reference impl is a process on the host
16:14:28 not a vm
16:14:30 "private" is confusing (conflated with "guest mode" where it's a really private part of an app topology)
16:14:37 enikanorov_ obondarev thanks
16:14:52 geoffarnold: agree, private is confusing
16:15:00 But the LBaaS ref impl still has a driver, right?
16:15:06 geoffarnold: sure
16:15:12 geoffarnold: driver yes, but not a VM
16:15:31 geoffarnold: all services have drivers today
16:15:44 :-)
16:16:11 the point is, above the driver nobody knows what the use case - shared, scale out, etc. - is
16:16:16 Anything based on namespaces ought to be pretty easy to put in a VM, or?
16:16:16 at different levels of maturity (before enikanorov_ corrects me :-))
16:16:31 bmelande: i agree
16:16:35 I'm just looking at abstractions
16:16:47 geoffarnold: do you have a suggestion on a better term to use instead of private?
16:16:52 or we can take this offline
16:17:02 offline for taxonomy
16:17:12 Who manages a service VM in the "private" context? neutron or a tenant?
16:17:21 #action greg_r geoffarnold to brainstorm on taxonomy
16:17:30 ok
16:17:34 amotoki: i guess neutron. through the service driver
16:17:43 amotoki: better be neutron
16:17:46 amotoki: Me too, neutron should.
16:17:47 yes, neutron
16:17:48 enikanorov_: i agree
16:17:48 otherwise it's going to be quite complex for the user
16:18:01 i agree. it is the same as what i think.
16:18:11 Uninterested in tenant-managed VMs in this context
16:18:19 ok, we are running low on time
16:18:29 greg_r: anything more to add, or can we go to the next topic?
16:18:39 go on
16:18:43 But the decision as to shared vs. multi-service is a driver issue
16:18:43 folks, we will have this as an ongoing meeting
16:18:51 so we will be back next week as well
16:19:08 and then in the bar in Hong Kong
16:19:22 :)
16:19:33 decisions will be faster for sure
16:19:33 geoffarnold: Ha! (a la chris matthews!)
16:19:42 #topic Service VMs - Policy
16:19:56 #link https://blueprints.launchpad.net/neutron/+spec/dynamic-network-resource-mgmt
16:20:01 geoffarnold: over to you
16:20:24 this is all about allocating scarce/different resources
16:20:26 geoffarnold: do you want to bring the rest of us up to speed on where we are going with this
16:20:27 hw and sw
16:20:41 sure
16:20:59 the DNRM BP is really all about the end-to-end use cases
16:21:31 that makes it too big for OpenStack, but I really don't want to lose the context
16:21:59 geoffarnold: you mentioned you were going to break it down?
16:22:33 canonical use case: how do I (a cloud operator) set things up so production LB traffic goes to the physical F5 fleet and dev/test goes to virtual NetScalers?
16:22:49 The obvious way to break it up is...
16:23:01 Producer-pool-consumer
16:23:18 Producer manages (discovers, provisions) resources
16:23:47 Consumer selects a resource from what's available based on a policy
16:23:50 that looks like a higher-level problem than what neutron is solving, no?
16:24:18 Not really.
Look at the inventory blueprint for LBaaS
16:24:32 Multivendor support is essential
16:24:45 geoffarnold: i believe that's enikanorov_'s blueprint? :-)
16:24:47 Physical resources are (always) scarce
16:24:51 Yup
16:25:18 So we need a way of selecting a resource and from that locating the driver that handles it
16:25:49 geoffarnold: can we make the strategy pluggable?
16:25:53 Most work so far assumes that the Neutron API call provides the selection criterion
16:26:00 strategy for selection
16:26:06 But that's too limiting - it needs to be pluggable
16:26:11 Bingo
16:26:16 ok, good
16:26:41 so i think one blueprint can be around the framework to support the strategy
16:26:51 with a dumb default policy
16:27:01 It affects all Neutron services where multiple resources from multiple vendors are in play
16:27:12 and then separate blueprints for different strategies
16:27:13 "dumb policy" is right for the ref arch
16:27:22 geoffarnold: exactly
16:27:29 that's what our PoC will show in Hong Kong
16:27:37 geoffarnold: nice
16:27:56 But it does cut across a lot of stuff :-(
16:28:10 geoffarnold: it's good to see end-to-end action
16:28:21 we are running short on time
16:28:27 Let me do the carve-up before next week
16:28:33 geoffarnold: great
16:28:41 let's move on to the next topic?
16:28:52 Anyone who wants to discuss offline, contact me
16:28:58 #topic Extensible API: deal with growing services
16:29:03 ok
16:29:07 this is Eugene's proposed topic for the Summit
16:29:13 http://summit.openstack.org/cfp/details/22
16:29:17 enikanorov_: over to you
16:29:22 so this one is about how to expose vendor-specific features through the API
16:29:33 and this is not limited to adv services
16:29:43 i saw a similar session proposal for ml2 drivers from amotoki
16:29:53 i guess the same could be applied there
16:29:57 enikanorov_: you mentioned moving advanced services' API to core?
16:30:08 that's one of the steps in that direction
16:30:16 but not essential, i guess
16:30:25 ok, can you elaborate a little?
16:30:36 in fact we already have extensions for extensions, i think
16:30:37 btw, folks, we are at the hour mark
16:30:44 but i don't think there is another meeting
16:30:50 ok, i'll try to make it short
16:30:51 so we can continue until we are kicked out
16:30:59 enikanorov_: sorry, continue
16:31:14 enikanorov_: take your time
16:31:26 so what i'd like to see is the ability for vendors to make their specific extensions that are not in the 'common' location
16:31:49 enikanorov_: some plugins are already doing this, right?
16:31:56 that has some benefits, including a simpler review/discussion process
16:32:07 nicira has some in their private directory structure
16:32:15 so does big switch, and i believe Cisco as well
16:32:21 right
16:32:59 such a framework would require a dispatching mechanism that will forward the REST call to the proper driver
16:33:16 and at this point i'm interested in how this could be applied to, say, fwaas
16:33:31 for lbaas it looks relatively simple since we have the 'plugin driver' notion
16:33:41 and fwaas seems to have device drivers only
16:33:53 and communication goes through the rpc/agent
16:34:03 enikanorov_: fwaas will comply with the service_type framework
16:34:28 enikanorov_: ok
16:34:34 currently it seems to me that it would require creating the same 'plugin_drivers' for fwaas
16:34:42 although they can be trivial
16:34:55 enikanorov_: so the vendor extensions will be in the same neutron tree, right?
16:35:17 extensions will be in the neutron tree, but they will not be loaded like common extensions
16:35:26 enikanorov_: ah ok
16:35:35 enikanorov_: i missed that part
16:35:51 instead, I'm planning that the REST API layer will ask the plugin for resource/attr maps and embed them into the resulting API
16:36:16 instead of checking for 'supported ext alias' and using a preloaded common extension
16:36:53 enikanorov_: when we make an API call for /extensions will it return all the loaded extensions, including the ones selectively loaded for the vendors?
16:36:55 this way we could control the API set by simply defining providers for the service, and also avoid the need to place vendors' extensions into the common space
16:37:24 SumitNaiksatam: that's a good question. I think it should return everything that is supported
16:37:40 everything that is loaded, i mean
16:37:43 enikanorov_: you mean supported, or loaded?
16:37:48 enikanorov_: ah ok
16:37:51 enikanorov_: i agree
16:38:06 enikanorov_: this seems like a good approach to me, but I haven't dived deeper
16:38:19 other folks have thoughts?
16:38:33 i think this is pretty relevant with the proliferation of services and related extensions
16:38:34 so an essential part of such a framework would be a dispatching mechanism that will forward REST calls to the appropriate driver
16:38:40 and extensions of extensions :-)
16:38:50 enikanorov_: ok
16:38:53 sorry, i have not read the BP, but where does the dispatching mechanism exist - in the vendor plugin?
16:39:19 SridarK: yes, i think it should go to the plugin (generic plugin)
16:39:27 enikanorov_: is there a blueprint?
16:40:04 enikanorov_: thx, ok makes sense
16:40:11 i guess there is no bp for this particular task (dispatching). Currently I'm planning to use api-core-for-services as the scope
16:40:23 enikanorov_: ok
16:40:24 probably it makes sense to break it down into parts
16:40:52 anyone else have thoughts on this?
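The dispatching mechanism described above can be sketched minimally: the API layer collects the resource/attr maps from per-vendor plugin drivers and routes REST calls by resource name, so `/extensions` reports only what is actually loaded. All class and method names here are illustrative assumptions, not the actual Neutron service_type framework.

```python
# Minimal, hypothetical sketch of the dispatching idea discussed above.
# Class/method names are assumptions for illustration, not Neutron code.

class VendorDriver:
    """A vendor 'plugin driver' exposing its own extension resources."""
    def __init__(self, name, resource_attr_map):
        self.name = name
        # resource name -> attribute map, as the plugin would report it
        self.resource_attr_map = resource_attr_map

    def handle(self, resource, action, body=None):
        return "%s handled %s %s" % (self.name, action, resource)

class Dispatcher:
    """Routes REST calls to the driver that owns the resource."""
    def __init__(self):
        self._routes = {}  # resource name -> driver

    def register(self, driver):
        for resource in driver.resource_attr_map:
            self._routes[resource] = driver

    def extensions(self):
        """What GET /extensions would report: only loaded resources."""
        return sorted(self._routes)

    def dispatch(self, resource, action, body=None):
        return self._routes[resource].handle(resource, action, body)

# Registering one hypothetical vendor driver selectively loads its API:
acme = VendorDriver("acme", {"acme_firewall_profiles": {"name": {}}})
d = Dispatcher()
d.register(acme)
print(d.extensions())  # ['acme_firewall_profiles']
print(d.dispatch("acme_firewall_profiles", "create"))
```

The design point matches the discussion: defining providers for a service controls the exposed API set, and vendor extensions never need to live in the common extension space.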
16:41:13 i guess we have slowly started to lose people
16:41:25 i hope i didn't overload folks :)
16:41:29 enikanorov_: thanks. that totally makes sense to me. i am thinking similarly but am just starting to study it. I am half asleep....
16:41:52 amotoki: thanks
16:41:53 amotoki: apologies for the timing, thanks a lot for attending
16:42:00 #topic open discussion
16:42:21 anything else to discuss, or to put on the agenda for next week?
16:42:24 one remaining thing is
16:42:30 enikanorov_: go ahead
16:42:38 vendor-specific configuration
16:43:01 currently radware is working on their lbaas driver, trying to put their conf in neutron.conf
16:43:09 and they have a reasonable question
16:43:21 that it might not be the best place for their specific conf
16:43:40 so it may make sense to introduce yet another conf file for services
16:43:45 what do you think?
16:44:01 enikanorov_: i had earlier suggested this for lbaas
16:44:14 enikanorov_: if you recall, in the reviews
16:44:14 We do this for fwaas
16:44:33 that time you went with the approach of creating a new section in neutron.conf :-)
16:44:46 since there seemed to be a proliferation of conf files
16:44:55 i think separate might be better
16:45:37 any other items to discuss?
16:45:40 yeah, agree
16:45:43 how about creating a conf file per service and having a "config files" parameter in the [default] section in neutron.conf?
16:45:47 (now I agree :)
16:45:59 amotoki: that might work as well
16:46:21 amotoki: what about just specifying the needed files on the cmd line?
16:46:38 IMO many --config-file options are not easy to manage
16:47:05 so is the difficulty in the cmd options or in the number of conf files?
16:47:10 stuffing everything in neutron.conf should not be encouraged - since we can specify multiple conf files at startup
16:47:12 enikanorov_: will you be capturing this requirement somewhere?
16:47:31 i mean splitting up the conf file?
16:47:36 SumitNaiksatam: under discussion right now.
I hope it will be covered in HK
16:48:02 #action enikanorov_ to capture splitting up of conf files for advanced services
16:48:20 ok folks, I think it's getting too late for amotoki, perhaps for enikanorov_ as well :-)
16:48:43 unless there is something else to discuss, we can end this meeting
16:48:55 seems that we had a productive discussion :)
16:48:55 #info etherpad for pre-summit discussions on this meeting/topic is here: https://etherpad.openstack.org/p/NeutronAdvancedServices
16:49:05 enikanorov_: thanks for participating
16:49:10 Sumit: thanks for organizing
16:49:15 and to the others as well
16:49:15 +1 shivharis
16:49:17 !
16:49:19 shivharis: thanks
16:49:26 thanks everyone! good night.
16:49:31 amotoki: thanks
16:49:36 alright, bye everyone
16:49:43 #endmeeting
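The per-service configuration split discussed near the end of the meeting could look like the fragment below — a hypothetical vendor conf file kept out of neutron.conf and passed as an additional config file at startup. The file path, section name, and option names are assumptions for illustration only.

```ini
# Hypothetical per-service conf file, e.g.
# /etc/neutron/services/fwaas_vendor.conf, kept separate from neutron.conf.
# Section and option names are illustrative, not a real driver's options.
[vendor_fwaas]
driver = neutron.services.firewall.drivers.vendor.VendorFwDriver
device_address = 192.0.2.10
```

The multiple-conf-file approach mentioned in the discussion would then load it alongside the main file, e.g. `neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/services/fwaas_vendor.conf`.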