17:02:34 #startmeeting tacker
17:02:35 Meeting started Tue Nov 10 17:02:34 2015 UTC and is due to finish in 60 minutes. The chair is sridhar_ram. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:02:36 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:02:39 The meeting name has been set to 'tacker'
17:02:45 #topic Roll Call
17:02:54 o/
17:02:56 who is here for the Tacker weekly meeting ?
17:03:02 o/
17:03:03 vishwanathj: hi there!
17:03:13 sridhar_ram, tbh, Hi
17:03:22 hello
17:04:08 Hi
17:05:15 bobh: you there ?
17:05:58 let's start ..
17:06:06 s3wong: hi
17:06:07 hello
17:06:25 o/
17:06:30 I think we have a quorum
17:06:36 #topic Agenda
17:06:40 #link https://wiki.openstack.org/wiki/Meetings/Tacker#Meeting_Nov_10.2C_2015
17:07:07 First, welcome to the first meeting of the Mitaka cycle
17:07:59 #chair s3wong bobh
17:08:00 Current chairs: bobh s3wong sridhar_ram
17:08:08 #topic Announcements
17:08:19 Mitaka schedule - #link https://wiki.openstack.org/wiki/Mitaka_Release_Schedule
17:09:07 We will use the "M" milestones as guides to orient our activities in Mitaka
17:09:22 M1 is Dec 1-3
17:10:38 We have a proposal out to expand the core team - #link http://lists.openstack.org/pipermail/openstack-dev/2015-November/078971.html
17:11:04 sripriya_: hope you've been nice to everybody here ;-)
17:11:31 +1, congrats sripriya_
17:11:37 well deserved
17:11:45 +1
17:11:51 sridhar_ram: hope so :-)
17:11:51 sripriya_, congrats, way to go
17:11:55 sridhar_ram: didn't even see the email, sorry
17:12:00 +1
17:12:09 thanks everyone
17:12:18 +1
17:13:10 reply to the ML also
17:13:13 if you can send an email to the ML that would be great
17:13:18 s3wong: thanks
17:13:28 let's move on...
17:13:45 #topic Tacker Mitaka Priorities
17:14:19 Etherpad link #link https://etherpad.openstack.org/p/tacker-mitaka-priorities
17:14:52 Let's discuss the entries here.
17:15:15 any general questions / comments ?
17:15:57 FWIW, it is an ambitious plan and we really could use more devs!
17:17:33 a specific entry that is missing is auto-scaling .. does anyone think that is super important ?
17:17:59 sridhar_ram: when are we looking for blueprint submissions for the features listed?
17:18:29 sripriya_: yes, we should get going on blueprints for most of the entries there
17:18:32 sridhar_ram: There hasn't been a lot of interest in auto-scaling from telcos - manual scaling is more interesting to them
17:18:49 sridhar_ram: so maybe some support of stack-update would be in order
17:18:53 scaling is a VNFM function according to ETSI MANO, is it not?
17:19:05 bobh: agree on the auto-scaling observation
17:20:01 vishwanathj: I was wondering whether it is high-value enough to spend our precious bandwidth on
17:20:44 bobh: manual stack-update is something we could absorb ...
17:20:45 if there are no use cases or demand, we should list it as a known limitation or gap to be taken up when there is demand
17:21:34 sridhar_ram: we can also look to see if/how the existing tosca-parser supports auto-scaling - we might get some of that support for free with the TOSCA parser changes
17:22:04 vishwanathj: agree, one thing we are realizing - and others are pointing it out - is that this whole orchestration space is huge. we intentionally let the boundaries emerge naturally based on customer input
17:23:01 bobh: sounds good.. prashanthD is still interested in this area. Perhaps bobh you can guide him to contribute to this specific narrow use-case
17:23:52 blueprint-wise we already have one for SFC..
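A minimal sketch of the manual stack-update idea floated above (17:18:49, 17:20:44), using python-heatclient. It assumes the VNF's Heat stack exposes its VDU count as a template parameter; the parameter name 'vdu_count' and the function shape are hypothetical illustrations, not Tacker's actual VNFD schema or API.

```python
# Sketch only: 'vdu_count' is a hypothetical template parameter, not part
# of Tacker's real VNFD schema.
from heatclient import client as heat_client

def scale_vnf(session, stack_id, new_vdu_count):
    """Manually scale a VNF by patching one parameter on its Heat stack."""
    heat = heat_client.Client('1', session=session)
    # existing=True asks Heat for a PATCH update: the current template is
    # reused and only the supplied parameters are overridden.
    heat.stacks.update(stack_id,
                       existing=True,
                       parameters={'vdu_count': str(new_vdu_count)})
```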
17:23:58 sridhar_ram: sounds good. I might suggest investigating VNFD update and stack-update as solutions for manual scaling until we get a use case for auto-scaling
17:24:21 bobh: sounds like a plan
17:24:39 I think the TOSCA parser changes will require three BPs, one each in tosca-parser, heat-translator and tacker
17:24:43 we need new blueprints for the tosca-parser work, multi-vim,
17:25:18 bobh: sure, makes sense
17:25:22 sridhar_ram: agree
17:26:09 Enhanced VNF placement will need a blueprint too.. again, this is a vast subject. vishwanathj you need to clearly scope this out
17:26:30 sure
17:26:42 For Auto Flavor / Network create I'd suggest using the simpler RFE process
17:26:53 tbh: what do you think ?
17:27:28 sridhar_ram, makes sense
17:27:54 tbh: cool..
17:28:11 tbh: You might want to look at the existing tosca-parser/heat-translator functionality to see if it supports creating flavors/networks
17:28:48 bobh, sure, I will take a look at it
17:29:15 For some of the efforts we also need more folks to join the different tracks ..
17:29:42 e.g. enhanced VNF placement and multi-vim need more devs
17:30:07 any new contributors here interested in joining ?
17:30:16 tbh has volunteered and is interested in the enhanced VNF placement effort
17:30:28 existing members - please spread the word
17:30:32 sridhar_ram, yeah, I am interested in VNF placement
17:31:18 vishwanathj: tbh: excellent, in fact those are related areas.. some of the extra_specs stuff goes into flavors
17:31:39 Could someone clue me in as to what VNF placement means beyond existing functionality in OpenStack?
17:32:53 brucet: there is a whole laundry list - starting with placing VMs with the correct NUMA topology, CPU pinning, ...
17:32:57 brucet, this would be taking into consideration CPU pinning, SR-IOV and NUMA awareness...
17:33:23 Is there a doc for this under Tacker?
17:33:25 brucet: Also affinity/anti-affinity, server groups, availability zones...
17:34:20 brucet, there is not a doc right now, but I shall be producing one after my investigations
17:34:32 sridhar_ram: I need to leave early today, I'll catch up with the meeting notes.
17:34:36 brucet: the end goal is to place the VNF (imagine a set of VDUs / VMs) in the most optimal way for *maximum* performance
17:34:47 Understood.
17:34:47 bobh: sure, ttyl
17:35:11 I'm just trying to see if there's new functionality required beyond what's already in OpenStack
17:35:53 brucet: for now, we are required to work with what's available in openstack...
17:36:02 Ah.... OK
17:36:35 Sorry. I'm just trying to come up to speed.
17:36:50 brucet: no problem at all, thanks for the questions
17:37:33 in fact this whole area (efficient placement) is quite an interesting topic ..
17:38:14 So as I understand it, the goal for now would be to map OpenStack functionality to VNF placement requirements for ETSI
17:38:20 Then see if there are any gaps.
17:38:30 vishwanathj: that's why we need some concrete goals to validate what we delivered ... imagine, using this feature, doing a 10G line-rate passthrough using a VNF placed by Tacker
17:39:38 brucet: spot on. the nova team has done many things in this area for NFV. we will be the "user" of those features and bug them to fix / enhance as needed
17:39:48 Got it.
17:39:49 sridhar_ram, good point....brucet, stay tuned for a spec
17:40:07 Makes perfect sense
17:40:26 I would be happy to join the effort
17:40:57 brucet, looking forward to your review and comments
17:41:03 OK
17:41:04 once I have my spec out
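To make the extra_specs remark above (17:31:18) concrete, a minimal sketch of how the placement knobs from this discussion (CPU pinning, NUMA, huge pages) are expressed as Nova flavor extra_specs via python-novaclient. The flavor name and sizing are illustrative placeholders only.

```python
# Illustrative flavor for a pinned VNF; name and sizes are placeholders.
from novaclient import client as nova_client

def create_pinned_flavor(session):
    nova = nova_client.Client('2', session=session)
    flavor = nova.flavors.create(name='vnf.pinned', ram=4096,
                                 vcpus=4, disk=20)
    # Standard Nova placement hints: dedicated pCPUs, a single guest
    # NUMA node, and large pages for guest memory.
    flavor.set_keys({'hw:cpu_policy': 'dedicated',
                     'hw:numa_nodes': '1',
                     'hw:mem_page_size': 'large'})
    return flavor
```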
17:41:05 brucet: awesome, please do.. any contribution is welcome. You can start with reviews
17:41:14 Perfect for me
17:41:48 On a different topic - SFC - I couldn't be happier with the progress..
17:43:39 sridhar_ram: how is that progressing? Is there going to be a demo during the OPNFV summit?
17:44:04 s3wong: I was looking up an email link to share here...
17:45:04 Check out this email thread in the opnfv ML - #link http://lists.opnfv.org/pipermail/opnfv-tech-discuss/2015-November/006330.html
17:45:22 s3wong: yes, my understanding is there is going to be a demo
17:46:15 This comment in that thread made me happy - "Seems like as far as SFC is concerned, Tacker is the center of the universe"
17:46:25 +1
17:46:27 +1
17:46:31 :-)
17:46:47 seems we are indeed doing something that makes sense for the NFV world :)
17:47:26 anything else on what is in store for Mitaka ?
17:47:39 prashantD_: hi there
17:48:06 prashantD_: we were just talking about how much we should do on "VNF scaling" in Mitaka
17:48:23 sridhar_ram: perhaps you can call for volunteers on the ML for the other Mitaka features, just in case anyone is interested
17:48:26 prashantD_: please reach out to bobh
17:49:22 Can I ask a question about SFC?
17:49:36 sripriya_: good idea...will give a shout out on the ML. Based on my Tokyo summit conversations we should get more folks. Let's see.
17:49:41 brucet: shoot
17:50:14 Seems like SFC would potentially be used internally by Tacker but not exposed in any Tacker APIs, correct?
17:50:25 * sridhar_ram 10min mark
17:50:43 brucet: by SFC, you mean the networking-sfc APIs?
17:50:56 Trying to remember how ETSI MANO describes SFC usage
17:51:00 brucet: for now SFC will be exposed as a post-VNF-instantiation API
17:51:27 OK. So the ability to chain VNFs?
17:51:54 Neutron Service Function Chaining APIs
17:51:54 brucet: in a follow-on phase we will start supporting VNFFGD (Forwarding Graph Descriptor) to automatically render the chains without the need to invoke Tacker SFC APIs
17:52:24 OK. I need to look at the MANO VNF FGD
17:52:33 So we need to map that to Neutron SFC
17:53:07 brucet: yes, as sridhar_ram mentioned, we will likely kick off with a Tacker NB API for SFC setup after VNFs are instantiated; in the future, once VNFFG is well defined, we will use that as the NB
17:53:25 OK
17:53:36 brucet: on the SB, particularly for setting up traffic plumbing, we should actively integrate with networking-sfc, that's for sure
17:53:56 The Neutron SFC guys are expecting that
17:54:19 s3wong is one of those neutron-sfc guys :)
17:54:30 Ah..... OK
17:54:41 Newbie
17:54:42 brucet: on the email thread sridhar_ram sent out from opnfv-tech-discuss, various people are suggesting having Tacker plug in SDN controllers directly
17:54:53 brucet: no worries!
17:54:58 hi all, just a quick question, when "neutron-sfc" is mentioned, are you referring to "openstack/networking-sfc"?
17:55:35 igordcard: yes
17:55:41 we looked briefly into that; to a certain extent, we may NEED to do that anyway --- for example, ODL SFC actually has a templatization side of SFC setup, which something like Neutron port-chaining would not be able to fully support
17:56:22 s3wong: that area is still fluid IMO..
17:56:56 sridhar_ram: certainly networking-sfc API integration is in order for us
17:57:00 s3wong: my preference is if we can normalize everything behind the neutron-sfc API.. that's the best. But I also realize the world is not perfect!
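As a companion to the networking-sfc discussion above, a minimal sketch of setting up a single-hop chain through that API, assuming a python-neutronclient build that carries the networking-sfc bindings (those landed after this meeting); all ports/UUIDs here are placeholders.

```python
# Sketch of a one-hop service chain via networking-sfc; all port UUIDs
# are placeholders.
from neutronclient.v2_0 import client as neutron_client

def build_chain(session, vnf_ingress_port, vnf_egress_port, src_port):
    neutron = neutron_client.Client(session=session)
    # A port pair is one VNF instance: its ingress and egress Neutron ports.
    pp = neutron.create_sfc_port_pair(
        {'port_pair': {'ingress': vnf_ingress_port,
                       'egress': vnf_egress_port}})
    # A port pair group holds the instances traffic can be balanced across.
    ppg = neutron.create_sfc_port_pair_group(
        {'port_pair_group': {'port_pairs': [pp['port_pair']['id']]}})
    # The flow classifier selects which traffic enters the chain.
    fc = neutron.create_sfc_flow_classifier(
        {'flow_classifier': {'protocol': 'tcp',
                             'logical_source_port': src_port}})
    return neutron.create_sfc_port_chain(
        {'port_chain': {'port_pair_groups': [ppg['port_pair_group']['id']],
                        'flow_classifiers': [fc['flow_classifier']['id']]}})
```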
17:57:32 sridhar_ram: obviously we don't want to adopt networking-sfc only to have it turn out to be less functional than what Tim and Dan are doing
17:58:11 s3wong: as long as we take a nice stab at a Tacker sfc-driver abstract class - with, say, 70% adopting neutron-sfc and some oddballs going directly to their controller - that might be one future
17:58:28 sridhar_ram: that's what I think will be the case as well
17:58:28 we are almost out of time
17:58:31 2 minutes
17:58:39 Not sure why any calls would be needed directly to an SDN controller
17:58:56 let's wrap.. we can continue the discussion next week..
17:59:00 brucet: it depends on the SDN controller
17:59:06 OK
17:59:15 Would like to see the use case
17:59:30 Folks with Mitaka deliverables, please start working on the blueprints..
17:59:50 sridhar_ram: mine already has a bp, right? :-)
17:59:51 even some simple WIP blueprints would be nice to see by next week
18:00:03 s3wong: you are covered!
18:00:09 time's up...
18:00:15 Bye
18:00:17 thanks for joining folks!
18:00:17 bye, folks!
18:00:25 thanks
18:00:31 #endmeeting
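For readers following the sfc-driver abstraction discussed at 17:58:11, a minimal sketch of the shape such a class might take; every name here is hypothetical (written as modern Python 3), not actual Tacker code.

```python
# Hypothetical driver abstraction: networking-sfc as the default backend,
# with room for direct SDN-controller drivers (e.g. ODL) as the oddballs.
import abc

class SfcDriverBase(abc.ABC):
    """Hypothetical interface each SFC backend would implement."""

    @abc.abstractmethod
    def create_chain(self, vnf_ports, classifier):
        """Plumb a chain through the given VNF ports; return a chain id."""

    @abc.abstractmethod
    def delete_chain(self, chain_id):
        """Tear the chain down."""

class NetworkingSfcDriver(SfcDriverBase):
    """The ~70% case: would delegate to the networking-sfc calls
    sketched earlier in the log."""

    def create_chain(self, vnf_ports, classifier):
        raise NotImplementedError("sketch only")

    def delete_chain(self, chain_id):
        raise NotImplementedError("sketch only")
```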