17:00:24 <sridhar_ram> #startmeeting tacker
17:00:31 <openstack> Meeting started Tue Feb 23 17:00:24 2016 UTC and is due to finish in 60 minutes.  The chair is sridhar_ram. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:33 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:35 <openstack> The meeting name has been set to 'tacker'
17:00:41 <sridhar_ram> #topic Roll Call
17:00:44 <vishwana_> o/
17:00:45 <dkushwaha_> o/
17:00:46 <sridhar_ram> who is here for tacker ?
17:00:54 <brucet> hello
17:01:03 <prashantD> o/
17:01:07 <sripriya> o/
17:01:16 <santoshk> Hello
17:01:20 <sridhar_ram> howdy everyone!
17:01:27 <sridhar_ram> lets start...
17:01:32 <sridhar_ram> #topic Agenda
17:01:39 <sridhar_ram> #info #link https://wiki.openstack.org/wiki/Meetings/Tacker#Meeting_Feb_23.2C_2016
17:01:58 <sridhar_ram> #topic Announcements
17:02:36 <sridhar_ram> Big tent application is under review by OpenStack TC - #link https://review.openstack.org/#/c/276417
17:02:53 <sridhar_ram> progressing well with many +1 votes!
17:03:24 <vishwana_> way to go
17:04:11 <sripriya> nice!
17:04:17 <sridhar_ram> Mitaka release schedule is at #link http://releases.openstack.org/mitaka/schedule.html
17:04:39 <sridhar_ram> we are at R-6 (release minus 6 weeks)
17:04:52 <sridhar_ram> the window is closing soon for Mitaka :(
17:05:16 <sridhar_ram> Nothing else to announce from my side...
17:05:36 <sridhar_ram> hope we will have something *big* to announce next week :)
17:05:50 <vishwana_> looking forward to that big announcement
17:05:56 <sridhar_ram> vishwana_: yep
17:06:05 <sridhar_ram> fingers crossed!
17:06:18 <sridhar_ram> #topic Enhanced VNF Placement update
17:06:29 <sridhar_ram> Spec at #link https://review.openstack.org/#/c/257847
17:06:39 <sridhar_ram> vishwana_: can you please share updates where we are ?
17:06:59 <vishwana_> sridhar_ram, sure....
17:08:06 <vishwana_> FYI: I have an initial Work In Progress patchset https://review.openstack.org/#/c/269295/ that validates some of what's documented in the blueprint....
17:09:41 <vishwana_> I have been able to successfully test the VNFD template proposals in the blueprint for CPU pinning, huge pages, and NUMA topology in my WIP patchset....
17:10:04 <sridhar_ram> vishwana_: cool, what is remaining to wrap up the spec ?
17:10:33 <vishwana_> I anticipate that the VNFD template proposal will undergo changes in terminology with feedback from Bob and you .....
17:11:23 <sridhar_ram> vishwana_: yes, I'm working with the TOSCA NFV group to introduce these granular cpu-architecture attributes in the TOSCA NFV profile..
17:12:12 <vishwana_> I am currently calling out some properties under the host section ....
17:12:48 <sridhar_ram> vishwana_: current thought is it will be moved as "capabilities" of the VDU node itself
17:12:57 <vishwana_> I expect those to be renamed as tosca.nodes.Compute
17:13:22 <sridhar_ram> vishwana_: okay
17:13:23 <vishwana_> sridhar_ram, yes
17:14:19 <sridhar_ram> I can help with the template part of this spec.
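For reference, a minimal sketch of the kind of VNFD fragment being discussed, assuming the placement attributes end up as capabilities of the VDU node as suggested above; the property names (nfv_compute, cpu_allocation, mem_page_size, numa_node_count) are illustrative only and subject to the TOSCA NFV terminology still under review:

    node_templates:
      VDU1:
        type: tosca.nodes.nfv.VDU
        capabilities:
          nfv_compute:                     # hypothetical capability name
            properties:
              num_cpus: 4
              mem_size: 4096 MB
              cpu_allocation:              # CPU pinning
                cpu_affinity: dedicated
              mem_page_size: large         # huge pages
              numa_node_count: 2           # NUMA topology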
17:14:29 <vishwana_> the one other major part is to see how sr-iov capability would be specified....
17:15:06 <sridhar_ram> ignoring tacker for a moment, how is it specified in Heat HOT templates?
17:15:21 <brucet> vishwana_: This was going to be my question
17:15:57 <brucet> Is the plan to translate everything to HOT? If so, the spec should have an example translation.
17:16:28 <vishwana_> sridhar_ram, brucet, great question ...
17:16:37 <vishwana_> I am currently in the midst of setting up a server with an Intel NIC that has SR-IOV capabilities....
17:17:47 <vishwana_> I am first trying to make sure that I can successfully create an SR-IOV port using neutron and then boot a VM using Nova by specifying that SR-IOV port...
17:18:57 <brucet> So first test a non-automated setup
17:19:04 <sridhar_ram> vishwana_: makes sense
17:19:15 <vishwana_> I have run into some issues ..... consulting with sgordon to resolve them ..... once resolved, I will investigate how it works with heat
17:19:40 <sridhar_ram> vishwana_: sounds good / rather practical !
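As a reference for the Heat side of the SR-IOV question, a minimal HOT sketch of what the manual test maps to once it works, assuming placeholder network/flavor/image names (sriov-net, m1.medium, vnf-image); the key piece is the binding:vnic_type property on the Neutron port:

    heat_template_version: 2015-10-15
    resources:
      sriov_port:
        type: OS::Neutron::Port
        properties:
          network: sriov-net               # assumed SR-IOV capable provider network
          binding:vnic_type: direct        # request an SR-IOV VF instead of a virtio port
      vdu_server:
        type: OS::Nova::Server
        properties:
          flavor: m1.medium                # placeholder flavor
          image: vnf-image                 # placeholder image
          networks:
            - port: { get_resource: sriov_port }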
17:20:06 <sridhar_ram> given our release timeline, my suggestion would be to keep sr-iov as a stretch goal and in a separate patchset ..
17:20:29 <sridhar_ram> let the CPU-related placement be in the initial patchset and continue the review and merge..
17:20:33 <vishwana_> sridhar_ram, great suggestion, makes sense keeping the release timeline in mind
17:21:04 <sridhar_ram> I don't want to risk this whole feature just for sr-iov...
17:21:32 <sridhar_ram> however, I imagine you would keep pushing to get sr-iov in.. we still have room
17:21:52 <vishwana_> from the closing-the-spec perspective, I need to respond to bobh's and your initial comments and incorporate the TOSCA NFV terminology
17:22:28 <sridhar_ram> lets make a call on sr-iov by end of this week...
17:22:45 <vishwana_> those are the big items for now as related to closing the spec.....
17:22:48 <sridhar_ram> whether it is going to stay in this spec or will get split out into a separate one
17:22:58 <vishwana_> sridhar_ram, agree on the call by end of the week for sr-iov
17:23:17 <sridhar_ram> vishwana_: sounds good, anything else ?
17:23:32 <sridhar_ram> anyone else have any question on this topic ?
17:24:08 <vishwana_> sridhar_ram, I am expecting that network creation will be part of automatic resource creation and will be leveraged by the enhanced vnf code
17:24:21 <sridhar_ram> vishwana_: for sr-iov part ?
17:24:36 <sripriya> sridhar_ram: EPA will consume tosca parser changes from bobh?
17:25:18 <sridhar_ram> sripriya: yes
17:25:38 <vishwana_> sridhar_ram, in general for both - any networks specified as part of the network_interfaces section under the VDU
17:25:48 <sridhar_ram> vishwana_: you should plan to move your WIP to be based on bobh's WIP
17:25:58 * sridhar_ram looking for bobh's WIP link
17:26:09 <sripriya> so there is no extra functionality to be handled by Tacker or is the entire template offloaded to the tosca parser?
17:26:16 <sridhar_ram> #link https://review.openstack.org/#/c/278121/
17:26:57 <sridhar_ram> there are two parts...
17:27:20 <vishwana_> sridhar_ram, makes sense that my WIP be based on bobh's WIP
17:27:31 <sridhar_ram> 1) the spec portion will go into tosca-parser (or for now stay in tacker_nfv.yaml overlay over nfv profile)
17:27:56 <sridhar_ram> 2) the HOT translation portion needs to be coded in tacker itself (and not in heat-translator)
17:28:06 <vishwana_> I see
17:28:11 <sripriya> ah ok..
17:28:52 <brucet> <sridhar_ram> I thought that the HOT generator was also part of the translator
17:29:39 <brucet> TOSCA NFV >> Intermediate format >> Hot or Tacker feature implementation
17:29:46 <sridhar_ram> brucet: as discussed in previous call there will be a portion of HOT translation in tacker related to flavor creation
17:30:50 <brucet> Meaning that the flavor is created outside of HOT??
17:30:50 <sridhar_ram> heat-translator's flavor handling is something we couldn't consume..
17:31:41 <sridhar_ram> bobh would have the answer, he is not around today
17:31:53 <brucet> OK.
17:32:38 <sridhar_ram> we don't have both bobh and tbh in this call today.. they are in the food chain in front of vishwana_
17:32:50 <sridhar_ram> that would flush out our dependencies ..
17:33:36 <vishwana_> agree
17:33:48 <sridhar_ram> anything else on this? if not lets move on..
17:34:11 <sridhar_ram> #topic Liberty Release updates
17:34:28 <sridhar_ram> For the folks who weren't tracking...
17:34:52 <sridhar_ram> we have a stable/liberty branch but we haven't pushed a release to PyPI for liberty..
17:35:21 <sridhar_ram> it was pending on a few things .. particularly on the device API deprecation
17:35:41 <sridhar_ram> now dkushwaha is helping to get the deprecation done...
17:36:20 <sridhar_ram> now I'm looking to make a liberty release once that lands
17:36:36 <dkushwaha_> sridhar_ram, most probably I will finish it by tomorrow
17:36:44 <sridhar_ram> if anyone has critical things they need to see in Liberty, please cherry-pick ASAP
17:37:09 <sridhar_ram> dkushwaha_: np, thanks for getting this in!
17:37:39 <sridhar_ram> this release will be consumed by a few downstream projects like OPNFV
17:38:26 <sridhar_ram> moving on...
17:38:38 <sridhar_ram> #link VNF Scaling
17:38:48 <sridhar_ram> #topic VNF Scaling
17:39:20 <sridhar_ram> I'd like to have some preliminary discussion on VNF scaling ... particularly at the requirement level
17:39:49 <brucet> <sridhar_ram> did you see my email yesterday??
17:39:50 <sridhar_ram> what kind of VNF scaling would you folks like to see "enabled" through Tacker ?
17:40:04 <sridhar_ram> brucet: yes,
17:40:18 <brucet> OK.
17:40:28 <sridhar_ram> brucet: sorry, I didn't have time to reply; we should do these discussions on the ML, btw
17:40:45 <brucet> ML??
17:40:53 <brucet> Mailing list
17:40:54 <brucet> OK
17:41:00 <brucet> I'll send now
17:41:06 <sridhar_ram> mailing-list - openstack-dev with [tacker] tag
17:41:14 <prashantD> sridhar_ram : do we have any input from outside about what people are interested in?
17:41:46 <sridhar_ram> however I want to intentionally skip the implementation for a moment and think about what makes sense for an NFV operator
17:42:07 <brucet> OK
17:43:10 <sridhar_ram> prashantD: I can share what I've been asked to provide off tacker.. I'd also like to get folks opinion here ...
17:44:00 <sridhar_ram> brucet: prashantD: my questions are around manual scaling vs auto scaling...
17:44:24 <sridhar_ram> if auto-scaling, are we looking at traditional CPU load?
17:45:20 <brucet> I would think you would want to take any type of application generated signal to trigger a scaling operation.
17:45:44 <brucet> I think this can actually be done with Heat using signals
17:45:59 <sridhar_ram> brucet: agree, scaling based on "triggers"..
17:46:16 <sridhar_ram> trigger could be from application or even some external monitoring s/w
17:46:27 <brucet> Yes
17:46:54 <sridhar_ram> one of the trigger could be manual ? from the operator ?
17:47:08 <brucet> Yes
17:47:32 <sridhar_ram> btw, there are some operators interested in manual "VNF" scaling .. when tacker is used as a VNF manager.
17:47:41 <brucet> remember the discussion on Senlin a while ago?
17:48:04 <brucet> I know Senlin includes a method for an application to generate a trigger.
17:48:23 <sridhar_ram> there would be a higher-level NSD getting a scale-up event, and the NSO will call Tacker to scale up; that scale-up will look like a "manual" scale-up request
17:48:26 <brucet> And Senlin functionality will be incorporated into Heat as Resources
17:49:03 <brucet> That's the approach I would use
17:49:05 <sridhar_ram> yeah, we need to look into Senlin; I checked with their PTL, Heat resources for Senlin will be available in Mitaka
17:49:18 <brucet> Ah... Very good
17:49:52 <sridhar_ram> I'm trying to see how we could stack the chips..
17:50:18 <sridhar_ram> get a simple "trigger" based scale up / down framework..
17:50:22 <brucet> The problem that I see is there is nothing defined in Tosca NFV for scaling policy
17:50:37 <sridhar_ram> a specific VDU will scale up / down within a {min, max} range
17:51:02 <sridhar_ram> there are some constructs in Simple Profile we can consume
17:51:06 <brucet> Yes. But where would trigger be defined?
17:51:14 <sridhar_ram> they have a nice event --> trigger --> action sequence
17:51:14 <brucet> For triggers??
17:51:34 <brucet> If you could point me to it I would appreciate it
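For the pointer brucet asked for: the TOSCA Simple Profile expresses the event --> trigger --> action sequence as policy triggers, roughly along the lines of the sketch below; the policy type and key names here are approximations and should be checked against the Simple Profile spec before being carried into a Tacker spec:

    policies:
      - vdu_scaling:
          type: tosca.policies.Scaling     # approximate policy type name
          targets: [ VDU1 ]
          triggers:
            high_cpu:
              event_type: cpu_utilization          # event being watched (illustrative)
              condition:
                constraint: { greater_than: 80 }   # trigger threshold (illustrative)
              action: scale_out                    # resulting action (illustrative)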
17:51:42 <prashantD> sridhar_ram, brucet: yes, how will the user trigger a scale event with Senlin?
17:52:15 <brucet> There is an HTTP-based API that Senlin listens on to trigger events
17:52:43 <sridhar_ram> IMO we need a tacker VDU trigger into Senlin to be in the loop..
17:53:10 <prashantD> i see, so user will have to use a non-tacker interface to trigger scale event?
17:53:20 <sridhar_ram> this trigger could be used (a) for a manual scale request coming to tacker or (b) by any mon driver in tacker getting a "scale up" response
17:53:31 <brucet> You could abstract the Senlin interface with a Tacker interface
17:53:59 <brucet> So NFV app >> Tacker Trigger >> Senlin Trigger
17:54:22 <sridhar_ram> yeah, something like that..
17:54:35 <brucet> Or if Tosca defines triggers then this could be used to drive Senlin Trigger API
17:54:47 <sridhar_ram> prashantD: we need some experimentation trying out Senlin
17:55:19 <brucet> I actually thought there was a trigger mechanism in Heat today without Senlin.
17:55:26 <brucet> I can check into this.
17:55:43 <prashantD> brucet : in heat there is a stack-update
17:55:58 <brucet> And you send a signal??
17:56:00 <sridhar_ram> if I step back I see two models... (a) a set-and-forget model - scale using CPU load, memory, etc. that Senlin can figure out (b) scaling based on a trigger that Tacker will "facilitate"
17:56:05 <prashantD> which allows you to scale and that is what i am doing
17:56:18 <brucet> OK
17:56:53 <brucet> If we could start without depending on Senlin I think that would be better
17:57:22 <sridhar_ram> do you folks see value in implementing manual scaling using heat stack-update until we get some sense out of Senlin ?
17:57:31 <brucet> Yes
17:57:52 <brucet> The basics between manual and automated scaling are the same
17:58:03 <sridhar_ram> prashantD: did you re-write your spec into one for manual scaling ?
17:58:08 <sridhar_ram> brucet: agree
17:58:09 <brucet> You need to create an autoscaling group in Heat in both cases
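To make the Heat-based option concrete, a minimal sketch of the autoscaling-group approach brucet describes, assuming a placeholder nested template (vdu.yaml) that defines a single VDU; a manual scale request, or any external monitor, would then reduce to signalling the policy resource, e.g. heat resource-signal <stack-name> scale_out_policy:

    heat_template_version: 2015-10-15
    resources:
      vdu_group:
        type: OS::Heat::AutoScalingGroup
        properties:
          min_size: 1                      # {min, max} range from the VNFD
          max_size: 3
          resource:
            type: vdu.yaml                 # placeholder nested template for one VDU
      scale_out_policy:
        type: OS::Heat::ScalingPolicy
        properties:
          adjustment_type: change_in_capacity
          scaling_adjustment: 1            # add one VDU per signal
          auto_scaling_group_id: { get_resource: vdu_group }

A second policy with scaling_adjustment: -1 would cover scale-in the same way, which keeps manual and auto-scaling on the same plumbing.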
17:58:29 <prashantD> sridhar_ram : i wrote a separate manual scaling spec
17:58:34 <brucet> Is there already a scaling spec?
17:58:48 * sridhar_ram almost out of time
17:58:50 <prashantD> https://review.openstack.org/#/c/283163/
17:58:57 <brucet> I will look at this now
17:59:04 <brucet> I didn't know there was already a spec
17:59:13 <sridhar_ram> brucet: prashantD: lets continue the discussion in that spec..
17:59:20 <prashantD> brucet : there is also an auto-scaling one, based on ceilometer though
17:59:27 <sridhar_ram> brucet: looks like this showed up a couple of days back :)
17:59:47 <sridhar_ram> lets continue the discussion next week..
17:59:55 <brucet> In any case, I'll be happy to review the existing spec.
18:00:03 <brucet> No need to send email to list then.
18:00:15 <sridhar_ram> brucet: agree, lets continue in gerrit!
18:00:21 <sridhar_ram> time's up..
18:00:26 <sridhar_ram> thanks folks ..
18:00:32 <sridhar_ram> #endmeeting