16:00:12 <sripriya> #startmeeting tacker
16:00:13 <openstack> Meeting started Tue Nov 22 16:00:12 2016 UTC and is due to finish in 60 minutes.  The chair is sripriya. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:14 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:16 <openstack> The meeting name has been set to 'tacker'
16:00:30 <sripriya> hello tackers
16:00:32 <sripriya> #topic Roll Call
16:00:36 <tbh> o/
16:00:38 <janki> o/
16:00:39 <marchetti> o/
16:00:41 <diga> o/
16:01:17 <sripriya> tbh janki marchetti diga hi!
16:01:23 <tung_doan> Hi all
16:01:33 <sripriya> tung_doan: hi
16:01:34 <janki> hey
16:01:37 <marchetti> hi all
16:01:55 <marchetti> btw, this is mike_m, i lost my nick
16:02:22 <sripriya> marchetti: it took me some time to figure out :-)
16:03:06 <sripriya> #chair tbh
16:03:07 <openstack> Current chairs: sripriya tbh
16:03:14 <sripriya> lets get started
16:03:29 <sripriya> #topic agenda
16:03:45 <sripriya> https://wiki.openstack.org/wiki/Meetings/Tacker#Meeting_Nov_22nd.2C_2016
16:03:50 <sripriya> #link https://wiki.openstack.org/wiki/Meetings/Tacker#Meeting_Nov_22nd.2C_2016
16:04:30 <sripriya> we can quickly go through the patches and then talk about the ocata spec
16:04:56 <sripriya> since diga and marchetti are also here we can touch upon pecan update and multi vim support
16:05:07 <s3wong> hello
16:05:20 <sripriya> #topic Tacker dsvm gate failure
16:05:24 <sripriya> s3wong: hi
16:05:47 <sripriya> as you have observed, tacker dsvm job is broken and hence all patches are failing
16:05:53 <diga> sripriya: yes, we should discuss that
16:06:14 <sripriya> this is due to mysql versions we are using on the gate
16:06:47 <sripriya> aodh folks helped us identify this error since it was specifically seen for the aodh mysql installation
16:07:18 <sripriya> we use mysql 5.5 version on gate with ubuntu trusty
16:07:38 <sripriya> we need to move to 5.6 to update all our jobs to ubuntu-xenial
16:08:16 <sripriya> i will work with infra teams to update our tacker jobs to use xenial as a priority
16:08:19 <manikanta_> Hi All
16:08:59 <sripriya> this is a heads up to all of us: if you are using the mysql 5.5 version in your development environment, please upgrade to the 5.6 version
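For reference, a minimal sketch of how a developer could check which MySQL server version a box is running before upgrading; the connection settings below are assumed local devstack-style defaults, not anything stated in the meeting.

    # Minimal sketch (assumed local devstack-style credentials): report whether
    # the MySQL server is older than the 5.6 needed for the xenial-based gate.
    import pymysql

    conn = pymysql.connect(host="127.0.0.1", user="root", password="secretmysql")
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT VERSION()")
            version = cur.fetchone()[0]          # e.g. "5.5.53-0ubuntu0.14.04.1"
        major, minor = (int(p) for p in version.split(".")[:2])
        if (major, minor) < (5, 6):
            print("MySQL %s found; please upgrade to 5.6+" % version)
        else:
            print("MySQL %s is new enough" % version)
    finally:
        conn.close()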
16:09:25 <sripriya> manikanta_: hello
16:09:38 <sripriya> any questions or thoughts team?
16:10:12 <diga> yes
16:11:03 <sripriya> diga: did you have something to share?
16:11:32 <diga> sripriya: My base API framework is ready
16:12:01 <sripriya> diga: we will get to it, we are first knocking off the agenda items in the meeting wiki
16:12:02 <diga> sripriya: I want to push it under tacker
16:12:05 <sripriya> next topic
16:12:10 <sripriya> #topic Adding status field in VNFD
16:12:37 <sripriya> here is the patch https://review.openstack.org/#/c/373157/
16:13:14 <manikanta_> Regarding review comments on https://review.openstack.org/#/c/373157/ , instead of displaying the constant VNFD status as ACTIVE or NA
16:13:19 <sripriya> manikanta_: i know you wanted to discuss more on this patch with rest of folks, take it away
16:13:36 <manikanta_> is it better we have a status field in VNFD object ?
16:13:55 <manikanta_> gongysh suggested this, Need inputs from others as well ?
16:15:16 <tbh> whether it is "state" or "status", I don't think the end user is using that info for any purpose
16:15:37 <sripriya> manikanta_: interesting, till now we have treated only VNF as the entity going through different life cycle management states, VNFD is more of a static catalog from which VNF is created
16:16:03 <manikanta_> state or status here refers to proper onboarding of the VNFD in the catalogue, IMO
16:16:21 <tbh> are we using VNFD status like enable/disable? I mean, enabled ones are the ones from which we launch VNFs?
16:16:21 <sripriya> manikanta_: tbh it also doesn't make sense to display status as N/A for VNFD
16:16:26 <janki> IMO, status of vnfd is not much used
16:16:39 <tbh> sripriya, yes
16:17:01 <manikanta_> So every time, we assume that the state of the VNFD is ACTIVE? In all cases?
16:17:12 <sripriya> janki: i think the patch is dealing with VNFD events and not to the end user
16:18:06 <janki> sripriya, manikanta_ VNFD is not a running entity that would have stages. once created, it sits statically in the db
16:18:10 <tbh> sripriya, yes eventually logs for the user? and is there any other event for VNFD?
16:18:40 <sripriya> so when a user queries VNFD events, even though we do not maintain any status on the actual VNFD entries, we need to keep this consistent with the VNFD events
16:19:11 <sripriya> tbh: we also access the specific VNFD events through Horizon right?
16:19:44 <tbh> sripriya, yes we can access
16:19:59 <sripriya> manikanta_: i don't know if ACTIVE can be used since it is easy to confuse with the VNF status ACTIVE
16:20:26 <janki> we can use "created"
16:20:44 <janki> or uploaded
16:21:01 <sripriya> or onboarded?
16:21:02 <janki> or onboarded - this goes with horizon term too
16:21:07 <manikanta_> janki: sripriya : How about onboarded
16:21:14 <janki> sripriya, same thoughts :)
16:21:20 <sripriya> since we onboard VNFD and deploy VNF
16:21:22 <sripriya> :-)
16:21:36 <sripriya> do we agree with onboarded then?
16:21:46 <tbh> +1
16:21:50 <manikanta_> sripriya: +1 from myside
16:22:10 <sripriya> cool, manikanta_ please update the patch with the comments
16:22:24 <janki> will this state be displayed on CLI and horizon too?
16:22:30 <manikanta_> sripriya, janki tbh : Thanks for the inputs
16:22:36 <sripriya> janki: yes
16:22:41 <sripriya> next topic,
16:22:42 <manikanta_> sripriya, : Will update the same and respond to gongysh
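As a rough illustration of the direction agreed above (not the actual content of https://review.openstack.org/#/c/373157/), the change could boil down to a single onboarding status that both the VNFD listing and its event records report; the constant and function names here are hypothetical.

    # Hypothetical sketch: VNFDs always report one onboarding status, kept
    # consistent between the catalog view and the VNFD event log.
    VNFD_STATUS_ONBOARDED = "ONBOARDED"   # assumed constant name, not from the patch

    def vnfd_api_view(vnfd_row):
        """Build the API/Horizon-facing dict for a VNFD catalog entry."""
        return {
            "id": vnfd_row["id"],
            "name": vnfd_row["name"],
            # A VNFD is a static catalog entry, so unlike a VNF it has no
            # lifecycle states; the status never changes after onboarding.
            "status": VNFD_STATUS_ONBOARDED,
        }

    def vnfd_create_event(vnfd_id):
        """Event record written when the VNFD is onboarded."""
        return {"resource_id": vnfd_id,
                "resource_state": VNFD_STATUS_ONBOARDED,
                "event_type": "CREATE"}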
16:22:48 <sripriya> #topic Fix hard coded VDU in alarm monitor
16:23:38 <sripriya> tung_doan: where do we stand as of now for this patch https://review.openstack.org/#/c/382479/
16:23:49 <tung_doan> sripriya: thanks.. my patch is almost done.. just need reviews..
16:24:08 <tung_doan> sripriya: just one concern that I need to discuss with you guys...
16:24:18 <sripriya> tung_doan: was there any dependent patches that needed to be reviewed before we review this?
16:24:26 <sripriya> tung_doan: sure please
16:24:55 <tung_doan> sripriya: regarding the scaling use case, when we support both scaling in/out...
16:25:21 <tung_doan> sripriya: the alarm format needs to know the specific scaling action (in/out)
16:25:38 <tung_doan> sripriya: but these are not shown in the tosca-template
16:25:55 <sripriya> tung_doan: okay
16:25:58 <tung_doan> sripriya: that's why i need to parse it to get scaling in/out
16:26:11 <tung_doan> sripriya: does it make sense?
16:26:11 <sripriya> tung_doan: can you share a sample tosca template
16:26:20 <sripriya> tung_doan: it is easier to understand that way
16:26:37 <tung_doan> sripriya: https://review.openstack.org/#/c/382479/17/samples/tosca-templates/vnfd/tosca-vnfd-alarm-scale.yaml
16:27:13 <sripriya> tung_doan: i think till now it was assumed it would always be a scale out operation
16:27:35 <tung_doan> sripriya: yes.. but the new patch fixes this.
16:28:35 <sripriya> tung_doan: so right now you point to the same scaling policy in both cases?
16:29:07 <tung_doan> sripriya: right.. but it will be parsed.. like this: https://review.openstack.org/#/c/382479/17/doc/source/devref/alarm_monitoring_usage_guide.rst@136
16:29:40 <tung_doan> sripriya: just look in alarm url
16:30:10 <tung_doan> sripriya: we have SP1-in and SP1-out for scaling-in and scaling-out
16:31:04 <sripriya> tung_doan: ok so we need specific policies based on alarm triggers
16:31:20 <tung_doan> sripriya: that's right
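To make the parsing idea concrete, here is a hedged sketch of what splitting an alarm action such as "SP1-in" / "SP1-out" back into a scaling policy name and a direction might look like; the helper name and the alarm URL shape are illustrative, not taken from the patch.

    # Illustrative only: recover the scaling policy and direction from the
    # action segment of an alarm webhook URL, as discussed above.
    def parse_scaling_action(action):
        """Return (policy_name, direction) for actions like 'SP1-in' / 'SP1-out'."""
        for suffix in ("-in", "-out"):
            if action.endswith(suffix):
                return action[:-len(suffix)], suffix.lstrip("-")
        # Fall back to the historical behaviour of always scaling out.
        return action, "out"

    # Assumed URL shape: .../<vnf_id>/<action>/<key>
    url = "http://tacker-host:9890/v1.0/vnfs/1234/SP1-in/abcd"
    policy, direction = parse_scaling_action(url.split("/")[-2])
    print(policy, direction)   # -> SP1 in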
16:31:21 <sripriya> tung_doan: so what is the current implementation with your patch?
16:32:11 <tung_doan> sripriya: i already tested.. scaling in/out is supported, and the hardcoded VDU fix was done using metadata
16:32:36 <tung_doan> sripriya: almost all items for the alarm RFE are done
16:33:10 <sripriya> tung_doan: i see that you apply the scaling policy based on the operator specified in alarm policy
16:34:17 <sripriya> tung_doan: we may be missing some edge cases here, but let us get this version out and later work on enhancing it
16:34:37 <tung_doan> sripriya: ok.. thanks
16:34:59 <tung_doan> sripriya: also, please share your suggestions if possible, thanks
16:35:35 <sripriya> team, please review this patch, we need to get this into newton and make a stable release, kindly provide your comments or leave your +1s
16:35:46 <sripriya> tung_doan: will take a closer look at the patch today
16:35:55 <sripriya> tung_doan: was there any other patch related to this topic?
16:36:31 <tung_doan> sripriya: i already mentioned to you that scaling gets stuck in PENDING_SCALING_STATE
16:37:00 <sripriya> tung_doan: is this for the scale_in?
16:37:02 <tung_doan> sripriya: i will look into that later..
16:37:07 <sripriya> tung_doan: okay
16:37:12 <sripriya> moving on
16:37:16 <sripriya> #topic VNF Container spec
16:37:24 <tung_doan> sripriya: right. actually heat fixed this
16:37:41 <sripriya> janki: take it away
16:37:46 <sripriya> #link https://review.openstack.org/#/c/397233/
16:37:49 <sripriya> tung_doan: ack
16:37:50 <tung_doan> sripriya: so we can leverage it
16:37:55 <janki> sripriya,  I have replied to the comments
16:38:19 <janki> I am thinking of calling magnum apis directly instead of going via heat
16:38:38 <janki> we can also directly supply dockerfile for vnf creation
16:38:49 <sripriya> janki: help me understand, when we discussed at the design summit, we talked about zun for container life cycle management
16:39:00 <janki> in similar lines to supporting HOT template for vnf creation
16:39:22 <janki> sripriya, yes zun is the best approach.
16:39:42 <janki> since zun is not fully ready, going with magnum for the first iteration
16:40:03 <sripriya> janki: so does magnum also support container creation?
16:40:18 <sripriya> janki: magnum is supposed to manage COEs right?
16:40:31 <diga> sripriya: yes, its COE
16:40:51 <janki> sripriya, yes magnum is to manage COEs, but it does create containers in a bay and then manage the bay
16:41:09 <janki> tbh correct me if i am wrong here
16:41:14 <diga> sripriya: I think we should go with magnum
16:41:36 <diga> janki: bays are now called clusters
16:41:40 <sripriya> janki: i guess those are specific container platform commands and not openstack api commands
16:43:08 <janki> sripriya, i think there are openstack commands too. need to check on this
16:43:09 <diga> sripriya: no, all the commands are openstack apis
16:43:26 <sripriya> janki: diga : okay and this is through magnum?
16:43:34 <janki> sripriya, yes
16:43:37 <diga> sripriya: yes
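For context on what "calling Magnum APIs directly" could look like from Tacker's side, a minimal sketch against Magnum's REST interface follows; the endpoint URL, token handling and response fields are assumptions for illustration and are not part of the spec being discussed.

    # Rough sketch (assumed endpoint/token): list Magnum COE clusters the way a
    # Tacker infra driver might, using plain authenticated REST calls.
    import requests

    MAGNUM_ENDPOINT = "http://controller:9511/v1"   # assumed Magnum API endpoint
    TOKEN = "<keystone-token>"                      # fetched from Keystone beforehand

    resp = requests.get(MAGNUM_ENDPOINT + "/clusters",
                        headers={"X-Auth-Token": TOKEN,
                                 "Content-Type": "application/json"})
    resp.raise_for_status()
    for cluster in resp.json().get("clusters", []):
        # newer Magnum releases call bays "clusters", as noted above
        print(cluster.get("name"), cluster.get("status"))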
16:43:38 <tbh> sripriya, janki I feel magnum won't serve the purpose for the following reasons ... 1) It will launch the containers in nova instances, which will be extra overhead 2) sometimes if we want to scale out the NF, it may launch a new nova instance (if the existing VMs serving the COE are out of capacity), so again introducing VM problems here 3) If we choose magnum, we need to select a COE first and develop those COE-specific yaml files
16:44:18 <janki> tbh +1. but this is for the first iteration I am proposing
16:44:37 <sripriya> tbh: thanks for the info
16:44:46 <sripriya> janki: tbh: how about zun?
16:45:01 <sripriya> i did not see any REST api docs of the project, i may be missing something
16:45:03 <tbh> janki, even if we consider magnum in the first iteration and then in future want to move to zun, we would need to change a lot of the codebase, which is undesirable
16:45:03 <diga> tbh: I don't agree with you, we don't need to launch a nova instance when we want to scale, that's the reason the cluster part comes into the picture
16:45:31 <diga> tbh: the COE files are already there, we just want to reuse them
16:45:40 <janki> zun is not fully ready. last I heard, they were writing a scheduler for spawning containers
16:45:50 <sripriya> janki: okay
16:45:58 <tbh> diga, the cluster can come into the picture inside the bay as a pod kind of thing, or as more nova instances in a single bay
16:46:15 <tbh> diga, at some point of time, we have to launch a new VM again
16:46:22 <diga> tbh: a bay is a collection of nova instances
16:46:23 <janki> instead of waiting for zun and sitting idle, let's start with some POC kind of work and get things into discussion/improvement
16:46:42 <janki> sripriya, tbh ^
16:46:46 <tbh> diga, yes, it is also limited I feel
16:47:13 <diga> tbh: I don't think so, I have written some part of the COE support two releases before
16:47:26 <tbh> diga, same here :)
16:47:34 <diga> tbh: it works well, & tested on coreos & fedora-atomic
16:48:11 <sripriya> tbh: janki: diga: let us introduce magnum as the first container support and later introduce zun as a second option
16:48:24 <janki> sripriya, +1
16:48:37 <tbh> sripriya, janki I feel we should either go with zun or the heat docker plugin (if not deprecated), or tacker's own apis for containers
16:49:08 <diga> sripriya: +1 but I don't know when zun will get stable, I think not for this release at least
16:49:12 <sripriya> tbh: that is also a good point, does heat directly interface with docker?
16:49:23 <diga> yeah
16:49:35 <diga> heat has docker plugin implemented
16:49:43 <janki> I am also emphasising method 3 of the spec - directly pass in a Dockerfile to create a VNF. This would bypass
16:49:43 <janki> tosca-parser and Heat and call the Magnum APIs directly.
16:50:01 <janki> tbh sripriya heat-docker is deprecated/removed
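As a sketch of the Dockerfile-driven path janki mentions (method 3 of the spec), the Docker SDK for Python can build an image straight from a user-supplied Dockerfile; the path and tag below are placeholders, and handing the image over to a Magnum-managed COE is deliberately left out.

    # Hedged sketch: build a container image from a user-supplied Dockerfile,
    # the first step of the "pass a Dockerfile to create a VNF" idea.
    import docker

    client = docker.from_env()                      # talks to the local Docker daemon
    image, build_logs = client.images.build(
        path="/path/to/vnf-package",                # placeholder: dir holding the Dockerfile
        tag="tacker/vnf-sample:latest")             # placeholder image tag
    for line in build_logs:
        print(line.get("stream", "").strip())
    # The image would then be pushed to a registry the COE cluster can pull
    # from; that orchestration step is intentionally omitted here.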
16:50:17 <tbh> sripriya, but I don't think magnum will be sufficient, even in the case of SFC
16:50:25 <sripriya> tbh: my only concern with a direct interface is that we don't want to own too much of this logic in tacker, since that is not the main goal for the orchestrator, and it may cause challenges when we integrate with other VIMs
16:50:43 <diga> janki: I would like you to cover all these points in the spec
16:50:56 <sripriya> tbh: we are still in a nascent stage for this :-)
16:50:57 <janki> diga all the points are covered
16:50:58 <diga> sripriya: +1
16:51:33 <diga> sripriya: anyhow we need a third layer to manage orchestration
16:51:38 <janki> writing APIs in tacker would be like duplicating some portion of zun
16:51:46 <tbh> sripriya, but my concern is that we would have to completely move to a new project if we choose magnum
16:51:53 <sripriya> alright, some good thoughts here, let us continue to iterate on the spec based on these comments
16:52:25 <sripriya> tbh: tacker already has started to interact with multiple projects for many cloud functionalities
16:53:03 <tbh> sripriya, yes, but here I mean picking one project for just one or two cycles is not good I think
16:53:21 <sripriya> tbh: i do not think we will remove one and add a new one
16:53:29 <sripriya> tbh: they can evolve in parallel
16:53:34 <sripriya> tbh: what do you think?
16:54:04 <janki> sripriya, tbh we can have this feature as a techpreview and keep on improving over the coming releases
16:54:11 <tbh> sripriya, I am thinking for magnum, it may happen :)
16:54:17 <sripriya> tbh: please add your thoughts on the spec and we can discuss on gerrit
16:54:34 <tbh> sripriya, sure
16:54:35 <sripriya> we have 2 more specs to touch upon quickly
16:55:16 <sripriya> #topic vmware vim support
16:55:19 <sripriya> marchetti: are you planning to create a spec for supporting vmware as a VIM?
16:56:06 <marchetti> I'm still trying to figure out what is needed.  It may be that VIO is the best option.
16:56:14 <marchetti> VIO == Vmware Integrated OpenStack
16:56:36 <marchetti> If that is the option, then technically speaking it should already be supported
16:56:46 <sripriya> marchetti: if you need some information from the Tacker perspective, someone can team up with you on this spec
16:57:00 <marchetti> sripriya: ok thanks
16:57:00 <sripriya> marchetti: right
16:57:33 <sripriya> marchetti: let us discuss more on this in tacker channel after the meeting
16:57:42 <marchetti> sripriya: ok
16:57:43 <sripriya> #topic pecan framework
16:57:54 <sripriya> diga: do you have a spec created for this?
16:58:06 <diga> yes
16:58:10 <diga> Already pushed
16:58:11 <sripriya> link please?
16:58:25 <diga> 1 min
16:58:54 <diga> https://review.openstack.org/#/c/368511/
16:58:58 <sripriya> please add the cores to the spec and feel free to ping folks on tacker channel
16:59:08 <diga> sripriya: sure
16:59:08 <sripriya> team, please review this spec and provide your thoughts
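For reviewers who have not worked with Pecan, a minimal self-contained controller sketch follows to show the style of REST routing the framework provides; the resource name and payloads are placeholders and are not taken from the spec or from diga's code.

    # Minimal Pecan sketch (placeholder resource, not the actual Tacker API):
    # a RestController serving GET /vnfds and GET /vnfds/<id> style routes.
    import pecan
    from pecan import expose, rest

    class VNFDController(rest.RestController):
        @expose("json")
        def get_all(self):
            # Would normally query the Tacker DB; static data keeps it runnable.
            return {"vnfds": [{"id": "1", "name": "sample-vnfd"}]}

        @expose("json")
        def get_one(self, vnfd_id):
            return {"vnfd": {"id": vnfd_id, "name": "sample-vnfd"}}

    class RootController(object):
        vnfds = VNFDController()

    app = pecan.make_app(RootController())   # WSGI app; serve with any WSGI server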
16:59:27 <sripriya> time is up team
16:59:31 <sripriya> thanks for attending!
16:59:38 <diga> sripriya: about the pecan code, can I create a folder under tacker?
16:59:41 <sripriya> #endmeeting