08:02:13 <dkushwaha> #startmeeting tacker
08:02:14 <openstack> Meeting started Tue Aug  6 08:02:13 2019 UTC and is due to finish in 60 minutes.  The chair is dkushwaha. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:02:15 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
08:02:17 <openstack> The meeting name has been set to 'tacker'
08:02:25 <dkushwaha> #topic Roll Call
08:02:36 <dkushwaha> who is here for Tacker weekly meeting ?
08:02:42 <nitinuikey> Hi
08:02:45 <keiko-k> o/
08:02:53 <hyunsikyang> Hi
08:03:00 <takahashi-tsc> Hi
08:03:11 <joxyuki> hi
08:03:43 <dkushwaha> howdy all
08:04:28 <dkushwaha> #chair joxyuki
08:04:29 <openstack> Current chairs: dkushwaha joxyuki
08:04:40 <dkushwaha> #topic BP
08:05:32 <dkushwaha> enable updating VNF parameters
08:05:40 <dkushwaha> #link https://review.opendev.org/#/c/672199/
08:06:05 <dkushwaha> joxyuki, I don't have any comments on that
08:06:26 <joxyuki> dkushwaha, thanks for your review
08:06:34 <dkushwaha> joxyuki, could you please update it, so we will merge it
08:06:52 <joxyuki> ok. will do it soon
08:07:58 <dkushwaha> joxyuki, I have one thing, like how to update an image, I mean about software update things, which we need to handle, but I do not have a comment on it for now
08:09:27 <joxyuki> dkushwaha, it depends on heat. As for the image, I think the target instance is re-created.
08:10:25 <joxyuki> so the user has to be careful about what he is going to do.
08:11:23 <dkushwaha> joxyuki, yes, but there are cases like: 1: how to handle already attached block storages. 2: what if the user wants to just apply some patch to its OS.
08:12:55 <dkushwaha> joxyuki, so for now I suggest supporting updating parameters, and later we can work on such other cases
08:13:13 <dkushwaha> thoughts..
08:14:02 <joxyuki> dkushwaha, what do you mean by other cases?
08:14:51 <joxyuki> Are they cases 1 and 2 that you just mentioned above?
08:15:03 <dkushwaha> joxyuki, not sure about all cases, but among them case 2 as I mentioned in the above comment
08:15:57 <joxyuki> dkushwaha, got it.
08:17:22 <joxyuki> As for case 2, I think tacker needs to issue commands, such as apt/yum/patch, in the instance.
08:18:26 <joxyuki> because heat doesn't support such a use case, maybe.
08:18:42 <dkushwaha> joxyuki, I see.
08:20:54 <dkushwaha> joxyuki, ok, so please update the spec, I will give my +2, and if there are no further comments by others, we will merge
08:21:42 <joxyuki> dkushwaha, yes
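[Editor's note] The Heat behavior joxyuki describes, where changing the image can re-create the target instance during a stack update, is controlled per-server in a HOT template. A minimal illustrative fragment (the VDU name, image, and flavor are placeholders, not taken from the spec under review):

```yaml
resources:
  VDU1:
    type: OS::Nova::Server
    properties:
      name: VDU1
      image: cirros-0.4.0
      flavor: m1.tiny
      # How Heat applies an image change on stack-update:
      # REBUILD rebuilds the server in place; REPLACE deletes and
      # re-creates it, which is when attached volumes and addresses
      # become a concern, as discussed above.
      image_update_policy: REBUILD
```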
08:21:46 <tpatil> dkushwaha: I want to discuss about VNF packages support for VNF onboarding specs
08:22:05 <dkushwaha> tpatil, sure
08:22:08 <tpatil> specs: https://review.opendev.org/#/c/582930
08:22:37 <tpatil> We are planning to add new RPC API in tacker-conductor for processing vnf packages
08:23:12 <tpatil> so I would like to ask whether the tacker-conductor service is installed on the controller node or on a separate node in the production env.
08:24:03 <dkushwaha> tpatil, it's installed on the controller
08:24:09 <tpatil> Generally for HA, tacker.service will be installed on multiple controller nodes (2 or 3)
08:24:29 <tpatil> I have seen one patch where monitoring is moved to tacker.conductor
08:25:07 <dkushwaha> tpatil, which patch? you mean mistral-monitoring patch?
08:25:22 <tpatil> that patch is not yet merged, but monitoring the same VNF from 2 or 3 controller nodes would be problematic
08:25:25 <tpatil> yes
08:27:34 <dkushwaha> tpatil, I need to re-check that patch, but as I remember, the conductor is to communicate with the database, not for monitoring
08:28:03 <tpatil> in our specs, we want to process vnf packages in tacker conductor, for that, we need to extract the csar zip in a folder which will be made configurable.
08:28:25 <tpatil> once the csar zip is extracted, we want to keep the files as is until the vnf package is deleted
08:29:35 <tpatil> now if tacker-conductor is running on multiple nodes for HA, we will need to clean up the extracted data from the folder on all nodes when the vnf package is deleted from tacker-conductor
08:31:05 <dkushwaha> tpatil, just trying to understand, why new API on conductor?
08:31:09 <tpatil> for that we will need to introduce periodic tasks in the conductor for cleanup of deleted VNF packages
08:32:20 <tpatil> we want to add processing of vnf package code in conductor
08:32:31 <tpatil> as it would be a lengthy task
08:32:44 <dkushwaha> make sense
08:32:53 <tpatil> and also in the conductor manager, we can introduce the periodic task for cleanup
08:34:29 <joxyuki> tpatil, why is it periodic? when VNF package delete is called, tacker will delete it.
08:35:33 <tpatil> but if you run multiple tacker.conductor service, the request will be processed by only one service
08:35:58 <joxyuki> understand
08:36:02 <tpatil> in that case, some of the extracted csar data from one of the controller node won't be deleted
08:38:20 <dkushwaha> tpatil, seems, we missed this case in spec.
08:38:35 <dkushwaha> I need to re-look into it
08:38:35 <nitinuikey> @tpatil so you mean the periodic task will clean up vnf data from all the tacker conductor nodes?
08:38:46 <tpatil> yes, I will update the specs as now I'm clear that tacker.conductor will be installed on the controller node
08:39:02 <hyunsikyang> IMO, if you want to change the conductor architecture, it is another issue.
08:39:44 <hyunsikyang> dkushwaha, does tacker now support multiple conductors and services?
08:39:49 <tpatil> nitinuikey: it will be deleted from one of the controller nodes when the user deletes the vnf package, and from the other controller nodes, if any data is there, it will be cleaned up by the periodic tasks
08:40:21 <nitinuikey> tpatil understood
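[Editor's note] The cleanup flow tpatil describes, a periodic task on each conductor node removing extracted CSAR directories whose VNF package was deleted via another node, could be sketched as follows. This is illustrative, not actual Tacker code; the base path and function names are hypothetical, and `live_package_ids` stands in for a DB query of non-deleted packages:

```python
import os
import shutil

# Hypothetical folder where each tacker-conductor node extracts CSAR
# zips, one sub-directory per VNF package id (name is illustrative).
VNF_PACKAGE_CSAR_PATH = "/var/lib/tacker/vnfpackages"


def cleanup_orphaned_csar_dirs(base_path, live_package_ids):
    """Remove extracted CSAR directories whose package no longer exists.

    A periodic task on every conductor node would call this, so nodes
    that did not process the delete RPC still reclaim disk space.
    Returns the sorted list of removed directory names.
    """
    removed = []
    if not os.path.isdir(base_path):
        return removed
    for entry in os.listdir(base_path):
        path = os.path.join(base_path, entry)
        if os.path.isdir(path) and entry not in live_package_ids:
            shutil.rmtree(path)
            removed.append(entry)
    return sorted(removed)
```

The delete-request path still removes the directory immediately on the node that handles the RPC; the periodic sweep only catches the leftovers on the other HA nodes.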
08:43:17 <dkushwaha> hyunsikyang,  some actions cannot access tacker database directly
08:43:36 <dkushwaha> hyunsikyang, so conductor server was introduced to do database access for those actions
08:44:16 <dkushwaha> hyunsikyang, but yes, it looks like an issue to have multiple conductors
08:45:05 <hyunsikyang> dkushwaha, yes. I think so. thanks
08:46:19 <shubham_potale> FYI tpatil lost internet connection
08:46:28 <dkushwaha> tpatil, please update spec, i will check again
08:46:34 <dkushwaha> oh
08:46:59 <nitinuikey> dkushwaha we will inform you if he is not able to reconnect
08:47:04 <shubham_potale> dkushwaha: tpatil here, sure i will update the specs
08:47:14 <dkushwaha> hyunsikyang, could you please help to review https://review.opendev.org/#/c/582930
08:47:31 <dkushwaha> tpatil, thanks
08:49:24 <dkushwaha> moving next..
08:49:43 <dkushwaha> Prometheus plugin support
08:49:54 <dkushwaha> #link https://review.opendev.org/#/c/540416/
08:50:39 <dkushwaha> jaewook_oh_, any update from your side?
08:50:58 <jaewook_oh_> Umm I updated the bp and I checked your comments
08:52:34 <dkushwaha> jaewook_oh_, I just commented some nits.
08:54:04 <jaewook_oh_> Yes, and I've updated the bp from patch set 29 to patch set 30. Some new comments from the reviewers would be appreciated.
08:54:43 <dkushwaha> Folks, as we have to freeze specs soon, please help to review specs on priority.
08:54:54 <dkushwaha> jaewook_oh_, ok
08:55:23 <jaewook_oh_> And as you said, it is not for container-based vnf only, so I've changed the commit title, but that caused some errors.
08:56:51 <jaewook_oh_> That's why I couldn't change it... and I think creating a new bp would be nice in this case :(
08:56:53 <dkushwaha> moving next..
08:57:14 <dkushwaha> #topic Open Discussion
08:58:15 <dkushwaha> tpatil, as in the last meeting's discussion about cp-auto-heal
08:58:30 <dkushwaha> https://github.com/openstack/tacker/blame/master/tacker/vnfm/policy_actions/vdu_autoheal/vdu_autoheal.py#L51
08:59:02 <dkushwaha> tpatil, it does not heal CP values, only its name.
08:59:57 <dkushwaha> so once a vnf (i.e. VDU) heals, it loses its IP, and a new one is created
09:00:22 <tpatil> yes, but if the MAC address is there, it would assign the same IP address
09:00:29 <dkushwaha> so every time we come up with a new IP
09:01:49 <dkushwaha> oh, time up folks
09:01:54 <dkushwaha> Closing this meeting
09:02:00 <takahashi-tsc> FYI, We checked how it works, and IP address is not changed.
09:02:01 <dkushwaha> thanks all for joining
09:02:04 <tpatil> I don't recollect everything at this point, will update later
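[Editor's note] tpatil's point, that a preserved MAC/IP lets the re-created VDU come back with the same address after vdu_autoheal, corresponds to defining the CP as an explicit Neutron port with pinned addressing in the HOT template. An illustrative fragment; the resource names, network, and addresses are placeholders:

```yaml
resources:
  CP1:
    type: OS::Neutron::Port
    properties:
      network: net0
      # Pinning the IP (and optionally the MAC) means the VDU gets
      # the same address back when it is re-created during a heal.
      fixed_ips:
        - ip_address: 10.10.0.5
      mac_address: "fa:16:3e:11:22:33"

  VDU1:
    type: OS::Nova::Server
    properties:
      image: cirros-0.4.0
      flavor: m1.tiny
      networks:
        - port: { get_resource: CP1 }
```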
09:02:45 <dkushwaha> takahashi-tsc, tpatil we can continue on tacker channel for further discussion
09:02:49 <dkushwaha> #endmeeting
09:03:04 <tpatil> sure