08:03:10 #startmeeting tacker
08:03:11 Meeting started Tue Nov 27 08:03:10 2018 UTC and is due to finish in 60 minutes. The chair is dkushwaha. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:03:12 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
08:03:15 The meeting name has been set to 'tacker'
08:03:37 hi
08:03:38 #topic Roll Call
08:03:45 hi
08:03:49 o/
08:03:51 who is here for the Tacker weekly meeting?
08:04:02 O/
08:04:05 hello YanXing_an phuoc
08:05:01 ok, let's start
08:05:06 #chair phuoc YanXing_an
08:05:07 Current chairs: YanXing_an dkushwaha phuoc
08:05:51 #topic BerlinSummit
08:06:42 I missed all the core members this time, as no one was at the summit.
08:07:09 hi
08:07:14 hope to see you at the next summit :)
08:07:19 hi joxyuki
08:08:02 me too
08:08:15 wish to be there at the next summit
08:09:09 I was asked a question at the summit regarding Tacker deployment on containers, but I had not tried it, so I was not able to explain it at that time. Has anyone deployed it on containers?
08:09:52 users can use kolla to deploy tacker on containers
08:10:42 phuoc, yeah, I see. Have you tried it?
08:11:05 not yet
08:11:17 ok, I will try it in some days.
08:11:21 but I will try it soon
08:11:35 phuoc, cool
08:11:42 kolla seems to be good for deploying openstack
08:12:50 phuoc, yeah, so many attendees joined the kolla project onboarding & update session this time, and the room was almost packed
08:13:46 ok, moving on.
08:13:51 #topic BPs
08:14:54 seems there are multiple patches on test-addition-refactoring
08:15:04 That is great
08:15:24 YanXing_an, thanks for leading this BP
08:15:59 https://etherpad.openstack.org/p/test-addition-refactoring
08:16:17 you can see the detailed plan and the status there
08:17:10 that looks good to me
08:17:29 great
08:17:37 YanXing_an, nice work
08:18:00 :)
08:18:26 YanXing_an, could you please prioritize the skipped cases? I think it will help to reduce some rework
08:18:43 YanXing_an, as in point 3
08:20:52 dkushwaha, sure, these skipped cases will be reopened during point 2 and will have high priority
08:21:33 YanXing_an, thanks
08:22:32 phuoc, do you have something to talk about?
08:23:11 dkushwaha, I plan to help with force-deleting resources
08:23:24 I will upload some patches soon
08:23:43 phuoc, sounds good.
08:24:20 phouc: Will it delete resources from heat as well, or only from tacker?
08:24:38 phuoc: Sorry to misspell your name
08:24:56 phuoc, I had just submitted an initial draft of the spec,
08:25:43 phuoc, https://review.openstack.org/#/c/602528/
08:25:49 dkushwaha, I will look at it
08:25:51 We are also trying to fix one similar issue: https://review.openstack.org/#/c/618086/
08:26:21 Got one comment from Yan Xing an, will address it soon
08:26:44 phuoc: Please take a look at this patch and give us your feedback
08:26:46 tpatil, I saw your patch
08:27:43 phuoc: We will upload a new PS which will cover interacting with heat to ensure all resources from the VIM are deleted before deleting the VNF
08:28:03 I will add --force to the tacker delete commands first
08:28:43 tpatil, and I will make it compatible with your patch too
08:29:28 phuoc: Ok, great. Thank you
08:31:23 np :)
08:33:20 tpatil, phuoc, IMO force-delete should be only for the case when a normal delete is not able to clean the resources. So even if there is any error from the backend (i.e. heat), it should move forward and clean the entries from tacker
08:35:32 yes, it should cover all cases in which we cannot remove resources
08:36:20 dkushwaha, agree with you, force-delete should hardly ever fail to delete
08:36:45 dkushwaha: Yes, I have understood it, but it's also important to delete the heat resources before actually cleaning the entries from tacker. That's my main point. Otherwise, as an operator, those resources would need to be cleaned up manually
08:37:10 another thing is, we cannot control all the external behavior where the normal workflow failed. So instead of blocking on the backend delete, we can just log an error message and move forward
08:39:18 Some indication to the operator is useful. Maybe an event will do, with info like the stack id or whatever.
08:39:27 yes, we may log the heat and mistral resources to let users delete them manually
08:39:58 phuoc, +1
08:41:42 tpatil, yes, but my point is, we might get stuck in a never-ending loop, so for a cleaner approach, just request the delete; if it fails on the backend, log an error message, and then clean from the tacker side.
08:43:17 moving on
08:43:37 joxyuki, do you have something to talk about?
08:44:05 nothing from me
08:44:07 dkushwaha: sounds good to me
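
A minimal sketch of the force-delete flow agreed on above: request the backend (heat) delete first, log any failure with enough detail for manual cleanup, and then clean Tacker's own entries regardless. The names force_delete_vnf, heat_client, db_api, get_stack_id and delete_vnf are placeholders for illustration, not Tacker's actual internals:

    import logging

    LOG = logging.getLogger(__name__)

    def force_delete_vnf(vnf_id, heat_client, db_api):
        """Best-effort force delete: never block on backend errors."""
        stack_id = db_api.get_stack_id(vnf_id)  # placeholder lookup
        try:
            if stack_id:
                heat_client.stacks.delete(stack_id)
        except Exception:
            # Log enough detail (e.g. the stack id) so the operator can
            # clean up the heat/mistral resources manually if needed.
            LOG.exception("Force delete: failed to delete heat stack %s "
                          "for VNF %s", stack_id, vnf_id)
        # Clean Tacker's own records regardless of the backend result.
        db_api.delete_vnf(vnf_id)
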
08:44:08 Hi
08:44:22 As per the comment given on patch https://review.openstack.org/#/c/612595/9 and the discussion at the summit: the "vdu_autoheal" monitoring policy action implementation
08:44:31 should be as per the ETSI standard HealVnfRequest interface https://www.etsi.org/deliver/etsi_gs/NFV-SOL/001_099/003/02.05.01_60/gs_NFV-SOL003v020501p.pdf
08:44:44 Started working on the same, but I have some queries; a detailed description is at http://paste.openstack.org/show/735705/ and http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000172.html, so I need some feedback and inputs.
08:47:56 bhagyashris, IMO there are actions missing in tosca.datatypes.nfv.VnfHealOperationConfiguration
08:48:05 bhagyashris, I will investigate and reply to your query.
08:48:59 bhagyashris, after the summit I just got back to work yesterday, and unfortunately I could not look into that. I will check and respond.
08:49:14 ok
08:49:17 bhagyashris, what's the difference between VnfHealRequest and the alarm monitor?
08:49:20 Thank you :)
08:49:31 phuoc, additional parameters such as action can be defined as a parameter.
08:50:08 joxyuki, yes, we will have to define them
08:50:22 main oi
08:50:25 sorry
08:51:26 move on
08:52:05 #topic Open Discussion
08:53:25 no updates from my side
08:53:34 my team has a focus time before the end of this year, so I hope we can finish all the UT case refactoring next month, so it would be very kind of you to review all the patches and give feedback, thanks.
08:54:48 sure YanXing_an, and thanks again for being so active on that
08:55:41 Do we have anything else to talk about now? Otherwise we can close this meeting.
08:56:41 ok, thanks to all folks :)
08:56:51 Closing this meeting for now
08:57:05 #endmeeting
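
As a reference for the vdu_autoheal discussion above, a hedged illustration of what a SOL003-style HealVnfRequest payload might look like; the exact field names and the contents of additionalParams should be confirmed against the ETSI document linked above and the Tacker spec (the vdu_name parameter here is purely an assumption):

    # Illustrative only: a possible body for POST .../vnf_instances/{vnfInstanceId}/heal
    heal_vnf_request = {
        "cause": "vdu_autoheal triggered by a monitoring policy action",
        "additionalParams": {
            # Which VDU(s) to heal; this key is not standardised and is
            # assumed here for illustration.
            "vdu_name": "VDU1",
        },
    }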