16:00:33 #startmeeting tacker
16:00:34 Meeting started Tue Jun 14 16:00:33 2016 UTC and is due to finish in 60 minutes. The chair is sridhar_ram. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:35 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:37 The meeting name has been set to 'tacker'
16:00:43 #topic Roll Call
16:00:55 o/
16:01:02 o/
16:01:11 o/
16:01:30 o/
16:01:43 howdy all! let's give it a min before we start...
16:02:30 Hi all
16:02:33 i know bobh and tbh are out today
16:02:37 tung_doan: hi there!
16:02:42 KanagarajM: are you here?
16:02:50 sridhar_ram: hi
16:03:03 okay, let's start...
16:03:07 #topic Agenda
16:03:14 #link https://wiki.openstack.org/wiki/Meetings/Tacker#Meeting_June_14.2C_2016
16:03:31 anything else to discuss? we might have some time for open topics
16:04:04 alright...
16:04:11 #topic Announcements
16:04:36 We have a bug fix release 0.3.1 out for Mitaka...
16:04:44 tacker 0.3.1 (mitaka) bug fix release
16:04:50 #link http://lists.openstack.org/pipermail/openstack-announce/2016-June/001213.html
16:04:56 tacker-horizon 0.3.1 (mitaka) bug fix release
16:05:00 http://lists.openstack.org/pipermail/openstack-announce/2016-June/001216.html
16:05:06 python-tackerclient 0.3.1 (mitaka) bug fix release
16:05:10 http://lists.openstack.org/pipermail/openstack-announce/2016-June/001223.html
16:05:33 Thanks for all those cherry-picks...
16:06:14 Also, I'd like to thank the reviewers, as we have picked up our review response / merge rate!
16:06:41 Heads up, i'm out all of next week for the OPNFV Summit @ Berlin..
16:06:57 I'll cancel next week's mtg unless someone else wants to host it
16:07:35 moving on..
16:07:47 #topic Monitoring & Scaling specs
16:08:08 https://review.openstack.org/#/c/318577/7/specs/newton/manual-and-auto-scaling.rst
16:08:28 KanagarajM: tung_doan: do you
16:08:32 * sridhar_ram oops
16:08:54 KanagarajM: tung_doan: do you've any specific subtopics / design issues in these specs to discuss now?
16:08:59 sridhar_ram: also, my spec is updated: https://review.openstack.org/#/c/306562/12/specs/newton/alarm-based-monitoring-driver.rst
16:09:02 sridhar_ram: I have got some comments from sripriya and answered them... and today i tried to define the schema for the new types
16:09:24 sridhar_ram: yeah, so i would like to get feedback on https://review.openstack.org/#/c/329528/1/tacker/vm/tosca/lib/tacker_defs.yaml
16:09:49 sridhar_ram: yes, some things related to alarm-url in the monitoring driver
16:10:01 unfortunately we are missing both bobh & tbh who can advise on the tosca lib
16:10:04 sridhar_ram: trying to introduce a scalegroup after referring to the tosca simple profile
16:10:23 sridhar_ram: please have a look at L114
16:10:40 * sridhar_ram is looking
16:10:42 KanagarajM: sridhar_ram: i would like to know if it is beneficial to show scaling stats all the way to the user or are they supposed to get that from heat?
16:11:31 KanagarajM: hang on, let's take one at a time :)
16:11:56 sridhar_ram: yeah, sure, will wait on scaling :)
16:12:26 for the TOSCA changes for both monitoring and scaling, tacker_defs.yaml is the right approach as this is not really coming from the TOSCA NFV profile
16:12:40 basing it off the TOSCA Simple Profile is the right approach
16:13:12 fyi... the TOSCA Simple Profile is here http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/TOSCA-Simple-Profile-YAML-v1.0.html
16:13:50 tung_doan: does this make sense? we shd insert the Monitoring node type in tacker_defs.yaml
16:13:52 KanagarajM: why don't we push this new type to the tosca-parser project?
16:14:20 KanagarajM: oh, sure we could.. that is the preferred approach
16:14:30 sridhar_ram: sure.
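[Editor's note: for readers following along, here is a rough illustration of what a Tacker-specific scaling policy type derived from the TOSCA Simple Profile might look like in tacker_defs.yaml. The type and property names below are hypothetical sketches, not the definitions actually under review in the linked patch.]

```yaml
# Hypothetical sketch only -- the real definitions were still under
# review in tacker_defs.yaml at the time of this meeting.
tosca.policies.tacker.Scaling:
  derived_from: tosca.policies.Scaling
  properties:
    increment:
      type: integer
      description: Number of VDU instances to add or remove per scale step
    min_instances:
      type: integer
      description: Lower bound on the scale group size
    max_instances:
      type: integer
      description: Upper bound on the scale group size
    cooldown:
      type: integer
      default: 120
      description: Seconds to wait between consecutive scaling operations
```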
16:15:03 sridhar_ram: ok. thanks, i will update the spec with the new def in place
16:15:10 if we are blocked for some reason in the tosca-parser project, we can temporarily host it in tacker_defs.yaml
16:15:46 tung_doan: you had a question on alarm-url in the monitoring driver?
16:16:01 sridhar_ram: sure. I will try to catch sahdev
16:16:05 sridhar_ram: right... let's take a look please :)
16:16:20 tung_doan: link please?
16:16:31 sridhar_ram: https://review.openstack.org/#/c/306562/12/specs/newton/alarm-based-monitoring-driver.rst
16:16:39 Line 114
16:16:49 sripriya: I think we could retrieve it from heat whenever the user makes a request, or do you want to store the current state of scaling in the deviceattributes table?
16:17:26 tung_doan: i have doubts in exactly the same spot as well..
16:17:30 KanagarajM: probably we could take this up once we finish the monitoring discussion?
16:18:21 sripriya: Ah, sure. that's right :)
16:18:24 sridhar_ram: does it make sense, sridhar?
16:18:39 KanagarajM: tung_doan: referring to the monitoring policy id and scaling id in the API URI doesn't make sense
16:19:18 these ids, as far as I understand, refer to the policy names in the TOSCA template?
16:19:28 sridhar_ram: they are the names of the policies
16:19:32 sridhar_ram: yeah right
16:19:52 so the URL will look like...
16:19:56 sridhar_ram: but sridhar..
a monitoring policy can have some actions
16:20:18 "v1.0/vnfs/df2323-234234df2-23c23f32-3r4r234/vdu1_cpu_usage_monitoring_policy/
16:20:31 a RESTful API does allow having action names in it
16:20:40 sridhar_ram: sounds good
16:20:52 sridhar_ram: yes, it should be fine IMO
16:21:11 instead of action-id, it could be action-name
16:21:24 okay, this means we need to decompose tosca template node names into addressable attributes
16:21:26 so that the terms don't get confused
16:21:28 KanagarajM: again, that is unique only to the VNF
16:21:28 doable
16:21:52 KanagarajM: i could have another VNF with the same policy name and the url will only differ in the uuid
16:22:05 sripriya: yes,
16:22:41 KanagarajM: i already mentioned "action_name" in my spec
16:22:58 tung_doan: sure.
16:23:03 tung_doan: KanagarajM: we need to pick one name!
16:23:30 sridhar_ram: that's why I came today :)
16:23:46 tung_doan: cool, worth it :)
16:23:56 tung_doan: another question..
16:24:17 KanagarajM: could I refer to metadata in my spec
16:24:33 KanagarajM: in case of auto-scaling
16:24:35 tung_doan: are we going to run an oslo_service/wsgi endpoint to take these webhook callbacks?
16:25:05 sridhar_ram: yes, i believe so
16:25:10 tung_doan: is it going to use separate threads to process these callbacks?
16:25:32 sridhar_ram: +1
16:25:37 okay, then we need this part to be scalable (in the sense of being able to set num_threads / workers)
16:25:44 sridhar_ram: right
16:26:04 sridhar_ram: similar concept to heat :)
16:26:23 KanagarajM: sure, now how secure can this be...?
16:26:57 what if some malicious code calls this webhook? is there a random webhook identifier for each invocation?
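[Editor's note: a minimal sketch of the URL decomposition being discussed, assuming the example path format above (`v1.0/vnfs/<vnf-uuid>/<policy-name>/<action-name>`). This plain-WSGI stand-in is illustrative only, not the oslo_service/wsgi endpoint the spec proposes, and the function name is hypothetical.]

```python
def alarm_webhook_app(environ, start_response):
    """Toy WSGI app decomposing an alarm webhook path into its parts.

    Expected path: /v1.0/vnfs/<vnf-uuid>/<policy-name>/<action-name>
    """
    parts = environ.get('PATH_INFO', '').strip('/').split('/')
    if len(parts) != 5 or parts[0] != 'v1.0' or parts[1] != 'vnfs':
        start_response('404 Not Found', [('Content-Type', 'text/plain')])
        return [b'unknown webhook path']
    vnf_id, policy_name, action_name = parts[2], parts[3], parts[4]
    # A real handler would dispatch this to a worker pool
    # (num_threads / workers configurable, as raised in the meeting).
    body = ('trigger: vnf=%s policy=%s action=%s'
            % (vnf_id, policy_name, action_name)).encode()
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [body]
```

Any WSGI server (e.g. `wsgiref.simple_server`, or a multi-threaded container) could host such an app; the point of the sketch is only that policy and action names from the TOSCA template become addressable URL segments.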
16:27:01 sridhar_ram: that's the right question; heat earlier handled this by means of EC2 signing, and i think keystone is now deprecating it
16:27:37 sridhar_ram: so, we need to go with tacker RBAC, and in the case of signaling we should see how to make ceilometer invoke tacker with the required credentials in place
16:28:25 KanagarajM: i thought other projects generate a specific only time identifier (per webhook)
16:28:50 sridhar_ram: time identifier?
16:28:53 .. and as long as the original call is https we would have some level of protection
16:29:04 oops, i meant one-time identifier
16:29:48 sridhar_ram: no, it gets used every time, in the case of the EC2 signed url in auto-scaling
16:30:02 we can take this offline, but we need some solution to secure the calls coming through this channel (webhook)
16:30:15 sridhar_ram: yes,
16:30:16 KanagarajM: i see
16:30:33 tung_doan: just to wrap up monitoring....
16:30:35 tung_doan: if possible, could you please check with ceilometer
16:30:49 sridhar_ram: KanagarajM: OK
16:31:23 tung_doan: can you please describe your design for the oslo_service/wsgi endpoint, the multi-thread handler for the webhook callback, and how you are planning to have this scale?
16:31:41 tung_doan: .. i mean, in your next patchset
16:32:14 sridhar_ram: Ok.. I will think about them...
16:32:27 IMO, with that and some kind of handle on the callback security, we should wrap this spec up and land it
16:32:49 let's move on to scaling...
16:33:09 KanagarajM: sorry, you got interrupted.. please continue
16:33:39 sridhar_ram: for scaling, i believe we could go with https://review.openstack.org/#/c/329528/1/tacker/vm/tosca/lib/tacker_defs.yaml
16:33:42 sridhar_ram: anw, please review my spec..
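[Editor's note: a hedged sketch of the "random webhook identifier per invocation" idea raised above. This is not how heat's EC2 signing works and not a Tacker design; it just shows one simple direction, embedding an unguessable key in each alarm URL and checking it in constant time. All function names and the in-memory store are hypothetical.]

```python
import hmac
import secrets

# Assumed storage: (vnf, policy, action) -> random webhook token.
_registered = {}

def make_alarm_url(base, vnf_id, policy, action):
    """Generate an alarm URL carrying an unguessable random key."""
    token = secrets.token_urlsafe(32)
    _registered[(vnf_id, policy, action)] = token
    return '%s/v1.0/vnfs/%s/%s/%s?key=%s' % (base, vnf_id, policy,
                                             action, token)

def validate_callback(vnf_id, policy, action, presented_key):
    """Accept a callback only if its key matches, compared in
    constant time to avoid timing side channels."""
    expected = _registered.get((vnf_id, policy, action))
    return expected is not None and hmac.compare_digest(expected,
                                                        presented_key)
```

As noted in the meeting, such a key only has real value if the webhook call itself travels over https; otherwise the identifier can be sniffed in transit.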
thanks
16:34:02 tung_doan: will do, thanks again for join at this late hour
16:34:06 *joining
16:34:43 sridhar_ram: and i will place it in tacker and continue the dev
16:35:14 sridhar_ram: in parallel, i will check with sahdev on how to take it to tosca-parser + heat-translator
16:35:25 sridhar_ram: is that fine?
16:36:17 KanagarajM: fine, we still need to follow up on https://review.openstack.org/#/c/302636/ ?
16:36:51 sridhar_ram: yes, sure
16:37:46 KanagarajM: for the callback handler..
16:38:01 KanagarajM: .. you need to coordinate w/ tung_doan, correct?
16:38:28 sridhar_ram: for the callback, i have asked tung_doan to check with ceilometer
16:38:38 and follow up on it.
16:38:43 okay.. thanks
16:38:47 one last thing..
16:39:13 have you figured out the dependency between your two work items? who is going to go first?
16:39:54 again, you can decide offline.. but it will be good to have a plan..
16:40:00 sridhar_ram: they should go in parallel, and the auto-scaling from the alarm-monitor can go last
16:40:04 .. so that you don't trip each other :)
16:40:23 sridhar_ram: 1. scaling and/or monitoring 2. auto-scaling from the monitor driver
16:40:38 KanagarajM: sounds good :)
16:40:48 tung_doan: KanagarajM: thanks!
16:40:56 sridhar_ram: np :)
16:41:02 sridhar_ram: so i don't see any dependency for manual scaling
16:41:24 okay.. i've a clarification on that, but will take it offline..
16:41:30 KanagarajM: sridhar_ram: should we capture scalign stats in tacker?
16:41:36 *scaling
16:41:40 sripriya: you had some questions, and i answered; kindly let me know your feedback
16:41:56 KanagarajM: what do you mean by scaling stats?
16:41:57 KanagarajM: sure, will respond to that
16:42:29 sridhar_ram: no. of vdus currently scaled and related metrics?
16:42:33 sridhar_ram: sripriya has mentioned that it's better to capture the current state, like the number of VDUs
16:43:01 sridhar_ram: with scaling coming in, we may start off with 2 instances per vdu and then scale out to 3 based on policies
16:43:27 that will be useful
16:43:29 sridhar_ram: the only way to see this is going through heat
16:43:43 apart from the event-logging on every scale event
16:44:12 sridhar_ram: yes...
16:44:38 KanagarajM: we've got to probe heat to get that stat? again, depends on the effort.. we can always do this in a follow-on
16:45:05 sridhar_ram: yeah, that's a better plan.
16:45:26 KanagarajM: sounds good
16:45:34 KanagarajM: i'd suggest we split it into a follow-on RFE if that is going to be a long poll
16:45:49 sridhar_ram: yeah sure.
16:46:12 sridhar_ram: it would be great if you could help merge the spec before your OPNFV summit trip :)
16:46:37 sridhar_ram: i could use that week to impl
16:46:39 yes, we should shoot for that.. :)
16:47:01 sripriya: i would seek your help too :)
16:47:07 sridhar_ram: yeah sure :)
16:47:17 KanagarajM: sure
16:47:21 tung_doan: KanagarajM: let's sync up in the #tacker channel and/or on the ML to keep this moving.. so that we don't wait for this weekly meeting
16:47:39 moving to the next topic...
16:47:52 sridhar_ram: agree
16:47:53 sridhar_ram: sure.
16:48:02 #topic Midcycle Meetup - Virtual vs F2F
16:48:32 based on the doodle poll.. http://doodle.com/poll/2p62zzgevg6h5xkn
16:49:10 .. we unanimously prefer a virtual midcycle meetup
16:49:36 i propose we do a two-day virtual meetup with two different timeslots..
16:50:03 .. one Asia-friendly and another for the US / Europe
16:50:11 thoughts?
16:51:15 +1
16:51:20 anyway, i will create another doodle poll to finalize the timeslots
16:51:28 +1
16:51:31 +1
16:51:55 sridhar_ram: nice
16:52:14 i've started an etherpad as well..
and we should start collecting ideas for topics to discuss
16:52:43 Here it is https://etherpad.openstack.org/p/tacker-newton-midcycle
16:53:22 zeih offered an EU location.. i wish we could go there for a tacker midcycle one day :)
16:53:33 moving on...
16:53:51 #topic Bugs, RFE and Open Discussion
16:54:09 I know we have many bugs and RFEs in flight...
16:54:36 in fact, lots of small but significant things are coming in as RFEs..
16:54:53 i'd like to thank gongysh for all those oslo refactorings!
16:55:10 anyone have a specific bug or RFE to discuss?
16:55:35 sridhar_ram: i am trying to set the retry count to 3 in the heat driver
16:55:51 sridhar_ram: as 60 is a big number https://review.openstack.org/329527
16:55:56 KanagarajM: stack retry
16:55:58 ?
16:56:09 sridhar_ram: sripriya yes
16:57:01 sridhar_ram: yes, it was 60 earlier
16:57:25 KanagarajM: well, what can i say.. we don't give up trying that easily ;-)
16:57:36 sridhar_ram: :)
16:57:59 alright.. please keep the bug fixes and RFEs coming..
16:58:00 KanagarajM: 300 was the timeout, keeping in mind vms which take nearly 2-3 minutes to bring up an instance
16:58:03 sridhar_ram: in heat also we use 3 as the default retry for a resource
16:58:24 KanagarajM: sounds good & makes sense!
16:58:37 KanagarajM: tacker was falsely setting the VNF to ERROR state even though the actual instance did come up after about 2 minutes on VMs with starved specs
16:59:09 KanagarajM: yeah, that's bad
16:59:23 we are out of time..
16:59:33 thanks everyone who joined..
16:59:42 again, no meeting next week..
16:59:52 will meet the week after next...
17:00:00 thanks everyone!
17:00:05 have fun in Berlin!
17:00:09 bye
17:00:14 bye
17:00:15 sridhar_ram: will come to tacker
17:00:15 s3wong: thanks!
17:00:18 #endmeeting
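[Editor's note: the stack-retry change discussed near the end of the meeting (dropping the retry count from 60 to 3, with the 300-second timeout and 2-3 minute VM boot times in mind) can be sketched roughly as below. Function and parameter names here are assumptions for illustration, not Tacker's actual heat driver code or configuration options.]

```python
import time

def wait_for_stack(get_stack_status, stack_retries=3, stack_retry_wait=100):
    """Poll a heat stack until it leaves the in-progress state.

    With 3 retries x 100s wait, a VM that needs 2-3 minutes to boot
    can still reach CREATE_COMPLETE before the VNF is marked ERROR --
    the false-ERROR failure mode described in the meeting.
    """
    for attempt in range(stack_retries):
        status = get_stack_status()
        if status == 'CREATE_COMPLETE':
            return 'ACTIVE'
        if status == 'CREATE_FAILED':
            return 'ERROR'
        time.sleep(stack_retry_wait)
    # Retries exhausted while the stack was still in progress.
    return 'ERROR'
```

The design point raised in the meeting is that the retry count and wait interval multiply into the effective timeout, so a smaller count (3, matching heat's own default resource retries) keeps the budget reasonable without marking slow-but-healthy instances as failed.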