15:00:02 #startmeeting monasca
15:00:02 Meeting started Wed Feb 17 15:00:02 2016 UTC and is due to finish in 60 minutes. The chair is rhochmuth. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:05 The meeting name has been set to 'monasca'
15:00:07 o/
15:00:08 o/
15:00:10 Hello
15:00:24 hi kamil
15:00:24 o/
15:00:46 o/
15:01:18 agenda is as follows:
15:01:19 Agenda for Wednesday February 17, 2016 (15:00 UTC)
15:01:19 1. Grafana v2 update
15:01:19 2. Agent thread-pool not restarting.
15:01:27 link at, https://etherpad.openstack.org/p/monasca-team-meeting-agenda
15:01:49 we should also probably look at outstanding reviews
15:02:02 hello
15:02:10 Hello
15:02:10 hi ho_away
15:02:16 hi pradipm
15:02:34 please feel free to update the agenda at the link
15:02:47 #topic grafana v2
15:03:02 Just a short update on this today
15:03:31 Grafana has some concerns about having another auth method, so the patches are on hold for the moment.
15:03:49 But they've offered to work with us to find a way to offer the integration.
15:04:13 I have a call with them later this week.
15:04:24 sounds good
15:04:25 So this will happen, it just might take a little longer
15:04:39 That's all I have for the moment.
15:05:23 ok, thanks, hopefully i'll get to trying this out soon
15:05:30 short question: what would be the other auth method? OAuth?
15:06:12 Possibly? Or maybe working out some sort of pluggable auth system so keystone doesn't need to be part of the main repo.
15:06:17 I'll know more next week
15:06:25 okay. Thx
15:06:35 let me know if you want me involved
15:06:41 will do
15:06:51 i had a discussion with raj
15:07:03 and they are doing some really cool things with grafana
15:07:37 so, next topic then
15:07:39 ?
15:07:45 sure
15:07:53 #topic agent thread-pool not restarting
15:07:59 This one is also mine
15:08:02 that doesn't sound good
15:08:32 Basically in the collector monasca-agent uses a custom thread pool rather than a library.
15:08:49 But the terminate function in the thread pool doesn't actually terminate threads
15:09:15 So if a thread gets stuck it does a restart, which is terminate and join, and blocks there forever.
15:09:56 I'm just wondering if anyone who wrote the original thread pool is around to fix it?
15:10:12 Or maybe we need to switch to a thread pool library instead?
15:10:16 hmmm, probably not
15:11:05 so, if the thread is left in this state, what problem are you seeing that we haven't seen after all this time?
15:11:23 We have issues with agents locking up like this every few days, and they require a manual restart
15:11:33 I'm not sure what causes threads to get stuck
15:11:43 hmmm, we haven't seen that
15:11:56 is it a specific plugin that is failing that we aren't running?
15:12:02 possibly?
15:12:24 or is it any thread/plugin
15:12:25 No, I've seen it on http_check
15:12:33 And another that I can't remember
15:12:54 ok, so http_check was written by david schroeder
15:13:11 I think those are probably bugs in their own right, but the thread_pool shouldn't lock up.
15:13:38 so, if you have a library in mind that you want to switch to, that sounds good to me
15:13:53 just need to ensure licensing is ok on it
15:14:06 apache, bsd, …
15:14:10 but not gplv2
15:14:39 if you want to touch base with david, then i can get you in contact with him
15:14:40 I tried switching to the multiprocessing module yesterday. That seems to be what the monasca thread pool is based on.
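For reference, a minimal sketch of the kind of library-based pool being discussed, using concurrent.futures with a per-check result timeout so a hung check cannot block the collector forever. The plugin objects, worker count, and timeout value are illustrative, and the next lines of the meeting note why a straight multiprocessing swap did not fit the agent.

    # Sketch only: run check plugins through concurrent.futures instead of
    # the agent's custom thread pool. A hung thread still cannot be killed
    # from Python; the timeout only keeps the collector loop from blocking
    # on it indefinitely.
    from concurrent.futures import ThreadPoolExecutor, TimeoutError

    COLLECTION_TIMEOUT = 30  # seconds, illustrative value
    pool = ThreadPoolExecutor(max_workers=4)

    def run_checks(plugins):
        # plugins is assumed to be a list of objects exposing run()
        futures = {pool.submit(plugin.run): plugin for plugin in plugins}
        for future, plugin in futures.items():
            try:
                future.result(timeout=COLLECTION_TIMEOUT)
            except TimeoutError:
                # The check is stuck; report it and move on rather than
                # blocking in a terminate/join that may never return.
                print("check %r timed out" % plugin)
            except Exception as exc:
                print("check %r failed: %s" % (plugin, exc))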
15:14:54 But there are architecture reasons that doesn't work
15:15:05 I'll have to look around for other options
15:15:31 we haven't seen any problems with our http_check
15:15:41 that we are performing in helion, btw
15:16:34 ok, i guess next topic
15:16:48 #topic Mitaka Release planning (ansible-installer, devstack, etc.)
15:16:57 okay that's mine
15:17:06 hi kamil
15:17:21 Hi. OpenStack uses devstack to install a test/dev environment. I know that there already exists an integration for monasca. Do we plan to contribute this to devstack?
15:17:37 it is already done
15:17:45 ah okay... haven't seen it yet
15:17:58 See, https://github.com/openstack/monasca-api/tree/master/devstack
15:18:09 and will the monasca-vagrant (ansible) also be a part of the mitaka release?
15:18:31 There is also a Vagrantfile in that directory that you can use to build in one step
15:18:54 monasca-vagrant is another development environment that predates devstack
15:19:09 we are still using monasca-vagrant
15:20:00 but, as it is only a dev environment, that repo isn't a part of the official release
15:20:27 okay... but the devstack integration is still in the monasca-api repo. Do we want to contribute it to the devstack repo?
15:20:52 i mean to this one: https://git.openstack.org/openstack-dev/devstack
15:20:56 there are only specific "core" projects that are part of the devstack repo
15:21:06 all the new projects are done as plugins
15:21:24 so, there are projects like nova and swift that are part of devstack
15:21:41 but newer projects, like murano and manila, are not
15:22:18 okay. Thx. So that means we will need an installation script for the log functionality
15:22:24 i forget the actual specifics of whether a project is in or not
15:22:41 i think we need to build a devstack plugin for the log api
15:22:58 one was already in progress or done
15:23:08 Good. Thanks for the information. That's all from my side
15:23:22 actually, i might be confused
15:23:28 there are tempest tests for the log api
15:23:41 but i don't see a devstack plugin
15:24:07 i think it would be developed similar to the current monasca plugin
15:24:07 yes. I also haven't seen a plugin for the log-api
15:24:27 i would like to move off of the monasca-vagrant environment
15:24:31 at some point
15:24:41 it is time consuming managing two environments
15:25:12 there are several advantages of the monasca-vagrant env though
15:25:51 not sure how to resolve that
15:26:04 monasca-vagrant is a very good dev environment for testing purposes
15:26:32 so, i should point out that the monasca devstack plugin is used by the openstack ci system
15:27:16 every code review goes through openstack ci, which builds a devstack vm with monasca and then runs all the tempest tests against it
15:27:40 currently, those tests are run against mysql and influxdb
15:28:04 if you wanted to test hibernate/postgres, you could potentially add that as a configuration
15:28:50 we test both the python and java apis
15:28:59 but not all variations of databases
15:29:05 just mysql and influxdb
15:29:17 the gates only test the api, right?
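For anyone who wants to try the devstack integration mentioned above, a minimal local.conf fragment would look roughly like the following; the enable_plugin line is the same form quoted later in the meeting, and any Monasca-specific variables or branch pinning are omitted here.

    # Minimal sketch of a devstack local.conf enabling the Monasca plugin
    # that lives in the monasca-api repo.
    [[local|localrc]]
    enable_plugin monasca-api git://git.openstack.org/openstack/monasca-api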
15:29:44 i think yes
15:29:49 the gates test the api
15:29:54 so, we don't have gate jobs on the other monasca components
15:30:24 i think it would be great if somebody created a devstack plugin for each of the other monasca components
15:30:43 i.e., persister, threshold, notification
15:31:06 then we could get tempest tests and gate jobs on those components
15:31:30 i'm not sure what you are saying
15:31:42 usually the tempest tests test the api
15:32:01 it would be nice if each component had tempest tests
15:32:15 the persister doesn't have an api
15:32:15 so, we could have gate jobs for each component
15:32:28 does tempest only test apis?
15:32:36 the tempest tests are usually for testing apis
15:32:42 they are integration or system tests
15:32:53 ok, then other tests for the other components running in a gate job
15:33:20 so, if you check in a review for the persister, then the gate tests are run
15:33:45 there are gates on the persister, which include tempest
15:34:31 the monasca devstack plugin is used by the monasca-persister
15:34:32 does the persister have a devstack plugin?
15:34:42 oh, it is shared
15:34:51 yes, it is shared
15:35:15 it would be nice if it had its own devstack plugin to isolate testing to just the persister
15:35:19 the devstack plugin in the monasca-api is used by multiple projects
15:36:24 interesting
15:36:33 When we mention enable_plugin monasca-api git://git.openstack.org/openstack/monasca-api - all the monasca mini services are getting installed (persister, monasca-thresh, etc.), right?
15:36:50 correct
15:38:51 #topic other
15:39:04 ho_away: are you still looking at anomaly detection?
15:39:41 rhochmuth: yep, but mainly discussion with my colleague to explain the current implementation
15:40:11 i saw an interesting article on fujitsu applying deep learning to classification of time series data yesterday
15:40:40 i've also started spinning up again on nonparametric statistics
15:40:45 rhochmuth: really, can you point out the link?
15:41:40 http://phys.org/news/2016-02-fujitsu-deep-technology-time-series-high.html
15:42:43 rhochmuth: thanks! i will look up the author :-)
15:42:55 I am seeking an expert opinion for one use case .. if possible in monasca ..........
15:42:57 please note that joe keen made some improvements to the kafka consumer in monasca-common
15:43:32 this bumps the persister performance to probably somewhere in the 10-20 K messages per sec range
15:44:01 java or python or both?
15:44:02 thanks ho_away
15:44:09 python
15:44:30 nice
15:44:45 java was always fast
15:45:34 these improvements address the python persister, which we hadn't done as much analysis on
15:45:53 so, does anyone want to talk about the log-api performance?
15:47:41 ok, i've left my analysis and latest performance numbers
15:47:49 Unfortunately, i am not aware of the current status of the log-api. I think witek wrote you an email
15:49:48 ok, sounds like we are wrapping up
15:49:54 I am seeking an expert opinion for one use case .. if I can discuss it once here.
15:50:00 sure
15:50:10 the floor is yours
15:50:29 Is there a way to run some simple rules on the alarm-history output?
15:50:51 like what?
15:51:14 for example: over a time of say 5 hours, if IOPS have been breached in some pattern over 80%
15:51:53 i don't believe that we have any filters/options on alarm-history at this time
15:51:54 there is a topic in kafka that any consumer can consume from
15:52:20 if you want to consume alarm state transition events that would be easy
15:52:39 I see. ... directly from kafka?
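A rough sketch of the "consume alarm state transitions straight from Kafka" option raised here, assuming kafka-python, a broker on localhost:9092, and the alarm-state-transitions topic name used by a default Monasca install; the payload field names should be checked against the message schema wiki page linked later in the meeting.

    # Sketch only: print alarm state transition events from Kafka.
    import json

    from kafka import KafkaConsumer  # kafka-python

    consumer = KafkaConsumer("alarm-state-transitions",
                             bootstrap_servers=["localhost:9092"])
    for message in consumer:
        event = json.loads(message.value.decode("utf-8"))
        # The exact envelope/field names come from the Monasca message
        # schema; "alarm-transitioned" is the assumed wrapper key here.
        transition = event.get("alarm-transitioned", event)
        print(transition.get("alarmId"), transition.get("newState"))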
15:52:50 It's already in the alarm-state-history .. right?
15:53:12 You can also use the API
15:53:29 the entire alarm state history is in the API
15:53:59 you can also "subscribe" to alarms via webhook notifications
15:54:00 that's correct. Is there any component in monasca that runs some form of rule-check anywhere else?
15:54:12 no
15:54:26 Ok....
15:54:47 there has been some interest in various forms of this
15:55:06 for example, there is a group that is interested in alarm clustering
15:55:08 any suggestion if I want to define some simple rules (filters as in alarm-history .. or similar) ... what should be the best approach?
15:55:22 the Vitrage project is also interested in alarm correlation
15:55:42 get the alarms from the api and do something externally
15:55:44 Vitrage is in Monasca? I am not aware of it .. sorry for my ignorance.
15:55:57 Vitrage is another openstack project
15:56:05 actually, get the alarm-history...
15:56:17 https://wiki.openstack.org/wiki/Vitrage
15:56:19 makes sense ... ddieterly
15:56:46 pradipm: there is an option in the cli to get the raw json, that would help with external tools
15:56:51 Thanks rhochmuth
15:56:53 i think understanding your specific use case would be good
15:57:14 sure ...
15:57:21 If you want to plug in to kafka, the message schema is at https://wiki.openstack.org/wiki/Monasca/Message_Schema
15:57:35 it depends on what you want to do though.
15:57:44 so basically I want to do some conformance on storage objects underneath cinder/manila
15:57:54 you could filter alarm state transition events that are created by the threshold engine
15:58:15 so, i'm guessing that would best be handled via the api
15:58:33 i should also point out that fabiog, who isn't here today, is adding support for Congress in Monasca
15:58:43 Congress is a policy engine
15:58:51 I see ... yeah .. something like that ...
15:59:04 Please check out https://wiki.openstack.org/wiki/Congress
15:59:14 fabiog would be the contact
15:59:20 can I configure a policy (like simple rules) and run it on top of, say, the alarm-state-history api?
15:59:23 this meeting is coming to an end i just realized
15:59:34 will try to chat to fabiog
15:59:59 thanks rhochmuth. thanks a lot for all the pointers.
16:00:07 bye everyone
16:00:13 thanks rhochmuth and all :-)
16:00:29 #endmeeting
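As a follow-up to the use case discussed at the end, a minimal sketch of the "get the alarm history from the API and apply a simple rule externally" approach; the API URL, token handling, alarm id, field names, and the 80%-over-5-hours rule are all illustrative assumptions to be checked against the v2.0 alarm state history resource.

    # Sketch only: fetch alarm state history from the Monasca API and flag
    # alarms that spent most of the last few hours transitioning to ALARM.
    import datetime

    import requests

    MONASCA_URL = "http://monasca-api:8070"  # assumed endpoint
    TOKEN = "REPLACE_WITH_KEYSTONE_TOKEN"

    def alarm_breach_ratio(alarm_id, hours=5):
        start = (datetime.datetime.utcnow() -
                 datetime.timedelta(hours=hours)).strftime("%Y-%m-%dT%H:%M:%SZ")
        resp = requests.get(
            "%s/v2.0/alarms/%s/state-history" % (MONASCA_URL, alarm_id),
            headers={"X-Auth-Token": TOKEN},
            params={"start_time": start})
        resp.raise_for_status()
        transitions = resp.json().get("elements", [])
        if not transitions:
            return 0.0
        breaches = sum(1 for t in transitions if t.get("new_state") == "ALARM")
        return float(breaches) / len(transitions)

    # Example rule: flag the alarm if more than 80% of its state transitions
    # in the last 5 hours went to ALARM.
    if alarm_breach_ratio("example-iops-alarm-id") > 0.8:
        print("IOPS alarm breached more than 80% of the time in the window")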