15:00:29 <rhochmuth> #startmeeting monasca
15:00:33 <openstack> Meeting started Wed Jan 20 15:00:29 2016 UTC and is due to finish in 60 minutes.  The chair is rhochmuth. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:35 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:37 <openstack> The meeting name has been set to 'monasca'
15:00:54 <bklei> o\
15:00:55 <rhochmuth> o/
15:00:59 <bklei> o/
15:01:00 <tsv> o/
15:01:00 <fabiog> o/
15:01:01 <ddieterly> o/
15:01:02 <witek> hi
15:01:03 <ho_away> o/
15:01:07 <shinya_kwbt> o/
15:01:12 <rhochmuth> Agenda for Wednesday January 20, 2016 (15:00 UTC)
15:01:13 <rhochmuth> 1. monasca-log-api[TSV]:
15:01:13 <rhochmuth> 1.1 Discuss adding batching support (v2.0/log/multiple)
15:01:13 <rhochmuth> 1.2 Discuss moving Dimensions to body instead of headers (similar to monasca-api)
15:01:14 <rhochmuth> 2. Translations for monasca-ui (Zanata)
15:01:14 <rhochmuth> 3. Tag/publish latest monasca-agent to pypi?
15:01:15 <rhochmuth> X. Question for Anomaly & Prediction Engine [ho_away]
15:01:23 <rhochmuth> Hello everyone
15:01:32 <rhochmuth> Light agenda today
15:01:35 <bklei> good morning
15:01:42 <rhochmuth> please add items at https://etherpad.openstack.org/p/monasca-team-meeting-agenda
15:02:01 <rhochmuth> First thing I want to cover is that there are a lot of reviews
15:02:05 <rhochmuth> in progress
15:02:16 <rhochmuth> I've been working my way though them
15:02:20 <rhochmuth> sorry about the delay
15:02:27 <rhochmuth> i could use help
15:02:41 <rhochmuth> don't ever go on vacation
15:02:51 <rhochmuth> it is painful returning
15:03:14 <rhochmuth> anyway, things are starting to get back to where they were prior to the holidays in terms of outstanding reviews
15:03:23 <bklei> i'll try to help if you want, add me
15:03:29 <rhochmuth> thanks
15:03:34 <tsv> rhochmuth, I will continue with logging review
15:03:54 <witek> rhochmuth: thanks for taking a look at pull requests
15:04:12 <rhochmuth> witek: yes, i tried, but was unable to get it installed
15:04:19 <witek> i've seen
15:04:31 <witek> i'll take a look and update
15:04:39 <rhochmuth> Thanks
15:04:52 <rhochmuth> So, how about moving onto the first topic
15:05:01 <rhochmuth> #topic monasca-log-api
15:05:04 <rhochmuth> tsv you are up
15:05:18 <rhochmuth> batching support for the api
15:05:33 <tsv> thanks, we would like to work on adding batching support for the log api, anybody already working on this ?
15:05:53 <witek> not yet, but we need it as well
15:06:26 <tsv> my team here could get started with that. witek, you ok with that ?
15:06:34 <witek> we created an item in the wiki some time ago https://wiki.openstack.org/wiki/Monasca/Logging#Request_Headers
15:07:02 <witek> could you create a short blueprint for that
15:07:04 <tsv> i was looking at the monasca-api code and it looks like it pretty much has everything we need to support batching
15:07:09 <tsv> sure
15:07:11 <witek> perhaps we could split the job
15:07:22 <witek> we have to update the agent as well
15:07:39 <tsv> witek, do we need a separate API for this ? yes I guess
15:08:04 <witek> additional resource in log-api, i would think
15:08:29 <rhochmuth> i think one of the central issues was handling text logs
15:08:39 <rhochmuth> how do you know how a newline should be treated
15:09:04 <rhochmuth> the "multiple" endpoint would treat newline characters as delimiters for log messages
15:09:13 <rhochmuth> that is what i recall
15:09:17 <rhochmuth> in the case of json
15:09:19 <tsv> rhochmuth, based on content-type ?
15:09:23 <rhochmuth> a separate single vs multi endpoint is not required
15:09:41 <rhochmuth> correct, the content type determines if it is a json or text log
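
As a concrete illustration of the content-type behaviour discussed above, here is a minimal sketch of posting a single log to the log API; the endpoint path, port, and the X-Dimensions header are assumptions taken from the current wiki page, not the settled contract:

    # Sketch only: URL, port, and header names are assumptions, not the
    # final monasca-log-api contract.
    import json
    import requests

    LOG_API = "http://log-api:5607/v2.0/log/single"
    HEADERS = {"X-Auth-Token": "ADMIN_TOKEN",
               "X-Dimensions": "service:nova,hostname:compute1"}

    # Plain-text log: the whole body is treated as one opaque message.
    requests.post(LOG_API,
                  headers=dict(HEADERS, **{"Content-Type": "text/plain"}),
                  data="2016-01-20 15:05:01 ERROR something went wrong")

    # JSON log: the API can pick known fields out of the body.
    requests.post(LOG_API,
                  headers=dict(HEADERS, **{"Content-Type": "application/json"}),
                  data=json.dumps({"message": "something went wrong",
                                   "level": "ERROR"}))
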
15:10:02 <tsv> rhochmuth, i like that, that would keep it consistent with metrics API, for example
15:10:47 <rhochmuth> so in the metrics api you can supply a single metric in the json body
15:10:57 <rhochmuth> or you can supply multiple metrics as a json array
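
For reference, a minimal sketch of the metrics API behaviour rhochmuth describes: POST /v2.0/metrics accepts either a single metric object or a JSON array of metric objects. The host, port, and metric values below are placeholders:

    # Sketch of the two payload shapes accepted by POST /v2.0/metrics;
    # host, port, and values are placeholders.
    import json
    import time
    import requests

    METRICS_API = "http://monasca-api:8070/v2.0/metrics"
    HEADERS = {"X-Auth-Token": "ADMIN_TOKEN",
               "Content-Type": "application/json"}

    metric = {"name": "cpu.user_perc",
              "dimensions": {"hostname": "compute1"},
              "timestamp": int(time.time() * 1000),
              "value": 42.0}

    # Single metric: the body is one JSON object.
    requests.post(METRICS_API, headers=HEADERS, data=json.dumps(metric))

    # Multiple metrics: the same endpoint takes a JSON array of objects.
    requests.post(METRICS_API, headers=HEADERS,
                  data=json.dumps([metric, metric]))
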
15:11:06 <witek> so is multiple intended to send multiline log entries or multiple log entries?
15:11:09 <rhochmuth> we could have done something similar in the case of json logs
15:11:20 <rhochmuth> then we wouldn't have required a new endpoint
15:11:33 <rhochmuth> however, the problem has been how to handle text logs
15:12:00 <rhochmuth> i thought multiple was to send multiple log lines
15:12:11 <rhochmuth> do i misunderstand that
15:12:19 <rhochmuth> that appears to be the way the python api is written
15:12:27 <witek> one log entry can consist of several lines
15:12:46 <witek> one could also send several log entries in single request
15:12:49 <tsv> so it is actually a single log entry with multiple lines ?
15:13:42 <rhochmuth> i guess i'm confused too
15:13:45 <witek> we have handled multiline log entries with logstash grok pattern
15:14:02 <rhochmuth> the python log api, in on_post, receives a single request body and then publishes to kafka
15:14:11 <rhochmuth> ahhh, i see
15:14:41 <rhochmuth> so, in that case we don't need the multiple endpoint
15:15:01 <fabiog> witek: but this is not the case of analyzing the log to understand relationship among strings?
15:15:02 <rhochmuth> i think i misunderstood the api
15:15:34 <fabiog> witek: what I mean is that in the batch log all the lines will be stored as messages in the queue and you can still correlate them and create single entries in ES
15:15:35 <rhochmuth> so, the "multiple" endpoint would be for handling multiple log files simultaneously
15:16:30 <witek> fabiog: i see, yes, it would be useful to extend api for that
15:17:07 <fabiog> witek: so you have a single api
15:17:31 <fabiog> witek: then, if those lines are correlated is solved when the messages are interpreted and stored to ES
15:17:38 <fabiog> witek: makes sense?
15:18:32 <witek> at the moment multiline entries are correlated by logstash in transformer
15:18:52 <fabiog> yes, using patterns.
15:18:59 <rhochmuth> so for now, based on what i've heard, is there any pressing need to add the "multiple" endpoint?
15:19:01 <fabiog> but are those sent as single or multiple messages?
15:19:10 <fabiog> in the kafka queue?
15:19:22 <witek> agent sends them as single
15:19:27 <rhochmuth> from what i understand, it is sent as a single
15:20:00 <rhochmuth> so the agent sends multiple lines to the api as a text blob
15:20:09 <fabiog> right
15:20:12 <rhochmuth> the api publishes the same message body to kafka as a single message
15:20:27 <witek> rhochmuth: no
15:20:27 <rhochmuth> logstash does the parsing into multiple log messages
15:20:32 <rhochmuth> oops
15:20:34 <rhochmuth> sorry
15:20:36 <fabiog> that is the point rhochmuth
15:20:48 <fabiog> they already treat multi-line as multi-messages
15:20:54 <witek> agent sends line by line
15:21:01 <fabiog> so it is a matter of re-conciliate that
15:21:14 <rhochmuth> agent sending line by line is not going to be performant
15:21:17 <fabiog> so I think the current API can already handle multiple log entries
15:21:19 <witek> transformer uses grok to correlate the lines for single log entry
15:21:45 <rhochmuth> ok, i take back everything i said
15:21:50 <tsv> if batching is supported by /single, is that good enough then?
15:21:59 <rhochmuth> the agent sends a single log line to the api
15:22:04 <fabiog> tsv: I think it could be
15:22:10 <rhochmuth> the api publishes the single message to kafka
15:22:21 <rhochmuth> logstash parses it
15:22:31 <rhochmuth> so, we need to add the multiple endpoint
15:22:35 <rhochmuth> correct?
15:22:41 <witek> yes
15:22:57 <witek> and I also see the need for a 'bulk' endpoint
15:23:13 <fabiog> rhochmuth: no, if logstash can make sense of the multiple messages and understand where one log ends and a new one starts
15:23:18 <witek> for sending more than one log entry in one request
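
To make the batching motivation concrete, a sketch of the difference being discussed; the per-line loop mirrors what the agent does today, while the /v2.0/log/bulk path and its payload are purely hypothetical:

    # Sketch only: 'bulk' does not exist yet; its path and payload shape are
    # hypothetical and just illustrate the reduced per-request overhead.
    import json
    import requests

    HEADERS = {"X-Auth-Token": "ADMIN_TOKEN",
               "Content-Type": "application/json"}
    lines = ["ERROR first entry", "WARN second entry", "INFO third entry"]

    # Today: one HTTP request (and one Kafka message) per log line.
    for line in lines:
        requests.post("http://log-api:5607/v2.0/log/single",
                      headers=HEADERS, data=json.dumps({"message": line}))

    # Proposed: one request carrying many entries, split again server-side.
    requests.post("http://log-api:5607/v2.0/log/bulk",
                  headers=HEADERS,
                  data=json.dumps({"logs": [{"message": l} for l in lines]}))
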
15:24:05 <rhochmuth> so, what was wrong with what i said above
15:24:24 <rhochmuth> can the "single" api handle multiple log messages?
15:24:28 <fabiog> well, if logstash can do that, then you don't need a new api
15:24:29 <rhochmuth> in a single request
15:24:42 <rhochmuth> correct, that was my point
15:24:52 <fabiog> multi-line, multiple logs will translate into several single messages in the queue
15:25:05 <fabiog> then it is up to logstash to reconstruct which messages go with which
15:25:13 <rhochmuth> the difference between single vs multiple is that there is some delimiter between messages
15:25:31 <rhochmuth> right?
15:25:41 <rhochmuth> like a newline character
15:25:45 <tsv> fabiog, how would a multi-line log message for a single entry be differentiated from multiple log entries for plain text ?
15:25:58 <rhochmuth> so, why do any parsing in the log api
15:26:03 <rhochmuth> let logstash handle it all
15:26:12 <fabiog> tsv: well for instance there is no date at the beginning of the second part of the message
15:26:48 <fabiog> rhochmuth: that is what I am trying to understand; if logstash can handle it, we should have 1 API endpoint, if not then we need 2
15:26:52 <tsv> fabiog, we don't have any schema for the plain text logs right ? do we ?
15:27:06 <rhochmuth> fabiog: ok, i agree
15:27:23 <fabiog> tsv: no, but logstash uses a pattern to parse the logs
15:27:33 <witek> tsv: we have only json
15:27:35 <fabiog> so you will need to create yours based on the log format you are ingesting
15:28:42 <tsv> all, why do we need to support plain text then ? could we always expect json payload ?
15:29:31 <tsv> the api builds the envelope anyway, and it would be easier if it always had to handle a json payload
15:30:07 <rhochmuth> i think pure json would make things much simpler too
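
A hypothetical pure-JSON body along the lines tsv and rhochmuth suggest, with dimensions in the body rather than headers (agenda item 1.2) and one object per entry so the API never has to guess where one text log ends and the next begins; this is only a strawman for the design discussion, not an agreed format:

    # Strawman request body for a JSON-only, batched log API.
    # Field names are illustrative, not an agreed schema.
    batched_body = {
        "dimensions": {"hostname": "compute1", "service": "nova"},   # shared
        "logs": [
            {"message": "2016-01-20 15:05:01 ERROR something went wrong",
             "dimensions": {"component": "nova-compute"}},           # per-entry
            {"message": "Traceback (most recent call last):\n  ..."} # multiline
        ],
    }
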
15:30:17 <ddieterly> seems like a separate design session is needed for this topic?
15:30:28 <rhochmuth> thank you moderator
15:30:32 <witek> :)
15:30:37 <tsv> :)
15:30:46 <fabiog> ddieterly: yeah, maybe it would be good as a mid-cycle topic
15:30:48 <ddieterly> you're welcome
15:30:58 <rhochmuth> alright, let's close on this one today
15:31:09 <witek> I would welcome a blueprint on that
15:31:15 <tsv> i can put together a blueprint for this
15:31:16 <rhochmuth> we'll have some email followup discussion and plan on a session
15:31:20 <tsv> sure witek
15:31:21 <rhochmuth> thanks tsv
15:31:34 <rhochmuth> let's cover in mid-cycle
15:31:42 <witek> +1
15:32:03 <rhochmuth> #topic translations for monasca-ui
15:32:13 <tsv> i missed the mid-cycle timelines, when and where ?
15:32:27 <rhochmuth> wed/thurs feb 3rd and 4th
15:32:33 <fabiog> tsv: next wed and thu 7am-12pm PST
15:32:35 <rhochmuth> it will be remote via webex
15:32:44 <rhochmuth> two weeks
15:32:44 <tsv> rhochmuth, faibog, thanks
15:33:02 <bmotz> thanks - how will you circulate webex details?
15:33:54 <rhochmuth> openstack-dev [Monasca]
15:33:59 <bklei> maybe add to https://etherpad.openstack.org/p/monasca-team-meeting-agenda as well?
15:34:11 <fabiog> bmotz: bklei yes
15:34:17 <fabiog> I will add the coordinates there
15:34:18 <rhochmuth> i'll create an etherpad for the agenda
15:34:26 <bklei> perfect
15:34:30 <bmotz> great, thanks
15:34:36 <fabiog> once we have the page with the agenda
15:35:11 <rhochmuth> zanata posted a topic on translations
15:35:14 <witek> OpenStack uses Zanata for translations
15:35:22 <witek> no, it was me :)
15:35:29 <rhochmuth> ohhh, that isn't a person
15:35:33 <rhochmuth> sorry
15:35:33 <witek> https://wiki.openstack.org/wiki/Translations/Infrastructure
15:36:00 <rhochmuth> can't you just learn english
15:36:10 <witek> :)
15:36:28 <witek> yes i should :)-
15:36:29 <rhochmuth> ok, before i get in trouble again, what is zanata
15:36:52 <witek> service to handle translations
15:37:06 <witek> OpenStack has used it since September
15:37:20 <witek> we could use it for monasca-ui
15:37:43 <witek> one has to configure the project in openstack-infra
15:37:59 <witek> and jenkins pulls the translation strings every day
15:38:46 <rhochmuth> it all sounds great to me
15:39:04 <rhochmuth> as i don't have any experience with this yet
15:39:04 <shinya_kwbt> Me too. I want to try translating into Japanese.
15:39:45 <witek> so we will push the config change to gerrit
15:39:54 <rhochmuth> shinya_kwbt: so are you working with witek on this?
15:40:30 <rhochmuth> witek: sounds good!
15:41:23 <rhochmuth> ok, sounds like we are all in agreement this is a good idea
15:41:28 <shinya_kwbt> O.K. I don't have experience with zanata, but I will ask others who translate often.
15:41:36 <rhochmuth> thanks witek and shinya_kwbt
15:42:22 <rhochmuth> #topic Tag/publish latest monasca-agent to pypi?
15:42:36 <rhochmuth> i guess that is another request to apply a tag
15:42:41 <rhochmuth> i'll do that right after this meeting
15:42:44 <bklei> yes, that's us
15:42:45 <rhochmuth> sorry about the delay
15:42:46 <bklei> por favor
15:42:48 <bklei> np
15:42:59 <bklei> wasn't sure if there was a reason not to
15:43:10 <rhochmuth> i'm not aware of any reasons
15:43:32 <rhochmuth> there have been some changes that you'll want to checkout
15:43:32 <bklei> cool
15:43:52 <bklei> for sure, we haven't pulled an agent since October
15:44:16 <rhochmuth> from what i recall the changes that david schroeder made to vm monitoring are probably the most interesting
15:44:38 <rhochmuth> he modified vm.host_status and added vm.ping_check
15:44:53 <bklei> ok, will pull it into lab/test env as soon as you tag/publish
15:45:01 <rhochmuth> ok
15:45:15 <witek> could we tag monasca-log-api as well?
15:45:26 <rhochmuth> sure,
15:45:36 <rhochmuth> i'll tag the api and the agent
15:45:57 <rhochmuth> so, we have around 15 minutes left
15:46:05 <rhochmuth> we could open the floor to any topics
15:46:09 <rhochmuth> at this point
15:46:20 <rhochmuth> sorry, there was a question around anomaly detection
15:46:33 <rhochmuth> is ho_away here
15:46:37 <ho_away> thanks! this is my first time joining this meeting. i'm really interested in the anomaly & prediction engine. i have a question about the current status and future plans.
15:46:59 <rhochmuth> so, about a year ago this was an area that i was investing a lot of time in
15:47:09 <rhochmuth> but, i haven't gotten back to it in a while
15:47:35 <rhochmuth> what would you like to work on
15:47:44 <ho_away> i read your code and i would like to move it ahead. what can i do to help?
15:47:44 <rhochmuth> i think monasca provides an excellent platform for building this
15:48:02 <ho_away> i think so
15:48:18 <tsv> witek, blueprint created: https://blueprints.launchpad.net/monasca/+spec/batching-support-for-log-api
15:48:43 <rhochmuth> i think there are lots of areas to work on with respect to anomaly detection
15:48:47 <rhochmuth> tsv: thanks
15:49:11 <rhochmuth> it would be difficult to get you up to speed on it right now
15:49:23 <rhochmuth> perhaps a topic for another time or email exchanges
15:49:25 <witek> tsv: thanks
15:49:56 <ho_away> rhochmuth: thanks! i will send you email about what i want to do
15:50:03 <rhochmuth> ok, sounds good
15:50:09 <rhochmuth> are there other folks interested in this area
15:50:22 <rhochmuth> wondering if this should be moved to the openstack-dev list
15:50:23 <fabiog> rhochmuth: please sign me in :-)
15:50:26 <tgraichen> in using it :)
15:50:47 <rhochmuth> ho_away: sounds like there are some others interested too
15:51:11 <rhochmuth> i would propose discussing in the openstack-dev [monasca] list
15:51:17 <rhochmuth> unless there is a better alternative
15:51:19 <ho_away> :-)
15:51:22 <witek> rhochmuth: +1
15:51:27 <rhochmuth> i'll need to pay attention to that list better
15:51:57 <rhochmuth> thanks ho_away
15:52:05 <fabiog> rhochmuth: you can send a meeting invite in the list and people interested can join
15:52:16 <rhochmuth> fabiog: yes i can
15:52:37 <tgraichen> is rbak around? any news from grafana2?
15:52:44 <fabiog> ho_away: what timezone are you in?
15:53:08 <ho_away> fabiog: +9
15:53:21 <ho_away> fabiog: i live in japan
15:53:43 <fabiog> ho_away: ok, so probably early morning is good for you
15:54:04 <fabiog> ho_away: early morning US time
15:54:04 <ho_away> fabiog: thanks! really appreciate it
15:54:25 <ho_away> fabiog: ok
15:54:36 <tgraichen> as rbak left just before i asked :) - any news from grafana2?
15:55:40 <bklei> he's coming
15:55:45 <rhochmuth> he's back
15:55:45 <rbak> I'm back
15:56:08 <tgraichen> any news on grafana2?
15:56:18 <rbak> Not much new on grafana.  The keystone integration works in that you can log into grafana2 with keystone creds
15:56:50 <rbak> I'm working on making those creds pass to the datasource so it can use those to authenticate to monasca
15:57:05 <rbak> That should be the last chunk of work
15:57:47 <rhochmuth> thanks rbak
15:57:52 <rhochmuth> is code posted?
15:58:07 <rbak> No, I keep meaning to do that.
15:58:12 <rbak> I'll do that this afternoon
15:58:15 <rhochmuth> thanks
15:58:27 <rhochmuth> please post to openstack-dev [monasca] list
15:58:42 <rhochmuth> sounds like tgraichen would like to get involved too
15:58:49 <tgraichen> cool - i'll have a look at how to maybe make it keystone v3 ready as soon as it's posted somewhere
15:58:58 <rhochmuth> thanks
15:59:03 <tgraichen> and will test it of course
15:59:22 <rhochmuth> so, i have some actions
15:59:38 <rhochmuth> let's try to start using the openstack-dev list for correspondence during the week
15:59:47 <rhochmuth> thanks everyone
15:59:54 <fabiog> thanks, bye
16:00:04 <witek> thank you, cheers
16:00:07 <ho_away> thanks
16:00:24 <shinya_kwbt> bye :)
16:00:36 <rhochmuth> #endmeeting