15:00:44 <gordc> #startmeeting telemetry
15:00:45 <openstack> Meeting started Thu Dec 17 15:00:44 2015 UTC and is due to finish in 60 minutes.  The chair is gordc. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:46 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:49 <openstack> The meeting name has been set to 'telemetry'
15:01:02 <cdent> o/
15:01:15 <gordc> phew. wasn't sure if i had time right
15:01:31 <ildikov> o/
15:01:51 <sileht> o/
15:01:56 <_nadya_> o/
15:02:00 <pradk> o/
15:02:12 <gordc> let's go.
15:02:18 <gordc> #topic recurring: roadmap items (new/old/blockers) https://wiki.openstack.org/wiki/Telemetry/RoadMap
15:02:22 <llu> o/
15:02:26 <ityaptin> o/
15:02:33 <gordc> um... nothing really from me here.
15:02:45 <gordc> i guess we have a few new specs posted
15:02:58 <gordc> https://review.openstack.org/#/c/258283/
15:03:02 <r-mibu> o/
15:03:05 <gordc> ^ that may be controversial
15:03:22 <gordc> i'd probably suggest everyone ask their product teams if that's cool
15:03:30 <gordc> (or if they are using alarms)
15:04:07 <gordc> anyone with spec related work? before we head off for holidays?
15:04:24 <_nadya_> gordc: yep, all customers I know use mongo
15:04:36 <gordc> _nadya_: even for alarms?
15:04:44 <_nadya_> gordc: yep
15:04:56 <gordc> is it because they are sharing a single db?
15:05:15 <_nadya_> I didn't see the usage of separate_db_for_alarms feature
15:05:42 <_nadya_> gordc: yep, I think it's mostly because it's by default
15:06:00 <gordc> oh... i mean it's default for testing.
15:06:15 <gordc> i wouldn't say it's default for real life.
15:06:20 <jd__> o/
15:06:32 <gordc> i really hope people aren't looking at devstack and saying 'that's how we'll deploy'
15:06:49 <_nadya_> gordc: I meant that additional configuration is needed to use mongo for meters but sql for alarms
15:07:12 <gordc> _nadya_: definitely. two dbs are needed.
15:07:18 <idegtiarov> o/
15:07:23 <gordc> but you have two different services too.
15:07:49 <gordc> _nadya_: keep in mind, 'ceilometer alarms' is gone.
15:08:01 <_nadya_> gordc: probably you mean ceilometer alarms :)
15:08:02 <_nadya_> yep
15:08:16 <gordc> _nadya_: maybe check with your product team on how they use aodh
15:08:18 <_nadya_> all the customers I'm talking about use Kilo :)
15:08:26 <gordc> _nadya_: lol got it
15:08:33 <gordc> at least it's not icehouse anymore
15:08:51 <_nadya_> gordc: regarding specs, one question from me
15:09:23 <gordc> sure
15:09:26 <_nadya_> gordc: timeseries dispatcher - is it viable?
15:09:40 <_nadya_> let me find a link
15:10:31 <_nadya_> #link https://review.openstack.org/#/c/216838/
15:10:40 <gordc> _nadya_: i'm not going to block it...
15:10:52 <gordc> although i think if someone works on it. it should be out of tree.
15:11:17 <gordc> i really don't want to learn another db and try to pretend i'm an expert in it
15:11:23 <_nadya_> ok, I see
15:11:47 <gordc> _nadya_: it should be able to be built in its own repo and have the same testing.
15:12:05 <gordc> rohit_: ^ did you start working on it for monasca?
15:12:41 <_nadya_> gordc: probably, the repositories should be under telemetry? especially if we are talking about monasca
15:13:26 <gordc> _nadya_: well they don't necessarily have to be managed by us ( i think some projects would get angry if that did happen )
15:13:42 <rohit_> gordc: yes, working on the diff repo approach for monasca-ceilometer
15:13:42 <gordc> _nadya_: https://wiki.openstack.org/wiki/Telemetry
15:14:13 <gordc> _nadya_: that's how we've been tracking projects associated/extending some parts of aodh/ceilometer/gnocchi
15:14:31 <gordc> rohit_: cool stuff. let us know if you run into anything.
15:14:36 <_nadya_> ah, right, I remember this :)
15:14:47 <gordc> _nadya_: are you working on the timeseries dispatcher or you asking about status?
15:14:51 <rohit_> Gordc:  www.github.com/openstack/monasca-ceilometer
15:15:08 <rohit_> gordc: sure
15:15:38 <_nadya_> gordc: no, I don't work on this. I just heard that some people want influx but not as a part of Gnocchi
15:16:48 * jd__ yawns
15:17:13 <gordc> _nadya_: sure. um. well i don't think we have anyone actually working on it so...
15:17:53 <gordc> _nadya_: whatever people want to work on i guess. just don't expect the resource to magically appear :)
15:18:05 <_nadya_> gordc: probably. I was interested in status from community perspective
15:18:37 <_nadya_> jd__: sorry :)
15:18:52 <gordc> _nadya_: personally i think the current api is way too verbose so it's probably in best interest to not continue shoving different dbs at it.
15:18:57 <jd__> ;)
15:19:19 <gordc> _nadya_: but that's just personal view. i'm sure others have differing comments
15:19:30 <_nadya_> ok, I think we can move on
15:19:49 <gordc> _nadya_: kk. so yeah long story short, it's there, no one is working on it.
15:20:03 <gordc> anyone else before we move on to aodh?
15:20:35 <gordc> #topic aodh
15:21:04 <gordc> anyone have updates here? not related to the discussion above regarding db support in aodh
15:22:19 <gordc> #topic ceilometer
15:22:37 <gordc> floor is yours
15:22:42 <gordc> _nadya_: ^
15:22:47 <_nadya_> cool
15:22:58 <r-mibu> gordc: sorry, one question to you regarding aodh
15:23:12 <gordc> r-mibu: sure
15:23:13 <r-mibu> are you working on aodhclient?
15:23:16 <gordc> #topic aodh
15:23:31 <gordc> r-mibu: oh, yeah. i probably should've mentioned that
15:23:47 <gordc> https://github.com/chungg/python-aodhclient
15:23:53 <gordc> i have this created.
15:24:07 <r-mibu> nice!
15:24:09 <gordc> it basically is a copy of gnocchiclient with all the gnocchi ripped out
15:24:26 <jd__> cool!
15:24:26 <gordc> i'll work on adding it to openstack repo and then we can start adding in the appropriate commands
15:24:57 <llu> hooray :)
15:25:06 <r-mibu> https://review.openstack.org/#/c/258552/
15:25:16 <r-mibu> is this also related to aodhclient?
15:25:28 <gordc> r-mibu: yeah.
15:25:39 <gordc> jd__: you have a patch that is even better?
15:26:15 <gordc> r-mibu: basically i need that because when we bring the aodh tarball into aodhclient for testing, it doesn't pull in test-requirements
15:26:40 <r-mibu> ah, ok
15:27:00 <jd__> yes
15:27:15 <r-mibu> thanks
15:27:26 <gordc> jd__: shall we wait for it?
15:27:28 <llu> gordc: newbie question, why do we need pymysql for a client lib?
15:27:41 <llu> shouldn't it all be about the RESTful API?
15:27:52 <gordc> llu: we have functional tests in our client which actually call the service
15:28:15 <gordc> so the test will actually spin up aodh-api service
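A minimal sketch of the functional-test pattern gordc describes, where the client's tests hit a live aodh-api rather than mocks; the AODH_ENDPOINT variable and the unauthenticated /v2/alarms call are assumptions for illustration, not the actual test harness:

    import os
    import unittest

    import requests


    class TestAlarmsEndpoint(unittest.TestCase):
        def setUp(self):
            # The harness is assumed to have started aodh-api already,
            # which is why aodh's test-requirements (e.g. pymysql) matter.
            self.endpoint = os.environ.get("AODH_ENDPOINT",
                                           "http://127.0.0.1:8042")

        def test_list_alarms(self):
            # Call the live service, not a mock.
            resp = requests.get(self.endpoint + "/v2/alarms")
            self.assertEqual(200, resp.status_code)


    if __name__ == "__main__":
        unittest.main()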
15:28:53 <jd__> gordc: it's online and ready to be approved
15:29:07 <gordc> jd__: oh ok. i'll look later.
15:29:19 <gordc> i guess that gate job does nothing ;)
15:29:23 <llu> then why doesn't aodh itself bring in pymysql?
15:29:47 <gordc> llu: it does, but it's a test-requirement... so it's not 'really' bringing it in
15:30:46 <jd__> my patch fixes that
15:30:49 <jd__> bright smile
15:31:39 <gordc> https://review.openstack.org/#/c/258613
15:33:00 <gordc> all's good?
15:33:21 <cdent> EVERYTHING
15:34:03 <gordc> too optimistic
15:34:06 <gordc> tone it down
15:34:13 <gordc> #topic ceilometer: distributed cache
15:34:22 <gordc> _nadya_: sorry. now is good.
15:34:24 <_nadya_> so, I'd like to talk about improving nova polling
15:34:30 <_nadya_> #link https://review.openstack.org/#/c/209799/
15:35:06 <_nadya_> as I understand, the main show stopper now is multiple notification agents running
15:35:13 <_nadya_> gordc: is this correct?
15:35:28 <_nadya_> liusheng: hi :)
15:36:10 <gordc> _nadya_: sort of... also having a really tight coupling between polling agents and notification agents
15:36:56 <_nadya_> gordc: is this bad?
15:37:30 <gordc> _nadya_: slight preference if we could avoid it.
15:38:25 <gordc> but i guess the biggest problem was multiple notification agents, and the global cache required.
15:39:25 <_nadya_> gordc: ok. so my question is: what options do we have about distr cache?
15:39:56 <gordc> we can connect the pollsters to the cache
15:39:59 <_nadya_> gordc: do I understand correctly that we cannot store only recent metadata?
15:40:20 <gordc> _nadya_: i think we can store the metadata.
15:40:35 <_nadya_> gordc: the latest only?
15:40:36 <gordc> whatever 'metadata' is
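A minimal sketch of the "latest metadata only" cache being discussed, using python-memcached and a per-resource key scheme (both assumptions); last write wins, which is why out-of-order samples are a concern:

    import json

    import memcache

    client = memcache.Client(["127.0.0.1:11211"])


    def cache_metadata(resource_id, metadata):
        # Last write wins: only the most recent metadata survives, so a
        # sample processed late would see newer metadata than the one
        # current at sampling time.
        client.set("meta/%s" % resource_id, json.dumps(metadata))


    def get_metadata(resource_id):
        raw = client.get("meta/%s" % resource_id)
        return json.loads(raw) if raw else None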
15:40:47 <llu> can we first figure out in what situations the nova instance metadata will change without getting notified from the hypervisor?
15:41:06 <gordc> ... what do you want with old metadata?
15:41:17 <gordc> llu: you mean from an event?
15:41:20 <_nadya_> gordc: out-of-order samples
15:42:19 <gordc> _nadya_: if the cache is at notification agent?
15:42:21 <llu> gordc: can we directly listen to the hypervisor callback for a change? and get the latest metadata in the compute agent through the nova api?
15:42:35 <_nadya_> gordc: yep
15:43:11 <gordc> llu: i'm not sure how that works? does it work like the 'changes-since' param works?
15:43:50 <llu> that's why i'm asking if we know any situations in nova in which vm instance metadata changes, but without any hypervisor notification
15:44:03 <gordc> _nadya_: i'm not sure how you'd handle something that has already been processed in your pipeline and sent to collector/db/somewhere else
15:44:17 <_nadya_> llu: as I understand, we are afraid that we may lose notifications
15:44:57 <gordc> llu: i think the proposal was that liusheng doesn't want to make that secondary call to check if we need to grab metadata
15:45:10 <_nadya_> gordc: ok, take a look at my comment for the spec when you have time
15:45:37 <gordc> right now we grab meter value, and then do a secondary call to check if we need to use metadata
15:45:40 <llu> oops, sorry i might be out of date for that spec
15:45:41 <gordc> s/use/get/
15:45:53 <gordc> llu: no worries. so am i.
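A rough sketch of the flow gordc describes: the agent grabs the meter value, then makes a secondary call for metadata; with a shared cache, the secondary call happens only on a miss. All function names here are hypothetical stubs, not ceilometer code:

    def get_meter_value(instance_id):
        # Stand-in for the hypervisor inspector call (hypothetical).
        return 42.0


    def fetch_instance_metadata(instance_id):
        # Stand-in for the secondary nova API call (hypothetical).
        return {"flavor": "m1.small"}


    def poll_instance(instance_id, cache):
        value = get_meter_value(instance_id)
        metadata = cache.get(instance_id)
        if metadata is None:
            # The secondary call is skipped when the cache already has
            # metadata for this instance.
            metadata = fetch_instance_metadata(instance_id)
            cache[instance_id] = metadata
        return value, metadata


    print(poll_instance("uuid-1", {}))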
15:46:09 <gordc> _nadya_: sure, i'll read comment
15:46:17 <_nadya_> that's sad that liusheng is not here :(
15:46:46 <_nadya_> I wanted to do some POC during the christmas holidays
15:47:35 <gordc> i think he's sleeping... or partying... or something.lol
15:47:45 <_nadya_> ok then... So may I ask about distributed cache for transformers? :)
15:47:49 <gordc> _nadya_: i do understand his proposal though (since i work with him)
15:47:58 <gordc> _nadya_: go ahead
15:48:43 <jd__> transformers? robots like in the movies?
15:49:00 <gordc> jd__: save them for future
15:49:41 <_nadya_> gordc: I think I should prepare some test results to prove that it works?
15:50:15 <gordc> _nadya_: sure? i'm not really sure how we prove it works.lol
15:50:45 <gordc> like it doesn't create bottleneck?
15:51:41 <_nadya_> gordc: 200,000 messages in the queue. The first run is 10 notification agents with the current architecture; collect rate metrics. The second run is the same data with the cache.
15:52:35 <gordc> but you'd still need to validate the output?
15:52:57 <gordc> also, we need to validate it works for all the different transformers.
15:53:18 <_nadya_> yep, I think that test data may be just continuously increasing data
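A sketch of how the output could be validated on continuously increasing test data, using a simplified rate-of-change computation rather than ceilometer's actual transformer code:

    def rate_of_change(samples):
        # samples: list of (timestamp_seconds, cumulative_value) pairs
        return [(v1 - v0) / (t1 - t0)
                for (t0, v0), (t1, v1) in zip(samples, samples[1:])]

    # Monotonically increasing input gives a known, constant rate,
    # so the transformed output is easy to check.
    samples = [(0, 0.0), (10, 100.0), (20, 200.0)]
    assert rate_of_change(samples) == [10.0, 10.0]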
15:53:24 <gordc> but yeah i guess go ahead if this is something you're interested in :)
15:53:51 <_nadya_> ok
15:54:06 <gordc> or you could find someone else doing the same... i know the requeue architecture is what people use (not saying i did a lot of research)
15:54:18 <gordc> let's go to llu really quick
15:54:32 <gordc> #topic gnocchi: bad resource id
15:54:37 <gordc> llu: go^
15:54:44 <gordc> 6 min :)
15:54:50 <llu> it's all from https://bugs.launchpad.net/ceilometer/+bug/1518338
15:54:50 <openstack> Launchpad bug 1518338 in Ceilometer "Ceilometer doesn't build the data correclty for Gnocchi when you send hypervisors HW metrics using snmpd and agent-central" [High,In progress] - Assigned to Lianhao Lu (lianhao-lu)
15:54:58 <gordc> http://lists.openstack.org/pipermail/openstack-dev/2015-December/082474.html
15:55:11 <llu> after the email discussion, the things i got from there are:
15:55:32 <llu> 1. add quoting into gnocchiclient to support "/" in resource_id
15:55:43 <gordc> llu: that won't work :)
15:56:00 <llu> why?
15:56:14 <llu> didn't know that
15:56:21 <gordc> llu: we tried. i need to dig up the history, but the server will decode it even if you encode it
15:56:24 <gordc> cdent: ^
15:56:49 <cdent> gordc?
15:57:02 <gordc> cdent: read 5 lines
15:57:10 <gordc> or 6
15:57:11 <cdent> I did
15:57:17 <gordc> it don't work? right?
15:57:31 <gordc> the encoding '/' bit
15:57:51 <cdent> I haven't tried the various tricks in gnocchi, but in general how you solve it is web-server dependent.
15:58:23 <cdent> The lame way to deal with it is to double encode, something like %252F
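The double-encoding trick cdent mentions, sketched in Python: quote the '/' once to get %2F, then quote again so the server's first decode still leaves an encoded slash in the path. The resource id is an invented example:

    from urllib.parse import quote

    resource_id = "hardware/10.0.0.1"   # example id containing a slash
    once = quote(resource_id, safe="")  # 'hardware%2F10.0.0.1'
    twice = quote(once, safe="")        # 'hardware%252F10.0.0.1'
    print(once, twice)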
15:58:41 <gordc> llu 2.? :)
15:58:55 <cdent> but I hate that. Depending on the wsgi container being used you can get at the raw url
15:58:57 <gordc> cdent: ok. and that works in all cases?
15:59:06 <gordc> cdent: got it
15:59:21 <llu> 2. add a new attribute to the generic resource type to store the user-passed original id
15:59:23 <cdent> It's not a problem I've tried to solve the lame way. I've done things like this:
15:59:51 <gordc> 2. that could make sense
15:59:53 <gordc> jd__: ?
15:59:55 <llu> 3. add a new resource type for snmp metrics and nova compute metrics
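A minimal sketch of option 2: keep a URL-safe resource id but carry the user-supplied original id as an extra attribute. The attribute name, the uuid5 derivation, and the direct REST call are assumptions, not the agreed design:

    import uuid

    import requests

    original_id = "hardware/10.0.0.1"
    resource = {
        # deterministic, URL-safe id derived from the original
        "id": str(uuid.uuid5(uuid.NAMESPACE_DNS, original_id)),
        "original_resource_id": original_id,  # hypothetical attribute
    }
    requests.post("http://gnocchi.example.com:8041/v1/resource/generic",
                  json=resource, headers={"X-Auth-Token": "..."})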
16:00:04 <emagana> gordc and cdent we are going to start the next meeting
16:00:06 <cdent> https://github.com/tiddlyweb/tiddlyweb/blob/master/tiddlyweb/web/serve.py#L95
16:00:11 * cdent waves
16:00:27 <gordc> aight. let's carry on at ceilometer
16:00:30 <gordc> #endmeeting