15:00:40 <gordc> #startmeeting telemetry
15:00:41 <openstack> Meeting started Thu Nov 19 15:00:40 2015 UTC and is due to finish in 60 minutes.  The chair is gordc. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:42 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:44 <openstack> The meeting name has been set to 'telemetry'
15:00:57 <gordc> first telemetry meeting.
15:01:24 <Guest91492> o/
15:01:31 <liusheng> o/
15:01:50 <r-mibu> o/
15:01:55 <ildikov> o/
15:01:57 <llu> o/
15:02:01 <ityaptin> o/
15:02:30 <pradk> o/
15:02:35 <sileht> o/
15:02:35 <nadya_> o/
15:02:44 <gordc> nadya_: good to know who you are
15:02:48 <gordc> let's start
15:02:50 <nadya_> :)
15:03:09 <gordc> #topic recurring: roadmap items (new/old/blockers) https://wiki.openstack.org/wiki/Telemetry/RoadMap
15:03:14 <gordc> so i updated
15:03:27 <gordc> (most) of the ceilometer wiki to point to telemetry
15:03:42 <gordc> i also updated the list of roadmap items that currently don't have owners
15:04:00 <gordc> feel free to add items you think i missed
15:04:06 <gordc> or grab an item yourself
15:04:43 <gordc> if you noticed i missed any ceilometer->telemetry items, please let me know as well.
15:05:00 <jd__> o/
15:05:06 <jd__> gordc: the spec repo
15:05:12 <gordc> and i guess last note is please review specs already up https://review.openstack.org/#/q/project:openstack/ceilometer-specs,n,z
15:05:27 <gordc> jd__: you mean move that to telemetry-specs?
15:05:34 <jd__> gordc: would make sense I guess
15:06:15 <gordc> i left it unchanged currently because all our bp still are under ceilometer launchpad
15:06:47 <gordc> jd__: shall we open up bp links for aodh and gnocchi?
15:06:58 <jd__> if needed
15:07:05 <jd__> if we have specs for these projects
15:07:34 <gordc> jd__: you know how easy a repo rename is?
15:07:48 <gordc> what happens to the docs it publishes?
15:09:10 <gordc> err, i'll look into it i guess... for now keep it pointing to ceilometer-specs
15:09:30 <gordc> any other items?
15:10:03 <gordc> #topic aodh topics
15:10:29 <gordc> nothing listed, but just a reminder, if someone wants to step up and be a lead for aodh, please do so
15:11:24 <nadya_> gordc: what does 'lead' mean?
15:11:40 <gordc> anyone else have aodh items? anyone take a look at this? http://lists.openstack.org/pipermail/openstack-dev/2015-November/079614.html
15:11:42 <nadya_> gordc: ptl-helper responsible for aodh?
15:11:48 <gordc> nadya_: basically.
15:12:08 <gordc> nadya_: just keep track of any important work items.
15:12:19 <gordc> nadya_: track aodh bugs
15:12:36 <gordc> nadya_: i'm basically trying to get others to do my job :)
15:13:03 <nadya_> gordc: yep, I see :)
15:13:52 <gordc> ok so we can move on.
15:14:17 <gordc> take a look at that list if you have opinions on vitrage or root cause analysis.
15:14:29 <gordc> #topic ceilometer topics
15:15:01 <gordc> no real updates here, i released ceilometerclient 2.0.1 this week
15:15:12 <gordc> errr... that's really it.
15:15:43 <gordc> any one else?
15:15:43 <llu> one question, do we need release notes for client?
15:15:44 <gordc> or we'll just jump into ityaptin spec discussion
15:16:06 <sileht> I'm fighting to get ceilometer working with keystoneauth1
15:16:08 <gordc> llu: i asked dhellmann, it's for services currently
15:16:21 <gordc> llu: but they'll probably be applied to libs/clients later
15:16:31 <gordc> sileht: we need a new ceilometerclient?
15:16:54 <sileht> gordc, perhaps I'm not sure yet
15:17:03 <gordc> i'm really hoping we can get that gnocchiclient patch in... i'm watching that.
15:17:12 <sileht> me too
15:17:47 <gordc> cool cool. let us know if you hit any blockers.
15:17:52 <ityaptin> Does anyone have experience with libvirt? We hit this issue in our deployments https://bugs.launchpad.net/ceilometer/+bug/1457440
15:17:52 <openstack> Launchpad bug 1457440 in Ceilometer "Compute agent virDomainGetBlockInfo error for instances with RBD backend" [Medium,Triaged]
15:17:52 <sileht> I have some issue on gate only :(
15:18:33 <sileht> ityaptin, looks like a kvm/libvirt issue not a ceilometer one
15:18:33 <ityaptin> Nobody wants to fix it :-(
15:18:52 <gordc> sileht: delete the tests
15:18:52 <gordc> ityaptin: i saw that, was going to look at it yesterday but got distracted
15:18:53 <gordc> ityaptin: you try asking on openstack-nova?
15:19:06 <gordc> ityaptin: i think nobody knows how.
15:19:28 <ityaptin> Yes, I sent a mail to openstack-dev
15:19:30 <dguitarbite> ityaptin: Something to do with Libvirt prob. the version of libvirt.
15:19:32 <gordc> ityaptin: let me try pinging kvm person after this
15:19:47 <sileht> ityaptin, try to see if a more recent version of libvirt or kvm fix the issue
15:19:50 <jd__> gordc: re: repo rename, it's not that hard, but it requires a downtime so they do that once every 2 weeks or something on Saturday usually IIRC
15:19:59 <ityaptin> gordc, thanks!
15:20:06 <sileht> ityaptin, if not, perhaps this is just not implemented for the rbd backend
15:20:18 <gordc> dguitarbite: cool. thanks for pointer
15:20:36 <gordc> sileht: that's what i thought too... but was too lazy to install everything and verify
15:20:52 <gordc> jd__: ack. i'll check with infra
15:21:12 <gordc> #action  gordc message infra to change ceilometer-specs to telemetry-specs
15:21:20 <ityaptin> sileht: If rbd doesn't support it, I will write a doc entry and update the bug
15:22:17 <sileht> ityaptin, I guess it doesn't support it, I have the latest kvm with the cloud-archive libvirt for trusty and got:
15:22:19 <sileht> virsh domblkinfo c04b85f2-aa2f-47d5-b412-2ad67c250e56 vda
15:22:21 <sileht> error: internal error: missing storage backend for network files using rbd protocol
15:24:07 <ityaptin> sileht: ok. so I suggest we additionally catch this issue in the virt inspector and log "%s is not supported in rbd" instead of a trace
15:24:36 <sileht> ityaptin, looks good
15:24:55 <ityaptin> sileht: I will do it.
15:25:00 <gordc> ityaptin: no problem with that either, but i'll see if it's something wrong on our end.
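[editor's note: a minimal sketch of the pattern ityaptin proposes above — catch the libvirt error in the virt inspector and return a friendly message instead of a traceback. This is illustrative only, not Ceilometer's actual inspector code; `LibvirtError` stands in for `libvirt.libvirtError`, and all function names here are hypothetical.]

```python
class LibvirtError(Exception):
    """Stand-in for libvirt.libvirtError (avoids a libvirt dependency)."""


def get_block_info(blockinfo_call, device):
    """Call the (hypothetical) per-device blockinfo function, but turn
    the 'missing storage backend' failure into a readable message
    rather than letting the traceback propagate."""
    try:
        return blockinfo_call(device)
    except LibvirtError:
        # Mirrors the suggestion in the discussion: report that the
        # call is unsupported for rbd-backed instances.
        return "%s is not supported in rbd" % device


def rbd_backed_disk(device):
    # Simulates the virsh error shown above for an rbd-backed disk.
    raise LibvirtError("internal error: missing storage backend "
                       "for network files using rbd protocol")


result = get_block_info(rbd_backed_disk, "vda")
```

The same try/except would wrap the real `domain.blockInfo()` call in the inspector, so RBD-backed instances produce a log entry instead of a stack trace.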
15:26:23 <gordc> cool let's move on to spec.
15:26:36 <gordc> #topic spec for new project with polling and notification (ityaptin)
15:26:40 <gordc> ityaptin: go for it.
15:26:49 <ityaptin> https://review.openstack.org/#/c/246314/3
15:27:43 <gordc> ityaptin: my main question was trying to figure out what exactly is being split
15:28:33 <nadya_> gordc: polling + notifications. Because the notification agent is responsible for transforming now
15:29:09 <nadya_> gordc: and it seems difficult to break pipeline.yaml
15:29:39 <gordc> nadya_: so collector is remaining service in ceilometer?
15:29:47 <nadya_> yep
15:30:28 <nadya_> the new service will publish messages somewhere. and collector may be configured to catch them
15:31:09 <gordc> hmm.. what functionality does collector have without polling+notification agents?
15:31:44 <nadya_> storage. put and get
15:32:59 <ityaptin> gordc: In ceilometer we will still have an api for getting data and a collector for recording.
15:33:15 <gordc> nadya_: but it only understands polling/notification sample/event models
15:33:39 <gordc> ityaptin: i see.
15:34:48 <gordc> so i think originally the discussion at summit was that we could configure the notification agent to build data in any pluggable model
15:35:19 <gordc> i think right now the polling/notification code is pretty separate already no? any other opinions?
15:35:51 <nadya_> gordc: collector will be used if users want to use Ceilometer storages. If they want to publish to Kafka or whatever, then they probably don't want collector
15:36:39 <nadya_> gordc: it is. but the same story was about alarming code
15:37:31 <nadya_> gordc: the idea about "the notification agent build data in any pluggable model" may be implemented as well during refactoring
15:38:05 * jd__ agrees with nadya_ and ityaptin, he thinks
15:38:41 <gordc> i'm pretty indifferent
15:39:23 <gordc> the remaining collector+api code will be pretty sparse
15:39:39 <gordc> especially if you consider the metering api to be in maintenance mode.
15:41:14 <nadya_> gordc: I'm afraid it's evolution :( People use Ceilometer mostly for collection. External storage is a very common solution
15:42:43 <gordc> nadya_: understood. just pointing out the packages are already like this.
15:43:31 <jd__> gordc: yeah then we need to move out the event part… and we're pretty done
15:43:32 <gordc> i'm pretty sure this is going to be moving 90% of code and keeping 10%.
15:43:54 <gordc> jd__: i think that's what we originally discussed
15:44:09 <jd__> that's what I wanted to do this cycle
15:44:17 <jd__> but if we do this one I'm not sure it's wise to do all in one cycle
15:44:59 <gordc> jd__: agreed
15:46:38 <gordc> nadya_: i'm indifferent, i don't think this will help much from a user or dev pov but i'm an old man and don't like change.
15:47:29 <gordc> but tbh, if it ends up being a 90/10 split in code, i don't see the value. unless we plan on adding functionality to that 10%
15:47:54 <nadya_> gordc: ok, I see your point. We may just start doing this and evaluate the results
15:48:42 <nadya_> and discuss events separation with jd__, because it's not clear how it may be done
15:49:03 <gordc> nadya_: yeah, that'd be good. give it a try.
15:49:18 <jd__> gordc: if the 10% are the ones deprecated I don't see the problem moving them out anyway?
15:49:25 <gordc> nadya_: i'm not sure how to split that and make it pluggable either
15:49:50 <gordc> jd__: are we moving the 10% because if we move collector/api, that requires new endpoint no?
15:49:58 <gordc> i think we're moving the 90%.
15:50:11 <jd__> yeah
15:50:18 <jd__> so it's better?
15:50:29 <jd__> Why are you whining then? :p
15:50:33 <gordc> also, we're only really deprecating the metering api; the events api we need to keep (or make better)
15:50:45 <jd__> gordc: yeah we need to split that, I said
15:51:06 <nadya_> we're deprecating metering? come on!
15:51:43 <gordc> err. so to formalise, the idea is: move out polling+notification agent?
15:52:05 <jd__> sounds like it
15:52:14 <nadya_> gordc: yep! metering deprecation is another topic
15:52:34 <jd__> then we deprecate OpenStack
15:52:42 <gordc> nadya_: give it a try.
15:52:48 <nadya_> I just haven't heard about this yet :)
15:52:56 <nadya_> gordc: ok!
15:53:00 <gordc> jd__: write a spec
15:53:09 <jd__> gordc: we deprecated spec already.
15:53:27 <nadya_> btw, Monasca should be at this meeting :)?
15:53:29 <gordc> jd__: damn. we must move forward then.
15:53:36 <jd__> nadya_: is this a question?
15:53:44 <nadya_> jd__: yep
15:53:49 <jd__> nadya_: why should they?
15:53:50 <nadya_> they are telemetry
15:54:04 <nadya_> and this is telemetry meeting
15:54:12 <gordc> nadya_: we don't manage them...
15:54:23 <gordc> and we don't own telemetry space... just the name.lol
15:54:32 <jd__> they're not part of the telemetry project team
15:54:32 <gordc> changing to open discussion
15:54:37 <jd__> they never requested to join AFAIK
15:54:38 <gordc> #topic open discussion
15:55:06 <nadya_> gordc: I thought you may become their PTL accidentally
15:55:12 <jd__> lol
15:55:16 <gordc> hahahah!
15:55:28 <gordc> nadya_: man, people would be soooo angry.
15:55:53 <jd__> :))
15:56:06 <nadya_> :)
15:56:25 <jd__> you're such a troll nadya_
15:56:46 <gordc> nadya_: you will get me in trouble.
15:57:28 <nadya_> sorry ;)
15:57:35 <gordc> nadya_: our official response is we will listen to all ideas and are always open to integration.
15:57:57 <gordc> i learned politics from eglynn.
15:58:13 <gordc> ok we all good?
15:58:33 <nadya_> yep, I think so
15:58:59 <gordc> thanks everyone.
15:59:06 <gordc> #endmeeting