15:00:30 <jd__> #startmeeting ceilometer
15:00:31 <openstack> Meeting started Thu May 16 15:00:30 2013 UTC.  The chair is jd__. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:32 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:34 <openstack> The meeting name has been set to 'ceilometer'
15:00:39 <jd__> #link https://wiki.openstack.org/wiki/Meetings/MeteringAgenda
15:01:01 <dhellmann> o/
15:01:02 <jd__> hi everyone
15:01:03 <flwang> o/
15:01:05 <apmelton> o/
15:01:05 <dragondm> o/
15:01:07 <flwang> hi jd__
15:01:10 <sandywalsh> o/
15:01:11 <gordc> o/
15:01:11 <danspraggins> o/
15:01:31 <n0ano> o/
15:01:32 <jd__> as I already said, big agenda today, so let's try to focus :)
15:01:47 <jd__> #topic Last week action: jd__ and dhellmann to write formal statement about limiting support for pre-grizzly versions of ceilometer
15:01:54 * jd__ whispers
15:02:01 <eglynn> o/
15:02:05 <dhellmann> I haven't had a chance to look at that :-(
15:02:24 <jd__> I'll keep it as an #action too
15:02:34 <jd__> we can sync at a certain time if needed dhellmann
15:02:39 <epende> o/
15:02:52 <jd__> #action dhellmann and jd__ to write formal statement about limiting support for pre-grizzly versions of ceilometer
15:03:00 <dhellmann> jd__: yeah, we can talk about it via email
15:03:04 <litong> o/
15:03:07 <jd__> dhellmann: works for me :)
15:03:16 <jd__> #topic Last week action: blueprint-owners, set milestones on all blueprints
15:03:29 <jd__> #link https://blueprints.launchpad.net/ceilometer/havana
15:03:40 <thomasem> o/
15:03:41 <jd__> so it's almost ok now
15:04:05 <jd__> and we're ambitious :)
15:04:30 <jd__> #topic Tracking of bug #1176017, releasing of MIM
15:04:34 <uvirtbot> Launchpad bug 1176017 in ceilometer "Reinstate MongoDB testing with ming" [Critical,Fix committed] https://launchpad.net/bugs/1176017
15:04:43 <jd__> dhellmann: I think this is done right?
15:05:21 <dhellmann> yes, I think that just landed earlier today
15:05:35 <gordc> awesome! finally have the mongo tests running again.
15:05:59 <jd__> yeah that's reaaaallly great
15:06:15 <jd__> thanks dhellmann for the good job :)
15:06:29 <dhellmann> asalkeld made some great improvements to the tests lately, too, and we're trying to get those all working well
15:06:41 <nealph> dhellmann: thumbs up
15:06:51 <dhellmann> moving to testr in particular will let us use scenarios, and not have to subclass for different configurations
15:06:59 <dhellmann> will be good for the db tests
15:07:08 <jd__> dhellmann: yeah, I remember you talking to me about this
15:07:18 <dhellmann> we'll get that in soon, but maybe not before h1
15:07:22 <jd__> (and me never having time to dive into it)
15:07:27 <dhellmann> having some issues with the notifier tests under testr
15:07:56 * jd__ thumbs up
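A minimal sketch of the scenario pattern dhellmann describes, assuming the testscenarios library commonly paired with testr and testtools; the class name, scenario names, and connection strings below are illustrative, not ceilometer's actual test code:

    import testscenarios

    class StorageDriverTest(testscenarios.TestWithScenarios):
        # each named scenario re-runs every test with its own attributes,
        # replacing one subclass per backend configuration
        scenarios = [
            ('sqlite', {'database_connection': 'sqlite://'}),
            ('mongodb', {'database_connection': 'mongodb://localhost:27017/test'}),
        ]

        def test_connection_string_present(self):
            # under testr this test is listed and run once per scenario
            self.assertTrue(self.database_connection)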
15:08:05 <jd__> #topic Review Havana-1 milestone
15:08:13 <jd__> talking about h1, let's take a look
15:08:19 <jd__> #link https://launchpad.net/ceilometer/+milestone/havana-1
15:08:41 <jd__> we seem to be on track
15:09:08 <jd__> I'm only worried about monitoring-physical-devices; I emailed Toni this week to get a status update
15:09:16 <jd__> we're supposed to have the patchset before the end of the week
15:09:25 <dhellmann> what's the difference between "fix committed" and "implemented" status?
15:09:27 <jd__> I'm very worried because I have the feeling it's going to be huge
15:09:38 <jd__> dhellmann: bug vs blueprint?
15:09:42 <dhellmann> yeah, I don't know if there's going to be time to review that before the deadline
15:09:47 <dhellmann> jd__: aha, yeah
15:10:07 <jd__> so I'm waiting for monitoring-physical-devices, and we'll see :(
15:10:20 <jd__> we may have to postpone to h2 if that doesn't arrive soon enough
15:10:24 <dhellmann> is it better to just move that to h2, or wait and see when we get the patch? I *really* don't want to rush something big
15:10:25 <sandywalsh> about 2wks to havana-1, yes?
15:10:33 <jd__> I still have no idea what the patchset is going to look like, and I'm scared
15:10:47 <dhellmann> yeah, h1 deadline is may 30
15:11:00 <eglynn> yeah punt to h2 sounds sensible if there's doubt
15:11:01 <flwang> jd__: is it possible to get Toni to submit some patches for review?
15:11:06 <gordc> yeah, the blueprint reads like it's a big patch.
15:11:26 <dhellmann> the patch can go in at any time, but if we move the target to h2 now then that's a signal we aren't going to rush the review
15:11:40 <jd__> flwang: it's so possible that I begged him to, but he said "end of the week" :)
15:11:53 <eglynn> dhellmann: agreed
15:12:06 <flwang> haha, fine :)
15:12:33 <jd__> dhellmann: I'll wait until tomorrow: if there's no patch, or the patch is big, I'll move it to h2 -- how does that sound?
15:12:35 <flwang> based on my experience, it's a system management topic, so I think it's a huge amount of work
15:12:53 <dhellmann> jd__: that works for me
15:13:40 <jd__> #agreed jd__ to reschedule monitoring-physical-devices if the patchset is too big or not sent for review before 17 May 2013
15:14:29 <jd__> don't forget to review https://review.openstack.org/#/c/28800/
15:14:31 <jd__> it's for h1
15:14:49 <llu-laptop> o/
15:15:25 <jd__> #topic Releasing Ceilometer 2013.1.1
15:15:27 * dhellmann thinks jd__ should probably respond to comments
15:15:41 <jd__> dhellmann: I missed that then :(
15:16:11 <jd__> I did, my bad, will do, pff
15:16:28 <jd__> about 2013.1.1, so it's going to be done soon finally
15:16:42 <eglynn> for 2013.1.1 all the backports were done at the 11th hour, prolly should try to be more "ongoing" on this for 2013.1.2
15:16:54 <eglynn> i.e. for each bug fix you get landed on master
15:17:03 <eglynn> think about backporting potential
15:17:12 <dhellmann> +1
15:17:16 <eglynn> and tag the bug if it's a good candidate
15:17:22 <dhellmann> reviewers, too
15:17:38 <dhellmann> eglynn: tag it in launchpad?
15:17:44 <eglynn> even better, propose the backport yourself if it's a trivial cherrypick
15:17:57 <eglynn> dhellmann: yep, grizzly-backport-potential
15:18:07 * jd__ nods
15:18:25 <dhellmann> #info tag backportable bugs with grizzly-backport-potential
15:19:35 <eglynn> #link https://wiki.openstack.org/wiki/StableBranch#Appropriate_Fixes
15:19:51 <eglynn> ^^^ good summary of what's suitable for the stable branch
15:19:52 <uvirtbot> eglynn: Error: "^^" is not a valid command.
15:20:55 <gordc> eglynn, thanks for link. very helpful.
15:21:07 <jd__> #topic Idea: using URL as publishing targets formats
15:21:17 <jd__> Rationale: with the UDP publisher, we may have several UDP targets, so we need more than one option for the whole plugin; that wasn't needed for RPC because we don't specify the host on the Ceilometer side
15:21:25 <jd__> thoughts?
15:21:35 <dhellmann> seems reasonable to me
15:21:46 <jd__> thank you dhellmann, next topic then
15:21:48 * jd__ smiles
15:21:59 <dhellmann> we should come up with conventions
15:22:03 <dhellmann> but not in this meeting :-)
15:22:23 <jd__> you mean, about agreeing with my ideas, or about URL? :-)
15:22:24 <eglynn> so is the idea to invent a URI scheme for this?
15:22:42 <jd__> eglynn: udp://host:port doesn't look like a big invention
15:22:51 <jd__> rather than just 'udp' as it is now
15:23:00 <eglynn> jd__: true that :)
15:23:32 * eglynn wondering if there's prior art on this ...
15:23:38 <jd__> meter would be replaced by meter:// at first -- later we could use a real RPC URL
15:23:38 <sandywalsh> what about destinations with no "standard" protocol?
15:23:47 <sandywalsh> what would a graphite URI look like?
15:23:57 <jd__> sandywalsh: graphite://whatyouwant
15:24:00 <dhellmann> the scheme in the url should be the plugin name in our publisher plugins
15:24:07 <sandywalsh> hmm
15:24:07 <jd__> sandywalsh: basically it would be a <publishername>://<whatever>
15:24:23 <dhellmann> and indeed, we might use statsd instead of udp, since those messages have to be formatted in a particular way, iirc
15:24:30 <jd__> dhellmann: exactly
15:24:32 <sandywalsh> then why bother making it look like a URI? you could use any separator
15:24:35 <jd__> UDP doesn't tell you how I format the data inside
15:24:55 <jd__> sandywalsh: because 1. it's standard 2. it's going to be used by oslo.rpc at some point it seems
15:25:08 <mrutkows> the use of an actual protocol (scheme) in a URL shouldn't be confused with a generic URI scheme
15:25:40 <sandywalsh> hmm
15:26:00 <dhellmann> we do this for the storage connection strings, too
15:26:16 * sandywalsh defers :)
15:26:30 <jd__> yeah, everybody does it for storage, that's true :)
15:26:51 <jd__> ok, so I'll work on that at some point since it's quickly going to become a problem for UDP
15:26:52 <dragondm> yah, that is the way sqlalchemy, etc. do connection strings
15:27:27 <jd__> #agreed use URL for publisher target formats in pipeline
15:27:42 <mrutkows> URI schemes can describe how path segments could be used for storage, but that is separate from protocol (transmission)
15:28:16 <jd__> mrutkows: the point being?
15:28:58 <mrutkows> if you use a udp scheme to describe the transport protocol, it should not necessarily imply how the data is stored
15:29:12 <dhellmann> it won't
15:29:30 <dhellmann> interpreting anything other than the scheme in the url will be the responsibility of the publisher plugin
15:29:33 <mrutkows> thanks )
15:29:38 <eglynn> slightly related point ...
15:29:43 <eglynn> sqlalchemy/swift etc. also use the URI to carry credentials
15:29:47 <eglynn> would anything of that ilk be required for UDP?
15:29:53 <eglynn> (or do we rely on message payload signing?)
15:29:58 <jd__> eglynn: not at this point, maybe later yes
15:30:06 <jd__> eglynn: we can use query parameters in the URI
15:30:06 <eglynn> jd__: fair enough
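A minimal sketch of how such a target could be resolved, using only the conventions agreed above (the scheme names the publisher plugin, everything else is plugin-specific, and query parameters can carry options such as credentials); the function and option names are hypothetical:

    from urllib.parse import urlparse, parse_qs

    def parse_publisher_target(target):
        # the URL scheme selects the publisher plugin by name;
        # the plugin itself interprets the rest of the URL
        url = urlparse(target)
        options = parse_qs(url.query)
        return url.scheme, url.hostname, url.port, options

    # parse_publisher_target('udp://collector.example.com:4952?timeout=5')
    # -> ('udp', 'collector.example.com', 4952, {'timeout': ['5']})
    # a statsd plugin would also format the payload, e.g. 'cpu_util:42|g'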
15:30:47 <jd__> #topic Adding Gordon Chung to core reviewer team
15:30:59 <gordc> ... should i leave the room for this?
15:31:07 <jd__> lol, no
15:31:15 <eglynn> gordc: stay! :)
15:31:16 <jd__> there's nothing to be done; I think the waiting period ends tomorrow and I'll be able to add you, since nobody objected
15:31:30 <gordc> i'll close my eyes. be honest ppl. lol
15:31:37 <jd__> lol
15:31:53 <litong> @gordc, congrats.
15:32:01 <mrutkows> congrats Gordon
15:32:04 <llu-laptop> gordc: congrats
15:32:05 <eglynn> gordc: welcome to the inner sanctum! :)
15:32:09 <gordc> thanks for the support folks!
15:32:26 <flwang> @gordc, congrats
15:32:27 <jd__> hm I think I'll need nijaba_ on this one, I don't have the rights to do that lol
15:32:30 <dhellmann> congrats and welcome, gordc!
15:32:43 <jd__> nijaba_: LET IT GO! GIVE ME MY RIGHTS!
15:33:01 <gordc> jd__, lol
15:33:10 * gordc will try not to muck this up.
15:33:36 <xingzhou> gordc, congrats!
15:33:50 <jd__> #topic Consider use of Diamond instead of the CM Pollster
15:34:07 <jd__> sandywalsh: floor is yours
15:34:31 <sandywalsh> I'm going to reserve further comment for now. Look at Diamond and consider that it already does what we want. ...
15:34:50 <sandywalsh> I'm going to be working on the notifier next, so I'll have a more informed opinion at the next meeting
15:35:03 <sandywalsh> I'm a big fan of two things:
15:35:11 <sandywalsh> 1. not duplicating effort
15:35:36 <sandywalsh> 2. the trend that the "monitoring stack" is a set of Input-Do Something-Output components that fit together
15:35:50 <sandywalsh> which includes graphite, statsd, riemann, etc
15:36:06 <sandywalsh> so I don't think we should be trying to build the entire stack ourselves
15:36:15 <thomasem> https://github.com/BrightcoveOS/Diamond?
15:36:22 <sandywalsh> thomasem, yep
15:36:30 <sandywalsh> so, something to ponder :)
15:36:45 <dragondm> Yah, +1 to not duplicating effort.
15:36:49 <eglynn> what about the identity issue, is diamond extensible to reporting native openstack UUIDs?
15:36:53 <dragondm> ...and not re-inventing bugs
15:37:00 <thomasem> :D
15:37:01 <jd__> sandywalsh: ok, then feel free to re-add this to the agenda whenever you've enough information to discuss it
15:37:03 <eglynn> (for resources, users, tenants etc.)
15:37:04 <gordc> thomasem, thanks, was just going to ask for a link.
15:37:18 <thomasem> gordc: you bet
15:37:32 <eglynn> well mainly for resources, I guess users and tenants could be figured out in a round-about way
15:37:36 <sandywalsh> eglynn, I see it mostly as a tool for hypervisor/cpu/disk/etc polling ... not for openstack internals
15:37:50 <sandywalsh> eglynn, I see notifications -> events -> meters for that
15:38:12 <eglynn> sandywalsh: a-ha, OK
15:38:24 <dhellmann> we need a way to collect data so our HPC customers can charge for %CPU of the host box (not the VM)
15:38:25 <sandywalsh> eglynn, also, it's host-side, not instance/user-side ... but I suppose it could be too.
15:38:51 <dhellmann> diamond doesn't know who owns the vm consuming that cpu
15:39:15 <dragondm> it's easy enough to report the instanceid.
15:39:18 <sandywalsh> dhellmann, so, that would be something we would have to figure out anyway, so perhaps we would need a new diamond collector for that
15:39:18 <nealph> anyone know if this overlaps with healthnmon's capabilities?
15:39:30 <dhellmann> sandywalsh: our current CPU collector already does this
15:39:32 <eglynn> yep, so we'd need to add that info somewhere in the pipeline
15:39:55 <sandywalsh> dhellmann, so convert it to be a diamond module and leverage the strength of that community
15:40:27 <sandywalsh> there is a counter-argument ... which depends on the architecture of the multi-publisher.
15:40:45 <dhellmann> if someone wants to do that, that's fine
15:40:53 <dhellmann> I'm just trying to make sure we understand the requirements
15:40:58 <dhellmann> it's not just "get the cpu utilization"
15:40:58 <sandywalsh> and that potentially we are building a proper input->do_something->output widget
15:41:06 <dhellmann> it has to be tied to the instance owner for billing
15:41:11 <dhellmann> it's not just about monitoring
15:41:35 <jd__> if you can't generate meters based on the Counter format we defined, it's basically useless
15:42:07 <sandywalsh> dhellmann, depends on how you tackle the problem of CPU usage, e.g. whether the hypervisor allows you to cap cpu based on flavor. But that's a longer discussion
15:42:28 <dhellmann> like I said, I don't care if we change this, but it has to do what we do now, not just collect monitoring data
15:42:42 <dhellmann> we can't ignore that requirement
15:43:01 <sandywalsh> jd__, by that logic we can't use ceilometer with any of the existing monitoring stack tools out there
15:43:33 <dhellmann> sandywalsh: that depends entirely on whether you view ceilometer as the destination of data, or the source of data
15:43:38 <sandywalsh> dhellmann, agreed ... but I think the delta of effort is smaller than having to write all these collectors/handlers over again
15:43:49 <sandywalsh> dhellmann, it should be both
15:43:50 <dhellmann> I thought we agreed that for monitoring, we would be a source but not a destination
15:43:58 <dhellmann> for metering we are a destination
15:44:22 <eglynn> jd__: if it gives enough information to potentially *map* to the counter format, then that could work, no?
15:44:23 <dhellmann> and for alarming, which is a subset of the monitoring data that's also tied to customer ids in a way that other tools don't do
15:44:34 <sandywalsh> if it's input->do_something->output as the core architecture, there is no real limitation on that
15:44:55 <dhellmann> the limitation is whether the thing collecting the data has all of the metadata, too
15:45:14 <jd__> eglynn: sure!
15:45:17 <dhellmann> but we're running out of time in this meeting
15:45:28 <sandywalsh> again ... just planting a seed for thought :)
15:45:53 <jd__> yeah, there's no harm in that :)
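A minimal sketch of the requirement dhellmann raises: whatever polls host-side stats (Diamond or anything else) has to attach ownership metadata before a sample is usable for metering rather than just monitoring. The Counter fields here are illustrative stand-ins, not ceilometer's exact definition, and the instance dict is hypothetical:

    from collections import namedtuple
    from datetime import datetime

    # illustrative stand-in for the Counter format mentioned above
    Counter = namedtuple('Counter', 'name type unit volume user_id project_id '
                                    'resource_id timestamp resource_metadata')

    def host_cpu_sample_to_counter(instance, cpu_util):
        # tie the raw host-side sample to the VM's owner so it can be billed
        return Counter(name='cpu_util', type='gauge', unit='%',
                       volume=cpu_util,
                       user_id=instance['user_id'],
                       project_id=instance['project_id'],
                       resource_id=instance['uuid'],
                       timestamp=datetime.utcnow().isoformat(),
                       resource_metadata={'host': instance['host']})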
15:45:56 <jd__> #topic Open discussion
15:46:53 <eglynn> 2013.1.1 just released!
15:46:54 <eglynn> https://launchpad.net/ceilometer/grizzly/2013.1.1
15:47:08 <jd__> amazing
15:47:08 <eglynn> props to apevec!
15:48:41 <jd__> anything else or should I wrap up?
15:48:53 <eglynn> nowt else from me
15:49:36 * dhellmann has nothing to add
15:49:57 <jd__> #endmeeting