15:00:30 #startmeeting ceilometer
15:00:31 Meeting started Thu May 16 15:00:30 2013 UTC. The chair is jd__. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:32 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:34 The meeting name has been set to 'ceilometer'
15:00:39 #link https://wiki.openstack.org/wiki/Meetings/MeteringAgenda
15:01:01 o/
15:01:02 hi everyone
15:01:03 o/
15:01:05 o/
15:01:05 o/
15:01:07 hi jd__
15:01:10 o/
15:01:11 o/
15:01:11 o/
15:01:31 o/
15:01:32 as I already said, big agenda today, so let's try to focus :)
15:01:47 #topic Last week action: jd__ and dhellmann to write formal statement about limiting support for pre-grizzly versions of ceilometer
15:01:54 * jd__ whispers
15:02:01 o/
15:02:05 I haven't had a chance to look at that :-(
15:02:24 I'll keep it in #action too
15:02:34 we can sync up at some point if needed, dhellmann
15:02:39 o/
15:02:52 #action dhellmann and jd__ to write formal statement about limiting support for pre-grizzly versions of ceilometer
15:03:00 jd__: yeah, we can talk about it via email
15:03:04 o/
15:03:07 dhellmann: works for me :)
15:03:16 #topic Last week action: blueprint-owners, set milestones on all blueprints
15:03:29 #link https://blueprints.launchpad.net/ceilometer/havana
15:03:40 o/
15:03:41 so it's almost ok now
15:04:05 and we're ambitious :)
15:04:30 #topic Tracking of bug #1176017, releasing of MIM
15:04:34 Launchpad bug 1176017 in ceilometer "Reinstate MongoDB testing with ming" [Critical,Fix committed] https://launchpad.net/bugs/1176017
15:04:43 dhellmann: I think this is done, right?
15:05:21 yes, I think that just landed earlier today
15:05:35 awesome! finally have the mongo tests running again.
15:05:59 yeah that's reaaaallly great
15:06:15 thanks dhellmann for the good job :)
15:06:29 asalkeld made some great improvements to the tests lately, too, and we're trying to get those all working well
15:06:41 dhellmann: thumbs up
15:06:51 moving to testr in particular will let us use scenarios, and not have to subclass for different configurations
15:06:59 will be good for the db tests
15:07:08 dhellmann: yeah I remember you talking about this to me
15:07:18 we'll get that in soon, but maybe not before h1
15:07:22 (and me never having time to dive into it)
15:07:27 having some issues with the notifier tests under testr
15:07:56 * jd__ thumbs up
15:08:05 #topic Review Havana-1 milestone
15:08:13 talking about h1, let's take a look
15:08:19 #link https://launchpad.net/ceilometer/+milestone/havana-1
15:08:41 we seem to be on track
15:09:08 I'm only worried about monitoring-physical-devices, I emailed Toni this week to get a status update
15:09:16 we're supposed to have the patchset before the end of the week
15:09:25 what's the difference between "fix committed" and "implemented" status?
15:09:27 I'm very worried because I've the feeling it's going to be huge
15:09:38 dhellmann: bug vs blueprint?
15:09:42 yeah, I don't know if there's going to be time to review that before the deadline
15:09:47 jd__: aha, yeah
15:10:07 so I'm waiting for monitoring-physical-devices, and we'll see :(
15:10:20 we may have to postpone to h2 if that doesn't arrive soon enough
15:10:24 is it better to just move that to h2, or wait and see when we get the patch? I *really* don't want to rush something big
15:10:25 about 2wks to havana-1, yes?
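On the scenarios mechanism mentioned at 15:06:51: testr pairs with the testscenarios library, so one test class can run once per configuration instead of needing a subclass per storage backend. A minimal sketch, with illustrative backend names and URLs rather than ceilometer's actual tests:

    # Each (name, attrs) pair re-runs every test method with the given
    # attributes set on the test instance -- no subclassing needed.
    import testscenarios
    import testtools

    class TestStorageDriver(testscenarios.TestWithScenarios,
                            testtools.TestCase):
        scenarios = [
            ('sqlite', {'db_url': 'sqlite://'}),
            ('mongodb', {'db_url': 'mongodb://localhost:27017/test'}),
        ]

        def test_url_has_scheme(self):
            # Runs once per scenario, with self.db_url set accordingly.
            self.assertIn('://', self.db_url)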
15:10:33 I've still no idea what the patchset is going to look like, and I'm scared
15:10:47 yeah, h1 deadline is May 30
15:11:00 yeah punt to h2 sounds sensible if there's doubt
15:11:01 jd__: is it possible to let Toni submit some patch for review?
15:11:06 yeah, the blueprint reads like it's a big patch.
15:11:26 the patch can go in at any time, but if we move the target to h2 now then that's a signal we aren't going to rush the review
15:11:40 flwang: it's so possible that I begged him to, but he said "end of the week" :)
15:11:53 dhellmann: agreed
15:12:06 haha, fine :)
15:12:33 dhellmann: I'll wait until tomorrow: if there's no patch, or a big patch, I'll move it to h2 -- how does that sound?
15:12:35 based on my experience, it's a system management topic, I think it's a huge amount of work
15:12:53 jd__: that works for me
15:13:40 #agreed jd__ to reschedule monitoring-physical-devices if the patchset is too big or not sent to review before 17th May 2013
15:14:29 don't forget to review https://review.openstack.org/#/c/28800/
15:14:31 it's for h1
15:14:49 o/
15:15:25 #topic Releasing Ceilometer 2013.1.1
15:15:27 * dhellmann thinks jd__ should probably respond to comments
15:15:41 dhellmann: I missed that then :(
15:16:11 I did, my bad, will do, pff
15:16:28 about 2013.1.1, so it's finally going to be done soon
15:16:42 for 2013.1.1 all the backports were done at the 11th hour, prolly should try to be more "ongoing" on this for 2013.1.2
15:16:54 i.e. for each bug fix you get landed on master
15:17:03 think about backporting potential
15:17:12 +1
15:17:16 and tag the bug if it's a good candidate
15:17:22 reviewers, too
15:17:38 eglynn: tag it in launchpad?
15:17:44 even better, propose the backport yourself if it's a trivial cherrypick
15:17:57 dhellmann: yep, grizzly-backport-potential
15:18:07 * jd__ nods
15:18:25 #info tag backportable bugs with grizzly-backport-potential
15:19:35 #link https://wiki.openstack.org/wiki/StableBranch#Appropriate_Fixes
15:19:51 ^^^ good summary of what's suitable for the stable branch
15:20:55 eglynn, thanks for the link. very helpful.
15:21:07 #topic Idea: using URLs as publishing target formats
15:21:17 Rationale: with the UDP publisher, we may have several UDP targets, so we need more than one option for the whole plugin; that wasn't needed for the RPC because we don't specify the host on the Ceilometer side
15:21:25 thoughts?
15:21:35 seems reasonable to me
15:21:46 thank you dhellmann, next topic then
15:21:48 * jd__ smiles
15:21:59 we should come up with conventions
15:22:03 but not in this meeting :-)
15:22:23 you mean, about agreeing with my ideas, or about URLs? :-)
15:22:24 so is the idea to invent a URI scheme for this?
15:22:42 eglynn: udp://host:port doesn't look like a big invention
15:22:51 rather than just 'udp' as it is now
15:23:00 jd__: true that :)
15:23:32 * eglynn wondering if there's prior art on this ...
15:23:38 meter would be replaced by meter:// as a first step -- later we could use a real RPC URL
15:23:38 what about destinations with no "standard" protocol?
15:23:47 what would a graphite URI look like?
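As a rough sketch of what parsing such targets could look like -- the discussion below settles on the URL scheme naming the publisher plugin -- assuming Python 2's urlparse module (current at the time); the function name and option handling are illustrative, not ceilometer's eventual implementation:

    # Older urlparse versions only split host:port for known schemes,
    # so custom schemes may need registering in urlparse.uses_netloc.
    import urlparse

    for scheme in ('udp', 'meter', 'graphite'):
        if scheme not in urlparse.uses_netloc:
            urlparse.uses_netloc.append(scheme)

    def parse_publisher_target(target):
        """Split e.g. 'udp://collector.example.com:4952?user=x'."""
        url = urlparse.urlsplit(target)
        return {
            'plugin': url.scheme,                     # selects the publisher
            'host': url.hostname,                     # None for 'meter://'
            'port': url.port,
            'options': urlparse.parse_qs(url.query),  # e.g. credentials later
        }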
15:23:57 sandywalsh: graphite://whatyouwant
15:24:00 the scheme in the url should be the plugin name in our publisher plugins
15:24:07 hmm
15:24:07 sandywalsh: basically it would be <plugin-name>://
15:24:23 and indeed, we might use statsd instead of udp, since those messages have to be formatted in a particular way, iirc
15:24:30 dhellmann: exactly
15:24:32 then why bother making it look like a URI, you could use any separator
15:24:35 UDP doesn't tell you how I format the data inside
15:24:55 sandywalsh: because 1. it's standard 2. it's going to be used by oslo.rpc at some point it seems
15:25:08 the use of an actual protocol (scheme) in a URL and a URI scheme should not be confused
15:25:40 hmm
15:26:00 we do this for the storage connection strings, too
15:26:16 * sandywalsh defers :)
15:26:30 yeah, everybody does it for storage, that's true :)
15:26:51 ok, so I'll work on that at some point since it's quickly going to become a problem for UDP
15:26:52 yah, that is the way sqlalchemy, etc. does connection strings
15:27:27 #agreed use URLs for publisher target formats in pipeline
15:27:42 URI schemes can describe how path segments could be used for storage, but that is separate from the protocol (transmission)
15:28:16 mrutkows: the point being?
15:28:58 if you use the udp scheme to describe the transport protocol, it should not necessarily imply how the data should be stored
15:29:12 it won't
15:29:30 interpreting anything other than the scheme in the url will be the responsibility of the publisher plugin
15:29:33 thanks )
15:29:38 slightly related point ...
15:29:43 sqlalchemy/swift etc. also use the URI to carry credentials
15:29:47 would anything of that ilk be required for UDP?
15:29:53 (or do we rely on message payload signing?)
15:29:58 eglynn: not at this point, maybe later yes
15:30:06 eglynn: we can use query parameters in the URI scheme
15:30:06 jd__: fair enough
15:30:47 #topic Adding Gordon Chung to core reviewer team
15:30:59 ... should i leave the room for this?
15:31:07 lol, no
15:31:15 gordc: stay! :)
15:31:16 there's nothing to be done, I think the delay will pass tomorrow and I'll be able to add you since nobody objected
15:31:30 i'll close my eyes. be honest ppl. lol
15:31:37 lol
15:31:53 @gordc, congrats.
15:32:01 congrats Gordon
15:32:04 gordc: congrats
15:32:05 gordc: welcome to the inner sanctum! :)
15:32:09 thanks for the support folks!
15:32:26 @gordc, congrats
15:32:27 hm I think I'll need nijaba_ on this one, I don't have the rights to do that lol
15:32:30 congrats and welcome, gordc!
15:32:43 nijaba_: LET IT GO! GIVE ME MY RIGHTS!
15:33:01 jd__, lol
15:33:10 * gordc will try not to muck this up.
15:33:36 gordc, congrats!
15:33:50 #topic Consider use of Diamond instead of the CM Pollster
15:34:07 sandywalsh: floor is yours
15:34:31 I'm going to reserve further comment for now. Look at Diamond and consider that it already does what we want. ...
15:34:50 I'm going to be working on the notifier next, so I'll have a more informed opinion at the next meeting
15:35:03 I'm a big fan of two things:
15:35:11 1. not duplicating effort
15:35:36 2. the trend that the "monitoring stack" is a set of Input-Do Something-Output components that fit together
15:35:50 which includes graphite, statsd, Riemann, etc
15:36:06 so I don't think we should be trying to build the entire stack ourselves
15:36:15 https://github.com/BrightcoveOS/Diamond?
15:36:22 thomasem, yep
15:36:30 so, something to ponder :)
15:36:45 Yah, +1 to not duplicating effort.
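For reference, Diamond collectors are small classes that override a collect() method and publish name/value pairs. A hedged sketch of the kind of collector eglynn's UUID question below implies, assuming Diamond's documented Collector base class; the metric path and the stats helper are hypothetical:

    import diamond.collector

    class InstanceCPUCollector(diamond.collector.Collector):
        """Hypothetical collector tagging samples with instance UUIDs."""

        def collect(self):
            for instance_uuid, cpu_util in self._instance_cpu_stats():
                # Embedding the OpenStack UUID in the metric path is one
                # way a downstream consumer could tie the sample back to
                # the resource (and from there to the tenant) for billing.
                self.publish('instance.%s.cpu_util' % instance_uuid, cpu_util)

        def _instance_cpu_stats(self):
            # Placeholder: a real collector would query libvirt or the
            # hypervisor for per-instance CPU utilisation.
            return []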
15:36:49 what about the identity issue, is diamond extensible to reporting native openstack UUIDs?
15:36:53 ...and not re-inventing bugs
15:37:00 :D
15:37:01 sandywalsh: ok, then feel free to re-add this to the agenda whenever you've enough information to discuss it
15:37:03 (for resources, users, tenants etc.)
15:37:04 thomasem, thanks, was just going to ask for a link.
15:37:18 gordc: you bet
15:37:32 well mainly for resources, I guess users and tenants could be figured out in a round-about way
15:37:36 eglynn, I see it mostly as a tool for hypervisor/cpu/disk/etc polling ... not for openstack internals
15:37:50 eglynn, I see notifications -> events -> meters for that
15:38:12 sandywalsh: a-ha, OK
15:38:24 we need a way to collect data so our HPC customers can charge for %CPU of the host box (not the VM)
15:38:25 eglynn, also, it's host-side, not instance/user-side ... but I suppose it could be too.
15:38:51 diamond doesn't know who owns the vm consuming that cpu
15:39:15 it's easy enough to report the instance id.
15:39:18 dhellmann, so, that would be something we would have to figure out anyway, so perhaps we would need a new diamond collector for that
15:39:18 anyone know if this overlaps with healthnmon capability?
15:39:30 sandywalsh: our current CPU collector already does this
15:39:32 yep, so we'd need to add that info somewhere in the pipeline
15:39:55 dhellmann, so convert it to be a diamond module and leverage the strength of that community
15:40:27 there is a counter-argument ... which depends on the architecture of the multi-publisher.
15:40:45 if someone wants to do that, that's fine
15:40:53 I'm just trying to make sure we understand the requirements
15:40:58 it's not just "get the cpu utilization"
15:40:58 and that, potentially, we are building a proper input->do_something->output widget
15:41:06 it has to be tied to the instance owner for billing
15:41:11 it's not just about monitoring
15:41:35 if you can't generate meters based on the Counter format we defined, it's basically useless
15:42:07 dhellmann, depends on how you tackle the problem of CPU usage. If the hypervisor allows you to cap cpu based on flavor. But that's a longer discussion
15:42:28 like I said, I don't care if we change this, but it has to do what we do now, not just collect monitoring data
15:42:42 we can't ignore that requirement
15:43:01 jd__, by that logic we can't use ceilometer with any of the existing monitoring stack tools out there
15:43:33 sandywalsh: that depends entirely on whether you view ceilometer as the destination of data, or the source of data
15:43:38 dhellmann, agreed ... but I think the delta of effort is smaller than having to write all these collectors/handlers over again
15:43:49 dhellmann, it should be both
15:43:50 I thought we agreed that for monitoring, we would be a source but not a destination
15:43:58 for metering we are a destination
15:44:22 jd__: if it gives enough information to potentially *map* to the counter format, then that could work, no?
15:44:23 and for alarming, which is a subset of the monitoring data that's also tied to customer ids in a way that other tools don't do
15:44:34 if it's input->do_something->output as the core architecture, there is no real limitation on that
15:44:55 the limitation is whether the thing collecting the data has all of the metadata, too
15:45:14 eglynn: sure!
15:45:17 but we're running out of time in this meeting
15:45:28 again ... just planting a seed for thought :)
15:45:53 yeah, there's no harm in that :)
15:45:56 #topic Open discussion
15:46:53 2013.1.1 just released!
15:46:54 https://launchpad.net/ceilometer/grizzly/2013.1.1
15:47:08 amazing
15:47:08 props to apevec!
15:48:41 anything else or should I wrap up?
15:48:53 nowt else from me
15:49:36 * dhellmann has nothing to add
15:49:57 #endmeeting
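For reference on the Counter format raised at 15:41:35: a minimal sketch of building a sample in the grizzly/havana-era format, with the field list assumed from that period's ceilometer counter module; all values are illustrative. The ownership fields (user_id, project_id, resource_id) are exactly the metadata the Diamond discussion says an external collector would have to supply:

    import datetime

    from ceilometer import counter

    # Illustrative only -- the point of the debate above is that
    # collecting the volume is easy; filling in the ownership fields
    # needed for billing is the hard part.
    sample = counter.Counter(
        name='cpu_util',
        type=counter.TYPE_GAUGE,
        unit='%',
        volume=42.0,
        user_id='user-uuid',
        project_id='tenant-uuid',
        resource_id='instance-uuid',
        timestamp=datetime.datetime.utcnow().isoformat(),
        resource_metadata={},
    )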