21:01:35 <dhellmann> #startmeeting ceilometer
21:01:36 <openstack> Meeting started Wed May  8 21:01:35 2013 UTC.  The chair is dhellmann. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:37 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:39 <openstack> The meeting name has been set to 'ceilometer'
21:01:41 <dhellmann> #chair dhellmann
21:01:42 <openstack> Current chairs: dhellmann
21:01:47 <dhellmann> #link http://wiki.openstack.org/Meetings/MeteringAgenda
21:01:53 <dhellmann> Show of hands, who is around for the ceilometer meeting?
21:01:53 <sandywalsh> o/
21:01:56 <dhellmann> o/
21:01:56 <eglynn> o/
21:01:58 <litong> o/
21:01:58 <flwang> o/
21:01:59 <salmon_> o/
21:02:00 <n0ano> o/
21:02:00 <asalkeld> o/
21:02:13 <danspraggins> o/
21:02:21 <asalkeld> wow that was fast
21:02:25 <dhellmann> :-)
21:02:26 <dhellmann> #topic Tracking of bug #1176017, releasing of MIM
21:02:27 <uvirtbot> Launchpad bug 1176017 in ceilometer "Reinstate MongoDB testing with ming" [Critical,Confirmed] https://launchpad.net/bugs/1176017
21:02:30 <apmelton1> o/
21:02:32 <dhellmann> #info dhellmann has contacted the upstream developers again (today) and is waiting for a reply
21:02:50 <dhellmann> this issue is related to getting MIM into a state where we can have our tests depend on it
21:02:55 <asalkeld> we really need that in
21:03:00 <dhellmann> we're negotiating the best approach
21:03:20 <dhellmann> yeah
21:03:35 <DanD> o/
21:03:48 <dhellmann> I hate to fork it, since they're open to giving us the keys to make releases, but communication is slow.
21:04:07 <dhellmann> any other questions or comments on this before we move on?
21:04:14 <eglynn> yeah, forking would be a last resort IMO
21:04:32 <mrutkows> Joining Ceilometer meeting (late sorry)
21:04:44 <asalkeld> well, it could be a temporary measure
21:04:44 <dhellmann> welcome mrutkows, we're just getting started
21:04:47 <sandywalsh> dhellmann, we had to do the same thing with python-novaclient (which used to be a rackspace cloud client) ... no answer, we forked and moved on.
21:05:12 <dhellmann> sandywalsh: we've had an answer, including an offer of getting the password to push releases
21:05:30 <dhellmann> then we hit OSDS, and jd__ and I were incommunicado for a while
21:05:44 <sandywalsh> ah, gotcha
21:05:56 <dhellmann> let's give them a week and see where we get -- one of the guys is here in ATL so if he comes to the user group meeting tomorrow I can corner him
21:06:26 <dhellmann> jd__ wanted us to track this in our meetings until it's resolved, and to make sure everyone knew what was going on
21:06:33 <dhellmann> let's move on to the next topic
21:06:35 <dhellmann> #topic Folsom support with G2 candidate
21:06:40 <dhellmann> #link https://lists.launchpad.net/openstack/msg23332.html
21:06:44 <dhellmann> We have had several support requests or questions about old versions of ceilometer running with folsom.
21:06:47 <dhellmann> What level of support do we want/need to provide?
21:07:38 * dhellmann hears crickets
21:07:39 <asalkeld> not sure, it is expensive from a development pov
21:07:47 <eglynn> we need to be actively pushing to deprecate Folsom if at all possible
21:07:51 <DanD> dhellmann, is there a general policy on support of previous versions?
21:08:11 <DanD> or is there anyone using ceilometer with folsom in production?
21:08:32 <eglynn> at least getting the message across that the Folsom support is strictly time-limited
21:08:32 <dhellmann> DanD: there is, but the folsom version of ceilometer doesn't strictly fall under those rules because it wasn't incubated
21:08:41 <eglynn> DanD: ceilometer was only in incubation at Folsom time
21:08:53 <eglynn> (so not stable/folsom branch etc.)
21:09:07 <DanD> seems like thats the answer then :)
21:09:31 <eglynn> shardy was discussing the corresponding policy for Heat earlier
21:09:38 <dhellmann> that's what I think, too, but I thought we should talk about it :-)
21:10:07 <dhellmann> to be clear, this was a pre-release version of grizzly ceilometer with other folsom components
21:10:08 <eglynn> IIRC the idea was that heat:stable/grizzly would provide "best effort" support for folsom & grizzly
21:10:16 <dhellmann> so I suspect it was people trying out ceilometer against their existing clouds
21:10:43 <dhellmann> that *should* work if they run ceilometer on a separate server so they don't have the version conflict with dependencies (esp. the client libs)
21:10:44 <asalkeld> be careful of "best-effort" more like occasional-effort
21:10:46 <eglynn> not actively test against folsom, but also not knowingly merge anything to stable/grizzly that's known to break folsom
21:10:57 <dhellmann> asalkeld: +1 on occasional-effort
21:10:59 <eglynn> asalkeld: yep, I hear ya
21:11:17 <dhellmann> eglynn: we've already passed the point where our grizzly code runs cleanly with folsom
21:11:29 <dhellmann> after g2 we started deleting some of the compatibility stuff
21:11:44 <dhellmann> it is unusual to support mixed versions of components, isn't it?
21:11:51 <asalkeld> yea
21:11:59 <asalkeld> and makes things messy
21:12:13 <eglynn> dhellmann: OK, so it sounds like aggressive pressure to move ceilo users off folsom is called for
21:12:34 <asalkeld> (or hire devs)
21:12:48 <eglynn> (the more we bend over backwards to support folsom, the slower folks will move off it I guess)
21:13:13 <dhellmann> do we need to make a formal statement about that somewhere, or just tell people we can't help them?
21:13:14 <asalkeld> we are not a huge team
21:13:38 <eglynn> an agreed formal statement would be good
21:13:43 <flwang> I think we need a statement for the support policy on the wiki or somewhere
21:13:45 <dhellmann> ok
21:14:00 <dhellmann> jd__ isn't here, should I assign that to him? :-)
21:14:11 <asalkeld> yea
21:14:13 <eglynn> go for it! ;)
21:14:20 <DanD> so is anyone currently running ceilometer against folsom in production?
21:14:35 <dhellmann> DanD: I've no idea :-/
21:14:35 <asalkeld> not sure
21:15:10 <dhellmann> as I said, it's probably fine to do so if it is deployed in a way that separates the dependencies -- the RPC side didn't actually change all that much
21:15:44 <dhellmann> #action jd__ and dhellmann to write formal statement about limiting support for pre-grizzly versions of ceilometer
21:16:10 <dhellmann> perhaps we will create a small market for contract work :-)
21:16:31 <asalkeld> seriously niche
21:16:31 <flwang> a start up for ceilometer support :)
21:16:51 <dhellmann> ok, moving on
21:16:55 <dhellmann> #topic Priority of metadata features of SQL and HBase drivers
21:16:58 <dhellmann> I'm not sure what this one means. Who added it to the agenda?
21:17:18 <eglynn> that's from the release status meeting yesterday
21:17:43 <shengjie> this is about whether we should keep those bps as essential or just high priority
21:17:46 <eglynn> so we had two BPs on hbase & sqlalchemy metadata query support both with Essential priority set
21:18:03 <eglynn> this is a red flag to the release mgmt folks
21:18:08 <asalkeld> # TODO implement metaquery support
21:18:08 <asalkeld> if len(metaquery) > 0:
21:18:08 <asalkeld> raise NotImplementedError('metaquery not implemented')
21:18:35 <eglynn> (as "Essential" == " we can't release Havana without this")
21:19:03 <dhellmann> I'd be comfortable saying that about sqlalchemy, but not hbase
21:19:03 <flwang> for sqlalchemy, I think it's Esstential
21:19:10 <eglynn> I think jd__'s motivation in setting these Essential was to ensure dev effort is dedicated to them early
21:19:11 <asalkeld> I think the sql version is more important
21:19:37 <eglynn> OK we've already downgraded hbase to High (with shengjie's agreement)
21:19:55 <shengjie> we are planning to do the HBase one anyway
21:20:00 <DanD> at the summit there was a discussion about removing some drivers that didn't have full support
21:20:04 <shengjie> so it doesn't make that much difference
21:20:24 <DanD> what is the requirement in terms of updates and how they propagate into the drivers?
21:20:38 <eglynn> so conclusion is to keep sqlalchemy as Essential?
21:20:48 <flwang> agree +1
21:20:49 <asalkeld> I think so eglynn
21:21:14 <eglynn> DanD: yep the idea was a driver wouldn't be shipped until considered feature-complete
21:21:15 <asalkeld> just means we _really_ need to find someone to do it
21:21:26 <sandywalsh> sorry, vpn died, what's the concern with sqlalchemy?
21:21:29 <dhellmann> DanD: I think that we agreed just to do that for new drivers, right?
21:21:33 <eglynn> (various ideas around keeping on a branch or in a contrib dir until then ...)
21:21:50 <asalkeld> sandywalsh, just we need everything implemented
21:21:59 <eglynn> dhellmann: yep, just for new drivers IIRC
21:22:04 <dhellmann> sandywalsh: whether finishing the metadata feature is "essential" or just "high" priority
21:22:19 <dhellmann> while we're at it, how does this apply to the new alarm stuff asalkeld is doing?
21:22:28 <dhellmann> I'm behind on reviews, are there changesets to update all of the storage drivers?
21:22:28 <DanD> dhellmann yes, that was the discussion. what I am trying to understand is what happens when a new API feature is added. who owns adding support for all the drivers?
21:22:35 <asalkeld> well I have done mongo and sql
21:22:36 <dhellmann> DanD: right, like alarms
21:22:44 <sandywalsh> dhellmann, right, we'll likely be going down that road as mysql is our main weapon ... not sure about our use of the metadata feature though
21:23:01 <epende> If the v2 API depends heavily on metadata, that would seem to make it hard to ship w/o
21:23:11 <dhellmann> sandywalsh: the metadata feature here is for querying against metadata values on resources, so it's pretty key for most realistic use cases
21:23:49 <sandywalsh> dhellmann, unless we point back to the underlying event as we'll be doing. But yes, we'll need "extra" attributes on aggregated metrics
21:23:58 <sandywalsh> ... at some point
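The metadata query feature being prioritized here filters resources by their metadata values, with keys arriving through the API as e.g. 'metadata.instance_type'. A minimal in-memory sketch of the matching logic — illustrative only; the name match_metaquery and the key-prefix handling are assumptions of this sketch, not ceilometer code:

```python
def match_metaquery(resource_metadata, metaquery):
    """Return True only if every metaquery constraint matches.

    Keys are assumed to carry the API's 'metadata.' prefix, which is
    stripped before the lookup (an assumption of this sketch).
    """
    for key, expected in metaquery.items():
        field = key[len('metadata.'):] if key.startswith('metadata.') else key
        if resource_metadata.get(field) != expected:
            return False
    return True
```

An empty metaquery matches everything, which is why the guard pasted earlier in the meeting only raises NotImplementedError when the dict is non-empty.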
21:23:58 <dhellmann> I see a few options
21:24:27 <dhellmann> I'm not sure what's best, though.
21:24:35 <asalkeld> this is the problem with adding new db drivers, adding new features gets more and more complex
21:24:41 <dhellmann> I think if we do drop drivers, we should do it later in the release cycle
21:25:11 <DanD> we should at least have a catalog of "required" metadata and test for those
21:25:16 <sandywalsh> some db features should be considered essential and some optional (left to the driver maintainer)
21:25:33 <eglynn> yep, once released, v. hard to drop if not initially marked as contrib/experimental
21:25:38 <sandywalsh> metadata on meters (if that's the only way to put extra attributes on meters) ... should be essential
21:25:58 <asalkeld> yip
21:26:12 <eglynn> maybe we need to distinguish between "tier one" and "tier two" storage drivers
21:26:34 <sandywalsh> eglynn, t1 & t2 db api features, not drivers
21:26:36 <dhellmann> our feature set is really small, though, is it worth that complexity?
21:27:13 <dhellmann> otoh, if a driver doesn't support "alarms" but does support "meters" then it is still useful for some deployments
21:27:21 <sandywalsh> that is, I shouldn't have to implement events in all drivers. So long as there is one reference driver implementation.
21:27:24 <dhellmann> so maybe you're right, not "tiers" but list which features are fully supported
21:27:25 <shengjie> asalkeld: +1 on the APIs being split into two tiers
21:27:37 <DanD> so if an API feature is required and no one cares enough to update the driver, then it no longer qualifies and gets dropped?
21:27:52 <dhellmann> DanD: no, but we do document that case
21:27:57 <dhellmann> that way deployers can be informed
21:28:00 <shengjie> sorry meant to +1 sandy
21:28:08 <asalkeld> no worries
21:28:11 <dhellmann> and maybe if a driver isn't being kept up to date with lots of new features, we consider dropping it
21:28:37 <sandywalsh> dhellmann, +1
21:28:37 <eglynn> or moving it out to a contrib dir
21:28:47 <dhellmann> eglynn: meh, that's what git history is for
21:28:50 <shengjie> some of the features might be easy for sql , not easy for no-sql dbs
21:28:50 <litong> not only developers but also users; if an implementation can not keep up or has performance issues, then people will stop using it.
21:28:56 <asalkeld> well that indicates developer interest, not user interest
21:29:00 <dhellmann> shengjie: and the other way around, too
21:29:13 <DanD> dhellmann so is it the responsibility of the api server code to handle the fact that a driver doesn't implement a query?
21:29:13 <sandywalsh> it's a little tricky, since nosql is so easy to extend the schema in crazy ways, but a large effort in sql ... the nosql issue is when it comes to clustering, where sql shines.
21:29:14 <shengjie> dhellmann: true
21:29:19 <litong> I think it should be really just clearly documented. there is no need to really drop anything.
21:29:20 <eglynn> dhellmann: well contrib allows folks to continue to use it (at their own risk)
21:29:26 <dhellmann> DanD: no, the API may just return errors
21:29:48 <epende> dhellmann would no-op's be acceptable?
21:29:51 <sandywalsh> so, the mongo people can push the schema in ways that add a lot of work for the sql maintainers
21:30:03 <litong> @dhellmann, or the api will just say it can not support that request.
21:30:05 <dhellmann> epende: no, no-ops make it look like it's working when it isn't
21:30:26 <epende> good point
21:30:30 <dhellmann> sandywalsh: we're all the same people, so we should be able to control for that
21:30:42 <dhellmann> litong: right, via an error message
21:30:53 <dhellmann> I don't want to complicate the db api by having a way for the caller to ask "what do you do"?
21:31:03 <litong> status code 505
21:31:04 <sandywalsh> dhellmann, +1
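The position dhellmann is taking — drivers raise rather than no-op, the API layer surfaces an error, and callers get no "what do you do" introspection method — could look roughly like this. Hypothetical names throughout; the exact status code was left open in the meeting, so 501 below is illustration only:

```python
import json

def get_resources(metaquery=None):
    """A hypothetical driver method whose metaquery chunk is unimplemented."""
    if metaquery:
        raise NotImplementedError('metaquery not implemented')
    return []

def handle_request(driver_call, *args, **kwargs):
    """Translate a driver's NotImplementedError into an HTTP error
    instead of silently returning an empty (and misleading) result."""
    try:
        return 200, json.dumps(driver_call(*args, **kwargs))
    except NotImplementedError as err:
        # 501 Not Implemented; the meeting deferred the exact code choice.
        return 501, json.dumps({'error': str(err)})

status, _ = handle_request(get_resources, metaquery={'metadata.x': '1'})
assert status == 501  # unsupported chunk -> error, never a no-op
```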
21:31:46 <dhellmann> ok
21:31:55 <sandywalsh> then the question becomes, should the functionality go in tests.storage.base or test.storage.test_<foo>
21:32:03 <litong> if anyone really wants to support certain driver, then they will have to work harder to provide and maintain the driver.
21:32:08 <sandywalsh> if it's in the base, it should work for all drivers
21:32:10 <litong> this is Open Source after all.
21:32:20 <sandywalsh> if it's in the driver-specific test, it's optional
21:32:53 <dhellmann> sandywalsh: we can segregate the tests for each feature and use scenarios to include the drivers that should be tested
21:32:53 <litong> @sandywalsh, I think we are saying the same thing.
21:32:54 <epende> 505 or 501?
21:32:58 <dhellmann> but that's an implementation detail
21:33:03 <dhellmann> we can work out the exact error code later
21:33:23 <sandywalsh> litong, yep, I think so
21:33:27 <litong> @epende, I can go either way.
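dhellmann's point about segregating the tests per feature and using scenarios to pick the drivers can be illustrated with a hand-rolled scenario table (OpenStack test suites typically get this mechanism from the testscenarios library; the names and driver lists below are invented for illustration):

```python
# feature -> drivers that claim full support and must pass its tests
DRIVER_SCENARIOS = {
    'meters': ['mongodb', 'sqlalchemy', 'hbase'],
    'metaquery': ['mongodb'],  # sqlalchemy/hbase support still pending
}

def drivers_for(feature):
    """Drivers enrolled in a feature's test scenarios."""
    return DRIVER_SCENARIOS.get(feature, [])

def run_feature_tests(feature, test_func):
    """Run one feature's test function once per claiming driver."""
    return {driver: test_func(driver) for driver in drivers_for(feature)}
```

Tests that live in the shared base then run only against drivers listed for that feature, while driver-specific tests stay optional.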
21:33:27 <dhellmann> so it sounds like we've agreed that the db api will be defined in "chunks" and that a driver has to support a whole chunk or not claim to support it, is that right?
21:33:42 <dhellmann> and then we will document that support
21:33:46 <eglynn> sounds reasonable
21:33:49 <sandywalsh> yep
21:33:51 <asalkeld> yea, ok
21:34:09 <shengjie> agree
21:34:11 <eglynn> clearly doc'd DB v. "functional chunk" matrix
21:34:27 <dhellmann> #agreed the db api will be defined in "chunks" and a driver has to support a whole chunk or not claim to support it. we will document which features are supported by each driver
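One way the agreed "chunks" could be recorded — purely for generating the documented support matrix, not for runtime introspection by API callers (which dhellmann argued against above) — is a per-driver declaration. All class and attribute names here are hypothetical, not ceilometer code:

```python
class StorageDriverBase:
    # every driver must carry the base meter chunk; 'events' and
    # 'alarms' are optional chunks a driver claims whole or not at all
    SUPPORTED_CHUNKS = frozenset(['meters'])

class MongoDBDriver(StorageDriverBase):
    SUPPORTED_CHUNKS = frozenset(['meters', 'events', 'alarms'])

class HBaseDriver(StorageDriverBase):
    SUPPORTED_CHUNKS = frozenset(['meters'])  # events/alarms pending

def support_matrix(drivers):
    """Driver-vs-chunk matrix for the docs the team agreed to publish."""
    return {d.__name__: sorted(d.SUPPORTED_CHUNKS) for d in drivers}
```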
21:34:34 <sandywalsh> and events are an optional chunk :p
21:34:50 <dhellmann> sandywalsh: seems fair enough
21:34:53 <litong> yes, clearly document what is working and what is not. with implementation returning either 501 or 505
21:35:03 <dhellmann> we have the base API now, the event API, and the alarm API
21:35:20 <epende> litong +1
21:35:40 <dhellmann> ok, good. anything else to add before moving on?
21:36:14 <shengjie> who should i talk to in terms of
21:36:25 <shengjie> adding event APi alarm API to HBase
21:36:48 <dhellmann> shengjie: probably sandywalsh
21:37:05 <sandywalsh> shengjie, looks like the alarm api has landed already and I should be putting the event api up this week.
21:37:14 <sandywalsh> (in the db anyway)
21:37:26 <shengjie> sandywalsh: cool, we'll take it offline then, Sandy
21:37:28 <dhellmann> ok, next topic
21:37:31 <dhellmann> #topic Milestone assignment of Havana BPs
21:37:34 <sandywalsh> the HP guys are going to be working on a web api proposal for events
21:37:42 <eglynn> that's another request from the release mgmt meeting yesterday
21:37:44 <dhellmann> #link https://blueprints.launchpad.net/ceilometer/havana
21:37:53 <eglynn> that Havana BPs are lined up with individual milestones if possible
21:37:58 <eglynn> h1, h2, h3 etc.
21:38:09 <dhellmann> so we need to declare when we think we will finish each blueprint?
21:38:10 <eglynn> makes it easier for release mgr to track progress
21:38:23 <eglynn> yes, an initial estimate at least
21:38:30 <dhellmann> ok
21:38:39 <dhellmann> so, everyone should go do that this week! :-)
21:38:40 <eglynn> (BPs can of course be bumped to a later milestone if necessary)
21:38:58 <eglynn> yep, please do folks, it'll make Thierry happy :)
21:39:08 <dhellmann> #action blueprint-owners, set milestones on all blueprints
21:39:17 <asalkeld> is that only approved ones?
21:39:31 <eglynn> yes, I would think so
21:39:32 <dhellmann> do we have any listed for havana that aren't approved?
21:39:42 <dhellmann> ah, yeah, "drafting"
21:40:02 <DanD> dhellmann what is the deadline for submitting blueprints?
21:40:11 <eglynn> "drafting" == "under the radar for now" ?
21:40:22 <dhellmann> DanD: that's a question for jd__, but I thought it was before the summit
21:40:43 <sandywalsh> I have a "drafting" and to me it means "don't know what monsters live under those blankets just yet"
21:40:54 <eglynn> LOL :)
21:41:10 <shengjie> I have one waiting for approval
21:41:12 <sandywalsh> :)
21:41:17 <asalkeld> crazy level of project management for an opensource project IMO
21:41:28 <dhellmann> shengjie: prod jd__ to look at it?
21:41:40 <dhellmann> asalkeld: I think it gets more important when we have to coordinate with the doc team
21:41:53 <sandywalsh> asalkeld, it's all about the packaging apparently
21:41:57 <eglynn> re. submission deadline, I think a BP can still be submitted as long as there's a developer willing to pick it up
21:42:06 <eglynn> DanD: ^^^
21:42:25 <shengjie> dhellmann: eglynn is looking at it :)
21:42:31 <DanD> dhellmann we put in an API blueprint but are still evaluating whether the current v2 API meets our requirements. I was assuming we would need to do an implementation blueprint if we find gaps
21:43:06 <DanD> or are we covered with the one I reviewed at the summit?
21:43:32 <dhellmann> DanD: good question :-/
21:43:39 <dhellmann> is that one on the havana list?
21:43:52 <DanD> didn't see it. but we can check again
21:44:13 <dhellmann> ok. if it's not there, you should probably get something together asap
21:44:18 <DanD> ok
21:44:40 <dhellmann> any other questions about blueprints?
21:45:17 <dhellmann> ok, moving on
21:45:18 <dhellmann> #topic Open discussion
21:45:52 <salmon_> I have a question
21:45:57 <flwang> dhellmann, where can I get the calendar to add it into my Notes?
21:45:58 <asalkeld> I could do with some ceilometerclient reviews
21:46:09 <dhellmann> flwang: the havana schedule?
21:46:21 <flwang> nope, the weekly meeting
21:46:23 <dhellmann> asalkeld: I have a huge backlog, but I'll try to prioritize those
21:46:31 <asalkeld> k
21:46:37 <salmon_> I'm thinking about monitoring all users actions. Is ceilometer good place to implement such functionality?
21:46:53 <dhellmann> flwang: there are links to HTML and ICS versions of the calendar on https://wiki.openstack.org/wiki/Meetings/MeteringAgenda#Agenda
21:47:04 <sandywalsh> salmon_, the event mechanism will help with that.
21:47:10 <dhellmann> oops, I mean https://wiki.openstack.org/wiki/Meetings/MeteringAgenda#Weekly_Metering_meeting
21:47:11 <eglynn> asalkeld: I'll pick up a few 1st thing tmrw also
21:47:29 <asalkeld> thanks eglynn
21:47:32 <dhellmann> asalkeld: yeah, it will be at least 14 hrs for me, too
21:47:41 <sandywalsh> salmon_, storing all the events that come out of nova, including the tenant id and Request ID ... essentially the entire user action.
21:47:56 <dhellmann> salmon_: what sandywalsh said :-)
21:48:03 <salmon_> sandywalsh: but, is ceilometer designed to do such things?
21:48:09 <sandywalsh> salmon_, it will be :)
21:48:11 <salmon_> or is it just side effect?
21:48:15 <dhellmann> salmon_: we're building that feature during this release
21:48:20 <sandywalsh> salmon_, that's the purpose of the feature
21:48:21 <flwang> I remember that you posted a google calendar last time, then I added this meeting into my notes calendar
21:48:25 <dhellmann> that's what the event feature is for
21:48:26 <salmon_> ah, good :)
21:48:35 <sandywalsh> salmon_, if you need something today, you can look at the ServerActions table in nova
21:48:48 <dhellmann> flwang: https://www.google.com/calendar/ical/h102rn64cnl9n2emhc5i3hjjso%40group.calendar.google.com/public/basic.ics
21:49:04 <sandywalsh> salmon_, but that's not a CM thing.
21:49:06 <salmon_> sandywalsh: I need it also for glance so ServerActions is just part of solution
21:49:07 <dhellmann> #link https://www.google.com/calendar/ical/h102rn64cnl9n2emhc5i3hjjso%40group.calendar.google.com/public/basic.ics
21:49:11 <litong> @flwang, I tested that ics file and it worked great.
21:49:16 <litong> on notes of course.
21:49:20 <sandywalsh> salmon_, true
21:49:27 <dhellmann> salmon_: the event monitor in ceilometer will be able to listen to events from everywhere
21:49:32 <flwang> cool, that's what I want, thanks dhellmann and litong
21:49:41 <salmon_> dhellmann: cool
21:51:02 <dhellmann> does anyone else have anything?
21:51:05 <shengjie> dhellmann: btw, since we need more people working on HBase :) can u +2 this one https://review.openstack.org/#/c/28316/
21:51:33 <dhellmann> shengjie: that's on my review backlog, but I'm dealing with some internal stuff at dh this week so I'm falling behind :-/
21:52:09 <shengjie> dhellmann: I will force jd__ to +2 it tomorrow, no worries :)
21:52:22 <dhellmann> sounds good :-)
21:52:34 <dhellmann> if that's all we have, we can end a few minutes early...
21:52:46 <litong> if anyone can help review this patch, that will be great.
21:52:47 <litong> https://review.openstack.org/#/c/27835/
21:52:47 <asalkeld> later all
21:52:52 <eglynn> cool, thanks all!
21:53:14 <dhellmann> ok, thanks everyone!
21:53:18 <dhellmann> #endmeeting