15:00:40 <jd__> #startmeeting ceilometer
15:00:41 <openstack> Meeting started Thu Apr  3 15:00:40 2014 UTC and is due to finish in 60 minutes.  The chair is jd__. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:42 <edhall> bye
15:00:42 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:44 <openstack> The meeting name has been set to 'ceilometer'
15:00:53 <dhellmann> o/
15:01:00 <ildikov_> o/
15:01:02 <nsaje> o/
15:01:03 <jd__> hey hey
15:01:08 <admin0> hey jd__
15:01:19 <nprivalova> hi
15:01:25 <admin0> hello
15:01:26 <eglynn> o/
15:01:35 <nealph> o/
15:01:45 <gibi> o/
15:01:51 <jd__> #topic Release status
15:01:55 <jd__> rc1 is out \o/
15:02:03 <nealph> that's fantastic!
15:02:03 <eglynn> w00t!
15:02:07 <ildikov_> \o/ :)
15:02:09 <jd__> doesn't look like we need a rc2 AFAIK
15:02:21 <jd__> eglynn is going to talk about known issues for the release notes
15:02:36 <admin0> can you guys also debate about a new publisher .. graphite ?
15:02:37 <ildikov_> jd__: what about the translation issue around all projects?
15:02:39 <sileht> o/
15:02:39 <eglynn> ... /me was just thinking we should collect known gotchas for the release notes
15:02:43 <admin0> that i submitted the blueprint to
15:03:02 <eglynn> #link https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse
15:03:08 <jd__> ildikov_: dunno
15:03:24 <eglynn> at least two potential issues I can think of ...
15:03:24 <jd__> admin0: not now, but yes
15:03:40 <admin0> ok
15:03:54 <eglynn> 1. the "implicit" requirement on the happybase version, which is not specified in the official requirements
15:03:58 <ildikov_> jd__: as I saw, it was targeted to RC2 in several projects, but if we do not have any other reason for an RC2, then I was just asking :)
15:04:13 <eglynn> nprivalova: ^^^ does that make sense?
15:04:56 <nprivalova> eglynn: AFAIK correct requirements were merged
15:05:03 <nprivalova> eglynn: let me check
15:05:17 <eglynn> nprivalova: merged to master, but not milestone-proposed?
15:05:56 <nprivalova> eglynn: yep
15:06:03 <nprivalova> #link https://review.openstack.org/#/c/82438/
15:06:18 <eglynn> nprivalova: so not in RC1 in that case
15:06:25 <nprivalova> eglynn: right
15:06:37 <ildikov_> nprivalova: is it enough to have !=0.7 in the requirements?
15:07:12 <nprivalova> ildikov_: we support >0.4 and !=0.7
15:07:37 <nprivalova> I think we may add it to the release notes
15:07:55 <jd__> cool
15:08:00 <jd__> anything else?
15:08:01 <ildikov_> nprivalova: I just remembered that there was something about 0.6 too, that it is buggy like 0.7, but maybe it's not my day for questions :)
15:08:25 <eglynn> nprivalova: https://github.com/openstack/ceilometer/blob/master/requirements.txt is still happybase>=0.4,<=0.6
15:08:52 <eglynn> nprivalova: ^^^ isn't that problematic? (the ==0.6 case)
15:09:14 <nprivalova> eglynn: it should be synced with global
15:09:21 <nprivalova> eglynn: ==0.6 is ok
15:09:46 <nprivalova> ildikov_: it was just a mistake in the message :) 0.6 is ok
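
(For reference, the happybase constraint described here, i.e. anything from 0.4 up except the buggy 0.7, would read as follows in requirements.txt; this is a reconstruction from the discussion, not necessarily the exact merged change:)

    happybase>=0.4,!=0.7
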
15:09:48 <nprivalova> besides, I'd like to discuss https://bugs.launchpad.net/ceilometer/+bug/1288284 . Shall we add it to the known issues?
15:09:49 <eglynn> nprivalova: a-ha, ok ... I thought we couldn't sync it up on Friday due to the dependency freeze (as you discussed with ttx)
15:09:50 <uvirtbot> Launchpad bug 1288284 in ceilometer "[Hbase]Resource with multiple meters is not handled correctly" [High,In progress]
15:10:04 <ildikov_> nprivalova: yep, it's on review now, I mean the sync
15:10:15 <ildikov_> nprivalova: a-ha, ok, cool :)
15:10:24 <eglynn> nprivalova: yep, sounds reasonable to note 1288284
15:10:30 <nprivalova> eglynn: ok
15:10:59 <eglynn> ok the other one I had in mind was the issue with multi-process/multi-worker collector running against postgres
15:11:15 <eglynn> (as discussed last week with gordc)
15:11:33 <nprivalova> yep, and DBDeadlock issue
15:11:52 <eglynn> seems we should include a caveat not to scale out the collector if running against postgres
15:11:57 <ildikov_> eglynn: is only postgres affected in this whole DB issue, in the end?
15:12:19 <eglynn> ildikov_: TBH I'm not 100% sure about that
15:13:06 <eglynn> ildikov_: discussed briefly with gordc yesterday http://eavesdrop.openstack.org/irclogs/%23openstack-ceilometer/%23openstack-ceilometer.2014-04-02.log
15:13:16 <ildikov_> eglynn: ok, I will try to check after meeting in the logs and mails
15:14:02 <eglynn> #action eglynn wrangle release notes "known issues" entries from SMEs
15:14:05 <nprivalova> we will test performance soon and will check it anyway
15:14:45 <eglynn> k, we can prolly move on if no-one else has other release-notable items in mind
15:15:26 <jd__> ack
15:15:34 * admin0 waits :)
15:15:36 <jd__> #topic Tempest integration
15:15:54 * isviridov also waits
15:16:24 <nprivalova> unfortunately no updates. We still have one collector (as is done by default) and the tests fail
15:16:46 <jd__> ok
15:16:53 <jd__> #topic Release python-ceilometerclient?
15:17:03 <jd__> I think we should do that now
15:17:16 <eglynn> already done, a couple of days ago :)
15:17:18 <ildikov_> it's released \o/
15:17:29 <jd__> oh great, I was just checking and still saw a few patches
15:17:32 <eglynn> #link https://pypi.python.org/pypi/python-ceilometerclient/1.0.10
15:17:46 <jd__> thanks eglynn
15:17:56 <jd__> #topic Open discussion
15:18:24 <eglynn> ... /me just realized he released 1.0.10 on April Fool's Day ;)
15:18:45 <ildikov_> eglynn: LOL :)
15:19:15 <admin0> graphite publisher :) .. so that people do not have to install all the ceilometer components, but can just get the graphs directly in graphite
15:19:16 <ildikov_> eglynn: it was a bit unbelievable that it finally got released ;)
15:19:30 <eglynn> ildikov_: ... it was a long time coming all right! :)
15:19:51 <eglynn> admin0: so do you want feedback on the idea or the code?
15:20:05 <eglynn> admin0: is it out for (WIP) review on gerrit?
15:20:11 <admin0> i have not uploaded the code yet .. i was waiting for your views on it before uploading
15:20:30 <admin0> someone said i should check before writing a lot of code
15:20:50 <nprivalova> it was me :)
15:20:50 <eglynn> admin0: but you do have some basic working code already, right?
15:21:05 <admin0> i do have basic working code ready
15:21:18 <eglynn> admin0: (... maybe no tests, docco yet, but that's fine for WIP)
15:21:46 <nprivalova> the question from me was: why do we need a separate publisher for graphite? why is UDP not enough?
15:21:56 <ildikov_> BTW, do we want to have specialized publishers?
15:22:05 <dhellmann> does graphite want a special format for its packets?
15:22:12 <admin0> nprivalova: with udp, i was just getting the metrics, and i needed a separate daemon running to convert them to the graphite format
15:22:48 <eglynn> admin0: ... my recommendation would be throw it up on gerrit so as to get early eyes on it, which would help to answer nprivalova's question in a more concrete way
15:22:51 <nprivalova> aha, so we may need some converter, not a publisher, actually?
15:23:23 <ildikov_> I think it is too specific to be placed inside Ceilometer
15:23:53 <eglynn> nprivalova: the publisher is currently the end of the sink chain, but I wonder, could this conversion be done in a transformer?
15:24:04 <admin0> if ceilometer can send to a tcp port in a format like ceilometer.tenant(name/id).vm(name/id).disk.write.bytes 10 timestamp ... that would do
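
(Aside: a minimal sketch of the publisher admin0 describes, not his actual patch. It formats each sample per Graphite's plaintext protocol, "<path> <value> <timestamp>\n", and pushes it over TCP; the sample fields and metric-path layout here are assumptions:)

    import socket
    import time

    def publish_to_graphite(samples, host='127.0.0.1', port=2003):
        # 2003 is Graphite's default plaintext-protocol port
        sock = socket.create_connection((host, port))
        try:
            for s in samples:
                # hypothetical sample dict with tenant, resource, meter name, volume
                path = 'ceilometer.%(tenant)s.%(resource)s.%(name)s' % s
                line = '%s %s %d\n' % (path, s['volume'], int(time.time()))
                sock.sendall(line.encode('utf-8'))
        finally:
            sock.close()
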
15:24:24 <nprivalova> eglynn: I'm thinking about transformer too...
15:25:44 <nprivalova> so I think that if the code is ready, admin0 may publish it and we will think about moving the conversion to a transformer or something else
15:26:10 <ildikov_> eglynn: do we want to have client-specific transformers in Ceilometer? maybe I'm on the wrong track with this question, but it seems a bit out of scope to me
15:26:13 <eglynn> nprivalova: though the publisher still needs the transformed data in sample-like form https://github.com/openstack/ceilometer/blob/master/ceilometer/publisher/utils.py#L74
15:26:46 <eglynn> ildikov_: well I suppose it doesn't necessarily need to live in the ceilometer tree
15:27:11 <ildikov_> eglynn: a-ha, ok, fair enough
15:27:14 <dhellmann> ildikov_: I don't see a problem with adding this. It's a very commonly used tool, and IIRC we talked about adding this eventually when we first made publishers pluggable.
15:27:15 <eglynn> ildikov_: ... but our pipeline should probably be flexible enough to accommodate an externally provided plugin
15:27:41 <nprivalova> but we may provide a mechanism to add such converters
15:27:46 <admin0> if the pipeline can send over tcp and format the output as a string, it should work
15:28:11 <eglynn> so I think the first step is to look at admin0's code and see if the conversions can be performed in a transformer
15:28:26 <nprivalova> eglynn: agreed
15:28:28 <eglynn> (while re-using the existing UDP publisher as nprivalova suggests)
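
(Aside: a hedged sketch of how such a pipeline might look in pipeline.yaml, re-using the stock UDP publisher as nprivalova suggests; the graphite_format transformer named here is hypothetical, and the exact file layout varies by release:)

    -
        name: graphite_pipeline
        interval: 60
        meters:
            - "*"
        transformers:
            - name: graphite_format    # hypothetical transformer doing the conversion
        publishers:
            - udp://graphite-host:2003
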
15:28:53 <eglynn> admin0: can you propose a patch on gerrit with your code?
15:29:00 <admin0> i will start on it right now
15:29:03 <ildikov_> dhellmann: I see your point, but I'm also sure that there are several commonly used tools on the market, so I just wanted to highlight this question earlier rather than later, to discuss the options
15:29:14 <eglynn> admin0: ty!
15:29:47 <dhellmann> ildikov_: I'd like to think we would take a big-tent approach for publishers, like we do with storage drivers
15:30:04 <admin0> i will beautify my code, add comments on why i am doing things .. and then submit a review
15:30:53 * isviridov wonders if it is time for the MagnetoDB blueprints
15:30:54 <eglynn> dhellmann: ... which is a nice segue into isviridov's topic
15:31:04 <ildikov_> dhellmann: we have some issues now with the supported DB drivers; they need a more or less complete refactor/redesign now. I just would like to avoid similar issues with publishers
15:31:55 <nprivalova> isviridov: please move on :)
15:32:08 <isviridov> So, I would suggest support of our MagnetoDB in Ceilometer
15:32:11 <isviridov> #link https://blueprints.launchpad.net/ceilometer/+spec/magnetodb-driver
15:32:33 <dhellmann> ildikov_: yes, true. we should add the tests if possible, rather than refusing new plugins
15:33:02 <eglynn> isviridov: key questions in my mind ...
15:33:19 <isviridov> It is key-value and perfect for time-series data
15:33:21 <isviridov> eglynn: please
15:33:37 <eglynn> isviridov: ... 1. would it be providing full or partial feature parity with the mongo/sqla drivers?
15:34:03 <eglynn> isviridov: ... 2. would you be sticking around as driver maintainer for at least the medium term?
15:34:14 <ildikov_> dhellmann: yes, I agree in this point, we should surely focus more on proper tests this time
15:34:46 <eglynn> isviridov: ... (we need to avoid "code drops" lacking active maintainership IMO)
15:34:54 <dhellmann> eglynn: +1
15:35:23 <ildikov_> eglynn: +1
15:35:59 <isviridov> eglynn: 1: it has an http interface and actually abstracts the database behind it.
15:36:39 <eglynn> isviridov: I'm thinking more of the "glue code" that lies between the ceilometer API and the native DB API
15:36:47 <isviridov> eglynn: 2. Yes. We also have plans for future integration with Ceilometer for metrics writing and collecting
15:37:43 <eglynn> isviridov: for each storage driver we have logic that maps between a high-level primitive like get_resources and the corresponding DB-specific queries
15:38:08 <eglynn> isviridov: in some cases the semantics of these primitives are incomplete in some subset of the drivers
15:39:09 <eglynn> isviridov: so the question is not whether magnetoDB can do everything database-y
15:39:18 <isviridov> eglynn: yep, it is clear. We will drive the implementation of that part to make MagnetoDB + Ceilometer better
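
(Aside: a hypothetical skeleton of the "glue code" eglynn describes, i.e. a storage driver mapping Ceilometer's high-level primitives onto backend-specific queries. The method names follow ceilometer.storage.base; the magnetodb_client module and all of its calls are invented for illustration:)

    import magnetodb_client  # hypothetical client for MagnetoDB's HTTP/JSON API

    from ceilometer.storage import base

    class Connection(base.Connection):
        """Sketch only: store and retrieve samples via MagnetoDB."""

        def __init__(self, conf):
            self.client = magnetodb_client.Client(conf.database.connection)

        def record_metering_data(self, data):
            # one key-value put per sample; a real driver would batch
            self.client.put_item(table='meter', item=data)

        def get_resources(self, user=None, project=None, **kwargs):
            # translate the high-level primitive into a JSON-defined query
            filters = {'user_id': user, 'project_id': project}
            for item in self.client.query(table='resource', filters=filters):
                yield item
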
15:39:35 <ildikov_> isviridov: did I see correctly that you plan to use Cassandra behind it?
15:40:35 <nprivalova> So Magneto looks like an additional layer between the API and the DBs... In future we may remove all DB code from Ceilometer and use only the Magneto API :)?
15:40:56 <isviridov> ildikov_: Cassandra is our primary storage, the best tested and with the best performance.
15:41:43 <isviridov> ildikov_: but we are planning support of HBase as one of the next ones and have started devstack integration for that.
15:42:02 <eglynn> jd__: you've already got a partially implemented "direct" cassandra driver for ceilo, or?
15:42:17 <jd__> eglynn: only write is supported
15:42:22 <ildikov_> isviridov: ok, I just asked because of future testing issues mainly
15:42:34 <eglynn> jd__: a-ha, k
15:42:44 <isviridov> nprivalova: exactly
15:43:07 <eglynn> (just wondering if the same cassandra benefits, i.e. TSD goodness etc., could be got via magnetoDB)
15:43:46 <isviridov> ildikov_: we are done with devstack integration and have tempest running on the gates.
15:43:57 <ildikov_> eglynn: good point, I was thinking about the same
15:44:07 <eglynn> isviridov: so this TSD goodness in magnetoDB you speak of, is that mostly derived from the Cassandra underpinnings?
15:44:09 <nprivalova> I have two more questions. 1. What about SQL? and 2. What about complex queries?
15:44:28 <isviridov> eglynn: actually we are trying to stay very close to cassandra in functionality, to keep its performance
15:45:16 <eglynn> isviridov: and if I deployed magnetoDB backed by say hbase, maybe I wouldn't see the same benefits when manipulating TSD?
15:46:51 <isviridov> eglynn: Implementing it on top of C* is the cheapest way, and C* is very easy to administer (heterogeneous cluster). But with HBase we have key-value storage as well. And I believe that is the main reason for the performance.
15:47:15 <isviridov> eglynn: but yes, you are right. It will be somewhat different
15:48:06 <eglynn> isviridov: cool, I was just wondering if the suitability for storing TSD in magnetoDB is more an attribute of the chosen backing store than of magnetoDB *itself*
15:48:16 <eglynn> isviridov: ... and sounds like the answer is "yes"
15:48:55 <isviridov> eglynn: awesome!
15:49:08 <nprivalova> it's a very 'openstack-ish' way... one service using another service. With all the backend implementations in Ceilometer we may control performance (theoretically). Now isviridov suggests a service that (theoretically) will provide the best performance for all backends in a year (2, 3...)
15:50:21 <nprivalova> in theory it sounds great :) but what is the status of Magneto? Tests, integration testing and so on?
15:50:36 <eglynn> isviridov: is magnetoDB officially incubating?
15:50:41 <isviridov> nprivalova: The main benefit for me is in maintaining one storage, and yes, MagnetoDB scales horizontally, so performance is close to C*
15:50:42 <eglynn> isviridov: (... as an openstack project)
15:51:44 <ildikov_> isviridov: what about nprivalova's two questions?
15:51:57 <nprivalova> eglynn: do you agree that our own implementations and Magneto usage should live together for a long time?
15:52:31 <nprivalova> eglynn: I'm not against this actually, just asking
15:52:54 <isviridov> eglynn: it is an openstack project and (i hope) will be incubated soon. It now has unit test coverage of about 60-80%. It has very good tempest coverage and a devstack gate
15:53:08 <isviridov> ildikov_: missed
15:53:26 <nprivalova> isviridov: 1. SQL 2. Complex queries
15:53:26 <eglynn> nprivalova: ... well magnetoDB sounds like an interesting way to allow Ceilometer leverage Cassandra's suitability for TSD
15:53:49 <eglynn> nprivalova: ... but I'm not sure that it will displace the "direct" storage drivers
15:53:54 <ildikov_> nprivalova: thanks :)
15:53:58 <isviridov> nprivalova: No SQL, but JSON-defined queries
15:54:53 <isviridov> nprivalova: about complex queries: there are several primitives which can be used to achieve the required complexity.
15:55:21 <isviridov> nprivalova: in general the approach is to store different projections of the same data
15:55:47 <isviridov> nprivalova: for more analytics, Hadoop can be used
15:56:33 <nprivalova> eglynn: ah, you see Cassandra usage primarily
15:56:33 <ildikov_> eglynn: I think the direct drivers will be needed, for SQL backends for sure, and if we keep Mongo then for that also
15:56:56 <eglynn> ildikov_: agree
15:57:02 <isviridov> eglynn: not displace, but for a cloud maintainer, I believe it is better to work with one cluster of C*, HBase, whatever
15:57:14 <eglynn> one other potential issue ...
15:57:25 <eglynn> ... assuming magnetoDB becomes an incubated openstack project as per isviridov's wishes
15:57:37 <eglynn> ... is it kosher for a graduated project like ceilometer to take a dependency on an incubated project?
15:57:53 <eglynn> ... i.e. is incubatedness somehow viral?
15:58:11 <eglynn> dhellmann: ^^^ you may know the answer to that
15:58:20 <dhellmann> eglynn: we couldn't depend on it as the only implementation, but we could provide a driver to use it
15:58:35 <eglynn> dhellmann: a-ha, k, thanks!
15:58:54 <dhellmann> and it probably shouldn't be the default until it is integrated
15:59:50 <eglynn> agreed
15:59:59 <eglynn> isviridov: so if I'm reading the temperature of the discussion correctly ...
16:00:17 <eglynn> isviridov: ... I'd say you should go ahead and start implementing such a driver :)
16:00:23 <ildikov_> eglynn: as a driver it can be used, I think, and if nprivalova's guess comes true in a year (2, 3...) it can be reconsidered as the default supported driver or something like that
16:00:49 <isviridov> eglynn: thanks for your blessing
16:00:51 <eglynn> isviridov: ... with the provisos that (a) you commit to near-parity feature-wise and (b) you intend to stick around as driver maintainer
16:01:25 <isviridov> eglynn: agree
16:01:38 <eglynn> ildikov_: yep, possibly the longer-term "one driver to rule them all"
16:02:41 <eglynn> ... we're running up against the shot-clock here folks
16:02:43 <ildikov_> eglynn: hmm, we will see, the game is just about to start :)
16:03:45 <nprivalova> thanks for discussions!
16:03:50 <eglynn> jd__: if there's nothing else on the agenda you could pull the trigger on the #endmeeting
16:04:00 <eglynn> thanks all!
16:04:06 <isviridov> thank you for your thoughts!
16:04:07 <jd__> ack :)
16:04:08 <jd__> #endmeeting