15:00:40 #startmeeting ceilometer
15:00:41 Meeting started Thu Apr 3 15:00:40 2014 UTC and is due to finish in 60 minutes. The chair is jd__. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:42 bye
15:00:42 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:44 The meeting name has been set to 'ceilometer'
15:00:53 o/
15:01:00 o/
15:01:02 o/
15:01:03 hey hey
15:01:08 hey jd__
15:01:19 hi
15:01:25 hello
15:01:26 o/
15:01:35 o/
15:01:45 o/
15:01:51 #topic Release status
15:01:55 rc1 is out \o/
15:02:03 that's fantastic!
15:02:03 w00t!
15:02:07 \o/ :)
15:02:09 doesn't look like we need an rc2 AFAIK
15:02:21 eglynn is going to talk about known issues for the release notes
15:02:36 can you guys also debate a new publisher .. graphite ?
15:02:37 jd__: what about the translation issue around all projects?
15:02:39 o/
15:02:39 ... /me was just thinking we should collect known gotchas for the release notes
15:02:43 that i submitted the blueprint for
15:03:02 #link https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse
15:03:08 ildikov_: dunno
15:03:24 at least two potential issues I can think of ...
15:03:24 admin0: not now, but yes
15:03:40 ok
15:03:54 1. the "implicit" requirement for the happybase version not specified in the official requirements
15:03:58 jd__: as I saw it was targeted to RC2 in several projects, but if we do not have any other reason for an RC2, then I was just asking :)
15:04:13 nprivalova: ^^^ does that make sense?
15:04:56 eglynn: AFAIK the correct requirements were merged
15:05:03 eglynn: let me check
15:05:17 nprivalova: merged to master, but not milestone-proposed?
15:05:56 eglynn: yep
15:06:03 #link https://review.openstack.org/#/c/82438/
15:06:18 nprivalova: so not in RC1 in that case
15:06:25 eglynn: right
15:06:37 nprivalova: is it enough to have !=0.7 in the requirements?
15:07:12 ildikov_: we support >0.4 and !=0.7
15:07:37 I think we may add it to the release notes
15:07:55 cool
15:08:00 anything else?
15:08:01 nprivalova: I just remembered that there was something about 0.6 too, it being buggy like 0.7, but maybe it's not my day for questions :)
15:08:25 nprivalova: https://github.com/openstack/ceilometer/blob/master/requirements.txt is still happybase>=0.4,<=0.6
15:08:52 nprivalova: ^^^ isn't that problematic? (the =0.6 case)
15:09:14 eglynn: it should be synced with global
15:09:21 eglynn: ==0.6 is ok
15:09:46 ildikov_: it was just a mistake in the message :) 0.6 is ok
15:09:48 besides, I'd like to discuss https://bugs.launchpad.net/ceilometer/+bug/1288284 . Shall we add it to known issues?
15:09:49 nprivalova: a-ha, ok ... I thought we couldn't sync it up on Friday due to the dependency freeze (as you discussed with ttx)
15:09:50 Launchpad bug 1288284 in ceilometer "[Hbase]Resource with multiple meters is not handled correctly" [High,In progress]
15:10:04 nprivalova: yep, it's in review now, I mean the sync
15:10:15 nprivalova: a-ha, ok, cool :)
15:10:24 nprivalova: yep, sounds reasonable to note 1288284
15:10:30 eglynn: ok
15:10:59 ok, the other one I had in mind was the issue with the multi-process/multi-worker collector running against postgres
15:11:15 (as discussed last week with gordc)
15:11:33 yep, and the DBDeadlock issue
15:11:52 seems we should include a caveat not to scale out the collector if running against postgres
15:11:57 eglynn: is only postgres affected in this whole DB issue, finally?
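For the release-note entry, the happybase constraint under discussion boils down to one requirements line. A sketch of what the corrected entry would look like, going by the ">0.4 and !=0.7" comment above (the exact line merged via the linked review may differ):

    # requirements.txt (illustrative): the "implicit" happybase requirement
    # made explicit -- any 0.4+ release except the known-buggy 0.7
    happybase>=0.4,!=0.7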
15:12:19 ildikov_: TBH I'm not 100% sure about that
15:13:06 ildikov_: discussed briefly with gordc yesterday http://eavesdrop.openstack.org/irclogs/%23openstack-ceilometer/%23openstack-ceilometer.2014-04-02.log
15:13:16 eglynn: ok, I will try to check the logs and mails after the meeting
15:14:02 #action eglynn wrangle release notes "known issues" entries from SMEs
15:14:05 we will test performance soon and will check it anyway
15:14:45 k, we can prolly move on if no-one else has other release-notable items in mind
15:15:26 ack
15:15:34 * admin0 waits :)
15:15:36 #topic Tempest integration
15:15:54 * isviridov also waits
15:16:24 unfortunately no updates. We still have one collector (as is the default) and tests fail
15:16:46 ok
15:16:53 #topic Release python-ceilometerclient?
15:17:03 I think we should do that now
15:17:16 already done a couple of days ago :)
15:17:18 it's released \o/
15:17:29 oh great, I was just checking and still saw a few patches
15:17:32 #link https://pypi.python.org/pypi/python-ceilometerclient/1.0.10
15:17:46 thanks eglynn
15:17:56 #topic Open discussion
15:18:24 ... /me just realized he released 1.0.10 on April Fool's Day ;)
15:18:45 eglynn: LOL :)
15:19:15 graphite publisher :) .. so that people do not have to install the full set of ceilometer components, but can get the graphs directly into graphite
15:19:16 eglynn: it was a bit unbelievable that it finally got released ;)
15:19:30 ildikov_: ... it was a long time coming all right! :)
15:19:51 admin0: so do you want feedback on the idea or the code?
15:20:05 admin0: is it out for (WIP) review on gerrit?
15:20:11 i have not uploaded the code yet .. i was waiting for your views on it before uploading
15:20:30 someone said i should check before writing a lot of code
15:20:50 it was me :)
15:20:50 admin0: but you do have some basic working code already, right?
15:21:05 i do have some basic code ready that is already working
15:21:18 admin0: (... maybe no tests, docco yet, but that's fine for WIP)
15:21:46 the question from me was: why do we need a separate publisher for graphite? why is UDP not enough?
15:21:56 BTW, do we want to have specialized publishers?
15:22:05 does graphite want a special format for its packets?
15:22:12 nprivalova: with udp, i was just getting the metrics, and i needed a separate daemon running to convert the data to the graphite format
15:22:48 admin0: ... my recommendation would be to throw it up on gerrit so as to get early eyes on it, which would help to answer nprivalova's question in a more concrete way
15:22:51 aha, so we may need some converter, not a publisher actually?
15:23:23 I think it is too specific to be placed inside Ceilometer
15:23:53 nprivalova: the publisher is currently the end of the sink chain, but I wonder, could this conversion be done in a transformer?
15:24:04 if ceilometer can send to a tcp port in the format, say: ceilometer.tenant(name/id).vm(name/id).disk.write.bytes 10 timestamp .. .. that would do
15:24:24 eglynn: I'm thinking about a transformer too...
15:25:44 so I think that if the code is ready admin0 may publish it and we will think about moving the conversion to a transformer or smth else
15:26:10 eglynn: do we want to have client-specific transformers in Ceilometer? maybe I'm on the wrong track with this question, but it seems a bit out of scope to me
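admin0's code is not on gerrit yet, so purely as an illustration of the idea (not his implementation), a minimal graphite publisher might look like the sketch below. It assumes the Icehouse-era publisher interface (PublisherBase with a publish_samples method, constructed from a parsed URL) and the usual sample fields; the metric path layout and graphite:// scheme are invented here. Graphite's plaintext protocol expects one "metric.path value timestamp" line per data point, by default on TCP port 2003.

    # Illustrative only, not admin0's code: assumes the Icehouse publisher
    # interface and Sample attributes (project_id, resource_id, name, volume).
    import socket
    import time

    from ceilometer import publisher


    class GraphitePublisher(publisher.PublisherBase):
        """Send samples as graphite plaintext: "path value timestamp"."""

        def __init__(self, parsed_url):
            self.host = parsed_url.hostname or 'localhost'
            self.port = parsed_url.port or 2003  # carbon's default plaintext port

        def publish_samples(self, context, samples):
            now = int(time.time())  # stamp at publish time for simplicity
            lines = ['ceilometer.%s.%s.%s %s %d'
                     % (s.project_id, s.resource_id, s.name, s.volume, now)
                     for s in samples]
            sock = socket.create_connection((self.host, self.port))
            try:
                sock.sendall(('\n'.join(lines) + '\n').encode('utf-8'))
            finally:
                sock.close()

Wiring it in would mean registering the class under the ceilometer.publisher entry-point namespace (the mechanism the in-tree udp/rpc/file publishers use) and listing something like graphite://host:2003 in a pipeline sink.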
15:26:13 nprivalova: though the publisher still needs the transformed data in sample-like form https://github.com/openstack/ceilometer/blob/master/ceilometer/publisher/utils.py#L74
15:26:46 ildikov_: well I suppose it doesn't necessarily need to live in the ceilometer tree
15:27:11 eglynn: a-ha, ok, fair enough
15:27:14 ildikov_: I don't see a problem with adding this. It's a very commonly used tool, and IIRC we talked about adding this eventually when we first made publishers pluggable.
15:27:15 ildikov_: ... but our pipeline should probably be flexible enough to accommodate an externally provided plugin
15:27:41 but we may provide a mechanism to add such converters
15:27:46 if the pipeline can have a tcp function and is able to format (string) the output, it should work
15:28:11 so I think the first step is to look at admin0's code and whether the conversions can be performed in a transformer
15:28:26 eglynn: agreed
15:28:28 (while re-using the existing UDP publisher as nprivalova suggests)
15:28:53 admin0: can you propose a patch on gerrit with your code?
15:29:00 i will start on it right now
15:29:03 dhellmann: I see your point, but I'm also sure that there are several commonly used tools on the market, so I just wanted to highlight this question earlier rather than later, to discuss the options
15:29:14 admin0: ty!
15:29:47 ildikov_: I'd like to think we would take a big-tent approach for publishers, like we do with storage drivers
15:30:04 i will beautify my code, put in comments on why i am doing something .. and then submit a review
15:30:53 * isviridov wonders if it is time for MagnetoDB blueprints?
15:30:54 dhellmann: ... which is a nice segue into isviridov's topic
15:31:04 dhellmann: we have some issues now with the supported DB drivers, they need a more or less full refactor/redesign now; I just would like to avoid similar issues with publishers
15:31:55 isviridov: please move on :)
15:32:08 So, I would suggest support of our MagnetoDB in Ceilometer
15:32:11 #link https://blueprints.launchpad.net/ceilometer/+spec/magnetodb-driver
15:32:33 ildikov_: yes, true. we should add the tests if possible, rather than refusing new plugins
15:33:02 isviridov: key questions in my mind ...
15:33:19 It is key-value and perfect for timeseries data
15:33:21 eglynn: please
15:33:37 isviridov: ... 1. would you be providing full or partial feature parity with the mongo/sqla drivers?
15:34:03 isviridov: ... 2. would you be sticking around as driver maintainer for at least the medium-term?
15:34:14 dhellmann: yes, I agree on this point, we should surely focus more on proper tests this time
15:34:46 isviridov: ... (we need to avoid "code drops" lacking active maintainership IMO)
15:34:54 eglynn: +1
15:35:23 eglynn: +1
15:35:59 eglynn: 1: it has an http interface and actually abstracts the database behind it.
15:36:39 isviridov: I'm thinking more of the "glue code" that lies between the ceilometer API and the native DB API
15:36:47 eglynn: 2. Yes. We also have plans for future integration with Ceilometer in metrics writing and collecting
15:37:43 isviridov: for each storage driver we have logic that maps between a high-level primitive like get_resources and the corresponding DB-specific queries
15:38:08 isviridov: in some cases the semantics of these primitives are incomplete in some subset of the drivers
15:39:09 isviridov: so the question is not whether magnetoDB can do everything database-y
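To make the "glue code" point concrete: every driver implements the same high-level primitives against its own store, so a magnetoDB driver's job would be the translation layer sketched below. The get_resources primitive is real, but the client object, its query() call, and the key-condition format are all invented for illustration.

    # Hypothetical sketch of a driver's glue layer; nothing below is real
    # MagnetoDB API, it only shows the shape of the per-driver translation.
    class MagnetoDBConnection(object):
        """Map ceilometer's high-level primitives onto backend queries."""

        def __init__(self, client):
            self.client = client  # some MagnetoDB HTTP client object

        def get_resources(self, user=None, project=None, source=None):
            """One ceilometer primitive -> backend-native key conditions."""
            conditions = {}
            if user:
                conditions['user_id'] = {'eq': user}
            if project:
                conditions['project_id'] = {'eq': project}
            if source:
                conditions['source'] = {'eq': source}
            # Each driver owns this mapping; a mapping that cannot express
            # a given filter is what "incomplete semantics" means above.
            return self.client.query(table='resource',
                                     key_conditions=conditions)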
15:39:18 eglynn: yeap, it is clear. We will drive the implementation of that part to make MagnetoDB + Ceilometer better
15:39:35 isviridov: did I see right that you plan to use Cassandra behind it?
15:40:35 So Magneto looks like an additional layer between the API and the DBs... In the future we may remove all DB code from Ceilometer and use only the Magneto API :)?
15:40:56 ildikov_: Cassandra is our primary storage, the best tested and with the best performance.
15:41:43 ildikov_: but we are planning support of HBase as one of the next ones and have started devstack integration for that.
15:42:02 jd__: you've already got a partially implemented "direct" cassandra driver for ceilo, or?
15:42:17 eglynn: only write is supported
15:42:22 isviridov: ok, I just asked because of future testing issues mainly
15:42:34 jd__: a-ha, k
15:42:44 nprivalova: exactly
15:43:07 (just wondering if the same cassandra benefits, i.e. TSD goodness etc., could be got via magnetoDB)
15:43:46 ildikov_: we are done with devstack integration and have tempest running on the gates.
15:43:57 eglynn: good point, I was thinking about the same
15:44:07 isviridov: so this TSD goodness in magnetoDB you speak of, is that mostly derived from the Cassandra underpinnings?
15:44:09 I have two more questions. 1. What about SQL? and 2. Complex queries?
15:44:28 eglynn: actually we are trying to stay very close to cassandra in functionality, to keep its performance
15:45:16 isviridov: and if I deployed magnetoDB backed by say hbase, maybe I wouldn't see the same benefits when manipulating TSD?
15:46:51 eglynn: Implementing it on top of C* is the cheapest way, and C* is very easy to administer (heterogeneous cluster). But with HBase we have key-value storage as well. And I believe that is the main reason for the performance.
15:47:15 eglynn: but yes, you are right. It will be somewhat different
15:48:06 isviridov: cool, I was just wondering if the suitability for storing TSD in magnetoDB is more an attribute of the chosen backing store than of magnetoDB *itself*
15:48:16 isviridov: ... and sounds like the answer is "yes"
15:48:55 eglynn: awesome!
15:49:08 it's a very 'openstack-ish' way... one service using another service. With all the backend implementations in Ceilometer we may control performance (theoretically). Now isviridov suggests a service that (theoretically) will provide the best performance for all backends in a year (2, 3...)
15:50:21 in theory it sounds great :) but what is the status of Magneto? Tests, integration testing and so on
15:50:36 isviridov: is magnetoDB officially incubating?
15:50:41 nprivalova: The main benefit for me is in maintaining one storage, and yes, MagnetoDB is scalable horizontally, so performance is close to C*
15:50:42 isviridov: (... as an openstack project)
15:51:44 isviridov: what about nprivalova's two questions?
15:51:57 eglynn: do you agree that our own implementations and Magneto usage should live together for a long time?
15:52:31 eglynn: I'm not against this actually, just asking
15:52:54 eglynn: it is an openstack project and (i hope) will be incubated soon. Right now it has unit test coverage of about 60-80%. It has very good tempest coverage and a devstack gate
15:53:08 ildikov_: missed
15:53:26 isviridov: 1. SQL 2. Complex queries
15:53:26 nprivalova: ... well magnetoDB sounds like an interesting way to allow Ceilometer to leverage Cassandra's suitability for TSD
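The "TSD goodness" being attributed to the Cassandra underpinnings is essentially the wide-row layout: all samples for one (resource, meter, period) share a row, with columns sorted by timestamp, so a time-range read is one contiguous slice instead of scattered lookups. A toy illustration of such a key scheme (invented, not MagnetoDB's or ceilometer's):

    # Toy key scheme for wide-row time-series storage; names invented.
    import datetime


    def row_key(resource_id, meter, ts):
        """Bucket samples into day-sized rows so no row grows unbounded."""
        return '%s:%s:%s' % (resource_id, meter, ts.strftime('%Y%m%d'))


    # Both samples land in the same row as adjacent timestamp-sorted
    # columns, which is what makes range queries over TSD cheap.
    k1 = row_key('vm-1', 'disk.write.bytes', datetime.datetime(2014, 4, 3, 15, 0))
    k2 = row_key('vm-1', 'disk.write.bytes', datetime.datetime(2014, 4, 3, 15, 10))
    assert k1 == k2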
15:53:49 nprivalova: ... but I'm not sure that it will displace the "direct" storage drivers
15:53:54 nprivalova: thanks :)
15:53:58 nprivalova: No SQL, but JSON-defined queries
15:54:53 nprivalova: about complex queries: there are several primitives which can be used to achieve the required complexity.
15:55:21 nprivalova: in general the approach is to store different projections of the same data
15:55:47 nprivalova: for more analytics Hadoop can be used
15:56:33 eglynn: ah, you see Cassandra usage primarily
15:56:33 eglynn: I think the direct drivers will be needed, in the case of SQL backends for sure, and if we keep Mongo then for that also
15:56:56 ildikov_: agree
15:57:02 eglynn: not displace, but for a cloud maintainer, I believe it is better to work with one cluster of C*, HBase, whatever
15:57:14 one other potential issue ...
15:57:25 ... assuming magnetoDB becomes an incubated openstack project as per isviridov's wishes
15:57:37 ... is it kosher for a graduated project like ceilometer to take a dependency on an incubated project?
15:57:53 ... i.e. is incubatedness somehow viral?
15:58:11 dhellmann: ^^^ you may know the answer to that
15:58:20 eglynn: we couldn't depend on it as the only implementation, but we could provide a driver to use it
15:58:35 dhellmann: a-ha, k, thanks!
15:58:54 and it probably shouldn't be the default until it is integrated
15:59:50 agreed
15:59:59 isviridov: so if I'm reading the temperature of the discussion correctly ...
16:00:17 isviridov: ... I'd say you should go ahead and start implementing such a driver :)
16:00:23 eglynn: as a driver it can be used, I think, and if nprivalova's guess comes true in a year (2, 3...) it can be considered again as the default supported driver or something like that
16:00:49 eglynn: thanks for your blessing
16:00:51 isviridov: ... with the provisos that (a) you commit to near-parity feature-wise and (b) you intend to stick around as driver maintainer
16:01:25 eglynn: agree
16:01:38 ildikov_: yep, possibly a longer-term possibility for the "one driver to rule them all"
16:02:41 ... we're running up against the shot-clock here folks
16:02:43 eglynn: hmm, we will see, the game is just about to start :)
16:03:45 thanks for the discussions!
16:03:50 jd__: if there's nothing else on the agenda you could pull the trigger on the #endmeeting
16:04:00 thanks all!
16:04:06 thank you for your thoughts!
16:04:07 ack :)
16:04:08 #endmeeting