21:00:11 #startmeeting ceilometer
21:00:12 Meeting started Wed Nov 20 21:00:11 2013 UTC and is due to finish in 60 minutes. The chair is eglynn. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:13 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:16 The meeting name has been set to 'ceilometer'
21:00:28 hello
21:00:28 hey everybody, show of hands?
21:00:39 jd__ is indisposed this week, so I'll fight the ircbot again ...
21:01:22 tumbleweeds ... ;)
21:01:26 o/
21:01:29 o/
21:01:35 my hand is here
21:01:42 :-)
21:02:05 fairly thin attendance by the looks of it ...
21:02:13 o/
21:02:16 I'll put it down to post-HK exhaustion ;)
21:02:16 o/
21:02:29 hey y'all
21:02:39 k, let's roll ...
21:02:41 #topic actions from last week ...
21:02:50 jd__ was to propose a patch on the governance gerrit for our new OpenStack Telemetry moniker
21:02:52 * dhellmann has a flakey irc connection today, so messages are coming in bunches after a delay
21:02:56 that was done, here's the review ...
21:03:07 #link https://review.openstack.org/56402
21:03:14 proving fairly uncontroversial so far
21:03:24 other than the TeleWTF? jibe from markmc ;)
21:03:34 should land soon enough methinks
21:03:48 anyone with buyer's remorse on the name choice?
21:04:08 that's what google is for... to answer the 'wtf' part
21:04:14 hi all
21:04:23 gordc: yep
21:04:43 I like the new name, nice work eglynn
21:04:52 cool :)
21:05:07 #topic status on icehouse-1
21:05:15 it's looking pretty thin
21:05:21 #link https://launchpad.net/ceilometer/+milestone/icehouse-1
21:05:30 I guess because of the shorter than usual summit-->milestone-1 lead-in
21:05:48 I think that's OK
21:05:54 ... probably a pattern reflected somewhat in the other projects too
21:06:13 release mgr's thought on this at the project meeting last night:
21:06:19 yeah, ttx said the -1 milestone was set to basically include stuff that was in process already at the summit
21:06:24 using icehouse-1 to clean up the slate and prepare for the real work is fine
21:06:43 * eglynn paraphrasing ^^^
21:06:45 so really, a thin -1 release is a good thing?
21:06:48 :)
21:06:50 yes
21:07:18 nealph: well it's also a busy time in the cycle for those with distros to get out the door
21:07:34 so overall understandable
21:07:50 but means we'll be pretty backloaded for i2 & i3
21:08:29 does anyone have anything up their sleeve they'd like to target at i-1?
21:08:49 realistically speaking, it would have to be more a bug fix than a BP at this late stage
21:09:14 nope?
21:09:24 k
21:09:26 I think john (herndon) is looking at the event
21:09:27 were any features implemented after the havana cut-off without a blueprint?
21:09:35 storage updates.
21:09:37 those would be candidates to be added for i-1
21:09:56 dhellmann: none that I can think of
21:10:06 eglynn: I didn't think so, either
21:10:35 nealph: those storage updates == DB migrations?
21:10:38 (for events)
21:11:06 no...one sec...digging for BP.
21:12:34 https://blueprints.launchpad.net/openstack/?searchtext=specify-event-api
21:13:10 so it would really need to be well progressed implementation-wise (approaching code-complete) by EoW to have a realistic chance of landing for i-1
21:13:20 (given the review lag etc.)
21:13:59 noted. I'll leave it to him to comment further.
21:14:16 * eglynn notices status is blocked
21:15:27 k, let's punt on the specify-event-api until herndon and jd__ are around to discuss
21:15:30 was blocked waiting on alembic vs. sqlalchemy
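
(For context on the "alembic vs. sqlalchemy" note above: a minimal alembic migration of the kind being weighed against sqlalchemy-migrate might look roughly like the sketch below. The table and column names are illustrative only, not the actual ceilometer event schema.)

    # Illustrative alembic migration; "event_example" and its columns are
    # hypothetical, not the real event storage schema under review.
    from alembic import op
    import sqlalchemy as sa

    # revision identifiers used by alembic (placeholder values)
    revision = '000000000000'
    down_revision = None


    def upgrade():
        op.create_table(
            'event_example',
            sa.Column('id', sa.Integer, primary_key=True),
            sa.Column('message_id', sa.String(50), unique=True),
            sa.Column('event_type', sa.String(255)),
            sa.Column('generated', sa.DateTime),
        )


    def downgrade():
        op.drop_table('event_example')
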
21:15:41 sorry, I showed up late
21:15:49 herndon_: np! thanks for the clarification
21:16:01 making good progress now, 4 reviews up.
21:16:08 herndon_: so, were you eyeing up icehouse-1 for this?
21:16:26 that would be great
21:16:55 herndon_: (for context, I don't think there's any particular pressure to beef up i-1, but if the feature is ready to fly ...)
21:17:34 let's see where we are by the end of this week? there is still work to do on the ceilometerclient bit, I think
21:17:38 and documentation.
21:17:56 it's better to target something for a later milestone and finish it early than the other way around
21:17:57 herndon_: cool, let's see how the reviews progress on gerrit, make a call by EoW
21:18:06 dhellmann: fair point
21:18:24 moving on ...
21:18:25 #topic blueprints for icehouse
21:18:38 we agreed last week to aim for today for BPs to be filed
21:18:58 I've made progress but I'm not thru all mine quite yet
21:19:07 #link https://blueprints.launchpad.net/ceilometer/icehouse
21:19:38 also nprivalova has started on a rough aggregation & roll up BP
21:19:47 #link https://blueprints.launchpad.net/ceilometer/+spec/aggregation-and-rolling-up
21:20:00 (with some discussion still on-going in the corresponding etherpad)
21:20:12 I also have a BP, which I'll finish by the end of this week
21:20:32 just going to say, ildikov is working on locking down the new API query filtering definition
21:20:41 ... and she beat me to it :)
21:20:58 also with some etherpad discussion
21:21:22 ildikov: cool, thanks for that!
21:21:35 for everyone else with an idea discussed at summit that they want to target for icehouse ...
21:21:53 prolly a good idea to try to get something rough written down in a BP sooner rather than later
21:22:07 before the memory of hong kong fades too much into the mists of beer ...
21:22:19 sorry, the mists of time ;)
21:22:37 the mists of jet lag.
21:22:39 hehe
21:22:44 LOL :)
21:23:05 :D
21:23:17 once BPs are all filed, I'd expect jd__ will do a first approval pass
21:23:35 then we'll have to start thinking about targeting to i2 or i3
21:24:01 we were heavily back-loaded on h3 in the last cycle
21:24:09 ... mostly down to me ;)
21:24:32 so would be good to have a real chunky i2 milestone this time round
21:24:49 anything else on BPs?
21:25:04 k, moving on ...
21:25:15 #topic tempest / integration tests discussion
21:25:28 just following up on last week's discussion
21:25:38 I put some initial thoughts in that etherpad started by nprivalova
21:25:48 #link https://etherpad.openstack.org/p/ceilometer-test-plan
21:26:02 would be great if folks who are interested in contributing to integration testing ...
21:26:20 could stick in a brief description of the areas they're planning to concentrate on
21:26:46 just real rough ideas would be fine
21:26:55 doesn't need to be poetry ;)
21:27:13 just so long as we don't end up duplicating efforts
21:29:30 anyone got anything on tempest they want to raise?
21:30:05 i'll just ask since i can't see tempest branch
21:30:20 gordc: which tempest branch?
21:30:51 not branch... code i guess... is there a specific folder in tempest which the ceilometer tests sit in? just curious so i know what to track in tempest.
21:31:09 * gordc hasn't touched tempest in a very long time.
21:31:36 gordc: so there's nothing landed in tempest yet AFAIK, but there were some reviews outstanding
21:31:52 * eglynn doesn't have the links to hand ...
21:32:11 ok, i'll dig around. i'll need to look at the code eventually anyways.
21:32:23 I have another question
21:32:26 #link https://review.openstack.org/43481
21:32:42 #link https://review.openstack.org/39237
21:32:50 eglynn: cool cool. thanks.
21:32:55 dperaza: shoot
21:32:56 are there any plans to run performance-specific tests with ceilometer?
21:33:19 I think this has been raised before...
21:33:31 I was just looking at https://etherpad.openstack.org/p/icehouse-summit-ceilometer-big-data
21:33:31 the variations between env are waaay too many.
21:33:38 @dperaza, I was planning to run some tests before the summit, then I was pulled to other things.
21:33:47 did not see tempest action items
21:33:49 dperaza: this will generally happen as part of downstream QE I think
21:33:54 @dperaza, I think I can do some of these in the next few weeks.
21:34:22 dperaza: tempest is more suited to a binary gate as opposed to a shades-of-grey performance testsuite
21:34:28 (IIUC)
21:34:29 litong: is that something that could potentially live in tempest
21:34:39 or are you thinking of another home
21:34:48 @dperaza, I think that is different,
21:34:54 you were asking about performance tests.
21:35:00 right
21:35:18 I do see https://github.com/openstack/tempest/tree/master/tempest/stress
21:35:35 if I remember well, in HK we discussed separating it from tempest, I mean the performance tests
21:35:52 it should, because they serve different purposes.
21:35:53 right, if we come up with performance buckets, where would they live?
21:36:33 are we saying performance would not vote at the gate then?
21:36:38 ildikov: right. performance tests wouldn't work with jenkins... its performance fluctuates way too much.
21:36:46 at least initially?
21:36:56 dperaza: ildikov: can you help me understand the goal for performance testing
21:36:59 ?
21:37:31 so tempest seems to support an @stresstest decorator
21:37:41 #link https://github.com/openstack/tempest/commit/31fe4838
21:37:44 first establish a benchmark and then use it as a reference to see how you are improving or regressing
21:37:59 but that seems to be just a discovery mechanism
21:38:07 nealph: in my opinion performance testing helps to understand the behavior of the system under different loads
21:38:07 interesting eglynn
21:38:28 dperaza: I don't think we could realistically gate on such performance tests
21:38:41 we can start by stressing a single ceilometer node
21:38:44 e.g. reject a patch that slowed things down by some threshold
21:38:48 and see where it breaks
21:38:51 @eglynn, agreed.
21:39:04 too much variability in the load on the jenkins slaves
21:39:14 we can run these performance tests on the side to help find performance problems, then open bugs and fix them.
21:39:22 that's not to say performance testing isn't good
21:39:29 just that we can't gate on it IMO
21:39:41 so tempest isn't necessarily the correct home for such tests
21:39:52 * nealph nods
21:39:53 I do think we can have it as a non-voting job in jenkins
21:40:21 that runs daily, for example, as opposed to on every commit for gating purposes
21:40:55 dperaza: given how unstable jenkins is, i'm guessing you'll get a lot of false positives... or a really bad starting baseline.
21:40:56 dperaza: sure, something like the bitrot periodic jobs that run on stable
21:41:06 In the testing-related session at the summit, didn't someone say there were plans for separate performance testing systems?
21:41:18 dperaza: but again it would be purely advisory
21:41:33 dperaza: (i.e. a hint to get in and start profiling recent changes ...)
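
(The advisory, non-gating check sketched in the discussion above could look roughly like this: a periodic job compares a fresh performance run against a stored baseline and reports regressions without ever failing the build. The baseline.json layout, the metric names and the 20% threshold are assumptions for illustration, not anything agreed here.)

    # Rough sketch of an advisory baseline comparison for a periodic job;
    # baseline.json, the metric names and the threshold are illustrative.
    import json
    import sys

    THRESHOLD = 1.20  # flag anything more than 20% slower than the baseline


    def compare(baseline_path, results_path):
        with open(baseline_path) as f:
            baseline = json.load(f)   # e.g. {"sample_insert_sec": 3.1, ...}
        with open(results_path) as f:
            results = json.load(f)

        for metric, base_value in baseline.items():
            current = results.get(metric)
            if current is not None and current > base_value * THRESHOLD:
                print("possible regression: %s went from %.2fs to %.2fs"
                      % (metric, base_value, current))
        # advisory only: report, but never fail the job (no gating)
        return 0


    if __name__ == '__main__':
        sys.exit(compare(sys.argv[1], sys.argv[2]))
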
21:41:37 DanD: that will work too
21:42:01 DanD: I don't recall hearing that, can you remember the session title?
21:42:06 DanD: that'd be interesting.
21:42:33 eglynn: right, something to at least tell you which changes degraded the performance over a period of a few days
21:43:17 will have to go back and look at the sessions, but there was one focused on testing, and I thought someone from the test group had said essentially that performance tests would not work as part of the gate tests but that they were working on something separate for this
21:44:05 dperaza: k, in principle it would be good stuff but (a) probably best separated off from Tempest and (b) simple integration tests are our first priority now
21:44:08 @DanD, @eglynn, I think that the problem is that performance tests normally take a long time and we do not want gating jobs to take a long time.
21:44:24 DanD: I meant this discussion, when I mentioned the separation thing, it was in one of the last sessions I think
21:44:25 Performance testing is usually also done on release or milestone boundaries
21:44:25 I hope that a gating job returns in a few seconds.
21:44:27 (seeing as that's where we have the most pressing/basic gap in our coverage)
21:45:09 litong: agreed
21:45:15 I think I got the answer to my question, thanks guys
21:45:18 litong: we've been waiting 24hrs+ for some of our patches... a few seconds is a dream. lol
21:46:06 @gordc, yeah, my dream.
21:46:17 the point is that we do not include the performance tests in gating jobs
21:46:37 litong: :)
21:46:41 litong: yes, we're ad idem on that point
21:46:57 k, moving on ...
21:47:26 #topic release python-ceilometerclient
21:47:36 I've got a couple of bug fixes in progress
21:47:45 most importantly https://bugs.launchpad.net/python-ceilometerclient/+bug/1253057
21:47:47 Launchpad bug 1253057 in python-ceilometerclient "alarm update resets repeat_actions attribute, breaking heat autoscaling" [High,In progress]
21:47:57 once these land, I'd like to cut a 1.0.7 release
21:48:07 anyone got anything else that they'd like to get in before I cut that?
21:48:20 herndon_: you mentioned some events support in the client?
21:48:40 herndon_: note that client releases are cheap and easy
21:48:42 * gordc keeps forgetting to look at the python-ceilometerclient stream... will take a look.
21:48:46 let's skip this one
21:48:59 that part still has a couple of holes to patch up before it is ready
21:49:12 herndon_: cool, we can easily spin up a 1.0.8 whenever you're good to go with that
21:49:26 ok, I will keep you posted
21:49:44 gordc: if you're in a reviewing mood ... https://review.openstack.org/57423
21:50:30 eglynn: will try to give it a look later tonight.
21:50:40 gordc: thank you sir!
21:50:44 k, we're done on the client methinks ...
21:50:53 #topic open discussion
21:51:38 i guess i can ask about the resource stuff here.
21:51:52 gordc: sorry, forgot that
21:52:01 np. :)
21:52:02 o/
21:52:12 #topic what's a resource?
21:52:16 in Samples, user and project attributes are pretty self-explanatory and map to keystone concepts pretty well, but what about resource_id...
21:52:21 gordc: the floor is yours!
21:52:53 i was reviewing https://review.openstack.org/#/c/56019 and it seems like we don't really have a consistent use for resource_id...
21:53:04 or i may have missed the concept as usual :)
21:53:29 gordc: yeah, so I think the original assumption was that every resource would have a UUID associated with it
21:53:47 gordc: i.e. the pattern followed by nova, glance, cinder etc.
21:54:07 gordc: but obviously swift doesn't fit into that neat pattern, right?
21:54:21 ?
21:54:38 eglynn: right... there are a few meters where resource_id kind of deviates.
21:54:50 notmyname: context is using the container name as the resource ID for swift metering, IIUC
21:55:07 notmyname: see comments on https://review.openstack.org/56019
21:55:55 gordc: is the issue one of uniqueness?
21:55:58 o/ (blast dst :P )
21:56:17 gordc: i.e. that the container name isn't guaranteed unique, or?
21:56:19 (out of scope) why is ceilometer tracking individual resource IDs? that seems unnecessarily granular at first glance
21:56:37 eglynn: i guess... or maybe just checking that it isn't written somewhere that resource_id in Sample must be
21:56:43 dolphm: we want to be able to query grouped by resource
21:57:12 dolphm, for samples it may not be required, but for events we get a lot of use from per-resource tracking
21:57:30 gordc: e.g. must be a stringified UUID?
21:57:51 gordc: I don't think that's a requirement, more just the convention that most services follow
21:57:52 eglynn: I'm not sure I follow.
21:58:48 notmyname: ceilometer samples include a resource ID that's generally a UUID, whereas a patch for swift metering uses the container name for the samples it emits
21:58:54 AFAIK, resource id just needs to be a globally unique id.
21:59:03 dragondm: agreed
21:59:19 eglynn: the (account, container) pair is unique in a swift cluster (e.g. AUTH_test/images)
21:59:21 uuid works for that, so there won't be collisions.
21:59:29 dragondm: that's my original thought as well.
21:59:31 so I think the question is whether the container name reaches that bar of uniqueness
21:59:51 and within the context of a single swift cluster, that pair is globally unique
22:00:10 seems that (account, container) would be better from what john is saying above
22:00:18 but if you have multiple regions it does not...
22:00:32 multiple clusters, you mean
22:00:37 eglynn: something for the mailing list? (given the time)
22:00:48 gordc: yep, good call
22:00:50 notmyname: yes.
22:00:58 gordc: can you raise it on the ML?
22:01:05 i'll do that.
22:01:27 #action gordc raise resource ID uniqueness question on ML
22:01:41 dragondm: if ceilometer is metering multiple clusters, then it must have some sort of cluster identifier. seems pretty easy to construct a unique resource_id from that
22:01:51 k, I think we're over time here
22:01:59 let's drop the open discussion
22:01:59 sorry for the late arrival (DST) ... re: integration testing, we've started working on storage driver load testing https://github.com/rackerlabs/ceilometer-load-tests (with a CM fork for test branches)
22:02:11 sandywalsh: cool!
22:02:19 dperaza: :-(
22:02:30 Yup. we are also looking at testing an experimental Riak driver.
22:02:33 I guess I could bring it up in the next meeting
22:02:52 yep, let's defer to the next meeting or the ML as we're over time
22:03:01 Cool.
22:03:21 k, thanks for your time folks!
22:03:29 #endmeeting
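
(Following up on the resource ID thread above: one way to build the globally unique resource_id that dragondm and notmyname describe, from a per-cluster identifier plus the (account, container) pair, might look like the sketch below. The cluster_id value, the separator and the uuid5 step are assumptions for illustration, not a settled convention.)

    # Sketch: derive a globally unique resource_id for swift container metering
    # from a cluster identifier plus the (account, container) pair.
    import uuid


    def swift_container_resource_id(cluster_id, account, container):
        # (account, container) is unique within one swift cluster; prefixing a
        # per-cluster identifier keeps it unique across clusters/regions.
        name = "%s/%s/%s" % (cluster_id, account, container)
        # optionally collapse to a deterministic UUID so the id matches the
        # stringified-UUID convention most other services follow
        return str(uuid.uuid5(uuid.NAMESPACE_DNS, name))


    # example: swift_container_resource_id("region-one", "AUTH_test", "images")
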