15:01:34 #startmeeting ceilometer
15:01:35 Meeting started Thu Sep 18 15:01:34 2014 UTC and is due to finish in 60 minutes. The chair is eglynn_. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:36 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:39 The meeting name has been set to 'ceilometer'
15:01:41 sorry for the late start folks
15:01:46 o/
15:01:48 o/
15:01:50 o/
15:01:54 o/
15:01:54 o/
15:01:54 o/
15:01:56 o/
15:01:57 o/
15:02:36 #topic Juno status and FFEs
15:02:51 #link https://launchpad.net/ceilometer/+milestone/juno-rc1
15:03:03 o/
15:03:12 \o/ all FFE BPs have landed since Monday :)
15:03:26 nice work to all concerned
15:03:28 great!
15:03:33 Nice :)
15:04:01 should we target https://bugs.launchpad.net/ceilometer/+bug/1369538 to rc1 also?
15:04:02 Launchpad bug 1369538 in ceilometer "static resources from pipeline.yaml not partitioned" [Undecided,In progress]
15:04:08 so I did a trawl of the rc1-targeted bugs and bumped anything that looked like it wasn't getting enough traction to land by mid next week
15:04:12 * eglynn_ looks
15:04:46 nsaje: yep, done
15:04:57 how about https://bugs.launchpad.net/nova/+bug/1348818, do we have a patch for that now
15:04:59 Launchpad bug 1348818 in keystone "Unittests do not succeed with random PYTHONHASHSEED value" [Medium,In progress]
15:05:00 cdent
15:05:02 nsaje: Importance? ... medium?
15:05:08 eglynn_: yes
15:05:11 Really I think this will be useful
15:05:31 llu-laptop: there remain unclear issues with wsme. I'm not sure how that's resolved...
15:06:20 gordc: did you have some insight on that PYTHONHASHSEED bug? https://bugs.launchpad.net/nova/+bug/1348818
15:06:23 Launchpad bug 1348818 in keystone "Unittests do not succeed with random PYTHONHASHSEED value" [Medium,In progress]
15:06:24 llu-laptop: not sure how to resolve it either. i want to say it's wsme's fault but i don't want to incorrectly lead people that way.
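[editor's note] The PYTHONHASHSEED bug discussed above is the classic pattern of tests that compare output whose order depends on dict/set iteration, which varies between interpreter runs once hash randomization is active. A minimal sketch of the failure mode and the usual fix; the function names are illustrative, not taken from the actual ceilometer test suite:

```python
# Sketch of how a randomized PYTHONHASHSEED breaks order-dependent tests.
# Names here are hypothetical, not the real ceilometer code.

def format_meters(meters):
    # Fragile: set iteration order depends on the hash seed, so the
    # joined string can differ from one interpreter run to the next.
    return ",".join(meters)

def format_meters_stable(meters):
    # Robust: impose an explicit ordering before comparing to a fixture.
    return ",".join(sorted(meters))

meters = {"cpu", "memory", "disk.read.bytes"}
# format_meters(meters) may yield any permutation of the three names;
# format_meters_stable(meters) always yields "cpu,disk.read.bytes,memory".
```

The same principle applies to asserting on serialized dicts, query strings, or generated config: sort (or compare parsed structures) rather than relying on iteration order.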
15:06:27 eglynn_: ^
15:06:41 i'll try looking at it again later today i think since i have time.
15:06:48 gordc: thanks!
15:07:13 o/
15:07:34 gordc: feel free to commit on that patchset if something needs to happen
15:07:48 cdent: will do
15:08:15 so in terms of the time pressure around getting fixes into juno, RC1 will probably be cut next week
15:09:00 after that we can request an rc2
15:09:09 ... but the fix would have to be backported from master to proposed/juno and the level of scrutiny will go up significantly
15:09:25 understood
15:09:26 ... so best to get everything we know that we need into rc1 if at all poss
15:09:42 eglynn_: all the patches to be merged right now must be related to rc1 bugs?
15:10:12 can we get https://review.openstack.org/#/c/121440/ merged? it's a doc bug
15:10:14 llu-laptop: yep, exactly ... no closes-bug tag in the commit message should be a bar to landing
15:10:17 it's a doc patch
15:10:38 got that
15:10:46 doc patches should be an exception, no?
15:10:59 I think so too
15:11:05 llu-laptop, nsaje: yeah, we can prolly make an exception in that case
15:12:06 As far as I remember it was somehow defined in the rules - and that was the exception
15:12:39 Like a common case
15:12:44 ... that's good enough for me :)
15:13:09 shall we move on from rc1?
15:13:19 eglynn_, do we have any consensus on https://bugs.launchpad.net/ceilometer/+bug/1357869 ? are we thinking of just adding negations in the catch-all source for rc1? sounds like it's a lot more work to get it fixed in the pipeline logic now?
15:13:20 Launchpad bug 1357869 in ceilometer "duplicate samples for network.services.lb.member" [Medium,In progress]
15:13:50 pradk: TBH I haven't fully worked out the best way of proceeding on that
15:14:06 ok
15:14:18 pradk: ... as discussed on the channel earlier, the same logic is involved in an issue with static resources defined in the pipeline
15:14:38 ah ok, i'll go back and read
15:15:04 unearthed by llu-laptop in https://review.openstack.org/#/c/121586/1/ceilometer/agent.py
15:15:50 cool
15:15:59 don't understand the root cause of the duplicate samples for network.services.lb.member
15:15:59 #topic TSDaaS/gnocchi status
15:16:01 :(
15:16:20 it's the same as what you mentioned in the patch I linked
15:16:33 if you define discovery in one source, it applies to all sources
15:16:40 llu-laptop: it's due to the way that the polling tasks are keyed by meter name instead of by source
15:16:43 so even the catch-all source has network discovery configured (implicitly)
15:18:16 llu-laptop: does that make sense?
15:18:54 eglynn_: i need to 're-think', looks like my brain jammed now
15:19:04 llu-laptop: cool enough :)
15:19:27 jd__: so some nice progress on gnocchi this week, do you want to elaborate?
15:20:57 sure
15:21:14 We moved forward on the archive policy implementation and got a lot of things merged
15:21:24 I think we'll finish final support by next week, fingers crossed
15:21:29 cool :)
15:21:34 eglynn_, as far as I know Mehdi, Igor and Ildiko have started collaboration on the Gnocchi dispatcher
15:21:45 sileht took over the Ceilometer dispatcher so we're making good progress
15:22:01 jd__, yeah, it is good :)
15:22:03 DinaBelova: yep, sileht has proposed https://review.openstack.org/#/q/status:open+project:stackforge/gnocchi+branch:master+topic:sileht/ceilo-dispatcher,n,z
15:22:05 I've a few patches to do to enhance the API a bit still
15:22:18 but from where I stand things are looking pretty good and active, thanks to you guys :)
15:22:30 we should have something fancy for Juno!
15:23:06 excellent! :)
15:24:35 jd__: so are you thinking in terms of an alpha release of gnocchi prior to the summit?
15:24:50 eglynn_: something like that
15:24:53 ... i.e. something that folks could play with alongside ceilo/juno
15:24:57 cool
15:25:00 yep
15:25:53 anything else on gnocchi?
15:26:06 o/
15:26:12 I have created a devstack patch
15:26:17 crazy you
15:26:21 wow!
15:26:28 sileht: nice :)
15:26:30 #link https://review.openstack.org/#/c/122349/
15:26:48 I saw it, just did not have the time to try it yet :)
15:27:01 it needs some patches in gnocchi
15:28:22 hum, it seems all the gnocchi patches needed for devstack are merged :)
15:28:49 cdent: would sileht's devstack patch above be another example of where your s/screen_it/run_process/ change would make sense?
15:29:03 cdent: i.e. https://review.openstack.org/#/c/122349/1/lib/gnocchi
15:29:30 "make sense" in what sense?
15:29:44 in the sense sense?
15:29:51 cdent: as in, "should also be applied to"
15:29:58 basically if something is being started in a screen then run_process can/should be used
15:30:29 however if somebody uses screen_it then run_process will get called anyway if USE_SCREEN is False
15:30:44 cdent, afaik it'll be something like 'start gnocchi api in a new screen' and that's it
15:30:51 so it'll be used, yeah
15:30:52 cdent: k, just checking if it made sense to change explicitly across the board or only for selected services
15:31:14 The plan is that any use of screen_it ought to be replaced with run_process
15:31:30 there's some ambiguity for things which are services like rabbit
15:31:40 (which are generally started in other ways)
15:31:54 k, fair enough
15:32:21 move on?
15:32:33 #topic Tempest status
15:32:52 DinaBelova: anything new there to report?
15:33:25 eglynn_, sadly I don't know what happened with the gate workers clean-up
15:33:36 that was the blocker for our tempest tests
15:33:40 due to my vacation...
15:33:50 a-ha, k
15:33:52 i think we're still in sit-and-wait mode
15:33:59 gordc, indeed...
15:34:07 side question, did we ever look at setting up pollster tests?
15:34:32 gordc: the problem was figuring out an acceptable way of accelerating the polling interval
15:34:38 gordc: we looked, Sead - no :)
15:34:43 Sean_
15:34:45 ok... just asking because i noticed the compute agent wasn't running for a few days.
15:34:48 gordc, we tried to start the discussion with the QA folks
15:35:03 gordc, yeah, without the tests it's hard to notice..
15:35:09 i guess something we should track manually for now.
15:35:09 the no-screen stuff that run_process() is supposed to be enabling has not been turned on yet because there are some issues with processes being shut down properly <- hearsay from #os-qa a couple of days ago
15:35:29 cdent, a-ha, thanks
15:35:35 cdent: i see.
15:35:45 gordc: the compute agent not running in tempest? ... now resolved, or?
15:35:58 eglynn_, in devstack :)
15:36:06 eglynn_: yeah, it's fixed now. it merged this morning
15:36:20 just a devstack config issue.
15:37:18 that's it from me on tempest. :)
15:37:30 me as well
15:38:00 to enable the compute agent in jenkins - need to add path to devstack-gate
15:38:07 patch_
15:38:34 gordc: a-ha, ok ... so it sounds like we're going to have to punt some of these testing scenarios (e.g. polling) that we feel we need to the kilo "in-tree" functional tests
15:39:16 eglynn_, that was the QA insight afair
15:39:24 eglynn_: i think so...
15:39:54 #topic Kilo summit planning
15:40:39 so the main thing here is just a heads-up that collaborative scheduling for the summit tracks will be the recommended model for kilo
15:41:02 not a million miles from what we did within the ceilo core team for the juno summit
15:41:37 the difference though is that there won't be a CFP webapp this time round for the initial proposals
15:41:59 instead we'll just go directly to some shared document
15:42:18 for now we really just have to agree on the shared doc format
15:42:32 last time we used a googledoc spreadsheet
15:42:53 seems like a natural choice
15:42:53 that seemed to work OK, made tallying up the votes etc. easy
15:43:08 works for me.
15:43:19 this time round some of the other projects are using etherpads
15:43:21 well, it's ok for me as well
15:43:38 hm, that won't work so well
15:43:49 it doesn't have as many features as googledoc
15:43:59 for me googledoc is still the best here
15:44:08 eglynn: you're speaking to the process of approving, or submitting?
15:44:17 googledoc worked fine last time I think, so +1 from me too
15:44:19 nealph_: both
15:44:24 * nealph_ thinks of the cinder push for formal specs
15:44:41 nealph_: last time round the submissions were all done thru a CFP webapp
15:45:18 nealph_: ... then we copied them over to a spreadsheet and the core team agreed on the track content from there
15:45:54 got it...okay.
15:46:04 nealph_: ... then the results were copied back to the CFP webapp, which got a bit messy TBH
15:46:31 k, sounds like a googledoc spreadsheet is the consensus
15:46:48 I'll set up a public one, and add the link to the summit wiki
15:47:14 eglynn_, thank you sir
15:48:41 note that we'll have circa 33% fewer scheduled slots in the track this time
15:48:49 ... as the Friday is being taken up by the free-flowing contributor meetups
15:48:58 ... i.e. the new pod++
15:50:02 #topic open discussion
15:50:15 eglynn_: is this new method the same for all projects, or just ceilometer?
15:50:55 two things: a) I'll be gone next week, moving. b) some rally comparisons of icehouse sql sample handling and juno: https://tank.peermore.com/tanks/cdent-rhat/RallySummary (now with visual aids, eglynn_ )
15:50:55 llu-laptop: it's recommended for all projects, but the details may differ (e.g. etherpad versus googledoc versus some other shared doc format)
15:51:01 llu-laptop - for all, if I'm right - it was decided at one of the TC meetings
15:51:10 cdent: nice :) ... thank you sir!
15:51:22 cdent, a-ha!
15:51:58 "Summary Summary" :)
15:52:02 Was asking earlier about work on multi-queue support for the collector...seems like there was a recent patch/bp? Anyone familiar?
15:52:12 llu-laptop: see http://lists.openstack.org/pipermail/openstack-dev/2014-September/045844.html
15:52:36 eglynn_: thx for the link
15:52:40 nealph_: I don't recall a patch
15:52:49 cdent: wow, that looks nice
15:53:04 sileht: you did something related, no? ^
15:53:16 it definitely seems a good improvement, at least for those specific circumstances, nsaje
15:53:53 cdent: thanks for the numbers. forgot to look last time
15:54:54 so far we have two independent sources of verification (from Chris and Ilya's testing) that the sql-a improvements in Juno have yielded significant gains :)
15:55:30 ... so this will be excellent data to bring to the TC when they do the final progress review on the gap analysis
15:55:35 nealph_: https://review.openstack.org/#/c/77612/
15:55:59 nealph_: was that what you were asking about?
15:56:28 or on the collector side specifically?
15:56:55 gordc: bingo, that was it.
15:57:10 nealph_: cool cool
15:57:41 jd__: is it time we called a result on those core nominations?
15:57:52 eglynn_: I think we're late
15:57:53 ... I think everyone who is going to vote has done so
15:57:59 so yeah
15:58:04 cool
15:58:40 * DinaBelova and nsaje are hiding and trying to be around :)
15:58:57 nsaje & DinaBelova: your plus-two-ability is coming soon :)
15:59:04 :)
15:59:06 :D
15:59:07 congrats to both of you.
15:59:15 hear, hear! :)
15:59:17 in what form would you like your bribes?
15:59:18 and thanks for the work
15:59:22 gordc, thanks :) we'll try to be useful further :)
15:59:26 cdent: cash money!
15:59:30 :D
15:59:30 congrats! :)
15:59:34 Huge thanks for the recognition guys!
16:00:01 ... so on that happy note :)
16:00:01 it is really nice, really :) I guess we both are happy :) and happy to work further as well :)
16:00:07 I'll make a toast to doing more reviews in the future today :)
16:00:07 bye! :)
16:00:16 ... let's call it a wrap
16:00:22 thanks folks!
16:00:23 see you, and thanks!
16:00:26 #endmeeting ceilometer
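[editor's note] A rough sketch of the duplicate-samples issue eglynn_ describes at 15:16:40: when polling tasks are keyed by meter name alone, the resources and discoveries of every pipeline source that mentions a meter are merged into one task, so an implicit catch-all source re-polls resources another source already covers. Keying by (source, meter) keeps them separate. All names below are hypothetical, not the actual ceilometer agent code:

```python
from collections import defaultdict

def tasks_keyed_by_meter(sources):
    # Problematic grouping: every source's resources for a given meter
    # are merged into a single polling task, losing which source asked.
    tasks = defaultdict(set)
    for src in sources:
        for meter in src["meters"]:
            tasks[meter].update(src["resources"])
    return tasks

def tasks_keyed_by_source(sources):
    # Keying by (source, meter) keeps each source's resource set apart,
    # so an overlapping catch-all source no longer duplicates samples.
    return {(src["name"], meter): set(src["resources"])
            for src in sources for meter in src["meters"]}

sources = [
    {"name": "lb_source", "meters": ["network.services.lb.member"],
     "resources": ["lb-1"]},
    {"name": "catch_all", "meters": ["network.services.lb.member"],
     "resources": ["lb-1", "lb-2"]},
]
# Keyed by meter: a single task polls {"lb-1", "lb-2"} on behalf of both
# sources, so lb-1 is effectively sampled twice.
# Keyed by (source, meter): each source retains only its own resources.
```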