15:01:34 <eglynn_> #startmeeting ceilometer
15:01:35 <openstack> Meeting started Thu Sep 18 15:01:34 2014 UTC and is due to finish in 60 minutes.  The chair is eglynn_. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:36 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:39 <openstack> The meeting name has been set to 'ceilometer'
15:01:41 <eglynn_> sorry for the late start folks
15:01:46 <nsaje> o/
15:01:48 <gordc> o/
15:01:50 <ildikov> o/
15:01:54 <llu-laptop> o/
15:01:54 <KurtRao> o/
15:01:54 <nealph_> o/
15:01:56 <ityaptin> o/
15:01:57 <DinaBelova__> o/
15:02:36 <eglynn_> #topic Juno status and FFEs
15:02:51 <eglynn_> #link https://launchpad.net/ceilometer/+milestone/juno-rc1
15:03:03 <cdent> o/
15:03:12 <eglynn_> \o/ all FFE BPs have landed since Monday :)
15:03:26 <eglynn_> nice work to all concerned
15:03:28 <nsaje> great!
15:03:33 <DinaBelova__> Nice :)
15:04:01 <nsaje> should we target https://bugs.launchpad.net/ceilometer/+bug/1369538 to rc1 also?
15:04:02 <uvirtbot> Launchpad bug 1369538 in ceilometer "static resources from pipeline.yaml not partitioned" [Undecided,In progress]
15:04:08 <eglynn_> so I did a trawl on rc1-targeted bugs and bumped anything that looked like it wasn't getting enough traction to land by mid next week
15:04:12 * eglynn_ looks
15:04:46 <eglynn_> nsaje: yep, done
15:04:57 <llu-laptop> how about https://bugs.launchpad.net/nova/+bug/1348818, do we have a patch for that now?
15:04:59 <uvirtbot> Launchpad bug 1348818 in keystone "Unittests do not succeed with random PYTHONHASHSEED value" [Medium,In progress]
15:05:00 <llu-laptop> cdent
15:05:02 <eglynn_> nsaje: Importance? ... medium?
15:05:08 <nsaje> eglynn_: yes
15:05:11 <DinaBelova__> Really I think this will be useful
15:05:31 <cdent> llu-laptop: there remain unclear issues with wsme. I'm not sure how that's resolved...
15:06:20 <eglynn_> gordc: did you have some insight on that PYTHONHASHSEED bug? https://bugs.launchpad.net/nova/+bug/1348818
15:06:23 <uvirtbot> Launchpad bug 1348818 in keystone "Unittests do not succeed with random PYTHONHASHSEED value" [Medium,In progress]
15:06:24 <gordc> llu-laptop: not sure how to resolve it either. i want to say it's wsme's fault but i don't want to incorrectly lead people that way.
15:06:27 <gordc> eglynn_: ^
15:06:41 <gordc> i'll try looking at it again later today i think since i have time.
15:06:48 <eglynn_> gordc: thanks!
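For context on bug 1348818: the failures come from tests that implicitly rely on dict/set iteration order, which changes from run to run once the interpreter's hash seed is randomized. A minimal sketch of the failure mode, using a hypothetical test rather than anything from the ceilometer tree:

    import unittest

    class HashSeedSensitiveTest(unittest.TestCase):
        def test_meter_names(self):
            meters = {'cpu': 1, 'memory.usage': 2, 'disk.root.size': 3}
            # Fragile: list(meters) depends on PYTHONHASHSEED, so asserting
            # a literal ordering passes or fails depending on the seed.
            # Stable: compare order-insensitively instead.
            self.assertEqual(sorted(meters),
                             ['cpu', 'disk.root.size', 'memory.usage'])

    if __name__ == '__main__':
        unittest.main()

Running the suite with a pinned seed (e.g. tox --hashseed 1234 -e py27, with tox >= 1.7) makes such failures reproducible while they are being tracked down.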
15:07:13 <jd__> o/
15:07:34 <cdent> gordc: feel free to commit on that patchset if something needs to happen
15:07:48 <gordc> cdent: will do
15:08:15 <eglynn_> so in terms of the time pressure around getting fixes into juno, RC1 will probably be cut next week
15:09:00 <eglynn_> after that we can request an rc2
15:09:09 <eglynn_> ... but the fix would have to be backported from master to proposed/juno and the level of scrutiny will go up significantly
15:09:25 <nsaje> understood
15:09:26 <eglynn_> ... so best to get everything we know that we need into rc1 if at all poss
15:09:42 <llu-laptop> eglynn_: all the patches to be merged right now must be related to rc1 bugs?
15:10:12 <llu-laptop> can we get https://review.openstack.org/#/c/121440/ merged? it's a doc bug
15:10:14 <eglynn_> llu-laptop: yep, exactly ... a patch without a closes-bug tag in the commit message should be barred from landing
15:10:17 <llu-laptop> it's a doc patch
15:10:38 <llu-laptop> got that
15:10:46 <nsaje> doc patches should be an exception, no?
15:10:59 <ildikov> I think so too
15:11:05 <eglynn_> llu-laptop, nsaje: yeah, we can prolly make an exception in that case
15:12:06 <DinaBelova__> As far as I remember it was defined somewhere in the rules - doc patches were the exception
15:12:39 <DinaBelova__> Like a common case
15:12:44 <eglynn_> ... that's good enough for me :)
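The tag in question is the Closes-Bug commit-message footer that Gerrit and Launchpad use to link a change to its bug; during the rc phase a commit without one is assumed not to be an rc1 bug fix. An illustrative commit message (not a real ceilometer commit):

    Partition static resources defined in pipeline.yaml

    Static resources listed in pipeline.yaml were polled by every
    central agent; partition them across the agents instead.

    Closes-Bug: #1369538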
15:13:09 <eglynn_> shall we move on from rc1?
15:13:19 <pradk> eglynn_, do we have any consensus on https://bugs.launchpad.net/ceilometer/+bug/1357869 ? are we thinking of just adding negations in the catch-all source for rc1? sounds like it's a lot more work to get it fixed in the pipeline logic now?
15:13:20 <uvirtbot> Launchpad bug 1357869 in ceilometer "duplicate samples for network.services.lb.member" [Medium,In progress]
15:13:50 <eglynn_> pradk: TBH I haven't fully worked out the best way of proceeding on that
15:14:06 <pradk> ok
15:14:18 <eglynn_> pradk: ... as discussed on the channel earlier the same logic is involved in an issue with static resources defined in the pipeline
15:14:38 <pradk> ah ok, i'll go back and read
15:15:04 <nsaje> unearthed by llu-laptop in https://review.openstack.org/#/c/121586/1/ceilometer/agent.py
15:15:50 <pradk> cool
15:15:59 <llu-laptop> don't understand the root cause of the duplicated samples of network.services.lb.member
15:15:59 <eglynn_> #topic TSDaaS/gnocchi status
15:16:01 <llu-laptop> :(
15:16:20 <nsaje> it's the same as what you mentioned in the patch I linked
15:16:33 <nsaje> if you define discovery in one source, it applies to all sources
15:16:40 <eglynn_> llu-laptop: it's due to the way that the polling tasks are keyed by meter name instead of by source
15:16:43 <nsaje> so even catch-all source has network discovery configured (implicitly)
15:18:16 <eglynn_> llu-laptop: does that make sense?
15:18:54 <llu-laptop> eglynn_: i need to 're-think', looks like my brain jammed now
15:19:04 <eglynn_> llu-laptop: cool enough :)
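A rough Python sketch of the keying problem described above - an illustration, not the actual agent code: when polling tasks are grouped by meter name alone, the source boundary is lost, so a resource discovered for the LBaaS source and again (implicitly) for the catch-all source lands in the same task twice and gets polled twice per interval:

    from collections import defaultdict

    def build_polling_tasks(sources):
        # Keyed by meter name only, so resources contributed by different
        # sources for the same meter pile up in a single task.
        tasks = defaultdict(list)
        for source in sources:
            for meter in source['meters']:
                tasks[meter].extend(source['resources'])
        return tasks

    sources = [
        {'name': 'lbaas_source', 'meters': ['network.services.lb.member'],
         'resources': ['member-1']},
        # the catch-all source picks up the same discovery implicitly
        {'name': 'catch_all', 'meters': ['network.services.lb.member'],
         'resources': ['member-1']},
    ]

    # member-1 appears twice in the one task -> duplicate samples
    print(dict(build_polling_tasks(sources)))
    # {'network.services.lb.member': ['member-1', 'member-1']}

Keying tasks by (source, meter) would keep the per-source resource lists apart, though as eglynn_ notes above, the actual fix was still being worked out at this point.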
15:19:27 <eglynn_> jd__: so some nice progress on gnocchi this week, do you want to elaborate?
15:20:57 <jd__> sure
15:21:14 <jd__> We moved forward on archive policy implementation and got a lot of things merged
15:21:24 <jd__> I think we'll finish final support by next week, fingers crossed
15:21:29 <eglynn_> cool :)
15:21:34 <DinaBelova> eglynn_, as far as I know Mehdi, Igor and Ildiko started collaborating on the Gnocchi dispatcher
15:21:45 <jd__> sileht took over the Ceilometer dispatcher so we're having good progress
15:22:01 <DinaBelova> jd__, yeah, it is good :)
15:22:03 <eglynn_> DinaBelova: yep sileht has proposed https://review.openstack.org/#/q/status:open+project:stackforge/gnocchi+branch:master+topic:sileht/ceilo-dispatcher,n,z
15:22:05 <jd__> I've a few patches to do to enhance the API a bit still
15:22:18 <jd__> but from where I stand things are looking pretty good and active, thanks to you guys :)
15:22:30 <jd__> we should have something fancy for Juno!
15:23:06 <eglynn_> excellent! :)
15:24:35 <eglynn_> jd__: so are you thinking in terms of an alpha release of gnocchi prior to the summit?
15:24:50 <jd__> eglynn_: something like that
15:24:53 <eglynn_> ... i.e. something that folks could play with alongside ceilo/juno
15:24:57 <eglynn_> cool
15:25:00 <jd__> yep
15:25:53 <eglynn_> anything else on gnocchi?
15:26:06 <sileht> o/
15:26:12 <sileht> I have created a devstack patch
15:26:17 <jd__> crazy you
15:26:21 <DinaBelova> wow!
15:26:28 <eglynn_> sileht: nice :)
15:26:30 <sileht> #link https://review.openstack.org/#/c/122349/
15:26:48 <ildikov> I saw it, just did not have the time to try it yet :)
15:27:01 <sileht> it needs some patches in gnocchi
15:28:22 <sileht> hum, it seems all the gnocchi patches needed for devstack are merged :)
15:28:49 <eglynn_> cdent: would sileht's devstack patch above be another example of where your s/screen_it/run_process/ change would make sense?
15:29:03 <eglynn_> cdent: i.e. https://review.openstack.org/#/c/122349/1/lib/gnocchi
15:29:30 <cdent> "make sense" in what sense?
15:29:44 <jd__> in the sense sense?
15:29:51 <eglynn_> cdent: as is, "should also be applied to"
15:29:58 <cdent> basically if something is being started in a screen then run_process can/should be used
15:30:29 <cdent> however if somebody uses screen_it then run_process will get called anyway if USE_SCREEN is False
15:30:44 <DinaBelova> cdent, afaik it'll be something like 'start gnocchi api in a new screen' and that's it
15:30:51 <DinaBelova> so it'll be used, yeah
15:30:52 <eglynn_> cdent: k, just checking if it made sense to change explicitly across the board or only for selected services
15:31:14 <cdent> The plan is that any use of screen_it ought to be replaced with run_process
15:31:30 <cdent> there's some ambiguity for things which are services like rabbit
15:31:40 <cdent> (which are generally started in other ways)
15:31:54 <eglynn_> k, fair enough
15:32:21 <eglynn_> move on?
15:32:33 <eglynn_> #topic Tempest status
15:32:52 <eglynn_> DinaBelova: anything new there to report?
15:33:25 <DinaBelova> eglynn_, sadly I don't know what happened with the gate workers clean-up
15:33:36 <DinaBelova> that was the blocker for our tempest tests
15:33:40 <DinaBelova> due to my vacation...
15:33:50 <eglynn_> a-ha, k
15:33:52 <gordc> i think we're still in sit and wait mode
15:33:59 <DinaBelova> gordc, indeed...
15:34:07 <gordc> side question, did we ever look at setting up pollster tests?
15:34:32 <eglynn_> gordc: the problem was figuring out an acceptable way of accelerating the polling interval
15:34:38 <vrovachev> gordc: we looked, Sean - no :)
15:34:45 <gordc> ok... just asking because i noticed compute agent wasn't running for a few days.
15:34:48 <DinaBelova> gordc, we tried to start the discussion with QA folks
15:35:03 <DinaBelova> gordc, yeah, without the tests it's hard to notice..
15:35:09 <gordc> i guess something we should track manually for now.
15:35:09 <cdent> the no-screen stuff that run_process() is supposed to be enabling has not been turned on yet because there are some issues with processes being shut down properly <- hearsay from #os-qa a couple of days ago
15:35:29 <DinaBelova> cdent, a-ha, thanks
15:35:35 <gordc> cdent: i see.
15:35:45 <eglynn_> gordc: the compute agent not running in tempest? ... now resolved, or?
15:35:58 <DinaBelova> eglynn_, in devstack :)
15:36:06 <gordc> eglynn_: yeah, it's fixed now. it merged this morning
15:36:20 <gordc> just a devstack config issue.
15:37:18 <gordc> that's it from me on tempest. :)
15:37:30 <DinaBelova> me as well
15:38:00 <vrovachev> to enable the compute agent in jenkins we need to add a patch to devstack-gate
15:38:34 <eglynn_> gordc: a-ha, ok ... so it sounds like we're going to have to punt some of these testing scenarios (e.g. polling) that we feel we need to the kilo "in-tree" functional tests
15:39:16 <DinaBelova> eglynn_, that was the QA insight afair
15:39:24 <gordc> eglynn_: i think so...
15:39:54 <eglynn_> #topic Kilo summit planning
15:40:39 <eglynn_> so the main thing here is just a heads-up that collaborative scheduling for the summit tracks will be the recommended model for kilo
15:41:02 <eglynn_> not a million miles from what we did within the ceilo core team for the juno summit
15:41:37 <eglynn_> difference though is that there won't be a CFP webapp this time round for the initial proposals
15:41:59 <eglynn_> instead we'll just go directly to some shared document
15:42:18 <eglynn_> for now we really just have to agree on the shared doc format
15:42:32 <eglynn_> last time we used a googledoc spreadsheet
15:42:53 <nsaje> seems like a natural choice
15:42:53 <eglynn_> that seemed to work OK, made tallying up the votes etc. easy
15:43:08 <gordc> works for me.
15:43:19 <eglynn_> this time round some of the other projects are using etherpads
15:43:21 <DinaBelova> well, it's ok for me as well
15:43:38 <DinaBelova> hm, that won't work so well
15:43:49 <DinaBelova> etherpads don't have as many features as googledoc
15:43:59 <DinaBelova> for me googledoc is still the best here
15:44:08 <nealph_> eglynn: you're speaking to the process of approving, or submitting?
15:44:17 <ildikov> googledoc worked fine last time I think, so +1 from me too
15:44:19 <eglynn_> nealph_: both
15:44:24 * nealph_ thinks of the cinder push for formal specs
15:44:41 <eglynn_> nealph_: last time round the submissions were all done thru a CFP webapp
15:45:18 <eglynn_> nealph_: ... then we copied them over to a spreadsheet and the core team agreed on the track content from there
15:45:54 <nealph_> got it...okay.
15:46:04 <eglynn_> nealph_: ... then the results were copied back to the CFP webapp, which got a bit messy TBH
15:46:31 <eglynn_> k, sounds like a googledoc spreadsheet is the consensus
15:46:48 <eglynn_> I'll set up a public one, and add the link to the summit wiki
15:47:14 <DinaBelova> eglynn_, thank you sir
15:48:41 <eglynn_> note that we'll have circa 33% fewer scheduled slots in the track this time
15:48:49 <eglynn_> ... as the Friday is being taken up by the free-flowing contributor meetups
15:48:58 <eglynn_> ... i.e. the new pod++
15:50:02 <eglynn_> #topic open discussion
15:50:15 <llu-laptop> eglynn_: is this new method the same for all projects, or just ceilometer?
15:50:55 <cdent> two things from me: a) I'll be gone next week, moving. b) some rally comparisons of icehouse and juno sql sample handling: https://tank.peermore.com/tanks/cdent-rhat/RallySummary (now with visual aids, eglynn_ )
15:50:55 <eglynn_> llu-laptop: it's recommended for all projects, but the details may differ (e.g. etherpad versus googledoc versus some other shared doc format)
15:51:01 <DinaBelova> llu-laptop - for all, if I'm right - it was decided at one of the TC meetings
15:51:10 <eglynn_> cdent: nice :) ... thank you sir!
15:51:22 <DinaBelova> cdent, a-ha!
15:51:58 <DinaBelova> "Summary Summary" :)
15:52:02 <nealph_> I was asking earlier about work on multi-queue support for the collector... seems like there was a recent patch/bp? Anyone familiar?
15:52:12 <eglynn_> llu-laptop: see http://lists.openstack.org/pipermail/openstack-dev/2014-September/045844.html
15:52:36 <llu-laptop> eglynn_: thx for the link
15:52:40 <eglynn_> nealph_: I don't recall a patch
15:52:49 <nsaje> cdent: wow, that looks nice
15:53:04 <gordc> sileht: you did something related no? ^
15:53:16 <cdent> it definitely seems a good improvement at least for those specific circumstances nsaje
15:53:53 <gordc> cdent: thanks for the numbers. forgot to look last time
15:54:54 <eglynn_> so far we have two independent sources of verification (from Chris and Ilya's testing) that the sql-a improvements over Juno have yielded significant gains :)
15:55:30 <eglynn_> ... so this will be excellent data to bring to the TC when they do the final progress review on the gap analysis
15:55:35 <gordc> nealph_: https://review.openstack.org/#/c/77612/
15:55:59 <gordc> nealph_: was that what you were asking about?
15:56:28 <gordc> or on collector side specifically?
15:56:55 <nealph_> gordc: bingo, that was it.
15:57:10 <gordc> nealph_: cool cool
15:57:41 <eglynn_> jd__: is it time we called a result on those core nominations?
15:57:52 <jd__> eglynn_: I think we're late
15:57:53 <eglynn_> ... I think everyone who is going to vote has done so
15:57:59 <jd__> so yeah
15:58:04 <eglynn_> cool
15:58:40 * DinaBelova and nsaje are hiding and trying to be around :)
15:58:57 <eglynn_> nsaje & DinaBelova: your plus-two-ability is coming soon :)
15:59:04 <nsaje> :)
15:59:06 <DinaBelova> :D
15:59:07 <gordc> congrats to both of you.
15:59:15 <eglynn_> hear, hear! :)
15:59:17 <cdent> in what form would you like your bribes?
15:59:18 <gordc> and thanks for the work
15:59:22 <DinaBelova> gordc, thanks :) we'll try to be useful further :)
15:59:26 <gordc> cdent: cash money!
15:59:30 <DinaBelova> :D
15:59:30 <ildikov> congrats! :)
15:59:34 <nsaje> Huge thanks for the recognition guys!
16:00:01 <eglynn_> ... so on that happy note :)
16:00:01 <DinaBelova> it is really nice, really :) I guess we both are happy :) and happy to work further as well :)
16:00:07 <nsaje> I'll make a toast to doing more reviews in the future today :)
16:00:07 <DinaBelova> bye! :)
16:00:16 <eglynn_> ... let's call it a wrap
16:00:22 <eglynn_> thanks folks!
16:00:23 <DinaBelova> see you, and thanks!
16:00:26 <eglynn_> #endmeeting ceilometer