15:00:11 <eglynn> #startmeeting ceilometer
15:00:12 <openstack> Meeting started Thu Oct  2 15:00:11 2014 UTC and is due to finish in 60 minutes.  The chair is eglynn. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:13 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:15 <openstack> The meeting name has been set to 'ceilometer'
15:00:20 <cdent> o/
15:00:29 <eglynn> hey, y'all
15:00:48 <sileht> o/
15:00:54 <ildikov> o/
15:01:48 <eglynn> #topic Juno close-out
15:01:56 <DinaBelova> o/
15:02:00 <eglynn> so RC1 is in the bag! :)
15:02:06 <fabiog_> o/
15:02:12 <nealph> o/
15:02:21 <eglynn> ... or tagged and bagged as the cool kids would say :)
15:02:23 <eglynn> #link https://launchpad.net/ceilometer/+milestone/juno-rc1
15:02:53 <eglynn> so the final juno tag will be based on this
15:03:04 <eglynn> *if* we don't find any showstoppers in the mean time
15:03:34 <_nadya_> o/
15:04:04 <eglynn> if we do need an RC2, then the fix would have to be landed on master first then backported to proposed/juno branch
15:04:08 <DinaBelova> eglynn, I just wanted to point one interesting bug
15:04:15 <eglynn> DinaBelova: shoot
15:04:19 <DinaBelova> possibly it'll be a huge pain if we don't fix it
15:04:24 * DinaBelova searching
15:04:52 <DinaBelova> eglynn, https://bugs.launchpad.net/python-ceilometerclient/+bug/1357343
15:04:53 <uvirtbot> Launchpad bug 1357343 in python-ceilometerclient "ceilometer-alarm-evaluator fails after sometime with giving 401" [Medium,In progress]
15:05:26 <DinaBelova> it's an issue with roots in our client being updated to the oslo common code
15:05:44 <DinaBelova> so now it shows up after the alarm evaluator has been running for a long time
15:06:03 <eglynn> DinaBelova: token expiry?
15:06:07 <DinaBelova> eglynn, yeah
15:06:08 <DinaBelova> :)
15:06:38 <eglynn> meh! we had similar problems before with the ceiloclient, which were fixed like a year ago
15:06:50 <eglynn> OK, that definitely needs to be fixed
15:06:50 <DinaBelova> eglynn, yeah, so there was the change by me merged to oslo-incubator
15:07:02 <DinaBelova> and now it's trying to be updated in the ceilo client
15:07:05 <DinaBelova> one moment
15:07:11 <eglynn> DinaBelova: so we will need an RC2 in that case
15:07:30 <DinaBelova> eglynn, here it is https://review.openstack.org/#/c/125058/
15:08:01 <DinaBelova> although it's kind of blocked for now by gordc's change
15:08:19 <gordc> eglynn: are clients under the same release rules?
15:08:40 <DinaBelova> eglynn, the rc2 won't be needed
15:08:50 <DinaBelova> it'll be a ceilo client change
15:08:55 <gordc> DinaBelova: my change merged... but i think we need to properly resync that item.
15:08:56 <DinaBelova> but we need to merge this asap
15:09:03 <DinaBelova> and create new ceilo client release
15:09:05 <eglynn> a-ha, I see
15:09:07 <DinaBelova> for the installations
15:09:39 <eglynn> gordc, DinaBelova: the clients were supposed to be frozen before juno-rc1, but agreed, we'll definitely need a 1.0.12 to get this fix
15:09:47 <DinaBelova> eglynn, a-ha, ok
15:09:55 <DinaBelova> gordc, good to know it'll be merged
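[editor's note: the bug 1357343 thread above is about a long-running service failing with 401s once its keystone token expires. A minimal sketch of the catch-401-then-re-auth-and-retry pattern that kind of fix typically applies, using illustrative stand-in names (`Unauthorized`, `Client`, `auth`), not the real ceilometerclient or oslo-incubator API:]

```python
# Hypothetical sketch of the re-auth pattern behind the fix discussed
# above: when a long-running service (like ceilometer-alarm-evaluator)
# gets a 401 because its keystone token expired, re-authenticate once
# and retry instead of failing forever. All names here are illustrative.

class Unauthorized(Exception):
    """Stands in for the client's HTTP 401 exception."""


class Client:
    """Toy client whose token can expire between calls."""

    def __init__(self, auth):
        self._auth = auth          # callable that fetches a fresh token
        self._token = auth()

    def _request(self, path):
        if self._token is None:
            raise Unauthorized(path)
        return "200 %s" % path

    def expire(self):
        # Simulates keystone-side token expiry after a long run.
        self._token = None

    def get(self, path):
        try:
            return self._request(path)
        except Unauthorized:
            # Token expired: fetch a fresh one and retry exactly once.
            self._token = self._auth()
            return self._request(path)


client = Client(auth=lambda: "fresh-token")
client.expire()
print(client.get("/v2/alarms"))  # succeeds via re-auth instead of 401
```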
15:09:58 <prad> eglynn, could we please consider this for juno rc https://bugs.launchpad.net/ceilometer/+bug/1374012
15:09:59 <uvirtbot> Launchpad bug 1374012 in ceilometer "Ceilometer polls lbaas resources even when the neutron enabled and disabled lbaas" [Medium,Fix committed]
15:10:55 <eglynn> prad: yes, we should get that in
15:11:03 <prad> ty sir
15:11:13 <eglynn> OK looks like we will need both a new ceiloclient release and an RC2
15:11:29 <DinaBelova> eglynn, yeah, with this bug -  yes
15:12:20 <gordc> there's two other bugs i've tagged with rc-potential: https://bugs.launchpad.net/ceilometer/+bugs?field.tag=juno-rc-potential
15:12:51 <eglynn> https://bugs.launchpad.net/ceilometer/+bug/1369124 was the one that we thought was fixed in RC1 right?
15:12:54 <uvirtbot> Launchpad bug 1369124 in ceilometer "syslog in gate-swift-dsvm-functional full of ceilometer errors" [Medium,In progress]
15:13:22 <gordc> eglynn: yeah, that was the one where i fixed nothing... or i fixed half of it.
15:13:34 <eglynn> yeah, let's try to get our value in cutting an RC2 and get all 3 fixes in
15:14:09 <gordc> eglynn: agreed. https://bugs.launchpad.net/ceilometer/+bug/1375568 actually only affects juno (it's a py26 related bug)
15:14:13 <uvirtbot> Launchpad bug 1375568 in ceilometer "Python2.6: Error importing module ceilometer.ipmi.platform.intel_node_manager: 'module' object has no attribute 'OrderedDict'" [Medium,In progress]
15:14:54 <eglynn> let's aim for EoD Monday for all fixes landed if possible, so that I can request the RC2 in the rel mgr 1:1 on Tuesday at 11:45UTC
15:16:14 <gordc> two patches: https://review.openstack.org/#/c/124686/ https://review.openstack.org/#/c/124916/
15:16:37 <DinaBelova> gordc, a-ha, ok
15:16:50 <eglynn> one last thing about the juno closeout
15:17:08 <eglynn> I need to report back to the TC on the gap analysis outcome
15:17:35 <eglynn> #link https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Ceilometer_Gap_Coverage
15:18:13 <eglynn> I've marked pretty much everything as completed
15:18:28 <eglynn> ... except one task under Tempest: "curate the ceilometer Tempest coverage to ensure it does not become persistently problematic in the gate"
15:18:44 <eglynn> ... which I guess could be interpreted as an ongoing requirement in any case
15:18:51 <gordc> eglynn: seems accurate to me.
15:18:59 <DinaBelova> eglynn, yes
15:19:08 <ildikov> eglynn: the #4 in the docco is not done yet
15:19:26 <cdent> there's incremental progress with getting the USE_SCREEN=False stuff turned on which is supposed to help with the problem with services sometimes not starting
15:19:29 <ildikov> eglynn: but I don't think that currently it would hurt anyone though
15:19:46 <eglynn> ildikov: a-ha, k, I'll try to "finesse" that one :)
15:19:57 <cdent> there was a huge mess trying to get that stuff backported to icehouse (you may have seen Sean ranty email) but it has made it through just today
15:20:07 <cdent> the backport was required to get grenade working
15:20:26 <cdent> in the end this ought to result in more stable ceilo in tempest
15:20:29 <ildikov> eglynn: thanks, well, it's still better to have info on more places, than nowhere :)
15:21:00 <eglynn> cdent: nice work! I didn't see that ranty mail, but I'll read after the meeting, thanks for the heads-up
15:21:15 <cdent> the gist is that icehouse in the gate is fuxored
15:21:21 <cdent> and nobody cares
15:21:26 <DinaBelova> cdent, heh..
15:21:57 <eglynn> DinaBelova: I'm wondering also about the "land the stalled Tempest patches" task given that the nova notification test was then skipped?
15:22:08 <eglynn> DinaBelova: ... your unskip patch didn't land yet, amiright?
15:22:16 <DinaBelova> eglynn, that's true..
15:22:38 <gordc> DinaBelova: have you tried rebasing/running it again recently?
15:22:39 <DinaBelova> eglynn, here it is https://review.openstack.org/#/c/115212/
15:22:54 <DinaBelova> gordc, will do today
15:23:00 <DinaBelova> had a hot few weeks
15:23:12 <gordc> DinaBelova: cool cool, np
15:23:24 <eglynn> cool, it would be great to get that landed
15:23:30 <DinaBelova> eglynn, yeah, for sure
15:24:08 <eglynn> anything else juno related?
15:24:45 <eglynn> #topic TSDaaS/gnocchi status
15:25:03 <eglynn> jd__: the floor is yours!
15:25:30 <jd__> hey
15:25:39 <jd__> good progress again this week
15:25:50 <jd__> we now have merged all archive policy stuff
15:26:00 <eglynn> cool :)
15:26:05 <DinaBelova> yay!
15:26:11 <jd__> we have Keystone middleware enabled by default
15:26:18 <jd__> (and support loading more middleware if needed)
15:26:33 <jd__> sileht progressed on the Ceilometer dispatcher
15:26:35 * eglynn will finally rebase his influx patch to try to emulate the archive_policy logic
15:26:38 <jd__> and started working on aggregation
15:26:55 <jd__> and we fixed a bunch of bugs and race condition in the tests
15:27:00 <jd__> so it's getting pretty solid at this stage
15:27:13 <sileht> jd__, thx for the summary :)
15:27:30 <eglynn> yeah, interesting discussion on the mean-of-means versus mean-of-raw-data on the cross-entity aggregation review
15:27:38 <jd__> we also now gate on py34-postgresql
15:27:50 <jd__> (we can't gate on py34-mysql because mysql does not work with Python 3)
15:28:14 <eglynn> a-ha, so can that job be skipped?
15:29:07 <eglynn> ok, we only have a py34-postgres right now
15:29:17 <eglynn> so no need to skip, cool, got it
15:29:25 <jd__> yup
15:29:51 <jd__> also I'm working on the tooz IPC issues that sileht discovered
15:30:43 <sileht> jd__, this one is not easy
15:31:27 <eglynn> BTW amalagon is working with the new archive policy support in her gnocchi custom aggregators
15:31:39 <eglynn> (... so that she can select the most granular data available when aggregating across periods, to avoid the mean-of-means distortion where possible)
15:31:48 <jd__> sileht: fingers crossed I'll fix it :D
15:32:00 <sileht> jd__, I guess we can fix it if gnocchi runs standalone (even with multiple workers), but for folks that will use wsgi, it's not possible
15:32:16 <jd__> sileht: I'm actually trying to fix it in tooz directly
15:32:41 <eglynn> does that issue only manifest with the posix_ipc driver for tooz?
15:32:52 <sileht> eglynn, yep
15:33:17 <eglynn> would we be recommending the tooz/ZK driver instead for production deployments?
15:33:37 <sileht> once you have more than one gnocchi node you don't have a choice
15:33:58 <sileht> (or tooz/memcache driver)
15:34:33 <jd__> yeah for now IPC is only for one instance of Gnocchi running or in the unit tests
15:34:34 <eglynn> so really IPC approach is mainly intended for really small deployments and the tests?
15:34:40 <jd__> but I hope to fix that in tooz directly
15:34:46 <jd__> eglynn: yes, one node or tests
15:34:54 <eglynn> cool, got it, thanks!
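[editor's note: the rule sileht and jd__ settle on above — posix IPC only coordinates workers on one machine, so multi-node Gnocchi needs a networked tooz backend like ZooKeeper or memcached — can be sketched as a deployment helper. This is not real gnocchi code; the function name is hypothetical and the URL schemes are tooz-style but shown here purely for illustration:]

```python
# Minimal sketch (not gnocchi code) of the coordination-backend rule
# discussed above: local IPC is fine for one node or the test suite,
# while multi-node deployments must use a networked backend.

def coordination_url(node_count, zookeeper_hosts=None, memcached_host=None):
    """Pick a tooz-style backend URL appropriate for the deployment size."""
    if node_count <= 1:
        # Single gnocchi node (or unit tests): local IPC is enough.
        return "ipc://"
    if zookeeper_hosts:
        # ZooKeeper: the usual recommendation for production clusters.
        return "zookeeper://%s" % ",".join(zookeeper_hosts)
    if memcached_host:
        # memcached: a lighter-weight networked alternative.
        return "memcached://%s" % memcached_host
    raise ValueError("multi-node deployments need zookeeper or memcached")


print(coordination_url(1))
print(coordination_url(3, zookeeper_hosts=["zk1:2181", "zk2:2181"]))
```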
15:35:02 <eglynn> anything else on gnocchi?
15:35:23 <eglynn> #topic Tempest status
15:35:49 <eglynn> main thing here would be to get that nova notification test unskipped before the TC review if poss
15:36:23 <eglynn> (slated for the next TC meeting, Oct 7th)
15:36:43 <DinaBelova> eglynn, I'll rebase it
15:36:47 <DinaBelova> let's give it a try
15:36:48 <DinaBelova> :P)
15:36:57 <eglynn> DinaBelova: great, thanks!
15:37:02 <DinaBelova> eglynn, np
15:37:36 <eglynn> #topic kilo summit planning
15:37:58 <DinaBelova> yay, kilo summit soon :)
15:38:03 <eglynn> so I got confirmation that we'll definitely be down from 10 to 6 slots for Paris
15:38:18 <DinaBelova> :(
15:38:23 <DinaBelova> that's awkward...
15:38:30 <DinaBelova> but anyway :)
15:38:31 <eglynn> apparently the average cut was 33% as there's one less day of formal design sessions
15:38:49 <DinaBelova> eglynn, and that organised 'pod' day, yeah?
15:38:50 <eglynn> but that was weighted by a rough metric of "project activity" over juno
15:39:06 <eglynn> i.e. a metric of BPs/bugs/reviews etc.
15:39:12 <eglynn> so we got a slightly larger cut
15:39:27 <eglynn> I guess because some of the focus was on gnocchi
15:39:27 <cdent> :(
15:39:33 <DinaBelova> eglynn, wow! i did not know they were using these stats
15:39:51 <eglynn> DinaBelova: me neither, until I asked how the decision was made
15:39:58 <DinaBelova> eglynn, a-ha, ok, got it
15:40:34 <eglynn> the upside is we'll have a full day on Friday of the "contributor meetup" ... i.e. the pod++
15:40:38 <cdent> that's going to be yet another perverse incentive
15:40:58 <cdent> "I get more goodies if I make lots of useless bps/bugs/reviews"
15:40:59 <eglynn> cdent: yeap, as if we didn't already have enough of those ...
15:41:17 <DinaBelova> eglynn, I also wanted to note that ityaptin is preparing the lab to test gnocchi (speaking about the performance part, but who knows, probably we'll find some other issues)
15:41:29 <eglynn> enough of those perverse incentives I meant ... as opposed to useless BPs ;)
15:41:39 <eglynn> DinaBelova: coolness :)
15:41:43 <cdent> :)
15:43:21 <eglynn> we'll discuss concrete topics next week
15:43:40 <eglynn> (as agreed last week)
15:44:09 <eglynn> moving on ...
15:44:10 <ildikov> I cannot attend the next week's meeting :(
15:44:11 <eglynn> #topic OPW ideas
15:44:46 <eglynn> ildikov: ok, we can maybe push it out a week, I'll check when the schedule needs to be formalized
15:45:11 <ildikov> eglynn: cool, thanks
15:45:24 <eglynn> on the OPW, we've a good record of diversity promotion on this project :)
15:45:49 <eglynn> two previous OPW interns (Terri & Ana) and another volunteer interested for the next round
15:45:52 <DinaBelova> eglynn, here is the etherpad https://etherpad.openstack.org/p/ceilometer-opw-ideas
15:46:07 <DinaBelova> so let's fill it :)
15:46:15 <eglynn> DinaBelova: thanks! ... exactly
15:46:28 <DinaBelova> I hope to spend time on this tomorrow
15:46:34 <DinaBelova> :)
15:47:02 <eglynn> if anyone has any project ideas, no matter how wacky, please do drop a quick description onto that etherpad
15:47:50 <eglynn> to give a sense of the scoping ... Terri worked on adding group_by semantics to the ceilo v2 API, Ana is working on period-spanning stats for gnocchi
15:48:45 <eglynn> thanks to DinaBelova for stepping up to mentor in this upcoming cycle!
15:48:54 <DinaBelova> eglynn, np :)
15:48:57 <DinaBelova> I'd love to try :)
15:49:05 <DinaBelova> I hope I'll be useful in this role
15:49:42 <jd__> I've one for Gnocchi I think
15:49:55 <eglynn> jd__: nice one!
15:50:12 <DinaBelova> jd__, it should be telemetry :) gnocchi or ceilo - it does not matter imho
15:50:16 <jd__> some people expressed interest in having a Ceph driver for Gnocchi
15:50:33 <DinaBelova> jd__, wow :) who are they? :)
15:50:37 <DinaBelova> just interested
15:50:38 <DinaBelova> :)
15:51:02 <eglynn> direct native ceph, as opposed to ceph-sitting-behind-swift?
15:51:06 <jd__> eglynn: yes
15:51:20 <eglynn> cool, that sounds really interesting
15:51:21 <DinaBelova> jd__, that's an interesting thing btw
15:51:25 <DinaBelova> eglynn, indeed
15:51:36 <jd__> I talked about that with nijaba_ TBH
15:51:49 <DinaBelova> jd__, a-ha, ok
15:52:09 <eglynn> jd__: would it be an appropriate level of complexity/challenge for an intern?
15:52:20 <jd__> eglynn: good question, I don't know
15:52:32 * nijaba_ confirms
15:52:34 <DinaBelova> eglynn, speaking about the technical thing - yeah, it would be
15:52:43 <DinaBelova> but it won't be a pure OpenStack task to be honest
15:52:48 <jd__> I think it is technically
15:53:11 <eglynn> cool, esp. if the intern has a bit of storage-foo already
15:53:13 <jd__> yeah it's kind of a mix, though it's still writing code for OpenStack
15:53:18 <jd__> like it would be writing a driver for Nova or Neutron
15:53:37 <jd__> and since it would be based on Carbonara, it wouldn't be uber complicated too
15:54:08 <ildikov> looks closer than the etherpad after the first quick read
15:54:21 <DinaBelova> ildikov, yeah, for sure
15:54:21 <eglynn> cool, so the main challenge would be mapping the swift client semantics onto the equivalent under the ceph API?
15:54:28 <jd__> eglynn: likely
15:54:32 <DinaBelova> ildikov, I just hope to find even better variant
15:54:46 <ildikov> is there a deadline for deciding the task?
15:55:36 <cdent> Given recent hullabaloo about "vendory" stuff is a ceph project ideal?
15:55:37 <eglynn> ildikov: applications are due by Oct 22
15:56:34 <ildikov> DinaBelova: well, it's never an easy task, I guess we could investigate a bit the Ceilo-Gnocchi integration area, maybe it could give some more tasks, I'm not sure now
15:56:41 <eglynn> cdent: ceph being open-source mitigates the vendory feel
15:56:43 <eglynn> ?
15:56:52 <ildikov> eglynn: cool, thanks
15:57:09 <eglynn> ildikov: but that's the application deadline for the intern herself
15:57:45 <ildikov> eglynn: and they choose project or task?
15:58:00 <chmouel> eglynn: the difference between the two is that one uses a rest api and the native rados library does not (it's a library calling the ceph nodes directly)
15:58:00 <eglynn> ildikov: yeah ... to be realistic, the idea would have to be firmed up before then as the intern applies for a particular project idea
15:58:09 <cdent> eglynn: that's certainly an argument to make, I suppose, but some people would probably not agree, and what's the win of native ceph vs swift-over-ceph?
15:58:10 <chmouel> eglynn: but i don't think this is hard
15:58:19 <ildikov> eglynn: hmm, then we're already late :)
15:58:20 <eglynn> chmouel: a-ha, got it, thanks!
15:59:08 <eglynn> ildikov: yeah, sooner the better re. the ideas
15:59:13 <cdent> I don't have a position on this, just observing.
15:59:32 <ildikov> eglynn: yeap, sure, got it
15:59:33 <eglynn> looks like this ceph work has definite potential though
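[editor's note: eglynn frames the intern project above as "mapping the swift client semantics onto the equivalent under the ceph API". A rough sketch of that mapping, assuming a Carbonara-style blob store mostly needs "write this blob under this name" and "read it back" — which swiftclient exposes as put_object/get_object and the native rados library exposes roughly as write_full/read on an ioctx. The classes below are illustrative stand-ins, not the real gnocchi driver interface or the real swift/rados clients:]

```python
# Illustrative sketch of a Carbonara-style blob store over a
# rados-like object API. FakeIoctx is an in-memory stand-in for a
# rados ioctx (one ceph pool); all names here are hypothetical.

class FakeIoctx:
    """In-memory stand-in for a rados ioctx."""

    def __init__(self):
        self._objects = {}

    def write_full(self, name, data):
        # rados-style: replace the whole object in one call.
        self._objects[name] = data

    def read(self, name):
        return self._objects[name]


class CephStorage:
    """Toy blob store showing the swift-to-rados call mapping."""

    def __init__(self, ioctx):
        self._ioctx = ioctx

    def store(self, entity, blob):
        # swiftclient equivalent: put_object(container, entity, blob)
        self._ioctx.write_full(entity, blob)

    def fetch(self, entity):
        # swiftclient equivalent: get_object(container, entity)
        return self._ioctx.read(entity)


storage = CephStorage(FakeIoctx())
storage.store("cpu_util", b"serialized-timeserie")
print(storage.fetch("cpu_util"))
```

One design point from the thread: because the driver would sit on Carbonara, the ceph layer only has to handle opaque blobs, which is why jd__ expects it "wouldn't be uber complicated".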
16:00:10 <cdent> time has run out
16:00:18 <eglynn> anything else on letting a thousand diverse flowers bloom in the open source world? :)
16:00:46 <eglynn> k, let's skip open discussion this week, the shot-clock has beaten us
16:00:54 <eglynn> thanks for your time folks!
16:01:02 <eglynn> #endmeeting ceilometer