15:00:39 #startmeeting ceilometer
15:00:39 Meeting started Thu Aug 21 15:00:39 2014 UTC and is due to finish in 60 minutes. The chair is gordc. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:43 The meeting name has been set to 'ceilometer'
15:00:46 hey folks
15:00:48 o/
15:00:50 o/
15:01:07 o/
15:01:23 first meeting i'm running on my end so feel free to call me out if i miss anything
15:01:24 o/
15:01:28 Hi
15:01:40 hola
15:01:48 this is my first meeting, to discuss a small change blueprint I have submitted
15:02:09 kudva: cool cool. welcome
15:02:11 kudva: welcome
15:02:24 o/
15:02:33 o/
15:03:02 o/
15:03:09 cool. let's assume quorum... i think a lot of people are away this week so this might be quick.
15:03:23 #topic Juno-3 planning
15:03:26 * nealph thinks quick is okay... :)
15:03:35 https://launchpad.net/ceilometer/+milestone/juno-3
15:03:47 here's a list of bps and bugs we're tracking for j-3...
15:03:48 o/
15:04:09 at last check they've all had code pushed to gerrit so we can all start reviewing
15:04:33 nsaje: any concerns on partitioning work?
15:05:00 gordc: no, awaiting further feedback from interested parties that haven't reviewed it yet
15:05:13 gordc, nsaje, /me has not concerns exactly, but just something to say
15:05:24 DinaBelova: go for it
15:05:43 idegtiarov and I are working on some testing of this HA patch to see if it works, how, etc.
15:05:52 possibly some other efforts here will be cool as well
15:05:58 to have many eyes here
15:06:17 DinaBelova: agreed, open invite on reviews for this patch since it's a pretty feature.
15:06:25 I still wonder if https://review.openstack.org/#/c/104784/ could be landed. Looks like there's no discussion anymore. Both pros and cons are made clear
15:06:26 pretty unique feature
15:06:26 gordc, yes, indeed
15:06:34 pretty feature too. lol
15:07:12 kurtrao: i haven't really tracked that since... i'll take a look.
15:07:26 for bps not yet accepted: please continue to work on them.
15:07:58 today is unofficially feature proposal freeze day but that's to be discussed with eglynn when he comes back
15:08:21 just keep working on it but no guarantees for juno
15:08:29 gordc: Yes. That's why I want to push it
15:08:50 kurtrao: there seem to be a lot of -1's... is there a new patch expected?
15:09:25 gordc, these -1s are conceptual ones and depend much on personal opinion as far as I remember
15:09:40 so we need kind of a boss to decide :)
15:10:06 gordc: I think for the new meter, no more patches. But there's room for improvement, in separate BPs
15:10:14 gordc: you can refer to the author's last comment
15:10:14 DinaBelova: i see... err. everyone is a boss so more eyes are better. i'll try to take a look later today
15:10:23 gordc, cool :)
15:10:39 carrying on. llu-laptop: any concerns with your snmp work? or just need reviews on your patches?
15:11:12 just needs review
15:11:20 one patch has landed, the second needs review
15:11:30 llu-laptop: cool cool. let's get that merged.
15:11:52 got stuck by a strange jenkins doc error
15:12:06 my work regarding normalising data is up... there's discussion on performance with srinivas from HP... we'll post it to the patch for people to track.
15:12:22 if others could verify there's no performance hit on their end that'd be great.
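An editor's sketch on the central agent partitioning work discussed above. The real implementation is the patch on gerrit; this is only a generic, hypothetical illustration of the idea (all names here are made up): each agent applies the same deterministic hash to the resource list and polls only the resources that fall into its own bucket, so several agents cover the full set exactly once without double-polling and without talking to each other on every cycle.

    # Hedged illustration only -- not the code under review.
    import hashlib

    def my_share(resource_ids, agent_index, agent_count):
        """Return the subset of resources this agent should poll."""
        mine = []
        for rid in resource_ids:
            bucket = int(hashlib.md5(rid.encode('utf-8')).hexdigest(), 16)
            if bucket % agent_count == agent_index:
                mine.append(rid)
        return mine

    # Two agents, same input, disjoint output covering everything once:
    resources = ['instance-%d' % i for i in range(6)]
    print(my_share(resources, 0, 2))
    print(my_share(resources, 1, 2))

In the actual patch the agent_index/agent_count equivalents would presumably come from group membership, with re-partitioning when agents join or leave; that failover behaviour is exactly what DinaBelova and idegtiarov say they are testing.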
15:12:58 nealph: you still blocked with doc issues or you ok?
15:13:16 https://blueprints.launchpad.net/ceilometer/+spec/paas-event-format-for-ceilometer
15:13:22 gordc: good to go. Added ildikov to the review for some initial feedback.
15:13:39 nealph: cool cool. good to hear.
15:13:40 DinaBelova: you probably have some thoughts too... jump in.
15:14:06 nealph, /me jumping :)
15:14:21 when it's final-ish I'll add add'l folks from the PaaS services to comment too.
15:14:30 evangelize, sort of.
15:14:53 gordc: I could build nealph's patch locally, so it should be good in general. I added one tiny comment about a Sphinx issue in it
15:15:06 nealph, good practice :)
15:15:09 ildikov: awesome.
15:15:27 so i guess that's it for current bps. regarding juno-3, please help review the bps out there...
15:15:37 gordc, grenade stuff
15:15:42 I'm pretty much blocked on the grenade/javelin2 stuff.
15:15:43 cdent: right
15:15:49 gordc: I hope dhellmann will have some idea about the general doc issue that was also raised by nsaje on the dev ML not so long ago
15:16:06 ildikov: i'll bring that up after cdent's part
15:16:27 jogo has presented some concerns on whether ceilometer should even be in javelin, on the review: https://review.openstack.org/#/c/102354/
15:16:30 cdent: did you get any feedback from joe or sean?
15:16:35 ildikov, well, it appeared this night I guess
15:16:50 cdent: further discussion is needed
15:16:59 and I've got a pending message to the list trying to figure out how to get some info on making the test actually robust: http://lists.openstack.org/pipermail/openstack-dev/2014-August/043372.html
15:17:03 cdent: i mean it'd be good to have an upgrade path for ceilometer tested.
15:17:15 So until the further discussion happens (however it happens), I'm not sure what's up.
15:17:30 i think we have one resource here that might be able to take a look at it tomorrow and give feedback.
15:17:31 cdent: I haven't had a chance to follow up on that
15:17:46 jogo: thanks for the input.
15:18:20 I think the main crux of this is how robust we want the testing to be. It can be much simpler than I've done it but then there will be ambiguities.
15:18:44 cdent: robust and simple
15:19:51 cdent: anyway I will try to follow up offline
15:19:55 thanks jogo
15:20:03 jogo: cdent: awesome.
15:20:16 DinaBelova: I couldn't figure out what has changed related to Sphinx; at least the things I could check didn't change in the recent past
15:20:18 cdent: if you're still blocked tomorrow or monday we can try elevating to a bigger group.
15:20:26 roger gordc
15:20:30 anything else for juno-3?
15:20:51 if not, please carry on... give some reviews for current bps
15:21:24 bps not approved will be tbd for juno but please continue working on them with some hope it'll be in juno
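For context on the "robust vs. simple" javelin2 question above: the simple end of the spectrum is just asserting that samples recorded before the upgrade are still queryable afterwards; the ambiguities cdent mentions creep in once you try to assert exact counts, since polling keeps running through the upgrade window. A minimal sketch of the simple check, assuming python-ceilometerclient (the credentials, endpoint, and meter name here are placeholders, not values from the review):

    # Hedged sketch of the simple variant of the check, not cdent's patch.
    from ceilometerclient import client

    def check_samples_survived(resource_id, meter_name='cpu_util'):
        ceilo = client.get_client(
            '2',
            os_username='admin',
            os_password='secret',                        # placeholder
            os_tenant_name='admin',
            os_auth_url='http://192.0.2.10:5000/v2.0',   # placeholder
        )
        query = [{'field': 'resource_id', 'op': 'eq', 'value': resource_id}]
        samples = ceilo.samples.list(meter_name=meter_name, q=query)
        assert samples, 'no %s samples survived for %s' % (meter_name,
                                                           resource_id)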
15:21:53 #topic broken docs job
15:22:09 ildikov: you have an update?
15:22:16 or did you and DinaBelova hash it out already?
15:22:23 :)
15:22:55 gordc: not that much actually, as I couldn't find a root cause for this issue
15:22:57 well, gordc, last time /me was looking at it, it was smth really strange, as ceilometer.agent seems to have no link with wsme
15:23:07 and its sphinxext module
15:23:22 DinaBelova: yeah, i'm not sure there's a connection there either
15:23:23 does anyone know how Sphinx decides which extension to use for a class?
15:23:45 the exception happens in https://github.com/stackforge/wsme/blob/master/wsmeext/sphinxext.py#L348
15:23:50 I'll take a look as well... dhellmann ^^ any guesses to the sphinx issue?
15:23:58 and ServiceDocumenter does not have a can_document_class() method
15:24:08 nsaje, ++
15:24:08 gordc: DinaBelova: the expert on Sphinx in OpenStack is dhellmann, or at least I've never had a Sphinx issue for which he didn't know the answer/fix...
15:24:18 on a long shot, it is possible it's getting confused because there's a service.py file in ceilometer.alarm
15:24:25 ildikov :)
15:24:37 nsaje: there are several extensions that are used during the different stages of the docco build
15:24:52 dhellmann, ryanpetrello would be great resources to ask. we'll try reaching out today. or if someone figures it out let us know in openstack-ceilometer
15:25:06 circle back to this later?
15:25:14 nsaje: where the problem occurs now is after the stage where the docstrings are read from the actual code
15:25:40 gordc: yeap, I don't think we could add more to this now
15:25:44 ok, let's take this off-meeting
15:25:49 cool cool
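On nsaje's question about how Sphinx decides which "extension" handles a class: autodoc keeps a registry of documenter classes; for every member it encounters, it asks each registered documenter via its can_document_member() classmethod and uses the highest-priority one that answers yes. A minimal sketch of that mechanism (ServiceLikeDocumenter and its _is_service marker are invented for illustration; this is not wsme's actual ServiceDocumenter):

    # Hedged sketch of the autodoc selection hook.
    from sphinx.ext.autodoc import ClassDocumenter

    class ServiceLikeDocumenter(ClassDocumenter):
        objtype = 'mysvc'
        # Outbid the stock ClassDocumenter for the classes we claim.
        priority = ClassDocumenter.priority + 1

        @classmethod
        def can_document_member(cls, member, membername, isattr, parent):
            # Claim only classes carrying our marker; everything else
            # falls through to the default documenters.
            return isinstance(member, type) and hasattr(member, '_is_service')

    def setup(app):
        # This is how an extension like wsmeext.sphinxext plugs itself in.
        app.add_autodocumenter(ServiceLikeDocumenter)

So a custom documenter can be selected for an object it was never meant to handle if its matching logic over-matches, which would surface as exactly the kind of exception seen in wsmeext/sphinxext.py above; that would be consistent with nsaje's service.py suspicion, though the log never confirms the root cause.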
15:25:57 #topic Tempest status
15:26:03 any updates here?
15:26:17 gordc, /me trying to understand what that strange error was
15:26:21 in my reverting change
15:26:37 which my patch fixed? or the grenade error you saw after?
15:26:43 grenade one
15:26:53 gordc, thanks for your patch, btw
15:27:01 it was a little surprising for me :)
15:27:04 that error
15:27:12 DinaBelova: hmm. i hadn't had a chance to look at it but i'll try later today... it seemed like some keystone stuff
15:27:27 gordc, yes, it was - but it's kind of a floating bug
15:27:42 I found other bugs (not the same) looking like that
15:27:46 but not this one
15:27:47 * gordc hoping it's a simple recheck
15:28:00 DinaBelova: i'll try debugging that later today i think.
15:28:11 gordc, /me rechecked, waiting in progress
15:28:19 gordc, if it'll be needed
15:28:25 any other blockers? experimental check working good?
15:28:34 gordc, yes, all the time
15:28:55 DinaBelova: cool cool. anyone else with tempest issues?
15:29:25 #topic TSDaaS/gnocchi status
15:29:39 jd__: any update here? or are you on vacation?
15:29:47 * DinaBelova wanted to ask if somebody knows about influxdb status
15:29:52 I'm not on vacation
15:30:07 I'm waiting for a review from eglynn
15:30:11 on archiving policy
15:30:18 and I should continue to work on that branch then
15:30:27 if jd__ would ask, I'm at an internal workshop, so I couldn't get there this week... :S :(
15:30:38 I was about to ask you ildikov :)
15:30:41 Eoghan proposed his change (and mine with OpenTSDB) a while ago, but I don't know what the influxdb status is, at least
15:31:21 jd__: i'm not sure eglynn will get around to it until next week... do you have a link?
15:31:21 DinaBelova: do you mean in Gnocchi or in general?
15:31:36 well, in general, as it leads us to Gnocchi
15:31:41 #link https://review.openstack.org/#/q/status:open+project:stackforge/gnocchi+branch:master+topic:jd/archiving-policy,n,z
15:31:42 we needed some of their features
15:32:05 ildikov, and while they were not ready eglynn's change was not complete
15:32:06 DinaBelova: i'll follow up with eglynn.. i'm not sure what the status is for influx but it's still a driver we're intending to support, at last check
15:32:16 so yeah things have slowed down a little but I hope to continue on that soon
15:32:21 DinaBelova: yeap, I remember that long list from the mid-cycle, I just wasn't 100% sure that you meant that also
15:32:44 jd__: awesome. sorry I haven't looked at any of the patches.
15:32:48 ildikov, yes, indeed
15:32:52 np
15:33:37 #action eglynn - update on influxdb support in gnocchi
15:33:44 i've no idea how to use action ^
15:33:51 but good enough
15:34:07 any other items regarding gnocchi?
15:34:20 gordc: I think this will be enough for eglynn, when he checks the logs :)
15:34:46 ildikov: perfect.
15:34:49 moving on.
15:34:55 #topic Future of the integrated release ML thread - update?
15:35:04 i'm not sure who added this?
15:35:23 probably Eoghan
15:35:29 but i've been following the thread and just haven't replied to avoid fanning the flames.
15:35:30 gordc: it can happen that it's a leftover from last week
15:35:58 gordc: I think the general comment would be for folks to track the ML if they're interested.
15:36:02 but side note: if anyone does have concerns... be it ceilometer, stacktach, monasca, another monitoring tool...
15:36:03 anyway, seems to be an endless thread there
15:36:34 feel free to bring it up on openstack-ceilometer or offline... whatever you're comfortable with...
15:37:27 and onwards we go.
15:37:31 #topic Ceilometer Client bug update
15:37:46 https://bugs.launchpad.net/python-ceilometerclient/+bug/1351841
15:37:47 Launchpad bug 1351841 in python-ceilometerclient "python-ceilometerclient does not works without v3 keystone endpoint" [High,Triaged]
15:37:53 fabiog: any update on this?
15:38:06 gordc, I checked the bug using 0.10.2 (Keystone) and 1.0.11 (Ceilometer)
15:38:15 in devstack
15:38:21 and I cannot reproduce the bug
15:38:35 fabiog, hehe, funny thing...
15:38:37 as usual
15:38:47 fabiog: yeah... i think i tried as well and it didn't come up.
15:38:58 fabiog: maybe we need to bump our requirements?
15:39:07 fabiog: I guess that's the ultimate latest Devstack then
15:39:23 I'd like to appeal to the core team to show some love to python-ceilometerclient :-)
15:39:29 gordc, a fresh devstack picks those versions
15:39:29 plenty of old patches with one +2
15:39:40 fabiog: about a week ago or smth like that, I faced this issue after a fresh devstack install using the admin port in the auth url
15:40:08 nsaje: we don't need your politicking here
15:40:11 :)
15:40:15 hah!
15:40:17 hehe :)
15:40:22 ildikov: I suspect it was the combination of ceilo client and keystone client that weren't compatible
15:40:33 nevertheless, if you specify a wrong IP address for auth_url, then Keystone should return that it cannot find it
15:40:39 but yeah, remember: https://review.openstack.org/#/q/status:open+project:openstack/python-ceilometerclient,n,z
15:40:41 and instead it blows up
15:40:49 so I filed a bug against Keystone client
15:40:51 nsaje: got it, but the fact is that a day still consists of 24 hours... :(
15:41:02 fabiog: so i guess the recommendation is to use the latest keystoneclient?
15:41:06 #link https://bugs.launchpad.net/python-keystoneclient/+bug/1359412
15:41:07 Launchpad bug 1359412 in python-keystoneclient "keystone discovery command throws exception" [Undecided,New]
15:41:16 gordc: yes
15:41:28 fabiog: awesome. i noticed you commented on the bug as well...
15:41:33 thanks for looking at it.
15:41:41 gordc: and make sure that the auth_url is correct
15:41:47 fabiog: so nova and glance worked fine with the url:port combo, only ceilometer client was not working with those params
15:41:48 gordc: no problem
15:42:37 gordc: fabiog: I guess it's the third version of the same issue: https://bugs.launchpad.net/ceilometer/+bug/1350533
15:42:39 Launchpad bug 1350533 in ceilometer "CommandError: Unable to determine the Keystone version to authenticate with using the given auth_url: http://127.0.0.1:35357/v2.0" [High,Confirmed]
15:42:46 ildikov: I tested glance and nova and they work
15:42:49 or well, the first in time
15:43:54 ildikov: for the related bug on alarms try with a fresh devstack and see if you can replicate the issue
15:43:57 fabiog: in my devstack install that I mentioned they worked for me too, I only had issues with Ceilometer, but I will refresh my devstack env and will check
15:44:26 ildikov: cool cool. let us know if it works... or if you need help on that bug.
15:44:45 ok, I will close the bug which has my name on it, as we have two other versions of it and I do not have time now to deal with it further, and it seems that it would be duplicated work anyway
15:44:48 ildikov: make sure that you are using the correct IP address for the auth_url for Keystone, it should not be localhost
15:45:18 ildikov: sounds good. just mark it as a duplicate of whatever bug fabiog has i guess
15:45:27 fabiog: I used the right one
15:45:45 fabiog: I at least triple checked it :)
15:45:57 ildikov: one more time!
15:46:07 gordc: please assign it to me
15:46:11 fabiog: but I will install a fresh devstack env and will get back to you with version info, if I still see this error
15:46:39 gordc: if there are duplicates I will be able to quickly see if the problem still persists
15:46:59 fabiog: gordc: so we have three bug reports now
15:47:19 fabiog: gordc: let's pick one that we will update and mark the rest as duplicates
15:47:31 ildikov: can you paste them here or mark them as duplicates of the one fabiog is assigned to
15:47:44 fabiog: gordc: I will administer it, no probs, just tell me which one should remain active
15:48:00 fabiog, you can track duplicated bugs on the right side in launchpad
15:48:03 gordc: ok, I will mark them
15:48:14 thanks ildiko!
15:48:23 ok, i think we're good.
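Background on why a wrong auth_url "blows up" instead of failing cleanly, as discussed above: before it can authenticate, the client has to probe the auth_url to decide between Keystone v2 and v3, and that discovery request is where the exception escapes. A rough sketch of what the discovery step amounts to (plain requests is used here for illustration; the real clients go through keystoneclient's discovery code):

    # Hedged sketch of Keystone version discovery, not the client's code.
    import requests

    def discover_versions(auth_url):
        resp = requests.get(auth_url, timeout=5)   # raises on a bad host/IP
        body = resp.json()
        if 'version' in body:
            # A versioned endpoint (e.g. .../v2.0) describes just itself.
            return [body['version']['id']]
        # The root endpoint lists all available versions.
        return [v['id'] for v in body['versions']['values']]

    # discover_versions('http://192.0.2.10:5000')  -> e.g. ['v3.0', 'v2.0']
    # With a wrong IP the GET raises ConnectionError instead -- one way the
    # "Unable to determine the Keystone version" CommandError in the bugs
    # above can arise.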
15:48:32 #topic Open discussion
15:48:38 free for all
15:48:42 gordc, I have some interesting thing :)
15:48:43 Hi
15:48:58 kudva, ok, please feel free to go first
15:49:01 :)
15:49:02 I wanted to discuss a predictive failure alert metric addition
15:49:24 We have implemented a 'host predictive failure' metric which runs as a plugin on the host
15:49:24 kudva: new feature?
15:49:34 gordc: yes I think so
15:49:39 ok.
15:49:47 kudva: tell us more
15:50:04 when we detect a failure we add a notification for possible failure so that all vms running on it can be migrated away.
15:50:21 we were able to do this with very little code
15:50:24 https://blueprints.launchpad.net/ceilometer/+spec/predictive-failure-metric
15:50:27 it works for us.
15:50:32 kudva: elevator pitch... side note: could you write a detailed item here: https://github.com/openstack/ceilometer-specs
15:50:48 We wanted to know how to contribute this to the community code
15:50:52 gordc will do
15:50:54 kudva: we track our specs there for everyone to review
15:51:19 kudva: start with the spec write-up... it won't get into juno as i'm assuming it's not a short patch.
15:51:19 gordc: will do. Didn't know the exact procedure for proposing and acceptance for ceilometer. We have contributed code directly to other projects
15:51:29 kudva, yeah, a spec will be good
15:51:45 kudva: yeah. new process most projects implemented this cycle
15:51:45 kudva: what emits this predictive failure alert?
15:51:52 gordc: About 10 lines of code total as you can see in this blueprint
15:51:55 https://blueprints.launchpad.net/ceilometer/+spec/predictive-failure-metric
15:51:56 kudva, that's the new practice for BP acceptance
15:52:20 DinaBelova: Ah, ok. I thought it was a blueprint, so I put the link above for review
15:52:39 kudva: https://wiki.openstack.org/wiki/Blueprints
15:52:43 gordc: So, I put an entry into ceilometer-specs and then need to have it approved?
15:52:44 kudva: err.. yeah so take your spec and match it to the template...
15:52:52 kudva, BPs are technically non-deletable in LP, so we want to have smth deletable in gerrit
15:53:04 so that if smth turns out to be unacceptable we don't keep creating new ones and new ones
15:53:09 kudva: if it's really small, it may get into juno? i don't want to give false hope though
15:53:29 kudva: yeah, we'd approve specs... but you can post code at the same time
15:53:43 gordc, ++
15:53:55 for this small change both spec + code will be good
15:53:55 gordc: okay will do both together asap
15:54:04 kudva: cool cool
15:54:15 gordc: thanks!
15:54:16 kudva, if you'll do that today, you'll have more chances for juno
15:54:21 and me :)
15:54:32 DinaBelova: yes, will do it today :)
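For readers wondering how a metric addition can be "about 10 lines": new measurements enter ceilometer as Samples yielded by a pollster (or notification-handler) plugin. The sketch below is not kudva's code -- the meter name, the predict_failure() probe, and the discovery choice are all hypothetical, and it assumes the Juno-era plugin.PollsterBase / sample.Sample interfaces -- but it shows the general shape such a host plugin takes:

    # Hedged sketch of a host-level predictive-failure pollster.
    import datetime
    import socket

    from ceilometer import plugin
    from ceilometer import sample

    def predict_failure():
        # Placeholder for the real health probe (SMART data, ECC error
        # counters, fan speeds, ...): 1 if failure is predicted, else 0.
        return 0

    class HostPredictiveFailurePollster(plugin.PollsterBase):

        @property
        def default_discovery(self):
            return None   # polls the local host; nothing to discover

        def get_samples(self, manager, cache, resources):
            yield sample.Sample(
                name='host.predictive_failure',
                type=sample.TYPE_GAUGE,
                unit='state',
                volume=predict_failure(),
                user_id=None,
                project_id=None,
                resource_id=socket.gethostname(),
                timestamp=datetime.datetime.utcnow().isoformat(),
                resource_metadata={},
            )

A consumer watching this meter (an alarm, say) could then trigger live migration of the VMs off the host, which is the workflow kudva describes.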
15:54:38 gordc, today I got performance benchmarking results of ceilo on a real lab
15:54:47 gordc: FYI, bug administration done, I will update the active one when I have the fresh env
15:54:48 DinaBelova: sweet!
15:54:56 ildikov: thanks so much
15:54:57 I have some nice stuff for the 2000 VMs and 1min polling :)
15:55:18 DinaBelova: are you using ilya's testing script? i couldn't get it working.
15:55:24 so I'm going to prepare a doc/blogpost to present the results
15:55:36 DinaBelova++
15:55:38 DinaBelova: was just going to ask about posting results
15:55:39 no-no, we were testing load on the disks, io, etc.
15:55:40 eager to see that
15:55:50 so we used rally to perform the load
15:55:55 DinaBelova: awesome... ignore the ilya's script part.
15:56:00 gordc :)
15:56:24 this benchmarking stuff was aimed at disk, CPU, etc. load
15:56:45 the only thing I can say now is that 1sec polling kills nova :)
15:56:51 for sure :)
15:56:54 DinaBelova: can you post on openstack-ceilometer when it's up? i don't know how else we can share it.
15:57:04 gordc, yes, for sure!
15:57:19 I hope to have the docco tomorrow and the blogpost tomorrow or on Monday
15:57:24 DinaBelova: yeah. i think that was known... if i recall there's a bp to address that?
15:57:33 DinaBelova: awesome
15:57:48 gordc, :)
15:58:12 https://review.openstack.org/#/c/101814/ not for juno but something
15:58:26 gordc, a-ha
15:58:34 DinaBelova: if you have an alternative solution that'd be cool too.
15:58:46 but anything else?
15:58:50 okay, my LP network connection seems to be dying :-\
15:58:53 anyone?
15:58:55 will look asap
15:58:59 DinaBelova: cool cool
15:59:01 gordc, not from me
15:59:05 nor me
15:59:18 apologies for lying and saying this would be a short meeting.
15:59:23 gordc :D
15:59:27 tsk, tsk
15:59:52 everyone happy or happy enough?
15:59:59 3
16:00:00 2
16:00:00 1
16:00:02 I guess that's it :)
16:00:03 thanks guys, see you tomorrow!
16:00:05 #endmeeting
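A closing back-of-the-envelope on the polling load discussed above (the 2000 VM count and the intervals are from the meeting; the meters-per-VM figure is an assumption, since the log doesn't give one):

    # Rough arithmetic, not measured data.
    vms = 2000
    meters_per_vm = 10               # assumption; depends on pipeline.yaml
    for interval in (60, 1):         # the 1-minute case vs the 1-second case
        rate = vms * meters_per_vm / float(interval)
        print('interval=%ds -> ~%d samples/s' % (interval, rate))
    # interval=60s -> ~333 samples/s
    # interval=1s  -> ~20000 samples/s, and each poll cycle also refreshes
    # instance info from the nova API -- hence "1sec polling kills nova".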