Wednesday, 2016-02-24

*** kapil has quit IRC00:19
*** cdent has joined #openstack-telemetry00:20
*** KrishR has quit IRC00:22
*** diogogmt has joined #openstack-telemetry00:42
*** cdent has quit IRC00:42
*** cheneydc has joined #openstack-telemetry01:07
*** kapil has joined #openstack-telemetry01:11
<kapil> i can see that ceilometer-agent-ipmi is running by doing a 'ps aux | grep agent-ipmi', but when I do 'service ceilometer-agent-ipmi restart', it says service not available  01:12
<kapil> i have ubuntu 14.04 on the host nodes and i am not sure if the ipmi agent is installed correctly  01:12
<kapil> apt-get doesn't show ceilometer-ipmi-agent as I think the sources are outdated for trusty  01:14
<kapil> and there are dependency issues if I try to manually install using 'dpkg -i'  01:14
<kapil> any suggestions to fix it?  01:14
*** gtt116_ has quit IRC01:28
<openstackgerrit> OpenStack Proposal Bot proposed openstack/ceilometer: Updated from global requirements  https://review.openstack.org/282550  01:33
*** jwcroppe has joined #openstack-telemetry01:54
<openstackgerrit> Igor Degtiarov proposed openstack/ceilometer: bracketer event transformer  https://review.openstack.org/266488  01:55
<openstackgerrit> Merged openstack/aodh: Install configuration files by default  https://review.openstack.org/279420  02:03
*** vishwanathj has quit IRC02:06
*** pcaruana has quit IRC02:07
<openstackgerrit> Merged openstack/aodh: Clean etc directory  https://review.openstack.org/279421  02:11
<openstackgerrit> liusheng proposed openstack/aodh: Change the SERVICE_TENANT_NAME to SERVICE_PROJECT_NAME  https://review.openstack.org/283435  02:11
*** pcaruana has joined #openstack-telemetry02:19
*** liamji has joined #openstack-telemetry02:22
*** ljxiash has joined #openstack-telemetry02:24
*** vishwanathj has joined #openstack-telemetry02:26
*** vishwana_ has joined #openstack-telemetry02:28
*** vishwanathj has quit IRC02:31
<openstackgerrit> liusheng proposed openstack/python-aodhclient: Make the alarm list output more concise  https://review.openstack.org/283910  02:34
*** pcaruana has quit IRC03:01
*** pcaruana has joined #openstack-telemetry03:16
*** marcin12345 has quit IRC03:23
*** liusheng has quit IRC03:34
<openstackgerrit> Merged openstack/ceilometer: Initial seed of hacking  https://review.openstack.org/282124  03:34
*** liusheng has joined #openstack-telemetry03:34
*** links has joined #openstack-telemetry03:46
*** boris-42 has quit IRC03:54
<openstackgerrit> OpenStack Proposal Bot proposed openstack/ceilometer: Updated from global requirements  https://review.openstack.org/282550  03:55
*** pcaruana has quit IRC04:01
*** liusheng has quit IRC04:03
*** pcaruana has joined #openstack-telemetry04:15
*** thorst has joined #openstack-telemetry04:16
*** thorst has quit IRC04:20
*** sanjana_ has joined #openstack-telemetry04:20
*** thorst has joined #openstack-telemetry04:20
*** thorst has quit IRC04:20
*** thorst has joined #openstack-telemetry04:21
*** ljxiash has quit IRC04:24
*** liamji has quit IRC04:28
*** liamji has joined #openstack-telemetry04:29
*** thorst has quit IRC04:30
<openstackgerrit> Merged openstack/aodh: Fix tempest test path  https://review.openstack.org/283372  04:39
*** yprokule has joined #openstack-telemetry04:47
*** jwcroppe has quit IRC05:10
*** zqfan has joined #openstack-telemetry05:20
*** thorst has joined #openstack-telemetry05:27
*** thorst has quit IRC05:34
*** pcaruana has quit IRC05:38
*** ljxiash has joined #openstack-telemetry05:49
*** pcaruana has joined #openstack-telemetry05:53
*** liusheng has joined #openstack-telemetry05:56
*** openstack has joined #openstack-telemetry13:23
*** nicodemus_ has joined #openstack-telemetry13:31
*** ildikov has quit IRC13:33
*** liamji has joined #openstack-telemetry13:52
*** datravis has joined #openstack-telemetry13:54
*** gordc has joined #openstack-telemetry13:55
*** ityaptin has quit IRC13:56
*** ityaptin has joined #openstack-telemetry13:57
*** ildikov has joined #openstack-telemetry13:58
*** efoley__ has joined #openstack-telemetry14:04
*** efoley_ has quit IRC14:06
*** ljxiash has joined #openstack-telemetry14:10
<openstackgerrit> Davanum Srinivas (dims) proposed openstack/ceilometer: [WIP] Trying latest oslo.* from master  https://review.openstack.org/284148  14:14
*** julim has joined #openstack-telemetry14:15
<gordc> ityaptin: i added a few more comments. to be honest, given the complexity of the change, i'm worried about the patch now.  14:15
<gordc> i wonder what will happen if we ask nova to return all instances it knows about (in a large env). will the api explode sending back data for 100k+ instances?  14:17
<ityaptin> gordc: I tested it with a thousand instances (but in error state) and it works fine. I'll try it with more instance entries in the db asap (today or tomorrow).  14:24
<ityaptin> gordc: Yes, I understand your concerns.  14:25
<gordc> yeah, we need this tested more. thanks  14:26
<openstackgerrit> Vitaly Gridnev proposed openstack/ceilometer: [sahara] add events definitions regarding new notifications  https://review.openstack.org/281226  14:40
*** diogogmt has quit IRC14:42
<openstackgerrit> Igor Degtiarov proposed openstack/ceilometer: [MongoDB] exchange compound index with single field indexes  https://review.openstack.org/276262  14:42
*** diogogmt has joined #openstack-telemetry14:44
*** rickyrem has joined #openstack-telemetry14:46
*** rickyrem has quit IRC14:46
*** rickyrem has joined #openstack-telemetry14:46
*** rickyrem has quit IRC14:48
*** KrishR has joined #openstack-telemetry14:48
*** ddieterly has joined #openstack-telemetry14:48
*** ddieterl_ has joined #openstack-telemetry14:48
*** safchain has quit IRC14:57
*** rbak has joined #openstack-telemetry14:58
*** diogogmt has quit IRC15:01
*** efoley__ has quit IRC15:02
*** efoley__ has joined #openstack-telemetry15:03
*** rickyrem has joined #openstack-telemetry15:08
<openstackgerrit> Zi Lian Ji proposed openstack/ceilometer: Enable the Load Balancer v2 for the Ceilometer(Part Two)  https://review.openstack.org/277434  15:16
<openstackgerrit> Vitaly Gridnev proposed openstack/ceilometer: [sahara] add events definitions regarding new notifications  https://review.openstack.org/281226  15:17
<openstackgerrit> Zi Lian Ji proposed openstack/ceilometer: Enable the Load Balancer v2 for the Ceilometer(Part Two)  https://review.openstack.org/277434  15:21
*** annasort has quit IRC15:31
<ityaptin> gordc: About the 100k+ instances from nova-api  15:37
<ityaptin> gordc: I researched nova-api and found out that the upper bound on the count for server.list is 1000 instances. So, I am going to make paginated requests to the nova api from the discovery.  15:39
*** efoley__ has quit IRC15:40
<ityaptin> That's N/1000 sequential requests to the nova-api. It doesn't hit the nova api as hard as 1000 requests at the same time, and it allows collecting all samples without a huge response from the api.  15:40
*** kapil has quit IRC15:47
<gordc> ityaptin: hmmm. so we still need to make multiple requests  15:50
*** efoley__ has joined #openstack-telemetry15:51
<gordc> ityaptin: it seems like it's easier to just drop all the caching stuff we're doing and add a new option that only gathers instances at a set interval  15:51
<gordc> this is essentially what we are doing now.  15:51
<ityaptin> gordc: 1. we make far fewer requests than one request per compute. For a 100-instance environment it's 1-5 requests, depending on instance flavors.  15:54
<ityaptin> gordc: 2. For fast polling (30s) the current implementation will break nova-api, but we can set the cache to 5 minutes and reduce the request rate by 10 times.  15:56
<gordc> ityaptin: right, that's what i mean, you can do #2 but not #1  15:56
<gordc> #1 was the big reason for all the caching, because we thought 'we can do 1 req for everyone'. this isn't true now that we've dug into it  15:57
<gordc> so we do minimise api calls... but it's still multiple calls required.  15:57
<ityaptin> for #1 we make fewer requests, sequentially, instead of N requests at the same time  15:58
<gordc> if we do #2 and just say 'you can set this option, and it will be the interval we use to query nova-api and find instances', the pipeline intervals will be what allows you to get measurement data at a very fast interval  15:59
<gordc> well, there's the jitter support so it's not all at the same time  15:59
<gordc> any other cores/contributors want to comment on this?  16:02
*** safchain has joined #openstack-telemetry16:02
<_nadya_> gordc: perhaps we should ask the nova guys?  16:02
<gordc> ityaptin: i don't mind implementing the cache, but from my pov, it seems like it's not a huge win. it's a small win with a lot of extra baggage.  16:02
<gordc> _nadya_: about how to query the api and get all data?  16:03
<_nadya_> gordc: I think it's important to support frequent polling  16:03
<gordc> _nadya_: right. which can be done if we just do #2  16:04
<gordc> sort of.  16:04
<gordc> right now, our polling is both instance discovery and measurement gathering  16:04
*** efoley__ has quit IRC16:04
<gordc> the proposed patch is splitting instance discovery into another interval.. i'm ok with this.  16:05
<_nadya_> gordc: ok, so what are your main objections?  16:05
<gordc> but considering we will always need to hit nova-api multiple times to actually gather all the instances, do we need to do instance discovery and store it in a global cache?  16:06
*** efoley__ has joined #openstack-telemetry16:06
<gordc> the global cache part seems less valuable now, because we need to hit nova-api multiple times regardless  16:06
<gordc> i'm not sure how much better it is than just letting each host hit nova-api (without global cache)  16:07
<ityaptin> gordc: About nova api hitting  16:08
<gordc> ie. is 10 api requests (with the required global cache) a lot better than 75 smaller requests (with no global cache)?  16:08
<_nadya_> gordc: let's consider a real-life example. We may have 3000 computes with up to 300 instances each. So we should ask for 900000 instances. We do 3000 requests now; with the patch we will do 900000/X where X is configurable. By default we will do 900  16:08
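To make the arithmetic in that example concrete: per-compute discovery costs one request per compute node, while bulk discovery costs one request per page. The page size X = 1000 matches the default API limit rpodolyaka mentions, but treat the exact numbers as the back-of-the-envelope estimate they are.

```python
# Request counts for _nadya_'s scenario: 3000 computes, up to 300
# instances each, against a paginated bulk listing with X = 1000.
import math

computes = 3000
instances_per_compute = 300
total = computes * instances_per_compute  # 900000 instances overall

per_compute_requests = computes                 # one request per compute: 3000
bulk_requests = math.ceil(total / 1000)         # 900000 / 1000 = 900 pages

print(total, per_compute_requests, bulk_requests)  # 900000 3000 900
```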
<openstackgerrit> Merged openstack/python-ceilometerclient: make aggregation-method argument as a mandatory field  https://review.openstack.org/268947  16:09
<_nadya_> gordc: nova people said that one big request is better than a lot of small ones for them  16:10
*** rpodolyaka has joined #openstack-telemetry16:10
<_nadya_> rpodolyaka: hi! is it correct that "one big request is better than a lot of small ones for nova"?  16:11
<gordc> _nadya_: right. so i guess my question is: are ~4x fewer requests worth the global cache complexity and whatever delay it takes to make those requests sequentially?  16:11
<rpodolyaka> _nadya_: I think that's true in *most* cases for any service  16:11
<rpodolyaka> to reduce the number of round trips  16:12
<rpodolyaka> though, I suggest you do a benchmark  16:12
<rpodolyaka> e.g. on a fake compute driver  16:12
<rpodolyaka> with a lot of nova-compute's running (containers?) and a lot of fake VMs spawned  16:12
<gordc> rpodolyaka: yeah. it's also something to consider that we are making sequential requests vs scattered parallel requests  16:13
<rpodolyaka> agreed  16:13
<gordc> _nadya_: if we do 900 requests in a row, it will put less load on the api but will probably take a lot longer to accomplish than 3000 in some scattered parallel call  16:13
<_nadya_> rpodolyaka: we are trying to understand whether it is needed to have one global cache for all instances in Ceilo instead of asking from each compute  16:14
*** diogogmt has joined #openstack-telemetry16:14
*** annasort has joined #openstack-telemetry16:14
<gordc> _nadya_: so we can always add in the global cache. but right now, i think we really need to be sure it's worth it  16:14
*** annasort_ has joined #openstack-telemetry16:15
<rpodolyaka> hmm, I wonder if scattered parallel requests would be possible with bulk requests and pagination  16:15
<gordc> _nadya_: the original argument for the global cache was 1 request vs 3000 requests, which is why everyone said 'yes, let's do it!'  16:15
<rpodolyaka> *marker based pagination  16:15
*** yprokule has quit IRC16:16
<gordc> rpodolyaka: i think you'd need the full response first, no? to actually get the marker. i think if it was offset-based, we could do parallel  16:16
<rpodolyaka> yeah, looks like so  16:16
<gordc> something to look at though. good idea  16:16
<rpodolyaka> so these requests will have to be sequential then  16:17
<_nadya_> gordc: ok... but I think the idea to divide polling and cache updates can be implemented, what do you think?  16:17
<rpodolyaka> on the other hand, you should only need to do so many of them  16:17
<rpodolyaka> I believe we default to an upper limit of 1000 instances per API request  16:17
<gordc> rpodolyaka: yeah. it's definitely less. i just don't know what is faster/better  16:18
<gordc> rpodolyaka: do you know if 1000 is a safe limit? ie. could we make it a lot higher?  16:18
<gordc> or is 1000 something someone tried once and found to be when issues start happening  16:18
<rpodolyaka> honestly, I don't know :(  16:19
*** annasort has quit IRC16:19
*** annasort_ is now known as annasort16:19
<gordc> rpodolyaka: cool cool. np. i assume it was a nice round number like all the other defaults in openstack  16:19
<rpodolyaka> heh  16:19
<rpodolyaka> I think so  16:20
<openstackgerrit> Merged openstack/python-ceilometerclient: Updated from global requirements  https://review.openstack.org/278722  16:20
<gordc> _nadya_: yeah, let's split out the instance discovery and measurement polling parts first  16:20
<gordc> _nadya_: we can leave the global cache patch. but i think we need to figure out the questions above  16:21
<gordc> apologies folks, these questions didn't pop into my head when i read the spec.  16:21
<gordc> rpodolyaka: thanks for your input  16:22
<_nadya_> gordc: it's my fault too, don't worry :)  16:22
<rpodolyaka> gordc: np!  16:22
<_nadya_> ityaptin: can you modify your patch to use a local cache only in 1-2 days?  16:23
<_nadya_> brb  16:24
*** _nadya_ has quit IRC16:24
<ityaptin> Yep, it needs a little time.  16:24
<gordc> i'm ok with adding that as an FFE. i think if we reuse all the existing framework it should hopefully be minimal.  16:24
<ityaptin> gordc: the framework for the current local caching?  16:25
<gordc> ityaptin: any concerns? do you feel like we should still go down the global cache path right away? we can always ask other cores/contributors  16:25
<ityaptin> gordc: No, i'm asking about "if we reuse all the existing framework"  16:26
<ityaptin> Only for better understanding  16:26
<ityaptin> Also, I will make a new change with a new Change-Id.  16:27
<gordc> ityaptin: right. so i imagine it'd be similar to how you did it now? right now you only query nova-api based on the interval option you added?  16:27
<gordc> so during discovery, just check if you need to update the instance list  16:28
<ityaptin> Yep, but I can do it how it works now: check `if now() - self.last_run >= resource_update_interval` in https://github.com/openstack/ceilometer/blob/master/ceilometer/compute/discovery.py#L41.  16:30
<ityaptin> Or add oslo.cache support, etc.  16:31
<ityaptin> I prefer to start this way, with `if now() - self.last_run >= resource_update_interval`  16:31
*** belmoreira has quit IRC16:31
<ityaptin> If we decide to return to oslo.cache I can always rewrite the current CR with those changes  16:32
<gordc> ityaptin: agree. leave out the oslo.cache stuff. let's start as simple as possible and move from there.  16:34
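The `last_run` throttle the two of them settle on can be sketched as a small discovery wrapper. This is a minimal illustration, not the ceilometer code: the class name and `fetch_instances` callback are hypothetical, and `resource_update_interval` is used here as a plain constructor argument rather than a registered config option.

```python
# Sketch of the throttle ityaptin prefers: re-query nova-api only when
# resource_update_interval seconds have elapsed since the last run,
# otherwise serve the locally cached instance list.
import time


class ThrottledDiscovery(object):
    def __init__(self, fetch_instances, resource_update_interval=600):
        self._fetch = fetch_instances          # e.g. the paginated nova query
        self.interval = resource_update_interval
        self.last_run = None
        self.instances = []

    def discover(self, now=None):
        now = time.time() if now is None else now
        if self.last_run is None or now - self.last_run >= self.interval:
            self.instances = self._fetch()     # refresh the local cache
            self.last_run = now
        return self.instances                  # cached list in between
```

Between refreshes, meter polling keeps running against the cached list, which is what lets the pipeline keep a fast measurement interval while discovery stays cheap.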
*** idegtiarov has quit IRC16:37
*** ljxiash has quit IRC16:39
*** liamji has quit IRC16:41
*** rickyrem has quit IRC16:42
*** rickyrem has joined #openstack-telemetry16:45
*** Guest51435 is now known as mgagne16:56
*** mgagne has quit IRC16:56
*** mgagne has joined #openstack-telemetry16:56
*** thorst has quit IRC17:02
*** thorst has joined #openstack-telemetry17:03
*** thorst has quit IRC17:03
*** belmoreira has joined #openstack-telemetry17:06
*** safchain has quit IRC18:01
<openstackgerrit> Ilya Tyaptin proposed openstack/ceilometer: Add an update interval to compute discovery  https://review.openstack.org/284322  18:02
<ityaptin> gordc: ^^^  18:02
*** julim has quit IRC18:03
*** KrishR has quit IRC18:03
<gordc> ityaptin: looks promising. will review after lunch.  18:06
<gordc> ityaptin: thanks for the quick turnaround!  18:06
*** julim has joined #openstack-telemetry18:07
*** rickyrem has quit IRC18:11
*** rickyrem has joined #openstack-telemetry18:12
*** thorst has joined #openstack-telemetry18:13
*** _nadya_ has joined #openstack-telemetry18:13
*** belmoreira has quit IRC18:23
*** jwcroppe has quit IRC18:26
*** rickyrem has quit IRC18:27
*** rickyrem has joined #openstack-telemetry18:28
<pradk> gordc, hey we're seeing this error loading tempest: https://bugs.launchpad.net/aodh/+bug/1549424  18:32
<openstack> Launchpad bug 1549424 in Aodh "Aodh tempest plugin fails to load with DuplicateOpts Error" [Undecided,New]  18:32
<pradk> gordc, if i change the opt name to something like aodh_available it loads ok.. so it looks like there is an aodh opt somewhere else  18:32
<pradk> gordc, any idea what's using this opt? is it ok to change this name from aodh to something else.. i have a patch, just want to check before i push  18:33
<pradk> gordc, looks like it's already defined in the tempest repo, that's why  18:38
<pradk> r-mibu, you around?  18:40
<pradk> r-mibu, gordc, i assume we have plans to remove the aodh plugin code from the tempest repo as part of the migration? currently the opt is loaded in both places, hence the DuplicateOpt error  18:46
*** ljxiash has joined #openstack-telemetry18:51
*** efoley__ has quit IRC18:54
*** ljxiash has quit IRC18:55
*** _nadya_ has quit IRC18:56
*** datravis has quit IRC18:57
*** datravis has joined #openstack-telemetry18:57
*** KrishR has joined #openstack-telemetry18:59
*** cdent has quit IRC18:59
<gordc> pradk: err. let's remove it from aodh i guess?  19:03
<gordc> he's in Tokyo. no idea what time it is... i assume he's sleeping though  19:03
<gordc> pradk: i want to remove aodh from the tempest repo asap.  19:07
<gordc> Ryota said he may be able to get it merged for Mitaka  19:07
*** jwcroppe has joined #openstack-telemetry19:13
*** belmoreira has joined #openstack-telemetry19:20
*** jwcroppe has quit IRC19:25
*** ddaskal has joined #openstack-telemetry19:26
*** ddaskal has quit IRC19:28
*** ddieterly has quit IRC19:31
*** ddieterl_ has quit IRC19:31
*** ddieterly has joined #openstack-telemetry19:31
*** ddieterl_ has joined #openstack-telemetry19:31
<pradk> gordc, yea not sure. you can recreate it locally by doing this in a py shell: from tempest.api.baremetal.admin import base .. removing it from aodh does fix the issue but i'm not sure if that's the right fix though  19:34
<gordc> we could just remove it from tempest completely and just deal with it.  19:40
<gordc> not like we're testing much  19:40
<gordc> or anything  19:40
*** _nadya_ has joined #openstack-telemetry19:47
*** zqfan has quit IRC20:12
*** yassine__ has quit IRC20:12
*** gordc has quit IRC20:18
*** cdent has joined #openstack-telemetry20:28
*** gordc has joined #openstack-telemetry20:47
*** boris-42 has joined #openstack-telemetry20:53
*** belmoreira has quit IRC20:53
*** jwcroppe has joined #openstack-telemetry20:54
*** annasort has quit IRC21:08
*** thorst is now known as thorst_afk21:10
*** pcaruana has quit IRC21:10
*** KrishR has quit IRC21:23
*** KrishR has joined #openstack-telemetry21:33
*** ekarlso- has quit IRC21:34
*** ekarlso- has joined #openstack-telemetry21:34
*** _nadya_ has quit IRC21:51
*** kapil has joined #openstack-telemetry21:55
*** nicodemus_ has quit IRC21:57
*** jwcroppe has quit IRC21:57
<kapil> i talked about this issue yesterday here but i am still stuck on it. I am running the ceilometer-agent-ipmi on the compute nodes, I changed pipeline.yaml of the compute node to include the ipmi meters and resource as "ipmi://localhost". I have ipmitool installed on the localhosts and restarted the ceilometer services on compute and controller nodes. Yet, I am not receiving any ipmi meters in ceilometer meter-list.  21:58
<kapil> Any help pls?  21:58
*** ljxiash has joined #openstack-telemetry22:07
<gordc> kapil: you may want to ask on the mailing list. all the devs are in europe/asia.  22:09
<gordc> kapil: also, i'm not entirely sure it knows how to resolve localhost, so you should put a real ip  22:09
<kapil> ok, thanks. I tried putting the hypervisor local ip as well as the ip i get from "ipmitool lan print" but neither works  22:10
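For reference, a pipeline.yaml source/sink pair along the lines kapil describes might look like the sketch below. The layout follows the stock ceilometer pipeline format, but treat every detail as an assumption to verify against the installed pipeline.yaml: the source/sink names are invented, the `hardware.ipmi.*` pattern should be checked against the meters your agent actually emits, and 192.0.2.10 is a documentation-range placeholder for the real node IP gordc suggests using instead of localhost.

```yaml
# Illustrative sketch only -- verify names and meters against your
# installed ceilometer pipeline.yaml before using.
sources:
    - name: ipmi_source            # hypothetical source name
      interval: 600
      meters:
          - "hardware.ipmi.*"      # check against your agent's meter names
      resources:
          - ipmi://192.0.2.10      # real node IP, not localhost
      sinks:
          - meter_sink
sinks:
    - name: meter_sink
      transformers:
      publishers:
          - notifier://
```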
*** ljxiash has quit IRC22:10
*** kapil has quit IRC22:19
*** julim has quit IRC22:33
*** gordc has quit IRC22:38
*** vishwana_ is now known as vishwanathj22:39
*** safchain has joined #openstack-telemetry22:48
*** thorst_afk is now known as thorst23:01
*** safchain has quit IRC23:03
*** datravis has quit IRC23:09
*** rickyrem has quit IRC23:20
*** KrishR has quit IRC23:22
*** pradk has quit IRC23:24
*** ddieterl_ has quit IRC23:30
*** ddieterly has quit IRC23:30
*** rickyrem has joined #openstack-telemetry23:34
*** safchain has joined #openstack-telemetry23:37
*** rickyrem has quit IRC23:44
*** cdent has quit IRC23:47
*** thorst has quit IRC23:48
*** rbak has quit IRC23:59
*** safchain has quit IRC23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!