Wednesday, 2014-11-12

*** rbak has quit IRC00:01
*** shakamunyi has joined #openstack-ceilometer00:02
*** ryanpetrello has joined #openstack-ceilometer00:08
*** shakamunyi has quit IRC00:09
*** shakamunyi has joined #openstack-ceilometer00:10
*** shakamunyi has quit IRC00:15
*** ryanpetrello has quit IRC00:23
*** Viswanath has joined #openstack-ceilometer00:25
*** Viswanath has quit IRC00:29
*** shakamunyi has joined #openstack-ceilometer00:31
*** changbl has joined #openstack-ceilometer00:45
*** ryanpetrello has joined #openstack-ceilometer00:56
*** yatin has joined #openstack-ceilometer01:27
*** ddieterly has joined #openstack-ceilometer01:27
*** ryanpetrello has quit IRC01:39
*** yatin has quit IRC01:40
*** Viswanath has joined #openstack-ceilometer01:47
*** Viswanath has quit IRC01:51
*** Yanyanhu has joined #openstack-ceilometer01:53
*** ssakhamuri has joined #openstack-ceilometer01:56
*** ssakhamuri has quit IRC01:57
*** nosnos has joined #openstack-ceilometer02:06
*** openstackgerrit has quit IRC02:34
*** ddieterly has quit IRC03:00
*** Viswanath has joined #openstack-ceilometer03:22
*** Viswanath has quit IRC03:25
*** liusheng has quit IRC03:31
*** liusheng has joined #openstack-ceilometer03:31
*** shakamunyi has quit IRC03:32
*** deepthi has joined #openstack-ceilometer03:37
*** boris-42 has quit IRC03:37
*** ddieterly has joined #openstack-ceilometer03:51
*** ddieterly has quit IRC03:55
*** ddieterly has joined #openstack-ceilometer03:59
*** ddieterly has quit IRC04:04
*** liusheng has quit IRC04:19
*** liusheng has joined #openstack-ceilometer04:20
*** yatin has joined #openstack-ceilometer04:30
*** cmyster has quit IRC04:39
*** ddieterly has joined #openstack-ceilometer05:00
*** ddieterly has quit IRC05:05
*** ishant has joined #openstack-ceilometer05:40
*** cmyster has joined #openstack-ceilometer05:43
*** ildikov has quit IRC05:58
*** ddieterly has joined #openstack-ceilometer06:00
*** Yanyanhu has quit IRC06:04
*** ddieterly has quit IRC06:05
*** Yanyanhu has joined #openstack-ceilometer06:10
*** _elena_ has quit IRC06:10
*** _elena_ has joined #openstack-ceilometer06:10
*** boris-42 has joined #openstack-ceilometer06:21
*** k4n0 has joined #openstack-ceilometer06:37
*** _nadya_ has joined #openstack-ceilometer06:46
*** nellysmitt has joined #openstack-ceilometer06:56
*** ddieterly has joined #openstack-ceilometer07:00
*** ddieterly has quit IRC07:01
*** ddieterl_ has joined #openstack-ceilometer07:01
*** eglynn_ has joined #openstack-ceilometer07:01
*** ddieterl_ has quit IRC07:06
*** nellysmitt has quit IRC07:14
*** cmyster has quit IRC07:20
*** ildikov has joined #openstack-ceilometer07:31
*** nellysmitt has joined #openstack-ceilometer07:36
*** cmyster has joined #openstack-ceilometer07:39
*** Ala has joined #openstack-ceilometer07:46
*** _nadya_ has quit IRC07:50
*** Longgeek has joined #openstack-ceilometer08:04
*** Longgeek has quit IRC08:06
*** Longgeek has joined #openstack-ceilometer08:07
*** ishant2 has joined #openstack-ceilometer08:08
*** ishant has quit IRC08:10
*** ildikov has quit IRC08:13
*** nellysmitt has quit IRC08:13
*** promulo has joined #openstack-ceilometer08:24
*** ildikov has joined #openstack-ceilometer08:25
*** nellysmitt has joined #openstack-ceilometer08:32
*** ifarkas has joined #openstack-ceilometer08:41
*** safchain has joined #openstack-ceilometer08:47
*** ana_ has joined #openstack-ceilometer09:09
*** ana_ is now known as malagon09:09
*** nellysmitt has quit IRC09:17
*** nellysmitt has joined #openstack-ceilometer09:19
*** nellysmitt has quit IRC09:22
*** moravec has quit IRC09:24
*** malagon has quit IRC09:27
*** Yanyanhu has quit IRC09:41
*** amalagon has joined #openstack-ceilometer09:49
*** IvanBerezovskiy has quit IRC09:51
*** IvanBerezovskiy has joined #openstack-ceilometer09:51
*** cdent has joined #openstack-ceilometer09:55
*** _nadya_ has joined #openstack-ceilometer09:57
*** _nadya_ has quit IRC10:06
*** Longgeek has quit IRC10:09
*** Longgeek has joined #openstack-ceilometer10:10
*** Longgeek_ has joined #openstack-ceilometer10:12
*** Longgeek has quit IRC10:15
*** eoutin has joined #openstack-ceilometer10:29
*** cdent has quit IRC10:30
*** eoutin has quit IRC10:36
*** nosnos has quit IRC10:39
*** eoutin has joined #openstack-ceilometer10:50
*** ishant2 has quit IRC10:52
*** ishant has joined #openstack-ceilometer10:53
*** eoutin has quit IRC10:55
*** cdent has joined #openstack-ceilometer10:56
*** nosnos has joined #openstack-ceilometer11:02
*** nsaje has quit IRC11:19
*** deepthi has quit IRC11:30
*** ishant has quit IRC11:38
*** ishant has joined #openstack-ceilometer11:39
*** nellysmitt has joined #openstack-ceilometer11:53
*** chmouel has joined #openstack-ceilometer11:59
<chmouel> hello there, I am having an issue with the pecan and ceilometer gate job here: http://logs.openstack.org/10/131410/2/gate/gate-pecan-tox-ceilometer-tip/b119a86/console.html  12:00
<chmouel> it seems that pymemcache is broken there  12:00
<chmouel> it has been happening since Friday on that pecan job, did you guys have any issues as well?  12:00
<chmouel> uit  12:01
*** ildikov has quit IRC12:04
<cdent> chmouel: I haven't heard of anything recently, but I may be out of touch  12:08
<cdent> pymemcache is brought in from tooz  12:09
<cdent> and there was a new release 7 days ago  12:10
<cdent> ah, I see the problem  12:12
<cdent> chmouel: the wrong package is being downloaded: pymemcache-1.2.6.macosx-10.9-intel.tar.gz  12:12
*** yatin has quit IRC12:13
<cdent> it's unclear why pip is selecting that version  12:14
<cdent> if tooz is installed in a virtualenv directly, pymemcache installs fine  12:16
<cdent> when trying to install ceilometer tip from git, things fail (as the logs show)  12:16
*** asalkeld has left #openstack-ceilometer12:17
*** ildikov has joined #openstack-ceilometer12:19
<chmouel> ah thanks cdent, this is weird indeed  12:20
<cdent> I posted the above on the patchset too, for reference  12:21
<cdent> I think it is just that the mac binary release is not playing well with... something?  12:21
<chmouel> it's very weird that pip is trying to install the macosx binary in the first place  12:22
<cdent> yes  12:22
<cdent> what I don't understand is why that weirdness only kicks in when installing ceilo from git  12:23
<chmouel> let's see if we can ping the pip maintainers, I think they hang out on infra  12:24
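
(A minimal reproduction sketch for the pymemcache mystery above, not taken from the log: it asks pip which pymemcache artifact it would fetch, so a platform-tagged tarball like pymemcache-1.2.6.macosx-10.9-intel.tar.gz stands out immediately. It assumes a reasonably recent pip that provides the "download" subcommand.)

    # Reproduction sketch (assumes a pip new enough to have "pip download"):
    # list which pymemcache artifact pip selects for this environment.
    import os
    import subprocess
    import sys
    import tempfile

    def downloaded_artifacts(requirement):
        """Download a single requirement (no deps) and return the file names pip chose."""
        dest = tempfile.mkdtemp(prefix="pip-check-")
        subprocess.check_call(
            [sys.executable, "-m", "pip", "download", "--no-deps", "-d", dest, requirement]
        )
        return sorted(os.listdir(dest))

    if __name__ == "__main__":
        for filename in downloaded_artifacts("pymemcache"):
            print(filename)
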
*** exploreshaifali has joined #openstack-ceilometer12:37
*** ryanpetrello has joined #openstack-ceilometer12:38
*** ddieterly has joined #openstack-ceilometer12:46
*** ildikov has quit IRC12:48
*** cmyster has quit IRC12:48
*** ildikov has joined #openstack-ceilometer12:58
*** zqfan has joined #openstack-ceilometer12:59
*** nosnos has quit IRC13:10
*** Longgeek has joined #openstack-ceilometer13:33
*** Longgeek_ has quit IRC13:36
*** amalagon has quit IRC13:37
*** nellysmitt has quit IRC13:41
*** _elena_ has quit IRC13:41
*** ddieterly has quit IRC13:43
*** yassine has joined #openstack-ceilometer13:45
*** k4n0 has quit IRC13:46
*** ishant has quit IRC14:01
*** _nadya__ has joined #openstack-ceilometer14:04
*** julim has joined #openstack-ceilometer14:07
*** joesavak has joined #openstack-ceilometer14:07
*** _elena_ has joined #openstack-ceilometer14:10
*** fnaval has joined #openstack-ceilometer14:12
*** IvanBerezovskiy has left #openstack-ceilometer14:13
*** _nadya__ has quit IRC14:16
*** _nadya_ has joined #openstack-ceilometer14:16
*** ddieterly has joined #openstack-ceilometer14:20
*** cmyster has joined #openstack-ceilometer14:26
*** cmyster has joined #openstack-ceilometer14:26
*** fnaval has quit IRC14:26
*** shakamunyi has joined #openstack-ceilometer14:31
*** ana_ has joined #openstack-ceilometer14:31
*** thomasem has joined #openstack-ceilometer14:33
*** amalagon has joined #openstack-ceilometer14:35
*** ana_ has quit IRC14:36
*** adam_g` is now known as adam_g14:37
*** adam_g has quit IRC14:37
*** adam_g has joined #openstack-ceilometer14:37
*** exploreshaifali has quit IRC14:39
*** openstackgerrit has joined #openstack-ceilometer14:40
<openstackgerrit> Lena Novokshonova proposed openstack/ceilometer: Add new notifications types for volumes/snapshots  https://review.openstack.org/131147  14:51
<eglynn_> jd__: once https://review.openstack.org/132092 lands, can we get a tooz 0.9.0 released?  14:53
<jd__> eglynn_: sure  14:53
<eglynn_> jd__: excellent, thank you sir! :)  14:54
<openstackgerrit> Julien Danjou proposed stackforge/gnocchi: storage: multi-thread add_measure in Carbonara based drivers  https://review.openstack.org/132681  14:58
<openstackgerrit> Julien Danjou proposed stackforge/gnocchi: storage: factorize carbonara based drivers  https://review.openstack.org/133973  14:58
*** ildikov has quit IRC15:10
*** underyx is now known as underyx|off15:16
*** nsaje has joined #openstack-ceilometer15:19
*** ryanpetrello has quit IRC15:19
*** rbak has joined #openstack-ceilometer15:25
*** ryanpetrello has joined #openstack-ceilometer15:25
*** Ala has quit IRC15:27
*** ildikov has joined #openstack-ceilometer15:30
*** Longgeek has quit IRC15:35
*** _nadya_ has quit IRC15:38
*** cmyster has quit IRC15:44
<eglynn_> jd__: two quick questions about gnocchi if you've got a sec?  15:45
<eglynn_> first (IIRC this was mentioned at summit): would it be good to allow a "full-res" granularity (e.g. 0s or -1) to be specified in the archive policy?  15:46
<eglynn_> (in addition to the "green bar" representing the ingestion buffer / look-back window)  15:46
<eglynn_> this would allow alarming on an entity to have *any* reasonable period, as long as the number of eval periods is relatively small  15:46
<eglynn_> (so that the span of the GET .../measures query fits within the retention of the full-res timeseries, and aggregation could be done on-demand)  15:47
<eglynn_> second, any ideas on how we could represent transient resource-to-resource associations in the brave new world of gnocchi?  15:49
<eglynn_> (i.e. without any per-datapoint metadata or tags)  15:49
* eglynn_ is thinking here of opencontrail network flows, where each measure is associated with a "here" network and a "there" network  15:50
<boris-42> eglynn_: hi there  15:52
<eglynn_> boris-42: hey  15:52
<boris-42> eglynn_: so I replied to your comment  15:53
<boris-42> eglynn_: if you have any questions about the voting stuff just ping me  15:53
<eglynn_> boris-42: looking  15:53
<jd__> eglynn_: so, about the 1st question: yes, that'd be the back window parameter (not yet exposed in the API, but soon) plus asking for non-aggregated points (a recent patch implements that with full=yes in carbonara and the drivers)  15:58
*** cmyster has joined #openstack-ceilometer15:59
<jd__> eglynn_: for your 2nd question, I'd build a resource type "network" and have the entity associated with a network, with the other network as the key/name  15:59
*** cmyster has quit IRC15:59
*** cmyster has joined #openstack-ceilometer16:00
*** cmyster has joined #openstack-ceilometer16:00
<eglynn_> jd__: on the 1st, I was thinking more of the query asking for, say, period=120 when the archive policy granularity is, say, [1min, 1hour]  16:02
<eglynn_> ... in that case gnocchi could return the requested 120s periodization by aggregating the full-res data on the fly  16:02
<eglynn_> (if the timespan of the query was short and recent enough)  16:02
*** david-lyle has joined #openstack-ceilometer16:03
<eglynn_> i.e. not returning the non-aggregated data, but instead reverting to the classic ceilo aggregate-on-demand pattern if pre-aggregated values aren't available for that granularity  16:03
<jd__> eglynn_: ok, I totally misunderstood your question then  16:04
<jd__> that'd be good enough for me, though I don't know if it's something we should implement in the API or in the storage driver  16:05
<jd__> I wonder if there's any driver that could implement that on the storage side  16:05
<eglynn_> jd__: fair enough ... I was thinking exposing it in the API might be nice, as the ingestion buffer is kinda an artifact of the implementation (IIUC)  16:06
<jd__> that's not related to the buffer anyway  16:07
<jd__> IIUC  16:07
<jd__> unless you want to have that period computed only on it?  16:07
<jd__> that'd be a very edgy case  16:08
<jd__> all drivers have an ingestion buffer, it's just that it's easier to access the one from Carbonara than from the others, I imagine  16:08
<eglynn_> jd__: yeah, I was thinking of having the periodization computed on the full-res buffer *if* the requested granularity wasn't being pre-aggregated for that entity  16:10
<eglynn_> jd__: e.g. if the requested period was say 120s, while the pre-aggregated periods for this entity were say 1min and 1hour  16:10
<eglynn_> jd__: just to clarify the 2nd q, did you mean exposing *two* levels of naming in the resource-to-entity mapping?  16:12
<eglynn_> e.g. ... GET /v1/resource/network/here_UUID/entity/in.bytes/there_UUID/measures  16:12
<jd__> eglynn_: the problem is that the full-res buffer is only 1 hour long in that case, and I don't know if we have access to it in other drivers  16:14
<jd__> eglynn_: 2nd q: no, I meant not changing anything, just /v1/resources/network/here_uuid/entity/there_uuid.in.bytes/measures  16:15
<eglynn_> jd__: yeah, so in the alarming case that 1 hour might often be enough, as the period*eval_periods of the alarm is generally short  16:17
<eglynn_> jd__: ... but if we allowed the archive policy to explicitly allow for a longer full-res retention, that might be more logical for users  16:17
<jd__> eglynn_: I'm totally with all of that if all the drivers can have this ability  16:18
<jd__> eglynn_: so let me know if we can do that with Influx :p  16:18
<eglynn_> e.g. archive_policy = [(granularity=0s, retention=2hrs), (granularity=1min, retention=1day), (granularity=1hr, retention=1week)]  16:18
*** renatoarmani has joined #openstack-ceilometer16:18
<eglynn_> where 0s == full-res  16:18
<eglynn_> coolness, will do :)  16:19
<jd__> eglynn_: btw there are some patches to review if you ever feel like it O;-)  16:19
<eglynn_> jd__: yeah, sorry, I've been a bit remiss on that since summit ... will give the gnocchi queue on gerrit another pass before EoD  16:20
<jd__> cool, thanks  16:21
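
(A minimal sketch of the aggregate-on-demand fallback eglynn_ describes above: if the requested period is not one of the pre-aggregated granularities in the archive policy, bucket the retained full-res points into windows of that period and aggregate on the fly. The data layout and the entity.fetch_* calls are hypothetical, not the actual gnocchi/carbonara API.)

    # Sketch of the fallback: serve pre-aggregated data when the requested
    # period matches the archive policy, otherwise aggregate the retained
    # full-res points on demand. entity.fetch_* are hypothetical calls.
    import collections
    import datetime

    def aggregate_on_the_fly(points, period_seconds, aggregate=lambda vs: sum(vs) / len(vs)):
        """Bucket raw (datetime, value) points into windows of `period_seconds`."""
        buckets = collections.defaultdict(list)
        for timestamp, value in points:
            epoch = int(timestamp.timestamp()) // period_seconds * period_seconds
            buckets[epoch].append(value)
        return sorted((datetime.datetime.fromtimestamp(epoch), aggregate(values))
                      for epoch, values in buckets.items())

    def get_measures(entity, start, stop, period_seconds, pre_aggregated=(60, 3600)):
        if period_seconds in pre_aggregated:
            return entity.fetch_aggregated(start, stop, period_seconds)  # hypothetical
        # Requested granularity is not pre-aggregated: fall back to the
        # full-res retention window and compute the periodization on the fly.
        return aggregate_on_the_fly(entity.fetch_full_res(start, stop), period_seconds)
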
*** _cjones_ has joined #openstack-ceilometer16:34
*** Longgeek has joined #openstack-ceilometer16:35
*** Viswanath has joined #openstack-ceilometer16:36
*** _nadya_ has joined #openstack-ceilometer16:36
*** Viswanath has quit IRC16:39
*** Longgeek has quit IRC16:40
<boris-42> eglynn_: can we discuss the rally job?  16:42
<boris-42> eglynn_: if you have some free slots of time =)  16:42
<boris-42> eglynn_: cause you are missing the point..  16:42
<boris-42> a bit =)  16:42
<eglynn_> boris-42: missing the point with my latest comment on https://review.openstack.org/#/c/129922/7/specs/kilo/rally-check-gate.rst ?  16:42
<boris-42> eglynn_: ya  16:43
<eglynn_> boris-42: sure, always open to correction ... shoot  16:43
<boris-42> eglynn_: I just read it  16:43
<boris-42> eglynn_: so you are absolutely right about the difference between nodes  16:43
<boris-42> eglynn_: and the perf of them  16:43
<eglynn_> boris-42: OK, so in this case how do we avoid that being reflected in the rally timings?  16:44
<boris-42> eglynn_: so the idea of SLA checks (the ones related to duration) is something like a threshold value that is bigger than what you see in 99% of cases  16:44
<boris-42> eglynn_: then you get a failed rally job in 1% of cases  16:44
<boris-42> eglynn_: that's all  16:44
*** renatoarmani has quit IRC16:45
eglynn_boris-42: "bigger then in case of 99%" == 99th percentile of previous/recent test runs?16:45
boris-42eglynn_: nope16:45
boris-42eglynn_:  just do N times rechecks and take a look at what avarage is16:45
boris-42eglynn_:  it will require some amount of time to collect data (so at first point we can just skip this step)16:46
boris-42eglynn_:  and when we collect enough info we will just put some criterias of success16:46
boris-42eglynn_:  this really can help you to avoid bad patches, that have huge impact on performance.16:46
eglynn_boris-42: do you mean N manual rechecks triggered by owner of a single patch, or some auto-rechecking?16:46
amalagonhi jd__: has anything changed in the way gnocchi is built? I am trying to add an entry point in setup.cfg but even though I'm following the same steps I've done before, stevedore can't find it16:47
boris-42eglynn_: we can just add job + some benchmarks16:47
boris-42eglynn_:  then adjust numbers16:47
dhellmannamalagon: https://pypi.python.org/pypi/entry_point_inspector might help debug16:47
boris-42eglynn_:  and when it will be some amount of Numbers collect all json files from gates and find these values16:47
eglynn_amalagon: have you re-run the setup.py16:47
eglynn_?16:47
boris-42eglynn_:  so we won't need to do anything by hand16:47
amalagonhey eglynn_, yeah16:47
amalagondhellmann: thanks, I'll try that16:48
eglynn_amalagon: k, in that case wot dhellmann said16:48
<boris-42> eglynn_: and about historical performance data: we will just provide a page on github.io  16:48
<boris-42> eglynn_: with results for major components  16:49
<jd__> amalagon: did you run setup.py?  16:49
<boris-42> eglynn_: so this is just used for tracking trends (but not for catching bad patches)  16:49
<boris-42> that will be used*  16:49
<eglynn_> boris-42: OK, just so I understand, the collection and aggregation of the historical data is automated?  16:49
<boris-42> eglynn_: nope  16:49
<amalagon> jd__: yep, if 'sudo python setup.py install' is right  16:49
<boris-42> eglynn_: it will be automated. But for regression testing you don't need that  16:50
<jd__> amalagon: should be, yeah  16:50
<jd__> amalagon: you could try to debug with entry_point_inspector  16:50
<boris-42> eglynn_: just put in success criteria like no failures + some amount of stuff  16:50
<amalagon> jd__: :) thanks, gonna try that now  16:50
<boris-42> eglynn_: hm, maybe if we make a hangout call it will be simpler for you to understand =)  16:50
<boris-42> eglynn_: if I just show  16:50
<boris-42> =)  16:50
<eglynn_> boris-42: the point is that setting that regression criterion is not trivial if we want to avoid false positives and yet not set it so loosely that it never catches a real performance issue  16:52
<boris-42> eglynn_: it's not as hard as you think  16:52
<boris-42> eglynn_: let's add proper jobs without criteria (except no failures)  16:52
<boris-42> eglynn_: and we will just work incrementally on the regression criteria  16:53
*** shakamunyi has quit IRC16:53
<boris-42> eglynn_: it is really hard to do in one step, but it really should be our long term goal  16:53
<eglynn_> boris-42: can you define "incremental work on regression criteria" if there is no aggregation of the data?  16:53
<boris-42> eglynn_: every run => new results  16:54
<eglynn_> boris-42: do you mean manually gathering the test results from the CI nodes?  16:54
<boris-42> eglynn_: why manually? just make a small script  16:54
<boris-42> eglynn_: that will fetch the json files and do the analysis that we need  16:54
<eglynn_> boris-42: is that the approach used for the rally SLA tests in other projects?  16:54
*** ifarkas has quit IRC16:54
<boris-42> eglynn_: not yet  16:54
<boris-42> eglynn_: you can be the first one lol)  16:55
<openstackgerrit> ZhiQiang Fan proposed openstack/python-ceilometerclient: Add --slowest option for testr  https://review.openstack.org/134003  16:56
<boris-42> eglynn_: I mean, the json output of rally is quite simple  16:56
<eglynn_> boris-42: k, so that was the point that I missed?  16:56
<boris-42> eglynn_: so we will just fetch all the JSON files from all check jobs that passed the other unit tests  16:56
<boris-42> eglynn_: and functional  16:57
<boris-42> eglynn_: and analyze duration and find just the magic number  16:57
<eglynn_> boris-42: i.e. that it's possible to manually aggregate the test results, figure out an average over a period, add in some wiggle-room, and then bada-bing ... we're done!  16:57
<amalagon> dhellmann: hi, sorry, basic question here - if I see my desired entry point listed in gnocchi.egg-info/entry_points.txt but epi group list doesn't show my entry point, does that tell me something useful about what's going on?  16:58
<boris-42> eglynn_: ya  16:58
<boris-42> eglynn_: but we need some amount of runs for that  16:58
<boris-42> eglynn_: so at the start we won't have any criteria (except no failures)  16:59
*** ildikov has quit IRC16:59
<boris-42> eglynn_: and then we will work on automating the adjustment of the numbers  16:59
<boris-42> eglynn_: from the results from the gates  16:59
<boris-42> eglynn_: and we will get some kind of regression tests  16:59
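
(A sketch of the kind of "small script" boris-42 describes: collect durations from a pile of rally JSON result files fetched from passing check jobs and compute a high percentile as the candidate SLA threshold. The field names "result" and "duration" are assumptions about the rally output format, not a documented schema.)

    # Assumed file layout and field names; adjust to what the real rally
    # JSON results contain.
    import glob
    import json
    import math

    def load_durations(pattern="rally-results/*.json"):
        durations = []
        for path in glob.glob(pattern):
            with open(path) as f:
                data = json.load(f)
            for scenario in data:                      # assumed: a list of scenario runs
                for iteration in scenario.get("result", []):
                    durations.append(iteration["duration"])
        return durations

    def percentile(values, fraction=0.99):
        """Nearest-rank percentile; good enough for picking a rough threshold."""
        ordered = sorted(values)
        index = max(0, int(math.ceil(fraction * len(ordered))) - 1)
        return ordered[index]

    if __name__ == "__main__":
        durations = load_durations()
        print("candidate duration threshold: %.2fs" % percentile(durations, 0.99))
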
<eglynn_> boris-42: yeah, sure, I get that it can be done ... I suspect it's not quite as trivial as you present  17:01
<eglynn_> boris-42: for example we'd have to look into the data and consider excluding outliers  17:02
<eglynn_> boris-42: it would be interesting also to see if those numbers revealed any sustained differences between the RAX and HP public clouds  17:02
<eglynn_> (since a CI node can be scheduled to either IIUC)  17:02
<boris-42> eglynn_: so from what I saw the difference is < 2 times  17:03
*** joesavak has quit IRC17:03
<boris-42> eglynn_: but what I think is that we just need to start working on this =)  17:04
*** joesavak has joined #openstack-ceilometer17:04
eglynn_boris-42: "difference is < 2 times" ==> instances on one cloud up to twice as fast as the other cloud?17:05
boris-42eglynn_:  ya17:06
boris-42eglynn_:  that is what I saw in rally results for a quite long amount of time17:06
eglynn_boris-42: ... so in that case, surely we'd need cloud-specific thresholds for the SLAs?17:06
boris-42eglynn_:  I saw a lot of regressions that >2 times17:07
*** yassine has quit IRC17:07
<eglynn_> boris-42: i.e. expect number < X on HP, but < Y on RAX  17:07
*** exploreshaifali has joined #openstack-ceilometer17:08
<boris-42> eglynn_: so what I think is that for the beginning it's nice to get just absolute values  17:08
<boris-42> eglynn_: at least*  17:08
<boris-42> eglynn_: you can't do everything in one step +)  17:09
<eglynn_> boris-42: so the absolute value should be based on the aggregated numbers from the cloud that's consistently slowest, right?  17:09
<eglynn_> boris-42: ... i.e. we discard the historical numbers from the faster cloud  17:09
*** zqfan has quit IRC17:10
<boris-42> eglynn_: I would just drop this stuff (about different clouds) and find a value that has passed more than X% of the time  17:10
<boris-42> eglynn_: like 99% or 99.9% or whatever  17:10
<boris-42> eglynn_: this will make the job fail quite rarely and keep it quite tight to what we can do for now  17:11
<boris-42> eglynn_: and we will still work on "normalization" of the results  17:12
<boris-42> eglynn_: like running a small set of benchmarks and getting some magic coefficient*  17:12
<boris-42> eglynn_: what we actually need is to measure the performance of memory, cpu, disks, mysql  17:13
<boris-42> eglynn_: and make a formula using these coefficients =)  17:13
<eglynn_> boris-42: "find a value that has passed more than X% of the time, like 99%" is the 99th percentile that I referred to above, IIUC  17:15
*** nellysmitt has joined #openstack-ceilometer17:17
<eglynn_> boris-42: if there is such a sustained difference between the numbers from the two clouds, aggregating over both is going to be less meaningful  17:18
*** cdent has quit IRC17:18
<boris-42> eglynn_: so we can think about this a bit, but I still think that normalization is the better way..  17:19
<boris-42> eglynn_: ...than having different SLAs for different clouds..  17:20
<boris-42> eglynn_: in any case I don't see anything that should block this work =)  17:20
<eglynn_> boris-42: a-ha, I see ... normalization == "multiply the numbers from one cloud by a compensating factor"?  17:21
<boris-42> eglynn_: yep  17:21
<eglynn_> boris-42: fair enough, if the penalty for using one cloud over the other is relatively constant  17:22
<boris-42> eglynn_: and the factor is calculated from a formula that includes (cpu, mem, disk) performance  17:22
<boris-42> eglynn_: the factor can be calculated every time  17:22
<boris-42> eglynn_: by running small benchmarks inside the vm  17:22
<boris-42> eglynn_: before the task  17:22
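
(A hypothetical sketch of the normalization idea just described: run quick probes on the node before the task, turn them into a single coefficient relative to a reference node, and scale the measured durations by it before comparing against the SLA. The probe set, weights, and reference numbers are invented for illustration.)

    # Entirely hypothetical probes, weights, and reference timings.
    import os
    import time

    def cpu_probe(iterations=2000000):
        """Trivial CPU-bound loop; returns elapsed seconds."""
        start = time.time()
        total = 0
        for i in range(iterations):
            total += i * i
        return time.time() - start

    def disk_probe(path="/tmp/probe.bin", size_mb=64):
        """Write-and-sync probe; returns elapsed seconds."""
        start = time.time()
        with open(path, "wb") as f:
            f.write(b"\0" * (size_mb * 1024 * 1024))
            f.flush()
            os.fsync(f.fileno())
        os.remove(path)
        return time.time() - start

    REFERENCE = {"cpu": 0.50, "disk": 0.80}   # made-up baseline-node timings
    WEIGHTS = {"cpu": 0.6, "disk": 0.4}

    def normalization_factor():
        observed = {"cpu": cpu_probe(), "disk": disk_probe()}
        # > 1.0 means this node is slower than the reference node.
        return sum(WEIGHTS[k] * (observed[k] / REFERENCE[k]) for k in WEIGHTS)

    def normalized_duration(raw_seconds):
        """Scale a measured rally duration before comparing it to the SLA threshold."""
        return raw_seconds / normalization_factor()
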
*** _nadya_ has quit IRC17:22
<boris-42> eglynn_: so this is a very interesting topic for a PhD =)  17:23
<eglynn_> boris-42: ok, so that is the kind of info I was looking for ... I'll summarize in a comment on gerrit  17:24
<boris-42> eglynn_: great  17:24
<dhellmann> amalagon: a couple of things could cause that. are epi and gnocchi installed into the same site-packages (globally or in a virtualenv)?  17:28
<amalagon> uh, great question - I think they are both installed globally because I did 'pip install entry_point_inspector' and installed gnocchi globally, but I'll double-check the site-packages  17:30
<eglynn_> boris-42: done on https://review.openstack.org/#/c/129922/7/specs/kilo/rally-check-gate.rst  17:33
<amalagon> dhellmann: ok, yep, they are both installed in user/local/lib/python2.7/dist-packages  17:33
*** renatoarmani has joined #openstack-ceilometer17:34
<dhellmann> amalagon: so when you look at the entry_points.txt file under the gnocchi installed in dist-packages you see the entry point, but when you scan with epi you don't see it?  17:35
*** _nadya_ has joined #openstack-ceilometer17:35
<amalagon> dhellmann: exactly; also, there are two other entry points in the setup.cfg - both of these appear with epi, just not the additional one I made.  17:37
<dhellmann> amalagon: how did you get the new entry point listed in the entry_points.txt file?  17:37
<amalagon> dhellmann: sorry, don't understand the question?  17:38
<dhellmann> amalagon: at one point there was no new entry point and now there is. What happened between those 2 states to cause it to be added to the file? Did you edit entry_points.txt, setup.cfg, something else?  17:38
*** fnaval has joined #openstack-ceilometer17:39
*** _nadya_ has quit IRC17:39
<amalagon> ah, I see. So I added my entry point into setup.cfg  17:40
<dhellmann> ok, that wouldn't do it by itself, so did you re-install?  17:41
<amalagon> yep, and then did the sudo python setup.py install  17:41
<dhellmann> can you put your setup.cfg and entry_points.txt files on http://paste.openstack.org?  17:42
<amalagon> yep! one sec  17:42
<dhellmann> amalagon: also the output of epi  17:42
<amalagon> my setup.cfg file: http://paste.openstack.org/show/132448 , entry_points.txt: http://paste.openstack.org/show/132449 , results of epi: http://paste.openstack.org/show/132450  17:45
<dhellmann> amalagon: and which plugin is the new one?  17:45
<amalagon> dhellmann: gnocchi.aggregate  17:46
<dhellmann> amalagon: ok, that's a namespace, not an individual plugin  17:47
<dhellmann> amalagon: what does "epi group show gnocchi.aggregate" report?  17:47
<amalagon> dhellmann: http://paste.openstack.org/show/132451  17:48
<dhellmann> amalagon: what does this print?: python -c 'import gnocchi; print gnocchi.__file__'  17:49
<amalagon> hm, AttributeError: 'module' object has no attribute '__file__' ?  17:50
<dhellmann> replace __file__ with __path__ and try again  17:50
<amalagon> same error, no attribute '__path__'  17:51
<dhellmann> ok, so your gnocchi installation seems broken somehow  17:51
<amalagon> nooo  17:51
<dhellmann> try "pip uninstall gnocchi" and see if that is able to remove anything  17:51
*** safchain has quit IRC17:51
<amalagon> whoa! I uninstalled and reinstalled and the namespace showed up!  17:52
<amalagon> thank you!  17:52
<dhellmann> good!  17:52
<amalagon> :D  17:53
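
(For context on the debugging session above, a hedged sketch of how an entry-point namespace like gnocchi.aggregate is declared and then discovered at runtime; the plugin name and module path are illustrative, not taken from gnocchi's real setup.cfg.)

    # Illustrative only; the plugin name and module path below are not from gnocchi.
    #
    # The setup.cfg stanza being edited would look roughly like:
    #
    #   [entry_points]
    #   gnocchi.aggregate =
    #       example = mypackage.aggregates:ExampleAggregate
    #
    # After reinstalling (pip install . or python setup.py install), the entry
    # point lands in <pkg>.egg-info/entry_points.txt and can be checked with
    # entry_point_inspector ("epi group show gnocchi.aggregate") or via stevedore:
    from stevedore import extension

    manager = extension.ExtensionManager(
        namespace="gnocchi.aggregate",   # the group name declared in setup.cfg
        invoke_on_load=False,            # discover only, don't instantiate
    )
    print(sorted(manager.names()))       # a stale install shows up as missing names here
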
*** shakamunyi has joined #openstack-ceilometer17:56
*** shakamunyi has quit IRC17:56
*** fnaval has quit IRC17:59
*** shakamunyi has joined #openstack-ceilometer18:01
*** alexpilotti has joined #openstack-ceilometer18:21
*** eglynn_ is now known as eglynn-afk18:22
*** ildikov has joined #openstack-ceilometer18:27
*** renatoarmani has quit IRC18:32
*** renatoarmani has joined #openstack-ceilometer18:35
*** jsavak has joined #openstack-ceilometer18:58
*** joesavak has quit IRC19:02
*** alexpilotti_ has joined #openstack-ceilometer19:09
*** liusheng has quit IRC19:09
*** alexpilotti has quit IRC19:09
*** alexpilotti_ is now known as alexpilotti19:09
*** liusheng has joined #openstack-ceilometer19:09
*** shakamunyi has quit IRC19:12
*** shakamunyi has joined #openstack-ceilometer19:13
*** rbak_ has joined #openstack-ceilometer19:44
*** rbak has quit IRC19:44
*** ddieterly has quit IRC19:47
*** ddieterly has joined #openstack-ceilometer19:48
*** ddieterly has quit IRC19:53
*** ddieterly has joined #openstack-ceilometer19:55
*** dnalezyt has joined #openstack-ceilometer19:55
*** jsavak has quit IRC20:01
*** joesavak has joined #openstack-ceilometer20:03
*** prad has joined #openstack-ceilometer20:22
*** exploreshaifali has quit IRC20:24
*** alexpilotti has quit IRC20:33
*** promulo has quit IRC20:34
*** promulo has joined #openstack-ceilometer20:35
*** _nadya_ has joined #openstack-ceilometer20:39
*** _nadya_ has quit IRC20:43
*** renatoarmani has quit IRC20:44
*** amalagon has quit IRC20:46
*** shakamunyi_ has joined #openstack-ceilometer20:49
*** thomasem has quit IRC20:57
*** shakamunyi_ has quit IRC21:03
*** amalagon has joined #openstack-ceilometer21:05
*** shakamun_ has joined #openstack-ceilometer21:10
*** shakamunyi has quit IRC21:10
*** MasterPiece has joined #openstack-ceilometer21:11
*** shakamunyi_ has joined #openstack-ceilometer21:17
*** shakamun_ has quit IRC21:26
*** shakamunyi_ has quit IRC21:27
*** shakamunyi has joined #openstack-ceilometer21:30
*** Viswanath has joined #openstack-ceilometer21:36
*** thomasem has joined #openstack-ceilometer21:37
*** Viswanath has quit IRC21:40
*** Viswanath has joined #openstack-ceilometer21:41
*** Viswanath has quit IRC21:44
*** asalkeld has joined #openstack-ceilometer21:56
*** joesavak has quit IRC22:16
*** nellysmitt has quit IRC22:22
*** asalkeld has quit IRC22:51
*** thomasem has quit IRC22:54
*** ddieterly has quit IRC23:05
*** _cjones_ has quit IRC23:06
*** asalkeld has joined #openstack-ceilometer23:07
*** _cjones_ has joined #openstack-ceilometer23:09
*** eglynn-afk has quit IRC23:09
*** david-lyle is now known as david-lyle_afk23:12
*** promulo has quit IRC23:22
*** promulo has joined #openstack-ceilometer23:22
*** promulo__ has joined #openstack-ceilometer23:28
*** promulo has quit IRC23:29
*** ryanpetrello has quit IRC23:39
*** prad has quit IRC23:42
*** exploreshaifali has joined #openstack-ceilometer23:48
*** rbak_ has quit IRC23:53
*** eglynn-afk has joined #openstack-ceilometer23:57
*** shakamunyi has quit IRC23:58
*** shakamunyi has joined #openstack-ceilometer23:58
