Tuesday, 2014-01-28

*** ociuhandu has quit IRC  [00:01]
*** openstack has joined #openstack-gate  [00:02]
<mriedem> i.e. test_rescue_server does a lot of global setup in the class, with resources that aren't necessarily needed in all test cases  [00:03]
<mriedem> that all runs in parallel, putting load on the server when it's unnecessary  [00:03]
<mriedem> mtreinish: ^  [00:05]
<mriedem> i also don't know why that test bothers adding an _unpause/_unrescue cleanup when it's just going to delete the server when the test exits anyway; seems like a waste of time  [00:08]
*** ken1ohmichi has joined #openstack-gate  [00:11]
<cyeoh> mriedem: it's only deleted at tearDownClass, isn't it?  [00:11]
<cyeoh> mriedem: so it's just making sure the server is put back in the correct state before the next rescue test  [00:11]
<mriedem> cyeoh: hmm, you might be right about that; still, i'm not sure this test is very stable  [00:14]
<mriedem> cyeoh: like, it creates a server to rescue in the setup, but then there are tests that rescue a different server  [00:15]
<mriedem> seems like this test is trying to do too much  [00:15]
<mriedem> or cover too many different scenarios  [00:15]
<mriedem> cyeoh: you might want to look at this: https://review.openstack.org/#/c/69455/  [00:15]
*** markmcclain has quit IRC  [00:16]
<cyeoh> mriedem: looking now. It would be a bit concerning if cinder is causing this slowness though...  [00:17]
<cyeoh> mriedem: it looks like the second server is only used for negative tests, but those could probably just use the original server  [00:19]
<cyeoh> and just put it into rescue state first  [00:19]
*** sc68cal has joined #openstack-gate  [00:23]
<mriedem> cyeoh: yeah, with the _unrescue cleanup i guess  [00:25]
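
A rough sketch of the refactor cyeoh and mriedem describe above: keep the single class-level server, and have the negative cases rescue it with an unrescue cleanup instead of building a second server just for them. The base class, client helpers, and exception type here are assumed tempest-style names, not the actual test code:

    # Hypothetical, simplified sketch of the discussed refactor; the base
    # class and client helper names are assumed tempest-style, not the real
    # test_server_rescue code.
    from tempest.api.compute import base


    class ServerRescueTest(base.BaseV2ComputeTest):

        @classmethod
        def setUpClass(cls):
            super(ServerRescueTest, cls).setUpClass()
            # One shared server for the whole class instead of a second
            # server that only the negative cases use.
            resp, server = cls.create_test_server(wait_until='ACTIVE')
            cls.server_id = server['id']

        def _rescue_with_cleanup(self):
            # Rescue the shared server and guarantee it is unrescued before
            # the next test runs, rather than creating/deleting an extra one.
            self.servers_client.rescue_server(self.server_id, adminPass='rescue')
            self.servers_client.wait_for_server_status(self.server_id, 'RESCUE')
            self.addCleanup(self._unrescue)

        def _unrescue(self):
            self.servers_client.unrescue_server(self.server_id)
            self.servers_client.wait_for_server_status(self.server_id, 'ACTIVE')

        def test_rescued_server_rejects_rescue(self):
            # Negative case reusing the shared, already-rescued server.
            self._rescue_with_cleanup()
            self.assertRaises(Exception,  # exact exception type assumed/omitted
                              self.servers_client.rescue_server,
                              self.server_id, adminPass='rescue')

The addCleanup is what keeps the shared server in a known state for the next test, which was cyeoh's original point about the _unrescue cleanup.
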
<mriedem> cyeoh: i'm not sure how to tell if cinder is causing the slowness  [00:25]
<cyeoh> mriedem: timestamps on the cinder-api logs?  [00:25]
<mriedem> cyeoh: especially considering we did this for another volume-related race fail: https://review.openstack.org/#/c/69443/  [00:25]
<mriedem> yeah, it's just that this has been showing up since last september at least  [00:26]
<mriedem> the pause/rescue one at least has  [00:26]
<mriedem> this is the bug i'm tracking for the pause fail: https://bugs.launchpad.net/nova/+bug/1226412  [00:26]
<cyeoh> mriedem: ok, I guess my concern about patches like 69443 is that I don't think we really have an understanding of why it takes so long sometimes  [00:28]
<cyeoh> is it just a property of the way we are doing testing in the gate, or is it going to be a problem in real setups too?  [00:28]
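
One hedged way to act on cyeoh's "timestamps on the cinder-api logs" suggestion is to parse the leading timestamp on each screen-c-api log line and flag long gaps between consecutive entries. The filename and the "YYYY-MM-DD HH:MM:SS.mmm" prefix are assumptions based on the devstack screen logs linked later in this log:

    # Rough sketch: flag long gaps between consecutive cinder-api log lines.
    # The filename and timestamp prefix are assumptions based on the devstack
    # screen logs referenced elsewhere in this discussion.
    import re
    from datetime import datetime

    TS = re.compile(r'^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3})')

    def long_gaps(path, threshold_sec=30.0):
        prev = None
        with open(path) as f:
            for line in f:
                m = TS.match(line)
                if not m:
                    continue  # continuation lines (tracebacks) have no timestamp
                ts = datetime.strptime(m.group(1), '%Y-%m-%d %H:%M:%S.%f')
                if prev and (ts - prev).total_seconds() > threshold_sec:
                    print('gap of %.1fs before: %s'
                          % ((ts - prev).total_seconds(), line.rstrip()))
                prev = ts

    long_gaps('screen-c-api.txt')
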
*** dims has quit IRC  [00:34]
*** dims has joined #openstack-gate  [00:36]
<jgriffith> mriedem: cyeoh: cinder causing slowness?  [00:41]
<cyeoh> jgriffith: looking at changes like this: https://review.openstack.org/#/c/69455/ which are trying to avoid creating/deleting volumes, because it's suspected that the timeouts come from waiting for cinder to create/delete volumes  [00:44]
<jgriffith> cyeoh: I don't think that's accurate  [00:45]
<jgriffith> cyeoh: is there data to back that up?  [00:45]
<jgriffith> cyeoh: I mean... for example, the last "cinder is too slow to attach" bug had absolutely nothing to do with cinder  [00:46]
<cyeoh> jgriffith: yea, that's what I've been asking :-)  [00:46]
<jgriffith> cyeoh: ahhh... :)  [00:46]
<jgriffith> cyeoh: well I can surely help figure that out  [00:46]
<jgriffith> cyeoh: FWIW, I run all sorts of tests that create/delete batches of 100 vols without problems... BUT  [00:47]
<jgriffith> it all changes if you throw in things like instances and attaches  [00:47]
<cyeoh> ok, which is what we have here.  [00:47]
<jgriffith> cyeoh: yeah, looking  [00:48]
<jgriffith> cyeoh: do you have an example failure?  [00:48]
<jgriffith> cyeoh: never mind  [00:48]
<jgriffith> cyeoh: I remember looking at one of these  [00:49]
<jgriffith> cyeoh: does this mean anything to you: http://logs.openstack.org/77/56577/9/check/check-tempest-devstack-vm-postgres-full/f5fe3ff/logs/screen-n-cpu.txt.gz#_2013-11-25_15_17_36_732  [00:49]
<cyeoh> ... just looking...  [00:50]
<cyeoh> jgriffith: you're referring to the tracebacks in there?  [00:51]
<jgriffith> cyeoh: yes  [00:51]
<jgriffith> cyeoh: also a glance failure or two  [00:51]
<jgriffith> cyeoh: some "WARNING" traces from instance not found as well  [00:52]
<jgriffith> cyeoh: so IIRC the volume create is a create-bootable-volume via nova  [00:52]
<jgriffith> cyeoh: and the fetch from glance is what's failing  [00:53]
<jgriffith> cyeoh: completely independent of the volume  [00:53]
<jgriffith> cyeoh: http://logs.openstack.org/77/56577/9/check/check-tempest-devstack-vm-postgres-full/f5fe3ff/logs/screen-n-cpu.txt.gz#_2013-11-25_15_18_55_585  [00:53]
<jgriffith> cyeoh: and the "slowness" is of course the timeout waiting for the fetch from glance to fail  [00:54]
<cyeoh> ah ok. (those InstanceNotFound ones I think are caused by some negative tests, and I think we have a fix for it, but I'll need to track it back to the api logs to double-check)  [00:54]
<jgriffith> cyeoh: sure... those are fine  [00:54]
<jgriffith> cyeoh: the get-image one though... that's another story  [00:54]
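
For context on why glance ends up in the volume path at all: the volume under discussion is created from an image, so the create cannot finish until the image fetch does. A minimal sketch of that kind of request, assuming the python-cinderclient v1 client and its imageRef keyword of that era; the auth values and image id are placeholders:

    # Minimal sketch of creating a bootable volume from a glance image with
    # python-cinderclient; auth values and the image id are placeholders, and
    # the v1 client/imageRef kwarg are assumed from the clients of this era.
    from cinderclient import client

    cinder = client.Client('1', 'demo', 'secret', 'demo',
                           'http://keystone.example.org:5000/v2.0')
    vol = cinder.volumes.create(size=1, imageRef='IMAGE-UUID',
                                display_name='bootable-vol')
    # The volume stays in "creating"/"downloading" until the image fetch
    # finishes, which is where the timeout being discussed shows up.
    print(vol.id, vol.status)
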
<mriedem> mtreinish: about when did parallel testing go live in the gate?  [00:55]
<mriedem> ~september?  [00:55]
<cyeoh> jgriffith: ah ok, I'll look into it more.  [00:57]
<jgriffith> cyeoh: wish that tempest output had the volume ID  [00:58]
<jgriffith> cyeoh: that would sure help  [00:58]
<cyeoh> we still have so many errors/warnings in the logs, and I'm pretty sure at least a few of them are just spurious logging caused by negative tests (e.g. we're logging errors where we shouldn't be)  [00:58]
<jgriffith> but anyway... the attach is the real fail, I think  [00:59]
<jgriffith> cyeoh: yeah, but I've noticed the last week it's gotten WAYYY better  [00:59]
<jgriffith> cyeoh: I think most of the negative tests are captured now  [00:59]
<jgriffith> cyeoh: if not all of them  [00:59]
<jgriffith> cyeoh: and IIRC there's now a gate job that fails if traces are present, no?  [00:59]
<cyeoh> jgriffith: yea, we're trying to clean up the logs...  [00:59]
<cyeoh> jgriffith: that got turned off for $REASONS  [01:00]
<jgriffith> :(  [01:00]
<jgriffith> or :)  [01:00]
<jgriffith> depending which side of the patch you're on  [01:00]
<cyeoh> heh :-) Hopefully we can get it turned on again soon  [01:00]
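
The gate check being discussed (fail a run when unexpected traces show up in the service logs) can be approximated with something like the following; the whitelist patterns, log layout, and exit-code convention are illustrative assumptions, not the real gate tooling:

    # Simplified sketch of a "fail if there are unexpected ERROR/TRACE lines"
    # check over the devstack screen logs; the whitelist below is illustrative.
    import glob
    import re
    import sys

    WHITELIST = [
        # Known-noisy messages, e.g. errors deliberately triggered by
        # negative tests, go here (patterns are made up for illustration).
        re.compile(r'InstanceNotFound'),
    ]

    def unexpected_errors(logdir):
        hits = []
        for path in glob.glob('%s/screen-*.txt' % logdir):
            with open(path) as f:
                for line in f:
                    if ' ERROR ' not in line and ' TRACE ' not in line:
                        continue
                    if any(p.search(line) for p in WHITELIST):
                        continue
                    hits.append('%s: %s' % (path, line.rstrip()))
        return hits

    if __name__ == '__main__':
        errors = unexpected_errors(sys.argv[1] if len(sys.argv) > 1 else 'logs')
        for e in errors:
            print(e)
        sys.exit(1 if errors else 0)
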
<jgriffith> cyeoh: I'd agree  [01:02]
<jgriffith> hmm  [01:02]
<jgriffith> there's an awful lot going on in this one; I think I was incorrect about the bootable volume piece here  [01:04]
<jgriffith> without the uuid it's kinda hellish to trace though  [01:04]
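
On the "hellish to trace without the uuid" point: once a run does expose the volume ID, a cross-service timeline can be stitched together by grepping every screen log for it and sorting on the leading timestamp. A rough sketch, with the log layout and timestamp format assumed as before:

    # Rough sketch: build a chronological, cross-service timeline for one
    # volume UUID by grepping all screen logs; layout/format are assumed.
    import glob
    import os
    import re

    TS = re.compile(r'^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3})')

    def timeline(logdir, volume_id):
        events = []
        for path in glob.glob(os.path.join(logdir, 'screen-*.txt')):
            service = os.path.basename(path)
            with open(path) as f:
                for line in f:
                    if volume_id not in line:
                        continue
                    m = TS.match(line)
                    stamp = m.group(1) if m else ''
                    events.append((stamp, service, line.rstrip()))
        return sorted(events)

    for stamp, service, line in timeline('logs', 'VOLUME-UUID'):
        print(stamp, service, line)
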
*** masayukig has quit IRC  [01:06]
<jgriffith> cyeoh: oh dear....  [01:06]
<jgriffith> cyeoh: this is prior to the cinder cleanup for negatives  [01:06]
<jgriffith> cyeoh: so the c-api log is full of garbage  [01:06]
<cyeoh> ah :-(  [01:07]
<jgriffith> cyeoh: alright, well if we get a more recent occurrence of this please let me know. I'm happy to help  [01:08]
<jgriffith> cyeoh: I don't see much sense in trying to work off of something from November at this point though  [01:08]
*** alexpilotti_ has quit IRC  [01:08]
<cyeoh> jgriffith: thanks - yea, agreed  [01:09]
*** masayukig has joined #openstack-gate  [01:09]
<fungi> eep, nova stable/havana needs a backport of the stevedore mock patch, looks like  [01:52]
<fungi> there's a stable nova change failing unit tests in the gate at this moment  [01:52]
<mriedem> fungi: crapola, link?  [02:08]
<mriedem> fungi: looks like sdague already has a backport: https://review.openstack.org/#/c/69515/  [02:08]
<fungi> aha, i guess https://review.openstack.org/64521 just needs to be rebased onto that (along with every other open nova stable/havana change)  [02:09]
<mriedem> dansmith: you're a stable maintainer, right? can you +2 sdague's stevedore backport ^ ?  [02:10]
*** masayukig has quit IRC  [03:27]
*** mriedem has left #openstack-gate  [03:55]
*** mriedem has quit IRC  [03:55]
<marun>  [04:15]
*** masayukig has joined #openstack-gate  [04:31]
*** david-lyle has joined #openstack-gate  [04:34]
*** gsamfira has quit IRC  [06:46]
*** coolsvap has joined #openstack-gate  [07:17]
*** ndipanov_gone is now known as ndipanov  [07:27]
*** flaper87|afk is now known as flaper87  [07:44]
*** david-lyle has quit IRC  [08:27]
*** ken1ohmichi has quit IRC  [09:05]
*** SergeyLukjanov_ is now known as SergeyLukjanov  [09:08]
*** marun has quit IRC  [09:19]
*** coolsvap has quit IRC  [10:47]
*** coolsvap has joined #openstack-gate  [11:00]
*** SergeyLukjanov is now known as SergeyLukjanov_a  [11:19]
*** SergeyLukjanov_a is now known as SergeyLukjanov  [11:19]
*** coolsvap has quit IRC  [11:30]
<chmouel> sdague: i think we may need to classify this one: http://is.gd/lFruQX  [11:51]
<chmouel> sdague: ah, it's actually referenced here: https://github.com/openstack-infra/elastic-recheck/blob/master/queries/1254772.yaml  [11:52]
<chmouel> sdague: but it didn't seem to get picked up on my review https://review.openstack.org/#/c/41450/  [11:52]
<sdague> chmouel: sure, I'm trying to figure out the differences  [11:55]
<sdague> chmouel: so a bunch of those are different issues  [11:57]
<sdague> they aren't actually a volume setup failure, they are a network setup failure  [11:57]
*** masayukig has quit IRC  [11:57]
<chmouel> really? I guess I need to grab all the screen output as well to grep it  [11:59]
<sdague> yeh, I'm pretty sure the big spikes yesterday were dansmith's nova network series  [12:03]
<chmouel> that was just an hour or two ago, but it's perhaps still a WIP  [12:18]
<sdague> yeh  [12:33]
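
Along the lines of chmouel's plan to "grab all the screen output as well to grep it": the per-service logs for a run can be pulled from logs.openstack.org (served gzip-compressed) and searched locally for a signature. A hedged sketch, reusing the job URL posted earlier in this log; the service log names and the search string are placeholders:

    # Sketch: download a few screen logs for one gate run and grep them for a
    # failure signature; the log names and signature below are placeholders.
    import gzip
    import urllib.request

    BASE = ('http://logs.openstack.org/77/56577/9/check/'
            'check-tempest-devstack-vm-postgres-full/f5fe3ff/logs/')
    SERVICES = ['screen-n-cpu.txt.gz', 'screen-c-api.txt.gz', 'screen-c-vol.txt.gz']
    SIGNATURE = 'timed out'  # placeholder failure signature

    for name in SERVICES:
        data = urllib.request.urlopen(BASE + name).read()
        try:
            text = gzip.decompress(data).decode('utf-8', 'replace')
        except OSError:
            # Some servers transparently decompress; fall back to raw bytes.
            text = data.decode('utf-8', 'replace')
        for line in text.splitlines():
            if SIGNATURE in line:
                print(name, line)
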
*** markmcclain has joined #openstack-gate  [13:00]
*** dhellmann_ is now known as dhellmann  [13:16]
*** dhellmann is now known as dhellmann_  [13:36]
*** dhellmann_ is now known as dhellmann  [13:37]
*** dims has quit IRC  [13:50]
*** dims has joined #openstack-gate  [13:52]
*** markmcclain has quit IRC  [13:57]
*** markmcclain has joined #openstack-gate  [13:57]
*** dims has quit IRC  [14:03]
*** mestery has quit IRC  [14:03]
*** dims has joined #openstack-gate  [14:04]
<anteaya> mriedem, thanks for the link  [14:11]
<russellb> we keeping this channel?  [14:11]
<russellb> so many openstack channels ...  [14:11]
<russellb> surely -qa / -infra / -project channels suffice :)  [14:12]
<russellb> someone yell if i need to return  [14:12]
*** russellb has left #openstack-gate  [14:12]
<sdague> yeh, I'm ok abandoning this channel now  [14:14]
*** sdague has left #openstack-gate  [14:15]
*** mestery has joined #openstack-gate  [14:15]
*** dansmith has left #openstack-gate  [14:28]
*** mestery_ has joined #openstack-gate  [14:40]
*** mestery__ has joined #openstack-gate  [14:42]
<mtreinish> mriedem: it was right before h-3, which was late august I think  [14:42]
*** meste____ has joined #openstack-gate  [14:43]
*** mestery has quit IRC  [14:43]
*** mestery_ has quit IRC  [14:46]
*** mestery__ has quit IRC  [14:46]
*** flaper87 is now known as flaper87|afk  [14:50]
*** portante has left #openstack-gate  [14:54]
*** RelayChatInfo has joined #openstack-gate  [14:56]
*** RelayChatInfo has left #openstack-gate  [14:56]
*** mestery has joined #openstack-gate  [14:57]
*** mestery_ has joined #openstack-gate  [14:58]
*** meste____ has quit IRC  [15:00]
*** mestery__ has joined #openstack-gate  [15:01]
*** mestery has quit IRC  [15:02]
*** mestery_ has quit IRC  [15:03]
*** mestery has joined #openstack-gate  [15:05]
*** mestery__ has quit IRC  [15:09]
*** coolsvap has joined #openstack-gate  [15:16]
*** sc68cal has left #openstack-gate  [15:18]
*** david-lyle has joined #openstack-gate  [15:41]
*** rossella_s has joined #openstack-gate  [15:55]
*** markmcclain has left #openstack-gate  [15:58]
*** SergeyLukjanov is now known as SergeyLukjanov_  [16:09]
*** flaper87|afk is now known as flaper87  [16:12]
*** marun has joined #openstack-gate  [17:03]
*** marun has quit IRC  [17:05]
*** dtroyer_zz has left #openstack-gate  [17:24]
*** flaper87 is now known as flaper87|afk  [17:28]
<fungi> if it's decided that this channel no longer serves a real purpose, someone please submit a change to openstack-infra/config reverting 95c630f so that our meetbot doesn't hang out in here logging forever  [17:34]
*** SergeyLukjanov_ is now known as SergeyLukjanov  [17:36]
*** therve has left #openstack-gate  [17:41]
*** rossella_s has quit IRC  [18:41]
*** jmeridth has quit IRC  [18:53]
*** alexpilotti has joined #openstack-gate  [18:54]
*** ndipanov has quit IRC  [19:09]
*** alexpilotti has quit IRC  [19:13]
*** alexpilotti has joined #openstack-gate  [19:16]
*** alexpilotti has quit IRC  [19:21]
<ttx> agreed, this should be a transient channel  [19:25]
*** ndipanov has joined #openstack-gate  [19:32]
*** mtreinish has left #openstack-gate  [19:33]
*** david-lyle has quit IRC  [19:42]
*** asadoughi has left #openstack-gate  [19:52]
*** salv-orlando has left #openstack-gate  [20:14]
<chmouel> done: https://review.openstack.org/69714  [20:29]
*** ndipanov has quit IRC  [20:49]
*** ttx has left #openstack-gate  [20:57]
*** jog0 has left #openstack-gate  [21:27]
*** coolsvap has quit IRC  [21:35]
*** SergeyLukjanov is now known as SergeyLukjanov_  [21:59]
<fungi> chmouel: thanks! +2'd. just waiting for a few more people from here to +1 it so we're sure  [22:00]
* fungi makes like a banana and leaves  [22:01]
*** fungi has left #openstack-gate  [22:01]
*** bnemec has left #openstack-gate  [22:20]
<anteaya> https://review.openstack.org/#/c/69714/ dhellmann dims HenryG jgriffith mestery obondarev roaet SergeyLukjanov_: can you +1 this patch to remove logging from this channel?  [22:46]
<anteaya> we don't need it anymore, and we can reinstate it if we need to log this channel again  [22:47]
<dims> done. thx  [22:50]
<roaet> ditto  [22:50]
<dhellmann> anteaya: done  [22:51]
<anteaya> thank you  [22:52]
*** ndipanov has joined #openstack-gate  [22:52]
*** ndipanov has quit IRC  [23:03]
