Tuesday, 2014-01-21

00:00 <jog0> lifeless: and based on http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiSW1wb3J0RXJyb3I6IE5vIG1vZHVsZSBuYW1lZCBwYXNzbGliLmhhc2hcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MDI2MjQwNjMzOH0=
00:00 <jog0> I ignored it
00:00 <jog0> being that I am not supposed to be working
00:02 <lifeless> jog0: ack
00:03 <jog0> lifeless: I think your effort to make delete faster will probably give us the most bang for our buck right now
00:03 *** dcramer_ has quit IRC
00:05 *** david-lyle_ has quit IRC
00:05 <jog0> and the gate just reset again
00:06 <lifeless> jog0: So it's done AFAICT, just needs infra folk to
00:06 <bknudson> jog0: it's the "Error: iSCSI device not found at /dev/disk/by-path/ip-127.0.0.1:3260-iscsi-" problem.
00:07 <bknudson> in gate-grenade this time.
00:08 *** rakhmerov has quit IRC
00:08 *** rakhmerov1 has joined #openstack-infra
00:08 *** oubiwann_ has quit IRC
00:11 <lifeless> clarkb: fungi: mordred: ^ So, any of you around to apply this nodepool stack?
00:12 <lifeless> https://review.openstack.org/#/c/67985/ + https://review.openstack.org/#/c/67986/
00:12 <lifeless> clarkb: fungi: mordred: ^
00:13 *** ryanpetrello has quit IRC
00:14 *** rakhmerov1 has quit IRC
00:16 <openstackgerrit> Sean Dague proposed a change to openstack-infra/elastic-recheck: narrow to just pci explosion  https://review.openstack.org/67989
00:16 *** markwash has quit IRC
00:18 *** markwash has joined #openstack-infra
00:19 *** moted has quit IRC
00:19 *** moted has joined #openstack-infra
00:21 <fungi> lifeless: i can look in a moment. trying to grab a quick bite and get evening chores out of the way so i can pack bags--not a lot of time to kill and clean up after nodepool so we can restart it. the external delete loop i've got going the past couple hours seems to be helping some
00:22 <lifeless> fungi: what cleanup do you need to do after you kill it?
00:24 <fungi> lifeless: at least previously, it had a tendency to leave in-progress image builds, node builds and so on in disarray
00:24 <lifeless> fungi: I've fixed at least one such bug in passing :)
00:25 <fungi> so mostly circling back around to clear those out. sounds like maybe it will be less effort with that
00:25 <lifeless> fungi: note that there is an explicit 15m delay before it cleans all those out
00:26 <fungi> also, i do want to make sure that if i put any changes in place, i'll also be around to roll them back if we end up breaking it badly for some unforeseen reason
00:26 <fungi> hopefully in about 30 minutes i can free up to give it a go
00:27 <fungi> lifeless: how much additional log noise do you expect 67924 to add?
00:28 <fungi> just curious if we're going to be blowing up the logs with that
00:28 <lifeless> hopefully none
00:28 <lifeless> would you like me to make it debug?
00:28 <lifeless> but for instance, if the rackspace api takes 100ms to call
00:28 <lifeless> we'd log that message 1/second when we have lots of servers to delete
00:28 <fungi> no, i think warning is fine if this is a condition we should warn the administrator about
00:29 <lifeless> mmm
00:29 <lifeless> perhaps info
00:30 <lifeless> fungi: it's easy to change if needed
00:30 <lifeless> fungi: the problem is we just don't know if it's happening or not at the moment
00:31 <fungi> right, i'm fine with it. should be back in a bit to go over the rest
00:32 <openstackgerrit> A change was merged to openstack-infra/elastic-recheck: narrow to just pci explosion  https://review.openstack.org/67989
00:32 <lifeless> oh, possibly - lol, I suspect we're throttling all our api calls (e.g. floating ip deallocation) to 1/second
00:32 <lifeless> we probably *do* want to make more calls than that
00:33 <lifeless> since deleting a server is 1 list, 1 list floating ips, 1 remove ip, 1 delete ip, 1 list keypair, 1 delete keypair, 1 delete server, 1 list server
00:33 <lifeless> or 8 seconds per server at the moment
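The arithmetic lifeless is doing can be sketched with a toy serializing rate limiter (a hypothetical simplification for illustration, not nodepool's actual task-manager code): eight sequential API calls behind a one-call-per-second limiter cost about eight seconds per server.

```python
import time


class OneCallPerInterval:
    """Toy rate limiter: allows at most one call per `interval` seconds."""

    def __init__(self, interval=1.0):
        self.interval = interval
        self._last = None

    def wait(self):
        # Block until at least `interval` seconds have passed since the
        # previous call was released.
        now = time.monotonic()
        if self._last is not None:
            remaining = self._last + self.interval - now
            if remaining > 0:
                time.sleep(remaining)
        self._last = time.monotonic()


# The eight calls lifeless lists for deleting one server:
DELETE_CALLS = [
    "list servers", "list floating ips", "remove ip", "delete ip",
    "list keypairs", "delete keypair", "delete server", "list servers",
]


def delete_server(limiter):
    for call in DELETE_CALLS:
        limiter.wait()  # at 1 call/second this loop takes ~8 seconds
```

With `interval=1.0`, the loop takes roughly `len(DELETE_CALLS)` seconds, which is the "8 seconds per server" figure quoted above.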
00:39 *** xchu_ has joined #openstack-infra
00:42 *** sandywalsh has quit IRC
00:43 <sdague> fungi: when you get back, I'm going to propose ninja merging a test skip
00:45 *** rnirmal has quit IRC
00:46 <lifeless> fungi: found another one
00:46 <openstackgerrit> lifeless proposed a change to openstack-infra/nodepool: Default to a ratelimit of 10/second for API calls  https://review.openstack.org/67993
00:47 <lifeless> deleting 300 servers without this patch == 8 seconds per, or 2400 seconds.
00:47 <sdague> lifeless: nice
00:47 <jeblair> lifeless: there is one manager per provider, we have 6 providers
00:48 <jeblair> lifeless: divide all your numbers by 6
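jeblair's correction, worked through with the numbers from the conversation (the per-server cost is lifeless's estimate of 8 API calls at 1 call/second):

```python
SERVERS = 300           # backlog lifeless quotes
SECONDS_PER_SERVER = 8  # 8 API calls at 1 call/second
MANAGERS = 6            # one task manager per provider, per jeblair

# Serialized through a single provider manager:
total_single = SERVERS * SECONDS_PER_SERVER   # 2400 seconds

# The backlog is actually spread across six independent managers:
total_parallel = total_single / MANAGERS      # 400.0 seconds

print(total_single, total_parallel)           # 2400 400.0
```

So the 2400-second figure is a worst case for one provider; spread evenly across the six managers it is closer to 400 seconds, which is why jeblair asks for the numbers to be divided by six.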
00:48 <lifeless> jeblair: ack - still
00:48 <jeblair> lifeless: please base the rate limits on actual rates from our providers
00:48 *** markmcclain has joined #openstack-infra
00:48 <jeblair> lifeless: istr that 1/sec is an actual rate from at least one of our providers
00:48 <lifeless> jeblair: per-provider actual figures should go in the config, no? We haven't set that at all
00:49 <lifeless> jeblair: I will have a look and see what I can find
00:49 <jeblair> lifeless: hitting rate limit timeouts is not going to make it faster
00:49 <lifeless> jeblair: I know
00:49 <lifeless> jeblair: next step along that path is to issue fewer API calls, e.g. request deletion of several resources at once
00:50 <jeblair> lifeless: also, the 900 second delay in the node delete method is to avoid the periodic cleanup thread stepping on nodes that are currently being handled by a parallel delete thread
00:50 <lifeless> for HP cloud, you have to query them - I can only find the limits for *me*, not for infra
00:50 <jeblair> lifeless: please don't remove that protection -- having multiple threads working on the same node is bad news for sqlalchemy
00:50 <lifeless> http://docs.hpcloud.com/api/compute/#listLimits-jumplink-span
00:50 <lifeless> jeblair: the next patch avoids that problem
00:51 <lifeless> jeblair: I can fold the patches together if you want, though they are conceptually separate; reordering doesn't make much sense either
00:51 <lifeless> jeblair: btw the 900 second thing has never worked
00:51 *** dcramer_ has joined #openstack-infra
00:51 <lifeless> jeblair: https://review.openstack.org/#/c/67980/
00:52 <jeblair> lifeless: i understand, but removing it without putting some kind of protection in isn't going to make it better
00:52 <lifeless> jeblair: so while it might be bad news, nodepool thus far has been operating *without it*
00:52 <lifeless> jeblair: I fix it
00:52 <jeblair> lifeless: no, it's just been erroring out in a different place
00:53 <lifeless> jeblair: that's too cryptic for me; I don't understand how time.time() < 900 could ever have been true
00:53 <jeblair> lifeless: i'm happy you are fixing it; i'm just asking that you please not design a system where two threads are fighting over the same node -- we already know it doesn't work
00:53 <jeblair> lifeless: i'm not saying it worked
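The puzzle lifeless raises reads, from context, like a duration guard written against an absolute clock. A hypothetical reconstruction (not nodepool's actual code) of why such a check can never fire:

```python
import time

DELAY = 900  # seconds a node should sit in DELETE before periodic cleanup touches it


def protected_buggy(state_entered_at):
    # time.time() is seconds since the 1970 epoch (~1.39e9 in 2014), so
    # comparing it directly against 900 is always False: the guard
    # silently never protects anything.
    return time.time() < DELAY


def protected_fixed(state_entered_at):
    # Compare the *elapsed* time in the state against the delay instead.
    return time.time() - state_entered_at < DELAY
```

Under the buggy form the 900-second window never exists, which matches jeblair's remark that nodepool has simply been erroring out in a different place.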
00:56 <lifeless> ok, 10/s is faster than rax's default limits, though we can get them changed
00:56 <lifeless> http://docs.rackspace.com/loadbalancers/api/v1.0/clb-devguide/content/Determining_Limits_Programmatically-d1e1039.html
00:56 <lifeless> needs to be followed to find out if they have been changed for us already
00:56 <lifeless> I'm happy to translate the answers for both HP cloud and rax into the nodepool yaml file; and I'll change the patch to be more conservative (2/s) by default
00:57 <bknudson> do reviews like https://review.openstack.org/#/c/65161/ slow down the gate much?
00:57 <bknudson> it's approved but fails to merge now
00:57 <bknudson> due to some other conflicting change having merged already
00:58 <openstackgerrit> lifeless proposed a change to openstack-infra/nodepool: Default to a ratelimit of 2/second for API calls  https://review.openstack.org/67993
00:58 <jeblair> lifeless: i'm sorry i'm not well enough to stick around and work through this with you; i need to go and rest. i'm just trying to provide info to help you know that there is at least some real-world experience in the choices in nodepool.
00:59 <lifeless> jeblair: I know there is; I'm trying to preserve that
00:59 *** yongli has quit IRC
00:59 <lifeless> jeblair: I don't see any new races possible in my code; I ack that there is an existing one, but will this make it worse?
01:00 <lifeless> jeblair: dropping out of delete early should in fact make it better than it was, AFAICT
01:00 <jeblair> lifeless: "This may cause multiple concurrent deletion attempts with the completion handler threads, but that should be harmless (no worse than someone running nodepool node-delete from the command line)."
01:00 <jeblair> lifeless: that should really be a red flag; deleting things from the command line causes stuff to fail all the time
01:00 <lifeless> jeblair: yes, which fungi is triggering right now
01:00 <jeblair> we don't want the program to do that on purpose
01:00 <lifeless> jeblair: ah - new data!
01:01 <mikal> ewindisch: http://zuul.rcbops.com/workload/ is a graph of what turbo hipster is doing, to give you an idea of workload
01:01 <jeblair> lifeless: that 900 second delay was intended to avoid that (i grant you it failed, but it was added because we saw _real_ failures)
01:01 <lifeless> jeblair: ok, I will make sure the threads don't stomp on each other
01:01 <lifeless> using a similar mechanism
01:01 *** yongli has joined #openstack-infra
01:01 <jeblair> lifeless: another change went into the periodic cleanup at the same time which caused us to not notice/care as much
01:01 <ewindisch> mikal: nice!
01:01 <mikal> :)
01:02 <mikal> The big dip is because I can't have nice things
01:02 <jeblair> lifeless: (the skip on failure and proceed with a new transaction change)
01:03 <jeblair> lifeless: okay, i'm really going to go now; i think i've conveyed the info i needed to in order to help; thanks for pitching in.
01:04 <mordred> mikal: that's an interesting graph - do we have that in the infra system?
01:04 <mikal> mordred: no, because you're lame
01:04 <mikal> :P
01:04 *** UtahDave has quit IRC
01:04 <mordred> mikal: really? I don't remember seeing the patch to put it in ...
01:04 <mikal> mordred: that's just a quick and dirty graph of the output of the mysql reporter, which hasn't landed in zuul and is unlikely to be used by infra
01:05 <mikal> mordred: but which rocks my world for ad-hoc reporting
01:05 <mordred> ah - so you aren't reporting stats to a statsd that could be used by graphite and thus reused upstream? I'm sad that you're making something for ad-hoc reporting that the project can't use
01:07 <mikal> Please hold, cuddling daughter
01:08 <mikal> Ok, so
01:08 <mikal> It's more complicated than that of course
01:08 <mikal> And has been discussed with James / in a gerrit code review
01:09 <mikal> I wrote a mysql reporter because I needed a way of rapidly finding TH misvotes so I could recheck stuff while bedding down TH
01:09 <mikal> It's perfect for that
01:09 <mikal> Infra says they're not interested because they can use logstash / elasticsearch to do that sort of thing
01:09 <mikal> Whereas I'm not excited by setting up logstash at this point in my life
01:09 <sdague> so given that we're not seeming to make progress on the various volume issues - I'm going to propose we turn off the test that seems to be tickling them most often - https://review.openstack.org/#/c/67991/
01:10 <mikal> The graphing is a 50 line python script which just dumps a summary in the right format for the graphing library
01:10 <mikal> And infra is welcome to it if they want it
01:10 <sdague> mikal: you should be excited to set up logstash, it's kind of awesome :)
01:10 <mikal> mordred: ^--
01:10 <mikal> sdague: sure, but I'm focussed on TH reliability at the moment. Anything not on the critical path for that isn't going to happen.
01:10 <mikal> Well, isn't going to happen in the next few weeks.
01:10 <sdague> mikal: fair
01:12 *** ivar-lazzaro_ has joined #openstack-infra
01:16 <sdague> http://logs.openstack.org/71/67971/2/check/check-tempest-dsvm-full/a71a086/console.html#_2014-01-21_00_16_14_358 - looks like some network hiccups with the mirror
01:17 *** krotscheck has quit IRC
01:18 *** zhiwei has joined #openstack-infra
01:19 *** dcramer_ has quit IRC
01:21 <mordred> mikal: fair enough. still, I'm sad whenever you make tools that I can't use. you know, because of bunnies and unicorns
01:21 <lifeless> jeblair: on the off chance you aren't actually gone, I think the right thing to do is to move all the deletes to periodic, avoiding the races entirely. I'm going to make my patch do that and we can discuss in detail when you're feeling better.
01:22 <sdague> mordred: don't forget ponies
01:23 <fungi> okay, back now
01:23 <mordred> sdague: I'm sure there are - I REALLY need to make a mirror-per-cloud-region
01:24 <mordred> OR - what would be great ...
01:24 <sdague> get on that, slacker.... ;)
01:25 <mordred> dstufft: you know, npm has this feature "--offline", which will satisfy requests purely from local cache if a local cache exists and will not hit internet indexes ...
01:25 <mordred> dstufft: it would help _soooo_ many things if pip had one of those
01:25 <sdague> yeh, agreed
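For reference, the closest pip of this era gets to npm's `--offline` is pairing a pre-populated download directory with `--no-index`; the cache path below is illustrative:

```shell
# While online: download packages (and their dependencies) into a local directory
pip install --download /var/cache/pip-downloads -r requirements.txt

# Later, fully offline: install only from that directory, never touching an index
pip install --no-index --find-links=/var/cache/pip-downloads -r requirements.txt
```

This still requires deliberately populating the directory up front, which is exactly the extra bookkeeping mordred is complaining about npm handling automatically.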
01:25 *** markmcclain has quit IRC
01:26 <sdague> fungi: ok so next time we get a gate reset - https://review.openstack.org/#/c/67991/ - promote that
01:26 <sdague> or ninja merge
01:26 <fungi> sdague: yep, saw your ping on -qa
01:26 <fungi> looking it over real fast like
01:26 <sdague> ok, great so you have context
01:26 <sdague> basically, turns off a test
01:26 * fungi nods
01:27 <sdague> that is failing enough that we can go figure it out in a side channel
01:27 <sdague> because figuring it out in the current gate state isn't working
01:28 <mordred> sdague: ++
01:29 <sdague> and with that, I'm done for the day, as I've been plugging away for about 14hrs at this point. I'm also not going to be very responsive until afternoon my time tomorrow, as I need to do a couple other things in the morning.
01:29 <dstufft> mordred: file a ticket please
01:29 <openstackgerrit> Monty Taylor proposed a change to openstack-infra/storyboard-webclient: Storyboard API Interface and basic project management  https://review.openstack.org/67582
01:29 <mordred> dstufft: damn. that's a sensible response. I just wanted to complain :)
01:34 *** david-lyle_ has joined #openstack-infra
01:35 <openstackgerrit> Ken'ichi Ohmichi proposed a change to openstack-infra/devstack-gate: Copy libvirt log file after tempest run  https://review.openstack.org/61892
01:35 <mikal> mordred: oh, you _can_ run it if you want. It's just that no one wanted to.
01:36 <mikal> mordred: it was asserted it was possible to trivially count the number of failed test runs in a day with elasticsearch, at which point I kind of lost interest in the sales pitch
01:37 *** jaypipes has quit IRC
01:40 *** greghaynes has quit IRC
01:43 *** weshay has joined #openstack-infra
01:47 <openstackgerrit> Eli Klein proposed a change to openstack-infra/jenkins-job-builder: Added rbenv-env wrapper  https://review.openstack.org/65352
01:49 <openstackgerrit> Davanum Srinivas (dims) proposed a change to openstack-infra/devstack-gate: Temporary HACK: Enable UCA  https://review.openstack.org/67564
01:50 *** krotscheck has joined #openstack-infra
01:50 *** ryanpetrello has joined #openstack-infra
01:51 *** coolsvap has quit IRC
01:53 *** ok_delta has joined #openstack-infra
01:58 *** krotscheck has quit IRC
01:59 <lifeless> incoming
01:59 <openstackgerrit> lifeless proposed a change to openstack-infra/nodepool: Avoid redundant updates of node.state=DELETE.  https://review.openstack.org/68001
01:59 <lifeless> fungi: ^ here 'tis
01:59 <openstackgerrit> lifeless proposed a change to openstack-infra/nodepool: Log the time a node has been in state DELETE.  https://review.openstack.org/68002
01:59 <openstackgerrit> lifeless proposed a change to openstack-infra/nodepool: Split out the logic for deleting a nodedb node.  https://review.openstack.org/68003
01:59 <openstackgerrit> lifeless proposed a change to openstack-infra/nodepool: Use the nonblocking cleanupServer.  https://review.openstack.org/68004
01:59 <openstackgerrit> lifeless proposed a change to openstack-infra/nodepool: Cleanup nodes in state DELETE immediately.  https://review.openstack.org/67979
01:59 <openstackgerrit> lifeless proposed a change to openstack-infra/nodepool: Default to a ratelimit of 2/second for API calls  https://review.openstack.org/67993
01:59 <openstackgerrit> lifeless proposed a change to openstack-infra/nodepool: Provide diagnostics when task rate limiting.  https://review.openstack.org/67924
01:59 <openstackgerrit> lifeless proposed a change to openstack-infra/nodepool: Log how long nodes have been in DELETE state.  https://review.openstack.org/67982
01:59 <openstackgerrit> lifeless proposed a change to openstack-infra/nodepool: Consolidate duplicate logging messages.  https://review.openstack.org/67983
01:59 <openstackgerrit> lifeless proposed a change to openstack-infra/nodepool: Fix early-exit in cleanupOneNode  https://review.openstack.org/67980
01:59 <openstackgerrit> lifeless proposed a change to openstack-infra/nodepool: Make cleanupServer optionally nonblocking.  https://review.openstack.org/67985
01:59 <fungi> lifeless: thanks, having a look
01:59 *** nosnos has joined #openstack-infra
01:59 <lifeless> fungi: https://review.openstack.org/#/c/68004/ is the top
02:00 <lifeless> fungi: now I'm going to go work on tripleo-gate again
02:00 <lifeless> fungi: ping me if you need anything
02:01 <fungi> lifeless: will do--thanks again
02:01 <openstackgerrit> Eli Klein proposed a change to openstack-infra/jenkins-job-builder: Add local-branch option  https://review.openstack.org/65369
02:01 *** yaguang has joined #openstack-infra
02:04 *** ryanpetrello has quit IRC
02:09 *** ivar-lazzaro_ has quit IRC
02:09 *** oubiwann_ has joined #openstack-infra
02:10 <lifeless> hmm
02:11 * lifeless resists the temptation to poke at it more
02:11 <openstackgerrit> A change was merged to openstack-infra/storyboard-webclient: Add tox.ini file to run things via tox  https://review.openstack.org/67721
02:12 *** dcramer_ has joined #openstack-infra
02:13 *** jhesketh__ has quit IRC
02:13 <fungi> there's that gate reset we were waiting for. promoting 67991,1 now
02:13 *** morganfainberg|z is now known as morganfainberg
02:20 <openstackgerrit> Evgeny Fadeev proposed a change to openstack-infra/askbot-theme: modified the launchpad answers importer script  https://review.openstack.org/68008
02:29 *** yamahata has quit IRC
02:34 *** vkozhukalov has joined #openstack-infra
02:37 *** jhesketh__ has joined #openstack-infra
02:40 *** jasondotstar has joined #openstack-infra
02:40 *** jerryz has quit IRC
02:48 <openstackgerrit> A change was merged to openstack-infra/nodepool: Provide diagnostics when task rate limiting.  https://review.openstack.org/67924
02:48 <openstackgerrit> A change was merged to openstack-infra/nodepool: Default to a ratelimit of 2/second for API calls  https://review.openstack.org/67993
02:48 *** dkranz has joined #openstack-infra
02:49 *** mrda_ is now known as mrda_away
02:49 *** mrda_away is now known as mrda_
02:49 *** sdake has quit IRC
02:55 <openstackgerrit> lifeless proposed a change to openstack-infra/nodepool: Use the nonblocking cleanupServer.  https://review.openstack.org/68004
02:57 *** markmcclain has joined #openstack-infra
02:59 *** markmcclain1 has joined #openstack-infra
03:02 *** markmcclain has quit IRC
03:03 *** markmcclain1 has quit IRC
03:05 *** dkranz has quit IRC
03:05 *** weshay has quit IRC
03:06 <lifeless> I need a pointer: where does the pip cache etc. on jenkins nodes come from?
03:07 <lifeless> I'm trying to reproduce running tripleo-gate without jenkins, but with a successfully built nodepool image
03:12 <fungi> i believe the nodepool prep scripts build those
03:12 * fungi looks
03:13 <openstackgerrit> A change was merged to openstack-infra/askbot-theme: made launchpad importer read and write data separately  https://review.openstack.org/67567
03:13 <openstackgerrit> A change was merged to openstack-infra/askbot-theme: modified the launchpad answers importer script  https://review.openstack.org/68008
03:14 <fungi> well, the deb cache is built in http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/nodepool/scripts/cache_devstack.py
03:15 <fungi> empty pip cache directory is added in http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/nodepool/scripts/prepare_devstack.sh
03:16 *** dcramer_ has quit IRC
03:17 <fungi> i think we don't prepopulate the pip cache because pip will use what versions it finds there even if newer ones are on the remote mirror
03:17 <fungi> but i could be mistaken
03:18 <fungi> that might be the egg build cache i'm thinking of
03:21 <openstackgerrit> A change was merged to openstack-infra/nodepool: Fix early-exit in cleanupOneNode  https://review.openstack.org/67980
03:21 <openstackgerrit> A change was merged to openstack-infra/nodepool: Avoid redundant updates of node.state=DELETE.  https://review.openstack.org/68001
03:21 <openstackgerrit> A change was merged to openstack-infra/nodepool: Log the time a node has been in state DELETE.  https://review.openstack.org/68002
03:21 *** emagana has quit IRC
03:27 *** dkliban has quit IRC
03:28 *** dcramer_ has joined #openstack-infra
03:30 *** reed has quit IRC
03:32 *** yamahata has joined #openstack-infra
03:33 *** talluri has joined #openstack-infra
03:37 *** carl_baldwin has joined #openstack-infra
03:40 *** carl_baldwin has quit IRC
03:40 *** greghaynes has joined #openstack-infra
03:41 *** talluri has quit IRC
03:42 *** salv-orlando has quit IRC
03:42 *** salv-orlando has joined #openstack-infra
03:43 *** ok_delta has quit IRC
03:43 *** talluri has joined #openstack-infra
03:46 *** gsamfira has joined #openstack-infra
03:47 *** carl_baldwin has joined #openstack-infra
03:47 *** pcrews has joined #openstack-infra
03:50 *** xchu_ has quit IRC
03:52 *** dkranz has joined #openstack-infra
03:56 *** mriedem has quit IRC
03:57 *** ryanpetrello has joined #openstack-infra
04:00 *** marun has joined #openstack-infra
04:05 *** mayu has joined #openstack-infra
04:05 <mayu> git puppet error
04:05 <mayu> git clone https://git.openstack.org/openstack-infra/config /opt/config/production
04:06 <mayu> error: Failed to connect to 2001:4800:7813:516:3bc3:d7f6:ff04:aacb: Network is unreachable while accessing https://git.openstack.org/openstack-infra/config/info/refs
04:07 <mayu> I opened that link in my browser, and there is nothing
04:07 <mayu> I followed the guide http://ci.openstack.org/puppet.html#id2
04:08 <mayu> a wrong guide?
04:08 *** reed has joined #openstack-infra
04:09 <dstufft> mordred: clarkb fungi https://twitter.com/dstufft/status/425480075816239104
04:12 *** marun has quit IRC
04:13 *** marun has joined #openstack-infra
04:14 *** marun has quit IRC
04:16 *** marun has joined #openstack-infra
04:17 *** gema has quit IRC
04:19 *** krotscheck has joined #openstack-infra
04:20 *** ArxCruz has quit IRC
04:24 *** krotscheck has quit IRC
04:25 <mordred> dstufft: woot
04:26 *** krotscheck has joined #openstack-infra
04:30 *** mayu has quit IRC
04:31 *** pcrews has quit IRC
04:32 *** emagana has joined #openstack-infra
04:37 *** emagana has quit IRC
04:38 *** coolsvap has joined #openstack-infra
04:39 *** Ryan_Lane has joined #openstack-infra
04:40 *** ryanpetrello has quit IRC
04:42 *** talluri has quit IRC
04:53 *** dstanek has quit IRC
04:57 *** rcleere has joined #openstack-infra
04:59 *** carl_baldwin has quit IRC
05:01 *** carl_baldwin has joined #openstack-infra
05:06 *** SergeyLukjanov_ is now known as SergeyLukjanov
05:08 *** marun has quit IRC
05:12 *** krotscheck has quit IRC
05:14 *** chandankumar_ has joined #openstack-infra
05:20 *** oubiwann_ has quit IRC
05:20 *** dstanek has joined #openstack-infra
05:25 *** carl_baldwin has quit IRC
05:28 *** nicedice has quit IRC
05:28 *** crank has quit IRC
05:29 *** crank has joined #openstack-infra
05:31 *** dpyzhov has joined #openstack-infra
05:36 <lifeless> fungi: thanks
05:36 <lifeless> fungi: I ask because devstack-gate throws errors all over the place which are ignored (no set -eu in the scripts) from jenkins :)
05:38 <StevenK> Hm
05:39 <StevenK> Telling devstack to use postgres ends up with keystone token-get failing horrible
05:39 <StevenK> *horribly
05:40 <lifeless> lol, don't do that :)
05:41 <StevenK> lifeless: Trying to address mikal's whinge about how people whinge about mysql usage, but then don't front up and help with postgres or another engine
05:41 <lifeless> StevenK: you will probably regret it
05:42 <lifeless> StevenK: personally, I wouldn't even touch anything other than *mysql* until we've got full HA no-downtime on mysql
05:42 <lifeless> StevenK: as it will be a massive time sink
05:42 <lifeless> because you'll be playing catchup with 900 developers
05:44 <StevenK> lifeless: Why not until we have no-downtime HA?
05:44 <lifeless> StevenK: because when we have that we're at feasible production deployment stage
05:45 <lifeless> StevenK: and if you disappear off into a pit of postgresql for 6 months, we'll still have at least shipped something :)
05:45 <lifeless> StevenK: or if I were to do it, same thing :)
05:57 *** jasondotstar has quit IRC
05:58 <bradm> StevenK: I hear sqlite is webscale, maybe you should port it to that.. ;)
06:00 <StevenK> bradm: Hahaha
06:00 <StevenK> bradm: Poor paste.u.c
06:01 *** reed has quit IRC
06:01 <bradm> StevenK: yet it still runs, I'm surprised how well
06:03 *** rakhmerov has joined #openstack-infra
06:06 *** nati_ueno has quit IRC
06:08 * mordred avoids trolling StevenK about postgres
06:08 <StevenK> mordred: Aw
06:08 <mordred> StevenK: the thing lifeless didn't mention is that I'll also troll you mercilessly if you decide to go jump into the bottomless pit of postgres
06:09 <mordred> not, mind you, because I don't like postgres
06:10 <mordred> but because for openstack it quite simply does not matter - postgres is better than mysql by having a bunch of features mysql doesn't have - openstack doesn't use any of them - hence, postgres is senseless overhead
06:10 * mordred isn't opinionated though - certainly didn't use to work for MySQL
06:10 <mordred> StevenK: so ignore me as appropriate
06:10 <lifeless> mordred: all true, including transactions
06:11 * mordred stabs lifeless in the face, neck and hands
06:11 <StevenK> Haha
06:11 *** dpyzhov_ has joined #openstack-infra
06:12 <mordred> StevenK: now - if you want to do a db-related boondoggle that could be useful - get some fixtures written to control drizzle so that we can just use that in all of the project's unittests
06:12 *** SergeyLukjanov is now known as SergeyLukjanov_a
06:12 *** dpyzhov has quit IRC
06:12 *** dpyzhov_ is now known as dpyzhov
06:13 *** SergeyLukjanov_a is now known as SergeyLukjanov_
06:14 <lifeless> fo shizzle
06:14 <StevenK> We have a postgres fixture. *cough*
06:22 *** nati_ueno has joined #openstack-infra
06:23 *** nati_ueno has quit IRC
06:24 *** nati_ueno has joined #openstack-infra
06:30 * mordred refrains from pointing out that postgres didn't quite make it to usable in the high-volume large-scale install space before people stopped caring about RDBMSs
06:31 * mordred neglects to mention that the chance of facebook or twitter or google migrating to postgres instead of just migrating to mongodb (from MySQL, of course) is probably pretty low
06:32 * mordred declines to rise to the troll-bait - preferring the high road
06:32 <mikal> Actually, it really bothers me that all the postgres advocates I meet hate mysql because of problems in myisam that became irrelevant nearly a decade ago
06:32 <mikal> It's like they set their opinion on something, and then won't listen to facts...
06:33 <mordred> mikal: yeah. that actually does bother me - especially when I go out of my way when trolling them to point out the areas in which postgres is actually pretty decent
06:33 <mikal> Heh
06:34 <mordred> mikal: but, you know, sort of like being a white male, considering MySQL's multiple-orders-of-magnitude level of dominance, I suppose expecting quid pro quo and the avoidance of outdated cheapshots is unreasonable
06:34 <mikal> I need that on a tshirt
06:35 <mikal> I also feel that if the strongest argument you can make in favour of your product is to make cheap shots at the other guys, you have a problem
06:35 <mikal> I mean that in general
06:35 <mikal> I would be very upset if we had a "why eucalyptus sucks" wiki page or something
06:35 <mordred> I do too - I try my best to remember/think about that
06:35 <mordred> yup
06:35 <mikal> I am glad we finally agree on something
06:35 <mikal> I consider this the peak of my day
06:35 <mordred> I occasionally feel unhappy when my response to cloudstack is "fuck those guys"
06:36 <mikal> But it's been a pretty bad day
06:36 <mikal> I'm sure cloudstack has a perfectly valid use case. I just don't know what it is. And don't care.
06:36 <mordred> I mean, I don't dwell on the unhappiness and usually just have a beer - but I totally agree with you - I'd rather focus on my product and not knock other people's
06:37 <mordred> mikal: I hear it has a gui installer
06:37 <mordred> mikal: also, it might make sushi
06:37 <mordred> because, you know, gui installers and sushi are both super important features for long-term cloud operation
06:37 <mikal> Heh
06:37 <mikal> Surely some of the distros have gui installers though?
06:37 <mikal> So that's not that unique a point
06:38 <mikal> I wonder if they have stupid single-threaded python problems like us?
06:38 <mikal> I think that's probably our biggest flaw
06:38 <mordred> mikal: name the last time OpenStack had a problem due to Python's GIL
06:38 <mordred> other than people complaining about it while rubbing their neckbeards
06:38 <mikal> Oh, I more mean greenthreads
06:38 <mikal> I see problems with that a fair bit
06:39 <mordred> yeah. that goes in the hipster category
06:39 <mikal> We scale with multiple processes
06:39 <bradm> mikal: just rewrite everything in go, that'll fix it
06:39 <mordred> of "we have to solve the GIL before it's a problem!"
06:39 *** crank has quit IRC
06:39 <mordred> mikal: you know zuul uses threads, right? seems to scale pretty well up into the realms of crazypants
06:40 <mordred> I mean, we just had to move it onto a tmpfs because it was cloning so many damned git repos that the fs couldn't keep up
06:40 <mordred> I do not believe we've EVER had a problem with the GIL
06:40 <clarkb> mordred: turns out git is slow >_>
06:40 <mordred> clarkb: nicely done
06:41 <mikal> mordred: sure, but I didn't mean the GIL.
06:41 <mordred> mikal: I know - but I got into ranty mode
06:41 <mikal> Heh
06:41 <mikal> Be nice, I have a headache
06:41 * mordred jumps up and down on mikal
06:42 *** SergeyLukjanov_ is now known as SergeyLukjanov
06:43 * mordred hands mikal a cookie
06:43 <mikal> Heh
06:43 <mikal> I think salt might help
06:43 * mordred wants a cookie now
06:43 * mikal wanders off to eat chips
06:43 *** jamielennox is now known as jamielennox|away
06:47 *** vkozhukalov has quit IRC
06:50 *** rcleere has quit IRC
06:51 *** crank has joined #openstack-infra
06:52 <morganfainberg> clarkb: your name and mordred's are the same color in my IRC client... when you were talking about git, I initially thought mordred was talking to himself.
06:52 * morganfainberg wanders off after supplying some random to the channel
06:53 <lifeless> mordred: so neckbeards aside, GIL problems can be fairly subtle, I wouldn't want to rule them out in openstack at this point :)
06:56 *** dpyzhov has quit IRC
07:02 *** crank has quit IRC
07:02 *** crank has joined #openstack-infra
07:12 *** nati_uen_ has joined #openstack-infra
07:13 *** nati_uen_ has quit IRC
07:14 *** nati_uen_ has joined #openstack-infra
07:16 *** nati_ueno has quit IRC
07:16 *** jufeng has joined #openstack-infra
07:18 *** yolanda has joined #openstack-infra
07:27 *** mrda has quit IRC
07:28 <mordred> morganfainberg: hahahahahaha. nice
07:29 *** SergeyLukjanov is now known as SergeyLukjanov_
07:30 <mordred> lifeless: indeed, I wouldn't either. however, I do think it's possible that we over-optimized over-early based on neckbeard stroking and not, you know, reasons
07:30 <mordred> BUT
07:30 <mordred> that's the boat we're in
07:30 <mordred> so we'll just work with it
07:30 <mordred> I also may be wrong
07:38 *** amotoki has joined #openstack-infra
07:45 *** NikitaKonovalov_ is now known as NikitaKonovalov
07:58 *** sdake has joined #openstack-infra
08:02 *** nati_ueno has joined #openstack-infra
08:02 *** nati_ueno has quit IRC
08:03 *** nati_ueno has joined #openstack-infra
08:03 *** pblaho has joined #openstack-infra
08:05 *** nati_uen_ has quit IRC
08:06 *** moted has quit IRC
08:11 *** _ruhe is now known as ruhe
08:13 *** jcoufal has joined #openstack-infra
08:14 *** flaper87|afk is now known as flaper87
08:15 *** mrmartin has joined #openstack-infra
08:18 *** dizquierdo has joined #openstack-infra
08:29 *** vkozhukalov has joined #openstack-infra
08:37 *** SergeyLukjanov_ is now known as SergeyLukjanov
08:41 *** fbo_away is now known as fbo
08:42 *** vogxn has joined #openstack-infra
08:42 *** dpyzhov has joined #openstack-infra
08:43 *** DinaBelova_ is now known as DinaBelova
08:43 *** ilyashakhat has quit IRC
08:43 *** ilyashakhat has joined #openstack-infra
08:45 *** yamahata has quit IRC
08:47 *** yassine has joined #openstack-infra
08:51 *** nati_uen_ has joined #openstack-infra
08:52 *** mancdaz_away is now known as mancdaz
08:55 *** nati_ueno has quit IRC
08:55 *** praneshp has quit IRC
08:56 *** nati_uen_ has quit IRC
08:56 *** markwash has quit IRC
08:57 *** nati_ueno has joined #openstack-infra
08:57 *** dpyzhov has quit IRC
08:59 *** dcramer_ has quit IRC
09:00 *** boris-42 has quit IRC
09:00 *** derekh has joined #openstack-infra
09:00 *** boris-42 has joined #openstack-infra
09:03 *** afazekas has joined #openstack-infra
09:04 *** jhesketh__ has quit IRC
09:07 *** jpich has joined #openstack-infra
09:07 <openstackgerrit> Nikita Konovalov proposed a change to openstack-infra/storyboard: Introducing basic REST API  https://review.openstack.org/63118
09:08 <openstackgerrit> Nikita Konovalov proposed a change to openstack-infra/storyboard: API Testset draft  https://review.openstack.org/67447
09:08 <openstackgerrit> Mark McLoughlin proposed a change to openstack/requirements: Allow use of oslo.messaging 1.3.0a3 from pypi  https://review.openstack.org/68040
09:10 <openstackgerrit> daisy-ycguo proposed a change to openstack-infra/config: Job to push Horizon translation to Transifex  https://review.openstack.org/68042
09:10 *** jhesketh_ has quit IRC
09:11 *** dcramer_ has joined #openstack-infra
09:20 *** markmc has joined #openstack-infra
09:21 *** jufeng has quit IRC
09:21 <markmc> ttx, morning
09:21 <markmc> ttx, do you have perms to add me to https://pypi.python.org/pypi?%3Aaction=pkg_edit&name=oslo.messaging ?
09:21 <markmc> ttx, or rather https://pypi.python.org/pypi?:action=role_form&package_name=oslo.messaging
09:22 <markmc> "Package Index Owner: openstackci"
*** dpyzhov has joined #openstack-infra09:25
matelHi, I would like to add a package (XenAPI) to the openstack infra pip repo, how should I do that?09:32
AJaegermarkmc, what do you need to change? This should be all autogenerated once you upload a package...09:35
AJaegermarkmc, I never edited the page - thanks to python magic, it looks nice: https://pypi.python.org/pypi/openstack-doc-tools/0.309:36
markmcAJaeger, was just going to upload manually while waiting for https://review.openstack.org/68040 to be merged09:38
AJaegermarkmc, so, uploading via normal tagging does not work for you?09:39
markmcAJaeger, not until https://review.openstack.org/68040 is merged09:39
AJaegermarkmc, I'm not an expert here - but you point out requests in the requirements repo.09:40
markmcAJaeger, hah09:40
AJaegerAnd we do have jobs that upload new tarballs of python packages to pypi once you tag them.09:41
markmcAJaeger, https://review.openstack.org/67131 sorry09:41
AJaegermarkmc, yeah, that's a patch you need ;)09:41
AJaegerSo, you have two options: Ask to get 67131 approved or manual upload - correct?09:42
markmcyes09:44
markmcand the people who can do either are likely asleep, unless ttx can do the latter09:45
*** zhiwei has quit IRC09:50
*** jhesketh has joined #openstack-infra09:50
*** nati_uen_ has joined #openstack-infra09:50
AJaegermarkmc, understood - sorry, can't help myself either ;(09:51
markmcAJaeger, np, thanks09:51
*** jhesketh__ has joined #openstack-infra09:52
*** nati_ueno has quit IRC09:54
*** salv-orlando has quit IRC09:55
*** salv-orlando has joined #openstack-infra09:56
*** nati_uen_ has quit IRC09:57
*** nati_ueno has joined #openstack-infra09:58
*** jooools has joined #openstack-infra10:00
*** jasondotstar has joined #openstack-infra10:00
*** johnthetubaguy has joined #openstack-infra10:05
*** mancdaz is now known as mancdaz_away10:06
*** coolsvap has quit IRC10:09
*** sileht has quit IRC10:11
*** coolsvap has joined #openstack-infra10:12
*** jp_at_hp has joined #openstack-infra10:12
*** rakhmerov has quit IRC10:12
*** mancdaz_away is now known as mancdaz10:14
*** bauzas has joined #openstack-infra10:17
*** mrmartin has quit IRC10:17
bauzasfolks, does anyone know if the Zuul pipe is broken ?10:18
bauzashttps://review.openstack.org/#/c/52296/ is asking for a recheck, but can't see it in the check queue10:18
bauzashttp://status.openstack.org/zuul/10:18
AJaegerbauzas, see the line at the top of the status page: "Queue lengths: 419 events, 4 results. "10:19
bauzasI guess there are capacity issues with icehouse-2 but I would at least expect Zuul to put the review in its pipe10:19
AJaegerthose 419+4 elements are not shown...10:19
AJaegerThe gates are really busy10:19
bauzasAJaeger: oh, totally missed it10:19
bauzasAJaeger: thanks10:20
AJaegerbauzas, waiting times of two hours or more until a job starts seem to be normal, so drink a coffee and enjoy10:21
bauzasAJaeger: well, I usually see the review appearing in the check queue, but defined as queued10:21
*** sileht has joined #openstack-infra10:21
bauzasAJaeger: that's the first time I even don't see it in the status page10:21
AJaegerit's busy these days ;)10:23
*** coolsvap has quit IRC10:23
matelAnyone knows how to add a new package to be served by http://pypi.openstack.org/openstack ?10:25
*** dpyzhov has quit IRC10:26
*** coolsvap has joined #openstack-infra10:27
AJaegermatel, perhaps getting it in the global requirements.txt file?10:30
matelAJaeger: Thanks, is it the global-requirements.txt @ https://github.com/openstack/requirements ?10:31
AJaegermatel, yeah.10:32
matelAJaeger: Thanks, I'm pushing a patch!10:32
AJaegermatel, I'm not 100 per cent sure that having it in there will have it show up at pypi.o.o - just an educated guess10:32
openstackgerritSergey Lukjanov proposed a change to openstack-infra/config: Add rtfd trigger jobs for climate project  https://review.openstack.org/6806210:33
matelAJaeger: Who whould be the best person to ask?10:34
AJaegermatel, the infra team once they are awake ;)10:34
AJaegermatel, if your project uses a requirements.txt file, it's good to add all entries to the global file ;)10:36
*** boris-42 has quit IRC10:38
*** yassine has quit IRC10:38
*** ruhe is now known as _ruhe10:39
matelAJaeger: I am working on getting xenapi tested within the gate, and to get it up and running, I need the XenAPI package: https://github.com/matelakat/xenapi-os-testing/blob/start-devstack/launch-node.sh#L51 - and I don't want to depend on the official pypi10:40
AJaegermatel, better wait for the experts in that case - AFAIU pypi.o.o is just a mirror, so you do something special here10:41
*** yassine has joined #openstack-infra10:42
*** rakhmerov has joined #openstack-infra10:42
matelAJaeger: I think pypi.o.o is a mirror - but only some packages are mirrored. Imagine, a full pypi mirror is a huge thing.10:44
AJaegeryep10:44
AJaegerSorry, can't help further.10:44
matelAJaeger: but let's see what they say. Approximately what time do the guys join?10:44
AJaegerin four or five hours you should catch some unless they travel (and some might travel today)10:45
matelAJaeger: Thanks for your help!10:45
AJaegermatel, you're welcome10:45
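AJaeger's educated guess above (that pypi.o.o serves just the packages named in the global requirements file) can be sketched in a few lines. The parsing helper below is purely illustrative, not the real infra mirror tooling, and the sample requirements lines are hypothetical:

```python
import re

def requirement_names(lines):
    """Extract distribution names from requirements-style lines.

    A rough sketch only: it derives the list of packages a partial
    mirror would need from a global-requirements-style file by taking
    the name that precedes any version specifier.
    """
    names = []
    for line in lines:
        line = line.split('#', 1)[0].strip()    # drop comments
        if not line:
            continue
        m = re.match(r'[A-Za-z0-9._-]+', line)  # name before >=, <=, etc.
        if m:
            names.append(m.group(0))
    return names

# Hypothetical excerpt from a global-requirements.txt
sample = [
    "# vendored API bindings",
    "XenAPI>=1.2",
    "SQLAlchemy>=0.7.8,<=0.8.99",
    "",
    "pbr>=0.5.21,<1.0",
]
print(requirement_names(sample))  # -> ['XenAPI', 'SQLAlchemy', 'pbr']
```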
*** nati_uen_ has joined #openstack-infra10:46
*** rakhmerov has quit IRC10:47
*** ArxCruz has joined #openstack-infra10:48
openstackgerritSergey Lukjanov proposed a change to openstack-infra/config: Enable tempest/savanna gate tests  https://review.openstack.org/6806610:48
*** nati_ueno has quit IRC10:49
*** yassine has quit IRC10:50
*** yassine has joined #openstack-infra10:50
*** _ruhe is now known as ruhe10:54
*** yassine has quit IRC10:57
*** jasondotstar has quit IRC11:01
*** max_lobur_afk is now known as max_lobur11:05
*** jhesketh__ has quit IRC11:07
*** jasondotstar has joined #openstack-infra11:07
*** emagana has quit IRC11:07
*** yassine has joined #openstack-infra11:10
*** dpyzhov has joined #openstack-infra11:14
*** SergeyLukjanov is now known as SergeyLukjanov_11:15
*** lcestari has joined #openstack-infra11:17
*** jcoufal has quit IRC11:17
*** SergeyLukjanov_ is now known as SergeyLukjanov11:19
*** yassine has quit IRC11:21
*** yassine has joined #openstack-infra11:23
*** rfolco has joined #openstack-infra11:32
*** emagana has joined #openstack-infra11:38
*** talluri has joined #openstack-infra11:39
openstackgerritSergey Lukjanov proposed a change to openstack/requirements: Bump paramiko version to 1.9.0  https://review.openstack.org/6808811:40
openstackgerritSergey Lukjanov proposed a change to openstack/requirements: Bump paramiko version to 1.9.0  https://review.openstack.org/6808811:40
*** ociuhandu has joined #openstack-infra11:41
*** jcoufal has joined #openstack-infra11:42
*** rakhmerov has joined #openstack-infra11:43
*** emagana has quit IRC11:46
*** rakhmerov has quit IRC11:48
flaper87fungi: around? I really need to figure out what's going on here:11:49
flaper87fungi: here https://review.openstack.org/#/c/65499/11:50
flaper87fungi: I set up an ubuntu saucy box but I couldn't replicate the issue :/11:50
flaper87fungi: if there's a chance I can get access to one of those boxes, that'd be cool. You said that py26 is almost impossible, perhaps py27 ?11:51
*** yaguang has quit IRC11:55
*** ociuhandu has quit IRC11:55
openstackgerritVictor Sergeyev proposed a change to openstack-infra/config: Enable ironicclient py33 tests voting  https://review.openstack.org/6809211:58
*** ociuhandu has joined #openstack-infra12:03
*** ruhe is now known as _ruhe12:05
*** boris-42 has joined #openstack-infra12:05
*** _ruhe is now known as ruhe12:11
*** gsamfira has quit IRC12:11
*** coolsvap has quit IRC12:22
*** b3nt_pin has joined #openstack-infra12:23
*** b3nt_pin is now known as beagles12:23
sdagueflaper87: the test nodes are precise, I'd start with that to try replication12:24
*** andreaf has quit IRC12:25
*** madmike has quit IRC12:27
*** mancdaz is now known as mancdaz_away12:31
*** mancdaz_away is now known as mancdaz12:33
*** jasondotstar has quit IRC12:36
*** DinaBelova is now known as DinaBelova_12:37
*** talluri has quit IRC12:37
*** jhesketh has quit IRC12:38
*** talluri has joined #openstack-infra12:38
*** talluri has quit IRC12:42
*** rakhmerov has joined #openstack-infra12:44
*** rakhmerov has quit IRC12:49
*** smarcet has joined #openstack-infra12:49
flaper87sdague: oh well, that's a good point. I should've known that12:53
flaper87sdague: danke12:53
*** heyongli has joined #openstack-infra12:57
*** mrmartin has joined #openstack-infra12:57
*** markmcclain has joined #openstack-infra12:58
*** amotoki has quit IRC12:58
*** david-lyle_ has quit IRC12:59
*** yaguang has joined #openstack-infra12:59
*** gsamfira has joined #openstack-infra13:02
*** SergeyLukjanov is now known as SergeyLukjanov_a13:02
*** SergeyLukjanov_a is now known as SergeyLukjanov_13:03
*** afazekas has quit IRC13:04
*** pblaho has quit IRC13:07
*** DinaBelova_ is now known as DinaBelova13:07
*** SergeyLukjanov_ is now known as SergeyLukjanov13:09
*** Shrews_ has joined #openstack-infra13:10
*** Shrews has quit IRC13:12
*** Shrews_ is now known as Shrews13:12
*** Shrews_ has joined #openstack-infra13:13
*** amotoki has joined #openstack-infra13:14
*** Shrews_ has quit IRC13:14
*** Shrews_ has joined #openstack-infra13:15
*** Shrews has quit IRC13:16
*** Shrews_ has quit IRC13:16
*** Shrews has joined #openstack-infra13:19
*** afazekas has joined #openstack-infra13:19
*** markmcclain has quit IRC13:22
*** emagana has joined #openstack-infra13:26
*** nosnos has quit IRC13:30
*** nosnos has joined #openstack-infra13:31
*** emagana has quit IRC13:31
openstackgerritKei YAMAZAKI proposed a change to openstack-infra/jenkins-job-builder: Fix multibyte character problem  https://review.openstack.org/6461013:35
*** mfink has joined #openstack-infra13:36
*** nosnos has quit IRC13:38
*** miqui_ has joined #openstack-infra13:38
*** miqui_ has quit IRC13:39
ttxmarkmc: I can't13:39
*** miqui has joined #openstack-infra13:39
*** gema has joined #openstack-infra13:42
*** ruhe is now known as _ruhe13:42
*** rpodolyaka has joined #openstack-infra13:44
ttxWow 12313:44
*** _ruhe is now known as ruhe13:45
*** rakhmerov has joined #openstack-infra13:45
*** gema has left #openstack-infra13:46
*** dpyzhov has quit IRC13:46
*** jasondotstar has joined #openstack-infra13:47
fungittx: wow high or wow lower than you expected? note that after the two major gate-loosening patches yesterday we went from averaging a commit every 6 hours to every hour overnight. still terribad tho13:47
ttxhigh :)13:48
openstackgerritNikita Konovalov proposed a change to openstack-infra/storyboard: API tests for rest  https://review.openstack.org/6744713:48
ttxone commit per hour ?13:49
fungithe time at the head of the gate was as high as 60 hours, now it's at least back down to 5013:49
*** julim has joined #openstack-infra13:49
fungiyeah13:49
fungilooking at the commit log for openstack/openstack: http://git.openstack.org/cgit/openstack/openstack/log/13:49
*** rakhmerov has quit IRC13:50
fungijust a reminder to all, i'm basically afk or semi-away most of today for travel13:50
ttxfungi: so it's not just load. We also merge less13:51
*** dpyzhov has joined #openstack-infra13:51
*** ryanpetrello has joined #openstack-infra13:51
bknudsonwould it be better to merge multiple commits into one big one?13:51
*** dstanek has quit IRC13:51
*** mriedem has joined #openstack-infra13:51
fungittx: yeah, the reset rate is still rather high13:51
*** thomasem has joined #openstack-infra13:52
fungiand we're still strained for available test node quota, causing zuul to swing from long periods trying to satisfy the check backlog, to long periods trying to service the gate, then back again13:53
ttxfungi: we'll consider pushing back the milestone one week13:54
fungiand as developer activity increases during the day, the check pipeline buildup/runthrough gets bigger and bigger, making the swing period longer and longer13:54
ttxfungi: but that's a bit useless if we don't think we can fix it to churn at least 50 patches a day13:54
bknudsondo you want us not to approve anything?13:55
fungii'm not sure not approving changes will necessarily help, since once we catch up we'll just be facing a huge dump of new approvals13:56
fungiat pea yesterday core reviewers were (re)approving roughly a dozen changes an hour13:56
fungipeak13:56
mriedemhave we seen things get better after sdague skipped test_volume_boot_pattern to get past 1720608?13:56
fungimriedem: yes13:56
mriedemok13:56
fungiwe made some headway since then13:57
*** mancdaz is now known as mancdaz_away13:57
*** thuc has joined #openstack-infra13:57
sdaguemriedem: there are still other volumes tests failing, at a lower fail rate13:57
sdagueso I think the revert is still a good idea13:58
*** yaguang has quit IRC13:58
*** dcramer_ has quit IRC13:58
*** rcleere has joined #openstack-infra13:59
sdaguettx: I had a -dev email with some updates this morning, including merge count for last 12 hrs13:59
ttxsdague: ok, I'm way behind14:00
*** dprince has joined #openstack-infra14:00
ttxcatching up14:00
*** vogxn has quit IRC14:00
ttx7am here14:00
sdaguepshaw, I'm always up at 7am ;)14:00
*** coolsvap has joined #openstack-infra14:00
*** sgordon_ has joined #openstack-infra14:01
*** heyongli has quit IRC14:02
*** viktors has joined #openstack-infra14:02
bknudsonit looks like whatever's scheduling all the gate jobs is preferring the "easy" tests through all the commits...14:03
*** max_lobur is now known as max_lobur_afk14:04
bknudsonso rather than running gate-tempest-dsvm-full for the first commit, it's running gate-cinder-pep8 for all the commits in the queue14:04
bknudsonwouldn't it be better to have all the jobs in the first commit running?14:05
*** mancdaz_away is now known as mancdaz14:05
fungibknudson: that's because of node availability. it has plenty of long-running nodes for those jobs, but not enough single-use nodes for the others14:05
*** dims has quit IRC14:06
fungiif all jobs used the exact same pool of systems, then yes what you describe is how it would work14:06
bknudsonok, maybe because of the delete time that was discussed earlier.14:06
ttxsdague: If we agree that the current situation is exceptional, I think deferring one week makes sense14:06
ttxsdague: might screw your gateblocking bugday a bit but I think we are past that14:06
anteayafungi I can answer questions today, I'm trying to catch up on the situation, is there a current standing suggestion for +A's?14:06
fungibknudson: and, also, the build time. creating new nodes from snapshots and registering them as jenkins slaves is time-consuming activity as well14:07
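fungi's explanation of why the "easy" jobs start across every change while the heavy head-of-queue job waits can be sketched in miniature. The pool names, capacities, and job-to-pool mapping below are made up for illustration; the real nodepool/Zuul interaction is far more involved:

```python
def schedule(jobs, pools):
    """Start each queued job only if its node pool has capacity.

    Toy model: long-running static slaves are plentiful, single-use
    devstack nodes are scarce, so pep8-style jobs for every change in
    the queue start while a second dsvm job stays pending.
    """
    started, pending = [], []
    for job in jobs:
        pool = pools[job['pool']]
        if pool['free'] > 0:
            pool['free'] -= 1       # claim a node from the pool
            started.append(job['name'])
        else:
            pending.append(job['name'])  # waits for a node to free up
    return started, pending

pools = {'static': {'free': 10}, 'single-use': {'free': 1}}
jobs = [
    {'name': 'gate-tempest-dsvm-full@1', 'pool': 'single-use'},
    {'name': 'gate-cinder-pep8@1', 'pool': 'static'},
    {'name': 'gate-tempest-dsvm-full@2', 'pool': 'single-use'},
    {'name': 'gate-cinder-pep8@2', 'pool': 'static'},
    {'name': 'gate-cinder-pep8@3', 'pool': 'static'},
]
started, pending = schedule(jobs, pools)
print(started)  # all pep8 jobs run; only the first dsvm job does
print(pending)  # -> ['gate-tempest-dsvm-full@2']
```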
sdaguettx: honestly, I think that if i2 pushes back, we run the bugday on monday regardless14:07
*** dkliban has joined #openstack-infra14:07
ttxyep14:07
fungiit seems like we've been running a gate bugday for the past two weeks already14:07
sdaguewell, a few of us have been14:08
sdaguewhich is why I'm trying to call out specifically who's helping in the emails... maybe encourage others to help14:08
fungianteaya: business as usual as far as i know. also clarkb should be around in a few hours14:08
anteayavery good14:08
anteayawill do my best to be helpful14:08
anteayafungi: travel safely14:08
fungisdague: i'm taking your word for it--still been too busy to read e-mail14:09
*** ryanpetrello has quit IRC14:09
fungithanks anteaya14:09
anteaya:D14:09
sdaguefungi: I know I keep bugging folks on this one - https://review.openstack.org/#/c/67591/ but it will really help let more people find the missing bugs14:09
sdaguebecause right now it's jog0 and I until we get that hit list out there public14:09
mriedemsdague: the cinder revert has passed jenkins twice already https://review.openstack.org/#/c/67973/14:12
mriedemany more rechecks after that won't run test_volume_boot_pattern now that it's skipped,14:12
mriedembut should be good to recheck still if there are more flaky volume fails14:13
sdaguemriedem: they are at a lower fail rate, so at this point if you suspect it's the culprit, I'd just get it approved, promote, and see how the gate reacts.14:13
mriedemit's got one +214:14
mriedemjgriffith was still looking into it yesterday last i heard, he wasn't sure how the test ever worked...14:15
mriedemi'll keep following it today14:15
matelAnyone knows how to add a new package to be served by http://pypi.openstack.org/openstack ? I would like to add the XenAPI package.14:15
anteayahi matel, fungi is in transit today so you might have to wait about 3 more hours for clarkb to show up14:16
matelanteaya: Thanks for the info, I will wait.14:17
anteayaunless someone else with that knowledge is around, unfortunately I am not such a person14:17
anteayamatel: thanks for your patience14:17
*** changbl has quit IRC14:17
openstackgerritTrevor McKay proposed a change to openstack/requirements: Update python-savannaclient version  https://review.openstack.org/6812214:17
*** dims has joined #openstack-infra14:19
*** yassine has quit IRC14:21
*** yassine has joined #openstack-infra14:22
*** mgagne1 has joined #openstack-infra14:22
*** yaguang has joined #openstack-infra14:22
*** mgagne has quit IRC14:23
*** mgagne has joined #openstack-infra14:24
*** CaptTofu has joined #openstack-infra14:25
openstackgerritTrevor McKay proposed a change to openstack/requirements: Update python-savannaclient version  https://review.openstack.org/6812214:26
openstackgerritRuslan Kamaldinov proposed a change to openstack-infra/storyboard: Add tests for Alembic migrations  https://review.openstack.org/6641414:26
*** max_lobur_afk is now known as max_lobur14:27
*** mgagne1 has quit IRC14:27
*** odyssey4me has joined #openstack-infra14:28
*** eharney has joined #openstack-infra14:29
*** thuc has quit IRC14:30
*** mancdaz is now known as mancdaz_away14:31
*** thuc has joined #openstack-infra14:31
*** mancdaz_away is now known as mancdaz14:32
*** dizquierdo has quit IRC14:33
*** mfer has joined #openstack-infra14:33
*** BobBall is now known as BobBallAWay14:34
*** thuc has quit IRC14:36
*** SergeyLukjanov is now known as SergeyLukjanov_a14:42
*** SergeyLukjanov_a is now known as SergeyLukjanov_14:43
*** prad has joined #openstack-infra14:44
*** coolsvap_away has joined #openstack-infra14:45
*** prad has quit IRC14:46
*** rakhmerov has joined #openstack-infra14:46
*** gokrokve has joined #openstack-infra14:46
*** coolsvap has quit IRC14:47
*** coolsvap_away is now known as coolsvap14:48
*** mriedem has quit IRC14:49
*** dstanek has joined #openstack-infra14:49
*** dcramer_ has joined #openstack-infra14:50
*** changbl has joined #openstack-infra14:50
*** SergeyLukjanov_ is now known as SergeyLukjanov14:50
*** rakhmerov has quit IRC14:50
*** bauzas has quit IRC14:51
*** vogxn has joined #openstack-infra14:51
*** vogxn has quit IRC14:52
*** mgagne has quit IRC14:53
*** bauzas has joined #openstack-infra14:53
*** boris-42 has quit IRC14:54
*** amotoki has quit IRC14:55
*** oubiwann_ has joined #openstack-infra15:02
*** jcoufal has quit IRC15:02
*** jcoufal_ has joined #openstack-infra15:02
*** vkozhukalov has quit IRC15:04
*** chmouel has quit IRC15:05
*** chmouel_ has joined #openstack-infra15:05
*** chmouel_ is now known as chmouel15:06
*** prad has joined #openstack-infra15:07
*** IvanBerezovskiy has left #openstack-infra15:08
*** yassine has quit IRC15:08
*** emagana has joined #openstack-infra15:08
*** yassine has joined #openstack-infra15:09
*** vogxn has joined #openstack-infra15:10
*** ryanpetrello has joined #openstack-infra15:10
*** emagana has quit IRC15:13
gsamfiraHey guys. What plugin should I use for testr to generate HTML results. Like these for example: http://logs.openstack.org/71/68071/1/check/gate-ceilometer-python27/b68e4db/15:15
*** mgagne has joined #openstack-infra15:16
anteayahi gsamfira, I don't know if that is a plugin15:17
anteayasdague: do you happen to know?15:17
*** mgagne1 has joined #openstack-infra15:17
anteayagsamfira: I suspect it has less to do with testr and more to do with how we have set up logging and compression with our system15:19
*** vogxn has quit IRC15:19
*** rcleere has quit IRC15:20
*** mgagne has quit IRC15:21
gsamfirathank you anteaya15:22
anteayanp, sorry I don't have more details for you15:22
gsamfirathat's ok. I'll keep digging15:23
*** marun has joined #openstack-infra15:25
gsamfirafor posterity, it appears to be a helper script: http://goo.gl/JqCE4s15:25
anteayagsamfira: I think this might be helpful: http://git.openstack.org/cgit/openstack-infra/os-loganalyze/tree/os_loganalyze/cmd/htmlify_log.py15:25
gsamfirathanks. I'll have a look at that as well15:26
anteayayes that is probably part of it as well15:26
anteayasure, glad you were able to find something useful15:26
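Alongside the os-loganalyze htmlify_log.py link anteaya dug up, here is a toy approximation of what such a htmlify helper might do: wrap each log line in a severity-classed span so CSS can filter or colour it. The class names and severity list are guesses for illustration, not the real implementation:

```python
import html
import re

# Assumed severity keywords; the real tool's list may differ.
SEVERITIES = ('DEBUG', 'INFO', 'WARNING', 'ERROR', 'TRACE')

def htmlify(lines):
    """Wrap plain log lines in severity-classed <span> elements."""
    pattern = re.compile(r'\b(%s)\b' % '|'.join(SEVERITIES))
    out = []
    for line in lines:
        m = pattern.search(line)
        sev = m.group(1) if m else 'NONE'
        out.append('<span class="%s">%s</span><br/>'
                   % (sev, html.escape(line)))
    return '\n'.join(out)

print(htmlify(['2014-01-21 INFO starting test run',
               '2014-01-21 ERROR iSCSI device not found']))
```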
*** mgagne1 has quit IRC15:30
*** rnirmal has joined #openstack-infra15:30
openstackgerritA change was merged to openstack/requirements: Allow for sqla 0.8... finally  https://review.openstack.org/6483115:31
*** mriedem has joined #openstack-infra15:31
anteayadolphm: I'm seeing a string of 5 keystone failures at the head of the gate, are you watching these patches?15:33
*** yassine has quit IRC15:35
*** yassine has joined #openstack-infra15:35
*** fifieldt has joined #openstack-infra15:35
anteaya67204, 64587, 64749, 64589, 64758, 6475915:35
anteayaso 6 now15:35
dolphmanteaya: yes15:36
dolphmanteaya: they're all dependent on each other15:36
*** jchiles has joined #openstack-infra15:37
*** UtahDave has joined #openstack-infra15:37
anteayaah15:37
dolphmanteaya: if they get bounced i'm squashing them into one change lol15:38
anteayaso the first one is failing15:38
anteayak15:38
dolphmanteaya: the last two (still showing green) are separate15:38
anteayak15:38
anteayathanks for being on top of it15:38
anteayaI'm not a fan of the squash, but if the net result is < 200 lines of code, so be it15:39
dolphmanteaya: agree15:39
anteayais that error a heat error?15:40
*** nati_uen_ has quit IRC15:40
anteayaI haven't seen it before, but I haven't been looking at random test logs lately15:40
*** rcleere has joined #openstack-infra15:41
*** emagana has joined #openstack-infra15:42
sdagueok, new sin of the day: people running reverify on merge conflict -2s15:42
sdaguehttps://review.openstack.org/#/c/55449/15:42
anteayadolphm: so can you snipe out those patches then so they don't get in the gate reset, please?15:43
*** basic` has quit IRC15:43
*** mrmartin has quit IRC15:43
sdagueanteaya: the stuff at the top is disconnected15:43
sdagueit won't reset anything any more15:43
anteayaso do the dependent keystone patches need to be sniped or no?15:44
sdagueno15:44
anteayaokay great15:44
anteayadolphm: ignore me15:44
anteayagood thing reverify is retired15:44
sdaguethough I just looked at the inprogress job on 64575,1515:45
sdagueand it's failing right now15:45
sdagueso about to do another reset15:45
russellbsdague: i have another patch for the PCI extension bug, will have it up in a minute after tests finish locally15:45
russellbsdague: patch yesterday didn't get it all15:45
anteayaso should dolphm snipe or not snipe?15:45
sdaguerussellb: great15:45
dolphmanteaya: sdague: let me know what you need from me15:45
sdagueanteaya: not bother15:46
*** rakhmerov has joined #openstack-infra15:46
anteayadolphm: so 67204,1 has another swing at the gate it appears15:46
sdaguelets get the full test results on the fail so we can figure out what's wrong15:46
* anteaya digs out her rubber chicken15:46
*** emagana has quit IRC15:47
*** sandywalsh has joined #openstack-infra15:50
*** NikitaKonovalov is now known as NikitaKonovalov_15:51
*** markwash has joined #openstack-infra15:52
russellbsdague: https://review.openstack.org/6814715:52
russellbneed that one reviewed and promoted i think15:53
bknudsonsetUpClass (tempest.api.compute.v3.servers.test_create_server.ServersV3TestJSON) ... FAIL -- so the gate's going to reset on that one.15:53
sdaguerussellb: +2 from me. And I agree a promote is in order on that15:54
russellbsdague: thanks15:54
anteayaso right now fungi and clarkb and jeblair (when his is not ill) can promote, yes?15:55
anteayamordred do you have access to that magic button?15:56
mordredanteaya: access, yes. knowledge, no.15:56
openstackgerritPavel Sedlák proposed a change to openstack-infra/jenkins-job-builder: Add support for Test Stability with Junit  https://review.openstack.org/6815215:57
*** jcoufal_ has quit IRC15:57
*** SergeyLukjanov is now known as SergeyLukjanov_15:57
anteayamordred: dangerous combination, we will wait for clarkb15:58
*** mancdaz is now known as mancdaz_away15:58
anteayahe tells me promotions are expensive, and I don't know what the trade off is for the use15:58
*** mancdaz_away is now known as mancdaz15:58
mordredthey cause a gate reset15:59
mordredusually we watch for a reset and then do the promotion real quick15:59
anteayacause? or float to the top?15:59
mordredcause15:59
anteayaah so timing is important15:59
mordredone of these days, I think we should add a feature "promote-on-next-reset"15:59
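mordred's point that a promote itself forces a reset can be sketched: every change's jobs ran against the speculative state of everything ahead of it, so reordering the queue stales those results. The change numbers below come from the channel; the queue logic is a hypothetical simplification, not Zuul's actual promote implementation:

```python
def promote(queue, change):
    """Move `change` to the head of the gate queue.

    Returns the reordered queue plus the changes whose speculative
    test results are invalidated: once the order shifts, everything
    now behind the new head was tested against a stale queue state
    and must be restarted (hence the reset).
    """
    reordered = [change] + [c for c in queue if c != change]
    invalidated = [c for c in queue if c != change]
    return reordered, invalidated

queue = ['67973', '68088', '68147', '68062']
print(promote(queue, '68147'))
```

A "promote-on-next-reset" feature, as mordred suggests, would simply defer calling this until a failure had already invalidated the queue anyway.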
fungii can try to promote 68147 from the airport in a but15:59
anteayayeah, thou shalt not cause a gate reset15:59
anteayafungi: thanks, as you see fit16:00
fungier in a bit16:00
fungibut not if it's not urgent16:00
* fungi will bbiaw16:00
anteayafungi: I think we have a reset coming up16:00
mordredfungi: your flight got uncancelled?16:00
sdaguesqla 0.8 in global requirements just merged16:00
mordredwoot16:00
mordred(we didn't decide to just go to 0.9?)16:00
sdaguedoesn't work16:00
*** senk has joined #openstack-infra16:01
sdaguehttps://review.openstack.org/#/c/66156/16:01
mordredk. excellent16:01
sdaguethat's a partial fix, but more is needed16:01
sdagueI think it's probably only migrate that's a problem16:01
sdaguestepping away for a few16:02
*** DinaBelova is now known as DinaBelova_16:02
*** dmsimard has joined #openstack-infra16:02
*** boris-42 has joined #openstack-infra16:02
dmsimardHi guys, what's the magic word to reschedule a merge of a patchset ? It failed because of yesterday's github issues.16:02
*** alexpilotti has joined #openstack-infra16:02
anteayadmsimard: can you post the url of the patchset please?16:03
dmsimardNevermind, I opened my eyes :)16:03
dmsimardanteaya: https://review.openstack.org/#/c/67571/16:03
* russellb is unaware of what "yesterday's github issues" are ...16:03
*** carl_baldwin has joined #openstack-infra16:03
russellbah16:04
dmsimardrussellb: https://twitter.com/githubstatus16:04
*** burt1 has joined #openstack-infra16:04
*** dpyzhov has quit IRC16:06
anteayadmsimard: okay you need a reapproval for this to head to the gate16:06
anteayadmsimard: track down one of the cores for this repo and ask them to +A again, that will trigger it16:06
anteayawe have removed "reverify" since it was getting abused16:07
*** dpyzhov has joined #openstack-infra16:07
dmsimardanteaya: Understandable, and it was surely worse considering there has been a lot of problems lately16:07
*** nati_ueno has joined #openstack-infra16:08
anteayadmsimard: thanks for understanding16:08
anteayayes, some folks pay attention to the bigger picture and some folks not so much16:08
*** dpyzhov has quit IRC16:09
*** chandankumar_ has quit IRC16:10
*** luqas has joined #openstack-infra16:11
*** mrodden has joined #openstack-infra16:11
*** prad has quit IRC16:11
*** prad has joined #openstack-infra16:12
*** dmsimard1 has joined #openstack-infra16:12
*** dmsimard has quit IRC16:13
*** mgagne has joined #openstack-infra16:14
*** kruskakli has joined #openstack-infra16:15
*** markwash has quit IRC16:15
*** eharney has quit IRC16:15
*** mgagne1 has joined #openstack-infra16:16
*** sgordon_ has quit IRC16:18
*** dmsimard1 is now known as dmsimard16:18
*** mgagne has quit IRC16:19
*** gyee has joined #openstack-infra16:19
dmsimardanteaya: Got a core to re-approve, let's wait and see16:25
mriedemsdague: fungi: we should get this to the top of the queue: https://review.openstack.org/#/c/68147/16:26
anteayadmsimard: may the force be with you16:26
russellbi think fungi may be on a plane today16:27
russellbso maybe clarkb when he's around?16:27
*** mgagne1 has quit IRC16:28
*** mgagne has joined #openstack-infra16:28
*** LinuxJedi is now known as LinuxJedi__16:29
*** mancdaz is now known as mancdaz_away16:29
*** mrodden has quit IRC16:30
*** mgagne1 has joined #openstack-infra16:30
*** mancdaz_away is now known as mancdaz16:31
locke105mriedem: #devstackgateheroes16:32
mriedemlocke105: you must be at home then16:33
*** mgagne has quit IRC16:33
*** DinaBelova_ is now known as DinaBelova16:33
*** eharney has joined #openstack-infra16:33
russellblocke105: there's nobody in there :(16:34
locke105lol, was more supposed to be a twitter hashtag16:35
dmsimardlol.16:35
locke105pretty sure infra is the place where devstackgateheroes hang out16:36
anteayawould be a great channel for the gate bugday16:37
*** pcrews has joined #openstack-infra16:37
*** kgriffs has joined #openstack-infra16:38
russellblocke105: :-p16:38
russellbhashtags have ruined us all16:39
russellbnot long ago i noticed my orange, yes a fruit, had a hashtag sticker on it16:39
russellbdidn't even make any sense16:39
Mithrandirwas it #orange?16:39
russellbhttps://twitter.com/russellbryant/status/41372913699023667216:39
russellb#spin16:39
russellbwat16:39
mriedemfungi: clarkb: can this be promoted? https://review.openstack.org/#/c/68147/16:39
mriedemmy wife watches the bachelor/ette, they think it's cool to put tweets on the screen during the show. the show is bad enough on its own, but that takes it to a new level.16:41
mriedem#omfgimsojealousinlove16:41
*** markwash has joined #openstack-infra16:41
mgagne1can we get someone to review this one? https://review.openstack.org/#/c/65406/ it's about removing a check/gate job for puppet projects16:41
*** mgagne1 is now known as mgagne16:42
anteayamgagne: today we have mordred and clarkb16:42
anteayaand clarkb has not arrived yet16:42
mgagneanteaya: and mordred is in a plane? =)16:43
anteayasurprisingly not16:43
anteayaI don't think16:43
anteayafungi is on a plane16:43
mgagneanteaya: alright, thanks =)16:43
anteayaand jeblair is sick and soon to be on a plane, I expect16:43
anteayanp16:43
*** LinuxJedi__ has quit IRC16:44
*** thuc has joined #openstack-infra16:45
*** bauzas has quit IRC16:45
*** praneshp has joined #openstack-infra16:45
*** odyssey4me has quit IRC16:45
*** Aarongr_afk is now known as AaronGr16:46
*** markmc has quit IRC16:46
*** emagana has joined #openstack-infra16:47
*** kraman has joined #openstack-infra16:47
*** kraman has left #openstack-infra16:48
*** gokrokve has quit IRC16:49
*** gokrokve has joined #openstack-infra16:49
*** mrodden has joined #openstack-infra16:50
*** gokrokve_ has joined #openstack-infra16:51
*** FallenPegasus has joined #openstack-infra16:51
sdaguedmsimard: why does github outage affect gate?16:51
sdaguewe should be fully isolated16:51
dmsimardsdague: In the context of puppet openstack, jenkins clones dependencies from github16:52
dmsimardsdague: To run tests16:52
sdaguedmsimard: ah, gotcha16:52
sdagueso it's a stackforge thing, that's fine16:53
dmsimardsdague: yup.16:53
matelDoes anyone know how to add a new package to be served by http://pypi.openstack.org/openstack ? I would like to add the XenAPI package.16:54
*** gokrokve has quit IRC16:54
*** yaguang has quit IRC16:55
*** fbo is now known as fbo_away16:55
*** talluri has joined #openstack-infra16:55
*** kraman has joined #openstack-infra16:56
*** fbo_away is now known as fbo16:57
*** DinaBelova is now known as DinaBelova_16:58
max_loburHi Everyone!16:58
max_loburI'm from Ironic project16:58
max_loburSomebody from requirements core group, could you please review/approve the patches for us:16:58
max_loburhttps://review.openstack.org/#/c/66349/3 this one already has a +1 from a core reviewer16:58
max_loburhttps://review.openstack.org/#/c/66077/16:58
max_loburI would greatly appreciate it16:59
anteayahi max_lobur here are your requirements core people: https://review.openstack.org/#/admin/groups/131,members17:00
fungimriedem: russellb: does 68147 still warrant moving to the front?17:00
max_loburanteaya, thanks!17:00
anteayamax_lobur: I note none of them are infra-core17:00
anteayamax_lobur: np, happy hunting17:00
russellbfungi: yes17:01
jeblairmordred: ping17:01
russellbit's still one of the top issues17:01
mordredjeblair: otp17:01
mriedemfungi: yeah, it's for 172068017:01
max_loburanteaya, do you think it's ok if I contact some of them by email for this?17:01
jeblairmordred: i wondered if you wanted to promote 6814717:01
*** talluri has quit IRC17:01
fungimriedem: russellb: is it the cause of that keystone failure at the front?17:01
anteayamax_lobur: I personally have nothing against that and don't think any of the folks on the list will object17:01
*** talluri has joined #openstack-infra17:02
mriedemfungi: doesn't sound like it, no17:02
anteayanote that joe heck hasn't been doing much openstack lately17:02
*** med_ has quit IRC17:02
*** nati_uen_ has joined #openstack-infra17:02
anteayaI'm not sure who dave walker is17:02
*** nati_uen_ has quit IRC17:02
fungiokay, i'll wait a few minutes for the front of the gate to clear17:02
anteayathe rest are active in openstack17:02
max_loburanteaya, thanks a lot!17:02
anteayamax_lobur: np17:03
*** nati_uen_ has joined #openstack-infra17:03
jeblairmordred: istr you saying you would have time to help out more this week; and i wasn't sure if you had promoted a change yet.  but i guess if you're busy, nevermind.17:03
*** nati_ueno has quit IRC17:03
*** markwash has quit IRC17:03
*** hashar has joined #openstack-infra17:04
fungimordred: for reference, i'm planning to run 'sudo zuul promote --pipeline gate --changes 68147,3'17:04
russellbfungi: sounds good, thank you!17:04
anteayajeblair: he has said he can promote but doesn't have the promotion knowledge to feel confident doing it17:04
*** medberry has joined #openstack-infra17:04
jeblairanteaya: that's what i was trying to fix.17:04
anteayaand the rest of us are honouring his good judgement17:04
anteayaah17:04
anteayahe was around a minute ago17:04
anteayaI realize that doesn't help much17:04
anteayajeblair: feeling any better today?17:05
*** talluri has quit IRC17:06
*** dstanek_afk has joined #openstack-infra17:06
*** johnthetubaguy1 has joined #openstack-infra17:06
*** vkozhukalov has joined #openstack-infra17:06
*** ociuhandu_ has joined #openstack-infra17:07
*** CaptTofu_ has joined #openstack-infra17:07
*** gokrokve_ has quit IRC17:08
*** saschpe has joined #openstack-infra17:08
*** FallenPegasus has quit IRC17:08
*** julim has quit IRC17:09
*** kgriffs has quit IRC17:09
*** johnthetubaguy has quit IRC17:09
*** niska has quit IRC17:09
*** saschpe_ has quit IRC17:09
*** CaptTofu has quit IRC17:09
*** kruskakli has quit IRC17:09
*** dcramer_ has quit IRC17:09
*** lifeless has quit IRC17:09
*** mrda_ has quit IRC17:09
*** wayneeseguin has quit IRC17:09
*** mrda has joined #openstack-infra17:09
*** lifeless1 has joined #openstack-infra17:09
*** dstanek has quit IRC17:09
*** afazekas has quit IRC17:09
*** ociuhandu has quit IRC17:09
*** skraynev has quit IRC17:09
*** jeblair has quit IRC17:09
*** ociuhandu_ is now known as ociuhandu17:09
*** wayneseguin has joined #openstack-infra17:09
*** skraynev has joined #openstack-infra17:09
*** niska has joined #openstack-infra17:09
*** wayneseguin is now known as wayneeseguin17:09
*** afazekas has joined #openstack-infra17:09
*** kgriffs has joined #openstack-infra17:10
*** julim has joined #openstack-infra17:10
*** dcramer_ has joined #openstack-infra17:10
*** jeblair has joined #openstack-infra17:10
*** morganfainberg has quit IRC17:11
*** morganfainberg has joined #openstack-infra17:11
dimsjog0, sdague - is there a etherpad for the gate issues?17:11
matelfungi: do you know how to add a new package to be served by http://pypi.openstack.org/openstack ? I would like to add the XenAPI package.17:11
fungimatel: it would need to be added to the openstack/requirements global-requirements.txt file17:13
matelfungi: thanks.17:13
matelfungi: How frequently does it get refreshed?17:13
*** jasondotstar has quit IRC17:13
fungimmm, that keystone change at the very front is going to fail too, according to the jenkins log for its last running job17:13
fungii'll go ahead and promote that change now to give them another chance i guess17:14
fungisince when that one goes, it's a full gate reset regardless17:14
bknudsondo we want to let the job finish so we can get some logs on the failure?17:15
fungimatel: refreshed in what way?17:15
fungibknudson: not really. it looks like a failure pattern we're already tracking anyway17:15
anteayathis gate reset has been going on for some time17:15
matelSo, if I add a new entry, and it gets merged, when can I download the package from pypi.openstack.org?17:15
bknudsonok, if it's a known failure.17:15
anteayato my eyes, it feels like this gate reset has been taking over an hour17:16
fungimatel: usually around an hour after the change to openstack/requirements merges17:17
*** thuc has quit IRC17:17
matelfungi: thanks17:17
fungimerge events for that repository trigger mirror refreshes17:18
dolphmanteaya: about 1.5 hours now17:18
jeblairanteaya: what 'gate reset?'17:18
*** thuc has joined #openstack-infra17:18
*** mgagne has quit IRC17:18
anteaya64575 at the top of the gate has been there for over an hour17:18
*** MarkAtwood has joined #openstack-infra17:18
jeblairanteaya: that doesn't mean it has taken zuul an hour to reset the changes after it17:19
anteayaoh I see this job is still running: https://jenkins03.openstack.org/job/gate-tempest-dsvm-postgres-full/2605/17:19
russellbanteaya: tempest runs take a bit over an hour now17:19
jeblairanteaya: the postgres job has taken 3 hours to run17:19
fungithat change has been there because there's a job on it which ran for almost 3 hours17:19
russellb3 hours, eep17:19
russellbwhat the deuce17:19
anteayamy mistake, yeah this job - which is going to fail (correct dolphm?) is still running17:19
anteayathen the gate reset17:19
*** CaptTofu_ has quit IRC17:19
openstackgerritMate Lakat proposed a change to openstack/requirements: Add XenAPI to OpenStack dependencies  https://review.openstack.org/6818117:20
*** fifieldt has quit IRC17:21
matelfungi: Thanks for the info, patch uploaded.17:22
jeblairfungi: you might have to wait until it actually shows up in the pipeline since there's an event queue backlog right now17:22
*** SergeyLukjanov_ is now known as SergeyLukjanov17:22
*** thuc has quit IRC17:22
anteayasdague | though I just looked at the inprogress job on 64575,1517:23
anteaya15:45:16          sdague | and it's failing right now17:23
*** smurugesan has joined #openstack-infra17:23
fungijeblair: yeah, it didn't take yet17:23
*** praneshp has quit IRC17:23
fungiso chances are all those keystone changes will be ejected before i can get the new change promoted anyway17:23
jeblairfungi: are there any operational issues you would like me to address?17:24
*** pcrews has quit IRC17:24
fungijeblair: lifeless1 has an updated patch series up for nodepool which should help the deleted node handling. i approved some of the initial ones in the series because they looked minimal and safe, but the others may merit more eyes to confirm they don't take the design in unintended directions17:25
*** jasondotstar has joined #openstack-infra17:25
*** thuc has joined #openstack-infra17:26
*** thuc has quit IRC17:27
jeblairfungi: did you restart nodepool with any of those changes?17:27
*** thuc has joined #openstack-infra17:27
fungijeblair: no, not yet. puppet's disabled on nodepool at the moment still because of the failure we hit early last week, so i could temporarily hand-edit out the configuration for the tripleo poc provider which went offline and tanked nodepool17:28
*** rakhmerov has quit IRC17:28
openstackgerritEvgeny Fadeev proposed a change to openstack-infra/askbot-theme: removed a broken line from the script  https://review.openstack.org/6818317:28
fungii have a patch proposed to address where i saw the exceptions in the log which seemed to be preventing it from adding any new nodes17:28
jeblairfungi: ah.17:29
fungi(any new nodes on any provider)17:29
russellbfungi: looks like a good time to promote17:29
fungirussellb: zuul hasn't spotted the approval event for that patch yet, so it'll be a few more minutes17:29
russellbah ok17:29
jeblairi don't understand some of lifeless1's changes that were merged17:30
SergeyLukjanovit looks like the propose-requirements-updates job is failing now ;( http://logs.openstack.org/a9/a94a666767516699d7ee689661f2a157bc73671e/post/propose-requirements-updates/17:30
SergeyLukjanovmorning guys17:30
* fungi looks bach at them to refresh memory17:30
jeblairSergeyLukjanov: good morning17:30
jeblairhttps://review.openstack.org/#/c/67924/17:30
jeblairfungi: what will that tell us?17:30
*** mancdaz is now known as mancdaz_away17:30
*** viktors has left #openstack-infra17:30
mferfungi howdy17:30
jeblairfungi: other than that the log system is working, by emitting 10 lines/second?17:30
kmartinany chance a core could look at https://review.openstack.org/#/c/65179/ ? it has four +1's, and the gate appears to be working now, as sdague just had a patch land. This requirements change is holding up a new cinder driver that is supposed to land in Icehouse-2.17:30
anteayakmartin: might want to check with jgriffith on that17:31
*** senk has quit IRC17:31
anteayattx is advising PTLs to continue with whatever is currently in the gate and mark anything not in the gate for i317:32
kmartinanteaya: jgriffith already gave it a +1, he's not able to +2's as far as I know17:32
*** fifieldt has joined #openstack-infra17:33
fungijeblair: oh, it looked like that would only be hit if we entered that early. now that i look at it with fresh eyes, i agree lifeless1's suggestion that it would only be emitted at most once a second and only when under heavy load may have been mistaken17:33
jeblairfungi: i left a comment on it17:33
fungijeblair: it was also suggested as a short-term debugging test to find out how often we were hitting that delay and being forced to wait17:33
anteayakmartin: you are looking for requirements cores: https://review.openstack.org/#/admin/groups/131,members17:33
jeblairfungi: i expect the answer is 'constantly'17:34
anteayakmartin: none of which are infra cores17:34
jeblairby design17:34
kmartinanteaya: got it...thanks :)17:34
anteayakmartin: np17:34
fungijeblair: yeah, now i think so too. i should not try to do code review when it's that late at night17:35
jeblairfungi: left a comment on https://review.openstack.org/#/c/67993/ as well17:35
*** afazekas has quit IRC17:35
jgriffithanteaya: kmartin is correct, I have no +2 powers here :)17:36
fungijeblair: yep, i totally missed that we had a knob for that17:36
anteayajgriffith: very good, just trying to spread the message ttx is advocating: what is already in the gate is good to stay, and anything not in the gate gets retargeted for i3, which is the PTL's decision of course17:38
*** marun has quit IRC17:38
jgriffithanteaya: agreed17:38
jeblairfungi: the rest of the merged ones in that series lgtm17:38
jgriffithanteaya: that one meets that criteria17:38
anteayajgriffith: okay, thank you17:38
jgriffithanteaya: you're welcome, and thanks to you!17:39
anteaya:D17:39
dmsimardanteaya: Still haven't gotten jenkins to reverify and merge my patch set, I tried looking at jenkins01/02 in search of issues but I don't exactly know where to look. Any ideas or is it a matter of patience ?17:39
fungijeblair: thanks for checking back over those17:39
zaromorning17:39
* fungi is about to disappear again. flight is boarding shortly and i think this 5-hour leg may have no wifi :(17:40
anteayadmsimard: go here: http://status.openstack.org/zuul/17:40
zarofungi: did new scp plugin get installed?17:40
anteayadmsimard: see this? Queue lengths: 1056 events, 630 results.17:40
fungizaro: on all the jenkins masters running newer jenkins, yes17:40
fungizaro: we still need to upgrade jenkins and plugin on jenkins.o.o and 0117:40
anteayathere are over 1000 events that zuul hasn't processed yet, your reapproval is in them17:41
zarofungi: cool, thx. enjoy the flight.17:41
anteayaafter your patch hits the gate queue it is still a 50 hour wait17:41
dmsimardanteaya: Okay, I thought zuul was only for primary openstack projects, not for stackforge - makes sense.17:41
*** esker has joined #openstack-infra17:41
anteayazaro morning17:42
jeblairfungi, lifeless: why do you think making all deletes serialized will help?17:42
mferfungi did you ever hear back on the openstack sdk naming convention stuff?17:42
*** gokrokve has joined #openstack-infra17:43
*** gokrokve has quit IRC17:43
*** gokrokve has joined #openstack-infra17:43
fungijeblair: i gather the suggestion there is to avoid having the event-driven deletes conflict with the queued deletes, and then run through the queue more aggressively instead17:43
*** fifieldt has quit IRC17:44
*** marun has joined #openstack-infra17:44
jeblairlifeless1: and why would reordering the delete and check calls help?17:44
*** wenlock has joined #openstack-infra17:44
fungimfer: i have e-mailed mark collier again asking for an update. i will also see him in person tomorrow so i'll be sure to find out the status then if nothing else (but i won't be around irc much for the next few days either)17:44
jeblairfungi: i don't get it.  right now we can parallelize work across all providers, this serializes it.17:44
*** DinaBelova_ is now known as DinaBelova17:45
fungijeblair: i'm still unconvinced as well. maybe we would do better to have a queue per provider and iterate through those with one or more parallel tasks each?17:46
jeblairfungi: this means that if we're waiting our turn to send an api call to rackspace (because we're rate limiting), we won't be using the same time to send an api call to hpcloud (because it's serialized behind the request to rackspace)17:46
dstufftsdague: some day pip will figure it out :[17:46
jeblairfungi: yeah, that's what we have now.17:46
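fungi's queue-per-provider idea could be sketched roughly like this — an illustrative sketch only, not nodepool's actual code: each provider gets its own queue and worker thread, so rate-limiting one cloud's API never stalls deletes on another.

```python
import queue
import threading

def start_provider_worker(provider, tasks, deleted):
    """One delete worker per provider: a slow or rate-limited
    rackspace queue never blocks the hpcloud queue, because each
    worker drains only its own provider's tasks."""
    def worker():
        while True:
            server_id = tasks.get()
            if server_id is None:          # sentinel: shut down
                return
            # a real worker would call the provider API here,
            # applying that provider's own rate limit
            deleted.append((provider, server_id))
    t = threading.Thread(target=worker)
    t.start()
    return t
```

The scheduler would then route each delete to the matching provider's queue instead of one global serialized list.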
mferfungi ok. if mordred knows the direction and i can twist his arm into doing it manually i'll be happy enough17:46
*** fifieldt has joined #openstack-infra17:46
zarois there openstack-infra meeting today?17:46
kramanjeblair: ping. can I get some of your time to bounce some ideas off you re solum project's use of zuul17:47
sdaguedstufft: yes, which is fine17:47
sdagueI'm just trying to be realistic on that thread17:47
sdaguebecause I feel like there is a POV coming from "ah it's already a solved problem" by a set of people that have never looked into the details17:47
*** sarob has joined #openstack-infra17:48
dstufftsdague: yea I commented already saying that relying on 22+ things is probably going to be painful for end users17:48
anteayazaro if so looks like it will be you me and clarkb and whoever else shows up17:49
anteayapleia2 will probably be there too, double teaming with the tripleo meeting17:50
fungizaro: anteaya: i'll join from the flight if there's wifi, otherwise just assume most of the action items i assigned myself last week are still unaddressed17:50
*** gothicmindfood has joined #openstack-infra17:50
clarkbhello17:50
*** nati_ueno has joined #openstack-infra17:51
jeblairkraman: time is in short supply right now.  can it wait until later in the week?17:51
*** nati_ueno has quit IRC17:51
kramanjeblair: sure17:51
*** harlowja_away is now known as harlowja17:51
anteayafungi: very good17:51
*** nati_ueno has joined #openstack-infra17:51
anteayaclarkb: hello17:51
kramanjeblair: when should i ping you back? would sometime thursday be better?17:52
anteayajeblair: do you think you will have wifi during meeting time?17:52
jeblairanteaya: i'm at home, sick, remember?17:52
anteayajeblair: sorry I found it hard to tell17:52
anteaya:D17:52
anteayaso no Utah for you?17:52
*** rakhmerov has joined #openstack-infra17:53
*** kgriffs is now known as kgriffs_afk17:53
*** CaptTofu has joined #openstack-infra17:53
*** coolsvap is now known as coolsvap_away17:54
*** mestery has quit IRC17:55
*** nati_uen_ has quit IRC17:55
pleia2anteaya: yep, I'm around17:55
anteayajeblair: so you will be here to chair?17:55
*** sarob_ has joined #openstack-infra17:56
* anteaya is trying to figure out if she should be warming up the bot commands in case she has to chair17:56
anteayapleia2: awesome17:56
*** sarob__ has joined #openstack-infra17:57
*** ruhe is now known as _ruhe17:57
pleia2(and I can chair if jeblair is sick or plane-ing)17:57
anteayagrand17:57
*** sarob has quit IRC17:58
*** sarob___ has joined #openstack-infra17:58
*** derekh has quit IRC17:58
gsamfirahello again :). We are getting ready to bring the Hyper-V CI online, and we are looking into monitoring solutions for the various services that are running on the nodes. What are you guys using? Nagios, Zabbix?17:58
*** sarob has joined #openstack-infra17:59
anteayagsamfira: do us all a favour and hold off until at least next week if what you are discussing is an 3rd party testing system18:00
clarkbgsamfira: we have a simple cacti server http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=7&select_first=true18:00
clarkbanteaya: ?18:00
*** sarob has quit IRC18:00
clarkbalso do changes still need promotion?18:00
anteayaclarkb: should it not?18:00
fungijeblair: clarkb: mordred: if one of you can 'sudo zuul promote --pipeline gate --changes 68147,3' on zuul.o.o once 68147,3 shows up in the gate (its approval is still in the event queue), it would be appreciated. i have to drop offline now, possibly for many hours18:00
*** sarob_ has quit IRC18:00
clarkbfungi: will do thanks18:00
anteayacan we tolerate the possible spam of a new 3rd party testing system right now?18:01
* fungi departs18:01
clarkbanteaya: should it not what?18:01
clarkbanteaya: it won't affect anything on our end18:01
anteayaokay18:01
clarkbanteaya: it may affect code review18:01
*** jamielennox|away is now known as jamielennox18:01
*** Alex_Gaynor has quit IRC18:02
*** sarob__ has quit IRC18:02
gsamfirawe will not be voting right away, until we make sure that it works as expected. Right now we are finishing the last bits. If you think we should hold off until next week, we will. But we need to get this online by 31/01/2014 (the latest)18:03
*** sarob___ has quit IRC18:03
*** Alex_Gaynor has joined #openstack-infra18:03
clarkbgsamfira: I don't think you need to wait as long as you hold off on voting until you are confident in the system18:04
gsamfiraawesome18:04
*** gothicmindfood has quit IRC18:05
*** hogepodge has joined #openstack-infra18:06
*** fifieldt has quit IRC18:07
*** elasticio has joined #openstack-infra18:07
*** max_lobur is now known as max_lobur_afk18:07
*** nicedice has joined #openstack-infra18:09
*** yamahata has joined #openstack-infra18:13
*** sarob has joined #openstack-infra18:13
*** praneshp has joined #openstack-infra18:14
*** markwash has joined #openstack-infra18:15
clarkbzaro and I found a small bug in the scp plugin change that went in. Noticed that there are a bunch of tests on jenkins masters that have been going for hours and hours. Looks like they ran into network trouble which killed the file upload thread before it could notify the main job thread. I am manually killing those jobs and we will work on a fix18:16
jeblairclarkb: ack thx18:17
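The scp plugin itself is Java, but the bug class clarkb describes is language-agnostic: if the upload thread dies on an exception before signalling, the waiting job thread hangs forever. A hedged Python sketch of the fix pattern — notify in a finally block, so the signal fires even on failure:

```python
import threading

def start_upload(upload_fn, done, errors):
    """Run upload_fn in a worker thread. Even if it raises (e.g.
    network trouble killing the transfer), the finally clause still
    sets the event, so the main job thread is never left waiting."""
    def worker():
        try:
            upload_fn()
        except Exception as exc:
            errors.append(exc)     # record the failure for the waiter
        finally:
            done.set()             # always notify the main thread
    t = threading.Thread(target=worker)
    t.start()
    return t
```

The waiter would pair this with `done.wait(timeout)` as a second line of defense against a thread that dies without ever reaching the finally clause.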
*** dims has quit IRC18:17
*** fifieldt has joined #openstack-infra18:20
*** jerryz has joined #openstack-infra18:21
*** smarcet has left #openstack-infra18:24
*** smarcet has joined #openstack-infra18:24
*** dims has joined #openstack-infra18:24
*** jp_at_hp has quit IRC18:27
*** dripton is now known as dripton_shovelin18:30
anteayapleia2: you are chairing today's meeting18:30
anteayalet me know if there is anything I can do to help18:31
*** kgriffs_afk is now known as kgriffs18:31
*** pcrews has joined #openstack-infra18:31
*** jooools has quit IRC18:32
clarkbjeblair: are you hacking on zuul or trying to recover from illness?18:32
clarkb(or both? :) )18:32
jeblairclarkb: mostly trying to catch up and ascertain the current situation.18:33
jeblairclarkb: and recover.18:33
jeblairclarkb: fungi suggested that improving nodepool delete performance was the most operationally critical thing i should look at.18:33
pleia2anteaya: thanks18:33
clarkbjeblair: I would mostly agree with that. I also think that having a rate limit of some sort in zuul would help tremendously in moving the queue because we can stop wasting resources on the 50th change in the gate18:34
jeblairclarkb: do you want to write that?18:34
clarkbjeblair: I can try :)18:35
clarkbthinking about it, I actually think it won't be too hard, as we can just preslice the queues that zuul operates over based on $thing18:35
*** markwash has quit IRC18:38
mordredclarkb: having it be an adaptive rate limit would be interesting ... perhaps set it based on the past X time's success/failure ratio?18:39
*** nati_uen_ has joined #openstack-infra18:40
fungithere is wireless on this flight. looks like zuul still hasn't spotted 68147,3 though18:40
mordredbut - a static configured one is probably a great step towards that18:40
mordredfungi: yay inflight wifi18:40
clarkbmordred: yup, probably going to start simple and do something similar to tcp18:40
mordredI love it that "something similar to tcp" was your follow up to "start simple"18:41
*** markwash has joined #openstack-infra18:41
clarkbmordred: well the window sizing in tcp is simple18:41
clarkbincrement increment increment, trouble reduce by half, increment increment increment18:42
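clarkb's TCP analogy is the classic additive-increase/multiplicative-decrease scheme; a minimal sketch, with illustrative names rather than zuul's eventual implementation:

```python
class AIMDWindow:
    """Additive increase / multiplicative decrease, as in TCP
    congestion control: grow the window by one on each success,
    halve it on failure, never dropping below a usable floor."""

    def __init__(self, floor=3):
        self.floor = floor
        self.size = floor   # how many gate changes to test in parallel

    def on_success(self):
        self.size += 1      # increment, increment, increment...

    def on_failure(self):
        # trouble: reduce by half, but keep a working minimum
        self.size = max(self.floor, self.size // 2)
```

Under a failure storm the window collapses quickly toward the floor, so the gate stops burning test nodes on the 50th speculative change; once things stabilize it grows back one change at a time.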
clarkbttx: I can't context switch enough right now, but I really don't like zuul handling the assumption that tests will fail18:43
*** nati_ueno has quit IRC18:43
clarkbassuming tests will fail implies to me that we should just turn the gate off and have tea18:43
jeblairtea sounds lovely right now18:43
ttxclarkb: well, not really.18:43
fungiyay taxiing back to the gate for a warning light in the cockpit... grr18:44
ttxclarkb: we are only ignoring them as far as the top changes in the gate go18:44
jeblairfungi: that's what you get for using wifi18:44
fungiindeed18:44
mordredttx, clarkb: context?18:44
clarkbmordred: suggestions for improving the gate thread18:44
ttxclarkb: I agree it's an aggressive change, but I still prefer that to turning gate off18:44
mordredgotcha. looking18:44
clarkbbut I can't email right now because ETOOMUCHGOINGON18:44
ttxif it boils down to those two options18:45
clarkbttx: it's an aggressive change that assumes nothing will merge18:45
clarkbso why do it18:45
ttxclarkb: it just automates what we are doing anyway18:45
mordredttx: what's the thread subject?18:45
ttxcould be temporary too18:45
mordredttx: I'd like to read the suggestion18:45
jeblair   [  80: Sean Dague             ] [OpenStack-Infra] suggestions for gate optimizations18:45
ttxmordred: on openstack-infra18:45
mordredahhh18:45
openstackgerritSlickNik proposed a change to openstack-infra/devstack-gate: Add Trove testing support  https://review.openstack.org/6504018:46
jeblairMessage-ID: <52DEAF38.6070409@openstack.org>18:46
mordredgotit18:46
clarkbjeblair: right now I am thinking only dependent pipelines need rate limiting? because independent will get their jobs run once and done. Does that seem correct to you or should we limit the entire space to give everything a more level playing field?18:46
ttxfwiw I'm not saying we should do that. Just want to put it on the table, just above the "turn gate off" nuclear option.18:46
mordredyeah - we've talked about more parallel testing of optional combinations - but I think we should implement throttling first18:47
jeblairclarkb: i think dependent only18:47
clarkbjeblair: ok18:47
*** dstanek_afk is now known as dstanek18:47
mordredyes18:47
*** obondarev_ has joined #openstack-infra18:47
*** luqas has quit IRC18:48
jeblairmordred: by throttling do you mean what clarkb is working on?18:48
*** gothicmindfood has joined #openstack-infra18:48
*** mfer has quit IRC18:48
mordredjeblair: yes18:48
*** mfer has joined #openstack-infra18:48
*** marun has quit IRC18:48
jeblairmordred: i would not view that as addressing the fundamental problem18:49
mordredjeblair: because I think that before we could even think about running more speculative branches of things, we'd have to have a mechanism to control resource consumption.18:49
*** marun has joined #openstack-infra18:49
jeblairmordred: oh i see the connection.  ack.18:49
mordredI don't think either addresses the fundamental problem - I just think that jumping to speculative first-fail branches right now is untenable18:49
HenryGHow can I fetch the latest patchset of a review without knowing how many patchsets it has?18:51
*** fbo is now known as fbo_away18:52
fungiHenryG: with git-review? it should pick the latest patchset bu default if you don't specify one18:52
HenryGfungi: actually I was hoping to just download the diff18:53
jeblairfungi: do you have an idea of what the actual time to delete a rax server is?18:53
sdagueclarkb: if you are putting in rate limiting, can you do it in such a way that the value could be updated without a zuul restart?18:53
sdaguewe could do it pseudo static18:53
clarkbsdague: ya18:53
sdagueI think right now a value of 10 would be appropriate18:54
*** dripton_shovelin is now known as dripton18:54
clarkbsdague: possibly as a zuul rpc command to start with18:54
*** _david_ has joined #openstack-infra18:54
*** marun has quit IRC18:54
fungijeblair: in current nodepool, or just in general?18:54
sdagueclarkb: sure, or a config that it rereads18:54
sdagueso it would be persistent across reboots18:54
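One simple shape for sdague's "config that it rereads" — purely illustrative, not zuul's actual mechanism — is a value re-read from disk whenever the file's mtime changes, so it can be tuned live and the same value survives a restart:

```python
import os

class ReloadingValue:
    """An integer setting stored in a file: callers always see the
    current on-disk value, which is re-read only when the file
    changes, and a process restart picks the value back up from disk."""

    def __init__(self, path, default):
        self.path = path
        self.value = default
        self._mtime = None

    def get(self):
        try:
            mtime = os.stat(self.path).st_mtime
        except OSError:
            return self.value            # no file yet: keep last value
        if mtime != self._mtime:         # changed on disk: reload it
            with open(self.path) as f:
                self.value = int(f.read().strip())
            self._mtime = mtime
        return self.value
```

An operator could then `echo 10 > /etc/zuul/window-size` (a hypothetical path) and the daemon would pick it up on its next pass, no restart needed.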
jeblairfungi: in general?  if 10 mins isn't long enough, how long is enough?18:54
fungias in, how long is nodepool taking to clear a rax vm, or how long does it take after i nova delete before i see it no longer in nova list?18:55
sdagueok... really going to get lunch18:55
jeblairfungi: i think the first one18:55
fungijeblair: i've seen rax, particularly ord, vms hanging around in nodepool list for up to 3 hours, but usually they're gone within 0.5 to 1 hour18:56
*** krotscheck has joined #openstack-infra18:56
jeblairfungi: you have something running right now that deletes things side-channel?18:56
*** kgriffs is now known as kgriffs_afk18:57
jeblairfungi: i see a lot of "NotFound: The resource could not be found. (HTTP 404)" errors in the logs; could it be because of that?18:57
*** jcoufal-mobile has joined #openstack-infra18:57
jeblairfungi: (as in, is it possible your side-channel deleting is getting the jump on the NodeCompleteThread delete?)18:57
fungijeblair: right now, entirely probable18:58
fungii can stop and let it run its course18:58
jeblairfungi: don't worry, if you think it's helping overall it's fine18:58
*** jchiles has quit IRC18:58
russellbwhat's the page to the monitoring dashboard for infra hosts ... so i can see things like load avg on zuul18:58
russellbor cpu consumption or whatever18:58
*** _david_ has quit IRC18:58
jeblairrussellb: http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=2318:58
fungiit is. without doing it, the width of the "used" band in the graph dwindles to about 20% of the aggregate quota at most18:58
russellbjeblair: thanks!18:58
*** lifeless1 is now known as lifeless18:59
lifelessjeblair: will talk after meetings :/18:59
*** thuc has quit IRC18:59
*** thuc has joined #openstack-infra19:00
pleia2meeting time (once they finish up)19:00
mriedemwell the cinder revert patch for 1270608 passed jenkins for a 3rd time https://review.openstack.org/#/c/67973/19:00
*** thuc has quit IRC19:00
russellbjeblair: load not crazy high it seems ... just wondering why it hasn't picked up a change from a while ago (68147,3), which fixes one of the top gate bugs19:00
mriedemalthough the last one didn't have test_volume_boot_pattern19:00
*** thuc has joined #openstack-infra19:01
*** jerryz has quit IRC19:02
*** otherwiseguy has joined #openstack-infra19:03
jeblairrussellb: zuul is not extremely efficient.  the suggestion i sent to the list about splitting out the merger component and horizontally scaling it would help.  also, making it so that it doesn't have to process the full queue on every result event would help.19:03
russellbjeblair: cool, i'm hoping i can study up on the zuul code here soon ... i'd like to help19:03
jeblairrussellb: that last one sounds trivial, but the last time i looked it seemed moderately complex.19:03
*** _david_ has joined #openstack-infra19:04
russellbthere was one suggestion i saw on the infra list that i don't think got a response but sounded interesting ... which was to run more than one zuul, like run one for just check, and the rest in another19:05
jeblairrussellb: (basically, the "result queue" should just be a boolean flag: ">=1 new result received; queue processing needed")19:05
russellbnot sure if that's even possible though19:05
jeblairrussellb: that would be more complex than actually improving zuul, and degrade our experience at the same time.19:05
russellbk :)19:05
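jeblair's boolean-flag point can be sketched with a threading.Event (illustrative only): any number of result events arriving during a queue pass collapse into a single "one more pass needed" wakeup.

```python
import threading

class ResultFlag:
    """Replace a per-result queue with a single flag: 100 results
    landing while the scheduler is mid-pass cost one extra pass,
    not 100 full reprocessings of the whole queue."""

    def __init__(self):
        self._wake = threading.Event()

    def result_received(self):
        self._wake.set()            # idempotent: N calls, one flag

    def wait_for_work(self, timeout=None):
        needed = self._wake.wait(timeout)
        self._wake.clear()          # results after this set it again
        return needed
```

The scheduler loop calls `wait_for_work()` and does one full pass per wakeup, instead of one pass per individual result.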
*** johnthetubaguy1 has quit IRC19:06
*** gema has joined #openstack-infra19:07
*** ok_delta has joined #openstack-infra19:08
*** ok_delta__ has joined #openstack-infra19:08
*** dstufft has quit IRC19:10
*** dstufft has joined #openstack-infra19:10
*** praneshp has quit IRC19:10
ArxCruzjeblair: hey, happy new year :) back to business now; whenever you have a chance, can you take a look at my patch https://review.openstack.org/#/c/62739/19:12
ArxCruz:)19:12
*** dkliban is now known as dkliban_afk19:12
ArxCruzanteaya: you too :)19:12
*** markmcclain has joined #openstack-infra19:12
anteayahey ArxCruz19:12
ArxCruz=D19:12
anteayanice to see you back19:12
ArxCruzthanks19:15
*** melwitt has joined #openstack-infra19:15
ArxCruzwe've been very busy preparing everything to start reporting results19:15
ArxCruzand now it's almost done19:15
anteayacool19:16
*** praneshp has joined #openstack-infra19:16
*** fifieldt has quit IRC19:18
*** krtaylor has quit IRC19:22
*** aburaschi has joined #openstack-infra19:23
*** sarob has quit IRC19:26
*** Ajaeger1 has joined #openstack-infra19:27
*** vkozhukalov has quit IRC19:27
*** nati_ueno has joined #openstack-infra19:27
*** gokrokve has quit IRC19:28
aburaschiHi guys, newbie question: I've noticed something changed in the way tests are listed in tempest. Newlines and formatting in general are no longer working as before. Is this intended behavior? Is there a different way to run tempest other than run_tempest.sh or run_tests.sh?19:30
*** mriedem has quit IRC19:30
*** pblaho has joined #openstack-infra19:31
*** nati_uen_ has quit IRC19:31
anteayaaburaschi: tox19:31
anteayahttp://git.openstack.org/cgit/openstack/tempest/tree/README.rst19:32
*** markmcclain has quit IRC19:32
fungiaburaschi: also, questions about tempest development are probably better handled in #openstack-qa19:32
anteayawell these instructions use testr19:32
anteayaand what fungi said19:32
*** yamahata has quit IRC19:33
openstackgerritJames E. Blair proposed a change to openstack-infra/nodepool: Revert "Default to a ratelimit of 2/second for API calls"  https://review.openstack.org/6821319:33
openstackgerritJames E. Blair proposed a change to openstack-infra/nodepool: Revert "Provide diagnostics when task rate limiting."  https://review.openstack.org/6821419:33
openstackgerritJames E. Blair proposed a change to openstack-infra/nodepool: Try longer to delete servers in the NodeCompleteThread  https://review.openstack.org/6821519:33
*** yamahata has joined #openstack-infra19:33
aburaschianteaya: hey! hi again :)  I'm having a hard time running tox. It fails in the oslo.config package. In the meantime, I was trying with run_* and testr.19:33
emaganaHi Guys! Anyone from Infra who can help with a service account for third party testing?19:34
anteayaaburaschi: see if this helps: http://git.openstack.org/cgit/openstack/oslo-incubator/tree/TESTING.rst19:34
anteayaemagana: best if you email the infra email list19:34
aburaschianteaya: Ok! I'll review the directions you mention and come back if still no luck.19:34
anteayastand by for the address19:34
emaganaanteya: Yes, I did already!19:34
aburaschianteaya: Thanks again :)19:34
emaganaanteaya: My email awaits moderator approval19:35
anteayaemagana: are you subscribed to the list yet?19:35
emaganaanteaya: No, I am not!19:36
anteayawe are in a meeting, after the meeting I will ask mordred or jeblair or pleia2 to approve your email19:36
anteayaemagana: I recommend subscribing19:36
emaganaanteaya: I will subscribe, but the email will still be on hold, right?19:36
fungiemagana: anteaya: pleia2 usually goes through and approves moderated messages periodically19:36
anteayaemagana: for this one, yes19:37
pleia2I let it through19:37
emaganapleia2: Thanks!!!19:37
pleia2sure thing19:37
sdaguehmmpph, zuul's about 4hrs behind processing events now it seems (based on spot check of something in the review queue)19:37
anteayasdague: is that an improvement or slower than it has been?19:37
*** fifieldt has joined #openstack-infra19:38
emaganaanteaya: I guess I just need to wait for the account creation..19:38
fungithough be forewarned, i have a pretty big backlog of third-party testing account add/change requests since i've been busy with other things. i'll take a look at them sometime in the next few days if nobody else beats me to it19:38
anteayaemagana: you might get an email asking for additional details, I don't know I haven't read it yet, if so respond promptly please19:38
emaganaanteaya: I will for sure!19:39
sdagueanteaya: this is new19:39
emaganaanteaya: Thanks a lot!19:39
anteayaand yes, as fungi says, please be patient19:39
*** apevec has joined #openstack-infra19:39
*** apevec has joined #openstack-infra19:39
anteayasdague: hmmmmph19:39
sdaguenew devstack patch posted at 10:30am EST - got into the check queue 30 minutes ago19:39
emaganafungi: Thanks as well19:39
sdagueso slightly less than 4 hrs19:39
apevecsdague, was maybe approve +1 removed ? I had few stable-maint try to approve this leftover release-bump: https://review.openstack.org/6212719:40
apevecand all they could do was +2 review19:40
apevecbut not approve19:40
sdagueapevec: yes, it was removed, because stable/havana can't pass19:41
sdagueand stable maint folks were apparently ignoring the emails we sent about that19:41
apevecsdague, ok, but this is grizzly19:41
sdagueyeh, that couldn't pass either for a while19:41
apevecsdague, big hammer always works :)19:41
*** mestery has joined #openstack-infra19:41
sdaguenot sure if we are passing there or not19:41
apevecsdague, but grizzly should be good now, no?19:41
sdagueapevec: probably, you have current test results?19:42
apeveclemme try recheck on that one19:42
sdagueapevec: so please never +A something like that -https://review.openstack.org/6212719:43
*** senk1 has joined #openstack-infra19:43
sdaguethe last valid tests are from Dec 15th19:43
sdague*so* much could have changed since then19:43
apevecs/could//19:43
jeblairhttp://logs.openstack.org/periodic-stable/periodic-tempest-dsvm-full-grizzly/3fc6377/console.html19:43
sdagueright :)19:43
jeblairsdague: apevec: ^19:43
jeblair(from 2 days ago)19:44
sdaguejeblair: yeh, I think we got a fix in after that, it was definitely broken through the weekend19:44
jeblairoh19:44
jeblairhttp://logs.openstack.org/periodic-stable/periodic-tempest-dsvm-full-grizzly/31473f3/console.html19:44
apevecso no joy on grizzly yet19:44
jeblairsdague: yeah, that's most recent and is success19:45
openstackgerritClark Boylan proposed a change to openstack-infra/zuul: Add rate limiting to dependent pipeline queues  https://review.openstack.org/6821919:45
jeblairapevec: ^19:45
clarkbjeblair: sdague ^ super simple not configurable, but wanted eyes on the general mechanism before I get into it too deep19:45
*** yolanda has quit IRC19:45
sdagueapevec: right now, at current gate queue length, if you approve a change today, it will probably merge friday19:45
apevecsdague, ugh, and we planned stable/havana freeze next week :(19:46
fungisdague: that's optimistic of you!19:46
sdaguefungi: well we were merging > 1 patch / hr overnight19:46
fungiyep19:46
portantesdague: if the change does not need to be reverified, right?19:46
sdagueportante: correct19:47
sdaguethough grizzly has like no tests on it19:47
portantefair enough19:47
*** sarob has joined #openstack-infra19:47
sdaguethe fact that the entire volumes infrastructure was broken in our setup for grizzly until my fix, and tempest only failed 5 tests, gives you an indication of how much more lightly it was tested19:48
sdaguesorry, not my fix19:48
sdaguea fix I got bumped up19:48
*** mriedem has joined #openstack-infra19:49
jgriffithsdague: which fix are you referring to BTW?19:51
jgriffithsdague: I realize infra, but still curious19:51
sdaguethe pip -e install one19:51
sdagueit was a devstack fix19:51
jgriffithsdague: ahh :)19:51
jgriffithsdague: yes19:51
*** marun has joined #openstack-infra19:51
sdaguehttps://review.openstack.org/#/c/67425/19:52
jgriffithso it wasn't the "no-wheel"19:52
sdagueno, turns out it was an older issue19:53
sdaguehonestly, I have no idea how it was working before19:53
*** thuc has quit IRC19:53
*** thuc has joined #openstack-infra19:53
sdagueapevec: this is needed for stable/havana to work - https://review.openstack.org/#/c/67739/19:53
*** thuc has quit IRC19:54
sdagueit's in the queue, but given the queue length, I didn't figure it needed promoting19:54
openstackgerritClark Boylan proposed a change to openstack-infra/config: Pack zuul git refs daily.  https://review.openstack.org/6822219:54
*** thuc has joined #openstack-infra19:54
*** ArxCruz has quit IRC19:55
*** kgriffs_afk is now known as kgriffs19:57
*** GheRivero has quit IRC19:57
*** GheRivero has joined #openstack-infra19:58
*** dkliban_afk is now known as dkliban19:58
*** gokrokve has joined #openstack-infra19:58
sdagueclarkb: why did you add change queue to the queue item in - https://review.openstack.org/68219 ?19:59
apevecsdague, can the queue be force-flushed to let this through?19:59
anteayathanks for doing a great job chairing, pleia219:59
clarkbsdague: because there was no other way I could find to get at the queue a queue item belonged to19:59
anteayayou ended before I could squeeze that into the logs19:59
anteaya:D19:59
clarkbsdague: all of the result processing is on queue items, so from there we need to be able to tell the queue to change the throttle19:59
sdagueclarkb: ok, I see now19:59
*** sarob has quit IRC20:00
pleia2anteaya: hah, thanks :)20:00
*** gokrokve_ has joined #openstack-infra20:00
fungiclarkb: i suppose we could manually run the repack and see if times improve, but i'm not sure how long it's likely to take to finish under current load20:00
*** sarob has joined #openstack-infra20:00
*** markmc has joined #openstack-infra20:01
sdagueapevec: right now, promoting anything other than a suspected fix for a gate reset causing bug needs really strong justification20:01
fungii guess with it on tmpfs though, shouldn't be too impactful20:01
*** pblaho has quit IRC20:02
fungiload average is half the core count on zuul20:03
apevecsdague, I mean 67739 which is a suspected fix20:03
*** gokrokve has quit IRC20:03
sdagueright, but we blocked stable, so stable's not resetting the gate now20:04
*** rakhmerov has quit IRC20:04
*** sarob has quit IRC20:05
jeblairclarkb: you have comments on 68219; in general looks excellent20:05
apevecsdague, I can't see from zuul status page, are jobs for 67739 running or is it waiting in the queue?20:05
*** gokrokve_ has quit IRC20:05
clarkbjeblair: thanks20:06
jeblairis terry wilson in irc?  that looks like a good comment too.20:06
*** mrodden1 has joined #openstack-infra20:07
*** mrodden has quit IRC20:07
clarkbI don't know who terry wilson is and yes good comment20:07
jeblairclarkb: i left one more20:07
apevecotherwiseguy is terry wilson20:07
*** DinaBelova is now known as DinaBelova_20:08
*** bauzas has joined #openstack-infra20:08
otherwiseguyjeblair: hi!20:09
jeblairotherwiseguy: nice to meet you!  thanks for the good comment on clarkb's zuul change20:09
fungihe made more than one good comment on it, in fact20:10
otherwiseguyjeblair: nice to meet you as well. hopefully I'll be around a little more in the near future and be able to contribute a bit.20:10
fungithough i guess one was a comment on a comment. in general i agree with most of the tuning suggestions there20:11
fungis/most/all/20:11
* otherwiseguy is new so might not always know what he's talking about20:12
otherwiseguy;)20:12
*** markwash has quit IRC20:12
fungii think most of those numbers should grow knobs in the config though20:12
clarkbfungi: yup definitely plan to make this configable20:12
clarkbwanted to get mechanics of it in front of people asap though20:13
fungiyeah, it makes sense to me, from a core design perspective20:13
fungiincrement in the good times, halve in the bad, start with a reasonably large number we're unlikely to exceed except when under heavy load20:14
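[editor's note] fungi's "increment in the good times, halve in the bad" is essentially additive-increase/multiplicative-decrease (AIMD), the same shape as TCP congestion control. A minimal sketch under assumed names — not the code in clarkb's actual patch:

```python
# AIMD-style window for how many gate changes zuul tests in parallel:
# grow by one on each successful merge, halve on a gate reset, and
# never drop below a configured floor.

class ChangeWindow:
    def __init__(self, start=20, floor=3):
        self.size = start
        self.floor = floor

    def on_success(self):
        # additive increase: creep up by one per merged change
        self.size += 1

    def on_failure(self):
        # multiplicative decrease: halve on a gate reset
        self.size = max(self.floor, self.size // 2)

w = ChangeWindow(start=20, floor=3)
w.on_failure()    # 20 -> 10
w.on_success()    # 10 -> 11
```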
otherwiseguyThe exponential backoff combined with creeping up by 1 means that the actionable_size will probably mostly be half of what we are capable of, though.20:15
otherwiseguyWe'll creep up, then bam!20:15
*** krtaylor has joined #openstack-infra20:16
*** prad_ has joined #openstack-infra20:16
clarkbotherwiseguy: yup20:16
*** nati_uen_ has joined #openstack-infra20:16
jeblairotherwiseguy: basically i was thinking that having 1 < min < 4 wouldn't waste too many resources, and could still get some of the benefit of the parallel jobs20:16
*** mgagne has joined #openstack-infra20:16
*** elasticio has quit IRC20:17
*** prad has quit IRC20:17
*** prad_ is now known as prad20:17
openstackgerritClark Boylan proposed a change to openstack-infra/zuul: Add rate limiting to dependent pipeline queues  https://review.openstack.org/6821920:17
clarkbthat addresses the comments. I am going to grab lunch then work on making it configurable20:17
clarkbalso tests20:18
*** jcoufal-mobile has quit IRC20:18
*** mriedem has quit IRC20:18
clarkboh lol my #slice and dice here comment should be deleted :)20:18
harlowjaif u guys get a sec, https://review.openstack.org/#/c/65135/ please :)20:18
sdagueclarkb: yeh, honestly floor of 3 is probably safe20:18
clarkbgah I missed logging too ... anyways after lunch20:18
clarkbsdague: ok20:18
*** nati_ueno has quit IRC20:19
harlowjaclarkb for rate-limiting, if u want i made this a while ago, http://paste.openstack.org/show/61647/20:21
harlowjamight be useful in your stuff20:21
fungiclarkb: i'm trying to think through what happens when you have (head) a, b, c, d, e in the gate, a is passing, b is failing, c depends on b and is skipped, d and e are passing (nnfi basing d on a)... with actionable at, say 3 how does that scenario play out?20:22
fungido we not test d and e until a,b,c flush through?20:22
jeblairfungi: i believe that's correct; c counts as one of the actionables20:22
fungiin that case, i wonder whether long dependent series might have some unforeseen impact when actionable gets low20:24
fungii guess not until one of them fails20:24
*** NikitaKonovalov_ is now known as NikitaKonovalov20:24
jeblairfungi: yeah, i think we could end up actually testing 1 change if b and c are dependents and a is failing.20:25
jeblairfungi: which would be a bit sad; zuul would be running jobs for 0 potentially mergeable changes.20:26
sdaguejeblair: oh, because we don't distinguish runnable20:26
sdaguehmmm20:27
jeblairwe could say that's acceptable for a first cut of this algorithm, or we could have 'getActionableItems' be a bit smarter and try to reach deeper and always find x runnable items20:27
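[editor's note] jeblair's "reach deeper" variant of getActionableItems can be modeled simply: instead of taking the first N queue items (which may include changes skipped behind a failure), keep scanning until N runnable items are found. Hypothetical model, not zuul's implementation:

```python
# Scan past non-runnable items (failing, or skipped behind a failure)
# until n runnable changes are collected -- so fungi's a,b,c,d,e case
# with b failing and c skipped still yields three testable changes.
def actionable_items(queue, n):
    """queue: list of (change, runnable) pairs; return up to n runnable."""
    picked = []
    for change, runnable in queue:
        if runnable:
            picked.append(change)
        if len(picked) == n:
            break
    return picked

queue = [('a', True), ('b', False), ('c', False), ('d', True), ('e', True)]
assert actionable_items(queue, 3) == ['a', 'd', 'e']
```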
sdagueso why don't we just fix the value at 10, with an rpc way to update20:27
fungiagreed. this is great as a starting point20:27
sdagueand not try to be clever and adaptive20:27
sdaguebecause it's going to cause possibly other behavior we don't understand20:28
sdagueand right now, is not a great time to add uncertainty20:28
fungisdague: a dependent patch series of 10 would still have the same effect in that case20:28
sdaguefungi: sure20:28
jeblairi think if we set the min to a high enough value, we can go with the clever and adaptive bit20:28
sdaguebut getting those all approved at once is rare20:28
jeblairso clever and adaptive with min=10 perhaps?20:28
sdaguejeblair: sure20:28
*** fifieldt has quit IRC20:30
otherwiseguyjust out of curiosity, how many has it been getting up to right now?20:30
sdaguemerging, or running?20:30
jeblairotherwiseguy: recently we've merged batches of 3 changes together20:31
jeblairotherwiseguy: in the best of times we've seen 2020:31
*** gyee has quit IRC20:32
*** nati_uen_ has quit IRC20:33
otherwiseguysdague: running. i.e. if setting the min actionable_items to 10, how does that compare to what we've been hitting.20:33
sdagueotherwiseguy: we've been getting 40 deep20:33
sdaguewhich is the problem20:33
sdaguethen we have to throw all that away20:33
fungibasically about enough jobs to use most of our 450 nodepool node capacity20:33
fungiwhich varies a bit depending on which changes are in the front of the pipeline and how long jobs run until one completes with a failing result20:34
otherwiseguyYeah, overcommit ratios only work when at least some resources are occasionally idle. :)20:35
*** mriedem has joined #openstack-infra20:35
russellbstill don't see the nova gate bug fix in the queue so it can be promoted :-/20:36
* russellb offers zuul a cookie20:36
zaroclarkb: i didn't need the generic exception. the new scp plugin is on review-dev and have pushed code to the github pull request.20:37
*** mrmartin has joined #openstack-infra20:39
*** sarob has joined #openstack-infra20:39
sdaguerussellb: yeh, we're event backlogged now20:39
* russellb nods20:39
sdaguenotice the event queue length > 100020:39
russellboh snap20:39
sdaguethis was like it was on wed20:39
russellbi wasn't looking at that part of status20:39
sdagueso it's like 3.5hrs for an event to get processed20:40
russellbcan we ninja merge stuff past all this?  that nova bug has 340 hits in 12 hours20:41
sdaguerussellb: yeh, it passed check earlier?20:41
sdaguei'm cool with a ninja merge if we have one set of good test results20:42
russellbno ... hasn't made it to check yet, either20:42
*** pcrews has quit IRC20:42
*** apevec has quit IRC20:43
*** gokrokve has joined #openstack-infra20:43
NobodyCam-infra just a passing "Great Job guys!"20:43
fungiNobodyCam: thanks!20:43
NobodyCamlol think I posted that in the t meeting :-p by mistake20:44
russellbNobodyCam: +1 :)20:44
anteayaNobodyCam: guys in the gender inclusive sense, of course?20:44
NobodyCam:-p20:44
NobodyCamofc20:44
anteaya:D20:44
*** marun has quit IRC20:46
*** gokrokve has quit IRC20:47
*** mrda_ has joined #openstack-infra20:49
*** fifieldt has joined #openstack-infra20:50
*** burt1 has quit IRC20:52
portantefunny, if NobodyCam said, "Great Job girls!", girls in the gender inclusive sense, of course, would it come across in the same way?20:54
*** markmcclain has joined #openstack-infra20:54
NobodyCamportante: :-p20:55
*** kgriffs has left #openstack-infra20:56
*** jasondotstar has quit IRC20:56
portanteNobodyCam: no offense intended, just curious about folks' use of language20:57
NobodyCamnone taken...20:57
NobodyCam:)20:57
*** senk1 has quit IRC20:57
* fungi would take no offense20:58
fungithen again, my long hair gets me called "ma'am" by waiters and store clerks all the time20:58
anteayaI try to go for other terms such as: folks, group, people20:59
*** obondarev_ has quit IRC20:59
fungi"friends!"20:59
anteayathat works20:59
NobodyCam+1 for friends20:59
portantestackers21:00
NobodyCamlol21:00
anteayaactually there was an informal poll about the use of the term guys21:00
anteayafemales read the term as excluding them21:00
*** emagana has quit IRC21:00
* NobodyCam comment was NOT meant to exclude anyone!21:00
*** emagana has joined #openstack-infra21:00
*** mrodden has joined #openstack-infra21:01
jog0if I remember correctly, long queue lengths mean zuul is slow to process gerrit events21:01
*** mrodden1 has quit IRC21:01
* NobodyCam rewords to: -infra just a passing "Great Job everyone!"21:02
openstackgerritlifeless proposed a change to openstack-infra/config: Move tuskar-ui to horizon program.  https://review.openstack.org/6826421:02
*** madmike has joined #openstack-infra21:02
sdaguejog0: you are correct21:02
*** kraman is now known as kraman_lunch21:02
anteayaNobodyCam: :D21:02
NobodyCam:)21:02
*** kgriffs has joined #openstack-infra21:02
kgriffsguys, we really need this patch in for i-2 but zuul hasn't picked it up for hours now - https://review.openstack.org/#/c/68161/21:03
jog0looks like at least we are landing code in o/o today21:03
jog0https://github.com/openstack/openstack/graphs/commit-activity sums it up well21:03
kgriffsanything I can do, or is zuul just backed up?21:03
lifelessjeblair: ok, so nodepool21:03
*** gokrokve has joined #openstack-infra21:03
*** dangers_away is now known as dangers21:04
jog0sdague: did something change late last night because nodepool graph looks less jagged21:04
jog0much less jagged21:04
fungikgriffs: too many devs (and too many bugs). i think ttx said something about probably postponing i-221:04
kgriffsoh21:04
kgriffsgtk!21:04
kgriffsI'll hop over to #openstack-meeting and see21:05
*** mfink has quit IRC21:05
*** prad has quit IRC21:05
fungijog0: i'm constantly running a loop looking for nodepool nodes in a delete state and executing a nodepool delete from the cli as a follow-on to catch deletes which the providers are ignoring21:05
*** zul has quit IRC21:06
fungi(with some rate limiting so i don't spam it too badly)21:06
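[editor's note] the external cleanup loop fungi describes can be sketched like this — column layout and behavior are assumptions, not a copy of the actual loop, and `dry_run` is added here so the sketch is safe to run:

```python
# Poll 'nodepool list'-style output for nodes stuck in the "delete"
# state and re-issue the delete from the CLI, sleeping between calls
# as crude rate limiting so the provider APIs aren't spammed.
import subprocess
import time

def stuck_deletes(listing):
    """Parse listing text; assumed format: first column is the node id."""
    ids = []
    for line in listing.splitlines():
        fields = line.split()
        if 'delete' in fields:
            ids.append(fields[0])
    return ids

def cleanup_pass(listing, dry_run=True):
    deleted = []
    for node_id in stuck_deletes(listing):
        if not dry_run:
            subprocess.run(['nodepool', 'delete', node_id])
            time.sleep(10)   # crude rate limiting between deletes
        deleted.append(node_id)
    return deleted
```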
jog0btw is the console.html missing in logstash bug fixed?21:06
*** markwash has joined #openstack-infra21:07
fungijog0: should be as of the past 24 hours, yes21:07
jog0fungi: ahh that may be it, neat21:07
*** oubiwann_ has quit IRC21:07
jog0fungi: thanks, getting good data in es is really helful, thanks21:07
funginot really neat. ugly, hackish and something we want to solve in nodepool source instead. but for now it's eking out a little extra capacity21:08
fungi(the delete loop)21:08
jog0the neat part is that the graph is less jagged21:09
jog0and that we *think* we know why21:09
*** julim has quit IRC21:09
mikalDoh21:10
*** mrmartin has quit IRC21:10
mikalIRC fail21:10
mikalIs there a guide to running tempest somewhere? I can't see one on the wiki and am ashamed to admit I've never run it in person.21:10
mordredclarkb, fungi: can one of you msg the tenant id of the nodepool account at rackspace to pvo ?21:10
*** markwash_ has joined #openstack-infra21:10
mtreinishmikal: I don't think so. There is: http://docs.openstack.org/developer/tempest/21:10
fungimordred: sure. doing that now21:10
mtreinishbut that probably doesn't have enough detail21:10
russellbmikal: it's pretty easy with devstack21:11
mikalrussellb: oh, its just a script in devstack?21:11
mikalCool21:11
mtreinishmikal: I've been meaning to write some real guides for people who are hand configuring things. (not devstack)21:11
*** pballand has joined #openstack-infra21:11
mikalI'm interested in seeing how bad tempest is for libvirt+lxc21:11
*** markwash has quit IRC21:12
*** markwash_ is now known as markwash21:12
jog0https://jenkins04.openstack.org/job/gate-nova-python27/1215/console HUH21:12
jog0] Resource temporarily unavailable21:12
jog0seen in gate21:12
russellbmikal: i think devstack sets up tempest for you ... check the devstack readme21:12
mtreinishrussellb: yeah it does: http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/tempest21:13
jog0fungi: ^21:13
*** oubiwann_ has joined #openstack-infra21:14
fungijog0: on stdout's fd21:15
fungiweird21:15
fungijog0: oh, that's stdout of a child process21:16
*** mrda is now known as mrda__21:16
fungiso the child presumably died in flames?21:16
*** MarkAtwood has quit IRC21:16
*** mrda_ is now known as mrda21:16
jog0logstash message:"os.read(self.stdout.fileno(), 1024)"21:17
*** pcrews has joined #openstack-infra21:17
dimsjog0, just 14 hits in last 7 days, i see a bunch on 16th21:18
jog0dims: yeah strange21:18
jog0its only in gate-nova-python2721:18
fungiso subprocess tried to run get_schema_cmd and the child's stdout was inaccessible (either not started yet, terminated, or disassociated)21:18
*** madmike has quit IRC21:19
fungii don't think subprocess can normally return control if the descriptors haven't been attached yet (though i could be wrong), so it probably either disassociated after starting or, more likely, died21:19
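[editor's note] "Resource temporarily unavailable" is errno.EAGAIN, which `os.read()` raises when a pipe fd is in non-blocking mode and no data is ready — consistent with fungi's guess that the child died (or never wrote) before the parent's read. A minimal reproduction, not the nova test code:

```python
# Reproduce EAGAIN ("Resource temporarily unavailable") by reading a
# non-blocking pipe that has nothing written to it yet.
import errno
import fcntl
import os

r, w = os.pipe()
flags = fcntl.fcntl(r, fcntl.F_GETFL)
fcntl.fcntl(r, fcntl.F_SETFL, flags | os.O_NONBLOCK)

try:
    os.read(r, 1024)   # nothing written yet
    raised = None
except OSError as e:
    raised = e.errno

assert raised == errno.EAGAIN   # the error string seen in the gate log
os.close(r)
os.close(w)
```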
*** prad has joined #openstack-infra21:20
*** oubiwann_ has quit IRC21:20
sdagueclarkb: so are we backed up on events because of the git gc issue?21:21
*** aburaschi1 has joined #openstack-infra21:21
sdaguebecause zuul was totally on top of events until today21:21
sdagueeven with more things in the queue21:21
jog0fungi: thanks filing a nova bug on this21:22
*** aburaschi has quit IRC21:22
*** nati_ueno has joined #openstack-infra21:23
*** dprince has quit IRC21:23
*** markmcclain has quit IRC21:23
*** aburaschi1 has left #openstack-infra21:23
*** oubiwann_ has joined #openstack-infra21:24
*** markmcclain has joined #openstack-infra21:24
*** hashar has quit IRC21:25
*** beagles has quit IRC21:25
*** b3nt_pin has joined #openstack-infra21:25
*** b3nt_pin is now known as beagles21:25
*** hashar has joined #openstack-infra21:27
openstackgerritDavanum Srinivas (dims) proposed a change to openstack-infra/devstack-gate: Temporary HACK : Enable UCA  https://review.openstack.org/6756421:28
*** jhesketh has joined #openstack-infra21:29
*** markmc has quit IRC21:30
jeblairsdague: a full queue gate reset currently takes 13 minutes; it likely would have taken 4 minutes 4 days ago21:31
sdaguejeblair: ok21:31
jeblairsdague: that could certainly be a big factor, if not the main cause of the queue backlog21:31
sdagueso why the event backup21:31
lifelessjeblair: hi21:31
jeblairlifeless: hi21:31
sdaguewe are literally taking hours to queue something in check now21:32
sdaguenot run anything on it21:32
sdaguebut just to add it to the check queue21:32
lifelessjeblair: I tried to arrange my patchset in order of most-likely-that-jeblair-will-approve :)21:32
*** hashar is now known as hasharMeeting21:32
*** NikitaKonovalov is now known as NikitaKonovalov_21:32
jeblairsdague: i think clarkb's limiter, along with horizontally scaling mergers will substantially help.21:33
openstackgerritJoe Gordon proposed a change to openstack-infra/elastic-recheck: Add fingerprint for bug 1271331  https://review.openstack.org/6827021:33
sdaguejeblair: cool21:33
jeblairsdague: i also believe the optimization i mentioned to russellb (about treating results as a flag, not a queue) would help too, but not quite as substantially21:33
russellbjeblair: yeah i started looking at doing that and then meetings started21:34
openstackgerritMatt Ray proposed a change to openstack-infra/config: Chef style testing enablement and minor speed cleanup starting with checks  https://review.openstack.org/6796421:34
*** melwitt has quit IRC21:34
*** melwitt has joined #openstack-infra21:34
*** melwitt has quit IRC21:35
*** melwitt has joined #openstack-infra21:35
*** melwitt has quit IRC21:35
clarkbok back21:36
*** jhesketh has quit IRC21:37
clarkbjog0: console.html bug is mostly fixed, we need to put a slightly newer verson of the plugin on the masters that fixes a bug we added21:37
*** gyee has joined #openstack-infra21:37
*** melwitt1 has joined #openstack-infra21:38
jeblairmordred: if we add more quota, we need to add more jenkins masters (1 master / 100 nodes of quota)21:38
*** jhesketh has joined #openstack-infra21:38
zaromgagne: were you looking for me?21:38
jheskethHowdy21:38
mgagnezaro: no more =)21:38
*** jhesketh_ has joined #openstack-infra21:39
jog0clarkb: awesome sauce21:40
*** jhesketh_ has quit IRC21:40
*** CaptTofu has quit IRC21:41
*** beagles has quit IRC21:41
*** dkliban has quit IRC21:41
*** jhesketh_ has joined #openstack-infra21:43
openstackgerritA change was merged to openstack-infra/config: Upload pre-release oslo.messaging tags to pypi  https://review.openstack.org/6713121:43
*** b3nt_pin has joined #openstack-infra21:43
*** b3nt_pin is now known as beagles21:43
pballandmordred: were you able to take a look that the congress build scripts?21:44
*** dstufft_ has joined #openstack-infra21:45
*** Ajaeger1 has quit IRC21:46
*** fifieldt has quit IRC21:47
openstackgerritKhai Do proposed a change to openstack-infra/config: point zuul-dev to review-dev  https://review.openstack.org/6827121:47
*** dstufft has quit IRC21:47
*** dstufft_ is now known as dstufft21:48
*** beagles has quit IRC21:48
*** reed has joined #openstack-infra21:52
*** dmsimard has quit IRC21:52
*** jasondotstar has joined #openstack-infra21:52
*** sarob_ has joined #openstack-infra21:53
*** sarob_ has quit IRC21:54
*** sarob_ has joined #openstack-infra21:55
*** sarob has quit IRC21:56
*** jcoufal has joined #openstack-infra21:56
*** sarob_ has quit IRC21:57
sdague7 straight patch merge about to happen21:57
sdaguesorry 621:57
*** rcleere has quit IRC21:58
*** fifieldt has joined #openstack-infra21:58
*** senk has joined #openstack-infra21:58
*** jerryz has joined #openstack-infra21:59
*** rcleere has joined #openstack-infra21:59
jerryzfungi: ping21:59
fungijerryz: what's up? (i'm on a plane right now, so my ping round-trip time isn't so great)22:00
*** rcleere has quit IRC22:00
*** sarob has joined #openstack-infra22:00
*** sarob has quit IRC22:00
*** smarcet has left #openstack-infra22:00
*** sarob has joined #openstack-infra22:01
jerryzfungi: i want to revert a change, but after the revert is created, zuul didn't verify or submit22:01
*** hasharMeeting is now known as hashar22:02
*** sarob_ has joined #openstack-infra22:02
*** jooools has joined #openstack-infra22:02
*** sarob_ has quit IRC22:02
*** miqui has quit IRC22:04
*** miqui has joined #openstack-infra22:04
*** miqui has quit IRC22:04
jog0clarkb sdague: https://review.openstack.org/#/c/67485/  Don't run non-voting gate-grenade-dsvm-neutron22:04
jog022:04
fungijerryz: it's in the gerrit event fifo zuul maintains, and will emerge eventually. at the moment we're seeing zuul take several hours to acknowledge a patchset upload or approval/comment due to voume22:05
jog0that should marginally help with resources and help with our classification rate22:05
*** CaptTofu has joined #openstack-infra22:05
fungier, due to volume22:05
jerryzfungi: i didn't see the change in queue22:06
fungijerryz: the event queue is nearly 1500. that's the list of new events zuul hasn't looked at yet22:06
*** sarob has quit IRC22:06
jog0sdague clarkb: if we map logs/new in grenade to logs then they would get dumped into logstash right?22:07
fungijerryz: once the corresponding event is processed, the change will appear in the appropriate pipeline(s) on zuul's status page22:07
jog0any downside with that approach?22:08
jerryzfungi: ok. just didn't expect it to take that long22:08
jerryzfungi: it was created 11:30pst in the morning22:08
fungijerryz: yes, we're experiencing unprecedented test volume levels today22:08
*** sarob has joined #openstack-infra22:09
jerryzjog0: i just used that workaround a month ago. old stack logs are not indexed though.22:10
*** dizquierdo has joined #openstack-infra22:10
*** nati_ueno has quit IRC22:10
*** mfer has quit IRC22:10
jerryzfungi: thank you. just confirming with you that a revert is no different than a new patchset creation.22:11
*** nati_ueno has joined #openstack-infra22:11
fungijerryz: correct. from zuul's (and gerrit's, and git's) perspective it's just another commit22:12
jog0jerryz: right, this is forward facing22:12
clarkbjog0: a better way to do it would be to implement recursive log searching to some sane depth in the logstash gearman client/workers22:12
*** esker has quit IRC22:12
jog0jerryz: did you just have a symbolic link or something22:12
clarkbjog0: but that may be a lot more work22:13
jerryzjog0: i added the copy in scp plugin. but in my env, the jobs are not generated by jjb22:13
*** harlowja is now known as harlowja_away22:14
jog0clarkb: yeah, we are seeing grenade have a decent number of failures (since we run tempest)22:14
jog0and without the logs we can't classify em22:14
jog0clarkb: what about setting source-file names to: logs/(new)+/screen-...22:15
jog0can the names have regex22:15
mgagnezaro: https://review.openstack.org/#/c/64610/322:16
lifelessjeblair: do you run nodepool with log set to debug ?22:16
mgagnezaro: can we approve?22:16
fungiclarkb: jeblair: is there any interest in seeing whether a manual run of the repack routine from cron gets git operation times on zuul back down to what we saw last week? i/o impact should be minimal since that's all on tmpfs, and zuul's only occupying about half its available vcpus so i would expect performance impact while it's running to be fairly minimal22:17
clarkbfungi: good idea22:17
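[editor's note] the manual repack fungi proposes is presumably along the lines of git's standard maintenance commands; a hedged sketch of what running it over a zuul merger repo might look like (the injectable `run` parameter is added here for illustration):

```python
# Pack loose refs into packed-refs, then repack loose objects into a
# single pack -- the kind of maintenance that keeps per-operation git
# times down on a repo with thousands of zuul refs.
import subprocess

def repack(repo_path, run=subprocess.run):
    run(['git', 'pack-refs', '--all'], cwd=repo_path, check=True)
    run(['git', 'repack', '-a', '-d'], cwd=repo_path, check=True)
```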
zaromgagne: jenkins-job-builder-compare-xml test failed on that one.  not sure why.22:17
mgagneutf-822:17
fungilifeless: we use a logging config which spits debug level to a separate file from info22:17
clarkblifeless: we have nodepool log debug and greater to a debug.log and not debug to other logs22:17
clarkbI may have gone a bit overboard on the next zuul rate limit patchset >_> once I have it pep8 clean will push it up22:18
mgagnezaro: so this might retrigger a full update of all jenkins jobs on openstack-infra22:19
*** vipul is now known as vipul-away22:19
lifelessfungi: oh, I didn't realise that there was a production debug log22:20
lifelessuhm22:20
lifelessso I'd like to gather that data22:20
lifelesscould you perhaps ship me a few hours of debug log ?22:20
openstackgerritJoe Gordon proposed a change to openstack-infra/elastic-recheck: Sort uncategorized fails by time  https://review.openstack.org/6776122:20
zaromgagne: what does that mean?22:21
mgagnezaro: hopefully, nothing =)22:21
fungilifeless: you mean after we restart with that patch in place?22:21
lifelessfungi: answered https://review.openstack.org/#/c/67979/ for you22:22
fungii've hesitated to restart nodepoold yet since i don't want to set us back even further resource-wise22:22
*** jamielennox is now known as jamielennox|away22:22
lifelessfungi: oh, nvm then22:22
lifelessbut I'd really like those stats somehow22:22
lifelessfungi: how do we know that it logs too much data if we haven't run it ?22:23
lifelessjeblair: ^22:23
*** burt1 has joined #openstack-infra22:23
*** oubiwann_ has quit IRC22:23
zaromgagne: got the +2 from me.22:23
*** yassine has quit IRC22:23
fungilifeless: i think the assertion was that under volume like we have now, every worker (all 10 of them) would be logging that every second22:23
*** nati_ueno has quit IRC22:24
dimshmm, updated a review about 45 mins ago and don't see it yet on zuul/ page. another symptom of existing issues? new one?22:24
*** dmsimard has joined #openstack-infra22:25
*** vipul-away is now known as vipul22:25
fungidims: it's related to the current activity volume22:25
dimsthx, just making sure22:25
lifelessfungi: would statsd be ok then ?22:25
*** vipul is now known as vipul-away22:25
lifelessfungi: OTOH if everyone is convinced we're hitting rate limits, perhaps the focus should be on getting below the limit22:25
*** Hefeweizen has quit IRC22:26
openstackgerritClark Boylan proposed a change to openstack-infra/zuul: Add rate limiting to dependent pipeline queues  https://review.openstack.org/6821922:26
dmsimarddims: the queue is quite lengthy right now, have commits that were done 3 hours ago that have not yet been checked22:27
mgagnezaro, fungi, clarkb, jeblair: the approval of this change will retrigger an update of all jenkins jobs through the API: https://review.openstack.org/#/c/64610/ (due to XML change) Is there any reason to refrain from approving it?22:27
*** prad has quit IRC22:27
clarkbjeblair: otherwiseguy sdague latest patchset makes it configurable via the layout.yaml22:27
clarkbharlowja_away: thanks, but we are ratelimiting not over time but based on success and failure rates22:28
*** prad has joined #openstack-infra22:28
fungilifeless: good question. maybe statsd (though that generates network traffic on every hit instead) or some sort of internal counter we could periodically report on?22:31
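The statsd wire format fungi mentions is simple enough to show with just the stdlib; this is a sketch of the counter protocol (fire-and-forget UDP), not nodepool's actual statsd integration, and the metric name is made up:

```python
import socket

def send_counter(name, value=1, host="127.0.0.1", port=8125):
    """Emit a statsd-style counter metric ("name:value|c") over UDP.
    UDP means no response and no error if nothing is listening, which
    is why statsd adds so little overhead per hit."""
    payload = "{}:{}|c".format(name, value).encode("ascii")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, (host, port))
    finally:
        sock.close()
    return payload.decode("ascii")
```

In practice one would use the `statsd` client library rather than raw sockets, but the payload it sends is exactly this shape.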
*** thomasem has quit IRC22:31
jog0do we have a fingerprint for git.openstack.org[0: 2001:4800:7813:516:3bc3:d7f6:ff04:aacb]: errno=Network is unreachable22:31
*** dims has quit IRC22:31
fungijog0: not sure. we also have some additional diagnostics for that one, but it'll be in the gate for a couple days still22:33
*** jamielennox|away is now known as jamielennox22:33
jog0fungi: is message:"fatal: unable to connect to git.openstack.org:" AND filename:"console.html" too generic?22:33
*** praneshp has quit IRC22:33
openstackgerritGregory Haynes proposed a change to openstack-infra/gitdm: Add Gregory Haynes to HP  https://review.openstack.org/6827722:33
fungijog0: you might match on the ipv6 address (it should remain constant unless we need to move the haproxy frontend to a new server)22:34
jog0fungi: that works for me, thanks22:34
fungijog0: and errno=Network is unreachable22:34
*** harlowja_away is now known as harlowja22:35
fungibasically this is hpcloud vms thinking they have ipv6 connectivity even though they don't, so would be good to capture that fairly explicitly22:35
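The failure mode fungi describes — a VM holding an IPv6 address without a working route — is only detectable by actually attempting a connection; a stdlib sketch of such a probe (function name and defaults are invented for illustration):

```python
import socket

def ipv6_reachable(host, port=80, timeout=2.0):
    """Return True only if an IPv6 TCP connection to host can actually
    be opened. Merely having an IPv6 address configured is not enough:
    a node can resolve AAAA records and still have no usable route."""
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6,
                                   socket.SOCK_STREAM)
    except socket.gaierror:
        return False
    for family, socktype, proto, _, sockaddr in infos:
        try:
            with socket.socket(family, socktype, proto) as sock:
                sock.settimeout(timeout)
                sock.connect(sockaddr)
                return True
        except OSError:
            continue
    return False
```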
harlowjaclarkb kk, np, thought it might be useful anyway :)22:35
openstackgerritlifeless proposed a change to openstack-infra/nodepool: Use the nonblocking cleanupServer.  https://review.openstack.org/6800422:35
jog0git.openstack.org: Temporary failure in name resolution22:35
jog0wee22:35
*** dmsimard has quit IRC22:36
jog0two hits for message:"git.openstack.org[0: 2001:4800:7813:516:3bc3:d7f6:ff04:aacb]: errno=Network is unreachable" AND filename:"console.html"22:36
harlowjaclarkb although success/failure rate might just be something u can use instead, replace time dimension with success/failure dimension with that and there u go22:36
*** jooools has quit IRC22:36
jog020 for message:"git.openstack.org: Temporary failure in name resolution" AND filename:"console.html"22:36
* jog0 files two bugs 22:36
fungijog0: that leading 0: might be throwing it, depending on which local address it tries to source from22:38
jog0https://bugs.launchpad.net/openstack-ci/+bug/127038222:39
*** kraman_lunch is now known as kraman22:39
fungijog0: also, we've seen it manifest in the setup logs, not just in the console. i expect a lot more hits there22:40
openstackgerritJoe Gordon proposed a change to openstack-infra/elastic-recheck: Add fingerprint for bug 1270382  https://review.openstack.org/6828022:41
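For context, elastic-recheck fingerprints are small YAML files named after the bug number, each holding one elasticsearch query; a file for this bug would look roughly like the following (query text assembled from the discussion above, exact file layout hedged):

```yaml
# queries/1270382.yaml
query: >
  message:"git.openstack.org" AND
  message:"errno=Network is unreachable" AND
  filename:"console.html"
```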
*** yassine has joined #openstack-infra22:41
jog0are setup logs in elasticsearch22:42
*** praneshp has joined #openstack-infra22:42
funginoidea22:42
jog0 message:"2001:4800:7813:516:3bc3:d7f6:ff04:aacb]: errno=Network is unreachable" has only two hits still22:42
*** krtaylor has quit IRC22:42
jog0with only two hits is it worth filing a CI bug?22:43
lifelessjeblair: I'd like to go through in more detail the approach my patch queue shows, when would be a good time for you?22:43
lifelessjog0: will it always be that ip address?22:44
jog0thats the git.o.o ipv6 addr22:44
*** sarob has quit IRC22:45
*** dcramer_ has quit IRC22:45
*** sarob has joined #openstack-infra22:45
*** dims has joined #openstack-infra22:46
*** dkliban has joined #openstack-infra22:46
jog0sdague: here is another fun failure  message:"Unable to lock the administration directory (/var/lib/dpkg/), is another process using it"22:48
fungijog0: i think there already is a bug filed22:48
jog0fungi: last hit was 2014-01-21T06:38:36.00022:48
jog0oh filed22:48
jog0what project?22:48
fungiopenstack-ci i thought22:49
fungimaybe i imagined it22:49
fungiwe've not seen it often, which is why we didn't promote the diag patch for that22:49
jog0https://bugs.launchpad.net/openstack-ci/+bug/109759222:49
jog0message:"Unable to lock the administration directory (/var/lib/dpkg/), is another process using it" AND filename:"console.html"22:50
*** yamahata has quit IRC22:50
*** SergeyLukjanov is now known as SergeyLukjanov_a22:50
*** yamahata has joined #openstack-infra22:50
*** SergeyLukjanov_a is now known as SergeyLukjanov_22:51
fungithat happens when two things try to install a deb at the same time22:52
openstackgerritJoe Gordon proposed a change to openstack-infra/elastic-recheck: Add fingerprint for bug 1097592  https://review.openstack.org/6828222:52
fungicould indicate a hung apt-get install run22:52
ewindischsdague / jeblair - to continue yesterday's chat about the docker gate... ;-)22:55
*** praneshp has quit IRC22:55
ewindischsdague / jeblair - we're looking at running external infrastructure since running in openstack's infra seems contentious. The question is - if we give you keys to the castle, would you want them?22:56
*** prad has quit IRC22:56
*** gothicmindfood has quit IRC22:57
*** hashar has quit IRC22:57
jog0ewindisch: infra is already constrained on human resources, I don't know if infra can handle any more load without more people22:57
*** sarob has quit IRC22:58
*** praneshp has joined #openstack-infra22:58
ewindischjog0: and the idea wouldn't be that we'd hand it off, we'd continue to support it ourselves, but infra would have access as well.22:59
jog0ewindisch: what would the benefit of giving infra access be?22:59
ewindischbut I agree it could be distracting, which is one reason I ask22:59
ewindischjog0: maybe someone that cares and does have time WOULD want to have access?22:59
*** luis_ has quit IRC23:00
sdagueewindisch: it was only contentious in timeline, like I said lets do this at juno summit23:00
sdagueand plan for it during that cycle23:00
*** _david_ has quit IRC23:00
*** jpich has quit IRC23:00
jog0sdague: so gate queue delay dropped under 2 full days, woot!23:00
sdagueyay!23:00
*** hashar has joined #openstack-infra23:01
sdagueno it didn't23:01
sdague44hrs at top of queue23:01
ewindischsdague: right - so we acknowledge that running it externally is far less contentious for Icehouse and we're looking to do that23:01
jog048hrs in two days right?23:01
sdagueoh, to under 2 days23:01
sdagueI thought we lost 2 days of lag23:01
jog0sdague: baby steps23:01
sdague:)23:01
*** sarob has joined #openstack-infra23:03
fungi68147,3 is finally in the gate. i'll try to catch the next reset to promote it23:03
*** jaypipes has joined #openstack-infra23:03
jog0sdague: you have a minute to talk gate failure classification rate?23:03
*** rnirmal has quit IRC23:04
ewindischjog0 / sdague: anyway, if we provide servers and run them ourselves, do you explicitly not want access to them? We're not asking you to manage or run them, but offering you access should you desire it.23:04
jeblairlifeless: having the cleanup thread do it fails to maximise parallelism23:04
ewindischat least until Juno and we can discuss the future at the summit23:04
*** CaptTofu has quit IRC23:04
sdagueewindisch: ask jeblair on that one23:04
*** ryanpetrello has quit IRC23:04
*** jcoufal has quit IRC23:04
ewindischsdague: yeah, I cc'ed his nick earlier, but I guess he isn't on IRC right now23:05
ewindischthanks23:05
*** dizquierdo has quit IRC23:05
sdagueewindisch: there is just a ton going on right now with the current gate23:05
jeblairewindisch: remind me why running it in openstack-infra is contentious?23:05
ewindischjeblair: sdague doesn't want us to ;-)23:05
ewindischer, in Icehouse23:05
sdaguejeblair: I provided push back that it wasn't fair to come to infra at i2 and try to get that in by icehouse23:05
ewindischsdague can elaborate, but basically the project is constrained on human and hardware resources23:06
sdagueyou are welcome to overrule :)23:06
jeblairewindisch: what are the technical requirements?23:06
russellbcould run in existing dsvm nodes, but requires a 3rd party apt repo for docker right now23:06
*** HenryG has quit IRC23:07
jeblairrussellb: is the 3rd party repo bit likely to change in the future?23:07
russellbbasically need a devstack-gate job running a subset of tempest tests against the docker driver23:07
russellbjeblair: i have no clue23:07
*** sarob has quit IRC23:07
*** harlowja has quit IRC23:07
jeblair(and is the 3rd party repo ubuntu cloud archive by any chance?)23:07
russellbfungi: cool ... it doesn't have check results yet, but it's probably worth the risk ... isn't going to make things much worse anyway23:07
*** gothicmindfood has joined #openstack-infra23:07
russellbjeblair: it is not23:07
ewindischjeblair: we're trying to run devstack-gate with full tempest, triggered on all patchsets.  Tests can run inside VMs which don't necessarily need to be respawned for subsequent tests (i.e. jenkins slave could run on a VM)23:08
* russellb should let ewindisch speak23:08
*** hashar has quit IRC23:08
russellbewindisch: what's the repo you need though, a docker managed one?23:08
ewindischjeblair / russellb : we have precise packages in a docker-managed repo. Tahr has a package upstream, but that doesn't help us... If necessary, I can see about pushing a backport from Tahr into precise or cloud-archive23:09
*** luisg has joined #openstack-infra23:10
ewindischjeblair / russellb: another option is to run docker on the slave and we use our own image pre-loaded with Docker and having all the network resources pre-installed or cached -- I suspect this might be more contentious? Still, this is what we would do for our own externally-managed gate, if we ran it.23:11
russellbewindisch: i kinda feel like at this point timing wise, you should shoot for running it yourself short term, and aim to get into infra under less time pressure23:12
russellbthat lets you control your own destiny a lot more with respect to the nova deadline23:12
jog0russellb: ++23:12
jeblairrussellb, ewindisch: ++23:12
russellband also doesn't put extra pressure on -infra, when they're pretty slammed right now with some critical work23:12
ewindischrussellb: which is what I've been doing. Today's question was if openstack-infra would want keys to our castle, no strings attached23:12
russellbOK, awesome23:12
portanteclarkb: the check job of 67920 seems to be hung in the swift functional tests, which does not usually happen, all the other jobs have finished23:13
ewindischbasically, we'd own it and run it, but we're offering keys to the castle23:13
russellbewindisch: keys to the castle isn't too hard, there are public ssh keys in the infra config repo23:13
jeblairewindisch: if this is runnable in openstack-infra and russellb+nova wants it tested, let's talk about getting it in there in the long run, but i don't think we have time now to do it justice23:13
jeblairewindisch: i don't think we would want/need/use keys23:13
russellbjeblair: yeah, that sounds good to me23:13
ewindischjeblair: I think we all agree to get it upstream in the long term23:13
ewindischjeblair: alright then.23:14
jeblairewindisch: cool; so no keys, and we'll talk when we're less busy.  thanks for asking/checking in.  :)23:14
*** burt1 has quit IRC23:14
ewindischjeblair: thanks23:14
russellbhuzzah, sounds like a good path forward23:14
russellbthanks ewindisch23:14
sdagueagreed23:14
david-lyleok23:15
david-lyleoops23:15
*** dstanek has quit IRC23:15
jeblairlifeless: where was i?  right now the complete threads funnel their actions through >=6 provider managers in parallel23:15
clarkbportante: https://jenkins03.openstack.org/job/check-swift-dsvm-functional/286/console the test does appear to have just stopped23:15
jeblairlifeless: having the cleanup thread means that there's only one thread driving those managers, so you're essentially serializing all of the delete operations23:15
clarkbtests++23:15
jeblairlifeless: perhaps if you had a cleanup thread per provider that would alleviate that problem23:16
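jeblair's per-provider suggestion amounts to giving each provider its own worker thread so one slow cloud can't serialize everyone else's deletes; a generic sketch of the idea (the data shapes and `delete_fn` are placeholders, not nodepool's actual API):

```python
from concurrent.futures import ThreadPoolExecutor

def cleanup_per_provider(provider_nodes, delete_fn):
    """Run the delete loop for each provider in its own thread, rather
    than one global cleanup thread draining all providers in series.

    provider_nodes: dict mapping provider name -> list of node ids
    delete_fn: callable applied to each node id
    """
    def drain(nodes):
        return [delete_fn(node) for node in nodes]

    with ThreadPoolExecutor(max_workers=len(provider_nodes)) as pool:
        futures = {name: pool.submit(drain, nodes)
                   for name, nodes in provider_nodes.items()}
        # Collect per-provider results; a slow provider only delays
        # its own entry, not the submission of the others.
        return {name: future.result() for name, future in futures.items()}
```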
portanteis there a way for me to look at the syslog for that system?23:16
*** dstanek has joined #openstack-infra23:17
clarkbportante: yes, I will hold the node and you can poke at it23:17
jeblairclarkb: that's a lot of knobs.  you sure you want all of them?23:18
clarkbjeblair: no I am not sure I want all of them23:19
clarkbjeblair: I definitely want floor and the starting level, the linear vs exponential and factor stuff I am not sold on23:19
clarkbjeblair: I have a test locally that isn't working because there is at least one bug in that patchset23:19
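The knobs clarkb and jeblair are debating reduce to a small window-update rule; a hedged sketch of the idea (parameter names here are invented, not the patch's actual layout.yaml keys):

```python
def update_window(window, change_succeeded, floor=3,
                  increase=1, decrease_factor=2):
    """Grow the dependent-pipeline window linearly after a successful
    merge; shrink it multiplicatively after a failure, never dropping
    below the floor. This is the TCP-style additive-increase /
    multiplicative-decrease pattern applied to gate depth."""
    if change_succeeded:
        return window + increase
    return max(floor, window // decrease_factor)
```

The "floor and starting level" knobs set `floor` and the initial `window`; the "linear vs exponential and factor stuff" is whether the grow/shrink steps above are additive or multiplicative.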
*** HenryG has joined #openstack-infra23:19
clarkbportante: have an rsa public key I can put on that host?23:20
portantesec23:20
jeblairclarkb: drat. i did not see the bug.  :)23:21
*** jorisroovers has joined #openstack-infra23:21
clarkbjeblair: there are more, each time I fix a thing the test points out more :)23:21
jeblairclarkb: anyway, your call on the knobs.  seems excessive to me but i don't object.  i think this is the highest priority thing for zuul, so when it's ready we'll put it into prod immediately.23:22
clarkbok23:22
jorisrooversfungi: ping23:22
jeblairclarkb: (i definitely agree that floor and level are good; it's the others i'm also not so sure about)23:22
fungiclarkb: the repack finished a minute ago23:23
fungijorisroovers: what's up?23:23
jorisrooversfungi, I've got an issue with git when trying to git review a patch that was rebased on a different patch23:23
jorisrooversfungi, "Errors running git rebase -i remotes/gerrit/master"23:23
jorisrooversnot sure what is going on there. I had to checkout my patchset on a different machine, made a few edits, did a commit, no issues23:24
jorisrooversbut then when doing git review, I ran into this issue23:24
fungijorisroovers: that usually means git-review thinks it needs a rebase but is running into merge conflicts trying to perform one for you. have output you can put on paste.openstack.org and give me a link?23:24
jorisrooversfungi, yeah, that is what I figure as well, but not seeing the issue23:25
jorisrooversnot sure whether output will be very helpful, but pasting anyway, 1 sec23:25
jorisrooversfungi, http://paste.openstack.org/show/61653/23:26
jorisrooversfungi, this is the relevant patch set: https://review.openstack.org/#/c/65515/23:26
* fungi waits for the horrible in-flight wireless to let him through the web proxy23:26
jorisrooversfungi :-)23:27
*** kgriffs is now known as kgriffs_afk23:27
*** krtaylor has joined #openstack-infra23:27
ttxjeblair: sorry we'll not be seeing you here23:28
jorisrooversso there are 2 patches, 1 that creates a new directory, and a second that adds a python file. I rebased the second patch on the first as the new file needs to go in the new directory23:28
fungijorisroovers: so you're trying to have 66854 rebased on the tip of tempest master and 61653 on 6685423:28
fungicorrect?23:29
openstackgerritSean Dague proposed a change to openstack-infra/config: add in elastic-recheck-unclassified report  https://review.openstack.org/6759123:29
* jorisroovers is figuring the patchset numbers23:29
*** oubiwann_ has joined #openstack-infra23:30
jorisrooversfungi, think you mixed up some numbers23:30
*** miqui has joined #openstack-infra23:30
devanandaone ironic patch landed today! \o/23:30
jorisrooverstrying to rebase 65515 of 6685423:30
fungijorisroovers: the error message mentions the commit sha of 66854, which currently (in gerrit) has 61653 depending on it23:31
devananda(not sarcastic, i didn't expect that and am happy)23:31
anteayadevananda: :D23:31
fungidevananda: don't spend it all in one place23:31
openstackgerritClark Boylan proposed a change to openstack-infra/zuul: Add rate limiting to dependent pipeline queues  https://review.openstack.org/6821923:31
devanandafungi: lol23:31
fungiwow, lag getting worse here23:32
jorisrooversfungi: so 61653 is giving me a complete different patchset23:32
jeblairttx: me too.  i not only will miss seeing you all, but also possibly my only chance to ski this season as there is no snow in CA.23:32
jorisrooversfungi, I don't know about that one23:32
clarkbjeblair: ^ that change fixes a bunch of bugs, makes voluptuous happy and adds a simple test. I am hoping to remember how the test stuff works so that I can check the window value after each change reports23:32
fungijorisroovers: it looks like the problem is 66854 conflicts with the tip of master and needs a rebase23:32
ttxjeblair: :(23:32
jorisrooversfungi: 61653 is the paste number :p23:33
jorisrooversah, I need to rebase 66854 on latest master?23:33
jorisrooversthat would make sense23:33
jeblairclarkb: cool, lemme know if you have specific questions23:34
fungijorisroovers: whups... s/61653/65515/23:34
fungisorry23:34
jorisrooversfungi, no worries.23:34
fungijorisroovers: yes23:34
jorisrooversfungi, ok let me try that23:34
*** eharney has quit IRC23:34
*** slong_ has joined #openstack-infra23:34
*** slong has quit IRC23:35
*** thuc has quit IRC23:35
*** thuc has joined #openstack-infra23:36
fungiwow, neutron changes are still gate wrecking balls. when i see one failing, they inevitably have more than one failed voting job23:37
anteayafungi: which one?23:38
russellb"gate wrecking balls"23:38
russellbwow23:38
fungi4 days since the last successful merge into neutron master23:38
fungianteaya: the current failure near the top of the gate23:39
anteayayeah we are having some problems with isolated testing23:39
anteayaI see https://review.openstack.org/#/c/65245/ failing23:39
anteayawhat is it taking down with it?23:39
fungianteaya: a few hundred virtual machines23:39
jorisrooversfungi, so, I believe the issue is that I moved a file to a new directory and that file got updated in master before my patch got merged23:39
fungijorisroovers: sounds likely23:40
jorisrooversfungi, how do I now move that latest version of the file23:40
jorisrooversor better, update the version that I moved to the latest version in master23:41
anteayafungi I feel like I am the target of your frustration23:41
anteayawhich I understand23:41
jeblairfungi, clarkb: when we want to restart zuul to pick up clarkb's patch, i think we could pause all the jenkins masters and let the event queue catch up; then when it's 0, we can save it, restart zuul, restore the queue and unpause the masters23:41
fungianteaya: not at all!23:41
anteayabut what I lack is context to go back and take action23:41
*** markmcclain has quit IRC23:41
jeblairmordred: ^23:41
* fungi is in no way frustrated, merely mentioning that pretty much every neutron change in the gate right now is causing resets all the way up as it travels23:41
jeblairmordred, clarkb, fungi: (iow, that is an outline of a possible generalized way to allow zuul's queues to catch up in situations like this)23:42
fungijeblair: that sounds reasonable23:42
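The save/restore half of jeblair's outline is essentially serializing pending events across a restart; a generic stdlib sketch (zuul's real queue items are richer objects than plain strings):

```python
import json
import queue

def drain_to_snapshot(q):
    """Drain a queue.Queue into a JSON snapshot so pending items can
    survive a process restart. Assumes items are JSON-serializable."""
    items = []
    while True:
        try:
            items.append(q.get_nowait())
        except queue.Empty:
            break
    return json.dumps(items)

def restore_from_snapshot(q, snapshot):
    """Re-enqueue items from a snapshot, preserving their order."""
    for item in json.loads(snapshot):
        q.put(item)
```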
anteayafungi: if that is the case, do you want them all sniped out?23:42
fungianteaya: i have no particular wants in this situation, but others might23:43
anteayaif the decision is to snipe I will snipe23:43
anteayaI have worked hard as have others to remove the scapegoat energy from neutron23:43
fungijeblair: i likely won't be online for the zuul restart (i expect it to happen about when i'm sprinting between gates in vegas) but sounds good23:43
anteayawe seemed to have been making progress and I don't want to undo all our hard work today23:43
* anteaya is prepared to snipe23:43
fungianteaya: most (all?) of the neutron changes from the sprint etherpad weren't passing check jobs, but the tempest changes all went in so maybe rechecking some now would get fresh results on their efficacy23:44
fungianteaya: unless the devs are still in a pow-wow over how to improve them first23:45
anteayawould that change the rate of failure on the neutron jobs in the gate?23:45
anteayamostly they keep asking me what they can do to fix the gate situation and not make it worse23:46
anteayaI have most of them in a holding pattern23:46
anteayaI was suggesting to them to not submit anything new to gerrit that wasn't absolutely required23:47
anteayaI can recheck the sprint etherpad neutron changes but I would prefer to wait until i2 is cut tomorrow23:47
anteayaload on zuul and all23:47
sdagueanteaya: I think the neutron changes need to be prioritized based on what will make the isolated job pass first23:47
*** blamar has quit IRC23:47
anteayaI would like to focus on the gate tonight23:47
sdaguebecause its failure rate is super high right now23:48
russellbfungi: looks like no dsvm jobs running at the head of the gate, may be a good time to promote?23:48
anteayait is failing, and I am paraphrasing salv-orlando here because the isolated job is not in the voting jobs, it is just in the experimental jobs23:49
fungirussellb: yep, promotingitnow23:49
fungiwow, serious spacebar fail23:49
clarkbanteaya: isolated is voting23:49
* russellb crosses fingers he doesn't break the world23:49
anteayaso to ensure new patches don't break the isolated job, the experimental jobs need to pass first23:49
*** praneshp has quit IRC23:49
clarkbgate-tempest-dsvm-neutron-isolated and gate-tempest-dsvm-neutron-isolated-pg23:50
anteayathen I am confused23:50
clarkber pg-isolated23:50
*** thuc has quit IRC23:50
*** thuc has joined #openstack-infra23:50
fungianteaya: isolated is voting *on neutron changes* so any neutron changes approved into the gate are basically destined to fail23:51
anteayaso I should snipe them23:51
anteayasince I don't know what patches are designed to fix isolated jobs23:51
fungior we should set those jobs non-voting check-only, if that's the decision23:51
anteayathat is what I am hearing23:51
anteayabut then that code gets merged and then we have to go back and fix that in order to make those jobs voting again23:52
anteayawe haven't merged anything in 4 days23:53
anteayanot merging anything today is not a hardship23:53
sdagueanteaya: parallel is not voting23:53
sdaguebut serial isolated is23:53
fungianother alternative is to revoke approval on neutron like was done for stable branches, but that's a very heavy hammer and i'm not a fan of it23:54
*** thuc has quit IRC23:55
anteayaI see two neutron patches in the gate23:55
anteayaI will snipe them and shoulder the neutron fallout23:55
jorisrooversfungi, still struggling here, starting over...23:55
fungiyeah, i think those were probably approved before word got around23:55
*** alexpilotti has quit IRC23:56
anteayahttps://review.openstack.org/#/c/65245/ and https://review.openstack.org/#/c/53609/23:57
fungijorisroovers: sorry, i don't know off the top of my head how to work around git's file move detection not recognizing it during a rebase/conflict resolution. i started to search the internet for recommendations, but it's not easy from an airplane23:57
anteayaI'd rather take the wrath of neutron than try to undo the wrath of everybody else directed at neutron23:57
fungijorisroovers: maybe someone else here knows (or can find you) the answer if you're coming up short23:57
jorisrooversfungi, no worries23:57
jorisrooversI tried checking out the relevant files from master and moving them again, but that didn't really work23:58
fungijorisroovers: i would probably resort to cherry-picking the changes, maybe with a manual move of the file in between23:58
fungii'm sure there's a way to do it in an interactive rebase short of manually diffing the two files during the conflict resolution, i just don't know what the best alternative is23:59
*** praneshp has joined #openstack-infra23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!