Wednesday, 2014-01-08

00:03 <sdague> clarkb: it's modified more than you think
00:03 <sdague> es has very complicated query structure
00:04 *** sarob_ has quit IRC
00:05 *** sarob has joined #openstack-infra
00:09 *** sarob has quit IRC
00:10 <openstackgerrit> Jeremy Stanley proposed a change to openstack-infra/jeepyb: Create scratch git repos  https://review.openstack.org/65400
00:10 <fungi> jeblair: there's a first stab ^
00:11 <jeblair> fungi: awesome; i've written the failing test for zuul replication; the code shouldn't be too much longer, though it's getting to be breakfast/transit time here
00:13 *** esker has joined #openstack-infra
00:13 <jeblair> fungi: that looks great; is it worth not having a default, and if it isn't set, not creating them?
00:14 <fungi> jeblair: fair enough, we just need to make sure to pass the environment variable in puppet
00:14 <fungi> i'll do that
00:14 *** kraman has quit IRC
00:15 <jeblair> fungi: ok.  i think we should call it '/zuul' to be clear about what the contents of the repos are
00:16 <fungi> jeblair: agreed. i just wanted to go with something generic about the code and then we can override it to whatever name is most effective for our use case
00:16 *** pballand has quit IRC
00:16 <jeblair> ++
00:17 *** pmathews has quit IRC
00:17 *** pmathews1 has joined #openstack-infra
00:19 *** eharney has quit IRC
00:20 *** fifieldt has joined #openstack-infra
00:21 *** ruhe is now known as _ruhe
00:23 <clarkb> sdague: mriedem gotcha
00:23 *** banix has quit IRC
00:23 <clarkb> zaro: the problem is everyone has access
00:24 <clarkb> zaro: that is not a good thing as melody lets you do stuff
00:25 <fungi> clarkb: everyone, not just admins?
00:25 <clarkb> fungi: it let me in without logging in
00:26 *** yamahata has joined #openstack-infra
00:26 <clarkb> unless I was ninja logged in
00:26 <fungi> oh, ew
00:27 *** mriedem has joined #openstack-infra
00:27 *** yidclare1 has quit IRC
00:29 *** kraman has joined #openstack-infra
00:30 <openstackgerrit> Jeremy Stanley proposed a change to openstack-infra/jeepyb: Create scratch git repos  https://review.openstack.org/65400
00:30 *** mriedem has quit IRC
00:31 <fungi> jeblair: there's the off-by-default version ^
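For context, the "off-by-default" behaviour being reviewed amounts to gating scratch-repo creation on an environment variable. A minimal sketch, assuming a hypothetical JEEPYB_SCRATCH_SUBPATH variable (the actual variable name and the puppet wiring live in the 65400 review and are not shown in this log):

```python
import os
import subprocess

# Hypothetical variable name; the real one is defined in the review above.
SCRATCH_SUBPATH = os.environ.get('JEEPYB_SCRATCH_SUBPATH')

def create_scratch_repo(base_dir, project):
    # Off by default: with no subpath configured, do nothing at all.
    if not SCRATCH_SUBPATH:
        return
    repo = os.path.join(base_dir, SCRATCH_SUBPATH, '%s.git' % project)
    if not os.path.isdir(repo):
        os.makedirs(repo)
        subprocess.check_call(['git', 'init', '--bare', repo])
```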
00:31 *** mriedem has joined #openstack-infra
00:31 *** wenlock has quit IRC
00:32 <fungi> oh, wait, bug :(
00:34 <openstackgerrit> Jeremy Stanley proposed a change to openstack-infra/jeepyb: Create scratch git repos  https://review.openstack.org/65400
00:34 <fungi> better ^
00:34 <zaro> clarkb: yikes!
00:35 <jeblair> fungi: cool, transit time.  biab
00:37 *** SergeyLukjanov is now known as _SergeyLukjanov
00:37 <openstackgerrit> Davanum Srinivas (dims) proposed a change to openstack-infra/devstack-gate: Gather horizon/apache2 logs  https://review.openstack.org/64490
00:38 <fungi> just realized i could do that way more efficiently. conditional in a loop is silly
00:40 *** yamahata has quit IRC
00:41 *** _cjones_ has joined #openstack-infra
00:46 *** pmathews1 has quit IRC
00:49 *** hunner1 is now known as Hunner
00:52 *** pelix has joined #openstack-infra
00:53 *** nosnos has joined #openstack-infra
00:58 <openstackgerrit> Jeremy Stanley proposed a change to openstack-infra/jeepyb: Create scratch git repos  https://review.openstack.org/65400
00:59 *** dcramer_ has joined #openstack-infra
01:00 *** UtahDave has quit IRC
01:02 *** devanand1 is now known as devananda
01:02 *** hogepodge has quit IRC
01:04 *** slong has joined #openstack-infra
01:11 *** herndon has quit IRC
01:12 *** yaguang has joined #openstack-infra
01:14 *** dkliban is now known as dkliban_afk
01:22 *** mrodden1 has quit IRC
01:24 *** ryanpetrello has joined #openstack-infra
01:24 *** senk has quit IRC
01:26 *** melwitt has quit IRC
01:26 *** praneshp has quit IRC
01:27 *** marun has quit IRC
01:27 *** yidclare has joined #openstack-infra
01:31 *** resker has joined #openstack-infra
01:32 *** yidclare has quit IRC
01:33 *** esker has quit IRC
01:34 *** CaptTofu has quit IRC
01:34 *** CaptTofu has joined #openstack-infra
01:35 <openstackgerrit> Jeremy Stanley proposed a change to openstack-infra/config: Pass a zuul scratch subpath to create-cgitrepos  https://review.openstack.org/65403
01:35 *** CaptTofu has quit IRC
01:35 <fungi> jeblair: and that's ^ the config change to make use of it. i'd roll in credentials for zuul to push with, but not sure how you're engineering that to work so i'll hold off a bit
01:35 *** CaptTofu has joined #openstack-infra
01:39 *** CaptTofu has quit IRC
01:42 <pelix> Would like some input on https://review.openstack.org/#/c/63579/ - the current regex results in inconsistent behaviour once you have a certain number of levels of xml tags
01:42 <pelix> PyXML fixes this by producing consistent XML on python 2.6 but is not maintained (i.e. unlikely to work with python 3), and the 're' module doesn't support recursive regexes, which is probably what's required to apply a regex that doesn't screw up
01:42 *** yamahata has joined #openstack-infra
01:43 *** nosnos has quit IRC
01:44 *** nosnos has joined #openstack-infra
01:44 *** pcrews has quit IRC
01:45 <pelix> That leaves either the 'regex' project on pypi, which supports recursive regexes (likely what's required to not have the regex screw up on different levels of tags) and is supposedly intended to replace python's re, or writing a prettyprint function that works with elementtree and produces consistent output across multiple versions of python.
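The elementtree option pelix mentions is workable without PyXML or recursive regexes. A sketch of a pretty-printer that yields identical output across python versions (illustrative only, not the patch actually under review at 63579):

```python
import xml.etree.ElementTree as ET

def indent(elem, level=0):
    # Insert newline/indent whitespace in place so that tostring() emits
    # the same pretty-printed XML on python 2.6, 2.7 and 3.x.
    pad = '\n' + level * '  '
    if len(elem):
        if not elem.text or not elem.text.strip():
            elem.text = pad + '  '
        for child in elem:
            indent(child, level + 1)
        if not child.tail or not child.tail.strip():
            child.tail = pad  # dedent after the last child
    if level and (not elem.tail or not elem.tail.strip()):
        elem.tail = pad

root = ET.fromstring('<project><builders><shell/></builders></project>')
indent(root)
print(ET.tostring(root).decode())
```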
01:45 <openstackgerrit> Jeremy Stanley proposed a change to openstack-infra/config: Allow zuul to push to git servers  https://review.openstack.org/65405
01:45 <fungi> jeblair: a guess ^ at the git farm end of granting zuul access
01:46 *** ^d has joined #openstack-infra
01:47 <openstackgerrit> A change was merged to openstack-dev/hacking: Add H904: don't wrap lines using a backslash  https://review.openstack.org/64584
01:48 *** weshay_afk has quit IRC
01:49 *** senk has joined #openstack-infra
01:54 <openstackgerrit> Mathieu Gagné proposed a change to openstack-infra/config: Remove unit tests against Puppet 3.0.x  https://review.openstack.org/65406
01:59 *** senk has quit IRC
02:00 *** senk has joined #openstack-infra
02:01 *** marco-chirico has joined #openstack-infra
02:01 *** oubiwann has joined #openstack-infra
02:15 *** reed has quit IRC
02:18 *** kraman has quit IRC
02:19 *** oubiwann has quit IRC
02:23 *** nati_ueno has quit IRC
02:26 *** ^d has quit IRC
02:29 *** tianst20 has quit IRC
02:32 *** resker has quit IRC
02:33 *** markmcclain has quit IRC
02:41 *** pcrews has joined #openstack-infra
02:41 *** tian has joined #openstack-infra
02:42 *** loq_mac has joined #openstack-infra
02:43 *** nosnos has quit IRC
02:43 *** coolsvap has quit IRC
02:44 *** ryanpetrello has quit IRC
02:44 *** changbl has joined #openstack-infra
02:46 *** fallenpegasus has joined #openstack-infra
02:48 *** talluri_ has joined #openstack-infra
02:49 *** kraman has joined #openstack-infra
02:51 *** marco-chirico has left #openstack-infra
02:54 *** kraman has quit IRC
02:55 *** dhellmann has quit IRC
02:56 *** fallenpegasus has quit IRC
02:57 *** oubiwann has joined #openstack-infra
02:59 *** fallenpegasus has joined #openstack-infra
03:06 *** fallenpegasus has quit IRC
03:08 *** talluri_ has quit IRC
03:09 *** dcramer_ has quit IRC
03:16 <openstackgerrit> Eli Klein proposed a change to openstack-infra/jenkins-job-builder: Added rbenv-env wrapper  https://review.openstack.org/65352
03:20 *** fallenpegasus has joined #openstack-infra
03:21 *** loq_mac has quit IRC
03:24 *** pcrews has quit IRC
03:34 *** fallenpegasus has quit IRC
03:38 *** loq_mac has joined #openstack-infra
03:40 <SpamapS> wooooot
03:40 <SpamapS> netaddr 0.7.10 is on pypi
03:44 <clarkb> nice
03:44 *** fungi has quit IRC
03:45 *** nicedice has quit IRC
03:47 <openstackgerrit> Eli Klein proposed a change to openstack-infra/jenkins-job-builder: Add local-branch option  https://review.openstack.org/65369
03:48 <openstackgerrit> James E. Blair proposed a change to openstack-infra/zuul: Add Zuul ref replication  https://review.openstack.org/65410
03:48 <jeblair> fungi: (is gone :( ) ^
03:48 *** vipul is now known as vipul-away
03:48 *** nicedice has joined #openstack-infra
03:50 *** kraman has joined #openstack-infra
03:50 *** fallenpegasus has joined #openstack-infra
03:51 *** marun has joined #openstack-infra
03:52 *** harlowja is now known as harlowja_away
03:52 *** vipul-away is now known as vipul
03:52 *** fungi has joined #openstack-infra
03:54 *** kraman has quit IRC
03:56 <openstackgerrit> James E. Blair proposed a change to openstack-infra/config: Allow zuul to push to git servers  https://review.openstack.org/65405
04:00 *** coolsvap has joined #openstack-infra
04:01 *** reed has joined #openstack-infra
04:07 *** praneshp has joined #openstack-infra
04:08 *** praneshp_ has joined #openstack-infra
04:08 <openstackgerrit> James E. Blair proposed a change to openstack-infra/config: Have Zuul replicate to git.o.o  https://review.openstack.org/65412
04:12 *** praneshp has quit IRC
04:12 *** praneshp_ is now known as praneshp
04:13 *** loq_mac has quit IRC
04:15 *** AaronGr is now known as AaronGr_Zzz
04:15 <openstackgerrit> James E. Blair proposed a change to openstack-infra/zuul: Add Zuul ref replication  https://review.openstack.org/65410
04:17 <fungi> reviewing those
04:18 <jeblair> fungi: cool; note i updated your ssh key change
04:19 <openstackgerrit> A change was merged to openstack-infra/jenkins-job-builder: Implements: Archive publisher allow-empty setting.  https://review.openstack.org/62806
04:19 <jeblair> fungi: (which makes me think we should probably give zuul a new key, but maybe later)
04:19 <fungi> jeblair: on 65405, do you think that's safe enough, or should we set separate ownership over /var/lib/git/zuul and use a separate push account from the one gerrit uses so that we don't accidentally destroy one set of repos or the other via misconfiguration?
04:19 <jeblair> fungi: that would make me happy.
04:19 *** fallenpegasus has quit IRC
04:19 <jeblair> fungi: 2 accounts.  not destroying them.  :)
04:19 <fungi> seems a marginal risk, so i wasn't overly worried, but can certainly separate them
04:20 *** fallenpegasus has joined #openstack-infra
04:20 <clarkb> fungi: jeblair's net blew up, we are headed to lunch
04:20 <clarkb> sorry
04:21 <fungi> clarkb: no worries. my net keeps blowing up too
04:25 <openstackgerrit> Noorul Islam K M proposed a change to openstack-infra/config: Remove oslo.sphinx from test-requirements.txt  https://review.openstack.org/65414
04:26 *** pballand has joined #openstack-infra
04:27 *** praneshp has quit IRC
04:30 *** pcrews has joined #openstack-infra
04:31 *** senk has quit IRC
04:35 *** fallenpegasus has quit IRC
04:36 *** fallenpegasus has joined #openstack-infra
04:37 *** Ryan_Lane has joined #openstack-infra
04:40 *** fallenpegasus has quit IRC
04:41 *** rhsu has joined #openstack-infra
04:42 *** marun has quit IRC
04:45 *** mriedem has quit IRC
04:48 *** esker has joined #openstack-infra
04:50 *** kraman has joined #openstack-infra
04:52 *** rcarrillocruz has quit IRC
04:54 *** ryanpetrello has joined #openstack-infra
04:54 *** tian has quit IRC
04:55 *** ryanpetrello has quit IRC
04:56 *** senk has joined #openstack-infra
05:00 *** kraman has quit IRC
05:00 *** chandankumar has joined #openstack-infra
05:04 *** dhellmann has joined #openstack-infra
05:07 *** talluri has joined #openstack-infra
05:08 *** talluri has quit IRC
05:14 *** kraman has joined #openstack-infra
05:20 <openstackgerrit> Jeremy Stanley proposed a change to openstack-infra/config: Allow zuul to push to git servers  https://review.openstack.org/65405
05:20 <openstackgerrit> Jeremy Stanley proposed a change to openstack-infra/config: Have Zuul replicate to git.o.o  https://review.openstack.org/65412
05:22 <jeblair> fungi: back
05:22 <fungi> jeblair: working on the ownership tweak for the jeepyb change now, and then they're probably ready
05:30 <clarkb> jeblair: I left a comment on one of the config changes
05:30 <openstackgerrit> Jeremy Stanley proposed a change to openstack-infra/jeepyb: Create scratch git repos  https://review.openstack.org/65400
05:30 <jeblair> clarkb: let's change zuul's key later, but that's the situation we have now
05:31 *** dhellmann has quit IRC
05:31 <jeblair> clarkb: (jenkins pushing our tarballs and merges changes, so it's not a huge security profile change)
05:31 <jeblair> s/pushing/pushes/
05:32 <clarkb> ok will change my vote in a minute, reviewing the jeepyb change now
05:32 <jeblair> clarkb: there's a cron that changes its name in there; won't puppet make a new cron entry without an 'ensure=>absent'?
05:32 <jeblair> clarkb, fungi: assuming so, i think we should just leave the original cron name
05:33 <clarkb> jeblair: oh right it will
05:33 *** loq_mac has joined #openstack-infra
05:33 <clarkb> the old one will stick around
05:33 <jeblair>     content => $git_gerrit_ssh_key,
05:33 *** dcramer_ has joined #openstack-infra
05:34 <jeblair> quotes were added around that, should we go back to that version?
05:34 <jeblair> (the quotes were added for the ps that had 2 keys)
05:35 *** oubiwann has quit IRC
05:36 <openstackgerrit> James E. Blair proposed a change to openstack-infra/config: Allow zuul to push to git servers  https://review.openstack.org/65405
05:36 <jeblair> fungi, clarkb: i made those changes ^
05:36 <fungi> jeblair: makes sense on the cron job. i just spotted the same quoting issue as well as a couple other bugs on the 65405 change i'm marking, but my internet access issues are maddening
05:36 <fungi> gah
05:37 <clarkb> jeblair: were you going to fix the cron job?
05:38 <fungi> oh well, i left my comments on the previous patchset
05:39 <jeblair> clarkb: didn't it?
05:40 <jeblair> clarkb: didn't i?
05:40 <clarkb> jeblair: oh you did, I was looking at the wrong diff
05:40 <clarkb> fungi: I don't understand the comment that says this should be /var/lib/git
05:41 <clarkb> fungi: my eyes tell me the two strings match
05:41 <fungi> did i typo it twice?
05:41 <fungi> should be /var/lib/git/zuul
05:41 <fungi> i should leave this to more awake people with less broken internets
05:41 <clarkb> fungi: :) np
05:42 *** dhellmann has joined #openstack-infra
05:42 <jeblair> fungi: are you going to address those comments or should i?
05:43 <fungi> jeblair: i can fix it
05:43 *** tian has joined #openstack-infra
05:44 *** banix has joined #openstack-infra
05:45 *** nicedice has quit IRC
05:46 *** nicedice has joined #openstack-infra
05:47 *** yaguang has quit IRC
05:48 *** mlipchuk has joined #openstack-infra
05:51 *** fallenpegasus has joined #openstack-infra
05:52 *** fallenpegasus has quit IRC
05:52 <openstackgerrit> Jeremy Stanley proposed a change to openstack-infra/config: Allow zuul to push to git servers  https://review.openstack.org/65405
05:53 <openstackgerrit> Jeremy Stanley proposed a change to openstack-infra/config: Have Zuul replicate to git.o.o  https://review.openstack.org/65412
05:53 *** fallenpegasus has joined #openstack-infra
05:53 *** reed has quit IRC
05:56 <openstackgerrit> James E. Blair proposed a change to openstack-infra/zuul: Add Zuul ref replication  https://review.openstack.org/65410
05:58 <jeblair> fungi, clarkb: i approved the changes to create the repos
05:59 <fungi> cool
06:00 *** pelix has left #openstack-infra
06:00 *** banix has quit IRC
06:01 *** fallenpegasus has quit IRC
06:02 <fungi> also, not sure if the aussie contingent caught this morning's fun in scrollback (overnight for you), but we accidentally upgraded all the precise slaves to libvirt 1.1.1 for a little while... details in bug 1266711
06:02 <clarkb> jeblair: can you look at the inline comment on ps2 of the zuul change?
06:02 <jeblair> fungi: thx
06:03 <clarkb> fungi: I did catch that :/
06:04 <fungi> for future reference, removing an apt repo object in puppet does not remove the source.list snippet it installs
06:04 <fungi> er, sources.list
06:07 <jeblair> aaaaah
06:07 *** morganfainberg has quit IRC
06:07 *** loq_mac has quit IRC
06:08 *** fallenpegasus has joined #openstack-infra
06:10 <fungi> that and meetings sucked up a good chunk of my day. sat in on the defcore call, which kept getting into the weeds. i still don't think they're anywhere near being ready for designing the automated scorecard reporting infrastructure
06:11 <openstackgerrit> A change was merged to openstack-infra/jeepyb: Create scratch git repos  https://review.openstack.org/65400
06:11 <fungi> though i'm starting to wonder whether the tripleo baremetal test cloud should be the exemplar for refstack
06:12 <jeblair> lifeless: ^
06:12 <fungi> rather than waiting for someone to make a separate refstack we can baseline
06:14 <fungi> seems like something we should push for, to help conserve/combine effort
06:14 *** loq_mac has joined #openstack-infra
06:15 <clarkb> ++
06:15 <clarkb> especially since it is community run
06:15 <fungi> i'm watching git01... it's already got that jeepyb update now
06:15 <openstackgerrit> A change was merged to openstack-infra/config: Pass a zuul scratch subpath to create-cgitrepos  https://review.openstack.org/65403
06:16 <fungi> so once it gets that ^ puppet change, i should see tons of empty repos for zuul appear thereafter
06:16 *** rhsu has quit IRC
06:16 *** rhsu has joined #openstack-infra
06:17 *** yaguang has joined #openstack-infra
06:18 <openstackgerrit> James E. Blair proposed a change to openstack-infra/config: Allow zuul to push to git servers  https://review.openstack.org/65405
06:19 *** markmc has joined #openstack-infra
06:19 <openstackgerrit> James E. Blair proposed a change to openstack-infra/config: Have Zuul replicate to git.o.o  https://review.openstack.org/65412
06:19 <jeblair> syntax fix
06:20 *** changbl has quit IRC
06:20 <fungi> oh, right, those repos aren't going to get created unless the cgit list file gets changed (since the exec is subscribed to changes on that file)
06:20 *** rhsu has quit IRC
06:20 * fungi fakes it out
06:22 <jeblair> fungi: sounds god
06:22 <jeblair> good
06:23 <jeblair> so when it's time to upgrade zuul, i'm going to save the queue, shut it down, remove all the git repos[1], start it, replace the queue
06:23 *** changbl has joined #openstack-infra
06:24 <jeblair> [1] is just to delete current zuul refs to speed things up a bit
06:25 <jeblair> well, we could probably skip that; i don't think they are having a big impact.
06:25 <jeblair> consider that stricken from the plan
06:27 *** rhsu has joined #openstack-infra
06:27 *** senk has quit IRC
06:29 <clarkb> so a funny thing has happened since the new year
06:29 <clarkb> the logs/syslog.txt files are being indexed as from 2013
06:30 <clarkb> it is very odd and I haven't sorted out why yet
06:30 <jeblair> that is interesting
06:30 <clarkb> can you say y2k
06:31 <dstufft> fungi: did you see https://github.com/drkjam/netaddr/issues/57#issuecomment-31796111 ?
06:31 *** rhsu has quit IRC
06:31 *** rhsu1 has joined #openstack-infra
06:32 *** fallenpegasus has quit IRC
06:33 <openstackgerrit> Jeremy Stanley proposed a change to openstack-infra/jeepyb: Correct variables masquerading as strings  https://review.openstack.org/65420
06:34 <fungi> adding an extra blank line to /home/cgit/projects.yaml was enough to trigger the exec, but it can haz bug ^
06:34 <fungi> dstufft: yeah, SpamapS mentioned it a little while ago
06:35 <lifeless> jeblair: hi, 'sup?
06:35 <jeblair> lifeless: fungi had a suggestion about refstack and tripleo; not urgent
06:36 <fungi> lifeless: was mainly wondering whether anyone had previously discussed the possibility of refstack becoming the reference implementation for refstack, since nobody seems to be putting time into a separate refstack
06:36 <clarkb> I have a hunch that because syslog doesn't include a year, the year is coming from when the process started, possibly?
06:36 <fungi> er, of the tripleo baremetal test cloud
06:36 <clarkb> *the logstash indexer process
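clarkb's hunch describes a real failure mode: syslog timestamps carry no year, so whoever parses them has to pick one, and a long-lived process that picks once at startup goes stale at new year. A small illustration (not the actual logstash code):

```python
from datetime import datetime

# A syslog line looks like "Jan  8 06:36:00 host daemon: ..." -- no year.
stamp = datetime.strptime('Jan  8 06:36:00', '%b %d %H:%M:%S')
print(stamp.year)  # 1900: strptime's default when the year is absent

# An indexer that captures the "current" year only once, at startup,
# keeps stamping new events with the old year after january 1st:
year_at_startup = 2013  # process last restarted in december
print(stamp.replace(year=year_at_startup))  # 2013-01-08 06:36:00

# Restarting the process refreshes the captured year, which is exactly
# why the restart discussed below "fixes" it until the next new year.
```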
06:36 <fungi> becoming
06:37 <clarkb> jeblair: we are currently only using 9-16?
06:37 <lifeless> fungi: 'refstack becoming the reference implementation for refstack' ?
06:37 <clarkb> jeblair: I am tempted to simply restart processes to see if things get indexed in the proper index
06:37 <jeblair> clarkb: we're using 1-X where X is 4 or 8 i think.
06:38 <clarkb> jeblair: thanks
06:38 *** rhsu1 has quit IRC
06:38 <fungi> lifeless: tripleo baremetal test cloud becoming the reference implementation in place of refstack, i should have said
06:38 <fungi> been going about 18 hours with no break again
06:39 <fungi> fingers getting ahead of brain
06:39 <lifeless> fungi: ah. Go get sleep ;)
06:39 <fungi> soon
06:39 *** Ryan_Lane has quit IRC
06:40 <lifeless> fungi: I've certainly been thinking of tripleo's deployment as a reference implementation :)
06:40 *** Ryan_Lane has joined #openstack-infra
06:40 *** pblaho has joined #openstack-infra
06:41 <fungi> the openstack provider compat scorecard idea needs a reference implementation against which we can run baseline test sets. that cloud has the benefit that it's not really a public cloud vendor and so can be neutral ground
06:41 <dstufft> fungi: ah cool, I just woke up from a nap so didn't see :)
06:42 *** praneshp has joined #openstack-infra
06:42 *** loq_mac has quit IRC
06:43 <openstackgerrit> Noorul Islam K M proposed a change to openstack-infra/config: Allow normal users to create file in solum source tree  https://review.openstack.org/65421
06:45 <clarkb> jeblair: fungi: yup, restarting the indexer processes fixed the problem
06:45 <clarkb> that is ridiculously brain dead. Will need to debug further to come up with a proper fix
06:45 <lifeless> fungi: not the baremetal cloud though, a virt cloud deployed on it.
06:46 <clarkb> I could just write a once yearly cron for midnight january 1st to restart them all >_>
06:46 *** fallenpegasus has joined #openstack-infra
06:46 *** zigo_ is now known as zigo
06:48 <fungi> lifeless: ahh, yeah that would make sense
06:49 <fungi> clarkb: now that *is* a y2k-esque solution ;)
06:58 <fungi> i manually tested 65420 on git01 and it works, btw. but went ahead and removed the directory tree again since it won't get created with the right ownership until 65405 merges
06:59 <fungi> once that's in place, making a trivial modification to /home/cgit/projects.yaml should cause the empty zuul repos to all get populated
07:00 <fungi> and with that, it's 2am in my neighborhood, so i'm going to grab a quick nap
07:00 <jeblair> fungi: good night, thanks!
07:02 *** morganfainberg has joined #openstack-infra
07:03 *** fallenpegasus has quit IRC
07:05 *** SergeyLukjanov has joined #openstack-infra
07:11 *** yolanda has joined #openstack-infra
07:12 *** jamielennox is now known as jamielennox|away
07:14 <openstackgerrit> Noorul Islam K M proposed a change to openstack-infra/config: Run solum tests using sudo  https://review.openstack.org/65421
07:20 *** jcoufal has joined #openstack-infra
07:20 <openstackgerrit> Noorul Islam K M proposed a change to openstack-infra/config: Run solum tests using sudo  https://review.openstack.org/65421
07:20 <openstackgerrit> Noorul Islam K M proposed a change to openstack-infra/config: Remove oslo.sphinx from test-requirements.txt  https://review.openstack.org/65414
07:24 <openstackgerrit> Noorul Islam K M proposed a change to openstack-infra/config: Remove oslo.sphinx from test-requirements.txt  https://review.openstack.org/65414
07:25 <openstackgerrit> Noorul Islam K M proposed a change to openstack-infra/config: Run solum tests using sudo  https://review.openstack.org/65421
07:30 *** rcarrillocruz has joined #openstack-infra
07:31 *** fallenpegasus has joined #openstack-infra
07:31 *** loq_mac has joined #openstack-infra
07:31 <AJaeger> clarkb, jeblair: Do we have any Jenkins problems right now? I just had two strange doc failures
07:34 <jeblair> AJaeger: i don't think there's anything systemic that would cause failures
07:35 *** ken1ohmichi has joined #openstack-infra
07:35 <AJaeger> jeblair, any idea what causes these two fails: https://review.openstack.org/#/c/65425/  and https://review.openstack.org/#/c/64872/
07:35 <AJaeger> "This change was unable to be automatically merged with the current state of the repository. Please rebase your change and upload a new patchset."
07:35 <AJaeger> But the changes are at HEAD - properly rebased.
07:36 *** loq_mac has quit IRC
07:39 <jeblair> AJaeger: i think there was a transient problem with zuul
07:39 <jeblair> AJaeger: there was a dns resolution error
07:40 <AJaeger> this just happened 10 mins ago - so is it safe to recheck?
07:41 *** talluri has joined #openstack-infra
07:42 <jeblair> AJaeger: actually it's not transient...
07:42 <jeblair> AJaeger: i'll fix it in a few mins
07:42 <AJaeger> jeblair, thanks!
07:44 <openstackgerrit> A change was merged to openstack-infra/zuul: Make smtp tests more robust  https://review.openstack.org/64311
07:45 <openstackgerrit> A change was merged to openstack-infra/zuul: Add Zuul ref replication  https://review.openstack.org/65410
07:45 *** praneshp has quit IRC
07:52 <openstackgerrit> James E. Blair proposed a change to openstack-infra/config: Have Zuul replicate to git.o.o  https://review.openstack.org/65412
07:52 <openstackgerrit> A change was merged to openstack-infra/config: Have Zuul replicate to git.o.o  https://review.openstack.org/65412
08:01 *** kraman has quit IRC
08:01 *** fallenpegasus has quit IRC
08:02 *** fallenpegasus has joined #openstack-infra
08:06 *** fallenpegasus has quit IRC
08:11 *** Ryan_Lane has quit IRC
08:12 *** Ryan_Lane has joined #openstack-infra
08:12 *** coolsvap has quit IRC
08:12 *** coolsvap has joined #openstack-infra
08:12 <openstackgerrit> James E. Blair proposed a change to openstack-infra/config: Fix typo in puppet  https://review.openstack.org/65430
08:13 <openstackgerrit> A change was merged to openstack-infra/config: Fix typo in puppet  https://review.openstack.org/65430
08:15 <jeblair> clarkb: /home/cgit/projects.yaml
08:22 <openstackgerrit> A change was merged to openstack-infra/jeepyb: Correct variables masquerading as strings  https://review.openstack.org/65420
08:26 <openstackgerrit> James E. Blair proposed a change to openstack-infra/config: Fix typo in zuul config template  https://review.openstack.org/65431
08:26 *** fallenpegasus has joined #openstack-infra
08:27 <openstackgerrit> A change was merged to openstack-infra/config: Fix typo in zuul config template  https://review.openstack.org/65431
08:28 *** coolsvap has quit IRC
08:30 *** coolsvap has joined #openstack-infra
08:30 *** loq_mac has joined #openstack-infra
08:31 *** fallenpegasus has quit IRC
08:31 *** pcrews has quit IRC
08:31 *** fallenpegasus has joined #openstack-infra
08:31 *** pcrews has joined #openstack-infra
08:32 *** kraman has joined #openstack-infra
08:32 *** fallenpegasus has quit IRC
08:33 *** fallenpegasus has joined #openstack-infra
08:36 *** kraman has quit IRC
08:37 *** Ryan_Lane has quit IRC
08:37 *** ken1ohmichi has quit IRC
08:40 *** flaper87|afk is now known as flaper87
08:40 *** loq_mac has quit IRC
08:41 *** mancdaz_away is now known as mancdaz
08:42 <clarkb> jeblair: zuul:/home/clarkb/changes.txt
08:42 *** hashar has joined #openstack-infra
08:43 <openstackgerrit> James E. Blair proposed a change to openstack-infra/config: Allow zuul to push to git servers  https://review.openstack.org/65405
08:44 <openstackgerrit> A change was merged to openstack-infra/config: Allow zuul to push to git servers  https://review.openstack.org/65405
08:44 *** SergeyLukjanov is now known as _SergeyLukjanov
08:44 *** _SergeyLukjanov has quit IRC
08:48 <clarkb> jeblair: /home/clarkb/change_projects.txt tab delimited
08:50 *** kraman has joined #openstack-infra
08:51 <openstackgerrit> James E. Blair proposed a change to openstack-infra/config: Fix passing zuul public key to git backends  https://review.openstack.org/65432
08:51 <openstackgerrit> A change was merged to openstack-infra/config: Fix passing zuul public key to git backends  https://review.openstack.org/65432
08:54 *** kraman has quit IRC
08:56 *** fallenpegasus2 has joined #openstack-infra
08:56 *** fallenpegasus has quit IRC
08:59 *** SergeyLukjanov has joined #openstack-infra
09:00 *** fallenpegasus2 has quit IRC
09:02 *** noorul has joined #openstack-infra
09:02 <noorul> https://review.openstack.org/65414
09:02 <noorul> I am not sure why it is not getting queued up in zuul
09:03 *** jpich has joined #openstack-infra
09:04 <AJaeger> noorul, Zuul was down, might not have seen the recheck...
09:04 <clarkb> jeblair: I don't grok why that is happening. the other dirs are correct
09:04 <clarkb> jeblair: and they are created by the same script, correct?
09:04 <AJaeger> noorul, it's in the list now
09:04 *** yolanda has quit IRC
09:05 *** yolanda has joined #openstack-infra
09:06 *** jroovers has joined #openstack-infra
09:06 <AJaeger> clarkb, jeblair: I would appreciate some review for this one, please https://review.openstack.org/#/c/65391/
09:06 <AJaeger> jeblair, thanks for fixing Zuul!
09:08 <clarkb> jeblair: I see what is going on now
09:09 <jeblair> clarkb: enqueuing open changes now
09:09 <jeblair> clarkb: zuul refs are being pushed to the git farm
09:10 <jeblair> clarkb: the first changes for each are going to be slow because they'll be pushing most of the project history
09:11 *** derekh has joined #openstack-infra
09:12 <clarkb> jeblair: roger
09:12 <clarkb> jeblair: I really do not know why the env vars did not work here
09:12 <clarkb> jeblair: unless the mapping is wrong
09:12 <noorul> AJaeger: Thank you!
09:15 <noorul> AJaeger: Did you do anything specific?
09:17 <AJaeger> noorul, no. Looking at your patch: I suggest to rmdir as well
09:17 *** CaptTofu has joined #openstack-infra
09:18 <noorul> AJaeger: Because my recheck has no effect, see https://review.openstack.org/#/c/65421/
09:18 <noorul> AJaeger: Are you suggesting to remove the temp directory?
09:19 *** CaptTofu has quit IRC
09:19 <AJaeger> noorul, patience: http://status.openstack.org/zuul/
09:19 *** CaptTofu has joined #openstack-infra
09:19 <AJaeger> "Queue lengths: 155 events, 4 results." - so there are lots of events that I do not see...
09:19 <AJaeger> noorul, yes I do suggest - let me add to the review
09:20 <noorul> AJaeger: Usually during IST zuul used to respond quickly
09:20 <noorul> AJaeger: So I thought something is wrong
09:20 <AJaeger> noorul, as I said before: Zuul was down for an hour and is catching up.
09:22 <AJaeger> noorul, so, something was wrong but clarkb and jeblair have been fixing it
09:24 *** CaptTofu has quit IRC
09:24 *** johnthetubaguy has joined #openstack-infra
09:26 *** jooools has joined #openstack-infra
09:26 *** johnthetubaguy1 has joined #openstack-infra
09:27 *** johnthetubaguy has quit IRC
09:30 *** tian has quit IRC
09:33 <openstackgerrit> James E. Blair proposed a change to openstack-infra/devstack-gate: Fetch zuul refs from git.o.o  https://review.openstack.org/65438
09:34 *** yamahata has quit IRC
09:38 *** homeless has quit IRC
09:39 *** michchap has quit IRC
09:39 *** michchap_ has joined #openstack-infra
09:39 * AJaeger gave the wrong URL when asking for a review - this is the right one https://review.openstack.org/#/c/64795/
09:40 *** homeless has joined #openstack-infra
09:47 *** yolanda has quit IRC
09:49 <flaper87> fungi: around?
09:50 *** kraman has joined #openstack-infra
09:50 <flaper87> Actually, let me just ask the question. We're trying to release an alpha version of marconiclient but I guess something went wrong. ttx was helping us and he created / pushed a tag for the client. The thing is that no job was triggered
09:51 <jeblair> flaper87: we had to restart zuul
09:51 <jeblair> flaper87: an infra root can trigger the job manually, but i don't have time at the moment
09:51 <flaper87> jeblair: we literally just did that like 10 mins ago, did you guys restart zuul around that time?
09:51 <jeblair> flaper87: yes
09:51 <flaper87> jeblair: ok, I'll ping again later then
09:52 <jeblair> flaper87: send an email to openstack-infra with the project and tag name
09:52 <flaper87> jeblair: awesome, thanks!
09:53 <jeblair> fungi: quick summary: everything is in place; zuul ran out of file descriptors so we had to restart it
09:54 <jeblair> fungi: we ran with the new feature for a while, but it looks like it adds about 20 seconds to the processing of each change, which is a bit too much at current levels
09:55 *** kraman has quit IRC
09:55 <jeblair> fungi: so i disabled it manually
09:55 <jeblair> fungi: we may want to prepare a rax performance node with a lot of cpu capacity to deal with the current crunch
10:05 *** yolanda has joined #openstack-infra
10:05 <jeblair> fungi: i've spun one up; dns ttl is 300
10:10 *** afazekas has joined #openstack-infra
10:11 <jeblair> fungi: ip is 162.242.154.88
10:11 <jeblair> i have to go to dinner now
10:17 *** SergeyLukjanov has quit IRC
10:21 *** fallenpegasus has joined #openstack-infra
10:21 *** fallenpegasus has quit IRC
10:21 *** fallenpegasus has joined #openstack-infra
10:22 *** SergeyLukjanov has joined #openstack-infra
10:23 *** SergeyLukjanov has quit IRC
10:45 *** che-arne has joined #openstack-infra
10:50 *** kraman has joined #openstack-infra
10:52 *** kraman has quit IRC
10:52 *** kraman1 has joined #openstack-infra
10:53 *** yamahata has joined #openstack-infra
10:54 *** yaguang has quit IRC
10:54 *** jooools has quit IRC
10:55 *** jooools has joined #openstack-infra
10:57 *** kraman1 has quit IRC
10:58 *** SergeyLukjanov has joined #openstack-infra
11:02 *** noorul has left #openstack-infra
11:05 *** lcestari has joined #openstack-infra
11:07 *** tma996 has joined #openstack-infra
11:12 *** saschpe has joined #openstack-infra
11:20 *** MIDENN_ has quit IRC
11:33 *** hashar has quit IRC
11:36 *** ArxCruz has joined #openstack-infra
11:39 *** michchap_ has quit IRC
11:39 *** mlipchuk has quit IRC
11:40 *** mlipchuk has joined #openstack-infra
11:40 *** michchap has joined #openstack-infra
11:44 *** mlipchuk has left #openstack-infra
11:50 *** roeyc has joined #openstack-infra
11:50 *** roeyc has quit IRC
11:50 *** kraman has joined #openstack-infra
11:52 *** roeyc_ has joined #openstack-infra
11:53 *** alexpilotti has joined #openstack-infra
11:55 *** kraman has quit IRC
11:55 *** CaptTofu has joined #openstack-infra
11:55 *** ociuhandu has quit IRC
12:14 *** esker has quit IRC
12:15 *** dizquierdo has joined #openstack-infra
12:17 *** openstackstatus has quit IRC
12:17 *** openstackstatus has joined #openstack-infra
12:20 <jeblair> i'm going to move zuul to the new server now
12:20 *** ociuhandu has joined #openstack-infra
12:23 *** yolanda has quit IRC
12:24 *** yolanda has joined #openstack-infra
12:31 *** rpodolyaka has joined #openstack-infra
12:33 *** yassine_ has joined #openstack-infra
12:36 *** hashar has joined #openstack-infra
12:38 *** thomasem has joined #openstack-infra
12:40 *** pmathews has joined #openstack-infra
12:43 *** alexpilotti has quit IRC
12:44 *** saschpe has quit IRC
12:44 *** saschpe has joined #openstack-infra
12:47 *** coolsvap has quit IRC
12:50 <sdague> hey, so I see by the linkedin spam that ryan lane is no longer at wikimedia. Any idea if he's still going to maintain the wiki, or if wikimedia is going to have someone else help with that, or where that stands at all?
12:50 *** kraman has joined #openstack-infra
12:50 <anteaya> sdague: he will continue to maintain our wiki
12:51 <sdague> cool
12:51 <anteaya> pleia2 and I talked to him at summit about it
12:51 <sdague> ah, cool, yeh, I'm just late to the game
12:51 <anteaya> no worries
12:51 <sdague> anteaya: you are up early/late?
12:51 <anteaya> hard to keep up with all the movement
12:51 <anteaya> just had one of the most spectacular dinners of my life
12:52 <anteaya> catching up on the latest and then off to bed
12:52 <anteaya> 8:52 pm
12:52 *** dizquierdo has quit IRC
12:52 <anteaya> I don't think he had changed call signs yet at summit
12:54 *** smarcet has joined #openstack-infra
12:54 *** kraman has quit IRC
12:55 *** dcramer_ has quit IRC
12:57 *** heyongli has joined #openstack-infra
12:57 *** smarcet has left #openstack-infra
13:00 *** johnthetubaguy1 is now known as johnthetubaguy
13:01 <chmouel> fyi the rechecks page seems to be down https://bugs.launchpad.net/openstack-ci/+bug/1267098
13:03 <jeblair> chmouel: thx, i'll fix it in a sec
13:03 <ttx> jeblair: one of those nights, huh
13:03 <jeblair> I'm moving zuul to a new server and i need to copy over that data manually
13:04 *** dizquierdo has joined #openstack-infra
13:04 <jeblair> ttx: yeah, fun times.
13:15 *** yassine_ has quit IRC
13:16 *** yassine_ has joined #openstack-infra
13:16 *** yassine_ has quit IRC
13:17 *** yassine_ has joined #openstack-infra
13:17 *** yassine_ has quit IRC
13:19 *** yassine_ has joined #openstack-infra
13:19 *** yassine_ has quit IRC
13:21 *** yassine has joined #openstack-infra
13:23 *** talluri has quit IRC
13:27 *** sandywalsh_ has joined #openstack-infra
13:30 *** banix has joined #openstack-infra
13:31 *** sandywalsh_ has quit IRC
13:31 *** yassine has quit IRC
13:32 *** eharney has joined #openstack-infra
13:32 *** smarcet has joined #openstack-infra
13:33 *** yassine has joined #openstack-infra
13:35 *** banix has quit IRC
13:36 <openstackgerrit> Julien Danjou proposed a change to openstack-infra/config: Install Cassandra on OpenStack CI slaves  https://review.openstack.org/65466
13:36 *** sandywalsh_ has joined #openstack-infra
13:36 *** CaptTofu has quit IRC
13:36 *** dprince has joined #openstack-infra
13:37 *** CaptTofu has joined #openstack-infra
13:37 *** CaptTofu has quit IRC
13:40 *** markmc has quit IRC
13:46 *** jasondotstar has joined #openstack-infra
13:50 *** kraman has joined #openstack-infra
13:52 *** rossella_s has joined #openstack-infra
13:53 *** dkranz has joined #openstack-infra
13:53 <openstackgerrit> Antoine Musso proposed a change to openstack-infra/zuul: dequeue abandoned changes  https://review.openstack.org/65467
13:54 *** ryanpetrello has joined #openstack-infra
13:55 *** kraman has quit IRC
13:55 *** dims has quit IRC
13:57 *** dims has joined #openstack-infra
13:59 *** esker has joined #openstack-infra
14:01 *** yassine has quit IRC
14:01 *** yassine has joined #openstack-infra
14:03 *** yamahata has quit IRC
14:04 <openstackgerrit> Thierry Carrez proposed a change to openstack-infra/config: Support proposed/* branches for milestone-proposed  https://review.openstack.org/65103
14:04 *** sandywalsh_ has quit IRC
14:08 *** prad_ has joined #openstack-infra
14:11 *** mfer has joined #openstack-infra
14:11 <hashar> have you guys noticed your node pool seems screwed?  It barely has any VM available :/  Since monday 6th apparently
14:13 <dhellmann> sdague: ping?
14:14 <jeblair> hashar: they are in use; we've been running at capacity since about then.
14:14 *** sandywalsh_ has joined #openstack-infra
14:15 *** mriedem has joined #openstack-infra
14:17 *** heyongli has quit IRC
14:20 *** dkliban_afk is now known as dkliban
14:21 *** mrodden has joined #openstack-infra
14:24 <jeblair> fungi, clarkb, mordred: new zuul host is up; seems to be dealing with the load _very_ well
14:24 <jeblair> two things:
14:26 *** mrodden has quit IRC
14:28 *** miqui has joined #openstack-infra
14:28 *** mrodden has joined #openstack-infra
14:29 *** weshay has joined #openstack-infra
14:29 <sdague> dhellmann: pong
14:30 <dhellmann> sdague: catching up on that oslo.sphinx issue -- is that affecting the gate?
14:30 <sdague> not as far as I know
14:30 <dhellmann> ok
14:30 <sdague> there is the one guy that wanted to build docs on a devstack nova pull, and it wasn't installed
14:31 <sdague> I have no idea why the solum folks are trying to remove it from test-requirements, then monkey patch it back in in d-g
14:31 <dhellmann> would it help if we had a doc-requirements.txt separate from test-requirements.txt? and then a tox env specifically for docs?
14:32 <annegentle_> dhellmann: nah don't put docs in a ghetto
14:32 <dhellmann> annegentle_: ghetto?
14:32 <jeblair> a) data from zuul is not making it to statsd.  i have no idea why
14:33 <annegentle_> dhellmann: setting docs apart makes for a perception of second class citizen
14:33 <fungi> jeblair: you are up *late*
14:33 <jeblair> b) we're going to run out of space on /, so we should figure out what on the old zuul server was using space and deal with it.
14:33 <jeblair> fungi: i am about to go to bed
14:33 <annegentle_> dhellmann: I saw the post also and I think it's best to run devstack while working on docs
14:33 <fungi> i can have a looksie
14:34 <jeblair> fungi: cool; any other questions before i crash?
14:34 <jeblair> fungi: (we've gone from load avg 150 -> 1.0 with the new server, so i'm pleased with that)
14:34 <dhellmann> annegentle_: ok
14:35 <mriedem> sdague: dhellmann: i was interested in the oslo.sphinx / nova thing too, but didn't have a good answer
14:35 <openstackgerrit> Julien Danjou proposed a change to openstack-dev/pbr: package: read a specific Python version requirement file  https://review.openstack.org/63236
14:35 <fungi> jeblair: no, i think i should be able to figure out any gaps. thanks!
14:35 <sdague> dhellmann: so you could say that a million ways
14:35 <sdague> how about pep8-requirements
14:35 <jeblair> fungi: the replication was too slow, so i've disabled it.  sorry if that ends up being a dead end.
14:36 *** dstanek has joined #openstack-infra
14:36 <jeblair> fungi: but i did come up with what i think will be a good way to scale zuul horizontally; i'll try to write it up tomorrow.
14:36 <sdague> realistically it's currently requirements.txt => run time requirements
14:36 <dhellmann> sdague: yeah, we should have that one, too -- takes forever to run pep8 against ceilometer because one of our test requirements is nova
14:36 <fungi> jeblair: oh, race conditions getting the refs to the git servers in time for tests to pull them? i wondered if it might
14:36 <sdague> test-requirements.txt => everything else
14:36 <openstackgerrit> Antoine Musso proposed a change to openstack-infra/zuul: test dequeue on abandoned changes  https://review.openstack.org/65476
14:36 <jeblair> fungi: no actually it was just that the pushes were too slow
14:37 <sdague> dhellmann: then you need to manage all these things and keep them in sync
14:37 <fungi> oh, wow
14:37 <dhellmann> sdague: we need to address the python2 vs python3 question, too
14:37 <mriedem> sdague: dhellmann: avoiding a build-requirements.txt
14:37 <mriedem> like rpms
14:37 <jeblair> fungi: added about 20 seconds per-change
14:37 *** amotoki has joined #openstack-infra
14:37 *** malini_afk is now known as malini
14:37 <mriedem> but that's what this sphinx thing really is, it's not test, it's not runtime, it's build
14:37 <sdague> dhellmann: well that I consider a pip flaw
14:38 <dhellmann> sdague: ?
14:38 <openstackgerrit> James E. Blair proposed a change to openstack-infra/config: Don't have zuul replicate to git.o.o  https://review.openstack.org/65477
14:38 <fungi> jeblair: maybe if crufty zuul repos end up being part of the load issue over time, we could look into ways to expire out refs and prune. i'll play around with options when i get time
14:38 <hashar> iirc I have to `git gc` zuul repositories from time to time
14:38 <jeblair> fungi: pls merge that and restart puppet ^
14:38 <fungi> jeblair: will do
14:39 <jeblair> fungi: i think we tried turning on packing; i can't recall right now
14:39 <fungi> okay. get some sleep. i'll catch you on the other side of the sun
14:39 <jeblair> k.  good night!
14:40 *** dims has quit IRC
14:40 *** ryanpetrello has quit IRC
14:40 *** rossella_s has quit IRC
14:40 *** pmathews has quit IRC
14:40 *** fallenpegasus has quit IRC
14:40 *** morganfainberg has quit IRC
14:40 *** bknudson has quit IRC
14:40 *** prad_ has quit IRC
14:40 <sdague> dhellmann: sorry, just grumpy. And I should get to code and not policy this morning :)
14:40 <sdague> mriedem: so sure... but honestly other than *purity* what does that buy you?
14:41 <mriedem> nada right now
14:41 <jeblair> fungi: oh one more thing; if you need to restart zuul, see ~root/zuul-changes2.py
14:41 <jeblair> fungi: you can save the queue and re-enqueue changes with it
14:41 <jeblair> fungi: help text should get you started
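The script itself isn't shown in this log, but the general shape of such a helper is small. A hedged sketch, where the status URL, the JSON layout, and the `zuul enqueue` command line are assumptions based on later public versions of the tool:

```python
import json
import urllib2  # python 2, matching the servers of this era

# Read zuul's public status dump and print one re-enqueue command per
# change, so the queue can be replayed after a restart.
status = json.load(urllib2.urlopen('http://zuul.openstack.org/status.json'))
for pipeline in status['pipelines']:
    for queue in pipeline['change_queues']:
        for head in queue['heads']:
            for change in head:
                print('zuul enqueue --trigger gerrit'
                      ' --pipeline %s --project %s --change %s'
                      % (pipeline['name'], change['project'], change['id']))
```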
14:42 <sdague> I'm also completely grumpy that there is no sane tox + pip + site packages solution
14:42 <jeblair> okay, really leaving now
14:42 <fungi> jeblair: great--thanks!
14:42 *** rossella_s has joined #openstack-infra
14:42 *** dims has joined #openstack-infra
14:42 *** ryanpetrello has joined #openstack-infra
14:42 *** pmathews has joined #openstack-infra
14:42 *** fallenpegasus has joined #openstack-infra
14:42 *** morganfainberg has joined #openstack-infra
14:42 *** bknudson has joined #openstack-infra
14:43 <mriedem> i thought python had a setup_requires vs install_requires thing?
14:44 <mriedem> idk, i did rpm packaging for openstack and its dependencies for about a year and just got used to having to sort out the BuildRequires when i hit htem
14:44 <mriedem> *them
14:44 <annegentle_> dhellmann: I'm reading the whole rest of the thread this morning and hmming.
14:46 *** herndon_ has joined #openstack-infra
14:47 <dhellmann> annegentle_: I think we're running into a setuptools or pip bug :-/
14:47 *** changbl has quit IRC
14:47 <fungi> dhellmann: got a link to a summary/repeatable test case?
14:47 <dhellmann> sdague: no worries, I'll be grumpy this afternoon
14:48 <dhellmann> fungi: there's a mailing list thread "[Solum] Devstack gate is failing"
14:48 <annegentle_> dhellmann: remind me, the reason for oslo.sphinx is theming only?
14:48 <annegentle_> dhellmann: though I guess you can't build the docs without the tehem
14:48 <annegentle_> them
14:48 <annegentle_> theme. gah
14:48 <fungi> i've been eyeballs deep in the most recent pip/virtualenv/setuptools churn so maybe something will look familiar
14:49 *** rossella_s has quit IRC
14:49 <fungi> sdague: do you know whether havana horizon has bit-rotted on us? https://jenkins01.openstack.org/job/gate-grenade-dsvm/3181/console
14:50 <dhellmann> annegentle_: it's the theme, but also meant to hold any sphinx customizations we make (the "feature" of pbr that autogenerates files for the api docs is supposed to move there eventually, iirc)
14:50 *** kraman has joined #openstack-infra
14:50 <dhellmann> fungi: the tl;dr is that when oslo.config is installed globally with "pip install -e" and oslo.sphinx is installed in a virtualenv with "pip install", python in the virtualenv can't find oslo.config
14:51 <jeblair> fungi: ipv6 firewall on graphite
14:51 <fungi> dhellmann: the only way that would work is with --system-site-packages on the virtualenv, right?
14:51 *** burt has joined #openstack-infra
14:51 <fungi> jeblair: ahh, i'll fix
14:52 <dhellmann> fungi: they must have that to not have oslo.config installed in the virtualenv at all, yeah
14:52 <dhellmann> fungi: but the imports are *not* working
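dhellmann's tl;dr is the classic namespace-package split: two install locations each provide an `oslo/` package, and whichever one wins the path lookup hides the other's submodules. A self-contained illustration of the shadowing effect (plain packages are used here for simplicity; the real oslo packages declared a pkgutil namespace, which is what the suspected pip/setuptools bug breaks):

```python
import os
import sys
import tempfile

site_a = tempfile.mkdtemp()  # stands in for the global site-packages
site_b = tempfile.mkdtemp()  # stands in for the virtualenv's site-packages

for site, mod in ((site_a, 'config'), (site_b, 'sphinx')):
    pkg = os.path.join(site, 'oslo')
    os.makedirs(pkg)
    open(os.path.join(pkg, '__init__.py'), 'w').close()
    open(os.path.join(pkg, '%s.py' % mod), 'w').close()

sys.path[:0] = [site_b, site_a]  # the virtualenv wins the path search

import oslo.sphinx  # works: resolved from site_b's oslo/
try:
    import oslo.config  # site_a's oslo/ is shadowed, so this fails
except ImportError as exc:
    print(exc)
```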
14:52 <jeblair> fungi: new zuul doesn't have a aaaa record
14:52 <fungi> dhellmann: virtualenv 1.11? --system-site-packages is broken (there's an rc for 1.22.1 up)
14:52 <annegentle_> dhellmann: so, if incubating projects use oslo.sphinx, do they get openstack theming prematurely?
14:52 <fungi> er, rc for 1.11.1 up
14:52 <jeblair> fungi: can probably add it; i just didn't because the old one didn't have one
14:52 <dhellmann> fungi: this is in the gate jobs for solum, so I don't know which versions are in use
14:53 <fungi> jeblair: good call
14:53 <jeblair> fungi: but didn't realize it would be talking to graphite over v6
14:53 <jeblair> ok.  back to bed
14:53 <dhellmann> annegentle_: I'm not sure whether it counts as premature or not -- they're incubating, so presumably we've given them some sort of nod of approval
14:54 <fungi> dhellmann: my guess is it's probably pulling latest virtualenv then (or could be too-old virtualenv, 1.9.x has issues with pip 1.5). lemme dig up the bugs i know about
14:54 <openstackgerrit> Thierry Carrez proposed a change to openstack-infra/config: Support proposed/* branches for milestone-proposed  https://review.openstack.org/65103
14:54 *** kraman has quit IRC
14:56 <annegentle_> fungi: much appreciated
15:01 <dhellmann> fungi: no rush, I think we're waiting for the solum guys to explain what they're trying to do on the ML
15:01 <fungi> dhellmann: k
15:05 <jd__> did someone already see the netaddr download issue?
15:05 <jd__> someone except dhellmann
15:05 *** dkranz has quit IRC
15:07 <fungi> jd__: which netaddr download issue? are you talking about bug 1266513 or something else?
15:08 <jd__> fungi: that looks like it, I got this error http://logs.openstack.org/70/58770/4/check/check-requirements-integration-dsvm/5266bfc/console.html from https://review.openstack.org/#/c/58770/ but maybe I just need to recheck
15:09 *** rwsu has joined #openstack-infra
15:10 <jd__> or do I need to submit a patch to requirements to add --allow-unverified and --allow-external?
15:11 *** herndon_ has quit IRC
15:11 <fungi> jd__: i'll have to look at the log to see why you're hitting it there... just getting up to speed and am a bit swamped with my overnight backlog (it was a very busy night)
15:11 <jd__> fungi: no hurry, just let me know if I can help
15:11 <jd__> you seem more aware than me of what the problem could be, but I'm willing to take action to take my part of the load
15:12 <openstackgerrit> A change was merged to openstack-infra/config: Don't have zuul replicate to git.o.o  https://review.openstack.org/65477
15:12 <fungi> jd__: well, it's been an emergent issue for us going on 6 days now, and we've been plugging it with workarounds in a variety of places
15:12 <jd__> :-(
15:13 <jd__> issues, ya never got enough
15:13 *** rossella_s has joined #openstack-infra
15:13 *** dkranz has joined #openstack-infra
15:13 <fungi> jd__: if you really want to help, join the crusade to convince the remaining straggler requirements of ours to start (or resume) uploading their releases to pypi.python.org
15:14 <fungi> jd__: SpamapS badgered the netaddr devs into assent yesterday (so awesome)
15:14 <fungi> but there are still a few more
15:14 <jd__> my sword is yours, fungi
15:15 <fungi> it doesn't just help us, after all, but everyone using pip/pypi
15:16 <jd__> we should list them in the requirements repo as a start I guess
15:16 <jd__> that would also allow building the --allow-* list dynamically
15:17 <fungi> jd__: if it goes on for much longer, we're going to have to do something like that in openstack/requirements (probably in a separate file from global-requirements.txt just to make it easier on our existing tooling)
15:17 * jd__ nods
15:17 *** julim has joined #openstack-infra
15:18 <fungi> jd__: i confirmed yesterday that you can put just the override options (but not the version specs themselves) in something like a wall-of-shame.txt, and then tox can be instructed to do -r wall-of-shame.txt -r requirements.txt -r test-requirements.txt and that works
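A sketch of that layout; wall-of-shame.txt is only the hypothetical name from fungi's message, while the pip 1.5-era override options shown in the comments are the real ones under discussion:

```ini
# tox.ini (sketch): hand pip the override file before the real
# requirements files; wall-of-shame.txt would hold only lines like
#   --allow-external netaddr
#   --allow-unverified netaddr
[testenv]
deps = -r{toxinidir}/wall-of-shame.txt
       -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
```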
15:19 <jd__> ah that's indeed unexpected, but handy
15:19 <dims> jog0, sdague - check/gate-grenade-dsvm is keeling over again
15:20 <sdague> dims: examples?
15:20 <fungi> but there's a lot of places we'd want to update to make that a permanent fixture in requirements enforcement, sync and mirroring scripts, so if we can just convince the devs of those packages to dtrt instead we can possibly avoid that extra technical debt
15:20 <fungi> sdague: the one i asked you about earlier. failing havana horizon exercises, probably some sort of bitrot
15:20 *** dcramer_ has joined #openstack-infra
15:21 <sdague> fungi: I don't know
15:21 <dims> sdague, logstash query "Horizon front page not functioning!"
15:21 <sdague> honestly, if I'm ever going to get the ER stuff working, I need to stop looking at actual gate fails :)
15:21 <fungi> sdague: do you know whether havana horizon has bit-rotted on us? https://jenkins01.openstack.org/job/gate-grenade-dsvm/3181/console
15:21 <fungi> that one
15:21 <fungi> sounds like no, so someone needs to start investigating
15:22 <sdague> fungi: I'd start with a bug and an ER check
15:22 <dims> fungi, sdague - i had a pending review for capturing the apache2 logs (https://review.openstack.org/#/c/64490/)
15:22 <dims> sdague, already filed bug - https://bugs.launchpad.net/openstack-ci/+bug/1265057
15:23 <fungi> dims: you probably need to bring it to the attention of the horizon and stable-maint devs, i'm guessing, so that we can up the priority
15:23 <sdague> so if I actually get these data queries sorted, we'll have a nearly instant answer here, so I'm going to go back to try to get that code done
15:24 <fungi> sdague: yes, please do. i was only asking in case you'd already seen/looked into it
15:25 *** ttx has quit IRC
15:25 *** ttx has joined #openstack-infra
15:25 *** ttx has quit IRC
15:25 *** ttx has joined #openstack-infra
15:25 *** AaronGr_Zzz is now known as AaronGr
15:26 *** kraman has joined #openstack-infra
15:26 *** prad_ has joined #openstack-infra
15:27 *** dprince has quit IRC
15:29 <dims> fungi, will hop onto horizon, https://review.openstack.org/#/c/64490/ is against openstack-infra/devstack-gate to get apache2 logs, could use blessings from folks here
15:31 *** markmcclain has joined #openstack-infra
15:31 <fungi> dims: approved, but it'll take some time to go in... i can promote it to the head of the integrated gate queue if there's consensus that the disruption from another reset will be worthwhile to get that extra data
15:32 <fungi> dims: developers can also recreate this job failure themselves pretty reliably, i expect, by following https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/README.rst#n100
15:33 <sdague> so we're missing stuff in ER indexes
15:33 <dims> thanks fungi !
15:33 <sdague> in ES indexes that is
15:34 <sdague> http://logstash.openstack.org/#eyJzZWFyY2giOiJidWlsZF91dWlkOjIxMGI3N2UzZmFhMTQ1ZGQ4ZTE0ZjNhODNiOTdmOTIyIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzg5MTk1MjMzMTY5fQ==
15:34 <sdague> no console log
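The kibana link above is just a saved search for build_uuid:210b77e3faa145dd8e14f3a83b97f922. A hedged sketch of the same lookup against the backing elasticsearch index -- the endpoint path here is an assumption, only the query shape is the point:

```python
import json
import urllib2

# Assumed endpoint; the real ES URL behind logstash.openstack.org may differ.
url = 'http://logstash.openstack.org/elasticsearch/_search'
query = {'query': {'term': {'build_uuid': '210b77e3faa145dd8e14f3a83b97f922'}}}
req = urllib2.Request(url, json.dumps(query),
                      {'Content-Type': 'application/json'})
hits = json.load(urllib2.urlopen(req))['hits']
# A build whose console log never got indexed shows hits for other files
# but nothing for console.html -- sdague's "no console log".
print(hits['total'])
```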
*** rpodolyaka has left #openstack-infra15:35
*** pmathews has quit IRC15:35
*** afazekas has quit IRC15:37
*** roaet has joined #openstack-infra15:38
*** AaronGr is now known as AaronGr_Zzz15:38
roaetHowdy infra folk! I had a question about the last 3 or 4 failures for my change. It appears that it is failing for random reasons (different tests, different times). Mostly Jenkins issues it appears. Anything I can do to help move this along? https://review.openstack.org/#/c/57517/615:39
openstackgerritFlavio Percoco proposed a change to openstack-infra/config: Post pre-releases on pypi for marconiclient  https://review.openstack.org/6548315:39
*** andreaf has joined #openstack-infra15:40
*** CaptTofu has joined #openstack-infra15:41
dimsroaet, that last failure is a flakiness, you can try a recheck against bug 1265057 (you have the symptom - http://logs.openstack.org/17/57517/7/check/check-grenade-dsvm/612c6c2/logs/new/error.txt.gz)15:42
roaetthank you dims, I will try that.15:43
matelHello, what's the best way to try out nodepool changes at home?15:43
roaetWith the other errors, should I just recheck no bug?15:43
chmoueldims: i would imagine it's 1265057, no?15:43
chmouelhttps://bugs.launchpad.net/grenade/+bug/126505715:43
*** wenlock has joined #openstack-infra15:44
sdagueif i had to guess, I'd guess the grenade issue was a horizon havana patch that landed15:45
sdagueforward upgrade gating is still a gap15:45
*** herndon_ has joined #openstack-infra15:45
*** fifieldt has quit IRC15:47
*** coolsvap has joined #openstack-infra15:47
dimschmouel, right "recheck bug 1265057"15:47
mateljeblair: ping15:47
chmoueldims:, roaet: yeah correct15:48
sdagueso realize recheck is only useful if that job is sometimes passing15:50
sdagueotherwise it just makes things worse15:50
openstackgerritMatt Riedemann proposed a change to openstack-infra/elastic-recheck: Add query for bug 1265057  https://review.openstack.org/6548915:50
mriedemjog0: clarkb: mtreinish: sdague: ^15:50
*** rcleere has joined #openstack-infra15:51
roaetsdague: what if it is sometimes passing/failing random tests?15:51
*** mdenny has joined #openstack-infra15:51
*** fallenpegasus has quit IRC15:52
sdagueroaet: that means there is a race either in openstack or in the tests. so it needs to get fixed, otherwise we end up with 80 queue, 24hr delay gates, like we do today15:52
sdaguemriedem: realize that grenade jobs can't post on reviews yet15:52
sdaguefor ER15:52
*** chandankumar has quit IRC15:53
*** chandankumar_ has joined #openstack-infra15:53
roaetThe error prior to my last one was a failed py26 test (as in it couldn't run for some reason). In those situations I should recheck (it passed prior)?15:53
mriedemsdague: ah, didn't knwo that15:53
fungiroaet: does the log say it ran a couple days ago on centos6-3?15:54
*** jd__ has quit IRC15:54
roaetfungi: not sure:  http://logs.openstack.org/17/57517/7/check/gate-python-neutronclient-python26/61fa46c/console.html15:54
fungiif so, that server had a problem and got disciplined very harshly15:54
roaetit was this morning15:54
fungiahh, maybe we have another...15:55
roaetyes. centos6-3 something15:55
*** jd__ has joined #openstack-infra15:55
roaetjust randomly failed.. some java hudson failure15:55
*** chandankumar__ has joined #openstack-infra15:55
fungiroaet: that's not this morning15:55
roaetoh. weird. I thought it was ran this morning. (is in a time loop)15:55
fungiroaet: that's the event from a couple days ago. there was an infra bug for that server (now resolved)15:55
roaetAh got it. Sorry about that.15:56
sdaguefungi: promote this - https://review.openstack.org/#/c/64491/15:56
sdaguenext time you have a chance15:56
fungiyou can recheck against the bug if you go looking at recently resolved bugs in lp for openstack-ci15:56
fungisdague: will do asap15:56
*** chandankumar_ has quit IRC15:57
fungisdague: promoted changes 64491,4 64490,2 (in that order)15:59
dimswhoa - gate is 101 deep16:00
mriedemdkranz: ping16:00
openstackgerritA change was merged to openstack-infra/elastic-recheck: Add query for bug 1265057  https://review.openstack.org/6548916:02
openstackgerritEli Klein proposed a change to openstack-infra/jenkins-job-builder: Add local-branch option  https://review.openstack.org/6536916:02
dkranzmriedem: Give me two minutes please...16:02
*** dprince has joined #openstack-infra16:04
*** CaptTofu has quit IRC16:06
*** senk has joined #openstack-infra16:06
*** tma996 has quit IRC16:06
*** CaptTofu has joined #openstack-infra16:07
*** oubiwann has joined #openstack-infra16:07
*** yamahata has joined #openstack-infra16:08
*** CaptTofu has quit IRC16:09
dkranzmriedem: pong16:09
*** CaptTofu has joined #openstack-infra16:09
mriedemdkranz: hey, so i was looking into bug 1265906 to add logstash indexing on logs/error.txt.gz16:09
mriedemto this: http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/logstash/jenkins-log-client.yaml16:09
mriedemhowever, given the format of that log file:16:10
mriedemhttp://logs.openstack.org/37/61037/9/check/check-tempest-dsvm-neutron/6d98c79/logs/error.txt.gz16:10
mriedemlooks like i'd need to define a new format to handle it here, right?16:10
mriedemhttp://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/templates/logstash/indexer.conf.erb16:10
mriedemthe existing formats expect timestamps, logs/error.txt.gz doesn't have that16:10
zulhow do i make one gerrit review depend on another https://review.openstack.org/#/c/62432/ and https://review.openstack.org/#/c/65493/16:10
dkranzmriedem: The other option would be to change the generator for that log file16:11
dkranzmriedem: I didn't even know it existed until recently16:11
mriedemzul: https://wiki.openstack.org/wiki/Gerrit_Workflow#Add_dependency16:11
mriedemdkranz: where would i change the generator for that log file?16:12
dkranzmriedem: I'm not sure. I don't know a lot about this stuff.16:12
mriedemdkranz: ok, i'm picking on you due to your ML post on where the indexers are defined. :)16:12
*** reed has joined #openstack-infra16:13
dkranzmriedem: I figured16:13
mriedemthat's what you get for helping16:13
dkranzmriedem: No good deed goes unpunished :)16:13
dimslol16:13
dkranzmriedem: Most of the infra folks are far away from home now I think16:14
*** banix has joined #openstack-infra16:16
*** sdake has quit IRC16:17
*** gyee_ has joined #openstack-infra16:17
*** sdake has joined #openstack-infra16:17
*** ttx_ has joined #openstack-infra16:18
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Allow statsd hosts to connect to graphite via IPv6  https://review.openstack.org/6549716:18
* ttx_ experiences some kind of routing issue16:19
fungias painful as a very deep gate queue is, it's fun to watch the subway construction underway16:23
*** SergeyLukjanov has quit IRC16:23
openstackgerritFlavio Percoco proposed a change to openstack-infra/config: Post pre-releases on pypi for marconiclient  https://review.openstack.org/6548316:29
*** herndon_ has quit IRC16:29
*** esker has quit IRC16:31
*** sandywalsh_ has quit IRC16:31
portantefungi: are we in a state where folks should hold off further approvals?16:32
portantefwiw: if there is anything I can do to help out, let me know16:33
*** dstanek has quit IRC16:33
fungiportante: i don't think so. today's main gate blocker (besides a little lost ground while we were down for a zuul replacement with a faster vm) is assumed to be fixed by 64491, which is now at the head of the gate and slated to merge in 10 minutes. fingers crossed16:36
fungithe new zuul is performing waaaay better than its predecessor too16:37
fungithough i'm hammering through 65497 to fix a problem with graphs16:37
*** ^d has joined #openstack-infra16:38
*** UtahDave has joined #openstack-infra16:39
fungisdague: 64491 went in16:40
openstackgerritA change was merged to openstack-infra/config: Allow statsd hosts to connect to graphite via IPv6  https://review.openstack.org/6549716:40
*** talluri has joined #openstack-infra16:43
portantefungi: great, thanks16:43
hasharnice16:44
*** beagles is now known as beagles_brb16:45
*** talluri has quit IRC16:45
*** hashar has quit IRC16:46
*** jasondotstar has quit IRC16:52
*** changbl has joined #openstack-infra16:53
*** pcrews has quit IRC16:53
*** markmcclain has quit IRC16:55
*** esker has joined #openstack-infra16:58
fungithe tabloid headline for this would be "red hat takes over centos" http://lists.centos.org/pipermail/centos-announce/2014-January/020100.html16:58
ttx_fungi: or "redhat eats centos alive"16:59
ttx_(not that I think it's a bad idea :)16:59
fungiright next to a fuzzy picture of bat boy16:59
*** esker has quit IRC17:00
*** SergeyLukjanov has joined #openstack-infra17:00
*** chandankumar__ has quit IRC17:00
ttx_ok, I declare my network connection totally useless and end my day17:01
*** jasondotstar has joined #openstack-infra17:01
*** dstanek has joined #openstack-infra17:01
fungittx_: have a pleasant evening. my isp issues seem to be clearing up, at least17:02
*** esker has joined #openstack-infra17:02
ttx_fungi: if it was at least failing outright, I would not keep on trying and getting random loss17:02
*** sarob has joined #openstack-infra17:03
fungittx_: was similar for me. established tcp sessions kept working, but ~50% of new sessions plus most of my udp traffic timed out17:03
*** sarob has quit IRC17:04
*** sarob has joined #openstack-infra17:04
ttx_yeah, the drops seem to affect encrypted connections more, too. Trying not to get too paranoid17:04
fungiweb browsing to ajaxy sites doing callback or using crap like google analytics was basically impossible17:04
fungiand api calls using http(s), like novaclient, also severely broken17:05
*** ttx_ has quit IRC17:06
jog0whoa gate queue is at 10217:06
fungijog0: pretty awesome, yeah?17:07
jd__not sure if that's irony17:07
jd__or rather sarcasm17:08
fungiwell, the scalability there is pretty grand. maybe i can be a bit more objective because it's not preventing me from getting other things done17:08
*** sarob has quit IRC17:09
*** sarob has joined #openstack-infra17:09
fungithe graphite graphs won't be accurate for another hour, so hard to see what sort of volume we're really doing17:10
*** sarob has quit IRC17:11
*** sarob has joined #openstack-infra17:11
fungithe most recent gate reset was classified by elastic-recheck as bug 125487217:11
*** jcoufal has quit IRC17:12
jog0libvirt ... sigh17:12
*** sarob has quit IRC17:12
fungibut the grenade/horizon fix made it in a little while ago, so hopefully things will be back to moving quickly17:13
*** sarob has joined #openstack-infra17:14
jog0fungi: cool. that was my patch17:15
*** oubiwann has quit IRC17:15
*** sarob has quit IRC17:15
fungijog0: just think of all the changes you saved from a reverify with that one17:16
*** sarob has joined #openstack-infra17:17
jog0fungi: heh, I am just happy that one of my 20 odd outstanding patches is done17:18
*** yamahata has quit IRC17:18
fungijog0: indeed. i think i only have 15 which aren't wip right now17:20
fungiclearly i should be writing more patches17:21
jog0^_^17:21
*** sarob has quit IRC17:22
*** pballand has quit IRC17:24
*** roeyc_ has quit IRC17:25
*** dstanek has quit IRC17:28
*** herndon has joined #openstack-infra17:28
*** hub_cap is now known as carnac17:29
*** carnac is now known as hub_cap17:30
*** jpich has quit IRC17:32
sdaguedhellmann: so is there a concise write up of the oslo.config issue?17:32
sdaguebecause there are now a million things related to it, but I'm not sure I see a real analysis of what is going on, and why it's an issue17:33
*** pballand has joined #openstack-infra17:33
dhellmannsdague: I think bnemec worked out that it's caused by a combination of pip install -e and something else in the same namespace package not being installed in that mode17:33
*** herndon has quit IRC17:33
dhellmannsdague: alternatives to fix it seem to be install oslo.sphinx with pip install -e or change its package name17:34
*** oubiwann has joined #openstack-infra17:34
dhellmannsdague: or don't install it at all, of course17:34
bnemecdhellmann: sdague: Right, the problem is that we pip install -e oslo.config in the system site packages, then pip install oslo.sphinx in the venv.17:34
bnemecThat combination results in oslo.config being unavailable in the venv.17:34
sdagueso that sounds like a pip or venv bug17:35
dhellmannprobably setuptools, but yeah17:35
fungidhellmann: is the issue to do with them sharing a common namespace but being split between system and venv?17:35
dstufftouch are you guys using setuptools namespaces?17:35
bnemecfungi: Partially, but it doesn't happen if you do a normal pip install of both.17:35
dhellmannfungi: common namespace, split, *and* installed with .egg-link in some cases (I think)17:35
dstufftnamespaces don't play well with -e17:35
bnemecIt's only a problem if you mix egg-links and regular pip installs.17:35
fungiahhhh17:35
dstufftI recommend staying away from namespace packages unless you're python3 only and can use the built-in support for them :/17:36
dhellmannso for normal production oslo libs, we'd just install them from devstack with pip install -e and be done17:36
dhellmannbut because oslo.sphinx is not a production lib, that's not necessarily the best answer17:36
dhellmanndstufft: too late :-/17:36
dstufftdhellmann: I figured as much, i'd try to get away from them if you can though. I don't think i've seen a single use of them that didn't end up being a regret eventually17:37
*** senk has quit IRC17:37
dhellmannbnemec: I have a meeting starting in a minute, can you reply to sdague's email on the list with a summary?17:37
*** sparkycollier has joined #openstack-infra17:38
bnemecdhellmann: Sure, I was actually just doing that when this discussion started. :-)17:38
dhellmanndstufft: I've had fairly good luck, but I try to avoid doing anything tricky with them most of the time17:38
sdaguedstufft: so can you explain what the issue is?17:38
dhellmannbnemec: thanks!17:38
dstufftsdague: the issue is they rely on a hack, a hack that breaks down if you install two things with the same namespace in different ways17:39
dhellmannsdague: the code building the import path for the oslo namespace package is getting confused because it's finding .egg-link and real directories -- it's probably stopping on the real directory in the venv17:39
sdaguedstufft: so the issue is it's called oslo.sphinx17:39
sdaguebasically the issue is we used '.' in packages?17:39
dhellmannno, a namespace package is a special kind of package that does stuff to the import path17:40
dhellmannif we had just an oslo package, and bundled everything into it, this would all work fine17:40
dstufftyea17:40
dstufftwhat dhellmann said17:40
dstuffthttps://github.com/openstack/oslo.sphinx/blob/master/oslo/__init__.py#L1717:40
dstufftthat line is what does the hack17:40
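For reference, the linked line is the standard setuptools namespace-package declaration; a minimal sketch of what each oslo/__init__.py in the namespace contains (the common idiom, not necessarily a verbatim copy of the oslo.sphinx file):

    # Ask pkg_resources to stitch every installed oslo.* distribution into a
    # single importable "oslo" package by extending this package's __path__.
    __import__('pkg_resources').declare_namespace(__name__)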
sdaguedhellmann: ok, so all these details need to be in that synopsis17:40
dhellmannbut we have several different oslo directories, and each of those extends the import path so when you import from oslo it looks in all of them17:40
*** andreaf has quit IRC17:40
dhellmannexcept that that path munging stuff doesn't like that we're mixing editable installs and regular installs17:40
*** CaptTofu has quit IRC17:40
sdaguebecause it sounds like oslo is doing stuff that's unwise, so we should stop doing that.17:41
dhellmannwe should probably just reserve the oslo namespace for production libraries, and rename the theme package17:41
*** herndon_ has joined #openstack-infra17:41
*** CaptTofu has joined #openstack-infra17:41
sdagueand then maybe we wouldn't have to uninstall / reinstall oslo.config twice at the beginning of every devstack install to try to get a working env17:41
sdaguedhellmann: so isn't oslo.config hit by the same issue17:41
dhellmannwell, it could also be said that if devstack didn't use that -e option this wouldn't be an issue, so it's a combination of all of it17:41
dhellmannI don't know what led to us reinstalling oslo.config like that17:42
dhellmannis it being brought in as a dependency?17:42
*** mancdaz is now known as mancdaz_away17:42
sdaguedhellmann: because it didn't work until we did that17:42
dstufftif I recall correctly, and i'm not well versed in namespace packages, there are 3 cases of how something can be installed17:42
dstufftand pip uses one of them normally, another one for -e, and I think easy_install uses a third way17:43
sdagueyou end up with "pip install oslo.config" can't install, already installed17:43
sdagueimport oslo.config17:43
sdagueexplode17:43
dstufftand you can have namespace packages compat with -e and the pip way, or the pip way and the easy_install way17:43
dstufftIIRC17:43
sdaguepython can't find it17:43
dstufftbut not all 3 of them17:43
dstuffthttps://github.com/pypa/pip/issues/317:43
dhellmannsdague: do you know what caused it to be installed and then for something else to try to install it again?17:43
*** nicedice has quit IRC17:44
dhellmanndstufft: thanks, that's good info17:44
sdaguedhellmann: because it's packaged in the distro, and sometimes things drag it in17:44
sdagueremember, oslo wanted their stuff to be generic17:44
sdagueso other people might use it17:44
sdaguehonestly, I don't know all the details17:44
dhellmannwhy are we installing the distro package?17:44
dhellmannok17:44
*** dcramer_ has quit IRC17:45
sdagueyou can't assume it's not installed17:45
sdaguein devstack17:45
sdagueit actually caused all kinds of breaks17:45
dhellmannthat's not going to be unique to the other libraries, then17:45
dhellmannor rather, that's going to affect the other libraries and not be unique to config17:45
sdagueexcept oslo.config gets imported freaking everywhere17:46
* dhellmann needs lunch to english better17:46
dhellmannsdague: so will oslo.log and oslo.text when those are released17:46
dstufftdhellmann: np, sorry I don't have good news :|17:46
*** harlowja_away is now known as harlowja17:46
dhellmanndstufft: "this is not only your issue" is a start :-)17:46
dstufftdhellmann: :)17:48
sdaguedhellmann: so basically, it has to be fixed in a real way, because we were hacking around this on oslo.config, and those hacks are breaking down17:48
dhellmannsdague: To start I'll look into the impact of changing the name of oslo.sphinx. Do you have another suggestion for the name? openstack-sphinx? with a python package name "openstacksphinx"?17:48
sdaguebut honestly, it totally blows my mind that it's ok that you can pip install a package, then not be able to import it :P17:48
sdaguethat seems like a fundamental break in the software17:49
dhellmannsdague: The multi-install issue would show up with any of our dependencies, right? If we tried to install an editable version of sqlalchemy, that would cause the same issue?17:49
dhellmannsdague: yeah, see the bug dstufft linked above -- it is17:49
sdaguedhellmann: so you have to walk through the details, remember you've got a decade on me in wonky python internals17:49
*** marun has joined #openstack-infra17:50
*** CaptTofu has quit IRC17:50
dhellmannsdague: these are 2 issues, and I was just talking about the "multiple install oslo.config to get it to work" problem17:50
*** CaptTofu has joined #openstack-infra17:50
dhellmanninstalling something "for real" leaves it in a different format than installing it in "development mode" where you can edit the files without having to reinstall the package17:51
dstufftsdague: packaging is built on a pile of broken, we're trying to sort things out and make it sane, but some stuff is just fundamentally broken and afaik the setuptools namespace hack isn't something that can be fixed reliably :[ The future of namespace packages lives in python3, but it's not useful until 2.x is dead :|17:51
dhellmannan os package is going to use the first mode, and devstack is using the second17:51
dhellmannthat makes sense, because we want to leave the system in a state where the developer can make changes to it17:51
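A small probe, sketched here as an illustration (the helper is assumed, not devstack code), shows how the two install modes leave different artifacts in site-packages, which is what the namespace path munging stumbles over:

    import os
    from distutils.sysconfig import get_python_lib

    def install_mode(dist_name):
        # A regular "pip install" copies the package plus an egg-info or
        # dist-info directory into site-packages; "pip install -e" leaves
        # only an .egg-link file pointing back at the source tree.
        site_packages = get_python_lib()
        if os.path.exists(os.path.join(site_packages, dist_name + '.egg-link')):
            return 'editable (development mode)'
        return 'regular install (or not installed)'

    print(install_mode('oslo.config'))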
*** nati_ueno has joined #openstack-infra17:51
dhellmannbut what it means is we have to work to make sure there is no previous version of the library, in order to get it cleaned up properly17:51
sdagueright, so the point of devstack is you have trees of code that you can edit so you can ^C and restart with that code and test your changes17:52
dhellmannthat's right17:52
dhellmannbut that may mean we have to say "do not install these other libraries globally if you are using devstack"17:52
dhellmannor, have the logic in devstack to forcibly remove them17:52
dhellmannfor example, if we know the system package names, we could try to delete them17:52
sdaguehowever, what you are telling me is that because we need oslo.config in /opt/stack you can never use another program that uses oslo.config on that system which isn't in /opt/stack17:52
dhellmannor if that doesn't work, we could just remove the relevant files with rm rather than "pip uninstall"17:53
sdaguewhich means we should tell everyone to minimize their use of oslo.config17:53
dhellmannno, it doesn't have to do with where the source is on the filesystem17:53
*** _ruhe is now known as ruhe17:53
sdagueit has to do with the fact that it's in development mode17:54
dhellmannif you've installed my-fancy-app and that uses oslo.config and then you try to use devstack, you'll have this issue17:54
sdagueright17:54
dhellmanns/oslo.config/any other package/17:54
sdagueso that seems completely broken17:54
*** DennyZhang has joined #openstack-infra17:54
dhellmannyes, I agree17:54
dstufftSo this may be a dumb question17:55
dstufftwhy's devstack not isolated from the system?17:55
dhellmannbecause what we're doing is installing editable versions of libraries into the global operating system's view of python17:55
dhellmannand that's not a great idea17:55
dhellmannright17:55
sdaguedstufft: because we need to make sure it will actually work on the system17:55
*** dstanek has joined #openstack-infra17:55
* dhellmann has a meeting, sorry17:55
dstufftactually work on what system17:55
dstufftisn't integrating into the OS the OS packager's job17:56
openstackgerritNoorul Islam K M proposed a change to openstack-infra/config: Move devstack hooks from infra config to solum repo  https://review.openstack.org/6541417:56
*** pcrews has joined #openstack-infra17:56
sdaguedstufft: we're actually trying to not hand them a big pile of fail that makes that hard though.17:56
dhellmannsdague: if it helps, libraries that are definitely meant for others to consume will not be in the oslo namespace, and that should make this less of an issue for those17:57
openstackgerritNoorul Islam K M proposed a change to openstack-infra/config: Move devstack hooks from infra config to solum repo  https://review.openstack.org/6541417:57
sdaguedhellmann: so oslo.config is not meant to be used beyond openstack server packages?17:57
dhellmannsdague: if we installed all of our dependencies into a single virtualenv, would that be too far off from installing it globally?17:57
dhellmannsdague: god, I hope not17:57
*** krtaylor has quit IRC17:58
sdaguedhellmann: so why is it used by oslo's update script then?17:58
dhellmannsdague: because that's also an openstack tool?17:58
dhellmannit's not meant to be used outside of openstack17:58
dhellmannat least, IMHO17:58
sdagueso if it was oslo-config instead would this not be an issue?17:59
sdagueor still an issue?17:59
*** herndon_ has quit IRC17:59
dhellmannthe name issue relates to the python package, so it would have to be osloconfig17:59
dhellmannbut yeah, if we didn't use namespace packages this would probably be less of an issue17:59
sdagueless of an issue, or not an issue?18:00
dhellmannalthough I don't know for sure that you'd have less trouble re-installing it in edit mode, I'd have to experiment with that18:00
dhellmannI'm not sure18:00
sdaguebecause if the answer is "not an issue", I think that's what has to be done18:00
*** thomasem has quit IRC18:01
dstufftit wouldn't be an issue that you'd install it and not see it because of how you installed it18:01
dhellmannI need to understand why a single virtualenv for devstack wouldn't work before I can agree, but we'll have to talk about that later18:01
sdaguesure18:01
sdagueand you need mordred in that conversation18:01
dstufftatleast as far as I know18:01
*** Ajaeger1 has joined #openstack-infra18:01
dstufftthis is python packaging we're talking about so there's a giant ass caveat that nobody actually understands how all of this works so18:02
sdaguebecause, honestly I'm noobish enough to just know the current situation is terrible. As a user, pip install (with whatever flags) boo, then not being able to import boo, is fail. :)18:02
*** medberry is now known as med_18:03
*** jasondotstar has quit IRC18:03
sdagueand I'm surprised that "just throw a venv at it" is the solution. Because didn't we make fun of java people for shipping 20 jvms with their software because they couldn't get all the software to play together :)18:03
*** kraman has quit IRC18:03
*** matel has quit IRC18:03
dhellmannwe wouldn't use a virtualenv in production, just for devstack18:04
*** kraman1 has joined #openstack-infra18:04
*** pballand has quit IRC18:05
sdaguedhellmann: in which case we just made moving from dev -> production that much further way18:06
sdagueaway18:06
sdaguebecause we'll stop testing if we can actually build a system where all this stuff plays together with a base OS18:06
*** senk has joined #openstack-infra18:06
*** beagles_brb is now known as beagles18:06
dhellmannthe virtualenv lets us say "all of these versions work together". What does using /usr give us over that if we're replacing system package versions of libs anyway?18:06
sdaguedhellmann: because we don't want to be replacing *all* of the system18:07
fungidstufft: part of the argument for installing it globally in the os is the suggestion that we want to see if it's going to be viable packaged before the distro packagers package it. sort of chicken-and-egg proposition which i don't completely think is sound18:07
dhellmanndoesn't devstack do that when it syncs the requirements file and then calls pip install -U?18:07
sdaguedhellmann: it doesn't run pip install -U18:07
sdagueit does pip install without the -U18:08
sdagueso if you have sufficient requirements, you get them18:08
dhellmannok18:08
sdaguenot the latest18:08
dstufftfungi: yea I don't buy that personally. I've never had anything but problems mixing virtualenv and a system site packages. Now granted i've never done anything as big as openstack so idk :)18:08
dstufftalthough if someone's packaging openstack wouldn't they be packaging it with the next version of their OS?18:08
dstufftthe *nix's don't usually like putting new stuff into an already released OS18:09
dhellmannsdague: i expect we could work around the issue with multiple installs for oslo.config by making devstack understand how the package might be installed, and removing it ourself instead of relying on pip to do it18:09
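A hedged sketch of that workaround (names are assumptions, not actual devstack code): enumerate every form a prior install may have taken, so the leftovers can be removed directly instead of trusting pip uninstall to find them all:

    import glob
    import os
    from distutils.sysconfig import get_python_lib

    def find_install_remnants(dist_name='oslo.config'):
        # Covers an editable install (.egg-link), a regular pip or distro
        # install (egg-info), and a wheel-based install (dist-info).
        site_packages = get_python_lib()
        patterns = [dist_name + '.egg-link',
                    dist_name + '-*.egg-info',
                    dist_name + '-*.dist-info']
        remnants = []
        for pattern in patterns:
            remnants.extend(glob.glob(os.path.join(site_packages, pattern)))
        return remnants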
sdaguedstufft: most of them provide it via an update channel18:09
*** thomasem has joined #openstack-infra18:09
dstufftsdague: ah, ok18:09
sdaguedstufft: because it evolves faster than the base OS18:10
fungidstufft: right now we're so fast-moving/evolving that shipping releases as old as the distro's release is seen by much of the community as broken/insufficient18:10
fungiwhat sdague said18:10
dhellmannfungi: could you have a look at https://review.openstack.org/#/c/65180/ when you have a few minutes?18:10
sdaguedstufft: "I've never had anything but problems mixing virtualenv and a system site packages" unfortunatelly translates to me as "never use python for system level things" :(18:10
*** jasondotstar has joined #openstack-infra18:10
fungias primarily a sysadmin, i believe in: you get what's released with your distro, plus security/usability fixes backported18:10
dhellmannfungi: no rush, but I'd like to start gating cliff on openstack consumers18:10
*** dstanek has quit IRC18:11
dstufftsdague: Eh, it's really an issue because you're using two different package managers to manage the same stuff18:11
fungidhellmann: i think i have that change starred, and i'm getting close to being able to get back to reviewing today if nothing else breaks18:11
dhellmannfungi: cool, thanks18:11
dhellmannsdague: what dstufft said18:12
sdaguedstufft: right, but when you have a distro that has a bunch of python stuff to get things done, what do you do?18:13
dstufftI always isolate myself from the system site packages18:13
sdaguedstufft: and how often does that become system level packages?18:14
*** oubiwann has quit IRC18:14
dstufftIf someone wants to put my software in a *nix, it's on them to integrate it, that's their value add over just installing things from PyPI, that they've spent the time to integrate it into the system18:14
dstufftsdague: I don't understand that question18:15
sdagueyeh, I'm not sure I do anymore either. :)18:16
dstufftI mean i work on pip so our stuff gets packaged by OSs, but we avoid dependencies like the plague :)18:16
sdague:)18:17
*** sarob has joined #openstack-infra18:17
*** sarob has joined #openstack-infra18:18
sdagueok, time to get some lunch before I just start trolling :)18:18
*** sarob has quit IRC18:18
kraman1fungi: ping, hows your load today. got some time to answer a few questions?18:19
*** sarob has joined #openstack-infra18:20
*** oubiwann has joined #openstack-infra18:20
*** noorul has joined #openstack-infra18:21
noorulhttps://review.openstack.org/#/c/65414/18:22
noorulIf someone can help me to get this in quickly18:22
dkranzfungi: How do you investigate why jenkins is not running on a new upload of a patch, for example https://review.openstack.org/#/c/64818/218:23
Ajaeger1dkranz: have a look at http://status.openstack.org/zuul/18:24
dkranzAJaeger: I did that but don't see anything for 6481818:24
mriedemsdague: to get a feel for how long a particular ES query takes, you suggested adding debug logging to check_success.py in elastic-recheck,18:24
mriedemi was thinking about actually making that something we store in the metrics we collect and dump - thoughts on that?18:25
Ajaeger1dkranz: Zuul was restarted at the time the change was done, you need to retrigger it.18:25
* Ajaeger1 checked the timestamp18:25
dkranzAJaeger: ok, thanks18:25
*** sarob has quit IRC18:25
Ajaeger1dkranz "recheck no bug" should be all it needs - and then some patience ;)18:26
*** sarob has joined #openstack-infra18:26
dkranzAJaeger: Yup18:26
*** CaptTofu has quit IRC18:26
sdaguemriedem: so we're actually pretty stateless today, I'd rather stay that way as much as possible18:30
*** sarob has quit IRC18:31
mriedemsdague: not sure i follow, figured getting the query time into the print_metrics output would be better than having to dig through the debug logs per query?18:31
*** sarob has joined #openstack-infra18:31
*** pblaho has quit IRC18:31
fungikraman1: my day is pretty crazy, but i can probably answer a question or two18:31
sdaguemriedem: so eventually, sure. the problem is there is a hot/cold data problem18:32
sdagueso... actually, maybe a different tool is better18:32
*** sarob has quit IRC18:32
sdaguethat does 4 runs of each query, throws away the first one, averages the other 318:32
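A minimal sketch of that tool (run_query stands in for whatever executes the ES query; it is an assumption, not elastic-recheck code):

    import time

    def average_query_time(run_query, runs=4):
        # Time the query `runs` times, drop the first (cold-cache) run to
        # sidestep the hot/cold data problem, and average the rest.
        timings = []
        for _ in range(runs):
            start = time.time()
            run_query()
            timings.append(time.time() - start)
        return sum(timings[1:]) / float(len(timings) - 1)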
kraman1fungi: i got zuul working the way I need it with a bunch of hacks. was hoping to get some time to run the hacks by you and see if there was a better way to do it so i could send patches to zuul18:33
fungidkranz: i think that 64818,2 got uploaded while jeblair was replacing zuul, so it probably missed the patchset upload event18:33
fungioh, Ajaeger1 just mentioned that18:33
mriedemsdague: maybe a new option for the elastic-recheck CLI18:33
mriedem?18:33
kraman1fungi: if you dont have time (looks like it), then can you please suggest anyone else I could talk to about the hacks?18:33
sdaguemriedem: honestly, I'd rather go with a smaller set of tools vs. one big cli at this point18:34
dkranzfungi: So in the future, if I see something where jenkins does not run and nothing in zuul, I should not bother to report that but just do recheck?18:34
*** CaptTofu has joined #openstack-infra18:34
Ajaeger1dkranz: and check that the zuul queue is empty!18:34
*** sarob has joined #openstack-infra18:34
*** praneshp has joined #openstack-infra18:34
Ajaeger1if there's a long queue, you might not see it ;)18:34
dkranzAJaeger: Got it :)18:34
fungikraman1: maybe asking next week would be better, when everyone besides me isn't at a conference in australia (jeblair is our resident expert on zuul since he wrote just about all of it)18:34
*** sarob has quit IRC18:35
kraman1fungi: ok, will wait for jeblair to return. thanks18:36
*** sarob has joined #openstack-infra18:36
fungidkranz: if you search for the change number on http://status.openstack.org/zuul/ and it's not there after a minute, and that page says "Queue lengths: 0 events, 0 results." then chances are good zuul missed the event for it for some reason18:36
dkranzfungi: k, thanks18:36
fungidkranz: though there's another corner case... if any of the change's dependencies or reverse-dependencies is a draft patchset (but if you avoid drafts that should be uncommon). currently that breaks zuul's ability to figure out how to test it18:37
dkranzfungi: Yes, I have learned to check that too the last time I saw this :)18:38
*** branen_ has quit IRC18:38
*** derekh has quit IRC18:38
*** rnirmal has joined #openstack-infra18:39
*** sarob has quit IRC18:40
*** sarob has joined #openstack-infra18:40
*** dstanek has joined #openstack-infra18:40
*** praneshp has quit IRC18:42
*** kraman1 has left #openstack-infra18:43
*** krtaylor has joined #openstack-infra18:43
*** sarob has quit IRC18:45
*** praneshp has joined #openstack-infra18:49
*** dizquierdo has quit IRC18:50
*** sandywalsh has quit IRC18:54
*** dprince_ has joined #openstack-infra18:55
*** dprince has quit IRC18:55
openstackgerritMatt Riedemann proposed a change to openstack-infra/elastic-recheck: Log how long it takes to run a query when collecting metrics  https://review.openstack.org/6551418:55
fungidevs going into a blind reverify loop on consistent failures is not helping our gate throughput... https://review.openstack.org/6192418:57
fungii think we should call a stop to any havana approvals until https://launchpad.net/devstack/bugs/1266094 is solved (and now it looks like volumes is broken on grizzly as well as aggregates)18:58
*** oubiwann has quit IRC18:58
*** CaptTofu has quit IRC18:58
bknudsonfungi: the launchpad link didn't work for me.18:59
mriedemhttps://bugs.launchpad.net/devstack/+bug/126609418:59
fungioops... i meant https://launchpad.net/bugs/126609418:59
fungior what mriedem corrected it to (that works too)18:59
mriedemmaybe should be marked as critical...19:00
fungimaybe should be triaged at all19:00
* fungi doesn't have permissions to set importance on devstack bugs19:01
fungiafaik it could be a duplicate (i find it hard to believe this situation has escaped detection by a wider audience)19:02
fungibut that was the only obvious one i spotted19:02
*** markmcclain has joined #openstack-infra19:03
mriedemhmm Error in sys.exitfunc19:05
*** sandywalsh has joined #openstack-infra19:07
*** changbl has quit IRC19:08
*** wenlock has quit IRC19:10
*** melwitt has joined #openstack-infra19:10
openstackgerritJim Branen proposed a change to openstack/requirements: Use new hplefthandclient  https://review.openstack.org/6517919:11
*** wenlock has joined #openstack-infra19:12
*** esker has quit IRC19:12
*** herndon_ has joined #openstack-infra19:12
*** jroovers has quit IRC19:13
*** eharney has quit IRC19:15
*** branen has joined #openstack-infra19:16
*** mrodden has quit IRC19:19
*** changbl has joined #openstack-infra19:19
portantefungi: I just noticed that a swift commit was about to fail due to one of the gate tests failing (all other gate tests for the commit had passed), but when the job ahead of it failed, it was reset and that flaky failure mode was lost19:20
*** mrodden has joined #openstack-infra19:20
portantethat sounds like we are potentially missing the true rate of test flakiness with that, no?19:21
*** herndon_ has left #openstack-infra19:21
*** ruhe is now known as _ruhe19:21
fungiportante: not entirely. the logs for those jobs were still uploaded unless they got cancelled, so elasticsearch has a record of them (it's why the hit volume reported by elastic-recheck's graphs is not reflective of the number of changes which actually failed to merge on the first pass)19:22
portantegreat, thanks for explaining that19:22
fungiso we do collect data on jobs which fail, even if they don't result in a change getting kicked out of the gate19:23
portantedo we have those events recorded in a database somewhere?19:23
*** syerrapragada1 has joined #openstack-infra19:24
fungilogstash keeps a short-term (currently two week) record of them, which is used in the analysis elastic-recheck performs19:24
portantehmm, okay19:24
fungiand so they can be queried via lucene expressions through the kibana interface at http://logstash.openstack.org/19:24
*** syerrapragada1 has quit IRC19:24
*** dcramer_ has joined #openstack-infra19:24
fungiwhat it's recording and indexing is the logs themselves. i think zuul also reports the job results via statsd to graphite.openstack.org (though at the moment that still may be broken--we thought it was an iptables problem after the zuul replacement last night, but the graphs are still flat-lining)19:26
*** johnthetubaguy has quit IRC19:26
portanteah19:26
fungii need to look into why. the firewall rules have been correct for a while now i think, so it's got to be something else19:27
*** thuc has joined #openstack-infra19:27
*** jooools has quit IRC19:31
portantek19:32
*** rossella_s has quit IRC19:33
*** DennyZhang has quit IRC19:34
*** dcramer_ has quit IRC19:34
*** sarob has joined #openstack-infra19:34
*** hashar has joined #openstack-infra19:35
*** jroovers has joined #openstack-infra19:38
*** smarcet has left #openstack-infra19:45
*** thuc has quit IRC19:45
*** thuc has joined #openstack-infra19:46
*** ArxCruz has quit IRC19:46
*** dcramer_ has joined #openstack-infra19:48
*** ^d has quit IRC19:49
*** thuc has quit IRC19:50
*** thuc has joined #openstack-infra19:51
*** thuc has quit IRC19:51
*** thuc has joined #openstack-infra19:51
*** eharney has joined #openstack-infra19:52
*** sarob has quit IRC19:53
*** sarob has joined #openstack-infra19:54
*** dripton has joined #openstack-infra19:54
*** dripton__ has quit IRC19:54
*** sarob has quit IRC19:57
*** hashar has quit IRC20:00
*** vipul is now known as vipul-away20:02
*** vipul-away is now known as vipul20:02
*** rfolco has quit IRC20:04
*** mrodden1 has joined #openstack-infra20:13
*** mrodden has quit IRC20:14
*** _ruhe is now known as ruhe20:14
*** yolanda has quit IRC20:15
*** hashar has joined #openstack-infra20:16
*** vipul is now known as vipul-away20:17
openstackgerritMalini Kamalambal proposed a change to openstack-infra/devstack-gate: Add Support for Marconi  https://review.openstack.org/6514520:18
*** ^d has joined #openstack-infra20:19
*** AaronGr_Zzz is now known as AaronGr20:25
fungiseeing what's currently slowing up the gate, https://launchpad.net/bugs/1232303 is killing quite a few cycles for gate-tempest-dsvm-large-ops (using nova network)20:26
*** malini is now known as malini_afk20:33
openstackgerritZane Bitter proposed a change to openstack-infra/reviewstats: Add Bartosz Gorski to heat-core  https://review.openstack.org/6553420:36
*** vipul-away is now known as vipul20:36
*** freyes__ has quit IRC20:39
*** sarob has joined #openstack-infra20:41
*** ociuhandu has quit IRC20:41
openstackgerritSergey Lukjanov proposed a change to openstack/requirements: Sort tracked projects list  https://review.openstack.org/6377020:50
*** hogepodge has joined #openstack-infra20:54
mriedemfungi: do we need an e-r query for bug 1232303?20:57
*** sarob has quit IRC20:59
*** sarob has joined #openstack-infra21:00
fungimriedem: possibly? i haven't checked (too swamped with other tasks still)21:00
mriedemfungi: i mean, there isn't one today, but i see you have one with 100% fail rate and 49 hits in the last 7 days21:00
mriedemi'll check it out21:00
*** dstanek has quit IRC21:00
mriedemonly thing i'm not sure about is if it must be pinned to that project, but i can look21:00
*** sarob_ has joined #openstack-infra21:01
fungimriedem: i really don't know. i saw it shoot the entire ~95 change gate queue in the head, googled the error, found that bug, adjusted the query in it for the new dsvm job names and saw it was pretty bad21:02
fungithen griped in here and hoped someone else would do all the real work ;)21:02
mriedemi didn't see the griping, but saw the bug report update, so i'll check out at least the e-r query21:02
fungiyeah, it hit a horizon change in the past hour21:03
fungithis might come back to the console log not getting indexed issue sdague mentioned earlier (i think he posted to the -infra ml on it too)21:03
*** sarob has quit IRC21:04
sdaguefungi: yeh I posted over to -infra list21:05
sdagueas I think clarkb will need to look into it21:05
sdagueI expect it's reasonably subtle21:05
*** CaptTofu has joined #openstack-infra21:06
sdaguemriedem: we should also probably only add e-r bugs for things that are not in New state. I think a part of the issue is projects aren't triaging them, or don't realize the bug is bouncing the gate21:06
fungisdague: yeah, i've already got a plate full of subtle (and not-so-subtle) i'm trying to wolf down, so i'm hoping he'll get spare time to respond on it21:06
sdaguefungi: yep agreed21:06
*** sarob_ has quit IRC21:07
sdagueon the upside, after I got past that issue, getting the data series to report reasonable things based on pandas series is starting to click21:07
*** sarob has joined #openstack-infra21:07
sdagueI've got a rewrite of check_success about 60% complete21:08
fungitoo awesome21:08
sdagueit makes some things much simpler and cleaner, though you kind of have to get a feel for how these DataFrame objects work21:09
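As a rough illustration of the DataFrame style being described (invented data and current pandas syntax, not the actual check_success rewrite):

    import pandas as pd

    # One row per completed job run, indexed by timestamp.
    runs = pd.DataFrame(
        {'passed': [1, 0, 1, 1]},
        index=pd.to_datetime(['2014-01-08 15:00', '2014-01-08 16:00',
                              '2014-01-08 17:00', '2014-01-08 18:00']))

    # A time-bucketed success rate falls out of a single resample call.
    print(runs['passed'].resample('2H').mean())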
*** krtaylor has quit IRC21:11
*** sarob has quit IRC21:12
*** hogepodge_ has joined #openstack-infra21:17
*** weshay has quit IRC21:17
*** hogepodge has quit IRC21:19
*** hogepodge_ is now known as hogepodge21:19
openstackgerritMatt Riedemann proposed a change to openstack-infra/elastic-recheck: Add query for bug 1232303  https://review.openstack.org/6553921:19
jog0sdague: have you seen the large-ops test failing21:20
jog0in gate http://logs.openstack.org/43/60443/5/gate/gate-tempest-dsvm-large-ops/ac1b04a/console.html21:20
sdaguejog0: I saw there were failed jobs. I've basically tried to pull back from looking into failures, otherwise this ER work is never getting done21:21
fungijog0: we were discussing bug 1232303 as being a likely high-priority one for large-ops fails21:21
fungiooh, SSHTimeout in a tempest job also just caused a massive gate reset21:22
jog0fungi: sshtimeout is always causing gate resets21:23
* fungi wonders how long it will be until we can design a cloud which can actually boot reachable vms ;)21:23
* jog0 wonders too21:23
*** prad has joined #openstack-infra21:24
jog0mriedem: once the tests pass +A  on https://review.openstack.org/#/c/6553921:25
*** smarcet has joined #openstack-infra21:26
mriedemokey dokey21:26
fungiouch, precise12 went into rapid-fire jenkins agent failure and killed a bunch of jobs, so i unplugged it just now https://jenkins02.openstack.org/computer/precise12/builds21:26
*** prad_ has quit IRC21:26
fungii'll wrestle it back into shape21:27
*** david-lyle has quit IRC21:27
*** david-lyle has joined #openstack-infra21:28
jog0fungi: dumb question I don't see 65539 in status.o.o/zuul21:30
jog0why is that21:30
Ajaeger1wow, "Queue lengths: 120 events, 3746 results. " - that's a lot of events21:30
Ajaeger1jog0: ^ - there are many events not listed, see the queue length...21:30
jog0check queue of 25 is the limit?21:31
*** weshay has joined #openstack-infra21:32
fungijog0: no, zuul is busy processing thousands of events from a gate reset in the past few minutes, and hasn't gotten through the backlog far enough to find the patchset event for that change and enqueue it21:32
jog0fungi: ahhhhh21:33
jog0fungi: any way to make zuul not allocate resources to patchsets that are a certain depth down in the gate queue?21:33
*** SergeyLukjanov_ has joined #openstack-infra21:34
*** SergeyLukjanov has quit IRC21:35
*** Ajaeger1 has quit IRC21:38
*** rcleere has quit IRC21:39
fungijog0: that has been discussed as a possible pessimistic "optimization" (perhaps dynamically determined by recent gate success rates) but i think the consensus was "let's just fix the bugs instead of making them hurt less"21:39
fungiin some ways, broken software slowing down the rate of development serves to redirect some of that effort to addressing the brokenness, mostly in proportion to the pain21:40
*** dprince_ has quit IRC21:40
jog0the only thing this would change is less wasted resources in zuul, but I agree about the redirect efforts part21:41
jog0so i'll retract my comment21:41
portantefungi: are we seeing more bugs fixed?21:41
fungiportante: good question--i don't have relevant numbers on bug fix rates (maybe qa does), but i'm pretty sure that just making them easier to ignore is not going to improve quality21:43
portantethe fear I have is that the pain has to get too high before the bug fix rate changes and the environment becomes unusable21:45
portanteperhaps the PTLs can help monitor the fix rate and make that public so we have a feedback loop in place21:45
fungiwell, we had some consensus from ptls that gate-impacting bugs would get addressed as a top priority, but there has to be some basic rate of bug triage going on for that to happen (unless we're just going to accomplish it by opening bugs and then shouting on the -dev ml until someone notices)21:46
portantesomehow that bug fix rate has to be tracked and made visible, I would think21:47
portanterussellb: you around?21:47
fungiat least from an infra perspective, i can say that while i may not triage all bugs in a timely fashion, i keep an eye on the gating-related ones and jump on them straight away if they're an actual infra problem (or reroute them quickly if they're not)21:48
portanteand we try to do the same with swift21:48
portantebut that assumes that we have a good mechanism in place to get them tracked properly in the first place, which I don't know for sure21:49
fungii think this week, there may still be some post-holiday sluggishness going on with projects keeping on top of bugs21:49
fungibut that's just an unfounded guess21:49
portanteyes, that is not an unreasonable expectation21:49
fungiokay, precise12 has been rebooted, brought back online in jenkins and is running tests without any sign of agent failures now21:50
*** smarcet has left #openstack-infra21:51
*** dizquierdo has joined #openstack-infra21:52
fungiprecise32 got automatically marked offline for some reason, so i'm prodding it now21:54
fungiyep, it's gone unreachable. probably hung21:54
jeblairfungi: this is the worst jetlag ever.21:55
fungijeblair: sorry to hear it. the sort of jetlag which shoots you in a dark alley and then fishes in your pockets for loose change?21:56
jeblairfungi: loose change is serious money here21:56
fungijeblair: i haven't solved the mystery of the broken statsd reporting yet, what with other stuff cropping up. i added ip6tables rules to graphite, but that host has no aaaa record anyway and tcpdump on zuul shows no sign that it's trying to send any 8125/udp packets21:57
fungijeblair: however, i did confirm that the statsd envvars are present in the calling environment for the currently running zuul daemon pids according to /proc/pid/environ21:58
harlowjawho killed github, lol21:59
jeblairfungi: weird; okay i'll keep looking21:59
fungiharlowja: i shot the github and i won21:59
harlowjafungi :)21:59
harlowjashot em dead21:59
mgagneharlowja: https://status.github.com/messages =)22:00
harlowjaya mgagne22:00
harlowjafungi did it22:00
funginot really, but i did manage to get a sex pistols song stuck in my head now22:00
harlowja:)22:00
jog0mriedem: your patch failed22:00
jog0https://review.openstack.org/#/c/65539/122:01
fungier, sorry, it was dead kennedys22:01
jog0mriedem: ping me when it passes tests and I will +A22:01
mriedemjog0: yeah, hit a bad slave22:01
fungimriedem: precise12 or a different one?22:01
mriedemprecise3722:01
jog0bad slave lol22:01
fungiugh, i'll jump on that one next22:01
*** hogepodge has quit IRC22:03
mriedemso i guess e-r doesn't run on e-r jobs?22:03
mriedem:)22:03
fungiokay, precise37 is in the corner for a timeout while i look it over22:03
fungilooks like it did a fair amount of damage too... https://jenkins01.openstack.org/computer/precise37/builds22:04
fungijeblair: did you just manually generate statsd traffic off the new zuul?22:04
jeblairfungi: yes22:04
fungii hadn't gotten around to killing my tcpdump yet and just saw two packets22:04
jeblairfungi: and it showed up on graphite :/22:05
fungiso the zuul process simply doesn't want to send them22:05
jog0mriedem: also 1257626 is the same bug22:05
jog0 and we already have a query for it22:05
jog0or almost the same bug22:05
jog0not sure why we don't always see that one hit22:05
mriedemjog0: because the message has "nova.compute.manager" in it?22:06
*** sarob has joined #openstack-infra22:08
*** SergeyLukjanov_ has quit IRC22:08
*** mfer has quit IRC22:08
mriedemjog0: hmm, it hits in logstash: http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwibm92YS5jb21wdXRlLm1hbmFnZXIgVGltZW91dDogVGltZW91dCB3aGlsZSB3YWl0aW5nIG9uIFJQQyByZXNwb25zZSAtIHRvcGljOiBcXFwibmV0d29ya1xcXCIsICBSUEMgbWV0aG9kOiBcXFwiYWxsb2NhdGVfZm9yX2luc3RhbmNlXFxcIlwiIEFORCBmaWxlbmFtZTpcImxvZ3Mvc2NyZWVuLW4tY3B1LnR4dFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlc22:09
mriedemjog0: is it that gate-tempest-dsvm-large-ops isn't checked by e-r?22:09
jog0mriedem: doh yes22:09
jog0thats it22:09
*** SergeyLukjanov has joined #openstack-infra22:10
mriedemalright, i'll abandon my patch as a duplicate, and mark one of the bugs as a dupe of the other22:10
openstackgerritSean Dague proposed a change to openstack-infra/elastic-recheck: wip: use pandas DataFrames for check_success  https://review.openstack.org/6517322:10
jog0just did that22:10
jog0mriedem: it shouldn't be hard to make e-r support large-ops22:11
jog0want to take a whack at that21:51
mriedemjog0: if you can give some guidance22:11
jeblairfungi: and it used ipv4, so nothing relevant should have changed since the last zuul restart, right?22:11
fungijeblair: yep. ipv422:11
fungi22:02:03.770385 IP 162.242.154.88.46477 > 198.61.209.112.8125: UDP, length 822:11
*** fbo is now known as fbo_away22:11
jog0mriedem: search the code for tempest22:11
jog0actually hmmmm22:12
jog0we should be checking large-ops22:12
jog0not sure if we are but we should be22:12
jog0because we check all tempest jobs22:12
mriedemjog0: i can open a bug at least to track it22:12
jeblairfungi: i'm out of ideas other than to see if a restart changes anything.22:13
fungijeblair: oh, i solved the mystery of the zuul filesystem usage, btw. very, very large zuul/gearman logs from a while back. some from july were 2gb per log. /var/log/zuul accounted for 22gib on the / filesystem. scratch log copies/thread dumps in /root took up another 6+gib22:13
sdaguejog0: we're also losing stuff in the ER index22:13
jog0sdague: eep22:14
fungijeblair: so i don't think there's too much risk of running out of / for now22:14
*** weshay has quit IRC22:14
sdaguego see my openstack-infra email22:14
sdaguethat was tripping me up all last night22:14
* jog0 isn't on infra22:14
sdaguejog0: you should fix that :P22:14
* jog0 signs up 22:14
sdaguehttp://lists.openstack.org/pipermail/openstack-infra/2014-January/000615.html22:15
sdaguebasically I was trying to do a merge on build_uuids22:15
jeblairsdague: do you know if that's the case for very recent builds?22:15
sdaguejeblair: that build was 7 days ago22:15
*** thomasem has quit IRC22:16
fungiprecise32 and precise37 are back on line not and not insta-failing jobs22:16
*** krtaylor has joined #openstack-infra22:16
fungier, back on line now22:16
jeblairsdague, fungi: we should correlate that with the jenkins upgrade and scp plugin issue22:16
sdaguejeblair: I can write a tool to spit out missing build_uuids once I get the check_success bits done22:16
sdaguewhich I'm close on, but I ran out of time for the day. Time to run off to linux users group here22:17
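The core of such a checker could be a simple set difference; a sketch assuming both UUID lists are gathered elsewhere (from zuul's records and an elasticsearch query respectively):

    def missing_build_uuids(ran_uuids, indexed_uuids):
        # Build UUIDs that ran but never made it into the ES index.
        return sorted(set(ran_uuids) - set(indexed_uuids))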
fungijeblair: sdague: good point. we had a span of nearly a day where console log files were being truncated22:17
sdaguefungi: truncated to 0?22:17
fungisdague: not the ones i saw, just various points in the middle of the log22:17
sdaguethere are 0 console.html lines for the build uuid I provided in there22:17
sdagueI provided a build_uuid and es url in the email22:17
sdaguefor further exploration22:18
sdagueanyway, need to run. Talk to folks tomorrow22:18
jeblairfungi: how do you feel about a zuul restart?22:19
fungithe gate just reset, so i suppose now's as good a time as any22:19
jeblairdone22:20
* fungi digs up the restart notes22:20
jeblairi wish there were a nodepool delete all command22:20
fungi~root/zuul-changes2.py22:21
jeblairfungi: oh, i already did that22:21
jeblairfungi: i should have phrased my question differently22:21
fungiahh, okay22:21
jeblairstarting zuul now22:21
fungia'ight22:21
jeblairlet's see if there's any statsd traffic22:21
*** rwsu has quit IRC22:22
jeblairit should register  empty queue gauges when it starts22:22
jeblairi think22:22
jeblairwow, still no joy22:22
funginada22:23
fungi2014-01-08 22:21:50,833 DEBUG zuul.Scheduler: Statsd enabled22:25
fungithat's the only reference it seems to log about anything to do with statsd or graphite22:25
*** sarob has quit IRC22:26
mriedemjog0: looks like e-r is reporting on 1257626 after all, here is an example: https://review.openstack.org/#/c/57358/22:27
jeblairfungi: that was a manual test22:28
jog0mriedem: hmm, strange, this may be the issue sdague saw22:28
fungijeblair: i figured, since it was solitary. i'm expecting a flood if it's spontaneously fixed22:28
sdakehey guys, https://github.com/openstack/heat is 404ing - any tips?22:29
*** weshay has joined #openstack-infra22:29
mriedemsdake: not for me, but use git.openstack.org22:29
*** pelix has joined #openstack-infra22:29
*** dkranz has quit IRC22:30
*** rwsu has joined #openstack-infra22:30
fungimriedem: sdake: https://status.github.com/messages (but at least if you use git.o.o instead you can pester us to fix it when it breaks)22:31
*** hogepodge has joined #openstack-infra22:31
*** flaper87 is now known as flaper87|afk22:32
jeblairfungi: restarted and queues restored22:33
*** sarob has joined #openstack-infra22:33
pelixclarkb: re https://review.openstack.org/#/c/63579 would monkey patching the minidom Element class to fix the writexml method for python 2.6 be an acceptable fix?22:34
fungiyep, i see the changes popping back up on the status page as zuul catches up with the queue22:34
pelixAlternatives involve using something like an ElementTree writer, an escape method on all element text fields for html entities, and a regex to remove the blank space in empty nodes, i.e. make '<tag />' = '<tag/>'22:34
pelixfixing minidom until support for python 2.6 is removed seems the least crazy solution I have so far :|22:34
*** sarob has quit IRC22:36
clarkbpelix: that doesn't sound too bad and can probably be done in a future-python-friendly way22:36
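A hedged outline of that monkey patch (the replacement body is a simplified 2.7-style backport for illustration, not the exact fix that landed in jenkins-job-builder):

    import sys
    import xml.dom.minidom
    from xml.sax.saxutils import quoteattr

    def _writexml(self, writer, indent="", addindent="", newl=""):
        # Keep text-only elements on one line so pretty-printing does not
        # inject whitespace into text nodes, and emit '<tag/>' (no blank)
        # for empty elements.
        writer.write(indent + "<" + self.tagName)
        for name, value in sorted(self.attributes.items()):
            writer.write(" %s=%s" % (name, quoteattr(value)))
        if not self.childNodes:
            writer.write("/>%s" % newl)
        elif (len(self.childNodes) == 1 and
                self.childNodes[0].nodeType == self.TEXT_NODE):
            writer.write(">")
            self.childNodes[0].writexml(writer, "", "", "")
            writer.write("</%s>%s" % (self.tagName, newl))
        else:
            writer.write(">%s" % newl)
            for node in self.childNodes:
                node.writexml(writer, indent + addindent, addindent, newl)
            writer.write("%s</%s>%s" % (indent, self.tagName, newl))

    if sys.version_info[:2] == (2, 6):
        xml.dom.minidom.Element.writexml = _writexml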
*** sarob has joined #openstack-infra22:37
*** dkranz has joined #openstack-infra22:37
*** sarob_ has joined #openstack-infra22:38
*** thuc has quit IRC22:38
*** sarob_ has joined #openstack-infra22:39
*** thuc has joined #openstack-infra22:39
*** pelix has quit IRC22:41
*** sarob has quit IRC22:41
jeblairfungi: i have a reproducible test case; it looks like it's related to the python daemon package22:42
jeblairthere's a new version.  maybe it sanitizes the env22:42
*** dmsimard has joined #openstack-infra22:42
fungijeblair: i kept thinking it might be an environment issue, which was why i was digging in /proc/pid/environ22:42
fungithat does sound entirely possible22:43
*** thuc has quit IRC22:43
*** dkranz has quit IRC22:43
fungiusing a deb of it? the latest on pypi was uploaded 2010-03-0222:44
fungibut i do recall a recent cve for python-daemon22:45
fungijust not the details22:45
*** sarob_ has quit IRC22:46
dmsimardHey guys, maybe you can point me in the right direction ? I'm trying to do a commit that depends on a commit that is already in review. I essentially checked out the commit in review, committed on top of it and sent it for review but it doesn't seem like it's that easy. Tried to dig the documentation but haven't found much. Any ideas ?22:46
dmsimard(Maybe I should be asking in #openstack-dev..)22:46
fungijeblair: the deb for python-daemon installed on zuul has its most recent package changelog entry dated Sat, 17 Dec 2011 14:09:14 +000022:46
fungidmsimard: it is supposed to be that easy. what errors did you get?22:47
*** SergeyLukjanov_ has joined #openstack-infra22:47
*** SergeyLukjanov_ has quit IRC22:47
jeblairfungi: oh! we're using an _older_ one on the new server, and a newer one on the old server from pypi22:48
*** SergeyLukjanov has quit IRC22:48
*** SergeyLukjanov_ has joined #openstack-infra22:48
openstackgerritZane Bitter proposed a change to openstack-infra/reviewstats: Reformat heat.json  https://review.openstack.org/6555822:48
openstackgerritZane Bitter proposed a change to openstack-infra/reviewstats: Add Bartosz Gorski to heat-core  https://review.openstack.org/6553422:48
dmsimardfungi: The error suggests a change-id should be provided in the commit message - but I'm confused since this should be generating a new review - not a new patch set to the ongoing review.22:48
fungijeblair: aha!22:48
*** SergeyLukjanov_ has quit IRC22:48
fungidmsimard: does git log show a Change-Id: header in your commit message?22:48
dmsimardfungi: No, but none of the commits do, actually22:49
fungidmsimard: what repository?22:50
dmsimardfungi: puppet-swift22:50
dmsimardstackforge/puppet-swift22:50
dmsimardThe change-id are usually in the footer.22:50
*** jroovers has quit IRC22:51
fungidmsimard: normally, the first time you run git-review in a local repository, it adds a commit hook to automatically update your commit message with a random change-id header when you write it out (if there isn't already one in the commit message), but if you hadn't used git-review in that repository before the last time you ran git commit, it wouldn't have been added22:51
fungidmsimard: right, git header lines go at the end of the commit message (more correctly called a footer, but they get confusingly referred to as header lines most of the time anyway)22:52
jeblairfungi: i think the daemon version is a red herring; it seems related to the new gear lib which imports statsd22:52
fungiohhh22:52
jeblairfungi: (but we now have daemon 1.6 from pip on the new server)22:52
fungiis it importing one of the other python modules called statsd rather than the statsd we want?22:52
dmsimardfungi: So is gerrit expecting the change-id from the parent commit ?22:53
ruhedmsimard: "git review -s" and the then "git commit --amend" to get change-id appended to commit message22:53
fungidmsimard: it expects a change-id on any commit you're pushing which is not already on the target branch22:53
jeblairfungi: afaict it still gets a correct statsd object22:53
*** dims has quit IRC22:53
fungioh, so not the "wrong" statsd, just maybe the wrong version22:53
jeblairfungi: i think it's gear statsd + daemonizing that's triggering it22:54
jeblairbrb22:54
clarkbis statsd opening a socket then daemon closing it?22:55
*** sarob has joined #openstack-infra22:55
dmsimardruhe: That did the trick. Thanks.22:55
arosenHi, I did recheck  bug 1257626 here but it hasn't rerun the tests https://review.openstack.org/#/c/64769/  . I had also tried recheck no bug and same thing.22:55
fungidmsimard: if you're trying to push a series of patches and more than one of them is not yet in gerrit, you may need to rebase -i and switch them from pick to edit so you can commit --amend each of their commit messages22:55
arosenIt failed with that ssh timeout issue.22:55
fungiarosen: you may have done it right when zuul was being restarted around 30 minutes ago22:56
* fungi looks22:56
fungiyep22:56
*** pelix has joined #openstack-infra22:56
fungioh, nope, that was over an hour ago22:56
fungiarosen: ahh, i think it was in the middle of being tested, got aborted and restored during the zuul restart, and the status page shows it there now22:57
*** rnirmal has quit IRC22:58
fungiall its tests are started except py26, which is waiting on an available centos6 slave22:58
arosenfungi: ah i see it now on http://status.openstack.org/zuul/ didn't see it there a few min ago.22:58
arosenthanks!22:58
funginp22:58
jeblairfungi: i'm guessing it's because two processes can't share a socket.  :)22:59
fungijeblair: this is not a surprise22:59
*** jroovers has joined #openstack-infra22:59
fungipresumably they need different local port numbers22:59
jeblairi think the current statsd initializes a global object with a socket23:00
fungii mean, it's a surprise in that i didn't think of it23:00
jeblairand the zuul server command imports gear which imports statsd before the fork23:00
jeblairso both processes end up with a statsd object with an initialized socket23:00
*** dizquierdo has quit IRC23:00
jeblairi _think_ the newer statsd library changed this, but i think zuul still needs updating to use it23:01
fungii wonder if it wouldn't be cleaner to start with one parent process and then fork the zuul and gearman processes from that before importing statsd23:01
*** sparkycollier has quit IRC23:01
jeblairso quick fix is just to move the gear import down into the 'start geard' function23:02
fungioh, if they've fixed statsd connection object sharing in the module, then yeah, totally23:02
*** sarob has quit IRC23:02
jeblairi think we should move the import for now, then move it back if that's fixed later23:02
fungiquick fixes until we can upgrade to that, right23:02
*** sarob has joined #openstack-infra23:03
*** burt has quit IRC23:04
*** sarob_ has joined #openstack-infra23:04
openstackgerritJames E. Blair proposed a change to openstack-infra/zuul: Move gear import to a safe place  https://review.openstack.org/6556123:05
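[editor's note: a rough sketch of what "move the import down" means, in the spirit of the change proposed above (65561). The function body and the gear.Server() call are illustrative assumptions, not zuul's exact code.]

    # Module level: no "import gear" here. gear imports statsd, and
    # statsd sets up its socket at import time, before the daemon fork.

    def start_geard():
        # Deferred import: the socket is now created inside the
        # process that actually runs geard, after forking.
        import gear
        return gear.Server()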
fungimikal: if you're near an internet, garyk posted to the -dev ml with something which might be a turbo-hipster implementation bug (problem with dependent patchsets, sounds like)23:07
fungimikal: Subject: [openstack-dev] [nova][turbo hipster] unable to rebase23:07
*** hashar has quit IRC23:07
*** jamielennox|away is now known as jamielennox23:07
*** sarob has quit IRC23:07
sdakethanks fungi23:08
*** dims_ has joined #openstack-infra23:08
*** thuc has joined #openstack-infra23:09
*** eharney has quit IRC23:10
*** mriedem has quit IRC23:14
openstackgerritJames E. Blair proposed a change to openstack-infra/zuul: Move gear import to a safe place  https://review.openstack.org/6556123:15
*** mrodden1 has quit IRC23:17
*** thuc has quit IRC23:18
*** dmsimard has left #openstack-infra23:18
*** yassine has quit IRC23:21
*** sarob_ has quit IRC23:21
*** sarob has joined #openstack-infra23:22
*** ryanpetrello has quit IRC23:23
*** ryanpetrello has joined #openstack-infra23:25
jeblairfungi: i'm waiting for a reset then i plan to apply that zuul patch manually23:25
*** sarob has quit IRC23:26
fungijeblair: sounds like a fine idea23:29
fungiit's still waiting on a py26 slave, but it passed py27 and the rest just fine23:30
fungishould be safe23:30
fungiespecially since we're not running zuul on centos ourselves23:30
*** thuc has joined #openstack-infra23:33
*** dstanek has joined #openstack-infra23:33
*** jgrimm has quit IRC23:33
jeblairdavid-lyle: https://wiki.openstack.org/wiki/GerritJenkinsGithub#Tagging_a_Release23:36
jeblairdavid-lyle: that was written for thierry, but if you follow the same instructions but substitute master instead of milestone-proposed, you should get a release automatically built and uploaded to pypi23:37
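[editor's note: a hedged sketch of the tagging step that wiki section describes, applied to master as jeblair suggests. The version string and the "gerrit" remote name are assumptions; release tags are GPG-signed.]

    import subprocess

    version = "0.1.0"  # hypothetical release number

    # Create a signed tag and push it to gerrit; the post-tag jobs
    # then build the release and upload it to PyPI automatically.
    subprocess.check_call(["git", "tag", "-s", version, "-m", version])
    subprocess.check_call(["git", "push", "gerrit", version])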
*** dstanek has quit IRC23:38
fungidid i miss a question in scrollback, or was this a contextual change of venue?23:38
david-lylejeblair: thanks23:38
jeblairfungi: change of venue23:38
jerryzfungi: if you have time, could you please take a look at this: https://review.openstack.org/#/c/65178/ and add initial reviewers to the group? https://bugs.launchpad.net/openstack-ci/+bug/1266603. Thanks23:38
david-lylefungi: I asked about releasing django_openstack_auth23:38
fungioh, good. just making sure i hadn't gone blind23:38
david-lylebut not here23:39
fungimy network has been terrible the last couple days, so now i'm paranoid i'm dropping from irc23:39
*** wenlock has quit IRC23:39
*** wenlock has joined #openstack-infra23:40
david-lyledjango_openstack_auth now uses the __init__.py file to specify the version number; is it better to switch to pbr.version.VersionInfo?23:40
fungijerryz: i flagged the bug so i wouldn't forget to check it once the change merges23:40
jeblairdavid-lyle: oh, yeah, if you do that, it will get filled in automatically based on the tag23:40
*** ruhe is now known as _ruhe23:41
david-lylejeblair: ok, I will make that change, seems like a better model23:41
jeblairdavid-lyle: yeah, i like it -- way less work.  :)23:41
*** prad has quit IRC23:41
david-lyle+1 less work23:42
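[editor's note: a minimal sketch of the pattern being agreed on here, assuming pbr is already wired into the package's setup.py. pbr.version.VersionInfo reads the installed package metadata, which gets its value from the release tag.]

    # django_openstack_auth/__init__.py
    import pbr.version

    # Replaces a hand-maintained string like __version__ = "1.1.2";
    # the value now comes from package metadata derived from the tag.
    __version__ = pbr.version.VersionInfo(
        'django_openstack_auth').version_string()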
*** sarob has joined #openstack-infra23:44
jeblairreset inbound23:45
* fungi braces23:46
jeblairstopped23:50
fungistatsd traffic burst alert!23:54
jeblairyay!23:54
jeblairreloading queues23:54
fungiand there goes a ton more23:54
openstackgerritEli Klein proposed a change to openstack-infra/jenkins-job-builder: Added rbenv-env wrapper  https://review.openstack.org/6535223:55
*** slong_ has joined #openstack-infra23:57
*** senk has quit IRC23:57
jeblairfungi: precise7 went rogue23:57
jeblairfungi: i disconnected it23:57
*** slong has quit IRC23:57
fungiokay, i'll have a look23:57
jeblairfungi: i'm going to redo the restart23:58
jeblairsince precise7 took out the top of the queue23:58
*** senk has joined #openstack-infra23:58
*** ryanpetrello has quit IRC23:59