Thursday, 2019-05-16

ianwso how are we resolving the circular reference with the skopeo job?00:03
ianwi'm happy to babysit a force for 659296 if that's what we decided00:03
ianwi feel like that clears the plate for 65922100:05
clarkbI think it may not be required and I've misunderstood some subtlety there00:05
openstackgerritMerged openstack/openstack-zuul-jobs master: Remove infra trusty puppet jobs  https://review.opendev.org/65939500:06
clarkbthat said the zuul project would probably appreciate fixing skopeo sooner rather than later so a force merge may not be the worst idea00:06
clarkbianw I'm happy for that to be force merged to unblock them while we sort puppet00:06
ianwahh, right, so 659221 is green just with the puppet updates00:06
clarkbya00:07
ianwso i can babysit 659221 if we like00:09
ianwit seems the only maybe question is if it goes mad during bridge.o.o pulses and re-installs puppet00:09
clarkbya that would be the minor concern00:10
clarkbif you are able to watch it go in I can keep an eye on irc and help if it goes sideways for about another 3-4 hours00:10
fungibridge.o.o shouldn't have puppet on it to begin with00:11
ianwfungi: oh i mean on the puppet hosts, if somehow we missed something00:11
fungiahh00:11
ianwduring their runs from run_all00:11
clarkbianw the other minor concern I had with it is it may try to do things on trusty nodes that will fail00:11
clarkband not sure if that will prevent other updates on the trusty nodes00:12
clarkbbut actually that should only be a concern for ask00:12
ianwwell, i think there's only one way to find out ... i'll watch it00:12
clarkbbecause ask is the only puppet 3 + trusty and the others will noop00:12
ianwbetter to debug now than later :)00:12
ianwalso i have real internet to monitor ... FINALLY the right person to do the job came out.  replaced the amplifier two pits up from mine, removed the corroded and illegally split junctions in my pit, and installed a proper 8-way splitter00:15
ianwhttps://bit.ly/2Q4MkDz ... it was a miracle any signal was getting through at all00:19
*** yamamoto has quit IRC00:19
fungithe fault is clearly yours for expecting timely, competent work from a utility company00:19
fungilooks like a portal to a darker, wetter dimension00:21
clarkbjust put more electrical tape on it00:21
fungilooks like someone already tried that, but not quite hard enough00:23
ianwit just wastes so much of everyone's time.  to get a lead-in cable installed and this fixed has taken *literally* 9 technicians in various ways appearing.  all these guys (they have all been guys) have driven all over town, appeared, variously made things worse00:24
ianwthis must play out thousands of times a day ... just time and effort and lifespan evaporating into the bureaucracy00:24
fungiin my country we like to call that "job security"00:25
fungiif they're all that incompetent, clearly none of them can be fired00:26
clarkbmy tech was actually really good00:27
clarkbI got him to upgrade the fiber terminal so I could update to gig speeds if I want to pay for them and he terminated cat5 instead of coax with moca00:27
fungiokay, so just to clarify... we *don't* think we'll need to bypass zuul for merging 659296?00:29
clarkbcorrect00:29
ianwyep, if we get the puppet stuff in, a recheck should work00:29
fungiun-staging my verify +2 and taking myself back out of project bootstrappers then00:29
ianwand also maybe the debootstrap ppa install bits for nodepool-builder for zigo -- which is where I started with all this yesterday too :)00:30
fungiyeesh, this rabbit hole goes deep indeed00:30
clarkbheh we had two different threads lead to the same spot00:31
*** gyee has quit IRC00:36
*** xek has quit IRC00:37
*** dpawlik has joined #openstack-infra00:38
*** rascasoft has quit IRC00:39
*** rascasoft has joined #openstack-infra00:41
*** dpawlik has quit IRC00:42
*** armax has quit IRC00:45
*** diablo_rojo has quit IRC00:45
*** ricolin has joined #openstack-infra00:50
*** factor has joined #openstack-infra00:50
ianwhrm, it's -2'd itself00:53
ianwPulling registry (registry:2)...", "Get https://registry-1.docker.io/v2/library/registry/manifests/2: unauthorized: incorrect username or password"00:54
*** lseki has quit IRC00:54
*** dpawlik has joined #openstack-infra00:54
ianwDue to high demand, we are working to enable repositories on release-archives. You can follow along here: https://tickets.puppetlabs.com/browse/CPR-68500:57
clarkbianw that error is due to running on ipv6 cloud I think you can recheck00:58
*** cloudnull has joined #openstack-infra00:58
*** dpawlik has quit IRC00:59
*** d34dh0r53 has joined #openstack-infra01:01
*** armax has joined #openstack-infra01:02
fungiand once we can get the artifacts fix merged, we need a scheduler restart before we give the release team a heads-up on resuming approving releases?01:05
clarkbyup01:09
*** oyrogerg has joined #openstack-infra01:11
fungibarring new surprises, we ought to have that cleared up by their weekly meeting at 19:0001:11
*** bgmccollum has joined #openstack-infra01:16
ianwi thought we had a full rspec test of ask.o.o ... but now on staging i'm seeing issues with jetty package name01:28
*** bgmccollum has quit IRC01:29
ianwahh, we do ... for ask ... this is coming in via solr ... sigh :/01:31
ianwof course the upstream puppet-solr we use is dead ...01:31
*** bgmccollum has joined #openstack-infra01:33
*** nicolasbock has quit IRC01:36
*** armax has quit IRC01:37
fungifor some reason i thought we had our own puppet-solr mrmartin had created01:38
*** rh-jelabarre has quit IRC01:39
ianwit may literally be s/jetty/jetty8 but getting that one char into what it's doing seems annoying01:41
fungimore or less annoying than creating an equivs package to alias jetty to jetty8?01:49
ianwfungi: you read my mind :) trying that, but now it seems /usr/share/jetty is not /usr/share/jetty8 and that's important to puppet somehow :/01:52
fungi:(01:53
*** ijw has quit IRC01:55
ianwhrm, a symlink might do it ... now the service name is different01:57
ianwthis is a hot mess01:57
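For reference, the equivs approach fungi suggested amounts to building an empty "jetty" package that simply depends on jetty8; a rough sketch (package name and version strings here are assumptions, and as the exchange above shows it still leaves the /usr/share/jetty path and service name mismatches unsolved):

    sudo apt-get install equivs
    equivs-control jetty.ctl
    # edit jetty.ctl to contain roughly:
    #   Package: jetty
    #   Version: 8.0
    #   Depends: jetty8
    #   Description: transitional dummy package aliasing jetty to jetty8
    equivs-build jetty.ctl
    sudo dpkg -i jetty_8.0_all.deb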
*** apetrich has quit IRC01:57
openstackgerritMerged opendev/system-config master: Handle moved puppet repos  https://review.opendev.org/65922102:02
*** whoami-rajat has joined #openstack-infra02:06
dangtrinhntfungi, Yes, you can just start doing that asap :) Many thanks :)02:08
openstackgerritIan Wienand proposed opendev/system-config master: Remove ask-staging from disabled list  https://review.opendev.org/65942102:11
*** dklyle has joined #openstack-infra02:14
*** rlandy|bbl is now known as rlandy02:21
*** dhellmann has quit IRC02:23
*** dhellmann has joined #openstack-infra02:23
*** kjackal has joined #openstack-infra02:23
openstackgerritMerged opendev/storyboard master: update StoriesController so  users subsribe to stories they create  https://review.opendev.org/64354602:24
*** kjackal has quit IRC02:31
*** armax has joined #openstack-infra02:45
*** yamamoto has joined #openstack-infra02:48
clarkbianw the puppet fix still looking happy?02:50
clarkbshould we recheck the skopeo change too? (not sure how late it is for you)02:51
*** oyrogerg has quit IRC02:53
*** oyrogerg has joined #openstack-infra02:54
*** oyrogerg has quit IRC02:55
*** dpawlik has joined #openstack-infra02:55
*** dpawlik has quit IRC03:00
*** armax has quit IRC03:05
*** dpawlik has joined #openstack-infra03:11
*** hongbin has joined #openstack-infra03:14
*** dpawlik has quit IRC03:15
*** bhavikdbavishi has joined #openstack-infra03:34
ianwclarkb: yeah, looking good03:36
*** ykarel|away has joined #openstack-infra03:40
ianwno weird errors, i was watching the log and it should have applied that tick03:40
*** ykarel|away is now known as ykarel03:49
*** yamamoto has quit IRC03:50
*** yamamoto has joined #openstack-infra03:51
*** udesale has joined #openstack-infra03:53
*** hongbin has quit IRC03:55
openstackgerritAndreas Jaeger proposed openstack/openstack-zuul-jobs master: Remove unused jobs  https://review.opendev.org/65934004:07
fungisomeone's an early riser04:08
AJaegerconfig-core, please review https://review.opendev.org/#/c/659365 and https://review.opendev.org/65929504:08
AJaegersomeone's a late one ;)04:08
AJaegermorning, fungi04:08
fungi;)04:08
fungimorning to you too04:09
AJaegerfungi, still fighting puppet?04:15
AJaegermnaser, fungi, thanks for late reviews!04:16
mnasernot late for me :D04:16
* mnaser is in beijing04:16
fungiwe *think* we're past that now04:16
AJaegermnaser: I see - have a great day04:17
AJaegerfungi: good ;)04:17
openstackgerritMerged openstack/openstack-zuul-jobs master: Remove legacy-python-barbicanclient-*  https://review.opendev.org/65929504:18
fungicurrently trying to get 659296 merged so we can get new zuul changes to merge, including the 659329 artifacts fix which is currently blocking the release team04:18
openstackgerritMerged opendev/system-config master: Pin skopeo to unbreak skopeo+bubblewrap  https://review.opendev.org/65929604:19
tobiashfungi: there is another artifacts fix which hit us yesterday: https://review.opendev.org/65926204:19
fungiand there's one more down04:20
tobiashyou probably want both merged04:20
fungiso in theory once puppet rolls 659296 out to the executors, zuul image jobs should work again04:20
tobiashI just picked both of them into our deployment as an emergency measure as we got hit by the same artifacts problem04:20
fungitobiash: good to know, thanks!04:20
ianwtobiash: yeah, they say they're building repos for them now04:21
ianwhttps://tickets.puppetlabs.com/browse/CPR-68504:22
fungiianw: not the puppet packages, the artifacts bug which hung our release and release-post pipelines earlier04:22
tobiashas the skopeo fix is merged we can recheck zuul changes (artifacts fixes first) in ~30min?04:23
fungizuul trying to treat jobs which pass artifacts as triggered from changes even when they aren't04:23
openstackgerritMerged openstack/project-config master: Move sqlalchemy-migrate jobs in-tree  https://review.opendev.org/65936504:23
fungitobiash: roughly04:24
tobiashk04:24
*** janki has joined #openstack-infra04:43
AJaegerconfig-core, https://review.opendev.org/#/c/659340 is the last of my spring cleanup changes, reviews welcome (I just rechecked)04:53
AJaegerand couple of new repos: https://review.opendev.org/657764 https://review.opendev.org/656657 https://review.opendev.org/657862 , please review04:53
*** cloudkiller has joined #openstack-infra04:55
*** cloudkiller has quit IRC05:08
openstackgerritIan Wienand proposed opendev/puppet-askbot master: Move to apache 2.4 access always  https://review.opendev.org/65943605:10
*** cloudkiller has joined #openstack-infra05:11
*** _d34dh0r53_ has joined #openstack-infra05:11
*** ykarel is now known as ykarel|afk05:12
*** dpawlik has joined #openstack-infra05:12
*** ykarel|afk has quit IRC05:16
*** dpawlik has quit IRC05:17
*** dpawlik has joined #openstack-infra05:28
*** ykarel|afk has joined #openstack-infra05:31
*** ykarel|afk is now known as ykarel05:31
openstackgerritTristan Cacqueray proposed zuul/zuul master: [WIP] zuul_stream: add debug to investigate tox-remote failures  https://review.opendev.org/65791405:40
*** yboaron_ has joined #openstack-infra05:40
*** hamzy has quit IRC05:44
*** jbadiapa has joined #openstack-infra05:49
*** e0ne has joined #openstack-infra05:53
*** kjackal has joined #openstack-infra05:59
openstackgerritTristan Cacqueray proposed zuul/zuul master: [WIP] zuul_stream: add debug to investigate tox-remote failures  https://review.opendev.org/65791406:01
*** quiquell has quit IRC06:04
*** quiquell has joined #openstack-infra06:04
*** quiquell has quit IRC06:05
*** udesale has quit IRC06:05
*** udesale has joined #openstack-infra06:06
*** quiquell has joined #openstack-infra06:06
*** sarob has joined #openstack-infra06:09
*** kjackal has quit IRC06:16
*** sarob has quit IRC06:19
*** mujahidali has joined #openstack-infra06:19
*** hamzy has joined #openstack-infra06:21
*** oyrogerg has joined #openstack-infra06:29
*** ccamacho has joined #openstack-infra06:31
*** e0ne has quit IRC06:36
*** admcleod has quit IRC06:39
*** pgaxatte has joined #openstack-infra06:41
ianwAJaeger: i'll review it, but it's an autumn cleanup for some ;)06:46
openstackgerritTristan Cacqueray proposed zuul/zuul master: quick-start: pin docker image to 2.16.8  https://review.opendev.org/65945406:46
openstackgerritmelissaml proposed openstack/diskimage-builder master: Replace git.openstack.org URLs with opendev.org URLs  https://review.opendev.org/65856406:46
AJaegerianw: sorry ;)06:46
openstackgerritIan Wienand proposed opendev/puppet-askbot master: Install redis 2.10.3  https://review.opendev.org/65945506:50
*** udesale has quit IRC06:51
*** udesale has joined #openstack-infra06:52
openstackgerritMerged openstack/project-config master: Set up cyborg-tempest-plugin repository  https://review.opendev.org/65776406:53
*** xek has joined #openstack-infra06:54
*** apetrich has joined #openstack-infra06:56
*** rcernin has quit IRC07:01
*** udesale has quit IRC07:02
*** openstackgerrit has quit IRC07:03
*** udesale has joined #openstack-infra07:03
*** tesseract has joined #openstack-infra07:05
*** jaosorior has quit IRC07:06
*** ginopc has joined #openstack-infra07:07
*** admcleod has joined #openstack-infra07:09
*** kjackal has joined #openstack-infra07:10
*** pcaruana has joined #openstack-infra07:13
*** rpittau|afk is now known as rpittau07:17
*** kopecmartin|off is now known as kopecmartin07:18
*** xek has quit IRC07:21
*** openstackgerrit has joined #openstack-infra07:21
openstackgerritIan Wienand proposed opendev/puppet-askbot master: Install redis 2.10.3  https://review.opendev.org/65945507:21
*** ralonsoh has joined #openstack-infra07:23
openstackgerritTristan Cacqueray proposed zuul/zuul master: [WIP] zuul_stream: add debug to investigate tox-remote failures  https://review.opendev.org/65791407:27
*** oyrogerg has quit IRC07:27
openstackgerritMerged openstack/project-config master: Set up cyborg-tempest-plugin repository  https://review.opendev.org/65776707:31
*** jpena|off is now known as jpena07:32
*** yamamoto has quit IRC07:33
openstackgerritAndreas Jaeger proposed zuul/zuul master: bubblewrap: bind mount /etc/subuid  https://review.opendev.org/65921807:34
openstackgerritAndreas Jaeger proposed zuul/zuul master: Make image build jobs voting again  https://review.opendev.org/65924207:35
*** iurygregory has joined #openstack-infra07:35
*** udesale has quit IRC07:36
*** udesale has joined #openstack-infra07:37
openstackgerritIan Wienand proposed opendev/system-config master: ask.o.o : workaround old puppet-solr package  https://review.opendev.org/65947307:38
*** yamamoto has joined #openstack-infra07:39
openstackgerritIan Wienand proposed opendev/system-config master: Revert "Pin skopeo to unbreak skopeo+bubblewrap"  https://review.opendev.org/65947407:39
*** slaweq has joined #openstack-infra07:40
ianwclarkb: ^ yeah, so when around ... i looked in the /var/cache archives for a deb of 14 but couldn't find one.  i dunno if it exists any more at all :/07:40
*** amoralej|off is now known as amoralej07:41
*** bhavikdbavishi has quit IRC07:43
*** ykarel is now known as ykarel|lunch07:46
*** tosky has joined #openstack-infra07:48
*** tesseract has quit IRC07:52
*** tesseract has joined #openstack-infra07:52
*** e0ne has joined #openstack-infra08:08
*** lucasagomes has joined #openstack-infra08:12
*** pkopec has joined #openstack-infra08:14
openstackgerritTobias Rydberg proposed openstack/project-config master: Renaming of Public Cloud WG to Public Cloud SIG  https://review.opendev.org/65948908:16
*** tkajinam has quit IRC08:32
*** ykarel|lunch is now known as ykarel08:33
*** ginopc has quit IRC08:34
*** derekh has joined #openstack-infra08:38
*** dtantsur|afk is now known as dtantsur08:50
*** ginopc has joined #openstack-infra09:06
*** panda is now known as panda|rover09:17
*** ginopc has quit IRC09:36
*** ginopc has joined #openstack-infra09:40
*** ykarel is now known as ykarel|meeting10:07
*** yamamoto has quit IRC10:09
*** tosky has quit IRC10:10
*** tosky has joined #openstack-infra10:11
openstackgerritMark Meyer proposed zuul/zuul master: Create a basic Bitbucket event source  https://review.opendev.org/65883510:11
*** jaosorior has joined #openstack-infra10:12
*** yamamoto has joined #openstack-infra10:20
*** yamamoto has quit IRC10:25
*** jlibosva has joined #openstack-infra10:26
*** pcaruana has quit IRC10:27
openstackgerritTristan Cacqueray proposed zuul/zuul master: [WIP] runner: enable reuse of a job-dir  https://review.opendev.org/65795510:27
*** xek has joined #openstack-infra10:27
*** sshnaidm|afk is now known as sshnaidm10:38
*** ricolin has quit IRC10:39
*** kjackal has quit IRC10:42
*** ykarel|meeting is now known as ykarel10:53
*** nicolasbock has joined #openstack-infra10:57
*** bhavikdbavishi has joined #openstack-infra11:00
*** jpena is now known as jpena|lunch11:03
*** amoralej is now known as amoralej|lunch11:04
*** panda|rover is now known as panda|rover|eat11:04
*** kjackal has joined #openstack-infra11:11
*** ykarel is now known as ykarel|afk11:11
*** mujahidali has quit IRC11:13
*** udesale has quit IRC11:15
*** ginopc has quit IRC11:28
*** pcaruana has joined #openstack-infra11:37
fungiokay, so to recap discussion from #zuul, the skopeo pin failed to downgrade our executors because the ppa built yet another package in the meantime and only keeps at most 2 on hand, so the version we were hoping to roll back to no longer exists there11:40
fungiso we could try to manually build that version and distribute it to the executors11:41
fungior point the ppa at a forked copy of the skopeo repo with the git state we want11:41
fungihowever, if we're going to that trouble, tristanC has pushed a pr for this problem which looks likely to get approved by the skopeo maintainers in short order, so we could i suppose build with that11:42
fungiin unrelated news, the gerrit 3.0.0 release yesterday *also* broke the zuul quickstart jobs because of new acls needed for project creation11:42
*** yamamoto has joined #openstack-infra11:44
*** yamamoto has quit IRC11:45
*** yamamoto has joined #openstack-infra11:45
*** dtantsur is now known as dtantsur|brb11:46
*** ykarel|afk is now known as ykarel11:59
*** yamamoto has quit IRC12:03
*** gfidente has joined #openstack-infra12:04
*** panda|rover|eat is now known as panda12:06
*** yamamoto has joined #openstack-infra12:10
*** panda is now known as panda|rover12:11
*** yamamoto has quit IRC12:19
*** rh-jelabarre has joined #openstack-infra12:21
*** gfidente has quit IRC12:23
*** gfidente has joined #openstack-infra12:24
*** iurygregory has quit IRC12:27
*** janki has quit IRC12:28
*** jpena|lunch is now known as jpena12:29
*** amoralej|lunch is now known as amoralej12:32
*** rlandy has joined #openstack-infra12:34
openstackgerritHervé Beraud proposed openstack/pbr master: Update Sphinx requirement  https://review.opendev.org/65954612:36
*** rh-jelabarre has quit IRC12:39
AJaegerfungi, and https://review.opendev.org/659218 might finally merge to allow moving forward with current state.12:39
fungiahh, cool, i entirely missed that solution had been proposed12:41
fungiwas currently busy setting up a xenial build chroot on my workstation to try and rebuild from the skopeo source packages in the ppa12:41
openstackgerritMerged zuul/zuul master: bubblewrap: bind mount /etc/subuid  https://review.opendev.org/65921812:43
fungiinfra-root: looks like if we want to take advantage of ^ we're going to need a restart of all the executors once it gets installed on them12:47
*** aaronsheffield has joined #openstack-infra12:50
fungii'm watching for it to roll out and will then get the executor daemons restarted once i've confirmed that commit is installed on all of them12:54
*** snapiri has quit IRC12:56
*** ccamacho has quit IRC12:56
*** ccamacho has joined #openstack-infra12:57
*** ginopc has joined #openstack-infra12:57
tristanCfungi: bind mounting /etc/subuid will fix the current error but skopeo will still fail later when trying to create the user namespace13:01
*** lseki has joined #openstack-infra13:01
fungioh13:01
fungiso we do still need a downgraded or patched skopeo installed on our executors anyway?13:01
*** udesale has joined #openstack-infra13:02
tristanCfungi: iiuc yes13:02
fungiin that case i'll go back to getting a build environment put together13:02
tristanCfungi: you can check the version by running grep "subuid mapping" $(which skopeo); that shouldn't match an unaffected version13:03
fungithanks13:03
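A sketch of running tristanC's check across the whole executor fleet from bridge.o.o (the inventory group name is an assumption):

    ansible zuul-executor -b -m shell \
      -a 'grep -q "subuid mapping" "$(which skopeo)" && echo affected || echo ok'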
*** jcoufal has joined #openstack-infra13:07
*** rfarr has joined #openstack-infra13:11
clarkbmaybe we should write our own tool like everyone else, but this one can actually work >_>13:13
*** mriedem has joined #openstack-infra13:15
*** yamamoto has joined #openstack-infra13:15
*** dtantsur|brb is now known as dtantsur13:18
mordredclarkb: :)13:18
mordredwait - so the old versions of skopeo *aren't* in the ppa? they're still listed on the page13:19
fungimordred: http://ppa.launchpad.net/projectatomic/ppa/ubuntu/pool/main/s/skopeo/13:21
mordredugh. yeah13:21
*** Goneri has joined #openstack-infra13:21
mordredoh - awesome - tristanC pushed a PR to skopeo ... thanks tristanC !13:23
* mordred hands tristanC a cookie13:23
openstackgerritMark Meyer proposed zuul/zuul master: Create a basic Bitbucket event source  https://review.opendev.org/65883513:24
clarkbwhere ia the PR?13:24
fungiin fact, only the latest package is technically in that repo13:24
fungiwget -qO- http://ppa.launchpad.net/projectatomic/ppa/ubuntu/dists/xenial/main/binary-amd64/Packages.gz|zgrep -A6 '^Package: skopeo$'13:24
fungijust one package entry, Version: 0.1.36-1~dev~ubuntu16.04.2~ppa1913:25
*** ianychoi has joined #openstack-infra13:26
fungiclarkb: https://github.com/containers/skopeo/pull/65313:27
*** xek_ has joined #openstack-infra13:28
clarkbthanks13:28
clarkbinteresting how parsing an image name results in a chown13:28
clarkbgood to see they are willing to fix the issue for us13:29
*** xek has quit IRC13:30
openstackgerritHervé Beraud proposed openstack/pbr master: Update Sphinx requirement  https://review.opendev.org/65954613:33
*** sreejithp has joined #openstack-infra13:34
fungiokay, i've now got a working xenial chroot (via mmdebstrap from debian/sid) with the skopeo build-deps installed into it, working on confirming i can build the binary packages from the latest source package in that ppa before moving on to patching it with tristanC's pr13:37
*** ginopc has quit IRC13:40
*** rh-jelabarre has joined #openstack-infra13:54
*** pcaruana has quit IRC13:56
fungithat much seems to have worked. now trying a patched build13:56
*** ginopc has joined #openstack-infra13:57
fungiinfra-root: assuming this build completes, should i put ze01 in the emergency file and try installing the resulting package there?13:58
fungiit's effectively the current package from the ppa but with https://patch-diff.githubusercontent.com/raw/containers/skopeo/pull/653.patch applied13:58
fungi(emergency file would be to prevent us from installing yet another new build from the ppa which may not yet contain that commit)13:59
fungithe patched package seems to have built successfully14:00
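Roughly what that build amounts to; the exact mmdebstrap and devscripts invocations below are assumptions, not a transcript of fungi's commands:

    # create a xenial chroot (mmdebstrap itself comes from debian/sid)
    sudo mmdebstrap --components=main,universe xenial ./xenial-chroot \
        http://archive.ubuntu.com/ubuntu
    # inside the chroot, with deb and deb-src lines for the projectatomic PPA enabled:
    apt-get build-dep -y skopeo
    apt-get source skopeo
    cd skopeo-0.1.36*/
    curl -L https://patch-diff.githubusercontent.com/raw/containers/skopeo/pull/653.patch | patch -p1
    dch --local .1 'Apply containers/skopeo PR 653'
    dpkg-buildpackage -us -uc -b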
mordredfungi: wfm14:00
clarkbfungi: fine with me14:01
fungii now have a skopeo_0.1.36-1~dev~ubuntu16.04.2~ppa19.1_amd64.deb in ~fungi on ze0114:01
fungi(note the 19.1 instead of just 19)14:01
fungithough the broken ensure for that package resource i think means there's no need to actually add the executors to the disable list since puppet is repeatedly failing to find the package version we're asserting there anyway14:02
mriedemhttp://status.openstack.org/reviews/#nova was last refreshed last week14:03
mriedemis that page broken?14:03
fungiso going to upgrade ze01's skopeo package from the 0.1.36-1~dev~ubuntu16.04.2~ppa17 currently present to the 0.1.36-1~dev~ubuntu16.04.2~ppa19.1 i just built14:03
*** rfarr has quit IRC14:04
*** iurygregory has joined #openstack-infra14:04
*** rfarr has joined #openstack-infra14:04
fungiUnpacking skopeo (0.1.36-1~dev~ubuntu16.04.2~ppa19.1) over (0.1.36-1~dev~ubuntu16.04.2~ppa17) ...14:07
fungi#status log temporarily upgraded skopeo on ze01 to manually-built 0.1.36-1~dev~ubuntu16.04.2~ppa19.1 package with https://github.com/containers/skopeo/pull/653 applied14:08
openstackstatusfungi: finished logging14:08
*** ginopc has quit IRC14:08
fungianything else we should be doing other than just making sure ze01 doesn't fall over before we upgrade skopeo to the patched package on the remaining executors?14:13
clarkbI think that is probably it14:13
clarkbin particular that bwrap continues to run jobs successfully14:13
clarkbsince it and atomic container stuff play in similar spaces14:14
*** HenryG has quit IRC14:14
*** ykarel is now known as ykarel|afk14:14
*** ykarel|afk is now known as ykarel14:15
fungii'll take a quick look on status.o.o and see why reviewstats has ceased updating14:17
*** ykarel is now known as ykarel|away14:17
*** tdasilva has joined #openstack-infra14:18
*** HenryG has joined #openstack-infra14:20
*** cloudkiller is now known as blah2314:27
*** blah23 is now known as cloudnull|afk14:27
*** cloudnull has quit IRC14:31
*** cloudnull|afk is now known as cloudnull14:31
fungibit of a head scratcher...14:32
fungihttp://paste.openstack.org/show/751478/ is all i find (repeated over and over) in /var/log/reviewday.log14:32
*** yamamoto has quit IRC14:33
fungithe cheetah package doesn't seem to be installed at all though14:33
fungiwhich would of course explain that exception, but...14:33
fungii can't imagine what would have suddenly uninstalled it a week ago14:33
*** yamamoto has joined #openstack-infra14:33
*** yamamoto has quit IRC14:33
fungithe reviewday and puppet-reviewday repos haven't seen new commits in a while14:34
*** yamamoto has joined #openstack-infra14:34
funginot since the opendev migration14:34
fungistatus.o.o coincidentally is still on trusty and has an uptime of 490 days14:35
fungiso no rebuild a week ago to explain it14:36
*** d34dh0r53 has quit IRC14:36
*** d34dh0r53 has joined #openstack-infra14:37
*** armax has joined #openstack-infra14:37
fungi#status log deleted groups.openstack.org and groups-dev.openstack.org servers, along with their corresponding trove instances14:37
openstackstatusfungi: finished logging14:37
*** pcaruana has joined #openstack-infra14:37
*** Lucas_Gray has joined #openstack-infra14:37
Shrewsfungi: maybe that error has always been there?14:38
fungiperhaps...14:38
fungiunfortunately we only have one week of retention on that log file, so hard to know14:39
*** yamamoto has quit IRC14:39
fungiand i can `flock -n /var/lib/reviewday/update.lock bash` without any trouble, so doesn't seem to be a stale lock causing this14:41
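That works as a stale-lock test because flock -n exits non-zero when it cannot take the lock; a minimal non-interactive version of the same check:

    flock -n /var/lib/reviewday/update.lock true \
        && echo 'lock is free' || echo 'another update still holds the lock'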
*** d34dh0r53 is now known as Guest2890414:41
openstackgerritPaul Belanger proposed zuul/zuul master: Support Ansible 2.8  https://review.opendev.org/63193314:42
openstackgerritMerged zuul/zuul master: Fix missing check if logger is None  https://review.opendev.org/65926214:44
fungihttps://opendev.org/opendev/puppet-reviewday/src/branch/master/manifests/site.pp#L83-L89 should fire each time a change merges to the reviewday repo14:45
fungiand pip freeze says everything in https://opendev.org/openstack/reviewday/src/branch/master/requirements.txt is installed *except* cheetah14:46
fungii'm going to attempt to run that exec manually and see what it does14:46
fungiand now cheetah is installed. no errors14:48
fungiand manually running what the cronjob would do seems to be generating content now14:48
AJaegerclarkb, could you review https://review.opendev.org/#/c/659340/ https://review.opendev.org/657862 https://review.opendev.org/656657 and https://review.opendev.org/657657 , please?14:49
AJaegeror any other config-core ^14:49
fungii wonder if this is more mysterious fallout from the great dockering accident, though i can't immediately imagine how14:49
zigoWhere has gone the cgit link in gerrit? :/14:50
openstackgerritPaul Belanger proposed zuul/zuul master: executor: use node python path  https://review.opendev.org/63733914:51
openstackgerritPaul Belanger proposed zuul/zuul master: Update quickstart nodepool node to python3  https://review.opendev.org/65848614:51
fungizigo: we still need to revisit fixing that. we had to temporarily set it to a local gitweb link on the gerrit server (until we can start replicating all in-flight review refs to gitea), but there's something not quite right with the local gitweb i think?14:52
zigofungi: It's just that the link is just gone, nowhere to click so I can download the patch as a file ...14:52
zigoOr am I missing a new place where it is supposed to be?14:53
fungithe good news is that gitea merged corvus's fix for the ref replication problem so switching to that's probably not far out now (next time they tag a release, unless we want to use unreleased gitea in the interim)14:53
mordredfungi: yeah - I think we *could* use the stuff from corvus' branch since they've merged it upstream (so it shouldn't be dangerous to rely on an unmerged feature)14:54
mordredbut we could also just wait14:54
fungizigo: yes, the gitweb link is currently missing. i can't remember if we merged a patch to fix the configuration problem which caused it to not be displayed, and if so we may simply be pending a gerrit restart for that to take effect14:54
AJaegerzigo: go to the upper left, there's a "Download" drop down14:54
*** UdayTKumar has joined #openstack-infra14:54
AJaegerzigo, that offers "Patch-File"14:55
mordredfungi: we'll still need to actually replicate the refs - which will likely take a *while* - so we should probably set up something that's not gerrit to do a push of all the refs/* for all of the repos to all of the giteas before we turn on gerrit replication of everything again14:55
*** smarcet has joined #openstack-infra14:56
clarkbfungi: the gitweb link is missing because our config for that was incomplete14:56
clarkbfungi: mordred tested a fix for this at the ptg on review-dev and I think got a change up for that which if it hasn't merged yet needs to be merged then gerrit restarted again14:56
*** jcoufal_ has joined #openstack-infra14:56
openstackgerritMerged zuul/zuul master: Handle artifacts on non change types.  https://review.opendev.org/65932914:57
mordredoh - right. lemme find link14:57
fungithat matches my recollection, other than i couldn't recall if the fix had been written or merely discussed14:57
zigoAJaeger: I can base64 decode it, but it's not super convenient ! :)14:57
zigoHum... that's enough with the zip ... :P14:58
zigoThanks.14:58
clarkboh cool the fix for the artifacts thing just merged \o/14:58
mordredhttps://review.opendev.org/#/c/656809/14:58
mordredclarkb: I think it's all been landed/applied - we just need to restart gerrit14:58
fungiawesome. so many restarts14:58
mordredRESTART ALL THE THINGS14:59
fungiclarkb: tobiash mentioned there are two artifacts fixes we should make sure are applied before restarting the scheduler14:59
tobiashfungi: both have just landed14:59
smcginnisclarkb: With that artifacts fix, are we all clear to do any javascript releases if they come up?14:59
fungiperfect!14:59
*** jcoufal has quit IRC14:59
fungismcginnis: needs a scheduler restart once the code is rolled out to the server15:00
clarkbsmcginnis: we need to restart the zuul scheduler first15:00
smcginnisK, will watch for a restart notice before touching anything. Though I don't believe there are any pending.15:00
*** bhavikdbavishi has quit IRC15:00
*** smarcet has quit IRC15:08
clarkbsmcginnis: I'll try to remember to ping the release channel when we get to restarting the zuul scheduler15:12
smcginnisThanks clarkb15:16
*** eernst has joined #openstack-infra15:16
clarkbfungi: mordred I need to find breakfast and boot my day, but afterwards I can be around to help facilitate zuul and gerrit restarts15:17
clarkbfungi: do we want to install skopeo on all the executors first? that seems like a low impact improvement15:17
*** eernst has quit IRC15:18
openstackgerritMerged zuul/zuul master: Fix occasional gate resets with github  https://review.opendev.org/65038715:19
mordredclarkb: yeah. I think installing the edited skopeo everywhere seems like a win15:19
*** ccamacho has quit IRC15:26
*** sarob has joined #openstack-infra15:27
openstackgerritPaul Belanger proposed zuul/zuul master: Support Ansible 2.8  https://review.opendev.org/63193315:28
*** eernst has joined #openstack-infra15:28
fungiclarkb: i can go ahead and roll that patched skopeo out to the remaining executors since ze01 is still up and running jobs15:30
clarkbfungi: I think that would be a good next step in the fixing of the list of problems15:30
clarkbthen if we can get successful zuul docker image build jobs we can flip those back to voting (as well as publish images to dockerhub on merge again)15:30
*** smarcet has joined #openstack-infra15:32
mordred++15:32
*** roman_g has quit IRC15:36
*** _d34dh0r53_ is now known as d34dh0r5315:38
*** e0ne has quit IRC15:38
*** ricolin has joined #openstack-infra15:38
openstackgerritMerged zuul/zuul master: Centralize logging adapters  https://review.opendev.org/65864415:39
clarkbas a sanity check before we restart things zuul and nodepool (we aren't restarting nodepool) memory use continues to look good15:41
clarkbmordred: have you confirmed the needed config update is in place on review.o.o?15:43
clarkbI'm checking zuul.o.o now to see if it has updated its zuul install yet15:44
*** ykarel|away has quit IRC15:44
*** yboaron_ has quit IRC15:44
clarkbzuul==3.8.2.dev14  # git sha 27492fc is the version of zuul installed on zuul01.o.o which includes d66b504804cede76f7c57d15a20ca5c902f1b8f6, the fix for the release stuff15:45
mordredclarkb: yes. I believe the config on disk on review.o.o is correct15:45
clarkbso I think zuul shceduler is ready when we are /me looks at status page now15:45
mordredit lists type = gitweb and cgi = /usr/share/gitweb/gitweb.cgi15:45
mordredand /usr/share/gitweb/gitweb.cgi is on the filesystem15:45
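For reference, the gerrit.config stanza being described (reconstructed from the two settings quoted above):

    [gitweb]
        type = gitweb
        cgi = /usr/share/gitweb/gitweb.cgi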
clarkbthere are a few things in the post pipeline that I think we may as well let filter through, but I can do the scheduler restart in a few minutes when that pipeline is shorter15:46
clarkbalso some requirements changes are about to merge, should let those through I guess15:46
clarkbabout 20 minutes away for that15:46
fungixorg crashed out on my workstation so trying to get that back to sanity before i copy the skopeo packages around15:47
fungishould be done in a couple more minutes15:47
*** pcaruana has quit IRC15:47
Shrewsclarkb: agreed, nodepool memory usage is surprisingly good15:48
Shrews(he said, breathing a heavy sigh of relief)15:48
openstackgerritFabien Boucher proposed zuul/zuul master: Pagure driver - https://pagure.io/pagure/  https://review.opendev.org/60440415:49
*** yamamoto has joined #openstack-infra15:50
openstackgerritFabien Boucher proposed zuul/zuul master: Pagure driver - https://pagure.io/pagure/  https://review.opendev.org/60440415:50
openstackgerritMerged zuul/zuul master: web: upgrade react and react-scripts to ^2.0.0  https://review.opendev.org/63190215:53
*** iurygregory has quit IRC15:55
fungiokay, patched skopeo package copied to all remaining executors and working on installing it on them now15:56
*** pgaxatte has quit IRC15:56
fungi#status log temporarily upgraded skopeo on all zuul executors to manually-built 0.1.36-1~dev~ubuntu16.04.2~ppa19.1 package with https://github.com/containers/skopeo/pull/653 applied15:57
openstackstatusfungi: finished logging15:57
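The rollout itself is little more than a copy-and-dpkg loop; a sketch assuming the executors are named ze02 through ze12 (ze01 was already done by hand):

    deb=skopeo_0.1.36-1~dev~ubuntu16.04.2~ppa19.1_amd64.deb
    for h in ze{02..12}.openstack.org; do
        scp "$deb" "$h:"
        ssh "$h" sudo dpkg -i "$deb"
    done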
*** yamamoto has quit IRC15:58
clarkbthere are some zuul changes queued in the gate that should test if this is mostly working for us now15:58
*** rpittau is now known as rpittau|afk15:58
*** imacdonn has quit IRC15:58
clarkbhttp://zuul.openstack.org/stream/e83a7906741b44acb1eb643d9f59816e?logfile=console.log may have started late enough15:58
*** lucasagomes has quit IRC15:58
*** eernst has quit IRC15:59
*** kopecmartin is now known as kopecmartin|off16:01
*** imacdonn has joined #openstack-infra16:02
*** ykarel|away has joined #openstack-infra16:04
*** dims has quit IRC16:05
*** tesseract has quit IRC16:05
fungianybody else having trouble loading our grafana zuul dashboard? all i get is little spinnies where the graphs should appear... http://grafana.openstack.org/d/T6vSHcSik/zuul-status?orgId=116:08
clarkbworks here16:08
clarkbtry a hard refresh maybe?16:08
fungik, likely just my browser being weird16:08
clarkbgraphite is behind the le cert now and maybe your js is trying to talk to the old thing still?16:08
fungioh, could be16:09
*** smarcet has quit IRC16:09
fungioh! hah16:10
fungieff privacy badger is blocking the callouts to graphite.o.o from that page ;)16:10
fungiyep, unblocking it seems to have solved my problem16:12
funginot sure why that just started16:12
clarkbrelated to that we seem to be underutilizing our clouds resources16:12
clarkbI wonder if a cloud is having a sad16:12
mlozaWhat is the best way to do anti-affinity between racks, not only compute nodes?16:12
clarkbmloza: we probably aren't the best group to ask. While we consume a lot of openstack clouds we don't currently run any (we run the developer infrastructure for openstack)16:13
fungimloza: i couldn't begin to tell you... this is the channel where we design and run various collaboration services for the openstack community. you may be better off asking in #openstack16:13
clarkbor sending email to the openstack-discuss mailing list16:13
mlozaoh ok16:14
mlozasorry16:14
*** sarob has quit IRC16:16
AJaegerzuul-build-image succeeded on https://review.opendev.org/58054716:17
openstackgerritMerged zuul/zuul master: Cleanup executor specific requirements  https://review.opendev.org/64942816:17
AJaegerWoohoo!16:17
clarkbyay the spice is flowing again16:17
* AJaeger rechecked https://review.opendev.org/659242 - to make that job voting again.16:18
clarkbI'm waiting on one job to finish in the openstack gate before I restart the zuul scheduler16:18
*** dtantsur is now known as dtantsur|afk16:19
AJaegerclarkb: want to review https://review.opendev.org/659340 in the meantime? Or should I self-approve removal of a unused jobs from ozj?16:20
clarkbAJaeger: done16:21
AJaegerthanks16:21
clarkb(we probably can single core approve those cleanups though zuul should check if we do something wrong with them)16:21
clarkbok I am saving queues now and restarting the zuul scheduler16:22
AJaegerclarkb, ok - will do next time.16:22
*** ykarel|away has quit IRC16:22
clarkbthe stop command has been issued and I'm waiting for it to actually stop16:23
clarkbthings are on their way back now16:25
clarkbwill reload queues once zuul has loaded its config16:25
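The save/reload step clarkb mentions is typically done with the zuul-changes.py helper from the zuul repo; the path and arguments below are assumptions, not the exact commands used:

    # dump "zuul enqueue ..." commands for everything currently queued
    python3 /opt/zuul/tools/zuul-changes.py https://zuul.openstack.org > /tmp/queues.sh
    # after the scheduler is back up and has reloaded its config, replay them
    bash /tmp/queues.sh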
fungii just realized i forgot to upgrade skopeo on ze11 and ze12 but just finished fixing it16:25
fungiso if we see any similar-looking failures logged between 1557 and now, it likely ran from one of those16:26
*** sarob has joined #openstack-infra16:26
clarkbI wonder if we can get zuul to remember its config when we restart it to speed things up16:27
fungiobvious risk for that is when new code changes the way that configuration would be parsed16:28
fungiyou always want to remember the configuration except for those times when you don't16:28
*** mattw4 has joined #openstack-infra16:28
clarkbya16:28
fungicould end up being a source of subtle confusion16:28
clarkbI guess we had a recent case of that too (the security fix)16:29
fungias for under-utilizing our clouds, we're not really running much of a node request backlog, so some of that is likely just it being a "slow" week16:30
fungipost-summit rush has played out, and now everyone's overcome with post-summit exhaustion16:30
clarkbgate is reloaded, check is on its way now16:30
*** sarob has quit IRC16:32
clarkbcheck is loaded16:36
*** sshnaidm is now known as sshnaidm|afk16:37
openstackgerritMerged openstack/openstack-zuul-jobs master: Remove unused jobs  https://review.opendev.org/65934016:37
clarkb#status log Restarted zuul-scheduler to pick up a fix for zuul artifact handling that affected non change object types like tags as part of release pipelines. Now running on ee4b6b1a27d1de95a605e188ae9e36d7f1597ebb16:40
openstackstatusclarkb: finished logging16:40
fungisafe to let #openstack-release know we're open for business again?16:40
openstackgerritAndreas Jaeger proposed openstack/project-config master: Update node to version 10 for release-zuul-python  https://review.opendev.org/65962016:40
fungior should we also do the gerrit restart first and wait for replication to quiesce?16:40
clarkbfungi: I let them know about the zuul restart already16:41
clarkbI'm torn on the gerrit restart. I personally never use the gitweb links so don't find them to be a major aid16:41
clarkbbut if we are gonna do it we should probably just take the hit and get it done with16:41
fungivarious people keep bringing it up, so clearly some were relying on it even if we haven't16:42
clarkbmordred: are you still around?16:42
*** ricolin has quit IRC16:44
openstackgerritLogan V proposed openstack/project-config master: Increase nodepool image journal size  https://review.opendev.org/65921516:44
openstackgerritLogan V proposed openstack/project-config master: Cleanup nodepool DIB environment using anchors  https://review.opendev.org/65962116:45
clarkblogan-: neat!16:45
fungihttp://paste.openstack.org/show/751484/ is what the production config has currently16:45
*** rfarr has quit IRC16:46
clarkbfungi: its the cgi = line that we didn't have before that fixes it and mordred said earlier that he was happy with it16:46
fungithe cgi definitely exists at that path too16:46
*** amansi26 has joined #openstack-infra16:47
logan-clarkb :)16:47
fungilogan-: wow, that's an excellent observation16:49
clarkbThis week is a week of fixing all the weird random unexpected bugs16:50
openstackgerritClark Boylan proposed openstack/project-config master: Update node to version 10 for release-zuul-python  https://review.opendev.org/65962016:51
clarkbfungi: well shall we restart gerrit?16:51
*** smarcet has joined #openstack-infra16:52
clarkbfungi: I'll let the release team know if so16:52
fungiyeah, may as well take the hit now16:52
clarkbok I'm letting them know now16:52
fungii'm already logged in and can do the stop/start if you want to handle the status notice?16:52
*** eernst has joined #openstack-infra16:53
clarkbfungi: hrm there is a releases job in the gate16:53
clarkbhttps://review.opendev.org/#/c/652847/ do we need to wait on that to get through zuul first?16:53
fungiokay, will wait16:53
fungimay as well16:53
*** bauzas has quit IRC16:54
*** michael-beaver has joined #openstack-infra16:55
*** bauzas has joined #openstack-infra16:57
clarkbfungi: the change has merged and is now in the post queue16:57
fungiyep, that's going to apply the relevant eol tags from the change17:00
*** derekh has quit IRC17:00
clarkblogan-: http://logs.openstack.org/21/659621/1/check/project-config-nodepool/5749bd1/job-output.txt.gz#_2019-05-16_16_58_16_086627 our nodepool config is too smart for its own good17:01
logan-yep, oh well17:01
clarkblogan-: you'll need to set the anchor in one of the images so that it is valid17:01
clarkboh except there are various other env vars that are image specific17:01
clarkbso ya may need to over account for these things17:01
logan-yeah I don't think there are any images with non-specific vars17:02
*** dims has joined #openstack-infra17:03
*** sarob has joined #openstack-infra17:05
*** jcoufal_ has quit IRC17:05
*** jcoufal has joined #openstack-infra17:06
*** harlowja has joined #openstack-infra17:07
AJaegerany config-core wants to review 2 new repo additions? https://review.opendev.org/657862 and https://review.opendev.org/656657 are ready17:09
openstackgerritLogan V proposed openstack/project-config master: Increase nodepool image journal size  https://review.opendev.org/65921517:09
mordredclarkb: I am back17:09
clarkbmordred: ok we held off restarting due to a release change in flight. It is still in release-post but once done there we are gonna restart17:09
mordredkk. awesome17:09
clarkbAJaeger: I can probably look after the gerrit restart17:10
AJaegerthat release change tags 16 tripleo repos with pikeem...17:10
fungifor some reason i thought release-post had priority high, but that changeish has been waiting 14 minutes for nodes already17:10
clarkbfungi: could be that we are at capacity due to the zuul restart?17:10
fungiahh, yep that17:10
fungijust wanted to be sure it wasn't the artifact bug coming back again17:10
clarkbI don't think so17:11
clarkbbut can check the zuul debug log17:11
*** jpena is now known as jpena|off17:11
clarkbnot seeing tracebacks like before17:11
*** amoralej is now known as amoralej|off17:12
fungicool, almost certainly just the thundering herd of reenqueues17:12
*** dims has quit IRC17:12
*** panda|rover is now known as panda|rover|off17:13
*** dims has joined #openstack-infra17:13
openstackgerritMerged zuul/zuul master: web: honor allowed-labels setting in the REST API  https://review.opendev.org/65389517:14
*** ykarel|away has joined #openstack-infra17:17
*** Lucas_Gray has quit IRC17:18
amansi26fungi: I am facing this warning http://paste.openstack.org/show/751485/ while running all the patches. Can you help me with this?17:19
openstackgerritMerged zuul/zuul master: docs: fix a typo in executor zone documentation  https://review.opendev.org/65366317:20
openstackgerritMerged openstack/project-config master: Update node to version 10 for release-zuul-python  https://review.opendev.org/65962017:20
fungiamansi26: those are benign warnings you can ignore. gerrit emits replication events on its event stream and zuul isn't configured to know what they are or whether it should do anything with them17:20
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate github logs with the event id  https://review.opendev.org/65864517:20
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate gerrit event logs  https://review.opendev.org/65864617:21
openstackgerritTobias Henkel proposed zuul/zuul master: Attach event to queue item  https://review.opendev.org/65864717:21
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate some logs in the scheduler with event id  https://review.opendev.org/65864817:21
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate logs in the zuul driver with event ids  https://review.opendev.org/65864917:21
openstackgerritTobias Henkel proposed zuul/zuul master: Add event id to timer events  https://review.opendev.org/65865017:21
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate pipeline processing with event id  https://review.opendev.org/65865117:21
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate merger logs with event id  https://review.opendev.org/65865217:21
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate job freezing logs with event id  https://review.opendev.org/65888817:21
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate node request processing with event id  https://review.opendev.org/65888917:21
openstackgerritTobias Henkel proposed zuul/zuul master: WIP: Annotate builds with event id  https://review.opendev.org/65889517:21
clarkbfungi: one of two jobs have run now17:21
AJaegerclarkb: and now 12 jobs in tag queue ;(17:22
AJaeger12 changes I mean17:22
fungiyep17:22
clarkbAJaeger: ya17:22
fungithose were generated by the commit in release-post17:22
AJaegeryeah, that's the result of that job...17:22
fungiright17:22
* AJaeger was too fast in typing (or too slow to connect the dots ;)17:22
fungithankfully they're just release notes publication jobs so should go very quickly17:23
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate node request processing with event id  https://review.opendev.org/65888917:24
openstackgerritTobias Henkel proposed zuul/zuul master: WIP: Annotate builds with event id  https://review.opendev.org/65889517:24
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate node request processing with event id  https://review.opendev.org/65888917:25
openstackgerritTobias Henkel proposed zuul/zuul master: WIP: Annotate builds with event id  https://review.opendev.org/65889517:25
openstackgerritAndreas Jaeger proposed openstack/project-config master: Fix nodejs version for publish-zuul-python-branch-tarball  https://review.opendev.org/65962817:29
*** smarcet has quit IRC17:29
AJaegerclarkb, pabelanger, one more nodejs change needed ^17:29
*** xek__ has joined #openstack-infra17:30
*** rfarr has joined #openstack-infra17:32
*** xek_ has quit IRC17:32
clarkbfungi: mordred AJaeger double check me but I think we can restart gerrit now17:33
fungilooks like we're all clear, yes17:33
fungiif someone wants to status notify i'll get going on the restart17:33
clarkbI can do the status notify17:33
mordredclarkb: I think we can - I'm unable to ssh atm, so I'm mostly moral support (my linux subsystem is updating which means I can't open a shell)17:34
AJaegerclarkb: go ahead17:34
*** rfarr_ has joined #openstack-infra17:34
fungiit's on its way down now17:34
clarkbhow about #status notify Gerrit is being restarted to add gitweb links back to Gerrit. Sorry for the noise.17:34
*** dklyle has quit IRC17:34
fungiwfm17:34
clarkb#status notify Gerrit is being restarted to add gitweb links back to Gerrit. Sorry for the noise.17:34
openstackstatusclarkb: unknown command17:34
clarkb#status notice Gerrit is being restarted to add gitweb links back to Gerrit. Sorry for the noise.17:34
openstackstatusclarkb: sending notice17:34
fungijust waiting on the java process to end17:35
*** Goneri has quit IRC17:35
*** sarob has quit IRC17:35
fungioh, i think systemd is confused by the previous gerrit start timing out17:36
fungiso stop doesn't try to stop it17:36
fungii'll work around that17:36
-openstackstatus- NOTICE: Gerrit is being restarted to add gitweb links back to Gerrit. Sorry for the noise.17:36
clarkbugh17:36
fungirunning the initscript manually instead17:37
clarkbmaybe we should edit the init script to have a longer timeout17:37
fungii don't think it's the initscript timing out, is it?17:37
*** rfarr has quit IRC17:37
fungiokay, cleanly stopped, starting again17:37
clarkbfungi: the init script has a 5 minute timeout waiting for gerrit to stop17:37
fungioh, well this wasn't a stop timeout17:37
fungithe last time we restarted gerrit, something (systemd?) gave up waiting for the start to return17:38
*** electrofelix has quit IRC17:38
fungiso systemd has been thinking the gerrit service is stopped this whole time17:38
openstackstatusclarkb: finished sending notice17:38
fungieven though it's been running17:38
clarkbah17:38
fungiso asking systemd to stop gerrit was a no-op from its perspective17:39
*** sean-k-mooney has quit IRC17:39
fungithere we go... happened again: Job for gerrit.service failed because the control process exited with error code. See "systemctl status gerrit.service" and "journalctl -xe" for details.17:39
clarkbit might have a timeout for startup too17:39
*** pcaruana has joined #openstack-infra17:39
clarkbor maybe it was always a startup timeout17:39
fungiyeah, i expect that's it. could be the initscript is giving up early17:40
mordredyeah17:40
*** cfriesen has joined #openstack-infra17:40
clarkbGERRIT_STARTUP_TIMEOUT=`get_config --get container.startupTimeout`17:40
clarkbso it's something we can set in the gerrit config for next time and it defaults to 90 seconds17:41
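i.e. something along these lines in gerrit.config would give the init script more time before it declares the start failed (the 300 here is just an illustrative value):

    [container]
        startupTimeout = 300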
*** sean-k-mooney has joined #openstack-infra17:41
mordredok. I have access to my shell again so I can shell places as needed17:41
cfriesenyou guys are aware of review.opendev.org being down?17:41
AJaegercfriesen: yes, we're restarting it17:41
fungicfriesen: yep, it's restarting17:41
openstackgerritMerged zuul/zuul master: gerrit: Add some quoting in 'gerrit query' commands  https://review.opendev.org/64987917:41
cfriesencool, thanks17:41
AJaegercfriesen: openstackstatus should have send a message to all channels...17:41
clarkbcfriesen: we sent an irc notice to all channels17:41
fungisent notice to all channels where the openstack status bot resides17:41
fungiand it looks to be back up and responding finally17:42
AJaegerindeed - thanks for updating my sentence ;)17:42
clarkbI don't see gitweb links though17:42
clarkbmaybe I need to clear caches? /me tries a different browser17:42
fungii did a force refresh in mine and it didn't help17:42
AJaegerfungi, mordred , could either of you +2A https://review.opendev.org/659628 to fix zuul post jobs, please?17:43
*** jlibosva has quit IRC17:43
clarkbfungi: using a different browser brought gitweb back17:43
*** Weifan has joined #openstack-infra17:43
sean-k-mooneyfor what it's worth i did not see it in nova, neutron or placement irc but i also just got disconnected and reconnected from freenode so maybe my client missed it17:43
clarkbfungi: so probably need a more forceful cache eviction than force refresh17:43
fungiclarkb: loading a different change i hadn't looked at before solve it for me17:43
* AJaeger sees "gitweb"17:43
clarkbbut do they work17:43
fungizigo: gitweb links are back now17:43
AJaegerbut it does not work for me ;(17:43
clarkbya they don't work for me either17:44
AJaegerI get an URL - and that one returns nothing ;(17:44
fungizigo: nevermind, seems something's still not quite right17:44
clarkbpossible our apache is rewriting things wrongly?17:44
mordredPOO17:45
fungiwe're proxypassing those urls to http://localhost:8081/17:46
clarkbwhich is gerrit17:46
sean-k-mooneythat is weird i see that too17:46
fungiwe do expect the gerrit service to serve that, i assume17:46
openstackgerritTobias Henkel proposed zuul/zuul master: DNM: further zuul-remote debugging attempts  https://review.opendev.org/65963117:46
mordredyes, it's supposed to serve it as a cgi17:46
clarkbERROR com.google.gerrit.httpd.gitweb.GitwebServlet : CGI: Can't locate CGI.pm in @INC (you may need to install the CGI module)17:46
*** udesale has quit IRC17:46
mordredoh. HAHAHAHAH17:46
clarkbso we are missing a dependency?17:47
clarkbperl-cgi?17:47
mordredyes. and we rmeoved it on purpose anyway17:47
fungihah17:47
mordredthe gitweb package isn't required for the gitweb cgi to exist17:47
sean-k-mooneyif i dsable my browser cache with the debug pain the gitweb links come back17:47
mordredso we stopped installing it, because it pulls in, you know, perl17:47
clarkblibcgi-pm-perl17:47
fungiat least that's easy to fix17:47
clarkbI think its that package but will let monty who udnerstands this better decide which package we need17:48
mordredwe could probably just install gitweb17:48
mordredhow about I try installing it by hand for now - and if that fixes it, we can update the puppet17:48
clarkbthat would likely cover other deps17:48
clarkbmordred: wfm17:48
*** Goneri has joined #openstack-infra17:48
mordredclarkb: ok. done - are you still getting the error?17:49
clarkbERROR com.google.gerrit.httpd.gitweb.GitwebServlet : CGI: BEGIN failed--compilation aborted at /usr/share/gitweb/gitweb.cgi line 1317:49
clarkbnew error17:49
mordredit did not install libcgi-pm-perl17:49
mordredneat17:49
clarkbuse CGI qw(:standard :escapeHTML -nosticky); is line 1317:50
clarkbits teh first real line of code after use warn17:50
mordredyeah17:50
fungisean-k-mooney: yeah, it was your disconnect... the notice went to #openstack-nova at 17:37z and at 17:39z the channel saw you disconnect for a 255 second timeout17:50
clarkband use strict17:50
mordredclarkb: how about I install libcgi-pm-perl just for giggles?17:50
sean-k-mooneyfungi: right on my side it was for maybe 5 seconds but i guess i was not getting updates for a bit17:50
clarkbmordred: ok17:50
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate github logs with the event id  https://review.opendev.org/65864517:51
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate gerrit event logs  https://review.opendev.org/65864617:51
openstackgerritTobias Henkel proposed zuul/zuul master: Attach event to queue item  https://review.opendev.org/65864717:51
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate some logs in the scheduler with event id  https://review.opendev.org/65864817:51
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate logs in the zuul driver with event ids  https://review.opendev.org/65864917:51
openstackgerritTobias Henkel proposed zuul/zuul master: Add event id to timer events  https://review.opendev.org/65865017:51
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate pipeline processing with event id  https://review.opendev.org/65865117:51
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate merger logs with event id  https://review.opendev.org/65865217:51
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate job freezing logs with event id  https://review.opendev.org/65888817:51
openstackgerritTobias Henkel proposed zuul/zuul master: Annotate node request processing with event id  https://review.opendev.org/65888917:51
openstackgerritTobias Henkel proposed zuul/zuul master: WIP: Annotate builds with event id  https://review.opendev.org/65889517:51
mordredclarkb: ok. that did it17:51
mordredclarkb: I'm going to re-uninstall gitweb to see if we needed it actually17:51
openstackgerritMerged openstack/project-config master: Fix nodejs version for publish-zuul-python-branch-tarball  https://review.opendev.org/65962817:51
clarkbmordred: k17:51
AJaegerworks for me as well!17:51
AJaeger\o/17:51
sean-k-mooneymordred: works for me too17:51
mordredgitweb links work without gitweb package17:51
mordredso it's libcgi-pm-perl that was needed17:51
sean-k-mooneyand as i said, if you clear your browser cache or disable it with the debug panel, the gitweb links show up again on reviews where they were missing17:52
fungizigo: okay, *now* the gitweb links are working again ;)17:52
AJaegersean-k-mooney: and now you can even "click" on the links17:53
*** eernst has quit IRC17:53
sean-k-mooneyyep well you could click on them before and be disappointed17:53
sean-k-mooneyi see ye also fixed code search to work with opendev recently too17:53
openstackgerritMonty Taylor proposed opendev/puppet-gerrit master: Install libcgi-pm-perl for gitweb  https://review.opendev.org/65963217:54
mordredclarkb, fungi: ^^17:54
*** jcoufal has quit IRC17:55
*** auristor has quit IRC17:56
clarkbAJaeger: release-zuul-python works now thanks!17:56
AJaegeryes, it does!17:56
AJaegernow checking the publish branch-tarball...17:56
*** smarcet has joined #openstack-infra17:57
clarkbmordred: can you review https://review.opendev.org/#/c/655476/ these are our fixes to the renaming playbook based on our day of fun a few weeks back17:57
clarkbI guess it was almost a month ago now, wow17:57
clarkbok I think that means we have fixed our known backlog of problems from yesterday17:59
clarkbpuppet works again, the skopeo change merged but didn't work because of the PPA, so fungi made a package himself; zuul was updated to handle artifacts outside of change contexts, and zuul and gerrit were restarted to get fixes in17:59
clarkb\o/17:59
clarkbfeels like a full day already :)18:00
AJaeger;)18:00
mordredclarkb: I have just learned that accidentally typing Alt-M causes weechat to attempt to capture all of my mouse things, including copy/paste of text and url launching18:00
clarkbmordred: you can /set mouse.disable or something like that18:01
clarkbI run into that randomly occasionally and freak out for a minute until I remember how to turn it off18:01
mordredyeah - but Alt-M is the hotkey to toggle that18:01
mordredwhich I just learned :)18:01
openstackgerritMerged zuul/zuul master: github: add event filter debug  https://review.opendev.org/58054718:02
*** nicolasbock has quit IRC18:02
*** jcoufal has joined #openstack-infra18:06
fungiright, setting it disabled doesn't help disable the toggle. i'm considering rebinding/unbinding that key combo as a solution18:06
fungii don't need irc to know anything about my pointer18:07
fungii use alt-j all the time to navigate between channel buffers, and j is very close to m on my keyboards and i'm a rather hamfisted typist18:08
fungiproblem is i do it unconsciously, so tend not to catch that it's been reenabled until some time later when i attempt to copy or paste something in my irc terminal18:08
fungiand then i get to spend several minutes figuring out what the magic combo is to turn it back off18:09
fungibecause i never can remember18:09
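For reference, the WeeChat incantations being groped for here are roughly these (syntax from memory and may vary by version):

    /mouse disable                # turn mouse capture off immediately
    /set weechat.look.mouse off   # keep it off across restarts
    /key unbind meta-m            # optionally drop the Alt-m toggle binding entirely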
AJaegerand zuul-branch-tarball is fixed as well - so, zuul post jobs are fixed...18:11
*** auristor has joined #openstack-infra18:11
*** amansi26 has quit IRC18:11
clarkbyay and we are publishing to dockerhub again18:13
mordredfungi: the real fun behavior this time was that "click-drag" to try to highlight something did not highlight the thing - instead it changed me to the next window like I'd typed Alt-<arrow>18:13
mordredclarkb: woohoo!18:14
fungimordred: yep, and some pointer interactions in that mode will scroll the buffer too18:14
mordredfungi: oh - so it will! that's actually kind of handy18:14
clarkbI find it unnecessarily confusing to have mouse mode on18:15
*** ianychoi has quit IRC18:15
mordredif I could tell it "please obey scrollwheel but please do not capture anything else" that would work18:15
fungii have a feeling this is mostly to placate folks using weechat on a touchscreen device18:15
clarkbfungi: I dunno, it's the touchscreen where I get extra confused18:15
fungii too like nothing about the mouse mode in weechat and wish it had an actual config option to disable the ability to accidentally enter that mode18:16
openstackgerritTobias Henkel proposed zuul/zuul master: Add foreground option  https://review.opendev.org/63564918:16
mordredfungi: I have a touchscreen. I do not understand wanting to use it18:18
fungii have a touchscreen and i've tried to use it, but it seems like a bit of a novelty given i'm accustomed to doing almost everything through key combos18:18
fungiif i did sketching or similar sorts of art on my portable devices then maybe18:19
fungibut i have a bluetooth pen tablet for that purpose18:19
mordredyeah. like - use cases where touch is actually a better UI than a keyboard18:19
*** ralonsoh has quit IRC18:20
clarkbI have two issues with my touchscreens. First is it is a battery drain so may as well turn it off and operate longer. Second is hands tend to not be clean and then I can't see what I'm looking at18:20
clarkblogan-: I've +2'd https://review.opendev.org/#/c/659215/3 I think it is safe to merge that before dib updates and dib will just ignore the env var until it doesn't18:22
clarkblogan-: I think you can remove your WIP as a result18:22
logan-thanks clarkb, makes sense to me18:22
*** ccamacho has joined #openstack-infra18:23
*** gfidente is now known as gfidente|afk18:24
*** smarcet has quit IRC18:24
clarkbinfra-root re https://review.opendev.org/#/c/657862/3 any concern with adding repos to x/ ?18:25
clarkbI seem to recall some people thinking we wouldn't add much there? I don't think it's an issue if people are happy with the generic prefix18:25
fungiwhen i suggested x originally i saw it as a nice place for a catch-all when folks don't care/want a namespace18:25
funginot merely as a historic artifact of the openstack namespace eviction event18:26
clarkbalso I approved https://review.opendev.org/#/c/656657/3 which hopefully isn't an issue with replication likely still happening18:26
mordredI don't have a _problem_ with it18:26
clarkbfungi: ya me too, it's a super boring and generic "I don't care" type space18:26
mordredbut I also think it'll be nice to get our story for new namespaces fleshed out18:26
clarkbwhich is the boat I expect many people will be in18:26
fungix marks the spot for your buried treasure!18:27
clarkbat 10k ish tasks18:27
* mordred would like buried treasure18:27
*** ykarel|away has quit IRC18:28
openstackgerritMerged openstack/project-config master: Add new Keystone SAML Mellon charm repositories  https://review.opendev.org/65665718:38
openstackgerritTobias Henkel proposed zuul/zuul master: DNM: further zuul-remote debugging attempts  https://review.opendev.org/65963118:41
openstackgerritMerged zuul/zuul master: Don't check out a commit if it is already checked out  https://review.opendev.org/16392218:47
*** trident has quit IRC18:51
*** trident has joined #openstack-infra18:52
openstackgerritMerged zuul/zuul master: Use a more visible selection color  https://review.opendev.org/64886518:59
clarkbinfra-root as an fyi I am going to recheck https://review.opendev.org/#/c/656880/1 which will prune our docker images19:00
clarkbit didn't manage to merge during the ptg and then it got put to the side with all of the brokenness since being back19:01
clarkbbut I'm finally in a place to watch it so trying to get it in now19:01
mordredclarkb: ++19:01
openstackgerritPaul Belanger proposed zuul/zuul master: Support Ansible 2.8  https://review.opendev.org/63193319:07
*** imacdonn has quit IRC19:12
*** imacdonn has joined #openstack-infra19:14
clarkbbah the idempotent check in the beaker job makes that change fail again19:17
clarkbI'll recheck again (and maybe we need to loosen that rule)19:17
openstackgerritMerged zuul/zuul master: Make image build jobs voting again  https://review.opendev.org/65924219:18
clarkbhrm all the system-config run ansible jobs also failed and look super unhappy19:21
clarkboh ansible 2.8 released19:21
clarkbI bet that is our issue19:21
clarkbyay19:21
clarkbdid ansible 2.8 drop the paramiko requirement?19:22
clarkbseems that testinfra is failing to ssh?19:22
*** nicolasbock has joined #openstack-infra19:22
clarkbhttps://github.com/ansible/ansible/commit/6ce9cf7741679449fc3ac6347bd7209ae697cc5b#diff-b4ef698db8ca845e5845c4618278f29a yup19:24
clarkbshould we try to pin ansible in our test jobs?19:24
*** imacdonn has quit IRC19:25
*** Weifan has quit IRC19:26
openstackgerritClark Boylan proposed opendev/system-config master: Cap ansible to <2.8 to fix testinfra  https://review.opendev.org/65964619:27
*** sarob has joined #openstack-infra19:27
clarkblets see if that works19:27
clarkbthat doesn't actually run the testinfra tests..19:28
openstackgerritClark Boylan proposed opendev/system-config master: Cap ansible to <2.8 to fix testinfra  https://review.opendev.org/65964619:28
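The cap itself is a one-liner; roughly the shape of what 659646 adds (illustrative, the exact requirements file in system-config may differ):

    # keep testinfra's ssh backend working until paramiko is installed explicitly
    ansible<2.8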
*** Goneri has quit IRC19:32
*** xek__ has quit IRC19:32
*** xek__ has joined #openstack-infra19:32
openstackgerritMerged zuul/zuul master: zuul-quick-start: run docker-compose as regular user  https://review.opendev.org/65873819:32
*** Goneri has joined #openstack-infra19:33
openstackgerritDean Troyer proposed opendev/system-config master: Add #starlingx to statusbot channels  https://review.opendev.org/65965219:40
*** imacdonn has joined #openstack-infra19:42
*** smarcet has joined #openstack-infra19:44
pabelangerzuul.o.o isn't loading for me19:46
pabelangerclarkb: mordred^19:46
pabelangerI seem to be getting a JS error19:46
clarkbI'm guessing react 2.0 broke something19:46
*** imacdonn_ has joined #openstack-infra19:46
pabelangerhttp://paste.openstack.org/show/751488/19:47
clarkbI know nothing about the react 2.0 work so not a great person to debug it19:47
clarkbhttps://reactjs.org/docs/error-decoder.html/?invariant=130&args[]=undefined&args[]=19:47
*** gfidente|afk has quit IRC19:48
pabelangerk, will bring it up in #zuul19:49
clarkbpuppet did just run and it did update the web stuff19:49
*** imacdonn has quit IRC19:49
*** Goneri has quit IRC19:50
mordredpabelanger: it's loading for me19:51
mordredpabelanger: have you tried shift-reloading?19:51
mordredoh - hrm19:51
clarkbmordred: I think you are on the old code if it is working19:51
pabelangeryah, even cleared cache19:52
mordredhttp://zuul.opendev.org/t/opendev/status works for me19:52
mordredbut http://zuul.opendev.org/t/openstack/status19:52
mordreddoesn't19:52
*** pcaruana has quit IRC19:52
pabelangersame19:52
mordredI'm grumpy about how that is doing what it's doing19:53
mordredwe used to produce source maps so that we could get real errors even with minification19:53
mordredthe react error decoder is useless19:54
clarkbmordred: there is still a map file in the static dir19:54
mordredawesome. well - I don't know why we're getting useless error messages19:54
mordredbut I recommend we revert19:54
clarkbwfm I'll let other people drive as I need to eat a late lunch now19:54
openstackgerritClark Boylan proposed opendev/system-config master: Cap ansible to <2.8 to fix testinfra  https://review.opendev.org/65964619:55
pabelangerk, will get revert up19:55
openstackgerritDavid Shrewsbury proposed zuul/zuul master: Revert "web: upgrade react and react-scripts to ^2.0.0"  https://review.opendev.org/65965519:55
clarkbalso if infra-root can review ^ it would be helpful for fixing system-config tests. note ps2 tests ran all the jobs19:55
clarkbI've pushed ps3 without the comment in .zuul.yaml which will run fewer tests19:55
pabelanger+219:56
pabelangerclarkb: yah, users now need to manage paramiko themselves for ansible, IIRC19:56
clarkbpabelanger: I'm expecting testinfra will make an update that includes it as a dep19:57
clarkbsince testinfra needs it19:57
pabelanger+119:57
pabelangerclarkb: k, I'll do a PR with testinfra now19:59
*** Weifan has joined #openstack-infra20:02
*** ccamacho has quit IRC20:02
pabelangerclarkb: ah, reading docs, paramiko is an extra install, since it is only used by specific backends. so, we likely need to update our job to do the install20:05
*** whoami-rajat has quit IRC20:05
*** imacdonn_ is now known as imacdonn20:07
*** sarob has quit IRC20:09
*** e0ne has joined #openstack-infra20:10
clarkbok, we don't use 2.8 in production yet so the cap seems appropriate for now20:11
clarkbbut can add paramiko and 2.8 in the same go20:11
*** e0ne has quit IRC20:12
pabelanger++20:13
openstackgerritMerged zuul/zuul master: Add logs spec  https://review.opendev.org/64871420:18
*** Lucas_Gray has joined #openstack-infra20:19
*** Weifan has quit IRC20:19
*** Goneri has joined #openstack-infra20:27
*** Weifan has joined #openstack-infra20:30
*** jcoufal has quit IRC20:38
*** dave-mccowan has joined #openstack-infra20:38
*** tbarron has left #openstack-infra20:41
openstackgerritMerged zuul/zuul master: web: add tags to jobs list  https://review.opendev.org/63365420:41
*** ijw has joined #openstack-infra20:46
*** diablo_rojo has joined #openstack-infra20:48
openstackgerritMerged opendev/system-config master: Cap ansible to <2.8 to fix testinfra  https://review.opendev.org/65964620:55
clarkbyay I can recheck my other change I want to merge now :)20:55
*** armax has quit IRC20:55
*** kjackal has quit IRC20:56
openstackgerritMerged zuul/zuul master: Ensure correct state in MQTT connection  https://review.opendev.org/65293220:57
pabelangerclarkb: I was trying to get czuul working, but it doesn't seem to support python320:58
pabelangerand didn't feel like installing python220:58
clarkbczuul?20:58
clarkbah harlowja's zuul status viewer20:59
*** trident has quit IRC20:59
*** trident has joined #openstack-infra21:00
pabelangeryah21:01
*** sarob has joined #openstack-infra21:06
clarkbwhats the new status json url?21:08
clarkb/api/status ?21:08
clarkbya that seems to be it21:08
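Poking at the endpoint directly works too; a sketch using the paths discussed here and earlier (json.tool is only there for readability):

    # white-labeled deployment
    curl -s https://zuul.openstack.org/api/status | python3 -m json.tool | head -n 20
    # multi-tenant paths, matching the /t/openstack/status UI URL above
    curl -s https://zuul.opendev.org/api/tenant/openstack/status | python3 -m json.tool | head -n 20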
clarkbI'm not sure how to read czuul21:08
*** rh-jelabarre has quit IRC21:09
clarkbthe revert is in the gate21:12
*** slaweq has quit IRC21:18
openstackgerritMerged zuul/zuul master: Fix dequeue ref not handling the project name  https://review.opendev.org/65911021:19
harlowjasomeone uses czuul still, nice, lol21:19
clarkbharlowja: well a react update broke the web dashboard21:20
clarkbso using it to see how the fix for the broken web dashboard is doing :)21:20
harlowjalol, cool21:20
clarkbI'm not quite sure I understand how to read czuul's render, but I understand it well enough to know the fix is in the gate21:21
harlowjaya, this was my introduction to urwid, lol21:22
harlowja*it was21:22
harlowja^ not very good code as a result, lol21:23
clarkbthe docker prune change is failing though ;/21:29
openstackgerritMerged zuul/zuul master: Add support for submitting reviews on GitHub  https://review.opendev.org/65654121:32
*** xek has joined #openstack-infra21:33
*** xek__ has quit IRC21:35
*** Goneri has quit IRC21:42
openstackgerritJeremy Stanley proposed zuul/zuul master: Install latest git-review from PyPI in quickstart  https://review.opendev.org/65967421:45
*** xek_ has joined #openstack-infra21:47
*** xek has quit IRC21:47
*** smarcet has quit IRC21:50
*** smarcet has joined #openstack-infra21:50
*** smarcet has joined #openstack-infra21:51
openstackgerritMerged zuul/zuul master: Support fail-fast in project pipelines  https://review.opendev.org/65276421:51
*** mriedem has quit IRC21:53
*** armax has joined #openstack-infra21:55
*** xek has joined #openstack-infra21:56
*** xek_ has quit IRC21:56
*** xek has quit IRC21:58
*** xek has joined #openstack-infra21:59
*** tosky has quit IRC21:59
*** smarcet has quit IRC22:02
*** smarcet has joined #openstack-infra22:03
openstackgerritMerged zuul/zuul master: Support Ansible 2.8  https://review.opendev.org/63193322:04
clarkbI think we should consider directly enqueueing the react revert for zuul when it kicks out of the gate again22:06
clarkbI guess I could dequeue then enqueue it22:06
*** sreejithp has quit IRC22:06
*** smarcet has quit IRC22:06
fungialternative trick is to just keep re-promoting it as soon as it fails a build but before it reports on the buildset22:07
*** sarob has quit IRC22:12
*** rfarr_ has quit IRC22:15
clarkbwhen the current one fails (so that we get a report on the change with logs) I'll re-enqueue straight to the gate, then I need to step out for a bit22:24
fungii'm in and out taking breaks from the lawn, need to get it finished before dark22:25
funginote that you can enqueue it straight to the gate regardless. the buildset in check will continue and will report back in gerrit anyway22:26
clarkbya I didn't recheck it just enqueued it to gate22:27
clarkbI just wanted record of the py36 failure before I did that (so didn't dequeue)22:27
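The dequeue/enqueue/promote dance above is done with the zuul RPC client on the scheduler; a rough sketch (flags from memory and may differ between versions; the change/patchset ids are only illustrative):

    # push the revert straight into the gate without touching its check run
    zuul enqueue --tenant openstack --trigger gerrit \
        --pipeline gate --project zuul/zuul --change 659655,1
    # or move it to the head of the gate queue if it is already enqueued
    zuul promote --tenant openstack --pipeline gate --changes 659655,1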
clarkbthe docker image prune is finally going to make it to the gate I think22:28
*** jtomasek has quit IRC22:33
*** UdayTKumar has quit IRC22:48
*** iokiwi has joined #openstack-infra22:48
fungiService 'node' failed to build: Get https://registry-1.docker.io/v2/rastasheep/ubuntu-sshd/manifests/latest: unauthorized: incorrect username or password22:50
fungiis that the ipv6 registry problem?22:50
*** ijw has quit IRC22:52
openstackgerritJeremy Stanley proposed zuul/zuul master: Install latest git-review from PyPI in quickstart  https://review.opendev.org/65967422:54
*** tkajinam has joined #openstack-infra22:55
openstackgerritMerged opendev/system-config master: Prune docker images after docker-compose up  https://review.opendev.org/65688022:55
* fungi cheers22:59
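In plain commands, the effect of that change is roughly this (a sketch of the behaviour, not the literal task in system-config):

    docker-compose up -d     # bring up / refresh the containers
    docker image prune -f    # then drop dangling image layers left behind by updates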
*** xek has quit IRC23:03
openstackgerritMerged zuul/zuul master: Revert "web: upgrade react and react-scripts to ^2.0.0"  https://review.opendev.org/65965523:04
clarkbfungi: yes that is the ipv6 problem23:04
*** rcernin has joined #openstack-infra23:04
clarkbfungi: docker should talk to the buildset registry proxy which proxies to dockerhub and pulls off the auth23:04
clarkbif that fails then it tries to talk to dockerhub directly with auth which fails23:04
*** _erlon_ has quit IRC23:05
clarkbwe think that happens because in ipv6-only clouds you get many-to-one NAT to dockerhub from the buildset registry and it isn't as reliable23:05
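The fallback behaviour described here matches how docker treats registry mirrors: it tries each mirror in turn, then falls back to Docker Hub directly. A hedged sketch of the kind of /etc/docker/daemon.json the jobs end up with (values illustrative, not the literal zuul-jobs role output):

    {
      "registry-mirrors": ["http://<buildset-registry-or-proxy>:<port>"]
    }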
*** slaweq has joined #openstack-infra23:05
fungiokay, that's what i thought i recalled from the earlier discussion (at the ptg?)23:07
fungiand we were going to try pointing that at the per-provider mirror servers at some point, but i guess we haven't yet?23:07
clarkbno corvus pointed out we really want tls for that23:08
clarkbbecause we are publishing the end result images not just consuming them23:08
clarkbthis is why ianw's work to deploy tls'd mirrors is important23:08
clarkb(if you haven't reviewed those changes yet please do!)23:08
fungii think i have, but maybe i haven't. will make sure23:09
clarkbinfra-root we are down to 4.2GB ish of free space on the gitea cluster members23:10
*** slaweq has quit IRC23:10
clarkbIt appears /var/lib/docker-old represents about 6GB we could clear out23:10
clarkbwe had moved the /var/lib/docker aside when recovering from the k8sification of everything, but is there any reason to keep those dirs around anymore or can we rm them?23:10
clarkbmordred: corvus ^ you may want to weigh in on that23:11
*** slaweq has joined #openstack-infra23:11
clarkbit's not an emergency yet so I won't go deleting anything23:11
clarkbbut if we are happy with gitea + docker now I can go through and clean those docker-old dirs up23:11
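A sketch of the sanity checks worth running on each gitea node before removing anything (paths from the discussion above; the rm only happens once infra-root agrees):

    df -h /                           # confirm the ~4.2GB free figure
    sudo du -sh /var/lib/docker-old   # confirm it really is the ~6GB in question
    # only after sign-off:
    # sudo rm -rf /var/lib/docker-old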
*** Lucas_Gray has quit IRC23:14
fungilooks like 658930 is currently parented to a dnm demonstration, but i approved the one ahead of that in the series23:15
*** Lucas_Gray has joined #openstack-infra23:15
*** slaweq has quit IRC23:16
clarkbhttps://review.opendev.org/#/c/658281/29 -> https://review.opendev.org/#/c/658930/3 -> https://review.opendev.org/#/c/652801/15 is the stack I see23:17
clarkbthank you for approving the two at the bottom23:17
clarkbworth noting that because we had to create a new /var/lib/docker after the k8sification, our image pruning isn't currently going to prune much23:18
fungiahh, no you're right, gertty's current attempts at rendering a nonlinear dependency are a little hard to interpret23:18
clarkbbut its good that change finally merged as it is one less thing to worry about going forward23:19
pabelangerclarkb: zuul.o.o is back online23:19
clarkbpabelanger: cool puppet picked up the new (old) js then23:19
fungiso the revert worked, good23:19
pabelangerya23:20
*** dklyle has joined #openstack-infra23:21
*** diablo_rojo has quit IRC23:24
ianwclarkb: sorry just coming online ; did you see the skopeo issues with the package being removed from the ppa?23:24
clarkbianw: yes fungi built a new package and just manually installed it I think23:25
clarkbianw: tristanC's fix appears like it will get merged so hopefully upstream skopeo will work again for us shortly23:25
ianwok, cool; yeah that was the next thing.  we could put it in openstack-ci-core ppa but sounds like we can just live with it23:25
clarkbif tristanC's fix didn't already have a ton of traction I think we would put it in our ppa, but considering they have already acked it a bunch I think we are good to coast on fungi's package for a few days while they get it merged and built in their ppa23:26
tristanCclarkb: i'll update the PR shortly according to the last suggestion23:27
ianwclarkb: so i've started looking more at ask.o.o ... i think we can get it on xenial on life support ...23:28
pabelangerclarkb: re zuul / ansible 2.8.0 we should keep an eye out for restarts of zuul in openstack. Once we restart, jobs will be able to use ansible 2.8. I'm happy to help with testing when we want to schedule that, either tomorrow or early next week.23:28
ianwit's going to be a great target for someone to containerise and migrate; i don't think any of us feel the need to get into puppet hacking around it23:28
*** aaronsheffield has quit IRC23:30
mordredianw: ++23:31
mordredianw: I actually think that services like that are potentially a great win for our new container overlords23:31
clarkbpabelanger: not just will be able to but will be forced to by default right?23:33
clarkbianw: ya I'm fine with life support and then putting up the flag that says this needs work please to container/ansible23:34
ianwespecially since, excellent and persistent work from cmurphy and others notwithstanding, almost literally the day we got to the point we could turn off puppet3 support, puppet4 went EOL :)23:34
*** rlandy has quit IRC23:34
clarkbianw: not just eol they deleted it23:34
clarkbbecause thats what you do, delete all the packages on eol day23:34
clarkbianw: re life support do you think that will work with existing puppetry or do we need to do another upgrade in place?23:34
ianwclarkb: sshhh, you'll start giving pip/setuptools ideas! ;)23:35
*** dklyle has quit IRC23:35
clarkbwith ask there are a couple of big struggles we've had. First we were asked to deploy and run it and we basically said no because it's a pain (after I spent weeks trying to get it to work)23:36
ianwclarkb: i'm not sure; i've been playing on ask-staging https://ask-staging.openstack.org/ but it's not looking great23:36
clarkbbut then it got deployed by the askbot people on a server we hosted for them until that contract went away or something23:36
pabelangerclarkb: no, we didn't change the default yet. It should be 2.7 still23:36
clarkbso we sort of got forced into it :/23:36
clarkbbut then it is also the type of service where the people using it are also the least likely to be involved in helping to run it23:37
pabelangerclarkb: I figure once we roll it out to opendev, then confirm zuul-jobs works with 2.8 we can land the bump to 2.8 by default in zuul23:37
clarkbso we need to figure out a more sustainable way forward with it, otherwise it will die under the pressure of us not being able to keep it going for people who can't help23:37
clarkbpabelanger: gotcha23:37
ianwclarkb: in place would avoid starting the ask virtualenv fresh, which would be good because any non-pinned dependency pulled in cascades into hard-to-debug failures23:37
clarkbianw: hrm if it is in a virtualenv an in-place upgrade will probably break it though23:38
ianwe.g. it's incompatible with later redis packages -> https://review.opendev.org/65945523:38
clarkbianw virtualenvs don't survive minor python updates23:38
ianwhrm, yes true23:38
fungiyeah, replace your python interpreter and you need to rebuild your venv23:38
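Which usually means rebuilding the venv against the new interpreter; a minimal sketch, assuming askbot lives in a python2 virtualenv at an illustrative path:

    mv /srv/askbot/venv /srv/askbot/venv.old          # keep the broken one around for reference
    virtualenv -p python2.7 /srv/askbot/venv
    /srv/askbot/venv/bin/pip install -r /srv/askbot/requirements.txt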
clarkbya problems like that redis one are why we basically gave up trying to run it ourselves the first time around23:38
clarkbit required a very particular set of tooling from django to postgres to python23:39
ianwi don't know if you saw -> https://review.opendev.org/#/c/659473/ ...23:39
ianwyep, it's all pretty fragile as i'm seeing23:39
ianwin fact, ask-staging is now throwing a different error to when i left it last night23:39
clarkbso maybe the thing to start doing here is gathering info on the fragility so we can make a concrete and not hand-wavy argument for why this service is unsustainable23:39
clarkbianw: I wonder if the "proper" way to do this server upgrade is to also upgrade the service?23:40
ianwclarkb: sure, i have an increasing number of reviews pointing in that direction23:41
ianwi'm certain it is ... ancient django versions, incompatibility with any modern redis, etc23:41
clarkbhttps://github.com/ASKBOT/askbot-devel/blob/master/askbot/doc/source/upgrade.rst appears to be their doc on this23:41
ianwbut that puts us at the point of basically re-hacking all the puppet ... at which point we should be containerising it23:41
ianwwhich i'd be happy to work with someone who is interested in it on, but i'm not sure if i want to be the driver ...23:42
clarkbianw: ya23:42
ianwclarkb: how about this; i'll spend a little more time now just seeing where i get with it, and update the trusty etherpad page with some details23:43
*** Lucas_Gray has quit IRC23:43
clarkbthat sounds good then we can send an email to the openstack-discuss list that basically says "look this service has not been cared for and requires a ton of effort to get it into a happy place. We need help doing that otherwise we'll likely have to shut off the service" (something along those lines)23:44
ianwwe can decide if we want to, basically, have this as a stand-alone service ... like puppet it, but accept that the idea it could be re-deployed is gone23:44
ianwyeah, then what you said.  call for volunteers to containerise it, which i'm happy to "mentor" (i'd be learning too)23:44
fungithe codebase for it seemed to have stabilized significantly. i don't know if that means it's still getting updates to keep it working on more modern platforms, just no new features, or... a handful of new features with increasingly stale platform support23:45
fungiif it's the latter and we need to maintain a fork just to keep it secure, then yes we need folks stepping up to help or else it's going away23:46
clarkbfungi: it's not just that though, just running it is a significant pita23:46
clarkbthere is a reason we refused to run it the first time we were asked23:46
clarkbit has a ton of moving parts, all of which you have to get just right (which often isn't documented), otherwise the thing breaks23:47
clarkbthe major issue I ran into when trying to run it was having mysql as the db, which was documented as an option but just didn't work23:47
clarkbthat and getting localization to work23:47
clarkbwhich sounds very similar to needing a specific version of redis and so on23:48
fungioh, sure, i'm just saying a volunteer or two could probably keep it going *if* it's still being updated for more modern operating systems and frameworks23:48
mordredyeah23:48
fungiif it's not and we need to do development work on the software to get it to support newer versions of stuff, that's going to need more than a volunteer or two23:49
clarkbalso why do people keep using redis? operationally it is so painful to run23:49
*** yamamoto has joined #openstack-infra23:49
mordredand also - if it can be done via containers and whatnot, running a specific redis and a specific postgres is something a contributor less deeply ingrained in infra-lore could do23:49
mordredclarkb: because it's easy for a dev to spin up on their laptop23:49
clarkbmordred: until it OOMs and your kernel kills firefox/chrome23:49
mordredclarkb: and devops == "get rid of ops people and bow to the wisdom of devs with no ops experience"23:49
* clarkb has really bad memories of redis23:49
fungiand this is why they all buy laptops with 64gb+ of ram23:49
* mordred grumps some more23:50
clarkbfungi: they are all single-stick, non-user-replaceable now which means you get a max of 16GB23:50
clarkbfungi: but usually 8GB is really all you get :/23:50
fungiyeesh23:50
mordredyeah. they don't sell laptops with ram23:50
fungiin modern laptops? really?23:50
mordredyeah23:50
clarkbfungi: yup it was easier to load up a laptop with memory 10 years ago than now23:50
fungi8gb ram is what this little netbook has23:50
clarkbeverything is soldered to the board now and single dimm23:50
mordredbecause they've decided that laptops should try to be tablets23:50
clarkbso on intel cpus that means 16GB max23:51
mordredrather than letting tablets be tablets23:51
fungimost tablets probably have at least that much ram, yeah23:51
clarkb(and was 8GB max not that long ago)23:51
openstackgerritMerged opendev/system-config master: Use handlers for letsencrypt cert updates  https://review.opendev.org/65280123:51
mordredand letting laptops be, well, laptops23:51
openstackgerritMerged opendev/system-config master: letsencrypt: use a fake CA for self-signed testing certs  https://review.opendev.org/65893023:51
ianwyou know what i love: when you google an error message, and the only result is your own mail to the openstack-infra list about that error message23:51
clarkbmy x240 isn't soldered memory and is user replaceable, but the intel cpu in it only allows for 8GB per dimm23:52
clarkbso I have 8GB of memory in my laptop23:52
clarkbianw: thats always my favorite thing23:52
clarkbor you get a launchpad bug you filed a year ago23:52
*** diablo_rojo has joined #openstack-infra23:52
fungiianw: i love when the only hit is me three years ago asking the same question, receiving no answer, and then forgetting about it23:52
fungiyeah, that23:52
* mordred loves all of those things23:52
clarkbdesktop has 16GB of memory 11GB of which is used and the vast majority of that is firefox23:53
fungiand then i think, "past me should have tried a little harder, you know?"23:53
clarkbI have firefox, thunderbird, and three terminals open23:53
clarkbthats 11GB23:53
*** yamamoto has quit IRC23:53
fungimy workstation is currently using 2.7gb out of 16gb ram23:54
fungiand that's with firefox and ~20 terminals open23:54
clarkbya I'm sure some of my overhead is the desktop environment23:54
clarkband I have a tab problem in firefox23:54
clarkbI regularly go over 400 ish tabs23:54
fungiahh, right, i'd be surprised if ratpoison is using more than a few kb23:54
pabelangerianw: I especially love when some new shiny thing is the best and other people recommend it. Then I decide to install and try it, and it clearly doesn't work because of something super obvious that somehow every other person has seemed to miss.23:54
fungipabelanger: that's because all the people who are talking about how great it is have so little time left after all that talking to actually try it23:55
pabelanger:)23:55
clarkbpabelanger: thats like using docker with ipv6 right?23:55
pabelangerIKR23:56
fungimeh, ipv6 is so last century23:56
mordredpabelanger: +10023:56
pabelangerhow is that even possible23:56
fungiit's hipster to find new and exciting ways to preserve ipv4 and embed it deeper and deeper into the assumptions of your software23:56
mordredfungi: using ipv4 in 2019 is so ironic23:56
mordredusing ipv6 is too obvious23:57
fungimy internet protocol has a goatee23:57
fungidoes yours?23:57
clarkbI've begun to learn the value of using children as a transport layer23:57
mordredclarkb: do you just pin things to their shirts?23:57
fungistick things to them and they eventually arrive in another room23:57
clarkbmordred: I ask them for stuff and more often than not it arrives quickly23:57
mordredneat!23:57
fungiand sometimes it's even what you asked for23:58
clarkbthey particularly enjoy it if it is something they wouldn't normally be allowed to interact with like my wallet23:58
fungisomeone needs to invent error correction for that23:58
mordredclarkb: have you considered using them as a form of cold storage for backups? "go hide this"23:58
clarkbmordred: that is a neat idea23:58
clarkbbackups so good even I can't delete them23:58
mordred++23:58
mordredof course, the most important thing in a backup is the ability to restore from it23:59
fungithey're probably an excellent source of entropy too23:59
clarkbI asked for my wallet yesterday because hbo and roku couldn't figure out how to process my monthly payment23:59
clarkbthen in my attempt at figuring it out I decided well, I'll just unsubscribe then subscribe23:59
ianwok, ask-staging.openstack.org is back to the same error i was seeing.  i'm going to try migrating a dump of the db; if that doesn't fix it ... i dunno23:59
clarkbya no the unsub totally worked but now they can't figure out how to give me a subscription again23:59
clarkbso I may have to create a new account to watch GoT finale23:59
