Saturday, 2022-06-04

*** pojadhav- is now known as pojadhav  [04:55]
<noonedeadpunk> Hey! I dunno if that's related to openstacksdk 0.99.0 or not, but we have quite a bunch of POST_FAILURE results in zuul for the last 48h.  [06:46]
<noonedeadpunk> we can't merge a thing now  [10:11]
<noonedeadpunk> out of 4 rechecks, not one passed because of that  [10:12]
<fungi> noonedeadpunk: have a link to an example?  [10:18]
<fungi> looking at https://zuul.opendev.org/t/openstack/builds i don't see one  [10:18]
<fungi> https://zuul.opendev.org/t/openstack/builds?result=POST_FAILURE&skip=0 suggests it may be mostly limited to openstack-ansible jobs?  [10:20]
<fungi> and monasca  [10:20]
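A minimal sketch of the kind of query being discussed here, assuming the Zuul REST API exposes the same filters as the web UI URL linked above (tenant, result, limit); the per-project summary is an illustration, not part of the original conversation:

    # Hedged sketch: list recent POST_FAILURE builds in the openstack tenant
    # and count them per project, to see whether they really cluster on
    # openstack-ansible jobs.
    import json
    import urllib.request

    url = ("https://zuul.opendev.org/api/tenant/openstack/builds"
           "?result=POST_FAILURE&limit=100")
    with urllib.request.urlopen(url) as resp:
        builds = json.load(resp)

    counts = {}
    for build in builds:
        counts[build["project"]] = counts.get(build["project"], 0) + 1
    for project, count in sorted(counts.items(), key=lambda item: -item[1]):
        print(f"{count:4d}  {project}")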
<fungi> https://zuul.opendev.org/t/openstack/build/a2efebe25b02451d9d97a48f5c52d80e ended in POST_FAILURE and ran on ze04. looking at the executor debug log there, the "get df disk usage" task in post-run ended in an unreachable state, though i don't see any output from the command to explain why ansible decided that node was "unreachable"  [10:32]
<fungi> limiting to voting master branch jobs in check or gate, it looks like there were some other projects which also saw POST_FAILURE results in the past couple of days: https://zuul.opendev.org/t/openstack/builds?branch=master&pipeline=check&pipeline=gate&result=POST_FAILURE&voting=1&skip=0  [10:35]
<fungi> though most were openstack/openstack-ansible or openstack/openstack-ansible-repo_server  [10:36]
<fungi> https://zuul.opendev.org/t/openstack/build/09fbf1d7a73e4d2db43e286e41cbcdf1 ran from ze06, and i find the same thing in its debug log (ansible reports unreachable from the df task)  [10:40]
<fungi> oh! that may be a red herring  [10:42]
<fungi> 2022-06-03 21:15:08,965 WARNING zuul.AnsibleJob: [e: 9e49267797fa49ae95b541a87bd79c7c] [build: 09fbf1d7a73e4d2db43e286e41cbcdf1] Ansible timeout exceeded: 1800  [10:43]
<fungi> 2022-06-04 09:40:47,139 WARNING zuul.AnsibleJob: [e: 3908f0efcea242eda6fcc3906c5c4d77] [build: a2efebe25b02451d9d97a48f5c52d80e] Ansible timeout exceeded: 1800  [10:45]
<fungi> so yeah, looks like these may be timeouts masked by a failing post-run task  [10:45]
<fungi> oh, though it's during the "Upload logs to swift" task  [10:46]
<fungi> so maybe log uploads are taking too long to complete?  [10:46]
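A minimal sketch of how one might pull these "Ansible timeout exceeded" warnings out of an executor debug log, matching the format of the two lines quoted above; the log path is a hypothetical example, not taken from the conversation:

    # Hedged sketch: print every Ansible-timeout warning together with the
    # build id it belongs to, so timed-out builds can be matched against the
    # POST_FAILURE results found earlier.
    import re

    pattern = re.compile(
        r"\[build: (?P<build>[0-9a-f]+)\] Ansible timeout exceeded: (?P<secs>\d+)")

    with open("/var/log/zuul/executor-debug.log") as log:  # hypothetical path
        for line in log:
            match = pattern.search(line)
            if match:
                print(match.group("build"), match.group("secs"))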
<fungi> i have to get ready for my flight, but maybe something has started making swift uploads sometimes take too long?  [10:49]
<fungi> or is causing them to hang?  [10:49]
<fungi> noonedeadpunk: it shouldn't be openstacksdk 0.99.0 at least, since we pin to 0.61.0 in the executor ansible environments for now (given the other bugs we encountered)  [10:52]
<fungi> anyway, i'm off to the airport, but may have some time sitting around later today to take a longer look  [12:07]
<fungi> this looks right up corvus's alley: https://youtu.be/2XLZ4Z8LpEE "Using a 1930 Teletype as a Linux Terminal"  [15:58]
<fungi> i'll see your model m and raise you...  [15:59]
<fungi> the perfect computer fixture for your refurbished sleeper car  [16:00]
<opendevreview> Ade Lee proposed zuul/zuul-jobs master: Add the post-reboot-tasks role  https://review.opendev.org/c/zuul/zuul-jobs/+/844704  [16:32]
*** lbragstad4 is now known as lbragstad  [20:56]
