Thursday, 2023-09-21

*** travisholton3 is now known as travisholton00:11
opendevreviewOpenStack Proposal Bot proposed openstack/project-config master: Normalize projects.yaml  https://review.opendev.org/c/openstack/project-config/+/89599202:39
opendevreviewMerged openstack/project-config master: Normalize projects.yaml  https://review.opendev.org/c/openstack/project-config/+/89599206:00
fricklerfungi: checking merge conflicts for ^^ I saw https://review.opendev.org/c/openstack/project-config/+/685778 , do you still want to keep that open?06:04
opendevreviewdaniel.pawlik proposed zuul/zuul-jobs master: Parametrize cri-o version  https://review.opendev.org/c/zuul/zuul-jobs/+/89559707:02
opendevreviewdaniel.pawlik proposed zuul/zuul-jobs master: Add feature to set --vm-driver name for minikube  https://review.opendev.org/c/zuul/zuul-jobs/+/89475507:14
opendevreviewdaniel.pawlik proposed zuul/zuul-jobs master: Add feature to set --vm-driver name for minikube  https://review.opendev.org/c/zuul/zuul-jobs/+/89475507:49
opendevreviewdaniel.pawlik proposed zuul/zuul-jobs master: Add feature to set --vm-driver name for minikube  https://review.opendev.org/c/zuul/zuul-jobs/+/89475508:13
opendevreviewdaniel.pawlik proposed zuul/zuul-jobs master: Add feature to set --vm-driver name for minikube  https://review.opendev.org/c/zuul/zuul-jobs/+/89475508:29
*** elodilles_pto is now known as elodilles08:47
opendevreviewdaniel.pawlik proposed zuul/zuul-jobs master: Add feature to set --vm-driver name for minikube  https://review.opendev.org/c/zuul/zuul-jobs/+/89475509:48
opendevreviewdaniel.pawlik proposed zuul/zuul-jobs master: Add feature to set --vm-driver name for minikube  https://review.opendev.org/c/zuul/zuul-jobs/+/89475509:59
fungifrickler: thanks for spotting that, i've abandoned it since i don't expect to find time to work on that any time soon11:35
*** amoralej is now known as amoralej|lunch12:23
*** amoralej|lunch is now known as amoralej13:10
fricklerfungi: we might have an issue with some log uploads, two post_failures without logs in https://review.opendev.org/c/openstack/requirements/+/896053 , do you have time to check zuul logs?13:12
fungiyep, just a sec13:30
fungif802192c96bd4bec80b3948761297e8a (one in your example) ran on ze01 and picked ovh_gra as its upload destination13:34
fungi2023-09-21 11:50:20,595 DEBUG zuul.AnsibleJob.output: [e: c56e5ae5b08444c39a56600d02319824] [build: f802192c96bd4bec80b3948761297e8a] Ansible output: b'TASK [upload-logs-swift : Upload logs to swift] ********************************'13:39
fungi2023-09-21 11:50:30,679 INFO zuul.AnsibleJob: [e: c56e5ae5b08444c39a56600d02319824] [build: f802192c96bd4bec80b3948761297e8a] Early failure in job13:39
fungi2023-09-21 11:50:30,684 DEBUG zuul.AnsibleJob.output: [e: c56e5ae5b08444c39a56600d02319824] [build: f802192c96bd4bec80b3948761297e8a] Ansible result output: b'RESULT failure'13:39
fungi2023-09-21 11:50:30,685 DEBUG zuul.AnsibleJob.output: [e: c56e5ae5b08444c39a56600d02319824] [build: f802192c96bd4bec80b3948761297e8a] Ansible output: b'fatal: [localhost]: FAILED! => {"censored": "the output has been hidden due to the fact that \'no_log: true\' was specified for this result", "changed": false}'13:39
fungiwell that's not much help13:40
fungiso precisely 10 seconds into the "upload-logs-swift : Upload logs to swift" task, there was an unspecified failure13:40
fungiinfra-root: i'm 10 minutes late for approving 895205 so doing that now13:41
fungibedf3ffb2a2e42178199337efeb6fb8c, the other build from the example change, ran on ze04 and also selected ovh_gra13:44
fungihttps://public-cloud.status-ovhcloud.com/ indicates the only known incident today for their public cloud was in the de1 region13:47
fungii guess we should keep an eye out for more of the same in case it's ongoing13:48
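
For reference, this kind of lookup is done by grepping the executor's debug log for the build UUID; a rough sketch, assuming root access on the executor and that the debug log lives at /var/log/zuul/executor-debug.log (the path is an assumption):

    # On ze01, pull every log line for the build, then narrow to the swift upload task:
    grep f802192c96bd4bec80b3948761297e8a /var/log/zuul/executor-debug.log \
      | grep -i 'upload-logs-swift'
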
opendevreviewMerged opendev/system-config master: Move OpenInfra and StarlingX lists to Mailman 3  https://review.opendev.org/c/opendev/system-config/+/89520514:09
fungimailing list migration maintenance starts in 30 minutes15:00
fungi895205 deployed about 5 minutes ago, so we should be all set15:01
clarkbsounds good. I'm just sitting down now.15:08
fungii'm taking the opportunity to revisit moderation queues while my tea steeps15:08
clarkbI'm drinking my tea far too quickly in an effort to warm up15:11
fungium, this is unfortunate timing... does anyone else get an error when going to https://lists.openinfra.dev/ ?15:16
clarkbfungi: yes, that's the old system judging by the resulting cgi url?15:17
fungiright. looks like the cert got replaced a couple of hours ago?15:18
fungiNot Before: Sep 21 13:42:09 2023 GMT15:18
clarkbI don't get a cert error15:18
clarkbI get an Internal Server Error15:18
fungihuh15:18
clarkbcert is valid until november 815:18
fungii just get firefox telling me "Warning: Potential Security Risk Ahead. Firefox detected a potential security threat and did not continue to lists.openinfra.dev. If you visit this site, attackers could try to steal information like your passwords, emails, or credit card details."15:19
fungi"Unable to communicate securely with peer: requested domain name does not match the server’s certificate."15:20
clarkbfungi: double check you are talking to the correct server? maybe you're pointed at the mm3 server and the new cert didn't get deployed?15:20
corvuscn and san are both for lists.openstack.org for me15:21
fungiit's the correct ip address15:21
corvusie, my experience matches fungi15:21
fungihttps://paste.opendev.org/show/b4PCRUWDYpBrNN004JHn/15:21
clarkbhttps://review.opendev.org/c/opendev/system-config/+/895205/1/inventory/service/host_vars/lists.openstack.org.yaml is the reason15:22
fungimaybe a problem with the chain?15:22
clarkband I must be talking to an apache child that hasn't restarted yet15:22
fungioh15:22
fungiyep, okay15:22
fungiso i guess that's a good reason to put the old server in emergency disable in the future15:22
fungii bet it redid the cert on the server when the config deployed15:23
fungiokay, mystery solved15:23
fungiwe're 5 minutes out from starting at this point anyway15:24
ajaiswal[m]how do I re-run a failed job, or a job which was not triggered?15:25
ajaiswal[m]for example https://review.opendev.org/c/starlingx/ansible-playbooks/+/894802/5?tab=change-view-tab-header-zuul-results-summary15:25
clarkbas noted in the zuul matrix room, recheck is what you'd typically do. But that has apparently not worked15:26
clarkbajaiswal[m]: are there two problems here? One with a job not triggering and another where recheck doesn't work on a job that previously failed?15:26
corvusajaiswal: what job didn't run that you expected to run?15:27
ajaiswal[m]https://review.opendev.org/c/starlingx/ansible-playbooks/+/894802/5?tab=change-view-tab-header-zuul-results-summary15:27
ajaiswal[m]corvus: gating job which will provide verified+115:28
clarkbajaiswal[m]: that link is a link to the jobs which did run. There were 3 and they did provide a +1. Which job did not run that you expected to run?15:29
fungi#status notice The lists.openinfra.dev and lists.starlingx.io sites will be offline briefly for migration to a new server15:30
opendevstatusfungi: sending notice15:30
-opendevstatus- NOTICE: The lists.openinfra.dev and lists.starlingx.io sites will be offline briefly for migration to a new server15:30
fungii'm starting to do dns updates now15:30
opendevstatusfungi: finished sending notice15:32
fungidns changes submitted as of 15:33z, so we should be safe to proceed with service shutdowns after 15:38z15:33
fungiwe're on line 220 of https://etherpad.opendev.org/p/mm3migration now15:34
corvusajaiswal: if you're wondering why the recheck of the earlier patchset (#4) did not work, it's because that patchset depended on an outdated change; the rebase fixed that and that's why the jobs ran.15:34
fungiokay, lists.openinfra.dev and lists.starlingx.io resolve to the temporary addresses to me and we're well past the prior ttl, so we should be able to proceed15:39
fungii'll stop the related services on the old server now15:40
fungiservices stopped and disabled for mailman-openinfra and mailman-starlingx15:41
fungifinal rsyncs done15:41
fungii've got a root screen session open on the new server now, proceeding with the import steps15:42
fungimigrate script is now running for lists.openinfra.dev15:43
jkthi there, a long time Zuul user here. Just wondering, we have a FLOSS project hosted at GitHub, with code review at GerritHub, and so far we used managed Zuul at Vexxhost15:47
jktbut it seems that we might be hitting some internal budget problems and therefore won't be able to keep paying for the resources15:48
jktI was wondering if it's possible for such an "external project" to piggyback on the opendev's Zuul infrastructure15:49
fungijkt: not really. we don't run zuul-as-a-service, we run a code hosting and development collaboration platform which happens to have an integrated ci system that uses zuul15:50
jkthttps://github.com/Telecominfraproject/oopt-gnpy is the project, it's BSD-3-clause, and the patch traffic is rather low15:50
jktfungi: understood15:50
fungiif you're interested in relocating your development into the opendev collaboratory, then it could use opendev's integrated zuul project gating15:50
jktI guess that it's a bit complex to move existing patches, or is that possible with upstream Gerrit now?15:51
clarkbjkt: I think the problem with moving existing patches is they are tied to a specific installation id in the notedb15:51
jktyeah15:51
fungiit might be possible to import changes with the new notedb backend, but we've never tried it, and yes that15:51
fungithere's almost certainly some sort of transformation that would be necessary15:52
jktI recall reading *something* in the release notes of the next release though15:52
fungihowever, some of the review history (who reviewed/approved, et cetera) is incorporated into git notes associated with the commits that merged15:52
jktright15:53
fungii wonder if luca has ever considered attaching a zuul to gerrithub15:53
jktI am that guy who Luca mentioned at https://groups.google.com/g/repo-discuss/c/MkrP8RmErOk?pli=115:53
fungiheh15:54
jktbut yeah, it would be awesome if you could "just" plug your own openstack API endpoint & credentials into a zero-cost-zuul-plus-nodepool somewhere :)15:54
fungii think there's some optional notion of tenancy in nodepool recently, but it might become a lot easier post-nodepool when zuul is in charge of allocating job resources more directly15:56
clarkbthe struggle we had when we tried to open up zuul elsewhere is that problems arise at those integration boundaries, and because the community ties are weaker there is less communication. Then when something inevitably goes wrong, zuul/opendev get blamed for something that may be completely out of our control or at least require the user involved to intervene in some way.15:56
clarkbThat is how we ended up with the policy of keeping things in house15:56
clarkbwe can debug our gerrit and our zuul and our cloud accounts. It's much more difficult to sort out why zuul is hitting rate limits in github when we don't control the application installation15:56
jktyeah15:58
fungimigrate script is now running for lists.starlingx.io15:59
clarkbthe hope too is that we can convince people to help us out, and while taking advantage of the resources available also contribute back to running them, add cloud resources, etc15:59
jktI'm also running Zuul at $other-company. It's a small dev team, just a handful of people, so we only upgrade when we have a very compelling reason to upgrade, which means that it's like Gerrit 3.3 and one of the oldish Zuul versions pre-zookeeper I think16:00
fungilast week when i was trying to explain the opendev collaboratory to someone, i hit on this analogy: it's like dinner at grandma's... guests are welcome but you eat what's for dinner, and if you come around too often you're likely to get put to work in the kitchen16:00
jktbut the most time-demanding thing for us is probably the image builds16:01
jktdue to the fact that we're a C++ shop, and we tend to bump into various incompatibilities in -devel, libraries, etc in these projects16:01
jktBoost for example :)16:01
jktalso, from the perspective of that first project that I mentioned, when we asked for some budget, I felt like the feedback I got was essentially "why don't you use github actions for that"16:03
jktanyway, realistically -- GNPy is a small Python thing, it requires a minute or two for the test suite to pass, and the only thing that might be non-standard is that we care about more Python versions, like 3.8-3.1216:04
jktbut since it's something that's mostly being worked on by people from academia, I'm not sure if we can commit to that kitchen duty of potato peeling16:04
clarkbI think we're far less worried about the job runtime and more about the integration, as all of that falls on us. We're happy to host things on our systems; then we can spread that load out across the few thousand repos we are hosting instead of ending up with a complete one-off16:06
clarkbone thing I like to remind people is that openstack is about 95% of our resource consumption. Adding other projects next to openstack doesn't move that needle much.16:06
jktthat makes sense; I remember the days of JJB and Turbo-Hipster that I absolutely loved16:06
jktwhat does that look like on the VM images? what determines which Python versions are available? Or is it something like "latest ubuntu" being the default, and jobs installing some oldish python in a pre-run playbook?16:08
clarkbjkt: we offer ubuntu, centos stream, rocky linux, etc images that are relatively barebones. Projects then execute jobs that install what they need at runtime (often pulling through or from our in cloud mirrors/caches).16:09
clarkbFor example tox/nox python jobs will run the ensure-python role to install python at the required version (you need to align this with the distro offerings if using distro python) and also run bindep package listings and installations for other system deps16:10
clarkbthen they run tox or nox depending on the variant to pull in the python deps, build the project, and execute a test suite16:10
fungialso zuul's standard library contains simple roles for things like "provide this version of python" and "install and run tox"16:10
fungiwe do keep caching proxies/mirrors of things like package repositories network-local to the cloud regions where our job nodes are booted too, in order to speed up run-time installation of job requirements16:11
fungiand to shield them from the inherent instability of the internet and remote services16:11
fungiand we also bake fairly up-to-date copies of git repositories into the images the nodes are booted from, so that they spend as little time as possible syncing additional commits over the network16:14
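
As a rough sketch of the workflow described above (illustrative only; the real jobs drive this through Ansible roles from zuul-jobs, and the Python version and package names here are assumptions), a tox-based job on a fresh Ubuntu node boils down to roughly:

    sudo apt-get update
    sudo apt-get install -y python3.10 python3.10-venv   # ensure-python: distro python at the requested version
    pip install --user bindep tox
    bindep -b | xargs -r sudo apt-get install -y          # install system packages listed in bindep.txt
    tox -e py310                                          # pull python deps, build the project, run the test suite
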
fungiokay, listserv imports have completed. stdout/stderr are recorded to import_openinfra.log and import_starlingx.log in root's homedir on the new server. proceeding with the manual django site additions next16:17
*** blarnath is now known as d34dh0r5316:19
jktnice, so it looks like the Ubuntu LTS versions combined together do have a reasonable Python version coverage; https://zuul.opendev.org/t/openstack/build/0c26758ef7a14b88b939ef45e07fc5d4/console says it took 6s to install that16:19
jktthat's nice16:19
fungiswitching dns for both sites to the new server now16:20
fungidns records updated as of 16:22z, ttl for the previous entries should expire by 16:27z and then we can proceed with testing16:22
fungiokay, we're there16:27
fungidns is resolving to the new addresses for me16:27
fungitesting urls at line 245 of https://etherpad.opendev.org/p/mm3migration16:27
clarkbdns is updated for me as well /me looks at urls16:28
clarkblooks good to me16:28
fungihttps://lists.openinfra.dev/archives/list/foundation@lists.openinfra.dev/ shows some (very old) messages which got imported incompletely. i think they were missing dates and so got imported with today's date16:29
fungibut yeah, everything checks out16:30
clarkbthere are 6 of them judging by the None subjects and timestamps16:30
fungiokay, i'm going to send completion notifications to the two lists indicated, which will also serve to test message delivery and archive updates. i have them already drafted16:31
corvusreceived my copy @foundation16:34
clarkbme too and that one actually goes to gmail16:34
clarkbI forget why I'm double subscribed on that one16:34
clarkbI see the starlingx email as well16:35
corvusme too16:35
fungiyeah, i received my reply on both lists and the archive looks right too (links added to the pad)16:35
corvus\o/16:35
fungithat concludes today's maintenance activity, not too far outside the announced window either16:36
corvusregarding the zuul regex email -- i sent a copy to service-discuss: https://lists.opendev.org/archives/list/service-discuss@lists.opendev.org/thread/DLBYJAVTIJOXZPY6JOLP3AJHDTA6XF2R/16:36
corvusbut it occurs to me that maybe that should have gone to service-announce?16:36
fungicorvus: i had a similar thought when i saw it16:36
corvusshould i resend a copy to -announce then?16:36
clarkbya -announce is probably best to ensure more people see it16:36
fungimight be worth sending a copy there, i agree16:37
corvusok will do16:37
fungii'll point openstack folks to that announcement16:37
clarkbI'll probably have to log in and moderate it through. Let me know when it is waiting for me16:37
fungi(as a subtle reminder to at least subscribe to our announcements ml)16:37
corvusokay it awaits moderator approval now16:38
fungii'll get it16:40
clarkbI got it16:40
fungiyou're too quick for me16:41
fungithanks!16:41
corvusha, the quote of the release note is collapsed by default in the web ui16:42
corvus(which i guess makes sense under the assumption that quotes are things from earlier in the thread; but not in this case...)16:42
corvusclick "..." to read the important part!  fortunately, i think all the important things show up later in the text16:43
corvusbut next time, i'll just block indent it without the >16:44
fungiyeah, one of the unfortunate design elements that people seem to want ported from webmail clients16:44
clarkbmy mail client seems to know it wasn't previously viewed so doesn't collapse it at least16:45
fungiit's not as annoying as discourse at least. i've basically stopped including quoted context in my replies to the python community "mailing lists" because it just eats them unless i include specially-formatted markdown tags16:45
fungidiscourse's "mailing list mode" is basically incapable of parsing traditional attribution lines in replies16:46
fungigranted they're not all that standardized, but it could at least avoid eating them16:46
clarkbfungi: how difficult would it be to skim the openstack-discuss or other list archives to see if they have similar problems to the foundation archive emails16:47
clarkbslightly worried there could be hundreds in that particular mailing list archive. But maybe not since that list is relatively new?16:47
fungiwell, we have a log of the import, so it may have clues16:47
clarkbaha16:47
fungiand yeah, openstack-discuss is comparatively new16:47
fungilooking at lists01:/root/import_openinfra.log it doesn't mention any errors from the hyperkitty_import step for that list16:49
fungiwe also have the original mbox files (both on the old server and the new one) we can look those messages up in16:50
TheJuliao/ Anyone up for a bit of a mystery with zuul starting around august 10th, to do with artifact build jobs?16:55
TheJuliahttps://zuul.opendev.org/t/openstack/builds?job_name=ironic-python-agent-build-image-tinyipa&project=openstack/ironic-python-agent16:56
fungiit was colonel mustard in the library with the candlestick16:56
TheJuliaeh, well, aside from focal in openstack/project-config....16:56
TheJuliaI'm wondering if we've got a new timeout from Zuul?!16:57
TheJulia(which, I thought was 30 minutes, but dunno now)16:57
fungimodule_stderr: Killed16:58
fungirc: 13716:58
fungihttps://zuul.opendev.org/t/openstack/build/dc30232cfd42432597a7b1c0ab2c12ba/console#2/0/3/ubuntu-focal16:58
TheJuliathat looks like the actual zuul task was killed16:58
fungitook 3 mins 4 secs16:58
TheJulia:\16:58
fungier, took 13 mins 4 secs16:58
TheJuliastill, not that long16:59
TheJuliaDid we forget to bake fresh chocolate chip cookies and give them to zuul?!16:59
fungijust remember zuul is allergic to nuts16:59
TheJulia... oh.16:59
fungii don't see any long-running tasks prior to that one17:00
* TheJulia whistles innocently as it was Ironic with chocolate walnut cookies17:00
fungiand that's pretty early in the job to have been a timeout17:00
TheJuliaThat is what I thought17:00
clarkbthe task after the one that is killed is a failed ssh connection17:00
clarkbcould the image build be breaking the system in such a way that ansible just stops working?17:00
TheJuliait is running gcc in most of the cases I looked at17:00
TheJulialike, nowhere near there17:01
clarkbpossibly by filling the disk? (oddly, if ansible can't write to /tmp to copy its scripts then it treats that as a network failure)17:01
TheJulia... I doubt it17:01
TheJuliathat is true, afaik the script uses the current working directory, but I'll check17:01
clarkbthat is long enough ago I bet we don't have executor logs for it, which is the next thing I would check to see if we get more verbose info on the killed task17:02
TheJuliaugh17:02
TheJuliaoff hand, how much space should /tmp have on these test VMs?17:02
JayFI'm going to note that the post job is running on focal17:02
JayFwhen most of our other items are on jammy17:02
clarkbit depends on the cloud. / and /tmp are shared and real disk (not tmpfs iirc)17:03
TheJuliawell, yes. It does seem that openstack/project-config needs an update17:03
fungilooks like that node started out with a ~40gb fs that was nearly half full: https://zuul.opendev.org/t/openstack/build/dc30232cfd42432597a7b1c0ab2c12ba/log/zuul-info/zuul-info.ubuntu-focal.txt#12517:03
clarkbso on rax you'll get 20GB - 13GB or so for 7GB free17:03
clarkband on ovh it will be something like 80GB - 13GB or so17:03
TheJuliaThose builds are a couple hundred megs... tops17:03
JayFwhen complete; but while dib runs it can use GBs of cache17:04
corvushttps://zuul.opendev.org/t/openstack/build/69805541fb554b2a8d48753d7c9b2aca/console is a recent build with logs on ze0217:04
TheJuliaI've never seen it use more than like 500 mb17:04
clarkbit definitely looks like something has made the test node unresponsive though17:04
TheJuliabut it has been a while17:04
JayFTheJulia: ack17:04
* TheJulia gives it a spin17:04
clarkbwhich could be external too, but image builds also mess with filesystems and do things that if done wrong could hose a host17:04
corvusthe logs don't have any additional info17:05
corvus2023-09-21 12:49:37,990 DEBUG zuul.AnsibleJob.output: [e: 4fcf54103f834451985e34885dff8dad] [build: 69805541fb554b2a8d48753d7c9b2aca] Ansible output: b'fatal: [ubuntu-focal]: FAILED! => {"changed": false, "module_stderr": "Killed\\n", "module_stdout": "", "msg": "MODULE FAILURE\\nSee stdout/stderr for the exact error", "rc": 137}'17:05
* TheJulia runs a test build locally to see17:05
fungialso notable, the builds for stable/2023.1 seem to be successful17:06
fungibut not stable/2023.2 nor master17:06
JayFThe builds do succeed in CI; we have jobs that build an image and use it17:07
fungiso the start date for the problem could be as early as 2023-08-07 because the one that succeeded on 2023-08-10 was for stable/2023.1 which also succeeded on 2023-08-3017:07
JayFjust not in post, which is weird17:07
fungiJayF: i guess the check/gate jobs have a different name then?17:08
TheJuliayeah, they do17:08
JayFhttps://opendev.org/openstack/ironic-python-agent/src/branch/master/zuul.d/ironic-python-agent-jobs.yaml#L2217:08
fungithey may have other differences in that case17:08
JayFbasically ipa-*-src jobs17:08
TheJuliaThese jobs get sourced out of ironic-python-agent-builder17:09
JayFI'm pointing it out as a data point we can look at for success in master branch17:09
TheJuliafor the build itself17:09
TheJuliathe check jobs invoke builder17:09
fungii'd say look into what's different in the check/gate version of the job, and also what's different in the stable/2023.1 vs stable/2023.2 builds17:09
* TheJulia is building locally... finally17:10
TheJuliaI'm way past where it dies at and /tmp is untouched17:12
corvusthere are streaming logs from that task; https://zuul.opendev.org/t/openstack/build/69805541fb554b2a8d48753d7c9b2aca/log/job-output.txt#2244 is the point where it died17:12
TheJuliaJayF: way past the failure point, by like miles, and only at 502MB so far17:13
JayFack17:13
clarkbdoes the waiting on logger there after the failure imply the zuul logging daemon was killed (possibly by OOMKiller or a reboot?)17:14
corvusclarkb: yes it is suggestive of that; and that it took 29 minutes to run df further points to "unhappiness on the remote node"17:16
corvusprobably worth collecting syslog from the remote system in a post-run playbook17:17
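
A minimal sketch of the node-health checks such a post-run playbook (or a held node) could run to test the OOM/reboot theory; note that rc 137 is the shell convention for a process killed by SIGKILL (128 + 9), which is the signal the OOM killer sends. The log paths are standard Ubuntu locations and are assumptions here:

    # Did the kernel OOM-kill anything, or did the node reboot mid-job?
    sudo dmesg -T | grep -iE 'out of memory|oom-kill|killed process'
    last -x reboot | head -n 3
    sudo tail -n 200 /var/log/syslog
    # For reference, a SIGKILL'd command reports exit status 137 to its parent shell:
    sh -c 'kill -KILL $$'; echo $?
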
TheJuliaJayF: so more like 2GB total disk space utilized so far, but I'm deep in the compile process  like 10 minutes past where it failed17:19
JayFTheJulia: I realized with your second comment that my experiences, and comment, was not relating to the tinyipa image, but the dib one17:19
TheJuliaahh!17:19
TheJuliaokay17:19
TheJuliaanyhow, it all happens in the current working directory where it's executed, which means the user's folder where it was cloned17:20
TheJuliaso it looks like it was downloading when it failed too17:20
TheJuliaSo, when it fails, it is doing the build in a chroot17:25
clarkbI wonder if we get some sort of kernel panic because of some incompatibility between new image build stuff (mounting a fs or whatever) and the old(ish) focal kernel17:25
TheJuliagetting and loading the files17:25
TheJuliaThat could be actually17:25
TheJuliamaybe there is some issue we're ticking within the chroot which is causing a panic17:26
TheJuliaOnly thing I can guess at the moment17:26
clarkbswitching the job to jammy might be a quick way to narrow down on something with the focal kernel17:26
TheJuliayup17:26
TheJuliashould we consider switching up openstack/project-config in general?17:27
clarkbthe default nodeset in zuul should already be jammy17:27
clarkbI think this job must specifically ask for focal17:27
TheJuliaproject-config carries an override17:27
TheJuliaand it is all rooted on the artifact upload job17:27
TheJuliain project-config17:27
clarkbah this one publish-openstack-artifacts17:28
clarkbI would start by overriding it in your job since that will affect all openstack artifact publishing and the release is soon17:28
TheJuliaack17:28
clarkbbut yeah, bumping that post-release is probably a good idea17:28
corvusfungi: clarkb here's one of the None messages: https://web.archive.org/web/20221202000314/https://lists.openinfra.dev/pipermail/foundation/2011-October/000353.html17:30
fungioh, good find! i hadn't started trying to hunt one down yet17:31
fungithe >From escaping is likely the problem17:31
TheJuliaJayF: change posted to ironic-python-agent-builder, we'll need to merge it to see17:31
fungii saw someone mention it on the mm3 list earlier this week even17:31
JayFTheJulia: +2A as a single core as a CI fix; I think you'll still have to wait for jobs17:32
fungihttps://lists.mailman3.org/archives/list/mailman-users@mailman3.org/thread/5JDF7OVYZNLBEHROZWGNMWR5LQICNHMB/17:33
TheJuliaJayF: indeed :(17:34
clarkbfungi: new favorite word: "guessalasys"17:35
clarkbI'm not sure I understand the nuances of quoting there though17:35
fungii think it's that every message in an mbox file starts with a line matching "^From .*", so any message body line that itself starts with "From " gets rewritten to ">From ..." in order to not prematurely truncate the message17:37
fungibut then something about the import is unescaping the >From to From which results in everything after that looking like a new message with no headers17:38
fungii don't have my head fully wrapped around it myself either17:39
fungithe case described in that thread isn't exactly ours because it's someone exporting list archives from one mm3 server and importing onto another17:39
corvusfungi: according to that thread, you probably shouldn't commit crimes unless you fully understand `from` escaping.17:40
fungibut my point was the hyperkitty_import tool seems to possibly have some rough edges around escaped froms17:40
fungiso maybe those messages are tripping that (which would match the results we're seeing at least)17:40
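
A quick way to see the escaping described above in a downloaded pipermail archive (the mbox filename follows pipermail's monthly naming and is illustrative):

    # mboxo escaping rewrites body lines beginning with "From " to ">From ";
    # count message starts vs. escaped body lines, then show the escaped ones with context:
    grep -c '^From ' 2011-October.txt    # one per message
    grep -c '^>From ' 2011-October.txt   # escaped body lines (9 in this archive)
    grep -n -B2 '^>From ' 2011-October.txt
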
fungioh, also no need for the wayback machine, we still serve those original archives unmodified: https://lists.openinfra.dev/pipermail/foundation/2011-October/000353.html17:43
fungithough maybe sans a bit of theming (not sure where that went)17:43
fungii downloaded the mbox file from https://lists.openinfra.dev/pipermail/foundation/ and this is what that same message looks like trimmed out: https://paste.opendev.org/show/bl8lVessN5hQHRFsdLCJ/17:46
fungiif i grep that month's mbox file for '^>From ' i get 9 matches though, not 617:47
corvusmay only trigger as the first line of the message?17:49
stephenfinI'd like to cut a release of x/wsme, but I notice it's not hooked into openstack/release. How do I push tags without that? I already have access to PyPI so I can do it locally but I'd like to push the tag up to the remote also17:52
Clark[m]stephenfin: the first things to check are that you are a member of the release group in gerrit for the project and that the project is configured to run the release jobs when you push a tag. With that in place you create a signed tag, push it to Gerrit, and let zuul do the rest17:56
fungicorvus: oh, yeah maybe it needs a preceding blank line17:56
Clark[m]Keep in mind you need to take care to tag only commits that are already merged to a branch in Gerrit. Otherwise you'll push a bunch of unreviewed code up hanging off of that tag. We also don't typically delete tags because git doesn't notice when tags change17:56
fungicorvus: though all 9 matches have blank lines before the >From so it must be something more subtle17:58
stephenfinClark[m]: thanks17:59
fungistephenfin: https://docs.opendev.org/opendev/infra-manual/latest/drivers.html#tagging-a-release17:59
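
A sketch of the workflow Clark describes and the infra-manual page documents; the version number is made up, and the "gerrit" remote is assumed to have been set up by git-review -s:

    # Tag a commit that has already merged to the target branch, then push the tag;
    # zuul's release jobs take over from there:
    git checkout origin/master          # or the merged sha you want to release
    git tag -s 1.2.3 -m "x/wsme 1.2.3"  # signed tag; requires a configured GPG key
    git push gerrit 1.2.3
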
fungiokay, a new twist. while there are 9 messages in that mbox file which match ^>From they are 3 duplicate copies of 3 messages each18:10
fungiall 3 unique messages are in the set of ones that didn't import correctly18:10
clarkbmaybe it was able to do deduplication but only one layer deep resulting in 6 messages being imported wrongly?18:12
clarkbs/wrongly/in a weird way/18:12
fungiaha, yes, the messages incorrectly imported in hyperkitty appear to be 2 duplicates of each of 3 messages18:13
fungimaybe there is a copy of each one in the right place as well18:14
fungibody getting stripped out resulting in a blank message: https://lists.openinfra.dev/archives/list/foundation@lists.openinfra.dev/thread/VEFLPHLB43Z4EM5KJN76XEUHHOT5HR4A/18:15
clarkbthat would be fun. We could potentially just delete the bad ones out of the archive18:15
fungihttps://lists.openinfra.dev/archives/list/foundation@lists.openinfra.dev/thread/2UJR6X6ZAMH4TNEPU4ABFXHMKXEWCOLP/ and https://lists.openinfra.dev/archives/list/foundation@lists.openinfra.dev/thread/7EKUWZXEMOWODPWUMFPP3WIRADQ6BH5C/ are the other two18:16
fungiso in summary, there are three messages in the october 2011 archive which are disembodied heads, and then two duplicate copies of their headless zombie bodies roaming the september 2023 archives18:18
fungi...come back from the grave to reclaim their heads18:18
fungianyway, i think we can delete the extra headless corpses and then try to reattach bodies to the three severed heads from 2011, but the latter part may require db surgery (and lots of lightning)18:22
fungialso not urgent, we can mull the situation over as long as we like, maybe discuss in tuesday's meeting18:23
fungiand with that, i've got an appointment to get to... bbiaw18:26
TheJuliaso... crazy question. Is it just me, or does the favicon for lists.openinfra.dev look like moopsy ?18:57
TheJulia(from star trek lower decks)18:58
TheJuliaOh, I see, it is "Hyperkitty"18:59
TheJulia... still looks like moopsy :)19:00
fungiit does indeed look like moopsy19:44
fungiwe can change it, we just haven't invested in theming postorius/hyperkitty until we're done migrating everything19:44
fungiin a few weeks we'll hopefully do openstack, and then everything will be fully on mm3 and we can probably spend some time beautifying it19:45
fungilike add site-specific favicons and logos for each list domain and stuff19:46
opendevreviewJay Faulkner proposed openstack/project-config master: JayF volunteering for IRC ops  https://review.opendev.org/c/openstack/project-config/+/89616219:50
TheJuliafungi: oh, no worries. I love it!20:52
funginow you're channeling nils ;)22:18
TheJulialol22:55
fungithat video is immortal22:56

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!