Wednesday, 2022-10-19

00:05 *** ysandeep|out is now known as ysandeep|PTO
00:32 *** rlandy is now known as rlandy|out
04:19 *** yadnesh|away is now known as yadnesh
06:24 *** bhagyashris is now known as bhagyashris|ruck
07:37 *** jpena|off is now known as jpena
10:31 *** rlandy|out is now known as rlandy
11:31 *** sean-k-mooney1 is now known as sean-k-mooney
11:40 <opendevreview> Vishal Manchanda proposed openstack/openstack-zuul-jobs master: Add nodejs v18 project templates for 2023.1 release https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/861876
12:52 *** frenzy_friday is now known as frenzyfriday|rover
13:55 *** dasm|off is now known as dasm
14:29 <opendevreview> Felipe Reyes proposed openstack/project-config master: Create 'Backport-Candidate' for Charms repos https://review.opendev.org/c/openstack/project-config/+/861892
15:02 <coreycb> clarkb: jammy has python version 3.11.0~rc1-1~22.04
15:08 <clarkb> oh neat, I thought I had looked and didn't find one. But I used package search and not aptitude search against the indexes
15:09 <coreycb> clarkb: actually it's in jammy-proposed so you may have
15:27 *** ykarel is now known as ykarel|afk
15:46 *** dviroel is now known as dviroel|lunch
15:48 *** ykarel|afk is now known as ykarel
16:16 <fungi> for some reason, puc is giving me a 5xx error when i search for keyword "python3.11"
16:25 <tonyb> This may be better asked in https://matrix.to/#/#zuul:opendev.org but I'll start here as it's for the requirements repo.
16:27 <tonyb> Are there good examples / docs on building a multi-node job where each node runs a different OS (e.g. CentOS 8, Focal, Jammy), each runs $commands to generate an artifact, and then a dependent job grabs and collates each artifact?
16:28 *** dviroel|lunch is now known as dviroel
16:30 <fungi> tonyb: the short explanation is you need to create a custom nodeset which lists (and optionally renames) the nodes you want included and what labels (images) they're based on
16:31 <fungi> i can get you simple examples, just a sec
16:32 <tonyb> Thanks fungi
16:34 <fungi> tonyb: here's an example of doing it anonymously within a job, though you can also create named nodesets and refer to those in your jobs if you need to reuse them: https://opendev.org/opendev/system-config/src/branch/master/zuul.d/system-config-run.yaml#L60-L71
16:35 <fungi> that one's a fairly extreme case, but shows you a 5-node group using 4 different labels
16:36 <fungi> the only real caveat is that there needs to be at least one node provider which has all the labels you use, since nodesets can't span providers
16:37 <fungi> but if you're using our generic labels that's generally not an issue
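A minimal sketch of what such an anonymous nodeset might look like, loosely modeled on the linked example. The job name and playbook path are placeholders, and the labels assume the generic images discussed here (centos-8-stream, ubuntu-focal, ubuntu-jammy); check the current label list before relying on them.

    - job:
        name: build-per-platform-constraints
        # One node per platform. The names are arbitrary and become the
        # Ansible inventory hostnames; all labels must be available from a
        # single node provider, as noted above.
        nodeset:
          nodes:
            - name: centos8
              label: centos-8-stream
            - name: focal
              label: ubuntu-focal
            - name: jammy
              label: ubuntu-jammy
        run: playbooks/generate-constraints.yaml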
16:37 <tonyb> Ok, that makes sense. I have no idea how to do the second part: grabbing the artifact that each of those nodes creates.
16:39 <fungi> if it has the same name then you just need a task which grabs the file, applied to "all" hosts, but there's almost certainly a standard role already in zuul-jobs which does what you need in that regard
16:40 <tonyb> Okay, I'll go grepping
16:44 <fungi> tonyb: here's an example build which collected similarly-named things from multiple nodes: https://zuul.opendev.org/t/openstack/build/b5b1691da7d241d792bd65564b9d9c63/logs
16:45 <fungi> though you can see the same process at work in multinode devstack jobs too
16:45 *** jpena is now known as jpena|off
16:46 <fungi> check out the console tab from that build and expand the post playbooks
16:46 <fungi> there you can see the roles/tasks which ran to fetch the files from each node
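The pattern in those post playbooks boils down to pulling each node's copy of the file back to the executor, keyed by hostname so the copies don't collide. A rough sketch (the source path is an assumption for illustration):

    # post-run playbook sketch: runs against every node in the nodeset
    - hosts: all
      tasks:
        - name: Pull this node's generated file back to the executor
          synchronize:
            mode: pull
            src: "{{ ansible_user_dir }}/output/upper-constraints.txt"
            dest: "{{ zuul.executor.log_root }}/{{ inventory_hostname }}/"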
16:49 <tonyb> fungi: Thanks
16:50 <fungi> tonyb: this may be the one you want https://zuul-ci.org/docs/zuul-jobs/general-roles.html#role-stage-output
16:55 <tonyb> That looks good. ... and there's a small docs fix coming :)
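For reference, a rough post-run sketch using that role together with its fetch-output counterpart from zuul-jobs. The staged path is an assumption; the exact variables are in the role docs linked above.

    - hosts: all
      roles:
        # Copy the named file into the node's staging area
        # ({{ ansible_user_dir }}/zuul-output by default)...
        - role: stage-output
          vars:
            zuul_copy_output:
              "{{ ansible_user_dir }}/output/upper-constraints.txt": artifacts
        # ...then pull everything staged back to the executor.
        - fetch-output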
17:14 *** yadnesh is now known as yadnesh|away
17:43 <elodilles> clarkb fungi: if there is nothing planned for now, then i'd run the EOL branch cleanup script to delete ironic project's rocky & stein branches (which by the way will eliminate some zuul config errors \o/)
17:44 <clarkb> elodilles: I have nothing right now. I do need to restart gerrit at some point to pick up some gerrit plugin cleanups in the newer images, but that will be late today when things are quiet, if I get to it
17:48 <elodilles> clarkb: ack, thanks, then i'll run the script now so it won't interfere with the gerrit restart
18:01 <fungi> yeah, sounds good!
18:20 <elodilles> clarkb: the script has finished (deleted branches: https://paste.opendev.org/show/bm23cSgmfWzurR93gEkr/)
18:21 <clarkb> elodilles: thanks for the heads up
18:22 <fungi> also yay for more zuul config errors and periodic job failures going away!
18:47 <frickler> tonyb: actually I think you don't need multiple jobs. Just run the u-c generation on the different nodes as one task and then do the aggregation on one of those nodes, or maybe on the executor directly
19:01 <fungi> yes, i assumed it would be: test nodes generate constraints lists, executor combines lists and uploads
19:02 <fungi> i agree that doing it as multiple jobs is overkill
19:03 <fungi> if you wanted to avoid having multi-node jobs, you could generate each constraints sublist in a separate single-node job which creates an artifact, and then have a zero-node job depending on those which retrieves and combines them
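A rough sketch of that alternative shape in a project pipeline definition. The job names are placeholders; each generator job would need to publish its file for the collector job to download, for example by reporting it as an artifact via zuul_return in a post-run playbook.

    - project:
        check:
          jobs:
            - generate-constraints-centos8
            - generate-constraints-focal
            - generate-constraints-jammy
            # Runs only after the generator jobs, so it can fetch
            # the artifacts they reported.
            - combine-constraints:
                dependencies:
                  - generate-constraints-centos8
                  - generate-constraints-focal
                  - generate-constraints-jammy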
19:04 <fungi> the only thing i'm not sure about is if we can do patch proposals from the executor, but yes you can always just pick a specific test node and make those change proposal tasks only relevant for that one
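Putting frickler's single-job suggestion together with the "pick a specific test node" idea, the run playbook might look roughly like this. The script names and paths are made up for illustration, and "jammy" is the node name from the nodeset sketch earlier.

    - hosts: all
      tasks:
        - name: Generate this platform's constraints list
          command: python3 tools/generate-constraints.py   # hypothetical script
          args:
            chdir: "{{ zuul.project.src_dir }}"
        - name: Pull each node's list back to the executor
          synchronize:
            mode: pull
            src: "{{ zuul.project.src_dir }}/upper-constraints.txt"
            dest: "{{ zuul.executor.work_root }}/constraints/{{ inventory_hostname }}.txt"

    # Aggregate (and later run any change-proposal tasks) on one
    # designated node rather than on the executor.
    - hosts: jammy
      tasks:
        - name: Push all collected lists to the aggregation node
          synchronize:
            src: "{{ zuul.executor.work_root }}/constraints/"
            dest: "{{ ansible_user_dir }}/constraints/"
        - name: Merge them into a single upper-constraints.txt
          command: python3 tools/merge-constraints.py "{{ ansible_user_dir }}/constraints/"   # hypothetical script
          args:
            chdir: "{{ zuul.project.src_dir }}"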
20:47 *** dviroel is now known as dviroel|biab
21:28 <tonyb> okay. I think I understand.
21:28 <tonyb> I'll write up a dummy set of plays and a small doc to make sure I do, and then I'll figure out the real implementation.
21:38 <fungi> that sounds like a great way to sketch it out
21:59 *** dviroel|biab is now known as dviroel
22:19 *** dasm is now known as dasm|off
23:03 *** dviroel is now known as dviroel|out
