Tuesday, 2023-04-04

opendevreviewIan Wienand proposed opendev/system-config master: install-launch-node: upgrade launch env periodically  https://review.opendev.org/c/opendev/system-config/+/87938700:01
opendevreviewClark Boylan proposed opendev/system-config master: Fix rax reverse DNS setup in launch  https://review.opendev.org/c/opendev/system-config/+/87938800:03
clarkbianw: ^ like that maybe?00:03
clarkbI've got to start on dinner now. Feel free to push updates to that.00:04
ianwthanks, that feels like things are in more logical places00:07
ianwi think we have something wrong with our plugin generation02:27
ianwfor gerrit02:27
ianwsudo docker-compose run shell java -jar /var/gerrit/bin/gerrit.war init -d /var/gerrit --list-plugins02:27
ianw * download-commands version v3.7.202:28
ianwmaybe this one is clearer02:31
ianw      - name: gerrit.googlesource.com/plugins/commit-message-length-validator02:31
ianw        override-checkout: v3.6.402:31
ianwhttps://opendev.org/opendev/system-config/src/commit/a24509a7d70f0be7aa7e2e4de2555d0551802dc2/zuul.d/docker-images/gerrit.yaml#L6302:31
ianw * commit-message-length-validator version v3.7.202:31
ianwlist plugins shows it at v3.7.2 on review0202:31
Clark[m]ianw: they are very likely the same thing because they retag old versions. Maybe the build process takes the latest tag version?02:32
Clark[m]Yes 3.6.4 and 3.7.2 are the same thing on https://gerrit.googlesource.com/plugins/commit-message-length-validator02:32
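The retagging Clark[m] describes can be reproduced locally. This is a hypothetical demo repo (not the real plugin build): two release tags on the same commit mean a checkout at either tag builds identical code, and a build that stamps "the latest tag" will report the newer version.

```shell
# Hypothetical local demo: retag one commit with two release tags.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'plugin source'
git tag v3.6.4   # original release tag
git tag v3.7.2   # retag of the very same commit for the next series
git tag --points-at HEAD   # lists both tags for this commit
```

If a build process picks whichever tag sorts latest, the jar gets stamped v3.7.2 even when checked out via the v3.6.4 override, which matches the symptom above.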
ianwyeah, i wonder if that's it, that bazel is basically putting the wrong stamp in the .jar02:33
ianwi am interested in this because i was just validating the downgrade path02:33
ianwwhich does work to go 3.7 -> 3.6 ... but then i listed the 3.6 plugins and thought "that's weird ..."02:33
Clark[m]They differ on review notes. Do we install that one and does it list as 3.6.4 on prod?02:33
Clark[m]https://gerrit.googlesource.com/plugins/reviewnotes/02:34
ianwfull list is https://paste.opendev.org/show/bwWcqd0agVQuXnXemgV4/02:34
ianwreviewnotes show 3.6.402:34
Clark[m]Ya I think that is what is happening. Delete-project also differs 02:35
ianwyeah, i'd agree all the ones showing up as 3.7 appear to be multiple tags in gerrit02:36
ianwi guess crisis averted, just something to be aware of, especially in a downgrade pressure situation02:36
ianwi'll add it to the downgrade notes02:36
ianwfor reference https://23.253.56.187/c/x/test-project/+/3 is a downgrade.  i've now completed everything i can think of for the 3.7 upgrade checklist.  i'm going to work on adding in notes for renames02:37
opendevreviewIan Wienand proposed opendev/system-config master: Upgrade Gerrit to version 3.7  https://review.opendev.org/c/opendev/system-config/+/87941203:44
opendevreviewIan Wienand proposed opendev/project-config master: Add renames for April 6th outage  https://review.opendev.org/c/opendev/project-config/+/87941404:18
Clark[m]ianw: I think that needs the empty groups list in the yaml too04:28
ianwok04:55
opendevreviewIan Wienand proposed opendev/project-config master: Add renames for April 6th outage  https://review.opendev.org/c/opendev/project-config/+/87941405:10
*** amoralej|off is now known as amoralej06:10
opendevreviewTakashi Kajinami proposed openstack/project-config master: Retire puppet-rally - Step 1: End project Gating  https://review.opendev.org/c/openstack/project-config/+/87941906:22
*** elodilles_pto is now known as elodilles08:04
opendevreviewgnuoy proposed openstack/project-config master: Add OpenStack hypervisor charm  https://review.opendev.org/c/openstack/project-config/+/87943610:18
opendevreviewgnuoy proposed openstack/project-config master: Add OpenStack hypervisor charm  https://review.opendev.org/c/openstack/project-config/+/87943610:57
lecris[m]Could we bring back tarball releases for https://opendev.org/openstack/ansible-collections-openstack? According to Fedora packaging guidelines https://docs.fedoraproject.org/en-US/packaging-guidelines/Ansible_collections/#_collection_source, it is preferred to use the upstream source11:57
*** amoralej is now known as amoralej|lunch12:13
lecris[m]Why are most packages dependent on pbr build?12:32
fungilecris[m]: these seem like questions for the openstack community, not the opendev collaboratory. specifically you'd need to ask the openstackansible folks about their choice to run (or not run) tarball publishing ci jobs. you can find them in the #openstack-ansible irc channel here on oftc12:53
lecris[m]Same for the pbr?12:54
fungias for why most projects use pbr, it's so that they don't need to duplicate version information, change history and other git metadata into the worktree in order to include them in source distributions12:54
fungipbr infers versioning from git tags when building source distributions, generates changelogs, and a number of other similar sorts of tasks12:55
lecris[m]Hmm there already is setuptools_scm for versioning, but what about the others that are needed?12:56
fungisetuptools_scm is very new. we created pbr about 12 years ago to allow us to have declarative packaging rules and template them consistently across hundreds of projects12:57
lecris[m]One thing is that it doesn't support PEP517/PEP621 which will help migrate the spec recipes12:57
fungipbr added the idea of a setup.cfg file so we could strip pretty much all arbitrary code out of setup.py12:57
fungiwhich setuptools eventually copied years later12:58
fungiand yes, pbr does support pep 517, but openstack projects aren't using it that way yet12:58
fungiyou can configure pbr as a build backend in pyproject.toml12:58
lecris[m]Is it in master or a recent release?12:58
fungiit's been supported in a number of recent releases12:59
fungii forget how far back now12:59
lecris[m]So the documentation is outdated? https://docs.openstack.org/pbr/latest/user/using.html#pyproject-toml12:59
fungithat documentation says it supports being used as a pep 517 build backend. why are you thinking it's out of date?13:00
lecris[m]> Eventually PBR may grow its own direct support for PEP517 build hooks, but until then it will continue to need setuptools and setup.py.13:00
fungiright now its pep 517 support is as a setuptools plugin13:01
fungithat's saying it may eventually be made usable for pep 517 cases without setuptools at all13:02
lecris[m]But the "setup.py" bit? Is it still necessary?13:02
fungisure, that doesn't make it non-pep-517-compliant though13:02
jrosserfungi: #openstack-ansible-sig is for the ansible collection13:03
fungijrosser: oh, good to know. i assumed from the name it was an openstackansible deliverable13:03
lecris[m]Hmm, so currently can it be migrated to a pyproject.toml+setup.py file?13:04
jrosserfungi: OSA predates the notion of ansible collections by a long long time so there is an unfortunate name collision there13:04
fungilecris[m]: yes, at the moment the only reason a setup.py is required is to set a runtime flag in setuptools. its contents in a pep 517 context can be as minimal as "import setuptools; setuptools.setup(pbr=True)"13:05
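A minimal sketch of the layout fungi describes, assuming a recent pbr and setuptools (the `pbr.build` backend name is taken from pbr's documentation; the version pins are illustrative, not verified minimums):

```toml
# pyproject.toml -- illustrative only; check pbr's docs for the exact
# minimum versions before copying this into a real project.
[build-system]
requires = ["pbr>=5.7", "setuptools>=64"]
build-backend = "pbr.build"
```

The accompanying setup.py then shrinks to the one-liner quoted above: `import setuptools; setuptools.setup(pbr=True)`.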
lecris[m]Hmm, ok, might be worth preparing a few reviews to move those to PEP62113:06
fungilecris[m]: also technically, the channel for the pbr maintainers is #openstack-oslo, i just happen to be familiar with the answers to your questions since i help with it13:06
*** amoralej|lunch is now known as amoralejh13:07
*** amoralejh is now known as amoralej13:07
lecris[m]Although something opendev related, can we configure webhooks or equivalents? I am discussing with packit folks if we can get some support there13:08
fungisure, zuul jobs can call webhooks to trigger external systems. a good example is probably the one we have for readthedocs13:09
lecris[m]So it is on the repo's zuul job rather than in the opendev/gitea config?13:09
fungilecris[m]: yes, a project could define a job in its own .zuul.yaml for calling an external webhook13:11
fungithough the one for readthedocs is here for reference: https://opendev.org/openstack/project-config/src/commit/db4de26/zuul.d/jobs.yaml#L1131-L115013:11
lecris[m]Oh this one is propagating to all jobs?13:13
fungithat one happens to be defined centrally so multiple projects can reuse it (since it happens to rely on a common set of credentials)13:14
fungithe playbook referenced there simply calls this role from the zuul stdlib: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/trigger-readthedocs/tasks/main.yaml13:14
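A hedged sketch of what such a job's playbook could look like. The `zuul.project.name` and `zuul.ref` variables are real zuul job variables; the endpoint URL and the secret variable name are made up for illustration:

```yaml
# Hypothetical zuul job playbook calling an external webhook with the
# ansible uri module; endpoint and secret names are placeholders.
- hosts: localhost
  tasks:
    - name: Notify external service of the triggering event
      uri:
        url: "https://webhook.example.org/endpoint"
        method: POST
        body_format: json
        body:
          project: "{{ zuul.project.name }}"
          ref: "{{ zuul.ref }}"
        headers:
          Authorization: "Bearer {{ example_webhook_secret.token }}"
```

The secret would be encrypted into the project's zuul config the same way the readthedocs credentials are.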
lecris[m]The secrets are stored in zuul?13:15
fungithe credentials for that one are encoded as a zuul secret, yes: https://opendev.org/openstack/project-config/src/commit/db4de26766a0acf0db9068482b209cec16b1cc57/zuul.d/secrets.yaml#L578-L59213:16
lecris[m]Ok, but there are no built-in zuul webhooks that can later be used for other non-opendev projects?13:18
fungizuul has per-project asymmetric encryption keys, and a client tool you can use to access the public key for each project from its rest api and locally encrypt whatever you need in a secret: https://zuul-ci.org/docs/zuul/latest/project-config.html#encryption13:18
lecris[m]Or gerrit actually13:18
funginot sure what you mean by "built-in zuul webhooks that can later be used for other non-opendev projects"13:18
fungimaybe you can describe what you're trying to automate?13:19
fungiare you trying to use opendev's zuul to automate an external process based on development events in a project hosted in opendev, or automate something for a project in opendev triggered by some external event?13:21
lecris[m]Basically hooking packit so that it pushes builds to copr, downstream fedora (or equivalent) etc.13:21
fungii'm not familiar with packit and copr, and only vaguely familiar with fedora's build systems... but trying to restate to make sure i understand, you want a tag event for a project in opendev to trigger package builds in fedora's infrastructure?13:23
lecris[m]So for packit we need a few ways of communicating:... (full message at <https://matrix.org/_matrix/media/v3/download/matrix.org/VUZNpmmirCTzRYqLRMAxcrQE>)13:23
fungii know fedora's using zuul already, so could just point their zuul at our gerrit's event stream as a source connection and run jobs (for packit or something else) based on tag push events13:24
lecris[m]Minimally, yes tags/releases, but preferably both commits to branches and PRs13:24
lecris[m]The main goal is to make sure the packages are being built and a required dependency is being met in each distro13:26
fungibookwar doesn't seem to hang out in here (unless lurking via the matrix bridge), but might be good to reach out to about fedora's packaging automation. i know we've had some very productive discussions with her about that stuff in the recent past13:26
lecris[m]Can we get a full git repo from gerrit or only patches?13:27
fungiwe strongly recommend you clone from opendev.org and only pull patches from review.opendev.org in order to minimize unnecessary load on the code review system13:28
fungiit sounds like what you want to do could be accomplished by connecting some sort of listener to our gerrit's event stream. doesn't have to be a zuul, there's a jenkins plugin for connecting to gerrit, but also it's just a basic ssh connection with json data that can be decoded by just about anything13:29
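A minimal sketch of the decoding side of such a listener, fed with canned sample lines instead of a live connection. The event payloads here are simplified, hypothetical versions of gerrit stream-events JSON, not the full schema:

```python
import json


def filter_events(lines, event_types=("ref-updated",)):
    """Decode newline-delimited JSON (as produced by gerrit's
    stream-events ssh command) and keep only the event types of
    interest."""
    return [
        event
        for event in (json.loads(line) for line in lines)
        if event.get("type") in event_types
    ]


# In practice the lines would arrive over a long-lived connection such
# as: ssh -p 29418 user@review.opendev.org gerrit stream-events
# These sample events are simplified, hypothetical payloads.
sample = [
    '{"type": "ref-updated", "refUpdate": {"project": "x/test-project",'
    ' "refName": "refs/tags/1.0.0"}}',
    '{"type": "comment-added", "change": {"project": "x/test-project"}}',
]
tag_events = filter_events(sample)
```

The same filter could route tag events to a package-build trigger and ignore the rest.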
fungifor a while we had an mqtt broker we also published all that into, but nobody ever used it for anything non-experimental so we eventually turned it off13:30
lecris[m]Like a time-based listener?13:30
lecris[m]Wouldn't that be too consuming compared to a webhook?13:30
fungiwell, i mean, the webhook that doesn't exist isn't time consuming at all but also doesn't do anyone much good ;)13:31
fungigerrit has a stream-events function in its ssh-based api, so we make that available to anyone who wants to connect to and consume it13:32
lecris[m]Hmm, but the connection has to be maintained to get the latest updates?13:33
fungiyes13:33
fungiwe're a volunteer open source collaborative run by a skeleton crew of sysadmins with some development background and almost no available time, so building things just because they might be nice unfortunately falls below the priority threshold13:33
fungimost of the time we strive for "good enough" and are happy when we can achieve that with our available resources13:34
lecris[m]Yeah, that's why I was considering if there are built-in webhooks in either gerrit or zuul13:34
fungii'll admit, the concept of a "web hook" is still a bit vague to me, but when i think of a web hook being provided by a service, it's to allow external processes to trigger something in that system. it sounds like you're not asking for web hooks built into our systems, but some ability for our systems to call web hooks in some other systems?13:36
fungibut also, maybe i have a fundamental misunderstanding of what a web hook is13:37
lecris[m]More like if the systems already have webhook capability coded that can be reused13:37
fungiyeah, we have zuul, which runs arbitrary ansible, and that can make web requests to whatever we want13:38
fungiwe see that as built in support for calling web hooks, but i guess that's not what you're talking about13:38
lecris[m]Indeed but if there is a built-in, there is a standard form of the information being pushed (commit, PR, etc.) which can be re-used both for opendev and other projects that use other such hosted services13:39
lecris[m]For example, in the gitea webhooks, it mimics github's information, so we can reuse the code for github with minimal changes13:40
fungiwhere is that standard defined?13:40
fungiahh, github is the standard?13:40
opendevreviewLucas Alvares Gomes proposed openstack/project-config master: Rename x/ovn-bgp-agent to openstack/ovn-bgp-agent  https://review.opendev.org/c/openstack/project-config/+/87945613:40
fungifor gitea, who gets to configure what web hooks it calls in which systems? is that something the project owners specify or is there a callback registration you use?13:41
lecris[m]Primarily the upstream service. Gitea has its webhook standard that is adapted from Github, so if we change it for gitea, it supports other gitea instances as well13:41
lecris[m]For gitea it's baked in the gitea executable13:42
fungibut i mean, taking your example where gitea is signalling some event to your packit service, does the project admin in gitea tell it to send those events to packit, or does the operator of the packit system register a callback into gitea to say it wants to receive certain events? mainly curious what the control flow and coordination protocol is like13:43
lecris[m]Ok, had to double-check that I don't have the order mixed up. It's the former, the project registers a webhook URL, e.g. https://webhook.packit.dev/gerrit/openstack/project. Then whenever there is a new event (git commit, new branch, etc.) gitea prepares a payload with the relevant information and passes it to webhook.packit.dev13:49
fungithis is one of the reasons i dislike the vague "web hook" terminology, it elides much of the complexities of inter-service rpc13:49
fungiso based on your description, the maintainers of the project in gitea are the ones to decide that events should cause a signal to be sent to the packit service when an event occurs which meets the relevant conditions13:51
fungithat's not too dissimilar from the maintainers of the project setting a ci job which sends a signal to the packit service under the same circumstances13:52
lecris[m]Yes, but by default webhooks send all possible information13:54
fungii don't understand why that would be any different from what a ci job could supply13:55
opendevreviewLucas Alvares Gomes proposed openstack/project-config master: Rename x/ovn-bgp-agent to openstack/ovn-bgp-agent  https://review.opendev.org/c/openstack/project-config/+/87945613:55
fungiin our case, zuul has a significant amount of state information for events which can be used by jobs and so could be included in whatever http requests were made by the job to an external system13:56
lecris[m]In the ci job, you have to manually create the payload, while if there is a built-in webhook, it will send the maximum information so the other end simply filters the relevant info13:56
fungisomeone at github or devs for gitea had to create that payload too. the up side to making it a role in zuul's standard library is that it's then standardized and reusable by any project13:57
lecris[m]And if it is in the service, there are standardized names, e.g. repo_name: my_repo vs repository: my_repo, etc.13:57
lecris[m]Indeed that, but one step further if that could be in zuul itself, then there is less to be configured13:58
fungiif there's a published specification for that standardized payload, then making an ansible role to transmit that data doesn't sound like a substantial development task13:59
fungizuul tries to not embed things which can be externalized into ansible, in order to provide increased flexibility13:59
fungibut "in zuul" could mean multiple things. it's like saying "this should be included in python. no not the python stdlib, in the interpreter itself" (something in python's stdlib can be considered sufficiently "in python" for most purposes)14:01
lecris[m]Well the payload is dependent on the service. github payload vs rtd payload are different. The former has fields of commit the latter may have documentation_url. But within the families of services it can be standardized or adapted from one to another. Gitea and gitlab borrow from github, zuul would borrow from droneCI, etc.14:02
fungiright, i meant a role that implemented whatever payload specification you're talking about. it would be named for the standard it's implementing of course14:03
funginot "an anything webhook" that somehow tried to support all standards at once, because that would be madness14:03
lecris[m]Indeed, but also the triggers must be relevant, i.e. new commit, new branch, new tags, sure, but there is also new comment (e.g. /packit redo build), new user ownership14:05
fungior alternatively, someone writes a specification particular to zuul rather than trying to copy the behavior of another system, but if there's a public standard that could be emulated closely i expect that would be preferable. i know we talked about some of this in the cdf interop sig a while back14:05
fungithe roadblocks we ran up against is that different ci systems have fundamentally different designs and support different sorts of workflows, so creating a standard that covers all of them seemed insurmountable14:08
lecris[m]I am thinking of enabling these features in the repository that defines these, e.g. webhooks.gitea.URL=webhookpackit.dev, similarly for a gerrit and zuul one. Then internally, we just use the built-in/configure equivalent to give the relevant info for each service14:08
lecris[m]For packit the main information required is in the gitea+gerrit+launchpad14:09
lecris[m]Primarily gitea+gerrit. There is already built-in gitea support that only needs to configure URLs, while gerrit I am not sure14:10
fungiyes, from zuul's perspective it gets all that information from the gerrit event stream, and supplies it as ansible vars which can be embedded in a url task as parameters, it's mostly a matter of templating that out in the desired format14:11
fungiwell, in our case mostly from gerrit event steams anyway, but zuul also has github, gitlab and other source connection drivers so it can supply information from them as well depending on the context of the pipeline14:12
lecris[m]But what about the difference in events? Commits, tags, comments, etc.?14:13
lecris[m]The zuul would then have to be called for each small event, which would be excessive14:14
fungii'm not sure what you mean by "called"14:29
fungizuul consumes the event streams/feeds from all of its configured source connections, and takes whatever actions are defined that match the parameters in the received events14:29
fungiso zuul is already being "called" by every event in every stream it's multiplexing14:30
fungiaside from a small amount of up-front noise filtering, it looks at each event and decides if it matches a trigger, and if so enqueues it to run whatever jobs are defined for the pipelines triggered from such an event14:32
lecris[m]But you would have N repos configured to filter through for each event, compared to if it's triggered directly by the service14:32
fungimany jobs are run simply from pattern matches in review comment events14:32
fungiagain, not sure what you mean by "directly triggered by the service." which service? it seems like you're making a lot of architectural assumptions14:33
lecris[m]Gitea/gerrit14:33
fungiwell, again, that's why i originally pointed out gerrit's event stream. that's what zuul consumes directly too14:34
lecris[m]Yes but it goes gerrit -> zuul -> playbook -> packit, instead of gerrit -> packit14:35
fungiit may not meet the modern expectation that "all protocols are taco bell^W^Whttp" but it addresses the basic use case14:35
fungiand exists today available for use already14:36
lecris[m]Zuul has to do a lot of heavy-lifting there, both in filtering the events on gerrit -> zuul, which if handled by ansible would be another overhead, and then in preparing payloads zuul -> playbook14:37
lecris[m]With gitea at least it has webhooks baked in, so we don't have to go gitea -> zuul -> playbook -> packit for new tags, and new commits14:39
fungiyes, i mean the gerrit event stream is already available if you want to consume that data and bypass zuul to do so14:41
lecris[m]Well, yes, but the whole point of the webhooks is to limit the requests from the application (packit) and to lessen the need of the application to keep in sync 14:44
fungisure, i get that. we don't have that though, we have the gerrit stream-events api or zuul jobs calling ansible to send data somewhere14:45
lecris[m]Hmm, I don't suppose gerrit folks would be interested in webhook development?14:50
lecris[m]Oh wait, it already has https://gerrit-documentation.storage.googleapis.com/Documentation/3.6.0/config-plugins.html#webhooks14:51
fungithere may be a gerrit plugin, but we would need to do development work on our side to integrate whatever configuration mechanism that uses to our infrastructure-as-code continuous deployment automation14:51
lecris[m]Other than enabling a webhook based on the configuration of the project's repo, I don't think there is anything else needed14:52
fungi"other than [trivial sounding development work that really isn't necessarily trivial] ..." ;)14:54
lecris[m]I mean: https://gerrit.googlesource.com/plugins/webhooks/+doc/master/src/main/resources/Documentation/config.md#plugin_configuration. It doesn't look like there is a per/project configuration, but it doesn't look that bad14:55
fungithink back to the last time someone approached you about adding something in one of your complex projects, didn't actually have a patch and hadn't even looked at your source code, but insisted "surely it's simple"14:58
fungii'll try to take a look at the plugin's docs, but likely won't have time until after my next several meetings15:00
clarkbin general I would say we already have a generic way of accomplishing this task using tools we understand well. It feels weird to add another system to do it that we have to understand and debug15:01
clarkbre pep517 setuptools itself is pep517 capable and we piggy back off of that in pbr to avoid needing to rewrite major parts of pbr to remove setuptools15:01
lecris[m]Indeed, but the plugins and enabling services sounds simpler than creating a custom zuul job. And I was surprised looking at the gerrit configuration that it's rather small. Gitea on the other hand, I admit, it would be more complicated if there is no api access to it.15:03
clarkblecris[m]: I expect the zuul job would be straightforward and likely completely reusable. So write once and then anyone can reuse it15:03
lecris[m]clarkb: re: adding new system. I think we are overlooking that the plugin is already maintained externally. We don't need to understand and debug it. Like we don't need to understand and debug gitea right?15:05
clarkbAgain the upside is we already have that tool and it is much easier to debug than the gerrit server15:05
clarkblecris[m]: we absolutely do need to understand and debug gitea...15:05
clarkbrunning services like this is non zero effort. Figuring out upgrades, pushing bugfixes we do all of that15:05
clarkbwe already need to do that for zuul and zuul exposes its logs to the end user in an easy to consume manner15:06
clarkbgerrit does not15:06
clarkbthere is also no mechanism for managing a secret (password/token/etc) in the gerrit projet config today15:07
clarkbbut zuul supports this making a potential zuul hook system more flexible15:07
lecris[m]<clarkb> "lecris: I expect the zuul job..." <- I think here we are swinging the pendulum too much to the other side. We would need to understand how to filter events, handle webhooks etc., which the webhook plugin has better expertise in. We are already maintaining the gerrit configuration, so the work to add the plugin will be less than the zuul playbook15:10
lecris[m]As for debugging on zuul vs the plugin, we would have to do the same debugging on either implementation (i.e. for the initial setup), but for the plugin, we know that someone has done 90% of the work15:11
clarkbin general we have been trying to take functionality out of gerrit15:12
lecris[m]clarkb: That is not an issue, because most webhooks are not encrypted by a configured secret15:12
clarkbwe were stuck on version 2.13 for years and it took a herculean effort to upgrade to 3.2 because of all the stuff glommed on15:12
clarkbI think we need to start with that as a general given. Gerrit should be kept as simple as possible.15:12
clarkbWith that as a given we have another tool that integrates with gerrit called zuul that enables extremely flexible event driven actions.15:13
clarkbWe use this for replicating to github for example even though gerrit could do it15:13
clarkbit's not like these decisions are happening in a vacuum; there is history and long term effort behind it. I personally do not want to add generic webhooks to gerrit15:14
clarkbthe upside is that simplifying gerrit allows us to keep it upgraded. We are also able to run zuul effectively and keep it up to date. The end result is that we can focus our efforts in places that make us more efficient15:15
lecris[m]One issue is that webhooks are more intense in the number of events. Syncing with github would be suitable for a CI system, but I don't know about a webhook system, especially if it has to be maintained by a minimal team15:16
lecris[m]Note that we also need to understand the gerrit webhook plugin to mimic the payload there, since the application downstream (packit) would want to develop once for any gerrit system15:19
lecris[m]Btw, going back to pbr and/or the current packaging approach. I see that not even the optional dependencies are included in setup.cfg. Makes packaging rather difficult to maintain15:32
opendevreviewAndy Wu proposed openstack/project-config master: Add cinder-nfs charm to OpenStack charms  https://review.opendev.org/c/openstack/project-config/+/87946915:33
lecris[m]I.e. what is BuildRequires, what is Requires, what is for testing, etc.15:33
lecris[m]Normally we would use one-line: %pyproject_buildrequires -x test15:34
clarkbpbr loads requirements.txt as requires. test-requirements are installed separately in test envs15:34
clarkbthe only pbr requires should be pbr + setuptools unless expressed in the setup.py or setup.cfg or pyproject.toml15:34
fungilecris[m]: they can be put in pyproject.toml now that it's a thing, they just haven't been. remember these are 13 year old projects you're talking about15:35
clarkb*the only pbr build requires15:35
fungiupdating to newer packaging standards when the old ones still work is viewed as low-priority makework by most projects15:35
fungithey'd rather spend what limited time they have on improving their software, and like to ignore packaging topics as much as they possibly can as long as nothing breaks15:36
lecris[m]Yeah, let the packagers deal with that ><15:36
JayFAlso until recently, pyproject.toml support in setuptools was lacking -- you couldn't even install a package in develop mode until a few weeks ago.15:36
fungialternatively, upstream developers can spend their time doing packaging and stop making software ;)15:37
lecris[m]At least it helps to know some basics like "pbr" is the only `BuildRequires`. I see in the spec file that they were not taking that into consideration15:38
fungii'm a major supporter of distro package maintainers and the tireless work they put in, but there's a significant (and growing) slice of open source software development that views traditional packaged distributions as "legacy" process to be magically replaced by publishing random container images15:39
lecris[m]But otherwise, there is no EPEL9 compatibility issues for implementing PEP621?15:39
fungiso while openstack tries to help the various downstream distributors, there is a sentiment leaking slowly into projects that it would be "easier" to just forget all the things we do to try to make stuff packagable and assume everyone will use random container images15:40
fungiwhich makes it that much harder to get them to care about packaging15:40
lecris[m]I don't know, doesn't pypi also encourage PEP621?15:41
clarkbto be fair before pyproject.toml pbr was probably one of the easier ways to figure this out for external packaging since it was well defined15:41
clarkba random setup.py could do anything. pbr constrained that15:41
lecris[m]That would not be legacy packager issue, simply adapting a common standard that pypi supports helps with Fedora packaging as well15:42
clarkbI don't know that pbr supports 621 since it would need to grow support for reading that info out of pyproject.toml instead of setup.cfg15:43
clarkbbut that can probably be added. I also don't know how that relates to epel9 (this is the wrong group for that I suspect)15:44
clarkbI guess it might half work if setuptools supports it15:44
fungilecris[m]: the same people who think distros are "legacy" also don't see a lot of point in packaging things for pypi either (though humorously, still rely heavily on things others publish to it when making images)15:46
lecris[m]Yeah, that's the issue I recently encountered in that setuptools in EPEL9 and Fedora36 does not read pyproject.toml, while F37 onwards have that baked in. I was wondering if that was a limiting factor to that migration, if it was discussed15:46
JayFthe primary limiting factor in any openstack migration of packaging tooling is the number of repos we have :)15:46
fungiclarkb: actually pbr doesn't need to do 621 for anything other than the project name. i have a pbr-using project which moved everything else from setup.cfg to pyproject.toml15:47
lecris[m]fungi: *goes to cry in a corner*15:47
clarkbfungi: cool so half works is actually 99% works :)15:47
fungisetuptools directly handles the rest15:47
JayFthat level of inertia requires quite a bit of customer/business-case-making to overcome15:47
fungiclarkb: the only metadata i didn't have any luck moving out of setup.cfg was the name, pbr looks directly for it there15:48
fungii'll eventually push a patch to fix that if i can work out where it should be patched15:48
clarkbfungi: is toml parsing in stdlib? if not we probably have to vendor that15:48
lecris[m]fungi: How about dependencies? It does not have to be from requirements.txt?15:48
clarkblecris[m]: no, pbr reads them from requirements.txt but if you define them elsewhere in standard setuptools locations that should be respected15:49
fungiclarkb: it is and isn't. there's a toml lib in newer python versions' stdlib but packaging tools vendor in a lib for cases where it's not available15:49
lecris[m]clarkb: 3.11 iirc, otherwise they use toml lib15:49
lecris[m]Oh it's the opposite? 🤔15:50
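[Editor's note: the stdlib-versus-vendored split fungi and lecris are describing is the standard fallback pattern; a minimal sketch, assuming the third-party tomli package is available on pre-3.11 interpreters:]

```python
# Sketch of the TOML-parsing fallback pattern: Python 3.11+ ships tomllib
# in the stdlib; older interpreters need the third-party tomli package,
# which exposes the same API (this fallback is an assumption about the
# environment, not something from the discussion above).
try:
    import tomllib
except ModuleNotFoundError:
    import tomli as tomllib  # pre-3.11 fallback

data = tomllib.loads('[project]\nname = "example"\n')
print(data["project"]["name"])  # prints "example"
```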
fungilecris[m]: clarkb: yes, i can use pbr with all of project.dependencies in pyproject.toml, no requirements.txt files at all15:50
fungiyou need fairly recent setuptools, but it just relies on setuptools' support for it15:52
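[Editor's note: a sketch of the split fungi describes — names and pins here are illustrative, not taken from his actual project:]

```toml
# pyproject.toml -- hypothetical pbr-using project: pbr computes the
# version from git metadata, setuptools handles the PEP 621 fields.
[build-system]
requires = ["pbr>=5.11", "setuptools>=64"]
build-backend = "setuptools.build_meta"

[project]
name = "example"
dynamic = ["version"]
dependencies = ["requests>=2.25"]

# setup.cfg -- per fungi, the one piece pbr still reads from here:
# [metadata]
# name = example
```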
clarkbbasically pbr was born out of distutils2 ideas that got killed, abandoning a lot of what pbr had done. The idea was to make standard configuration practices for packaging. This included how you define dependencies but also versioning etc. In the many years since, setuptools has swung around and added these features bit by bit15:52
clarkbbecause pbr acts as a setuptools add on essentially you can do all of the normal setuptools stuff too15:52
JayFclarkb: if openstack were better at OSS-marketing, we might could've saved the entire python packaging ecosystem from this chaos :D (I'm joking but there might be a nugget of truth to this)15:53
clarkbJayF: well pypa was very opposed to pbr despite being based on their ideas because at some point they decided their ideas were bad15:53
clarkbthey just failed to communicate that externally until it was too late.15:53
fungiprimarily things like setuptools adopting a (slightly modified) version of pbr's setup.cfg declarative data files, and the emergence of plugins like setuptools-scm (and others whose names escape me for some of its other features), though they don't have quite the level of pep 440 + semver implementation it does15:53
clarkbAnd since then seem to have acknowledged it wasn't all a terrible idea since a lot of new setuptools features are things pbr has done since day one15:53
lecris[m]Hmm, so is there something specific to pbr that is still needed that couldn't be taken from recent setuptools?15:54
fungithe main one openstack relies heavily on is semver signalling15:54
JayFa secondary one is you need a *darn good reason* to change things across hundreds of repos/deliverables15:55
fungibeing able to infer likely future version numbers based on information in git metadata, in order to provide sensible development versioning for non-tagged commit states15:55
fungilike knowing when there will be a minor or major version increase coming in the next release is inferred from specially formatted commit footers15:55
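[Editor's note: a toy illustration of the scheme fungi describes — this is not pbr's actual implementation, just a sketch of predicting the next pep 440 development version from the last tag and Sem-Ver commit footers:]

```python
# Toy sketch: given the last release tag, the number of commits since it,
# and any "Sem-Ver:" footers in those commits, predict the next version
# and produce a sortable .devN prerelease (names are illustrative).
def next_dev_version(last_tag, commits_since, footers=()):
    major, minor, patch = (int(p) for p in last_tag.split("."))
    if "api-break" in footers:      # Sem-Ver: api-break -> major bump
        major, minor, patch = major + 1, 0, 0
    elif "feature" in footers:      # Sem-Ver: feature -> minor bump
        minor, patch = minor + 1, 0
    else:                           # default -> patch bump
        patch += 1
    return f"{major}.{minor}.{patch}.dev{commits_since}"

print(next_dev_version("1.2.3", 5))              # 1.2.4.dev5
print(next_dev_version("1.2.3", 5, ["feature"]))  # 1.3.0.dev5
```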
lecris[m]<JayF> "a secondary one is you need a *..." <- Yes, but if we ever migrate to PEP621, would be free work to change to setuptools16:00
fungiuntil the next packaging standards change comes along16:01
lecris[m]Oh pbr doesn't just use git describe information?16:01
clarkbno it creates pep440 compliant versions based on what it knows about the repo state16:01
fungino, not even close. that's not nearly robust enough to predict and create pep 440 compliant prerelease and development versions that sort correctly16:02
clarkbit also records the shas in package metadata since pep440 doesn't allow you to embed them16:02
fungithough there's work in progress to try to annotate additional git information in python package metadata, that's something pbr has been doing for over a decade16:02
lecris[m]But does it also use .git_archival.txt? Is the information there not enough?16:03
JayFI think we've been trying to say that many of those things likely didn't exist when PBR was created and started supporting these things16:04
fungibasically, we solved most of these problems 10+ years ago, and now that the rest of the python packaging system who rejected our implementations is coming back around to realizing people had similar needs, there's not a lot of urgency from projects to switch to something that provides no additional benefit just because it's suddenly been standardized16:04
*** amoralej is now known as amoralej|off16:04
lecris[m]For reference it does have the hash and describe... (full message at <https://matrix.org/_matrix/media/v3/download/matrix.org/oXeOSxiQSddxTGhCdWolkSBB>)16:05
fungiwe've had an implementation of all of this all along. people had prejudices against what we produced for a variety of reasons, and while they've since proven that the things we did were actually needed, there's a lot of amnesia and nih inherent to the whole python packaging scene16:05
lecris[m]I understand that PBR did it first, but are we keeping up with setuptools on the other fronts?16:05
fungilecris[m]: who is "we" in this case? if you don't know whether pbr is keeping up with setuptools, then it's an odd way to phrase the question16:06
fungipbr works with the latest releases of setuptools, as evidenced by pbr-using projects being buildable with latest setuptools16:08
fungii try to keep on top of deprecation warnings by testing my projects with PYTHONWARNINGS=error and running down any new ones that appear so that it stays that way16:09
clarkbJayF: not only did they not exist the rest of the python world was strongly against what we were doing. Then maybe 6ish years ago there was a shift where they seemed to realize these features were useful and they started showing up in standard tools16:10
fungilecris[m]: there have been some stumbling blocks, for example the vendoring of distutils into setuptools after it was removed from the stdlib suddenly shadowing distro-supplied distutils, but stuff like that tends to get worked out upstream in setuptools, i and others have helped to patch things there over time16:11
clarkband ya when you've had all these features that are just now showing up for 12ish years it makes motivation to change everything low16:11
JayFI mean, I hold zero of those ... emotional desire to not change it for philosophical/historical reasons16:11
JayFbut like, we're talking about literally hundreds of changes across hundreds of repos, some of which are maintained by nobody or by like 5% of a full time person16:12
JayFthe bar for making that kind of change is ENORMOUSLY high, as proven by us staying with tox even after the abuse it put us through16:12
clarkbJayF: I think the risk is that they will change it all under you again and again and again which is what history says is likely to happen. PBR on the other hand is quite stable16:12
fungiby the time they came around to wanting the features we had in pbr, there was an entrenched memory handed down of "pbr bad, avoid" so they basically redid everything in a vacuum rather than just reusing our solutions and code16:12
clarkbsome new file format will show up and we'll have to rewrite all our toml16:14
JayF"Why TOML?" "Everyone got tired of writing yaml for kubernetes and wanted a break" /s16:14
clarkbin this case I think they evaluated json and python's variant of ini16:15
clarkbjson isn't really human readable (fair) and ini can't do complex data structures iirc16:15
fungiyeah, i don't have high hopes that the "ooh squirrel!" folks who decided toml was the shiny new thing everyone should use (because rust did it) won't get distracted by something else in two years and write all new standards16:15
fungii also love that one of the reasons they discounted using yaml was because "indentation confuses people"16:16
lecris[m]Hmm, now I understand more about the issue. I just hope that the standards don't grow too dissimilar, to the point that you need to package a different way for each build system. As long as it can stay at feature parity or feature equivalence, at least with respect to the PEPs16:16
JayFfungi: so they thought that .... python developers .... don't understand indentation?16:16
JayFlecris[m]: I think that's the intention of the new peps in place. I suspect we'll wait to take action until it's evident they succeeded at that  :D16:17
fungilecris[m]: yes, we're trying to keep pbr capable of working with the new packaging standards as they solidify, but just because pbr can support them doesn't mean projects using pbr will use them16:17
lecris[m]Ok, then the only issue is getting the upstream projects to consolidate the metadata16:18
fungiand we have a (much stronger than usual for the python packaging ecosystem) sense of backward-compatibility for pbr. we still test that it works with python 2.7 for example16:19
lecris[m]But is it confirmed that pbr can read the metadata on EPEL9? I saw issues with setuptools reading pyproject.toml metadata16:20
clarkbno we don't do anything with epel16:20
fungihow new is the setuptools on epel9?16:20
lecris[m]It's before pyproject.toml support16:20
fungiyou need fairly recent setuptools for full pyproject.toml support, and if you're installing in editable/usedevelop mode there are still some rough edges with the setuptools pep517 build backend16:21
lecris[m]https://pkgs.org/search/?q=python3-setuptools 53 in some cases16:21
fungii can't speak to epel since i don't use proprietary distros, but we do run a bunch of pbr-using projects on centos stream 9 and rocky linux 916:22
clarkbre supporting python2 you have to do that because setup_requires are pulled by easy_install and can't filter by python version support metadata :(16:22
fungiand i think the openeuler we have may be close to rhel916:22
clarkbfungi: ya openeuler is a different kernel but a lot of the other packages are aligned with rhel9 iirc16:23
fungiso anyway, maybe whatever issues you saw with setuptools were specific to something on proprietary rhel16:23
lecris[m]But if pbr cannot bridge the gap there, it seems that moving completely to PEP621 would be on hold until EPEL1016:23
fungii have no idea what epel10 is. some red hat thing i guess?16:23
lecris[m]fungi: Same issue with F36 and centos-stream16:24
fungito be honest, i know next to nothing about red hat derivatives. long time debian user/contributor here16:24
*** iurygregory_ is now known as iurygregory16:24
fungibut i gather red hat, as a for-profit company, has the funds to solve problems like that if they see them as a priority16:25
lecris[m]fungi: I mean the next release of RHEL, because rebasing setuptools on all the packages will probably not pass16:25
fungino idea what rebasing setuptools means, but presumably something specific to rh/rpm packaging workflow?16:25
lecris[m]An update to setuptools will need an update to dependent packages as well since it is a build system, similar to a gcc update16:27
jrosseri wonder what actual issue there is, as openstack is already packaged for RH-ish systems via the RDO repos16:28
jrosserthat would be a great place to start to see how it is done16:28
fungiyes, you would presumably trigger new builds of all reverse-build-depends (in debian lingo)16:28
fungithere's also a #rdo channel where the red hat openstack package maintainers congregate16:30
lecris[m]I mean using PEP621 only. Moving all metadata to pyproject.toml16:30
lecris[m]rdo repos and everything work. I was asking about if pbr can bridge the gap on EPEL9 such that centralizing the metadata would be possible16:32
fungii doubt that will happen any time soon in most openstack projects for a variety of reasons, none of which have to do with pbr related challenges. also setuptools isn't likely to stop supporting its old configuration formats without a very lengthy deprecation period. the setuptools maintainers as a whole are risk-averse which is part of why it took so long to get support for pyproject.toml16:32
fungimetadata there in the first place16:32
lecris[m]But would it be feasible to start that, ideally migrating the metadata to pyproject.toml, but at the very least moving the requirements.txt to setup.cfg. Like a multi-repo review where we can steadily add on top of it? It would make downstream packaging a trivial task, and we can just reuse the patches there until then16:38
fungiit's not all that straightforward. a lot of openstack projects have at least three requirements files used in different contexts, so those would need to be translated to extras which don't work quite the same way. i'm not saying it's insurmountable, but it's definitely work nobody's likely to prioritize over normal project development efforts unless there's some critical external change that16:41
fungiforces their hand16:41
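[Editor's note: the translation fungi mentions would look roughly like this in PEP 621 terms — the extra names and pins are illustrative, not from any real project:]

```toml
# Hypothetical mapping of per-context requirements files onto extras.
[project.optional-dependencies]
test = ["stestr>=3.0", "coverage>=5.0"]   # was test-requirements.txt
docs = ["sphinx>=4.0"]                    # was doc/requirements.txt
```

Note the behavioral difference he alludes to: extras are opt-in (`pip install example[test]`), where a requirements file is just a flat list fed to pip.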
lecris[m]Well yes, but at least to get the ball rolling16:43
lecris[m]I currently suspect that there is an issue with the requirements.txt being ambiguous as to whether it's BuildRequires or Requires, and that makes for example ansible-collections-openstack not buildable, because something that should only be a runtime dependency is treated as a build dependency16:45
clarkbI don't think that is ambiguous in a pbr context.16:46
jrosserfor that, it's an ansible collection, not really a python project though?16:46
clarkbrequirements.txt are always runtime deps with pbr iirc. You have to explicitly set setup_requires in your setup.py for build deps or use pyproject.toml16:46
fungiin openstack projects, typically "build dependencies" are tracked as setup-requires and not kept in the requirements.txt (which is for install-requires essentially)16:47
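[Editor's note: the conventional pbr hook clarkb and fungi describe is the minimal setup.py most OpenStack projects carry; a sketch, not copied from any particular repo:]

```python
# Minimal pbr-style setup.py: setup_requires names the build-time
# dependency (pbr itself); runtime dependencies come from
# requirements.txt, which pbr reads at build time.
import setuptools

setuptools.setup(
    setup_requires=["pbr"],
    pbr=True,
)
```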
lecris[m]Hmm, then I'll have to look closer on what is going on there16:48
fungiif you find pbr being included as a runtime dependency, there are a couple of possible reasons. one is that it's just included erroneously and not actually needed, the other is that it's actually also being used as a runtime lib (usually for its ability to fetch versions from metadata, though there are ways to do the same with just stdlib modules these days so could be replaced with some small16:49
fungiamount of development effort in those projects)16:49
fungimy preference is to avoid using pbr as a runtime dependency in projects unless they're projects directly interfacing with package data in ways that pbr simplifies, which is to say a very small number of packaging-oriented tools not the vast majority of projects16:50
JayFit's extremely common in openstack to take pbr as a runtime dep just to get the version out16:52
JayFe.g. https://github.com/openstack/ironic/blob/af0e5ee096fa237290776969a37f3bced96b7456/ironic/version.py#L1816:52
lecris[m]It's not pbr itself being a runtime dependency, I will check that again later. It's that runtime dependencies like openstacksdk are treated like build requirements, preventing the build16:53
fungiwhich can also be done in a few lines of python calling the pkg_resources or importlib, but there's also been enough churn there recently that i can understand projects wanting to have a library to abstract away all the guesswork of trying to figure out what solution works for that on an arbitrary python version16:53
lecris[m]Imo they should change to generated version files there16:53
fungithey do. the generated version files are part of the package metadata, but how you access those files from python differs depending on what version of python you're running16:54
fungimore recent python versions can use importlib, though very new versions of python have changed importlib in ways that you still have to try a couple of different possibilities16:55
lecris[m]Shouldn't that be handled by the generator and simply from __version__.py import __version__?16:55
fungioh, you mean pbr should add or edit a script which embeds a version string? that can quite easily get out of sync with the package metadata, especially in downstream repackaging contexts16:57
lecris[m]Where __version__.py is a generated file on each install, editable or otherwise16:57
fungidownstream distributors often alter python package metadata16:57
lecris[m]Why would it get out of sync when downstream is calling pip install to generate those?16:58
fungiand the goal is to have the project know what version the distributor has set16:58
fungiagain, you're making many, many architectural assumptions based on your own experience which is not universal16:58
lecris[m]Assumption on pip install or on how pbr generates the version?16:59
fungiassumption on how redistributors in general repackage python libraries. there are many ways used by different distros, we're trying to solve for a variety of use cases not just yours17:00
lecris[m]With projects using setuptools_scm we rely on the project being a git archive and the build system taking all of the information from a .git_archival.txt. That is not the case with pbr?17:00
fungipbr can take version information from git metadata, from envvars, from configuration, from existing package metadata files...17:01
lecris[m]Sure, but couldn't it take the information once, write it in generated files for import __version__ so that there is no more runtime requirement?17:04
clarkbit does, it writes it to the package metadata which can be read back out again17:04
lecris[m]The only case then that would be affected would be editable builds, which could use a pbr runtime dependency17:04
lecris[m]So then why do projects use pbr as a runtime requirement to get the version?17:06
fungiin order to not have to try several different ways of accessing python package metadata17:06
fungibecause the python community keeps changing its mind as to how packaging metadata should be accessed, and has made several non-backwards-compatible changes in that regard17:07
JayFIt's literally a single line in Ironic, and I don't have to worry about putting more than that single line in any of the nearly 2 dozen projects that are in the Ironic program17:07
fungiit's not hard, but it's also not trivial and requires domain knowledge that developers would rather not have to learn17:08
JayFanytime I can farm out something like that, which isn't in Ironic's core competency of provisioning bare metal, I'll do it every single time17:08
JayFjust like I'm glad that e.g. metal3 developers outsources hardware provisioning to us -- we're good at it, they can focus on k8s integration17:08
lecris[m]But it's also a single line in __init__.py: from .__version__ import __version__17:08
JayFYou're saying that works in every single case, including all python versions and platforms we support?17:09
lecris[m]Yes, it's up to pbr to generate __version__.py accordingly17:09
JayFSo that won't work in cases where there's not an install step for the package?17:10
fungiand to assume downstream package maintainers don't alter package metadata to amend versions after pbr generates that script17:10
lecris[m]Adding whatever guards are necessary for python version and platforms, but writing the metadata statically17:10
opendevreviewAndy Wu proposed openstack/project-config master: Add cinder-nfs charm to OpenStack charms  https://review.opendev.org/c/openstack/project-config/+/87946917:11
fungibasically, one of the design principles of pbr is that information should never be duplicated because that leads to divergence, ordering problems, sometimes race conditions in processes17:11
lecris[m]JayF: In those cases you would do editable install where a dynamic version look-aside is used like the one shown in ironic17:11
fungiputting the version in two separate places is a risk that pbr is designed to avoid17:11
* JayF already knows more about version identification in installed python packages than he wanted to have to know lol17:12
lecris[m]fungi: Yes, downstream maintainers have to edit metadata in %prep stage (although it's strongly discouraged to do so)17:12
fungibut if they do, then pbr's runtime version reporting will reflect that automatically17:13
JayFIn RPM/Fedora, they are.17:13
JayFIn other repositories, such as Gentoo, it's required in some cases (to normalize the version in cases of some specific strings in versions, like '-')17:13
lecris[m]Yes, but pbr will be a runtime dependency, which in principle shouldn't be needed17:14
fungisure, it can be replaced as a runtime dependency with half a dozen lines of python depending on how many python interpreter versions you want to support17:14
lecris[m]Yes, but I would say that that's a reasonable approach for the downstream users of that package17:17
lecris[m]The same dozen lines are run internally in pbr, while separating the dependency makes the package more portable17:18
JayFhttps://github.com/openstack/requirements/blob/master/global-requirements.txt one down, almost 500 to go? /s 17:18
JayFAt a certain point it stops seeming valuable to spend time reducing dependencies when you already have *literally an entire project* dedicated to managing project dependencies because you have so many lol17:19
JayFThere probably is value in reducing it, but that's so far down everyones' list that it's not going to be a focus17:19
lecris[m]Where is the global-requirements.txt used? I was referring to individual projects which would be managing that information internally in their setup.cfg17:20
JayFSometimes you have to give up something when you scale up... you're seeing some of those tradeoffs actively :) 17:20
fungiit's a reasonable idea to split out a pbr-runtime package (or pbr-version or something) that can do the version lookup tasks as a separate very small dependency, though i don't know who has motivation or time to work on making that17:22
* JayF nudges lecris[m] ^17:22
JayFthat would be the sort of incremental change that is possible at our scale17:22
clarkbglobal-requirements is a level above the projects, it maintains lists of libs that can be used to help ensure reuse and avoid unnecessary addition of libs that perform the same/similar tasks. Also does basic license checking17:22
jrosserlecris[m]: i'm going to say it again :) but you've been talking about ansible-collections-openstack a lot here, so thats an ansible collection not really a python project, and have you seen this? https://opendev.org/openstack/ansible-collections-openstack/src/branch/master/docs/releasing.md#publishing-to-fedora17:30
fungialso, to reiterate, the collections, and pbr for that matter, are maintained by teams that have their own irc channels which aren't this one ;)17:31
lecris[m]Indeed, I am trying to modernize the rdo spec files there to be more in line with the Fedora packaging guidelines. Currently those are tailor made for rdo only and can't be added downstream in that state17:32
fungithis is the channel where we discuss maintenance of the collaboration services that make up opendev17:32
lecris[m]Indeed, but isn't project guidelines relevant here, e.g. moving away from requirements.txt?17:33
fungipossibly relevant to openstack, not to everything in opendev17:33
fungiopenstack has its own irc channels (many of them)17:33
fungialso rdo has #rdo17:34
lecris[m]Well at least the webhook discussion was not out of scope17:35
fungiyep17:35
ianwdo we install a theme for our etherpad or something?  i just hit another etherpad and was surprised how different it looked20:51
ianwhttps://etherpad.wikimedia.org/p/hello looks more like that20:52
Clark[m]We use the old theme which is more dense. The new one looks like google docs20:55
ianwhuh i guess i never realised there was a new one :)20:59
TheJuliao/ Any chance I can get a node held from a failing jenkins job?21:08
TheJuliaironic-grenade, we're seeing it fail on numerous changes right now :(21:12
Clark[m]We don't have Jenkins :) is there a specific repo or change or just that job?21:18
JayFPretty much any ironic-grenade that's failing is likely to work21:18
JayFunless we discover some different way for it to fail :-|21:18
clarkbapparently project is a required field I'll set it to openstack/ironic since the vast majority of the jobs run against that project21:22
clarkbJayF: TheJulia the next failure of ironic-grenade against openstack/ironic will be held21:23
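[Editor's note: for reference, the hold clarkb set corresponds to a zuul autohold along these lines — the reason string is illustrative:]

```shell
# Admin-side sketch: hold the nodes from the next failing build of
# ironic-grenade against openstack/ironic (reason text is made up here).
zuul autohold --tenant openstack --project openstack/ironic \
    --job ironic-grenade --reason "clarkb: debug grenade failures" \
    --count 1
```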
JayFthank you!21:23
TheJuliaclarkb: muchas gracias21:23
JayFTheJulia: I'm going to go recheck that grenade-only change iury posted and see if we can get it to spawn a failure21:23
TheJuliaJayF: I'm going to recheck one of the other stacked change since it had 2x fails in a row and is entirely unrelated21:23
TheJuliaunless you scream "noooo" in the next minute21:24
JayFis that the ironic or ironic-lib?21:24
TheJuliaironic21:24
JayFack; good. I thought the one you were looking at was -lib or I wouldn't have rechecked iury's :D21:24
TheJuliai figured out the ironic-lib one21:24
TheJuliait was me mixed up the mock21:24
JayFactual failure hiding inside all the random failures21:24
TheJuliaheh21:28
fungireal bugs like to hide in thickets of false bugs to evade detection21:38
opendevreviewIan Wienand proposed zuul/zuul-jobs master: remove-registry-tag: no_log assert  https://review.opendev.org/c/zuul/zuul-jobs/+/87951821:41
clarkbianw: I've reviewed the wheel audit script and it lgtm21:41
clarkbleft a comment to indicate that in gerrit too21:41
ianwclarkb: thanks!  21:43
ianwhave we used "vos backup" before?21:43
clarkbif we have I don't remember :)21:44
ianwi'm thinking to start cleaning up the volumes more or less one-by-one, we can backup before we run anything destructive, then remove the backup when we're happy21:45
fungii document vos backup in the file deletion section, i've used it on occasion for that process21:46
ianwoh right, yeah you can actually *read* documentation, haha https://docs.opendev.org/opendev/system-config/latest/afs.html#deleting-files21:48
ianwi guess you just "vos remove" the backup volume?21:49
fungiyep21:51
fungiit's a judgement call. i don't usually do it unless i'm making file changes that i deem especially risky/error prone21:52
fungiand mainly useful in case you need to immediately undo something21:52
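[Editor's note: the backup/restore-point flow ianw and fungi describe, sketched with OpenAFS vos commands — the volume, server, and partition names are illustrative:]

```shell
# Create a .backup clone of the read/write volume before destructive work.
vos backup mirror.example

# ... run the risky cleanup against the read/write volume ...

# Once happy, delete the backup clone (named <volume>.backup).
vos remove -server afs01.example.org -partition a -id mirror.example.backup
```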
opendevreviewIan Wienand proposed opendev/project-config master: Add renames for April 6th outage  https://review.opendev.org/c/opendev/project-config/+/87941421:57
opendevreviewMerged zuul/zuul-jobs master: promote-image-container: do not delete tags  https://review.opendev.org/c/zuul/zuul-jobs/+/87861222:02
opendevreviewMerged zuul/zuul-jobs master: build-container-image: expand docs  https://review.opendev.org/c/zuul/zuul-jobs/+/87900822:02
opendevreviewMerged zuul/zuul-jobs master: promote-container-image: add promote_container_image_method  https://review.opendev.org/c/zuul/zuul-jobs/+/87900922:04
opendevreviewMerged zuul/zuul-jobs master: remove-registry-tag: role to delete tags from registry  https://review.opendev.org/c/zuul/zuul-jobs/+/87861422:04
opendevreviewMerged zuul/zuul-jobs master: promote-container-image: use generic tag removal role  https://review.opendev.org/c/zuul/zuul-jobs/+/87874022:04
opendevreviewMerged zuul/zuul-jobs master: remove-registry-tag: update docker age match  https://review.opendev.org/c/zuul/zuul-jobs/+/87881022:04
opendevreviewMerged zuul/zuul-jobs master: remove-registry-tag: no_log assert  https://review.opendev.org/c/zuul/zuul-jobs/+/87951822:04
ianwclarkb / fungi: i've updated https://etherpad.opendev.org/p/gerrit-upgrade-3.7 in the rename, section 4 "force merge changes" to do what fungi suggested -- merge first two renames, wait and clear the no-op manage-projects job, then unemergency for the last22:15
fungii suggested it half in jest, but it really does require the fewest changes to the process. though it also requires a bit of extra care22:17
ianwi think it's the best idea, we just want to confirm the jobs are doing what we think they're doing on the day22:18
Clark[m]Ya it's nice to see that everything is running as expected at the end22:19
fungigood point, i do appreciate that it's a quick confidence test22:20
opendevreviewIan Wienand proposed openstack/project-config master: Ironic program adopting virtualpdu  https://review.opendev.org/c/openstack/project-config/+/87623122:22
opendevreviewIan Wienand proposed openstack/project-config master: Rename x/xstatic-angular-fileupload->openstack/xstatic-angular-fileupload  https://review.opendev.org/c/openstack/project-config/+/87384322:22
opendevreviewIan Wienand proposed openstack/project-config master: Rename x/ovn-bgp-agent to openstack/ovn-bgp-agent  https://review.opendev.org/c/openstack/project-config/+/87945622:22
ianw^ there were no conflicts, but i've stacked those on top of each other in the order they came in22:22
fungialso i'm not a huge fan of the squashing idea because we lose some easy tracking of the individual changes back to their authors if we abandon 2/3 of them22:23
fungigranted, it's only a very minor concern22:24
ianwi think if we want to call it on any further renames, we can commit https://review.opendev.org/c/opendev/project-config/+/87941422:26
fungiwhen did we schedule the maintenance window for? i've already forgotten22:27
ianwplanned April 6, 2023 beginning at 22:00 UTC22:27
fungithanks, i wanted to say utc thursday but i wasn't certain22:27
fungiso yeah, we're 48 hours out, this seems like a reasonable cut-off time22:27
ianwjust for an abundance of caution -> https://opendev.org/opendev/system-config/src/branch/master/playbooks/manage-projects.yaml i'm confident that manage-projects will do nothing with gitea and review in emergency22:30

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!