Thursday, 2024-04-25

06:31 <albexl[m]> Hello, I am part of volvocars's Zuul team. I would like to be added to the groups created for [these repositories](https://review.opendev.org/c/openstack/project-config/+/916038) (volvocars-powertrain-build-core, volvocars-powertrain-build-release, volvocars-project-config-core, volvocars-project-config-release) so that I get access to -2...+2 the changes in those repositories and also be able to add more of my team members if needed. Can that be fixed? Thanks in advance.
08:13 <opendevreview> Jan Marchel proposed openstack/project-config master: Add new repository for NebulOuS testing data  https://review.opendev.org/c/openstack/project-config/+/916876
08:56 <frickler> albexl[m]: done for three of the groups. I'm unsure about the volvocars-project-config-release group, do you really intend to push tags and branches for this repo? IIUC this is generally considered a bad idea for a config repo, so I would suggest to amend that ACL instead (cc clarkb fungi corvus, cf. https://review.opendev.org/c/openstack/project-config/+/916038/13/gerrit/acls/volvocars/project-config.config )
09:05 <frickler> well actually tags could be o.k., we do even have exactly one on openstack/project-config
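
For reference, a minimal sketch of the kind of ACL stanza being discussed, assuming the usual Gerrit project.config syntax used for the .config files under gerrit/acls/ in project-config; the group name comes from the log, the exact permission lines are illustrative:

    [access "refs/tags/*"]
      # tag creation; per frickler this can be acceptable even for a config repo
      createSignedTag = group volvocars-project-config-release
    [access "refs/heads/*"]
      # branch creation; generally considered a bad idea for a config repo,
      # so this is the part one would amend out of the ACL
      create = group volvocars-project-config-release
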
10:26 <albexl[m]> <frickler> "albexl: done for three of the..." <- Thanks
10:32 <opendevreview> Alberto Gonzalez proposed openstack/project-config master: Add "Verified" label permissions to volvocars groups  https://review.opendev.org/c/openstack/project-config/+/917017
10:53 <opendevreview> Alberto Gonzalez proposed openstack/project-config master: Add "Verified" label permissions to volvocars groups  https://review.opendev.org/c/openstack/project-config/+/917017
13:26 <corvus> frickler: albexl i did just wake up, but i *think* we need to force-merge the initial project-config change for the volvocars tenant
13:28 <corvus> i think we probably haven't thought this through 100% since we add tenants so rarely -- but now that i think about it, maybe we should have had the volvocars/project-config repo be an import of one of the other project-config repos... like zuul/project-config. then it would have started with a valid pipeline config, and could be modified from there
13:28 <corvus> so maybe that's what we should do in the future, but for now, we just force-merge https://review.opendev.org/916802
13:29 <corvus> it does look like a force-merge is how we bootstrapped the zuul tenant: https://review.opendev.org/648523
13:30 <corvus> Albin Vass: ^ fyi
13:38 <fungi> oh i like the import idea for a future process improvement
13:39 <corvus> if no one objects, i'll go ahead and force-merge 916802
13:39 <fungi> we do reject importing repo content which contains zuul configuration though, so we'd need to figure out how to special-case it
13:40 <corvus> fungi: lol :)
13:40 <fungi> corvus: no objection from me
13:42 <corvus> done
15:00 <frickler> sorry I was away for a bit. but right, the initial commit in a config-repo isn't self-testing. if the import from a different repo turns out to be difficult, we could maybe also bootstrap with a tenant config in openstack/project-config that gets removed after the initial commit? or would that give a conflict?
15:06 <corvus> i think moving pipeline definitions between repos could be tricky
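
For reference, the bootstrap problem above is about the very first commit to a new tenant's config repo, which defines pipelines that cannot yet be self-tested; a rough sketch of the corresponding tenant entry in Zuul's tenant configuration, assuming OpenDev's usual main.yaml layout (only volvocars/project-config is named in the log, the other project name is illustrative):

    - tenant:
        name: volvocars
        source:
          gerrit:
            config-projects:
              - volvocars/project-config    # trusted repo holding pipelines; its initial change had to be force-merged
            untrusted-projects:
              - volvocars/powertrain-build  # illustrative project name
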
15:15 <clarkb> 2024-04-25 04:09:35.283 |   Downloading glean-1.24.0-py3-none-any.whl (111 kB) from https://nb02.opendev.org/ubuntu-jammy-427f2442d72c4a61ba03ab1462efea4a.log would indicate we are using glean as long as the new image has uploaded to the clouds successfully
15:16 <fungi> perfect
15:16 <clarkb> *we are using new glean
15:32 <frickler> 24.04 is out https://lists.ubuntu.com/archives/ubuntu-announce/2024-April/000301.html
15:33 <clarkb> frickler: has the dib change been updated now that there is a glean release? we can probably land that today once it is done
15:34 <frickler> clarkb: not yet, but I think I can get to that before I eow
15:34 <clarkb> sounds good, thanks!
15:41 <frickler> clarkb: fungi: actually one question about these three DIB_* vars we already have for jammy and releases before. do we know whether these are still needed and why or are these just cargo-culted and we could try building noble without them?
15:41 <frickler> https://review.opendev.org/c/openstack/diskimage-builder/+/915915/4/.zuul.d/jobs.yaml#264
15:42 <frickler> I did drop them for my local build and didn't notice anything different fwiw
15:43 <clarkb> frickler: DIB_APT_LOCAL_CACHE disables the apt cache that dib can manage. We don't want it in the test jobs because they are single use nodes (we do enable it in our normal builders). I don't know about the DISABLE_APT_CLEANUP var. The no check gpg is necessary when using our mirrors because they are not properly signed. It works with noble because we use upstream for now.
15:44 <clarkb> DIB_DISABLE_APT_CLEANUP: by default dib cleans up the apt cache in the image before creating it to try and get a smaller image. I don't know why we would disable that here. I think we can probably drop that flag but keep the other two
15:45 <frickler> ah, o.k., thx for the explanations, I'll do that
15:46 <frickler> well I think for the test job the DISABLE_APT_CLEANUP also makes sense as it could make the test run faster skipping that step?
15:47 <clarkb> oh yes maybe that is why it is there. A speedup since the test images are already small enough we don't need to optimize for that
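
A rough sketch of the three DIB_* job variables under discussion, assuming the vars layout of dib's .zuul.d/jobs.yaml; DIB_APT_LOCAL_CACHE and DIB_DISABLE_APT_CLEANUP are named in the log, while DIB_DEBOOTSTRAP_EXTRA_ARGS is an assumption for the variable clarkb describes as "the no check gpg" one:

    vars:
      # single-use test nodes, so dib's persistent apt cache buys nothing here
      DIB_APT_LOCAL_CACHE: '0'
      # skip the apt cache cleanup step; test images are small and skipping it speeds the run up
      DIB_DISABLE_APT_CLEANUP: '1'
      # assumed variable name: OpenDev's package mirrors are not properly signed, so gpg
      # checking is disabled; noble builds from upstream for now and would not need this
      DIB_DEBOOTSTRAP_EXTRA_ARGS: '--no-check-gpg'
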
15:51 <opendevreview> Dr. Jens Harbott proposed openstack/diskimage-builder master: Add Ubuntu 24.04 (noble) build to testing  https://review.opendev.org/c/openstack/diskimage-builder/+/915915
15:52 <opendevreview> Dr. Jens Harbott proposed openstack/diskimage-builder master: Add tox-py311 job  https://review.opendev.org/c/openstack/diskimage-builder/+/917058
15:53 <clarkb> frickler: any reason to not add that job to the gate (I notice the noble job was also not in the gate)
15:54 <frickler> clarkb: no real reason other than me not thinking about it. will update
15:55 <opendevreview> Dr. Jens Harbott proposed openstack/diskimage-builder master: Add Ubuntu 24.04 (noble) build to testing  https://review.opendev.org/c/openstack/diskimage-builder/+/915915
15:57 <opendevreview> Dr. Jens Harbott proposed openstack/diskimage-builder master: Add tox-py311 job  https://review.opendev.org/c/openstack/diskimage-builder/+/917058
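
The updated patchsets above add the new jobs to the gate pipeline as well as check; in a Zuul project stanza that typically looks something like the following (job names here are illustrative, the real ones are in the linked changes):

    - project:
        check:
          jobs:
            - dib-functests-ubuntu-noble
            - dib-tox-py311
        gate:
          jobs:
            - dib-functests-ubuntu-noble
            - dib-tox-py311
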
16:59 <clarkb> fungi: frickler: I'm about to grab some late breakfast after morning meetings, but if we still think now is a good time to do the gitea db upgrades I'm generally around
17:01 <fungi> sounds good. i'm probably vanishing for a while at 19:30 utc or so to meet a friend for an early dinner, but available until then
17:02 <clarkb> I've just checked on nl01 and I see image uploads for jammy were successful. Considering we haven't had any screaming that none of the jobs are working anymore I suspect glean is happy
17:02 <clarkb> fungi: cool it should take about an hour to gate and then 15 minutes to deploy which means if we approve it nowish should be done well before you pop out
17:02 <clarkb> fungi: do you want to approve it or should I?
17:07 <fungi> i can do it
17:08 <fungi> approved it now
18:03 <fungi> zuul estimates it will merge around 18:30
18:04 <fungi> the hourly jobs should hopefully be done by then, so deploy ought to go quickly
18:09 <clarkb> excellent
18:17 <opendevreview> Clark Boylan proposed opendev/system-config master: DNM Forced fail on Gerrit to test the 3.9 upgrade  https://review.opendev.org/c/opendev/system-config/+/893571
18:17 <frickler> clarkb: fungi: forgot to ask earlier, are we ready to proceed with the ssh key update stack?
18:18 <clarkb> frickler: I think so
18:18 <clarkb> I put a hold on the gerrit 3.9 job for 893517. Will use that to test the downgrade process and check things in the release notes
18:33 <opendevreview> Merged opendev/system-config master: Upgrade Gitea's backend DB to MariaDB 10.11  https://review.opendev.org/c/opendev/system-config/+/916847
18:34 <clarkb> the deployment is running
18:34 <fungi> yep, that was quick
18:35 <fungi> watching mariadb container processes bounce on gitea09
18:35 <clarkb> gitea09 is done
18:36 <clarkb> web ui seems to work
18:36 <fungi> Version: '10.11.7-MariaDB-1:10.11.7+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution
18:36 <clarkb> and gitea10 is done now too
18:38 <clarkb> gitea11 is done now too. So far this seems to be working as expected
18:42 <clarkb> all 6 are done now
18:42 <fungi> and reported successful
18:43 <clarkb> some quick clicking around in the web ui for each lgtm. docker ps shows the expected container image for mariadb running on all 6 and spot checking the logs shows upgrades ran as expected
18:44 <clarkb> I'm also able to git clone system-config so this is looking good
19:00 <clarkb> The last remaining db that needs an update is Gerrit's and other than needing a downtime it's a fairly low risk change due to how gerrit uses that DB
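
For reference, the change that merged above bumps the MariaDB image used by the gitea backends; a minimal sketch of the kind of docker-compose service involved, assuming OpenDev's usual containerized layout (credentials and paths here are placeholders, the real values live in host configuration):

    services:
      mariadb:
        image: docker.io/library/mariadb:10.11     # bumped to the 10.11 series
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: changeme            # placeholder
          MYSQL_DATABASE: gitea
        volumes:
          - /var/lib/gitea-mariadb:/var/lib/mysql  # placeholder path
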
19:31 <fungi> okay, heading out for a while, but i'll be back later
22:17 <clarkb> I'm (slowly) starting to put together gerrit 3.9 upgrade planning notes here: https://etherpad.opendev.org/p/gerrit-upgrade-3.9
22:21 <fungi> definitely a good start
