19:01:25 #startmeeting infra
19:01:25 Meeting started Tue Jul 11 19:01:25 2023 UTC and is due to finish in 60 minutes. The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:25 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:25 The meeting name has been set to 'infra'
19:01:49 #link https://lists.opendev.org/archives/list/service-discuss@lists.opendev.org/thread/FV2S3YE62K34SWSZRQNISEERZU3IR5A7/ Our Agenda
19:02:15 #topic Announcements
19:02:25 I did make it to UTC+11
19:03:14 I'm finding that the best time to sit at a computer is something like 01:00/02:00 UTC and later, simply due to weather. But we'll see as I get more settled; this is only day 5 or something there
19:04:36 #topic Topics
19:04:43 #topic Bastion Host Updates
19:04:56 #link https://review.opendev.org/q/topic:bridge-backups
19:05:12 Looks like this set of changes from ianw could still use some infra-root review
19:05:42 if we can get that review done we can plan the sharing of the individual key portions
19:06:17 #topic Mailman 3
19:06:32 fungi: any updates on the vhosting? then we can talk about the HTTP 429 error emails
19:06:38 no new progress, though a couple of things to bring up, yeah
19:06:59 go for it on the new things
19:07:03 the first you mentioned: i'm looking to see if there's a way to create fallback error page templates for django
19:07:19 but perhaps someone more familiar with django knows?
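[Editor's note: the fallback behavior being discussed could work roughly like the sketch below. This is a minimal, hypothetical Python illustration of the lookup logic, not mailman-web's actual template resolution; the directory layout and the generic "error.html" filename are assumptions for the example.]

```python
import os

# Sketch of a fallback error-template lookup: prefer a
# status-specific template (e.g. "429.html") and fall back to a
# single generic "error.html" when no specific one exists.
def pick_error_template(status_code, template_dir):
    specific = "%d.html" % status_code
    if os.path.exists(os.path.join(template_dir, specific)):
        return specific
    return "error.html"
```

Under this scheme, adding just a 429.html covers the rate-limit case, while the generic file catches any status nobody wrote a template for.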
19:07:35 i know we can create specific error page templates for each status
19:08:12 so we could create a 429 error page template, but what i'm unsure about is whether there's a way to have an error page template that applies to any error response which doesn't have its own separate template
19:08:49 i think i recall tonyb mentioning some familiarity with django so i might pick his brain later if so
19:09:06 I'm unsure myself
19:09:14 assuming my web searches and documentation digging turn up little of value
19:09:23 a default would be nice if possible but I suspect adding a 429 file would be a big improvement alone
19:09:53 I don't think it was me
19:10:01 oh too bad
19:10:50 sorry I'll do better :P
19:10:55 the other item is we've had a couple of (mild) spam incidents on the rust-vmm ml, similar to what hit zuul-discuss a few months back. for now it's just been one address; i initially unsubscribed them, then they resubscribed and sent more, so after the second time i switched the default moderation policy for their address to discard instead of unsubscribing them
19:11:58 but we still might consider switching the default moderation policy for all users on that list to moderate and then individually updating them to accept after they send good messages
19:12:17 that is, if the problem continues
19:12:38 I'm good with that, but ideally if we can find a moderator in that community to do the filtering
19:12:50 I'm not sure we should be filtering for random lists like that.
19:13:44 well, yes, i stepped in as a moderator since i was already subscribed and the only current community moderator had gone on sabbatical, but we found another volunteer to take over now
19:14:03 great
19:14:06 my concern is it seems like the killer feature of mm3, the ability for people to post via http, increases the spam risk as well
19:14:50 which is going to mean a potentially increased amount of work for list moderators
19:15:13 Though two(?) incidents in ~6 months isn't too bad
19:16:08 yeah, basically
19:16:21 but these are also very low-volume and fairly low-profile lists
19:16:39 so i don't know how that may translate to some of the more established lists once they get migrated
19:16:48 something to keep an eye out for
19:16:55 there is probably only one way to find out, unfortunately
19:17:00 agreed
19:17:05 anyway, that's all i had on this topic
19:17:15 #topic Gerrit Updates
19:17:47 We are still building a Gerrit 3.8 RC image. This is only used for testing the 3.7 to 3.8 upgrade as well as general gerrit tests on the 3.8 version, but it would be good to fix that
19:17:59 #link https://review.opendev.org/c/opendev/system-config/+/885317?usp=dashboard Build final 3.8.0 release images
19:18:15 Additionally the Gerrit replication tasks stuff is still ongoing
19:18:42 I think my recommendation at this point is that we revert the bind mount for the task data so that when we periodically update our gerrit image and replace the gerrit container those files get automatically cleaned up
19:18:55 #link https://review.opendev.org/c/opendev/system-config/+/884779?usp=dashboard Stop bind mounting replication task file location
19:19:32 If we can get reviews on one or both of those then we can coordinate the moves on the server itself to ensure we're using the latest image and also cleaning up the leaked files etc
19:19:46 what's the impact to container restarts?
19:19:59 if we down/up the gerrit container, do we lose queued replication events?
19:20:27 fungi: yes. This was the case until very recently when I swapped out the giteas though, so we were living with that for a while already
19:21:08 The tradeoff here is that having many leaked files on disk is potentially problematic when that number gets large enough.
Also these bad replication tasks produce errors on gerrit startup that flood the logs
19:21:26 we'd be trading better replication resiliency for better service resiliency, I think
19:22:53 fungi: that said, having another set of eyes look over the situation may produce additional ideas. The alternative I've got is the gerrit container startup script updates that try to clean up the leaked files for us. I don't think the script will clear all the files currently, but having a smaller set to look at will help identify the additional ones
19:23:39 #link https://review.opendev.org/c/opendev/system-config/+/880672 Clear leaked replication tasks at gerrit startup using a script
19:24:24 I'm happy to continue down that path as well, it's just the most risky and effort-intensive option
19:24:26 thanks, makes sense
19:24:37 risky because we are automating file deletions
19:25:03 for a todo here, maybe fungi can take a look this week and next week we can pick an option and proceed from there?
19:25:47 The other Gerrit item is disallowing implicit merges across branches in our All-Projects ACL
19:26:04 I can't think of any reason not to do this and I don't recall any objections to this in prior meetings where this was discussed
19:26:15 yeah, i should be able to
19:26:28 receive.rejectImplicitMerges is the config option to reject those when set to true
19:26:32 did i propose a change for that? i can't even remember now
19:26:48 fungi: I don't think so, since it has to be done directly in All-Projects then simply recorded in our docs?
19:26:55 there may be a change to do the recording bit /me looks
19:27:19 https://review.opendev.org/c/opendev/system-config/+/885318
19:27:39 so ya, if you have time to push that All-Projects update I think you can +A the change to record it in our docs
19:28:39 oh, cool
19:28:45 i guess someone did propose that
19:29:23 that was all I had for gerrit. Anything else before we move on?
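[Editor's note: for reference, the receive.rejectImplicitMerges option discussed above is set in a project's project.config (for All-Projects, on its refs/meta/config branch). A sketch of what the stanza would look like, per the documented option name, not a copy of OpenDev's actual config:]

```
[receive]
  rejectImplicitMerges = true
```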
19:31:04 #topic Server Upgrades
19:31:17 I'm not aware of any changes here since we last met
19:31:44 tonyb helped push the insecure ci registry upgrade through. I may still need to delete the old server; I can't recall if I did that right now
19:32:09 ze04-ze06 upgraded to jammy today
19:32:23 I think tonyb is looking at other things now in order to diversify the bootstrapping process as an OpenDev contributor, so I'll try to look at some of the remaining stragglers myself as I have time
19:32:26 corvus: excellent
19:32:44 the cleanup is removing it (01) from the inventory and then infra-root deleting the 01 vm?
19:32:59 tonyb: correct
19:33:12 Okay
19:34:11 #topic Fedora Cleanup
19:34:38 tonyb: I've lost track of where we were in the mirror configuration stuff. Are there changes you need reviewed, or input on direction?
19:35:21 I need to update the mirror setup with the new mirrorinfo variable
19:35:59 tonyb: is that something where some dedicated time to work through it would be helpful? if so we can probably sort that out with newly overlapping timezones
19:36:44 Yeah, that's a good idea. I understand the concept of what needs to happen but I'm in danger of overthinking it
19:37:04 ok, let's sync up when it isn't first thing in the morning for both of us and take it from there
19:37:24 great
19:37:28 #topic Quo vadis Storyboard
19:38:16 i think i switched a neutron deliverable repo over to inactive and updated its description to point to lp last week? openstack/networking-odl
19:38:18 One thing I noticed the other day is that some projects like starlingx are still creating subprojects in storyboard. We haven't told them to stop and I'm not sure we should, but they were confused that it seems to take some time to do that creation.
I think we are only creating new storyboard projects once a day
19:38:56 At this point I'm not sure there is much benefit in having project-config updates trigger the storyboard job more quickly
19:39:16 But it was a thing people noticed so I'm mentioning it here
19:39:35 there was also some discussion about sb in the #openstack-sdks channel, in particular a user was surprised to discover that an unescaped