19:01:04 #startmeeting infra
19:01:04 Meeting started Tue Jul 13 19:01:04 2021 UTC and is due to finish in 60 minutes. The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:05 The meeting name has been set to 'infra'
19:01:08 #link http://lists.opendev.org/pipermail/service-discuss/2021-July/000267.html Our Agenda
19:01:10 Juuuust made it to Seattle.
19:01:21 So I may be half paying attention.
19:01:28 diablo_rojo_phone: don't worry about it
19:01:35 #topic Announcements
19:01:58 o/
19:02:03 A reminder that the gerrit server will be moving July 18 at 23:00 UTC. We'll talk about that in more depth later in the meeting though
19:02:12 Other than that I dind't have any announcements
19:02:19 I can't type didn't today
19:02:31 #topic Actions from last meeting
19:02:37 #link http://eavesdrop.openstack.org/meetings/infra/2021/infra.2021-07-06-19.01.txt minutes from last meeting
19:02:44 #action someone write spec to replace Cacti with Prometheus
19:03:08 That hasn't happened yet. I'm not too worried about it as we've been focused on updates to other systems. But once we can come up for air that would be a good thing to look at next
19:03:15 There were no other actions recorded that I saw
19:03:23 #topic Specs Approval
19:03:33 #link https://review.opendev.org/796156 Supporting communications on our very own Matrix homeserver
19:04:01 I think this is now in a position where people can review it with enough real world information to make informed decisions
19:04:49 We have a matrix homeserver up for opendev.org. We have a test channel on that server. infra-root can invite themselves to that channel using the admin account (details in the typical location) or you can ask corvus, mordred, fungi, or myself to add you
19:05:09 though it doesn't look like fungi has made it in there yet
19:05:28 given all that do we think we are in a position to put the spec up for approval now? I'm comfortable with that myself
19:05:35 ++
19:05:35 corvus: ^ you may have input
19:06:39 considering the focus on gerrit things this week and that fungi is not currently around, what about asking for reviews before 7/22 and then approving it if there are no objections?
19:06:45 (gives people a few days after this week to review it)
19:07:15 wfm
19:07:47 Alright infra-root, please review https://review.opendev.org/796156 by 7/22
19:08:12 and feel free to interact with the system that is there to aid your review
19:08:40 #topic Topics
19:08:49 #topic review server upgrade
19:09:05 This is still scheduled for July 18 at 23:00 UTC
19:09:39 we ran into a small hiccup today when it was noticed that depends-on had stopped working. Turns out this was related to switching zuul to talk to review01.opendev.org instead of review.opendev.org. Zuul uses that connection config to line up Depends-On references and determine which ones are valid
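For context on how that connection config affects Depends-On matching: Zuul compares the URLs in Depends-On footers against the canonical hostname of its Gerrit connection, and if canonical_hostname is not set it defaults to the server value, which is presumably why pointing Zuul at review01.opendev.org stopped review.opendev.org URLs from matching. A minimal sketch of the relevant zuul.conf stanza; the option names come from Zuul's gerrit driver, but the exact values here are assumptions rather than our production config:

    # Hypothetical zuul.conf excerpt, not the production file on the scheduler.
    [connection gerrit]
    driver=gerrit
    # Host Zuul actually connects to for the event stream and API.
    server=review01.opendev.org
    # Hostname matched against Depends-On URLs in commit messages.
    # If omitted it defaults to "server", so changing "server" alone changes
    # which Depends-On URLs Zuul recognizes.
    canonical_hostname=review.opendev.org
    user=zuul
    sshkey=/var/lib/zuul/ssh/id_rsa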
19:10:04 A revert of that change is in the gate right now and we'll need to restart Zuul once the deploy job for zuul runs for that
19:10:29 To handle the DNS update during the migration I think we can force-merge the DNS change in gerrit on review02, then manually pull and run the dns deploy playbook on bridge
19:10:39 Not as elegant but prevents depends-on from being ignored
19:10:48 ianw: ^ not sure if you had caught up on all that scrollback yet but that is the tldr
19:11:02 ++ yep, i will re-write the checklist today to account for that
19:11:14 I also pushed up a change to reduce the TTL on the review.o.o cname record to 300 seconds since updating that will be more important now
19:11:26 we should be able to land that one today to get it out of the way
19:12:03 yep, good idea
19:12:15 I think it would be good to do a resync of review02 today as well. Then we can spin it up with the current gerrit image and make sure everything looks happy
19:12:36 I have a related item on the next topic, but I'll hold off in case there are other upgrade specific things to go over
19:13:02 oh! have reminders gone out yet? we should send those to the mailing list. The meme is people don't read but we can only do our best :)
19:13:24 send our reminder gifs?
19:13:24 ahh, i said i would do that and got sidetracked sorry. i'll send one in reply to the original now
19:13:33 ianw: thanks!
19:13:44 corvus: no, reminders that the server will have a longer than typical outage
19:13:59 corvus: but adding gifs is probably a good way to get people to read them :)
19:14:54 Anything else?
19:16:01 #topic Gerrit Account Cleanup
19:16:30 I won't bother with cleaning up the ~176 external ID conflicts that I retired accounts for until after the move
19:16:44 however efoley reached out yesterday after they managed to get themselves into a bad spot with their account.
19:17:17 The root cause has been captured in https://bugs.chromium.org/p/gerrit/issues/detail?id=14776 tldr is deleting emails in the web ui is not safe: if you delete the email for your openid it also deletes your openid externalid
19:17:57 We can't fix this in a simple way because of the conflicts I have been working to clean up. What we can do is take advantage of the downtime to push a fix to the externalid records under gerrit, then we'll reindex anyway and in theory be happy
19:18:40 ianw: ^ I started looking at the testing and staging of this on review02 today. That led me to create a /home/gerrit2/scratch/ dir where I was going to clone All-Users to and then checkout refs/meta/external-ids to make the necessary edits so they are staged and ready to go (and possibly test them?)
19:19:08 ianw: but I ran into a weird thing: I don't want that dir to be backed up because the refs/meta/externalids checkout has tons of small files and we already backup the source repo
19:19:24 ianw: is there a better location for me to do that? maybe /tmp? we can figure that out after the meeting too
19:20:03 hrm, yeah the root disk should be big enough to handle it
19:20:05 But I am hoping to be able to stage that all up, push the fixes back into review_site/git/All-Users.git after we sync up to current state, then maybe have efoley test a login against review02 if we turn it on
19:20:20 I'll coordinate that with ianw and we can edit the outage doc with what we learn
19:20:58 otherwise we could do something like add an exclude to ~gerrit2/tmp ...
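For the external-id staging described a few messages back, a rough sketch of what the clone-and-edit workflow could look like, working out of a throwaway directory so the checkout with all its small files never sits in a backed-up path. The All-Users path and the refs/meta/external-ids ref come from the discussion above; the working directory and commit message are purely illustrative, not the exact commands that will be run:

    # Hypothetical staging of external-id fixes on review02; verify paths first.
    mkdir -p /tmp/external-id-work && cd /tmp/external-id-work
    git clone /home/gerrit2/review_site/git/All-Users.git
    cd All-Users
    # refs/meta/* are not fetched by a default clone, so fetch the ref explicitly.
    git fetch origin refs/meta/external-ids:refs/meta/external-ids
    git checkout refs/meta/external-ids
    # ... edit the per-account external-id files to restore the deleted openid ...
    git commit -a -m "Repair external ids broken by email deletion"
    # Push back only during the outage window, with Gerrit stopped, then reindex.
    git push origin HEAD:refs/meta/external-ids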
that might be a good idea, as even on the old server we've acquired random intermediate bits of little value
19:21:52 so working in ~/tmp/ ... would be good so that we know we can always remove those bits (and as a signal to remind us to consider it ephemeral and do things another way if we want it persisted)
19:22:23 not a bad idea. I actually do that on my personal machines because tmp is small
19:23:12 Anyway I think /tmp will work for now and we can coordinate the syncing and testing bits later
19:23:38 Another odd thing I noticed when doing that is /home/gerrit2 is root:root ownership
19:23:55 which means gerrit2 can't create dirs or files in its own homedir root. I suspect something related to docker containers with that?
19:24:12 Not critical either, but things like that make me want to turn gerrit on if we can and ensure it starts up cleanly
19:24:36 #topic gitea backups failing to one backup target
19:24:44 hrm, i quite possibly did a mkdir of /home/gerrit2 to get the LVM mounted there
19:24:49 ianw: ah
19:25:03 ianw: re gitea backups, do we still suspect timeout values in mysql configs?
19:25:06 so that would be an oversight. i definitely have started it and played with it, so it does minimally work
19:25:11 cool
19:25:42 umm, last thing was the ipv6 between gitea01 -> backup seemed to not work
19:25:54 oh right, this is the vexxhost between-regions routing problem
19:25:55 i've reported that to mnaser and i believe an issue was raised, but i haven't followed up since
19:26:20 ok. This topic is on here mostly to remind me to catch up on any updates. Sounds like we are still waiting for vexxhost
19:26:35 maybe we should consider dropping the AAAA record for now?
19:27:16 it seems unfortunate but we could
19:27:28 also the filesystem component of the backup is working
19:27:33 so it must be falling back to ipv4
19:27:54 I wonder if the streaming setup for the db prevents fallback from working
19:27:59 because the stream gets interrupted
19:28:09 vs the fs backup which can simply reconnect and then start doing files
19:29:16 afaics borg doesn't log anything of interest relating to that
19:29:32 i'll have to fiddle more, i'll put it on the todo list
19:29:34 It seems plausible at least
19:29:43 the ipv6 may be a red herring for the actual problem
19:29:52 ya
19:29:55 and thanks
19:29:57 it would just be nice to debug one thing at a time :)
19:30:06 ++
19:30:14 #topic Gitea 1.14.4 upgrade scheduling
19:30:25 #link https://review.opendev.org/c/opendev/system-config/+/800274 Gitea 1.14.4 upgrade
19:30:40 I've got this change passing now. It is one of the larger Gitea upgrade changes that we've had, I think
19:30:55 worthy of careful review. There is a link to a held test node running that code too
19:31:17 Given everything else happening I'm happy to defer this to next week assuming things settle down a bit :) But if you have time for review this week that would be helpful
19:31:26 as that way I can address any concerns before we actually do the upgrade
19:31:55 ++ i played around and the change overall lgtm
19:32:33 #topic Scheduling Gerrit Project Renames
19:33:40 We said previously that we'd do these the week after the server upgrade/move. Does anyone have opinions on a specific day for that? Probably Monday 7/26 or Friday 7/30? (I think I'm supposed to not be around on 7/30)
19:34:26 Any objections to pencilling in 7/26?
19:35:34 Let's pencil that in then and when fungi returns we can talk about a specific timeframe
19:35:45 I expect the rename outage to be quite short as we can do online reindexing
19:35:53 #topic Open Discussion
19:35:58 Anything else?
19:36:33 I got https://paste01.opendev.org/ up
19:36:52 i have a minor change to db layout to merge, but will then import the old database
19:37:13 if it seems to work, i'm presuming no objections to changing the paste.openstack.org CNAME ?
19:37:34 sounds good to me
19:38:22 i don't think the service has a bright future, but it should continue chugging along for a while in its container
19:39:08 as with all good web apps, every library it depends on has changed to the point that you basically have to rewrite everything to update it
19:39:30 fun. I think vexxhost was doing some minor maintenance with it though
19:40:45 yeah, i got into a "this bit is deprecated from the main framework, use this library -- oh, that library is now unmaintained and has a bug that makes it not work with later versions of the main framework" loop and gave up
19:42:56 Sounds like that may be about it. I'll go ahead and call the meeting here so that we can proceed with the Zuul restart
19:43:11 As always, feel free to bring discussion up in #opendev or at service-discuss@lists.opendev.org
19:43:14 Thank you everyone
19:43:16 #endmeeting
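One note on the gitea backup discussion earlier in the log: the hypothesis there is that the filesystem backup can effectively be retried while the database backup is a single stream piped into borg, so an interrupted connection (for example over the broken IPv6 path) fails the whole archive. A rough sketch of the two shapes of invocation; the repository path, archive names, and dump options are placeholders, not the actual backup roles in system-config:

    # Hypothetical borg invocations on gitea01; paths and names are illustrative.

    # Filesystem backup: borg walks the files itself, so a failed run can simply
    # be rerun from the start on the next attempt.
    borg create backup02:/opt/backups/gitea01::filesystem-{now} /var/gitea/data

    # Database backup: the dump is streamed through stdin ("-"), so if the
    # network path drops mid-stream there is nothing to resume; the run fails.
    mysqldump --all-databases --single-transaction | \
      borg create backup02:/opt/backups/gitea01::mysql-{now} -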