Thursday, 2024-03-07

tkajinamo/ I wonder if I can get https://review.opendev.org/c/openstack/project-config/+/910452 merged to move the retirement process forward05:47
opendevreviewMerged openstack/project-config master: Retire puppet-sahara: End Project Gating  https://review.opendev.org/c/openstack/project-config/+/91045206:16
tkajinamthanks !06:17
fricklertkajinam: welcome, sorry that this got missed so long06:19
opendevreviewBirger J. Nordølum proposed openstack/diskimage-builder master: feat: add almalinux-container element  https://review.opendev.org/c/openstack/diskimage-builder/+/88385507:31
amoralejmay i get reviews in https://review.opendev.org/c/openstack/puppet-openstack-integration/+/910576 from unmaintained-core members?08:35
frickleramoralej: this likely isn't the best channel for this question, but I'm also not sure which one would be. there is #openstack-unmaintained but it is not official yet and pretty quiet09:13
amoraleji didn't know where to ask, tbh09:13
amoraleji'll try there, thanks09:14
jrossersimilarly on unmaintained patches - what needs to happen next with this? https://review.opendev.org/c/openstack/project-config/+/91157609:47
jrosseri have unmaintained/yoga patches that i want to merge for OSA and foresee a bunch more needed once the next branch renaming happens09:48
fungijrosser: i went ahead and approved it, seems like the openstack-unmaintained-core folks are probably still just ramping up13:16
opendevreviewMerged openstack/project-config master: Implement openstack-ansible-unmaintained-core group  https://review.opendev.org/c/openstack/project-config/+/91157613:22
fungijrosser: i'll add you as the initial member once that ^ deploys13:22
fungijrosser: i've added you now13:29
*** gthiemon1e is now known as gthiemonge13:36
fungii've gone ahead and approved https://review.opendev.org/911381 to try out the api key for rackspace dfw on nl0114:01
fungionce it's deployed, i'll put nl01 into emergency disable, pull docker://insecure-ci-registry.opendev.org:5000/quay.io/zuul-ci/nodepool-launcher:e02cb1c8119b4a00bb6c4a02e13585a4_latest and restart onto that14:07
fungiin fact, i've pre-pulled it there just to save time later14:10
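
For reference, a minimal sketch of the manual pre-pull fungi describes; the image reference is copied from the log above, the docker:// prefix is dropped for the docker CLI, and the compose path used for the later restart is an assumption about how the launcher is managed:

    docker pull insecure-ci-registry.opendev.org:5000/quay.io/zuul-ci/nodepool-launcher:e02cb1c8119b4a00bb6c4a02e13585a4_latest
    # assumed compose location for the later restart onto the pulled image:
    # cd /etc/nodepool-launcher-compose && docker-compose up -d
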
fungii also need to disappear shortly before 16:00 utc for a tax prep appointment and will be grabbing lunch out after, so probably will be gone for approximately two hours (could be less)14:12
opendevreviewMerged openstack/project-config master: Try switching Rackspace DFW to an API key  https://review.opendev.org/c/openstack/project-config/+/91138114:26
fungiit's deployed, nl01 is in emergency disable now and restarted onto the test container image14:45
Clark[m]Now we watch the grafana graphs. Thank you for getting that going. I've still got a school run to do so won't really be around for a bit yet14:54
fungino worries14:56
fungikeystoneauth1.exceptions.auth_plugins.NoMatchingPlugin: The plugin rackspace_apikey could not be found14:56
fungimmm, think we missed something14:57
fungiah, no that was before the container restart14:57
jrosserfungi: thanks for setting up the unmaintained group, it's working for us14:58
fungijrosser: my pleasure14:58
Clark[m]fungi: that is a good sign it is using the new clouds.yaml credentials at least. As long as it doesn't complain post restart15:00
fungiright15:05
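
A quick way to confirm the auth plugin is actually importable inside the restarted launcher; the container name below is a placeholder, the plugin name comes from the traceback above:

    docker exec <launcher-container> \
      python3 -c "from keystoneauth1 import loading; print(loading.get_plugin_loader('rackspace_apikey'))"
    # raises NoMatchingPlugin if the rackspaceauth package is not in the image
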
tristanC[m]Hello folks, it seems like the AFS cache is out of sync or something; in http://mirror.ord.rax.opendev.org/centos/8-stream/AppStream/x86_64/os/repodata/ we see that primary files listed in the repomd.xml are missing. Is there something we can do about it?15:24
fricklertristanC[m]: check the mirror update log?15:25
Clark[m]tristanC you need to check the upstream to determine if we are out of sync or if upstream is, then work from there15:25
tristanC[m]frickler: ok, would you know where I can check the update log?15:27
Clark[m]We last updated a few hours ago according to the timestamp in that mirror so either the issue was resolved very recently or upstream is out of sync or rsync broke somehow 15:27
tristanC[m]Clark: thanks, the upstream mirror seems consistent according to http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os/repodata/15:29
fricklerconfusingly it is logged in here https://mirror.iad3.inmotion.opendev.org/logs/rsync-mirrors/centos.log and not in centos-stream.log15:29
fungihas to do with where in the upstream file tree they put stream 8 vs stream 9 files15:30
fungii agree it's confusing, but it's confusion chosen by centos15:30
Clark[m]tristanC is that where we pull from?15:31
fungiClark[m]: we mirror from mirror.centos.org yes15:31
fungithough it seems to round-robin between multiple servers, so no guarantee they all share a common file backend15:31
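
The round-robin fungi mentions is visible in DNS; mirror.centos.org resolves to several backends, so consecutive syncs may not hit the same file tree:

    dig +short mirror.centos.org
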
opendevreviewTristan Cacqueray proposed opendev/system-config master: Add AFS mirror update logs location  https://review.opendev.org/c/opendev/system-config/+/91186915:32
Clark[m]And it's possible they don't do updates that are rsync safe15:32
fungiin unrelated news, the final unmerged devstack-single-node-centos-7 nodeset removal backport shows another project using it, as predicted: https://review.opendev.org/c/openstack/devstack/+/91098615:33
fricklerin master, yay15:34
fungifrickler: well, *at least* in master, but probably in other branches too15:34
fricklerso likely in umpteen branches, too, yes15:34
fricklerand zuul only shows the first project it finds15:35
frickleror first error really15:35
fungiright, there are probably more in other projects as well15:35
fricklercodesearch has only three more, not as bad as I feared https://codesearch.openstack.org/?q=devstack-single-node-centos-7&i=nope&literal=nope&files=&excludeFiles=&repos=15:36
fungibut again, that's in master... there are potentially more in other branches of projects that more recently removed it from their master branches15:36
fricklerone spontaneous idea: iirc deploying codesearch is simple. can we deploy multiple instances, at least for the current maintained branches? assuming the branch that is getting indexed is configurable15:37
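
A rough sketch of what a per-branch codesearch (hound) instance might look like; treat the vcs-config "ref" option as an assumption about hound's config format, and the repo and branch below as placeholders:

    cat > config.json <<'EOF'
    {
      "dbpath": "data",
      "repos": {
        "openstack/devstack": {
          "url": "https://opendev.org/openstack/devstack",
          "vcs-config": {"ref": "stable/2023.1"}
        }
      }
    }
    EOF
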
amoralejwould it be possible to rerun the centos-stream-8 mirror script manually?15:49
fungiamoralej: we do that sometimes if there's reason to think it's warranted15:49
fungii'm gone for the next two hours, but can look at it when i return if it's still a problem at that time15:50
clarkbtristanC[m]: amoralej  can you be more specific about which files are missing so that we can check things before we do manual intervention? linking to repodata dirs isn't very specific. Maybe you can link to failed job logs?16:07
clarkbI do note that the upstream mirrors repodata/ contents were all updated today16:07
clarkbmy hunch is that they do not update those directories in an atomic manner16:08
clarkband if we're going to use them as an upstream we either accept that or change to an upstream that does update more atomically16:08
amoralejmaybe, I'm asking in centos channels16:08
amoralejerror is in https://logserver.rdoproject.org/08/08046c811a4b3d0137652eaad951cecf31b18b2d/openstack-post/rdo-send-stable-release/18def42/job-output.txt i.e.16:08
clarkbbecause we cannot manually intervene every time centos does large sets of updates16:08
amoralejit's missing metadata files16:08
clarkbamoralej: ack then I think that supports my hunch16:09
clarkbamoralej: they removed old indexes before writing new indexes and now we're broken when we rsync the intermediate state16:09
clarkbI don't think we should manually intervene for that. Either upstream fixes their order of operations to be more atomic or we use a different upstream16:09
amoralejthat'd be an explanation, but given that that's the repo used for global mirroring, i'd expect them to do it in the reverse order, first add, then remove, but i can't be sure16:10
clarkbyes I agree, but the evidence here is that we synced about 3.5 hours ago and don't have those files16:11
clarkbwhich should be fine if we continue to sync the old stuff before the new stuff is in place as long as nothing refers to the new stuff yet16:11
clarkbas a side note we've always been explicit that these are not public mirrors16:12
clarkbwe can and do delete/remove content from them (we are removing centos 7 for example)16:12
amoralejyes, no problem with that16:13
clarkbthe next automatic sync should begin at 18:43 UTC based on the cronjob table16:14
amoralejack, we will find out16:14
amoralejthanks for checking16:14
clarkbamoralej: tristanC[m]  reading the log at http://mirror.ord.rax.opendev.org/logs/rsync-mirrors/centos.log I think this supports my hunch. You'll notice that the appstream and baseos dirs delete a bunch of .xml files but do not add any .xml files16:17
clarkbamoralej: tristanC[m] whereas extras/ adds xml files16:17
clarkbso I think upstream wrote out their files in an incorrect order. It should be: add any new package files, add any new metadata files, update indexes to point at new files, remove old files16:18
clarkbthen anyone performing an rsync should still have a valid mirror16:18
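
A quick check of the consistency property clarkb describes, i.e. that every file referenced by repomd.xml actually exists on the mirror; this is just curl and sed against the URL from earlier in the log, not opendev tooling:

    base=http://mirror.ord.rax.opendev.org/centos/8-stream/AppStream/x86_64/os
    curl -s "$base/repodata/repomd.xml" \
      | sed -n 's/.*<location href="\([^"]*\)".*/\1/p' \
      | while read -r href; do
          curl -sfI "$base/$href" >/dev/null || echo "missing: $href"
        done
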
amoralejyep, that's a likely explanation16:21
amoralejactually, some rpms were updated in the repos, just metadata was probably not updated yet16:22
clarkbamoralej: `grep 'AppStream/.*/.*.xml' centos.log` shows only deleting lines16:23
amoralejyep, i've seen it16:23
clarkbcompared to `grep 'extras/.*/.*.xml' centos.log` which is what it should look like16:23
amoraleji see public mirrors have right content and pointing to updated packages so i guess we will pull it in next run16:24
clarkbas long as they are now consistent and fully updated then ya our next sync should catch us up16:24
clarkbI cannot type today16:24
clarkbinfra-root the switch of the rax-dfw clouds.yaml creds over to the "rackspace" cloud has jobs failing to resolve mirror.dfw.rackspace.opendev.org instead of mirror.dfw.rax.opendev.org16:40
clarkbthis was an unexpected side effect of our rax MFA testing. I think the testing we've done so far has been fine though16:40
clarkbso I'm going to manually revert the change to help make jobs happy again16:40
clarkbI will only touch the config on nl01 (the builders and other launchers shouldn't be affected)16:41
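
To illustrate the fallout clarkb describes: the per-region mirror hostname embeds the nodepool cloud name, so renaming the cloud entry changes the name jobs try to resolve (both hostnames below are taken from the log):

    host mirror.dfw.rackspace.opendev.org   # what jobs looked for after the rename (does not exist)
    host mirror.dfw.rax.opendev.org         # the mirror that actually exists
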
corvusclarkb: i was just checking up on that change, and i also think we should not change the metrics from rax to rackspace16:42
corvusclarkb: i agree the POC looks good and we should probably merge the nodepool change, then update our configs to use the api key (without any other changes like cloud or metric names)16:42
clarkbcorvus: ++16:43
clarkbthe manual revert is in16:43
corvusi'll make a system-config change to the config files16:44
clarkbcorvus: project-config too16:44
corvus++16:44
clarkbthis was unexpected fallout16:44
clarkband makes me wonder if we should try and decouple the clouds.yaml name from logical names16:45
clarkbbut easy enough to undo and move forward16:45
opendevreviewJames E. Blair proposed openstack/project-config master: Revert "Try switching Rackspace DFW to an API key"  https://review.opendev.org/c/openstack/project-config/+/91194416:46
corvusthat is safe to merge now ^16:46
clarkbcorvus: do you know if I have to manually restart the nl01 launcher service or if it will pick up the new config?16:47
clarkberrors that fungi saw earlier imply that we don't need to restart16:47
corvusclarkb: no restart necessary16:47
clarkbI've approved 911944 and once that merges I'll remove nl01 from the emergency file on bridge16:47
clarkbsomething like #status notice Jobs that fail due to being unable to resolve mirror.dfw.rackspace.opendev.org can be rechecked. This error was an unexpected side effect of some nodepool configuration changes which have been reverted.16:51
clarkbdoes that look good?16:51
corvusokay working on this system-config change, i'm unclear why all the variable names changed from openstackci_rax_x to opendevci_rax_x -- we typically named those variables based on the account name/creds, but i didn't think the account name was changing?16:52
corvusclarkb: status lgtm16:52
clarkbcorvus: I think it was simply to have a second set of names that could be used without impacting existing stuff while we tested16:52
opendevreviewMatt Peters proposed openstack/project-config master: Add Distributed Cloud App to StarlingX  https://review.opendev.org/c/openstack/project-config/+/91194616:52
clarkbcorvus: now that testing is done I think you can remove the new names and just put the new config under the old names16:52
corvusi don't think that was necessary for testing16:53
clarkb#status notice Jobs that fail due to being unable to resolve mirror.dfw.rackspace.opendev.org can be rechecked. This error was an unexpected side effect of some nodepool configuration changes which have been reverted.16:53
opendevstatusclarkb: sending notice16:53
corvusit seems like something else was trying to be accomplished, like a migration to a different set of names16:53
-opendevstatus- NOTICE: Jobs that fail due to being unable to resolve mirror.dfw.rackspace.opendev.org can be rechecked. This error was an unexpected side effect of some nodepool configuration changes which have been reverted.16:53
corvus(sure, a second profile -- but we didn't need to duplicate all the variables)16:54
clarkbcorvus: it is possible fungi intended that, I am not sure. But it did allow for testing of launch node for example without impacting other uses (the cloud settings cron and prod launch node for example)16:54
corvusto be clear, i'm talking about the secrets and variable names16:54
clarkboh I see16:54
clarkbI am not aware of this changing account names or projects16:55
corvusi have to pick one, so i'm going to stick with the old names since they match the usernames16:55
clarkb++16:55
opendevreviewTristan Cacqueray proposed opendev/system-config master: Add AFS mirror update logs location  https://review.opendev.org/c/opendev/system-config/+/91186916:55
clarkbwe can have fungi double check before we approve the system-config update16:55
opendevstatusclarkb: finished sending notice16:56
opendevreviewMerged openstack/project-config master: Revert "Try switching Rackspace DFW to an API key"  https://review.opendev.org/c/openstack/project-config/+/91194417:04
opendevreviewJames E. Blair proposed opendev/system-config master: Switch rackspace clouds to api key auth  https://review.opendev.org/c/opendev/system-config/+/91194817:06
corvusokay i think that should do it.  i have made the corresponding secret hostvars additions; the removals should wait until that merges17:06
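
For context, a hedged sketch of what an api-key auth stanza in clouds.yaml could look like; the auth_type matches the plugin named in the earlier traceback, but the cloud entry name, option names and profile usage are assumptions, and the real values live in the private hostvars corvus mentions:

    cat <<'EOF'
    clouds:
      openstackci-rax:
        profile: rackspace
        auth_type: rackspace_apikey
        auth:
          username: <account username>
          api_key: <api key, not the account password>
        regions:
          - DFW
          - ORD
          - IAD
    EOF
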
mannieHi sorry to disturb but am I in the right channel for outreachy contributors17:09
clarkbmannie: I think it depends on what you are trying to accomplish. We help run the developer tooling for projects participating in outreachy. This means we can help with gerrit account setup and understanding how to push to gerrit etc. But if you are looking for project specific info you'll need one of the irc channels dedicated to the project17:10
clarkbI would say ask your question and we can redirect if necessary :)17:10
gouthamrmannie clarkb: this channel: #openstack-outreachy is a good starting place for new contributors applying to outreachy17:11
clarkbinfra-root I have removed nl01 from the emergency file. Once the hourly deploy jobs finish, the deploy job for 911944 should no-op the config update on nl01 (since I already manually made that change) but may restart nl01's nodepool container to switch us off the speculative image build17:15
clarkbthen when the nodepool image is updated we'll switch to the final version of that container and finally 911948 will move all of rax over to the new config credentials17:15
clarkbI have also pointed this out to the openstack release team to have them double check some recent release jobs results. I know some failed and retried and were eventually successful. There is the small chance that some retried and ultimately failed and would need to rerun though17:16
clarkbOther than that I think we're largely in the clear again and it's just a matter of rolling the rax credential updates out gracefully again17:17
mannieFirst off, thank you so much. I wasn't sure how quickly my first message would be responded to; I really, really appreciate this from the bottom of my heart. Okay, so here goes: how do I go about contributing to this project? Yes, I am also asking relating to the "gerrit" thing, I am really confused. Can I just clone the project with the regular git clone from the github repo or is there a different way of going about this? 17:18
clarkbmannie: note gouthamr's message as there is apparently a dedicated channel for this stuff. But yes, you clone the project (either from https://opendev.org or https://github.com) then use a tool called `git-review` to push proposed changes to gerrit where they can be reviewed and once merged will end up in the repos on opendev.org and github.com.17:19
clarkbmannie: git is a distributed version control system which allows us to host the git repos outside of the code review system, which helps keep load and demand off the code review system and makes it responsive for everyone doing code reviews17:20
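
A minimal sketch of the workflow clarkb describes, using the opendev/sandbox practice repo (mentioned later in this log) as a stand-in for whichever project mannie picks:

    git clone https://opendev.org/opendev/sandbox
    cd sandbox
    pip install git-review
    git checkout -b my-first-change
    # edit something, commit it, then push it to Gerrit for review:
    git commit -a -m "Add a test change"
    git review
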
mannieSo sorry for doing this on the wrong channel, I really am but I tried joining through the link on the outreachy channel via this link https://kiwiirc.com/nextclient/irc.oftc.net:6697 but i keep getting this "Closing Link: d.clients.kiwiirc.com (Banned)" after inputting a username instead of redirecting me to the channel17:23
clarkbmannie: how did you connect to this channel? Are you using kiwiirc here too?17:24
mannieI did some digging around after trying to understand why I couldn't get in with kiwiirc, then I went to this url: https://meetings.opendev.org/ and saw some channels I could get help from. At the moment I am using https://webchat.oftc.net/?channels=opendev#17:27
clarkbmannie: got it. I think kiwiirc may have been banned from oftc (not sure why) so using that tool won't work. You should be able to change the url you just pasted above to change the channel name. Currently we are in #opendev so your url would be https://webchat.oftc.net/?channels=openstack-outreachy# to get that channel17:29
mannieThanks so much clarkb 17:30
clarkblet me know if that doesn't work and we can continue to debug17:31
mannieOkay so the new link you provided works, but when ever I need help can I always reach you here?17:33
clarkbwell I sleep sometimes and may be busy with other things. But yes I am happy to help with general openstack (and opendev) developer tooling questions and problems17:34
mannieHaha alright, thanks again. Also I noticed I am the only person on the channel, i.e. the one for outreachy contributors. I don't wanna disturb y'all except when it is absolutely necessary, but is there someone I can talk to on the outreachy channel whenever I am confused about something?17:40
clarkbmannie: hrm I just joined and gouthamr is there too. Let me double check if you are there17:41
clarkbhrm maybe that didn't end up joining the correct channel17:42
mannieI just sent a message to the channel17:42
clarkbmannie: oh the url should be https://webchat.oftc.net/?channels=openstack-outreachy with no # suffix17:42
clarkbI think you ended up in the #openstack-outreachy# channel (which would've been auto created for you) rather than #openstack-outreachy17:43
gouthamrIRC teething troubles :) 17:53
aquindiablo_rojo: Hi I was able to push test code for review in the sandbox project. I cc'd you as a reviewer.18:02
opendevreviewGoutham Pacha Ravi proposed openstack/project-config master: Add op to #openstack-outreachy  https://review.opendev.org/c/openstack/project-config/+/91195418:02
fungiokay, back and catching up. took a little longer than expected18:18
fungioof, cloud name changes alter the mirror name settings. i briefly thought about that and then forgot to ask. sorry!18:19
fungithis is also part of why i was originally leaning toward changing the auth for the existing providers rather than making new providers in parallel18:21
fungifor the record, it wasn't "for testing" but "for transition" since i got requests to have two clouds defined in parallel that we could switch back and forth between easily18:22
fungii was trying to make sure that we didn't miss values for the new provider/cloud or accidentally use the old password when we meant to use the new api key18:23
fungii'm happy to roll all of the recent changes back and redo them by just changing the auth settings for the existing clouds/providers, since that was my original plan18:23
fungiinfra-root: ^ what would you prefer?18:24
clarkbfungi: I think corvus' change above already does that18:28
clarkbnote we didn't create new providers18:28
fungicool, i'll take a look at those18:28
clarkbwe changed the cloud name in existing providers and that was sufficient to make mirror stuff unhappy18:28
fungifwiw, i think it's cleaner to update the authentication for our existing provider definitions18:28
fungirather than add new ones18:29
clarkbright, that's what we've done the whole time18:29
clarkboh maybe you mean cloud.yaml entries18:29
fungimost of the complexity in the recent changes was to satisfy the request to have new providers in parallel with the original ones18:29
clarkbya I wanted to avoid that for testing because we can't limit updated cloud entries to a single region18:29
clarkbbut now that we've tested and it generally works I think this is ok (and I +2'd corvus' change)18:29
clarkbfungi: just for clarity we never wanted new providers18:30
clarkbfungi: an initial patchset did add a new provider but that was changed because we didn't want that18:30
fungioh, i clearly misunderstood18:30
clarkbbecause adding a new nodepool provider would've orphaned the existing stuff in the old provider18:30
fungii was personally satisfied when manual tests with the auth settings change worked, but was willing to add new clouds instead since others wanted that18:31
fungiyeah, new clouds, new providers, it wasn't clear to me that those were distinct choices18:31
clarkbya I was hoping to test existing nodepool providers with new credentials and doing so required new cloud entries unless you wanted to update all of nodepool for rax at once18:31
clarkband since rax is like 50% of our capacity I was hoping to avoid that for testing18:32
clarkbturns out it would've been better to just send it due to this unexpected behavior18:32
fungii assumed we needed separate providers to use different cloud configurations in parallel18:33
clarkbin any case adding new providers would've broken in the same way (because there is no mirror with that name). But we didn't do that18:33
fungigot it18:34
clarkbthe next hourly run of the nodepool job should update all of our images to the new image with rackspaceauth installed then we can land the system-config update which swaps rax over to using rackspaceauth keys18:36
fungii'll un-wip it. thanks!18:36
clarkbthen we should be done for both control plane and nodepool and can opt into MFA early if we like18:36
clarkbfungi: https://review.opendev.org/c/opendev/system-config/+/911948 is the change in system-config I'm referring to18:37
fungioh, though i changed the cloud names and stuff18:37
fungiit'll need a rework18:37
clarkbit isn't wip, it should be good to review (we just need to land it after the nodepool update occurs)18:37
fungiah, okay, i was talking about 91122918:37
fungiwhich, yes, was complicated by the requirement to add new clouds in parallel with the existing ones18:37
clarkbI think the confusion was that I only ever intended for new clouds.yaml entries to be temporary to test things18:38
fungii can abandon 911229 if we don't need it now18:38
clarkbya I don't think we need 911229 if we land 91194818:38
fungiyeah, i assumed it was a desire to replace the old definitions with new cloud entries and then later delete the old ones once everything was moved over18:38
clarkband the reason we needed new clouds.yaml entries for testing is that each cloud entry applies to all regions all at once18:38
clarkbbut once testing was done we'd remove the new stuff and apply the config update to what prod was using normally18:39
clarkbwhich is what we've ended up with here18:39
fungigot it. i didn't realize it was a test-and-roll-back suggestion, though it was add-parallel-replacements-and-clean-up-originals18:39
clarkbif we had cloud entries for each region already then none of this would've been necessary. But we don't (and I think that is correct for how we're handling mirror stuff in nodepool and zuul)18:40
clarkbotherwise we'd have mirrors like mirror.dfw.rax-dfw.opendev.org without changing stuff up there18:40
fungiright, i had a fleeting thought that this was going to require new dns entries, and should have remembered to follow up on that18:41
clarkbI completely spaced on that18:41
clarkbmight be worth a note in our nodepool configs?18:41
clarkbsimilarly new credentials allowed for testing launchnode without impacting say tonyb trying to launch new meetpad server stuff18:43
clarkbanyway testing looks good other than the mirror mismatch and we can roll forward now pretty comfortably in our existing configs18:43
clarkbone "good" side effect of this is jobs failed quickly so we booted and deleted nodes quickly in dfw :) pretty good indication that this works well18:44
fungiyep, i think we're in a good place with it18:48
fungiso are there outstanding changes i should reference when abandoning 911229, something that isn't using topic:rackspace-mfa maybe?18:54
clarkbfungi: just the one from corvus19:04
clarkbhttps://review.opendev.org/c/opendev/system-config/+/91194819:04
clarkbthe hourly nodepool job is running now so we should get new container images deployed then can land 911948 after19:05
clarkbnl01's container just restarted19:07
opendevreviewJeremy Stanley proposed opendev/system-config master: Clean up unused Rackspace password test values  https://review.opendev.org/c/opendev/system-config/+/91195819:12
clarkbthe nodepool job is done now and was successful. All nodepool containers should be running images capable of using the new auth type19:13
clarkbwe should be good to land 911948 if we are happy with that change and the private var updates19:14
clarkbfungi: corvus: I looked at the private vars and they look good. I think my only other fear is that if we somehow mixed up api keys and had test nodes booting in the control plane account that nodepool might try to delete those nodes. But it shouldn't because we don't let nodepool delete things without nodepool metadata19:19
clarkbI would expect auth to just not work because usernames shouldn't match up in that case too19:19
clarkball that to say I think we're good to approve 911948 now?19:19
fungiyep, agreed. just approved it now19:21
clarkbI guess after that lands we should do some openstack server lists and make sure everything looks aligned properly19:22
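
A hedged sketch of the sanity check being suggested; the cloud entry name is a placeholder and the region names are Rackspace's:

    for region in DFW ORD IAD; do
        openstack --os-cloud <cloud-entry> --os-region-name "$region" server list
    done
    # spot-check that a nodepool node still carries its metadata:
    # openstack --os-cloud <cloud-entry> server show <uuid> -c properties
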
opendevreviewClark Boylan proposed openstack/project-config master: Add warning to nodepool configs about changing cloud name  https://review.opendev.org/c/openstack/project-config/+/91195919:29
clarkbtristanC[m]: the 18:43 sync appears to have caught things up for centos 819:47
clarkbthe system-config change to update the clouds.yaml should be merging shortly. Not sure which side of the hourly deploy jobs it will end up on though19:50
tristanC[m]clarkb: that's great, thank you very much for the follow-up!19:52
opendevreviewMerged opendev/system-config master: Switch rackspace clouds to api key auth  https://review.opendev.org/c/opendev/system-config/+/91194819:59
clarkbdeploy for ^ ended up being hourly jobs but I think it may have merged soon enough that hourly jobs will use it anyway?20:05
clarkbyes bridge just updated20:05
clarkbthe nodepool job is starting so nodepool should update soon too20:07
clarkbserver listing against ord is extremely slow20:07
clarkbIAD isn't what I'd call quick but it is faster. And listings for both iad accounts work and I see appropriate values in both (though nodepool hasn't updated its clouds.yaml yet)20:10
clarkbthis looks good to me. I'm not sure if we have to restart nodepool to pick up the change since only clouds.yaml updated and not the provider config in nodepool.yaml. If so then the change to add more build and upload stats to nodepool should cause a container restart across the board20:12
clarkbfungi: corvus: please do checking from your end if you feel it is worthwhile. I can also restart containers if we want that to happen more quickly. But in the meantime I need to eat lunch20:13
clarkband now we can start thinking about the swift credentials. Then figuring out opting into MFA early20:13
fungiyeah, i think we can wait for another change to restart nodepool onto the new image and read in the clouds.yaml when that happens20:23
clarkbnl01 just restarted due to that nodepool change merging a little while ago21:07
clarkbso ya no reason to do that manually and we just need to watch grafana for rax oddities21:08
clarkband start figuring out the swift credentials21:08
clarkbwe encrypt the username and project id info as well as the password21:09
clarkband grabbing the secret keys out of zk is difficult now because they are encrypted? This could be a fun puzzle to sort through21:10
clarkbthough maybe https://review.opendev.org/c/zuul/zuul/+/908507/6/tools/decrypt_secret.py addresses those problems for us21:31
clarkbcorvus: ^ I reviewed that change; it seems to be fine for the most part but would be interested to see if you are willing to review it before we run it against our secrets22:53
corvusclarkb: reviewed23:07
clarkbthanks, looks like no major concerns. maybe we see if we get a new iteration overnight (the timestamps imply maybe an EU submitter) then give it a go to see what we've got in place for these swift creds23:09
