Wednesday, 2020-10-14

00:11 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] reprepro  https://review.opendev.org/757660
00:21 <openstackgerrit> Ian Wienand proposed openstack/diskimage-builder master: WIP: boot test of containerfile image  https://review.opendev.org/722148
00:40 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] reprepro  https://review.opendev.org/757660
00:58 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] reprepro  https://review.opendev.org/757660
01:04 *** hamalq has quit IRC
01:07 *** DSpider has quit IRC
01:42 <openstackgerrit> Ian Wienand proposed openstack/diskimage-builder master: WIP: boot test of containerfile image  https://review.opendev.org/722148
02:07 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] reprepro  https://review.opendev.org/757660
02:14 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] reprepro  https://review.opendev.org/757660
02:18 *** chandankumar has quit IRC
02:19 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] reprepro  https://review.opendev.org/757660
03:35 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] reprepro  https://review.opendev.org/757660
03:56 <openstackgerrit> Ian Wienand proposed openstack/diskimage-builder master: WIP: boot test of containerfile image  https://review.opendev.org/722148
03:57 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] reprepro  https://review.opendev.org/757660
04:15 <openstackgerrit> Merged opendev/system-config master: borg-backups: add some extra excludes  https://review.opendev.org/757965
04:25 *** chandankumar has joined #opendev
04:44 *** ykarel|away has joined #opendev
04:51 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] reprepro  https://review.opendev.org/757660
04:58 *** ykarel|away is now known as ykarel
05:04 *** ykarel_ has joined #opendev
05:08 *** ykarel has quit IRC
05:09 <openstackgerrit> Ian Wienand proposed openstack/diskimage-builder master: WIP: boot test of containerfile image  https://review.opendev.org/722148
05:12 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] reprepro  https://review.opendev.org/757660
05:18 *** ykarel_ is now known as ykarel
05:20 *** marios has joined #opendev
05:32 *** prometheanfire has quit IRC
05:33 *** ralonsoh has joined #opendev
05:47 *** sboyron has joined #opendev
05:48 *** prometheanfire has joined #opendev
06:08 *** slaweq has joined #opendev
06:11 *** roman_g has joined #opendev
06:31 *** eolivare has joined #opendev
06:46 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] reprepro  https://review.opendev.org/757660
07:06 *** andrewbonney has joined #opendev
07:06 *** fressi has joined #opendev
07:09 *** ysandeep|away is now known as ysandeep
07:36 *** sshnaidm|afk is now known as sshnaidm
07:39 *** DSpider has joined #opendev
07:43 *** tosky has joined #opendev
07:50 *** lpetrut has joined #opendev
07:52 *** rpittau|afk is now known as rpittau
07:58 *** mkalcok has joined #opendev
08:24 *** hashar has joined #opendev
09:08 *** roman_g has quit IRC
09:16 *** roman_g has joined #opendev
10:01 *** moguimar has left #opendev
10:11 *** zigo has joined #opendev
11:20 *** Dmitrii-Sh has quit IRC
11:20 *** Dmitrii-Sh has joined #opendev
11:23 *** ysandeep is now known as ysandeep|brb
11:32 *** ysandeep|brb is now known as ysandeep|afk
11:39 *** priteau has joined #opendev
11:54 <openstackgerrit> Jeremy Stanley proposed opendev/system-config master: Switch openstack/compute-hyperv->x tarball redir  https://review.opendev.org/758096
11:55 <fungi> infra-root: ^ relatively urgent, i'll handle the file moves myself once it merges
12:07 <fungi> i'm going to go ahead and get started on the corresponding file move
12:10 <fungi> #status log manually moved files from project/tarballs.opendev.org/x/compute-hyperv into .../openstack/compute-hyperv per 758096
12:10 <openstackstatus> fungi: finished logging
12:12 <clarkb> fungi: is there also a job publishing update that needs to happen?
12:12 <fungi> clarkb: nope, that's how we discovered the problem
12:12 <fungi> tarballs for it are being published into openstack not x
12:12 <clarkb> got it
12:13 <fungi> so we moved the previous tarballs and redirected, but the release scripts couldn't find the new ones because they weren't being published to where the redirect goes
12:14 <clarkb> I think you can self approve the redirect fix now if you want
12:16 <fungi> i figured i'd give it until check results returned in case anyone else wanted to review, since i wasn't planning to direct-enqueue it to the gate anyway
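[For context, the tarballs-site redirects being switched here are apache rewrite rules; a minimal sketch of the kind of rule 758096 flips, assuming the direction and flags rather than quoting the actual change:]

    # hypothetical sketch: send requests for the old x/ namespace path
    # back to the project's new openstack/ location, preserving the rest
    # of the path (the project had moved x/ -> openstack/ again)
    RewriteEngine On
    RewriteRule ^/x/compute-hyperv/(.*)$ https://tarballs.opendev.org/openstack/compute-hyperv/$1 [R=301,L]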
12:31 <fungi> openstackgerrit has gone silent on us
12:34 *** fressi has quit IRC
12:38 *** ysandeep|afk is now known as ysandeep
12:52 <openstackgerrit> Aurelien Lourot proposed openstack/project-config master: Mirror ironic charms to GitHub  https://review.opendev.org/758109
13:04 <openstackgerrit> Merged opendev/system-config master: Switch openstack/compute-hyperv->x tarball redir  https://review.opendev.org/758096
13:11 *** hashar has quit IRC
14:09 *** priteau has quit IRC
14:18 *** priteau has joined #opendev
14:40 *** lpetrut has quit IRC
14:49 <AJaeger> config-core, please review https://review.opendev.org/756717 and https://review.opendev.org/758109, https://review.opendev.org/753199
15:25 *** ysandeep is now known as ysandeep|away
15:26 <clarkb> review-test has finished replicating to gitea99. Total disk use was 41GB after that. I'm running the gc now to compare post gc size
15:26 <clarkb> Then I'll double check we haven't replicated all-users and open the firewall for the web server there again
15:27 <clarkb> but I think it looks good
15:28 <clarkb> oh I should trigger another full replication and see how slow that is now that it's replicated too
15:28 <clarkb> I'll do that post gc
15:32 <openstackgerrit> Merged openstack/project-config master: Mirror ironic charms to GitHub  https://review.opendev.org/758109
15:34 *** ykarel is now known as ykarel|away
15:36 <openstackgerrit> Merged openstack/project-config master: Add SNMP Armada App to StarlingX.  https://review.opendev.org/756717
15:42 <mnaser> i'm getting really slow load times when i open http://opendev.org/openstack/governance
15:42 *** ykarel|away has quit IRC
15:43 <fungi> i'll check if the server farm is getting overloaded again
15:44 <clarkb> mnaser: if you inspect the ssl cert you get it will tell you which backend you're talking to
15:44 <clarkb> (it is one of the cert names)
15:45 <fungi> was slow for me with gitea03
15:46 <clarkb> firefox dev tools say a good chunk of the time for me is tls setup
15:47 <mnaser> in my case i did a local curl and even the http backend that did a 301 redirect was slow
15:48 <clarkb> we just did a pass of project config updates too
15:48 <clarkb> I wonder if that invalidates our caches
15:48 <fungi> established tcp connections through the lb jumped by a couple orders of magnitude a little before 02:00 utc
15:48 <clarkb> that was a result of applying https://review.opendev.org/756717
15:49 <fungi> http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=66611&rra_id=all
15:49 <clarkb> oh ya that's it then, that's also our haproxy limit
15:49 <clarkb> which would explain slow connection setups
15:49 <clarkb> we're queueing in there behind the dos
15:49 <fungi> odds are we're seeing attack/abuse again but i'll check specific backends too
15:49 <clarkb> the week 39 block that looks like this was the ddos from china ISP IPs
15:50 <fungi> yep, i was thinking the same
15:50 <clarkb> if we're seeing the same sort of attack then we can flip the backend ports to 3081 to try ianw's filtering
15:50 <fungi> seeing basically the same jump on all backends
15:51 <clarkb> ya if you look at /var/gitea/logs/access.log you see the silly user agent strings again
15:51 <fungi> so i guess the next step is to see if we can, yeah
15:51 <fungi> that
15:52 <clarkb> https://gitea01.opendev.org:3081/ is the filtered version
15:52 <fungi> "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Maxthon 2.0)"
15:52 <clarkb> I think we can tell haproxy to talk to 3081 instead of 3000 ?
15:52 <fungi> also a lot of "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
15:52 <clarkb> and then does apache do the http -> https redirect for us /me looks at that config
15:54 <fungi> actually this may not be so easy
15:55 <fungi> the winning ua by a factor of >6x the next most common one is "git/2.18.4"
15:55 *** slaweq has quit IRC
15:56 <fungi> though i suppose we assume it's only browser traffic which matters, in which case "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)" is 2x more common than the 3rd place ua
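[For reference, the per-user-agent counts fungi quotes can be pulled from the access log with a one-liner along these lines; a sketch, assuming a combined-style log format where the user agent is the third quote-delimited field:]

    # count requests per user-agent string, most common first
    awk -F'"' '{print $6}' /var/gitea/logs/access.log | sort | uniq -c | sort -rn | head -20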
15:56 <openstackgerrit> Clark Boylan proposed opendev/system-config master: Switch to filtered apache backends  https://review.opendev.org/758200
15:56 <clarkb> fungi: I think ^ will put our planned mitigation for this in place
15:56 <clarkb> if we want to apply that by hand to a single backend first we can do that too
15:56 <clarkb> or maybe do a 50-50
15:57 <fungi> aha, i see, there's a bunch of agent strings we thwap in that vhost
15:58 <clarkb> yes, ianw implemented that in case what happened last time reappeared, and that appears to be happening now :)
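[The vhost filtering mentioned here boils down to user-agent matching in an apache reverse proxy on the alternate port; a minimal sketch of the approach, with made-up match strings and a plain-http proxy target standing in for the real configuration:]

    # hypothetical sketch of the UA-blocking proxy on port 3081:
    # reject requests whose User-Agent matches a known bot string,
    # pass everything else through to gitea on port 3000
    <VirtualHost *:3081>
        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} "MSIE 7\.0; Windows NT 5\.1" [OR]
        RewriteCond %{HTTP_USER_AGENT} "MSIE 6\.0; Windows NT 5\.1"
        RewriteRule .* - [F]
        ProxyPass / http://localhost:3000/
        ProxyPassReverse / http://localhost:3000/
    </VirtualHost>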
15:58 *** slaweq has joined #opendev
15:58 *** marios has quit IRC
15:58 <clarkb> we edit /var/haproxy/etc/haproxy.cfg by hand then run `docker-compose kill -s HUP haproxy` to pick up the new config
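[Concretely, the by-hand flip amounts to repointing each backend server line from the bare gitea port to the filtering apache; a sketch, assuming a backend stanza shaped roughly like this (the backend and server names are placeholders):]

    # in /var/haproxy/etc/haproxy.cfg, change each backend server from
    # the unfiltered gitea port (3000) to the UA-filtering apache (3081)
    backend balance_git_https
        server gitea01.opendev.org gitea01.opendev.org:3081 check
        server gitea02.opendev.org gitea02.opendev.org:3081 check
        # ... remaining backends ...

    # then signal haproxy to re-read its config without dropping the
    # listening sockets
    docker-compose kill -s HUP haproxy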
15:58 <fungi> i say we just go for it. this explains a failure we saw during openstack release processing earlier too
15:59 <fungi> i'll try gitea01 first and see if it calms down there
15:59 <clarkb> works for me. I think my biggest concern is that the http -> https redirects may not work but I think gitea does that as a port 80 to port 443 redirect which the load balancer then handles
15:59 <clarkb> ++
16:01 <clarkb> fungi: let me know if you want me to help with anything
16:01 <fungi> i've done the above for gitea01
16:02 <clarkb> now we wait for haproxy to bleed the old connections
16:02 <clarkb> I believe I talk to gitea01 /me tests
16:02 <fungi> and cacti to poll
16:02 <clarkb> confirmed I talk to gitea01 from my source IP and it seems much quicker?
16:02 <clarkb> also http -> https redirecting seems to work
16:03 <clarkb> at the very least doesn't seem to be a regression
16:04 <clarkb> still seeing some weird user agents though
16:04 <fungi> actually i'm talking to gitea06 on this machine now and it's still faster than it was
16:04 <clarkb> maybe they are on persistent tcp connections to the old haproxy which should eventually time out
16:05 <clarkb> fungi: I vote we move the other 7 to 3081 and approve https://review.opendev.org/758200 then we can refine the UA filtration list as necessary
16:05 <clarkb> (I think the ones that are getting through are different than what we already filter)
16:05 *** rpittau is now known as rpittau|afk
16:06 <fungi> doing that now
16:06 <fungi> and done
16:07 <fungi> i'm still able to browse stuff even after a hard refresh
16:07 <clarkb> "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0" <- that seems to be a common one sneaking through
16:08 <clarkb> mnaser: ^ maybe give it another minute or two but then try again and see if it is happier for you?
16:09 <fungi> trident/5.0 seems to be the backend for msie 9.0
16:10 <clarkb> which is like windows XP era
16:10 <clarkb> I don't mind blocking that :)
16:10 <clarkb> sorry it was vista :)
16:11 <clarkb> "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" ie 6 is xp :P
16:11 <fungi> https://en.wikipedia.org/wiki/Internet_Explorer_9#User_agent_string
16:12 <fungi> circa 2011 for initial release
16:12 <clarkb> cacti is showing the connections fall off now too
16:13 <fungi> msie9 is apparently ~0.06% of the desktop browser share based on recent stats
16:13 <clarkb> do you think we should update our UA list for completeness or leave it as is since what we've got seems to be sufficient?
16:14 <fungi> we should probably at least wait for the change to roll out and make sure things are stable before we add any more to the filter list
16:14 <clarkb> sounds good
16:15 *** hashar has joined #opendev
16:17 <clarkb> memory consumption looks good too so we aren't all of a sudden disrupting that balance via addition of apache
16:18 <fungi> oh yeah, gitea01 has recovered rapidly: http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=66632&rra_id=all
16:19 <clarkb> ya it's basically able to throw away 80% of the connections immediately then worry about the other mostly valid connections
16:20 <smcginnis> IE 9? Yikes.
16:21 <clarkb> smcginnis: it's not actually IE9, what ianw discovered last time this happened is that the weird UAs we were seeing lined up almost perfectly with some botnet ddos tool on github
16:21 <clarkb> and from that ianw built the apache filter list we're now using as of 20 minutes ago or so
16:21 *** tosky has quit IRC
16:22 <smcginnis> Interesting choice for a user agent string for a DDOS tool. :)
16:23 <clarkb> oh those aren't even the fun ones
16:24 <clarkb> there are a bunch for things like ipods and hp tablets
16:24 <smcginnis> Hah
16:24 * smcginnis imagines a farm of ipods
16:25 <fungi> yeah, basically it has some 50 different legitimate (if old) looking uas it rotates through at random for its requests
16:25 <fungi> in order to evade detection
16:25 <fungi> and doesn't obey robots.txt exclusions or nofollow attributes for links
16:27 <fungi> we've had to set up a proxy filter in front of each of the gitea servers blocking requests based on that list of uas
16:27 <fungi> because the workload is also spread across random ip addresses from most of the major telecommunications providers in china
16:27 <fungi> so the alternative (and what we did last time) is basically to block most chinese users
16:28 <clarkb> we all ianw a beer now too :P
16:28 <clarkb> *all owe
16:28 <fungi> i'd argue we already did ;)
16:28 <fungi> a case of the finest chinese beer for ianw!
16:34 *** eolivare has quit IRC
16:36 <openstackgerrit> Bernard Cafarelli proposed openstack/project-config master: Update neutron grafana dashboard  https://review.opendev.org/758208
16:40 <clarkb> after git gc we are down to 28GB disk used by gitea on gitea99
16:40 <fungi> how does that compare to production gitea servers?
16:41 <clarkb> pulling those numbers now too
16:41 <clarkb> it's harder to quantify on prod because it's a single volume for all of /
16:41 <clarkb> so have to wait for du to run
16:41 *** iurygregory has quit IRC
16:41 <clarkb> /var/gitea/data/git/repositories is 18GB on gitea99 and 13GB on gitea01
16:41 <fungi> connections through the lb are looking much better now: http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=66611&rra_id=all
16:41 *** hamalq has joined #opendev
16:42 <fungi> though you can tell it's still easily an order of magnitude higher than normal for us
16:42 <clarkb> 9.3GB of index data for both
16:42 <clarkb> fungi: ya that's the extra UAs that are sneaking through I think
16:42 <fungi> i expect so
16:43 <clarkb> re gitea disk use I think that means we're fine. We've got 29GB to grow into on gitea01 which is more than the total size we expect, and when we gc the disk use will fall dramatically
16:43 <clarkb> I also think we can gc while replication is happening
16:43 <clarkb> now to rerun replication and see if it is super slow still
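[A sketch of the kind of before/after measurement being compared here; the paths come from the conversation, but the gc invocation and repository name are assumptions:]

    # disk used by the bare repositories, measured before and after gc
    du -sh /var/gitea/data/git/repositories

    # repack and prune one repository; looping this over all of them
    # produces the post-gc number quoted above
    git -C /var/gitea/data/git/repositories/openstack/nova.git gc --aggressive --prune=now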
16:44 *** iurygregory has joined #opendev
16:44 <clarkb> also I've done more digging in git docs and the refspecs seem to only support the * glob
16:45 <clarkb> I was hoping we might be able to do something like character classes and only replicate changes/XY/ABCXY/[0-9]*
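[For context, a git pattern refspec allows exactly one `*` on each side and no character classes, so a gerrit replication.config remote can only express globs like the following; a sketch with placeholder url and remote name:]

    [remote "gitea01"]
      url = git@gitea01.opendev.org:${name}.git
      # a single * glob is all git refspecs support; a pattern like
      # refs/changes/*/*/[0-9]* is rejected outright
      push = +refs/heads/*:refs/heads/*
      push = +refs/tags/*:refs/tags/*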
16:48 <clarkb> so far I think the only thing that has broken where we were using an existing true api was the commentlinks
16:48 <clarkb> I've just jinxed it :P
16:49 <clarkb> everything else is hacks like the js stuff and direct db access
16:49 <clarkb> noop replication is much quicker fwiw
16:49 <clarkb> should be done in a few more minutes
16:50 <fungi> oh good
17:08 <clarkb> replication finished at some point between me saying it had a few minutes and me returning from getting a drink
17:08 *** hamalq has quit IRC
17:09 <fungi> bar trips are a legitimate unit of time
17:09 <fungi> sub-bt completion
17:10 <clarkb> fungi: https://review.opendev.org/#/c/758200/ passed testing if you want to approve it to match production now
17:12 *** hamalq has joined #opendev
17:12 *** sgw has joined #opendev
17:26 *** mlavalle has joined #opendev
17:31 <openstackgerrit> Clark Boylan proposed opendev/system-config master: Add four more gitea ddos UA strings  https://review.opendev.org/758219
17:31 <clarkb> fungi: ^ those are the 4 common ones I see sneaking through
17:36 <clarkb> Also I don't know why I said "all three" in the commit message after saying "Add four"
17:37 <clarkb> I was up early today and my brain is not quite in gear
17:37 <fungi> meh
17:37 <fungi> yeah
17:37 <fungi> same here, along with a late night closing out elections
17:42 *** ralonsoh has quit IRC
17:47 <clarkb> we've also got 3 more well behaved bots: http://www.semrush.com/bot.html https://aspiegel.com/petalbot and https://zhanzhang.toutiao.com/
17:48 *** ykarel|away has joined #opendev
17:53 *** ykarel|away has quit IRC
17:55 *** hashar is now known as hasharDinner
17:59 *** smcginnis has quit IRC
18:01 *** smcginnis has joined #opendev
18:09 <openstackgerrit> Merged opendev/system-config master: Switch to filtered apache backends  https://review.opendev.org/758200
18:13 <AJaeger> I'd like to merge https://review.opendev.org/742736 , a zuul-jobs change that consolidates our log uploading. Any infra-root around to monitor in case it does not work as expected? Then I'll approve now...
18:13 <fungi> looking now, but yes i'm around for at least a few more hours
18:13 <clarkb> AJaeger: I'm watching the kids for a bit but am around enough
18:13 <AJaeger> do you want to review first, fungi?
18:13 <AJaeger> thanks, clarkb and fungi
18:15 <fungi> AJaeger: nah, i just wanted to familiarize myself with it so i know what sort of breakage to be on the lookout for
18:15 <fungi> merge at your discretion
18:15 <AJaeger> approved - thanks
18:17 <AJaeger> are we running https://review.opendev.org/#/c/753222/ in our Zuul instance now? So, can we merge https://review.opendev.org/#/c/753199/ which "Refactor fetch-sphinx-tarball to be executor safe"?
18:17 <fungi> i'll check
18:18 <fungi> but i think that ended up in the most recent restarts
18:18 <AJaeger> fungi: if it is, would be great if you could review 753199 as well. Thanks for checking!
18:18 <AJaeger> change merged two weeks ago - if we restarted everything, we should be fine
18:18 <clarkb> if it passes testing then it must be restarted with that change
18:18 <clarkb> otherwise it would fail
18:20 <fungi> there's probably a docker-esque way to check start times, but stat on /var/lib/zuul/executor.socket seems to be fairly accurate
18:21 <fungi> the oldest executor.socket is on ze01, created 2020-10-01 16:10:38 utc
18:22 <clarkb> fungi: you can do docker ps -a or just use normal ps
18:22 <AJaeger> and the change merged before that - ok, please review 753199
18:22 <fungi> 753222 was uploaded to dockerhub at 2020-09-30 23:52:35
18:22 *** mkalcok has quit IRC
18:22 <fungi> so that's nearly 16.5 hours for it to pull an updated image
18:23 <fungi> clarkb: yeah, i never can remember how to get ps to show precise timestamps when a process is more than a day old
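[For reference, the checks being discussed map to commands like these; a sketch, with the socket path and service name taken from the conversation:]

    # mtime of the executor socket approximates the service start time
    stat -c '%y' /var/lib/zuul/executor.socket

    # container start status, including stopped containers
    docker ps -a --format '{{.Names}} {{.Status}}'

    # the ps option fungi couldn't remember: lstart prints a full,
    # unabbreviated start timestamp even for week-old processes
    ps -o pid,lstart,cmd -C zuul-executor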
18:28 <openstackgerrit> Merged zuul/zuul-jobs master: Consolidate common log upload code into module_utils  https://review.opendev.org/742736
18:38 <fungi> AJaeger: so i guess there's another change we've got to land to take advantage of 753199 and run the role on executors again?
18:51 <openstackgerrit> Merged zuul/zuul-jobs master: Refactor fetch-sphinx-tarball to be executor safe  https://review.opendev.org/753199
18:51 <clarkb> new thread on the gerrit list says 2.16.23 will improve notedb migration performance
18:51 <clarkb> no indication of when that will release though
18:55 *** andrewbonney has quit IRC
18:56 <clarkb> actually we may already be building with those changes
18:56 <fungi> depending on how much it improves, could allow us to pack the entire upgrade sequence into a single-day outage, but honestly i like the break the notedb step gives us
18:56 <clarkb> yup we checkout stable-2.16 so we should have them
18:56 <fungi> we could likely all use the midpoint rest
18:57 <clarkb> they are all about 2 months old and our image is much newer
18:57 <clarkb> unfortunately I think that means our 8 hour migration is the fast version
18:58 <fungi> fortunately, we're getting the 8 hour migration and not the 80 hour migration, i guess
18:58 <clarkb> I got all excited for a minute there
18:58 <clarkb> I guess on the plus side they are continuing to improve the process and that will help us
19:02 *** avass has joined #opendev
19:03 *** avass has left #opendev
19:06 *** avass has joined #opendev
19:07 *** yourname_ has joined #opendev
19:09 *** yourname_ has quit IRC
19:09 *** yourname_ has joined #opendev
19:16 <clarkb> to test manage-projects I've realized I need to sort out authentication because we restored a prod db which has the prod pubkey in the user account but we shouldn't use the same key to avoid any chance of cross talk (I need to check if we even have a key too)
19:19 <clarkb> ya I think we have a new key that mordred generated I just need to set up the account on review-test
19:20 *** priteau has quit IRC
19:20 <clarkb> yup I get permission denied as expected
19:21 <fungi> can probably still auth with the ssh host key as "Gerrit Code Review" and then add the user that way
19:22 <fungi> or, yeah, your openid is probably an admin
19:22 <fungi> i keep forgetting it's a production snapshot ;)
19:24 <clarkb> ya I'm trying to do it with my account using `gerrit set-account` but it is telling me unavailable
19:25 *** roman_g has quit IRC
19:28 <clarkb> that makes me wonder if it is an acl issue
19:29 <clarkb> hrm actually it may be because that user has no email address set? there is a traceback complaining about a null email
19:29 * clarkb tries setting the email too
19:31 <clarkb> wow I think that was it I added a bogus email addr and now it is happy
19:31 <clarkb> fungi: maybe we should add an email addr to that account now pre migration to avoid dealing with this in the future?
19:31 <clarkb> root@ maybe?
19:32 <clarkb> `ssh -p 29418 -i /home/gerrit2/review_site/etc/ssh_project_rsa_key openstack-project-creator@review-test.opendev.org gerrit ls-projects` is working now
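[The account fix-up described above maps to gerrit's ssh admin commands; a sketch run as an admin user, where the email value and key filename are placeholders rather than what was actually typed:]

    # give the service account an email address so set-account stops
    # tracebacking on a null email
    ssh -p 29418 admin@review-test.opendev.org gerrit set-account \
        --add-email root@review-test.opendev.org \
        openstack-project-creator

    # add the new public key generated for the test environment
    # (the nested quoting is the idiom gerrit's docs use for keys)
    ssh -p 29418 admin@review-test.opendev.org gerrit set-account \
        --add-ssh-key "'$(cat new_key.pub)'" \
        openstack-project-creator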
19:35 <fungi> mmm, yeah probably
19:36 <fungi> that reminds me we still need to decide how we want to set up our shared role addresses going forward so we can wean ourselves off openstack.org for one more thing
19:37 <fungi> maybe just exim+(courier/cyrus/something) on a small vm, we don't really need a webmail interface for those
19:37 <fungi> heck, i'd be satisfied with exim local delivery and mailx from a shell prompt, at least initially
19:42 <clarkb> pine!
19:45 <fungi> see, you're getting fancy
19:46 <fungi> but yeah, an imap mailbox or several would not be hard to manage. i've been maintaining my own mailserver on debian for decades
19:51 <clarkb> fungi: if you get a chance today can you look at review-test's manage-projects setup? I updated /usr/local/bin/manage-projects to use the correct gerrit docker image, added a ~gerrit2/acls/clarkb/clarkb-test-project.yaml, added a ~gerrit2/projects.yaml as well as the earlier ssh key addition to the openstack-project-creator account. ~gerrit2/projects.ini was not modified as it appears to be already set up to talk to review-test but should be double checked too
19:51 <clarkb> I'm mostly worried about wire crossing and accidentally adding a clarkb-test-project to prod
19:52 <fungi> will do
19:52 <fungi> i agree it would be unfortunate if it tried to update prod, though it really should not have sufficient credentials to do that
19:52 <clarkb> ya that is my expectation
19:52 <fungi> anyway, i'll double-check
19:53 <clarkb> thanks
19:53 *** tosky has joined #opendev
19:55 <clarkb> I've also noticed that jeepyb is hardcoded to create local mirror dirs
19:55 <clarkb> not today, but soon I'll see about making that conditional
19:58 <fungi> oh, good catch, yeah we wanted that at one time, even still not that long ago
20:25 <fungi> clarkb: the jeepyb config looks safe to me
20:26 <clarkb> fungi: ok want to run it in the root screen really quickly?
20:27 <clarkb> I've got the command queued up in the root screen and will run it momentarily
20:27 <fungi> oh, yeah, i had to reboot my workstation but i'll reattach now
20:27 <fungi> running now
20:27 <fungi> and done
20:28 <clarkb> that seems suspiciously quick and quiet
20:28 <fungi> yes
20:28 <fungi> also screen seems to have switch to your geometry
20:28 <fungi> switched
20:28 <clarkb> fungi: ya I made it smaller for you
20:29 <clarkb> I had disconnected too so when I reconnected it was huge then I made it arbitrarily smaller
20:29 <clarkb> it doesn't seem to have created the project
20:29 <fungi> that looks suspiciously empty
20:29 <clarkb> https://review-test.opendev.org/admin/repos/q/filter:clarkb is empty
20:30 <clarkb> the writing cache file message is in a finally block
20:30 <fungi> this seems like something to tackle tomorrow when we're both less busy
20:31 <clarkb> ya I'll look at it a bit more but I'm pretty sure it basically ssh'd then did nothing
20:32 <fungi> that's what it seems like, at least
20:32 <clarkb> my hunch based on lack of output is we're hitting `if args.projects and project not in args.projects: continue`
20:33 <openstackgerrit> James E. Blair proposed zuul/zuul-jobs master: Revert "Consolidate common log upload code into module_utils"  https://review.opendev.org/758247
20:33 *** yourname_ is now known as avass
20:34 <openstackgerrit> Tobias Henkel proposed zuul/zuul-jobs master: Remove unneeded gce from upload_utils  https://review.opendev.org/758248
20:39 <clarkb> and I'm off to make lunch now. Will pick this up later
20:40 <fungi> enjoy! i'm about to go pitch in on dinner
20:43 <openstackgerrit> Tobias Henkel proposed zuul/zuul-jobs master: Remove unneeded gce from upload_utils  https://review.opendev.org/758248
20:54 <openstackgerrit> James E. Blair proposed zuul/zuul-jobs master: Revert "Refactor fetch-sphinx-tarball to be executor safe"  https://review.opendev.org/758250
20:57 *** hasharDinner has quit IRC
20:58 <openstackgerrit> Merged zuul/zuul-jobs master: Remove unneeded gce from upload_utils  https://review.opendev.org/758248
21:07 <openstackgerrit> Merged zuul/zuul-jobs master: Revert "Refactor fetch-sphinx-tarball to be executor safe"  https://review.opendev.org/758250
21:09 *** sboyron has quit IRC
21:11 <clarkb> ianw: we deployed your gitea user agent filtering to prod today
21:12 <clarkb> we got hit by the same ddos again. I have a change up to add 4 more UAs too
21:15 <fungi> ianw: oh, in other news, we merged an emergency change to un-move one of the tarballs projects which had been part of the original openstack namespace exodus but was subsequently adopted by an official team and moved back
21:15 <fungi> hopefully that was the only one in that situation
21:15 <ianw> yeah, just catching up on various scrollback :)
21:15 <ianw> thanks for fixing my screw-ups :)
21:16 <ianw> fungi: it might be worth updating the opendev/project-config files with that, in case we have to make any more adjustments?
21:16 <ianw> seems unlikely though
21:17 <fungi> i suspect we did, since we use that for input into the gerrit rename playbook, but i haven't checked
21:18 <ianw> hrm, possibly my script was wrong then
21:19 <ianw> 20190531.yaml:  - old: x/compute-hyperv
21:19 <ianw> 20190531.yaml:    new: openstack/compute-hyperv
21:19 <ianw> my script probably didn't handle the second occurrence moving it back
21:21 <ianw>            if old_tenant.startswith('openstack') and \
21:21 <ianw>                not new_tenant.startswith('openstack'):
21:21 <fungi> hah
21:22 <ianw> that needed to be expanded with something like "if new_tenant is openstack and old-tenant was ..."
21:22 <ianw> is that the only instance or should i go through it again?
21:23 <ianw>   - old: x/whitebox-tempest-plugin
21:23 <ianw>     new: openstack/whitebox-tempest-plugin
21:23 <fungi> it's the only one handled by release management, but yeah there may be release independent stuff
21:23 <ianw>   - old: x/devstack-plugin-nfs
21:23 <ianw>     new: openstack/devstack-plugin-nfs
21:24 <fungi> okay, so we have a handful of others to clean up i guess
21:24 <fungi> odds are nobody was using the tarballs site for those
21:25 <ianw> kayobe
21:27 <ianw> fungi: this should be the list -> http://paste.openstack.org/show/799052/
21:29 <fungi> ianw: yeah those all look familiar. keep in mind that some may have tagged new releases since they were moved, so now have old tarballs in x and new tarballs in openstack (that was at least the case for compute-hyperv)
21:30 <ianw> yep, ok i gotta do school run but will come back and fix these up
21:55 <ianw> clarkb: now i think about it, i'm not sure that adding those strings will cause apache to pick up the new config?  i'm not sure there's a handler for that
21:56 <clarkb> hrm we can either manually restart or add a handler I guess
21:56 <clarkb> sorry my brain is checked out at this point. Was up early to check on release things and never caught back up from there
21:56 *** slaweq_ has joined #opendev
21:59 *** slaweq has quit IRC
22:00 <ianw> i can look at a handler for future and do a restart for now
22:00 <ianw> x/pyeclib openstack/pyeclib
22:00 <ianw> x/whitebox-tempest-plugin openstack/whitebox-tempest-plugin
22:00 <ianw> x/kayobe openstack/kayobe
22:01 <ianw> are the 3 remaining projects that have tarballs that should be moved back to openstack
22:02 *** slaweq_ has quit IRC
22:04 <fungi> yeah, i guess none of those released as part of victoria
22:05 *** slaweq has joined #opendev
22:07 <ianw> i'll just fix that up manually now
22:13 *** erbarr has quit IRC
22:13 <openstackgerrit> Ian Wienand proposed opendev/project-config master: [dnm] scripts to cleanup tarballs.opendev.org  https://review.opendev.org/754257
22:15 *** erbarr has joined #opendev
22:16 <openstackgerrit> Ian Wienand proposed opendev/system-config master: tarballs: remove incorrect redirects  https://review.opendev.org/758259
22:18 *** jentoio has quit IRC
22:18 *** jentoio has joined #opendev
22:20 <ianw> clarkb: no i'm wrong -- notify: gitea Reload apache2 -- should deploy ok
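[For context, the notify/handler pairing ianw refers to works roughly like this in ansible; a sketch, since the actual task names and file paths in system-config may differ:]

    # task that writes the vhost template; "notify" queues the handler
    # only when the rendered file actually changes
    - name: Write gitea apache vhost config
      template:
        src: gitea.vhost.j2
        dest: /etc/apache2/sites-enabled/gitea.conf
      notify: gitea Reload apache2

    # handler (in the role's handlers/ section) fires at the end of the
    # play, reloading apache to pick up the new UA strings
    - name: gitea Reload apache2
      service:
        name: apache2
        state: reloaded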
22:21 <ianw> #status log moved x/pyeclib x/whitebox-tempest-plugin x/kayobe back under openstack/ in tarballs AFS (https://review.opendev.org/758259)
22:21 <openstackstatus> ianw: finished logging
22:36 *** slaweq has quit IRC
22:41 *** qchris has quit IRC
22:47 <ianw> those extra matches failed in the gitea test
22:47 <ianw> HTTPSConnectionPool(host='localhost', port=3000): Max retries exceeded with url: /api/v1/user/orgs?limit=50&page=1 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f204cff35f8>: Failed to establish a new connection: [Errno 111] Connection refused',))
22:48 <ianw> 2020/10/14 21:55:26 cmd/web.go:204:runWeb() [C] Failed to start server: open /certs/cert.pem: no such file or directory
22:50 <ianw> we don't capture the acme.sh output but i guess the letsencrypt failed
22:51 <fungi> that seems like the most fragile dependency for installing that file
22:51 <fungi> so a reasonable guess
22:52 <ianw> there's a couple of containers around that provide for fake acme workflows, it's on my todo list to implement something like that for gate testing
22:54 *** qchris has joined #opendev
23:02 *** hamalq has quit IRC
23:04 <fungi> infra-root: i'm not really around at the moment, but i see that rackspace has just opened tickets for us about possible host outages impacting zk03 and nl01
23:04 <fungi> may be worth keeping an eye on
23:06 <ianw> ok, will do; with those services probably the first thing we'll notice is system-config deployment failing
23:08 *** hamalq has joined #opendev
23:27 *** tosky has quit IRC
23:29 *** mlavalle has quit IRC
