Thursday, 2021-03-25

00:04 *** dwilde has quit IRC
00:12 *** tosky has quit IRC
00:15 *** mlavalle has quit IRC
00:19 <ianw> fungi: hrm, i'd have to check if we have that turned on ATM
00:25 <ianw> fungi: https://opendev.org/opendev/system-config/src/branch/master/playbooks/roles/apache-ua-filter/files/ua-filter.conf
00:26 <ianw> semrush is definitely something that's come up before.  but i don't think we're explicitly blocking it
01:02 <fungi> yeah, i also think it's just statistical odds that crawlers would be the things which will show up in small cross-sections of our traffic
01:08 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] gitea token test  https://review.opendev.org/c/opendev/system-config/+/782887
01:14 <fungi> ianw: experimenting with auth offload?
01:14 <ianw> fungi: yeah, after reading those bugs on the password hash, it seems like maybe if we use an api token we bypass that?
01:15 <fungi> token auth seemed to be the primary recommendation, yes
01:15 <ianw> shouldn't be too hard.  famous last words :)
01:21 <openstackgerrit> Ian Wienand proposed opendev/zone-opendev.org master: Remove review-dev  https://review.opendev.org/c/opendev/zone-opendev.org/+/782889
01:38 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] gitea token test  https://review.opendev.org/c/opendev/system-config/+/782887
01:42 <openstackgerrit> Merged opendev/system-config master: Create review-staging group  https://review.opendev.org/c/opendev/system-config/+/780698
02:12 <ianw> infra-root: can we remove opendev-k8s-master opendev-k8s-1,2,3 in vexxhost ca-ymq-1 region?  i'm trying to start a new review server but we're out of quota
02:13 <ianw> that would be 32 gb ram
02:13 <ianw> Requested 131072, but already used 71680 of 102400 ram
02:14 <ianw> i guess we need more at any rate
02:24 *** hamalq has quit IRC
03:02 <ianw> hrm, so it seems you get the token sha only once, from then on the api reports it as blank.  i guess this makes sense but it isn't documented
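The behaviour ianw describes (the token sha is only visible at creation time) matches how Gitea's token API works. A rough stdlib-only sketch of building that request is below; the hostname, user, password, and token name are placeholders, not real values:

```python
# Sketch of creating a Gitea API token via POST /api/v1/users/{user}/tokens.
# The creation response includes the token's "sha1"; later GETs of the same
# endpoint list the token with the sha blanked, so it must be captured and
# stored from this first response. All credentials here are placeholders.
import base64
import json
import urllib.request


def token_request(host, user, password, name):
    """Build the authenticated POST request that creates a named API token."""
    url = f"https://{host}/api/v1/users/{user}/tokens"
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        data=json.dumps({"name": name}).encode(),
        headers={
            "Authorization": f"Basic {creds}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# urllib.request.urlopen(token_request(...)) returns a body like
# {"id": 1, "name": "...", "sha1": "..."} -- persist "sha1" immediately,
# since it will be blank in any subsequent listing.
```

Any automation consuming this has to persist the `sha1` at creation time, which is presumably what the gitea token test change above deals with.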
03:16 *** hemanth_n has joined #opendev
03:17 <openstackgerrit> Ian Wienand proposed opendev/zone-opendev.org master: Add review02.opendev.org  https://review.opendev.org/c/opendev/zone-opendev.org/+/782893
03:42 *** ykarel|away has joined #opendev
03:44 *** gothicserpent has joined #opendev
04:20 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] gitea token test  https://review.opendev.org/c/opendev/system-config/+/782887
05:04 *** ykarel|away is now known as ykarel
05:17 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] gitea token test  https://review.opendev.org/c/opendev/system-config/+/782887
05:35 <openstackgerrit> Ian Wienand proposed opendev/system-config master: launch-node : cap to 8gb swap  https://review.opendev.org/c/opendev/system-config/+/782898
05:38 <ianw> gitea never went above ~6% ram usage with ^^ on local testing here
05:56 *** ysandeep|away is now known as ysandeep
06:12 *** marios has joined #opendev
06:38 *** marios has quit IRC
06:40 *** fressi has joined #opendev
06:46 *** marios has joined #opendev
06:49 *** ralonsoh has joined #opendev
06:52 *** DSpider has joined #opendev
06:54 *** DSpider has quit IRC
07:04 *** jaicaa has quit IRC
07:08 *** jaicaa has joined #opendev
07:12 *** lpetrut has joined #opendev
07:18 *** slaweq has joined #opendev
07:33 *** tkajinam has quit IRC
07:33 *** eolivare has joined #opendev
07:33 *** tkajinam has joined #opendev
07:51 *** ysandeep is now known as ysandeep|lunch
07:59 *** jpena|off is now known as jpena
08:09 *** fressi has joined #opendev
08:10 *** andrewbonney has joined #opendev
08:11 *** amoralej|off is now known as amoralej
08:13 *** rpittau|afk is now known as rpittau
08:19 *** sboyron has joined #opendev
08:25 *** hashar has joined #opendev
08:36 *** ykarel_ has joined #opendev
08:38 *** ykarel has quit IRC
08:43 *** ykarel_ is now known as ykarel
08:43 *** tosky has joined #opendev
08:45 *** ysandeep|lunch is now known as ysandeep
08:50 <openstackgerrit> Carlos Goncalves proposed openstack/diskimage-builder master: Auto find greatest Fedora cloud image sub-release  https://review.opendev.org/c/openstack/diskimage-builder/+/755992
08:50 <openstackgerrit> Merged opendev/irc-meetings master: Move the Masakari meeting one hour back  https://review.opendev.org/c/opendev/irc-meetings/+/782838
08:53 *** ykarel_ has joined #opendev
08:55 *** ykarel has quit IRC
09:01 <openstackgerrit> Merged opendev/irc-meetings master: Remove usused Heat meeting slot  https://review.opendev.org/c/opendev/irc-meetings/+/777184
09:01 <openstackgerrit> Merged opendev/irc-meetings master: Remove usused Charms meeting slot  https://review.opendev.org/c/opendev/irc-meetings/+/777188
09:02 *** ykarel__ has joined #opendev
09:05 *** ykarel_ has quit IRC
09:07 <openstackgerrit> Carlos Goncalves proposed openstack/diskimage-builder master: Add Fedora 33 support  https://review.opendev.org/c/openstack/diskimage-builder/+/755750
09:24 *** sshnaidm|afk is now known as sshnaidm
09:30 *** ykarel__ has quit IRC
09:34 *** ykarel__ has joined #opendev
09:34 *** ykarel__ has quit IRC
10:20 *** hashar has quit IRC
10:39 *** hashar has joined #opendev
10:39 *** ykarel has joined #opendev
10:48 *** ralonsoh_ has joined #opendev
10:48 *** ralonsoh has quit IRC
10:56 *** lpetrut has quit IRC
11:35 *** dtantsur|afk is now known as dtantsur
11:41 <fungi> #status log Restarted the haproxy container on gitea-lb01.opendev.org because it had been stuck in 100% cpu consumption since 04:00 UTC
11:41 <openstackstatus> fungi: finished logging
11:48 *** ysandeep is now known as ysandeep|afk
11:48 *** jpena is now known as jpena|lunch
11:49 <openstackgerrit> Dmitry Tantsur proposed openstack/diskimage-builder master: simple-init: support installing Glean from packages  https://review.opendev.org/c/openstack/diskimage-builder/+/782299
11:52 *** lpetrut has joined #opendev
11:54 <fungi> and now that haproxy is behaving again, something is destroying one of the backends
11:54 <fungi> will take gitea08 out of the pool
11:55 *** jonher has joined #opendev
11:55 <fungi> a similar event hit gitea02 starting at roughly the same time haproxy went crazy
11:57 *** ysandeep|afk is now known as ysandeep
12:00 <fungi> and 05 and 06
12:00 *** stephenfin has quit IRC
12:02 *** ralonsoh_ is now known as ralonsoh
12:03 <fungi> no overlap in addresses between 02, 05 and 06 between 04:00 and 06:00, so i think we're looking at multiple address sources triggering this
12:10 *** hashar is now known as hasharLunch
12:16 *** hemanth_n has quit IRC
12:23 *** rfayan has joined #opendev
12:29 *** jpena|lunch is now known as jpena
13:01 *** lbragstad has quit IRC
13:04 *** amoralej is now known as amoralej|lunch
13:05 *** lbragstad has joined #opendev
13:09 *** klonn has joined #opendev
13:14 *** klonn has quit IRC
13:19 *** dwilde has joined #opendev
13:32 <corvus> ianw: i don't think there's any k8s stuff in use; is probably fine to delete
14:07 *** amoralej|lunch is now known as amoralej
14:09 *** hasharLunch is now known as hashar
14:12 <mordred> ianw, corvus: ^^ I agree - that _was_ where we were doing k8s for the gitea/ceph cluster
14:14 <openstackgerrit> Slawek Kaplonski proposed openstack/project-config master: Add neutron stable cores +2 powers in stable neutron-lib branches  https://review.opendev.org/c/openstack/project-config/+/783011
14:25 <fungi> gitea08 seems to have recovered, i'm putting it back in the haproxy pool
14:48 *** mlavalle has joined #opendev
14:49 <openstackgerrit> Merged openstack/project-config master: Add neutron stable cores +2 powers in stable neutron-lib branches  https://review.opendev.org/c/openstack/project-config/+/783011
15:04 *** dwilde has quit IRC
15:06 *** dwilde has joined #opendev
15:17 <fungi> memory consumption on 08 is climbing again
15:17 <fungi> i'm going to take it back out of the pool until it calms back down
15:23 <mtreinish> random gerrit question, I just noticed that the Reply-To field in the notification emails looks like it contains every email address that's subscribed to changes on a repo.
15:23 <mtreinish> Was it always like this and I just didn't notice until now, or did something change recently?
15:23 <fungi> that's neat
15:23 <fungi> no idea if that's new in 3.2
15:26 <mtreinish> ah, yeah I think it's probably a 3.2 change; looking at the user agent string, at least in my mailbox the giant Reply-To fields started with 'Gerrit/3.2.5.1-60-gbb9ea229fb-dirty'
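mtreinish's observation is easy to check against one's own mailbox: parse a notification's Reply-To header and count the addresses it carries. A minimal stdlib sketch (the header value in the usage comment is an illustrative example, not a real Gerrit header):

```python
# Count distinct addresses in an RFC 5322 address-list header such as
# Reply-To, using the stdlib email parser. Useful for spotting when a
# Gerrit upgrade starts stuffing every subscriber into the header.
from email.utils import getaddresses


def reply_to_count(header_value):
    """Return the number of distinct addresses in a Reply-To header value."""
    return len({addr for _, addr in getaddresses([header_value]) if addr})

# e.g. reply_to_count("Alice <a@example.org>, Bob <b@example.org>") -> 2
```

Running this over the `Reply-To` values of stored Gerrit mails, grouped by the `User-Agent` header, would pinpoint exactly which server version introduced the giant headers.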
15:26 *** fressi has quit IRC
15:42 *** ysandeep is now known as ysandeep|dinner
15:46 *** ysandeep|dinner is now known as ysandeep
15:48 *** mlavalle has quit IRC
15:48 *** _mlavalle_1 has joined #opendev
15:48 *** ysandeep is now known as ysandeep|dinner
15:57 *** lpetrut has quit IRC
15:59 *** roman_g has joined #opendev
16:24 *** ykarel has quit IRC
16:26 *** ykarel has joined #opendev
16:32 <weshay|ruck> eeek.. can I get infra to peek at a patch in the gate.. 774500,1
16:32 <weshay|ruck> seems stuck to me, but not sure
16:32 <weshay|ruck> oh.. nvrmind.
16:32 <weshay|ruck> goes to infra channel
16:37 *** ysandeep|dinner is now known as ysandeep
16:39 *** stephenfin has joined #opendev
16:40 <fungi> yeah, saw your earlier question there, so that's where i've been following up with analysis
16:40 *** ykarel has quit IRC
16:47 *** hamalq has joined #opendev
16:49 *** jpena is now known as jpena|off
16:49 *** hamalq_ has joined #opendev
16:52 *** hamalq has quit IRC
17:06 *** rpittau is now known as rpittau|afk
17:11 *** sshnaidm is now known as sshnaidm|afk
17:15 *** marios is now known as marios|out
17:15 *** ysandeep is now known as ysandeep|away
17:18 *** otherwiseguy has quit IRC
17:19 *** jpena|off has quit IRC
17:19 *** janders has quit IRC
17:20 *** dmellado has quit IRC
17:20 *** mugsie has quit IRC
17:20 *** jhesketh has quit IRC
17:20 *** Dmitrii-Sh has quit IRC
17:20 *** arxcruz has quit IRC
17:20 *** amorin has quit IRC
17:20 *** gibi has quit IRC
17:20 *** osmanlicilegi has quit IRC
17:20 *** valleedelisle has quit IRC
17:20 *** smcginnis has quit IRC
17:20 *** chrome0 has quit IRC
17:20 *** JohnnyRa1 has quit IRC
17:20 *** ttx has quit IRC
17:20 *** logan- has quit IRC
17:20 *** mkowalski has quit IRC
17:20 *** radez has quit IRC
17:20 *** slittle1 has quit IRC
17:20 *** mugsie has joined #opendev
17:20 *** amorin has joined #opendev
17:20 *** ttx has joined #opendev
17:20 *** janders has joined #opendev
17:20 *** smcginnis has joined #opendev
17:20 *** gibi has joined #opendev
17:20 *** JohnnyRa1 has joined #opendev
17:20 *** jpena|off has joined #opendev
17:20 *** chrome0 has joined #opendev
17:20 *** Dmitrii-Sh has joined #opendev
17:20 *** mkowalski has joined #opendev
17:20 *** arxcruz has joined #opendev
17:20 *** valleedelisle has joined #opendev
17:20 *** osmanlicilegi has joined #opendev
17:20 *** otherwiseguy has joined #opendev
17:20 *** jhesketh has joined #opendev
17:20 *** ttx has quit IRC
17:20 *** Eighth_Doctor has quit IRC
17:20 *** stevebaker has quit IRC
17:20 *** JayF has quit IRC
17:20 *** akahat has quit IRC
17:20 *** dpawlik has quit IRC
17:20 *** calcmandan_ has quit IRC
17:20 *** calcmandan has joined #opendev
17:21 *** dpawlik has joined #opendev
17:21 *** dmellado has joined #opendev
17:21 *** JayF has joined #opendev
17:21 *** mordred has quit IRC
17:21 *** logan- has joined #opendev
17:22 *** ttx has joined #opendev
17:23 *** slittle1 has joined #opendev
17:30 *** akahat has joined #opendev
17:32 <fungi> i've put gitea08 back into the haproxy pool again
17:32 <fungi> still keeping an eye on all the servers
17:42 *** JayF has quit IRC
17:42 *** hamalq_ has quit IRC
17:42 *** tkajinam has quit IRC
17:42 *** eolivare has quit IRC
17:42 *** cloudnull has quit IRC
17:42 *** dmsimard has quit IRC
17:42 *** paladox has quit IRC
17:42 *** mgagne has quit IRC
17:42 *** jmorgan has quit IRC
17:42 *** hillpd has quit IRC
17:42 *** aprice has quit IRC
17:42 *** zaro has quit IRC
17:42 *** dmellado has quit IRC
17:42 *** dpawlik has quit IRC
17:42 *** Dmitrii-Sh has quit IRC
17:42 *** chrome0 has quit IRC
17:42 *** sboyron has quit IRC
17:42 *** sshnaidm|afk has quit IRC
17:42 *** icey has quit IRC
17:42 *** redrobot has quit IRC
17:42 *** SWAT has quit IRC
17:42 *** dviroel has quit IRC
17:42 *** jroll has quit IRC
17:42 *** ildikov has quit IRC
17:42 *** bhagyashris has quit IRC
17:42 *** eharney has quit IRC
17:42 *** stephenfin has quit IRC
17:42 *** lbragstad has quit IRC
17:42 *** ricolin has quit IRC
17:42 *** frickler has quit IRC
17:42 *** artom has quit IRC
17:42 *** yoctozepto has quit IRC
17:42 *** openstackgerrit has quit IRC
17:42 *** dhellmann has quit IRC
17:42 *** chkumar|ruck has quit IRC
17:42 *** jonher has quit IRC
17:42 *** prometheanfire has quit IRC
17:42 *** noonedeadpunk has quit IRC
17:42 *** swest has quit IRC
17:42 *** weshay|ruck has quit IRC
17:42 *** spotz has quit IRC
17:42 *** dardelean has quit IRC
17:42 *** akahat has quit IRC
17:42 *** slittle1 has quit IRC
17:42 *** JohnnyRa1 has quit IRC
17:42 *** gothicserpent has quit IRC
17:42 *** owalsh has quit IRC
17:42 *** zbr|rover has quit IRC
17:42 *** mrunge has quit IRC
17:42 *** ianw has quit IRC
17:42 *** corvus has quit IRC
17:42 *** kevinz has quit IRC
17:42 *** gouthamr has quit IRC
17:42 *** snbuback has quit IRC
17:42 *** gmann has quit IRC
17:42 *** diablo_rojo_phon has quit IRC
17:42 *** portdirect has quit IRC
17:42 *** rm_work has quit IRC
17:42 *** Open10K8S has quit IRC
17:42 *** zbr|rover has joined #opendev
17:42 *** mrunge has joined #opendev
17:42 *** _mlavalle_1 has quit IRC
17:42 *** corvus has joined #opendev
17:42 *** paladox has joined #opendev
17:42 *** owalsh has joined #opendev
17:42 *** stephenfin has joined #opendev
17:43 *** eolivare has joined #opendev
17:43 *** sshnaidm|afk has joined #opendev
17:43 *** sboyron has joined #opendev
17:43 *** hamalq has joined #opendev
17:43 *** lbragstad has joined #opendev
17:43 *** jonher has joined #opendev
17:43 *** sboyron has quit IRC
17:43 *** prometheanfire has joined #opendev
17:43 *** SWAT has joined #opendev
17:43 *** chrome0 has joined #opendev
17:43 *** lbragstad has quit IRC
17:44 *** rm_work has joined #opendev
17:44 *** sboyron has joined #opendev
17:44 *** swest has joined #opendev
17:44 *** tkajinam has joined #opendev
17:44 *** ianw has joined #opendev
17:44 *** lbragstad has joined #opendev
17:44 *** dhellmann has joined #opendev
17:44 *** amoralej is now known as amoralej|off
17:44 *** dmellado has joined #opendev
17:44 *** icey has joined #opendev
17:45 *** parallax has quit IRC
17:45 *** gmann has joined #opendev
17:45 *** bhagyashris has joined #opendev
17:47 *** frickler has joined #opendev
17:47 *** frickler_ has joined #opendev
17:47 <frickler> freenode seems to take https://www.openssl.org/news/secadv/20210325.txt rather seriously, makes me wonder whether we should, too
17:49 *** 7YUAAAAAD has joined #opendev
17:49 *** frickler_ has quit IRC
17:49 *** eolivare has quit IRC
17:50 *** akahat has joined #opendev
17:51 *** cloudnull has joined #opendev
17:52 *** jhesketh has quit IRC
17:52 *** mordred has joined #opendev
17:52 *** jhesketh has joined #opendev
17:53 *** dwilde has quit IRC
17:54 *** jmorgan has joined #opendev
17:57 *** hashar has quit IRC
17:57 *** auristor has quit IRC
17:57 *** SotK has quit IRC
17:57 *** mwhahaha has quit IRC
17:57 *** johnsom has quit IRC
17:57 *** TheJulia has quit IRC
17:57 *** melwitt has quit IRC
17:57 *** CeeMac has quit IRC
17:57 *** mnaser has quit IRC
17:57 *** clayg has quit IRC
17:57 *** TheJulia has joined #opendev
17:57 *** mwhahaha has joined #opendev
17:57 *** CeeMac has joined #opendev
17:57 *** hashar has joined #opendev
17:57 *** clayg has joined #opendev
17:57 *** auristor has joined #opendev
17:58 *** melwitt has joined #opendev
17:58 *** jpena|off has quit IRC
17:58 *** dtantsur has quit IRC
17:58 *** iurygregory has quit IRC
17:58 *** donnyd has quit IRC
17:58 *** rpittau|afk has quit IRC
17:58 *** mattmceuen has quit IRC
17:58 *** jrosser has quit IRC
17:58 *** erbarr has quit IRC
17:58 *** seongsoocho has quit IRC
17:58 *** bbezak has quit IRC
17:58 *** knikolla has quit IRC
17:58 *** clarkb has quit IRC
17:58 *** walshh_ has quit IRC
17:58 *** fungi has quit IRC
17:58 *** SotK has joined #opendev
17:58 *** mnaser has joined #opendev
17:58 *** johnsom has joined #opendev
17:58 *** knikolla has joined #opendev
17:58 *** seongsoocho has joined #opendev
17:58 *** bbezak has joined #opendev
17:58 *** clarkb has joined #opendev
17:58 *** dtantsur has joined #opendev
17:58 *** erbarr has joined #opendev
17:58 *** donnyd has joined #opendev
17:58 *** rpittau|afk has joined #opendev
17:58 *** jrosser has joined #opendev
17:58 *** walshh_ has joined #opendev
17:58 *** mattmceuen has joined #opendev
17:58 *** jpena|off has joined #opendev
17:58 *** fungi has joined #opendev
17:58 *** JayF has joined #opendev
17:59 *** eharney has joined #opendev
18:03 *** openstack has joined #opendev
18:03 *** ChanServ sets mode: +o openstack
18:03 *** clarkb has quit IRC
18:03 *** hashar has quit IRC
18:03 *** jmorgan has quit IRC
18:03 *** akahat has quit IRC
18:03 *** gothicserpent has quit IRC
18:03 *** dmellado has quit IRC
18:03 *** mnasiadka has quit IRC
18:03 *** guilhermesp has quit IRC
18:03 *** erbarr has quit IRC
18:03 *** donnyd has quit IRC
18:03 *** dtantsur has quit IRC
18:03 *** jrosser has quit IRC
18:03 *** mwhahaha has quit IRC
18:03 *** TheJulia has quit IRC
18:03 *** parallax has quit IRC
18:03 *** mgoddard has quit IRC
18:03 *** persia has quit IRC
18:04 *** jmorgan has joined #opendev
18:04 *** openstackstatus has joined #opendev
18:04 *** ChanServ sets mode: +v openstackstatus
18:04 *** guilhermesp has joined #opendev
18:04 *** hashar has joined #opendev
18:04 *** clarkb has joined #opendev
18:04 *** jrosser has joined #opendev
18:04 *** donnyd has joined #opendev
18:04 *** erbarr has joined #opendev
18:04 *** dtantsur has joined #opendev
18:04 *** mwhahaha has joined #opendev
18:04 *** TheJulia has joined #opendev
18:04 *** parallax has joined #opendev
18:04 *** mgoddard has joined #opendev
18:04 *** persia has joined #opendev
18:04 *** mnasiadka has joined #opendev
18:04 *** dmellado has joined #opendev
18:05 *** donnyd has quit IRC
18:05 *** erbarr has quit IRC
18:05 *** dtantsur has quit IRC
18:05 *** jrosser has quit IRC
18:05 *** mwhahaha has quit IRC
18:05 *** TheJulia has quit IRC
18:05 *** parallax has quit IRC
18:05 *** mgoddard has quit IRC
18:05 *** persia has quit IRC
18:05 *** persia has joined #opendev
18:05 *** TheJulia has joined #opendev
18:06 *** mordred has quit IRC
18:06 *** frickler has quit IRC
18:06 *** tkajinam has quit IRC
18:06 *** gmann has quit IRC
18:06 *** Jeffrey4l has quit IRC
18:06 *** Jeffrey4l has joined #opendev
18:06 *** frickler_ has joined #opendev
18:06 *** donnyd has joined #opendev
18:06 *** tkajinam has joined #opendev
18:07 *** gmann has joined #opendev
18:07 *** erbarr has joined #opendev
18:07 *** mwhahaha has joined #opendev
18:07 *** Guest76033 has joined #opendev
18:08 *** jrosser has joined #opendev
18:08 *** auristor has joined #opendev
18:08 *** andrewbonney has quit IRC
18:09 *** dtantsur has joined #opendev
18:09 *** gothicserpent has joined #opendev
18:10 *** akahat has joined #opendev
18:10 <fungi> frickler_: yeah, i need to read the advisory more closely, today has been a frenzy
18:11 *** 7YUAAAAAD is now known as weshay|ruck
18:12 *** mordred has joined #opendev
18:13 *** ralonsoh has joined #opendev
18:13 <fungi> #status log Restarted nodepool-launcher container on nl04.openstack.org in hopes of clearing stuck node requests from what looks like brief disruption in ovh-bhs1 around 03:30 UTC
18:13 <openstackstatus> fungi: finished logging
18:13 *** parallax has joined #opendev
18:13 *** dtantsur is now known as dtantsur|afk
18:14 *** Eighth_Doctor has joined #opendev
18:15 *** ildikov has joined #opendev
18:15 *** ralonsoh has quit IRC
18:16 *** ttx has quit IRC
18:16 *** calcmandan has quit IRC
18:16 *** otherwiseguy has quit IRC
18:16 *** mkowalski has quit IRC
18:16 *** valleedelisle has quit IRC
18:16 *** gibi has quit IRC
18:16 *** amorin has quit IRC
18:16 *** mugsie has quit IRC
18:16 *** marios|out has quit IRC
18:16 *** gnuoy has quit IRC
18:16 *** elod has quit IRC
18:16 *** priteau has quit IRC
18:16 *** mtreinish has quit IRC
18:16 *** amoralej|off has quit IRC
18:16 *** tristanC has quit IRC
18:16 *** amotoki has quit IRC
18:16 *** ozzzo has quit IRC
18:16 *** odyssey4me has quit IRC
18:16 *** Eighth_Doctor has quit IRC
18:16 *** hamalq has quit IRC
18:16 *** logan- has quit IRC
18:16 *** osmanlicilegi has quit IRC
18:16 *** smcginnis has quit IRC
18:16 *** janders has quit IRC
18:16 *** roman_g has quit IRC
18:16 *** rfayan has quit IRC
18:16 *** tosky has quit IRC
18:16 *** slaweq has quit IRC
18:16 *** jaicaa has quit IRC
18:16 *** irclogbot_3 has quit IRC
18:16 *** jbryce has quit IRC
18:16 *** ShadowJonathan has quit IRC
18:16 *** gibi has joined #opendev
18:16 *** gnuoy has joined #opendev
18:16 *** calcmandan has joined #opendev
18:16 *** ttx has joined #opendev
18:16 *** mugsie has joined #opendev
18:16 *** odyssey4me has joined #opendev
18:17 *** valleedelisle has joined #opendev
18:17 *** amotoki has joined #opendev
18:17 *** jaicaa has joined #opendev
18:17 *** hamalq has joined #opendev
18:17 *** rfayan has joined #opendev
18:17 *** mkowalski has joined #opendev
18:17 *** tosky has joined #opendev
18:18 *** otherwiseguy has joined #opendev
18:18 *** osmanlicilegi has joined #opendev
18:18 *** mtreinish has joined #opendev
18:18 *** logan- has joined #opendev
18:19 *** Eighth_Doctor has joined #opendev
18:20 *** iurygregory has joined #opendev
18:20 *** frickler_ is now known as frickler
18:20 *** irclogbot_3 has joined #opendev
18:21 *** cgoncalves has joined #opendev
18:21 *** smcginnis has joined #opendev
18:21 *** tristanC has joined #opendev
18:29 *** Eighth_Doctor has quit IRC
18:29 *** mordred has quit IRC
18:31 *** dwilde has joined #opendev
18:35 *** roman_g has joined #opendev
18:37 *** elod has joined #opendev
18:43 <fungi> looks like the irc disruption has concluded, i'm going to restart gerritbot because it seems to think it's joined to at least one channel that it isn't
18:45 <fungi> #status log Restarted gerritbot, since in the wake of IRC netsplits it seems to have forgotten it's not joined to at least some channels
18:45 <openstackstatus> fungi: finished logging
18:52 *** Eighth_Doctor has joined #opendev
18:55 *** mordred has joined #opendev
18:55 *** roman_g has quit IRC
19:09 *** jralbert has joined #opendev
19:11 <jralbert> Trying to keep up with the chat logs from eavesdrop, so I'm not sure if this has been raised or not, but we're once again unable to git clone from opendev. We've got ~600 compute nodes to upgrade, and the process is stalled on trying to get source
19:12 *** mlavalle has joined #opendev
19:16 <fungi> jralbert: starting around 16:00 utc yesterday we've been getting periodic bursts of traffic overloading our git server backends and load balancer, i've been trying to stay on top of it, but haven't been able to nail down specific source ip addresses triggering the issue yet
19:16 <jralbert> Is it possible that it's us?
19:17 <fungi> jralbert: it's possible. did you start trying yesterday? are you cloning repositories in unusual ways? you're deploying via git checkouts i guess?
19:17 <jralbert> We started the compute node component of our upgrade yesterday
19:18 <jralbert> We're using openstack-ansible, which checks out from git on each node to build python wheels for nova and neutron. We're just working on nova right now
19:18 <fungi> looks like something started hitting us again around 18:55 utc, winding up at the gitea06 backend and overloading it. i'm disabling that in our load balancer pool now
19:19 <jrosser> jralbert: if you can clone the repos locally you can override the URL in the OSA config to pick up your local copies
19:19 <jrosser> or you can search/replace the opendev.org for github in the repo specs
19:20 <jralbert> jrosser: so this would be an effect of osa's move to per-node builds, yes? It used to do these checkouts once on the repo containers and push out built wheels from there?
19:21 <fungi> ideally we should be able to handle git clone operations, but nova takes a lot of ram to serve a complete clone, and our load balancing has to be persisted on source address for now unfortunately, so if tons of requests all come from the same ip address (for example behind a many-to-one nat) they all get directed to the same backend
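fungi's point about source-address persistence can be sketched as follows: hashing the client IP over the backend list means every connection from a single NAT'd address lands on the same gitea backend. This toy model uses an arbitrary hash and illustrative backend names, not haproxy's actual algorithm:

```python
# Toy model of source-address load balancing: the client IP alone
# selects the backend, so 600 compute nodes behind one NAT all map
# to a single server. Backend names are illustrative.
import hashlib

BACKENDS = [f"gitea{n:02d}" for n in range(1, 9)]


def pick_backend(client_ip, backends=BACKENDS):
    """Deterministically map a client address to one backend."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return backends[int.from_bytes(digest, "big") % len(backends)]

# Every request from "203.0.113.9" hits the same backend, regardless
# of source port or request -- which is why one hot NAT can melt a
# single gitea server while the other seven sit idle.
```

The upside of this scheme is that it needs no cookies or layer-7 inspection; the downside is exactly the failure mode being debugged here.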
19:21 <jrosser> jralbert: if you have a repo server in your deployment the wheels are only built once
19:21 <jrosser> there should not be a per-node build at all
19:22 <jralbert> curious: we do have repo containers, but it certainly does look like every node is trying to check out. Would ansible limit affect that?
19:22 <jrosser> i want to say no, but..... :)
19:23 <jralbert> at the scale of our environment, the compute plays take so long to run that we split them out and run them in their own limit after the control plane is done
19:23 <jrosser> jralbert: if you have some weird behaviour there stick it in a paste and drop it in #openstack-ansible
19:24 <jralbert> I'm not sure what to show you?
19:24 <jrosser> anything that appears to need to clone the upstream repo once per compute node, that's certainly not as intended
19:25 *** artom has joined #opendev
19:29 <fungi> looks like we ended up with a fairly large memory spike on gitea04 after i took 06 out of the pool, but it seems to be backing off now
19:29 <fungi> or maybe not, it's erratic. if it's still high for another 5-minute sample i'll take it out temporarily too
19:33 *** mgoddard has joined #opendev
19:36 <fungi> yeah, okay, it seems to have stopped in the last 5 minutes so not taking it out of the pool after all
19:36 <fungi> 06 looks like it's going to need a while to get its memory usage back under control though, so leaving it out for now
19:52 *** rfayan has quit IRC
19:58 *** stevebaker has joined #opendev
20:00 *** roman_g has joined #opendev
20:03 *** zigo has joined #opendev
20:36 *** priteau has joined #opendev
20:49 *** slaweq has joined #opendev
21:11 *** hashar has quit IRC
21:19 <ianw> corvus / mordred: thanks, i will clean up those resources
21:30 <ianw> fungi: a tale of two gitea tests, told in images https://imgur.com/a/YrlDNcd :)
21:48 <corvus> ianw: for api access?
21:48 <corvus> (for the repo creation/update stuff?)
22:03 <ianw> corvus: yep, the second was switched to use an api token
22:10 *** ralonsoh has joined #opendev
22:25 <fungi> ianw: if i know anything about comparing shapes, one of those is definitely bigger than the other
22:26 *** sboyron has quit IRC
22:31 <johnsom> Cloning into 'python-designateclient'...
22:31 <johnsom> fatal: unable to access 'https://opendev.org/openstack/python-designateclient.git/': GnuTLS recv error (-110): The TLS connection was non-properly terminated.
22:32 <johnsom> Are we rolling endpoints for the openssl thing?
22:33 <fungi> i don't think so
22:33 <fungi> it's probably an overloaded backend again... checking
22:33 <johnsom> This worked fine about 10 minutes ago, but I can't clone now.
22:35 <fungi> well, i don't see any runaway memory usage on any of the backends, nor cpu slammed on the lb this time
22:35 <fungi> i'm able to clone that
22:36 <fungi> johnsom: can you `echo|openssl s_client -connect opendev.org:https 2>/dev/null|openssl x509 -text|grep Subject:`
22:36 <fungi> and tell me what backend that says
22:37 <fungi> from the same machine where you're getting the error
22:37 <johnsom> Subject: CN = gitea05.opendev.org
22:37 *** Eighth_Doctor is now known as Conan_Kudo
22:37 <johnsom> 01 seems to work
22:38 <johnsom> On the host with 05, it won't clone
22:38 <fungi> checking out 05 in closer detail then, thanks
22:38 <fungi> it's certainly slow to log in via ssh
22:38 <fungi> very slow
22:39 *** Conan_Kudo is now known as Eighth_Doctor
22:40 <fungi> yikes, load average is 111
22:40 <fungi> swap thrashing like mad
22:40 <fungi> i wonder why the snmp memory graph in cacti doesn't reflect it
22:40 <fungi> taking it out of the pool now
22:40 <fungi> johnsom: try again, you'll likely hit a different backend now
22:40 *** Eighth_Doctor is now known as Conan_Kudo
22:41 <johnsom> Yeah, got 04, works like a charm
22:41 <fungi> hah, now i see, the graph went from all normal and unassuming to unresponsive between 5-minute polls: http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=66725&rra_id=all
22:41 <fungi> hard to spot that
22:41 *** Conan_Kudo is now known as Eighth_Doctor
22:42 <fungi> whatever's been going on, it's picked back up again
22:42 <johnsom> o/ thanks. Back to churning out patches. Let me know if you need anything else from me.
22:42 <fungi> in good news, gitea06 seems to have completely recovered so i've added it back to the pool
22:43 <fungi> johnsom: nope, all set, thanks for letting us know!
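The s_client trick fungi used above (each gitea backend serves a certificate whose CN names the backend) can be scripted. A rough Python equivalent, using only the stdlib; the parsing helper assumes the tuple-of-RDNs shape that `ssl.getpeercert()` returns:

```python
# Identify which load-balanced backend a host's connections are being
# persisted to, by reading the commonName out of the server certificate.
import socket
import ssl


def subject_cn(cert):
    """Pull commonName from an ssl.getpeercert()-style subject structure."""
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return None


def backend_cn(host, port=443):
    """Open a TLS connection and report the peer certificate's CN."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return subject_cn(tls.getpeercert())

# e.g. backend_cn("opendev.org") might report "gitea05.opendev.org"
```

Run from the affected client, this answers "which backend am I stuck on?" without needing the openssl CLI installed.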
22:46 <mordred> fungi: you know - back in my mysql days I would tell clients to disable swap because it was better to have the db server hard-fail than degrade. ... I wonder - now that we have these in containers, instead of disabling swap on the server, maybe we should set a cgroup ram limit on the gitea container
22:46 <mordred> that's just me pondering out loud
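mordred's pondering could look something like this in a docker-compose file. This is a hypothetical fragment, not the actual opendev gitea compose file: the service name, image, and limit values are all illustrative.

```yaml
# Hypothetical compose fragment: cap the gitea container's memory so the
# process is OOM-killed (and restarted) rather than dragging the whole
# backend into swap thrash. Names and values are illustrative.
services:
  gitea-web:
    image: opendevorg/gitea
    restart: always
    mem_limit: 6g
    memswap_limit: 6g   # equal to mem_limit => no swap for the container
```

Setting `memswap_limit` equal to `mem_limit` is the per-container version of "disable swap": the container hard-fails at the cap instead of degrading the host.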
22:47 <fungi> also wish haproxy noticed when these fell over, but i think in layer 4 mode it will only notice if connections start being rejected or timing out
22:48 <fungi> so connection tarpits continue to get an equal number of connections sent their way
22:54 *** janders has joined #opendev
22:55 *** openstackgerrit has joined #opendev
22:55 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] gitea token test  https://review.opendev.org/c/opendev/system-config/+/782887
22:58 <fungi> haproxy cpu has gone nuts again. restarting its container now
22:59 <fungi> #status log Restarted haproxy container on gitea-lb01 due to runaway CPU consumption
22:59 <openstackstatus> fungi: finished logging
23:00 <ianw> fungi: sorry, trying to keep up ... this haproxy behaviour is new?
23:00 <fungi> since wednesday, yes
23:00 <fungi> http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=66609&rra_id=all
23:01 <fungi> short story is we see "something" happen which begins to skyrocket memory consumption on one or more backends. minutes later haproxy consumes all available cpu (~6 system ~2 user), probably an interrupt storm
23:02 <fungi> it will continue like that indefinitely until we restart the container apparently, at least it has each time so far
23:03 <fungi> after restarting the haproxy container it returns to normal, but whatever problem connections are happening continue to pound the backends, and taking an exhausted backend out of the pool frequently means one of the others will suddenly see memory skyrocket once that happens
23:04 <fungi> i've been trying to correlate which source addresses are being directed to which backends and track which ones move across backend disablement to new backends exhibiting problems, but so far haven't found a clear culprit
23:05 *** amorin has joined #opendev
23:05 <fungi> i'm starting to think there are multiple or changing source addresses involved, or the problem isn't tied to specific requests (though the fact that it moves from a disabled backend to another available backend suggests it is related to connections)
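The correlation fungi describes can be automated: tally haproxy TCP-mode log lines by (client address, backend server), so a source that follows a disabled backend to a new one stands out. A sketch is below; the regex assumes the default TCP log format (`client:port [timestamp] frontend backend/server ...`), and the exact field layout depends on the local haproxy config:

```python
# Tally haproxy TCP-mode syslog lines per (client IP, backend server).
# A client whose counts reappear against a new server right after its
# old server is disabled is a candidate culprit.
import re
from collections import Counter

LINE_RE = re.compile(
    r"(?P<client>\d+\.\d+\.\d+\.\d+):\d+ \[[^\]]+\] "
    r"(?P<frontend>\S+) (?P<backend>[^/\s]+)/(?P<server>\S+)"
)


def tally(lines):
    """Count connections per (client address, backend server) pair."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            counts[(m.group("client"), m.group("server"))] += 1
    return counts

# e.g. tally(open("/var/log/haproxy.log")).most_common(20)
```

Diffing `most_common()` output from before and after a backend is pulled from the pool would show which sources migrate with the problem.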
23:05 <ianw> we should really move the haproxy logging into separate files
23:06 <fungi> i've basically just opened cacti graphs for lb cpu and memory on each of the 8 backends and am watching them constantly
23:07 <fungi> right now gitea05 is disabled in haproxy while it calms down from this latest incident, but the other 7 are taking connections
23:12 <fungi> memory is climbing steeply on gitea08, i may disable it momentarily if it continues
23:13 <johnsom> Are the haproxy connection logs scrolling, and are the timestamps lagging behind?
23:14 <fungi> it's being logged to syslog
23:14 <fungi> rsyslog
23:14 <fungi> which is timestamping them
23:15 <johnsom> Yeah, but is it a very high rate per second? Sometimes it can get io bound trying to write logs and you will see a system spike like that.
23:15 <fungi> oh, that might be, yeah
23:15 <fungi> though when i log in, it's the haproxy process with high system cpu, not rsyslog
23:16 <fungi> maybe writing to the rsyslog unix socket is the chokepoint there
23:17 <fungi> snmp is definitely reporting it as system not iowait though
23:17 <openstackgerrit> Ian Wienand proposed opendev/system-config master: haproxy: write to container log files  https://review.opendev.org/c/opendev/system-config/+/783120
23:17 <fungi> which is why i suspected interrupt handling
23:19 <fungi> i used to see this same sort of cpu utilization pattern on firewalls when they'd get overrun with high volumes of tiny packets (udp datagrams, tcp synfloods, tcp reset reflection attacks, et cetera)
23:19 <fungi> though in this case i have a feeling it's some client malfunctioning in strange ways after the backend it's being persisted to turns into a tarpit
23:20 <johnsom> That is why I was curious what your requests per second is.
23:20 <johnsom> This is an updated version of focal?
23:21 <fungi> i'll try to get a count
23:21 <fungi> johnsom: it's haproxy running in a container, so not sure how to answer
23:21 <johnsom> I was just trying to get an idea of the version of haproxy
23:22 <ianw> the host is bionic, HA-Proxy version 2.3.7-2d39ce3 2021/03/16
23:22 <johnsom> Oh, home grown haproxy then
23:22 <ianw> well, from their container
23:23 <johnsom> ah, ok
23:23 *** gibi has quit IRC
23:23 <ianw> https://grafana.opendev.org/d/ZQmopePMz/opendev-load-balancer?orgId=1 is another thing to look at
23:24 <ianw> it definitely shows when some of the servers get unhappy
23:24 <fungi> johnsom: so... cacti is able to poll the load balancer when this is happening... new theory: the tarpitting is causing established connection count to skyrocket past some sane threshold and haproxy goes crazy polling/context switching: http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=66611&rra_id=all
23:24 <fungi> traffic volume and packet rate are reasonably normal the whole time
23:24 <johnsom> Yeah, looking at those graphs, this is not a connection rate issue or the logging issue.
23:26 <fungi> it would explain why the haproxy cpu utilization trails the memory exhaustion on the backend... takes that long for tarpitted connections to pile up to the point where haproxy can't easily track them
23:27 <johnsom> So, the "HTTPS Server Average Session Time" graph seems to align to the gitea05 issue I saw.
23:28 <fungi> i'd be happier if i could figure out what requests are eating so much memory on the gitea backends
23:30 <johnsom> Yeah, I know a heck of a lot more about haproxy than gitea. lol
23:30 *** gibi has joined #opendev
23:31 <johnsom> There is a 2.3.8 build of haproxy from today, though I can't point to an obvious fix in the changelog.
23:33 <fungi> new openssl? ;)
23:33 <johnsom> Doesn't look like the container is on dockerhub yet though.
23:33 <johnsom> http://git.haproxy.org/?p=haproxy-2.3.git;a=commit;h=e572195c76ee79c5780adecb2560b6e6e36a7d1e
23:49 *** roman_g has quit IRC

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!