Thursday, 2022-07-21

*** dviroel|afk is now known as dviroel|out  [00:37]
*** rlandy is now known as rlandy|out  [01:09]
*** undefined is now known as Guest5598  [03:35]
*** ysandeep|out is now known as ysandeep  [05:00]
*** ysandeep is now known as ysandeep|lunch  [07:35]
*** ysandeep|lunch is now known as ysandeep  [09:59]
*** rlandy|out is now known as rlandy  [10:32]
*** ysandeep is now known as ysandeep|afk  [10:53]
*** dviroel|out is now known as dviroel  [11:24]
<fungi> pip 22.2 has just been released  [11:55]
<opendevreview> Merged opendev/system-config master: add computing force network mailling list for computing force network working group  https://review.opendev.org/c/opendev/system-config/+/850268  [12:31]
*** dasm|off is now known as dasm|ruck  [13:05]
*** Guest5598 is now known as rcastillo  [13:05]
*** ysandeep|afk is now known as ysandeep  [13:20]
<fungi> looks like infra-prod-base may be failing again (most recent daily failed, and again just now on that ^ change): https://zuul.opendev.org/t/openstack/builds?job_name=infra-prod-base&project=opendev/system-config  [13:36]
<fungi> "non-zero return code" from the ansible-playbook task, but /var/log/ansible/base.yaml.log doesn't show any failed tasks  [13:37]
<fungi> timestamps in that log correspond to the failed build too  [13:40]
<fungi> ahh, there's the reason... "fatal: [nb03.opendev.org]: UNREACHABLE!"  [13:41]
<fungi> i'll see if i can get it rebooted  [13:41]
<fungi> api says it's currently in "shutoff" state  [13:41]
<fungi> okay, `openstack server start` worked this time  [13:43]
<fungi> i'm able to log into it now  [13:43]
<fungi> i'll try to reenqueue the deploy and make sure it works  [13:43]
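
A hedged sketch of the recovery described above, using standard openstackclient commands (the server name comes from the log; cloud credentials and region selection are assumed to already be configured):

```shell
# confirm the instance is powered off, then start it and re-check its status
openstack server show nb03.opendev.org -f value -c status   # expect: SHUTOFF
openstack server start nb03.opendev.org
openstack server show nb03.opendev.org -f value -c status   # expect: ACTIVE
```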
<gthiemonge> Hi folks, in this buildset: https://zuul.opendev.org/t/openstack/buildset/4c45db68c55b407ebb05f77dcf65729b all the jobs for wallaby and xena failed with mirror issues, is there a known issue? should I recheck?  [14:29]
<fungi> gthiemonge: that doesn't look like a mirror problem, since you got errors downloading the same packages through proxies in different parts of the world, indicating a recheck may not help unless the problem was transient and has already resolved itself  [14:49]
<gthiemonge> fungi: yeah, you're right  [14:50]
<fungi> i'll see if i can find a corresponding error in one of the proxy logs to indicate what the problem might be on the pypi/fastly side  [14:51]
<opendevreview> Dmitriy Rabotyagov proposed openstack/project-config master: Add job for publishing ansible collections  https://review.opendev.org/c/openstack/project-config/+/850664  [14:55]
<fungi> gthiemonge: oh! actually it looks like there may have been an earlier error i overlooked in the logs  [14:58]
<fungi> "Building wheel for openstack-requirements (setup.py): finished with status 'error'"  [14:58]
<fungi> "error: invalid command 'bdist_wheel'"  [14:59]
<fungi> https://zuul.opendev.org/t/openstack/build/373843dc47234df4b0fe35885803efb0/log/job-output.txt#5189-5200  [15:00]
<fungi> so maybe it's started using a new setuptools which removed bdist_wheel?  [15:00]
<fungi> that's nothing to do with mirrors or pypi  [15:01]
<clarkb> fungi: bdist_wheel is supplied by the wheel package iirc  [15:02]
<clarkb> and it is supposed to fall back to using an sdist in that situation  [15:03]
<fungi> ahh, okay, so maybe that one's benign  [15:03]
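
For context, the "invalid command 'bdist_wheel'" error above is what setuptools prints when the wheel package is not installed in the build environment; a hedged sketch of reproducing and clearing it in a throwaway virtualenv (paths are illustrative, and it assumes a project directory with a setup.py):

```shell
python3 -m venv /tmp/wheel-test
/tmp/wheel-test/bin/python setup.py bdist_wheel   # error: invalid command 'bdist_wheel'
/tmp/wheel-test/bin/pip install wheel             # the wheel package provides the bdist_wheel command
/tmp/wheel-test/bin/python setup.py bdist_wheel   # now builds a .whl under dist/
```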
<fungi> so yeah, it does seem to be the truncated package downloads which are causing the job failure: https://zuul.opendev.org/t/openstack/build/373843dc47234df4b0fe35885803efb0/log/job-output.txt#6861-6866  [15:05]
<fungi> Connection broken: InvalidChunkLength(got length b'', 0 bytes read)  [15:05]
<clarkb> whatever the issue was, I'm able to download those packages now  [15:06]
<clarkb> some sort of cdn blip with pypi I guess  [15:06]
<fungi> 'ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))': /pypi/simple/grpcio/  [15:06]
<fungi> found in the apache logs:  [15:07]
<fungi> [2022-07-21 10:43:51.503494] [substitute:error] [pid 2890472:tid 139668007925504] [client 2604:e100:1:0:f816:3eff:fece:9caa:41398] AH01328: Line too long, URI /pypi/simple/grpcio/,  [15:08]
<clarkb> note I'm able to download the packages through our proxies too, using the urls that are logged as being unhappy  [15:09]
<yoctozepto> I see you are already debugging the issue we have spotted in kolla CI too (InvalidChunkLength)  [15:12]
<fungi> yeah, i can pull up https://mirror.ca-ymq-1.vexxhost.opendev.org/pypi/simple/grpcio/ just fine  [15:12]
<fungi> it looks like it may have been a transient issue, i only see it logged on mirror.ca-ymq-1.vexxhost for several requests at 10:43:35 utc  [15:13]
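
A hedged sketch of the kind of spot-check being described, fetching the same index directly from pypi and through one of the proxies to compare status and size (the mirror hostname is the one mentioned in the log; the curl flags are standard):

```shell
# -sS: quiet but still print errors; -w reports the HTTP status and downloaded byte count
curl -sSo /dev/null -w '%{http_code} %{size_download}\n' https://pypi.org/simple/grpcio/
curl -sSo /dev/null -w '%{http_code} %{size_download}\n' https://mirror.ca-ymq-1.vexxhost.opendev.org/pypi/simple/grpcio/
```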
<fungi> yoctozepto: are your timeframes similar?  [15:13]
<clarkb> one of the other jobs ran in ovh gra1  [15:14]
<clarkb> whatever it was, I suspect the cdn/pypi itself due to that  [15:14]
<fungi> and another in inmotion  [15:14]
<fungi> i'm checking all the apache logs to see if they correspond  [15:14]
<yoctozepto> the earliest is (UTC) 12:59, the latest is 15:00  [15:14]
<yoctozepto> but the jobs likely failed before that  [15:14]
<yoctozepto> I'm reporting timestamps of Zuul comments  [15:15]
<fungi> on mirror.gra1.ovh i see a bunch of those between 10:53:37-13:46:47 utc  [15:15]
<clarkb> those can vary wildly from the actual errors and probably aren't very helpful  [15:15]
<clarkb> zuul reporting at 15:00 could've hit the error at 10:42  [15:16]
<opendevreview> Dmitriy Rabotyagov proposed openstack/project-config master: Add job for publishing ansible collections  https://review.opendev.org/c/openstack/project-config/+/850664  [15:16]
<fungi> the individual job completion times will probably be fairly soon after the errors, but the gerrit comment can be significantly delayed, yes  [15:16]
<clarkb> in any case, when I check the urls directly now they seem to work, and considering the global spread I strongly suspect pypi or its cdn. https://status.python.org/ doesn't show current issues though  [15:16]
<fungi> seen on mirror.mtl01.iweb 10:46:13-13:39:02  [15:17]
<fungi> yeah, i suspect it cleared up around 1.5 hours ago, after being broken for ~2.5 hours  [15:18]
<yoctozepto> https://zuul.opendev.org/t/openstack/build/8b8b388d3e2d484b9c2215144b50d505  [15:19]
<yoctozepto> Started at 2022-07-21 14:32:00  [15:19]
<yoctozepto> Completed at 2022-07-21 14:43:38  [15:19]
<opendevreview> Dmitriy Rabotyagov proposed openstack/project-config master: Add job for publishing ansible collections  https://review.opendev.org/c/openstack/project-config/+/850664  [15:25]
<fungi> yoctozepto: which logfile should i be looking at to find the error?  [15:26]
*** dviroel_ is now known as dviroel  [15:26]
*** rlandy is now known as rlandy|afk  [15:28]
<kevko> fungi: for example this -> https://zuul.opendev.org/t/openstack/build/8b8b388d3e2d484b9c2215144b50d505/log/kolla/build/000_FAILED_openstack-base.log  [15:29]
*** ysandeep is now known as ysandeep|out  [15:30]
<fungi> kevko: thanks! so definitely the /pypi/simple/grpcio/ path again, but at yet another mirror. i'll check the logs there  [15:33]
<fungi> no, nevermind, that was also mirror.mtl01.iweb  [15:34]
<clarkb> unfortunately no timestamps in that log?  [15:35]
<fungi> though zuul says that build started at 14:32:00 and ended at 14:43:38, so it must be sometime in that 11-minute timespan  [15:36]
<clarkb> kevko: side note: you can use https against our mirrors now (we updated jobs to default to this when it was added, but maybe your docker image stuff needs explicit instruction to do so)  [15:37]
<fungi> strangely, i don't even see it in the access log  [15:37]
<fungi> oh! that's why. i was looking at https  [15:38]
<fungi> there we go  [15:38]
*** marios is now known as marios|out  [15:40]
<fungi> /var/log/apache2/proxy_8080_error.log on mirror.mtl01.iweb mentions the problem between 14:40:39-15:21:44  [15:40]
<clarkb> fungi: it is also curious that it seems to be that specific package? Like maybe something is up with its backing files  [15:46]
<clarkb> we could try a purge against those packages in case it is the cached data that is sad  [15:47]
<yoctozepto> oh, https  [15:48]
<yoctozepto> we substitute http explicitly I guess  [15:48]
<fungi> clarkb: i'm unsure it's coming from the backing files, the error looks like:  [15:56]
<fungi> [2022-07-21 15:21:44.682230] [substitute:error] [pid 3820:tid 140633293432576] [client 198.72.124.35:42452] AH01328: Line too long, URI /pypi/simple/grpcio/,  [15:56]
<fungi> and it's logged in the proxy error log  [15:56]
<fungi> but maybe it caches that state? seems odd for it to cache an error condition. bad content, sure  [15:56]
<clarkb> fungi: the internet says that mod_substitute may be trying to substitute text in files that are too big  [15:57]
<clarkb> https://bz.apache.org/bugzilla/show_bug.cgi?id=56176  [15:58]
<fungi> ahh, so we could have cached the bad content fastly served us, and then apache is erroring when trying to use the cached state  [15:58]
<clarkb> I guess that could explain why it is specific packages: their index or contents could be too large  [15:58]
<clarkb> ya, maybe. Assuming the bad state is extremely long lines?  [15:58]
<fungi> so far it's only been the simple api index for grpcio, as far as i've seen in logs  [15:58]
<clarkb> we substitute values on the indexes so that the urls for the file content point back at our mirrors iirc  [15:59]
<fungi> right  [15:59]
<clarkb> view-source:https://pypi.org/simple/grpcio/ all those urls get updated to our urls  [15:59]
<clarkb> and it is per line  [16:00]
<clarkb> there are some long lines, but nothing over 1MB in there currently. I suspect that what we got back is html without line breaks for some reason  [16:00]
<clarkb> and then since grpcio has lots of releases, that in aggregate is over 1MB?  [16:01]
<clarkb> we can increase the length limit. If it is serving the whole index without line breaks, that would be about 1.4MB, so maybe bumping the limit to 5MB as in that bug is reasonable?  [16:02]
<clarkb> But I suspect any indexes of that form would indicate some bug in the pypi serving process, assuming that is what happened  [16:02]
<clarkb> I've got a dentist appointment soon. But if this continues to persist, I think we can a) check our cached values for evidence of no-line-break indexes to file an issue against pypi and/or b) increase the limit to, say, 5MB  [16:04]
<fungi> yes, i agree it seems like pypi probably temporarily served up something that was semi-broken and had extremely long lines  [16:05]
<fungi> fwiw, i don't see any new occurrences in the mirror.mtl01.iweb proxy_8080_error.log after 15:21:44 utc  [16:05]
<clarkb> fungi: if you grep for one of the shasums in that index (or similar) in our apache cache, I wonder if you can find the cached indexes that way and then see if any lack line breaks  [16:06]
<yoctozepto> any idea if it's worky now? (i.e., may we issue rechecks?)  [16:36]
<fungi> yoctozepto: i see no new evidence of that error for over an hour now, so it's probably fine  [16:37]
<yoctozepto> fungi: thanks, will retry then  [16:38]
<fungi> though that was in the proxy_8080_error.log on mirror.mtl01.iweb  [16:38]
<fungi> in mirror_443_error.log on mirror.mtl01.iweb i see a newer burst which happened at 15:57:09-15:57:17  [16:39]
<fungi> i'll check back on the other mirrors  [16:39]
<fungi> 16:02:58 in the mirror_443_error.log on mirror.gra1.ovh  [16:40]
<fungi> that's the most recent i can find  [16:41]
<fungi> less than 40 minutes ago, so it may still be going on  [16:41]
<kevko> it is the same :(  [17:01]
<kevko> https://zuul.opendev.org/t/openstack/build/2929144b89da4415b504bdce8d721c52/log/kolla/build/000_FAILED_openstack-base.log  [17:01]
<kevko> fungi: ^^  [17:01]
<fungi> and yet another region  [17:02]
<fungi> yeah, same error showing up in proxy_8080_error.log on mirror.iad3.inmotion as recently as 16:58:55 utc  [17:02]
<fungi> first occurrence in there was at 12:39:57 utc  [17:04]
<TheJulia> diablo_rojo: oh hai!  [17:05]
<TheJulia> err  [17:05]
<fungi> oh! though i see some similar errors for different urls earlier on mirror.iad3.inmotion in mirror_443_error.log  [17:05]
<TheJulia> wrong window!  [17:05]
<fungi> it was apparently also impacting pymongo and moto  [17:06]
<fungi> at 02:38:40-02:38:48 and 02:39:14-02:39:16 respectively  [17:07]
<fungi> i'll see about overriding https://httpd.apache.org/docs/2.4/mod/mod_substitute.html#substitutemaxlinelength in our configs as a workaround  [17:09]
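
A hedged sketch of the sort of vhost stanza involved (hostnames, paths, and the limit value are illustrative, not the exact system-config contents): the proxy rewrites pypi.org and files.pythonhosted.org URLs in the index pages to point back at the mirror, and mod_substitute refuses to process any line longer than its default 1MB limit unless SubstituteMaxLineLength is raised.

```apache
<Location /pypi/>
    ProxyPass https://pypi.org/
    ProxyPassReverse https://pypi.org/
    # run the SUBSTITUTE output filter on the index responses
    AddOutputFilterByType SUBSTITUTE text/html
    # allow very long lines (e.g. a single-line simple API response)
    SubstituteMaxLineLength 5m
    # point package URLs back at this mirror instead of upstream
    Substitute "s|https://pypi.org/|https://mirror.example.opendev.org/pypi/|ni"
    Substitute "s|https://files.pythonhosted.org/|https://mirror.example.opendev.org/pypifiles/|ni"
</Location>
```

The 5m value shown mirrors the figure floated above; the change proposed just below takes this directive-override approach, and a later change in the log bumps the limit to 20m.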
<opendevreview> Jeremy Stanley proposed opendev/system-config master: Override SubstituteMaxLineLength in PyPI proxies  https://review.opendev.org/c/opendev/system-config/+/850677  [17:15]
<fungi> clarkb: gthiemonge: yoctozepto: kevko: ^  [17:16]
<fungi> one thing the three affected indices all have in common is that their sizes are near or over one megabyte. grpcio is the largest of the three with a current character count of 1411461  [17:20]
<fungi> but moto is presently only 960403, so i'm wondering if the pypi/warehouse admins accidentally fubared the routing for the new json api and were returning it (which is understood to bloat the response size by somewhere around 1.5-2x)  [17:21]
<fungi> if so, once the workaround lands, we may see jobs breaking because of pip getting json when it expected html, or being too old to support the json api  [17:22]
<fungi> today's pip 22.2 release, which turned on the json api by default, is a big hint for me that this could be related  [17:23]
<fungi> and pretty sure the json version is returned as one very long line  [17:24]
<Clark[m]> fungi: that lgtm, but I'm still at the dentist if you want to self approve  [17:56]
<opendevreview> Merged opendev/system-config master: Override SubstituteMaxLineLength in PyPI proxies  https://review.opendev.org/c/opendev/system-config/+/850677  [18:39]
<kevko> fungi: let me rechec  [18:45]
<kevko> *recheck  [18:45]
<yoctozepto> it continues failing: https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_095/850636/1/check/kolla-build-debian/095006d/kolla/build/000_FAILED_openstack-base.log  [19:04]
* yoctozepto off  [19:04]
<kevko> failing :(  [19:07]
<fungi> did that configuration deploy to the mirrors yet?  [19:09]
<kevko> how can we know? :P  [19:09]
<fungi> kevko: yoctozepto: deploy jobs reported successful at 18:51 utc, so you may have rechecked too soon, but let me also confirm the configs are actually on the servers now  [19:10]
<Clark[m]> https://zuul.opendev.org/t/openstack/build/0614814401ee4fc4bf6db393dcbc62a2 is the link. It's just like any other zuul job  [19:12]
<Clark[m]> Will show up in the status page and is searchable etc  [19:12]
<fungi> looks like the configs updated around 18:48, looking at the fs  [19:12]
<fungi> i'm not sure the apache processes have actually been reloaded with the new config though  [19:13]
<fungi> i still see apache processes with timestamps from 6 hours ago  [19:14]
<fungi> the linked failure was hitting mirror.mtl01.iweb at or shortly after the 18:53:03 timestamp at the top of its log, and the config was updated on that server at 18:48:28 according to a stat of the file  [19:18]
<fungi> so in theory the config was already installed  [19:18]
<fungi> the parent apache2 process has a start time in january, but the worker processes i see on the server are from 17:54 and 18:57 today  [19:19]
<fungi> looking at /var/log/ansible/service-mirror.yaml.log on bridge, i don't immediately see that we actually reload/restart the apache2 service when its config gets updated  [19:23]
<Clark[m]> https://opendev.org/opendev/system-config/src/branch/master/playbooks/roles/mirror/handlers/main.yaml is the handler that should do it iirc  [19:24]
<Clark[m]> But Ansible handlers are always iffy  [19:24]
<Clark[m]> https://opendev.org/opendev/system-config/src/branch/master/playbooks/roles/mirror/tasks/main.yaml#L140 that doesn't actually notify though, only the a2ensite does, which is a noop?  [19:25]
<Clark[m]> That may explain it  [19:25]
<fungi> yeah, i'll push up a patch  [19:26]
<Clark[m]> We may want to reload when config updates and restart when it is written fresh? Maybe that is why it is done that way, to avoid restarting when already up?  [19:26]
<opendevreview> Jeremy Stanley proposed opendev/system-config master: Notify apache2 reload on updates to mirror vhost  https://review.opendev.org/c/opendev/system-config/+/850686  [19:29]
<fungi> Clark[m]: ^ like that?  [19:29]
<Clark[m]> Yes, but that will restart Apache, which may cause jobs to fail  [19:30]
<Clark[m]> I think we may want a separate notify to reload instead? But I don't know how to reconcile that against when we actually do need to restart  [19:31]
<fungi> ahh, yeah, for the mailman playbook we have a reload handler  [19:31]
<fungi> in progress  [19:31]
<Clark[m]> But it may just work to send both notifies and then have systemd sort it out  [19:31]
<opendevreview> Jeremy Stanley proposed opendev/system-config master: Notify apache2 reload on updates to mirror vhost  https://review.opendev.org/c/opendev/system-config/+/850686  [19:33]
<fungi> i'm good with the simple approach. we can always optimize somehow if we find that it causes a problem  [19:33]
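
A hedged sketch of the pattern under discussion, not the exact system-config contents (file paths and the handler name are illustrative): the task that installs the vhost template notifies a handler that reloads apache2, so config changes get picked up without a full restart.

```yaml
# roles/mirror/tasks/main.yaml (excerpt)
- name: Install mirror vhost config
  template:
    src: mirror.vhost.j2
    dest: /etc/apache2/sites-available/mirror.conf
  notify: mirror reload apache2

# roles/mirror/handlers/main.yaml (excerpt)
- name: mirror reload apache2
  service:
    name: apache2
    state: reloaded
```

Note the notify string has to match the handler name exactly; that requirement is where the 'reload apache2' vs 'mailman reload apache2' copy/paste issue further down in the log came from.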
<kevko> fungi: nope, there is still something bad in CI ... when building the same image locally, it's working  [19:33]
<fungi> kevko: yeah, it's not "in ci" but rather the proxies have bad data cached from pypi and are still tripping over it. right now the updated configuration to allow them to serve that probably-bad data is still not applied, because the apache services were never told to reload the vhost config updates  [19:35]
<fungi> 850686 should address that in the future, but for now i need to script up something to reload apache2 on all the mirrors  [19:36]
<fungi> sudo ansible mirror -m shell -a 'systemctl reload apache2'  [19:37]
<fungi> that does what i think it does, right?  [19:37]
<fungi> guess i'll find out  [19:39]
<fungi> now i see new process timestamps on the apache workers on mirror.mtl01.iweb, so the others are presumably the same  [19:40]
<Clark[m]> Ya, I think something like that will work  [19:41]
<fungi> kevko: have another try. it will likely still fail, but we'll hopefully get a different (more useful) error this time  [19:41]
<Clark[m]> And ya, if pypi is serving badly formatted data that's not specific to us. It's just that we trip on it at the proxy rather than the client  [19:41]
<clarkb> I won't fast-approve 850686 since you've manually done the reload now (and landing that won't trigger reloads, as the config isn't changing)  [19:54]
<fungi> i'm increasingly suspicious we're ending up with the json index rather than the html one  [19:54]
<clarkb> hopefully someone else can do a second review of it and make sure we aren't missing some important piece  [19:55]
<clarkb> fungi: does that imply pypi changed the simple api?  [19:55]
<clarkb> I really hope not, as it will potentially make our caching proxy setup more difficult to maintain  [19:55]
<fungi> clarkb: no, the design is that warehouse is supposed to route the requests to the json api vs the simple api depending on what content type gets requested  [19:55]
<fungi> but i feel like maybe they botched something  [19:56]
<clarkb> fungi: or maybe new pip is requesting json  [19:56]
<clarkb> I think that is still a big regression for caching  [19:56]
<clarkb> really what I'm trying to say is that any switch to json is going to be problematic for those of us that try to be good citizens  [19:57]
<fungi> yeah, if the jobs in question are ending up with pip 22.2, maybe, that could explain it  [19:58]
<clarkb> `curl -H 'Accept: application/json' https://pypi.org/simple/setuptools/` gets me html at least  [19:59]
<fungi> missing from the changelog but mentioned in the release announcement here: https://discuss.python.org/t/announcement-pip-22-2-release/17543  [19:59]
<fungi> though that leaves me wondering if mod_proxy can differentiate those when caching  [19:59]
<clarkb> unfortunately that pep doesn't list "make caching easier" as a goal  [20:00]
<clarkb> fungi: yes, it should  [20:00]
<fungi> because it's the same url, just with different responses depending on the accept header  [20:00]
<clarkb> it is header and content type aware  [20:01]
<clarkb> curl -H 'Accept: application/vnd.pypi.simple.v1+json' https://pypi.org/simple/setuptools/ is how you get the json  [20:01]
<clarkb> and ya, I bet the issue is new pip being used and getting large json docs back  [20:01]
<clarkb> in which case your fix to increase the substitute line length is correct. Maybe we should ask them to add line breaks in the json doc  [20:01]
<clarkb> grpcio's json is ~1.6MB, so 5MB is plenty there, but unlike line-broken html this will grow until the end of time  [20:03]
<fungi> `curl -H 'Accept: application/vnd.pypi.simple.v1+json' https://mirror.mtl01.iweb.opendev.org/pypi/simple/bindep/` does return json for me, so yeah, this may be kolla's jobs suddenly using pip 22.2  [20:04]
<clarkb> ya, and there are no line breaks  [20:05]
<fungi> and yes, the same request for grpcio's index returns 1524169 characters of json  [20:05]
<clarkb> so I think your change is a good one going forward, but we'll have to bump that to 10m and then 20m and so on, potentially for specific packages. We should maybe ask them to add some line breaks  [20:05]
<clarkb> or we can just unleash the bots at their cdn directly <_<  [20:06]
<clarkb> fungi: the root index is 10MB large and also includes no line breaks  [20:07]
<clarkb> so ya, 5m may already be insufficient  [20:08]
<clarkb> :/  [20:08]
<clarkb> *9.3MB  [20:08]
<clarkb> however, we don't need to do substitution on the root index, so maybe we don't have this problem  [20:08]
<clarkb> we apply the substitution to everything under /pypi, which includes the root index  [20:09]
<clarkb> so ya  [20:09]
<clarkb> I think we should bump the limit to 20MB in the apache config, ask upstream if they can add an occasional line break in the json, and fall back to direct access if those actions don't alleviate the problem  [20:11]
<fungi> i replied to the release announcement topic  [20:14]
<clarkb> thanks  [20:15]
<fungi> though any request to alter how the responses are delivered probably warrants an issue filed for warehouse  [20:16]
<clarkb> you can add line breaks between entries in json without altering the semantics or validity of the json. But ya, probably best to track it there  [20:18]
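
A minimal Python sketch of that point (the package names are made up): reserializing the same JSON document with newlines between members leaves its meaning untouched while keeping individual lines short.

```python
import json

doc = json.dumps({"files": [{"filename": f"pkg-{i}.whl"} for i in range(3)]})
with_newlines = json.dumps(json.loads(doc), indent=0)  # newline after every element, no extra indent
assert json.loads(with_newlines) == json.loads(doc)    # identical data, friendlier line lengths
```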
<clarkb> fungi: should I push a change to bump the limit to 20mb, or do you want to do that? And I guess we can stack it on top of the reload change and land both together to see if that works as expected  [20:20]
<fungi> https://github.com/pypi/warehouse/issues/11919  [20:24]
<fungi> sure, i can push that  [20:25]
<opendevreview> Jeremy Stanley proposed opendev/system-config master: Increase PyPI substitute line length limit to 20m  https://review.opendev.org/c/opendev/system-config/+/850688  [20:30]
<fungi> clarkb: ^  [20:30]
<clarkb> fungi: note the root index is 9.3 MB, which is the largest I've found  [20:32]
<clarkb> and we substitute it too as far as I can tell  [20:32]
<fungi> oh, but i guess pip isn't actually hitting it  [20:33]
<clarkb> ya, it may not be  [20:33]
<fungi> or else we'd see errors sooner (and errors in the apache log)  [20:33]
<clarkb> or we don't have any errors because it doesn't contain strings to substitute (I don't know if apache will always fail or only if it needs to make substitutions)  [20:33]
<clarkb> considering we're not seeing errors with the index, I think those two changes are less urgent and I don't need to approve them now?  [20:34]
<fungi> agreed  [20:34]
<fungi> clarkb: gthiemonge: yoctozepto: kevko: so in summary, it looks like the affected jobs probably used pip 22.2 (released earlier today), which defaults to requesting pypi's json simple api rather than its traditional html simple api, and the json responses are all on one line, which for some projects exceeds the default limit apache mod_substitute will process. we think we have a working solution in place now  [20:48]
<fungi> separately, i've submitted a feature request for warehouse (the software behind the pypi.org site) to request they insert an occasional linebreak so as to be nicer to downstream proxies: https://github.com/pypi/warehouse/issues/11919  [20:49]
<clarkb> thank you for running that down. A weird one for sure  [20:49]
<fungi> system-config-run-mirror-x86 failed on that last change  [21:02]
<fungi> i'll take a closer look after dinner  [21:02]
*** dasm|ruck is now known as dasm|off  [21:04]
*** dviroel is now known as dviroel|out  [21:07]
<TheJulia> so a few times today I've seen foundation/list emails get flagged as either spam or phishing attempts... Did something change?  [21:07]
<clarkb> TheJulia: not that I know of. Same server and same ips are hosting the lists  [21:09]
<clarkb> maybe your mail system is getting crankier about dkim/dmarc and someone is sending without signed messages  [21:09]
<clarkb> on the openstack-discuss list we pass the email through without modifying things that are typically signed, so that the signatures continue to verify. But we left that up to each list to configure, as it changes the behavior of the email slightly iirc  [21:10]
<TheJulia> it is gmail, and it had a neutral spf, i.e. not explicitly flagged as permitted, that being lists.openstack.org issuing the helo statement to google's mx  [21:10]
<clarkb> I suppose one thing that did change is fungi moved the location of that list under openinfra.dev. My gmail copy seems to be complaining that it cannot verify that domain sent the email  [21:11]
<clarkb> whereas before the list was under openstack.org  [21:12]
<fungi> TheJulia: one possible trigger i've seen keep cropping up is that some list owners are using e-mail addresses which automatically classify messages from various sources, and so the moderation queue notifications containing samples of all the spam those list addresses receive from non-subscribers (which are held) get added to the classification pool for the listserv  [21:13]
<clarkb> heh, if you open the little ? in the gmail interface they explicitly say this is a common thing for mailing lists  [21:13]
<clarkb> but ya, they specifically want spf and dkim  [21:13]
<clarkb> and make a note that mailing lists often don't do this  [21:14]
<TheJulia> well, a nice remarkable increase which will impact trust/use of the mailing lists  [21:16]
<clarkb> right, so one way to improve that would be for ildikov to dkim sign email and then have the list pass it through as is. Another is to add spf records to openinfra.dev  [21:17]
<clarkb> though getting the spf records wrong may make the mailing list problem worse?  [21:18]
<TheJulia> openinfra.dev should have spf as a minimum bar  [21:18]
<clarkb> TheJulia: yes, I can bring that up with the people who manage dns and mail for that domain (it isn't us)  [21:18]
<TheJulia> honestly... I've seen whole orgs just trash-can anything not positively listed with spf  [21:18]
<TheJulia> clarkb: thanks  [21:18]
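
An illustrative (not authoritative) example of the minimum being asked for here, as a DNS TXT record; the actual policy and which hosts to authorize is up to whoever manages the openinfra.dev zone:

```
; hypothetical SPF record: authorize the list server's address records, softfail everything else
openinfra.dev.  3600  IN  TXT  "v=spf1 a:lists.openinfra.dev ~all"
```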
<johnsom> Anyone else having slow "git review" times? It sits for a while, then prints the "Creating a git remote called" message, then sits for a longer while, and eventually the patch posts.  [21:19]
<clarkb> johnsom: it's doing setup if it is creating remotes for you  [21:20]
<clarkb> when it does that it checks a number of things to make sure it sets up a working repo.  [21:20]
<johnsom> Yeah, that is a "normal" message for my workflow  [21:20]
<clarkb> it should only do that the first time you run it in a repo and not every time. Unless you are deleting remotes or something  [21:20]
<johnsom> It usually just takes seconds to post a patch, but today it is over a minute it seems  [21:21]
<clarkb> basically, if it is doing setup that is expected to take longer. If you are getting setup done each time, you'll need to look into why the git remote state isn't persisting in your repo, which causes it to happen every time  [21:21]
<johnsom> I have seen this with storyboard issues before, but this repo is on launchpad  [21:21]
<clarkb> git review talks to gerrit, not launchpad or storyboard  [21:21]
<clarkb> but also that looks like it is configuring the repo for use, which is expected to take longer. Other things that can take time are performing the rebase test, potentially  [21:22]
<clarkb> but I would look into why it is creating a git remote for you first  [21:22]
<johnsom> There is some link in this process that calls out to storyboard. We have had that issue before. The part where it adds the comment that a patch has been submitted.  [21:22]
<johnsom> That message is 100% normal, it's a fresh clone.  [21:23]
<clarkb> I didn't realize that was inline with pushing though  [21:23]
<johnsom> But it is usually much faster.  [21:23]
<clarkb> johnsom: ok, I guess I'm confused why you called it out. The implication was that it was happening every time you push, and I was saying that if that happens every time you push then that is why it is slow :)  [21:24]
<clarkb> if you push a second time does it tell you it is configuring a remote? If not, is it still slow?  [21:24]
<clarkb> (it will always be slower when configuring the remote)  [21:24]
<johnsom> Just background on the point at which git review stalls, after that message  [21:24]
<clarkb> git review does accept verbosity flags to help show you where it is stalling. That would be the next thing I would look at  [21:25]
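
A quick sketch of what that looks like in practice (the gerrit remote name and branch shown are git-review's usual defaults, included only for illustration):

```shell
git review -v            # prints each underlying git/ssh command as it runs, showing where it stalls
git push --dry-run gerrit HEAD:refs/for/master   # roughly the kind of push git-review performs
```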
<clarkb> As far as dmarc/dkim go, we'll want to update the board mailing list to pass through most of the message untouched so that signed emails validate on the other end once received. We can do that for the board list (it is what we did for the openstack-discuss list and others), just want to get that done before sending signed emails  [21:27]
<fungi> i have a heavy concern for all dmarc enforcement (both spf and dkim) insofar as they effectively partition users of traditional e-mail from those who have hitched their wagons to bulk freemail providers. i disagree with the assertion that "lists.openinfra.dev should have spf as a minimum bar" (whatever else openinfra.dev does), though am open to adding it if there's sufficient interest from its constituency  [21:27]
<clarkb> fungi: fwiw, I think the issue was openinfra.dev, not lists.openinfra.dev  [21:27]
<clarkb> we have never had spf records for the list servers as far as I can tell  [21:28]
<fungi> well, lists.openinfra.dev is likely to break dmarc bits for anyone posting messages there  [21:28]
<clarkb> basically ildikov sent an email, and whether or not it was sent directly to TheJulia or via the mail server, google would've complained due to lack of info  [21:28]
<clarkb> fungi: yes, we have to do the same pass-through that was applied to openstack-discuss, I bet  [21:29]
<johnsom> clarkb: Ok, so it's doing the push --dry-run to review.opendev.org, which resolves to 2604:e100:1:0:f816:3eff:fe52:22de, which is not responding  [21:29]
<fungi> ahh, yes, openinfra.dev does not publish an spf record the way openstack.org does. that may have been overlooked by the folks who set up the domain  [21:30]
<johnsom> clarkb: Ok, it's on my end. IPv6 isn't working. Comcast did "maintenance" last night here, so probably broke something.  [21:30]
<clarkb> johnsom: ok, so likely it is waiting for ipv6 to fail, then trying ipv4 and proceeding  [21:30]
<johnsom> Yep  [21:31]
<TheJulia> fungi: yeah, that would do it. I've got a direct email from allison which also got flagged. :(  [21:31]
<clarkb> fungi: re your concern, it is very interesting to compare my fastmail and gmail versions of the same email  [21:31]
<clarkb> fastmail is basically "the content of the message doesn't look like spam so whatever" and gmail is all "you didn't play by the rules so we'll give you a giant orange banner!"  [21:32]
<TheJulia> yeah  [21:33]
<clarkb> fwiw I also have concern because I've recently received emails from reputable companies that passed spf, dkim, and dmarc that were almost certainly spam/phishing (maybe via a compromised email server?), and I expect many put far too much weight on gmail giving the all clear without validating the contents of the email  [21:33]
<clarkb> I'm still waiting to hear back from said company on whether or not they got owned  [21:33]
<fungi> yes, all of dmarc is basically a coalition of bulk freemail providers to make the messages between them have a higher delivery confidence while reducing their own workloads  [21:33]
<fungi> it's nothing to do with actually reinforcing the legitimacy of deliveries, and entirely about entrenching market share for their respective businesses  [21:34]
<fungi> making it harder or nearly impossible for small/individual senders to comply with their increasingly complicated "standards" is all part of the design  [21:35]
<JayF> Email deliverability used to be a lot more friendly for senders. You used to be able to get feedback loops from major ESPs, where they'd let you know when a person reported you as spam (so you could remove the sender from your list and know that someone abused your service)  [21:35]
<TheJulia> JayF: Some days I miss those days  [21:42]
<fungi> i still live in those days, i just don't consider gmail to be e-mail, and prioritize communication with people who aren't trapped in that dimension  [21:43]
<JayF> I mean, I can tell you with certainty that all those ESPs dropped their feedback loop programs literally a decade ago  [21:43]
<JayF> including AOL/Yahoo  [21:44]
<JayF> and microsoft never did one  [21:44]
<TheJulia> I gave up running my own mail server because of the likes of google sinking my emails into /dev/null  [21:44]
<fungi> if you really want e-mail, then you won't be using gmail. if you're using gmail, you have actively chosen to communicate only with people who also buy into that paradigm  [21:44]
* JayF did 2.5 years at an "email marketing" firm for his first linux job  [21:44]
<TheJulia> JayF: fwiw, I was referring to simpler times back when we first met, not that job... omg not that job  [21:45]
<TheJulia> simpler technology :)  [21:45]
<JayF> TheJulia: wait, did you know me when I worked at iContact?  [21:45]
<TheJulia> JayF: yes!  [21:45]
<JayF> TheJulia: I feel like those two timelines never converged in my head  [21:45]
<JayF> oh yeah, of course, because I left there when I left NC  [21:45]
* fungi is still ostensibly in nc  [21:46]
* TheJulia gets out the "it's a small world" music  [21:46]
<JayF> TheJulia: you wanna feel old and young simultaneously? We're getting within like, what, 5ish years of having known each other for half our lives?  [21:46]
<TheJulia> oh my  [21:46]
<fungi> returning to the earlier topic, "ERROR! The requested handler 'reload apache2' was not found in either the main handlers list nor in the listening handlers list"  [21:49]
<fungi> did i miss something in 850688 to make that accessible to the task?  [21:50]
<clarkb> fungi: it's called 'mailman reload apache2' in your handler update  [21:51]
<clarkb> just copy paste fail  [21:51]
<fungi> d'oh, yep!  [21:51]
<fungi> thanks, it's clearly getting late here, time for a sake  [21:52]
<fungi> i missed that it also failed on 850686  [21:52]
<opendevreview> Jeremy Stanley proposed opendev/system-config master: Notify apache2 reload on updates to mirror vhost  https://review.opendev.org/c/opendev/system-config/+/850686  [21:53]
<opendevreview> Jeremy Stanley proposed opendev/system-config master: Increase PyPI substitute line length limit to 20m  https://review.opendev.org/c/opendev/system-config/+/850688  [21:53]
* fungi sighs loudly at nothing in particular  [21:53]
<clarkb> I feel like I said I would review something for someone but then didn't. But I reviewed the ca cert stack from ianw and some zuul changes for zuul folks. Anything important I'm forgetting?  [23:10]
<clarkb> fungi: ianw: also, if you have time for https://review.opendev.org/c/opendev/system-config/+/850580, that came out of our meeting this week. The idea is we can check for new caches in our gerrit upgrade job  [23:10]
<fungi> thanks, hopefully i can take a look in a bit. i also meant to look at the ca cert stack  [23:12]
<clarkb> No real rush behind that. More just wanting to clear it off my todo list  [23:15]
<clarkb> oh, the grafana stack, I had it in my todo list a bit further up  [23:16]
<fungi> getting things off of everyone's respective todo lists is still a priority for me  [23:21]
<ianw> clarkb: thanks ... it still has a -1 because of the registry issues, but https://review.opendev.org/q/topic:console-version-tags should get zuul to remove the console log streaming files. it was a bit harder than i first thought, but i think it's a useful addition  [23:21]
<clarkb> ya, that is probably a better fix than trying to ensure we run a cleanup system everywhere we need it  [23:21]
<ianw> yeah, although it's quick for us to add, in general i think it's better than having to have zuul explain why you need to do that and leaving it up to the admins to figure out  [23:23]
<clarkb> ok, I've got a todo list for tomorrow. I'm going to do my best to dig into those reviews  [23:23]
<fungi> i harbor no illusions my todo list will shrink tomorrow, but i do intend to try and thumb my nose at the universe anyhoo  [23:35]
<clarkb> well, this way I won't forget what I need to do. Whether or not the list shrinks is another story :)  [23:38]
<ianw> symmetric_difference is a cool one  [23:47]
<fungi> i'll consider that for my next band name  [23:51]
<fungi> with the underscore, of course  [23:51]
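
For the record, the set operation ianw is referring to; a minimal Python sketch in the spirit of the gerrit cache check mentioned above (the cache names here are made up):

```python
old_caches = {"accounts", "changes", "diff_summary"}
new_caches = {"accounts", "changes", "comment_context"}
# names present in exactly one of the two sets, i.e. caches added or removed across the upgrade
print(sorted(old_caches.symmetric_difference(new_caches)))
# ['comment_context', 'diff_summary']
```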
<ianw> oh, https://review.opendev.org/c/opendev/system-config/+/850123 is another minor one, that sets the timestamps of the stored production ansible log files to their start time, so when you look it roughly lines up with the start time of the zuul job  [23:51]
