Saturday, 2020-08-15

ianwin a TIL, the way that docker sets the MTU means that on our linaro cloud, https connections to fastly don't work.  i have to set it down to 1400 00:26
ianwnot all connections.  not http connections.  just https connections, to fastly-based things, like, pypi, pythonhosted etc.00:27
ianw(connections *in* the container, outside works fine)00:27
ianwi'll have to put in something proper to zuul-jobs or something that works around it00:29
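ianw's workaround (dropping the container MTU to 1400) can be expressed as a docker daemon setting. A minimal sketch, assuming the default bridge network and a systemd host; the file path and restart step are standard docker conventions, not details from the log:

```shell
# Sketch: lower the MTU docker assigns to the default bridge so packets
# from containers fit the cloud's actual path MTU. 1400 is the value
# from the log; the right number depends on the underlay network.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "mtu": 1400
}
EOF
sudo systemctl restart docker   # restarts all running containers
```

Note that the daemon-level `mtu` only covers the default bridge; user-defined networks would need `docker network create --opt com.docker.network.driver.mtu=1400 ...` instead.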
ianwclarkb/fungi/corvus: i'm happy with our POC for manylinux cryptography wheels -- i've updated the status @
fungiianw: brief reflection on that problem suggests it's probably a pmtud blackhole02:05
fungii'm guessing other connections are negotiating a viable mss but connections to the fastly cdn endpoint may be dropping/eliding the ntf response or similar02:07
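fungi's pmtud-blackhole theory also fits the https-only symptom: a TLS handshake pushes full-size certificate packets immediately, while small http responses may never hit the path limit. The usual host-side mitigation is MSS clamping; a hedged sketch of the arithmetic, with the iptables rule shown only for illustration (needs root, and is not something from the log):

```shell
# For a path MTU of 1400, the largest safe IPv4 TCP payload is the MTU
# minus 20 bytes of IP header and 20 bytes of TCP header.
MTU=1400
MSS=$((MTU - 40))
echo "$MSS"   # 1360

# Illustrative clamp on forwarded SYNs (root required):
# iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
#          -j TCPMSS --set-mss "$MSS"
```

With the clamp in place, both endpoints negotiate a segment size that survives the blackhole even when the ICMP need-to-frag responses are being dropped.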
openstackgerritHirotaka Wakabayashi proposed openstack/diskimage-builder master: Adds Fedora 32 support
*** seongsoocho has quit IRC02:55
*** mnasiadka has quit IRC02:55
*** mnasiadka has joined #opendev02:59
*** Open10K8S has quit IRC03:03
*** mnasiadka has quit IRC03:06
ianwfungi: yeah ... it's also in a container, in a vm, behind god knows what on the other side of the cloud.  it's a wonder packets get through at all really03:26
*** mnasiadka has joined #opendev03:28
*** Open10K8S has joined #opendev03:28
*** seongsoocho has joined #opendev03:34
*** cloudnull has quit IRC04:58
*** cloudnull has joined #opendev04:59
jrosserwould we expect to find python-glanceclient 3.1.2 in ?07:23
*** moppy has quit IRC08:01
*** moppy has joined #opendev08:01
*** DSpider has joined #opendev08:03
fungijrosser: normally, yes, if that version is in the upper-constraints.txt in openstack/requirements11:57
* fungi hasn't looked yet11:57
jrosserI think that’s in u-c for ussuri12:02
*** tosky has joined #opendev12:21
fungijrosser: you're right
fungiso next to check the periodic build and see what's getting selected12:41
fungiadded by which merged 16:42 utc on wednesday12:43
jrosserI had a whole bunch of jobs blow up similarly and it was surprising that it didn’t fall back to grabbing missing things from pypi12:43
fungiso one of the wheel mirror updates in the last few days should have added it12:43
fungioh, you know what, we publish wheels for that to pypi so we wouldn't have built a separate one for our binary wheel cache12:45
fungi has python_glanceclient-3.1.2-py3-none-any.whl12:46
fungijrosser: have a link to the failure you saw? the job should have fetched that wheel from our caching pypi proxy, we don't (any more, and since some months) cache wheels for things which already have usable wheels on pypi12:48
fungiwe don't separately cache those wheels, i mean12:48
fungii should have remembered that sooner, but only just starting on my morning coffee12:49
jrosserfungi: heres a log
jrosserand as far as i can see thats looked in both and and wasnt able to find the right version in either12:54
jrosseri wonder if there was just some CDN issue with pypi at the time so it didnt find it via the caching proxy....12:58
fungiyeah, it looks like it should have grabbed it from /pypi there13:29
fungidefinitely looks like a stale pypi simple api index page13:33
fungiyou can see in the log it outputs the versions it saw, 3.1.1 and 3.2.0 are there but 3.1.2 and 3.2.1 are missing13:35
fungiboth 3.1.2 and 3.2.1 were published to pypi two days prior to when that job ran13:36
fungiyet were missing from the proxied pypi index13:36
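The stale-index diagnosis above can be checked mechanically by extracting the version list a `/simple/` page advertises, which is the same data pip echoes in its error. A sketch with a fabricated page mirroring what the failing job saw (3.1.1 and 3.2.0 present, 3.1.2 and 3.2.1 absent); the `list_versions` helper is ours, not part of pip:

```shell
# list_versions: pull version numbers for python-glanceclient out of a
# PyPI "simple" index page supplied on stdin.
list_versions() {
    grep -o 'python_glanceclient-[0-9][0-9.]*[0-9]' \
        | sed 's/^python_glanceclient-//' | sort -uV
}

# Fabricated stale page matching the versions the failing job reported:
stale_index='<a href="python_glanceclient-3.1.1-py3-none-any.whl">...</a>
<a href="python_glanceclient-3.2.0-py3-none-any.whl">...</a>'

printf '%s\n' "$stale_index" | list_versions
# 3.1.1
# 3.2.0
```

Comparing this output for the page fetched from pypi.org against the same path on the caching proxy would confirm whether the proxy's cached copy of the index is the stale one.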
*** auristor has quit IRC14:32
*** auristor has joined #opendev14:57
*** qchris has quit IRC14:57
*** qchris has joined #opendev15:09
*** iurygregory has quit IRC15:36
*** tosky has quit IRC17:29
*** iurygregory has joined #opendev18:14
mnaserinfra-root: could someone check if there’s an extra ipv6 address on mirror in VEXXHOST18:18
mnaser(just like the issue we had a few days ago..)18:25
fricklermnaser: confirmed:
fricklerwhat did you do to fix that, just drop the 2001:db8 addrs?18:29
fricklerwould it help to run a tcpdump to see where the RAs are coming from?18:29
mnaserfrickler: can you run a tcpdump in the background and remove those addresses?18:32
mnaserit’s a neutron bug :/18:32
mnasercc fungi logan-18:33
fricklermnaser: addresses removed, tcpdump so far only shows RAs for the correct subnet. is that a bug in neutron testing instances or in your deployment of neutron?18:36
openstackgerritMatthew Thode proposed openstack/diskimage-builder master: update gentoo to allow building arm64 images
frickleranyway I'll go back to watching snooker, I'll leave the tcpdump running in a tmux in case it helps to identify some correlation when this happens again18:45
mnaserfrickler: when neutron tests run they send out an RA that eventually gets picked up by openstack, so, a bug in openstack which lets that slip out. Thanks for keeping the tcpdump up19:34
fungigood idea with the packet capture, we tried something similar when this cropped up in limestone, but then gave up after months of not reproducing the issue19:55
fungii wonder if we can work out from the eui64 bits of the linklocal for the rogue gateway which node it originated from?19:56
fungialso the ttl on the route may tell us when it appeared19:56
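fungi's EUI-64 idea is mechanical enough to script: a modified EUI-64 interface identifier is just the MAC with `ff:fe` inserted in the middle and the universal/local bit of the first byte flipped, so the rogue gateway's link-local address gives back its MAC (and an `fa:16:3e` prefix would even point at a neutron-generated port). A bash sketch handling only the uncompressed `fe80::xxxx:xxff:fexx:xxxx` form; the addresses shown are made up, not from the incident:

```shell
#!/usr/bin/env bash
# eui64_to_mac: recover the MAC embedded in a modified-EUI-64 link-local
# address, e.g. to identify which node originated rogue RAs.
eui64_to_mac() {
    local iid=${1#fe80::} g1 g2 g3 g4 b1
    IFS=: read -r g1 g2 g3 g4 <<< "$iid"
    # pad each 16-bit group back out to 4 hex digits
    g1=$(printf '%04s' "$g1" | tr ' ' 0)
    g2=$(printf '%04s' "$g2" | tr ' ' 0)
    g3=$(printf '%04s' "$g3" | tr ' ' 0)
    g4=$(printf '%04s' "$g4" | tr ' ' 0)
    # EUI-64 inserted ff:fe in the middle (bytes 4-5, dropped here) and
    # flipped the universal/local bit, so flip it back
    b1=$(printf '%02x' $(( 0x${g1:0:2} ^ 0x02 )))
    printf '%s:%s:%s:%s:%s:%s\n' \
        "$b1" "${g1:2:2}" "${g2:0:2}" "${g3:2:2}" "${g4:0:2}" "${g4:2:2}"
}

eui64_to_mac fe80::f816:3eff:fe01:2345   # fa:16:3e:01:23:45
```

The resulting MAC could then be matched against port MACs in the cloud's inventory; the route's expiry, as noted above, narrows down when the RA was last heard.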
clarkbI wonder if neutron expects security groups to guard against this20:18
clarkb(it shouldnt but) we open ours up and manage rules on the hosts instead20:18
clarkbreally even within a group you'd not want rogue RAs just like you dont want rogue dhcp20:18
fungii don't think they rely on security groups explicitly for guarding against this, but it's certainly possible that's why nobody notices20:19
*** DSpider has quit IRC22:48

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at!