Tuesday, 2021-11-09

09:51 <acoles> reid_g: there is a (brief) doc section on memcache here https://docs.openstack.org/swift/latest/deployment_guide.html#memcached-considerations which I point out mainly for the notes re sharding and max memcache entry sizes
13:50 <reid_g> I think there is something fishy going on with this memcached server... I should have 7200 max connections, but if I increase the memcached connection limit to 16k... 32k... they are all being used. The other servers in my pool stay around 7k. Any ideas?
13:57 <DHE> enumerate them? lsof -n -p $PID
13:58 <reid_g> Right now I have it set to 8192 and it is showing: lsof -n -p 672679 | grep -c 11211
13:58 <reid_g> 7986
14:00 <reid_g> If I change it to 32768 and wait about 30 sec I get messages in the journal: 'Too many open connections'
14:00 <reid_g> lsof -n -p 676945 | grep -c 11211
14:00 <reid_g> 32562
14:22 <opendevreview> Alistair Coles proposed openstack/swift master: Make cmp_policy_info agree with the API behaviour  https://review.opendev.org/c/openstack/swift/+/816731
14:22 <opendevreview> Alistair Coles proposed openstack/swift master: Improve storage policy index reconciliation unit tests  https://review.opendev.org/c/openstack/swift/+/816892
14:22 <opendevreview> Alistair Coles proposed openstack/swift master: Re-write reconciler.cmp_policy_info()  https://review.opendev.org/c/openstack/swift/+/816893
14:23 <acoles> clayg: ^^
14:23 <clayg> 🤩
14:24 <opendevreview> Alistair Coles proposed openstack/swift master: Re-write reconciler.cmp_policy_info()  https://review.opendev.org/c/openstack/swift/+/816893
14:35 <DHE> what I meant was to check... where are the actual connections from? is some host connected more often than others? any foreign connections? (!!)
14:50 <reid_g> ah. they are all connections from other swift-proxy servers. did a grep -v -- '->10.40.100' to show anything that shouldn't be there
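
(A minimal sketch, not from the conversation, of the per-peer breakdown DHE asks for and reid_g approximates with lsof and grep; the PID and port below are placeholders to adjust for your own hosts.)

    #!/usr/bin/env python3
    # Sketch only: tally ESTABLISHED connections to a memcached process by
    # remote peer, i.e. the per-host view behind the lsof/grep one-liners above.
    # Requires the lsof binary and permission to inspect the process.
    import collections
    import re
    import subprocess

    MEMCACHED_PID = "672679"   # placeholder PID
    MEMCACHED_PORT = "11211"

    out = subprocess.run(
        ["lsof", "-n", "-P", "-p", MEMCACHED_PID],
        capture_output=True, text=True, check=True,
    ).stdout

    peers = collections.Counter()
    for line in out.splitlines():
        # TCP rows look like: "... TCP 10.40.100.1:11211->10.40.100.7:53210 (ESTABLISHED)"
        match = re.search(r":" + MEMCACHED_PORT + r"->([\d.]+):\d+ \(ESTABLISHED\)", line)
        if match:
            peers[match.group(1)] += 1

    for host, count in peers.most_common():
        print(f"{count:6d}  {host}")
    print(f"{sum(peers.values()):6d}  total")
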
15:59 <clayg> @acoles "AssertionError: local policy did not change to match remote for replication row scenario no_row" - maybe the tests are still flakey?
16:00 <acoles> just looking into those failures
16:27 <acoles> clayg: it seems the current implementation favours an older put over a newer delete. IDK how to think about that. These two tests make the exact same assertions: test.unit.container.test_replicator.TestReplicatorSync.test_sync_local_create_policy_over_newer_remote_create and test.unit.container.test_replicator.TestReplicatorSync.test_sync_local_create_policy_over_newer_remote_delete - is that reasonable?
16:29 <clayg> so we create r1 with sp0 and r2 with sp1 (at this point r1 should win, r2 was probably a handoff - that's "newer_remote_create") - then we delete r2 (I assume r1 rejected the delete) - so sp1 still wins? Is that "newer_remote_delete"?
16:30 <clayg> I think maybe we nearly always prefer the un-deleted spi?
16:53 <acoles> no, sp0 wins (the older put)
16:54 <acoles> and we could also have r1 with sp0, then r2 with sp1, then delete *r1*...and sp1 wins
16:55 <acoles> as you say, undeleted always wins
16:57 <acoles> which I think means that before deleting the container the reconciler would migrate object rows from sp1 to sp0 (earliest put wins), but after deleting will migrate from sp0 to sp1 (undeleted wins)
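
(A toy sketch of the precedence rule acoles and clayg are describing, not the actual reconciler.cmp_policy_info() implementation: the undeleted policy wins, and when both sides are alive, or both deleted, the earlier PUT wins. The dict fields are hypothetical simplifications of the container info.)

    # Sketch of the precedence rule discussed above, NOT swift's real code.
    def preferred_policy(local, remote):
        def is_deleted(info):
            return info['delete_timestamp'] > info['put_timestamp']

        if is_deleted(local) != is_deleted(remote):
            # the undeleted side wins, regardless of which PUT is older
            winner = remote if is_deleted(local) else local
        else:
            # both alive, or both deleted: earliest PUT wins
            winner = min(local, remote, key=lambda info: info['put_timestamp'])
        return winner['policy_index']

    # an older sp0 PUT beats a newer sp1 PUT, but a live sp1 beats a deleted sp0
    print(preferred_policy(
        {'policy_index': 0, 'put_timestamp': 1, 'delete_timestamp': 5},
        {'policy_index': 1, 'put_timestamp': 2, 'delete_timestamp': 0}))  # -> 1
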
18:24 <opendevreview> Alistair Coles proposed openstack/swift master: WIP container-server: set shard ranges in memcache  https://review.opendev.org/c/openstack/swift/+/817294
19:43 <reid_g> it looks like we resolved our memcached problem (too many open files) by moving it to another host. Not exactly sure why... the config is exactly the same as the old one
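
(One host-level setting that can differ even when the memcached configs match is the process file-descriptor limit, which is a plausible culprit for "too many open files"; a small diagnostic sketch with a placeholder PID, not something run in the log:)

    # Compare a memcached process's open descriptors against its fd limit.
    # Reading /proc/<pid>/fd usually needs root or the memcached user.
    import os

    pid = "676945"  # placeholder memcached PID
    open_fds = len(os.listdir(f"/proc/{pid}/fd"))
    with open(f"/proc/{pid}/limits") as f:
        nofile = next(line for line in f if line.startswith("Max open files"))
    print(f"open fds: {open_fds}")
    print(nofile.rstrip())
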
19:52 <opendevreview> Clay Gerrard proposed openstack/swift master: Ignoring status_changed_at is one way to fix it  https://review.opendev.org/c/openstack/swift/+/817302
20:52 <opendevreview> Tim Burke proposed openstack/swift master: memcache: Prevent possible pool exhaustion  https://review.opendev.org/c/openstack/swift/+/817307
21:58 <reid_g> What have you used to benchmark swift?
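
(Not an answer given in the log: purpose-built tools such as swift-bench or ssbench are the usual choice, but a rough timing loop with python-swiftclient can give a first number; the auth endpoint and credentials below are placeholders.)

    # Rough PUT-timing sketch using python-swiftclient; adjust auth details.
    import time
    from swiftclient.client import Connection

    conn = Connection(authurl="http://saio:8080/auth/v1.0",  # placeholder endpoint
                      user="test:tester", key="testing", auth_version="1")
    conn.put_container("bench")
    payload = b"x" * (1024 * 1024)  # 1 MiB object body

    start = time.time()
    for i in range(100):
        conn.put_object("bench", f"obj-{i}", contents=payload)
    elapsed = time.time() - start
    print(f"100 x 1 MiB PUTs in {elapsed:.1f}s ({100 / elapsed:.1f} req/s)")
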
23:25 <opendevreview> Tim Burke proposed openstack/swift master: Ensure close socket for memcached if got timeout  https://review.opendev.org/c/openstack/swift/+/338819
