Monday, 2021-11-08

*** acoles_ is now known as acoles04:05
*** erlon_ is now known as erlon04:07
opendevreviewMatthew Oliver proposed openstack/swift master: Tracing: Remove all but the OpenTracing code  https://review.opendev.org/c/openstack/swift/+/81611904:30
reid_gHello, Trying to get an understanding of how swift uses memcached. Is it just for caching authtokens for a bit so it doesn't have to check keystone for every request?16:03
reid_gCurrently we have a handful of memcache servers on our PACO nodes that we are pointing the rest of the PACO nodes to. Trying to decide if we need to add more. With 90 nodes we are hitting 7-8k connections to memcached being hosted on 3 servers.16:06
DHEit does cache more, most notably I've noticed it caches the HEAD information from accounts/containers so it can confirm they exist, any ACLs, etc.16:23
claygreid_g: when we scaled out to hundreds of proxy servers we ended up doing a McRouter setup across the fleet -> https://github.com/facebook/mcrouter/wiki17:04
clayg... basically we bypass all the swift memcache ring stuff and each proxy points to a local mcrouter config 17:04
DHEso you're reducing open connections by making everything through 1 local MC routing proxy vs N local swift proxy processes17:33
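A minimal sketch of what such a local mcrouter config might look like (hostnames, ports, and the pool name are placeholders, not from the discussion); each proxy's swift memcache middleware would then point only at the local mcrouter listener:

```json
{
  "pools": {
    "cluster": {
      "servers": [
        "memcache01:11211",
        "memcache02:11211",
        "memcache03:11211"
      ]
    }
  },
  "route": "PoolRoute|cluster"
}
```

With this in place, every swift proxy process on the node shares one set of upstream connections through mcrouter instead of opening its own to every memcached.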
reid_gI'm not sure we need that scale yet. Before mcrouter, you were using a memcached per proxy?18:36
timburke_yup; each proxy would run memcached, and everyone would point at everyone else18:57
reid_gSo your use=egg:swift#memcache memcache_servers= would contain all proxies in the cluster?18:59
timburke_yup. if you've got a multi-region deployment, you might want to have separate pools per region19:00
timburke_i think we'd put it all in a separate /etc/swift/memcache.conf file, but the idea's the same19:02
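A sketch of that layout (hostnames are placeholders): every proxy node runs memcached locally, and every node's `/etc/swift/memcache.conf` lists the full set, so any proxy can hit any node's cache.

```ini
# /etc/swift/memcache.conf on every proxy node
[memcache]
memcache_servers = proxy01:11211,proxy02:11211,proxy03:11211
```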
reid_gIs it ideal to have it on every server or just a subset? Our cluster with 90 proxies is currently just running 3 memcached19:16
reid_gIs there a doc that describes what swift uses memcached for?19:18
reid_gI bring it up because one of them is broken or being spammed and running out of file handles.19:20
timburke_more servers means more load-spreading (though you can definitely still get hot-spotting if there's a particularly active account/container)19:21
timburke_that's mostly about request load, though; with the connection re-use, just adding more servers is unlikely to help with the file handle count19:22
timburke_you might split it up as a few different pools -- say, a pool per rack. it's a bit more overhead since container info (for instance) is now duplicated everywhere, but it'll limit the number of connections19:24
timburke_auth sometimes becomes trickier with multiple memcache pools, but i'm not sure it's a problem for keystone (tempauth and derivatives run into trouble because the only place the token is stored is in memcache, so you can't share tokens between pools)19:26
timburke_i don't know that we've got it written down explicitly somewhere, but the main uses are: account info, container info, shard range caching (if you've sharded containers), auth token caching (though i don't remember *exactly* how keystonemiddleware uses it), and s3api secrets retrieved from keystone19:28
reid_gSo to create a pool you would have 1 set of proxies with use=egg:swift#memcache memcache_servers="set of ips in pool A" and another set of proxies with use=egg:swift#memcache memcache_servers="set if ips in pool B" and so on?19:31
reid_gAdding more entries to a single pool won't help with connections because every proxy server would connect to every server in use=egg:swift#memcache memcache_servers= ?19:32
timburke_exactly. but splitting up the proxies to only point at one pool or another should reduce the connection count19:33
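A sketch of the pooled layout described above (IPs are placeholders): each group of proxies points its cache middleware at only its own pool's memcached servers.

```ini
# proxy-server.conf on proxies assigned to pool A
[filter:cache]
use = egg:swift#memcache
memcache_servers = 10.0.1.10:11211,10.0.1.11:11211,10.0.1.12:11211

# proxy-server.conf on proxies assigned to pool B
[filter:cache]
use = egg:swift#memcache
memcache_servers = 10.0.2.10:11211,10.0.2.11:11211,10.0.2.12:11211
```

Each memcached now only sees connections from the proxies in its own pool, at the cost of the same account/container info being cached once per pool.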
reid_gInteresting19:35
reid_gAlso, memcache_max_connections default is 2. On a 40 core system with workers = auto, that is 80 connections to each memcached per server. 90 servers =  7200 connections per memcached server.20:10
reid_gWhy would a worker need a max of 2 connections per memcache server?20:11
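The connection arithmetic in the message above works out as follows (a sketch assuming the default `memcache_max_connections = 2` and `workers = auto` on a 40-core box, as described):

```python
workers_per_proxy = 40   # workers = auto on a 40-core system
conns_per_worker = 2     # memcache_max_connections default
proxy_nodes = 90

# Each worker keeps up to 2 connections to EACH memcached server,
# so one proxy node holds 40 * 2 = 80 connections per memcached.
conns_per_node = workers_per_proxy * conns_per_worker

# With 90 proxy nodes all pointing at the same pool, each memcached
# server sees 80 * 90 = 7200 connections.
conns_per_memcached = conns_per_node * proxy_nodes

print(conns_per_node, conns_per_memcached)
```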
claygtimburke_: it's hard to make multiple pools work if you're putting auth tokens in memcache - i think most swift deployments use a single pool of all proxies20:21
clayg... and that works well up to a hundred or more proxy servers20:21

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!