21:05:14 #startmeeting swift
21:05:14 Meeting started Wed Jan 3 21:05:14 2024 UTC and is due to finish in 60 minutes. The chair is acoles. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:05:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:05:14 The meeting name has been set to 'swift'
21:06:16 I did volunteer to chair in Tim's absence. I've missed a couple of previous meetings, but I can at least offer some updates on stuff I've been involved with recently
21:06:48 oh! sorry -- lost track of time
21:06:53 the agenda is here https://wiki.openstack.org/wiki/Meetings/Swift
21:07:03 aha! welcome back timburke
21:07:28 i'll let you do your updates, though ;-)
21:08:23 timburke: can you give us an update on eventlet? I'll let you catch your breath though and start with s3api partNumber support
21:08:39 👍
21:08:42 #topic s3api partNumber
21:09:21 we've been working on this for a while but feel like it's very close to being done
21:09:39 #link https://review.opendev.org/c/openstack/swift/+/894580
21:10:26 recap: the partNumber API feature allows clients to GET or HEAD parts of a multipart upload
21:11:26 swift implements multipart uploads (MPUs) using SLO, so first we need to add support to SLO for GET and HEAD using a part number https://review.opendev.org/c/openstack/swift/+/894570
21:11:31 #link https://review.opendev.org/c/openstack/swift/+/894570
21:11:56 then the s3api takes advantage of the new feature in SLO
21:12:37 along the way we refactored the s3api unit tests to better separate the s3acl True/False test cases
21:13:56 quite a patch, +1261 -40. one question: how do users know how many parts are in an SLO object before they issue s3api partNumber HEAD/GET requests?
21:14:18 there are a few pieces still to polish - s3 restricts the maximum number of parts to 10000, but swift made it configurable and it defaults to 1000, so we've tried to do the right thing in terms of matching error messages that state what the partNumber range should be https://review.opendev.org/c/openstack/swift/+/904497
21:15:10 jianjian: the s3api MPU etag includes the part count, something like <hash>-<number of parts>, so clients can infer the number of parts from the etag
21:15:44 jianjian, fwiw i've seen s3 libraries do a GET with ?partNumber=1, then look at the response headers to see how many more concurrent download threads to kick off (one per part)
21:16:25 i see, thanks.
21:16:29 unfortunately SLO does NOT expose its part count in general (so only manifests uploaded via s3api have this form of etag)
21:17:26 but we could expose the part count from SLO as metadata in future (just not for existing objects IIUC)
21:19:07 ok, that's probably enough said on that, watch this space for a merge soon 🤞
21:19:25 okay, this is mainly for s3api
21:20:03 #topic container server GET namespaces
21:20:18 another set of patches that are hopefully near completion!
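A minimal sketch of the client-side partNumber pattern described in the 21:15:44 message above, assuming boto3 against a hypothetical endpoint with placeholder bucket and key names: HEAD part 1, read the parts count from the response, then GET each part (a real client would typically fetch them concurrently, one thread per part).

    # Sketch only: endpoint, bucket and key names are placeholders, not from the patch.
    import boto3

    s3 = boto3.client('s3', endpoint_url='http://saio:8080')

    # For a multipart upload, a HEAD with PartNumber=1 returns that part's
    # size and ETag plus a PartsCount telling the client how many parts exist.
    head = s3.head_object(Bucket='mybucket', Key='mympu', PartNumber=1)
    num_parts = head.get('PartsCount', 1)

    # Fetch each part individually; a partNumber GET returns just that part's bytes.
    parts = []
    for part_number in range(1, num_parts + 1):
        resp = s3.get_object(Bucket='mybucket', Key='mympu', PartNumber=part_number)
        parts.append(resp['Body'].read())

    data = b''.join(parts)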
21:21:39 the patch chain starts with https://review.opendev.org/c/openstack/swift/+/890470 but we're planning to squash patches before merging
21:21:42 #link https://review.opendev.org/c/openstack/swift/+/890470
21:22:30 for a while we have been using a more compact representation of shards when the proxy is caching them in memcache - we called this a Namespace - it just has a name and lower/upper bounds
21:23:15 this next step is to use the compact Namespace format when the proxy fetches shard ranges from the container server
21:23:16 yeah, the format stored in memcache only has name and lower
21:23:36 right, even more compact when serialized into memcache
21:25:07 so the container server GET grows support for an X-Backend-Record-Shard-Format header that can take value 'namespace', which will cause the container server to return a list of ... guess what... Namespaces :)
21:25:53 and the proxy changes to request this namespace format *when it wants shard ranges that will be cached*, since it only needs the compact form
21:27:00 one note on the performance improvement: for a very large container with >10K shards, a single query of all shard ranges from the container DB will take more than 300 ms, but a query of all namespaces will only take about 80ms.
21:28:16 yes, so we've seen significant improvement in container GET request times with this change already in prod, where we have some very large containers with many shards
21:28:21 i'm starting to wonder if we should *require* shard range caching -- i seem to remember OVH noticing a regression in the no-cache path at one point; i don't think we have any gate tests that really functionally cover it...
21:29:31 timburke: the no cache path worries me a little - I do sometimes set the cache existence times to zero in my SAIO for probe tests, but not routinely I confess
21:30:07 BTW, that's the only way I know to exercise the "no cache" path - you gotta have memcache in the pipeline for auth to work
21:31:02 yeah, that's part of my thinking... though i suppose i could push on https://review.opendev.org/c/openstack/swift/+/861271 more to get fernet token support into tempauth
21:31:29 the last patch in the chain (which may remain a separate patch) cleans up the proxy container GET path a lot, which may help reason about the two paths https://review.opendev.org/c/openstack/swift/+/901335
21:31:33 #link https://review.opendev.org/c/openstack/swift/+/901335
21:32:23 hmmm, it'll be interesting to consider how we would enforce "sharding requires memcache", if that's what you mean
21:33:08 i mean, we already nearly require memcache for account/container info -- if you don't have it, you're going to be incredibly sad
21:33:42 my thinking is mainly to formalize the dependency on memcache more
21:35:30 good point, we should do the same thing for shard range caching, it's much more expensive
21:36:56 agree with acoles, 901335 can be a separate patch
21:37:10 remind me, does zuul functional testing use a setup with memcache?
21:38:16 iirc the dsvm jobs at least do; i think maybe the in-process tests use our FakeMemcache? i'm not actually entirely sure...
21:38:43 they must have *some* way of storing/retrieving tokens...
21:39:49 sure enough, FakeMemcache: https://github.com/openstack/swift/blob/master/test/functional/__init__.py#L110-L118
21:39:49 ok, maybe we can explore that more
21:39:57 let's move on?
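A rough illustration of the Namespace idea from the 21:22:30 and 21:25:07 messages above; the class and helper below are assumptions made for the sketch, not Swift's actual implementation. The point is that a name plus lower/upper bounds is all the proxy needs to route an object name to its shard, so the full ShardRange records (timestamps, object counts, state, epoch, ...) never need to leave the container server on this path.

    # Illustrative only; not the real swift.common classes.
    from bisect import bisect_left
    from collections import namedtuple

    # Compact form: just enough to route object names to shards.
    Namespace = namedtuple('Namespace', ['name', 'lower', 'upper'])

    def find_namespace(namespaces, obj_name):
        """Return the namespace whose (lower, upper] range contains obj_name.

        Assumes namespaces are sorted by upper bound and contiguous, as a
        container's shard ranges are.
        """
        uppers = [ns.upper for ns in namespaces]
        i = bisect_left(uppers, obj_name)
        if i < len(namespaces) and namespaces[i].lower < obj_name <= namespaces[i].upper:
            return namespaces[i]
        return None

    # Hypothetical shard names and bounds, purely for the example:
    namespaces = [
        Namespace('.shards_AUTH_test/c-0', '', 'cat'),
        Namespace('.shards_AUTH_test/c-1', 'cat', 'giraffe'),
        Namespace('.shards_AUTH_test/c-2', 'giraffe', 'zebra'),
    ]
    print(find_namespace(namespaces, 'dog').name)  # .shards_AUTH_test/c-1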
21:40:12 #topic eventlet
21:40:19 https://github.com/NVIDIA/swift/blob/master/tools/playbooks/saio_single_node_setup/setup_saio.yaml#L114
21:40:42 so, there's been a bunch of eventlet activity lately
21:41:06 among other things, adding support for py310-py312!
21:41:54 which is all well and good, except when the powers that be went to update upper-constraints... our tests were failing with it
21:42:08 #link https://review.opendev.org/c/openstack/requirements/+/904147
21:42:28 the fun thing, though, is that it wasn't a normal failure; it was a segfault!
21:42:57 i wrote up a bug for eventlet
21:42:59 #link https://github.com/eventlet/eventlet/issues/864
21:43:25 and they wrote up a bug for cpython (since i didn't have time to get to that right then)
21:43:32 #link https://github.com/python/cpython/issues/113631
21:43:51 but the response there was mostly "well, don't do that, then"
21:44:01 haha
21:44:32 the good news is, there seems to be a new approach to greening existing locks for eventlet
21:44:39 #link https://github.com/eventlet/eventlet/pull/866
21:45:07 i'm going to try to put it through its paces, but so far it seems promising
21:46:14 but it might have some implications for when we want to monkey-patch -- i remember us wanting to stop monkey-patching as an import side-effect, but we might need to
21:46:48 the segfault in our unit tests was also independently reported
21:46:51 #link https://bugs.launchpad.net/swift/+bug/2047768
21:47:44 and since there are published eventlet packages out there that can trip the segfault, it seems like we probably want to do what we can to work around it
21:48:29 fortunately, i think there's just one spot where we actually for-realsy monkey-patch in tests, and if we mock it out, no segfault (at least, OMM)
21:48:40 #link https://review.opendev.org/c/openstack/swift/+/904459
21:50:10 timburke: "one spot where we actually for-realsy monkey-patch in tests" ... sorry, I'm probably being dumb, but how do we not do the monkey-patch in other unit tests
21:50:13 ?
21:50:21 is logging the only place in swift which uses locks? if we can safely switch to no-lock logging, maybe we don't need lock greening anymore
21:52:22 acoles, i'm not actually entirely sure -- i think other tests involving run_wsgi etc. are exercising the error-checking before monkey-patching? or something. maybe i just got lucky
21:53:20 jianjian, there's definitely another lock in (unittest.)mock -- that's the one that started tripping the segfault. not sure about other places there may be locks lurking
21:53:52 i see
21:54:48 on the whole, though, i think it's yet more argument for us finding a way to get off eventlet -- writing cooperative code as though it's blocking doesn't seem long-term sustainable
21:55:45 ok, thanks for the update timburke, and for keeping tabs on eventlet!
21:56:03 or at least get off of monkey-patching? maybe if we were more explicit about things, it'd be better... though at that point, it'd probably be about the same effort to rewrite it all with asyncio
21:57:17 last I read, the community's long term plan is to move away from eventlet, right?
21:57:32 probably so, at least to get off of monkey-patching
21:58:07 yep -- and eventlet's long-term plan looks to be to move to some legacy/unmaintained status
21:58:20 we have a couple of minutes, so...
21:58:29 #topic open discussion
21:58:37 anything else to discuss?
21:58:41 i think i'll spend a bit of time poking at the py312 test failures next
22:00:22 seems like we're done, so as is traditional...
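A small sketch of the patch-ordering hazard behind the lock "greening" discussion at 21:44:32 and 21:46:14 above, assuming eventlet's documented monkey_patch() behaviour: a lock created before patching keeps its original OS-level implementation, while one created afterwards is the cooperative green version, which is why greening pre-existing locks (and exactly when Swift monkey-patches) matters.

    # Illustration of ordering only; not code from the eventlet PR.
    import threading

    # Created before monkey patching: a plain OS lock. A green thread that
    # blocks on it can stall the whole eventlet hub.
    early_lock = threading.Lock()

    import eventlet
    eventlet.monkey_patch(thread=True)

    # Created after patching: threading.Lock now refers to the green,
    # cooperative implementation.
    late_lock = threading.Lock()

    print(type(early_lock), type(late_lock))  # typically two different types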
22:00:25 thank you for coming and thank you for working on swift!
22:00:32 and happy new year
22:00:37 #endmeeting