21:00:10 <timburke> #startmeeting swift
21:00:10 <opendevmeet> Meeting started Wed Feb 14 21:00:10 2024 UTC and is due to finish in 60 minutes.  The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:10 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:10 <opendevmeet> The meeting name has been set to 'swift'
21:00:19 <timburke> who's here for the swift meeting?
21:00:22 <mattoliver> o/
21:00:27 <jianjian> o/
21:00:34 <timburke> glad you've got power again, mattoliver :-)
21:00:57 <kota> o/
21:01:12 <timburke> as usual, the agenda's at
21:01:15 <timburke> #link https://wiki.openstack.org/wiki/Meetings/Swift
21:01:43 <mattoliver> me too! this week has been a little strange and unproductive.. Seems we use power for most things :P
21:02:14 <timburke> first up
21:02:23 <timburke> #topic expirer grace period
21:02:40 <jianjian> Tim and I are going to have another rain storm this weekend. Hopefully I get to keep my power
21:04:08 <timburke> this is something we (nvidia) have been using for a bit, because we've got an expiry-heavy workload where our users set an initial expiry on something, then repeatedly push it further and further out based on usage
21:04:08 <mattoliver> 🤞
21:04:56 <timburke> the trouble comes when there's something that missed an update (maybe the client ran out of retries), so gets accidentally deleted
21:05:37 <timburke> and it gives us an escape hatch to recover the data, between p 874806 and p 874710
21:05:39 <patch-bot> https://review.opendev.org/c/openstack/swift/+/874806 - swift - Add per account grace period to object expirer - 10 patch sets
21:05:40 <patch-bot> https://review.opendev.org/c/openstack/swift/+/874710 - swift - Add x-open-expired to recover expired objects - 27 patch sets
21:06:27 <mattoliver> You would think they could just add a grace period to the expiry themselves.. ie x + <expiry seconds>. but meh.
21:07:33 <mattoliver> where x is supposed to be grace seconds.. did I mention I've only just woken up :P
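The grace-period idea discussed above can be sketched as follows. This is illustrative only (the function name and signature are hypothetical, not the patch's actual config plumbing); the real patches wire the grace period through per-account and per-container expirer configuration:

```python
import time

def should_expire(x_delete_at, grace_seconds, now=None):
    """Return True once an object is past its expiry *plus* the grace period.

    Illustrative sketch: the expirer holds off on deleting a task until
    grace_seconds after the stored x-delete-at, giving clients that
    missed an expiry-extension update a window to recover.
    """
    now = time.time() if now is None else now
    return now >= x_delete_at + grace_seconds
```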
21:08:05 <timburke> there's still a big glaring hole in that there's no auth requirements on the x-open-expired header, but it's been working well enough for us so we never got around to polishing them to the point that we can land them
21:09:04 <mattoliver> This heavy expirer workload also makes me want to revive the general task queue expirer work that OVH started... but like most things, not enough time in the day.
21:09:24 <timburke> fortunately, we got an intern recently, and one of the things he'll be working on is some improvements for those; hopefully we'll actually feel good about landing everything :-)
21:09:37 <mattoliver> \0/
21:09:38 <timburke> mattoliver, that was ntt as i recall
21:09:58 <jianjian> "no auth requirements on the x-open-expired header", you mean additional auth besides of account authorization?
21:11:20 <timburke> yeah -- anyone who is authed to get the 404 (instead of a 403/401) can add the header and get a 200 (so long as the expirer hasn't laid down a tombstone yet)
21:12:18 <jianjian> I see
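The access rule timburke described can be summarized as a small decision table. This is a stand-in sketch of the behavior, not swift's actual middleware code; only the X-Open-Expired header name comes from the patch:

```python
def open_expired_status(authed, expired, tombstoned, open_expired_header):
    """Sketch of the rule above: an unauthenticated caller still gets
    401; an authed caller normally sees 404 for an expired object, but
    can send X-Open-Expired to read it back -- unless the expirer has
    already laid down a tombstone.
    """
    if not authed:
        return 401
    if tombstoned:
        return 404
    if expired and not open_expired_header:
        return 404
    return 200
```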
21:12:28 <jianjian> and Anish's patch to add per container level grace period has been verified on an internal cluster
21:12:47 <jianjian> this one: https://review.opendev.org/c/openstack/swift/+/907762
21:12:47 <patch-bot> patch 907762 - swift - expirer: add per-container grace period - 8 patch sets
21:13:35 <timburke> 🎉
21:14:09 <timburke> and there's also p 907774 to get some level of configurability for the new header
21:14:10 <patch-bot> https://review.opendev.org/c/openstack/swift/+/907774 - swift - add enable open expired in proxy config - 1 patch set
21:15:15 <timburke> we might still want to get an auth decision in there, but cluster-level config option at least gets us a start :-)
21:15:49 <mattoliver> +1
21:16:22 <timburke> i don't know that there's a lot to discuss about those patches yet, but wanted to bring them to our attention and encourage some useful feedback
21:17:50 <timburke> #topic aws-chunked transfers
21:18:15 <timburke> i finally got around to revisiting p 836755!
21:18:15 <patch-bot> https://review.opendev.org/c/openstack/swift/+/836755 - swift - Add support of Sigv4-streaming - 9 patch sets
21:19:15 <timburke> i got it rebased; the gate failures are pretty fixable
21:19:52 <timburke> and i started hacking it up so i could use mountpoint-s3! p 908953
21:19:52 <patch-bot> https://review.opendev.org/c/openstack/swift/+/908953 - swift - Get basic write support for mountpoint-s3 - 2 patch sets
21:20:42 <mattoliver> I guess I need to read up on more s3 to understand what these are :P
21:21:04 <timburke> still needs a decent bit of work, but i think i managed to clean up the reader a good bit
21:21:16 <timburke> so, some useful reading:
21:21:31 <kota> mountpoint-s3, interesting
21:21:31 <timburke> #link https://github.com/awslabs/mountpoint-s3/
21:21:54 <timburke> #link https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html
21:22:36 <mattoliver> kk, thanks!
21:22:50 <timburke> #link https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
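For readers new to the sigv4-streaming format linked above: each aws-chunked body frames its data as `<hex-size>;chunk-signature=<sig>\r\n<data>\r\n`, ending with a zero-length chunk. A minimal decoder, ignoring signature verification (this is a sketch of the wire format, not the patch's implementation):

```python
def decode_aws_chunked(body: bytes) -> bytes:
    """Extract the payload from an aws-chunked body, skipping the
    per-chunk signatures. Stops at the terminating zero-length chunk.
    """
    payload = b""
    pos = 0
    while True:
        header_end = body.index(b"\r\n", pos)
        # the chunk header is "<hex-size>;chunk-signature=<64 hex chars>"
        size = int(body[pos:header_end].split(b";", 1)[0], 16)
        if size == 0:
            break
        data_start = header_end + 2
        payload += body[data_start:data_start + size]
        pos = data_start + size + 2  # skip the chunk's trailing CRLF
    return payload
```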
21:22:51 <kota> it looks like sort of Fuse imple for s3
21:23:31 <timburke> yup! we had some users express an interest in it, and i always enjoy being able to use AWS tooling against a swift cluster :-D
21:23:32 <jianjian> awesome!
21:24:31 <mattoliver> oh wow, it's exactly what it sounds like
21:25:52 <timburke> the cool thing is, so far it seems to be working! now i just need to actually have all the validation in place so we can feel confident telling users that the data was written correctly ;-)
21:26:14 <mattoliver> nice one timburke !
21:27:30 <zigo> Nice indeed!
21:27:56 <timburke> while i was thinking about that and a recent eventlet patch (https://github.com/eventlet/eventlet/pull/911), i also had a thought about our existing HashingInput we use to validate sha256 values
21:28:02 <jianjian> Nice, will try it out sometime
21:28:08 <mattoliver> I wonder if I could play with it and get it to, say, mount a package repo stored in swift to a centos controller node so I don't have to mirror it onto the disk while we transition (sorry downstream work) :hmm:
21:28:58 <timburke> so i wrote up p 909049 -- if we like that direction, there will probably be some implications for the aws-chunked series
21:28:58 <patch-bot> https://review.opendev.org/c/openstack/swift/+/909049 - swift - s3api: Improve checksum-mismatch detection - 1 patch set
21:30:29 <timburke> next up...
21:30:37 <timburke> #topic drive-full-checker
21:30:44 <zigo> o/
21:30:51 <zigo> Thanks for putting it in the agenda.
21:31:15 <timburke> zigo wrote up a new tool to help disable/re-enable rsync based on disk fullness!
21:31:17 <timburke> p 907523
21:31:18 <patch-bot> https://review.opendev.org/c/openstack/swift/+/907523 - swift - drive-full-checker - 24 patch sets
21:32:33 <zigo> I wrote this because after 5 years in production, this finally happened to us in one of the 6 swift AZs of one of our clusters, and I got pretty scared. So I started writing more puppet, then takashi suggested to push my script to swift rather than puppet-swift.
21:33:00 <zigo> (I mean that some partitions got full)
21:33:35 <zigo> I'll write the matching puppet-swift patch too.
21:33:53 <timburke> yeah, disk-full situations can spiral badly, unfortunately :-(
21:34:57 <timburke> thanks for the new operator tooling! i'll try to get some more reviews on it soon (and maybe see about getting some of our operators to weigh in, too)
21:35:03 <zigo> p 909004 already implements tweaking /etc/swift/drive-full-checker.conf with puppet.
21:35:04 <patch-bot> https://review.opendev.org/c/openstack/puppet-swift/+/909004 - puppet-swift - WIP: do not merge. drive-full-checker: implements dfc - 2 patch sets
21:35:17 <mattoliver> yeah me too, looks really interesting!
21:35:47 <mattoliver> I'll also give the link to our SRE, they might be interested in this too and get more eyes on it
21:36:56 <timburke> zigo, anything else you'd like to call out about the patch, or mainly just need reviews?
21:37:26 <zigo> Mainly reviews, as I think the patch is looking good already (after so many iterations).
21:37:42 <timburke> 👍
21:38:13 <timburke> #topic part-number support
21:39:11 <timburke> indianwhocodes is around again! i think these patches are on his list of things to follow up on, so we should see some more movement on them soonish
21:40:14 <timburke> p 894570 and p 894580 are the main patches
21:40:20 <patch-bot> https://review.opendev.org/c/openstack/swift/+/894570 - swift - slo: part-number=N query parameter support - 86 patch sets
21:40:27 <patch-bot> https://review.opendev.org/c/openstack/swift/+/894580 - swift - s3api: Support GET/HEAD request with ?partNumber - 94 patch sets
21:40:55 <mattoliver> cool, I did finally get around to "start" looking at the chain.. but then a big storm in melbourne took out power for a large chunk of the state.. so got sidetracked. But I'll continue looking now that I have power.
21:41:43 <timburke> #topic py312
21:42:22 <timburke> we've still got a few patches needed to get us support: p 904652 and p 904600 at least
21:42:23 <patch-bot> https://review.opendev.org/c/openstack/swift/+/904652 - swift - Add ClosingIterator class; be more explicit about ... - 8 patch sets
21:42:23 <patch-bot> https://review.opendev.org/c/openstack/swift/+/904600 - swift - Stop using deprecated datetime.utc* functions - 2 patch sets
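The general shape of the `datetime.utc*` deprecation fix referenced above (the actual patch may differ in details, and the helper names here are hypothetical): py312 deprecates `datetime.utcnow()` and `datetime.utcfromtimestamp()` because they return naive datetimes; the replacements are timezone-aware calls, with `tzinfo` stripped where naive-UTC compatibility matters:

```python
from datetime import datetime, timezone

def utcnow_naive():
    # replaces the deprecated datetime.utcnow(); drop tzinfo so existing
    # naive-UTC string formatting keeps working unchanged
    return datetime.now(timezone.utc).replace(tzinfo=None)

def utcfromtimestamp_naive(ts):
    # replaces the deprecated datetime.utcfromtimestamp(ts)
    return datetime.fromtimestamp(ts, timezone.utc).replace(tzinfo=None)
```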
21:43:20 <timburke> i also recently started seeing some failures in test_http_protocol, but only in a py312 environment
21:43:56 <mattoliver> kk, well as someone whose system default is 3.12 atm I am probably more interested in 3.12 support than most, so I'll take a look at these too :)
21:44:36 <timburke> but when running the test isolated, it'd pass. i double checked a recent verified vote, and none of those jobs seemed affected
21:44:56 <mattoliver> oh that's fun
21:45:20 <mattoliver> so it might be a combination with other tests thing or intermittent failure
21:45:21 <timburke> i wrote up a fix at p 909033; i see that zaitcev already took a look, but i haven't had a chance to try his recommendation
21:45:21 <patch-bot> https://review.opendev.org/c/openstack/swift/+/909033 - swift - tests: Clear txn id in setup for test_http_protocol - 1 patch set
21:45:23 <mattoliver> ?
21:46:23 <timburke> it's that there's a lingering txn id that was set on the main thread's thread locals before the test runs
21:46:36 <mattoliver> kk
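The failure mode timburke describes, as a self-contained illustration (this is a stand-in for the logger's per-thread state, not swift's actual code): thread-local state set by earlier activity on the main thread survives into a later test running on that same thread, which is why the fix clears it in setup:

```python
import threading

# hypothetical stand-in for the logger's per-thread state
_locals = threading.local()

def get_txn_id():
    return getattr(_locals, 'txn_id', None)

def set_txn_id(txn_id):
    _locals.txn_id = txn_id

def clear_txn_id():
    # what the fix does in setUp(): ensure earlier activity on the main
    # thread can't leak a lingering txn id into the test
    if hasattr(_locals, 'txn_id'):
        del _locals.txn_id
```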
21:46:54 <timburke> that's all i've got
21:46:58 <timburke> #topic open discussion
21:47:04 <timburke> anything else we should bring up?
21:48:04 <mattoliver> Where'd we end up on static web + prefixed tempurls? I thought it was looking pretty good. But you said something about something you wanted to follow up on?
21:48:27 <mattoliver> I haven't looked at the patch, but just interested in seeing if we wanted to finally land it soon :)
21:48:44 <mattoliver> #link https://review.opendev.org/c/openstack/swift/+/810754
21:48:44 <patch-bot> patch 810754 - swift - staticweb: Work with prefix-based tempurls - 14 patch sets
21:50:23 <timburke> i'd love to :-) i think the one question i still had was whether the 3xx redirect should pass along tempurl params, too, but i think i like it more without -- makes it a little more clear that you've probably signed the wrong path
21:50:56 <mattoliver> yeah, that's what I found in my playing with it. So I think it was more a doc prob than code ;)
21:51:40 <mattoliver> also thanks jianjian for looking at https://review.opendev.org/c/openstack/swift/+/877584
21:51:41 <patch-bot> patch 877584 - swift - internal_client: Add iter_{shard_ranges,namespaces... - 14 patch sets
21:52:42 <timburke> oh yeah, i ought to take a look at jianjian's p 908969!
21:52:42 <patch-bot> https://review.opendev.org/c/openstack/swift/+/908969 - swift - proxy: use cooperative tokens to coalesce updating... - 2 patch sets
21:52:58 <mattoliver> InternalClient is also internal to the cluster in my book. But happy to have discussions on it with others in the patch. So others please review if interested in a cached namespace interface
21:53:13 <mattoliver> oh yeah. the co-op token stuff. really interesting
21:54:00 <mattoliver> OK, we obviously have too much good stuff in the pipeline, we need to clear out (land) some patches so we can have more time getting this other good stuff in ;)
21:54:50 <timburke> sounds like a plan :-) i'll let y'all go so we can get on it, then
21:55:01 <timburke> thank you all for coming, and thank you for working on swift!
21:55:05 <timburke> #endmeeting