21:03:22 #startmeeting swift 21:03:22 Meeting started Wed Mar 20 21:03:22 2024 UTC and is due to finish in 60 minutes. The chair is mattoliver. Information about MeetBot at http://wiki.debian.org/MeetBot. 21:03:22 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. 21:03:22 The meeting name has been set to 'swift' 21:03:36 who's here for the swift meeting? 21:03:55 o/ (might as well make it feel as normal as possible :P) 21:04:33 As always the agenda is at 21:04:39 #link https://wiki.openstack.org/wiki/Meetings/Swift 21:05:15 And I did update the agenda for those reading along later. Although I forgot to add the first topic 21:05:41 #topic caracal release! 21:06:04 #link https://review.opendev.org/c/openstack/releases/+/912371 21:06:04 patch 912371 - releases - Release Swift for 2024.1 Caracal (MERGED) - 3 patch sets 21:06:39 It's landed, so we have a release! And because we have no one else here I'll have to say it... it's the best swift release yet! 21:07:07 Actually there were a lot of awesome things in this release, so kudos to everyone who works on swift! 21:08:16 Moving on, and most of these I'll just mention and move on, because the relevant parties aren't here today to give us real status updates. But will still mention them as they seem to be the active things running atm in swift land. 21:08:30 #topic s3api: Fix handling of non-ascii access keys 21:09:17 I confirm best swift release ever, thanks mattoliver :) 21:09:31 We came across this in our prod. Seems we missed a unicode conversion to wsgi string when doing the py3 migration all those years ago. And we only just discovered it. 21:09:39 lol, thanks JayF 21:09:50 But we have a fix 21:09:59 #link https://review.opendev.org/c/openstack/swift/+/913723 21:09:59 patch 913723 - swift - s3api: Fix handling of non-ascii access keys - 1 patch set 21:10:50 It's caused if someone has a non-ascii character in their aws key. 21:11:24 We had tests we thought covered this.. and looks like we did.
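For background on the "unicode conversion to wsgi string" mentioned above: under PEP 3333, WSGI servers hand header values to the application as "native strings" produced by decoding the raw bytes as latin-1, so a UTF-8 access key has to be round-tripped back to real text. A minimal sketch of that conversion (helper names are illustrative; Swift ships its own equivalents):

```python
def wsgi_to_str(wsgi_str):
    """Recover the real text from a PEP 3333 'native string'
    (which WSGI servers produce by decoding raw bytes as latin-1)."""
    return wsgi_str.encode('latin-1').decode('utf-8')

def str_to_wsgi(native_str):
    """The inverse: re-encode real text into the latin-1 form WSGI expects."""
    return native_str.encode('utf-8').decode('latin-1')

# A non-ASCII access key, as it would appear in a WSGI environ:
wsgi_key = 'tøst'.encode('utf-8').decode('latin-1')   # mojibake-looking str
assert wsgi_to_str(wsgi_key) == 'tøst'
assert str_to_wsgi('tøst') == wsgi_key
```

If code only ever exercises ASCII keys (as the original tests' fake app effectively did), a missing conversion like this is invisible, since ASCII round-trips identically through both encodings.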
But our fake app in the tests wasn't a good enough fake :( 21:11:45 Anyway.. just a heads up, I expect that to land before the next meeting. 21:11:57 #topic expirer grace period 21:13:53 This is an older topic from the last meeting, but left it in. There is active work going on it. We have an intern working on it atm, and it's coming along nicely. There are current discussions around maybe changing the name a little. Basically it gives us an optional grace period when expiring objects, so there can be like a soft delete. 21:14:20 one of our users had a need for this. And it's optional. 21:14:51 #link https://review.opendev.org/c/openstack/swift/+/874806 21:14:51 patch 874806 - swift - expirer: per account and container grace period - 17 patch sets 21:15:00 #link https://review.opendev.org/c/openstack/swift/+/874710 21:15:00 patch 874710 - swift - support x-open-expired header for expired objects - 36 patch sets 21:15:14 I'll try and remember to link first :P 21:15:26 #topic cooperative tokens 21:15:38 #link https://review.opendev.org/c/openstack/swift/+/890174 21:15:39 patch 890174 - swift - common: add memcached based cooperative token mech... - 14 patch sets 21:15:47 #link https://review.opendev.org/c/openstack/swift/+/908969 21:15:48 patch 908969 - swift - proxy: use cooperative tokens to coalesce updating... - 10 patch sets 21:16:11 Jianjian is leading the charge here. And it's really interesting and awesome work. 21:17:29 For those who have been running swift for a while, or come to the last few PTGs, we'd talked about a thundering herd problem, when there are a lot of updates coming to one sharded container and the root's shard-ranges have ttl'ed out of cache. 21:17:45 o/ 21:19:20 Well this uses a token in the cache as a lock that updaters can grab. If they get it then they can go to the back end and get the latest set of shardranges and then they'll place them back in cache. It's a modified version of what's called a ghetto lock.
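To make the token-as-lock idea just described concrete, here's a toy sketch (not the actual patch; the in-memory FakeMemcache stands in for memcached, whose atomic add() only succeeds when the key is absent, which is what makes the lock work):

```python
import time

class FakeMemcache:
    """In-memory stand-in for memcached, supporting the atomic add()."""
    def __init__(self):
        self.store = {}
    def add(self, key, value):
        # memcached 'add' only succeeds if the key does not exist: a cheap lock
        if key in self.store:
            return False
        self.store[key] = value
        return True
    def get(self, key):
        return self.store.get(key)
    def set(self, key, value):
        self.store[key] = value
    def delete(self, key):
        self.store.pop(key, None)

def get_shard_ranges(cache, container, fetch_from_backend):
    """Return shard ranges, coalescing backend hits behind a token."""
    cached = cache.get('shard-ranges/' + container)
    if cached is not None:
        return cached
    if cache.add('token/' + container, 'held'):
        # We won the token: do the expensive backend work once, for everyone
        ranges = fetch_from_backend(container)
        cache.set('shard-ranges/' + container, ranges)
        cache.delete('token/' + container)
        return ranges
    # Someone else holds the token: poll briefly for them to fill the cache
    for _ in range(3):
        time.sleep(0.01)
        cached = cache.get('shard-ranges/' + container)
        if cached is not None:
            return cached
    return fetch_from_backend(container)   # give up waiting; go direct

backend_calls = []
def backend(container):
    backend_calls.append(container)
    return ['range-a', 'range-b']

cache = FakeMemcache()
assert get_shard_ranges(cache, 'c1', backend) == ['range-a', 'range-b']
assert get_shard_ranges(cache, 'c1', backend) == ['range-a', 'range-b']
assert len(backend_calls) == 1   # only one backend hit despite two calls
```

The real patches are more careful than this toy (token TTLs so a crashed holder can't wedge updaters forever, and behaviour tuned for many concurrent proxies), but the shape is the same: many updaters, one cache fill.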
A slightly more concurrent version that better supports a distributed system like swift 21:19:40 Hey indianwhocodes just you and me, so I was mostly talking to myself. 21:19:52 but not anymore! now I don't look as crazy :P 21:20:15 ya scrolled up, good so far! 21:20:35 I currently have nothing ready for reviews yet 21:20:50 Anyway, the work is awesome. I hope to get in and review the chain. 21:20:55 thanks :P 21:21:03 ok next topic 21:21:15 #topic Feature/MPU feature branch 21:21:35 So we've created a feature branch! We haven't done one of those in Swift since container-sharding. 21:22:55 And it's because we're working, finally, on something we've been talking about for years. We used to call it ALO, or atomic large object. I.e., a large object where users can't see the segments so we can have a 1:1 connection. 21:24:25 We'll follow the MPU api somewhat. No idea what the name will end up being, swift MPU? So this will go with our DLO and SLO. And our s3api will eventually just use them. And no more weird edge cases of orphaned segments! 21:24:42 sounds promising! 21:24:52 acoles: is leading the charge. No doubt if he was here you'd get a better explanation than I can give :P 21:25:04 It does! 21:25:18 #topic aws-chunked transfers 21:25:46 timburke: would be better at talking about these. And making progress. 21:26:03 I did look at the patches in the past, I should do so again!
21:26:19 #link https://review.opendev.org/c/openstack/swift/+/909049 21:26:20 patch 909049 - swift - s3api: Improve checksum-mismatch detection - 5 patch sets 21:26:28 #link https://review.opendev.org/c/openstack/swift/+/909800 21:26:28 patch 909800 - swift - utils: Add crc32c function - 5 patch sets 21:26:40 #link https://review.opendev.org/c/openstack/swift/+/909801 21:26:40 patch 909801 - swift - s3api: Add support for additional checksums - 6 patch sets 21:27:08 I have been blackbox testing tim's patch with mountpoint-s3 benchmarks 21:27:11 I'm attempting to recreate the probetest failure of that ^ one. 21:27:26 oh nice 21:27:36 oh I forgot more links 21:27:45 #link https://review.opendev.org/c/openstack/swift/+/909802 21:27:45 patch 909802 - swift - WIP: s3api: Additional checksums for MPUs - 6 patch sets 21:27:53 #link https://review.opendev.org/c/openstack/swift/+/836755 21:27:53 patch 836755 - swift - Add support of Sigv4-streaming - 15 patch sets 21:28:05 man that timburke is a machine! 21:28:33 yeah we need these aws-chunked transfers for mountpoint-s3 don't we. 21:28:38 agreed. 21:28:55 So indianwhocodes you're a good man to test and review these :) 21:29:33 i am looking into adding more s3api cross-compat tests to p 909801 21:29:33 https://review.opendev.org/c/openstack/swift/+/909801 - swift - s3api: Add support for additional checksums - 6 patch sets 21:30:05 oh nice! 21:30:33 If/when you have a patch ready, let's add it to the list of patches then :) 21:31:13 Let's move on.. almost at the end of the agenda :) I told you we're busy! 21:31:26 #topic drive-full-checker 21:31:39 #link https://review.opendev.org/c/openstack/swift/+/907523 21:31:40 patch 907523 - swift - drive-full-checker - 34 patch sets 21:32:51 I think this is awesome, and we want to get this landed at some point. One of our SREs is interested in testing it. But downstream stuff got in the way this last week or so. Looks like timburke's had a play. 21:33:04 I hope to too at some point.
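For background on the crc32c patch linked above: CRC-32C is the Castagnoli variant of CRC-32 (reflected polynomial 0x82F63B78), one of the additional checksums S3 supports. Production implementations are table-driven or hardware-accelerated; this slow bit-at-a-time pure-Python version is only to show the algorithm:

```python
def crc32c(data, value=0):
    """Bit-at-a-time CRC-32C (Castagnoli). Slow reference implementation;
    the standard check value is crc32c(b'123456789') == 0xE3069283."""
    crc = ~value & 0xFFFFFFFF          # initial value / resume a running CRC
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # if the low bit is set, shift and fold in the reflected polynomial
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return ~crc & 0xFFFFFFFF           # final bit inversion

assert crc32c(b'123456789') == 0xE3069283
# Passing a previous result as `value` lets you checksum data incrementally:
assert crc32c(b'6789', crc32c(b'12345')) == crc32c(b'123456789')
```

The incremental form matters for streaming uploads like aws-chunked transfers, where the body arrives a chunk at a time.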
21:33:25 Maybe we can play with it in a VSAIO at least. 21:33:58 Hopefully I can trick sre into looking this week or so :P 21:34:16 #topic s3api and slo Partnum support 21:34:27 So the swift side chain landed! 21:34:34 Nice work indianwhocodes !! 21:34:41 finally. 21:34:51 Now we can add support to python-swiftclient! 21:35:02 #link https://review.opendev.org/c/openstack/python-swiftclient/+/902020 21:35:03 patch 902020 - python-swiftclient - support part-num in python swiftClient - 15 patch sets 21:35:13 i intend to play with the drive-full-checker too; if it goes ahead with mountpoint they will go hand in hand as major wins 21:35:44 Oh yeah, great point! 21:35:48 nice 21:36:28 reason i say that is that it took me just 2 benchmarking jobs to fill up my vsaio, so just the amount of data we have in prod will have a significant impact!!! 21:36:30 Well I'll try and take a look at the swiftclient partnum patch as soon as I can so we can get it all squared away. 21:36:45 sounds good. 21:37:49 Hey it's a jianjian, too bad he missed me talking about his awesome cooperative token stuff, he's going to have to read the logs :P 21:38:12 sorry for being late 21:38:24 that's awesome! yes, I will read the logs. :-P 21:38:44 indianwhocodes: you can increase your vsaio disk size if need be. But maybe easily filling them is what we need to test the drive-full-checker anyway :P 21:38:57 exactly. 21:39:26 #topic Drop support for liberasurecode<1.4.0 21:39:41 Last topic I have on the agenda before I open the floor. 21:40:05 This is from last meeting, and I think yeah sounds good.. but I guess I should actually go review it :P 21:40:16 I see jianjian did. Nice 21:40:28 i just did as well, lol. 21:40:38 oh nice 21:41:08 oh you did too, 8 mins ago!
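On the drive-full-checker topic above: the core of such a tool, checking each device's usage against a configurable threshold, can be sketched with just the stdlib (a hypothetical shape for illustration; see patch 907523 for the actual design, which integrates with Swift's device layout):

```python
import shutil

def drive_usage_percent(path):
    """Percentage of the filesystem at `path` that is in use."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

def find_full_drives(device_paths, full_threshold_pct=95.0):
    """Return the device paths whose usage meets or exceeds the threshold,
    e.g. so they can be marked unmountable and skipped for new writes."""
    return [path for path in device_paths
            if drive_usage_percent(path) >= full_threshold_pct]

# With a 0% threshold every mounted device reports as "full":
assert find_full_drives(['/'], full_threshold_pct=0.0) == ['/']
# And nothing can exceed 100%:
assert find_full_drives(['/'], full_threshold_pct=101.0) == []
```

A quickly-filled VSAIO, as described above, is exactly the environment where thresholds like this are easy to exercise.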
21:41:33 Well hopefully that means it'll land over the next week :) 21:41:49 That's all the topics I gathered (over the 10 mins before this meeting) :P 21:41:52 So 21:41:57 yeah, looked good to me, the new liberasurecode also is able to replace the old .so library with static functions, which is nice 21:42:30 oh cool 21:42:41 #topic open floor 21:43:55 I've been blowing dust off 3 year old patches I had for a better auto-sharding leader-election algorithm. Still fairly basic, but better than what we have: 21:44:00 #link https://review.opendev.org/c/openstack/swift/+/667030 21:44:00 patch 667030 - swift - Auto-sharding: first attempt at _elect_leader - 10 patch sets 21:44:39 That's basically a rebase and squash and an attempt to address some comments from 3 years ago. 21:44:42 wow, leader election... that's cool! 21:45:17 Well it's something we always had planned. And we purposely picked an overly simple one to land sharding 21:45:56 But have always told people not to use auto-sharding because it isn't production ready. But we do use auto-sharding in tests 21:46:11 was there a design doc? or is it mostly described in the commit message 21:46:53 Auto-sharding basically means the sharder takes responsibility not just for sharding but for identifying, scanning and initiating the sharding too. 21:47:09 yeah there is, and there's been a lot of docs over the years. 21:47:27 I've been trying to gather them all up. I'll find the current link 21:47:42 I have tried it before on a personal swift cluster 21:49:22 thanks. I guess only the leader can kick off a sharding with auto-sharding, what happens if the leader node dies? will another node stand up to be a new leader? 21:49:51 these are the problems. 21:50:00 #link https://docs.google.com/document/d/1VSpmPcEt1NDhDLb8Btvfl6BwnaeGboONvltSM-ZHUVQ/edit?usp=sharing 21:50:25 ^ that I think only works for nvidians, but I'll make it available to everyone when I get the chance 21:51:00 There are a lot of leader election options to choose from.
But the first version is an increment of the existing one. 21:51:03 👍 21:51:29 Existing one is super simple.. if you're index 0 for the container (partition) then you're the leader.. 21:52:17 So great for testing and simple, but doesn't actually use any real election and it's super easy to get 2 who think they're the leader because of rebalances and eventual consistency 21:53:11 this newer version takes ring versions into account and only listens to and gets a quorum of votes from the latest ring. Meaning handoffs (old primaries) now don't get a say. 21:54:00 And currently I think there is a double check. Am I leader, scan, am I still the leader, write ranges into shardranges and replicate. 21:54:20 throw away the work if I'm not. 21:55:00 though maybe that's inefficient. But taking the less split-brainy approach 21:55:42 so a shard leader could stop in the middle of sharding during rebalancing, and then cancel its work 21:56:21 another approach is to elect quicker and just get better at dealing with and recovering from the split-brains. But in reality we need to solve this anyway, because it'll happen 21:56:34 not in the middle of sharding.. only in scanning for shardranges. 21:57:01 Once a leader has scanned and inserted the shardranges, sharding then happens as per normal. 21:57:17 but yeah, atm 21:57:50 I am thinking of also adding a memcache sentinel lock or something 21:57:59 as an enhancement. 21:58:14 Or maybe we just need a brand new approach :) 21:58:34 like I said, these are from years ago and I'm relearning what past Matt was thinking :P 21:58:54 lol 21:59:27 Dream is one day, we have auto-sharding enabled by default in swift. And we never need to think about it.
21:59:50 that'll be cool 21:59:57 And for us downstream, we deprecate it from the controller 22:00:07 Anyway, we're at time 22:00:09 I also feel we need shard shrinking as well regarding the topic of sharding 22:00:38 yeah 22:01:10 Yeah, there is a shrinking edge case that is blocking auto-shrinking atm too, so that also needs to be solved! 22:01:12 Thanks for coming and thanks for working on swift! 22:01:15 #endmeeting
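To make the leader-election contrast from the open-floor discussion concrete, here's a toy sketch of the two schemes: the existing "index 0 is the leader" rule, and a hypothetical vote-counting version that only accepts votes cast against the latest ring version, so handoffs and old primaries get no say (data shapes are invented for illustration; the real patch works against Swift's rings):

```python
def is_simple_leader(replica_index):
    """Existing scheme: whoever holds replica index 0 of the container's
    partition is the leader. No real election; rebalances and eventual
    consistency can briefly leave two nodes both believing they're index 0."""
    return replica_index == 0

def elect_leader(my_id, votes, latest_ring_version, quorum):
    """Hypothetical improved scheme: only count votes from nodes that have
    the latest ring version, then require a quorum for the winner.
    A cautious leader would also re-check this after scanning, before
    writing shard ranges, and throw away its work if it lost leadership."""
    valid = [v for v in votes if v['ring_version'] == latest_ring_version]
    tally = {}
    for vote in valid:
        tally[vote['leader']] = tally.get(vote['leader'], 0) + 1
    if not tally:
        return False   # no up-to-date voters: nobody can claim leadership
    winner, count = max(tally.items(), key=lambda kv: kv[1])
    return winner == my_id and count >= quorum

votes = [
    {'leader': 'node-a', 'ring_version': 7},
    {'leader': 'node-a', 'ring_version': 7},
    {'leader': 'node-b', 'ring_version': 6},   # stale ring: vote ignored
]
assert elect_leader('node-a', votes, latest_ring_version=7, quorum=2)
assert not elect_leader('node-b', votes, latest_ring_version=7, quorum=2)
```

Filtering on ring version is what prevents the "2 who think they're the leader" situation described above: an old primary still voting for itself from a stale ring simply isn't counted.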