21:00:13 #startmeeting swift
21:00:13 Meeting started Wed Jan 20 21:00:13 2021 UTC and is due to finish in 60 minutes. The chair is timburke_. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:16 The meeting name has been set to 'swift'
21:00:26 who's here for the swift meeting?
21:00:31 o/
21:00:31 hi
21:00:33 hi
21:00:39 o/
21:00:45 o/
21:01:51 as usual, the agenda's at https://wiki.openstack.org/wiki/Meetings/Swift
21:01:55 first up
21:02:03 #topic reconciler pipelines
21:02:15 #link https://bugs.launchpad.net/swift/+bug/1910804
21:02:17 Launchpad bug 1910804 in OpenStack Object Storage (swift) "Encryption doesn't play well with processes that copy cleartext data while preserving timestamps" [Undecided,New]
21:03:00 o/
21:03:18 i just wanted to give an update on how we (nvidia) are addressing it, and get input on what we (as a community) want to do
21:04:16 nvidia is addressing it as an operations issue -- we wrote bad pipelines before, and now we'll have hardcoded reconciler pipelines that more closely match what's in the upstream docs
21:04:56 of course, there's still the upstream patch i wrote to forcibly remove some middlewares from the reconciler pipeline
21:05:02 #link https://review.opendev.org/c/openstack/swift/+/770522
21:05:25 ...but i'm not sure whether that's necessarily the direction we should be going
21:07:24 we should probably add some doc/comment in the reconciler sample conf to emphasise what not to do in the pipeline (and why)
21:07:58 does anyone have opinions on whether there should be new code to try to prevent the situation, or if it's purely a documentation issue?
21:08:42 I'm bothered by the fact that middlewares are no longer true middlewares if the core code knows about their intrinsic characteristics and modifies the pipeline on the basis of the middleware names.
21:08:59 Why are they middlewares, then?
Might as well re-format them as modules.
21:09:28 I guess there are (at least) 3 possible actions: 1. add docs 2. check the pipeline and log warnings 3. modify bad pipelines
21:10:10 I'd just dump it on docs, or better yet etc/object-server.conf-sample should have a working pipeline
21:10:33 And write a comment "do not insert symlink, dlo, slo here".
21:11:12 I would also add a note in the changelog to emphasise that operators must fix their configuration (not sure anyone reads documentation diffs before an upgrade)
21:11:22 zaitcev: it's container-reconciler.conf-sample
21:12:17 rledisez: +1
21:13:01 all right, that sounds like a plan then. i'll work on getting a docs/sample conf patch together, and plan on calling it out in the next release's changelog. thanks!
21:13:19 #topic ssync and non-durable frags
21:13:26 acoles, how's it going?
21:14:08 we're deploying a fix this week: https://review.opendev.org/c/openstack/swift/+/770047
21:14:45 and hoping to observe some 'stuck' handoff partitions get sync'd and cleaned up
21:15:21 anything else you need on that for right now? reviews, i suppose?
21:15:27 (the problem being non-durable EC fragments preventing handoff partitions from being deleted)
21:15:37 reviews always welcome
21:15:51 maybe you could comment on the review with the outcome
21:16:12 yes, and/or here next week
21:17:42 sounds good
21:17:53 #topic shard cleanup
21:18:32 mattoliverau, clayg, acoles i think you were all pushing on this a bit over the last week
21:18:37 We have 2 patches going in 2 different directions.
21:18:38 how's it going?
21:19:01 https://review.opendev.org/c/openstack/swift/+/770529 and https://review.opendev.org/c/openstack/swift/+/771086
21:19:26 the problem is this only really happens because autosharding + shrinking doesn't exist
21:20:09 so the first potential fix is patch one, which inserts a kind of poison pill -- or rather, the delete timestamp of the root -- when the root 404s
21:20:15 so they can get cleaned up
21:20:47 but the second patch simply makes reclaiming a deleted root container impossible until it has no shards
21:21:19 the second means we could have shards hanging around, but it gets us closer to the intended solution
21:21:25 the second is a prerequisite for most other solutions IMHO
21:21:35 +1
21:21:54 i.e. avoid orphaned shards, then figure out how to delete them so the root can be deleted
21:21:55 So I've been playing with acoles' compact shrinking patches
21:22:08 but dealing with orphans will be assisted by still having the root around
21:22:27 mattoliverau: yeah. that's like the third piece of the story :)
21:22:46 So an op can do something about collapsing the empty shards back into the root so they can be reclaimed. this shrinking code is also in the sharder, so when we get to autosharding it'll happen automagically at some point
21:23:18 i think i like that part the most ;-)
21:23:19 https://review.opendev.org/c/openstack/swift/+/765623 -> gives us a tool to get rid of empty shards
21:23:42 but again, it relies on the root still existing!
21:23:51 will that work against a db that's been marked deleted? have we tested that yet?
21:23:54 So I have a feeling maybe option 2 is best because it gets us further down the correct path.
21:24:06 yup
21:24:10 cool!
21:24:13 I'm almost there on https://review.opendev.org/c/openstack/swift/+/765623 but have one piece left to write
21:24:20 well, I think.. that's what I've been testing in my SAIO
21:24:47 timburke_: good question, need to add that test
21:25:10 sounds like good progress.
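For context on the shard "shrinking"/"compact" idea discussed above: runs of empty shard ranges get merged into a neighbouring "acceptor" range so the emptied shard databases can eventually be reclaimed. The toy sketch below illustrates only that core merging idea -- it is NOT Swift's actual sharder or swift-manage-shard-ranges code, and the (lower, upper, object_count) tuples are a simplified stand-in for real ShardRange objects.

```python
# Toy illustration of shard-range compaction (NOT Swift's implementation).
# A shard range is modelled as (lower, upper, object_count), where ''
# stands for an unbounded edge of the namespace.

def compact_shard_ranges(ranges):
    """Merge each run of empty ranges into the following non-empty
    acceptor, or into the preceding one at the tail of the listing."""
    compacted = []
    pending_lower = None  # lower bound of an accumulated empty run
    for lower, upper, count in ranges:
        if count == 0:
            if pending_lower is None:
                pending_lower = lower
            continue
        if pending_lower is not None:
            lower = pending_lower  # acceptor absorbs the empty run
            pending_lower = None
        compacted.append((lower, upper, count))
    if pending_lower is not None:
        if compacted:
            # trailing empty run: widen the last acceptor upward
            last_lower, _last_upper, last_count = compacted.pop()
            compacted.append((last_lower, ranges[-1][1], last_count))
        else:
            # everything was empty: collapse to a single range
            compacted.append((ranges[0][0], ranges[-1][1], 0))
    return compacted

ranges = [('', 'g', 0), ('g', 'p', 12), ('p', '', 7)]
print(compact_shard_ranges(ranges))  # [('', 'p', 12), ('p', '', 7)]
```

The "final collapse" case mattoliverau mentions corresponds to every shard being empty, leaving a single range covering the whole namespace that can be shrunk back into the root.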
anything else you guys need input on there?
21:25:13 Apropos of sharding, I'm going to enable it on Train. So I'm going to propose a few backports.
21:25:16 oh, cool, so mattoliverau did you try that manually?
21:25:40 I don't have +2 on stable branches, so I'll have to ask someone to approve.
21:26:06 timburke_: reviews always welcome :)
21:26:08 Sorry, I thought you'd moved on from the orphan shards discussion.
21:26:11 acoles: the compact worked to get it down to 1; it's the final collapse I need to test ;)
21:26:12 zaitcev, cool! let us know how it goes. how big are the containers you're looking at, out of curiosity?
21:26:23 so your last piece, as you will :)
21:26:31 timburke_: it's the 300GB container that I mentioned.
21:26:46 mattoliverau: the final collapse won't work yet, give me another 24 hours, but did you try it on a deleted root?
21:26:47 ah, right! that does sound familiar
21:27:06 yeah
21:27:11 super
21:27:20 timburke_: A national phone provider has tens of millions of phones archive their text messages once a day. And all of the objects have expiration tags, so the expirers delete tens of millions of objects each day.
21:27:31 And all of that in 1 container
21:27:49 I can't understand who thought it was a good idea, or how it works at all. They started in Queens.
21:28:10 zaitcev: do you know if the object names are monotonically increasing?
21:28:46 as in, new objects append to the end of the sorted listing
21:28:49 acoles: I think they are "username.timestamp.gz", so they should be increasing.
21:29:20 acoles: wait, nm. User 'a' has all of its archives ahead of user 'z'.
21:29:30 well, at least there will be *some* load-spreading across the shards
21:29:34 I need to look for sure.
21:29:35 I wonder if adding a vacuum facility to containers would be useful for cases like that too.
21:29:54 Oh it would.
21:29:56 rebalance replication would be a pain
21:30:28 But by the time I got there, sqlite vacuum would 100% fail with a segfault on that container.
Not even running out of RAM, just a segfault.
21:30:50 lol, well that's useful :P
21:31:14 I had this idea once that we could use the fresh-db-with-a-new-epoch trick to, well, get a fresh root db (without any change to shards), but that was just a wild thought
21:31:34 just copy what we care about to a fresh db and ditch the old one
21:32:29 In this particular case I calculated the number of objects and it was something like only 17 bytes per object in the .db, maybe 170. I don't think vacuuming was going to help my case.
21:33:02 all right, one last topic for updates
21:33:06 #topic relinker
21:33:38 rledisez, thanks for reviewing https://review.opendev.org/c/openstack/swift/+/769855 ! i pushed up a new patchset to roll in the test changes you recommended
21:34:47 timburke_: yeah, i reviewed it just before the meeting. code looks good, just one concern that should be easy to fix, I think. seems almost good to go
21:35:11 thanks!
21:35:33 i haven't gotten back to https://review.opendev.org/c/openstack/swift/+/769632 (better handle unmounted disks) yet, but hope to this week. i'm not so sure about the patch as written, since it'll log a warning but not change the exit code (which seems like it's still a bit dangerous)
21:35:44 that's all i've got
21:35:50 #topic open discussion
21:35:58 what else should we bring up this week?
21:37:41 I said my piece about sharding, albeit out of turn.
21:38:09 concerning patch 769632, I think the solution is in the way progress is reported to the operator (either recon or a swift-object-relinker status) that would tell them "on this device that is in the ring, there is no state file; it's suspect"
21:38:24 Also, I'm still cracking my head over why Romain's patch to delete Queue fails. It now fails with connect() returning EBADF, which is just absurd.
21:39:30 seems to still point to some wires getting crossed down in eventlet :-/
21:40:15 Oh, yeah. I started looking at S3 v4 signatures not working.
21:40:18 rledisez, good thought -- i'll see what i can do with the pre-device hook
21:40:30 No progress, but I'm amazed that nobody noticed.
21:40:41 All users just silently fall back to v2, I guess.
21:41:08 zaitcev, is it something involving unsigned payloads, maybe? i know there was a regression i accidentally introduced a bit ago... lemme find the patches...
21:41:51 No, or, I don't know. In my case I only ever did GETs, and they work for some names but not others.
21:42:06 Almost like an even-length name is okay
21:42:47 I need to dig deeper though. It certainly reproduces for me, using s3cmd. I need to re-try with Boto too.
21:42:53 huh. any non-ascii in the names, maybe?
21:43:32 Nope. Just names like s3cmd ls s3://test-1235163301
21:43:49 (the regression i was thinking of was caused by https://review.opendev.org/c/openstack/swift/+/767644, fixed by https://review.opendev.org/c/openstack/swift/+/770004)
21:44:20 I'll test, thanks a lot.
21:44:56 speaking of s3, I noticed some pipeline checks were not actually being executed in the s3api middleware: https://review.opendev.org/c/openstack/swift/+/771467
21:45:30 it was on my list to review
21:45:36 thanks
21:47:47 all right, looks like we can end a bit early
21:48:06 thank you all for coming, and thank you for working on swift!
21:48:11 #endmeeting
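For context on the "S3 v4 signatures" debugging discussed in the meeting: Signature Version 4 derives a signing key by chaining HMAC-SHA256 over the request date, region, and service, then signs a canonical "string to sign". The sketch below shows the generic algorithm from the SigV4 specification -- it is not Swift's s3api code, and all names and values are illustrative, not real credentials.

```python
# Minimal sketch of AWS Signature Version 4 key derivation and signing
# (the generic algorithm, not Swift's s3api implementation).
import hashlib
import hmac


def _hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()


def signing_key(secret: str, date: str, region: str,
                service: str = 's3') -> bytes:
    """Derive the SigV4 signing key: HMAC chained over
    date -> region -> service -> the literal 'aws4_request'."""
    k_date = _hmac_sha256(('AWS4' + secret).encode('utf-8'), date)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, 'aws4_request')


def signature(secret: str, date: str, region: str,
              string_to_sign: str) -> str:
    """Sign the canonical string-to-sign; client and server must
    arrive at the same hex digest for the request to authenticate."""
    return hmac.new(signing_key(secret, date, region),
                    string_to_sign.encode('utf-8'),
                    hashlib.sha256).hexdigest()
```

A mismatch anywhere in the canonical request (header ordering, URL encoding of the object name, payload hash) changes the string-to-sign and therefore the final digest, which is why a bug can surface only for certain object names, as zaitcev observed.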