Friday, 2024-01-19

opendevreviewYan Xiao proposed openstack/python-swiftclient master: Add transaction id to errors to help troubleshoot, including:   - error when downloading object with truncated/missing segment   - error when downloading or stat'ing non-existent object   - error when stat'ing or listing non-existent container  https://review.opendev.org/c/openstack/python-swiftclient/+/90377002:55
Steals_From_DragonsIs this the right place to ask some swift operations questions? 16:40
Steals_From_Dragons/msg jamesdenton_ howdy! It's intern (David Alfano). How are things going?16:42
Steals_From_DragonsSorry, it's been a while since I've done IRC16:44
timburkeno worries! and yeah, this is a great spot to ask swift ops questions :-)18:34
Steals_From_Dragons I've been seeing some errors in my container-replicator logs raising a "ValueError('object_count cannot be < 0')" in the get_own_shard_range function. I can also get the same error message when running `swift-manage-shard-ranges <path_to_container_db> info`.  This doesn't happen for all of the container databases, just a small number of them. My guess was that the containers associated with those databases no longer exist, but I'm not 18:48
Steals_From_Dragonsentirely sure how to verify that. 18:48
timburkeSteals_From_Dragons, i'd start by using sqlite3 to look at the db directly; something like sqlite3 $DB_FILE 'SELECT * FROM shard_range WHERE object_count <= 0;'19:01
timburkeseems like there might be some database corruption; hopefully from that query, you can identify the shard DB name, then find that with swift-get-nodes, and see what *that* DB looks like19:02
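(For reference, a minimal sketch of that inspection query, assuming $DB_FILE is a placeholder path to the suspect container DB; sqlite3's -line mode labels every column, which makes the returned row much easier to interpret:)

    # placeholder for the container DB path that swift-manage-shard-ranges complained about
    DB_FILE=/path/to/container.db
    # -line prints "column = value" pairs, one per line, so no separate -header flag is needed
    sqlite3 -line "$DB_FILE" 'SELECT name, object_count, bytes_used, deleted FROM shard_range WHERE object_count <= 0;'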
timburkeit's not clear yet whether the corruption is in the root DB or the shard; if it's the root, we should be able to reset the reported column in the shard to have it re-send the update19:08
timburkeif it's the shard, hopefully it only impacts one replica of the DB (so we can fix it by deleting/quarantining the affected DB and letting replication bring it back up to full durability)19:08
timburkethough we might still need to reset the reported column, too19:09
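(A hedged sketch only of that reset, assuming a Swift release whose shard_range table carries the "reported" column and that $SHARD_DB is a placeholder for the shard DB file located via swift-get-nodes; clearing the flag should make the sharder re-send the shard's stats to the root on its next cycle:)

    # placeholder path to the shard container's .db file
    SHARD_DB=/path/to/shard-container.db
    # clear the reported flag so the next sharder pass re-reports stats to the root DB
    sqlite3 "$SHARD_DB" 'UPDATE shard_range SET reported = 0;'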
Steals_From_DragonsOk, let me try that sqlite cmd19:29
Steals_From_DragonsLooks like it returned one row.  I would imagine that it wouldn't return at all if it was corrupt? 19:36
timburkewe've occasionally seen bit flips in dbs, as in https://bugs.launchpad.net/swift/+bug/182378519:39
timburkeSteals_From_Dragons, oh! i should have had you include a -header so we'd know what those results meant :P19:40
timburkebut i suspect there's a negative number somewhere in there, and it's the object_count -- and then somewhere else there's something that looks kinda like a path, which would be the account/container name for the shard19:41
Steals_From_DragonsYea, I see the thing that looks like a path. No negative numbers though. Interestingly both object_count and bytes_used are 019:44
timburkehuh. but swift-container-info still raises the error? maybe check the container_stat table, like sqlite3 -line $DB_PATH 'SELECT * FROM container_stat;'19:46
timburke(my thinking is that it's tripping the error trying to generate its own_shard_range)19:47
Steals_From_DragonsAh, there's the negative number19:48
Steals_From_DragonsObject_count is -119:48
Steals_From_DragonsActually, quite a few in here19:49
Steals_From_DragonsBytes_used, object_count, and both container_sync_points are negative 19:49
timburkethe sync points being negative are fine; that just means it's never successfully replicated to the other primaries19:52
timburkespeaking of other primaries, have you checked the other replicas yet?19:53
Steals_From_DragonsI just pulled them up with swift-get-nodes19:53
timburkemight also want to peek at the object table (though i expect it's empty)19:53
Steals_From_DragonsJust want to make sure, since we are dealing with the container database, I should be using the container ring with swift-get-nodes, correct? 19:56
timburkeyup!19:57
Steals_From_DragonsHm, getting a lot of 404s from the curls 20:01
Steals_From_DragonsBut I think it's probably my cmd and not the actual object 20:02
Steals_From_Dragons`swift-exec swift-get-nodes /etc/swift/container.ring.gz -a <container from the sqlite -line cmd> ` look ok? 20:03
Steals_From_DragonsAh! Got it! It was my cmd. Needed to put the account in there 20:11
timburkeSteals_From_Dragons, that seems right -- as long as you've got the account in there, too, i suppose...20:15
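(A sketch of the corrected lookup, with placeholder names: AUTH_example and example-container stand in for the account and container read out of container_stat. The container ring is the right one to pass for container DBs:)

    # account and container are separate positional arguments after the ring file
    swift-get-nodes /etc/swift/container.ring.gz AUTH_example example-container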
Steals_From_DragonsNow the curls respond with a 204 No content. 20:15
Steals_From_DragonsLooks like they all have the same negative values for bytes_used and object_count20:16
timburkehuh. any of the handoffs have anything? and do any of them have a delete timestamp?20:17
Steals_From_DragonsThe handoffs don't have anything, and the delete timestamp is all 0s20:18
Steals_From_Dragonscorrection: one of the handoffs has the same values as the first 3 20:23
timburkehave you already checked the object table and verified that it's empty on all of them? the simplest thing might be to run something like 'UPDATE policy_stat SET object_count=0, bytes_used=0;' if it really should be empty... though exactly how we got here is still a mystery...20:25
Steals_From_DragonsObject table in the container db? 20:26
timburkeyup -- run something like 'SELECT * FROM object WHERE deleted=0' (since we don't mind there being tombstone rows)20:28
Steals_From_DragonsIt doesn't return anything. Makes me think there might not have been anything in there to begin with? 20:31
Steals_From_DragonsChecking without the 'WHERE deleted=0" doesn't return anything either 20:32
timburkein that case, i feel pretty good about running that UPDATE query -- best to hit all nodes20:34
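(A hedged sketch of that fix, assuming $DB_PATH is a placeholder for one node's copy of the container DB; the same statement would be run against the copy on every primary and the affected handoff:)

    # zero out the corrupted counters -- only reasonable because the object table is empty
    sqlite3 "$DB_PATH" 'UPDATE policy_stat SET object_count = 0, bytes_used = 0;'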
Steals_From_DragonsBefore I do that... I decided to check the openstack project that "owns" this container (via the KEY uuid); the container doesn't show up in their container listing. Is it possible they deleted it and something went wrong with the tombstone process? I feel like that would explain why there are no objects.20:39
timburkecould be... makes me think the container got deleted, and even eventually reclaimed, but then the one corrupt handoff started trying to rsync itself back to primaries or something20:40
Steals_From_DragonsSo, if we fix the value, you think the delete process will reactivate? 20:41
timburkesince it doesn't have any delete timestamp, no, not really. might be best to just delete the hashdir on all primaries & the handoff21:02
timburkemight want to stop the replicators first, issue the deletes throughout the cluster, then start them again21:04
Steals_From_DragonsThe hashdir is the directory that's a number after /<drive>/container/, correct? 21:30
Steals_From_DragonsI just confirmed that this container was deleted a year and a half ago, kinda weird it's showing up now after all that time 21:30
Steals_From_DragonsI'll do all of the deletes on Monday so the weekend is safe. Thank you very much for your help today timburke21:35
timburkesure thing! good luck, Steals_From_Dragons21:36
timburkeoh, but the hashdir is down deeper -- like on my dev vm, i've got a db file at /srv/node1/sdb1/containers/450/afd/7089ab48d955ab0851fc51cc17a34afd/7089ab48d955ab0851fc51cc17a34afd.db, the hashdir is /srv/node1/sdb1/containers/450/afd/7089ab48d955ab0851fc51cc17a34afd/21:37
timburkethen /srv/node1/sdb1/containers/450/ is the partition; there are likely many DBs in that partition21:39
timburkeand /srv/node1/sdb1/containers/450/afd/ is the suffix; it's used to keep from having too many subdirectories directly under the partition21:40
Steals_From_DragonsAh ok, thank you for explaining it. 21:41
