Tuesday, 2023-06-20

16:30 <opendevreview> ASHWIN A NAIR proposed openstack/swift master: Add X-Open-Expired to recover expired objects  https://review.opendev.org/c/openstack/swift/+/874710
20:51 <timburke> paladox, i saw your question in -infra -- are the segments for the SLO in the same container as the manifest? if not, do the container ACLs match between the two containers?
20:51 <paladox> well there's a separate container it created, e.g. miraheze-obeymewiki-dumps-backup_segments
20:52 <paladox> i noticed it doesn't copy perms, so i set it so anons could access it as well, but that didn't fix it
20:52 <paladox> https://www.irccloud.com/pastebin/IJ6WQXdj/
20:53 <paladox> normal files work. it's just SLO objects that don't, unless i use a swift command
20:55 <fungi> hah, i was about to suggest asking in here
21:02 <paladox> on the SLO i see:
21:02 <paladox> https://www.irccloud.com/pastebin/ua1UPcqZ/
21:05 <paladox> accessing the segment directly worked
21:05 <timburke> paladox, looks like the object was uploaded as a DLO, not an SLO -- DLO also needs listing permissions on the segment container
21:05 <paladox> oh...
21:05 * paladox tries fixing that
21:05 <timburke> i'd either re-upload as an SLO, or add .rlistings to the ACL on the segments container
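
For reference, a minimal sketch of the second option using the python-swiftclient CLI; the container name is taken from the discussion above, and the ACL value grants anonymous object reads plus anonymous container listings:

    # allow anonymous reads of objects (.r:*) and anonymous listings (.rlistings)
    swift post --read-acl '.r:*,.rlistings' miraheze-obeymewiki-dumps-backup_segments
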
21:08 <paladox> that worked!!!
21:08 <paladox> timburke: which is recommended, SLO or DLO? and i'm not sure how you would upload as an SLO, i think we just used the swift upload command
21:10 <timburke> i'd recommend SLO -- it's the default (assuming the cluster supports it) in recent versions of python-swiftclient, but on old versions you should just need to add --use-slo to the command line
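
A minimal sketch of such a re-upload with the swiftclient CLI; the container name matches the one seen above, while the object name and segment size are illustrative:

    # force a static large object; split the source file into 1 GiB segments
    swift upload --use-slo --segment-size 1073741824 \
        miraheze-obeymewiki-dumps-backup dump.tar.gz

By default the client places the segments in a <container>_segments container, matching the naming seen earlier.
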
21:11 <paladox> We use debian buster, so whichever swift version that dist has
21:11 <paladox> *bullseye
21:12 <timburke> correction, recent *version* -- there's only been 4.3.0 with it
21:14 <paladox> hmm, looks like it should be default for us? we use version 2.26
21:14 <paladox> and i see no --use-slo
21:16 <timburke> sounds like a server version, i was talking client. bullseye has swiftclient 3.10, even sid is 4.2.0 -- so no 4.3.0 on debian yet
21:17 <paladox> oh
21:18 <timburke> "i see no --use-slo" -- is that in the CLI help? you get different help text depending on whether you're running `swift --help` or `swift upload --help`
21:25 <paladox> oh
21:25 <paladox> i see it in the latter
21:25 <paladox> what's the difference between SLO and DLO?
21:25 <paladox> is SLO more performant at downloading & uploading?
23:08 <timburke> (apologies for the delay) SLO offers better consistency guarantees by putting the complete list of segments (and their expected MD5s) in the manifest object itself -- DLOs rely on container listings, so if there are any delays/inconsistencies there, users may download incomplete data (and likely not realize it!)
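
To see that manifest for yourself, you can fetch it with the multipart-manifest=get query parameter; a sketch, where the endpoint, account, token, and names are placeholders, and the exact JSON keys of the stored form may vary by Swift version:

    # fetch the stored SLO manifest rather than the assembled object
    curl -s -H "X-Auth-Token: $TOKEN" \
        "https://swift.example.org/v1/AUTH_test/mycontainer/dump.tar.gz?multipart-manifest=get" \
        | python3 -m json.tool
    # returns a JSON array with one entry per segment, roughly:
    #   [{"name": "/mycontainer_segments/...", "hash": "<segment MD5>", "bytes": 1073741824, ...}, ...]
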
23:13 <paladox> timburke: do you know how i can get swift-replicator working with an out-of-storage node? i need to transfer data off it, but it doesn't seem to be happening. or am i missing something? i just see "Error syncing partition: [Errno 28] No space left on device" in the service status.
23:13 <paladox> i changed the weight
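
(Draining a device is typically done by zeroing its weight in the ring builder and rebalancing; a sketch, where the builder file name and the device id d3 are assumptions for illustration:)

    # assign the full device no new partitions
    swift-ring-builder object.builder set_weight d3 0
    # recompute the partition assignments
    swift-ring-builder object.builder rebalance
    # then copy the resulting object.ring.gz to every node so the replicators act on it
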
23:15 <paladox> Unable to read '/srv/node/sda31/objects/938/hashes.pkl'
                Traceback (most recent call last):
                  File "/usr/lib/python3/dist-packages/swift/obj/diskfile.py", line 1247, in __get_hashe>
23:15 <timburke> paladox, is that an rsync error? sounds like the destination drive may be full
23:15 <paladox> hmm
23:15 <paladox> nah, i don't think so.
23:15 <timburke> ah -- no space when rehashing... ick
23:18 <timburke> i think you'll need to delete some data -- i'd pick some arbitrary diskfiles, run swift-object-info to find the current assignments for them, check that they've been replicated there, then delete them (via the filesystem, not the API! don't want to go deleting it cluster-wide ;-)
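
A sketch of that workflow on the full drive (paths are illustrative, and the final step is only safe if the object really does exist elsewhere):

    # grab an arbitrary diskfile from the full drive
    DF=$(find /srv/node/sda31/objects -name '*.data' | head -n 1)
    # print the object's name, metadata, and the nodes the ring assigns it to
    swift-object-info "$DF"
    # after verifying a copy exists on one of those nodes, free the space locally
    rm "$DF"
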
23:19 <timburke> it'd be nice if we could tolerate the ENOSPC better, though -- i think i remember seeing that bug once...
23:20 <timburke> https://bugs.launchpad.net/swift/+bug/1491676
23:21 <paladox> found a 5g rsyncd.log file
23:21 <paladox> i've emptied it
23:24 <paladox> hmm, that still causes it to say out of storage...
23:35 <paladox> we don't use replicas (we only have one copy of the data; i know it's bad practice, but we cannot afford >=2 replicas)
23:42 <opendevreview> Tim Burke proposed openstack/swift master: Green GreenDBConnection.execute  https://review.opendev.org/c/openstack/swift/+/866051
23:42 <opendevreview> Tim Burke proposed openstack/swift master: tests: Fix replicator test for py311  https://review.opendev.org/c/openstack/swift/+/886538
23:42 <opendevreview> Tim Burke proposed openstack/swift master: tests: Stop trying to mutate instantiated EntryPoints  https://review.opendev.org/c/openstack/swift/+/886539
23:42 <opendevreview> Tim Burke proposed openstack/swift master: fixup! Green GreenDBConnection.execute  https://review.opendev.org/c/openstack/swift/+/886540
23:42 <opendevreview> Tim Burke proposed openstack/swift master: CI: test under py311  https://review.opendev.org/c/openstack/swift/+/886541
23:44 <timburke> the rsync log was probably on the root drive, not the data drive. the issue will be one of the disks mounted under /srv/node/* -- i'd start with a `df -h` to confirm which disk is full
23:44 <paladox> timburke: will it still try to rsync the data to other servers? or is it now stuck?
23:44 <paladox> there's only one drive (we don't have a separate one for the data)
23:45 <paladox> https://www.irccloud.com/pastebin/MNPKHphu/
23:45 <timburke> oh. hm. i would've thought the 5G would have helped :-/
23:45 <timburke> did we just immediately fill it up again? i guess check for other logs we can throw away?
23:47 <paladox> seems that was the only large file.
23:47 <paladox> for some reason, using fallocate_reserve didn't stop it from filling up
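
For reference, fallocate_reserve lives in the object server config; a sketch with an illustrative path and value. Note it is only checked when the object server itself preallocates space for incoming writes, so rsync replication traffic and growing log files can still push the disk past the reserve -- which may be why it didn't help here:

    # /etc/swift/object-server.conf (typical location)
    [DEFAULT]
    # stop accepting new object writes once free space would fall below 1% of the disk
    fallocate_reserve = 1%
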
