Thursday, 2020-06-18

02:48 *** rcernin has quit IRC
02:51 *** hoonetorg has quit IRC
02:59 *** rcernin has joined #openstack-swift
03:05 *** rcernin has quit IRC
03:20 *** rcernin has joined #openstack-swift
03:22 *** rcernin has quit IRC
03:22 *** rcernin has joined #openstack-swift
03:27 *** manuvakery has joined #openstack-swift
03:29 <manuvakery> timburke: as we discussed I proposed the basic tagging support here https://review.opendev.org/735173
03:29 <patchbot> patch 735173 - swift - s3api: Add basic support for ?tagging requests - 3 patch sets
03:30 <timburke> manuvakery, yes, i saw! thank you! i've been meaning to give it a spin but got side-tracked by gate issues
03:30 <manuvakery> Ok thanks
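For readers unfamiliar with the feature under review: the patch adds handling for S3 ?tagging requests through s3api. A minimal sketch of exercising it with boto3 follows; the endpoint URL, credentials, bucket, and key are made-up placeholders, not values from the log.

    import boto3

    # Placeholder endpoint/credentials for a Swift cluster with s3api enabled.
    s3 = boto3.client('s3', endpoint_url='http://127.0.0.1:8080',
                      aws_access_key_id='test:tester',
                      aws_secret_access_key='testing')

    # Attach a tag set to an existing object, then read it back.
    s3.put_object_tagging(
        Bucket='bucket', Key='obj',
        Tagging={'TagSet': [{'Key': 'env', 'Value': 'dev'}]})
    print(s3.get_object_tagging(Bucket='bucket', Key='obj')['TagSet'])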
03:30 *** ravsingh has joined #openstack-swift
03:37 *** psachin has joined #openstack-swift
04:05 <openstackgerrit> Tim Burke proposed openstack/python-swiftclient master: Clean up some warnings  https://review.opendev.org/736426
04:33 *** evrardjp has quit IRC
04:33 *** evrardjp has joined #openstack-swift
04:35 *** hoonetorg has joined #openstack-swift
06:24 *** m75abrams has joined #openstack-swift
06:28 *** ravsingh has left #openstack-swift
06:41 *** noonedeadpunk has joined #openstack-swift
06:43 *** rpittau|afk is now known as rpittau
07:54 *** rcernin has quit IRC
07:54 *** rcernin_ has joined #openstack-swift
08:20 *** rcernin_ has quit IRC
09:00 *** manuvakery has quit IRC
09:16 *** ccamacho has quit IRC
10:05 *** tkajinam has quit IRC
10:06 *** rpittau is now known as rpittau|bbl
10:34 *** manuvakery has joined #openstack-swift
10:48 *** rcernin_ has joined #openstack-swift
11:22 *** ccamacho has joined #openstack-swift
12:17 *** m75abrams has quit IRC
12:20 *** rpittau|bbl is now known as rpittau
12:57 *** noonedeadpunk is now known as noonedeadpunk_
13:20 *** psachin has quit IRC
14:05 *** m75abrams has joined #openstack-swift
14:14 *** rcernin_ has quit IRC
14:29 *** noonedeadpunk_ is now known as noonedeadpunk
14:55 *** m75abrams has quit IRC
15:16 *** jv has quit IRC
15:31 *** jv has joined #openstack-swift
15:43 <clayg> timburke: didn't you have a change related to p 728298 that tried to use the most recent backend_path instead of the first one?
15:43 <patchbot> https://review.opendev.org/#/c/728298/ - swift - proxy-logging: Use swift.backend_path if available - 3 patch sets
15:50 *** gyee has joined #openstack-swift
16:04 <timburke> good morning
16:04 <timburke> clayg, yeah -- https://review.opendev.org/#/c/735221/
16:04 <patchbot> patch 735221 - swift - s3api: Set swift.backend_path to the last-used pat... - 1 patch set
16:05 <timburke> i should get rledisez / ovh's take on it
16:05 <timburke> idk where people usually have ceilometer middleware in the pipeline -- i've never used it
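For context: s3api translates S3-style client paths into Swift account/container/object paths and records the result in the WSGI environment under swift.backend_path, and the patches above are about which recorded value logging middleware should report. A minimal, non-authoritative sketch of the general pattern (this is not the actual proxy-logging code):

    def path_for_logging(environ):
        # Prefer the backend (Swift) path recorded by s3api, if present;
        # otherwise fall back to the raw client-facing request path.
        return environ.get('swift.backend_path') or environ.get('PATH_INFO', '/')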
16:33 *** manuvakery has quit IRC
16:42 <openstackgerrit> Tim Burke proposed openstack/swift feature/losf: Merge remote-tracking branch 'gerrit/master' into feature/losf  https://review.opendev.org/735381
17:00 *** rpittau is now known as rpittau|afk
17:32 <openstackgerrit> Tim Burke proposed openstack/swift master: Rip out pickle support in our memcached client  https://review.opendev.org/736787
17:33 <clayg> timburke: where do you come up with these ideas?
17:33 <timburke> i get tired of having useless knobs :P
17:33 <timburke> speaking of, how's the retry-abort thing looking today?
17:34 <timburke> er, retry-complete, i mean
17:35 <timburke> 8 years seems like a long-enough deprecation period, though ;-)
17:53 <rledisez> timburke: we usually place ceilometermiddleware before any middleware that can generate subrequests, so it's on the left side of bulk and staticweb
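Concretely, that placement would look something like the following proxy-server.conf pipeline. This is only an illustrative sketch of the ordering rledisez describes, not a recommended pipeline; the exact set of middlewares and filter names varies per deployment.

    [pipeline:main]
    # ceilometer sits to the left of bulk, slo and staticweb so that it sees
    # the original client request rather than the subrequests they generate.
    pipeline = catch_errors proxy-logging cache authtoken keystoneauth ceilometer bulk slo staticweb proxy-logging proxy-server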
17:57 <clayg> timburke: still going!  it'll be 24 hours very soon
18:04 *** manuvakery has joined #openstack-swift
18:48 <openstackgerrit> Tim Burke proposed openstack/swift master: wip: memcache: Config option to add a chance of skipping  memcache  https://review.opendev.org/736802
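The WIP patch above is only described by its title here; the idea, as I read it, is to let a configurable fraction of cache lookups behave as misses so the backend is periodically re-consulted. A rough sketch of that idea (the option and function names below are hypothetical, not taken from the patch):

    import random

    def cache_get(memcache_client, key, skip_chance=0.0):
        # With probability skip_chance, pretend the key wasn't cached so the
        # caller re-fetches from the backend and refreshes the cached value.
        if skip_chance and random.random() < skip_chance:
            return None
        return memcache_client.get(key)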
18:55 <clayg> timburke: ok, my complete multipart is still working 24 hours after the fact
18:58 <clayg> well shit, no, my experiment was botched - so ignore everything I've said 😞
19:19 <clayg> ok, i have the loop making the right api calls now -
19:21 <clayg> @timburke abort after complete returns 204
19:22 <clayg> timburke: and you can still keep sending complete
19:22 <clayg> I'll try again to figure out how long 😞
19:22 *** gmann is now known as gmann_afk
19:22 <timburke> and you can still read the object? seems kinda wild that you can get a successful abort that doesn't actually clean up any data...
19:24 <clayg> FWIW doing a list-parts after complete fails immediately with NoSuchUpload
19:24 <clayg> i can still see it listed in the bucket - i'll try to download
19:25 <clayg> yeah, download seems to work
19:26 <clayg> were abort/complete/list-parts the main api calls we were wondering about after a successful complete?
19:26 <timburke> yeah, pretty sure
19:27 <clayg> ok, well I have at least one example of a complete failing after 24 hours - so that's something
19:27 <clayg> I have another completed upload calling complete every 5 mins (for realzy this time) and we'll find out how long it lasts 🤞
19:28 <clayg> I would assume abort would behave similarly... maybe I can convince my loop to do both 🤔
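For anyone wanting to repeat the experiment against AWS, a rough reconstruction of the loop described above. upload_id and parts are assumed to come from earlier CreateMultipartUpload/UploadPart calls (not shown in the log), and the bucket/key names are placeholders.

    import time
    import boto3

    s3 = boto3.client('s3')
    kwargs = dict(Bucket='bucket', Key='obj', UploadId=upload_id)

    while True:
        # Re-send CompleteMultipartUpload for an already-completed upload and
        # record how long the upload id keeps being accepted.
        resp = s3.complete_multipart_upload(MultipartUpload={'Parts': parts}, **kwargs)
        print(time.ctime(), resp['ResponseMetadata']['HTTPStatusCode'])
        time.sleep(300)  # every 5 minutes, as in the log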
19:31 <openstackgerrit> Tim Burke proposed openstack/swift stable/ussuri: Fix pep8 job  https://review.opendev.org/736814
19:37 *** gmann_afk is now known as gmann
20:10 *** gmann is now known as gmann_afk
20:17 <openstackgerrit> Tim Burke proposed openstack/swift stable/ussuri: Fix stable gate  https://review.opendev.org/736829
20:31 *** manuvakery has quit IRC
20:44 <timburke> what do people usually call their networks? i know clayg, tdasilva, and i use "outward-facing", "cluster-facing", and "replication" but idk how standard that is...
20:46 <timburke> also i'm thinking about https://review.opendev.org/#/c/735991/ and how it'd really be preferable to have reconciler on the replication network -- but still want to provide an option since that may not actually be accessible from where people currently have their reconcilers deployed
20:46 <patchbot> patch 735991 - swift - Add X-Backend-Use-Replication-Network header - 1 patch set
20:47 <timburke> having a `use_cluster_facing_network = True` option feels stupid and wrong... but use_client_traffic_network isn't great, either (though it seems demonstrably better than use_client_network!)
20:48 <timburke> anybody want to help me pick a name?
20:49 <timburke> use_primary_network, maybe?
20:57 <rledisez> FWIW, I use public, data and replication (data is proxy to storage node)
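A hedged illustration of what patch 735991 is about: a backend request can carry a header asking the receiving service to prefer replication-network addresses. Only the header name comes from the log; the host, path, and the idea of sending it from a raw HTTP client are illustrative assumptions, not how the reconciler actually issues requests.

    import requests

    # Hypothetical request carrying the new header; all values are made up.
    resp = requests.get(
        'http://proxy.internal:8080/v1/AUTH_test/some-container',
        headers={'X-Backend-Use-Replication-Network': 'true'})
    print(resp.status_code)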
21:42 <openstackgerrit> Tim Burke proposed openstack/swift stable/train: Use ensure-pip role  https://review.opendev.org/736845
21:45 *** lxkong has joined #openstack-swift
21:46 <lxkong> hi swift team, is there some way for a cloud admin to check (or better, delete) the orphan containers for the tenants that are removed from keystone?
21:48 <clayg> does swift client ever decide to use a *LO on its own?  w/o me setting -S ?
21:48 <timburke> lxkong, a reseller admin should be able to delete the account and let the account-reaper clean up all the old data -- see https://docs.openstack.org/swift/latest/overview_reaper.html
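In practice that is just a DELETE on the account URL with a reseller-admin token (and allow_account_management enabled on the proxy); the account-reaper then walks the account and removes its containers and objects asynchronously. A minimal sketch, with a made-up cluster URL, project id, and token variable:

    import requests

    # Placeholder values: a reseller-admin token and the account belonging
    # to the deleted keystone project.
    resp = requests.delete(
        'https://swift.example.com/v1/AUTH_0123456789abcdef',
        headers={'X-Auth-Token': RESELLER_ADMIN_TOKEN})
    print(resp.status_code)  # 204 marks the account for reaping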
21:49 <timburke> clayg, i think maybe when uploading from stdin? don't think so when you're uploading a file, though
21:50 <clayg> yeah i'm uploading from stdin!  is there any way to disable that?  "trust me it's less than 5GB"
21:52 <clayg> timburke: so this is a reconciler option?  do you want the default to be existing behavior or just code the option as a safety latch JIC and immediately deprecate (pls configure your reconciler to have access to replication network)
21:52 <clayg> and the only reason we're not worried about the sharder is that it's already on the container nodes next to the replicator ... which always uses the replication network
21:55 *** rcernin_ has joined #openstack-swift
21:57 <openstackgerrit> Clay Gerrard proposed openstack/swift master: Add concurrent primary feeder to EC GET requests  https://review.opendev.org/711342
22:00 <timburke> clayg, nah, don't think timur included a clean way to do that in https://opendev.org/openstack/python-swiftclient/commit/2faea932 -- you *could* set the segment-size arbitrarily high, but it'd mean buffering the whole thing in memory
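For readers following along: swiftclient segments stdin uploads into an SLO because it can't know the total size up front. The workaround timburke describes would look roughly like the command below; the container name, object name, and the exact effect of -S on stdin uploads are assumptions, not verified here.

    # Oversized segment size so everything lands in one segment
    # (buffered in memory, per the caveat above).
    some-command | swift upload my-container - --object-name my-object -S 5368709120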
22:02 <timburke> yeah, i was thinking that reconciler should default to using the replication network, but i ought to include a way to get back to the old behavior since there's no guarantee that the reconciler's running on a system with access to that net. idk about deprecating... hadn't thought that far ahead. could there be a legit reason to put them on the same net? 🤔
22:03 <timburke> as for the sharder, it *already* uses the replication network for a bunch of stuff -- the fact that it *also* uses the primary network sometimes is a pretty straight-forward bug with no real upgrade consequences
22:04 <timburke> like, it doesn't just live *next to* the replicator, it *is* a replicator (just a very special-case one)
22:04 *** gmann_afk is now known as gmann
22:05 *** tonyb has joined #openstack-swift
22:11 <clayg> Ok, so name it use_legacy_node_ip
22:12 <clayg> _and_port if you're feeling verbose
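To make the bikeshedding concrete: the option under discussion would live in the reconciler's config, default to the new behavior, and keep a legacy escape hatch spelled something like clayg's suggestion. Entirely hypothetical; no such option name had been settled at the time of this conversation.

    [container-reconciler]
    # Hypothetical opt-out, per the naming discussion above: fall back to the
    # client-facing (primary) ip/port instead of the replication network.
    use_legacy_node_ip_and_port = false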
22:13 *** rcernin_ has quit IRC
23:00 *** tkajinam has joined #openstack-swift
23:16 *** rcernin has joined #openstack-swift
23:27 <timburke> account-reaper gets a little hairier -- like, i *expect* that it'd have replication-network access (since it's walking account-server drives), but it hasn't been explicitly *required* before...
23:29 <timburke> ditto container-sync
23:52 <openstackgerrit> Tim Burke proposed openstack/swift master: Allow direct and internal clients to use the replication network  https://review.opendev.org/735751
