21:07:49 <timburke> #startmeeting swift
21:07:49 <opendevmeet> Meeting started Wed Mar 22 21:07:49 2023 UTC and is due to finish in 60 minutes.  The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:07:49 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:07:49 <opendevmeet> The meeting name has been set to 'swift'
21:08:23 <timburke> main things this week
21:08:30 <timburke> #topic vPTG
21:08:36 <timburke> it's next week!
21:08:53 <mattoliver> already!
21:09:01 <timburke> also, i accidentally scheduled a vacation at the same time 😳
21:09:10 <kota> wow
21:09:17 <mattoliver> sure sure :P
21:09:22 <mattoliver> yeah, no stress
21:09:31 <timburke> but it sounds like mattoliver is happy to lead discussions
21:10:23 <mattoliver> yeah, I ain't no timburke but I can talk, so happy to lead. But I need people to help me discuss stuff :)
21:10:44 <mattoliver> So put your topics down!
21:11:00 <mattoliver> timburke: do we have rooms scheduled etc?
21:11:59 <timburke> no, not yet -- i'd suggest going for this timeslot, M-Th
21:12:23 <timburke> sorry acoles. there isn't really a good time :-(
21:12:59 <timburke> take good notes! i'll read through the etherpad when i get back :-)
21:13:14 <mattoliver> kk
21:13:51 <mattoliver> is there a place I'm supposed to suggest/register the rooms, or do I just register them via the bot like I did for the ops feedback last time?
21:14:39 <timburke> via the bot, like last time. anyone should be able to book rooms over in #openinfra-events by messaging "#swift book <slot ref>"
21:15:05 <mattoliver> cool, I'll come up with something
21:15:29 <timburke> #topic py3 metadata bug
21:15:34 <timburke> #link https://bugs.launchpad.net/swift/+bug/2012531
21:15:36 <mattoliver> So long as acoles is ok with it. Or maybe we have an earlier one for ops feedback.. I'll come up with something
21:15:47 <mattoliver> oh this seems like an interesting bug
21:15:59 <timburke> so... it looks like i may have done too much testing with encryption enabled
21:17:01 <timburke> (encryption horribly mangles metadata anyway, then base64s it so it's safer -- which also prevented me from bumping into this earlier)
21:19:15 <timburke> but the TLDR is that py3-only clusters would write down object metadata as WSGI strings (that crazy str.encode('utf8').decode('latin1') dance). they'd be able to round-trip them back out just fine, but if you had data on-disk already that was written under py2, *that* data would cause the object-server to bomb out
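To make the bug concrete, here is a minimal sketch of the WSGI-string dance being described; the variable names are illustrative and this is not Swift's actual diskfile code:

```python
# The "WSGI string" dance on py3: UTF-8 bytes re-decoded as latin-1 so the
# raw byte values survive inside a native str.
real = "\u2603 snowman"                        # logical metadata value
wsgi = real.encode('utf-8').decode('latin-1')  # mangled-looking, but reversible

# A py3-only cluster writes `wsgi` and round-trips it back out just fine,
# which is why the bug stayed hidden:
assert wsgi.encode('latin-1').decode('utf-8') == real

# But metadata written under py2 comes back as a proper str; pushing it
# through the latin-1 step blows up on non-latin-1 characters, which is
# roughly how the object-server ends up bombing out:
try:
    real.encode('latin-1')
except UnicodeEncodeError as err:
    print("can't treat a proper str as a WSGI string:", err)
```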
21:20:19 <acoles> sorry guys I need to drop off, I'll do my best to make the PTG - mattoliver let me know what you work out with times
21:21:02 <mattoliver> acoles: kk
21:21:10 <timburke> my thinking is that the solution should be to ensure that diskfile only reads & writes proper strings, not WSGI ones -- but it will be interesting trying to deal with data that was written in a py3-only cluster
21:21:12 <mattoliver> timburke: oh bummer
21:22:10 <mattoliver> so diskfile will need to know how to return potential utf8 strings as wsgi ones, so another wsgi str dance.
21:22:19 <mattoliver> but I guess it's only for the metadata?
21:23:10 <timburke> yeah, should only be metadata. and (i think) only metadata from headers -- at the very least, metadata['name'] comes out right already
21:24:15 <timburke> hopefully it's a reasonable assumption that no one would actually *want* to write metadata that's mis-encoded like that, so my plan is to try the wsgi_to_str transformation as we read meta -- if it doesn't succeed, assume it was written correctly (either under py2 or py3-with-new-swift)
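Roughly, that try-then-fall-back plan could look something like the sketch below; wsgi_to_str is written out inline as a stand-in rather than importing whatever helper Swift actually uses, so treat the names as hypothetical:

```python
def wsgi_to_str(wsgi_str):
    # Undo the WSGI-string dance: latin-1-encode to recover the raw bytes,
    # then decode them as UTF-8.
    return wsgi_str.encode('latin-1').decode('utf-8')

def normalize_meta_value(value):
    """Return a proper str for an on-disk metadata value.

    Try the wsgi_to_str transformation first; if it fails, assume the value
    was already written correctly (under py2, or py3 with the fixed Swift).
    """
    try:
        return wsgi_to_str(value)
    except UnicodeError:
        return value
```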
21:24:47 <mattoliver> yeah, kk
21:25:06 <mattoliver> let me know how you go or if you need me to poke at anything, esp while you're away
21:25:24 <timburke> thanks mattoliver, i'll try to get a patch up for that later today
21:25:30 <mattoliver> and thanks for digging into it. that's a bugger of a bug.
21:27:04 <timburke> makes me wish i'd had the time/patience to get func tests running against a cluster with mixed python versions years ago...
21:27:12 <timburke> anyway
21:27:14 <timburke> #topic swiftclient release
21:27:48 <timburke> we've had some interesting bug fixes in swiftclient since our last release!
21:29:13 <timburke> #link https://review.opendev.org/c/openstack/python-swiftclient/+/874032 Retry with fresh socket on 499
21:29:30 <timburke> #link https://review.opendev.org/c/openstack/python-swiftclient/+/877110 service: Check content-length before etag
21:29:48 <timburke> #link https://review.opendev.org/c/openstack/python-swiftclient/+/877424 Include transaction ID on content-check failures
21:30:11 <timburke> #link https://review.opendev.org/c/openstack/python-swiftclient/+/864444 Use SLO by default for segmented uploads if the cluster supports it
21:30:34 <timburke> so i'm planning to get a release out soon (ideally this week)
21:30:46 <mattoliver> ok cool
21:30:59 <timburke> thanks clayg in particular for the reviews!
21:32:37 <timburke> that's most everything i wanted to cover for this week
21:33:37 <mattoliver> nice. If there is anything else anyone wants to cover, put it in the PTG etherpad ;)
21:34:04 <timburke> other initiatives seem to be making steady progress (recovering expired objects, per-policy quotas, ssync timestamp-with-offset fix)
21:34:13 <timburke> #topic open discussion
21:34:21 <timburke> anything else we should talk about this week?
21:34:44 <mattoliver> We did have some proxies with very large memory usage, > 10G
21:35:14 <mattoliver> so not sure if there is a bug there. maybe some memory leak with connections.. but it's too early to tell. I'm attempting to dig in, but just a heads up.
21:35:21 <timburke> right! this was part of our testing with py3, right?
21:35:29 <mattoliver> may or may not turn into anything
21:35:43 <mattoliver> yup
21:35:53 <timburke> i'm anxious to see a repro; haven't had a chance to dig into it more yet, myself
21:36:31 <mattoliver> there seem to be a lot of CLOSE_WAIT connections, so I wonder if it's a socket leak or connections not closing properly or something.
21:36:41 <mattoliver> I'll try and dig in some more today
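For anyone else who wants to keep an eye on this, one quick way to count CLOSE_WAIT sockets on a proxy node (assuming psutil is installed; run as root to see every process's connections):

```python
from collections import Counter

import psutil

# Tally TCP connection states across the whole box; a steadily growing
# CLOSE_WAIT count is the symptom being discussed above.
counts = Counter(conn.status for conn in psutil.net_connections(kind='tcp'))
print(counts.get(psutil.CONN_CLOSE_WAIT, 0), 'sockets in CLOSE_WAIT')
```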
21:37:40 <kota> nice
21:38:15 <mattoliver> I am also working on an internalclient interface for getting shard ranges, as more and more things may need to become shard aware.
21:38:30 <mattoliver> #link https://review.opendev.org/c/openstack/swift/+/877584
21:38:58 <mattoliver> but it's still a WIP, like other things, let's see how we go.
21:39:49 <mattoliver> if there is a gatekeeper added to the internal client it'll break the function though. Al has suggested one possible fix, I came up with a middleware shim in internal client, and clayg seems to think we should just error hard.
21:40:06 <mattoliver> break the interface I mean.
21:40:41 <mattoliver> So discussions are happening about that.. might start with the simplest and error loud I guess, but let's see where it goes.
21:41:02 <mattoliver> That's all I have
21:42:41 <timburke> i'm surprised there'd be any internal clients that would want a gatekeeper... huh
21:43:11 <mattoliver> well there aren't
21:43:45 <mattoliver> but if someone creates one with allow_modify_pipeline=True (or whatever it's called), one will be added
21:44:13 <mattoliver> and this would break sharding.. in fact it might already, as the sharder uses internal client to get shards already; the interface just needs unifying
21:44:33 <mattoliver> or a misconfiguration from an op.
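A hedged sketch of the "error loud" option being discussed, just to make the shape of it concrete; the class-name check and the way the middleware chain is walked are assumptions, not the actual internal_client code:

```python
class BrokenPipelineError(Exception):
    pass

def ensure_no_gatekeeper(app):
    # Walk the wrapped WSGI pipeline looking for a gatekeeper, which would
    # strip the sysmeta/backend headers that shard-range requests rely on.
    # Following the chain via an 'app' attribute is an assumption about how
    # the middlewares wrap each other.
    while app is not None:
        if type(app).__name__ == 'GatekeeperMiddleware':
            raise BrokenPipelineError(
                'InternalClient pipeline includes a gatekeeper; '
                'shard-range requests will not work')
        app = getattr(app, 'app', None)
```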
21:49:31 <timburke> i'll blame it on clayg ;-) https://review.opendev.org/c/openstack/swift/+/77042/1/swift/common/internal_client.py
21:49:31 <mattoliver> So yeah, I could just be going down an edge case that doesn't really matter. But it is still a shoot-yourself-in-the-foot edge case, and do we attempt to avoid it, or assume people will do the right thing?
21:49:31 <mattoliver> lol
21:49:31 <timburke> well, i think i'll call it
21:49:31 <mattoliver> kk
21:49:31 <mattoliver> that's all I have anyway :)
21:49:31 <timburke> thank you all for coming, and thank you for working on swift!
21:49:31 <timburke> #endmeeting