21:00:33 #startmeeting swift
21:00:33 Meeting started Wed May 31 21:00:33 2023 UTC and is due to finish in 60 minutes. The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:33 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:33 The meeting name has been set to 'swift'
21:00:41 who's here for the swift meeting?
21:00:45 o/
21:01:28 o/
21:01:52 as usual, the agenda's at https://wiki.openstack.org/wiki/Meetings/Swift
21:02:03 i only had a couple items i wanted to bring up
21:02:11 I don't know why, but I'm here. I guess I'll just try and ask acoles to look at https://review.opendev.org/c/openstack/swift/+/787656
21:02:58 i keep meaning to circle back on that one, too :-(
21:03:49 #topic py3 ssync metadata bug
21:04:07 and the fix at
21:04:08 #link https://review.opendev.org/c/openstack/swift/+/884240
21:04:27 zaitcev: darn, sorry!
21:05:38 i haven't left my review yet, but i should later today. i've got a couple small changes to tests i'd like to see, and i'm part way into updating a probe test -- but it's a little tricky since swiftclient can't parse the non-ascii meta :-(
21:06:05 core change looks correct, though
21:07:07 i'll probably push over with my unit test changes but leave the probe test as a follow-up
21:07:31 #topic logging/metrics for account/container info requests
21:08:04 i realized this morning that we no longer emit subrequest logging/metrics for get_account/container_info requests
21:08:41 it was an unintended consequence of https://review.opendev.org/c/openstack/swift/+/875819
21:09:12 by going directly to the proxy-app, i bypassed logging :-(
21:09:50 i've got a fix up -- it's still a little ugly, though
21:09:52 #link https://review.opendev.org/c/openstack/swift/+/884931
21:11:28 o/ (sorry I'm late)
21:11:36 I'm reminded of some patch mattoliver was working on where we added a pipeline property that held a list of the apps... was it the internal client iter_shard_ranges patch?
21:12:26 i'm torn about whether to stick with what i've got there (take the second-to-last item in the pipeline, assuming it's a ProxyLoggingMiddleware, otherwise go to the proxy app), or go searching for the right-most logging filter
21:12:58 acoles, https://review.opendev.org/c/openstack/swift/+/879128 i think
21:13:20 Yeah, that was it. But yeah, it stored them as a list, so it would be easy to search
21:13:29 yeah, that's where we added _pipeline
21:13:48 it's definitely handy for this fix :-)
21:14:39 timburke: are you concerned about the overhead of searching every time?
21:15:26 no -- my concern is more that there could be some middleware with unexpected consequences in between the last logging mware and proxy-server
21:16:48 something that could do some sysmeta manipulations, say. also, a little worried about the possibility of recursion errors
21:16:51 oh, now I'm confused... can you elaborate? what advantage does _pipeline[-2] have?
21:17:30 I gave up wrapping my mind around that too.
21:19:06 I guess [-2] means logging is the last middleware, then there's the app
21:19:20 yeah
21:19:47 if it's proxy logging, great. we wrote that and can trust it. otherwise, the patch as-written goes to the proxy app; we lose logging/metrics again, but at least we know all the sysmeta, ACLs, etc. in the response are intact
21:20:52 i don't think it's *likely* that anyone would have configured their pipeline with a problematic middleware in between the last logging and proxy-server, but it made me a little nervous
21:20:57 but documenting 'if you put *any* middleware after the final logger then there's some metric that may not be emitted' seems odd vs documenting 'if you put middleware after logging that does weird stuff then blah blah'
21:22:17 what would have happened, with a weird middleware, before you added the cut-through?
21:23:39 you'd get weird behaviors.
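[editor's note: a minimal sketch of the two strategies being weighed above -- trust `_pipeline[-2]` only if it is proxy-logging, vs. search for the right-most logging filter. The class names echo the discussion, but these are stand-in classes for illustration, not swift's actual middleware or its real `_pipeline` machinery.]

```python
class ProxyLoggingMiddleware:
    """Stand-in for swift's proxy-logging filter."""


class SomeOtherMiddleware:
    """Stand-in for an arbitrary filter between logging and the app."""


class ProxyApp:
    """Stand-in for the proxy-server app at the end of the pipeline."""


def entry_via_second_to_last(pipeline):
    # Strategy 1 (the patch as written): use pipeline[-2] only when it
    # is proxy-logging; otherwise cut through to the app itself, which
    # loses subrequest logging/metrics but avoids unknown middleware.
    candidate = pipeline[-2]
    if isinstance(candidate, ProxyLoggingMiddleware):
        return candidate
    return pipeline[-1]


def entry_via_rightmost_logger(pipeline):
    # Strategy 2: search for the right-most logging filter, at the cost
    # of re-running whatever middleware sits between it and the app.
    for entry in reversed(pipeline):
        if isinstance(entry, ProxyLoggingMiddleware):
            return entry
    return pipeline[-1]
```

With a pipeline like `[logger, other, app]`, strategy 1 falls back to the app (no metrics) while strategy 2 still finds the logger -- which is exactly the trade-off under discussion: strategy 2 keeps metrics but traverses `other`, whatever it does.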
all the more so if it's non-deterministic at which point in the pipeline the get-info call happens
21:24:46 getting rid of the possibility of weird behaviors was definitely on my mind when i did the cut-through -- i'm not thrilled about re-introducing them :-/ but maybe it's the most defensible path
21:25:18 I think I need to study the patch and the weirdness potential when I'm more awake
21:25:24 thanks for alerting us to it!
21:25:34 +1
21:26:16 seems fair. fwiw, one of the weird cases i was thinking of was something like the 1space shunt, where you're trying to merge namespaces from different accounts/clusters
21:26:32 anyway, that's all i've got
21:26:37 #topic open discussion
21:26:46 what else should we bring up this week?
21:27:48 I've been out this week, so I've got nothing I can think of atm
21:27:54 Me neither.
21:28:28 all right. i think i'll wrap it early, then
21:28:31 we shipped this change this week: https://review.opendev.org/c/openstack/swift/+/883367
21:28:47 and it does seem to have eliminated some curious timeouts
21:30:07 yes! it seemed so strange that we were bumping memcached timeouts at all, much less so long after the configured timeout
21:30:31 I guess one takeaway is that the network could move bytes faster than the proxy could EC-encode them, so the network would never block and the EC PUT greenthread would never yield to another greenthread :O
21:32:05 all right
21:32:09 thank you all for coming, and thank you for working on swift!
21:32:14 #endmeeting
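[editor's note: the starvation takeaway at 21:30:31 can be illustrated with a toy cooperative scheduler. In a cooperative model like eventlet's greenthreads, a CPU-bound task (an EC encode whose socket writes never block) never reaches a reschedule point, so other tasks -- such as a memcached call with a timer running -- don't get to run until it finishes. This is a from-scratch illustration, not swift's or eventlet's actual code.]

```python
def run(tasks, max_steps=100):
    """Round-robin over generator-based tasks; each `yield` is a
    voluntary reschedule point, like a blocking socket write."""
    order = []
    tasks = list(tasks)
    steps = 0
    while tasks and steps < max_steps:
        task = tasks.pop(0)
        try:
            order.append(next(task))  # run task until its next yield
            tasks.append(task)        # requeue it behind the others
        except StopIteration:
            pass                      # task finished; drop it
        steps += 1
    return order


def cooperative_encoder(chunks):
    # Pretend to EC-encode chunks, yielding after each one as if the
    # socket write blocked -- this lets other tasks interleave.
    for i in range(chunks):
        yield ('encode', i)


def memcached_heartbeat():
    # Stand-in for concurrent work (e.g. a memcached client) that
    # needs regular turns on the scheduler to avoid timing out.
    for i in range(3):
        yield ('heartbeat', i)
```

If `cooperative_encoder` were a plain loop with no yields, `memcached_heartbeat` would only run after all encoding finished -- which is how a memcached operation could appear to time out long after its configured timeout, matching the symptom the shipped change addressed.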