21:00:19 #startmeeting swift
21:00:20 Meeting started Wed Feb 24 21:00:19 2021 UTC and is due to finish in 60 minutes. The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:21 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:23 The meeting name has been set to 'swift'
21:00:26 who's here for the swift meeting?
21:00:29 o/
21:01:15 o/
21:01:40 kota sent his apologies earlier in #openstack-swift
21:03:02 hi o/
21:03:27 agenda's at https://wiki.openstack.org/wiki/Meetings/Swift
21:03:34 first up
21:03:41 #topic sharding backports
21:03:59 zaitcev isn't in here right now, but i wanted to at least give a short status update
21:04:29 looks like he's been busy making sure that all the patches proposed for train made it onto ussuri and victoria
21:04:40 o/
21:05:02 timburke: do you have any idea why he's doing that?
21:05:09 victoria's just about all caught up, but i haven't looked at any of the ussuri patches yet. expect a bunch more merges
21:05:37 hopefully he has clients who just want some shardy goodness
21:06:29 or do you mean why he's backporting to u and v? so if someone takes the new train release with all those fixes, we want to make sure they could upgrade to u/v
21:07:56 some of the patches include things like new states in the sharder's state machine -- at the end of all this, it'd likely be pretty bad to go from tip of stable/train to the master -> stable/ussuri branch point, for example
21:08:59 a bunch of the train patches are in need of rebasing; i'll likely get to it if zaitcev doesn't, but likely after making progress on ussuri and victoria
21:09:10 that's about it
21:09:16 #topic CORS tests
21:09:19 sounds weird, we just don't normally backport stuff like this (fixes to "experimental" features) - so i wasn't sure what was going on
21:10:30 it's getting less and less experimental on master, so it seems reasonable to me to get a bunch of those fixes on whatever version people want to use sharding on, too
21:11:11 speaking of "less experimental", that CORS is pretty hawt
21:11:13 i figure there can be a lot of reasons to hold off on upgrading (even if master swift should Just Work with old OpenStack)
21:11:18 so yeah
21:11:41 a long while back, i started poking at writing some in-browser CORS func tests
21:11:43 #link https://review.opendev.org/c/openstack/swift/+/533028
21:12:32 this was originally to feel out a patch torgomatic was working on to pull CORS handling out to middleware, but that kinda stalled, so my tests did, too
21:13:01 "func" tests - browser integration tests - point is, javascript in a browser is the only reasonable way to exercise CORS (since half of it is "the browser won't let the request XYZ")
21:13:25 but for more than a year now i've had users wanting to be able to use s3api and CORS, and it's sure been a useful way to ensure that works
21:13:28 ... *if* the origin says this or that ... which is what our "CORS support" is going about configuring
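Swift's existing CORS support is configured per container via metadata, and the enforcement half happens in the browser after a preflight. A rough sketch of both pieces, using python-requests with placeholder endpoint, token, and container values (illustrative only, not part of the in-browser test suite under discussion):

```python
# Illustrative only: the container-metadata knobs Swift's CORS support reads,
# plus the preflight a browser would send. URL, token, and container name are
# placeholders, not values from the meeting.
import requests

storage_url = 'http://saio:8080/v1/AUTH_test'   # placeholder endpoint
token = '<auth token>'                          # placeholder token
container = f'{storage_url}/cors-demo'

# Configure CORS on the container via metadata.
requests.post(container, headers={
    'X-Auth-Token': token,
    'X-Container-Meta-Access-Control-Allow-Origin': 'https://example.com',
    'X-Container-Meta-Access-Control-Max-Age': '300',
    'X-Container-Meta-Access-Control-Expose-Headers': 'X-Object-Meta-Color',
})

# Simulate the preflight a browser would issue before a cross-origin PUT.
# Whether the real request is then allowed is enforced by the browser itself,
# which is why full coverage needs javascript running in a browser.
resp = requests.options(f'{container}/some-object', headers={
    'Origin': 'https://example.com',
    'Access-Control-Request-Method': 'PUT',
})
print(resp.status_code, resp.headers.get('Access-Control-Allow-Origin'))
```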
21:14:40 there have been a few iterations of how the test runs, but the core of it now seems fairly reasonable/useful to people
21:14:47 right; we need to ship some CORS support for s3api - aws s3 has some/enough support for some CORS things we already do in the swift side - so it's mostly just making s3api let things through
21:15:27 I'm getting a strong feel for the makeup of the test suite - the various stages and how things get put together
21:15:31 i think acoles, clayg, and mattoliverau have all managed to run the tests locally, and there's even a gate job
21:15:58 I have some experience debugging the main.py setup and webserver, and the javascript tests
21:16:00 yes, I have played with the tests quite a bit
21:16:22 I tried out selenium today with safari automation, all worked fine
21:16:29 i'm less familiar with how the gate job works - I've run and broken and fixed them locally
21:16:38 acoles: NICE!!!
21:16:48 I also put up a deliberate regression today to check the gate job https://review.opendev.org/c/openstack/swift/+/777405
21:16:51 so i guess this is the point at which we try to sell everyone else in the community on the idea of merging it
21:17:19 acoles: awww but we're still waiting on results
21:17:20 ...which makes it a little unfortunate that the only non-nvidian this week is rledisez ;-)
21:17:49 rledisez: is all about that CORS tho - and testing, rledisez loves testing
21:19:51 so rledisez, if you've got any thoughts now, i'd love to hear them; if not, we'll probably bring it up again next week (assuming we've got more people here)
21:21:42 next week it is ;-)
21:21:49 #topic shrinking
21:21:59 how are things going there?
21:22:38 IIRC main progress over last week has been mattoliverau getting shrinking candidates into recon cache
21:23:48 I've been working on estimating tombstone rows in dbs to better inform shrinking decisions - hope to have a WIP patch soon
21:24:46 what are the next major pieces of work? what all's already proposed that needs review?
21:24:53 i'm loving shrinking_candidates in recon - all about that compactible_ranges 💪
21:25:04 I'm looking forward to reviews on swift-manage-shard-ranges repair https://review.opendev.org/c/openstack/swift/+/765624
21:25:30 I think we're still trying to figure out if we can update state and keep in_progress around a little bit after the finish: https://review.opendev.org/c/openstack/swift/+/774393
21:25:40 *update state *timestamp*
21:26:25 what clay said :)
21:26:29 sounds good
21:26:36 #topic relinker
21:26:38 hmmmm
21:26:45 * acoles is nervous about timestamps
21:26:52 +1
21:27:24 acoles and mattoliverau were busy while i was out last week -- my chain's down to just one unmerged patch!
21:27:35 #link https://review.opendev.org/c/openstack/swift/+/769633
21:27:50 (relinker: Parallelize per disk)
21:27:54 relink all the things!
21:28:00 thanks for all the reviews and fix-ups!
21:28:51 i'll try to get that last one cleaned up some more, shift from parallelize=yes/no to workers=
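The workers= idea is about bounding how many disks get relinked concurrently, rather than a plain on/off switch. A hypothetical sketch of that pattern (not the code in the patch above; the device paths and the per-disk relink step are stand-ins):

```python
# Hypothetical sketch of "parallelize per disk, capped by workers=N" --
# not the actual relinker patch, just the general shape being discussed.
import os
from concurrent.futures import ProcessPoolExecutor

def relink_device(device_path):
    # stand-in for the real per-disk relink/cleanup work
    print(f'relinking {device_path}')
    return 0

def run(devices_root='/srv/node', workers=4):
    devices = [os.path.join(devices_root, d)
               for d in sorted(os.listdir(devices_root))]
    # at most `workers` disks are processed at a time; workers=1 degenerates
    # to the old serial behaviour, which is why a count beats a yes/no flag
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return max(pool.map(relink_device, devices), default=0)

if __name__ == '__main__':
    run()
```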
21:29:28 #topic debug_logger
21:29:32 #link https://review.opendev.org/c/openstack/swift/+/772092
21:29:40 clayg, i think this is your topic?
21:29:55 oh gross
21:30:23 I guess I just wanted folks to tell me what to name the module, if not `from test.unit.logging import debug_logger` or whatever the change is doing currently
21:31:33 what was it before?
21:31:39 `from test.logging import debug_logger` seemed reasonable to me, but because python2 imports are weird I had to `from __future__ import absolute_import` in `test/__init__.py` because otherwise `import logging` might do some nonsense
21:32:01 oh, debug_logger
21:32:33 acoles: I suppose on master today we have test.debug_logger - I could live with `from test.debug_logger import debug_logger` I think
21:33:03 yeah, that seems fairly reasonable to me
21:33:08 IDK, I sometimes find it confusing when we have modules with same names, but it's no big deal
21:33:44 what *is* the difference between DebugLogger and debug_logger, anyway?
21:33:47 I mean, same names, different qualified names
21:34:19 * zaitcev flies in
21:34:21 the function is like a factory
21:34:45 debug_logger gets you an adapter-wrapped DebugLogger
21:35:02 timburke: `return DebugLogAdapter(DebugLogger(), name)`
21:35:20 kk
21:35:48 i'm ok with dropping the module rename - so I think we're done
21:36:05 👍
21:36:14 clayg: thanks
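To spell out the factory relationship: tests import the lowercase debug_logger(), which builds the fake logger and wraps it in the adapter. A minimal sketch with stand-in class bodies; only the shape of the factory's return line comes from the snippet quoted above:

```python
# Sketch of the factory pattern described above. The DebugLogger and
# DebugLogAdapter bodies here are stand-ins, not Swift's real test helpers.
import logging

class DebugLogger(logging.Logger):
    """Fake logger that records every record so tests can assert on them."""
    def __init__(self):
        super().__init__(name='test')
        self.records = []

    def handle(self, record):
        self.records.append(record)

class DebugLogAdapter(logging.LoggerAdapter):
    """Adapter layer; stand-in for the real adapter wrapper."""
    def __init__(self, logger, name):
        super().__init__(logger, {'name': name})

def debug_logger(name='test'):
    # the factory: callers get an adapter-wrapped DebugLogger
    return DebugLogAdapter(DebugLogger(), name)

# usage in a test would look like:
#   logger = debug_logger()
#   logger.warning('boom')
#   assert logger.logger.records
```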
21:36:48 #topic tempauth system-level read-only role
21:37:12 #link https://review.opendev.org/c/openstack/swift/+/774539
21:37:35 zaitcev already has a +2 on it -- do we have any concerns about merging it?
21:39:01 i guess the main question is, is this a concept that we view as being generally good and useful in an auth middleware? having tempauth support it seems like a precursor to us writing func tests for it, which seems like a good idea
21:41:29 I agree with functests. Although in general functests are supposed to be possible to run against any test cluster, not just SAIO. They would work with Keystone too.
21:41:52 But it's helpful to have the support in tempauth as well.
21:42:27 One thing that I stopped to consider is that it clearly adds baggage. We have so many various knobs and configurations already.
21:42:43 But it's a good feature, right? The upside is significant... right?
21:42:57 absolutely. i'd expect it to get another user entry in /etc/swift/test.conf and we could update the DSVM job to ensure that the tests get run against both auth systems
21:43:43 another way of looking at it: if someone were writing a new auth system today, (1) would we push them to include such a role and (2) would we point to tempauth as a starting point for writing their own thing?
21:44:04 (i think my answer on both of those is probably "yes", but that might just be me)
21:45:02 They could have roles with attributes, I suppose. In keystone you get a role called "compliance", and it has no intrinsic features. Is it a reader? Is it an admin? You don't know. But in a new auth system you could.
21:45:59 Tempauth has no RBAC. It has no roles, only identities which can have attributes such as "reseller reader".
21:46:33 So, I think the answer is yes, we'd ask the authors to support it
21:46:47 Is this about the auth that Nvidia inherited from Swiftstack?
21:49:13 afaik this has nothing to do with nvidia or swiftstack - if we want a feature like this in our legacy of beta auth systems, we'd add that w/o any disruption of upstream
21:49:18 not really, i don't think. i remember talking to clayg and him having a concern like "idk -- when i'm in a dev env (which is kinda the point of tempauth) auth is never really in the way of me reading data"
21:49:54 that's just a roundabout way of arguing zaitcev's point about "baggage"
21:50:27 but it seemed like between the two of you - there's interest; so it's minor baggage for people that want to ignore it o/
21:50:46 Fundamentally, all the new role adds is safety against buggy scripts run by the audit team.
21:51:32 i could see it growing beyond that -- we could move toward a container-sync that doesn't need an internal-client, say
21:51:59 I didn't think about it.
21:52:04 Hmm.
21:52:19 Well, as far as tempauth goes we might as well land it now, I think.
21:52:31 sounds like a plan
21:52:37 #topic open discussion
21:52:38 I started in keystone because internal interests wanted Keystone.
21:52:53 last few minutes: anything else we ought to bring up this week?
21:53:05 Well, I'm glad to see Clay.
21:53:10 Online, that is.
21:53:29 I'll flag up this change because it downgrades a warning to info: https://review.opendev.org/c/openstack/swift/+/776608
21:54:03 maybe I should wait for more people being here next week, just in case anyone feels it should remain a warning
21:54:18 Two +2s is sufficient
21:54:26 but I'm excited that it fixes an intermittent probe test failure :)
21:55:40 I added a crazy graphviz tool so people could check out graph views of shard ranges from container-info and s-m-s-r info. Helps to see what's going on in a sharded container, esp when there are a lot of shards. https://review.opendev.org/c/openstack/swift/+/775066
21:56:01 No rush on that though, just wanted to put it somewhere, and somewhere where it's upstream :)
21:56:11 oh, and i need to get a release together! i keep getting sidetracked :-(
21:56:52 anyone opposed to dropping lower-constraints testing from swiftclient? https://review.opendev.org/c/openstack/python-swiftclient/+/776998
21:58:24 oh yeah. if it's getting in our way, kill it!! If they ever fix it we can add it back
21:59:26 mattoliverau: ❤️
22:00:09 i think it's more a matter of our lower-constraints needing to nix most/all of the keystone/osc requirements, and our tests needing to skip a bunch of things if those aren't available. i just felt like i wasted enough time on trying to make it work
22:00:18 all right, we're at time
22:00:28 thank you all for coming, and thank you for working on swift!
22:00:33 #endmeeting