21:00:21 #startmeeting swift
21:00:22 Meeting started Wed Aug 14 21:00:21 2019 UTC and is due to finish in 60 minutes. The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:23 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:25 The meeting name has been set to 'swift'
21:00:47 who's here for the swift meeting?
21:00:54 o/
21:00:55 o/
21:00:59 hello
21:02:01 hi o/
21:02:24 agenda's at https://wiki.openstack.org/wiki/Meetings/Swift
21:02:38 #topic swauth
21:03:22 onovy recently sent an email to the mailing list saying that he isn't able to continue maintaining swauth
21:03:25 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008416.html
21:04:22 i guess my question is, do any of us have a need to keep swauth working? do any of us want to pick up maintainership of it?
21:05:17 fwiw, i took a stab at addressing his first two concerns
21:05:20 #link https://review.opendev.org/#/c/676265/
21:05:50 p 676265
21:05:50 https://review.opendev.org/#/c/676265/ - x/swauth - Add py3 support - 1 patch set
21:06:00 i don't know of any swauth users at my company or related NTT groups for now.
21:06:13 To be honest I haven't really looked at swauth.
21:06:56 but having other auths out there isn't a bad thing, especially if it's one that enables Swift to run standalone (ie without the rest of OpenStack).
21:07:44 cool. that was kinda my assumption; i think swiftstack's in a similar boat -- not really using it or thinking much about it
21:07:44 But no, Suse doesn't use it, and I never have myself.
21:08:48 though mattoliverau's point is certainly valid: i definitely feel like there's some benefit to having a few different auth options
21:09:20 anyway, on to updates!
21:09:39 #topic static symlinks and versioning
21:10:02 looks like the static link patch is coming along nicely
21:10:05 #link https://review.opendev.org/#/c/633094/
21:10:06 patch 633094 - swift - Allow "static symlinks" - 27 patch sets
21:10:44 Yeah, I like the new static and dynamic naming in the doc, it really clears it up
21:10:53 and the cleanup pre-req already landed
21:10:57 #link https://review.opendev.org/#/c/675451/
21:10:58 patch 675451 - swift - Consolidate Container-Update-Override headers (MERGED) - 2 patch sets
21:11:51 i think clayg is currently working on rebasing the (swift) versioning patch on top of those
21:11:55 #link https://review.opendev.org/#/c/673682/
21:11:56 patch 673682 - swift - s3api: Implement versioning status API - 1 patch set
21:12:10 cool
21:12:13 er, wrong one
21:12:17 #link https://review.opendev.org/#/c/633857/
21:12:17 patch 633857 - swift - symlink-backed versioned_writes - 10 patch sets
21:12:23 lol
21:13:00 I'm midway through another review of the hard symlink patch (since the doc change), and will hopefully finish it today. Though I know clayg has been working on it since.
21:13:13 thanks mattoliverau!
21:13:46 hopefully we'll be able to land the static link patch this week and have a few reviews on the versioning patch
21:14:00 +1
21:14:18 #topic py3
21:15:12 zaitcev's been steadily working through reviewing; we should have a couple more func test modules landed soon
21:15:17 #link https://review.opendev.org/#/c/674716/
21:15:18 patch 674716 - swift - py3: mostly port s3 func tests - 5 patch sets
21:15:25 #link https://review.opendev.org/#/c/675710/
21:15:26 patch 675710 - swift - py3: port test/functional/test_versioned_writes.py - 3 patch sets
21:15:40 nice
21:17:11 the last bit of the s3api tests was kinda funny -- there was an import-ordering issue that caused boto3 to get into an infinite recursion because of eventlet monkey-patching
21:17:15 #link https://review.opendev.org/#/c/675227/
21:17:16 patch 675227 - swift - py3: Finish porting s3 func tests - 3 patch sets
21:17:51 but i think now we have patches proposed for all func tests
21:18:36 and i'm currently working on adding tests for the sharded-listings patch
21:18:39 #link https://review.opendev.org/#/c/671167/
21:18:39 patch 671167 - swift - py3: fix up listings on sharded containers - 1 patch set
21:19:45 that's about it for py3 -- i'm really excited that we're getting so close, though!
21:19:46 oh wow, fun times
21:20:09 #topic lots of small files
21:20:36 rledisez, is alecuyer back from vacation yet? ;-)
21:21:04 timburke: I hope he made it back home, cause I'm waiting for him on Monday ;)
21:21:16 hehe
21:22:47 fwiw, we here at swiftstack have been trying it out, and i get the impression there's some excitement that the standard deviation for some performance numbers has come down
21:23:36 good to know. any numbers you can share?
21:23:41 cool, I should try and set up an env one of these days.
21:23:47 but there was some issue that came up where we'd get tracebacks making frags durable?
21:24:07 let me dig around a moment...
21:24:40 http://paste.openstack.org/show/757015/
21:25:31 curious
21:25:42 i'm checking if we have this error in our cluster, but I never saw it
21:26:21 is it on feature/losf HEAD?
21:26:27 yeah, looks like it is missing some files in the kv db, we are still trying to understand it better
21:26:31 kota_: yeah
21:26:50 perhaps it's my mis-rebasing onto the current master
21:27:09 in the history, to track the master.
21:27:34 timburke: nothing close to that in our production
21:27:46 basically, the change around .durable should redirect to kvfile itself or index server.
21:27:58 no .durable
21:28:01 #d
21:28:16 i might be older guy :/
21:28:27 some performance stats: http://paste.openstack.org/show/757016/
21:29:34 kota_: i'm not sure I understand your last comment
21:29:35 i think those were with some s3-benchmarking tool, somewhere around a concurrency of 50; i forget how beefy the cluster is, though
21:30:49 tdasilva: ah, i remembered we used the .durable file for fragment durability but it's the old-style fragment durable expression.
21:30:58 reading the code, the DiskFileError is not the original exception. if you get the original one we could dig more
21:32:21 but alecuyer would probably be more efficient than me for that
21:32:54 rledisez: I think the original exception might be this: [Errno 2] No such file or directory: 0dbdb93b1ef1360902c56454414f0d221565777702.34279#3.data ?
21:33:07 anyhow...we can probably take this to #openstack-swift
21:33:12 sounds good.
i think we can probably dig for a bit, at least until he gets back
21:33:27 #topic sharding
21:33:55 i saw mattoliverau proposed https://review.opendev.org/#/c/675820/ ! thank you!
21:33:56 patch 675820 - swift - sharder: Keep cleaving on empty shard ranges - 1 patch set
21:34:08 i ought to go review it, since i wrote up the bug ;-)
21:34:15 I haven't done much on the autosharding, but yeah I did play with the empty shards sharding bug :)
21:34:19 yeah that
21:35:24 it pretty much just doesn't count empty shards when calculating the shard batch size.
21:35:37 👍
21:35:48 because if the shards are empty then they become a noop
21:36:00 nice
21:36:10 #topic open discussion
21:36:23 anything else we should bring up?
21:37:09 today's the last day for early-bird pricing for the summit
21:37:14 #link https://www.openstack.org/summit/shanghai-2019/
21:37:35 oh yeah.. time to go nag work some more then :(
21:38:40 i was about to ask whether you thought you'd be able to make it or not :-)
21:39:56 Yeah, I hope so. Work is waiting to get the budget for it sorted or something.. sigh, always seem to be waiting until the last minute... which tends to also make things more expensive.
21:40:21 yeah -- i hope we'll see you there!
21:41:01 not the end of the world if it doesn't work out though. at least we'll have better overlap with your tz ;-)
21:41:44 that's true. But summits closer to my TZ are nice, less jetlag! :)
21:42:04 all right, i'm gonna call it, let mattoliverau and kota_ get breakfast :-)
21:42:15 \o/
21:42:17 thank you all for coming, and thank you for working on swift!
21:42:23 * mattoliverau is hungry
21:42:25 #endmeeting
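
The import-ordering anecdote in the py3 topic (around 21:17:11) is an instance of the usual eventlet pitfall: the relative order of eventlet.monkey_patch() and imports of network-using libraries matters a great deal. The snippet below is only a generic illustration of the conventional "patch first, import afterwards" pattern, not the actual change made in patch 675227; the make_client helper is made up for the example.

    # Generic illustration of eventlet import-order sensitivity; NOT the
    # actual fix from patch 675227. The usual convention is to monkey-patch
    # the stdlib before importing anything that touches sockets/ssl, so that
    # later imports see one consistent, already-patched stdlib rather than a
    # mix of patched and unpatched modules.
    import eventlet
    eventlet.monkey_patch()  # patches socket, ssl, threading, time, ...

    import boto3  # imported only after patching

    def make_client(endpoint_url, access_key, secret_key):
        # roughly how an S3 client for functional testing might be built
        return boto3.client(
            's3',
            endpoint_url=endpoint_url,
            aws_access_key_id=access_key,
            aws_secret_access_key=secret_key,
        )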
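
For the sharding topic (around 21:35:24), a hypothetical sketch of the idea as described in the log: when deciding how many shard ranges to cleave in one pass, empty ranges do not count toward the batch limit, because cleaving them is effectively a no-op. The function name and the dict-based shard-range representation below are invented for illustration; this is not the sharder code from patch 675820.

    # Hypothetical illustration of "don't count empty shard ranges against
    # the cleave batch size"; not the actual code from patch 675820.
    def pick_cleave_batch(shard_ranges, batch_size):
        """Return the shard ranges to cleave in this pass.

        Empty ranges are carried along (cleaving them is cheap), but only
        ranges that actually contain objects count toward batch_size.
        """
        batch = []
        non_empty = 0
        for shard_range in shard_ranges:
            batch.append(shard_range)
            if shard_range.get('object_count', 0) > 0:
                non_empty += 1
                if non_empty >= batch_size:
                    break
        return batch

    # Example: with batch_size=2, the empty ranges don't eat the budget.
    ranges = [
        {'name': 'a-c', 'object_count': 0},
        {'name': 'c-f', 'object_count': 120},
        {'name': 'f-k', 'object_count': 0},
        {'name': 'k-p', 'object_count': 80},
        {'name': 'p-',  'object_count': 95},
    ]
    print([r['name'] for r in pick_cleave_batch(ranges, batch_size=2)])
    # -> ['a-c', 'c-f', 'f-k', 'k-p']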