21:00:27 #startmeeting swift
21:00:29 Meeting started Wed Aug 22 21:00:27 2018 UTC and is due to finish in 60 minutes. The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:30 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:32 The meeting name has been set to 'swift'
21:00:35 who's here for the swift team meeting?
21:00:43 o/
21:00:46 morning
21:01:19 hi o/
21:01:25 welcome
21:01:32 o/
21:01:38 rledisez!
21:02:52 hello, everyone
21:03:39 good to see you. I was out all last week, and I've been trying to catch up most of this week so far
21:04:02 seeing as I *just* updated the agenda for this week's meeting, I think it will be pretty quick :)
21:04:06 #link https://wiki.openstack.org/wiki/Meetings/Swift
21:04:17 the big news is that swift 2.19.0 has been released!
21:04:31 yey
21:04:37 wooo!
21:04:43 \o/
21:04:48 \o/
21:04:52 best swift yet
21:04:52 hi
21:05:00 this is what goes into the openstack rocky release, which is set to be released this week or next (Real Soon Now)
21:05:53 thanks for all the reviews and code work everyone did last week to get this release finished
21:06:35 I haven't really gone through the priority review page in great detail yet
21:06:36 #link https://wiki.openstack.org/wiki/Swift/PriorityReviews
21:07:10 but things to keep in mind: we *need* to get the PUT+POST patch landed. https://review.openstack.org/#/c/427911/
21:07:11 patch 427911 - swift - Replace MIME with PUT+POST for EC and Encryption - 40h 40m 53s spent in CI
21:07:20 if you can help with that, it's appreciated
21:07:51 the patch authors on https://review.openstack.org/#/c/507808 have been very patient, as well, and it's a useful feature
21:07:52 patch 507808 - swift - Add ability to undelete an account. - 32h 4m 31s spent in CI
21:08:20 and the patches that help rledisez's work on LOSF will be good to get in
21:08:40 so those things, in that order, are my opinion as ptl on important things to get done
21:09:23 PUT+POST is waiting on Pete to address reviews, I think
21:09:40 however, FWIW, swiftstack's priorities are around getting the remainder of the s3api patches landed (and a few other non-swift things)
21:09:51 mattoliverau: good to know
21:10:14 I wonder if we need to be a little more cavalier about pushing updates over each other so that we can make more rapid progress
21:10:17 e.g. '-' in etag for multipart?
21:10:26 kota_: yes
21:10:28 kota_: yeah, among others
21:10:53 KMS keymaster multi-key patch would be good, just while it's still loaded in people's heads
21:11:20 mattoliverau: it's on my list :-) just gotta remember how to spin up barbican
21:11:25 mattoliverau: I agree completely, but it's not something I know of anyone using, much less needing a new feature for :-)
21:11:48 kota_: if you see my name by a patch on https://review.openstack.org/#/dashboard/?S3+API=status:open+file:%255Eswift/common/middleware/s3api/.*+project:openstack/swift -- i want it :-)
21:12:00 for context on the swiftstack s3 stuff, most of these are patches we were carrying on swift3, and we need these landed on s3api so we can move customers there. once that happens, we'll be able to implement some new s3 compat (eg s3 versioning)
21:12:50 timburke: alright, is p 575860 ready for reviews?
21:12:51 https://review.openstack.org/#/c/575860/ - swift - s3api: Include '-' in multipart ETags - 15h 15m 57s spent in CI
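
For readers skimming the log: AWS S3 reports the ETag of a completed multipart upload as the MD5 of the concatenated binary MD5 digests of the parts, suffixed with "-<part count>", rather than the MD5 of the whole object; the patch title suggests s3api adopting that convention. A minimal sketch of the computation (the multipart_etag helper is hypothetical, for illustration only, not code from the patch):

    import hashlib

    def multipart_etag(parts):
        """Compute an S3-style multipart ETag from an iterable of part payloads (bytes)."""
        part_digests = [hashlib.md5(part).digest() for part in parts]
        combined = hashlib.md5(b''.join(part_digests)).hexdigest()
        return '%s-%d' % (combined, len(part_digests))

    # A two-part upload gets an ETag like "<md5hex>-2"; the trailing "-N" is
    # what lets clients tell a multipart ETag apart from a plain MD5.
    print(multipart_etag([b'part one data', b'part two data']))
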
21:12:52 kota_: mattoliverau: rledisez: anything to share from your respective employers about priorities?
21:12:53 timburke: there's a vagrant env you can spin up, thanks to tdasilva and Mathias
21:13:05 mattoliverau: once i get barbican loaded into my head again, i might look at a follow-up to https://review.openstack.org/#/c/589270 to have barbican+encryption in a dsvm job
21:13:06 patch 589270 - swift - Move legacy-swift-dsvm-functional job in-tree - 12h 59m 41s spent in CI
21:13:59 kota_: yeah, should be trimmed down to just the S3 Multipart Upload PUT path now iirc
21:14:18 Umm, not really, py3 is probably still very important as we want a py3-only release.
21:14:49 notmyname: on our side, LOSF, reducing the effects of eventual consistency (customers complain a lot about that), and replication/rebalancing
21:15:08 I don't have so many priority things for now, but from a personal perspective, I'd like to get rledisez's p 447129 in as soon as possible.
21:15:10 https://review.openstack.org/#/c/447129/ - swift - Configure diskfile per storage policy - 26h 33m 33s spent in CI
21:15:13 mattoliverau: yup. there's always py3
21:15:43 timburke: just for reference: https://github.com/thiagodasilva/barbican-swift this worked at some point in the past
21:15:44 rledisez: good to know. please talk as much as possible upstream about those things. we all have the same problems :-)
21:15:53 it's nice to have.
21:16:06 rledisez: when you say "reducing the effect", what does that entail? mostly just trying to reduce object replication/reconstruction cycle times? or updates for container listings? or...?
21:18:40 timburke: actually both. I worked on reducing the delay for container listing updates in a special case that was harming us, but also on moving data back from handoffs to primary nodes as fast as possible
21:19:05 oh, speaking of moving data from handoffs more quickly...
21:19:32 we've had several customers move to the new workers/concurrency settings, and their replication cycle times went *way* down
21:19:41 mattoliverau: i know i mentioned this in a meeting a bit ago, but i forget if you were there for it -- i've got some stuff not in gerrit at https://github.com/tipabu/swift/tree/moar-py3 that's pretty promising... but i haven't had cycles to work on it in a bit
21:20:05 iirc we want something like https://github.com/python/cpython/pull/7932 upstream, because our func tests are insane
21:21:02 especially when deployers have a lot of drives in a single storage node, moving to more workers and less concurrency really helped. eg the old setting was effectively 1 worker with 90 concurrency, the new setting is 45 concurrency with 2 workers
21:21:09 timburke: cool, no I didn't know
21:22:16 sorry, swap that. 45 workers with a concurrency of 2
21:22:25 notmyname: good to know. I think we used to do that with the reconstructor (1 worker per disk, 1 concurrency per worker)
21:22:47 mattoliverau: unfortunately it's a bit of a mishmash -- i'd hack away until i got some new set of (func) tests passing, then commit as a kind of high-water mark. needs to get pulled apart into a reviewable chain
21:23:02 rledisez: interestingly, just doing 1 worker per drive had some problems in a 90-drive box. some kernel hangups or something. doing 45/2 was much better than 90/1
21:23:24 it's not what we would have expected, and it needs more investigation. but it's an interesting data point
21:24:06 notmyname: sure, the linux kernel is "sensitive" sometimes :)
21:24:11 heh
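
For context on the 45/2 setup described above: recent Swift releases let the object replicator fork multiple worker processes, each taking a subset of the disks, with per-worker job concurrency controlled separately. A rough object-server.conf sketch of that layout, assuming the option names replicator_workers and concurrency (check the sample config shipped with your Swift version):

    [object-replicator]
    # split replication across 45 worker processes...
    replicator_workers = 45
    # ...each running only 2 replication jobs at a time, instead of the old
    # single process with concurrency = 90
    concurrency = 2

The reconstructor has an analogous reconstructor_workers option, which is presumably what the "we used to do that with the reconstructor" remark refers to.
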
21:24:34 the PTG is less than a month away
21:24:36 #link https://etherpad.openstack.org/p/swift-ptg-planning-denver-2018
21:25:05 there's the planning etherpad. I seeded it with some topics, but it needs some more input (eg all the stuff we've just briefly mentioned in this meeting)
21:25:16 so please add stuff that you want to talk about
21:26:32 and add your name at the top if you're attending
21:27:15 Wish I was :( I'll add stuff I want y'all to talk about ;p
21:27:27 thanks mattoliverau. and you'll be missed
21:27:33 what else do we need to discuss this week in this meeting?
21:28:55 nothing from my side
21:29:10 ok
21:29:36 thanks, everyone, for your work on swift
21:29:46 be proud of today's release :-)
21:29:53 thanks for coming today
21:29:55 +1
21:29:58 #endmeeting