21:00:23 <timburke> #startmeeting swift
21:00:24 <openstack> Meeting started Wed Jun 10 21:00:23 2020 UTC and is due to finish in 60 minutes.  The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:25 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:27 <openstack> The meeting name has been set to 'swift'
21:00:32 <timburke> who's here for the swift meeting?
21:00:36 <seongsoocho> o/
21:00:40 <kota_> hi
21:00:41 <mattoliverau> o/
21:01:05 <rledisez> hi o/
21:01:20 <clayg> o/
21:02:17 <timburke> as usual, agenda's at https://wiki.openstack.org/wiki/Meetings/Swift
21:02:31 <timburke> #topic PTG
21:02:42 <timburke> we all got to see each other last week!
21:03:13 <timburke> thank you all for attending, discussing, and offering feedback
21:03:53 <timburke> fwiw, there's an openstack-wide etherpad for feedback at
21:03:55 <timburke> #link https://etherpad.opendev.org/p/June2020-PTG-Feedback
21:05:21 <timburke> i hope everyone else felt it was productive; i know i got a lot of good feedback about multipart uploads and the updater, and got a better idea of what's going on with sharding
21:06:16 <timburke> i updated the priority reviews page to highlight some of the work you all are doing -- though if i missed something, don't hesitate to add it!
21:06:19 <timburke> #link https://wiki.openstack.org/wiki/Swift/PriorityReviews
21:07:43 <timburke> one thing i realized i forgot to have was a swift-community feedback session -- i'd be curious to know everyone's thoughts about how we're doing as a community, what's working and what isn't
21:08:15 <clayg> oh yeah - doh!
21:08:54 <rledisez> i would say what's working: friendly community, everybody is really helpful, everything can be talked about
21:08:54 <rledisez> what's not working: not enough manpower to do everything :)
21:09:43 <timburke> no surprise there given the trends of the last year or two ;-)
21:10:15 <kota_> rledisez: +1
21:10:44 <timburke> would it be worth me putting this as a topic for next week, to give people a chance to collect their thoughts about it more?
21:12:36 <timburke> or we can just let it be :-)
21:12:47 <timburke> #topic bionic
21:12:59 <timburke> #undo
21:13:00 <openstack> Removing item from minutes: #topic bionic
21:13:02 <timburke> #topic focal
21:13:33 <timburke> i don't think this should greatly impact us, but it looks like people are working on migrating jobs from bionic to focal
21:13:41 <timburke> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015358.html
21:14:16 <clayg> focal!
21:14:19 <clayg> i want to upgrade to focal
21:14:23 <kota_> oh
21:14:47 <clayg> sudo apt dist-upgrade
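[ed. note — `apt dist-upgrade` only upgrades packages within the current release; a bionic box would normally get to focal via the release upgrader, something like:
    sudo apt update && sudo apt dist-upgrade
    sudo do-release-upgrade
]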
21:15:20 <timburke> mostly just something to be aware of; i don't anticipate many issues
21:15:51 <clayg> timburke: you sure?  didn't vsaio have some issues with focal?
21:16:46 <timburke> ...pretty sure? hopefully? there are those eventlet-py37 bugs we know about
21:17:27 <timburke> though i don't think the threading guy is actually a problem, and none of our gate jobs use tls
21:18:29 <timburke> the other issues for vsaio mainly had to do with other stuff we stand up, like pykmip
21:18:57 <timburke> so, 🤞
21:19:02 <timburke> #topic release model changes
21:19:18 <timburke> i saw an interesting thread on the mailing list
21:19:21 <timburke> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015342.html
21:20:02 <timburke> i don't expect any potential changes to impact swift -- we basically already follow what's being proposed
21:21:50 <timburke> in short, the proposal is to get rid of rc releases and add bugfix releases as needed, with the coordinated release version for a project potentially being a .1 or .2
21:22:17 <timburke> not unlike how our train release was 2.23.1
21:22:40 <timburke> again, just something to be aware of
21:23:15 <timburke> on to updates!
21:23:25 <timburke> #topic py3 probe tests
21:23:37 <timburke> so a while back i'd proposed https://review.opendev.org/#/c/690717/
21:23:38 <patchbot> patch 690717 - swift - Add py3 probe tests on CentOS 8 - 21 patch sets
21:23:57 <timburke> and i was having issues getting libec installed so it could actually *run*
21:24:15 <clayg> oh libec
21:24:19 <timburke> but i finally got around to revisiting it, and managed to get it passing last night!
21:24:52 <timburke> though it looks like the most-recent attempt had some timeouts :-(
21:25:59 <zaitcev> Hmm. How can one have trouble with PyEClib on CentOS 8, when I ... oh... those people. Still on 1.0.2, I suspect.
21:26:09 <timburke> if anyone has some time to review it, i'd appreciate it -- this will finally bring our py3 gate jobs in line with what we've done for py2!
21:26:57 <zaitcev> I almost volunteered when I saw .yaml
21:27:06 <zaitcev> er
21:27:07 <timburke> zaitcev, i think my real trouble was that i wasn't pulling in the RDO repos -- eventually i found where we do that in tools/test-setup.sh, then everything got easier
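[ed. note — a minimal sketch of the fix described above: enable the RDO repos on CentOS 8 so liberasurecode can be installed. The exact package names are assumptions, not quoted from tools/test-setup.sh:
    sudo dnf install -y centos-release-openstack-ussuri    # pulls in the RDO repo
    sudo dnf install -y liberasurecode-devel               # now resolvable
]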
21:27:12 <zaitcev> I almost volunteered, but then I saw .yaml
21:28:09 <clayg> zaitcev: https://www.redhat.com/sysadmin/yaml-tips
21:28:38 <timburke> i'm mostly ok with the yaml and shell -- it's the bindep stuff that i'm not so sure about
21:29:07 <timburke> that's all i had for that
21:29:31 <timburke> #topic lots of small files
21:29:52 <timburke> alecuyer, rledisez how's it going?
21:30:06 <rledisez> no updates this week, been busy on other subjects. I'm not expecting alecuyer to work on it for another two weeks
21:30:42 <timburke> 👍 i'll bump it from the agenda then -- feel free to add it back when you're ready :)
21:30:51 <timburke> #topic open discussion
21:31:00 <timburke> what else would people like to talk about?
21:31:46 <clayg> swift-get-nodes -Q 🔥
21:32:02 <timburke> \o/ it's gonna be so useful
21:32:54 <clayg> I think i'm g2g on https://review.opendev.org/#/c/733919/
21:32:55 <patchbot> patch 733919 - swift - s3api: Allow CompleteMultipartUpload requests to b... - 3 patch sets
21:33:08 <clayg> turns out s3 lets you retry - so really it's just parity
21:33:49 <timburke> oh! what do people think about the SLO changes in https://review.opendev.org/#/c/733026/ ?
21:33:50 <patchbot> patch 733026 - swift - Add a new URL parameter to allow for async cleanup... - 4 patch sets
21:34:05 <timburke> is async segment cleanup a thing that seems reasonable for SLO?
21:34:28 <clayg> the patch hasn't been up a week yet - but i've already packaged it... so... if someone wants to look at it 😬
21:34:51 <clayg> oh yeah, SLO async delete - yeah I'm curious what folks think
21:35:49 <clayg> timburke: you're asking about https://review.opendev.org/#/c/733026/
21:35:49 <patchbot> patch 733026 - swift - Add a new URL parameter to allow for async cleanup... - 4 patch sets
21:36:36 <timburke> yep
21:37:03 <zaitcev> I guess I'll look. Again, I'd like to have it enabled separately.
21:37:35 <mattoliverau> yeah, been playing catch up with work this week. Will find time to review them during the week
21:38:08 <clayg> zaitcev: have... "it" enabled separately?
21:38:14 <zaitcev> But I love clusters to self-repair. That's ultimately what dark data thing should do.
21:39:32 <timburke> so the client has to opt-in to having the deletes happen async -- new query param so it's like ?multipart-manifest=delete&async=true
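[ed. note — a sketch of the opt-in as described, with placeholder endpoint and token; without the extra query parameter the delete stays synchronous:
    curl -X DELETE -H "X-Auth-Token: $TOKEN" \
      "$STORAGE_URL/v1/AUTH_test/cont/manifest?multipart-manifest=delete&async=true"
]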
21:39:43 <zaitcev> clayg, sorry, perhaps I assume too much. I thought you envisioned replicator or whoever remove old SLO segments.
21:39:55 <zaitcev> s/ old SLO/ unreferenced SLO/
21:40:03 <zaitcev> (it does not matter how old they are)
* kota_ guesses it means separating the "implement async delete on Swift slo" patch from the "make async s3 MPU delete use the slo stuff" patch.
21:40:26 <timburke> definitely *not* looking to do ref-counting :-)
21:40:34 <clayg> unreferenced SLO segments ... yeah
21:40:59 <timburke> kota_, making s3api use it was *really* easy -- i wound up just rolling it all into one patch
21:41:29 <clayg> wtg in-tree s3!
21:41:29 <kota_> timburke: yeah, it looks correct. I'm looking
21:41:50 <zaitcev> told you
21:42:00 <zaitcev> now we just need in-tree swauth :-)
21:42:08 <clayg> timburke: can we abandon https://review.opendev.org/#/c/728589/
21:42:09 <patchbot> patch 728589 - swift - s3api: Add config option to include uploadId in GE... - 3 patch sets
21:42:16 <timburke> poor swauth :-(
21:42:25 <clayg> swauth!!! 🤣
21:42:26 <timburke> clayg, yeah, i'm on board with that
21:43:09 <clayg> rledisez: I've kind of cooled again on https://review.opendev.org/#/c/571917/
21:43:10 <patchbot> patch 571917 - swift - Manage async_pendings priority per containers - 5 patch sets
21:43:28 <clayg> but i guess i might regret it next time we have a bunk container 🤪
21:44:07 <clayg> we're gunna roll out the batched reclaim next release; and then try to get more aggressive with automated container sharding
21:44:17 <rledisez> clayg: The way to go is probably auto-sharding, it would fix most cases I guess (except weird network issues & stuff, but I guess it's up to the operator to fix the network)
21:45:18 <timburke> clayg, i think we may have talked about it after you had to leave one day -- that seemed to be the general consensus. get batched reclaim in prod, get an updater-lag metric like in p 715580, then see how our clusters are doing
21:45:18 <patchbot> https://review.opendev.org/#/c/715580/ - swift - obj-updater: add metric on lag of containers listing - 1 patch set
21:45:44 <clayg> ok, i'll try to look at that one again
21:48:25 <timburke> all right, seems like we're winding down
21:48:54 <timburke> thanks again for coming last week! thank you all for making swift great!
21:49:06 <timburke> #endmeeting