21:01:18 <timburke> #startmeeting swift
21:01:19 <openstack> Meeting started Wed Sep 18 21:01:18 2019 UTC and is due to finish in 60 minutes.  The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:22 <openstack> The meeting name has been set to 'swift'
21:01:27 <timburke> who's here for the swift meeting?
21:01:31 <mattoliverau> o/
21:02:13 <kota_> o/
21:02:31 <rledisez> hi o/
21:03:25 <timburke> sounds like tdasilva and clayg are going to be a little late, but i think we could get started
21:03:32 <timburke> agenda's at https://wiki.openstack.org/wiki/Meetings/Swift
21:04:11 <timburke> quick reminder that we want to start collecting topics at https://etherpad.openstack.org/p/swift-ptg-shanghai
21:04:52 <kota_> sure
21:05:23 <timburke> i'm as guilty as anyone of needing to put things on there. i feel like i've been running around without a chance to stop, think, and organize. i'll make sure i get to that this week
21:05:37 <rledisez> timburke: I can help for Running Swift Cluster as you suggested. I'm just not sure what's expected (how long? slides needed?)
21:06:03 <timburke> rledisez, i don't know either! i've never done this before ;-)
21:06:17 <rledisez> lol, let's improvise then
21:06:24 <kota_> :-)
21:06:31 <kota_> need help?
21:06:36 <timburke> i'm also not sure how much interest there will be. yeah, improvisation will probably work best
21:07:15 <tdasilva> Hello!
21:07:53 <timburke> kota_, maybe? i'd had this thought that i might encourage newcomers to express what they're curious about with regard to swift and then have something of a lecture prepared, but... idk
21:08:02 <kota_> tdasilva: o/
21:08:43 <timburke> i think that would still be valuable, but given the lack of newcomers expressing an interest... i'm not sure that it's the best use of time
21:09:01 <timburke> we'll see what happens
21:09:17 <timburke> anyway, on to updates!
21:09:24 <timburke> #topic py3
21:09:50 <timburke> i didn't get around to making a probe test job running under py3, sorry
21:10:02 <timburke> (part of it was being out of town the last couple days)
21:10:24 <kota_> no need to say sorry ;-)
21:11:10 <timburke> but we've got a new py3 bug! https://bugs.launchpad.net/swift/+bug/1844368
21:11:10 <openstack> Launchpad bug 1844368 in OpenStack Object Storage (swift) "fallocate_reserve cannot be specified as a percentage running under python3" [High,Confirmed]
21:11:55 <timburke> looks to be due to some changes in config parser -- sounds like tdasilva did a great job confirming the issue
21:12:32 <timburke> i'd like to have it fixed for our train release (which is fast approaching!) but if needed, we can backport the fix
21:12:40 <kota_> hah, ConfigParser
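(Side note on bug 1844368: a minimal sketch of the ConfigParser behavior change that can trip up percentage values under Python 3, assuming the default interpolating parser is in use; the section name below is only illustrative, and this may not be the exact root cause in Swift's config handling. Python 2's ConfigParser only treated "%(" specially, so a value like "1%" passed through, while Python 3's default BasicInterpolation rejects a bare "%".)

    import configparser  # Python 3

    conf_text = "[object-server]\nfallocate_reserve = 1%\n"

    cfg = configparser.ConfigParser()  # default BasicInterpolation
    cfg.read_string(conf_text)
    try:
        # interpolation happens at get() time; a bare '%' is rejected
        cfg.get("object-server", "fallocate_reserve")
    except configparser.InterpolationSyntaxError as err:
        print("percentage value rejected: %s" % err)

    raw = configparser.RawConfigParser()  # no interpolation at all
    raw.read_string(conf_text)
    print(raw.get("object-server", "fallocate_reserve"))  # prints '1%'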
21:13:42 <timburke> #topic lots of small files
21:14:21 <rledisez> as alecuyer is not here i'll try to summarize what happened recently
21:14:40 <rledisez> rpc as http is merged in the feature branch
21:14:57 <kota_> good
21:15:08 <rledisez> we tried to deploy 2.22 + losf (with rpc http) + hashes.pkl support to production yesterday
21:15:27 <rledisez> hashes.pkl to save some CPU
21:15:39 <rledisez> we had to rollback, it seems hashes.pkl support introduced some random hangs in REPLICATE
21:15:44 <rledisez> alecuyer is digging into it
21:16:17 <timburke> good to know
21:16:29 <rledisez> next step for performance would probably be to cache the list of partitions, because it seems very CPU intensive to list the partitions
21:16:58 <mattoliverau> lol, you guys are cowboys.. just deploy to production.. nope doesn't work :P
21:17:24 <rledisez> mattoliverau: i didn't tell the whole story. just one server at first. we go to prod in steps of 1/10/100/1000
21:17:32 <rledisez> not so crazy cowboys :D
21:17:42 <mattoliverau> ahh, yeah makes much more sense :)
21:17:51 <kota_> that means it would be compatible with the grpc version??
21:18:14 <rledisez> kota_: what do you mean?
21:18:19 <timburke> i think that part's internal to a particular object server, no?
21:18:50 <timburke> from the proxy's perspective, the RPC mechanism shouldn't really matter, right?
21:18:52 <kota_> change one server from grpc to http, right?
21:18:55 <rledisez> yes, the protocol between the object-server (python) and the RPC server (golang). it was gRPC, but due to eventlet it's now HTTP
21:19:19 <kota_> oic
21:19:24 <rledisez> so, it's totally transparent to anything other than the losf diskfile
21:19:26 <timburke> (and similarly between two different object servers)
21:19:36 <kota_> yeah, and the index-server runs on each node.
21:19:44 <rledisez> kota_: exactly
21:19:54 <kota_> make sense.
21:21:29 <timburke> i know we here at swiftstack did some performance testing, and darrell talked with alecuyer a bit about what we'd found... unfortunately i didn't keep a close eye on that conversation, but i think we weren't seeing *huge* improvements...
21:22:53 <timburke> rledisez, i think darrell was talking with you; do you remember what the outcome on that was? if there was any idea on what our bottlenecks or bad configuration might be?
21:24:15 <rledisez> hmm, I can't exactly remember. maybe darrell could start an etherpad with what you found, and we'll check it and confirm, explain, or answer
21:24:26 <timburke> sounds good
21:24:40 <clayg> did i miss anything exciting?
21:24:45 <timburke> i also remember alecuyer mentioned he had some patches that needed proposing... do you think any of that might be because of the code delta between what's on the feature branch vs what you're actually running?
21:24:57 <timburke> clayg, don't worry, i moved the versioning stuff down ;-)
21:25:28 <rledisez> timburke: yes, we are (sadly) some patches ahead because we patch our prod first, but I think we are really close to the feature branch
21:25:52 <rledisez> and the goal is to run the latest stable + feature branch on our cluster
21:26:00 <timburke> rledisez, that makes perfect sense -- take care of your herd ;-)
21:26:37 <timburke> but i really *do* want to be better about getting patches landed on the feature branch -- if you're *running it in prod*, that kinda sounds like a +2 to me ;-)
21:28:17 <timburke> #topic sharding
21:28:29 <timburke> mattoliverau, i'm so sorry, i really need to look at your patches!
21:28:43 <mattoliverau> no stress, we have lots going on :)
21:29:03 <mattoliverau> I've pushed up https://review.opendev.org/#/c/681970/
21:29:25 <mattoliverau> which I hope addresses the cleaning up cleave context bug
21:29:30 <timburke> remind me, do you have anyone running with sharded containers? or are you doing this just because sharding's kinda your baby?
21:30:35 <timburke> i love that sense of ownership, but i also don't want you getting burnt out on it ;-)
21:30:44 <mattoliverau> our swift deployment tools (with my SUSE hat on) don't support sharding, in fact they're a little behind.
21:31:16 <mattoliverau> It's something I want to fix. But SUSE doesn't make swift a priority other than giving me some time on it upstream
21:31:54 <timburke> are there other things that Suse would like to see in swift? bugs fixed, features added?
21:32:10 <mattoliverau> So one day. In essence, I just know the code really well and feel responsible for bugs ;)
21:32:32 <clayg> mattoliverau: that's super interesting!  I didn't realize the tons of headers bug was about not reaping old contexts?!
21:32:41 <timburke> what can i do to get you something you can show your boss to say, "hey, swift's valuable for us"? :-)
21:33:31 <mattoliverau> yeah, I wish I knew. SUSE just "supports" it as a part of OpenStack. They really go out and sell SES to customers with large storage issues.
21:33:57 <timburke> might also be outside the scope of this meeting, but i wanted to mention it ;)
21:34:14 <mattoliverau> I've been vocal about certain use cases as they come up internally, and always put in how cool Swift would be for that.. so I'm working on the inside :)
21:35:00 <timburke> 👍 thanks for that
21:35:03 <mattoliverau> Hopefully I can push the internal deployment tools forward. Just waiting for the customer who wants something big that Ceph can't handle ;P
21:36:11 <timburke> anyway. i'll try to get those patches reviewed. thanks for all your hard work!
21:36:23 <timburke> #topic versioning
21:36:33 <mattoliverau> Anyway. Got some sharding patches up for bugs. Haven't worked on autosharding much. I should play with that some more. comments on the current implementation welcome.
21:36:57 <mattoliverau> onto versioning!
21:37:02 <timburke> clayg, tdasilva, take it away ;-)
21:37:58 <clayg> so after a false start trying to expand versioned_writes to work with s3api aws object versioning feature...
21:38:11 <clayg> we decided to expand versioned_writes to work with s3api aws object versioning feature!
21:38:44 <clayg> only this time with a for-realzy new swift object versioning api that works a lot more like aws s3 object versioning
21:39:16 <clayg> tdasilva: is working on that - and I'm sure lots of docs and func tests will be coming to flesh that out and describe how we think it should work so we can start to get some feedback
21:39:48 <mattoliverau> so this is the WIP patch?
21:39:54 <mattoliverau> #link https://review.opendev.org/#/c/682382
21:40:11 <timburke> https://review.opendev.org/#/c/682382
21:40:12 <patchbot> patch 682382 - swift - WIP: New Object Versioning mode - 2 patch sets
21:40:14 <clayg> there's a couple of sticking points on the implementation side where the lack of some system/storage-level features might make for less than ideal strategies; so we're experimenting with some different things
21:40:16 <timburke> better :-)
21:40:53 <tdasilva> mattoliverau: correct
21:41:22 <clayg> so there's lots to talk about there... and looking down the road, maybe some discussion about different things we might do with s3api while that work is ongoing
21:43:16 <clayg> probably worth noting that symlink-based versioning is still a good idea - but there are some other things in the legacy versioned writes mode(s) that were problematic, and it seems better for clients/consumers to just offer a new shiny
21:44:32 <clayg> we've been throwing around the analogy of DLOs vs. SLOs - but with the idea that maybe if the alternative implementation is really significantly better (and easier for s3 consumers to adopt) we might eventually "sunset" legacy mode
21:44:59 <clayg> but really we're mostly focused on getting an object versioning implementation in swift that is amazing and we can build on for the next decade
21:45:14 <timburke> fwiw, i'm starting to collect some of the design discussion currently happening internally at https://etherpad.openstack.org/p/swift-object-versioning
21:45:24 <mattoliverau> oh nice
21:45:50 <timburke> because it'll be so much better if mattoliverau and kota_ and rledisez can see what we're thinking and offer feedback on it :-)
21:45:59 <clayg> timburke: thanks for getting that up - we can definitely help flesh that out as we go
21:45:59 <mattoliverau> clayg: +1 re: amazing versioning is a good focus
21:46:38 <kota_> oh, what, sorry, I was looking at another thing.
21:46:52 <mattoliverau> I'll take a look at the patch today too to get an idea of where things are at.
21:47:06 <timburke> kota_, no worries :-) we just always appreciate your insights
21:47:13 <kota_> good to know for the versioning
21:47:26 <mattoliverau> #link https://etherpad.openstack.org/p/swift-object-versioning
21:47:37 <mattoliverau> just linking it so it's easy to find in the minutes later
21:47:38 <clayg> mattoliverau: we already got at least one new primitive (static links) out of the deal - I'm hoping we get at least one more before we're through
21:47:39 <tdasilva> mattoliverau: sorry for the lack of docs for the moment
21:47:40 * kota_ is in U.S. timezone actually to join another meeting
21:47:44 <mattoliverau> i.e. when I lose the link :P
21:48:22 <clayg> I'm sure we could sketch out some stuff on the etherpad that eventually ends up looking like some documentation
21:48:22 <mattoliverau> no stress guys. y'all doing awesome :)
21:49:20 <kota_> there are some good items to consider for improvements in the etherpad
21:49:22 <clayg> timburke: I think that's all we got this week - but I'll commit to helping work on that etherpad
21:49:37 <timburke> sounds good
21:50:10 <timburke> oh, one more last minute topic!
21:50:15 <timburke> #topic train release
21:50:23 <clayg> CHOO CHOO!
21:50:36 <timburke> in two weeks or so, i want to be tagging a 2.23.0 for train
21:50:49 <mattoliverau> we should call it 'train departure' :P
21:50:56 <clayg> ship     it, ship   it, ship it, shipit, shipitshipitshipit CHOOO CHOOOO
21:51:09 <mattoliverau> rofl
21:51:29 <mattoliverau> timburke: so maybe a new priority review section
21:51:39 <timburke> please add items to the priority reviews page as you see fit -- i'll make sure i add a release section at the top (and in general clean it up)
21:51:48 <timburke> mattoliverau, yeah, that :-)
21:51:57 <timburke> #topic open discussion
21:52:11 <timburke> anything else we ought to talk about?
21:53:29 <clayg> shanghai?
21:53:46 <timburke> can't wait!
21:53:55 <clayg> is there a who's coming list already?
21:54:13 <kota_> <- whole week
21:54:24 <timburke> still need to get my travel sorted, though... and make a state of swift talk
21:54:25 <clayg> kota_: AWESOME!!!
21:54:37 <clayg> i'm just trying to learn how to visa
21:54:40 <rledisez> for the record, alex and I have our PTG tickets and plane tickets. we are now trying to get visas (which is a bit complex for French people). we hope to get them in time
21:55:08 <kota_> hehe, I don't need a visa for shanghai
21:55:22 <timburke> clayg, according to https://etherpad.openstack.org/p/swift-ptg-shanghai, it's the five of us: you, me, kota_, rledisez, and alecuyer
21:55:48 <clayg> #dreamteam
21:55:57 <clayg> mattoliverau: ??? 😢
21:56:13 <mattoliverau> I never got the OK, but others here at SUSE are applying for visas, so I assume it's not happening. I should go confirm with my manager (who's been away and then away at training)
21:56:16 <mattoliverau> :(
21:56:20 <timburke> :-(
21:56:44 <mattoliverau> annoying because it's _very_ close to my timezone.
21:56:48 <kota_> :(
21:57:00 <clayg> ahahha!  true!  maybe one of the easier plane rides for you for a change!
21:57:01 <mattoliverau> so I would have thought I'd be cheaper to send too.
21:57:04 <clayg> hahahah
21:57:09 <mattoliverau> lol
21:57:12 <timburke> if nothing else, it'll be nice to have more overlap with you mattoliverau :-)
21:57:43 <mattoliverau> yeah, let's see what can get through the great firewall of china.. but I hope to be there virtually :)
21:57:45 <clayg> we'll have to think about how to dial you in; if you can't make it you'll be missed - until the next one!
21:58:02 <clayg> yeah i'm *nervous* about the tech/equipment/network stuff 😬
21:58:07 <timburke> i wonder where it'll be...
21:58:16 <timburke> clayg, yeah, me too.
21:58:19 <rledisez> clayg: we got special recommendations from our SOC team…
21:58:23 <mattoliverau> probably denver :P
21:58:34 <timburke> mattoliverau, sounds right :P
21:58:58 <timburke> all right, i think i'm gonna call it
21:59:00 <kota_> denver :/
21:59:05 <clayg> "soc" is that a french acronym?
21:59:13 <timburke> thank you all for coming, and thank you for working on swift!
21:59:14 <rledisez> security operations center, i think
21:59:16 <mattoliverau> kota_: probably not. Just joking
21:59:27 <timburke> #endmeeting