21:01:02 #startmeeting swift
21:01:03 Meeting started Wed Nov 7 21:01:02 2018 UTC and is due to finish in 60 minutes. The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:06 The meeting name has been set to 'swift'
21:01:09 who's here for the swift team meeting?
21:01:11 o/
21:01:14 hello
21:01:21 hi o/
21:01:38 O/
21:02:15 welcome
21:02:32 not sure if clayg or tdasilva or zaitcev are around...
21:02:45 thanks for the ping
21:02:47 hi
21:02:55 welcome :-)
21:03:15 the openstack summit in berlin is next week. I'll be there. mattoliverau and kota_ will be there too, right?
21:03:26 right
21:03:55 I'll be there from Monday, then fly out on Friday afternoon
21:04:05 ok
21:04:05 nope, I won't be there :(
21:04:10 ah, ok
21:04:25 I'll be there on tuesday and i fly out on friday
21:04:26 too soon for the baby.
21:04:33 i understand :-)
21:04:44 I'll bet cschwede will be there :-)
21:04:44 got it
21:04:49 oh, and rledisez, right?
21:04:56 notmyname: yes, i'll be there
21:05:08 rledisez: are you bringing anyone with you?
21:05:37 yes, after me, the oldest member of the swift team at OVH
21:05:52 i mean, he's not old, you got me i think ;)
21:06:04 :-)
21:06:28 ok, logistics-wise, for this meeting...
21:06:45 next week, we should cancel because of the summit
21:07:03 the week after that is the US thanksgiving week. I'll leave it open whether we have that meeting or not
21:07:18 the week after that, i'll be on a sales trip and won't be able to host the meeting
21:07:28 and that takes us to december
21:07:35 wow
21:07:42 wow, the end of the year is coming up quick
21:08:08 I know we haven't had a lot of major stuff to discuss during this meeting, but I like having this meeting as a place where we are all online at the same time and can at least say hi and hear what's going on from others
21:08:29 +1
21:08:40 +1
21:09:16 I don't know what that means, given the logistics of the next few weeks, but that's my opinion on it :-)
21:09:58 but for today, let's talk about some of the ongoing work :-)
21:10:17 well, let's skip next week, then take it as it comes. I'm happy to run a thanksgiving one if need be, if we just want a catchup/progress meeting.
21:10:29 mattoliverau: awesome. that sounds great. thanks
21:10:30 +1
21:10:58 s3api patches look nearly done!
21:11:07 thanks mattoliverau
21:11:22 https://review.openstack.org/#/c/592231/ <-- tdasilva wants kota_ to look at this for a +A
21:11:23 patch 592231 - swift - s3api: Include '-' in S3 ETags of normal SLOs - 4 patch sets
21:11:47 https://review.openstack.org/#/c/575838/ <-- kota and tdasilva have both +1'd it, but there's a request for timburke to squash another patch
21:11:47 patch 575838 - swift - Listing of versioned objects when versioning is no... - 3 patch sets
21:12:05 and https://review.openstack.org/#/c/575818/ is the only one without any reviews
21:12:05 patch 575818 - swift - Support long-running multipart uploads - 6 patch sets
21:12:12 It's on my radar, I'll try it this week (or in Berlin)
21:12:23 speaking about p 592231
21:12:23 https://review.openstack.org/#/c/592231/ - swift - s3api: Include '-' in S3 ETags of normal SLOs - 4 patch sets
21:12:25 if timburke is ok with it i'd be happy to send a new patchset for 575838
21:12:37 yeah, i
21:12:38 kota_: thanks
21:12:51 've been meaning to squash that in and fix up the commit message...
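
(Background on the '-' in those s3api ETags: S3 marks multipart uploads with an ETag of the form "<md5 of the parts' md5 digests>-<part count>", so clients can tell it is not a plain MD5 of the whole body. Below is a rough sketch of that convention only; the helper name is made up for illustration and this is not swift's s3api code.)

    import hashlib

    def multipart_style_etag(part_md5_hexdigests):
        # S3's multipart convention: MD5 over the concatenation of each
        # part's binary MD5 digest, then append "-<number of parts>".
        combined = hashlib.md5()
        for hexdigest in part_md5_hexdigests:
            combined.update(bytes.fromhex(hexdigest))
        return '%s-%d' % (combined.hexdigest(), len(part_md5_hexdigests))

    # e.g. a two-part upload; the trailing "-2" is the marker that
    # patch 592231 wants normal SLOs to carry as well
    parts = [hashlib.md5(b'part one').hexdigest(),
             hashlib.md5(b'part two').hexdigest()]
    print(multipart_style_etag(parts))
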
21:13:10 when these s3api patches land, I want to cut a swift release (2.20)
21:13:12 timburke: ah ok, you got it then!
21:13:51 aww yeah, best swift evar
21:13:58 notmyname: are we also waiting on p 575818?
21:13:59 https://review.openstack.org/#/c/575818/ - swift - Support long-running multipart uploads - 6 patch sets
21:14:00 clayg: you know it!
21:14:23 kota_: you might also be a good person to review https://review.openstack.org/#/c/613452/ -- i want to try to pull the auth bits out of S3Request...
21:14:23 patch 613452 - swift - s3api: Move authenticator logic to separate module - 4 patch sets
21:14:25 tdasilva: yeah, I'd prefer all of the listed s3api patches to be in for a release
21:14:39 ack
21:15:16 timburke: whew, it looks like a bit of a large refactoring...
21:16:39 yeah... there's a bit of prep work for trying to get an STS-like endpoint, which has its own, separate way of doing signatures :-(
21:17:00 i think the v4 stuff mostly Just Works, but v2 is ... different ...
21:17:01 what's STS?
21:17:22 secure token service?
21:17:31 https://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html -- basically, a way of issuing temporary credentials
21:17:33 oic
21:18:38 sounds like keystone :P
21:19:14 yeah, the request made me think of https://review.openstack.org/#/c/603529/ a bit, too -- a little different, though
21:19:14 patch 603529 - swift - s3 secret caching - 10 patch sets
21:19:42 rledisez: how's losf? anything to update us on? any new patches or learnings?
21:20:38 notmyname: not much, alecuyer has been thinking/working on extents optimization in XFS. I just made a small update to an SSYNC patch to add concurrency
21:21:07 https://review.openstack.org/#/c/613987/
21:21:08 patch 613987 - swift - SSYNC: enable multiple SSYNC connections per job - 2 patch sets
21:21:35 (it's still missing some tests)
21:21:46 cool. sounds like something that could be useful beyond losf too
21:22:31 oh... speaking of ssync, timburke's been looking at something interesting
21:22:48 so we've got this "bug" https://bugs.launchpad.net/swift/+bug/1510342
21:22:48 Launchpad bug 1510342 in OpenStack Object Storage (swift) "Reconstructor does not restore a fragment to a handoff node" [Low,Confirmed] - Assigned to Bill Huber (wbhuber)
21:23:06 "bug" because we intentionally wrote it that way to start with, but it should probably be changed
21:23:14 and timburke's been thinking about how to fix it
21:23:19 i really want to track last-sync time, and be able to reconstruct to a handoff if one of the neighbors has been responding 507 for a while
21:23:54 timburke: did you decide on the best place to store it? or should we discuss that here to get gut reactions from others?
21:24:07 that's an interesting idea, maybe to avoid racing handoff writes?
21:25:18 yeah, where would you store that? on every frag... and then do you need a quorum, or go with the latest timestamp you find?
21:25:19 still not sure. either at the suffix or all the way down at the diskfile, but maybe both? *shrug*
21:25:31 kota_: side note related to SSYNC and race conditions -> https://review.openstack.org/#/c/611614/
21:25:31 patch 611614 - swift - Fix SSYNC concurrency on partition - 4 patch sets
21:26:19 rledisez: I think you or alex should probably pay attention to the bug I linked and chat with timburke as he works on a patch. it probably affects ovh a bit
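
(A purely illustrative sketch of the idea timburke describes above: each fragment remembers when it last successfully synced with each neighboring primary, and the reconstructor rebuilds to a handoff once a primary has been answering 507 for longer than some grace period. Every name, value, and the storage location here are hypothetical; as the discussion notes, whether this lives at the suffix or diskfile level was still undecided.)

    import time

    # assumption: how long a primary may keep failing (507) before we stop
    # waiting for it and rebuild its fragment to a handoff node
    UNMOUNTED_GRACE = 4 * 3600

    class FragSyncTracker(object):
        """Per-fragment record of the last successful sync with each
        neighboring primary (hypothetical; not Swift's real reconstructor)."""

        def __init__(self):
            self.last_sync = {}  # node index -> unix timestamp of last good sync

        def record_success(self, node_index, timestamp=None):
            self.last_sync[node_index] = timestamp or time.time()

        def should_rebuild_to_handoff(self, node_index, now=None):
            # True once we have not managed to sync with this primary for
            # longer than the grace period; a never-seen primary starts
            # its clock now rather than triggering an immediate rebuild
            now = now or time.time()
            return (now - self.last_sync.setdefault(node_index, now)) > UNMOUNTED_GRACE

    # usage sketch: for each primary that answered 507 during reconstruction,
    # consult the tracker; if the grace period has passed, push the rebuilt
    # fragment to a handoff instead of waiting for the disk to come back.
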
21:26:28 rledisez: thx
21:27:09 notmyname: yeah, i'll read the bug report carefully
21:27:12 mattoliverau: idea would be that each frag would track its last-sync with its neighbors independently
21:27:56 we could probably safely include the source when pushing a new frag to some remote, and new writes from the proxy ought to set it appropriately, but i think those could be later optimizations
21:28:18 timburke: is it something that would travel with a partition (during a rebalance)? or do you assume that after a rebalance the tracking restarts from zero?
21:28:20 ahh ok, interesting
21:28:58 big thing is, i want a disk that gets unmounted but not removed from the ring to not (hugely) negatively affect the durability of objects that might have a frag on that disk
21:30:23 rledisez: seems appropriate for it to travel with the part -- might even get us part of the way toward some of the ideas for tsync? like, we might be able to prioritize reverts following a rebalance with this extra info? not sure yet; still need to play with it
21:30:46 timburke: it sounds reasonable, but are we trying to address the negligence of the operator?
21:31:20 rledisez: lol @ "but can this patch fix operators that aren't doing smart things?"
21:31:30 rledisez: yeah -- i've got some customers that really don't like to have to think about their clusters
21:31:45 lol
21:31:57 ok, i guess i can stop worrying about my production now, you've got it all ;)
21:32:25 "i can just check in like once a month and replace drives, right?" while not thinking much about how many failures can pile up when you've got 2k, 3k drives in your cluster...
21:33:11 or simply can't afford a team the size of a public provider's; smaller players need systems that are a bit smarter, IMHO
21:33:14 rledisez: lol
21:33:50 tdasilva: oh, ok, I suppose we can make it better for people other than rledisez :-)
21:34:38 ok, what else do we need to bring up this week?
21:34:45 anyone else have a topic to mention?
21:36:18 ok, I guess not then. :-)
21:36:21 thanks for coming today
21:36:28 and thank you for your work on swift
21:36:32 #endmeeting