21:00:10 #startmeeting swift
21:00:16 Meeting started Wed Jan 17 21:00:10 2018 UTC and is due to finish in 60 minutes. The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:20 The meeting name has been set to 'swift'
21:00:21 who's here for the swift team meeting?
21:00:25 hi
21:00:27 o/
21:00:29 o/
21:00:41 good (morning|afternoon|evening) o/
21:01:09 rledisez: what about the night time?
21:01:36 yeah, my bad, good night everybody :)
21:01:43 :-)
21:01:47 hello
21:01:55 hello
21:03:17 o/
21:03:22 welcome
21:03:47 there's been a ton of stuff landing since our last meeting. great work!
21:03:53 #link https://wiki.openstack.org/wiki/Meetings/Swift
21:03:55 Or you could use UGT time, then we can all say good morning
21:04:00 there's the agenda for this week
21:04:18 good morning!
21:04:50 mostly I want to review stuff for the 2.17.0 release
21:04:55 hi
21:05:01 #topic next release
21:05:04 #link https://wiki.openstack.org/wiki/Swift/PriorityReviews
21:05:20 nearly everything on there is marked off
21:05:26 patch 532383
21:05:27 https://review.openstack.org/#/c/532383/ - swift - Don't make async_pendings during object expiration
21:05:34 ...is open
21:05:54 as is the data segments in SLOs patch 365371
21:05:54 https://review.openstack.org/#/c/365371/ - swift - Add support for data segments to SLO and Segmented...
21:06:15 unless there are any other major bugs or blockers, I'd like to see those land and then tag the release
21:06:19 I think timburke addressed all the outstanding issues on 365371
21:06:25 I will look at p 532383 again tomorrow, most likely +A
21:06:26 https://review.openstack.org/#/c/532383/ - swift - Don't make async_pendings during object expiration
21:06:35 acoles: thanks
21:06:48 joeljwright: yeah, I think it's just that people have been (rightly) reviewing the expiry patches
21:07:01 +1
21:07:19 I've got the start of the authors/changelog patch at https://review.openstack.org/#/c/534528/
21:07:19 patch 534528 - swift - WIP
21:07:39 thanks to clayg and timburke for the comments so far
21:08:00 I'll update those today or tomorrow, and add info about the other patches that have landed
21:08:18 any questions on the mechanics or logistics of this release?
21:08:25 torgomatic: we'll get to your question next
21:09:59 ok, let's move on to torgomatic's question then :-)
21:10:04 #topic checkpoint release?
21:10:18 torgomatic: would you like to introduce it?
21:10:36 right now, we've got to consider rolling cluster upgrades across all past versions, and that makes things more difficult
21:11:21 I'd like to introduce some cutoffs where we don't allow upgrades from any old version to the latest, but make you go through some intermediate steps first
21:11:27 or, at least, to talk about doing that
21:12:10 thanks. what does everyone think? what questions do you have?
21:12:20 One way is checkpoint releases: you have some releases C1, C2, etc. and you can upgrade from any version v < C1 to C1, then from any version C1 <= v < C2 to C2, and so on
21:12:27 (it's a big question, IMO, so I'd expect us to talk about it for a while)
21:12:30 (sorry, I have the slow fingers today)
21:13:36 anyhow, this came up in the context of https://review.openstack.org/#/c/534749/ , where we have a rolling-upgrade compatibility issue with proxies from 2013 and earlier
21:13:37 patch 534749 - swift - Preserve expiring object behaviour with old proxy-...
21:13:41 would it lose us the openstack upgrade tag thing? I forget the correct terminology
21:14:08 acoles: that depends on how we role out the idea
21:14:18 *roll
21:14:41 I think we'd keep it as long as you could perform a rolling upgrade to the latest checkpoint release. (That's a property I very much want to keep, FWIW)
21:16:14 Going from anything to the latest is obviously a big selling point, but a lot of extra effort as we do more and more releases.
21:16:25 aside from the patch you just linked, are there other places where "upgrade straight to latest" is currently obviously hurting us?
21:16:33 Checkpoints aren't the only way to go; we could define a window where you can upgrade from the last N minor versions or the last Y years
21:16:58 notmyname: once we land Pete Zaitcev's improvements to the object PUT path, we still have to keep all that old protocol stuff in the proxy forever
21:17:19 one advantage I see is that we are reaching the point where we may not have the collective memory to realise when we let an upgrade issue slip in
21:17:38 those are both very good points
21:18:42 kota_: rledisez: joeljwright: what are your thoughts?
21:19:13 I understand the need, and i'm not against the idea. i'm concerned about how long (in months/years) you would support an upgrade path?
21:19:20 it sounds reasonable to me, although we always did staged updates anyway
21:20:02 i would say the minimum is a year, probably 2 is better, or to try to sync with some major distro release interval
21:20:07 not an opposing opinion, but a bit worried about how frequently we would have to care about checkpoints. I don't expect it'll happen every release.
21:20:34 rledisez: it seems i'm with you.
21:20:37 It would be good to have stats on how often people upgrade.. because having a wide enough gap would be good. Say every 3 or 4 years.. or by versions maybe? (Every 2 major versions)
21:20:40 based on the conversation in -swift a few hours ago, I'd say that we need to make sure a 2 year-old release has supported upgrades, but a 4 year-old one seems like long enough to not worry too much
21:20:40 Yeah, a couple years at minimum for upgrades IMO.
21:21:05 torgomatic: +1
21:21:24 I assumed we were talking in terms of years
21:21:43 seems we're all in agreement on the unit to measure it
21:21:51 Every couple of major versions would be easy for people to remember.. and represent major API changes
21:22:08 mattoliverau: for all those major version bumps and API updates we do? ;-)
21:22:27 Yeah.. but the difference would be we coild
21:22:31 Could..
21:23:28 in my mind, we started with swift 1.x. we got to 2.x with storage policies. we'll hit 3.x with a rewrite of the storage layer (eg protocols, new language, etc)
21:23:45 however, that's getting a bit off-topic :-)
21:24:48 torgomatic: help me out here. if we decided to make a checkpoint at 2.17 and we agreed to support upgrades for 2 years, how long do we support 2.16 features? how long do we support 2.17? 2.18?
21:24:48 You're welcome
21:25:27 notmyname: user-facing features? Pretty much forever, like we do now.
21:25:44 ( rledisez: LOSF is part of that storage layer update, too ;) )
21:25:53 torgomatic: ok. upgrades then?
21:26:04 If we made 2.17 a checkpoint, then in a couple of years we'd make another checkpoint at, say, 2.34, and then another one at 2.60 in another couple years, and so on
21:26:06 if someone has 2.16 installed, then what?
21:26:19 another way of looking at it: how far back will we feel comfortable removing cruft between 2.17 and 2.18?
21:26:29 so then in five years' time, that person with 2.16 would have to upgrade 2.16 -> 2.17 -> 2.34 -> 2.60 to get to the latest
21:27:00 and in this case, after we made 2.34 a checkpoint, 2.34.1 could remove compatibility cruft for anything 2.17 -> 2.18
21:27:22 ok, I think I got it
21:27:42 the difference with a time-based method is that we support eg anything that was released 2 years ago
21:28:03 calendar-based
21:28:29 so in 2018 we drop support for upgrading from anything released in 2015
21:28:32 that sort of thing
21:28:52 the difference is that deployers would need to be on a (very slow moving) upgrade train
21:28:56 right?
21:29:13 no, deployers could still wait a long time
21:29:36 when they did decide to upgrade, they'd just have to upgrade several times to get to the latest
21:29:41 we would only be dropping support for upgrading *directly* from 2015 to 2018 releases right?
21:29:42 right
21:29:48 acoles: right
21:29:54 yes
21:29:55 right!
21:30:22 an example: what about the renaming of .durable to #d for EC? should it be converted before moving on to a newer version, or will it be maintained forever?
21:30:22 torgomatic: deployers could only upgrade, at most, to something released 2 years after their current version
21:30:37 notmyname: yes
21:30:57 ok. just making sure the different methods are spelled out for everyone (myself included!)
21:31:00 rledisez: that one, we'd probably have to keep forever since we don't automatically convert things
21:31:16 what all would be in-scope for breaking? backend protocols, sure; client APIs, definitely not -- what about operator-facing things like https://github.com/openstack/swift/blob/2.16.0/swift/common/manager.py#L724-L727
21:31:35 on-disk data formats live forever, storage-node and replication protocols live a couple years
21:32:02 ^ seems reasonable
21:32:03 * kota_ is wondering if it may make us remove the rolling container db schema change for creating storage policies???
21:33:00 rledisez: torgomatic +1 and good clarification, on-disk data always supported
21:33:01 oh yeah, speaking of JIT migrations -- https://review.openstack.org/#/c/502529/
21:33:02 patch 502529 - swift - Create policy_stat table in auditor if missing
21:33:13 here's what I think we should do: ask any further questions about generally how it works, go and sleep on it, and come back together later to talk about specifics or if we even like the idea
21:33:42 I don't think we need to get into "what about lines 200-210 in this module" right now ;-)
21:33:56 notmyname: i was just grepping for 'compat' :-)
21:34:00 heh
21:34:08 no, they're important to ask
21:34:14 see what broad classes of things are out there
21:34:37 I want to make sure everyone feels they understand the idea of time-based vs checkpoint releases for dropping compatibility
21:35:01 are there any questions? do you (everyone) feel good enough with the concept to explain it to your boss?
21:35:14 if not, ask now!
21:35:49 if so, let's each go ponder it for a bit. not to try to slow down conversation too much, but this may be an interesting topic for the PTG
21:36:05 +1 ptg topic
21:36:05 +1 if you're good with it
21:36:11 notmyname: sorry, i have to wait for my boss to wake up :P
21:36:28 :D
21:36:35 Lol
21:36:38 :-)
21:36:54 notmyname: let me confirm what you meant by *vs* between time-based and checkpoint
21:37:17 i'm still wondering whether something like https://github.com/openstack/swift/commit/ebf0b22 would be in scope for removal post-2.17 or not...
21:37:22 kota_: versus. meaning that you know the differences between the two and how they relate
21:37:57 yeah, and... i may be missing the context or the summary here.... (scrolling back to the log...)
21:38:38 kota_: I just want to check that you understand both and can think on it for a while in order to figure out what it means for the codebase
21:38:59 notmyname: got it
21:39:05 FWIW, I'm not totally sold on the idea, nor am I rushing to make our next release a checkpoint one. but I love the conversation, and I'm happy to be convinced either way (doing a checkpoint or not)
21:39:06 thx
21:39:42 kota_: I normally try to leave out idioms and colloquialisms. it's my fault. thanks for asking me to clarify
21:40:04 ok ok. I'm sure I can explain the concept and discuss what we (in company) want!
21:40:15 #topic updates on current work
21:40:16 something tells me this discussion will continue into Dublin...
21:40:26 let's review some ongoing work
21:40:52 container sharding
21:40:58 mattoliverau: acoles: ?
21:41:16 progress continues...
21:41:33 anything blocking you that the rest of us can help with?
21:41:37 since last week we decided to reduce scope on listings while sharding is in progress
21:41:43 which has helped
21:41:49 great!
21:42:12 I have a long patch chain that would be great to get merged, mattoliverau and timburke are helping with that
21:42:23 ok
21:42:33 mattoliverau: timburke: anything to add?
21:42:47 And people should join in discussions on Trello or etherpad if they're interested
21:43:16 oh, today I started another etherpad to start to doc the internal API changes, maybe help newcomers
21:43:37 Nice
21:43:40 https://etherpad.openstack.org/p/deep-containers-apis
21:43:45 #link https://etherpad.openstack.org/p/deep-containers-apis
21:44:12 thanks
21:44:35 My week hasn't been the most shard-friendly week, but I will hopefully spend more time on reviews post-release (my time is limited :()
21:44:46 i was catching up on checkpoint releases
21:45:01 mattoliverau: and you'll be at lca next week, right? so not too much hands-on-keyboard time there
21:45:06 BTW, all sharding docs are linked from https://etherpad.openstack.org/p/deep-containers
21:45:18 so that is the mother-ship ^^
21:45:30 Yup LCA next week \o/
21:45:35 I hate to say it, but if you're running swift from 2015, upgrading to master isn't exactly something I'd have high confidence in going great anyway... it's more about putting a name to reality
21:46:15 acoles: I added a link to that parent etherpad to the ideas wiki
21:46:29 kota_: any progress on s3api this week?
21:46:59 i updated the logger patch according to tim's comments, and am waiting for his review
21:47:05 ok
21:47:28 m_kazuhiro: rledisez: any updates for us on the task queue?
21:47:29 i knew there was another patch i should review!
21:47:30 on the functional tests, I've started checking on the status with tdasilva
21:47:40 since yesterday
21:47:56 I'm working on patch 517389
21:47:57 https://review.openstack.org/#/c/517389/ - swift - Update object-expirer to use general task queue sy...
21:48:20 I implemented all tests, so I removed 'WIP' from the patch.
21:48:39 nice
21:48:48 nice
21:49:10 m_kazuhiro: anything currently blocking you, aside from reviews?
21:50:17 Nothing. Reviews are what I want.
21:50:51 rledisez: how's progress on LOSF work?
21:51:35 pretty good. we should make a (big) move to put the code into production in the next 2 or 3 weeks
21:51:58 Wow
21:52:02 so, we may be running a whole cluster in production with LOSF by the next PTG
21:52:02 Cool
21:52:13 that is "wow" :-)
21:52:42 speaking of the PTG...
21:52:43 also, a new developer will join our team in march, he will assist alexandre in upstreaming the code
21:52:46 that's coming up soon
21:52:59 rledisez: cool
21:53:25 the PTG is at the end of february
21:53:44 I'd like to start organizing topics in two weeks
21:54:01 rledisez: I haven't heard anything about how alexandre ended up dealing with the eventlet + grpc issues he was having?
21:54:15 also, I'll be out next wednesday. would someone like to run the meeting? or would you like to skip it?
21:54:59 Well really you'll just be on my time
21:55:16 But I'll be busy so happy to akio
21:55:22 *skip
21:56:09 clayg: right now there is no real solution. we pinned a specific version of grpc that works (partially, but enough for us) with eventlet. that's not a long term solution. either dropping eventlet or changing grpc for another protocol is the way to go
21:57:36 I think everyone already logged off. so that answers that about skipping next week :-)
21:57:42 :)
21:57:58 meet again in two weeks. jan 31, 2100UTC
21:58:12 thanks for your work on swift. thanks for coming today
21:58:16 #endmeeting
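
A minimal sketch (not from the meeting, and not part of Swift) of the checkpoint-upgrade scheme torgomatic described: a deployer may roll forward only as far as the next checkpoint beyond their current version, repeating until they reach the latest release. The helper name and every version number after 2.17 are invented here purely for illustration.

# hypothetical illustration of the checkpoint-upgrade idea discussed above;
# the checkpoints after 2.17 and the "latest" release are made-up versions
def upgrade_path(current, latest, checkpoints):
    """Return the ordered list of versions to install, one rolling
    upgrade at a time, to get from `current` to `latest`.

    Versions are (major, minor) tuples; `checkpoints` is the list of
    designated checkpoint releases, e.g. [(2, 17), (2, 34), (2, 60)].
    """
    path = []
    version = current
    for checkpoint in sorted(checkpoints):
        if version < checkpoint < latest:
            path.append(checkpoint)   # must pass through this checkpoint
            version = checkpoint
    path.append(latest)               # final hop to the newest release
    return path

# the example from the meeting: someone still on 2.16 a few years later
hops = upgrade_path((2, 16), (2, 75), [(2, 17), (2, 34), (2, 60)])
print(' -> '.join('%d.%d' % v for v in hops))  # 2.17 -> 2.34 -> 2.60 -> 2.75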