21:00:46 #startmeeting swift
21:00:47 Meeting started Wed May 17 21:00:46 2017 UTC and is due to finish in 60 minutes. The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:48 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:51 The meeting name has been set to 'swift'
21:00:52 let's do this!
21:00:53 who's here for the swift team meeting?
21:00:58 o/
21:00:59 o/
21:00:59 o/
21:00:59 o/
21:01:03 hi
21:01:04 o/
21:01:11 hi
21:01:13 hi
21:01:16 hello
21:01:57 hello
21:02:04 hello
21:02:05 o/
21:02:14 good to see everyone
21:02:20 #topic happy birthday
21:02:27 first up, today is swift's birthday!
21:02:40 seven years ago today, swift was put into prod
21:02:41 \o/
21:02:52 congrats!
21:02:53 happy bday swift
21:02:59 mattoliverau: kota_: (or yesterday, in your tz) ;-)
21:03:01 yay!
21:03:17 Happy Birthday!
21:03:21 lol
21:03:35 Yeah, I had a cake.. or donut (or now have an excuse for eating one)
21:03:49 right. go get yourself some cake, and celebrate
21:04:09 MMM, guilt-free cake
21:04:31 thank you to everyone who has offered code, bug reports, use cases, advice, and time to the community
21:04:47 I was talking to someone yesterday and the question of "what is swift" came up. it's the people. :-)
21:04:50 thanks for being involved
21:04:59 wait, did somebody mention cake?
21:05:09 #topic summit recap
21:05:10 nom nom nom
21:05:15 acoles: you're welcome
21:05:18 ok. we had a summit last week. how was it?
21:05:23 if you weren't there, we missed seeing you
21:05:30 if you were there, I'm glad you made it
21:05:40 if you weren't there - you won a productive week!
21:05:43 lol
21:05:56 It was good to catch up, but would've loved more time to talk swift things
21:06:09 does someone have a link to the feedback form? I must have missed it
21:06:12 mattoliverau: +1
21:06:24 mattoliverau: yeah, that's the general sentiment I got
21:06:27 Although the ops turnout was awesome
21:07:00 Great seeing more people in ops feedback than normal, and it usually is a good turnout anyway
21:07:13 So that was very positive
21:07:28 there are two ideas that came up last week that we should bring up again here
21:07:55 first, the idea of having a regular meeting in addition to this one for people in different timezones
21:08:29 +2!
21:08:30 specifically, mahatic and pavel/onovy/seznam. but of course we've all seen various Chinese contributors too
21:08:57 I am going to coordinate with mahatic to find a time and agenda for that meeting
21:09:13 but the point is that it's a place to bring up stuff that those in the other time zones are working on
21:09:32 Cool
21:09:32 I think it's a terrific idea
21:09:43 i bet the guys working on tape would like that too
21:09:50 * clayg pours one out for mahatic
21:09:59 Any idea on how regularly?
21:10:05 are there any other general things/topics/themes that should be included for that meeting?
21:10:17 mattoliverau: probably once every two weeks?
21:10:27 Cool
21:10:35 mattoliverau: but how often would you want it, if it were more convenint for you?
21:10:50 *convenient
21:11:22 EVERYDAY
21:11:35 if there's one thing clayg loves, it's daily meetings
21:11:46 lol
21:11:54 Lol, 2 weeks is a good start
21:12:04 mattoliverau: note that you still have to attend this meeting, notmyname said "in addition", so no sleeping in :P
21:12:47 acoles: damn... But depends on when it is.. it could just as easily be harder for you, as I'm closer to Chinese time ;)
21:12:47 my goal is to find a time for it that is so horrible for US timezones that it will be obvious that not everyone needs to be there
21:13:33 ok, the second thing from last week... a virtual hackathon idea
21:13:42 this one is an idea that is very much unformed
21:14:33 but given the less-than-great experience from the summit, some of us talked about figuring out a way to try a hackathon without all of us having to make a long flight
21:14:48 Again timezones make it hard. Ironic did it and mrda (Australian teammate) had to do crazy hours.. but he did say it was beneficial
21:14:59 several of our community also contribute to ceph, and I know they have these virtual events
21:15:06 mattoliverau: ah, interesting. good feedback
21:15:07 Yeah, if only there was a way to send a message... like a mail... to a list of people. And then it could be stored on a computer somewhere, ready to be read in whatever timezone the recipient is in.
21:15:52 mattoliverau: what if it were something like "we're doing it for the next 30 hours, and here's the schedule". and I might have a topic from 11pm to 1am in my tz and someone else might have a topic from 7am to 8am, etc
21:16:02 zaitcev: crazytown!
21:16:08 zaitcev: now you're just talkin crazy
21:16:56 so I don't know exactly what it would look like
21:17:09 i think it's supposed to be more about the concerted effort than the actual messaging :-) i'm imagining something like the review sprint to land crypto or something
21:17:24 i love it!
21:17:26 it's certainly not an original idea, so we could definitely learn from others
21:17:41 zaitcev: how does ceph do it? have you participated in those events?
21:17:49 Sure, I guess 30 hours is more doable. It's more what if the people working on setting is in all the timezones.. but I guess only 30 means only 1 bad night
21:18:05 *something
21:18:08 zaitcev: only uses email - then someone in the hack-a-thon reads the email to everyone in the video conference
21:18:09 mattoliverau: one bad night for everyone. or what timburke said and more of the global handoff...
21:18:22 * mattoliverau is typing on phone so autocomplete is a pain
21:18:23 oh god, Ceph is the worst. They have a stand-up every morning, like characters in a Japanese anime about a workplace
21:18:25 notmyname: I imagine they would have to be a lot more organized than our typical hackathons. So like you said, schedule a specific time and subject
21:18:32 zaitcev: lol
21:18:57 notmyname: maybe we can start small, with just scheduling a discussion on one topic and see how that goes
21:19:20 tdasilva: scheduled time and subject!? this doesn't even sound like a hack-a-thon anymore
21:19:20 it's really not a hackathon, more like a video conference meeting to talk about a specific topic
21:19:29 oh... that's a thing
21:19:30 clayg: exactly, I agree
21:19:48 clayg: but I can't think how else we would do it virtually?
21:20:16 Ceph also has a hackathon, which is very similar to ours, but stricter. Actually, we did an end-of-day round table too in ours. Theirs is like... Have you implemented this function today? Why not?
21:20:22 IMO it's an interesting idea (but I don't know what it looks like) that sounds a *lot* more valuable than flying to sydney for only 3 days for the next summit :/
21:20:26 How else, 3D visors ;)
21:20:35 tdasilva: it is a good point - i have no idea how to do a virtual hack-a-thon
21:20:47 cough LCA cough
21:20:51 The food at Ceph hackathons was amazing though.
21:20:56 mattoliverau: yes, that would be awesome
21:21:17 wait, did someone mention food?? :D
21:21:23 I just did
21:21:32 ok, so it's an idea. might be terrible. might be great. we don't have to make a decision on it right now or plan it out today
21:21:38 I saved pictures somewhere, I was so impressed.
21:21:42 acoles: are you hungry?
21:21:58 mattoliverau: lol, no! we just had a potluck. we're all stuffed
21:22:02 acoles: are people in SF not feeding you?
21:22:10 speak for yourself. brb
21:22:26 ok, so think about it. let's bring it up again
21:22:34 mattoliverau: tdasilva: it's terrible
21:22:40 #topic follow up on tc goals
21:22:42 lol
21:22:53 AFAIK, status is "nothing has changed. still gotta do it"
21:23:08 goals?
21:23:08 zaitcev: I assume the great food wasn't at a virtual hackathon.. or maybe it was :p
21:23:21 notmyname: I think it is worth trying, see how it works
21:23:27 clayg: py3 and running under mod_wsgi^Wuwsgi
21:23:37 acoles: agreed (a virtual hackathon)
21:23:59 notmyname: I looked at the uwsgi/mod-wsgi goal requirement, I think we are almost compliant already
21:24:11 "python3 doesn't matter" --unnamed wise man
21:24:32 acoles: thanks. I'll talk to you later about getting something written up to say "here's the things we need to do"
21:24:42 #topic LOSF
21:24:47 i heard someone say one time the apache stuff doesn't blow up if you're using replicated objects - since devstack doesn't use EC - I think we're golden
21:24:51 mmm queso
21:24:54 rledisez: this is your stuff
21:25:15 clayg: move the requirements to be whatever we've already done?
21:25:19 how about LOLF? it sounds funnier
21:25:19 ok. so, a quick overview of what we are doing right now (OVH and iQIYI)
21:25:29 long story short: we talked with Jeff about our implementations of the small file optimization
21:25:50 the goal is to have the same on-disk format so that we can work in parallel on the python and golang versions
21:25:54 yay!
21:26:09 some of the work could be easily done, but we could not agree about the meta (maybe we just lacked time to debate)
21:26:11 impressive you found it possible
21:26:19 #link https://etherpad.openstack.org/p/swift-losf-meta-storage
21:26:29 the iQIYI implementation stores metadata only in the K/V. the OVH implementation stores metadata only in volume files.
21:26:33 Nice
21:27:22 storing it in the KV could help with HEAD requests if you don't have too much meta to store (otherwise it fills up memory)
21:27:45 storing it in the volume allows rebuilding the KV in case of corruption and saves memory (at the cost of IO on HEAD requests)
21:27:58 rledisez: thanks for the etherpad notes
21:28:19 Hmm, I think I need time to read
21:28:45 rledisez: what's your plan to choose a single way to do it?
21:29:09 right now we are still discussing with jeff. i think he is trying our implementation
21:29:15 rledisez: do you want us to read over and discuss later in irc or next week?
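
[Editor's note: to make the K/V-vs-volume trade-off above concrete, here is a minimal Python sketch of the two on-disk layouts being compared. All names are hypothetical illustrations, not the actual OVH or iQIYI code; the K/V store is modeled as a plain dict and the volume as an open binary file.]

    import json
    import struct

    def write_record_volume_meta(volume, kv, obj_hash, data, metadata):
        """OVH-style: metadata is serialized into the volume file next to
        the data. The K/V stays small and can be rebuilt by scanning the
        volumes after a corruption, but a HEAD costs a volume read (IO)."""
        offset = volume.tell()
        meta_blob = json.dumps(metadata).encode('utf-8')
        # record header: metadata length, data length
        volume.write(struct.pack('!QQ', len(meta_blob), len(data)))
        volume.write(meta_blob)
        volume.write(data)
        kv[obj_hash] = offset  # the K/V holds only the location

    def write_record_kv_meta(volume, kv, obj_hash, data, metadata):
        """iQIYI-style: metadata lives only in the K/V. A HEAD is served
        straight from the K/V with no volume IO, but every byte of
        metadata occupies K/V (largely memory-backed) space, and a
        corrupted K/V cannot be fully rebuilt from the volumes alone."""
        offset = volume.tell()
        volume.write(struct.pack('!Q', len(data)))
        volume.write(data)
        kv[obj_hash] = {'offset': offset, 'meta': metadata}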
21:29:21 oh ok, so he's looking at your implementation
21:29:31 in the end, either one of us makes a very good point to convince the other, or we are stuck until somebody decides for us :)
21:29:57 notmyname: yeah, reading the etherpad and having a discussion about it would be very nice
21:30:01 We store container metadata in Sqlite, so that's something, but yeah, I assume we tend to store more object metadata than container
21:30:34 mattoliverau: yeah, but this is more about the xattr/.meta metadata
21:30:59 Yeah
21:31:03 rledisez: ok, thank you for bringing this up
21:31:31 rledisez: how about the rest of us read over and comment, then we talk in -swift about it. and get a status update next meeting?
21:31:37 rledisez: is jeff in irc?
21:31:55 notmyname: good for me
21:32:03 i'm not sure. it's 5am for him now
21:32:09 so right now probably not
21:32:13 i'll ask him by mail
21:32:13 rledisez: kota_ has no sympathy ;-)
21:32:21 :)
21:32:25 Lol
21:32:34 6am
21:32:41 for me
21:32:43 kota_: pretty much the same thing ;-)
21:32:53 makes sense
21:33:07 rledisez: ok, we *need* him to be in irc. please mention it when you talk to him next
21:33:22 i will ask him again if he can join
21:33:25 thanks
21:33:41 rledisez: anything else for this meeting that we need to go over on this LOSF topic?
21:33:59 nothing i can think of for the moment
21:34:10 ok, thanks for bringing it up
21:34:16 can we make it LAFS or LAWLS
21:34:39 #topic global ec patches
21:34:46 composite rings
21:35:10 has composite rings landed yet? acoles timburke clayg cschwede_ kota_??!?!
21:35:33 notmyname: i didn't have a chance today to look at it :/
21:35:48 notmyname: almost, we have 2 +2s but it would be great for kota_ to have a chance to add a vote
21:36:00 since we've all had some involvement in authoring
21:36:04 look at it! merge it! I'll +A it right now!
21:36:11 https://review.openstack.org/#/c/441921/
21:36:11 patch 441921 - swift - Add Composite Ring Functionality
21:36:14 notmyname: i had not yet but absolutely i'll have time today
21:36:25 i did a solid read-through on the docs yesterday; looks good to me. https://review.openstack.org/#/c/465184/ popped out, addressing some other ring doc stuff along the way
21:36:25 patch 465184 - swift - Ring doc cleanups
21:36:37 sorry, i had some days off early this week.
21:36:38 kota_: thanks!
21:36:39 timburke: wanting to land after or wanting to merge in?
21:37:12 i'm perfectly content for that to be a follow-up. a lot of it is out of scope for composite rings
21:37:34 kota_: thanks. when you look, please +A if you like it, that way it's landed during our night and we don't lose another day on it
21:37:48 kota_: please go ahead and +A if you are ok with it, i think that is fine with the other +2 votes
21:37:54 acoles: clayg: what comments were you looking for from cschwede_?
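
[Editor's note: for readers following along, a rough sketch of how the composite ring functionality in patch 441921 is meant to be used. The module and function names are taken from the patch under review at the time and may have shifted before merging; the device values are made up.]

    from swift.common.ring import RingBuilder
    from swift.common.ring.composite_builder import compose_rings

    # one independently managed builder per region, each with the same
    # part power; the composite concatenates their replicas, so every
    # region is guaranteed a full set of replicas (e.g. for a global EC
    # policy with 4 frags per region)
    region1 = RingBuilder(10, 4, 1)  # part_power, replicas, min_part_hours
    region2 = RingBuilder(10, 4, 1)
    for i in range(4):
        region1.add_dev({'id': i, 'region': 1, 'zone': i, 'weight': 100,
                         'ip': '127.0.0.1', 'port': 6200, 'device': 'sdb'})
        region2.add_dev({'id': i, 'region': 2, 'zone': i, 'weight': 100,
                         'ip': '127.0.0.2', 'port': 6200, 'device': 'sdb'})
    for builder in (region1, region2):
        builder.rebalance()

    # the composite has replicas = 4 + 4 = 8: replicas 0-3 always land
    # in region 1 and replicas 4-7 in region 2
    ring_data = compose_rings([region1, region2])
    ring_data.save('object-1.ring.gz')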
21:38:09 notmyname: we got a chance to talk about it in BOS
21:38:14 notmyname: thanks for the info, I was concerned about whether i could give +A for my patch :P
21:38:20 and acoles: ^^
21:38:26 clayg: ah, ok good
21:38:30 notmyname: cschwede_ had the *really* good idea about taking an existing ring and splitting it into composites for upgrade
21:38:31 kota_: yes you can :-)
21:38:33 cschwede_: IIRC had a better idea how to force a builder file write
21:38:53 i was going to do some research into some of our existing multi-region deployment rings and scope out what I think we can do
21:38:56 it's a baller idea
21:39:09 but something to do *after* this patch, right?
21:39:14 acoles: that one could be an easy follow-up
21:39:15 yeah I think so?
21:39:22 yeah, no blockers
21:39:25 there's a *bunch* of stuff that we probably want to happen *after* this patch
21:39:30 cschwede_: great!
21:39:34 seems like a great idea, but i'm having a hard time imagining how it would work in practice...
21:39:39 cschwede_: +1 it's just doc so a follow-up is fine
21:39:42 clayg: are you looking into the decompose stuff?
21:39:50 I think getting something like CLI support is up there - and possibly more docs about how to use it
21:40:02 cschwede_: actually - no :\
21:40:14 do we have any other patches open related to global ec clusters?
21:40:35 clayg: ok, i might look into this next week
21:40:45 cschwede_: maybe we could collaborate on that some? I need to think more deeply about it ... :D that would be great
21:41:00 clayg: absolutely!
21:41:01 notmyname: we have a patch for per-policy affinity config https://review.openstack.org/#/c/448240/
21:41:02 patch 448240 - swift - Enable per policy proxy config options
21:41:10 cschwede_: I'm curious if there would be something I could run over a replica2part2dev table that would say "XXX parts would have to move" kinda thing?
21:41:12 acoles: that's what i was thinking of
21:41:22 which is related to global EC, we discussed this last week and got some consensus on the conf file format
21:41:40 acoles: but it has a merge conflict?
21:41:42 cschwede_: assuming I want my 1.5/1.5 rings to go to 2/1 or my 2.1/1.9 rings to go 2/2 or whatever it is?
21:41:48 I just saw that it is in merge conflict, will fix
21:41:52 acoles: thanks
21:42:27 it also depends on this, which should be an easy review https://review.openstack.org/#/c/462619/1
21:42:28 patch 462619 - swift - Add read and write affinity options to deployment ...
21:42:49 tdasilva: kota_: mattoliverau: jrichli: zaitcev: timburke: after the composite rings patch lands, next up is patch 448240. if you can review it, that would be very helpful
21:42:50 https://review.openstack.org/#/c/448240/ - swift - Enable per policy proxy config options
21:42:55 clayg: i was thinking about "take r1 from this ring and write it out to a new builder, now take r2 from the same ring (or another, whatever), and write it out to another builder" - gives you full control?
21:43:13 and perhaps https://review.openstack.org/#/c/443072/ is one of my homework items since EC duplication landed
21:43:13 patch 443072 - swift - Eliminate node_index in Putter
21:43:18 Kk
21:43:18 clayg: that will result in rings with a fraction-based replica count
21:43:27 sorry i forgot to bring it up in BOS :\
21:43:30 okay
21:43:35 clayg: and then you can set the replicas to your desired value, and rebalance
21:43:44 cschwede_: well... but I mean how big is the replica2part2dev table in each case?
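
[Editor's note: a sketch of the per-policy proxy configuration being discussed in patch 448240. The patch was still under review at this point, so the section naming and option set shown here reflect the proposal of the day, not a guaranteed final format.]

    [proxy-server]
    sorting_method = affinity
    # default affinity: read locally and write mostly to region 1
    read_affinity = r1=100
    write_affinity = r1
    write_affinity_node_count = 2 * replicas

    # override for policy 1 (say, a global EC policy): read from the
    # nearest region first, but write frags to all regions immediately
    [proxy-server:policy:1]
    sorting_method = affinity
    read_affinity = r1=100, r2=200
    write_affinity =
    write_affinity_node_count = 6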
21:43:56 kota_: oh yeah, we still have duplicate frag follow-ups
21:44:08 cschwede_: currently composite rings require integer replica counts :-/
21:44:33 ok, I'll update the priority reviews wiki page with the global ec patches (and the stuff for after that [ie zaitcev's PUT+POST patch])
21:45:20 cschwede_: idk, fractional replicas don't really allow for the last replica part list to be arbitrarily sparse
21:45:28 cschwede_: but maybe we could make it work
21:46:27 well, the composite ring will use integer replicas. but getting there might need to split up and change decomposed ring parts
21:47:00 cschwede_: it'll be great
21:47:03 maybe i should draft this first to make that idea work for me and show it to you
21:47:11 that's the ticket!
21:47:13 great :-)
21:47:19 #topic open discussion
21:47:25 anyone have something else to bring up?
21:48:11 * notmyname knows timburke does
21:48:16 https://review.openstack.org/#/c/463849/ has a bit of an api change in it... but i think it's a good thing
21:48:17 patch 463849 - swift - Delete a non-SLO object with ?multipart-manifest.
21:48:18 So, about that PUT+POST, anyone care about it? Tim looked at it.
21:48:32 yeah timburke is the best!
21:48:35 zaitcev: yes! let's talk right after we go over timburke's thing
21:48:36 everyone cares
21:48:48 i want a way to say "go delete this object, and if it's an SLO, go delete the segments as well"
21:49:15 timburke: what's the current behavior and the proposed behavior?
21:50:01 current behavior is to 400 if we get a DELETE request with ?multipart-manifest=delete for a non-SLO; the patch changes that to go ahead and do the delete
21:51:05 timburke: if I'm reading that code right - do *all* ?multipart-manifest=delete requests turn into a GET on the backend before the DELETE verb makes its way down to the object server?
21:51:33 client niceties aside - do we *really* want to encourage that request pattern on the backend when describing "the preferred way" to empty a container?
21:51:54 this stops making HEADs necessary to perform DELETEs (as in https://github.com/openstack/swift3/blob/master/swift3/request.py#L1182-L1187)
21:52:22 why would one expect a failure on delete with the ?multipart querystring? is there a good reason for it that i didn't notice?
21:52:26 clayg: got a better way?
21:53:00 cschwede_: yeah, that's what I think too. the error is what seems odd, not the proposed new behavior. client sent a delete. delete it
21:53:10 right
21:53:14 +1
21:53:19 +1
21:54:01 ok, so what do we need to do to move forward on this one?
21:54:08 timburke: ? timur: ?
21:54:09 But that's from just thinking about the API, not looking at the code, cause maybe there was a reason
21:54:46 so, if that is merged, then the client could supply ?multipart-manifest=delete on a bulk-delete in order to have segments deleted with manifests, right?
21:54:58 i can go +A
21:55:08 if everyone's on board
21:55:34 well, I think the patch could be merged. clayg's point is valid. However, a caller right now has to do a HEAD+DELETE when removing objects. I don't think reworking SLO to avoid the GET before DELETE is within the scope of this patch?
21:56:04 jrichli: that would be awesome -- I haven't actually checked if that's what will happen
21:56:08 jrichli: I think the change is a client can ?multipart-manifest=delete on a NON-manifest and have the object still be deleted (possibly *not* deleting the segments behind a newer manifest that wasn't servicing the read)
21:56:09 jrichli: maybe? i'd have to look at the bulk deleter; it may think that what we intend to be a query param is part of the object name
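
[Editor's note: the behavior change in patch 463849, sketched with python-swiftclient. The endpoint and token are placeholders; query_string is an existing parameter of swiftclient's delete_object, and the pre/post-patch outcomes are as described in the discussion above.]

    from swiftclient import client
    from swiftclient.exceptions import ClientException

    storage_url = 'http://saio:8080/v1/AUTH_test'  # hypothetical endpoint
    token = 'AUTH_tk...'                           # hypothetical token

    # Old client pattern (e.g. swift3, jclouds): HEAD first to learn
    # whether the object is an SLO manifest, because sending
    # ?multipart-manifest=delete for a plain object returns 400.
    headers = client.head_object(storage_url, token, 'cont', 'obj')
    if headers.get('x-static-large-object', '').lower() == 'true':
        client.delete_object(storage_url, token, 'cont', 'obj',
                             query_string='multipart-manifest=delete')
    else:
        client.delete_object(storage_url, token, 'cont', 'obj')

    # With the patch, the query string is harmless on a non-SLO, so the
    # HEAD can be dropped: always send one DELETE, and the segments get
    # cleaned up whenever the object does turn out to be a manifest.
    try:
        client.delete_object(storage_url, token, 'cont', 'obj',
                             query_string='multipart-manifest=delete')
    except ClientException as err:
        # pre-patch, a non-SLO object lands here with err.http_status == 400
        print('delete failed: %d' % err.http_status)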
21:56:35 but I was talking about changing the bulk deleter behavior earlier today in the swift channel
21:56:41 But yeah, delete is how would delete, the query string should only matter in if match or being caught by middle ware, as not a slo it should just be ignored (the query string not the delete)
21:56:43 I think... it's worth considering... had tempest happened to have a test that hit this behavior - the change would probably already be on the floor
21:57:03 timur: correct, i was bringing up that convo about the bug (well, there are 2 dup bugs on that)
21:57:27 Wow my English is awesome when I type on a phone :p
21:57:36 yeah... it would be nice to have a bug associated with it - i'm glad folks had the good sense to raise awareness in the meeting - kudos all
21:57:54 jrichli: got a link to the bug?
21:58:09 https://bugs.launchpad.net/swift/+bug/1691523
21:58:10 Launchpad bug 1691523 in OpenStack Object Storage (swift) "Multi-delete does not remove SLO segments" [Undecided,New]
21:58:15 https://bugs.launchpad.net/swift/+bug/1691459
21:58:17 Launchpad bug 1691523 in OpenStack Object Storage (swift) "duplicate for #1691459 Multi-delete does not remove SLO segments" [Undecided,New]
21:58:27 ah, it was marked dup already
21:58:34 yea, that was my doing
21:58:38 :-)
21:58:44 zaitcev: haven't forgotten you...
21:58:57 I can ask Andrew to file a bug related to this patch too
21:59:13 it came up in jclouds, as the behavior of HEAD+DELETE is not great
21:59:41 zaitcev: your PUT+POST patch (https://review.openstack.org/#/c/427911/) is very important
21:59:42 patch 427911 - swift - PUT+POST and its development test
21:59:53 zaitcev: it unblocks being able to work on the golang object server
22:00:07 which can be done concurrently with the other replication/reconstructor work
22:00:17 hopefully, you mean.
22:00:26 zaitcev: I'm always full of hope
22:00:36 hmm... we should probably force x-newest around https://github.com/openstack/swift/blob/master/swift/common/middleware/slo.py#L1157 ...
22:00:52 I addressed the concerns raised in Atlanta, I think, but I haven't seen any remarks from Clay/Alastair/Kota/etc
22:01:04 busy busy
22:01:06 oh great, so instead of every DELETE being preceded with a GET - it's an X-Newest GET! ;)
22:01:53 ok, need review on the delete patch and the bugs there
22:01:59 we're at time
22:02:09 thank you for working on swift! here's to another seven years!
22:02:18 #endmeeting