19:00:38 #startmeeting swift
19:00:39 Meeting started Wed Jan 14 19:00:38 2015 UTC and is due to finish in 60 minutes. The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:40 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:43 The meeting name has been set to 'swift'
19:00:46 who's here for the swift team meeting?
19:00:49 o/
19:00:52 hello
19:01:09 o/
19:01:24 \o
19:01:27 clayg: peluse: torgomatic: complementary ping
19:01:29 howdy
19:01:36 hi
19:01:47 o/
19:01:53 how lucky that i'm in this channel for a change
19:02:02 :-)
19:02:20 yo yo
19:02:23 hello, everyone, in whatever timezone and on whatever day it is for you
19:02:54 I think this should be a fast meeting, but there are a few important things to review or go over
19:03:05 #topic ring placement changes
19:03:16 clayg: torgomatic: looks like (most of) the patch chain has landed
19:03:24 what's current status and next?
19:03:29 just the report patch from clayg?
19:03:31 enough of it to be useful, at least
19:03:56 clayg: I haven't studied your long gerrit comment yet. any highlights?
19:04:06 on which one - the dispersion-report?
19:04:26 https://review.openstack.org/#/c/141452/10 you say "Sigh, I'm not sure we're entirely done with this :("
19:04:29 * clayg didn't realize at first we already have a dispersion report that has nothing to do with ring placement
19:04:32 adding overload
19:04:47 notmyname: oh yeah that... i'm sorta bummed how overload ends up working in practice
19:04:53 how so?
19:05:27 notmyname: well i guess most of the time you want the old as-unique-as-possible placement until you're either a) doing a topology migration or b) have mostly full disks
19:06:06 it's nice to have the option now, but in the general sense I'm still seeing rings that don't look entirely unreasonable getting more balance and less dispersion unless you crank overload pretty high
19:06:10 it's as if we can't write a single deployment config that works for everyone!
19:06:33 but I know in the general sense that a really high overload is not really what you want because lots of full disks sucks pretty bad
19:06:41 you can please all of the people some of the time and some of the people all of the time but....
19:06:42 hmm
19:06:44 So can't you leave the overload at 0 until migrating? I just think we need good documentation on it.
19:06:50 notmyname: seriously - lets go back to requires 5 zones and 100 disks minimum
19:06:54 lol
19:07:43 clayg: for what mattoliverau said. is that what you'd recommend to people? leave it at 0 until it's needed? and docs?
19:07:47 notmyname: also torgomatic has a few rings in his pocket that don't seem to do anything reasonable until you add a bit of overload - and that seems wonky - but he has his debugging stuff now so it's gunna be great
19:08:01 notmyname: i think actually a little overload is a great thing
19:08:39 ya, that makes sense to me, from how I understand it
19:08:39 notmyname: I think it's easier to give up dispersion when you're facing full disks than to try and add overload once you've already got a bunch of weight
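A rough sketch of the workflow being discussed, for readers following along: the command names come from the swift-ring-builder CLI, the builder file name and the 0.1 value are placeholders, and the dispersion command is the one being added by the report patch still under review above, so its exact name and output may differ.

    # Leave overload at its default of 0 until a topology migration or
    # nearly-full disks force the builder to trade dispersion for balance,
    # then add a little and rebalance.
    swift-ring-builder object.builder set_overload 0.1
    swift-ring-builder object.builder rebalance

    # The report from https://review.openstack.org/#/c/141452/ is meant to
    # show how replicas spread across failure domains - distinct from the
    # existing swift-dispersion-report tool that probes a running cluster.
    swift-ring-builder object.builder dispersion --verbose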
19:09:03 torgomatic: what is the current state of docs for overload?
19:09:18 notmyname: I wrote some words in the ring overview doc
19:09:33 notmyname: the part that bugs me is that we should be able to calculate some of this up front and tell people when they're gunna have a bad time - giving them knobs and letting them fight it out with ring placement turns out to be a real try-and-see mess
19:09:39 torgomatic: anything on config options? man page updates?
19:09:46 notmyname: i also wrote words for the dispersion change
19:09:52 ok
19:10:04 notmyname: no, I didn't update the man page; probably should, though
19:10:06 mattoliverau: but more words is fine - i'm just not sure what to say at this point
19:10:10 there are no config options
19:10:40 swiftstack just needs to write another in depth blog post on it :P
19:10:43 torgomatic: right. config isn't what I meant. something in the deployment guide or whatever doc has that
19:10:49 mattoliverau: another book...
19:11:04 notmyname: lol, yeah or that.. so typey typey
19:11:50 clayg: looks like https://review.openstack.org/#/c/145970/ is the current end of that patch chain (ie 2 patches not landed). anything else expected?
19:12:16 i don't think the problem is a lack of words really; but if we don't know the problem, trying to explain to people why it's hard probably doesn't hurt
19:12:34 yup. that makes a lot of sense
19:12:42 the idea tho is that I'm going to be able to use these knobs in the controller to just "always do the right thing" and I'm not sure what that is yet :\
19:12:57 I have confidence in you ;-)
19:13:21 notmyname: I think mattoliverau was going to update the commit message on that one for me ;)
19:13:30 but so far yeah - that's the best I've got
19:13:34 clayg: lol
19:13:39 anything else expected after the 145970 patch lands? specifically related to overload or ring dispersion?
19:14:28 ie anything currently in progress that you haven't pushed yet?
19:14:47 cschwede: you gotta any knowledge to drop on us regarding ring placement?
19:15:23 clayg: nothing that you don’t know yet
19:15:36 For container sync, I was asking if the community thought there were existing Swift technology limitations inhibiting its adoption for public cloud (enterprise scale) and relative effort to address if so.
19:15:56 swift-deployer: that's not the topic right now
19:16:12 swift-deployer: let's come back to that at the end of the meeting
19:16:18 I apologize.
19:16:22 Thank you.
19:16:56 clayg: so no more expected patches? (because I'm driving towards the next release)
19:17:24 *I* want to talk about limitations on container-sync! https://review.openstack.org/#/c/103778/
19:17:33 notmyname: no more ring patches from me until i get smarter
19:17:42 ok, thanks
19:17:53 notmyname: the ring debugger bits are useful - i have other patches that I think would look good in the next release - but I'm all ring'd out
19:18:04 clayg: torgomatic: thanks for working on this and everyone for getting it merged
19:18:16 #topic next release
19:18:25 with the ring placement changes!
19:18:42 clayg: I saw that the other patch landed too. the container replication one
19:19:10 notmyname: cschwede for working on it too! and also for tricking torgomatic and me into breaking it in the first place! (where breaking it ~= making it not suck when adding failure domains)
19:19:27 thanks cschwede!
19:19:36 notmyname: oh did it? with the large out of date or whatever - yeah that's good then
19:19:49 notmyname: maybe I don't have any other known bugs with fixes to merge
19:19:53 anything else from anyone on patches that should land before cutting a release?
19:19:58 you’re welcome! and thank you too for working on this!
19:20:06 notmyname: clayg: yes that has merged (large out of date containers)
19:20:06 after https://review.openstack.org/#/c/145970/
19:20:18 * clayg group hugs everyone
19:20:37 I think after https://review.openstack.org/#/c/145970/ lands then we cut a release to get it out there for everyone. ie next week. 2.2.2
19:20:37 * clayg wonders if we should make "I survived working on the ring" t-shirts?
19:21:04 everyone ok with that? a 2.2.2 release next week?
19:21:05 * peluse wishes he has contributed now :(
19:21:27 peluse: oh, I'm getting to you. you're up next ;-)
19:21:47 well, I meant 'had' so I could get a ring t-shirt :)
19:21:48 clayg: yeah! i’m in for it! ;)
19:21:52 notmyname: what about https://review.openstack.org/#/c/144432/
19:22:08 nm, I guess that's included by dependency
19:22:12 ya
19:22:31 that and the child patch both land before a release
19:23:31 I hear no objections or other patches that need to land....
19:23:56 I'll talk to ttx later and get the machinery set up. go go gadget openstack process
19:24:11 (sorry for the american children's tv show nostalgia)
19:24:24 ok, next up
19:24:29 #topic ec status
19:24:36 rock n roll
19:24:38 peluse: do we have read/write done yet
19:24:53 #link https://trello.com/b/LlvIFIQs/swift-erasure-codes
19:24:57 is all up to date....
19:25:16 reconstructor still being overhauled but coming along very nicely, should be able to push something soon
19:25:28 ok
19:25:34 tsg got the eventlet guys to do another release so we can unblock the completion of PUT so that's coming very soon
19:25:43 and clayg has been doing ring stuff and not GETs. and that's ok
19:25:53 he already has a review for the new eventlet version
19:26:06 eventlet 0.16.1 FTW!
19:26:11 and yeah, when clayg gets done messing around with rings, that'll be the GET side of things
19:26:12 ah, interesting. I overheard something from -infra people about a new eventlet problem. I need to check if there are any problems there
19:26:36 yeh, it was something about building from source
19:26:50 notmyname: something fixed back in oct-ish was on greenlet's __del__ going maximum recursion - all fixed up in the new hawtness
19:27:10 cool
19:27:26 yeah, what we needed was the ability to do more than 1 100-continue and that was there but then the unrelated build problem, thus the new one
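To make the above concrete, the unblocking step being described is just a dependency bump; a sketch, assuming the fixes landed in eventlet 0.16.1 as stated above (the lower bound actually proposed in the Swift review may differ):

    # Upgrade eventlet to pick up the greenlet __del__ recursion fix and the
    # multiple 100-continue support needed for the erasure-code PUT path.
    pip install --upgrade 'eventlet>=0.16.1'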
19:27:44 peluse: the EC section of https://wiki.openstack.org/wiki/Swift/PriorityReviews is up to date too?
19:27:51 anyway, that's about it. reviews page is also up to date, Yuan has a few that are WIP and map back to trelli
19:27:52 gotcha
19:28:01 great, thanks
19:28:08 trello that is
19:28:16 trelli is the plural?
19:28:57 #topic swiftclient
19:29:02 /justbecause
19:29:35 anything here? there's a couple of outstanding bugs listed on the priority reviews page. and has anyone looked at openstack-sdk work recently?
19:29:37 clayg: you now what your commit message RE: 145970 ;)
19:30:15 mattoliverau: man, that is a *fine* commit message - you're like a poet
19:30:20 lol
19:30:38 notmyname: i'm not sure the openstack-sdk thing is going to pan out :\
19:31:11 certainly not something we're spending much time on, as a group
19:31:19 clayg: why thank you, I used mostly your own words from comments you wrote inline, so there is a poet inside of you somewhere :P
19:31:46 #topic open discussion
19:32:18 notmyname: it's just gunna be hard and we still have a lot of work to do from swiftclient.services on up; plus I think the dependency is gunna be annoying and we never discussed a deprecation strategy for the existing swiftclient.client except the nebulous idea that we'd try to stop using it as the sdk depends got better ish?
19:32:32 I'm working on setting up hackathon details. invites should be public next week
19:32:42 notmyname: there's at least one other swiftclient bug fix i would add to the list if thats ok https://review.openstack.org/125759
19:32:44 clayg: oh I totally agree
19:32:48 acoles: yes
19:32:58 do you have the hackathon location yet?
19:33:00 notmyname: I think acoles and I have some outstanding patches to swiftclient that would be pretty good - joel is trying to fix something with downloading huge containers
19:33:11 oh - hi acoles !
19:33:23 clayg: howdy, yes there's joel's stuff too
19:33:26 brnelson: yes. san francisco
19:33:51 acoles: sorry i haven't been working on fast-POST; fwiw at some point I decided you were right about everything and the crushing blow to my ego has been enough to keep me easily distracted by other things
19:34:27 clayg: lol. hey, you may yet be proven right, i sometimes wake up in a cold sweat about it all :)
19:34:47 swift-deployer: what were you asking about container sync?
19:34:55 clayg: i still have a bunch of tests to write
19:35:13 I was asking if the community thought there were existing Swift technology limitations inhibiting its adoption for public cloud and relative effort to address if so.
19:35:14 acoles: maybe while you're still being frustrated by lack of useful contribution you could update the spec with your current line of work so I can at least approve that as the most likely path forward to eventual success
19:35:46 *I* have opinions about how to make container sync suck less - they are very similar to my opinions on how to make the reconciler scale better
19:35:47 swift-deployer: mostly it's along the lines of cluster interconnect capacity.
19:36:07 oh... well yeah... you need lots of bandwidth - duh
19:36:08 clayg: (1) think real hard (2) type in better code
19:36:20 clayg: will do, i was holding off doing so in case i crashed and burned with the actual code
19:36:26 can you please elaborate on that?
19:37:20 swift-deployer: if you've got a billion objects or lots of TB/PB to sync, it can take a long time.
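For readers who haven't used container sync, the feature under discussion is configured per pair of containers; a minimal sketch following Swift's documented container-sync setup, where the realm and cluster names must already exist in each cluster's container-sync-realms.conf and every name below is a placeholder:

    # On cluster 1: point container1 at the remote container and set a shared
    # key. The container-sync daemon then walks the container listing and PUTs
    # each object across the inter-cluster link, which is why the bandwidth
    # and object counts mentioned above dominate how long a full sync takes.
    swift post -t '//realm_name/cluster2_name/AUTH_account/container2' \
        -k 'secret_key' container1

    # On cluster 2: the mirror-image command makes the sync two-way.
    swift post -t '//realm_name/cluster1_name/AUTH_account/container1' \
        -k 'secret_key' container2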
19:37:36 acoles: nah, it's gunna be great, even if the code is hard it'd be worth it to have a plan written down that I believe should theoretically be solvable - my previous thinking I have finally proven to myself was flawed - it was an interesting proof, but not terribly so since it only proved I can't do it the way I wanted :'(
19:38:40 notmyname: if anyone is interested in making container-sync and the reconciler faster they should consider how many 404's you get when you send a PUT with an x-timestamp and what happens if you get a 409 (I already have that x-timestamp) from a node in the proxy's connect_put_nodes
19:39:04 notmyname: then review https://review.openstack.org/#/c/103778/
19:39:40 clayg: so are you ok with me pushing a revised spec over your version?
19:40:12 acoles: my version was shit, everything in there is garbage, I'm an idiot - anything is better than what's there, what's there won't work
19:40:28 acoles: so - yeah knock yourself out bro!
19:40:39 acoles: i'd consider it a kindness
19:41:01 clayg: ok that sounds like a great commit message :D
19:41:07 lol
19:41:14 clayg: I added it to the priority reviews page
19:41:20 I think clayg needs a hug
19:41:40 clayg: actually fwiw somewhere some good stuff cross fertilised imho
19:41:50 notmyname: the x-timestamp thing? meh, it doesn't need +2's as much as more people telling me I'm an idiot
19:42:04 I know what the problem is - i just need more eyes to flesh out the solution
19:42:12 mattoliverau: I'll have to take your hug back to clayg next week ;-)
19:42:29 notmyname: you do that
19:43:03 ok, let's call it
19:43:11 thank you everyone for coming and participating
19:43:15 thank you for working on swift
19:43:31 (I get to say nice things about you in my LCA talk today)
19:43:39 yay
19:43:39 #endmeeting