Friday, 2014-09-05

00:02 <notmyname> pandemicsyn2: now there is https://pypi.python.org/pypi/gertty/1.0.0 instead of fgerrit
00:09 <openstackgerrit> A change was merged to openstack/swift: Avoid usage of insecure mktemp() function  https://review.openstack.org/109793
00:10 <clayg> it's cute because tty - i get it
00:16 <mattoliverau> and gertty works offline, so you can review while flying ;)
00:19 <kyles_ne> is there anyone here who could answer a few questions about how glance is supposed to handle swift as a backend?
00:20 <kyles_ne> (I asked on glance channel, still waiting for a response there, figured I'd ask here too)
00:31 <clayg> depends on how specific the question is i 'spose
00:31 <clayg> did you ask *the question* over there - or just ask if you can ask?
00:32 <clayg> cause you should definitely ask here, dunno if we'll be able to help tho
00:35 <tab__> when will erasure codes be available in swift?
00:38 <torgomatic> tab__: when they're done
00:39 <torgomatic> sounds flippant but I'm actually serious; I hate giving out dates because they're never right
00:39 <torgomatic> either the code lands after the date and people get cranky because it's "late", or it lands before the date and I look like I was sandbagging
00:54 <zaitcev> clayg: 9 to 1, this guy had ntpd die on him - https://bugs.launchpad.net/swift/+bug/1365330
00:54 <zaitcev> clayg: sorry, it's the "cyclic replication" bug
00:56 <clayg> tab_: just ask notmyname he's better at gauging the "tone" of torgomatic's "nope" when he asks if it's done yet
01:14 <openstackgerrit> A change was merged to openstack/swift: Spelling mistakes corrected in comments.  https://review.openstack.org/118701
02:10 <openstackgerrit> Matthew Oliver proposed a change to openstack/swift: Add concurrent reads option to proxy  https://review.openstack.org/117710
03:13 <openstackgerrit> A change was merged to openstack/swift: Small Fix for FakeServerConnection  https://review.openstack.org/117679
05:52 <openstackgerrit> Matthew Oliver proposed a change to openstack/swift: Treat 404s as 204 on object delete in proxy  https://review.openstack.org/114120
06:22 <openstackgerrit> Matthew Oliver proposed a change to openstack/swift: Treat 404s as 204 on object delete in proxy  https://review.openstack.org/114120
06:51 <openstackgerrit> Jamie Lennox proposed a change to openstack/swift: Use identity_uri instead of auth fragments  https://review.openstack.org/119300
07:04 <mattoliverau> OK my brain is failing to review code, I think I've been staring too long. I'm going to call it a day, have a great weekend all.
07:58 <openstackgerrit> Lin Yang proposed a change to openstack/swift: Change method _sort_key_for to static  https://review.openstack.org/119312
08:13 <mandarine> Good morning, there :)
08:15 <mandarine> I have a bit of a problem with a "Not Found" (404) swift container that still appears in the account :(
08:19 <mandarine> Like, if I curl http://swift.url/v1/AUTH_myaccount | grep "mycontainer", it appears
08:19 <mandarine> But when I curl http://swift.url/v1/AUTH_myaccount/mycontainer , I get a 404 from the container server :(
08:20 <mandarine> I feel like this is a corrupted account and that the file should not appear. But is there a way to check this? :x
08:37 <ahale> mandarine: i would have a poke around the cluster with swift-get-nodes, see if all account replicas are consistent, check replication.. if it's been deleted, try to track down the delete line and work out what happened that way
08:40 <mandarine> all account replicas are the same, yup :(
08:41 <mandarine> I'll have a look at swift-get-nodes
08:45 <mandarine> Thank you very much :)
08:45 <mandarine> The problem is not solved at all but I can check more deeply
09:32 <vr2> in your office
12:19 <tab_> on what basis does Swift decide that a disk has gone bad - is this only on the basis of the swift-audit script, or is there also some read/write response factor, where if it takes too long the disk is declared dead?
12:53 <btorch> tab_: swift-drive-audit
12:54 <btorch> tab_: although, we have recently encountered some scenarios where there is xfs corruption on some drives, where the kernel doesn't specify the device block or shut down the filesystem
12:55 <btorch> tab_: so you probably want to have some other monitoring checks .. this is on ubuntu precise with dell perc cards btw
12:55 <btorch> tab_: https://github.com/pandemicsyn/stalker
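The kind of extra check btorch suggests is roughly what swift-drive-audit itself does: parse kernel logs for drive errors and flag failing devices. A minimal sketch of that idea (NOT swift-drive-audit's actual code; the keyword list, device regex, and threshold are made up for illustration):

```python
import re
from collections import Counter

# Sketch of the swift-drive-audit idea: scan kernel-log lines for error
# keywords, count errors per block device, and report devices over a
# threshold so they can be unmounted/removed from service.

DEV_RE = re.compile(r"\bsd[a-z]+\b")
ERROR_WORDS = ("error", "failed command")   # made-up minimal pattern list
MAX_ERRORS = 1

def bad_devices(log_lines, max_errors=MAX_ERRORS):
    counts = Counter()
    for line in log_lines:
        low = line.lower()
        if any(word in low for word in ERROR_WORDS):
            for dev in DEV_RE.findall(line):
                counts[dev] += 1
    return [dev for dev, n in counts.items() if n > max_errors]

log = [
    "kernel: sdb: failed command: READ FPDMA QUEUED",
    "kernel: Buffer I/O error on device sdb, logical block 0",
    "kernel: sda: link up",
]
print(bad_devices(log))   # ['sdb'] crossed the threshold; sda did not
```

The real tool also handles unmounting and comments the device out of /etc/fstab; this only shows the detection half.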
12:59 <tab_> btorch: thx. yes, i think possible disk end-of-life can be considered much more in advance
12:59 <tab_> for example, testing write speed under priority; if a controller goes bad, this would slow down over time
13:00 <tab_> btorch: maybe also a question for you. what does swift do in case a disk drive, or the cluster, is full?
13:00 <tab_> does it still enable reads when writes are not possible? how does swift respond to this situation?
13:00 <btorch> tab_: all drives or a couple of drives within the cluster?
13:01 <tab_> if we can cover both scenarios, i would be grateful
13:01 <tab_> i guess when a couple of disks are full, data would be moved to empty ones?
13:01 <tab_> what if moving data is not possible?
13:07 <btorch> tab_: brb have to look at an issue
13:08 <tab_> ok thx
13:12 <btorch> tab_: ok so first, you don't want to have all drives full :) so consider a drive full at 75%-80%, so you have enough time (hopefully) to get new zones in or scale current ones
13:12 <tab_> ok i understand, but shit happens :)
13:14 <btorch> tab_: when a drive is 100% full, either because of dark data, too many handoff objects .. etc, swift will just place an object that should have gone to that drive onto another one (handoff)
13:14 <tab_> is the number of handoff nodes per zone configurable?
13:19 <tab_> what if even the hand-off nodes are full? I just want to make sure that Swift still allows reading data from the cluster?
13:20 <btorch> tab_: no you can't configure that as far as I know
13:21 <btorch> tab_: I think on read it will try the primary nodes and 3 other handoff nodes, and if it can't find the object you will get a 404
13:22 <btorch> tab_: I think deletes only go to the primary nodes
13:25 <btorch> tab_: I think puts/posts also just do up to 6 .. I could swear there was an option for that in the proxy though
13:26 <tab_> btorch: what do you mean up to 6? :) i don't think i follow you here
13:26 <wer> for each replica there is a corresponding handoff that would get hashed AFAIK.  It's not "node" specific but rather object specific.  But it would likely be another node, disk, failure domain.  And each zone would have handoffs for each replica.
13:26 <btorch> tab_: reads should not be an issue, but obviously just because you as the end user are not trying to write to the system doesn't mean that the swift background jumps like replicators, updaters ...etc are not trying to
13:27 <wer> swift-get-nodes will compute the locations for you if you are curious.
13:27 <btorch> background jobs :)
13:27 <tab_> ok. i understand. thx
13:27 <tab_> yes there is always someone writing to the wall :)
13:28 <wer> swift-get-nodes <ring> objects account would show you all the locations (6 if replication is 3x: 3 locations and 3 handoff locations...)
13:29 <btorch> swift-get-nodes <account ring> account_hash :) if you need a container then <container ring> account container .. and so on
13:31 <tab_> btorch: even a "just"-read operation also wants to be logged, which is followed by a write operation
13:32 <tab_> so i guess in a full system weird things are going on?
13:33 <btorch> tab_: sorry not sure I follow you there
13:34 <btorch> tab_: yeah weird things happen :)  you could say that
13:34 <tab_> if someone calls a READ Swift API call to get an object, then this is logged; in case there is no space left on the devices... then possibly the read is also disturbed
13:37 <btorch> tab_: nah never seen that, if you have 3 replicas it will read from the first primary and (if the object is not there for any reason) it will try another node where the object should be located
13:39 <ahale> btorch: isn't it just that get-nodes shows the first 3 handoffs by default, but really every possible other drive is a potential handoff
13:39 <btorch> ahale: yeah true .. remember the time get-nodes showed all handoffs :)
13:40 <ahale> yeah :)
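The primary/handoff shape being discussed can be sketched with a toy model. This is NOT Swift's real ring code (the real ring assigns partitions to weighted devices across failure domains); it only illustrates the point ahale and wer are making: an object path deterministically maps to a few primary devices, and every remaining device is an ordered handoff candidate, of which get-nodes prints only the first few.

```python
import hashlib

# Toy model of the lookup swift-get-nodes performs (illustrative only).

DEVICES = ["d%d" % i for i in range(8)]   # pretend cluster of 8 disks
REPLICAS = 3
PART_POWER = 4                            # 2**4 = 16 partitions

def get_nodes(account, container, obj):
    path = "/%s/%s/%s" % (account, container, obj)
    digest = hashlib.md5(path.encode()).hexdigest()
    part = int(digest, 16) >> (128 - PART_POWER)   # top PART_POWER bits
    # Walk the device list from a partition-derived offset: the first
    # REPLICAS devices are primaries, the rest are ordered handoffs.
    order = [DEVICES[(part + i) % len(DEVICES)] for i in range(len(DEVICES))]
    return order[:REPLICAS], order[REPLICAS:]

primaries, handoffs = get_nodes("AUTH_test", "cont", "obj")
print(primaries)      # 3 primary devices
print(handoffs[:3])   # the first 3 handoffs (what get-nodes shows by default)
```

Note that `handoffs` here covers every non-primary device, matching ahale's point that any other drive is a potential handoff.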
13:46 <wer> sorry if I was busting in on your conversation guys :)
13:47 <btorch> wer: nah the more the better :)
14:03 <wer> cool :)
14:59 <openstackgerrit> Lorcan Browne proposed a change to openstack/swift: Update swift-init restart to accommodate unsuccessful stops  https://review.openstack.org/116944
16:01 <dfg> clayg: finally got around to cleaning up that slowdown middleware. it's not much but here you go: https://github.com/dpgoetz/slowdown
16:03 <dfg> clayg: it's a little weird because of the feeding-out-bytes thing, but I had that there from when I was testing the ranged retry on failures for GETs.
16:18 <openstackgerrit> Tushar Gohad proposed a change to openstack/swift: EC: Make quorum_size() specific to storage policy  https://review.openstack.org/111067
16:26 <notmyname> good morning, world
16:48 <peluse> guten morgen
16:54 <openstackgerrit> John Dickinson proposed a change to openstack/swift: make the bind_port config setting required  https://review.openstack.org/118200
16:55 <notmyname> pep8 fix and a better log message.
17:58 <clayg> dfg: that's awesome!
18:01 <dfg> clayg: glad you like it :)
18:03 <clayg> dfg: well i know how much you're into slo'ing things down
18:05 <dfg> ya. i'm good at making things slow and overly complicated. glange loves pointing that out :)
18:15 <openstackgerrit> Thiago da Silva proposed a change to openstack/swift: moving object validation checks to top of PUT method  https://review.openstack.org/115995
18:25 <openstackgerrit> A change was merged to openstack/swift-bench: Work toward Python 3.4 support and testing  https://review.openstack.org/118812
18:43 <openstackgerrit> Samuel Merritt proposed a change to openstack/swift: Error limit the right node on object PUT  https://review.openstack.org/119442
18:47 <clayg> glange!  I miss glange!
18:48 <clayg> dfg: do you guys have travel figured out for this fall?  Who's going to the hackathon; who's going to the summit?
18:48 <glange> the paris summit?  redbo might be going to that, I think
18:49 <dfg> clayg: i'm pretty sure i'll be going
18:49 <redbo> hurricanerix_ is going to the hackathon, not sure if anyone else is.  dfg and I are going to the summit.
18:49 <peluse> I'm heading to both
18:50 <glange> where is the hackathon?
18:50 <hurricanerix_> glange: Boston
18:50 <glange> lame :)
18:51 <glange> a bunch of openstack developers in Paris sounds pretty romantic
18:51 <peluse> we're going on a dinner cruise I think
18:51 <glange> somebody should warn the ladies of France :)
18:51 <notmyname> eating snails in the shadow of the eiffel tower
18:52 <clayg> you mean sheading bikes?
18:52 <clayg> wait... i don't think that's the verb form of that :\
18:52 <clayg> glange: you're a language snob help me out here
18:53 <glange> well, English isn't a fully constructive language
18:53 <glange> so you have to look up the word to see if it is valid
18:53 <clayg> torgomatic: what do you call two crows sitting on a tree branch?
18:53 <glange> if you were speaking in Esperanton, it and any other constructed word would be valid
18:53 <openstackgerrit> Thiago da Silva proposed a change to openstack/swift: moving object validation checks to top of PUT method  https://review.openstack.org/115995
18:53 <glange> Esperanto
18:55 <glange> http://www.newyorker.com/magazine/2012/12/24/utopian-for-beginners <-- clayg: that is an example of a constructed language that went too far
18:55 <clayg> glange: i hear i18n support is coming back into swift - we could work on that Esperanto translation
18:55 <redbo> I don't know if you can talk sensibly about conjugations of a noun you've just verbed.
18:55 <clayg> glange: oh I read that, freaky!
18:56 <glange> clayg: yeah, that is a strange story
18:56 <glange> I really think Esperanto (or something like it) is a good idea
18:56 <glange> you learn your native language plus one easy-to-learn international language and you are all set
18:57 <peluse> clayg, so what about the two crows?
18:58 <clayg> peluse: torgomatic: attempted murder
19:02 <clayg> awww man... nothing?
19:03 <notmyname> clayg: :-)
19:03 <tdasilva> clayg: I had to look up the explanation for that one: http://www.mirror.co.uk/news/weird-news/worlds-geekiest-jokes-explained-after-2051303
19:21 <peluse> clayg, OK, I get it now.  A little slow today - a murder of crows :)
19:24 <glange> http://www.thealmightyguru.com/Pointless/AnimalGroups.html <-- there is a lot of material to make similar jokes with :)
19:25 <glange> for example, what do you call one million midges?  an overbite
19:25 <notmyname> a group of swifts can be called a "screaming frenzy"
19:26 <notmyname> (from http://identify.whatbird.com/obj/232/overview/Black_Swift.aspx)
19:26 <swifterdarrell> notmyname: sounds like a group of swift developers
19:26 <notmyname> indeed
19:26 <notmyname> I'm actually trying to work that into the next swift tshirts I make :-)
19:27 <zaitcev> A recent inventory found that I already possess 3 Swift T-shirts.
19:28 <notmyname> zaitcev: then you need 4 more so you only have to do laundry once a week instead of every 3 days ;-)
19:28 <notmyname> zaitcev: do you have a Manila shirt yet?
19:28 <swifterdarrell> notmyname: really?? I just wear my 3 ten days each and do laundry monthly...
19:29 <notmyname> swifterdarrell: to each his own
19:29 <swifterdarrell> ;)
19:29 <zaitcev> notmyname: with patches like mine it may be too soon to think about a Manila shirt. Also, they would probably just use envelopes...
19:30 <notmyname> heh
19:32 <zaitcev> swifterdarrell: I use my Colorado shirt as a night gown. It's quite soft, for a swag tee.
19:34 <notmyname> http://lwn.net/SubscriberLink/610769/d7957916159f2fff/ <-- "Rethinking the OpenStack development process". it's a summary of recent mailing list threads, but something that will affect swift more if we move to the single-release-with-milestones release plan
19:45 <torgomatic> so we build a nut dispenser that dispenses two nuts per core reviewer, and when you submit a patch you tie its number to a squirrel and release it near the nut dispenser...
19:46 <torgomatic> but then, the patch queue for Swift is nearing 100, so who am I to talk?
20:05 <tdasilva> notmyname: i was checking the object migration patch and saw your earlier comment on using WSGIContext. I'm still not sure when to use it and when not to use it... do you have any pointers?
20:08 <notmyname> tdasilva: it has to do with when things are created. the middleware class is instantiated at startup and then repeatedly called. so if you have any state stored in the class, it will mess up under concurrency. also, if you need to reliably get the response codes from things to the right on the pipeline, then you have to collect that carefully (because of how generators work)
20:08 <notmyname> tdasilva: WSGIContext takes care of all that for you
20:10 <notmyname> tdasilva: the most common thing is when the middleware needs to take an action based on the response codes. the _app_call() in WSGIContext does the right thing
20:12 * tdasilva nods...
20:12 <notmyname> tdasilva: https://gist.github.com/notmyname/c8ee8ef2bba474cf1632
20:12 <notmyname> tdasilva: it's a lot easier to explain on a whiteboard :-)
20:13 <notmyname> but the important part is that the x+=1 isn't executed until .next() is called on the iterator
20:13 <notmyname> so in wsgi, the response is set to some default (maybe a 200), so if you ask for it before you start iterating over the response_iter, then you don't actually have the right value
20:14 <notmyname> so WSGIContext grabs the first part of the response iter to make sure that the response codes are set properly, then stores that for later when the response actually needs to be sent to the client (client starts reading it)
20:15 <notmyname> if your middleware doesn't do that, then you'll get whatever the default response code is because the whole stack of the pipeline generators isn't executed until the response starts to be read (basically)
20:16 <notmyname> tdasilva: make sense?
20:18 <tdasilva> yes, I think so... :-) I'm looking at the WSGIContext code and the existing middlewares to see if I understand correctly
20:18 <tdasilva> I guess the difficulty was trying to see the difference in the existing code, where some middleware uses it and others don't
20:19 <tdasilva> I'm going by what you said, where it depends on whether the middleware takes action on the response or not
20:19 <notmyname> tdasilva: the biggest difference would be if the middleware is doing something to the response based on the response code.
20:19 <notmyname> right
20:19 <tdasilva> for example, name_check just does some checks and returns bad request, so no need for wsgicontext
20:23 <notmyname> tdasilva: right. ratelimit is similar. crossdomain doesn't need one because it passes through anything not on its configured path
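The lazy-generator behavior notmyname walks through (in the spirit of the gist linked earlier) can be shown with a toy snippet; the names here are illustrative stand-ins, not Swift's actual WSGI plumbing:

```python
# Toy demonstration of why middleware must start iterating the response
# before trusting the status: code before a generator's first yield does
# not run until next() is called.

status = ["200 OK (default)"]      # stand-in for stored start_response args

def response_iter():
    # this runs only once iteration begins -- like the x += 1 in the gist
    status[0] = "404 Not Found"
    yield b"not found body"

it = response_iter()
print(status[0])        # still "200 OK (default)" -- nothing has run yet

first_chunk = next(it)  # WSGIContext's _app_call() does this step for you
print(status[0])        # now "404 Not Found" -- safe to inspect
```

Middleware that reads the status before that first `next()` sees only the default, which is exactly the failure mode described above.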
20:27 <peluse> torgomatic, tsg needs your +2 again for https://review.openstack.org/#/c/111067/ as he has to rebase to pass jenkins (no code change though).  I just reposted mine
20:27 <peluse> s/has/had
20:30 <tdasilva> notmyname: thanks!
20:36 <tab__> is the default auditor interval 30 min for all of the account/container and object auditors, or can you configure a less frequent interval for the object auditor?
20:58 <torgomatic> peluse: on it
21:23 <openstackgerrit> Samuel Merritt proposed a change to openstack/swift: Pay attention to all punctual nodes  https://review.openstack.org/119484
21:28 <torgomatic> notmyname: if you wanna go poke https://review.openstack.org/#/c/117907/ again, that might be good
21:29 <torgomatic> you liked it before, but py2.6 didn't
21:30 <notmyname> torgomatic: days always have 86400 seconds, right? ;-)
21:30 <torgomatic> notmyname: according to the Python 2.7.7 source, yes :)
21:31 <notmyname> heh. not surprising :-)
21:31 <torgomatic> quoting:   static PyObject *seconds_per_day = NULL; /* 3600*24 as Python int */
21:31 <torgomatic> (it gets filled in later)
21:32 <torgomatic>     seconds_per_day = PyInt_FromLong(24 * 3600);
21:32 <torgomatic> so what I have is at least as correct as Python 2.7 ;)
21:33 <notmyname> ya, it's just easier to point to python when some leap second causes writes to not be ordered
21:54 <notmyname> DefCore community meetings are next week http://lists.openstack.org/pipermail/community/2014-September/000868.html
21:56 <tab__> Does swift treat chunked data, which is written to the same container, as if each chunk were its own object, meaning it would write them to different disks? Some sort of striping chunked data across disks... ?
21:56 <notmyname> tab__: what do you mean by chunked data? pieces of a large object manifest? or an object sent with transfer-encoding: chunked?
21:58 <tab__> notmyname: in the sense of pieces of a larger object manifest ...
22:00 <notmyname> tab__: every "thing" referenced by a large object manifest (either directly in a static manifest or indirectly in a dynamic manifest) is treated as a separate object in the system that is individually placed and referenceable via the API
22:03 <tab__> would writing each chunk to a different container (i guess each has its own sqlite database) speed up writes? i am thinking theoretically...
22:05 <notmyname> tab__: in general, splaying the data across different containers (in the same or different swift account) will help improve speed.
22:06 <tab__> notmyname: thx. is there any upper limit / recommended number of objects per container?
22:06 <notmyname> tab__: it depends :-)
22:07 <notmyname> tab__: suppose you have one container with 100 million objects in it. that container will be slower to write to because the container listing will take longer than a container with 100 objects in it
22:07 <tab__> 1 million, 2 million? :)
22:07 <notmyname> importantly this only has to do with write speed. read performance is NOT affected
22:08 <tab__> aha great. good to know
22:09 <notmyname> tab__: if you are using flash drives for containers (you should), your "limit" will be a lot. you could have tens of millions of objects. or more.
22:09 <notmyname> tab__: if you are using spinning drives, your limit will be something like 1-2 million
22:09 <tab__> ok. thx for info.
22:10 <notmyname> tab__: a lot of times I recommend to people to use a lot of containers for this. ie design your naming scheme to splay across a lot of different containers
22:10 <notmyname> tab__: one way is, if you are using a hash to name the objects, then use the first few bytes of the hash in the container name. so if you use 3 hex digits, that will splay it across 4096 containers
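notmyname's hash-prefix naming scheme can be sketched in a few lines; this is illustrative only, and the `data_` container prefix is a made-up convention:

```python
import hashlib

def container_for(obj_name, hex_digits=3):
    """Pick a container from the first few hex digits of the object
    name's hash; 3 hex digits splays writes across 16**3 == 4096
    containers while keeping placement deterministic per object."""
    digest = hashlib.md5(obj_name.encode()).hexdigest()
    return "data_" + digest[:hex_digits]

print(container_for("photos/2014/09/05/cat.jpg"))  # e.g. data_ followed by 3 hex digits
print(16 ** 3)                                      # 4096 possible containers
```

Because the prefix comes from a hash, concurrent PUTs spread roughly evenly across the containers, which is the write-speed win described above.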
22:13 <tab__> is there also some upper limit on the number of containers over which you splay sliced data - meaning there is, i guess, some number of containers beyond which you can not gain any more speed...
22:13 <notmyname> tab__: so you should do some testing on your own, of course, based around your specific workload. if you are doing a lot of concurrent object PUTs, then using more containers is good. if you have some static set of data that isn't updated many times per second, then you might not need to have a lot of containers
22:13 <tab__> aha ok
22:13 <notmyname> tab__: probably. I'd guess it would be <number of spindles in the container ring> / <number of container replicas> (on average)
22:14 <notmyname> tab__: ya, and it's important to know that the "limits" I'm talking about here are more like "the point at which you can't do more than 10 writes per second to a container"
22:15 <notmyname> tab__: eg if your use case is only doing a hundred writes per minute, then you can have _really_ large containers before you see an impact
22:16 <notmyname> tab__: of course, large containers are harder to replicate, so splaying them helps replication too :-)
22:16 <torgomatic> also note that that's *average* speed; you can still write in fast bursts and the container updates will get saved to async-pending files on the object servers, then pushed to the containers eventually
22:16 <notmyname> torgomatic: ++
22:19 <tab__> great info
22:19 <tab__> thx
22:19 <notmyname> tab__: may I ask what you're building your swift cluster for? I'm always curious about how and why people are using swift
22:20 <notmyname> also, fill out the user survey with your swift info! it will really help. http://openstack.org/user-survey/
22:22 <tab__> :) it should be used for telco purposes, where you could expect a lot of user-generated data... and swift is under consideration
22:23 <notmyname> tab__: very cool. please let me know how I can help
22:24 <notmyname> tab__: I can also be reached at me@not.mn
22:24 <tab__> ok. will let you know...

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!