Monday, 2014-03-03

openstackgerritChangbin Liu proposed a change to openstack/swift: Use new style class for ringbuilder's Commands  https://review.openstack.org/7746300:12
*** miurahr has joined #openstack-swift00:19
*** miurahr has left #openstack-swift00:19
*** Midnightmyth_ has quit IRC00:23
*** miurahr has joined #openstack-swift00:38
*** Dharmit has joined #openstack-swift00:48
*** miurahr has quit IRC00:56
*** nosnos has joined #openstack-swift01:28
*** haomaiwang has quit IRC02:22
*** haomaiwa_ has joined #openstack-swift02:23
Anjunotmyname:  does swift provide notification configuration on containers?02:57
openstackgerritJenkins proposed a change to openstack/swift: Updated from global requirements  https://review.openstack.org/7559603:35
*** fifieldt has joined #openstack-swift04:18
*** bvandenh has joined #openstack-swift04:26
*** nosnos has quit IRC04:28
openstackgerritZhang Jinnan proposed a change to openstack/python-swiftclient: Use six.StringIO instead of StringIO.StringIO  https://review.openstack.org/7470504:33
*** nosnos has joined #openstack-swift04:34
*** ppai has joined #openstack-swift04:35
*** fifieldt has quit IRC04:43
*** nshaikh has joined #openstack-swift04:53
*** bvandenh has quit IRC04:58
*** vu has quit IRC05:08
*** haomaiwa_ has quit IRC05:48
*** haomaiwa_ has joined #openstack-swift05:49
*** nosnos has quit IRC06:01
*** nosnos has joined #openstack-swift06:06
*** sungju has quit IRC06:30
*** saju_m has joined #openstack-swift06:31
*** Dharmit has quit IRC06:38
*** Dharmit has joined #openstack-swift06:38
*** Dharmit has quit IRC06:40
*** nosnos has quit IRC06:40
*** chandankumar_ has quit IRC06:45
*** chandan_kumar has joined #openstack-swift06:51
*** chandan_kumar has quit IRC06:55
*** Dharmit has joined #openstack-swift06:57
*** nshaikh has quit IRC06:59
*** dharmit_ has joined #openstack-swift06:59
*** nshaikh has joined #openstack-swift06:59
*** Dharmit has quit IRC07:02
*** nosnos has joined #openstack-swift07:04
*** dharmit_ has quit IRC07:12
*** haomaiwang has joined #openstack-swift07:12
*** haomaiwa_ has quit IRC07:15
*** csd has joined #openstack-swift07:22
*** davidhadas has quit IRC07:35
*** sungju has joined #openstack-swift07:36
*** sungju_ has joined #openstack-swift07:38
*** sungju_ has quit IRC07:39
*** sungju_ has joined #openstack-swift07:40
*** sungju has quit IRC07:41
*** sungju_ has quit IRC07:47
*** nshaikh has quit IRC07:51
*** nshaikh has joined #openstack-swift08:08
*** nshaikh has quit IRC08:15
openstackgerritZhang Jinnan proposed a change to openstack/python-swiftclient: Use six.StringIO instead of StringIO.StringIO  https://review.openstack.org/7470508:15
*** mlipchuk has joined #openstack-swift08:21
*** nshaikh has joined #openstack-swift08:24
*** nshaikh has quit IRC08:25
*** nshaikh has joined #openstack-swift08:26
*** davidhadas has joined #openstack-swift08:27
*** csd has quit IRC08:29
*** nshaikh has quit IRC08:32
*** therve_ has joined #openstack-swift08:33
*** haomaiwang has quit IRC08:35
*** haomaiwang has joined #openstack-swift08:35
*** davidhadas_ has joined #openstack-swift08:36
*** nshaikh has joined #openstack-swift08:37
*** davidhadas has quit IRC08:39
*** saju_m has quit IRC08:40
*** saju_m has joined #openstack-swift08:42
*** therve_ has quit IRC08:45
*** Trixboxer has joined #openstack-swift08:45
*** sungju has joined #openstack-swift08:45
*** Dharmit has joined #openstack-swift08:48
openstackgerritChangBo Guo(gcb) proposed a change to openstack/swift: Add common units constants  https://review.openstack.org/7649008:49
openstackgerritChangBo Guo(gcb) proposed a change to openstack/swift: Replace numeric expressions with units constants  https://review.openstack.org/7652308:50
*** saju_m has quit IRC09:02
*** chandan_kumar has joined #openstack-swift09:03
*** mlipchuk has quit IRC09:03
*** sungju has quit IRC09:04
*** mkollaro has joined #openstack-swift09:20
*** tanee-away is now known as tanee09:21
*** Dharmit has quit IRC09:33
*** saju_m has joined #openstack-swift09:34
*** mrsnivvel has joined #openstack-swift09:41
*** nacim has joined #openstack-swift09:43
*** mlipchuk has joined #openstack-swift10:03
*** Midnightmyth has joined #openstack-swift10:06
*** mkollaro has quit IRC10:14
*** mkollaro has joined #openstack-swift10:21
*** saju_m has quit IRC10:25
*** saju_m has joined #openstack-swift10:41
*** vills_ has joined #openstack-swift10:54
vills_hi, guys. Is there any way to stop synchronization between containers without deleting the container itself?10:55
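For reference, container sync is controlled by the X-Container-Sync-To and X-Container-Sync-Key metadata on the container itself, so clearing the sync target should stop syncing without deleting anything. A minimal sketch with the swift CLI (container name is a placeholder):

    # set an empty sync target; the container and its objects are untouched
    swift post -t "" synced-container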
*** saju_m has quit IRC11:11
*** nosnos has quit IRC11:21
*** saju_m has joined #openstack-swift11:25
*** nshaikh has quit IRC11:43
*** erlon has joined #openstack-swift11:45
*** foexle has joined #openstack-swift12:29
*** ppai has quit IRC12:37
*** mkollaro has quit IRC12:45
*** chandan_kumar has quit IRC12:48
*** chandan_kumar has joined #openstack-swift12:49
*** saju_m has quit IRC12:52
*** chandankumar_ has joined #openstack-swift12:54
*** vills_ has quit IRC12:55
*** chandan_kumar has quit IRC12:57
*** mkollaro has joined #openstack-swift13:08
*** acoles has joined #openstack-swift13:17
*** tristanC has quit IRC13:30
*** tristanC has joined #openstack-swift13:30
*** swifterdarrell has quit IRC13:32
*** swifterdarrell has joined #openstack-swift13:32
*** ChanServ sets mode: +v swifterdarrell13:32
tristanCHello folks!13:40
tristanCI'm not sure what to answer to http://lists.openstack.org/pipermail/openstack/2014-February/005662.html (swiftclient compatibility for grizzly)13:40
tristanCI will test swiftclient 2.0 against a grizzly devstack, but what is the official status on that?13:41
*** creiht has quit IRC13:43
*** miurahr has joined #openstack-swift13:43
*** creiht has joined #openstack-swift13:43
*** ChanServ sets mode: +v creiht13:43
*** clarkb has quit IRC13:44
*** mrsnivvel has quit IRC13:44
*** miurahr has quit IRC13:49
*** tdasilva has quit IRC13:49
*** vills_ has joined #openstack-swift13:50
*** fifieldt has joined #openstack-swift13:51
*** rustlebee is now known as russellb13:56
*** koolhead17 has quit IRC13:57
*** mrsnivvel has joined #openstack-swift13:58
*** tongli has joined #openstack-swift14:01
*** koolhead17 has joined #openstack-swift14:03
*** acoles has quit IRC14:09
*** thomaschaaf has joined #openstack-swift14:12
thomaschaafHey is there a way to get proxy-server to return caching headers? I am running it behind pound.14:12
thomaschaafI simply want to specify that the client should cache as the headers currently don't specify that. Thus chrome fetches it every time14:13
thomaschaafI have WWW -> pound for ssl -> proxy-server set up.14:14
notmynamethomaschaaf: sortof14:15
notmynamethomaschaaf: first (and just a quick sidestep), I'd suggest you look into HAProxy or stud or stunnel for SSL termination. pound, while it works, also prevents use of Expect: 100-continue. not the end of the world, but also not great14:16
notmynamethomaschaaf: here's what you could do to get the headers: add the caching headers you want to save to the object server config file: https://github.com/openstack/swift/blob/master/etc/object-server.conf-sample#L8814:17
notmynamethomaschaaf: then you can save those headers on a per-object basis with a PUT or POST14:18
thomaschaafNa I'd rather just replace pound then14:18
notmynamethomaschaaf: and while that's not always an ideal situation (eg you may want to dynamically set them) it would work14:18
notmynamethomaschaaf: pound vs stud is orthogonal to caching headers14:18
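A minimal sketch of the approach notmyname outlines above; allowed_headers is the real object-server option he links to, while the header value, token, and URL here are illustrative:

    # /etc/swift/object-server.conf -- keep the shipped defaults and
    # append Cache-Control so it is persisted per object
    [app:object-server]
    allowed_headers = Content-Disposition, Content-Encoding, X-Delete-At, X-Object-Manifest, X-Static-Large-Object, Cache-Control

The header can then be set on a PUT or POST and will come back on every GET:

    curl -X POST -H "X-Auth-Token: $TOKEN" \
        -H "Cache-Control: public, max-age=2592000" \
        http://proxy.example.com/v1/AUTH_test/images/logo.png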
thomaschaafOkay thank you notmyname which of the three would you use?14:19
thomaschaafokay so stunnel or haproxy?14:19
notmynamewhat are you using for load balancing?14:19
thomaschaafthere is no loadbalancing going on for now14:19
*** fifieldt has quit IRC14:19
thomaschaafsimply have 3 servers with dns round robin14:20
notmynameok14:20
*** koolhead17 has left #openstack-swift14:20
notmynamethen stud is really simple to set up14:20
notmynamethat's what I'd look at first14:20
thomaschaafbut with stud I can't set headers or am I wrong?14:20
notmynamecorrect. stud only does ssl termination. setting headers is something else entirely14:21
thomaschaafI'd rather have one application do both14:21
*** mkollaro has quit IRC14:21
notmynameI'm not sure if you can. what are the rules you have for setting the headers?14:22
*** peluse has quit IRC14:22
thomaschaafadd_header  Pragma "public";     add_header  Cache-Control "public";     expires     30d;14:22
thomaschaafthat is all I really want :)14:22
thomaschaafany file in our cdn will have a unique file name14:23
*** peluse has joined #openstack-swift14:23
notmynamethomaschaaf: ah, cool. so HAProxy then :-)14:23
ahaleI've never done it, but you should be able to modify headers with haproxy, and the recent dev releases have the stud ssl termination code which is fine14:23
notmynamethomaschaaf: with HAProxy, I'm told that you should be using the release from last December14:24
thomaschaafthank you :)14:24
notmynamethomaschaaf: ya, because of what ahale said14:24
thomaschaafhow do you guys set the http headers?14:25
* notmyname punts to ahale14:25
ahalehehe thats the bit i've never played with14:25
notmynameme neither14:26
ahalethe rspadd and rspdel commands in a backend stanza look like they will help14:26
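An untested sketch of what ahale suggests; rspadd/rspidel are real HAProxy 1.4/1.5 config keywords, but the backend layout and header value are illustrative:

    backend swift_proxy
        # drop any caching headers the backend already set
        rspidel ^Cache-Control:
        # add the desired caching policy to every response
        rspadd Cache-Control:\ public,\ max-age=2592000
        server proxy1 10.0.0.10:8080 check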
thomaschaafNo, I am talking in general, not just with haproxy14:26
thomaschaafbecause it seems awkward not caching at all14:26
ahalewe (rax) put a CDN in front of the stuff we want to cache14:27
thomaschaafcdn being something like varnish?14:27
ahaleyeah pretty much14:27
notmynameahale: akamai is now laughing at you ;-)14:28
*** jamieh has joined #openstack-swift14:29
ahale:)14:29
jamiehhow do you upload a directory with the cli python-swiftclient?14:30
jamiehswift post container ./test is not working14:30
*** ChanServ changes topic to "Current Swift Release: 1.13.0 | Priority Reviews: https://wiki.openstack.org/wiki/Swift/PriorityReviews | Channel Logs: http://eavesdrop.openstack.org/irclogs/%23openstack-swift/"14:31
notmynameSwift 1.13 released http://lists.openstack.org/pipermail/openstack-dev/2014-March/028691.html14:31
jamiehit's `upload` not `post` :)14:31
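For anyone who hits the same confusion: `post` creates or updates containers and metadata, while `upload` walks a directory and PUTs each file as an object. A sketch (auth values are placeholders):

    swift -A http://auth.example.com/auth/v1.0 -U account:user -K secret \
        upload container ./test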
thomaschaafI'm pleased to announce :D14:32
notmynameyay :(14:33
*** mlipchuk has quit IRC14:33
*** mlipchuk has joined #openstack-swift14:34
*** mkollaro has joined #openstack-swift14:44
tristanCnotmyname: Could you please help me on this question: http://lists.openstack.org/pipermail/openstack/2014-February/005662.html (swiftclient compatibility for grizzly) ?14:45
tristanCI did some basic tests with swiftclient 2.0 on grizzly, and it's working...14:46
notmynametristanC: the bump to 2.x included some API changes (in method parameters and CLI options), so the best I can do is "maybe"14:46
notmynameI'm going to be at the OpenStack operators gathering today (https://www.eventbrite.co.uk/e/openstack-march-2014-operators-gathering-tickets-10318369521)14:48
notmynameI'm not sure how much I'll be online today14:48
*** jergerber has joined #openstack-swift14:49
*** ccorrigan has quit IRC14:58
*** bada has joined #openstack-swift15:04
*** lpabon has joined #openstack-swift15:04
*** tdasilva has joined #openstack-swift15:05
*** dmsimard has joined #openstack-swift15:07
*** mrsnivvel has quit IRC15:14
*** acoles has joined #openstack-swift15:36
*** mlipchuk has quit IRC15:36
*** mlipchuk has joined #openstack-swift15:37
*** clarkb has joined #openstack-swift15:41
*** chandan_kumar has joined #openstack-swift15:41
*** mjseger has joined #openstack-swift15:49
mjsegernotmyname: portante: clayg: I'm seeing some weird timing numbers across the proxy/object servers that I'm hoping someone can explain.  specifically...15:52
mjsegerI'm doing a bunch of 1K PUTs and recording the latency numbers for anything > 2 seconds.  then I grep for the transaction IDs in the logs to see where the time is being spent15:52
mjsegerwhat I'm seeing is instances where 2 and sometimes all 3 object servers have almost identical times15:53
mjsegerdoes this mean the proxy is delaying between first connecting and then sending the data?  is this a known problem?15:53
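The correlation mjseger describes comes down to grepping one transaction ID across the proxy and storage-node logs; a rough sketch, where the hostnames, log paths, and txn id are all hypothetical:

    TXN=txabc123deadbeef
    for host in proxy1 obj1 obj2 obj3; do
        echo "== $host =="
        ssh "$host" "grep $TXN /var/log/swift/*.log"
    done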
luisbgmjseger, 1K PUTs as in 1kilobyte?15:58
luisbgor 1 thousand PUTs15:58
mjseger1024 bytes/put15:58
mjsegersorry for the confusion15:59
luisbgmjseger, no problem! don't mind asking and clarifying15:59
mjsegerjust wondering if anyone has done any detailed investigation of individual timings before16:00
luisbgmjseger, it would make sense for the objects to be written at the same time; the proxy needs to decide which 3 partitions to write to (which is the bulk of its job) and once that's done, it can write to the object servers quickly in parallel16:01
luisbgmjseger, I'm too new around here to know about historical performance tests/benchmarks16:01
luisbgmjseger, that said, > 2 seconds sounds very high16:02
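As an aside, swift-get-nodes will print the partition and the three object servers/devices a given object maps to, which makes this kind of per-replica timing comparison easier; the ring path and names below are placeholders:

    swift-get-nodes /etc/swift/object.ring.gz AUTH_test mycontainer myobject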
mjsegerlet me back up a few...16:02
luisbgmjseger, but how fast the proxy can be depends on the infrastructure as well16:03
mjsegerin general swift takes < 0.1 seconds to do a 1KB PUT, though occasionally they take a little longer.  if you look at the server.log for the object servers involved, you can see how long each server spent on the operation16:03
luisbgmjseger, ooooh! I see16:03
luisbgmjseger, I'm sure core developers would be interested to look at the logs16:03
*** chandan_kumar has quit IRC16:03
mjsegersince each server is independent of the others you typically see different times for each16:03
*** bada has quit IRC16:03
luisbgmjseger, why don't you paste them somewhere (pastie.org/pastebin.com)16:03
mjsegerin this case I'm seeing almost identical times for each and am trying to understand what's wrong16:04
mjsegerthere is too much data in the logs to paste and they're difficult to read.  that's why I wrote my own analysis script that summarizes the data I'm seeing, explicitly ONLY the exceptions.  hang on and I'll paste it.16:05
mjsegerso during this one set of tests, there were 8 PUTs that all took over 2 seconds:  see http://paste.openstack.org/show/71791/16:07
*** dmsimard1 has joined #openstack-swift16:07
mjsegerI've identified the times for the proxy, the 3 object servers and the 3 container servers16:07
mjsegerI'm also including the names of the disks, because I can also look at the second-by-second collectl data to see if the disk latencies are too long, but haven't done so in these cases because clearly the problem must be elsewhere16:08
*** dmsimard has quit IRC16:08
*** dmsimard has joined #openstack-swift16:11
*** dmsimard1 has quit IRC16:13
*** chandan_kumar has joined #openstack-swift16:16
*** tanee is now known as tanee-away16:25
*** tanee-away is now known as tanee16:26
*** dtalton has joined #openstack-swift16:29
*** dtalton has left #openstack-swift16:29
*** mlipchuk has quit IRC16:45
*** davidhadas_ has quit IRC16:52
*** thomaschaaf has quit IRC16:53
*** mlipchuk has joined #openstack-swift16:59
*** tanee is now known as tanee-away16:59
*** tanee-away is now known as tanee16:59
*** tanee is now known as tanee-away17:00
*** gyee has joined #openstack-swift17:05
*** tanee-away is now known as tanee17:13
*** chandan_kumar has quit IRC17:25
*** davidhadas has joined #openstack-swift17:29
*** vills_ has quit IRC17:31
*** nacim has quit IRC17:42
*** tanee is now known as tanee-away17:47
*** shri has joined #openstack-swift17:55
*** piyush1 has joined #openstack-swift17:55
*** mlipchuk has quit IRC18:02
*** gyee has quit IRC18:03
*** odyssey4me has joined #openstack-swift18:08
*** dmsimard1 has joined #openstack-swift18:08
*** dmsimard has quit IRC18:09
*** sfineberg has quit IRC18:17
*** zul has quit IRC18:18
*** zaitcev has joined #openstack-swift18:19
*** ChanServ sets mode: +v zaitcev18:19
*** sfineberg has joined #openstack-swift18:20
*** zul has joined #openstack-swift18:21
*** davidhadas has quit IRC18:21
*** davidhadas has joined #openstack-swift18:22
notmynameacoles: around?18:23
*** dmsimard has joined #openstack-swift18:27
*** dmsimard1 has quit IRC18:29
*** davidhadas has quit IRC18:31
*** dmsimard1 has joined #openstack-swift18:35
*** dmsimard has quit IRC18:36
*** gyee has joined #openstack-swift18:38
*** dmsimard1 has quit IRC18:44
*** dmsimard has joined #openstack-swift18:45
*** dmsimard1 has joined #openstack-swift18:47
*** dmsimard has quit IRC18:49
*** piyush1 has quit IRC18:55
acolesnotmyname: just back, seen email18:56
notmynameacoles: kk18:56
*** piyush1 has joined #openstack-swift19:08
*** piyush1 has quit IRC19:12
*** piyush1 has joined #openstack-swift19:13
*** piyush2 has joined #openstack-swift19:16
*** piyush1 has quit IRC19:17
*** piyush1 has joined #openstack-swift19:18
mjsegernotmyname: are you still around?19:20
notmynamemjseger: yup19:20
mjsegerjust wondering if you saw my posting about what I've been seeing with latencies.  I can restate if need be19:21
*** piyush2 has quit IRC19:21
notmynamemjseger: here in IRC earlier today? ya, and I'm rereading it now19:22
mjsegernotmyname: yes, that's what I'm referring to.  just trying to understand the numbers and whether they indicate a problem or not.  my guess is a problem19:22
notmynamemjseger: I think you're potentially finding some stuff that hasn't been as specifically examined before. and that's great. looks like you're finding some things, and I'd love to see further data (and patches)19:25
mjsegernotmyname: I have lots of data, did you see what I posted in http://paste.openstack.org/show/71791/?  I actually have data spread out over a month!  the real question is how best to move forward19:27
mjsegernotmyname: one thought I'd had is to add more debugging to both the proxy and the object servers to show finer grained timings so we can tell exactly where the delays are19:27
mjsegeranother thought is maybe some people would be willing to get together and go over this in more detail at the summit?19:28
mjsegeror maybe do some independent testing19:28
mjsegerthe whole idea there is one can run getput, instructing it to save long latencies (you pick the number) to a file with transaction IDs, and then grep through the server/proxy logs to see what the numbers say19:29
mjsegerthe typical situation is one object server is slow as expected and the other 2 are fast, also as expected19:29
mjsegerbut sometimes 2 object servers are slow, and other times even all 3 are slow19:30
mjsegersometimes an object server reports a much greater time than getput sees.  for example, I'll record an 8 second latency and one server may report something like 10 or 20 seconds.19:30
mjsegerI've even seen at least one case where an object server reported a latency of over 500 seconds?!?!?  what's with that?19:31
notmynameyou've got the data, but I wonder if you're seeing something about the way swift is using eventlet19:32
notmynameit sounds like you've identified an issue, and so you need to next isolate it. then figure out if configs and/or patches can solve it19:32
*** vu has joined #openstack-swift19:37
mjsegernotmyname: sorry, I had to step away.  the first thing I wanted to do was at least get some confirmation from you that this sounds like a problem before I jump in with both feet.  sounds like you're saying it is.19:47
notmynamemjseger: yes, it sounds like something that may in fact be an issue. and honestly doesn't surprise me too much. very few (if any) prod deployments I know of have focused on timings of small objects19:48
mjsegernotmyname: so as I said my thought is to more heavily instrument the proxy/swift server code to print out more detailed records along the way to show where time is being spent.  sound reasonable?19:48
notmynamemjseger: yes, that sounds reasonable. to be explicit though, I think you should first do this locally, and only maybe should it be upstream19:49
notmynamedepends on what the instrumentation is :-)19:50
mjsegernotmyname: absolutely, I'm not even suggesting doing anything upstream [yet].  the interesting thing about small object timings is these delays really smack you in the face.  when a PUT of a larger object takes multiple seconds you don't even notice them.19:50
notmynamemjseger: ya, exactly19:53
notmynamemjseger: and I'm glad you're looking at it. thanks19:54
mjsegerstay tuned  ;)19:54
*** dmsimard has joined #openstack-swift19:54
*** dmsimard1 has quit IRC19:57
portantemjseger: does slide 35 match some of your findings: http://www.openstack.org/assets/presentation-media/openstackperformance-v4.pdf20:03
luisbgportante, interesting presentation :)20:04
mjsegerportante: isn't that about large objects?20:05
mjsegerok wait, I see it also talks about 3K?20:05
portanteit is a simple example of 5 clients, each churning on 1 size, and how they affect the large object20:06
mjsegerportante: notmyname: also to be clear, the numbers I'm reporting came from an analysis of a month's worth of data, during which I wrote small objects for 2 mins every 1/2 hour.  during that time, the vast majority took <2 seconds with a relatively small handful taking longer.  it was those longer ones I compared the times against20:08
mjsegerI suppose one could do this for all operations, or even those with latencies in the 1/2sec or even 1sec range, but there'd clearly be a lot more20:08
notmynamemjseger: you might be interested in https://review.openstack.org/#/c/53270/20:09
notmynamemjseger: to test out and see if it can give you any info without needing to do a ton of your own instrumentation20:10
mjsegernotmyname: thx for the pointer20:10
*** acoles has left #openstack-swift20:44
*** mkollaro has quit IRC20:51
*** mkollaro has joined #openstack-swift20:52
*** jamieh has quit IRC21:07
openstackgerritDavid Goetz proposed a change to openstack/swift: Make cors work better.  https://review.openstack.org/6941921:15
notmynamedfg: again? ;-)21:16
dfgchmouel: ok- can you check out that change again?  i don't know what the problems you had before are. I was testing with Chrome21:16
dfgnotmyname: ya. i've been dealing with other crap21:17
luisbghahahaa21:17
notmynamedfg: ah, I first thought it was a new patch. this is another changeset on the same patch21:17
luisbgtopic: "cors_strikes_again_2"21:17
luisbgdfg, hahaha ^21:17
dfgnotmyname: ya- the only diff between this patch and the last is that it's not just a change for static web anymore. it's everything21:17
notmynamecool21:18
dfgluisbg: :) i really hate CORS21:18
dfgwell- its just so much annoyance for such little gain21:18
luisbgdfg, stay strong21:19
dfghaha21:19
*** foexle has quit IRC21:20
notmynameI honestly think I'd like CORS better if the spec was all in one color and not a rainbow of cross-referenced terms21:20
notmynameit's just so hard to read21:20
*** tongli has quit IRC21:20
luisbgnotmyname, ? going to google that21:20
dfgya. it's not really all that bad. but its history with cloudfiles has been a little irritating21:21
notmynamemaybe it's gotten better http://www.w3.org/TR/cors/21:21
luisbgnotmyname, I am sure you are busy catching up after the workshop, ping me whenever you have time to look at some review that needs the final push (tomorrow or whenever)21:21
luisbgdon't want to be annoying or distracting21:21
notmynamedfg: looks like http://www.w3.org/TR/cors/ was updated in january this year21:21
luisbgvery colorful indeed!21:22
notmynamedfg: thanks for working on it21:22
dfgnp- hopefully it works21:23
gholtBest thing for the CORS spec: Just adjust your monitor to greyscale.21:23
notmynamethere's a lot of good comments at the ops workshop today. fortunately, most don't apply to swift ;-)21:23
notmynamegholt: heh21:23
luisbgnotmyname, is that because swift works too well for people to comment with improvements?21:26
*** piyush1 has quit IRC21:32
notmynamehere's an idea that was proposed today at the ops workshop: instead of blueprints, use text files in a git repo for design docs, and use gerrit to propose and discuss changes21:41
luisbgidea is good, except gerrit :S21:43
creihtlol21:44
creihtthat's when the gating on changes starts21:44
*** tdasilva has left #openstack-swift21:44
creihtsorry you your change proposal failed, because your title ends with a period21:44
notmynamecreiht: the TC has started using such a system for their motions, and they use the no-op gate21:44
torgomatic"instead of blueprints, use X" gets an automatic +1 from me for just about all values of X21:44
creihthaha21:45
clarkbnotmyname: have you been following the storyboard stuff at all? might be worth discussing with them too21:45
luisbgtorgomatic, hehehe21:45
luisbgtorgomatic, X = telegrams sent to your house21:45
notmynameclarkb: storyboard is something that always comes up as a "ya there is this thing, but it's not ready yet"21:45
clarkbnotmyname: right, but if you have specific needs that need addressing bettter to bring them up now than later21:46
torgomaticluisbg: at least with telegrams I get notifications :)21:46
clarkbnotmyname: even if you can't use it today21:46
notmynameclarkb: yup. there is a quite a list of things that have been generated today. fifieldt taking notes21:46
luisbgtorgomatic, +1 from me21:46
*** piyush1 has joined #openstack-swift22:01
*** piyush2 has joined #openstack-swift22:03
*** piyush1 has quit IRC22:05
*** erlon has quit IRC22:12
notmynamehigh praise for swift at the openstack operators summit today. the question was asked about any operator issues with swift (the whole day is for operator feedback on all of openstack), and the answer was just, "no issues. swift just works"22:22
creihtnice22:23
zaitcevsounds a little surprising, I expected them to be seeing things like stuck replicators and hotspots at least somewhat22:26
notmynameya. it's not like we are perfect. but swift scales. and you can upgrade it. etc22:29
notmynamehere's some notes from it:22:30
notmynameswift22:30
notmynameit just works22:30
notmynameit's a simple model22:30
notmynamethere's no message queue22:30
notmynamethere's no database22:30
notmynamehow did you instill this mentality into swift? the people22:30
notmynamethe followup question (after the "no it just works" answer) was "how do you get here? how do you get a team in openstack to do so well at this?" which gave me a great opportunity to brag on all of you :-)22:31
*** Trixboxer has quit IRC22:32
shriHey Swifters… I asked a question on the mailing list about optimizing a single node swift instance. http://www.gossamer-threads.com/lists/openstack/dev/3626722:44
shriAny suggestions?22:44
*** miurahr has joined #openstack-swift22:45
*** sungju has joined #openstack-swift22:45
creihtshri: I was thinking about that actually22:47
creihtshri: how many replicas are you using in your ring?22:47
creihtand how many disks are on the node22:48
shrifor now, I'm fine not using any replicas. We have one VM with a single 500G disk backed by SSD.22:49
creihtfor your tests that you ran earlier how many replicas did it have?22:49
shriThere was no redundancy. No replication. There were no replicas of each object.22:50
creihtk22:51
creihtare you benchmarking on the same machine or on another client machine?22:52
shriOn another client machine on the same network.22:52
creihthow fast is the network?22:52
*** mkollaro has quit IRC22:53
creihtshri: and with your fastest result, how many containers were you using?22:54
creihtbtw, your dd is going to be a bit misleading22:54
creihtbecause that is straight throughput to the disk22:54
creihtwhereas benchmarking swift is all going to be random io in your case22:54
shriI have a 1Gig connection to the Swift instance. About sharding across containers, yes, I was using that. Data was being spread across 32 containers.22:57
creihtk22:57
shriAgree that dd will be different than Swift. But I wanted to check just how much the disk can support, and thereby how much the Swift overhead is.22:58
creihtin the client that is benchmarking, at what concurrency are you sending the requests?22:58
creihtshri: yes, but sequential io can be orders of magnitude faster than random io22:59
physcxnotmyname, luisbg: I mentioned during the NYC workshop that object GETs with resp_chunk_size set were returning a generator that finished iterating an object's data without exception but still missing a large portion of some objects -- well, we still have the issue with swiftclient 2.0.3, and looking through the logs we find this error within the proxy logs: "proxy-server ERROR with Object...22:59
physcx...server 172.17.2.40:6000/disk3p1 re: Trying to read during GET: ChunkReadTimeout (10s) (txn: tx99c8b2a0f8304a0ea9554-0053150156)" followed by "proxy-server Trying to send to client: / Traceback (most recent call last): / File "/usr/lib/python2.7/dist-packages/swift/proxy/controllers/base.py", line 971, in _make_app_iter / raise Exception(_('Failed to read all data' / Exception:...22:59
physcx...Failed to read all data from the source (txn: tx99c8b2a0f8304a0ea9554-0053150156)" -- we checked the object servers and they show success for the problem GETs, and within our python app no exceptions are raised; it proceeds as if the generator iterating over the object's data finished normally despite only receiving a fraction of the total object22:59
physcxsorry for the ultra long msg22:59
shricreiht:  64 parallel threads, each writing an object of 64K. 10K objects in total.22:59
creihtphyscx: paste.openstack.org next time please :)22:59
luisbgcreiht, +122:59
physcxok23:00
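For context, the read pattern physcx describes looks roughly like this in python-swiftclient (connection values are placeholders); until the underlying bug is fixed, comparing bytes read against Content-Length is one way to catch the short read:

    from swiftclient import client

    conn = client.Connection(authurl='http://auth.example.com/auth/v1.0',
                             user='account:user', key='secret')
    headers, body = conn.get_object('container', 'object',
                                    resp_chunk_size=65536)
    read = 0
    for chunk in body:  # generator; per the bug above it can end early
        read += len(chunk)
    expected = int(headers['content-length'])
    assert read == expected, 'short read: %d of %d bytes' % (read, expected)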
creihtshri: ok so a couple of thoughts23:00
creihtI think your workers are set way too high for a single disk (even if ssd)23:00
*** piyush2 has left #openstack-swift23:01
creihtI would back those down quite a bit, and run iterative tests increasing each time to see where things start to bottom out23:01
shriShould I reduce the # of workers for all servers (proxy, account, containers, object)?23:02
creihtyes23:02
creihthow many cores do you have?23:02
*** jergerber has quit IRC23:02
creihtand did you watch cpu utilization while the test was happening?23:02
shri8 cores.. I briefly looked at whether anything was hogging the CPU, but it didn't look like it.23:04
creihtk23:04
shriThere was a suggestion of changing the hash so that the number of directories per object can be reduced. Can that work?23:06
creihtalso if you are doing 20MB/s and 64k objects, that averages out to 320 requests/s23:07
shriI meant, changing the hash so that more objects get placed in the same directory. Fewer directories get created.23:07
creihtwhich isn't too bad for everything going to one node on one vm23:07
creihtshri: what partition power are you using in your ring?23:08
creihtif you put all the files in the same dir, then you run into filesystem issues with having too many files in a single directory23:08
physcx a bit cleaner: http://paste.openstack.org/show/71934/23:08
creihtshri: if you have a large number of partitions, you could make that smaller (assuming you never want to grow beyond one node)23:10
shricreiht: Oh.. so the partition power decides how many files go into each directory?23:10
creihtwhich would cause fewer partition dirs to get created23:10
creihtshri: kind of23:10
shricool.. so there is already a knob I can play with.. let me look into this.23:11
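A sketch of the knob creiht is pointing at. Partition power is fixed when the ring is created, so trying a smaller value means building a new ring (and re-uploading data); all values here are illustrative, and a low power like this only makes sense on a single-node test rig:

    # part_power=14 (2^14 partitions), 1 replica, 1 hour min_part_hours
    swift-ring-builder object.builder create 14 1 1
    swift-ring-builder object.builder add r1z1-127.0.0.1:6000/sdb1 100
    swift-ring-builder object.builder rebalance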
creihtanother knob you could try is playing with object_chunk_size and client_chunk_size23:12
creihtif you know all requests are going to be 64k, you could try setting each of those to 64k23:12
creihtin etc/swift/proxy-server.conf23:13
shriobjects are guaranteed to have a max size of 64k.23:13
shriisn't 64K the default value of object_chunk_size and client_chunk_size?23:14
*** Midnightmyth has quit IRC23:14
creihtoh nm, that defaults to 64k, the default is wrong in the sample config23:14
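A sketch of creiht's suggestion for the 64k workload discussed above; both options live in the proxy config, and 65536 is indeed the actual default:

    # /etc/swift/proxy-server.conf
    [app:proxy-server]
    # chunk sizes used when reading from object servers / writing to clients
    object_chunk_size = 65536
    client_chunk_size = 65536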
creihtso, some other outside thoughts: you are just seeing the limitations of what can be done currently with swift on a single vm with a single disk23:16
creihtit would be interesting to see what would happen if you added another drive23:16
creihtjust for testing23:17
creihtthat use case hasn't really been optimized for23:17
creihtas swift usually scales horizontally out across many disks and nodes23:17
shriyeah.. I did realize that while object stores are trying to scale out, I'm trying to do things unnaturally by using a single node :-)23:18
creihtyeah I know23:18
bsdkurthello. the deployment guide is quiet about memcache and the object-expirer considerations. should the object-expirer use the same memcache servers as the proxy?23:18
creihtI'm just saying that there is likely opportunity for improvement in that area, it just hasn't been the focus for most of us23:18
shrisure. The partition power option looks promising. I'll play with that a bit.23:19
creihtk23:19
*** dmsimard has quit IRC23:19
shrithanks a lot creiht!! appreciate this help!23:20
creihtnp23:20
creihtshri: what are you using to test clientside?23:22
shriWe have a java tool developed using jclouds called BlobStoreBench.23:22
creihtk23:24
creihtyou might also just double check that jclouds isn't doing something like a HEAD before each PUT or other things like that23:24
creihtas some libs do things like that23:24
creihtit might be interesting to test with something like swift-bench23:25
creihtto see if you get similar results23:25
shriI am reasonably sure that jclouds is doing the right thing. I tested with swift-bench too. I got 302 object writes/s => ~19MB/s23:26
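For anyone wanting to reproduce this, a swift-bench config approximating shri's workload (64 KB objects, 64-way concurrency, 10k objects); auth values are placeholders:

    [bench]
    auth = http://auth.example.com/auth/v1.0
    user = account:user
    key = secret
    concurrency = 64
    object_size = 65536
    num_objects = 10000
    num_gets = 0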
*** judd7 has joined #openstack-swift23:29
creihtk23:34
