Friday, 2015-05-15

<ho> good morning00:04
*** vjujjuri has quit IRC00:30
*** dmorita has joined #openstack-swift00:31
<mattoliverau> ho: morning00:41
*** cdelatte has quit IRC00:51
*** harlowja has quit IRC00:58
*** asettle is now known as asettle-afk00:59
*** gyee has quit IRC01:14
*** haomaiwa_ has quit IRC01:26
*** kota_ has joined #openstack-swift01:27
*** haomaiwa_ has joined #openstack-swift01:28
*** panbalag has joined #openstack-swift01:31
*** panbalag has left #openstack-swift01:32
<ho> mattoliverau: morning!01:36
*** mitz has quit IRC02:52
minwoobpeluse notmyname: Quick q -- In the Trello discussion for small file optimizations, https://trello.com/c/QWaYXWNf/120-small-file-optimizations -- could you explain the purpose of the padded zeros in Paul's first suggestion as a solution to the problem?03:10
minwoobThanks.03:10
*** occupant has quit IRC03:21
*** zhill has quit IRC03:24
*** occupant has joined #openstack-swift03:38
kota_minwoob: hi03:39
*** vjujjuri has joined #openstack-swift03:39
kota_minwoob: From memory, current swift slices the original data into segment-sized pieces and encodes each one.03:42
kota_minwoob: i.e. an object smaller than the segment size will be adjusted with zero padding.03:42
kota_minwoob: it will cause inefficient disk space usage because the smaller object will be maintained as 1MB (the default segment size is 1MB)03:44
kota_minwoob: therefore Paul says that we want to minimize the redundant (and unnecessary) padding, I think.03:45
kota_I'm checking the current code to see whether my description was correct tho.03:46
*** tobe4333 has joined #openstack-swift03:48
minwoobkota_: I see. So it seems like it wouldn't be as straightforward (for a fix), to just not store the padding, right?03:49
minwoobkota_: (I assume they are there for a reason).03:49
*** HenryG has quit IRC03:51
*** asettle-afk is now known as asettle03:53
kota_minwoob: would not be easy, i think.03:53
kota_minwoob: but possible, maybe.03:54
kota_minwoob: current swift decides the encoding/decoding unit from the segment size and the values of k and m.03:55
kota_minwoob: maybe we have to specialize the last (or small) segments whose size is different from the other fragments in the fragment archive.03:56
kota_minwoob: wow, sorry I might have a misunderstanding.03:58
kota_minwoob: current *Swift* doesn't pad additional data onto the segments. for now, I'm going to dig into PyECLib, which may behave as we are assuming.03:59
minwoobkota_: Ah, I see.04:00
minwoobkota_: Thanks for explaining, btw.04:02
kota_minwoob: no worries, I'll ping you if I get more information for that.04:03
minwoobkota_: Okay, great!04:04
*** tobe4333 has quit IRC04:20
*** vjujjuri_ has joined #openstack-swift04:31
*** vjujjuri has quit IRC04:33
*** vjujjuri_ is now known as vjujjuri04:33
openstackgerritpradeep kumar singh proposed openstack/swift: Swift account auditor fails to quarantine corrupt db due to Disk IO error. This patch fixes that by handling the Disk IO Error exception.  https://review.openstack.org/18273404:34
kota_minwoob: not sure, but no padding is added to the *segment*04:36
kota_minwoob: However please note that EC splits the source data into n pieces.04:37
kota_minwoob: i.e. if the incoming object consists of 1 byte of data, each fragment consists of more than 1 byte (for reasons like making the source divisible by k, 16-byte alignment for performance, and adding the EC info as headers)04:39
kota_minwoob: the additional bytes for the fragments might outweigh the disk space saved by EC in the small object case.04:41
kota_minwoob: but it would be better to ask Paul, again. the info looks a bit stale.04:43
minwoobkota_: All right.04:44
kota_s/the info/the info at Trello/04:45
minwoobkota_: We'll see if Paul wants to chime in on this.04:46
kota_minwoob: exactly :)04:47
minwoobSounds good.04:48
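The overhead kota_ describes above can be put in rough numbers. This is a back-of-the-envelope sketch, not PyECLib's real on-disk layout; the 80-byte fragment header and 16-byte alignment are illustrative assumptions only:

```python
import math

# Back-of-the-envelope sketch of EC small-object overhead; header size
# and alignment are illustrative assumptions, not PyECLib's numbers.
def fragment_size(object_size, k, header_bytes=80, alignment=16):
    piece = math.ceil(object_size / k)                  # source split into k pieces
    padded = math.ceil(piece / alignment) * alignment   # aligned for performance
    return header_bytes + padded                        # plus per-fragment header

def total_stored(object_size, k=6, m=4):
    # k data fragments + m parity fragments are all written to disk
    return (k + m) * fragment_size(object_size, k)

print(total_stored(1))        # a 1-byte object occupies 960 bytes in 6+4
print(total_stored(1024**2))  # a 1 MiB object occupies 1748480 bytes (~1.67x)
```

Under these assumptions a 1-byte object costs 960 bytes, far worse than 3x replication's 3 bytes, which is the crux of the small-file discussion.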
*** minwoob has quit IRC04:52
*** SkyRocknRoll has joined #openstack-swift04:55
torgomatichttp://cube-drone.com/2015_05_12-145_Progress.html05:05
*** mwheckmann has joined #openstack-swift05:07
*** mwheckmann has quit IRC05:12
*** ppai has joined #openstack-swift05:18
*** bkopilov has quit IRC05:20
*** bkopilov has joined #openstack-swift05:22
swifterdarrelltorgomatic: lol05:32
swifterdarrelltorgomatic: so true05:33
*** HenryG has joined #openstack-swift05:38
openstackgerritDarrell Bishop proposed openstack/swift: Allow SAIO to answer is_local_device() better  https://review.openstack.org/18339505:40
swifterdarrell^^^^^^^^ that is in prep for some vagrant-swift-all-in-one changes and another patch that provides support for one-object-server-per-device-port-in-all-rings05:42
*** vjujjuri has quit IRC05:47
*** zaitcev has quit IRC05:57
*** zhill has joined #openstack-swift06:37
*** zhill has quit IRC07:16
*** aluria has joined #openstack-swift07:28
*** acoles_away is now known as acoles07:30
*** ho_ has joined #openstack-swift07:31
*** ho_ has quit IRC07:31
*** chlong has quit IRC07:32
*** jistr has joined #openstack-swift07:33
*** silor has joined #openstack-swift07:34
*** SkyRocknRoll has quit IRC07:36
*** geaaru has joined #openstack-swift07:37
*** tobe4333 has joined #openstack-swift07:44
*** kei_yama has quit IRC07:54
*** kei_yama has joined #openstack-swift07:57
*** tobe4333 has quit IRC08:07
*** jordanP has joined #openstack-swift08:07
*** km has quit IRC08:15
*** kei_yama has quit IRC08:26
*** Trozz has joined #openstack-swift08:53
*** early has quit IRC08:57
*** early has joined #openstack-swift09:09
openstackgerritpradeep kumar singh proposed openstack/swift: Handle Disk IO error Exception in swift account auditor.  https://review.openstack.org/18273409:15
*** kota_ has quit IRC10:02
*** wbhuber has quit IRC10:03
*** aix has joined #openstack-swift10:21
*** tobe4333 has joined #openstack-swift10:27
*** kota_ has joined #openstack-swift10:28
*** ho has quit IRC10:46
*** tobe4333 has quit IRC11:07
*** cdelatte has joined #openstack-swift11:30
*** delattec has joined #openstack-swift11:30
*** dencaval has joined #openstack-swift11:51
*** dmorita has quit IRC11:55
*** ppai has quit IRC12:04
*** dencaval has quit IRC12:10
*** links has joined #openstack-swift12:13
*** dencaval has joined #openstack-swift12:21
*** annegentle has joined #openstack-swift12:48
*** kota_ has quit IRC12:50
*** links has quit IRC12:57
*** jkugel has joined #openstack-swift13:17
*** erlon has joined #openstack-swift13:22
*** CaioBrentano has joined #openstack-swift13:23
*** lastops has joined #openstack-swift13:25
*** wbhuber has joined #openstack-swift13:25
openstackgerritChristian Cachin proposed openstack/swift-specs: Updates to encryption spec  https://review.openstack.org/15431813:31
*** CaioBrentano has quit IRC13:43
*** aix has quit IRC13:44
*** fthiagogv has joined #openstack-swift13:54
*** jrichli has joined #openstack-swift13:57
*** annegentle has quit IRC13:59
*** mwheckmann has joined #openstack-swift14:00
*** lcurtis has joined #openstack-swift14:02
lcurtishello all...I added a 3rd container node to my swift cluster but getting low throughput14:03
lcurtisit looks like only a single process is running even though i set concurrency to 5000014:04
lcurtis/usr/bin/python /usr/bin/swift-container-replicator /etc/swift/container-server.conf14:04
lcurtisor 5000 rather14:04
lcurtisis this expected behavior?14:07
*** archers has joined #openstack-swift14:09
*** aix has joined #openstack-swift14:22
*** breitz has quit IRC14:24
*** breitz has joined #openstack-swift14:25
openstackgerritThiago Gomes proposed openstack/python-swiftclient: Fix the Upload an object to a pseudo-folder  https://review.openstack.org/16511214:28
openstackgerritThiago Gomes proposed openstack/python-swiftclient: Fix the Upload an object to a pseudo-folder  https://review.openstack.org/16511214:30
*** annegentle has joined #openstack-swift14:31
glangefor the container replicator it's only one process and the concurrency is an eventlet GreenPool14:33
glangeall the replicators are single processes14:33
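The single-process, many-workers shape glange describes can be sketched with the stdlib. Swift actually uses an eventlet GreenPool, not OS threads; the ThreadPoolExecutor here is just a stand-in to show one process fanning one task per partition out at a fixed concurrency:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustration only: swift's replicators use an eventlet GreenPool in a
# single process; this stdlib pool just shows the same shape.
def replicate_partition(part):
    # real code would push/rsync the partition to its peers
    return ('synced', part)

def run_replication(partitions, concurrency=8):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(replicate_partition, partitions))

print(run_replication(range(4)))
# [('synced', 0), ('synced', 1), ('synced', 2), ('synced', 3)]
```

So raising the `concurrency` setting widens the pool inside that one process; it never forks extra replicator processes, which is why lcurtis only sees one.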
*** archers has quit IRC14:38
lcurtisThank you glange14:43
*** zynisch_o7 has joined #openstack-swift14:45
*** mragupat has joined #openstack-swift14:45
*** mragupat has quit IRC14:46
openstackgerritThiago da Silva proposed openstack/swift: WIP: new attempt at single-process  https://review.openstack.org/15928514:57
*** nadeem has joined #openstack-swift15:10
*** mwheckmann has quit IRC15:10
*** mwheckmann has joined #openstack-swift15:14
*** jistr is now known as jistr|mtgh15:15
*** jistr|mtgh is now known as jistr|mtg15:15
*** mwheckmann has quit IRC15:19
*** mwheckmann has joined #openstack-swift15:19
*** minwoob has joined #openstack-swift15:20
*** csmart has quit IRC15:20
*** csmart has joined #openstack-swift15:25
*** jistr|mtg is now known as jistr15:28
*** csmart has quit IRC15:30
*** geaaru has quit IRC15:30
*** acoles is now known as acoles_away15:31
*** csmart has joined #openstack-swift15:31
*** gyee has joined #openstack-swift15:37
openstackgerritMichael Barton proposed openstack/swift: go: log 499 on client early disconnect  https://review.openstack.org/18357715:43
*** mahatic has joined #openstack-swift15:46
*** shakamunyi has quit IRC16:00
*** barra204 has quit IRC16:00
*** harlowja has joined #openstack-swift16:01
*** vjujjuri has joined #openstack-swift16:02
*** harlowja has quit IRC16:03
*** annegentle has quit IRC16:04
ctennislcurtis: are you on a recent version of swift?16:06
lcurtisctennis: 1.13.1-0ubuntu1.116:08
lcurtisany good stuff I am missing?16:08
*** jordanP has quit IRC16:12
*** jordanP has joined #openstack-swift16:13
ctennislcurtis: one thing that comes to mind is that in a more recent version a bug was fixed that cleaned up empty container and account partitions which didn't have anything in them...in your version you may have a lot of empty partition directories which impede replication time16:14
ctennisyou might look and see if you have empty partition directories16:14
lcurtisokay! will do..thanks ctennis16:18
notmynamegood morning16:21
*** jordanP has quit IRC16:21
notmynameless than 48 hours until I'm on a plane to Vancouver. I'm starting to feel a little rushed to get stuff done ;-)16:29
egonnotmyname: I hear ya. I still have finishing stuff to do on my deck, let alone pack, or figure out simple travel logistics.16:41
egonwhat flight are you on? you're in SF, right?16:42
notmynameegon: i'm leaving early sunday morning. will be in vancouver by lunch16:42
egonI get in at 4-something16:42
egonpm16:43
egonctennis: for cleaning up empty containers, is that a job that runs, or a new feature?16:44
*** acoles_away is now known as acoles16:51
*** mahatic has quit IRC16:51
openstackgerritMichael Barton proposed openstack/swift: go: check error returns part 1  https://review.openstack.org/18360516:51
ctennisegon: it's part of the replicator, it's something it should have been doing all along but was not16:51
*** acoles is now known as acoles_away16:52
egonctennis: we have an application team using swift who pre-creates a lot of containers, because they saw a performance improvement. So they have tons of empty ones. Is that considered a valid use case anymore?16:56
ctennisegon: not empty containers, those are fine.  this is empty partitions of containers17:02
ctennisessentially container data that's moved elsewhere in the system and the enclosing directory was not cleaned up17:03
jodahQ about storage policies. it was suggested that i could use storage policies to represent racks of hosts, and thereby effectively place object replicas across racks. how do i associate groups of hosts, such as racks, with a storage policy, so that the replicas are placed across racks?17:03
egonctennis: oh! gotcha. that makes sense17:03
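A quick way to look for the empty partition directories ctennis mentions. This is a hedged sketch, not swift code: it assumes the standard on-disk layout (`<device_root>/<partition>/<suffix>/<hash>/*.db`) and only reports, never deletes:

```python
import os
import tempfile

def empty_partition_dirs(device_root):
    """Yield partition dirs under device_root that contain no files.

    Assumes the standard swift layout (partition/suffix/hash/*.db);
    verify against your own cluster before removing anything.
    """
    for part in sorted(os.listdir(device_root)):
        part_path = os.path.join(device_root, part)
        if not any(files for _, _, files in os.walk(part_path)):
            yield part_path

# Demo on a throwaway tree standing in for /srv/node/<dev>/containers
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, '1000', 'abc'))                # empty partition
os.makedirs(os.path.join(root, '2000', 'def'))
open(os.path.join(root, '2000', 'def', 'x.db'), 'w').close()  # has data
empties = list(empty_partition_dirs(root))
print([os.path.basename(p) for p in empties])  # ['1000']
```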
jodahmy question stems from a discussion with cschwede a while back on the mailing list http://lists.openstack.org/pipermail/openstack-dev/2015-February/057326.html17:03
*** annegentle has joined #openstack-swift17:05
*** zhill has joined #openstack-swift17:05
*** annegentle has quit IRC17:10
*** jistr has quit IRC17:12
*** aix has quit IRC17:15
*** RobOakes has joined #openstack-swift17:19
RobOakesI've been playing with a development cluster we use for OpenStack Swift. It's configured with a single node and a single object copy (no replication). I'd like to add a second node and up the replication count to two. What is the best way to do this?17:22
RobOakesCan I create a new ring with both machines, modify the object count, and still maintain the data on the current storage node?17:22
notmynameyes. or rather, you should add the 2nd node to the existing ring17:23
notmynamethen when you rebalance and deploy the updated ring, swift will rearrange the data and move it to the right place17:23
RobOakesOkay. Once the second node is added, is there a way to up the number of replication copies?17:24
RobOakesMy understanding was that once the number of replication copies is set, you can't change it.17:25
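That used to be true, but `swift-ring-builder` has supported changing the replica count via `set_replicas` for some time. A hedged sketch of the workflow; the builder filename, device string and weight below are example placeholders, and the new ring files must be pushed to every node afterwards:

```shell
# Placeholders only -- substitute your real builder file and device specs.
swift-ring-builder object.builder add r1z2-10.0.0.2:6000/sdb1 100
swift-ring-builder object.builder set_replicas 2
swift-ring-builder object.builder rebalance
```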
*** askname has joined #openstack-swift17:27
asknameHi guys, question regarding the md5/ETag of an object. When a new object is uploaded into Swift, what calculates the checksum of the object? the swift-proxy, or the object-server daemon?17:28
notmynamethe proxy17:29
notmynameerr...no, sorry17:29
notmynamethe object17:29
tdasilvain EC, it's the proxy, right?17:30
*** annegentle has joined #openstack-swift17:35
*** annegentle has quit IRC17:40
*** NM has joined #openstack-swift17:42
*** jrichli has quit IRC17:54
notmynamehttp://d.not.mn/ec-v-repl.png  <---  graph of EC vs Replication18:11
notmynametdasilva: yea18:11
notmynameaskname: tdasilva: it's actually done in a few places18:11
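Wherever in the pipeline it happens, the checksum itself is just an MD5 accumulated over the body as it streams through. A minimal sketch of the idea (not swift's actual code):

```python
import hashlib

def compute_etag(chunks):
    """Hash the body incrementally, the way a server reading a PUT
    request chunk-by-chunk would, and return the hex ETag."""
    md5 = hashlib.md5()
    for chunk in chunks:
        md5.update(chunk)
    return md5.hexdigest()

body = [b'hello ', b'world']
print(compute_etag(body))  # 5eb63bbbe01eeed093cb22bb8f5acdc3
```

For EC objects the picture differs because each object server only sees fragments, not the whole body, which is why the proxy gets involved there.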
*** annegentle has joined #openstack-swift18:11
asknamewhat is EC ?18:12
egonnotmyname: is that a performance graph, or rps?18:12
egonaskname: erasure codes18:12
openstackgerritMichael Barton proposed openstack/swift: go: replace ghetto getpwnam with os/user  https://review.openstack.org/18363518:12
notmynameegon: PUTs/sec18:12
notmynameegon: from http://d.not.mn/20150511_run1_15fullness.csv18:13
egonnotmyname: so that's performance of pps, or pps required to finish replication?18:14
notmynameegon: that's from a client perspective. so performance of puts per second18:14
egonnotmyname: gotcha18:15
notmynameegon: taller bars are better18:15
notmynameso eg you can see that there is a point where EC becomes faster than replication18:15
openstackgerritMichael Barton proposed openstack/swift: go: replace ghetto getpwnam with os/user  https://review.openstack.org/18363518:15
egonnotmyname: what are the object sizes, and what happens if you have mixed-workloads?18:19
tdasilvanotmyname: do you have any info on the cluster used for those tests?18:19
notmynameegon: the scenario files used are at https://github.com/swiftstack/ssbench/pull/107/files18:20
notmynametdasilva: it's the community QA cluster. 5 servers, Intel Avoton chips, 8GB memory, 4 drives per server in the policy (6 and 8 TB helium drives)18:21
glangenotmyname: what does that graph show?  requests per second to the cluster or requests per second for replication (or something)18:21
notmynameglange: a benchmark run of replication and EC policies. everything else the same18:22
glangerun of puts to the cluster?18:22
notmynameyes18:22
glangeok18:22
*** RobOakes has left #openstack-swift18:22
notmynameglange: so it shows the puts/sec in each policy. 3x replica and 6+4 EC18:22
*** openstackgerrit has quit IRC18:22
glangeok18:22
*** openstackgerrit has joined #openstack-swift18:22
notmynamethe "medium" category is objects between 5MB and 25MB18:23
notmynamesmall = 1-5MB18:23
notmynamenote that the EC segment size is 1MB18:23
notmynameminiscule = 10-2048 bytes18:24
notmynametiny = 4k - 8k18:24
tdasilvanotmyname: what about ssbench configuration? number of workers, connections, etc...18:24
notmyname4 workers18:28
notmynameto the 5 servers18:28
notmynameconcurrency in that run was 3018:28
*** rdaly2 has joined #openstack-swift18:30
*** rdaly2 has quit IRC18:31
notmynameif you really want to see all the data, I'm uploading it now18:34
notmyname:-)18:34
notmyname191MB compressed, 2G uncompressed18:35
*** shakamunyi has joined #openstack-swift18:36
*** barra204 has joined #openstack-swift18:36
notmynamehttp://d.not.mn/20150512_run2_15fullness.tgz18:38
openstackgerritStuart McLaren proposed openstack/python-swiftclient: Add minimal working service token support.  https://review.openstack.org/18264018:41
*** nadeem has quit IRC18:44
openstackgerritStuart McLaren proposed openstack/python-swiftclient: Add minimal working service token support.  https://review.openstack.org/18264018:44
*** NM1 has joined #openstack-swift18:49
*** nadeem has joined #openstack-swift18:50
*** nadeem has quit IRC18:51
*** NM has quit IRC18:52
*** silor1 has joined #openstack-swift18:57
*** silor has quit IRC18:59
mwheckmannclayg: Tested the patch for bug #1413619. Seems to solve the problem.19:00
openstackbug 1413619 in OpenStack Object Storage (swift) "container sync gets stuck after deleting all objects" [Undecided,New] https://launchpad.net/bugs/1413619 - Assigned to Gil Vernik (gilv)19:00
*** wbhuber_ has joined #openstack-swift19:01
*** silor1 has quit IRC19:01
*** ahale_ has joined #openstack-swift19:03
*** ahale has quit IRC19:03
*** NM1 has quit IRC19:04
*** wbhuber has quit IRC19:04
*** NM has joined #openstack-swift19:22
*** NM has quit IRC19:23
*** tdasilva has quit IRC19:30
*** annegentle has quit IRC19:33
*** tdasilva has joined #openstack-swift19:37
*** azure23 has joined #openstack-swift19:42
*** azure23 has quit IRC19:42
*** vinsh has joined #openstack-swift19:46
*** mahatic has joined #openstack-swift19:47
*** mahatic has quit IRC19:47
*** barra204 has quit IRC19:48
*** shakamunyi has quit IRC19:48
*** jrichli has joined #openstack-swift19:49
openstackgerritThiago da Silva proposed openstack/swift: move replication code to ReplicatedObjectController  https://review.openstack.org/18282619:50
*** vinsh has quit IRC19:51
openstackgerritThiago da Silva proposed openstack/swift: WIP: new attempt at single-process  https://review.openstack.org/15928519:53
*** thumpba has joined #openstack-swift19:54
ekarlsoheya, for devstack swift shouldn't it be enable_service swift s-object s-account s-proxy s-container20:00
ekarlso?20:00
*** dencaval has quit IRC20:01
*** lpabon has joined #openstack-swift20:01
claygmwheckmann: oh that's great!  can you put that on the bug report20:02
mwheckmannclayg: already did :)20:03
claygnotmyname: that graph is nice - with like gradients and stuff - you're stepping up your game20:04
glangehaha20:04
claygnotmyname: still it would be nice for "huge write only" to say what the object size is20:05
*** tab____ has joined #openstack-swift20:05
*** fthiagogv has quit IRC20:07
ekarlsonoone uses devstack ?20:10
glangenot where I work20:11
*** wbhuber__ has joined #openstack-swift20:11
*** wbhuber_ has quit IRC20:15
tdasilvaekarlso: i think most swift devs use their own SAIO dev. environment instead of devstack20:17
*** zaitcev has joined #openstack-swift20:18
*** ChanServ sets mode: +v zaitcev20:18
glangeand we don't run that in production either20:20
*** lastops has quit IRC20:21
ekarlsowhat is it that runs in the gates then ?20:25
notmynameclayg: the huge one is 1-5GB20:29
notmynameall of them are a range20:29
*** askname has quit IRC20:32
*** nadeem has joined #openstack-swift20:38
*** annegentle has joined #openstack-swift20:44
tdasilvataking off for today...hope you guys have fun at the conference...looking forward to watching the presentations and hearing back from the discussions20:46
jrichlitdasilva: enjoy being with the little one!20:47
ekarlsois there an easy way to deploy a SAIO box and have it configured to use keystone ?20:50
*** lpabon has quit IRC20:58
*** nadeem has quit IRC21:11
jrichliekarlso: would you like links to SAIO and keystone setup instructions, or are you asking for something "easier" than that?21:16
morganfainbergjrichli: i kind of have a snarky answer to your rhetorical question but i don't want to be too snarky today...21:19
morganfainbergjrichli: also *waves*21:19
morganfainberg:)21:19
jrichlimorganfainberg: I am sorry.  I didn't mean for it to come across that way.  It is difficult to read intention with text only.21:20
morganfainbergjrichli: no i was commenting that i have a snarky response :)21:21
morganfainbergjrichli: and that i didn't want to be the snarky one today :)21:21
ekarlsojrichli: eh, I was wondering if there was an easy way to stand up a SAIO instance that would interact with an existing keystone..21:21
morganfainbergjrichli: you're just fine :)21:21
jrichliekarlso: I believe you just have to add keystone middleware and the config section to your proxy-server.conf21:25
jrichlimorganfainberg: good to know, just wanted to make sure. :-)21:27
morganfainbergjrichli: i'm a little punchy / snarky because summit time - means i have a lot to think about.21:28
morganfainbergand a presentation or two to finish writing21:28
morganfainberg:P21:28
zaitcevthe question, I suspect, is what to put into that proxy-server.conf21:28
zaitcevlike, e.g. swiftoperator=????21:28
zaitcevyou probably want to find some kind of group like "users" or create a new one21:28
zaitcevcreate one in Keystone21:28
jrichlimorganfainberg: I hope to see you there!  Good luck on getting things done.21:29
zaitcevall the rest should be trivial... just uncomment from samples in etc/proxy-server.conf-sample21:30
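Concretely, that means a pipeline entry and two filter sections roughly like the following. This is an illustrative fragment modeled on the usual proxy-server.conf-sample defaults; the endpoints, credentials and role names are placeholders to adapt:

```ini
[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystoneauth proxy-server

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
identity_uri = http://127.0.0.1:35357/
auth_uri = http://127.0.0.1:5000/
admin_tenant_name = service
admin_user = swift
admin_password = SECRET
delay_auth_decision = True

[filter:keystoneauth]
use = egg:swift#keystoneauth
# members of these keystone roles get full swift access on their project
operator_roles = admin, swiftoperator
```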
*** jrichli has quit IRC21:34
*** jkugel has quit IRC21:35
*** erlon has quit IRC21:41
*** annegentle has quit IRC21:49
ekarlsodoes swiftclient support keystone sessions ?21:51
*** annegentle has joined #openstack-swift21:51
zaitcevNot sure what Keystone sessions are. Swift client simply does the same thing that e.g. "keystone token-get" would.21:53
zaitcevOh and it also pulls the endpoint from the attached catalog, although I think Keystone server does the interpolation.21:54
morganfainbergzaitcev: keystone sessions are an object that handles re-auth, plugins for different forms of authentication, catalog parsing, etc21:54
morganfainbergzaitcev: not sure if swiftclient uses it or not.21:54
morganfainbergbut for keystone authenticated cases, longterm it should.21:55
*** zhill_ has joined #openstack-swift21:59
* notmyname is happy to see morganfainberg in the -swift channel!21:59
morganfainbergnotmyname: i've been lurking here for about 6-8 months21:59
morganfainbergi just usually stay quiet21:59
notmynamelurking != "here" ;-)22:00
morganfainbergdude, i have been reading the channel22:00
morganfainbergthats enough of being here for IRC... ;)22:00
notmyname:-)22:00
notmynamebut I'm definitely excited that you jump in to help out with keystone stuff in here22:00
morganfainbergalso, once we get keystoneauth split out, it should be easier to get swiftclient in a state that it can use it w/o all the other icky deps [for the cases you need keystone authentication-y-stuff]22:01
morganfainbergif you don't already have session fun in the client22:01
*** tdasilva has quit IRC22:01
* morganfainberg hasn't looked at swiftclient tbh22:01
morganfainbergnotmyname: i try and jump in on all the major channels when i can actually speak to what is going on.22:02
minwoobRegarding the GIL/multithreading issue.22:04
minwoobIf it does turn out that the GIL is posing significant barriers to performance22:05
minwoobWhere do we go from there?22:05
minwoobCan't just replace CPython, right?22:05
minwoobPossibly some extensions that need to be worked with.22:05
minwoobAlso, it seems that Kevin was suggesting that GIL shouldn't be a problem where there are only a few I/O bound operations.22:07
minwoobFrom what I've read, it seems that I/O bound operations are fine, but rather the CPU bound operations are where we can really take a performance hit from the lock.22:07
notmynameminwoob: the GIL in python will limit compute-bound workloads (in python code). are you seeing something else?22:07
notmynameor more specifically, it will limit multi-threaded, single-process compute-bound workloads22:08
minwoobIt seems that Kevin has a different understanding of this problem, based on his description of it.22:09
minwoobseems to be suggesting that we need to watch out for GIL for I/O operations, but from what I've read that should be fine.22:10
notmynameno, the EC stuff is in a C library, and the GIL is released when a C library is called. therefore it's not an issue there22:10
redboThreading and the GIL only affects threaded code, which the proxy isn't.  Also from a cursory glance, PyECLib never seems to release the GIL.22:14
*** wbhuber__ has quit IRC22:16
notmynamehmmm22:18
notmynameI'll bug tsg about that next week22:18
redboBut not releasing the GIL is fine if you're not planning on using multiple threads22:22
notmynameya22:23
*** tdasilva has joined #openstack-swift22:23
minwoobSo, ideally all the threading should be done in liberasurecode and the pluggable backends, right?22:27
*** proteusguy has joined #openstack-swift22:28
notmynamewell except that there is no threading. it's all in the same process22:34
*** tab____ has quit IRC22:34
minwoobHmm22:35
minwoobI'll think about this more when I come back. Thanks.22:38
*** jamielennox|away is now known as jamielennox22:48
portanteredbo, notmyname: pyeclib is mostly compute bound right?  Does it perform any non-blocking IO?22:50
notmynameportante: it's just the EC computations. no IO22:50
*** annegentle has quit IRC22:51
*** vinsh has joined #openstack-swift22:52
*** vinsh has quit IRC22:59
*** vinsh has joined #openstack-swift22:59
mattoliverauMorning all, well I'm off to the airport, cya in Vancouver.. In about 30 hours or so :p23:02
*** proteusguy has quit IRC23:15
*** lcurtis has quit IRC23:17
torgomaticif anything, it'd be better for us if pyeclib did not release the GIL23:22
torgomaticsince the proxy is single-threaded, and it's faster to not do something than to do it23:22
redboI kind of think threads wouldn't help any.  Unless the EC operations take a really long time, which isn't the impression I get.23:46
torgomaticI completely agree there. If we were erasure-coding giant wads of data at once, maybe, but we're only doing dinky amounts per call23:47
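The point torgomatic and redbo are making can be seen in toy form: pure-Python loops hold the GIL, so spreading them across threads adds overhead without parallelism, whereas only a C call that releases the GIL could run concurrently. A minimal sketch (it asserts correctness only; the timing comparison is left to the reader since it varies by machine):

```python
import threading

# Pure-Python compute: the GIL lets only one thread execute bytecode at
# a time, so splitting this across threads buys no parallel speedup.
def busy_sum(n, out, idx):
    total = 0
    for i in range(n):
        total += i
    out[idx] = total

def threaded_sums(n, workers=4):
    out = [0] * workers
    threads = [threading.Thread(target=busy_sum, args=(n, out, i))
               for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out

# All four results are correct; they just never ran in parallel.
# Time this against four sequential busy_sum calls to see the effect.
print(threaded_sums(100000))
```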
*** annegentle has joined #openstack-swift23:51
*** annegentle has quit IRC23:57

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!