Saturday, 2015-05-30

*** charlesw has quit IRC00:02
*** jeblair is now known as corvus00:19
*** lyrrad has quit IRC00:25
*** lpabon has joined #openstack-swift00:26
*** lpabon has quit IRC00:27
*** annegentle has joined #openstack-swift00:39
*** annegentle has quit IRC00:54
*** gsilvis has joined #openstack-swift00:59
*** B4rker has joined #openstack-swift01:00
*** annegentle has joined #openstack-swift01:02
*** B4rker has quit IRC01:05
*** annegentle has quit IRC01:07
*** gyee has quit IRC01:25
*** tdasilva has joined #openstack-swift01:30
*** torgomatic has quit IRC01:34
*** classicsnail has quit IRC01:34
*** StevenK has quit IRC01:35
*** lyrrad has joined #openstack-swift01:37
*** charlesw has joined #openstack-swift01:44
*** _bluev1 has quit IRC01:57
*** _bluev has joined #openstack-swift01:58
*** B4rker has joined #openstack-swift01:59
*** B4rker has quit IRC02:04
<portante> swifterdarrell: have you considered trying out the hummingbird branch?  02:09
<zaitcev> portante: yes he did  02:11
<portante> and what were the chunk sizes in use?  02:12
<portante> if this is for small files, threads per disk seems like a bad way to go, but if the chunk size is large enough and the object sizes are large enough, 10 GbE networks etc. ... okay, corner cases maybe?  02:13
* portante this coming from swifterwannabe  02:13
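(For reference, the chunk sizes being asked about are the object server's disk_chunk_size and network_chunk_size settings; a hedged example of where they might be tuned in object-server.conf -- section placement follows the sample config, and the values shown are just the usual 64 KiB defaults, not recommendations:)

    [app:object-server]
    use = egg:swift#object
    # bytes read from / written to disk per I/O by the object server
    disk_chunk_size = 65536
    # bytes sent per write on the network side
    network_chunk_size = 65536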
*** lyrrad has quit IRC02:15
<zaitcev> I dunno, I thought the speedup in the threads-per-disk case comes from the elevator on the rotating storage.  02:17
<zaitcev> In which case smaller blocks should see a greater speed-up over the single-threaded case.  02:18
<portante> but a single thread participating in the eventlet loop has to dole out I/O via locks and queues, which has an overhead cost  02:18
<portante> for small I/O that cost is likely too high  02:18
<portante> when the eventlet loop blocks on a mutex to sync with another thread, the entire engine stops  02:18
<zaitcev> oh, right  02:18
<portante> so non-blocking I/O comes to a halt  02:19
<portante> if we were using truly async I/O, that'd be another story  02:19
<zaitcev> okay, so workers per device is really what I'm thinking about  02:19
<portante> this is one of the traps people fall into when thinking of non-blocking I/O as asynchronous: it's not, it's still synchronous  02:19
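(A minimal sketch, not Swift code, of the trap portante describes: under eventlet, one call that blocks in the kernel -- say a slow read from a sick disk -- stalls every other greenthread on that hub, because cooperative non-blocking I/O is still a single synchronous thread. The names and sleep times below are made up for illustration:)

    import time
    import eventlet

    def cooperative(name):
        # eventlet.sleep() yields to the hub, so other greenthreads can run
        for _ in range(3):
            eventlet.sleep(0.1)
            print(name, "made progress")

    def blocking(name):
        # time.sleep() stands in for a blocking pread() to a slow disk:
        # it never yields, so every other greenthread waits until it returns
        time.sleep(0.5)
        print(name, "finally returned")

    pool = eventlet.GreenPool()
    pool.spawn_n(blocking, "slow-disk-io")
    pool.spawn_n(cooperative, "other-request")
    pool.waitall()   # "finally returned" prints before any "made progress"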
<portante> we found that bumping the worker count high enough to cover all your disks (paper presented at a summit a year or so ago) gave us a good speed-up  02:20
*** lyrrad has joined #openstack-swift02:21
<portante> sounds like swifterdarrell is taking that kind of direction, but trying to solve the port hand-off problem you get with pure object-server worker counts, where the kernel does not round-robin accepts between the eventlet processes (one process can gobble up all the accepts and defeat the scheme)  02:21
<zaitcev> He/Sam/Clay and the RAX people all complained about severe starvation observed in the field -- requests taking forever to service for no reason  02:23
*** josed has quit IRC02:30
*** lyrrad has quit IRC02:33
*** lyrrad has joined #openstack-swift02:34
<MooingLemur> with thread pools per disk on or off?  02:38
<portante> shouldn't matter  02:49
<portante> if an object server starts accepting when it can't service requests, it will slow it all down  02:50
<portante> in some sense, having a set of credits so that an object server won't issue an accept if it is not completing requests might help  02:50
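(A rough sketch of that credits idea, assuming eventlet -- this is not how Swift works today, and MAX_INFLIGHT plus the helper names are made up: the server only calls accept() while it still has capacity to finish requests, so a backed-up object server stops pulling work off the listen queue.)

    import eventlet
    from eventlet import semaphore

    MAX_INFLIGHT = 32                        # hypothetical credit count
    credits = semaphore.Semaphore(MAX_INFLIGHT)

    def serve(listen_sock, handle_request):
        pool = eventlet.GreenPool()
        while True:
            credits.acquire()                # out of credits -> stop accepting
            conn, addr = listen_sock.accept()

            def run(conn=conn, addr=addr):
                try:
                    handle_request(conn, addr)
                finally:
                    conn.close()
                    credits.release()        # hand the credit back when done

            pool.spawn_n(run)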
*** _bluev1 has joined #openstack-swift02:52
<portante> try lowering the number of eventlet worker greenlets and raising the number of workers  02:52
<portante> simple but effective  02:52
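(Concretely, both of those knobs live in the [DEFAULT] section of object-server.conf; a hedged example, with the numbers purely illustrative rather than recommendations:)

    [DEFAULT]
    # more forked object-server processes, so one stalled eventlet hub
    # affects a smaller share of in-flight requests
    workers = 16
    # fewer concurrent eventlet greenlets per worker process
    max_clients = 256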
*** _bluev has quit IRC02:53
*** _bluev1 has quit IRC03:16
*** david-lyle has quit IRC03:25
*** david-lyle has joined #openstack-swift03:26
*** lpabon has joined #openstack-swift03:28
*** lpabon has quit IRC03:33
*** jamielennox is now known as jamielennox|away03:52
*** zaitcev has quit IRC03:59
*** charlesw has quit IRC04:17
*** lpabon has joined #openstack-swift04:30
*** lpabon has quit IRC04:46
*** SkyRocknRoll has joined #openstack-swift04:52
*** SkyRocknRoll has quit IRC06:13
*** zynisch_o7 has joined #openstack-swift06:26
*** SkyRocknRoll has joined #openstack-swift06:26
*** abcdef has joined #openstack-swift06:34
*** SkyRocknRoll has quit IRC06:46
*** abcdef has quit IRC06:57
*** StevenK has joined #openstack-swift06:58
*** SkyRocknRoll has joined #openstack-swift07:00
*** SkyRocknRoll has quit IRC07:06
*** zynisch_o7 has quit IRC07:06
*** silor has joined #openstack-swift07:15
*** SkyRocknRoll has joined #openstack-swift07:18
*** abcdef has joined #openstack-swift07:34
*** abcdef is now known as abcdefgh07:34
*** cdelatte has quit IRC09:52
*** classicsnail has joined #openstack-swift09:55
*** cdelatte has joined #openstack-swift10:02
*** abcdefgh has quit IRC10:05
*** kutija has quit IRC10:20
*** abcdefgh has joined #openstack-swift10:43
*** abcdefgh is now known as jhijij10:50
*** jhijij has quit IRC11:24
*** bkopilov has joined #openstack-swift12:06
*** SkyRocknRoll has quit IRC12:45
*** SkyRocknRoll has joined #openstack-swift12:59
*** proteusguy has quit IRC13:10
*** HiramAbif has joined #openstack-swift13:12
*** proteusguy has joined #openstack-swift13:22
*** charlesw has joined #openstack-swift14:00
*** breitz has quit IRC14:19
*** zynisch_o7 has joined #openstack-swift14:52
*** zynisch_o7 has quit IRC14:56
*** zynisch_o7 has joined #openstack-swift15:07
*** zynisch_o7 has quit IRC15:26
*** zynisch_o7 has joined #openstack-swift15:26
*** zynisch_o7 has quit IRC15:31
*** _bluev has joined #openstack-swift15:51
*** wbhuber has joined #openstack-swift15:59
*** _bluev has quit IRC16:06
*** wbhuber has quit IRC16:18
*** zynisch_o7 has joined #openstack-swift17:14
*** zynisch_o7 has quit IRC17:20
*** josed has joined #openstack-swift17:27
*** geaaru has joined #openstack-swift17:50
*** openstackgerrit has quit IRC17:51
*** openstackgerrit has joined #openstack-swift17:51
*** jrichli has joined #openstack-swift17:53
*** charlesw has quit IRC18:05
*** torgomatic has joined #openstack-swift18:22
*** ChanServ sets mode: +v torgomatic  18:22
*** silor1 has joined #openstack-swift18:55
*** silor has quit IRC18:57
*** wbhuber has joined #openstack-swift19:18
*** _bluev has joined #openstack-swift19:25
*** josed has quit IRC19:26
*** charlesw has joined #openstack-swift19:51
*** _bluev has quit IRC19:55
*** _bluev has joined #openstack-swift20:02
*** silor1 has quit IRC20:16
*** zynisch_o7 has joined #openstack-swift20:16
*** _bluev has quit IRC20:21
*** zynisch_o7 has quit IRC20:21
*** wbhuber has quit IRC20:24
*** charlesw has quit IRC20:30
*** wbhuber has joined #openstack-swift20:32
*** wbhuber has quit IRC20:43
*** otoolee has quit IRC21:00
*** otoolee has joined #openstack-swift21:03
*** _bluev has joined #openstack-swift21:09
*** jrichli has quit IRC21:29
*** _bluev has quit IRC21:31
*** SkyRocknRoll has quit IRC21:44
*** zynisch_o7 has joined #openstack-swift22:04
*** zynisch_o7 has quit IRC22:08
*** _bluev has joined #openstack-swift22:12
*** jamielennox|away is now known as jamielennox22:17
*** wbhuber has joined #openstack-swift22:20
<swifterdarrell> portante: I think you might be conflating 2+ things?  22:38
<swifterdarrell> portante: for the "normal" object-server, a single slow drive can hose an entire object server (Intel showed this pretty conclusively 1 to 3 conferences ago, the one before Hong Kong maybe?)  22:39
<swifterdarrell> portante: that's because any request that object-server is handling can issue an I/O to the crappy disk and starve that object-server's eventlet hub, introducing latency into all other pending I/Os to all other disks that same object-server unix proc was handling  22:40
<swifterdarrell> portante: that's crappy, and large clusters see this (as zaitcev mentioned)  22:41
<swifterdarrell> portante: threads_per_disk gives full I/O isolation (a bad disk will only introduce latency into requests dealing with that disk), but the overhead is pretty high  22:42
<swifterdarrell> portante: (too high, really)  22:42
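(For context, that per-disk thread pool is the object server's threads_per_disk option, where 0, i.e. disabled, is the default; a hedged example, with the value purely illustrative:)

    [app:object-server]
    use = egg:swift#object
    # size of the I/O thread pool dedicated to each disk; 0 keeps all
    # disk I/O on the eventlet hub instead
    threads_per_disk = 4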
<swifterdarrell> portante: so the current idea is to have N object-server unix procs running which each only handle reqs for a single disk (achieved by giving each disk in each server a unique port in the ring)  22:43
<swifterdarrell> portante: this should give the same I/O isolation as threads_per_disk with less overhead, and what I'm *trying* to get at is also higher utilization of the disks under heavy load than even the "normal" setup (manifested as increased req/s, apples-to-apples)  22:44
<swifterdarrell> portante: I'll have some numbers soon enough  22:44
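(A hedged sketch of what that deployment could look like with the patch linked below; the servers_per_port option name is from the version of this work that eventually merged, so treat it as illustrative here, and the IPs, ports, device names and weights are made up. Each disk gets its own port in the object ring, and the object server forks a small group of workers per port.)

    # object-server.conf
    [DEFAULT]
    servers_per_port = 3   # object-server processes dedicated to each disk/port

    # object ring: one device entry per disk, each on a unique port
    swift-ring-builder object.builder add r1z1-10.0.0.1:6010/sdb 100
    swift-ring-builder object.builder add r1z1-10.0.0.1:6011/sdc 100
    swift-ring-builder object.builder add r1z1-10.0.0.1:6012/sdd 100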
*** geaaru has quit IRC22:53
<openstackgerrit> Darrell Bishop proposed openstack/swift: Allow 1+ object-servers-per-disk deployment  https://review.openstack.org/184189  23:02
<torgomatic> poo, looks like readv2() has not yet landed in Linux (if it's going to at all)  23:04
<torgomatic> "git log -S readv2" doesn't turn up anything  23:04
* torgomatic was looking forward to that one  23:04
*** wbhuber has quit IRC23:10
<openstackgerrit> Darrell Bishop proposed openstack/swift: Allow 1+ object-servers-per-disk deployment  https://review.openstack.org/184189  23:35
<swifterdarrell> torgomatic: fancy new kernel features look less exciting when you consider how long it is until they're available in some popular enterprise distros (*cough*RHEL*cough*)  23:38
<swifterdarrell> torgomatic: :(  23:38
*** zynisch_o7 has joined #openstack-swift23:53
*** zynisch_o7 has quit IRC23:58

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!