Sunday, 2018-02-18

*** tosky has quit IRC  01:11
*** bkopilov has quit IRC  03:10
*** Supun has joined #openstack-swift  04:01
*** Supun has quit IRC  04:02
*** Supun has joined #openstack-swift  04:03
*** bkopilov has joined #openstack-swift  04:29
*** yashmurty has joined #openstack-swift  04:56
*** psachin has joined #openstack-swift  05:24
*** SkyRocknRoll has joined #openstack-swift  05:30
*** SkyRocknRoll has quit IRC  05:52
*** SkyRocknRoll has joined #openstack-swift  05:53
*** zaitcev has joined #openstack-swift  05:53
*** ChanServ sets mode: +v zaitcev  05:53
*** hoonetorg has quit IRC  06:02
*** hoonetorg has joined #openstack-swift  06:15
*** SkyRocknRoll has quit IRC  06:27
*** yashmurt_ has joined #openstack-swift  06:36
*** yashmurty has quit IRC  06:39
*** yashmur__ has joined #openstack-swift  06:39
*** yashmur__ has quit IRC  06:40
*** yashmurt_ has quit IRC  06:42
*** Supun has quit IRC  07:00
*** Supun has joined #openstack-swift  07:01
*** psachin has quit IRC  07:05
*** Supun has quit IRC  07:08
*** armaan has joined #openstack-swift  07:39
*** yashmurty has joined #openstack-swift  08:17
*** yashmurty has quit IRC  08:21
*** yashmurty has joined #openstack-swift  08:36
*** yashmurty has quit IRC  08:41
*** yashmurty has joined #openstack-swift  08:44
*** yashmurty has quit IRC  08:49
*** SkyRocknRoll has joined #openstack-swift  08:49
*** yashmurty has joined #openstack-swift  09:00
*** yashmurty has quit IRC  09:05
*** Supun has joined #openstack-swift  09:28
*** Supun has quit IRC  09:48
*** armaan has quit IRC  10:10
*** armaan has joined #openstack-swift  10:17
*** armaan has quit IRC  10:18
*** armaan has joined #openstack-swift  10:48
*** Supun has joined #openstack-swift  10:50
*** Supun has quit IRC  11:07
*** bkopilov has quit IRC  11:28
*** tosky has joined #openstack-swift  12:41
*** bkopilov has joined #openstack-swift  13:12
*** links has joined #openstack-swift  13:42
*** silor has joined #openstack-swift  14:16
*** Supun has joined #openstack-swift  15:31
*** yashmurty has joined #openstack-swift  15:41
<DHE> from a design standpoint, swift is all about handling failures gracefully. do you think this could be extended to using systems known to be old and (theoretically) more likely to fall over from natural causes?  15:42
*** Supun has quit IRC  15:42
*** yashmurty has quit IRC  15:46
*** Supun has joined #openstack-swift  15:54
-openstackstatus- NOTICE: Zuul has been restarted and queues were saved. However, patches uploaded after 14:40 UTC may have been missed. Please recheck your patchsets where needed.  15:56
*** Supun has quit IRC  16:05
*** Supun has joined #openstack-swift  16:06
<notmyname> DHE: I'm not sure what you mean, exactly. you're right that swift is all about handling failures. failed hardware is normal and expected. are you talking about putting swift "on top of" (whatever that means) other storage systems?  16:21
*** Supun has quit IRC  16:29
*** Supun has joined #openstack-swift  16:29
<DHE> notmyname: I have a number of ~8-10 year old servers I'm thinking of throwing into a swift stack. New hard drives though. while the machines have been fairly good, they are well past their intended lifespan  16:42
<notmyname> no worries. if they run python 2, they'll be fine  16:44
<notmyname> if they are 32 bit, you'll have to lower the max file size limit to at most (2**32)-1  16:44
<notmyname> for the whole cluster, not just those machines  16:44
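That limit lives in the [swift-constraints] section of swift.conf and has to match on every node. A minimal sketch of the change, assuming the usual /etc/swift/swift.conf path (4294967295 is just (2**32)-1; the stock default is a little over 5 GiB):

    # /etc/swift/swift.conf -- must be identical on every node in the cluster
    [swift-constraints]
    # cap single-object size at (2**32)-1 bytes so 32-bit object servers cope
    max_file_size = 4294967295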
<DHE> they're 64 bit, but only just. Xeons based on the Core 2 Duo CPU  16:44
<notmyname> also, slower machines could end up being a bottleneck in the cluster for throughput or replication cycles  16:45
<notmyname> but it'll be fine! :-)  16:45
<notmyname> I've got swift running at home on some 32-bit ARM chips. I've got it at work on fancy Xeon-D chips. I've put it on a raspberry pi before. it all works  16:46
<notmyname> (note: "works" doesn't imply any particular performance guarantee ;-) )  16:47
<DHE> well, these machines have 1gig NICs and I'm not planning on changing that. as storage bricks I think that'll be okay.  16:47
<DHE> so I think that'll do  16:47
* DHE is just building up his first lab now (not related to these machines) so I have much to do  16:48
*** silor has quit IRC  16:49
*** silor has joined #openstack-swift  16:49
*** Supun has quit IRC  17:01
*** links has quit IRC  17:51
*** armaan has quit IRC  17:58
*** links has joined #openstack-swift  18:31
*** SkyRocknRoll has quit IRC  18:36
*** armaan has joined #openstack-swift  18:40
*** yashmurty has joined #openstack-swift  18:42
*** yashmurty has quit IRC  18:46
*** links has quit IRC  20:03
*** silor has quit IRC  20:04
*** bkopilov has quit IRC  20:59
*** bkopilov has joined #openstack-swift  21:12
<DHE> first lab network is up, but I think I've made a mistake somewhere. I have 2 regions with 2 availability zones each (and 6 hosts total) configured with 3-way replication, but a partition got an object put onto 2 devices on the same host. that seems wrong.  21:21
*** nucleo has joined #openstack-swift  21:25
<notmyname> yes it does. check the output of `swift-ring-builder` for that ring. make sure you have set things as you expect  21:25
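A sketch of that check, assuming the object ring builder sits at /etc/swift/object.builder (the IP, port and device name in the last comment are illustrative):

    # print the ring summary: replica count, regions, zones, device weights,
    # and the current balance and dispersion numbers
    swift-ring-builder /etc/swift/object.builder
    # confirm the partition assignments are internally consistent
    swift-ring-builder /etc/swift/object.builder validate
    # each device's failure domain comes from its add string, e.g. region 1, zone 2:
    #   swift-ring-builder /etc/swift/object.builder add r1z2-192.168.1.12:6200/sdb 100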
<DHE> slight correction... 5 hosts, 7 disks. might be due to the weighting maybe? one of the zones has low-weight devices.  21:30
<notmyname> oh, do you mean on the same server or the same drive?  21:31
<DHE> no, but same server different drives. with 3-way replication and 4 availability zones total I assumed this would not happen  21:32
<notmyname> should never be on the same drive (as reported by swift-get-nodes or anything; in reality it would be the same file on disk)  21:32
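For reference, swift-get-nodes shows exactly where a given object's replicas should land (the account, container and object names below are made up; assumes the ring file is in the default /etc/swift location):

    # list the primary devices (and handoff nodes) for one object's partition
    swift-get-nodes /etc/swift/object.ring.gz AUTH_test mycontainer myobject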
<notmyname> could be because of weighting, if they are not all equal  21:32
<DHE> I guess that makes sense  21:32
<notmyname> what's your balance number? is it zero?  21:32
<DHE> 0.00 balance, 12.64 dispersion  21:33
<notmyname> if you were adding capacity to an existing ring, it may take a few rebalances to get moved  21:33
<notmyname> ah  21:33
<notmyname> so you have different weights?  21:33
<DHE> yes, 5x100, 1x50, 1x30  21:33
<DHE> where the 30 and 50 comprise a single availability zone  21:33
<notmyname> yeah, ok  21:37
<notmyname> if you are really worried about it, you can use the "overload" parameter to favor dispersion over balance  21:37
<notmyname> balance will target even percentage used on each drive (ie every drive has the same % used). dispersion will target unique failure domains (but when the smallest drive is 100% full, you're done)  21:38
<DHE> I can work with that  21:39
<notmyname> overload is a parameter that lets you adjust the tradeoff between the two  21:39
<notmyname> ie if you overload a drive, it will take more partitions (ie dispersion) at the expense of balance  21:39
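A minimal sketch of applying some overload and rebalancing, using the same assumed builder path as above (0.1, i.e. 10%, is just an example value):

    # allow each device to take up to 10% more than its weight-proportional share
    # of partitions, trading balance for better failure-domain dispersion
    swift-ring-builder /etc/swift/object.builder set_overload 0.1
    swift-ring-builder /etc/swift/object.builder rebalance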
<DHE> this is my lab, so pathological cases are business as usual  21:39
*** rcernin has joined #openstack-swift  21:43
<DHE> notmyname: thanks a lot  21:46
<notmyname> np  21:46
*** zaitcev_ has joined #openstack-swift  21:54
*** ChanServ sets mode: +v zaitcev_  21:54
*** zaitcev has quit IRC  21:58
*** threestrands has joined #openstack-swift  22:17
*** threestrands has quit IRC  22:17
*** threestrands has joined #openstack-swift  22:17
*** armaan has quit IRC  22:38
<mattoliverau> morning  22:52
*** nucleo has quit IRC  23:11
*** yashmurty has joined #openstack-swift  23:13
*** yashmurty has joined #openstack-swift  23:14
*** kei_yama has joined #openstack-swift  23:15
*** Lei_ has joined #openstack-swift  23:22
<DHE> it's working, in spite of my stupidly disabling xattrs on one of the object servers..  23:44
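Swift's object server stores its metadata in user.* extended attributes, so a quick way to confirm a data mount supports them is a throwaway setfattr/getfattr round trip (/srv/node/sdb1 is just the conventional example mount point):

    # both commands should succeed on a healthy object-server filesystem (e.g. XFS)
    touch /srv/node/sdb1/xattr-test
    setfattr -n user.test -v ok /srv/node/sdb1/xattr-test
    getfattr -n user.test /srv/node/sdb1/xattr-test
    rm /srv/node/sdb1/xattr-test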
<mattoliverau> \o/  23:58
