Monday, 2018-11-05

02:48 <openstackgerrit> Vu Cong Tuan proposed openstack/python-swiftclient master: Switch to stestr  https://review.openstack.org/581610
03:50 <notmyname> FYI tomorrow (my monday), I'll be traveling to the LA area and won't be online
05:23 *** pcaruana has joined #openstack-swift
05:32 *** pcaruana has quit IRC
06:17 *** e0ne has joined #openstack-swift
06:36 *** e0ne has quit IRC
07:44 *** ccamacho has joined #openstack-swift
08:06 *** pcaruana has joined #openstack-swift
08:44 *** gkadam_ has joined #openstack-swift
09:04 *** ccamacho has quit IRC
09:10 *** e0ne has joined #openstack-swift
09:12 *** ccamacho has joined #openstack-swift
09:31 *** pcaruana has quit IRC
09:33 *** pcaruana has joined #openstack-swift
10:15 *** ccamacho has quit IRC
10:17 *** ccamacho has joined #openstack-swift
12:20 *** tellesnobrega_ is now known as tellesnobrega
12:21 *** tellesnobrega has left #openstack-swift
12:37 *** e0ne_ has joined #openstack-swift
12:40 *** e0ne has quit IRC
13:32 *** jistr is now known as jistr|call
14:01 *** admin6 has joined #openstack-swift
14:08 *** jistr|call is now known as jistr
14:47 *** mvkr has quit IRC
14:50 *** ccamacho has quit IRC
14:52 *** ccamacho has joined #openstack-swift
15:04 *** jistr is now known as jistr|call
15:10 *** jistr|call is now known as jistr
15:15 *** mvkr has joined #openstack-swift
15:22 *** d34dh0r53 has quit IRC
15:22 *** cloudnull has quit IRC
15:34 <admin6> Hi team, I plan to reduce the number of zones I have in my ring. I currently have 6 zones (only 1 server per zone), but I'd prefer to have only 4 zones, in order to add servers in groups of 4. Is it safe to gradually remove the disks in zones 5 and 6 and add them to the other zones, until there are no disks left in zones 5 and 6?
16:06 *** pcaruana has quit IRC
16:08 *** ccamacho has quit IRC
16:09 *** persia has left #openstack-swift
16:36 *** e0ne_ has quit IRC
16:48 *** openstackgerrit has quit IRC
16:48 *** cwright has quit IRC
16:49 *** cwright has joined #openstack-swift
16:57 *** gkadam_ has quit IRC
17:01 <zaitcev> admin6: I don't see any show-stoppers, assuming replication of 3 + 1 handoff.
17:02 *** gyee has joined #openstack-swift
17:03 <admin6> zaitcev: thx, in fact this ring is a 9+3 erasure-coded one, so I plan to have 3 fragments per zone eventually
17:06 <zaitcev> admin6: that sounds balanced enough, although you don't have any hand-off for some reason?
17:11 <admin6> zaitcev: do you mean there will be no handoffs if I only have 4 zones in that case? I thought handoffs would be dispersed over zones 1 to 4, no?
17:12 <zaitcev> I'm not talking about the number of zones in this case. Suppose you have carried out your program and added 6 more servers; now you have 12 servers and 12 fragments (9+3).
17:13 <zaitcev> Well, anyway. You need Sam or Clay to answer this.
17:17 <admin6> zaitcev: OK. The idea behind this is just to buy additional servers in groups of 4 instead of groups of 6 and still distribute data correctly over the zones. I'll see what Sam or Clay think about this :-) Thanks.
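
For reference, a gradual drain like the one admin6 describes is usually done with swift-ring-builder: step the zone 5 and 6 device weights down over several rebalance cycles, then remove the empty devices and re-add the disks under the remaining zones. This is only a sketch; the builder file name, IP/port, device name and weights are placeholders, and set_weight/remove may ask for confirmation when a search value like z5 matches several devices.

    # Lower the weight of every device in zones 5 and 6 (repeat in steps,
    # pushing the new object-1.ring.gz to all nodes and waiting for
    # reconstruction to catch up between rebalances).
    swift-ring-builder object-1.builder set_weight z5 2000
    swift-ring-builder object-1.builder set_weight z6 2000
    swift-ring-builder object-1.builder rebalance

    # Once the devices are at weight 0 and hold no partitions, drop them
    # and re-add the same disks under one of the four remaining zones.
    swift-ring-builder object-1.builder set_weight z5 0
    swift-ring-builder object-1.builder set_weight z6 0
    swift-ring-builder object-1.builder rebalance
    swift-ring-builder object-1.builder remove z5
    swift-ring-builder object-1.builder remove z6
    swift-ring-builder object-1.builder add r1z1-10.0.0.105:6200/sdb 4000
    swift-ring-builder object-1.builder rebalance

    # Sanity-check balance and dispersion before pushing the final ring.
    swift-ring-builder object-1.builder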
17:33 *** pdardeau has joined #openstack-swift
17:34 <pdardeau> hi swifters! what are the current recommendations for ratios of PAC nodes to O nodes?
17:34 <pdardeau> or AC nodes to O nodes?
17:39 <timburke> pdardeau! long time no see :-)
17:39 <pdardeau> timburke: indeed! i've had my head down in ceph land
17:40 <timburke> on the question, i'm not sure... depends on object size, surely, and the relative sizes of hard drives (since you'll want SSD for A/C, but will probably prefer cheaper HDD for O)...
17:40 <timburke> maybe others have stronger opinions than me though
17:43 <pdardeau> assuming O nodes with 16x 12TB HDD per node, how many of those could be served by a single AC node?
17:43 <pdardeau> with the single AC node having something like 4 or 6 SSDs
17:44 <pdardeau> any rules of thumb, like 1 AC node per rack?
17:44 <pdardeau> or at most 3 AC nodes per rack?
17:45 <DHE> there isn't really a set ratio. I mean, for cold storage you can probably do with 3 PAC nodes and unlimited O nodes...
17:46 <tdasilva> I once heard people talking in % terms, like AC takes a certain % of the overall storage of a cluster, but now i'm trying to remember what that number was
17:46 <tdasilva> so it wasn't necessarily a number of nodes, but just the amount of storage dedicated to AC based on how much is for O
17:47 <pdardeau> tdasilva: oic
17:48 <tdasilva> pdardeau: https://ask.openstack.org/en/question/29811/what-is-the-appropriate-size-for-account-and-container-servers-in-swift/
17:48 <DHE> but again, that's a rather subjective thing. I have a tiny container (~11,000 objects) where each object takes around 256 bytes of space on the container disk, or 768 with 3-way container replication. that will probably grow as the database expands.
17:49 <pdardeau> DHE: yep, understood
17:51 <pdardeau> tdasilva: thanks for the link!
17:53 <pdardeau> zaitcev said 1-2% of O for AC back in 2014. i wonder if he would still recommend the same today?
17:57 *** e0ne has joined #openstack-swift
17:57 <DHE> I'm suspecting 2-3 kilobytes per object on the account+container servers is going to be fairly safe, assuming 3-way replication. Scale appropriately for other configurations. but now you need to know the number of objects, average object size, etc. to do the math.
18:00 <pdardeau> DHE: gotcha. tricky for me since folks are saying X PB total cluster size
18:19 <timburke> pdardeau: fwiw, i've got a couple of clusters my company uses for ISV integrations/testing, and they seem to be provisioned around that 1-2% guideline. actual usage indicates that may have been high. the object disks are in the neighborhood of 25-33% full, while the a/c disks are barely at 2%
18:19 <timburke> might have to do with workload, though? maybe those ISVs skew toward larger objects?
18:20 <pdardeau> swiftstack has some guidelines here: https://www.swiftstack.com/docs/admin/hardware.html#proxy-account-container-pac-nodes
18:22 <timburke> yeah, 0.3% for O-to-A/C seems not crazy, based on what i'm seeing
18:23 <timburke> er, reverse that, but yeah
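
To make the rule of thumb concrete, here's a back-of-the-envelope calculation using DHE's 2-3 KB of account/container DB space per object, combined with an assumed 10 PB of object data and 1 MB average object size (those last two numbers are made up for illustration). It lands right around the ~0.3% A/C-to-O figure timburke describes.

    # Rough A/C sizing sketch. Assumptions (not from the log): 10 PB of object
    # data, 1 MB average object size. 3072 bytes/object is the upper end of
    # DHE's 2-3 KB estimate, already counting 3-way A/C replication.
    OBJECT_CAPACITY_PB=10
    AVG_OBJECT_SIZE_MB=1
    AC_BYTES_PER_OBJECT=3072

    OBJECTS=$(( OBJECT_CAPACITY_PB * 1024 * 1024 * 1024 / AVG_OBJECT_SIZE_MB ))
    AC_GB=$(( OBJECTS * AC_BYTES_PER_OBJECT / 1024 / 1024 / 1024 ))
    echo "${OBJECTS} objects -> ~${AC_GB} GB of account/container SSD"
    # With these numbers: ~10.7 billion objects -> ~30 TB of A/C space,
    # i.e. roughly 0.3% of the 10 PB of object capacity.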
18:38 <pdardeau> timburke: thanks for the real-world validation
18:39 <DHE> pdardeau: `swift stat` will, for the indicated user, show a count of how many objects are stored (or at least enough info to add it up yourself). so you can work it out, if only manually
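
For anyone following along, the `swift stat` calls DHE mentions look like the following (the container name is a placeholder, and both forms assume your auth environment variables are already set):

    # Account-level totals: the output includes Containers:, Objects: and
    # Bytes: lines that feed straight into the sizing math above.
    swift stat

    # Object count and byte total for a single container (name is hypothetical).
    swift stat my-container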
18:39 *** openstackgerrit has joined #openstack-swift
18:39 <openstackgerrit> Tim Burke proposed openstack/swift master: py3: port account/container replicators  https://review.openstack.org/614656
18:55 *** mvkr has quit IRC
19:13 <pdardeau> DHE: understood. i work for a hardware vendor and am coming at it from the angle of 'what kind of hardware would a customer need to run a swift cluster of X petabytes?'
19:25 <DHE> the main catch with A+C servers is that the unit of division is often large. If someone makes a container with 1 billion objects in it, then "more disks of reasonable size" doesn't scale; you need a much bigger disk
19:25 <DHE> contrast that with objects, which have a 5 GB size limit: if you're at petabyte scale then you have a LOT of them, meaning adding hard drives redistributes them fairly well
19:26 <timburke> though hopefully we'll be able to automate sharding sooner rather than later and it won't be such an issue :-)
19:26 <DHE> (there's container splitting now to help resolve the first issue, but it's not automatic afaik)
19:26 <DHE> was just typing that
19:30 <DHE> I'm preparing for a container that might have 42 million files in it, which is already pushing my comfort zone by quite a bit...
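
The manual container splitting mentioned above is driven by swift-manage-shard-ranges against a container DB replica (it shipped alongside sharding support earlier in 2018). A rough sketch, with the DB path and the 1,000,000 rows-per-shard value as placeholder choices; check the sharding docs before running anything like this:

    # Propose shard ranges of roughly 1M rows each; this only prints what it
    # would do and does not modify the database.
    swift-manage-shard-ranges /path/to/container.db find 1000000

    # Write the proposed ranges into the DB and mark the container ready to
    # shard; the container-sharder daemon then performs the actual splitting.
    swift-manage-shard-ranges /path/to/container.db find_and_replace 1000000 --enable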
19:33 <pdardeau> DHE: timburke: you guys bring up a lot of great points. it would make good blogging material if that's your game.
19:52 <mattoliverau> It's pdardeau! Hey man, it's been a long time. still working on ceph at Dell? or have you managed to convince them to let you come and do some Swift stuff? it'd be awesome to have you back again ;)
19:54 <pdardeau> mattoliverau: Hey! It has been a long while. Yep, still doing Ceph at Dell. But Swift appeared in my email out of nowhere, so here I am.
19:57 <pdardeau> mattoliverau: it's odd to see you online this time of day. or did you too participate in a time change?
19:57 <mattoliverau> \o/, if I knew that's all it took I'd be happy to continually email you :p
19:58 <mattoliverau> Nah, just young kids getting me out of bed :P
20:45 *** e0ne has quit IRC
20:58 <openstackgerrit> Merged openstack/python-swiftclient master: Switch to stestr  https://review.openstack.org/581610
21:23 *** itlinux has joined #openstack-swift
21:44 *** dhellmann has quit IRC
21:59 <mattoliverau> ok, not I'm really at work, so morning :)
21:59 <mattoliverau> *now
22:02 *** e0ne has joined #openstack-swift
22:05 *** e0ne has quit IRC
22:18 *** dhellmann_ has joined #openstack-swift
22:20 *** dhellmann_ is now known as dhellmann
22:45 *** itlinux has quit IRC
22:51 *** mvkr has joined #openstack-swift
23:58 <zaitcev> did you guys see a LinkedIn update that creight is now at HEB?
