Thursday, 2018-11-08

00:07 *** dhellmann has joined #openstack-swift
00:55 *** gyee has quit IRC
01:42 *** SkyRocknRoll has joined #openstack-swift
02:27 <openstackgerrit> liuyamin proposed openstack/swift master: Remove setup.py check from pep8 job  https://review.openstack.org/616394
03:06 *** fungi has quit IRC
03:09 *** fungi has joined #openstack-swift
03:10 *** fungi has quit IRC
03:40 *** fungi has joined #openstack-swift
03:41 *** fungi has quit IRC
03:45 *** fungi has joined #openstack-swift
07:17 *** SkyRocknRoll has quit IRC
07:21 *** SkyRocknRoll has joined #openstack-swift
07:24 *** SkyRocknRoll has quit IRC
07:25 *** SkyRocknRoll has joined #openstack-swift
07:34 *** pcaruana has joined #openstack-swift
07:44 *** SkyRocknRoll has quit IRC
07:57 *** SkyRocknRoll has joined #openstack-swift
09:50 *** e0ne has joined #openstack-swift
09:58 *** mvkr has quit IRC
10:26 *** mvkr has joined #openstack-swift
12:14 *** admin6_ has joined #openstack-swift
12:16 *** admin6 has quit IRC
12:16 *** admin6_ is now known as admin6
12:24 *** admin6 has quit IRC
12:39 <cwright> notmyname: I'm just now seeing the swift-exporter announcement, thank you very much!
12:53 *** ccamacho has quit IRC
13:03 *** e0ne has quit IRC
13:16 *** SkyRocknRoll has quit IRC
13:18 *** ccamacho has joined #openstack-swift
13:37 *** e0ne has joined #openstack-swift
13:52 *** ccamacho has quit IRC
13:53 *** ccamacho has joined #openstack-swift
14:02 *** ccamacho has quit IRC
14:02 *** kota_ has quit IRC
14:02 *** irclogbot_2 has quit IRC
14:02 *** kukacz has quit IRC
14:02 *** tdasilva has quit IRC
14:02 *** mahatic has quit IRC
14:02 *** tonyb has quit IRC
14:02 *** mattoliverau has quit IRC
14:04 *** kukacz has joined #openstack-swift
14:04 *** kota_ has joined #openstack-swift
14:04 *** ChanServ sets mode: +v kota_
14:04 *** mahatic has joined #openstack-swift
14:04 *** ChanServ sets mode: +v mahatic
14:07 *** tonyb has joined #openstack-swift
15:04 *** mvkr has quit IRC
15:09 *** ccamacho has joined #openstack-swift
15:11 *** tdasilva has joined #openstack-swift
15:11 *** ChanServ sets mode: +v tdasilva
15:20 *** mvkr has joined #openstack-swift
15:48 *** admin6 has joined #openstack-swift
16:06 <admin6> Hi there, I have a small question about the "unique-as-possible" distribution of data. I currently run a 9+3 erasure coding ring over 6 servers. Each server/node is currently declared in a different zone. I'm adding two new servers but I don't want to add new zones to the ring. What is the best way to add these new servers: add them to one existing zone each, even though that will unbalance the zones, with zones 1 and 2
16:06 <admin6> having twice the storage of zones 3 to 6? Is that a problem? Or is there a better way to integrate only 2 new servers into the data distribution?
16:10 <DHE> you can add and remove zones freely. There isn't really a "process" for making zones; you just declare that a server is a member of zone X.
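
For reference, zone membership in Swift is declared per device when the device is added to the ring builder. Below is a minimal sketch of the option being discussed (the two new servers simply joining existing zones 1 and 2), written against Swift's Python RingBuilder API; the part power, IP addresses, device names, weights and the builder file name are hypothetical placeholders, not this cluster's real values.

    # Hedged sketch: model the proposed layout in a throwaway builder before
    # touching the production ring. All values below are made-up placeholders.
    from swift.common.ring import RingBuilder

    # 9+3 erasure coding means 12 fragments per object, i.e. 12 "replicas".
    builder = RingBuilder(part_power=10, replicas=12, min_part_hours=1)

    dev_id = 0
    # Existing layout: 6 servers, one zone each, a few disks per server.
    for server in range(6):
        for disk in range(4):
            builder.add_dev({'id': dev_id, 'region': 1, 'zone': server + 1,
                             'ip': '192.168.1.%d' % (10 + server), 'port': 6200,
                             'device': 'sdb%d' % disk, 'weight': 4000})
            dev_id += 1

    # Two new servers: the zone is just part of each device declaration, so
    # putting them into existing zones 1 and 2 is only a matter of saying so.
    for server, zone in ((6, 1), (7, 2)):
        for disk in range(4):
            builder.add_dev({'id': dev_id, 'region': 1, 'zone': zone,
                             'ip': '192.168.1.%d' % (10 + server), 'port': 6200,
                             'device': 'sdb%d' % disk, 'weight': 4000})
            dev_id += 1

    builder.rebalance()
    builder.save('object-1.builder')   # hypothetical builder file name
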
16:14 *** pcaruana has quit IRC
16:16 *** mvkr has quit IRC
16:18 *** gyee has joined #openstack-swift
16:23 <tdasilva> admin6: do you already have a lot of data in your cluster?
16:24 <tdasilva> the reason I'm asking is that my first thought would also be to just change the rings to reduce the number of zones you have, but I'm not 100% sure whether that would cause a whole lot of data movement
16:25 <tdasilva> but you are right to try to avoid having unbalanced zones
16:27 <tdasilva> the 'unique-as-possible' calculation already takes the nodes (i.e., IP addresses) into account as a failure domain, so there's no point in "making" 1 zone per node
16:28 <tdasilva> in fact you could be losing the opportunity to declare a rack as a failure domain, for example
16:28 *** mvkr has joined #openstack-swift
16:28 <DHE> "as uniquely as possible" just means trying to spread data as well as it can at the highest level (region), then the next level (zone), then the next (node/host). Having 1 host per zone doesn't really accomplish anything.
16:30 <DHE> unless they're already in discrete failure domains (e.g. separate racks, individual PDUs and switches) and you're planning for future growth
16:35 <tdasilva> exactly
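
To make the failure-domain point above concrete, here is a rough sketch of what a "zones map to racks" layout could look like instead of one zone per server, assuming (purely hypothetically) that the eight servers were grouped two per rack; the per-IP tier below the zone still keeps an object's fragments on distinct servers as far as possible.

    # Hedged sketch of a rack-based zone layout; the rack grouping, IPs,
    # weights and part power are hypothetical placeholders.
    from swift.common.ring import RingBuilder

    builder = RingBuilder(part_power=10, replicas=12, min_part_hours=1)

    # One zone per rack, two servers per rack (hypothetical grouping).
    racks = {
        1: ['192.168.1.10', '192.168.1.11'],
        2: ['192.168.1.12', '192.168.1.13'],
        3: ['192.168.1.14', '192.168.1.15'],
        4: ['192.168.1.16', '192.168.1.17'],
    }

    dev_id = 0
    for zone, ips in racks.items():
        for ip in ips:
            for disk in range(4):
                builder.add_dev({'id': dev_id, 'region': 1, 'zone': zone,
                                 'ip': ip, 'port': 6200,
                                 'device': 'sdb%d' % disk, 'weight': 4000})
                dev_id += 1

    builder.rebalance()
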
16:45 <admin6> Yes, the servers are on different racks, but I don't have enough budget to buy 6 new servers at the same time, only two... So I'm trying to find the best solution. The other option I considered is to reorganize and reduce to only 4 zones, gradually moving servers 5 and 6 into zones 3 and 4 and adding the new servers to zones 1 and 2.
16:47 <admin6> but I would have servers spanning multiple zones for a while, and that means I could end up with more than 3 fragments on a single node, making it a point of failure for some objects.
16:48 *** e0ne has quit IRC
16:49 <admin6> unless the "unique-as-possible" distribution is able to recognize that it is the same node (ip/port) even when the disks are in different zones, but I doubt that is the case.
16:57 <admin6> tdasilva: yes, the disks are between 80 and 85% full on each server and I'm storing about 600 TB in this ring; that's why I'm adding new servers.
17:03 <admin6> tdasilva: thanks for your answer, I need to leave but I'll reconnect in a couple of hours :-)
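
One way to sanity-check the worry about fragment placement above (whether any single server could end up holding more than 3 of an object's 12 fragments) is to build the ring from a builder file and count primary fragment assignments per IP. A rough sketch follows, assuming a hypothetical builder file named object-1.builder; it writes object-1.ring.gz to the current directory and checks primary placement only, not handoffs.

    # Hedged sketch: count, for every partition, how many of its primary
    # fragment assignments land on the same server (IP).
    from collections import Counter

    from swift.common.ring import Ring, RingBuilder

    builder = RingBuilder.load('object-1.builder')   # hypothetical file name
    builder.get_ring().save('object-1.ring.gz')

    ring = Ring('.', ring_name='object-1')
    worst = 0
    for part in range(ring.partition_count):
        per_server = Counter(dev['ip'] for dev in ring.get_part_nodes(part))
        worst = max(worst, max(per_server.values()))

    # With 9+3 EC, more than 3 fragments on one server means losing that
    # server could leave some objects unreconstructable.
    print('worst case: %d fragments of one partition on a single server' % worst)
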
17:03 *** admin6 has left #openstack-swift
17:28 <notmyname> good morning
17:32 *** ccamacho has quit IRC
17:35 *** mvkr has quit IRC
18:02 *** irclogbot_2 has joined #openstack-swift
18:10 *** pcaruana has joined #openstack-swift
18:17 *** SkyRocknRoll has joined #openstack-swift
18:22 <openstackgerrit> Merged openstack/swift master: Remove setup.py check from pep8 job  https://review.openstack.org/616394
18:29 *** SkyRocknRoll has quit IRC
20:19 *** e0ne has joined #openstack-swift
20:22 *** e0ne has quit IRC
21:14 <openstackgerrit> Tim Burke proposed openstack/swift master: s3api: Add basic support for ?versions bucket listings  https://review.openstack.org/575838
22:07 *** mattoliverau has joined #openstack-swift
22:07 *** ChanServ sets mode: +v mattoliverau
22:13 <mattoliverau> morning, damn irc client/bouncer reconnected and needed to reauthenticate, so it couldn't auto-join the channel... I should really look into auto reauth. I wonder if this irc client can do that.
