Thursday, 2021-07-08

04:50 <opendevreview> Matthew Oliver proposed openstack/swift master: Sharding: root audit epoch reset warning  https://review.opendev.org/c/openstack/swift/+/799554
04:52 *** redrobot4 is now known as redrobot
05:18 <opendevreview> Matthew Oliver proposed openstack/swift master: reconciler: PPI aware reconciler  https://review.opendev.org/c/openstack/swift/+/799561
07:27 <opendevreview> Matthew Oliver proposed openstack/swift master: sharder: If saving own_shard_range use no_default=True  https://review.opendev.org/c/openstack/swift/+/799966
07:29 <mattoliver> ^ That last one is really a belt-and-braces patch... I'm not even convinced we can reach either of those code paths without an own_shard_range already being present in the broker, but in any case we probably shouldn't allow the possibility of a default own_shard_range, so we should definitely be using no_default=True
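
A minimal sketch of the distinction mattoliver is describing, assuming ContainerBroker's get_own_shard_range API; the DB path and account/container names below are illustrative, not taken from the patch:

    from swift.container.backend import ContainerBroker

    # Illustrative broker; a real one points at a container DB on disk.
    broker = ContainerBroker('/path/to/container.db',
                             account='AUTH_test', container='c')

    # Default behaviour: if no own shard range has been stored yet, the
    # broker fabricates a fresh default one covering the whole namespace.
    osr = broker.get_own_shard_range()  # never None

    # With no_default=True the broker returns None instead, so a caller
    # about to persist the own shard range cannot accidentally save a
    # fabricated default back into the DB.
    osr = broker.get_own_shard_range(no_default=True)
    if osr is not None:
        broker.merge_shard_ranges([osr])  # safe to save
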
11:55 <opendevreview> Alistair Coles proposed openstack/swift master: sharder: add more validation checks on config  https://review.opendev.org/c/openstack/swift/+/797961
13:28 <opendevreview> Ghanshyam proposed openstack/pyeclib master: Moving IRC network reference to OFTC  https://review.opendev.org/c/openstack/pyeclib/+/800052
14:05 <timss> Hi, I'm playing around with overload, and running `swift-ring-builder object.builder set_overload <factor>` seems to trigger the minimum part hours lock, so I can't rebalance immediately afterwards and push out the updated rings. Is the intended behavior to actually wait, i.e. do a write_ring first and then follow up with a rebalance later, or..?
14:21 <DHE> the idea is that after a rebalance moves partitions around you need to give the cluster time to actually move the data and settle
14:22 <DHE> if you move too many copies of the data too fast you can end up in a situation where swift can't actually find the data. it exists, but not where swift expects it to be
14:24 <timss> Aye, but set_overload doesn't seem to update the .ring.gz file itself, just the builder, and it tells you that "the change will take effect after the next rebalance", i.e. it seems like you're waiting for nothing at that stage
14:26 <DHE> setting the overload value is just an input to the rebalance operation. it tells the rebalance that it's okay to exceed device weights in the name of better region/zone/host dispersion.
14:26 <DHE> but since a rebalance restarts the min_part_hours timer, and there may be other changes you want to make, the rebalance command is its own step. apply all the updates you want one by one, then do a single rebalance to actually update the cluster configuration
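
As a numeric aside on what the overload factor DHE mentions actually permits: during rebalance a device may be assigned up to its weight-proportional share of partition replicas times (1 + overload) when that improves dispersion. All numbers below are invented for illustration:

    # Illustrative arithmetic only; every value here is made up.
    replicas, part_power = 3, 10
    total_part_replicas = replicas * 2 ** part_power    # 3072

    device_weight_fraction = 0.25   # this device holds 25% of total weight
    overload = 0.1                  # as set by: set_overload 0.1

    fair_share = total_part_replicas * device_weight_fraction   # 768.0
    max_with_overload = fair_share * (1 + overload)             # 844.8
    print(fair_share, max_with_overload)
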
14:36 <timss> Hm ok, thanks. In this scenario the rings in question had already been rebalanced and in use for days (24h min_part_hours), and set_overload triggered the timer. If however I made an entirely new ring, added some dummy devices, and then changed the overload, I could proceed with rebalancing right afterwards. I guess I expected changing the overload on an existing ring and rebalancing it to count as one "committed" change
14:38 <timss> I don't really need to change the overload on the existing ring since I'll be redeploying this cluster and configuring the overload during the initial ring setup; it just made me unsure whether I'd done it correctly in the first place :)
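
To make the "batch changes, then one rebalance" workflow DHE describes concrete, here is a minimal sketch using the RingBuilder Python API that the swift-ring-builder CLI wraps; the builder filename is illustrative:

    from swift.common.ring import RingBuilder

    builder = RingBuilder.load('object.builder')
    builder.overload = 0.1    # what `set_overload 0.1` records; it lives
                              # only in the builder file until a rebalance
    # ...apply any other pending changes (weights, new devices) here...

    builder.rebalance()       # changes take effect now, and the
                              # min_part_hours timer restarts now
    builder.save('object.builder')
    builder.get_ring().save('object.ring.gz')   # the file pushed to nodes
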
17:01 <opendevreview> Merged openstack/swift master: sharder: avoid small tail shards  https://review.opendev.org/c/openstack/swift/+/794582
19:50 <opendevreview> Merged openstack/swift master: Sharding: root audit epoch reset warning  https://review.opendev.org/c/openstack/swift/+/799554
