Monday, 2019-12-09

00:23 *** irclogbot_2 has joined #openstack-swift
00:39 *** irclogbot_2 has quit IRC
01:27 *** irclogbot_3 has joined #openstack-swift
01:44 <openstackgerrit> Merged openstack/swift master: Set swift_source in account_quotas middleware  https://review.opendev.org/697578
01:47 *** irclogbot_3 has quit IRC
02:05 <openstackgerrit> Merged openstack/swift master: Set swift_source more in s3api middleware  https://review.opendev.org/697580
02:11 <openstackgerrit> Pete Zaitcev proposed openstack/python-swiftclient master: Cleanup session on delete  https://review.opendev.org/674320
02:47 *** irclogbot_2 has joined #openstack-swift
04:47 <kota_> morning
05:13 <openstackgerrit> Merged openstack/swift master: Set swift_source more in versioned_writes  https://review.opendev.org/697581
05:23 <mattoliverau> kota_: o/
05:23 <kota_> mattoliverau: o/
07:12 *** rcernin has quit IRC
08:07 *** tkajinam has quit IRC
08:30 *** tesseract has joined #openstack-swift
08:31 *** tesseract has quit IRC
08:31 *** tesseract has joined #openstack-swift
08:35 *** joeljwright has left #openstack-swift
09:11 *** rcernin has joined #openstack-swift
09:15 *** rpittau|afk is now known as rpittau
09:36 <openstackgerrit> Thiago da Silva proposed openstack/swift master: New Object Versioning mode  https://review.opendev.org/682382
09:38 <tdasilva> clayg, timburke, mattoliverau: ^^^ applied changes from last reviews and fixed the issue with listing/delete when version-id=null. Have not yet tackled container-sync
10:14 *** rcernin has quit IRC
10:16 *** csmart has quit IRC
10:16 *** baffle has quit IRC
11:38 *** baffle has joined #openstack-swift
11:38 *** csmart has joined #openstack-swift
11:46 *** pcaruana has joined #openstack-swift
12:22 <viks___> Hi, the object replication cycle takes around 7-8 hours in my cluster. Is that normal? My settings have `concurrency = 1` and `replicator_workers = 0`. Should I increase the workers and concurrency to bring the replication cycle down to around 1 hour? What is the ideal time range for a replication cycle that I should be targeting?
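(Aside: the two options viks___ quotes live in the [object-replicator] section of object-server.conf. A rough sketch of a more aggressive configuration follows — the values are only illustrative, and the right numbers depend entirely on available disk and network I/O:)

    [object-replicator]
    # a positive value spawns that many replicator worker processes, each
    # handling a subset of the local disks; 0 keeps the single-process behaviour
    replicator_workers = 4
    # greenthreads per worker doing the actual rsync/ssync transfers
    concurrency = 4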
12:57 <donnyd> so this morning I am getting a bunch of these showing up in the logs: "Unexpected response while deleting object"
12:59 <donnyd> Also, how many servers should the expiration process be running on?
13:02 *** rdejoux has joined #openstack-swift
13:27 *** new_student1411 has joined #openstack-swift
13:35 <new_student1411> I am trying to set an ACL on an object using the S3 API. I have set `x-amz-acl` to `public-read` and tried to access the same object using the URL `https://s3.<domain>/<bucket-name>/file.txt`, but it gives 400 Bad Request. The `s3_acl` option is true and I can see the object ACL is set using a third-party tool. Is this how I should handle the ACL?
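(Aside: for reference, a minimal sketch of setting that canned ACL through boto3 against an s3api-enabled Swift endpoint — the endpoint, credentials, bucket and key names are placeholders, and this doesn't diagnose the 400, which depends on how the anonymous request is routed and authorized:)

    import boto3

    # placeholder endpoint and credentials for a Swift cluster running the s3api middleware
    s3 = boto3.client(
        's3',
        endpoint_url='https://s3.example.com',
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
    )

    # upload with a canned ACL; equivalent to sending the x-amz-acl: public-read header
    s3.put_object(Bucket='bucket-name', Key='file.txt', Body=b'hello', ACL='public-read')

    # or apply the ACL to an object that already exists
    s3.put_object_acl(Bucket='bucket-name', Key='file.txt', ACL='public-read')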
14:32 *** new_student1411 has quit IRC
15:27 <clayg> viks___: Not necessarily, the cycle time is highly dependent on the available I/O and activity in the cluster.  However, if you're getting 7-8 hours in steady state, then when you do a rebalance (add nodes) it's probably going to be unsatisfyingly slow.
15:29 <clayg> viks___: it's good that you're monitoring your consistency engine - a lot of durability failure calculations I've seen give 24-hour windows to address faults (unmount failed disk & rebuild).  You can definitely increase concurrency settings and monitor I/O.
15:29 <clayg> viks___: but the real test will come when there's a fault or you add nodes
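(Aside: one way to watch cycle times and spot the drives dragging them out is swift-recon, assuming the recon middleware/cache is enabled on the object servers; the exact long-form flags may vary slightly by release, but -r, -d and -u are the replication, disk-usage and unmounted checks:)

    # replication stats (last cycle duration, failures) from every object node
    swift-recon object -r

    # disk usage and unmounted drives, which often explain long cycles
    swift-recon object -d -u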
15:31 <clayg> donnyd: what version of swift are you running?  newer versions have better reporting in the expirer when handling different status codes (we could try and find the commits if you're fairly recent)
15:33 <clayg> donnyd: you can scale out the expirer to as many nodes as you need to keep up with your backlog, most SwiftStack clusters run the expirer on every object node
15:33 <clayg> donnyd: you should monitor your expirer queue -> https://gist.github.com/clayg/7f66eab2a61c77869e1e84ac4ed6f1df
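(Aside: the gist above is the canonical version of this check; purely as a rough illustration of what it counts, something along these lines walks the hidden .expiring_objects queue account with Swift's InternalClient — assuming a standard /etc/swift/internal-client.conf and the default queue account name and 86400-second container divisor:)

    import time
    from swift.common.internal_client import InternalClient

    client = InternalClient('/etc/swift/internal-client.conf', 'expirer-queue-check', 3)

    now = time.time()
    total = past_due = 0
    for container in client.iter_containers('.expiring_objects'):
        # queue containers are named by a delete-at timestamp bucket
        total += container['count']
        if int(container['name']) + 86400 <= now:
            # the whole bucket is in the past, so these entries should
            # already have been processed by the expirer
            past_due += container['count']
    print('total queue entries: %d, past-due entries: %d' % (total, past_due))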
15:33 <clayg> donnyd: if you get backed up, run more
15:34 <donnyd> clayg: I am on stein
15:37 <clayg> donnyd: ok, the stuff I was thinking of is old, like 2.17
15:37 <clayg> donnyd: when it logs "unexpected status" does it say what the status *was*?
15:38 <donnyd> https://www.irccloud.com/pastebin/3gLJ8adX/
15:38 <clayg> donnyd: yeah, you're way behind - you need to run more expirers and increase the concurrency/workers
15:38 <donnyd> well that is a super duper handy little tool
15:39 <donnyd> I can do that
15:39 <donnyd> but there is nothing coming in atm - FN has been turned off for logging because of the state of it
15:39 <donnyd> and because I suck at the swifts
15:40 <clayg> the expirer has two options to scale workers - you have "processes" and "process" - processes is the total number of processes you're running - then in each config you set process = 0..n to assign each worker an index
15:41 <donnyd> I will post what I have.. it's probably all effed up
15:41 <clayg> then concurrency will probably hit a sweet spot anywhere between 4 and 30
15:41 <donnyd> https://www.irccloud.com/pastebin/H3PUSrX0/
15:41 <clayg> so that's fine, as long as you have 10 other configs with process = 1..9
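(Aside: a rough sketch of how that processes/process pairing might look across two expirer nodes — the option names are the real object-expirer options, but the values are only illustrative:)

    # /etc/swift/object-expirer.conf on the first node
    [object-expirer]
    processes = 10     # total number of expirer processes across the cluster
    process = 0        # this node's index, 0..processes-1
    concurrency = 8

    # /etc/swift/object-expirer.conf on the next node
    [object-expirer]
    processes = 10
    process = 1
    concurrency = 8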
15:43 <clayg> so the neat thing about having 100M stale entries is you're not going to be done anytime soon regardless - so there's no rush 😎
15:44 <donnyd> LMAO
15:44 <donnyd> Oh so I see how those two work together now
15:47 <clayg> yeah, if you've been running with the above config for a while (i.e. no one was processing rows % 1..9) that could explain the stale entries
15:48 <clayg> tdasilva: thanks for re-spinning versioning
15:49 <openstackgerrit> Romain LE DISEZ proposed openstack/swift master: relinker: Improve performance by limiting I/O  https://review.opendev.org/695344
15:49 <clayg> i thought about container-sync over the weekend... I think i'd prefer to find some way to gracefully abstain rather than trying to "one-off" syncing of the most recent version
15:50 <clayg> I could see someone trying to turn on versioning on the remote end and just being entirely disappointed ... we really need to up the game on container sync
16:46 *** rpittau is now known as rpittau|afk
17:14 <timburke> donnyd, clayg: the graph to watch will be https://grafana.fortnebula.com/d/9MMqh8HWk/openstack-utilization?orgId=2&refresh=30s&from=now-7d&to=now&fullscreen&panelId=28 -- as long as FN is still out of the log pool, that should only be coming down -- i'd expect it to settle around 5-6% if logs are being retained for 60 days, or around 1% for only 30 days (again, assuming no new ingest)
17:16 *** gyee has joined #openstack-swift
17:23 <donnyd> timburke: it has been slowly but surely headed downward
17:23 <donnyd>   Total entries: 110908075
17:23 <donnyd> Pending entries: 526144
17:23 <donnyd>   Stale entries: 103061941
17:24 <donnyd> kilt almost 1M in the 1.5 hours... so only 100 or so hours to go
17:24 <donnyd> maybe 200 LOL
17:25 *** rdejoux has quit IRC
17:31 *** pcaruana has quit IRC
17:51 *** diablo_rojo has joined #openstack-swift
18:23 *** pcaruana has joined #openstack-swift
18:45 <tdasilva> clayg: skipping versioned containers until we have a better solution seems sane; should the same be done for static links, i guess?
18:52 <clayg> tdasilva: i'm struggling with how to make it obvious to the user what's going on... and with normal static links I guess there's always the "hope" that the target eventually gets written in the remote for some reason and next pass it'll work
19:17 *** gmann is now known as gmann_afk
19:19 <openstackgerrit> Tim Burke proposed openstack/swift master: WIP: Add proxy-server option to quote-wrap all ETags  https://review.opendev.org/695131
19:24 <timburke> so i'm looking at the request timings we log at the object-server: https://github.com/openstack/swift/blob/2.23.0/swift/obj/server.py#L1296-L1301
19:25 <timburke> is there a reason we're doing (roughly) time to first byte instead of looking at the whole transfer?
19:37 *** tesseract has quit IRC
19:42 <DHE> time to first byte is often viewed as a responsiveness metric
20:09 *** ab-a has left #openstack-swift
20:10 *** ab-a has joined #openstack-swift
20:10 *** ab-a has left #openstack-swift
20:52 <clayg> @timburke i'm pretty sure the theory was you could monitor for anomalies watching TTFB - whereas the total transfer time would be too variable based on the size of the object
20:52 *** ab-a has joined #openstack-swift
20:55 <timburke> so the context was, i saw object-servers respond real quick saying "hey, yeah, i've got that!" but then serve the data out at ~1/6 the speed of other drives in the cluster (looking at the total transfer time reported in the proxy)
21:10 <clayg> @timburke it's difficult to say for sure if it was serving the data slowly or the proxy was reading the data slowly (N.B. the proxy's read is back-pressured against the client socket buffers)
21:10 <clayg> regardless, it's true that the TTFB measurement isn't giving you insight into the read throughput or the bottlenecks
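(Aside: not the actual swift code — just a toy sketch of the distinction being discussed. The object-server's existing timer stops before the body is streamed, which is roughly TTFB; capturing total transfer time would mean timing the whole iteration of the response body, e.g. by wrapping the app_iter with something like this:)

    import time

    def timed_body(app_iter, logger):
        """Yield the response body while logging time-to-first-chunk and the
        total iteration time for the same transfer."""
        start = time.time()
        first = True
        try:
            for chunk in app_iter:
                if first:
                    logger.info('first chunk after %.4fs', time.time() - start)
                    first = False
                yield chunk
        finally:
            logger.info('full transfer took %.4fs', time.time() - start)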
21:27 <openstackgerrit> Clay Gerrard proposed openstack/swift master: Fix container-sync objects with offset timestamp  https://review.opendev.org/698092
21:31 *** pcaruana has quit IRC
21:48 *** gmann_afk is now known as gmann
22:09 <donnyd> is there any way to get swift to expire things faster?
22:09 <donnyd> I worry that this isn't going to get the job done until sometime in 2025
22:16 *** rcernin has joined #openstack-swift
22:23 <mattoliverau> morning
22:24 <mattoliverau> notmyname: is it time to upgrade your mini swift setup at home? https://www.cnx-software.com/2019/12/08/rock-pi-sata-hat-targets-rock-pi-4-raspberry-pi-4-nas/
22:34 <timburke> heh "The SATA HAT Top Board (with fan) is supposed to be at the top of the NAS, so maybe they could consider some ventilation holes at the top as well."
22:36 *** rcernin has quit IRC
23:04 <tdasilva> i wonder what's the best bang for the buck in terms of DIY home nas nowadays, odroid also had some nice boards
23:12 *** diablo_rojo has quit IRC
23:12 *** tkajinam has joined #openstack-swift
23:15 *** diablo_rojo has joined #openstack-swift
23:17 *** rcernin has joined #openstack-swift
23:27 <DHE> now where were those container sharding instructions?
23:39 <openstackgerrit> Merged openstack/python-swiftclient master: Cleanup session on delete  https://review.opendev.org/674320
23:40 <timburke> DHE, https://docs.openstack.org/swift/latest/overview_container_sharding.html
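(Aside: the linked doc walks through the manual workflow with swift-manage-shard-ranges; from memory it is roughly the below, but the exact subcommands and flags vary by release, so treat this as a sketch and follow the doc for your version:)

    # find candidate shard ranges of ~500k rows in a container DB and, on recent
    # releases, inject them and mark the container ready to shard in one step
    swift-manage-shard-ranges /srv/node/d1/containers/.../<hash>.db find_and_replace 500000 --enable

    # the container-sharder daemon (enabled in container-server.conf) then does
    # the actual splitting on its subsequent passes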

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!