Tuesday, 2017-06-27

kota_: good morning  00:47
*** hoonetorg has quit IRC00:53
*** lucasxu has joined #openstack-swift00:57
*** lucasxu has quit IRC00:57
*** hoonetorg has joined #openstack-swift01:05
*** klrmn has quit IRC01:08
*** tovin07_ has joined #openstack-swift01:09
*** hoonetorg has quit IRC01:16
*** cshastri has joined #openstack-swift01:25
*** hoonetorg has joined #openstack-swift01:29
*** bkopilov_ has quit IRC01:47
*** aselius has quit IRC02:08
*** m_kazuhiro has joined #openstack-swift02:18
*** klrmn has joined #openstack-swift02:34
*** pdardeau has joined #openstack-swift02:57
*** bkopilov_ has joined #openstack-swift03:04
mahatic: good morning  03:24
openstackgerrit: Lingxian Kong proposed openstack/swift master: Write-affinity aware object deletion  https://review.openstack.org/470158  03:24
* mahatic had a long weekend  03:24
mahatic: kota_: o/  03:24
kota_: mahatic: o/  03:24
*** links has joined #openstack-swift03:42
*** gkadam has joined #openstack-swift03:44
*** pdardeau has quit IRC03:44
*** psachin has joined #openstack-swift03:48
*** gkadam has quit IRC03:48
*** Dinesh_Bhor has joined #openstack-swift04:02
mattoliverau: kota_, mahatic: o/  04:05
kota_: mahatic: o/  04:06
*** m_kazuhiro has quit IRC04:13
*** gkadam has joined #openstack-swift04:27
*** MVenesio has quit IRC04:37
kong: mattoliverau: hi, could you please take a look at the latest patchset for https://review.openstack.org/470158 and see if you are happy with that?  04:39
patchbot: patch 470158 - swift - Write-affinity aware object deletion  04:39
*** sanchitmalhotra has joined #openstack-swift04:39
kong: I prefer the default value to be 'auto' since it's clearer for ops than 'None'  04:39
kong: also, clayg ^^  04:40
*** MVenesio has joined #openstack-swift04:41
mattoliverau: kong: yup, I'll take a look when I get a chance (hopefully tonight). By 'None' in a configuration I meant nothing set, but yeah, 'auto' also tells ops that the value will be automatically calculated depending on your system, so that's better I think :)  04:58
notmyname: FYI, no 2100 meeting this week; only the 0700 meeting  05:09
notmyname: https://wiki.openstack.org/wiki/Meetings/Swift is up to date  05:09
*** htruta` has joined #openstack-swift05:11
*** etiennem1 has joined #openstack-swift05:12
*** sgundur has joined #openstack-swift05:12
*** edausqu has joined #openstack-swift05:13
*** htruta has quit IRC05:13
*** etienneme has quit IRC05:13
*** mmotiani has quit IRC05:13
*** sgundur- has quit IRC05:13
*** EmilienM has quit IRC05:13
*** edausq has quit IRC05:13
*** edausqu is now known as edausq05:13
*** psachin has quit IRC05:14
*** psachin has joined #openstack-swift05:16
*** adriant has joined #openstack-swift05:20
*** EmilienM has joined #openstack-swift05:22
*** adriant has quit IRC05:25
*** mmmucky_ has joined #openstack-swift05:37
*** mahatic_ has joined #openstack-swift05:37
*** ChubYann has quit IRC05:38
*** mmmucky has quit IRC05:39
*** mahatic has quit IRC05:39
*** ntata has quit IRC05:39
*** rcernin has joined #openstack-swift05:55
*** sanchitmalhotra has quit IRC05:57
mattoliverau: kong: there is a config_auto_int_value helper method in utils you can use  06:02
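(Editor's note: a minimal sketch of how the helper mattoliverau mentions is typically used when an option accepts the keyword 'auto'; the option name and default below are illustrative assumptions, not taken from the patch under review.)

    # Sketch only - 'some_node_count' is a hypothetical option name.
    from swift.common.utils import config_auto_int_value

    conf = {'some_node_count': 'auto'}
    replica_count = 3  # pretend this was computed from the ring
    # Returns the default when the value is missing or the string 'auto',
    # otherwise converts the value to an int (raising ValueError if it can't).
    node_count = config_auto_int_value(conf.get('some_node_count'),
                                       replica_count)
    print(node_count)  # -> 3 here; a setting of '7' would yield 7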
*** cschwede has joined #openstack-swift06:07
*** ChanServ sets mode: +v cschwede06:07
*** klrmn has quit IRC06:19
*** rcernin has quit IRC06:26
*** rcernin has joined #openstack-swift06:32
*** hseipp has joined #openstack-swift06:39
*** tesseract has joined #openstack-swift06:55
*** hseipp has quit IRC06:56
*** hseipp has joined #openstack-swift06:56
*** hseipp has quit IRC07:00
*** jeblair has quit IRC07:08
*** jeblair has joined #openstack-swift07:09
*** skudlik has joined #openstack-swift07:21
*** gyee has quit IRC07:23
*** pcaruana has joined #openstack-swift07:32
acoles: good morning  07:33
mattoliverau: acoles: o/  07:38
acoles: mattoliverau: hi  07:38
*** oshritf has joined #openstack-swift07:53
mahatic_: acoles: mattoliverau: hello  08:00
mattoliverau: mahatic: o/  08:04
openstackgerrit: liuyamin proposed openstack/swift master: Fix the reST field raises in docstrings  https://review.openstack.org/451143  08:12
*** oshritf_ has joined #openstack-swift08:22
*** oshritf has quit IRC08:25
*** cbartz has joined #openstack-swift08:41
*** etiennem1 is now known as etienneme08:41
*** d0ugal has quit IRC08:58
*** d0ugal_ has joined #openstack-swift08:59
*** d0ugal_ has quit IRC08:59
*** d0ugal has joined #openstack-swift08:59
*** d0ugal has joined #openstack-swift08:59
*** jarbod_ has joined #openstack-swift09:30
*** tonyb_ has joined #openstack-swift09:30
*** mvk has quit IRC09:30
*** karenc has joined #openstack-swift09:30
*** StevenK_ has joined #openstack-swift09:30
*** jlvillal_ has joined #openstack-swift09:33
*** logan_ has joined #openstack-swift09:33
*** d0ugal has quit IRC09:34
*** hoonetorg has quit IRC09:34
*** rledisez has quit IRC09:34
*** alecuyer has quit IRC09:34
*** StevenK has quit IRC09:34
*** logan- has quit IRC09:34
*** jarbod___ has quit IRC09:34
*** mgagne has quit IRC09:35
*** tonyb has quit IRC09:35
*** karenc_ has quit IRC09:35
*** jlvillal has quit IRC09:35
*** jlvillal_ is now known as jlvillal09:36
*** alecuyer has joined #openstack-swift09:37
*** jlvillal is now known as Guest6031909:37
*** mgagne has joined #openstack-swift09:37
*** mgagne is now known as Guest2879609:37
*** rledisez has joined #openstack-swift09:37
*** logan_ is now known as logan-09:37
*** d0ugal has joined #openstack-swift09:41
*** hoonetorg has joined #openstack-swift09:41
*** chlong has joined #openstack-swift09:46
*** mvk has joined #openstack-swift09:57
*** tovin07_ has quit IRC10:01
openstackgerrit: wangzhenyu proposed openstack/python-swiftclient master: Enable some off-by-default checks  https://review.openstack.org/477872  10:19
openstackgerrit: Lingxian Kong proposed openstack/swift master: Write-affinity aware object deletion  https://review.openstack.org/470158  10:42
*** bkopilov_ has quit IRC10:49
*** cshastri has quit IRC11:00
*** chlong_ has joined #openstack-swift11:04
*** chlong has quit IRC11:04
openstackgerrit: iswarya vakati proposed openstack/swift master: Add python 3.5 in classifier  https://review.openstack.org/477901  11:52
*** vint_bra has joined #openstack-swift11:57
*** vint_bra has quit IRC12:03
*** bkopilov_ has joined #openstack-swift12:14
*** lifeless has quit IRC12:30
*** gkadam has quit IRC12:33
*** lifeless has joined #openstack-swift12:43
*** NM has joined #openstack-swift12:43
*** skudlik has quit IRC12:47
*** skudlik has joined #openstack-swift12:51
*** lucasxu has joined #openstack-swift13:05
*** kei_yama has quit IRC13:08
*** klamath has joined #openstack-swift13:42
*** klamath has quit IRC13:42
*** klamath has joined #openstack-swift13:43
*** ukaynar has joined #openstack-swift13:52
*** vint_bra has joined #openstack-swift14:10
*** gsmethells has joined #openstack-swift14:27
*** aselius has joined #openstack-swift14:30
*** d0ugal has quit IRC14:32
*** d0ugal has joined #openstack-swift14:32
*** d0ugal has quit IRC14:32
*** d0ugal has joined #openstack-swift14:32
*** skudlik has quit IRC14:45
*** gkadam has joined #openstack-swift14:52
*** gkadam has quit IRC14:59
*** Guest60319 is now known as jlvillal15:04
*** rcernin has quit IRC15:04
*** links has quit IRC15:05
openstackgerrit: Alistair Coles proposed openstack/swift master: WIP: Ring rebalance respects co-builders' last_part_moves  https://review.openstack.org/477000  15:11
*** lucasxu has quit IRC15:14
*** lucasxu has joined #openstack-swift15:16
*** gsmethells has quit IRC15:20
*** lucasxu has quit IRC15:28
*** klamath has quit IRC15:28
*** klamath has joined #openstack-swift15:30
*** vinsh has joined #openstack-swift15:33
*** gyee has joined #openstack-swift15:34
*** gsmethells has joined #openstack-swift15:37
*** klamath has quit IRC15:38
gsmethells: clayg, are you available?  15:39
*** klamath has joined #openstack-swift15:39
*** NM has quit IRC15:42
*** ghebda has joined #openstack-swift15:44
gsmethells: is anyone available to help with https://bugs.launchpad.net/swift/+bug/1700585 ?  15:48
openstack: Launchpad bug 1700585 in OpenStack Object Storage (swift) "Objects can become orphaned in Swift 2.4.0" [Undecided,Incomplete]  15:48
*** psachin has quit IRC15:54
*** ukaynar has quit IRC16:01
*** ukaynar has joined #openstack-swift16:01
*** gyee has quit IRC16:02
jrichli: gsmethells: clayg is on PST time, so it's straight-up 9am for him now. He will probably be online soon.  16:02
*** links has joined #openstack-swift16:02
gsmethells: Thanks jrichli for the heads up  16:02
*** chlong has joined #openstack-swift16:05
*** chlong has quit IRC16:05
*** pcaruana has quit IRC16:09
notmyname: good morning  16:10
*** cbartz has quit IRC16:11
notmyname: I'm off to Boston today, so I'm not sure how much I'll be online this week. I'll be back in SF on Friday  16:13
*** lucasxu has joined #openstack-swift16:16
*** itlinux has joined #openstack-swift16:19
*** gyee has joined #openstack-swift16:25
*** lucasxu has quit IRC16:29
*** lucasxu has joined #openstack-swift16:30
*** gsmethells_ has joined #openstack-swift16:30
*** gsmethells has quit IRC16:34
*** klamath has quit IRC16:39
*** tesseract has quit IRC16:40
*** klamath has joined #openstack-swift16:41
*** skudlik has joined #openstack-swift16:41
*** lucasxu has quit IRC16:49
clayg: gsmethells_: ghebda: did you try the request node count setting with an insanely high value?  16:53
clayg: The dispersion populate isn't going to do much good after the fact - if you have it at 100% population and monitor the report closely, it tells you when you're close to losing >1 replica of a part.  16:54
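(Editor's note: for readers unfamiliar with the dispersion tooling clayg mentions, a sketch of how it is driven; the auth values below are placeholders and should be checked against your release's dispersion.conf-sample.)

    # /etc/swift/dispersion.conf (sketch)
    [dispersion]
    auth_url = http://localhost:8080/auth/v1.0
    auth_user = test:tester
    auth_key = testing
    dispersion_coverage = 100    # percent of partitions to cover

    # populate the dispersion objects once, then run the report periodically
    swift-dispersion-populate
    swift-dispersion-report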
ghebda: clayg: gsmethells: I just updated the ticket with some information. We did not change that value because we weren't sure what exactly that line was supposed to look like  16:55
clayg: I think I saw where the replication cycle time on your cluster is 4 days - which is high for a stable system. Maybe during some replication event it was much, much higher?  16:56
*** links has quit IRC16:57
clayg: in the [app:proxy-server] section add request_node_count = 1000 - n.b. this will cause a significant increase in latency for the cluster  16:59
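(Editor's note: a sketch of where that setting lives; the default is the multiplier form "2 * replicas", and a large fixed number effectively means "search every handoff" at the cost of much slower 404s.)

    # /etc/swift/proxy-server.conf (sketch)
    [app:proxy-server]
    use = egg:swift#proxy
    # how many nodes the proxy will try on a GET/HEAD before giving up;
    # the default is "2 * replicas"
    request_node_count = 1000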
*** klamath has quit IRC17:00
*** chsc has joined #openstack-swift17:02
*** klamath has joined #openstack-swift17:05
ghebda: OK, I set that variable to 1000 and restarted the swift proxy service. I ran the curl -I -XHEAD commands from ticket comment #5 with the same results, meaning the two that came back OK still came back OK and the two giving 404s still came back with 404s  17:11
*** lucasxu has joined #openstack-swift17:12
*** mat128 has joined #openstack-swift17:13
*** klamath has quit IRC17:16
*** klamath has joined #openstack-swift17:17
ghebda: as far as the replication time goes, that could be because a node was shut down and moved, so it may not have caught up yet...  17:18
*** klrmn has joined #openstack-swift17:25
*** mvk has quit IRC17:27
*** oshritf_ has quit IRC17:27
gsmethells_: clayg: is there anything else we ought to be looking into?  17:29
*** vinsh has quit IRC17:31
*** vinsh has joined #openstack-swift17:32
*** vinsh has quit IRC17:32
*** vinsh has joined #openstack-swift17:32
*** vinsh has quit IRC17:37
clayg: was the response time noticeably higher for the 404 after changing request_node_count - can you confirm from the transaction id and logs that the proxy was hitting all the disks looking for that hash?  17:37
*** geaaru has joined #openstack-swift17:44
ghebda: I did not run the initial queries, so I can't compare the times, but my results were pretty immediate. Here's the line from the logs for that transaction:  17:51
ghebda: 10.11.12.200 10.11.12.200 26/Jun/2017/15/05/41 HEAD /v1/AUTH_75673124ca7f42968e28bc264ed32331/1/1.2.840.114204.2.2.4.1.243395414945023.14589405468080000/1.2.840.114204.2.2.2.1.199754063548486.14589405856570000.dcm HTTP/1.0 404 - curl/7.19.7%20%28x86_64-redhat-linux-gnu%29%20libcurl/7.19.7%20NSS/3.19.1%20Basic%20ECC%20zlib/1.2.3%20libidn/1.18%20libssh2/1.4.2 47901120da53431e... - - - tx5b2de3447b6a45e8a5a5e-00595122c4 - 1.1213  17:51
*** hseipp has joined #openstack-swift17:54
*** links has joined #openstack-swift17:55
clayg: that looks like the proxy request - were there a bunch of backend object-server requests too?  like... dozens?  with the same txid  17:59
*** tonanhngo has joined #openstack-swift18:00
*** links has quit IRC18:02
ghebda: actually, with those txids I'm not seeing any requests in the logs on the nodes  18:06
*** tonanhngo has quit IRC18:08
clayg: that's not good  18:10
clayg: ;)  18:10
clayg: but it's not an account or container existence check issue?  You can list the account, list the container, other objects in the container HEAD just fine - always the same objects fail?  Maybe it's just an issue with your greps - can you sanity check that you can find the object-server requests for the *successful* txid?  18:11
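(Editor's note: a hedged example of the grep sanity check clayg suggests, reusing the transaction id from the proxy log line ghebda pasted above; the log path is an assumption - many installs send everything to syslog instead.)

    # run on each storage node: look for backend object-server lines that
    # carry the same transaction id as the proxy access-log entry
    grep 'tx5b2de3447b6a45e8a5a5e-00595122c4' /var/log/swift/*.log
    # repeat with the txid of a request that returned 200, to confirm the
    # grep itself works before concluding the nodes were never contacted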
*** tonanhngo has joined #openstack-swift18:12
ghebda: does the object-server log those only if the request is made from the swift client, and not do that for curl?  18:13
clayg: from the object-server's perspective all requests are made from the proxy  18:13
clayg: regardless of what client the proxy is talking to  18:14
clayg: the user-agent string shouldn't make any difference anyway  18:14
ghebda: OK. Is verbose logging necessary to see which disks it checks?  18:14
*** hseipp has quit IRC18:15
clayg: object-server access log lines are logged at INFO  18:15
clayg: https://docs.openstack.org/developer/swift/logs.html#storage-node-logs  18:15
*** tonanhngo has quit IRC18:17
*** skudlik has quit IRC18:18
*** tonanhngo has joined #openstack-swift18:19
*** tonanhngo has quit IRC18:23
*** noark9 has joined #openstack-swift18:24
*** tonanhngo has joined #openstack-swift18:25
*** gsmethells_ has quit IRC18:28
*** tonanhngo has quit IRC18:29
*** tonanhngo has joined #openstack-swift18:31
ghebda: so I pulled a recent GET operation txid from one of my logs, and on one node I see 10 lines in rapid succession. On node 2 I get nothing from that grep, and on node 3 there are about 20 GET lines in rapid succession  18:32
*** tonanhngo has quit IRC18:36
*** tonanhngo has joined #openstack-swift18:37
clayg: this is after you made the change to request_node_count?  if you're seeing ~30 lines for the GET probably so... but it must have been a 404 response, yes?  I can't imagine why one node wouldn't have any logs for that txid?  18:39
clayg: did it not log them/you can't find them?  Or did it not get sent requests?  18:40
*** tonanhngo has quit IRC18:41
*** tonanhngo has joined #openstack-swift18:43
*** gsmethells has joined #openstack-swift18:47
*** tonanhngo has quit IRC18:48
*** JimCheung has joined #openstack-swift18:48
*** tonanhngo has joined #openstack-swift18:50
*** tonanhngo has quit IRC18:54
*** tonanhngo has joined #openstack-swift18:56
*** Renich has joined #openstack-swift18:58
*** geaaru has quit IRC19:00
*** joeljwright has joined #openstack-swift19:01
*** ChanServ sets mode: +v joeljwright19:01
*** tonanhngo has quit IRC19:02
*** ChubYann has joined #openstack-swift19:08
*** noark9 has quit IRC19:11
ghebda: OK, I'm actually able to download some of the files that were 404-ing before, using the swift client. What's strange is that when I go to one of the storage nodes and run swift-get-nodes for one of those files, the curl commands all 404 when I run them back on the proxy  19:21
*** Renich has quit IRC19:22
*** mvk has joined #openstack-swift19:23
*** vinsh has joined #openstack-swift19:23
clayg: ... I don't understand any of that except the bit where you say we found the lost objects ;)  19:25
clayg: the rest makes no sense :P  19:25
*** tonanhngo has joined #openstack-swift19:25
clayg: I mean.. the words make sense - but I don't understand how all of these things can be true/correct at the same time :D  19:25
clayg: distributed systems are fun!  19:25
*** tonanhngo has quit IRC19:29
ghebda: right. So, we also ran the ls commands on the paths that swift-get-nodes gives us and got "no such file or directory" for all 6 paths (3 primary and 3 handoff locations)  19:30
gsmethells: ghebda - which server are you running the successful command on? what command is it?  19:36
ghebda: I'm running a swift download from the proxy  19:36
*** tonanhngo has joined #openstack-swift19:37
clayg: so it's possible the object data is available in *non-primary* locations  19:38
clayg: that's what the request_node_count is all about - check more nodes  19:38
clayg: troubleshooting *why* the data isn't in its primary locations is a separate question - durability vs. availability  19:39
clayg: go find the 200 responses for those txids - do an ls on those devices/nodes - how far down the list of handoffs do those nodes appear in swift-get-nodes if you use the --all option or whatever it's called  19:40
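(Editor's note: a sketch of the command clayg is describing; the -a/--all flag and the placeholder object name are assumptions to verify against your Swift version, and the account is taken from the log line pasted earlier.)

    # show the primary nodes plus the full handoff chain for an object
    swift-get-nodes -a /etc/swift/object.ring.gz \
        AUTH_75673124ca7f42968e28bc264ed32331 1 some_object_name.dcm
    # the output prints curl and ssh/ls commands for each node; a 200 or an
    # existing .data file deep in the handoff list means the object is
    # durable but misplaced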
*** tonanhngo has quit IRC19:42
clayg: maybe something like this would help figure out what's going on with replication: https://gist.github.com/clayg/4261e7dc654cc2c80a529b741a7cdd5f  19:42
*** tonanhngo has joined #openstack-swift19:43
*** tonanhngo has quit IRC19:48
*** tonanhngo has joined #openstack-swift19:50
ghebda: OK, yeah, so we ran through the entire list of handoff locations and found 3 that gave us 200s, much deeper down the list than the 3 primaries and 3 initial handoffs. So that makes sense  19:50
clayg: NOICE  19:54
*** tonanhngo has quit IRC19:54
clayg: ... wait ... is that new information only to me?  I wasn't sure if we were looking at potentially a durability issue?  Or even really why you're still seeing some 404s on objects you don't think should have ever been deleted?  19:55
*** tonanhngo has joined #openstack-swift19:56
*** alanvitor has joined #openstack-swift19:59
gsmethells: clayg - it sounds like you configured the storage nodes to basically search every possible location (essentially look on all disks). That is clearly non-optimal, though it can give us 200s instead of 404s. I'm happy we achieved some level of correctness, but now I wonder how we achieve optimality again. Why are the files stuck in odd handoff locations and not in the primary locations? Is there a way to track that down now?  19:59
*** tonanhngo has quit IRC20:00
gsmethells: ghebda - have you guys run this? https://gist.github.com/clayg/4261e7dc654cc2c80a529b741a7cdd5f  20:00
tdasilva: maybe it has to do with that very long replication time mentioned earlier? was it 4 days?  20:01
clayg: yeah, having the parts on the wrong nodes is an availability issue - request_node_count was an experiment  20:01
gsmethells: yeah, I figured. Are we now at the hard part?  20:01
ghebda: we have not run that yet  20:02
gsmethells: There must be a way to get Swift to usher these "orphaned" handoffs to their final (primary) location? And then determine how to prevent it in the future.  20:02
clayg: I think the hard part is not freaking out when you think you may have lost data - so I say we're past the hard part  20:02
gsmethells: Oh, goodie. :)  20:03
clayg: this is just optimization and twiddling stuff - we can do that with beers  20:03
clayg: there are lots of strategies you can employ to deal with misplaced parts  20:04
clayg: recognizing the issue (and monitoring for it) is the biggest first step  20:04
clayg: and sort of a pre-requisite  20:04
*** itlinux has quit IRC20:04
clayg: since doing something... to "hurry things along" requires knowing when you can stop doing that - getting visibility is required  20:04
clayg: probably you should just run the classify stuff - or some kind of part-counting collection script - you can graph it or stick it in a database or run it ad hoc on demand (w/e)  20:05
clayg: ... then probably just increase replicator concurrency, turn on handoffs_first/only and maybe set handoffs_delete to ... 2 or 1 - probably 1 is fine  20:06
clayg: you might need to check your rsync max_connections configuration or something too...  20:06
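(Editor's note: a sketch of where those knobs live. The sample configs spell the option handoff_delete (singular) and the rsync option "max connections"; whether handoffs_only exists depends on your release, so verify all of these against your object-server.conf-sample before use.)

    # /etc/swift/object-server.conf (sketch)
    [object-replicator]
    concurrency = 4        # more concurrent rsyncs per replicator
    handoffs_first = True  # replicate partitions on handoff locations first
    handoff_delete = 1     # remove a handoff after this many successful syncs

    # /etc/rsyncd.conf on the storage nodes
    [object]
    max connections = 8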
openstackgerrit: Thiago da Silva proposed openstack/swift master: Bind SAIO services on different loopback addresses  https://review.openstack.org/475202  20:08
gsmethells: I like the suggestions, but what's step 1 here?  20:08
*** tonanhngo has joined #openstack-swift20:08
gsmethells: I'm thinking we ought to find a methodology for moving the parts along to their primary locations instead of their current handoff locations. How do we do that?  20:09
clayg: if you had 100% dispersion population - you would normally be using swift-dispersion-report to track rebalances - or at least that's how it's been done "classically" - although I prefer part counting/classification these days...  20:09
gsmethells: Is that the "classify script"?  20:09
clayg: yeah, I think visibility is step 0 - otherwise you can't really tell what's working  20:11
clayg: and since data is kinda heavy you have to have some patience - you might start something off and not know for hours (days?) if it's going to be the solution  20:12
*** tonanhngo has quit IRC20:13
*** tonanhngo has joined #openstack-swift20:14
*** spetersen has joined #openstack-swift20:19
*** tonanhngo has quit IRC20:19
ghebda: hey clay, thanks. I'm actually heading out now and Scott (spetersen) is taking over.  20:20
*** ghebda has quit IRC20:20
*** ukaynar_ has joined #openstack-swift20:20
spetersen: hi clay  20:20
*** tonanhngo has joined #openstack-swift20:21
clayg: heh, hi guys  20:21
spetersen: Thanks for helping Greg  20:21
spetersen: We changed the number to 1000 and we found the data way down the list, but we did find 3 copies.  20:22
clayg: yeah, it's great when it's not a durability issue *phew*  20:22
spetersen: you are not kidding.  20:23
spetersen: Greg said the default was 6?  20:23
clayg: so... rolling into the next thing - you need to do something to monitor replication/rebalance - you let it get way out of hand and stuff got messed way up without you knowing - not good  20:23
spetersen: right  20:24
*** ukaynar has quit IRC20:24
clayg: as an open source project swift has as many different monitoring stacks as it does deployments - IMHO, best of breed these days is part-counting techniques - you listdir in /srv/node/dXXX/objects every so often and then check which of those part numbers "belong" on this node (according to the current ring version) and which ones don't  20:25
clayg: you don't need to track the specific parts as much as the counts - 10K primaries, 1K handoffs  20:25
*** tonanhngo has quit IRC20:25
clayg: if the handoff count is big - it's not great - if it's not going down, that's sorta not good - if it's growing, it's bad  20:26
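(Editor's note: a minimal sketch of the part-counting idea clayg describes - list the partition directories on a device and compare them against the ring. The device name, ring path and policy-0 datadir are assumptions; clayg's classify gist linked above is the more complete version.)

    # Sketch: count primary vs. misplaced partitions on one object device.
    import os
    from swift.common.ring import Ring

    ring = Ring('/etc/swift', ring_name='object')  # policy-0 object ring
    device = 'd1'                                  # assumed local device name
    parts_path = '/srv/node/%s/objects' % device   # policy-0 datadir

    primary = misplaced = 0
    for part in os.listdir(parts_path):
        if not part.isdigit():
            continue  # skip auditor status files, tmp dirs, etc.
        nodes = ring.get_part_nodes(int(part))
        # matching on device name alone assumes names are unique per node;
        # strictly you would also compare each node's ip/port to this host
        if any(n['device'] == device for n in nodes):
            primary += 1
        else:
            misplaced += 1
    print('primary: %d  misplaced: %d' % (primary, misplaced))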
clayg: is this a geo cluster?  20:26
clayg: multi-region?  20:26
clayg: write affinity?  20:26
spetersen: we moved one of the 3 storage nodes to our main office, 100MB line  20:26
*** tonanhngo has joined #openstack-swift20:27
spetersen: since then we have not performed a successful replication  20:27
spetersen: 5 days' worth  20:27
clayg: cool!  20:27
spetersen: we have r1z1n1, r1z2n1 and r1z3n1  20:28
spetersen: we moved r1z2n1 here  20:28
spetersen: We did have a 10GB backbone on the cluster, but we reduced that connection by 100 times by moving a node here.  20:28
spetersen: I believe that to be true, I may be wrong.  20:29
clayg: which location do you ingest to?  both?  are you using read/write affinity?  20:29
spetersen: a 3 node cluster with 34 bays each.  20:29
spetersen: I wanted to enable that on day one before moving the node here but was rushed by management.  20:30
spetersen: I just enabled read / write affinity on the proxy at our colo.  20:30
spetersen: read_affinity = r1z1=100, r1z3=100, r1z2=200  20:31
*** itlinux has joined #openstack-swift20:31
spetersen: write_affinity = r1z1, r1z3  20:31
spetersen: Is that right?  20:31
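(Editor's note: for context, a sketch of the full affinity block those two lines belong to - lower read_affinity numbers are tried first, and read_affinity only takes effect when sorting_method is set to affinity. The values simply mirror spetersen's paste; per clayg's advice just below, write_affinity is probably better left disabled here.)

    # /etc/swift/proxy-server.conf (sketch)
    [app:proxy-server]
    sorting_method = affinity                      # required for read_affinity
    read_affinity = r1z1=100, r1z3=100, r1z2=200   # lower value = preferred
    # write_affinity = r1z1, r1z3                  # left commented out
    # write_affinity_node_count = 2 * replicas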
*** tonanhngo has quit IRC20:32
spetersen: r1z1n1 and r1z3n1 are also at the colo.  20:32
clayg: maybe - write affinity requires very careful, high-fidelity replication monitoring - or maybe very predictable ingest patterns...  20:32
clayg: read affinity is fine - but I'd recommend you drop write_affinity - even if you do get some kind of handle on replication monitoring...  20:33
*** tonanhngo has joined #openstack-swift20:33
clayg: doesn't matter that much right at this moment tho...  20:33
spetersen: OK, I commented out write_affinity  20:33
spetersen: do you recommend moving r1z2n1 back to the colo?  20:34
clayg: well... that's probably fine - but it only affects new parts coming in  20:34
clayg: there are a lot of variables to consider there - I'm sorry I don't have a simple answer for you - the best I can do is try to point you at information available online about how swift works and let you learn and use your best judgement  20:34
spetersen: cool  20:35
clayg: I don't mind pointing you at a protip now and again - esp. if there's a hair-on-fire situation :D  20:35
spetersen: Thanks!  20:36
clayg: read over the classify script - test it on a development/lab environment - then check it out on one of the prod nodes - try to build up a mental model for what's going on - let me know if you have any questions or want to confirm/correct any understanding  20:36
spetersen: sure thing!  20:36
spetersen: thanks!  20:36
clayg: np, gl  20:37
*** tonanhngo has quit IRC20:38
*** tonanhngo has joined #openstack-swift20:39
clayg: I'm looking at https://gist.github.com/clayg/4261e7dc654cc2c80a529b741a7cdd5f and thinking it'd be nice if line 80 kept a count of the primary parts on disk too - I only recently started making a distinction between handoffs and misplaced parts - but it's particularly relevant in geo clusters, where handoffs are normal/expected with write_affinity, whereas misplaced parts after a rebalance are just never good unless they're going down.  20:41
*** tonanhngo has quit IRC20:43
*** ukaynar has joined #openstack-swift20:44
*** ukaynar_ has quit IRC20:47
spetersen: Thanks clay.  20:47
*** lucasxu has quit IRC20:48
*** vinsh has quit IRC20:49
*** spetersen has quit IRC20:51
*** tonanhngo has joined #openstack-swift20:52
*** MVenesio has quit IRC20:53
*** tonanhngo has quit IRC20:56
*** tonanhngo has joined #openstack-swift20:58
tdasilva: timburke: around?  20:59
*** cschwede has quit IRC21:01
*** tonanhngo has quit IRC21:02
*** itlinux has quit IRC21:10
*** tonanhngo has joined #openstack-swift21:10
*** tonanhngo has quit IRC21:18
clayg: tdasilva: he's on PTO this week  21:26
tdasilva: clayg: ah cool! good for him. just left him a comment on patch 459023  21:27
patchbot: https://review.openstack.org/#/c/459023/ - liberasurecode - Consistently use zlib for crc32  21:27
clayg: yeah, I wish I could tell you I know what's going on in that patch - I don't - kota_ ???  21:28
*** mmmucky_ is now known as mmmucky21:31
*** tonanhngo has joined #openstack-swift21:32
*** itlinux has joined #openstack-swift21:32
*** joeljwright has quit IRC21:34
*** tonanhngo has quit IRC21:37
*** tonanhngo has joined #openstack-swift21:38
*** tonanhngo has quit IRC21:43
*** tonanhngo has joined #openstack-swift21:44
*** ukaynar has quit IRC21:45
*** tonanhngo has quit IRC21:50
*** tonanhngo has joined #openstack-swift21:51
*** alanvitor has quit IRC21:53
*** tonanhngo has quit IRC21:55
*** tonanhngo has joined #openstack-swift21:57
*** gsmethells_ has joined #openstack-swift22:01
*** tonanhngo has quit IRC22:02
*** tonanhngo has joined #openstack-swift22:03
*** gsmethells has quit IRC22:04
*** gsmethells_ has quit IRC22:04
*** tonanhngo has quit IRC22:07
*** tonanhngo has joined #openstack-swift22:09
*** tonanhngo has quit IRC22:14
*** tonanhngo has joined #openstack-swift22:15
*** tonanhngo has quit IRC22:19
*** tonanhngo has joined #openstack-swift22:21
*** tonanhngo has quit IRC22:26
*** klamath has quit IRC22:32
*** klamath has joined #openstack-swift22:33
*** klamath_ has joined #openstack-swift22:35
*** klamath has quit IRC22:35
*** tonanhngo has joined #openstack-swift22:43
*** itlinux has quit IRC22:46
*** tonanhngo has quit IRC22:47
*** tonanhngo has joined #openstack-swift22:49
*** tonanhngo has quit IRC22:53
*** tonanhngo has joined #openstack-swift22:55
*** tonanhngo has quit IRC23:00
*** tonanhngo has joined #openstack-swift23:01
*** tonanhngo has quit IRC23:06
*** hoonetorg has quit IRC23:09
*** tonanhngo has joined #openstack-swift23:23
*** tonanhngo has quit IRC23:24
*** tonanhngo has joined #openstack-swift23:24
*** hoonetorg has joined #openstack-swift23:26
*** chsc has quit IRC23:31
*** klamath_ has quit IRC23:38
