Tuesday, 2016-03-29

*** haomaiwa_ has quit IRC00:01
*** haomaiwa_ has joined #openstack-swift00:01
*** resker has joined #openstack-swift00:08
*** asettle has joined #openstack-swift00:10
*** garthb has quit IRC00:11
*** esker has quit IRC00:12
*** chuck_ has joined #openstack-swift00:17
*** gyee has quit IRC00:26
*** lyrrad has quit IRC00:33
*** arch-nemesis has quit IRC00:43
*** esker has joined #openstack-swift00:47
<kota_> good morning  00:49
*** resker has quit IRC00:51
<notmyname> hello kota_  00:52
<kota_> notmyname: hello  00:52
*** asettle has quit IRC00:54
<hosanai> kota_: notmyname: morning! & hello!  00:54
<notmyname> hello hosanai  00:54
<kota_> hosanai: o/  00:54
*** haomaiwa_ has quit IRC01:01
*** haomaiwang has joined #openstack-swift01:01
*** asettle has joined #openstack-swift01:04
*** haomai___ has joined #openstack-swift01:06
*** haomaiwang has quit IRC01:08
*** asettle has quit IRC01:08
*** klrmn has quit IRC01:12
*** esker has quit IRC01:14
*** chuck_ is now known as zu01:18
*** zu is now known as zul01:18
*** mingdang1 has joined #openstack-swift01:19
*** asettle has joined #openstack-swift01:23
*** takashi_ has joined #openstack-swift01:24
<takashi_> good morning :-)  01:24
*** jamielennox|away is now known as jamielennox01:29
*** bill_az has quit IRC01:34
*** mathiasb has quit IRC01:47
*** mathiasb has joined #openstack-swift01:47
*** StraubTW has joined #openstack-swift01:49
*** bill_az has joined #openstack-swift01:52
*** mingdang1 has quit IRC01:54
*** haomai___ has quit IRC02:01
*** haomaiwang has joined #openstack-swift02:01
*** haomaiwang has quit IRC02:03
*** 7F1AAMJH0 has joined #openstack-swift02:08
*** StraubTW has quit IRC02:09
*** rickyrem has quit IRC02:10
*** StraubTW has joined #openstack-swift02:11
*** 7F1AAMJH0 has quit IRC02:13
*** klrmn has joined #openstack-swift02:14
*** bill_az has quit IRC02:23
*** baojg has joined #openstack-swift02:29
*** lifeless has quit IRC02:43
*** lifeless has joined #openstack-swift02:44
*** baojg has quit IRC02:45
*** baojg has joined #openstack-swift02:46
*** haomaiwang has joined #openstack-swift02:52
*** sheel has joined #openstack-swift02:54
*** sgundur has left #openstack-swift02:57
*** sanchitmalhotra has joined #openstack-swift02:59
*** haomaiwang has quit IRC03:01
*** haomaiwang has joined #openstack-swift03:01
*** StraubTW has quit IRC03:04
*** dmorita has quit IRC03:11
*** links has joined #openstack-swift03:13
<charz_> morning  03:25
<janonymous> o/  03:26
*** dmorita has joined #openstack-swift03:44
*** dmorita has quit IRC03:48
*** asettle has quit IRC03:58
*** haomaiwang has quit IRC04:01
*** 7GHAAMTT3 has joined #openstack-swift04:01
*** sanchitmalhotra1 has joined #openstack-swift04:18
*** sanchitmalhotra has quit IRC04:20
*** SkyRocknRoll has joined #openstack-swift04:40
*** 7GHAAMTT3 has quit IRC05:01
*** haomaiwang has joined #openstack-swift05:01
*** pcaruana has quit IRC05:09
*** asettle has joined #openstack-swift05:12
*** sanchitmalhotra1 has quit IRC05:13
*** baojg has quit IRC05:24
*** baojg has joined #openstack-swift05:25
*** trifon has joined #openstack-swift05:28
*** silor has joined #openstack-swift05:35
*** asettle has quit IRC05:39
*** silor has quit IRC05:43
*** silor1 has joined #openstack-swift05:43
*** silor1 is now known as silor05:45
*** klrmn has quit IRC05:47
*** ChubYann has quit IRC05:51
*** silor has quit IRC05:54
*** silor has joined #openstack-swift05:54
*** asettle has joined #openstack-swift05:54
*** asettle has quit IRC05:59
*** haomaiwang has quit IRC06:01
*** haomaiwang has joined #openstack-swift06:01
*** tesseract has joined #openstack-swift06:19
*** tesseract is now known as Guest5878206:19
*** haomaiwang has quit IRC07:01
*** haomaiwang has joined #openstack-swift07:01
*** rledisez has joined #openstack-swift07:11
*** zaitcev has quit IRC07:13
*** sanchitmalhotra has joined #openstack-swift07:15
*** jordanP has joined #openstack-swift07:21
*** dmellado|off is now known as dmellado07:36
*** rcernin has joined #openstack-swift07:36
*** mingdang1 has joined #openstack-swift07:37
*** haomaiwang has quit IRC07:39
*** mingdang1 has quit IRC07:39
*** mingdang1 has joined #openstack-swift07:40
*** haomaiwang has joined #openstack-swift07:43
*** SkyRocknRoll has quit IRC07:46
*** haomaiwang has quit IRC07:48
*** pcaruana has joined #openstack-swift07:48
*** daemontool has joined #openstack-swift07:50
*** haomaiwa_ has joined #openstack-swift07:53
*** SkyRocknRoll has joined #openstack-swift07:56
*** mmcardle has joined #openstack-swift07:56
*** daemontool_ has joined #openstack-swift07:58
*** haomaiwa_ has quit IRC08:01
*** haomaiwang has joined #openstack-swift08:01
*** daemontool has quit IRC08:02
<openstackgerrit> Marek Kaleta proposed openstack/swift: Order devices in the output of swift-ring-builder  https://review.openstack.org/277956  08:17
*** joeljwright has joined #openstack-swift08:19
*** ChanServ sets mode: +v joeljwright08:19
*** mingdang1 has quit IRC08:21
*** jistr has joined #openstack-swift08:23
*** asettle has joined #openstack-swift08:26
*** jmccarthy has quit IRC08:38
*** jmccarthy has joined #openstack-swift08:38
*** daemontool_ has quit IRC08:43
*** daemontool_ has joined #openstack-swift08:43
*** mvk has joined #openstack-swift08:46
*** haomaiwang has quit IRC09:01
*** haomaiwang has joined #openstack-swift09:01
*** stantonnet has quit IRC09:02
*** stantonnet has joined #openstack-swift09:05
*** sileht has quit IRC09:07
*** sileht has joined #openstack-swift09:12
*** asettle has quit IRC09:27
*** takashi_ has quit IRC09:36
*** lifeless has quit IRC09:39
*** lifeless has joined #openstack-swift09:40
*** baojg has quit IRC09:42
*** baojg has joined #openstack-swift09:42
*** baojg has quit IRC09:44
*** baojg has joined #openstack-swift09:45
*** asettle has joined #openstack-swift09:52
*** daemontool_ has quit IRC09:55
*** km__ has joined #openstack-swift09:55
*** km__ has quit IRC09:55
*** daemontool_ has joined #openstack-swift09:55
*** daemontool_ has quit IRC09:56
*** daemontool_ has joined #openstack-swift09:56
*** km__ has joined #openstack-swift09:56
*** km__ is now known as Guest8370109:57
*** asettle has quit IRC09:57
*** km has quit IRC09:58
*** haomaiwang has quit IRC10:01
*** haomaiwang has joined #openstack-swift10:01
*** Guest83701 has quit IRC10:07
*** km has joined #openstack-swift10:08
*** km has quit IRC10:11
*** km has joined #openstack-swift10:12
*** hosanai has quit IRC10:14
*** km has quit IRC10:17
*** km has joined #openstack-swift10:20
<openstackgerrit> Merged openstack/swift: Fix py34 error of indexing 'dict_keys' object  https://review.openstack.org/292096  10:21
*** km has quit IRC10:27
*** km has joined #openstack-swift10:28
*** flaper87 has quit IRC10:36
*** flaper87 has joined #openstack-swift10:36
*** kei_yama has quit IRC10:37
*** km has quit IRC10:40
*** asettle has joined #openstack-swift10:56
*** asettle has quit IRC10:56
*** baojg has quit IRC10:59
*** haomaiwang has quit IRC11:01
*** haomaiwang has joined #openstack-swift11:01
*** dmorita has joined #openstack-swift11:12
*** mvk_ has joined #openstack-swift11:15
*** dmorita has quit IRC11:17
*** mvk has quit IRC11:19
*** mingdang1 has joined #openstack-swift11:33
*** mmcardle has quit IRC11:37
*** cbartz has joined #openstack-swift11:48
*** asettle has joined #openstack-swift11:56
*** haomaiwang has quit IRC12:01
*** haomaiwa_ has joined #openstack-swift12:01
*** mmcardle has joined #openstack-swift12:02
*** SkyRocknRoll has quit IRC12:06
*** baojg has joined #openstack-swift12:36
*** natarej has quit IRC12:39
*** dmellado is now known as dmellado|lunch12:40
*** StraubTW has joined #openstack-swift12:47
*** jmb___ has quit IRC12:53
*** kozhukalov has joined #openstack-swift12:56
*** StraubTW has quit IRC12:56
*** haomaiwa_ has quit IRC13:01
*** haomaiwang has joined #openstack-swift13:01
*** links has quit IRC13:02
<tdasilva> good morning  13:04
*** natarej has joined #openstack-swift13:07
*** bill_az has joined #openstack-swift13:19
*** haomaiwang has quit IRC13:22
*** ametts has joined #openstack-swift13:22
*** StraubTW has joined #openstack-swift13:25
*** jmb__ has joined #openstack-swift13:27
<pdardeau> good morning  13:35
<vinsh_> Good morning all.  I just noticed, when running servers per disk - "lsof -i" of any given port in the ring shows something strange.  I have 48 disks in this node, so 48 ports in the ring for that node, 1 per disk.  With a servers per port setting of "1" I see 48 threads using lsof -i PER disk.  13:40
<vinsh_> If I bump servers per disk to 4 .. that's 236 threads open.  13:40
<vinsh_> so 48 threads PER port in the ring?  13:40
<vinsh_> I would expect to see just 1 or 4 based on the setting.  13:41
*** diogogmt has quit IRC13:43
<vinsh_> wonder if I am seeing https://bugs.launchpad.net/swift/+bug/1554233 ?  13:44
<openstack> Launchpad bug 1554233 in OpenStack Object Storage (swift) "Servers-per-port can consume excessive OS threads" [High,In progress] - Assigned to Samuel Merritt (torgomatic)  13:44
<vinsh_> but the number of files open per port matches the number of disks each.  13:44
*** mingdang1 has quit IRC13:45
<vinsh_> I should clarify that it's not threads I see.. it's open files.  (big difference)  13:46
<vinsh_> as ps -ef output shows what I would expect from servers per disk.  I guess I just don't get the lsof -i output. :)  13:46
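
Note: a minimal sketch of the option under discussion (values here are illustrative, not vinsh_'s actual config). With servers_per_port = N, the object server forks N workers for every unique port in the object ring, so worker and socket counts scale with both the port count and the setting:

    # /etc/swift/object-server.conf -- illustrative values only
    [DEFAULT]
    bind_ip = 0.0.0.0
    # 0 disables the feature; with N > 0 the object server forks N
    # workers for each unique port found in the object ring, so a
    # node with 48 ports (one per disk) runs 48 * N listeners.
    servers_per_port = 4
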
*** diogogmt has joined #openstack-swift13:47
<vinsh_> Not mounted: d175 on 24.26.92.216:6021  13:47
<vinsh_> Not mounted: d174 on 24.26.92.216:6021  13:47
<vinsh_> The reason I ask is I see really bizarre stuff out of swift-recon on this deployment.  Claiming ^^  13:47
<vinsh_> where those disks are NOT even on that host.  13:48
<vinsh_> swift-ring-builder output confirms  13:48
*** asettle has quit IRC13:49
*** asettle has joined #openstack-swift13:55
*** asettle has quit IRC14:00
*** DevStok has joined #openstack-swift14:00
<DevStok> hi all  14:00
<DevStok> I have an env deployed with Fuel 7  14:01
<DevStok> HA  14:01
<DevStok> glance + swift  14:01
<DevStok> when I upload a lot of snapshots  14:01
<DevStok> the folder /srv/node/ fills up the partition /  14:01
<DevStok> I want to change the swift conf so /srv/node points to another path with more space  14:02
<DevStok> any ideas?  14:02
*** diogogmt has quit IRC14:05
*** daemontool_ is now known as daemontool14:06
*** trifon has quit IRC14:06
*** trifon has joined #openstack-swift14:06
*** sgundur has joined #openstack-swift14:12
*** trifon has quit IRC14:13
*** cbartz has quit IRC14:15
*** dmellado|lunch is now known as dmellado14:21
*** ajiang has left #openstack-swift14:30
*** baojg has quit IRC14:34
*** haomaiwang has joined #openstack-swift14:43
*** kozhukalov has quit IRC14:51
*** kozhukalov has joined #openstack-swift14:51
<mmotiani> Hi, good morning!  14:57
<mmotiani> I am planning to fill gaps in the swift client docs.  14:58
<mmotiani> timburke: Would you be able to give me some suggestions?  15:00
*** haomaiwang has quit IRC15:01
*** haomaiwang has joined #openstack-swift15:01
*** arch-nemesis has joined #openstack-swift15:05
*** openstackgerrit has quit IRC15:06
*** arch-nemesis has quit IRC15:07
*** openstackgerrit has joined #openstack-swift15:07
<joeljwright> mmotiani: hi, which gaps are you hoping to work on?  15:10
*** _JZ_ has joined #openstack-swift15:10
<joeljwright> we have a patch in the pipeline to add in quite a bit of missing detail  15:11
<mmotiani> joeljwright: I can see a lot of docs only contain the heading and there is no content for them.  15:11
<mmotiani> So, planning to work on them.  15:11
*** diogogmt has joined #openstack-swift15:11
<joeljwright> let me point you at a current patch (there are still missing bits, but it's much more complete!)  15:12
<joeljwright> just need to find it...  15:12
<mmotiani> Could you please share the link of the patch?  15:12
<mmotiani> Yeah, that would be great.  15:12
*** dmorita has joined #openstack-swift15:12
<joeljwright> https://review.openstack.org/#/c/288566/  15:14
<patchbot> joeljwright: patch 288566 - python-swiftclient - WIP: This patch adds a new doc structure for swift...  15:14
*** arch-nemesis has joined #openstack-swift15:14
<notmyname> good morning  15:15
<joeljwright> mmotiani: I'm working on the patch at the moment to address the existing comments  15:15
<notmyname> mmotiani: great to see interest in improving the client docs! :-)  15:15
<mmotiani> joeljwright: thanks! I will just go through the patch and comments to see how to get started with it.  15:16
<mmotiani> notmyname: good morning :)  15:16
<notmyname> wbhuber: on https://bugs.launchpad.net/swift/+bug/1563362 are you proposing to add full if-match etag support? I'm not entirely sure what that bug is reporting/proposing  15:16
<openstack> Launchpad bug 1563362 in OpenStack Object Storage (swift) "Doing a PUT on a valid conditional request, If-None-Match, returns unclear statement" [Undecided,New]  15:16
*** siva_krishnan has left #openstack-swift15:16
*** dmorita has quit IRC15:17
*** diogogmt has quit IRC15:17
*** diogogmt has joined #openstack-swift15:18
<joeljwright> mmotiani: I'll push up an update later today that addresses the existing comments from asettle (and adds another section+example to the SwiftService API part)  15:19
*** garthb has joined #openstack-swift15:19
*** siva_krishnan has joined #openstack-swift15:19
*** siva_krishnan has left #openstack-swift15:19
<mmotiani> joeljwright: Ok, I will look into that too. Thanks!  15:20
*** siva_krishnan has joined #openstack-swift15:20
*** siva_krishnan has left #openstack-swift15:20
*** siva_krishnan1 has joined #openstack-swift15:21
*** siva_krishnan1 has left #openstack-swift15:21
*** siva_krishnan has joined #openstack-swift15:21
*** rcernin has quit IRC15:27
*** links has joined #openstack-swift15:32
*** gyee has joined #openstack-swift15:41
*** ametts has quit IRC15:59
*** haomaiwang has quit IRC16:01
*** haomaiwang has joined #openstack-swift16:01
*** esker has joined #openstack-swift16:03
*** Guest58782 has quit IRC16:03
*** links has quit IRC16:04
*** lyrrad has joined #openstack-swift16:05
*** lyrrad has quit IRC16:06
*** lyrrad has joined #openstack-swift16:07
*** esker has quit IRC16:13
*** esker has joined #openstack-swift16:13
*** jistr has quit IRC16:14
*** StraubTW has quit IRC16:18
<vinsh_> FYI I solved my recon problem I posted a few hours ago.  I had empty dirs in /srv/node that were from a previous setup I had.  Those were unused but still report back to recon. I removed them and all is well.  16:18
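
Note: a hedged way to spot leftovers like the ones vinsh_ describes (assumes each in-use device directory is a mount point; mountpoint(1) is from util-linux):

    # Flag entries under /srv/node that are not active mount points;
    # stale empty directories here still get picked up by recon.
    for d in /srv/node/*; do
        mountpoint -q "$d" || echo "not a mount: $d"
    done
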
*** StraubTW has joined #openstack-swift16:18
*** nadeem has joined #openstack-swift16:18
*** nadeem has quit IRC16:19
*** nadeem has joined #openstack-swift16:19
<mmotiani> joeljwright: notmyname: Hi, do we need to follow the same document structure for the rest of the docs in Swift Client? I can see the sdk.rst, index.rst, and swiftclient.rst files have gaps. So do we need to follow the same convention Joel is using in cli.rst for all of them?  16:23
*** haomaiwang has quit IRC16:32
*** dmorita has joined #openstack-swift16:32
*** haomaiwa_ has joined #openstack-swift16:34
*** StraubTW has quit IRC16:37
*** haomaiwa_ has quit IRC16:37
*** haomaiwang has joined #openstack-swift16:38
<clayg> wat  16:40
<clayg> vinsh_: oh good - i was curious  16:41
<vinsh_> I have 3 swift endpoints in one cluster now :)  16:41
<clayg> whoa  16:41
<vinsh_> 3 different hardware vendors behind each.  16:41
<vinsh_> Yeah, cisco, supermicro and echostreams.  16:41
<vinsh_> all running EC  16:41
<clayg> vinsh_: how'd you hack everyone to use different /etc/swift/swift.conf's?  16:41
*** rledisez has quit IRC16:42
<vinsh_> They all exist as distinct clusters.  each registered with a different service name such as "swift" or "swift-perf" from keystone.  16:42
*** haomaiwang has quit IRC16:42
<vinsh_> they each have unique swift.confs.  16:42
<joeljwright> mmotiani: sdk.rst has been removed  16:42
<vinsh_> So with puppet it's 3 different sets of roles. one per hardware type.  16:42
<clayg> vinsh_: yeah - how'd you hack 'em to use different swift.conf's?  stuff like the hash_suffix_prefix is hard coded to use /etc/swift/swift.conf - no "swift_dir" config option involved?  16:43
<joeljwright> mmotiani: index.rst, cli.rst, client-api.rst and service-api.rst are the files that will continue to exist  16:43
<joeljwright> mmotiani: oh, and introduction.rst  16:44
<vinsh_> @clayg Each of the 3 sets of nodes has a swift.conf deployed to it that is specific to its set of nodes (using puppet)  16:44
<mmotiani> joeljwright: Alright! and what about apis.rst and swiftclient.rst ?  16:45
<clayg> so.... different clusters?  16:45
<vinsh_> this is 3 pools of nodes each running their own ring. all registered to one main keystone.  16:45
<vinsh_> Yeah  16:45
<clayg> ok, yeah i suppose multiple swift clusters in one keystone is probably a thing that happens - still cool tho!  16:46
<pdardeau> vinsh_: each cluster running on separate machines?  16:46
<vinsh_> yeah  16:46
<clayg> vinsh_: very cool!  16:46
<clayg> vinsh_: now fill 'em up!  16:46
<vinsh_> each cluster different hardware vendor.  getting some EC stats to share at the summit with you guys/gals.  16:46
<joeljwright> mmotiani: swiftclient.rst is still there, but it's a container for the autogenerated docstrings I think  16:46
<clayg> vinsh_: awesome!  16:46
<pdardeau> vinsh_: cool stuff. a federation of swift clusters!  16:47
<vinsh_> things like swift dispersion don't understand multiple endpoints yet though.. so that needs work.  16:47
<joeljwright> mmotiani: apis.rst was split up into cli/client-api/service-api  16:47
<vinsh_> Yeah :)  16:47
*** jordanP has quit IRC16:47
*** pcaruana has quit IRC16:48
<mmotiani> joeljwright: Cool, thanks!  16:48
<mmotiani> joeljwright: What would you suggest I start with?  16:48
*** StraubTW has joined #openstack-swift16:49
<vinsh_> I wish for a session at the summit on "tuning ec chunksize and those kinda things"  16:49
<joeljwright> mmotiani: There are CLI examples that still need to be completed  16:49
<clayg> vinsh_: i thought dispersion operated on rings?  what does it care about the swift endpoints - you mean just to pick out the right storage url?  that's all like... service filters and junk?  16:49
<vinsh_> @clayg swift-dispersion-report I should say  16:50
<vinsh_> It needs to know - usually from keystone - what endpoint to operate on.  16:50
<clayg> vinsh_: do you have some suggestions?  I'm sure we can make time to discuss tuning EC if you have some suggestions  16:50
<joeljwright> mmotiani: reviews and suggestions would also be gratefully received  16:50
<vinsh_> If there are multiple in a service catalog.. then it picks one only.  16:50
<joeljwright> mmotiani: it's my plan to fill in the TODO sections in service-api.rst this week  16:51
<clayg> vinsh_: yeah that makes sense to me - for a long time it didn't even support auth v2 - you could add some simple v1 admin auth middleware up in there - or maybe allow it to use internal_client?  16:51
<vinsh_> clayg: I wonder about how to determine what ec_chunk size and all those other chunk sizes in the pipeline for an ec cluster where object size is known/fixed.  16:51
<clayg> when you have your rings and access to the storage nodes auth is for suckers - run your own proxy FTW  16:51
<vinsh_> ec_object_segment_size etc.  16:52
<clayg> vinsh_: notmyname played around with it for a while - it seemed to have a larger impact on memory than on performance (to a point)  16:52
<vinsh_> clayg: Agreed. Taking keystone out of the mix in the next iteration... less overhead for this use.  16:52
<mmotiani> joeljwright: Ok, I will start from the CLI and will look more into it and try to come up with some examples.  16:52
<joeljwright> mmotiani: there are 3 suggestions for examples  16:53
<joeljwright> mmotiani: but we want to keep it interesting if possible :)  16:53
<vinsh_> clayg: I got cosbench up and running (it's really nice!)  I can use this and puppet to push different ec_object_segment_size out to the cluster.. find out what the sweet spot is.  Lots of benchmarking to do on my end still.  16:53
<openstackgerrit> Joel Wright proposed openstack/python-swiftclient: WIP: This patch adds a new doc structure for swiftclient  https://review.openstack.org/288566  16:54
<mmotiani> joeljwright: Okay :)  16:55
*** mvk_ has quit IRC16:56
<joeljwright> mmotiani: I have just pushed up the latest version from my laptop - that is my current working state  16:56
<mmotiani> joeljwright: Yeah, I just saw that.  16:56
<joeljwright> all existing comments are (hopefully) addressed, and all the TODO sections are 'real'  16:56
*** chsc has joined #openstack-swift16:56
<mmotiani> Ok, thanks :)  16:57
*** ametts has joined #openstack-swift16:57
<joeljwright> mmotiani: thanks for helping!  16:57
*** dmorita has quit IRC16:57
*** _JZ_ has quit IRC17:05
*** zhiyan has quit IRC17:07
*** chsc has quit IRC17:07
*** patchbot has quit IRC17:07
*** sudorandom has quit IRC17:07
*** sw3 has quit IRC17:07
*** mathiasb has quit IRC17:07
*** dmsimard has quit IRC17:07
*** early has quit IRC17:07
*** ametts has quit IRC17:07
*** jmccarthy has quit IRC17:07
*** shakamunyi has quit IRC17:07
*** delatte has quit IRC17:07
*** saltsa has quit IRC17:07
*** briancline has quit IRC17:07
*** darrenc has quit IRC17:07
*** CrackerJackMack has quit IRC17:07
*** ejat has quit IRC17:07
*** sc has quit IRC17:07
*** pchng has quit IRC17:07
*** mmotiani has quit IRC17:07
*** mhu has quit IRC17:07
*** sgundur has quit IRC17:07
*** jamielennox has quit IRC17:07
*** nottrobin has quit IRC17:07
*** dfg has quit IRC17:07
*** JelleB has quit IRC17:07
*** Anticimex has quit IRC17:07
*** CrackerJackMack has joined #openstack-swift17:07
*** early has joined #openstack-swift17:07
*** delattec has joined #openstack-swift17:07
*** gyee has quit IRC17:07
*** _JZ__ has joined #openstack-swift17:07
*** barra204 has joined #openstack-swift17:07
*** patchbot` has joined #openstack-swift17:07
*** patchbot` is now known as patchbot17:07
*** sw3_ has joined #openstack-swift17:07
*** mathiasb_ has joined #openstack-swift17:07
<openstackgerrit> Andreas Jaeger proposed openstack/swift: List system dependencies for running common tests  https://review.openstack.org/298313  17:07
*** dmsimard1 has joined #openstack-swift17:07
*** briancli1e has joined #openstack-swift17:07
*** Anticime1 has joined #openstack-swift17:07
*** sc__ has joined #openstack-swift17:07
*** saltsa_ has joined #openstack-swift17:07
*** ametts_ has joined #openstack-swift17:07
*** sudorandom_ has joined #openstack-swift17:07
*** sudorandom_ is now known as sudorandom17:07
*** ejat has joined #openstack-swift17:07
*** ejat has quit IRC17:07
*** ejat has joined #openstack-swift17:07
*** sw3_ is now known as sw317:07
<openstackgerrit> Andreas Jaeger proposed openstack/swift: List system dependencies for running common tests  https://review.openstack.org/298313  17:08
<notmyname> good morning, again  17:08
<openstackgerrit> Andreas Jaeger proposed openstack/swift: List system dependencies for running common tests  https://review.openstack.org/298313  17:08
*** JelleB has joined #openstack-swift17:08
*** sgundur has joined #openstack-swift17:08
*** jmccarthy has joined #openstack-swift17:08
*** darrenc has joined #openstack-swift17:08
*** mmotiani has joined #openstack-swift17:08
*** dfg has joined #openstack-swift17:12
*** jamielennox has joined #openstack-swift17:12
*** daemontool has quit IRC17:14
*** pchng has joined #openstack-swift17:14
*** klrmn has joined #openstack-swift17:14
*** StraubTW has quit IRC17:14
*** nottrobin has joined #openstack-swift17:15
*** zhiyan has joined #openstack-swift17:17
*** chsc has joined #openstack-swift17:17
*** dmsimard1 is now known as dmsimard17:21
*** sgundur has left #openstack-swift17:21
*** lakshmiS has joined #openstack-swift17:30
*** dmorita has joined #openstack-swift17:39
*** sgundur has joined #openstack-swift17:39
*** StraubTW has joined #openstack-swift17:43
*** delatte has joined #openstack-swift17:47
*** ChubYann has joined #openstack-swift17:48
*** delattec has quit IRC17:51
*** zaitcev has joined #openstack-swift17:52
*** ChanServ sets mode: +v zaitcev17:52
*** delattec has joined #openstack-swift17:53
*** marcin12345_ has joined #openstack-swift17:56
*** delatte has quit IRC17:56
<marcin12345_> Does anybody know what to do with Swift with: object-server: ERROR container update failed with x.x.x.x:6001/sdb (saving for async update later): Timeout (3.0s) (txn: <....>)  17:56
<marcin12345_> ?  17:56
<marcin12345_> ?  17:56
*** dmorita has quit IRC17:59
*** asettle has joined #openstack-swift18:00
*** dmorita has joined #openstack-swift18:01
*** pcaruana has joined #openstack-swift18:02
<clayg> marcin12345_: container server is down, maybe device is unmounted or full, but probably it's just a large container and the db was eating its .pending inserts while that particular node was trying to update the container index - it'll process the async pending after a while  18:03
<marcin12345_> well it is not disk related, I switched to ramdisk even. It is happening when using Swift with Gnocchi (time series db for Ceilometer), that is writing a lot of small objects all the time. The question is, is it safe to ignore those errors? I think not, because it started happening after adding more than 5 VMs.  18:05
<marcin12345_> container server is not down  18:06
<tdasilva> marcin12345_: when you say you are using Swift with Gnocchi, do you mean you are sending swift data to gnocchi, or is swift the data storage for gnocchi?  18:07
<clayg> tdasilva: I think the latter!  18:08
<marcin12345_> the latter, swift is data storage for gnocchi  18:08
<clayg> tdasilva: i get POINTS  18:08
<tdasilva> lol  18:08
<marcin12345_> ok you win ;)  18:08
*** cdelatte has joined #openstack-swift18:08
<ahale> that sounds like an awesome way to make asyncs  18:08
<clayg> ahale: lol  18:08
<ahale> are they also expiring objects? :)  18:09
<marcin12345_> yes  18:09
<ahale> :D  18:09
<clayg> marcin12345_: you're probably going to need to think about the cardinality in your container schema - how quickly are you adding rows to a container - how long do you plan to keep that up?  18:09
<clayg> if the answers are like 100/s and forever you're not gunna have a good time  18:10
<marcin12345_> but really with 5VMs+ it already does not scale?  18:10
*** asettle has quit IRC18:10
<tdasilva> marcin12345_: probably a dumb question, have you checked that you can connect to your container node from that storage node?  18:11
*** StraubTW has quit IRC18:11
*** delattec has quit IRC18:11
<clayg> marcin12345_: depends - on the cardinality in your container schema - the async_pending may not be an issue - but it depends on use case - and generally - yeah tons and tons and tons of teeny teeny tiny objects is the object store's favorite use-case - sticking them all in one container makes swift pandas cry  18:11
<clayg> is *not* the object store's favorite use-case  18:12
<openstackgerrit> Merged openstack/python-swiftclient: Clean up some unnecessary variables  https://review.openstack.org/296620  18:12
<clayg> tdasilva: he sure made it sound like he had plenty of container updates working and then at some point he started seeing timeouts - which sounds to me more like container contention than anything else - but sure - it could be other things  18:13
*** cdelatte has quit IRC18:13
<clayg> marcin12345_: can you answer the question about how many objects you have in these container(s) and how fast they're having new objects added into them?  18:13
<openstackgerrit> Merged openstack/swift: Ignore files in the devices directory when auditing objects  https://review.openstack.org/295183  18:14
<clayg> ahale: lol - i missed the part about expiring - so some day they get twice as many inserts/s  18:14
*** arch-nemesis has quit IRC18:15
<clayg> aerwin3: how goes the expiring work?  didn't some guy push up some code for some go obj-expirer daemon?  18:15
<marcin12345_> clayg: 677 containers, 6442 objects, 46141712 Bytes, writing every 30 seconds  18:16
<marcin12345_> tdasilva: container node and storage node sit on the same box  18:17
<clayg> so the containers on average have ... 10 objects in them?  18:18
*** cdelatte has joined #openstack-swift18:18
<clayg> yeah this isn't a container contention problem - maybe some sort of nutso virtualization contention problem - something in the io scheduler - if your container db's and objects are on the same devices there may be some weird io scheduling things going on in the host  18:19
<clayg> ... only option I can think of that might have an effect is the db preallocate thing  18:19
<clayg> tdasilva: you may have been right about network  18:19
<tdasilva> clayg: so i get POINTS? ;)  18:20
<tdasilva> lol  18:20
<clayg> tdasilva: probably - it's not a zero sum game - we can both get points - hopefully even marcin12345_ can get some points  18:21
<clayg> ... it's all about the points  18:21
*** StraubTW has joined #openstack-swift18:21
<tdasilva> hehe, jk  18:21
<clayg> marcin12345_: how many of those errors do you have?  are the async_pendings keeping up (they get stored in /srv/node*/*/async_*)  18:23
*** StraubTW has quit IRC18:24
*** StraubTW has joined #openstack-swift18:24
*** arch-nemesis has joined #openstack-swift18:24
<clayg> marcin12345_: do the errors include the /a/c path - any correlation on the specific containers?  or ips?  is it *only* when talking from the local object-server to local container-server but cross server werx?  18:24
<marcin12345_> it is both locally and remotely  18:25
<marcin12345_> at burst 15 errors/s  18:26
*** DevStock has joined #openstack-swift18:26
<DevStock> hi  18:29
*** Nyar has joined #openstack-swift18:29
<marcin12345_> it shows also errors: ERROR __call__ error with PUT /sdb/<..>/AUTH_<...>/measure/<...>/<...>_<...> : LockTimeout (10s) /srv/node/sdb/containers/<...>/<...>/.lock (txn: tx<...>)  18:29
*** dmorita has quit IRC18:30
<DevStock> the folder /srv/node/ can be configured in other path?  18:30
<Nyar> Adding to what marcin12345_ has shared so far, increasing node_timeout to 10s in object-server.conf is "hiding" the problem with your current load  18:31
<Nyar> We are however afraid, based on further testing, that this won't help once we scale to production load  18:32
*** dmorita has joined #openstack-swift18:32
<Nyar> Moving the container and account services to use tmpfs devices also did not help. It doesn't seem like we are running into any hardware bottleneck here.. It seems like the object-server process is trying to communicate with container-server too fast  18:37
*** joeljwright has quit IRC18:39
<Nyar> Are there any recommendations on the ratio of object-server concurrent workers vs container-server/updater concurrent workers?  18:39
<jrichli> DevStok: you can change the devices value in the object server configuration http://docs.openstack.org/developer/swift/deployment_guide.html#object-server-configuration  18:41
<DevStock> fuel mapped that path in the root  18:44
<DevStock> so when I uploaded a lot of snapshots that path filled up the partition /  18:45
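
Note: a minimal sketch of the settings jrichli points at (paths and values illustrative only):

    # /etc/swift/object-server.conf -- illustrative values only
    [DEFAULT]
    # Parent directory the object server scans for device mounts;
    # defaults to /srv/node. Mount real disks underneath it.
    devices = /srv/node
    # Refuse to store data on an unmounted device directory instead
    # of silently filling the root filesystem.
    mount_check = true
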
*** david-lyle has quit IRC18:45
*** dmorita has quit IRC18:46
*** david-lyle has joined #openstack-swift18:46
*** pauloewerton has joined #openstack-swift18:57
*** dmorita has joined #openstack-swift18:57
*** dmorita has quit IRC19:07
glangeDevStock: that's not the recommended way to run swift :)19:08
*** dmorita has joined #openstack-swift19:08
*** dmorita has quit IRC19:09
*** dmorita has joined #openstack-swift19:09
<clayg> Nyar: it's not normally a ratio of object -> container workers; in many clusters there's probably fewer container workers than object workers in total  19:10
<clayg> marcin12345_: Nyar: are the LockTimeouts happening as new containers are being created?  19:11
*** pauloewerton has quit IRC19:12
<clayg> marcin12345_: Nyar: i guess that path looks like an object update  19:12
<clayg> marcin12345_: Nyar: can you `swift stat` that AUTH_<...>/measure container - how many objects are in there again now?  19:13
<Nyar> Containers in policy "gnocchi": 677  Objects in policy "gnocchi": 6084  Bytes in policy "gnocchi": 43794493  19:14
<Nyar> oh my bad, the container  19:14
<Nyar> on it  19:14
<Nyar> 0 object  19:15
*** trifon has joined #openstack-swift19:20
<Nyar> Could someone shed some light on what the Swift pipeline is when an object is written to disk? To my understanding, the proxy-server receives the PUT request, determines which server/partition to write the data to based on the ring consistent hashing algorithm  19:22
<clayg> marcin12345_: Nyar: well, either that output is wrong/outdated or there's no good reason for that database to be taking that long to do *anything*  19:22
<Nyar> an object-server process writes the actual data to disk  19:22
<DevStock> ok not recommended  19:23
<Nyar> and then the object-server makes a call to the container-server?  19:23
<DevStock> but i'm using glance + swift  19:23
<clayg> Nyar: yup  19:24
<DevStock> It is a wrong fuel conf  19:25
<DevStock> set a wrong path  19:25
<DevStock> with small space  19:25
<Nyar> So when we see object-server: ERROR container update failed with x.x.x.x:6001/sdb (saving for async update later): Timeout (3.0s), are we right to assume that the object-server is timing out while trying to communicate with the container-server after writing the data to disk?  19:26
<clayg> Nyar: yeah that's definitely what that error is saying  19:26
<clayg> Nyar: specifically it's saying "I couldn't tell the container about this data, so I'm going to store this information in a temp file and the object-updater will try to send it to the container layer later"  19:27
*** silor has quit IRC19:27
<Nyar> thanks clayg  19:27
<clayg> Nyar: if these containers had millions of objects in them it'd all be basically SOP - but nothing about this is making any sense to me  19:27
<clayg> Nyar: do you have a bunch of files in /srv/node/*/async_pending*  19:28
<Nyar> We do indeed  19:29
<clayg> Nyar: is the object-updater running?  19:30
<Nyar> Since I am not seeing any high io wait or sign of io contention on the container devices  19:30
<Nyar> I can only assume this is due to a too high number of requests made by the object-server to the container-server  19:30
<Nyar> object-updater is running on all nodes yes  19:31
<clayg> maybe grep the logs for messages from the object-updater to see if he's having a bad time?  19:31
<Nyar> The only container-server errors we (rarely) see are the LockTimeout  19:32
<Nyar> I am going to try to collect more data before spamming you with more questions, hopefully finding a correlation between the object-server timeout errors and some relevant container-server logs  19:34
<clayg> well what about the object-updater - he should log a status message every so often regardless - why isn't he cleaning up the async pendings - he should mention some kind of error - `grep object-updater: /var/log/syslog`  19:34
<clayg> Nyar: ok, but i'm worried we're not seeing the whole picture - try to get the async pending cleaned up - maybe stop the object-updater and run it in the foreground `swift-init object-updater stop` `swift-init object-updater once -nv`  19:35
<clayg> gl  19:35
<Nyar> Thank you clayg  19:36
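
Note: clayg's suggestions gathered into one hedged sketch (run on a storage node; paths as discussed above):

    # How deep is the async update queue on each disk?
    find /srv/node/*/async_pending* -type f | wc -l

    # Stop the daemon, then drain the queue once in the foreground
    # with verbose logging so any errors it hits are visible:
    swift-init object-updater stop
    swift-init object-updater once -nv
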
*** daemontool has joined #openstack-swift19:38
*** sheel has quit IRC19:57
*** esker has quit IRC20:12
*** esker has joined #openstack-swift20:13
<openstackgerrit> Paulo Ewerton Gomes Fragoso proposed openstack/python-swiftclient: Adding keystoneauth sessions support  https://review.openstack.org/298968  20:29
*** StraubTW has quit IRC20:29
*** gyee has joined #openstack-swift20:32
*** asettle has joined #openstack-swift20:43
<jrichli> just checking: specifying x-newest against a cluster using an EC policy does nothing, right?  20:43
<notmyname> jrichli: right. it's got to talk to all of the primaries anyway  20:47
<clayg> it's sorta *always* x-newest  20:47
*** StraubTW has joined #openstack-swift20:51
<jrichli> right, just making sure there wasn't some small thing that was different :-)  20:51
<jrichli> thx!  20:52
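
Note: for reference, a hedged sketch of the request being discussed ($STORAGE_URL, $TOKEN, and the object path are placeholders). On a replicated policy, X-Newest makes the proxy consult all primaries and return the newest copy; an EC GET already has to contact enough primaries to reassemble the object, so the header changes nothing there:

    curl -i "$STORAGE_URL/some_container/some_object" \
         -H "X-Auth-Token: $TOKEN" \
         -H "X-Newest: true"
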
*** StraubTW has quit IRC21:08
*** StraubTW has joined #openstack-swift21:19
*** lyrrad has quit IRC21:22
*** lakshmiS has quit IRC21:33
<notmyname> FYI there's a summit schedule attached to http://lists.openstack.org/pipermail/openstack-dev/2016-March/090606.html  21:33
*** nadeem has quit IRC21:34
<notmyname> there might be a few slots that are juggled for various projects, but it should be relatively stable  21:34
<clayg> notmyname: boom - like the swift action  21:35
<clayg> the fuel session stuck in the thursday afternoon slot is weird :\  21:35
<notmyname> tomorrow at the team meeting I want to kick off planning for it. I want to make this the most like a hackathon yet  21:36
*** StraubTW has quit IRC21:36
<clayg> notmyname: can we offer to swap the 5-5:40 slot with them and just go do beers earlier on Thursday?  21:36
<notmyname> clayg: good idea  21:36
<notmyname> hmmm...it looks like the friday room might be shared  21:38
<clayg> notmyname: ewwww with *glance*  21:39
<clayg> :)  21:39
<notmyname> be nice to glance :-)  21:39
<clayg> oh i didn't even see the Swift slots in the other room on Wednesday morning  21:40
*** esker has quit IRC21:43
*** sgundur has left #openstack-swift21:43
*** lyrrad has joined #openstack-swift21:43
<notmyname> actually, even if fuel could take the earlier 1:30 slot, that would be better for us too. more contiguous blocks of time  21:44
*** DevStock has quit IRC21:46
<clayg> notmyname: like swift itself - we prefer to have large allocations of contiguous space  21:46
<notmyname> ah, I can guess why we got that oddity. I'm giving a talk then. but I can certainly step out for that time, especially if we run it like the hackathon  21:47
<clayg> notmyname: oh wow  21:48
<clayg> notmyname: idk, we'll probably all just sit around staring at each other if you're not there  21:48
<notmyname> yeah, probably ;-)  21:48
<notmyname> bah. the first one is actually worse. I'm giving a talk then, too. (only 2 talks this summit.) but I'd want to be at the first of the working sessions  21:49
<clayg> notmyname: someone needs to draw a venn diagram and say "Let's take a step back; what problem are we trying to solve"  21:49
<notmyname> maybe we could do some working session prep during the fishbowl sessions  21:49
<timburke> clayg: problems we're trying to solve -> O O <- what we're actually talking about  21:51
<clayg> timburke: YES!  21:53
<clayg> ascii venn diagram FTW  21:53
<clayg> notmyname: nm, you're off the hook - you have timburke now  21:53
<zaitcev> Venn diagram? I thought it was an emoticon.  21:53
<timburke> problems that can be solved with Venn diagrams -> ( O <- problems that can be solved with ASCII Venn diagrams )  21:54
<clayg> zaitcev: no no, see how the two circles don't overlap - that's because we're not talking about the problems we're trying to solve - it's so genius  21:54
<clayg> timburke: pushing it - but i'm still rofling over here  21:55
<clayg> maybe we can add ascii art venn diagrams to container responses?  21:55
<timburke> clayg: have a bit of fun with how sp/nbsp sort and you might be able to do it now  21:58
*** StraubTW has joined #openstack-swift22:02
*** mmcardle has quit IRC22:02
*** esker has joined #openstack-swift22:02
<openstackgerrit> David Goetz proposed openstack/swift: go: fix the async logger  https://review.openstack.org/299000  22:03
<openstackgerrit> Clay Gerrard proposed openstack/swift: WIP: Cleanup EC backend logging on disconnect  https://review.openstack.org/297822  22:05
*** daemontool has quit IRC22:05
*** ametts_ has quit IRC22:15
*** trifon has quit IRC22:43
*** jmb__ has quit IRC22:44
*** esker has quit IRC22:48
*** diogogmt has quit IRC22:48
*** esker has joined #openstack-swift22:57
*** km has joined #openstack-swift23:05
*** esker has quit IRC23:11
*** jmb__ has joined #openstack-swift23:13
*** jmb__ has quit IRC23:18
<Nyar> @clayg: I haven't been able to collect more logs pertinent to the object-server errors we are seeing when Gnocchi writes to Swift :(  23:22
<Nyar> The object-updater is doing its job and never complaining  23:23
*** arch-nemesis has quit IRC23:23
<notmyname> Nyar: are the container DBs on flash or on spinning drives?  23:24
<Nyar> They are currently on spinning drives but we have also tried to move them to tmpfs to confirm if it could be due to i/o contention  23:25
<Nyar> The result was the same: object-server spamming the following a few seconds after gnocchi starts sending PUT requests to Swift  23:26
*** mingdang1 has joined #openstack-swift23:26
<Nyar> object-server: ERROR container update failed with x.x.x.x:6001/sdb (saving for async update later): Timeout (3.0s)  23:26
<Nyar> We also see container-server complaining about a LockTimeout (10s) once in a while  23:27
<notmyname> ok. thanks for helping me catch up. :-)  23:28
<Nyar> IO wait on the container devices is within acceptable range (~20 max)  23:28
<Nyar> :)  23:28
<notmyname> and IIRC you've got very small containers (small number of objects in them) and relatively low request rates, right?  23:29
<Nyar> I am losing my mind on this. I have tried tweaking the number of object-server workers down and cranking the number of container workers up  23:29
<notmyname> yeah, I was about to ask about that  23:30
<notmyname> are these co-located?  23:30
<notmyname> what is your current worker count?  23:30
<Nyar> Gnocchi containers have 9 objects in them, we currently have 1518 of them  23:30
<Nyar> Proxy and storage services are all running on the same nodes indeed  23:30
<Nyar> My current crazy test was 8 object-servers for 128 container-servers  23:30
<Nyar> :D  23:30
<notmyname> how many cores do you have on one of those machines?  23:31
<Nyar> 16 hyperthreaded  23:31
<Nyar> so 32 from a kernel standpoint  23:31
<Nyar> Originally, our workers count was 32 object-server and 32 account-server  23:32
<notmyname> are you using servers per disk?  23:32
<notmyname> err... servers_per_port  23:32
<Nyar> Hehe, old habits? :)  23:32
<notmyname> or threads_per_disk?  23:33
<Nyar> I was reading about that, we do not currently  23:33
*** kei_yama has joined #openstack-swift23:33
<Nyar> But wouldn't I see high IO waits if that was my issue?  23:33
<notmyname> well, really, I was just trying to get a picture of where you are  23:33
<notmyname> did you turn the worker counts back down?  23:34
<Nyar> What's really troubling me here is that it doesn't seem we are hitting any hardware bottleneck; it seems like object-server is timing out while trying to communicate with container-server after an object write. But all objects are written to disk successfully  23:34
<Nyar> Went back to 32 proxy, 32 object, 32 account, 32 container  23:35
<notmyname> have you been running this workload for a while or is it a pretty new thing that you just turned on? IOW, is this a problem that emerged after time, or did it start immediately?  23:35
<Nyar> That's brand new, this Swift cluster was only (under)used by Glance  23:35
<Nyar> We started observing this as soon as Gnocchi entered the mix  23:35
<notmyname> do you have db_preallocation set in the container server config?  23:35
<notmyname> ok  23:35
<Nyar> I do not, let me look that up  23:36
<Nyar> And thank you so much already for your time! Much appreciated!  23:36
<notmyname> it's normally a good thing on account/container DBs deployed on spinning drives. it helps with disk fragmentation over time  23:36
<notmyname> so, just to throw it out there, it is *always* recommended that you use flash for accounts and containers  23:37
<notmyname> :-)  23:37
<notmyname> (and with flash drives, you'd want to turn db_preallocation off)  23:37
<notmyname> but that's not where we are, so let's figure it out  23:38
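
Note: a sketch of the setting notmyname mentions (section and default hedged from the deployment guide of that era; the same option exists for the account server):

    # /etc/swift/container-server.conf -- illustrative
    [DEFAULT]
    # Preallocate disk space as container DBs grow: fights
    # fragmentation on spinning disks, wasteful on flash.
    db_preallocation = true
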
*** bill_az has quit IRC23:38
<Nyar> That's going to happen very soon. I must admit our original design was supposed to remain small (glance only) so we went for the cheap and easy.  23:38
<Nyar> We are planning to dedicate flash drives for accounts and containers as soon as we understand what's going on here :)  23:39
<notmyname> what's the gnocchi request rate? something like 30/s right?  23:39
*** mingdang1 has quit IRC23:40
<notmyname> Nyar: what version of swift are you using?  23:41
<Nyar> That sounds about right, getting the exact numbers as we speak  23:41
<Nyar> 2.5.0.1 Liberty  23:42
<clayg> notmyname: thanks for jumping in  23:42
<notmyname> do you have container_update_timeout set in the object server?  23:43
<clayg> Nyar: it doesn't make sense to me that you have a bunch of async_pending files piling up while the object-updater is also successfully processing async_pendings  23:43
<clayg> Nyar: is there anything in /var/log/messages or syslog about network?  nf_conntrack or dropped something - some kind of system/kernel warning message?  23:44
<Nyar> notmyname: I do not. One way to "hide" those error messages I found was to set node_timeout to 10s though  23:45
<clayg> Nyar: can you find one of these containers on disk?  and check its size and the size of its .pending?  23:45
<Nyar> But that was only a workaround, the error messages appeared again as soon as we had Gnocchi writing more data (by adding more nodes monitored by Ceilometer)  23:45
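
Note: the two knobs under discussion, as a hedged object-server.conf sketch (values shown are believed to be the Liberty-era defaults, not a recommendation):

    # /etc/swift/object-server.conf -- illustrative
    [app:object-server]
    # Bound on backend requests to other nodes (the 3.0s in the
    # logged error); raising it hides pressure rather than fixing it.
    node_timeout = 3
    # Tighter bound specific to the post-PUT container update; on
    # timeout the update is queued as an async_pending instead.
    container_update_timeout = 1
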
<notmyname> clayg: yup. that's the right next step  23:46
<Nyar> clayg: no network error logs  23:46
<clayg> notmyname: i feel like there's more objects in these containers than we're seeing - the LockTimeout doesn't make any sense at all on these tiny containers  23:46
<clayg> Nyar: are you still on the tmpfs stuff or we're back on spinning rust?  23:47
<Nyar> we are back on spinning drives  23:47
<notmyname> yeah. I'd think it might be some really big un-vacuumed DB  23:47
<Nyar> Since it did not make a difference  23:47
*** hosanai has joined #openstack-swift23:47
*** ChanServ sets mode: +v hosanai23:47
<Nyar> looking for a container on disk  23:48
<clayg> Nyar: do the glance containers ever [lock]timeout - or always the gnocchi containers?  23:48
<Nyar> I confirm the ~50+ requests/second from Gnocchi by the way  23:48
<Nyar> Only the gnocchi containers but I am not sure how pertinent that is knowing that Glance is just sitting on its ass all day  23:48
<Nyar> We write an image per week to it..  23:49
<clayg> 50 requests per second for a day is 4M objects - could there really be that much data hiding in async_pendings?  23:50
<notmyname> if they are expiring, then maybe in unused rows?  23:50
<notmyname> I don't remember what the right word is for "unused"  23:50
<clayg> notmyname: ah!  23:51
<clayg> delted = 1 - of course  23:51
<clayg> deleted  23:51
<clayg> i call them tombstone rows sometimes  23:51
<clayg> notmyname: I like the tombstone row idea a lot  23:51
<notmyname> great. so 4M tombstone rows + 3 active rows. try to PUT 50/s and get an async  23:51
<Nyar> Gnocchi is deleting A LOT of objects, the number of objects remains stable and proportional to the number of instances monitored by Ceilometer  23:51
<clayg> gb containers here we come!  23:51
<Nyar> The current counts:  23:51
<Nyar> Containers in policy "glance": 1  Objects in policy "glance": 63  Bytes in policy "glance": 16774897152  Containers in policy "gnocchi": 677  Objects in policy "gnocchi": 7906  Bytes in policy "gnocchi": 46494042  23:52
<Nyar> erf, shitty formatting, my bad  23:52
<clayg> Nyar: that's fine - you need to find a container on disk  23:52
<notmyname> Nyar: if you can find a gnocchi DB on disk and do `du` on it, that will tell us something  23:52
<notmyname> Nyar: do you know how to do that? I can walk you through it  23:52
<clayg> `swift-get-nodes /etc/swift/container.ring.gz gnocchi_account gnocchi_container`  23:53
<Nyar> I do not know, no, but since we are using storage policies, the only containers on those devices should be Gnocchi's  23:54
<Nyar> oh nice!  23:54
<Nyar> thanks!  23:54
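
Note: clayg's and notmyname's suggestions combined into one hedged sketch (account/container names, paths, and hashes are placeholders; the container DB's object table has a deleted column, which is what makes the tombstone-row count below possible):

    # Find the primary nodes, partition, and on-disk paths:
    swift-get-nodes /etc/swift/container.ring.gz AUTH_<account> <container>

    # On one of the printed nodes, check the DB and its .pending file:
    du -sh /srv/node/<disk>/containers/<part>/<suffix>/<hash>/<hash>.db*

    # Count live vs tombstone rows directly:
    sqlite3 <hash>.db 'SELECT deleted, COUNT(*) FROM object GROUP BY deleted;'
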
<notmyname> clayg: yeah, the deleted=1 rows are what I was thinking about with the fragmentation on disk (db_preallocation=off). it would exacerbate the problem  23:54
<clayg> notmyname: gah - i don't know why I didn't think about tombstone rows!  good call notmyname  23:55
<clayg> notmyname: you know a lot about swift  23:55
<Nyar> I just realized after typing this command that my sentence about storage policies was stupid  23:55
<Nyar> Please ignore.  23:56
<Nyar> I do not know a lot about Swift :]  23:56
*** DevStok has quit IRC23:56
<notmyname> you're running it in prod. that's what counts :-)  23:56
<Nyar> But I am glad the project lead knows a lot about it :D  23:56
<notmyname> (and that counts for a lot and is really awesome)  23:56
<Nyar> SO  23:57
<Nyar> 20K     1afd06e9acafdff124db7e72955421bb.db  23:58
<Nyar> 0       1afd06e9acafdff124db7e72955421bb.db.pending  23:58
<clayg> GD!  23:58
<notmyname> well that seems reasonable. but totally not helpful to solving our problem!  23:58
<Nyar> I figured as much ^^  23:58
