Wednesday, 2014-04-09

*** mmcardle has joined #openstack-swift00:00
*** mwstorer has quit IRC00:03
*** mmcardle has quit IRC00:04
gholtclayg: zaitcev: creiht: redbo: anymore? Doesn't look like anyone's called is_deleted with the timestamp kwarg since we open-sourced Swift. My best guess is that it was to be used by whoever was doing reclamation to decide when to remove old deleted containers; but then that logic got pulled out to the db_replicator.py when that refactor happened (pre open sourcing).00:06
zaitcevgholt: thanks00:07
redbocode archaeology00:07
* portante wonders when folks will start getting advanced degrees in that discipline ...00:08
clayglol @ http://en.wikipedia.org/wiki/Software_archaeology00:08
gholtWe probably still have the pre open source git repo around somewhere; that would be fun, heheh.00:08
zaitcevokay, PYTHONPATH=$(pwd) ./.unittests  and committing00:09
portantetox -e pep8?00:09
claygportante: that's what the gate is for?00:09
portante;)00:09
portanteclayg: many a problem with code has been found via a simple pep8 check00:10
*** _bluev has quit IRC00:10
portantezaitcev: have you checked code coverage with the unit tests?00:10
openstackgerritA change was merged to openstack/swift: update setup.py with pbr version  https://review.openstack.org/8610100:11
openstackgerritA change was merged to openstack/swift: Update callback with proper bytes transferred  https://review.openstack.org/8461000:11
openstackgerritA change was merged to openstack/swift: Attempt to ensure connect always timesout  https://review.openstack.org/8545700:11
openstackgerritA change was merged to openstack/swift: merge master to EC branch  https://review.openstack.org/8591900:11
gholtI hope redbo wasn't staring at me again while I was in my coma.00:11
redboI don't have to, I have lots of pictures of you when I get lonely00:12
*** _bluev has joined #openstack-swift00:13
redbothough your face isn't in most of them00:13
claygwhoa00:14
redboit's not what you think.  I just take pictures of his crotch.00:14
* clayg struggles to pick himself up off the floor00:17
gholtYeah, it's kinda creepy. Oh, who am I kidding? It's really creepy.00:17
portanteawkward00:21
*** _bluev has quit IRC00:25
*** RockKuo has joined #openstack-swift00:28
*** RockKuo_ has joined #openstack-swift00:28
*** matsuhashi has joined #openstack-swift00:31
zaitcevwell, I ended Seitokai Yakuindomo today, so it's nothing00:38
zaitcevand yeah... mem_backend.py is at around 60% for both a and c00:38
zaitcevneeds a look-over00:39
*** zaitcev has quit IRC00:40
*** shri1 has quit IRC00:40
*** mkollaro has quit IRC00:56
*** matsuhashi has quit IRC00:56
*** acoles has quit IRC01:01
*** mmcardle has joined #openstack-swift01:01
*** jamie_h has quit IRC01:03
*** matsuhashi has joined #openstack-swift01:05
*** mmcardle has quit IRC01:06
*** acoles has joined #openstack-swift01:10
*** ChanServ sets mode: +v acoles01:10
*** haomaiw__ has joined #openstack-swift01:12
*** haomaiwa_ has quit IRC01:12
*** _bluev has joined #openstack-swift01:26
*** nosnos has joined #openstack-swift01:29
*** _bluev has quit IRC01:30
*** changbl has joined #openstack-swift01:34
*** simpleAJ has joined #openstack-swift01:35
simpleAJHi01:36
simpleAJI am trying to understand interaction between different swift components when I upload / download file01:36
simpleAJwhile downloading01:36
simpleAJhttp://saio:8080/v1/AUTH_test/test/amey01:37
simpleAJcurl -i http://saio:8080/v1/AUTH_test/test/amey -X GET -H "X-Auth-Token: AUTH_tk5c38b3f3f8204062a61248370d4b9733"01:37
simpleAJis it talking to object server or proxy server?01:37
*** haomaiw__ has quit IRC01:47
*** haomaiwa_ has joined #openstack-swift01:47
claygboth?01:52
claygclient is talking to proxy, proxy is talking to object server01:53
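The split clayg describes can be sketched in code: the client only ever talks to the proxy, and the proxy consults the ring to decide which object servers to contact. Below is a simplified, hypothetical illustration of how an object path maps to a ring partition; it ignores the hash_path_suffix and the partition-to-device assignment that real Swift performs.

```python
import hashlib


def get_partition(account, container, obj, part_power=10):
    """Map an object path to a ring partition, Swift-style (simplified).

    The proxy hashes /account/container/object with md5 and keeps the
    top ``part_power`` bits; the ring then maps that partition to the
    object servers the proxy talks to on the client's behalf.
    """
    path = '/%s/%s/%s' % (account, container, obj)
    digest = hashlib.md5(path.encode('utf-8')).digest()
    return int.from_bytes(digest[:4], 'big') >> (32 - part_power)


# Every GET for the same path lands on the same partition:
print(get_partition('AUTH_test', 'test', 'amey'))
```

So the curl above hits the proxy on port 8080, and the proxy fans out to the object servers that the ring says own that partition.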
*** mmcardle has joined #openstack-swift02:03
*** simpleAJ has quit IRC02:06
*** haomaiw__ has joined #openstack-swift02:06
*** mmcardle has quit IRC02:07
*** haomaiwa_ has quit IRC02:09
*** zhiyan is now known as zhiyan_02:11
*** zhiyan_ is now known as zhiyan02:12
*** judd7 has quit IRC02:14
*** praveenkumar has joined #openstack-swift02:21
*** simpleAJ has joined #openstack-swift02:25
*** _bluev has joined #openstack-swift02:27
*** _bluev has quit IRC02:31
*** saurabh_ has joined #openstack-swift02:44
*** saurabh_ has joined #openstack-swift02:44
*** _bluev has joined #openstack-swift02:45
openstackgerritZhang Hua proposed a change to openstack/swift: Add profiling middleware in Swift  https://review.openstack.org/5327002:46
*** gyee has quit IRC02:47
saurabh_Hi there, I have a question: during ring rebalancing after adding a new node, some partitions are moved from existing nodes to the new node; is there any partition movement between existing nodes?02:48
*** _bluev has quit IRC02:49
*** simpleAJ has left #openstack-swift02:51
claygthey're can be if some of the existing nodes already have positive balance02:52
clayg*their02:52
clayg*there acctually, wow02:53
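clayg's "positive balance" remark can be made concrete with a toy calculation (a sketch of the idea only, not the real swift RingBuilder): balance is the percent deviation of a device's partition count from its weight-proportional fair share, and a device above zero can shed partitions during a rebalance even when the new node is the main recipient.

```python
def fair_shares(weights, num_parts):
    # Each device's fair share of partitions is proportional to weight.
    total = float(sum(weights.values()))
    return {dev: num_parts * w / total for dev, w in weights.items()}


def balance(assigned, weights, num_parts):
    # Percent deviation from fair share; > 0 ("positive balance") means
    # the device holds more than its share and may donate partitions
    # on the next rebalance.
    shares = fair_shares(weights, num_parts)
    return {dev: 100.0 * (assigned[dev] - shares[dev]) / shares[dev]
            for dev in weights}


# Two equal-weight devices, unevenly loaded:
print(balance({'d1': 160, 'd2': 96}, {'d1': 1, 'd2': 1}, 256))
# -> {'d1': 25.0, 'd2': -25.0}
```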
*** fifieldt has joined #openstack-swift02:56
glangenice :)02:56
claygglange: fingers moving faster than the brain apparently02:57
praveenkumarIs the complete object moved to memory when we create/update/delete an object's metadata? I am using python-swiftclient to check object creation and metadata operations, and whenever a metadata operation occurs I can see memory usage drastically change.02:57
claygpraveenkumar: so POSTs to objects are really COPYs under the hood - but no, the entire object should never be in memory, and certainly the object is never transferred to the client.02:59
claygYou may just be seeing that POST is a heavyweight operation, there's a config option to do something different that works pretty ok except with content-type changes and container-sync.03:00
clayg# object_post_as_copy = true03:00
clayg^ default03:00
claygobject_post_as_copy is sometimes called "fast POST"03:01
clayger... object_post_as_copy = false is sometimes...03:01
claygglange: just shut me up already...03:01
glangeclayg will be here all week!  try the veal :)03:02
praveenkumarclayg: ah so if object_post_as_copy=true then POSTs are heavyweight operations; I may need to read more about API options to set it to false instead of making the change in the default config file.03:03
claygpraveenkumar: well that particular option is up to the deployer, it's not really part of the API03:03
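The option clayg is quoting lives in the proxy server's configuration; a representative proxy-server.conf fragment (section layout may differ per deployment):

```ini
[app:proxy-server]
use = egg:swift#proxy
# Default: a POST to an object is handled as an internal COPY
# ("POST as COPY"). Setting this to false enables "fast POST",
# which rewrites metadata in place but has caveats with
# content-type changes and container-sync, per the discussion.
object_post_as_copy = true
```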
*** mmcardle has joined #openstack-swift03:04
praveenkumarclayg: oh, so let's assume a deployer sets this option to true on a box which does not have enough memory to handle POSTs of 1 GB objects; will he surely get a *defined error* in this case?03:06
*** mmcardle has quit IRC03:08
claygyeah... i mean I guess there's an amplification with COPY requests, so it's probably more likely to overwhelm a deployment targeting that verb than say PUT.03:09
claygbut ultimately there's not any more memory required to support a PUT than a COPY - so I'm not quite sure what you're seeing03:10
claygyour memory usage during a POST should look a lot like if your client did that same PUT03:10
praveenkumarclayg: alright, let me paste a sample code and memory change, may be it will help.03:11
*** matsuhashi has quit IRC03:12
claygpraveenkumar: maybe?  I'm just saying if you see memory usage "drastically change" from a POST to say a HEAD, that makes sense.  But if that memory usage matches roughly what you'd get from PUT then it makes sense, so you'll have to capture both.03:14
clayga PUT or a COPY - COPY is acctually a little better for that test...03:14
praveenkumarclayg: alright.03:14
* praveenkumar needs some time to check what clayg suggested.03:15
*** Guest_ has joined #openstack-swift03:16
*** cheri has joined #openstack-swift03:18
*** Fin1te has joined #openstack-swift03:24
*** Fin1te has quit IRC03:25
*** nosnos has quit IRC03:26
praveenkumarclayg: so when I am doing a POST operation to create metadata on an object (file.txt) which is 1 GB in size, I am getting http://fpaste.org/92757/01413513/; is this the same issue we are discussing, right?03:30
claygpraveenkumar: would have been very helpful to see the before, but the logs are the only place that would really explain why you got a 50303:44
*** _bluev has joined #openstack-swift03:45
*** _bluev has quit IRC03:50
praveenkumarclayg: http://fpaste.org/92758/13970159/ -> logs snippet for error.04:00
*** chandan_kumar has joined #openstack-swift04:04
*** mmcardle has joined #openstack-swift04:05
*** zhiyan is now known as zhiyan_04:06
claygpraveenkumar: yeah that's not memory - the 507 is because you don't have the disk to store the intermediate value (COPY will do a GET streamed to PUT followed by a DELETE)04:07
claygso while the request is in flight you have two copies of the object04:07
clayg507 == ERROR Insufficient Storage04:07
claygi guess you really need to have disk to hold six replicas...04:08
praveenkumarclayg: yes right, I checked the mounted disk size and it's 1.1G which is not sufficient, thanks will add more space.04:09
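clayg's "six replicas" arithmetic, spelled out (a back-of-the-envelope sketch under the default of 3 replicas):

```python
def transient_space_needed(object_size_gb, replicas=3):
    # During POST-as-COPY the proxy streams a GET into a new PUT and
    # only then DELETEs the old object, so for a moment the cluster
    # holds two full sets of replicas of the object.
    return 2 * replicas * object_size_gb


# A 1 GB object with 3 replicas transiently needs space for six
# replicas, which a 1.1 GB mount cannot hold:
print(transient_space_needed(1))  # prints 6
```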
*** mmcardle has quit IRC04:10
*** zackf has joined #openstack-swift04:12
openstackgerritClay Gerrard proposed a change to openstack/python-swiftclient: Add some bash helpers for auth stuff  https://review.openstack.org/8622404:14
clayg^ i but all of the it looks like I did into that, feel free to push over it or -1 it so it'll expire or whatever04:15
claygwow!  "I put all of the time it looks like I did into that" - gawd I'm so lazy, I can't even manage to type out a half assed self deprecating joke about how lazy I am...04:16
*** saurabh_ has quit IRC04:22
*** matsuhashi has joined #openstack-swift04:24
*** nosnos has joined #openstack-swift04:25
*** ppai has joined #openstack-swift04:31
*** zackf has quit IRC04:33
*** matsuhashi has quit IRC04:42
*** _bluev has joined #openstack-swift04:46
*** matsuhashi has joined #openstack-swift04:50
*** _bluev has quit IRC04:51
*** erlon has quit IRC04:52
*** chandan_kumar has quit IRC05:00
*** mmcardle has joined #openstack-swift05:07
*** mmcardle has quit IRC05:11
*** chandan_kumar has joined #openstack-swift05:14
*** mmcardle has joined #openstack-swift05:23
*** mmcardle1 has joined #openstack-swift05:25
*** mmcardle has quit IRC05:25
*** mmcardle1 has quit IRC05:29
*** zhiyan_ is now known as zhiyan05:31
*** _bluev has joined #openstack-swift05:47
*** _bluev1 has joined #openstack-swift05:49
*** _bluev has quit IRC05:49
*** _bluev1 has quit IRC05:53
*** mlipchuk has joined #openstack-swift05:57
*** ppai has quit IRC06:00
*** Guest_ has quit IRC06:05
*** Guest_ has joined #openstack-swift06:06
*** cheri has quit IRC06:08
*** ppai has joined #openstack-swift06:17
*** ashish_ has joined #openstack-swift06:20
*** psharma has joined #openstack-swift06:37
*** ashish_ has quit IRC06:43
*** haomaiw__ has quit IRC06:47
*** _bluev has joined #openstack-swift06:50
*** _bluev has quit IRC06:54
*** Guest_ has quit IRC07:05
*** Guest_ has joined #openstack-swift07:09
*** haomaiwang has joined #openstack-swift07:10
*** ppai has quit IRC07:12
*** cheri has joined #openstack-swift07:15
*** Guest_ has quit IRC07:16
*** ppai has joined #openstack-swift07:24
*** Gue______ has joined #openstack-swift07:26
*** matsuhashi has quit IRC07:29
*** matsuhashi has joined #openstack-swift07:31
*** nshaikh has joined #openstack-swift07:35
*** matsuhashi has quit IRC07:35
*** Gue______ has quit IRC07:39
*** Gue______ has joined #openstack-swift07:39
*** yuan has joined #openstack-swift07:40
*** matsuhashi has joined #openstack-swift07:49
*** tanee-away is now known as tanee07:50
*** tanee is now known as tanee-away07:50
openstackgerritTakashi Kajinami proposed a change to openstack/swift: Add timestamp checking in AccountBroker.is_status_deleted  https://review.openstack.org/8624807:50
*** Guest_ has joined #openstack-swift07:51
*** tanee-away is now known as tanee07:51
*** Gue______ has quit IRC07:53
*** Guest_ has quit IRC07:53
*** Guest_ has joined #openstack-swift07:54
*** Guest_ has quit IRC07:54
*** cheri has quit IRC08:02
*** cheri has joined #openstack-swift08:02
openstackgerritTakashi Kajinami proposed a change to openstack/swift: Add timestamp checking in AccountBroker.is_status_deleted  https://review.openstack.org/8624808:02
*** nacim has joined #openstack-swift08:06
*** gadb has joined #openstack-swift08:16
*** d89 has joined #openstack-swift08:17
*** ashish_ has joined #openstack-swift08:21
gadbhi.08:22
gadbas far as I know, data can be isolated with the concept of zones.08:24
gadbbut when I upload an object, at first it is not.08:25
gadbafter a while the object is moved to each zone08:25
gadbwhat daemon does that job08:26
gadb?08:26
gadbobject-server ?08:26
omameobject-replicator?08:26
gadbi think too, but i see the replicator source code .08:26
*** foexle has joined #openstack-swift08:26
gadbbut i can not find logic08:26
omameoh, object-auditor then :)08:26
gadbright?08:27
*** _bluev has joined #openstack-swift08:27
gadbok. I will check the auditor source code. but08:27
gadbthx for your answer08:28
omamenp08:28
*** jamie_h has joined #openstack-swift08:29
*** gvernik has joined #openstack-swift08:36
gvernikhi. does anyone know why Jenkins failed on test_sleeper (test.unit.obj.test_auditor.TestAuditor) ? (https://review.openstack.org/#/c/85650/)08:37
*** nshaikh has left #openstack-swift08:42
*** _bluev1 has joined #openstack-swift08:44
*** _bluev has quit IRC08:44
*** foexle has quit IRC08:46
*** foexle has joined #openstack-swift08:47
*** chandan_kumar has quit IRC08:49
*** _bluev has joined #openstack-swift08:57
*** _bluev1 has quit IRC09:00
*** foexle has quit IRC09:01
*** chandan_kumar has joined #openstack-swift09:02
*** tanee is now known as tanee-away09:06
*** foexle has joined #openstack-swift09:08
*** gadb has quit IRC09:10
*** tanee-away is now known as tanee09:15
*** haomaiw__ has joined #openstack-swift09:18
*** haomaiwang has quit IRC09:20
*** nshaikh has joined #openstack-swift09:24
*** nshaikh has quit IRC09:25
*** haomaiw__ has quit IRC09:39
*** haomaiwa_ has joined #openstack-swift09:40
*** zhiyan is now known as zhiyan_09:43
*** matsuhas_ has joined #openstack-swift09:45
*** matsuhashi has quit IRC09:45
*** matsuhas_ has quit IRC09:45
*** matsuhashi has joined #openstack-swift09:46
*** ppai has quit IRC09:47
*** haomaiw__ has joined #openstack-swift09:55
*** haomaiwa_ has quit IRC09:57
*** ppai has joined #openstack-swift10:00
*** chandan_kumar_ has joined #openstack-swift10:03
*** foexle has quit IRC10:04
*** foexle has joined #openstack-swift10:04
*** chandan_kumar has quit IRC10:06
*** Trixboxer has joined #openstack-swift10:11
*** matsuhashi has quit IRC10:12
*** ashish_ has quit IRC10:14
*** matsuhashi has joined #openstack-swift10:16
*** tanee is now known as tanee-away10:17
*** tanee-away is now known as tanee10:20
*** jamie_h has quit IRC10:31
*** jamie_h has joined #openstack-swift10:31
*** mkollaro has joined #openstack-swift10:31
*** chandan_kumar_ has quit IRC10:32
openstackgerritA change was merged to openstack/swift: Add missing constraints to /info  https://review.openstack.org/8601510:32
*** praveenkumar has quit IRC10:33
*** mmcardle1 has joined #openstack-swift10:38
*** jamie_h has quit IRC10:41
*** jamie_h has joined #openstack-swift10:46
*** dosaboy has quit IRC10:49
*** dosaboy has joined #openstack-swift10:50
*** chandan_kumar_ has joined #openstack-swift10:58
*** matsuhashi has quit IRC11:05
*** matsuhashi has joined #openstack-swift11:05
*** RockKuo has quit IRC11:09
openstackgerritAlistair Coles proposed a change to openstack/swift: Add tests and comments re constraints in /info  https://review.openstack.org/8628611:15
acolescschwede: notmyname: ^^ tweaks we discussed re constraints in /info. original patch got approved before i managed to update, sorry should have WIP'd11:17
*** matsuhashi has quit IRC11:19
_bluevwe're having some issues with Gerrit / Jenkins - failures with the check-grenade-dsvm tests.  Can anyone help with that?11:27
_bluevhttps://review.openstack.org/#/c/85453/11:27
_bluevIt's related to a new dependency on python-swiftclient that check-grenade-dsvm seems to be unaware of11:28
portanteacoles: will look at it in a bit11:33
acolesportante: thanks11:34
*** matsuhashi has joined #openstack-swift11:36
*** matsuhashi has quit IRC11:39
*** mmcardle1 has quit IRC11:42
*** mmcardle has joined #openstack-swift11:43
*** praveenkumar has joined #openstack-swift11:44
*** joeljwright has joined #openstack-swift11:49
*** ppai has quit IRC11:50
*** foexle has quit IRC11:56
*** foexle has joined #openstack-swift11:58
*** zhiyan_ is now known as zhiyan12:10
openstackgerritYuan Zhou proposed a change to openstack/swift: Update swift-container-info to be storage policy aware  https://review.openstack.org/8630412:15
*** erlon has joined #openstack-swift12:19
openstackgerritA change was merged to openstack/swift: Support for content-length in the upload object method for internal client.  https://review.openstack.org/8565012:34
*** nacim has quit IRC12:37
*** JuanManuelOlle has joined #openstack-swift12:37
*** nacim has joined #openstack-swift12:57
*** psharma has quit IRC13:03
*** JuanManuelOlle has quit IRC13:04
*** tdasilva has joined #openstack-swift13:13
*** nosnos has quit IRC13:14
*** cheri has quit IRC13:21
*** chandan_kumar_ has quit IRC13:46
*** changbl has quit IRC13:47
*** joeljwright has quit IRC13:51
*** chandan_kumar_ has joined #openstack-swift13:53
*** joeljwright has joined #openstack-swift13:55
*** mmcardle has quit IRC13:57
*** gvernik has quit IRC14:07
*** Longgeek_ has joined #openstack-swift14:14
*** mmcardle has joined #openstack-swift14:24
*** zaitcev has joined #openstack-swift14:24
*** ChanServ sets mode: +v zaitcev14:24
*** saju_m has joined #openstack-swift14:26
*** bill_az has joined #openstack-swift14:28
*** joeljwright has quit IRC14:29
*** bill_az_ has joined #openstack-swift14:33
*** joeljwright has joined #openstack-swift14:33
*** bill_az has quit IRC14:33
*** mmcardle has quit IRC14:36
*** erlon has quit IRC14:42
*** piyush1 has joined #openstack-swift14:43
*** zackf has joined #openstack-swift14:47
*** byeager has joined #openstack-swift14:49
*** saju_m has quit IRC14:58
*** Longgeek_ has quit IRC14:58
*** mmcardle has joined #openstack-swift15:00
*** lpabon has joined #openstack-swift15:02
*** nacim has quit IRC15:23
*** nacim has joined #openstack-swift15:32
notmyname_bluev: I'm not sure why that has a problem. I'll ask around15:34
tdasilvanotmyname: good morning15:36
notmynametdasilva: good morning15:36
tdasilvanotmyname: I remember a few weeks ago in during a openstack-meeting it was decided that icehouse will be released without SP and that SP would be released shortly after, is that still the plan?15:37
creihtfor some measure of "shortly" :)15:37
tdasilvalol15:37
tdasilvais there an estimate?15:37
*** mwstorer has joined #openstack-swift15:38
notmynametdasilva: yes. I'm hoping for the first SP patches to be proposed to master next week. clayg and peluse are working hard on it, and we should also have an update from them at today's meeting15:38
tdasilvanotmyname: thanks!15:40
notmynamealso, the current plan is to have a series of patches with dependencies proposed, block the first from landing, get the whole series approved, then unblock the first, thus allowing them to all land temporally close.15:42
*** mmcardle has quit IRC15:42
notmynamebut that's just the "best idea that's been proposed so far". there are certainly other ways to do it15:42
*** mmcardle has joined #openstack-swift15:43
*** zhiyan has quit IRC15:45
tdasilvanotmyname: ok, this makes sense to me.15:45
tdasilvanotmyname: at what point would we stop using the ec feature branch?15:45
tdasilvanotmyname: right after this first batch of patches?15:46
portantecreiht: you around?15:46
notmynametdasilva: the patches proposed to master will not be from the feature/ec branch (ie not a merge from that branch). the patch proposed to master will be in better logical chunks with a cleaner history15:46
creihtportante: what's up?15:47
notmynametdasilva: at that point, feature/ec will at least be reset. we need to discuss as a group if the long-running feature branch is a good idea or not15:47
creihtthe review about moving the functions?15:47
portanteyes15:48
portanteyou can read my mind15:48
portantescary15:48
portantewhat was that movie, scanners?15:48
creihtheh15:48
portantedid my comments resonate with you?15:49
creihtI get what you are saying15:50
tdasilvanotmyname: thanks for the comments, grabbing some lunch now, but will be back for the meeting15:50
portantecreiht: thanks, so did it sway your position, or should I rework what I am doing for solving the in-process functional tests problem?15:52
creihtso it still isn't clear to me if this is required to solve another issue?15:52
creihtor is it purely for form15:53
creihtportante: I guess I would prefer it to be in one patch just so it is more clear why we are doing things15:55
creihtotherwise it just feels like we are moving code around for moving code around's sake15:55
creihtbut that's just me :)15:56
creihtand my allergies have been killing me, so it might make me tempermental :)15:56
*** judd7 has joined #openstack-swift15:58
*** mlipchuk has quit IRC15:59
anticwredbo: debian recent, i can try with debug yes15:59
notmyname_bluev: ok, I have an answer for you16:01
*** gvernik has joined #openstack-swift16:01
notmyname_bluev: short version: you need to add the futures dependency to the stable/havana branch of the openstack requirements repo16:02
notmynameif python-swiftclient adds a new dependency on master, then that dependency also must be backported to the supported stable branch of the requirements repo16:02
*** zhiyan has joined #openstack-swift16:03
*** changbl has joined #openstack-swift16:05
notmynameconversation explaining this for anyone who's interested: https://gist.github.com/anonymous/b366b2d078290777858b16:06
*** gvernik has quit IRC16:07
*** foexle has quit IRC16:08
*** chandan_kumar_ has quit IRC16:08
notmynamethus resulting in python-*client's master branch being usable by still-supported stable openstack integrated releases16:09
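Concretely, that means a one-line addition to the stable/havana branch of the openstack/requirements repo; the version pin below is illustrative only, not taken from the actual review:

```
futures>=2.1.3
```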
_bluevnotmyname: thanks :-)16:13
*** gyee has joined #openstack-swift16:17
*** byeager has quit IRC16:18
*** tanee is now known as tanee-away16:18
*** dmb__ has joined #openstack-swift16:27
portantecreiht: I can combine the eventlet patch with the move of the functions to the individual tests16:28
portantedo you think I just combine that with the in-process test changes as well?16:28
portanteor is that better kept separately?16:28
portanteso you have allergies to code movement? ;)16:28
*** tanee-away is now known as tanee16:29
*** tanee is now known as tanee-away16:29
*** tanee-away is now known as tanee16:29
creihtportante: that doesn't matter as much to me16:35
portanteokay, I'll update in a bit16:35
*** byeager has joined #openstack-swift16:37
zaitcevcschwede: seen this https://blueprints.launchpad.net/swift/+spec/container-alias16:38
openstackgerritPeter Portante proposed a change to openstack/swift: In-process swift server for functional tests  https://review.openstack.org/6610816:39
openstackgerritPeter Portante proposed a change to openstack/swift: Use eventlet instead of threading for timeout  https://review.openstack.org/8578216:39
zaitcevportante: ouch, I'm pointing out a mis-spelling16:41
*** mmcardle has quit IRC16:41
portantezaitcev: where?16:41
zaitcevportante: I'll post review in a moment16:42
portantek16:42
portantethx16:42
zaitcevbut I'm a very shallow reviewer16:42
zaitcevcan't imagine why John invited me to core16:42
zaitceveventlet.hubs.use_hub(get_hub()) -- I see this construct all the time but why is this not the default in eventlet16:43
*** pberis has quit IRC16:44
portantethere is a description of why in utils, I believe16:44
portantecreiht, redbo, other from rackspace might remember, but epoll had some kind of one in a million billion hang, that was very hard to debug16:45
*** joeljwright has quit IRC16:45
zaitcev"This prevents strange race conditions for those who want to use both Twisted and Eventlet separately." -- oh god you crazy ecosystem you16:45
redboIt was losing a socket close event once every gazillion times under load, and when that socket's file descriptor got reused, eventlet freaked out because it was still waiting for an event from it in another coro.  But that was forever ago and hasn't been retested.16:49
zaitcevThanks16:49
openstackgerritA change was merged to openstack/swift: Add tests and comments re constraints in /info  https://review.openstack.org/8628616:54
*** ashish_ has joined #openstack-swift16:54
zaitcevYou know what really happened, BTW. I saw the get_hub() and somehow misthought it as eventlet's function by the same name, but we actually refer to our function in swift.common.utils.16:55
zaitcevmaybe just put 'selects' there since it's just the test and does not need 10,000 descriptors.16:56
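In the spirit of swift.common.utils.get_hub() (a paraphrase, not the verbatim source): Swift prefers the poll hub over eventlet's default epoll because of the rare lost-close-event hang redbo describes above, and falls back to select where poll is unavailable.

```python
import select


def get_hub():
    # Prefer poll over eventlet's default epoll hub (which once lost a
    # socket close event under load, hanging a coroutine when the fd
    # was reused); fall back to select where poll is unavailable.
    if hasattr(select, 'poll'):
        return 'poll'
    return 'selects'


# Typical use at process start (assumes eventlet is installed):
#   import eventlet.hubs
#   eventlet.hubs.use_hub(get_hub())
```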
zaitcevSwitched to branch "review/peter_portante/inprocfunc"16:56
zaitcev[zaitcev@guren swift-fetch]$ ./.functests16:56
zaitcevSSSSSSSSSSSSSSEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE........S....16:56
zaitcevuh-oh16:56
portanteare you on a SAIO?16:57
zaitcevno16:57
zaitcevwell, it's a tempauth, so it ought to be like SAIO16:57
portantelet me know what the error messages say16:57
*** nacim has quit IRC16:59
*** piousbox has joined #openstack-swift17:04
*** piyush1 has quit IRC17:07
*** shri has joined #openstack-swift17:12
*** byeager has quit IRC17:13
*** csd has joined #openstack-swift17:13
*** piousbox has quit IRC17:19
openstackgerritPeter Portante proposed a change to openstack/swift: Base class for storage servers  https://review.openstack.org/8610917:20
*** tanee is now known as tanee-away17:21
*** chandan_kumar_ has joined #openstack-swift17:24
openstackgerritPeter Portante proposed a change to openstack/swift: In-process swift server for functional tests  https://review.openstack.org/6610817:27
openstackgerritPeter Portante proposed a change to openstack/swift: Use eventlet instead of threading for timeout  https://review.openstack.org/8578217:27
*** chandan_kumar_ has quit IRC17:34
*** zhiyan is now known as zhiyan_17:37
*** d89 has quit IRC17:41
*** byeager has joined #openstack-swift17:44
*** chandan_kumar_ has joined #openstack-swift17:47
*** byeager has quit IRC17:52
*** byeager has joined #openstack-swift17:56
*** ashish_ has quit IRC17:56
*** JuanManuelOlle has joined #openstack-swift18:05
*** lpabon has quit IRC18:06
notmyname50 minutes until the team meeting18:09
*** piyush1 has joined #openstack-swift18:11
*** piyush2 has joined #openstack-swift18:12
*** piyush1 has quit IRC18:15
*** gvernik has joined #openstack-swift18:19
*** jamie_h has quit IRC18:22
*** shri1 has joined #openstack-swift18:22
*** shri has quit IRC18:23
*** scohen has joined #openstack-swift18:25
*** ashish_ has joined #openstack-swift18:28
ashish_Hey everyone, could anyone tell me in which directory in Ubuntu I need to run the ssbench commands?18:29
swifterdarrellashish_: unless you use relative paths to files given on the command-line, it shouldn't matter18:32
ashish_swiftdarrel  I did not get your point18:33
ashish_swiftdarrel I tried got the following error. http://paste.openstack.org/show/75441/18:35
swifterdarrellashish_: that's wild!  The command you're after is probably ssbench-master.  See https://github.com/swiftstack/ssbench#example-simple-single-server-run for some examples18:37
swifterdarrellashish_: also that whole README is filled with super-relevant information18:37
cschwedezaitcev: thanks (for the container-alias blueprint). Yes, I thought it would be more simple :) Thinking how to continue on this.18:38
ashish_swiftdarrel Thanks.Please give me a minute to try.18:39
zaitcevcschwede: I am thinking about some reconciliation rules... Like losing alias if a container ever turns up, even tombstoned. But honestly I did not think enough.18:39
*** shri1 has quit IRC18:41
cschwedezaitcev: i’ll think more about this and will collect some ideas in the blueprint. so far one of the challenges is how to notify the user in case something went wrong; ie destination is missing18:44
wermy deletes are getting slower and slower :)  is it worth using db_preallocation for my containers if I have massive amounts of deletes?  Or in general?18:44
ashish_swiftdarrel I tried but got the following error http://paste.openstack.org/show/75442/18:46
creihtwer: db_preallocation is used to prevent file fragmentation of the db on spinning disk18:51
wercreiht: think that would help with things getting slower by the day?  I suspect fragmentation because I am not increasing the number of objects :/18:52
werbasically I know I have a hotspot on that container... And I am going to shard it.  But I don't like that it gets slower each day :/  And was looking for anything else that might be a problem.18:53
creihtwell if it is already fragmented, turning it on isn't going to change anything for that db18:54
creihtyou could try rebuilding the db to a new file to create a non-fragmented copy of it18:54
*** jamie_h has joined #openstack-swift18:54
wercreiht: oh!  So I am creating a brand new cluster.  which I will migrate to.18:54
creihtahh18:54
creihtif you have your containers on spinning disks, then you do want preallocation enabled18:54
creihtif you are using ssds, you don't18:55
ashish_cschwede Could help me with a ssbench error http://paste.openstack.org/show/75442/18:55
wermakes sense.  thanks again creiht :)18:55
*** csd has quit IRC18:56
werdoes preallocation happen when I create the container?  IE if I enable that, and restart the container-server, and remove readd the container will that do what I think it does?18:56
creihtyes18:56
werossm!18:56
creihtYou could remove one copy18:57
creihtactually18:57
creihtit would likely rsync it over18:57
creihtif you delete one and run replication18:57
creihtI think18:57
creiht:)18:57
werinteresting.18:57
creihtwer you can use some xfs command line tools to see how fragmented the file is18:59
creihtI think xfs_bmap maybe18:59
notmynamemeeting time in #openstack-meeting18:59
creihtyeah you can use that to see what blocks a file occupies19:00
creihtwill at least know if it is fragmented, how bad, and if anything you do improves that19:00
werderp.  is it swift-get-nodes to find the actual database.19:00
cschwedeashish_: hmm, some local permission problems? tried to run it with sudo?19:00
ashish_cschwede Thanks.I will give it a try.19:01
creihtwer: yes19:01
wertotally doing this.19:01
claygohai19:03
*** peterbusque has joined #openstack-swift19:03
*** swills has joined #openstack-swift19:03
swillsso how crazy would it be to put swift data on ZFS?19:03
*** mlipchuk has joined #openstack-swift19:04
wergah this container is just very large.19:05
*** byeager has quit IRC19:06
ashish_cschwede sudo resolves it; now I get the following error. Do I need to pass the storage token as well? ERROR: http://paste.openstack.org/show/75444/19:06
*** csd has joined #openstack-swift19:06
*** chandan_kumar_ has quit IRC19:07
cschwedeashish_: please use „http://127.0.0.1:8080/auth/v1.0“ for the url19:07
ashish_cschwede Thanks.That resolves the error But now I got a different error. http://paste.openstack.org/show/75446/19:10
wercreiht: I didn't follow you on the whole delete and then replicate idea.19:12
*** dmb__ has quit IRC19:12
wer95 chunks in this fragmentation btw.  It's 2.5 G19:13
cschwedeashish_: are your credentials correct? accountname is admin, username is admin and secret also?19:13
ashish_cschwede the credentials are correct.19:14
cschwedeashish_: your url is wrong: http://127.0.0.1:8080/v1.0/ - should be http://127.0.0.1:8080/auth/v1.0/19:14
creihtwer: if you delete the container db, container replication will rsync one of the containers from another server to the one you deleted it from19:16
creihthopefully with less fragmentation19:16
*** mlipchuk has quit IRC19:17
ashish_cschwede thanks a lot now my ssbench works perfectly19:17
cschwedeashish_: you’re welcome, nice that it works for you now!19:18
ashish_cschwede are the containers and objects in my admin account being tested in this benchmark19:19
wercreiht: that makes me nervous.  I had strange results early on with deleting things.  I could "change" things and they got fixed, via replication, but I have deleted things only to never have them get replaced.....  that was with object replication so maybe I didn't wait long enough :/19:19
creihtyeah object replication is a little different19:20
*** chandan_kumar_ has joined #openstack-swift19:20
creihtI would also stop the deletes while you do that if you can19:20
creihtjust trying to brainstorm easy ways to rebuild the db19:20
weryeah I'm all curious now :)  I 'll have to try  that once I'm in a better position.19:21
swifterdarrellashish_: (sorry for the delay; stepped out for lunch)19:21
creihtwer: actually what would be easier19:22
creiht1.) stop your deletes19:22
creiht2.) stop one of the container servers19:22
swifterdarrellashish_: that code path gets hit when SUDO_UID and SUDO_GID are in the environment; is that true and your current user is not the superuser?19:22
creiht3.) mv the container db file, and then copy it back to the original name19:22
creihtthat should give you a less fragmented file19:23
ashish_swifterdarrell no issues, cschwede got all my doubts solved. cschwede Thanks.19:23
werderp.  Is that just xfs magic?19:23
swifterdarrellashish_: sweet!19:23
werOr just new better "creation" versus all the updating.19:23
*** ryanez has joined #openstack-swift19:25
creihtwell when it creates the new file, it creates it all in one contiguous or near-contiguous block19:26
_bluevis anyone aware of a reason why the content-length in a GET proxy log message would be less than the size of the object? The status of the message is 200.  User reported a truncated file.    We are running Grizzly 1.8.019:27
ryanezHi, I installed swift on a vm according to http://pastebin.com/7umP7EB6. I used a loopback device. Is there a guide for stopping swift, resizing the loopback device, and restarting swift? What portions of the install do I have to redo?19:28
werright.  The copy.   A rebalance on the container could do it too probably.... hrm.19:28
creiht_bluev: sounds likely that it got uploaded truncated19:29
ashish_swifterdarrell: Now I am able to run ssbench, but I have a doubt: are the objects in my swift actually being tested by ssbench?19:29
_bluevcreiht: I should have said - the filename matched in historic logs shows a larger size than the later 200 GET response19:30
notmyname_bluev: creiht: or perhaps a bad replica was loaded from disk and quarantined after the read. I don't remember what status code that used to be logged as19:30
wercreiht: think that may be an OK way to unfragment occasionally?  Just rebalance the container ring?  I don't have a ton of containers... and I follow the same giant ring structure.....19:30
creihtwer: yeah dunno, could be worth a try19:31
creihtryanez: you can start/stop swift with swift-init all start/stop19:31
creihtor swift-init main start/stop if you aren't running the background services19:32
creihtre-create the loopback device the size you want19:32
creihtrun the resetswift script19:32
creihtand you should be good to go19:32
ryanezAwesome! Thanks creiht!19:33
creihtthat will delete any current data you have19:33
creihtjust to be clear :)19:33
creihtryanez: -^19:33
ryanezno problem. This is just for a docker container19:33
creiht_bluev: yeah not sure off the top of my head then, first step might be to look at the three replicas to see if they are all the same, and see if they are correct or truncated19:34
*** openstackgerrit has quit IRC19:34
creihtswills: some have mused about running swift on zfs19:34
creihtdon't think it would be crazy, but might take a little work19:34
swillscreiht: oh? i kinda have it working already19:36
_bluevcreiht notmyname  thanks guys - will check both of those possibilities19:36
creihtswills: well there you go then :)19:37
swillscreiht: main thing was telling it not to check mountpoint, although technically i could have made it a mountpoint...19:37
swillscreiht: my question was more about performance and efficiency19:37
creihtswills: I don't think anyone has gotten much further than you19:37
swillscreiht: scary. ;)19:37
swillsyeah, i actually have it running in jails on zfs on FreeBSD19:38
creihthehe19:38
swillsso that's fun...19:38
creihtwell good to hear that it works19:38
swillsbut i was asking because of the stuff i was reading about using xfs and such19:38
swillsand there being code to check that files aren't corrupt, which zfs also does for you19:39
creihtright19:39
swillsall i really did to get it working was install the port http://www.freshports.org/databases/py-swift/19:39
swillsand read the docs, configure, get it going, etc.19:39
creihtoh nice :)19:39
swillsthere's the client too http://www.freshports.org/databases/py-swiftclient/19:39
swillsanyway, i was just wondering, how much do people really stick to all that stuff about non-redundant disks, xfs, etc.?19:40
swillscause i have a need for a deployment with many disparate nodes but not a ton of data19:40
swillsso i was thinking a smaller number of more redundant nodes19:41
swillsand maybe performance doesn't matter all that much because it's all cached in ram anyway, etc.19:41
creihtswills: well you are in uncharted territory, but I'm sure there are others that would like to hear more as you go19:41
swillsok. :)19:42
*** gvernik has quit IRC19:42
*** openstackgerrit has joined #openstack-swift19:42
creihtI think the team from softlayer did a lot of work to get swift to work on freebsd, but I think they ended up switching to linux just because it was easier19:42
creihthopefully they might be able to chime in19:42
swillsok19:42
swillsi wonder if there was any real work involved, or just removing linuxisms19:43
swills"omg, eth0!"19:43
creihtlol19:43
creihtI don't remember exactly19:43
creihtthere is also certainly opportunity to optimize swift to work better with freebsd/zfs19:44
creihtand hopefully that will get easier over time with the disk abstraction19:44
swillssure19:44
creihtso the xfs specific bits could have a zfs equiv plugin19:45
swillswhat are the xfs specific bits?19:45
zaitcevthe zero-file is the most important19:47
swills(in case it wasn't clear, my jail/zfs implementation was for testing, i'm kinda trying to figure out if it's viable for prod)19:47
creihtyeah so for example, if xfs get corrupted it creates a zero-byte file19:48
creihtthe auditor can do a quick scan for zero-byte files, and validate them19:48
zaitcevthere's a patch to auditor to work around it, but it was stalled for a while19:48
swillsfilesystem corruption, interesting... ;)19:49
creihtlol19:49
zaitcevhttps://review.openstack.org/1145219:49
*** scohen has quit IRC19:49
creihtswills: I know that *never* happens on zfs ;)19:49
*** chandan_kumar_ has quit IRC19:49
zaitcevyeah, just the whole node fails to reboot because it can't get a pool to start19:49
creihtbut that said, some of the process could be written in a way to take advantage of zfs' characteristics19:49
swills*nod*19:50
creihtI think things should mostly work19:50
wercreiht: no fragmentation now.  I'll see if the deletes improve.19:50
creihtwer: cool19:50
weryeah neat :)19:50
*** scohen has joined #openstack-swift19:50
claygI don't think i can actually get tox tests to run on python-swiftclient right now? (tox.ConfigError: ConfigError: substitution key 'posargs' not found)19:51
*** byeager has joined #openstack-swift19:51
creihtswills: some things are written in a way that follows how xfs works19:52
zaitcevI think auditor is the only one that matters.19:52
swillshmm, ok, like what? could i expect anything to break?19:52
creihtswills: not likely19:52
creihtbut for example19:52
swillsi know nothing of xfs, fwiw19:52
creihtfile metadata writes in xfs are atomic19:52
zaitcevEverything else works perfectly on ext419:53
creihtso we don't fsync when doing writes to metadata things (like setting xattrs since they are stored in the inode)19:53
*** ryanez has left #openstack-swift19:53
creihtswills: yeah these things are subtle19:53
swillsgreat, fsync is the devil.19:53
creihtheh19:53
creihtbut we do fsync the writes of the objects19:54
swillsoh, dang19:54
creihtwe also try to prealloc the size of the object if it is available19:54
creihtso that it doesn't cause fragmentation in xfs19:54
swillsshouldn't have any effect on zfs, i think19:54
creihtswills: but all of that said, things should mostly work19:55
creihtif they don't, please let us know :)19:55
swills*nod* they seem to19:55
creihtit would also be interesting to hear how performance goes for you19:55
swillsoh, it's awful. :)19:55
creihtand if you run into problems, let us know19:55
creihtheh19:55
swillsbut that's because it's all just running on my desktop for testing19:55
creihtoh19:56
creihthehe19:56
swillsall the jails are on one box on one set of mirrored zfs disks19:56
creihthah19:56
*** byeager has quit IRC19:56
swillsso, you know, low expectations19:56
ashish_swifterdarrell I have objects in the range 1-16 KB, i.e. only tiny, but still I get operations on medium and small, even though I have none of the objects in their range19:56
swillsbut it seemed functional19:56
creihtswills: if you can run the functional and probe tests, then you are doing pretty good19:57
swillstests! that's a good idea! how do i do that?19:57
ashish_swifterdarrell this is my test run output http://paste.openstack.org/show/75447/. Please explain19:57
swillsi just tested with the client uploading some files, downloading them, etc.19:57
*** piousbox has joined #openstack-swift19:57
creihtswills: which instructions did you follow to set up?19:58
clayghrmm... tox version 1.6 seems to be working better19:58
swillshttp://docs.openstack.org/developer/swift/howto_installmultinode.html19:58
swillsmostly those, i think19:58
swillswith a detour into some of the others to figure some things out19:59
creihtswills: in the swift codebase, there is a swift/test/sample.conf20:00
creiht(not sure where that ends up in the freebsd install)20:00
creihtcopy that to /etc/swift/test.conf20:00
creihtof course translate my linuxisms to freebsdisms :)20:01
swillsit's mostly prefixing everything with /usr/local20:01
swills:)20:01
creihtedit that file to make sure the auth endpoint is correct, and create the test users needed20:01
creihtthen you should be able to run the functional tests which are /swift/.functests20:02
creihtthat will go a long ways to testing stuff20:02
creihthttp://docs.openstack.org/developer/swift/development_saio.html20:02
creihtthat describes a bit of a simpler setup that most of us dev on20:02
swillsok, for auth, i don't have keystone20:02
swillslet me see...20:02
creihtand auto sets up stuff for testing20:03
*** byeager has joined #openstack-swift20:03
creihtswills: what are you using for auth?20:03
creihtthe default settings are setup for the simpleauth20:03
swillstempauth20:03
creihtor tempauth20:03
swillsthat user_system_root = testpass .admin stuff20:04
creihtsorry name has changed a couple of times :)20:04
creihtyeah20:04
creihtif you search for the proxy-server.conf on that page20:04
creihtunder filter:tempauth20:05
creihtif you copy all of those over, then you should have all the test accounts needed for default settings20:05
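[Editor's note: the tempauth test accounts creiht refers to look roughly like this in the SAIO-era proxy-server.conf — verify the exact users against the SAIO doc linked above, since they have changed across versions:]

```ini
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
```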
swillsok20:05
creihtswills: I *think* at that point, you should be able to run the .functests20:08
creihtI don't think you will be able to run the probe tests with your current setup20:09
swillsi can run the tests from a linux VM or something if necessary20:10
*** praveenkumar has quit IRC20:10
creihtI don't think it should be necessary20:10
*** openstackstatus has quit IRC20:15
swillsok, finally got it all running again...20:15
swillsi did this like a month ago as a proof of concept, didn't get anywhere really as far as $work and hardware go... so I shut it down... now i've got the go-ahead so i'm figuring a few things out, need to start things back up, etc.20:16
*** openstackstatus has joined #openstack-swift20:16
*** scohen has quit IRC20:18
swillsreading this dev doc, makes me wish i wrote some notes...20:21
*** ashish_ has quit IRC20:21
openstackgerritClay Gerrard proposed a change to openstack/python-swiftclient: Add some bash helpers for auth stuff  https://review.openstack.org/8622420:23
*** scohen has joined #openstack-swift20:23
*** Mikalv has joined #openstack-swift20:38
openstackgerritPeter Portante proposed a change to openstack/swift: Base class for storage servers  https://review.openstack.org/8610920:38
*** bmac423 has joined #openstack-swift20:40
*** lpabon has joined #openstack-swift20:40
bmac423Can anyone help with swift-informant? I'm not seeing the stats in graphite, although I seem to have everything configured correctly per the openstack cookbook. Anything I should look for?20:42
_bluevis the content-length for a GET in the object server main log the actual number of bytes read from the disk, or is it the metadata content-length ?20:44
_bluevbmac423: do you have a statsd server up and running ?20:45
bmac423I do20:45
bmac423I can manually send stats to it20:45
bmac423e.g. echo "foo:1|c" | nc -u 127.0.0.1 812520:48
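[Editor's note: bmac423's `nc` one-liner is a single UDP datagram in the statsd counter format; a minimal Python equivalent, with host/port being whatever your statsd listens on:]

```python
import socket

def send_statsd_counter(name, value, host='127.0.0.1', port=8125):
    """Emit a statsd counter datagram, e.g. b'foo:1|c', the same
    payload the `echo "foo:1|c" | nc -u` test sends."""
    payload = ('%s:%d|c' % (name, value)).encode('ascii')
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, (host, port))
    finally:
        sock.close()
```

Watching for these datagrams with tcpdump (as pandemicsyn suggests below) confirms whether the middleware is emitting anything at all.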
*** G________ has joined #openstack-swift20:50
*** lpabon has quit IRC20:50
bmac423test passes20:51
bmac423Ran 15 tests in 0.013s20:51
bmac423swift-informant-master/test/test_informant.py20:52
pandemicsynbmac423: if you tcpdump on the stats port or run your version of statsd in a verbose/debug mode do you see any udp traffic from your proxy ?20:53
*** tdasilva has left #openstack-swift20:54
openstackgerritA change was merged to openstack/swift: Set permissions on generated ring files  https://review.openstack.org/8554620:55
*** G________ has quit IRC20:56
bmac423pandemicsyn: tcpdump is silent unless I manually send something20:57
pandemicsynweird, and you've got it in your pipeline near the front and specified a port and ip right ?21:01
bmac423it's currently last in the pipeline, does that make a difference?21:01
peluseclayg:  you there?21:02
pandemicsynlast, like last in the list? that'll definitely cause a problem, because the requests won't ever actually reach informant21:02
bmac423ha, I see. Let me move it up to the front and give it a shot21:03
_bluevPlease can anyone help me to understand these log entries from the time a user received a truncated file. First the PUT. Then the doomed GET21:03
_bluevApr  9 15:56:56 SM-X9DBL36B-S-3-PWP71-GB object-server 10.32.37.73 - - [09/Apr/2014:14:56:56 +0000] "PUT /sdi1/1655169/AUTH_91b3483a7d744f57b5359983b31b95d1/Mouse/_versions/5/_project/514.avb" 201 - "-" "tx1eb33c0e7aad47f4ae14e42d874d5962" "openstack.net/1.1.2.1" 0.115021:03
_bluevApr  9 16:04:37 DE-R210II-S-3-x    proxy-server 127.0.0.1 127.0.0.1 09/Apr/2014/15/04/37 GET /v1/AUTH_91b3483a7d744f57b5359983b31b95abc/Mouse/_versions/5/_project/514.avb HTTP/1.0 200 - openstack.net/1.1.2.1 7e0ba34ee77b4ee4b6903b161dbe4c3f - 524288 - txc306f5e1a27645b9aa424edbe081631c - 49.7439 -21:03
_bluevApr  9 16:04:37 DE-R210II-S-3-x    proxy-server ERROR with Object server 10.32.37.85:6000/sdi1 re: Trying to read during GET: ChunkReadTimeout (45s) (txn: txc306f5e1a27645b9aa424edbe081631c) (client_ip: 127.0.0.1)21:03
_bluevApr  9 16:03:50 SM-X9DBL36B-S-3-x  object-server 10.32.37.73 - - [09/Apr/2014:15:03:50 +0000] "GET /sdi1/1655169/AUTH_91b3483a7d744f57b5359983b31b95d1/Mouse/_versions/5/_project/514.avb" 200 3245061 "-" "txc306f5e1a27645b9aa424edbe081631c" "openstack.net/1.1.2.1" 0.606221:04
_bluevCluster is all running 1.8.0 and node_timeout is 45seconds21:04
claygpeluse: sure21:05
_bluevHere's the proxy server message from the PUT, this time showing the content-length:21:07
_bluevApr  9 15:56:56 DE-R210II-S-3-PWP71-GB proxy-server 127.0.0.1 127.0.0.1 09/Apr/2014/14/56/56 PUT /v1/AUTH_91b3483a7d744f57b5359983b31b95d1/Mouse/_versions/5/_project/514.avb HTTP/1.0 201 - openstack.net/1.1.2.1 7e0ba34ee77b4ee4b6903b161dbe4c3f 3245061 - - tx1eb33c0e7aad47f4ae14e42d874d5962 - 0.1540 -21:07
notmyname_bluev: looking.21:07
notmyname_bluev: there's a couple  of things you can do to help people look at it21:08
notmyname_bluev: first, use paste.openstack.org (or some other pastebin site)21:08
*** csd_ has joined #openstack-swift21:08
notmyname_bluev: second, notice the transaction id in those log lines (eg tx1eb33c0e7aad47f4ae14e42d874d5962 in the last one)21:08
notmyname_bluev: all log lines associated with a single request will have the same transaction id. so you can grep your logs (on every node) for that transaction id and get everything about it21:09
_bluevnotmyname: good point re paste, thank-you. Put them here: http://paste.openstack.org/show/75449/21:10
openstackgerritpaul luse proposed a change to openstack/swift: Add Storage Policy Documentation  https://review.openstack.org/8582421:10
notmyname_bluev: thanks21:11
*** csd has quit IRC21:11
notmyname_bluev: the PUT seems off. there should be more than one storage node log line21:11
_bluevI will grab the other two lines21:11
notmynamethanks21:12
notmyname_bluev: so to sum up, it seems that the PUT logs 3245061 bytes and the GET logs only 524288 bytes21:12
peluseFYI anyone wanting to start reading through the raw content that will become Storage Policy docs, please feel free to start taking a look at the above patch and commenting.  Right now it's a mix of end user stuff and internals.21:12
pelusenotmyname:  let me know how you feel on the level of detail in the 'under the hood' section - no diagrams or pics as of yet, just trying to get the text in there and see how it plays21:13
notmyname_bluev: although it looks like the GET to the object server did the right thing21:13
notmynamepeluse: it's at the top of https://wiki.openstack.org/wiki/Swift/PriorityReviews now21:14
*** G________ has joined #openstack-swift21:14
peluseclayg:  I just added a few TODO cards on Trello for places I noticed were still un-SP'd so to speak.  Was going to knock one out, had you looked at ContainerSync at all?21:14
notmynamepeluse: I'll take a look later21:14
pelusegracias21:14
*** scohen has quit IRC21:14
*** openstackstatus has quit IRC21:14
pelusenotmyname/clayg:  also note that account reaper is still TODO, was accidnetally moved to DONE a while back.  SHould be small though21:15
openstackgerritClay Gerrard proposed a change to openstack/python-swiftclient: Add some bash helpers for auth stuff  https://review.openstack.org/8622421:16
*** scohen has joined #openstack-swift21:16
claygpeluse: fun fun!21:16
*** JuanManuelOlle has quit IRC21:16
*** csd_ is now known as csd21:16
claygpeluse: I think I have not looked at ContainerSync21:17
notmyname_bluev: but the error on the read is important in this case, and would explain what you are seeing21:17
peluseclayg: caught on consolidated diff - cool to see the big picture now.21:18
peluseclayg:  no biggy, I'll take care of them and let you know if I have issues21:18
notmyname_bluev: BTW, swift 1.11 included a fix you would like https://github.com/openstack/swift/blob/master/CHANGELOG#L170 (auto server-side retry on this type of failure)21:18
notmyname_bluev: is the bug you are seeing repeatable? does it happen on every request?21:19
_bluevnotmyname: not on every request, no. I've double checked the transactions and have updated http://paste.openstack.org/show/75450/21:20
* notmyname wonders what a swift-backed pastebin would look like21:20
*** scohen has quit IRC21:20
pandemicsynnotmyname: heh we use one21:21
notmynamepandemicsyn: cool21:21
pandemicsynhttp://paste.ronin.io21:21
pandemicsynthats the stripped down version21:21
claygacphacphapchaph github link 404's21:21
pandemicsyn(soonish)21:22
pandemicsynbecause im lazy21:22
claygNOW *is* soonish21:22
*** openstackstatus has joined #openstack-swift21:22
pandemicsynclayg: https://www.youtube.com/watch?v=gNIwlRClHsQ21:22
_bluevnotmyname: that server-side read timeout recovery looks excellent, yes.21:23
notmyname_bluev: ya, it's one of those things that is like "how are we just getting this now?"21:24
notmyname(thanks dfg!)21:24
*** judd7 has quit IRC21:32
_bluevnotmyname: so if I understand - the proxy server is streaming the content through on the GET request. the object server likely has a disk issue or some other problem and we reach node_timeout, whereby the proxy error is logged. The user has received part of the file but the flow stops.  The proxy in 1.8.0 does not try to recover.21:33
notmyname_bluev: that is correct. there is one other thing I'm looking at in your logs, though21:36
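[Editor's note: the failure mode _bluev summarizes can be sketched as a per-chunk deadline on reads from the object server. This is a simplified stand-in: the real proxy interrupts the read with an eventlet Timeout (raising ChunkReadTimeout) rather than checking elapsed time after the fact:]

```python
import time

class ChunkReadTimeout(Exception):
    pass

def stream_chunks(read_chunk, node_timeout=45):
    """Yield body chunks, aborting if any single read exceeds
    node_timeout. By the time this raises, the 200 status and earlier
    chunks are already on the wire, so the client sees a truncated
    body rather than an error status."""
    while True:
        start = time.time()
        chunk = read_chunk()
        if time.time() - start > node_timeout:
            raise ChunkReadTimeout('read exceeded %ss' % node_timeout)
        if not chunk:
            return  # clean end of body
        yield chunk
```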
_bluevnotmyname: assuming the last field is the amount of time taken, the object-server log for the GET looks to be bizarrely short to me21:37
notmyname_bluev: well, what's confusing me is the IP addresses there. the PUT goes to one (?!) server (.73). but the error is with an object server on .85. why is that?21:38
_bluevnotmyname: DE-R210II-S-3-x is the proxy server and is .73.  The three object servers in the PUT section at the top are .82 .25 .85.  - they are three separate machines21:42
*** Trixboxer has quit IRC21:46
notmyname_bluev: ah, my mistake. that's the remote address, not the local one21:46
_bluevnotmyname: :-)21:46
notmyname_bluev: so you're right on track. there's 2 things I'd do next21:46
notmyname_bluev: first, check to see if there are other errors on .85. maybe it's a bad NIC or drive or something else that needs to be replaced21:47
notmyname_bluev: second, if there isn't a bad drive, then check the on-disk object on .85 and see what's up with it21:47
_bluevnotmyname:  Thank you - that sounds like a good idea.        The last piece of the puzzle for me is - why does the user receive HTTP 200 - is that because the HTTP status code is determined before a single bit of payload has been sent ?21:48
notmyname_bluev: and if those don't turn anything up, then could it be something that is happening under load? is .85 getting overwhelmed?21:48
notmyname_bluev: yes. headers are sent first, so we can't rewind what's already been sent to the client21:49
notmyname_bluev: which is (another reason) why the etag is nice. the client can check that the bytes received match the value in the header21:49
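[Editor's note: the etag check notmyname mentions is just an MD5 over the received bytes; a tiny client-side sketch (for plain, non-SLO/DLO objects, where Swift's ETag is the hex MD5 of the body):]

```python
import hashlib

def body_matches_etag(body_bytes, etag_header):
    """True when the body's MD5 matches the ETag header; a mismatch
    catches truncation like the 524288-of-3245061-byte GET above."""
    return hashlib.md5(body_bytes).hexdigest() == etag_header.strip('"')
```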
*** piyush2 has quit IRC21:49
bmac423Getting this error now: LookupError: No section 'informant' (prefixed by 'app' or 'application' or 'composite' or 'composit' or 'pipeline' or 'filter-app') found in config /etc/swift/proxy-server.conf21:50
_bluevnotmyname: .85 has had a few issues - its disks are about 2.5yrs old and the replication to push data off of it of course means it has a lot of work to do. load avg is around 7 normally but does spike to 1521:51
notmyname_bluev: how many cores? load average of 15 is fine if you've got 16 cores ;-)21:52
_bluevnotmyname: 4 cores :-(21:52
notmyname_bluev: so it could simply be a hardware bottleneck or it could be a drive issue21:54
pandemicsynbmac423: do you have a typo in your [filter:informant] maybe ?21:54
pandemicsyn[filter:informant] section*21:54
bmac423I don't think so...21:55
_bluevnotmyname: i'm betting on disk , but yes - could be NIC possibly or generally too much load.21:55
bmac423[filter:informant] use = egg:informant#informant statsd_host =21:55
*** bmac423 has quit IRC22:07
*** jamie_h has quit IRC22:15
*** peterbusque has quit IRC22:15
*** peterbus_ has joined #openstack-swift22:15
*** G________ has quit IRC22:26
*** _bluev has quit IRC22:28
*** openstack has joined #openstack-swift22:34
*** mkollaro has quit IRC22:44
openstackgerritClay Gerrard proposed a change to openstack/swift: Expose container info on deleted containers.  https://review.openstack.org/8646522:53
claygnotmyname: ^22:53
notmynameclayg: thanks22:53
notmynameclayg: look at that sexy topic22:53
peluseahhh22:56
notmynameclayg: peluse: https://wiki.openstack.org/wiki/Swift/PriorityReviews22:57
peluseExpose is a good word I think22:57
* clayg looks up virility22:57
claygpeluse: is that really the word you meant?  RE: Diskfile directory structure "external modules have no virility into that detail"22:58
pelusereally, I did that?  :)22:59
pelusepretty sure that was meant to be visibility.  I typed all that in an email so it would get spellchecked as I went22:59
notmynamehttp://thesaurus.com/browse/virility  I guess you could use "manhood"22:59
pelusehahah22:59
*** byeager has quit IRC22:59
peluseI say we keep it :)23:00
openstackgerritClay Gerrard proposed a change to openstack/swift: Put X-Timestamp in object 404 responses  https://review.openstack.org/8646723:01
openstackgerritpaul luse proposed a change to openstack/swift: Add Storage Policy Support to Container Sync  https://review.openstack.org/8646923:02
claygpeluse: wow, is that it?23:03
peluseclayg: unit testing could be a bit better.  it only tests policy 0 now but other than that yes23:03
peluseclayg: not sure which tests to manually extend cross policies. Who is a good container_sync person to ask about the various test cases?23:04
notmynamepeluse: gholt23:04
pelusecool, thanks.  Will hit him up tomorrow.23:05
peluseclayg:  I also kept using some of the names (and hardcoded 0 in unit tests) until we get all this stuff done, then we can do one swoop to update variable names, etc23:06
claygpeluse: that's fine23:07
openstackgerritClay Gerrard proposed a change to openstack/python-swiftclient: Add some bash helpers for auth stuff  https://review.openstack.org/8622423:09
peluseclayg:  saw the doc comments, thanks!  Will update tomorrow.  No clue how to correctly format, etc., so just focusing on content now and maybe someone else who knows this sphincter thing can help :)23:09
claygpeluse: totally!23:09
*** bobf has joined #openstack-swift23:19
*** changbl has quit IRC23:21
claygpeluse: aww man i was looking at the unified diff and saw get_data_dir and get_async_dir - we should pull that bit out of the container-obj-put-409 patch and get it on feature/ec23:29
peluseyeah, I agree23:32
pelusewant me to do it?23:33
*** _bluev has joined #openstack-swift23:34
pelusealso, wrt container sync there might be some more to do there beyond test code (checking whether it handles the new scenarios) but I'll chat with gholt and see what he thinks too23:34
*** zackf has quit IRC23:34
claygpeluse: sure123:35
bobfdoes the feature/ec branch contain the swift-ring-builder for multiple policies? After modifying swift.conf and remakerings to handle storage policies, the output appears not to generate object-<x>.builder as expected. Am I missing a step for multiple-policy enablement?23:36
peluseclayg:  cool, will do23:37
pelusebobf:  you need to add the policy index when you use the builder command, did you check out the emerging docs yet?23:37
bobfpointer to doc?23:37
pelusehttps://review.openstack.org/#/c/85824/23:38
bobfok thanks23:38
notmynamebobf: to be fair, those were pushed publicly about 2 hours ago ;-)23:38
pelusepretty raw right now - just posted initial content a few hrs ago so expect lots of changes.  the basics for getting going should work though.  if you have feedback, would love to hear it!23:38
peluseYeah, I know - earlier comment wasn't meant to be an RTFM type thing :)23:39
bobf:)23:39
notmynamelol. peluse is "helpful" by telling people, "that's totally documented in the proposed patch that I pushed up 30 minutes ago! go read the docs!"23:39
peluseyeah yeah :)  you know that's not what I meant... should have said "if you can, please take this opportunity to check out the emerging documentation"...23:40
*** mwstorer has quit IRC23:42

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!