Tuesday, 2014-10-07

*** occupant has quit IRC00:01
notmynamenitika2: mahati (not here): I didn't get to OPW project scheduling today. should be towards the top of my list for tomorrow00:09
*** dmsimard_away is now known as dmsimard00:15
*** dmsimard is now known as dmsimard_away00:18
nitika2notmyname: No problem.00:19
nitika2thanks00:19
*** dmorita has joined #openstack-swift00:26
*** gyee has quit IRC00:27
*** kopparam has joined #openstack-swift00:40
*** addnull has joined #openstack-swift00:42
*** kopparam has quit IRC00:45
*** shri has left #openstack-swift00:46
*** cpen has quit IRC00:52
*** nosnos has joined #openstack-swift01:01
*** oomichi has joined #openstack-swift01:23
*** annegentle has joined #openstack-swift01:34
*** annegentle has quit IRC01:40
*** kopparam has joined #openstack-swift01:41
*** kopparam has quit IRC01:46
*** kyles_ne has quit IRC01:53
*** kyles_ne has joined #openstack-swift01:54
*** kyles_ne has quit IRC01:58
redbodoes that mean I get 2 votes in everything?02:10
redbofungi: I have no idea how I'd remove that account.02:12
fungiredbo: not remove it from gerrit, just check the checkbox next to it in the group membership list and hit the delete button (take the redundant account out of the swift-core member list)02:14
fungiredbo: i wasn't 100% sure which one you were using, or i'd have done it for you02:14
*** annegentle has joined #openstack-swift02:18
redbofungi: I can't check those boxes and don't have a "remove" button, but feel free to remove mike-launchpad@.  It's some zombie account that gerrit and launchpad conspired to create and cause me headaches.02:23
fungiredbo: oh, sorry, just noticed swift-core was set owned by swift-release instead of self-managed. no wonder i had you confused02:24
fungiredbo: cleaned up--thanks!02:25
redbofor some reason my launchpad account had 2 oauth identities associated with it when gerrit started, and it could never decide which one I was.02:25
*** zigo has quit IRC02:27
*** zigo has joined #openstack-swift02:29
*** kopparam has joined #openstack-swift02:42
*** kopparam has quit IRC02:46
*** annegentle has quit IRC02:47
*** notsogentle is now known as annegentle02:47
*** nitika2 has quit IRC02:54
*** kbee has joined #openstack-swift03:22
*** nosnos has quit IRC03:24
*** kbee has quit IRC03:35
*** nosnos has joined #openstack-swift04:10
*** kota_ has joined #openstack-swift04:15
*** fifieldt_ has quit IRC04:42
*** kota_ has quit IRC04:44
*** kota_ has joined #openstack-swift04:45
*** kota_ has quit IRC04:56
*** kyles_ne has joined #openstack-swift04:59
*** cschwede has joined #openstack-swift05:21
*** cschwede has quit IRC05:24
*** zaitcev has quit IRC05:31
*** nosnos has quit IRC05:37
*** nosnos has joined #openstack-swift05:38
*** nosnos has quit IRC05:42
*** nosnos has joined #openstack-swift05:43
*** kopparam has joined #openstack-swift05:45
*** bkopilov has quit IRC05:46
*** cschwede has joined #openstack-swift05:58
*** oomichi has quit IRC06:02
*** jokke_ has quit IRC06:11
*** ttrumm_ has joined #openstack-swift06:12
*** hhuang has quit IRC06:13
mattoliverauWell I'm calling it a night! Night all.06:15
*** kyles_ne has quit IRC06:21
*** kyles_ne has joined #openstack-swift06:22
*** kyles_ne has quit IRC06:26
*** hhuang has joined #openstack-swift06:27
*** delattec has quit IRC07:09
*** delattec has joined #openstack-swift07:12
*** kopparam has quit IRC07:18
*** fifieldt has joined #openstack-swift07:22
*** joeljwright has joined #openstack-swift07:30
*** hhuang has quit IRC07:33
*** hhuang has joined #openstack-swift07:37
*** geaaru has joined #openstack-swift07:39
*** hhuang has quit IRC07:42
*** kopparam has joined #openstack-swift07:46
*** jistr has joined #openstack-swift07:46
*** foexle has joined #openstack-swift08:00
*** acoles_away is now known as acoles08:04
*** openstackgerrit has quit IRC08:11
*** nosnos has quit IRC08:23
*** nellysmitt has joined #openstack-swift08:25
*** nosnos has joined #openstack-swift08:40
*** aix has joined #openstack-swift08:52
*** mkollaro has joined #openstack-swift09:00
*** aix has quit IRC09:00
*** aix has joined #openstack-swift09:01
*** geaaru has quit IRC09:11
*** geaaru has joined #openstack-swift09:15
*** oomichi_ has joined #openstack-swift09:16
*** hhuang has joined #openstack-swift09:18
*** kbee has joined #openstack-swift09:18
*** ChanServ sets mode: +v cschwede09:20
*** aix has quit IRC09:38
*** Dafna has joined #openstack-swift09:40
*** kbee has quit IRC09:42
*** btorch has joined #openstack-swift09:43
*** dosaboy_ has joined #openstack-swift09:43
*** btorch_ has quit IRC09:47
*** kopparam has quit IRC09:48
acolescschwede: hi, you around?09:48
*** kopparam has joined #openstack-swift09:48
*** dosaboy has quit IRC09:48
cschwedeacoles: Hi Alistair!09:49
*** kopparam has quit IRC09:49
acolescschwede: i just hit bug 1376878, assertion error in test_upload09:49
acolesdid you make any progress on a fix, i think i can see the cause09:50
cschwedeacoles: no, no fix from my side yet. i'm curious, what's the cause?09:51
*** aix has joined #openstack-swift09:52
acolescschwede: service.py, line 1189 onwards, jobs to create container and segment container are now submitted to thread pools09:52
cschwedeacoles: ahh, yes, now it makes a lot of sense to me. good catch!09:53
acolesso can occur 'out of order', but test assumes they are ordered (assert_called_with checks the last call to a method)09:53
cschwedeacoles: yes, and i was trying to use something like "has_calls" and just check if the calls are there (in any order). but didn't submit a patch yet09:53
*** kopparam has joined #openstack-swift09:53
acolesalso the segment container job is put in a object_uu_pool ??09:54
acolescschwede: my first thought was to fix the test too, but i'm not sure the behavior is correct.09:54
*** mahatic has joined #openstack-swift09:54
cschwedeacoles: the other idea might be to ensure the segment container is created first, otherwise uploads might fail. wdyt?09:55
acolescschwede: the segment container create will attempt to HEAD the first container, but that HEAD could occur before the first container is PUT?? So ordering should be enforced - would you agree?09:55
cschwedeacoles: yes, agreed, that should be in the correct order09:56
acolesthe 'old' behavior was segment container second09:56
* acoles just looks to double check that09:56
*** ttrumm has joined #openstack-swift09:57
*** ttrumm_ has quit IRC09:57
*** kopparam has quit IRC09:58
*** ttrumm_ has joined #openstack-swift09:59
acolescschwede: ok, the old order was container PUT, then segment container PUT, on a single thread10:01
acoleshttps://github.com/openstack/python-swiftclient/blob/eedb0d4ab5f2fc6ac8b49a80cd2128edcbc5aceb/swiftclient/shell.py#L118210:01
cschwedeacoles: which makes sense to me10:01
*** ttrumm has quit IRC10:01
acolesif the first container PUT fails, then the segment container PUT is still attempted, but thats another issue!10:02
acolescschwede: shall I put up a patch?10:02
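A minimal sketch of the two assertion styles being weighed here, assuming Python's standard mock API (the container names are made up). assert_called_with() only inspects the most recent call, so it breaks when the thread pools reorder the container PUTs; assert_has_calls(..., any_order=True) only requires that both PUTs happened. Per the discussion above, the direction agreed on was to enforce the PUT ordering rather than merely loosen the test.

    from unittest import mock   # the 2014 code used the standalone `mock` package

    conn = mock.Mock()

    # pretend the upload code created both containers, in whichever order the pools ran them
    conn.put_container('cont')
    conn.put_container('cont_segments')

    # fragile: only checks the *last* call, so this fails for the ordering above
    # conn.put_container.assert_called_with('cont')

    # order-insensitive: passes as long as both PUTs were made at some point
    conn.put_container.assert_has_calls(
        [mock.call('cont'), mock.call('cont_segments')], any_order=True)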
*** kopparam has joined #openstack-swift10:03
cschwedeacoles: sure, add me as a reviewer and i'll have a look at it10:03
mahatichi cschwede, I see that you're a core reviewer. Can you take a look at this and suggest any changes? https://review.openstack.org/#/c/125275/10:03
acolescschwede: ok. btw how was your journey home?10:03
cschwedemahatic: sure, i'll have a look at your patch!10:04
cschwedeacoles: thanks, was quite relaxed - i was a little bit worried about traffic, but security took even more time than driving to Boston ;)10:04
*** jokke_ has joined #openstack-swift10:04
joeljwrightacoles cschwede: just spotted this discussion10:05
mahaticcschwede, alright, thank you! will i have to add you as a reviewer there or not necessary?10:05
joeljwrightif the segment container creation job is being put in object_uu_pool then it's in the wrong place10:05
joeljwrightI'm still not happy with the object_uu and object_dd thread pools10:06
acolesjoeljwright: ok i'll move it to the container pool. i'll add you as a reviewer ok?10:06
joeljwrightbut it was the simplest way to avoid deadlocks10:06
cschwedemahatic: no need for this patch, but feel free to add me whenever you need a review10:06
joeljwrightyes that's fine, I'll review too10:06
mahaticcschwede, okay, great. thank you.10:07
*** addnull has quit IRC10:12
*** ttrumm has joined #openstack-swift10:17
*** ttrumm_ has quit IRC10:20
*** addnull has joined #openstack-swift10:21
*** dmorita has quit IRC10:28
*** addnull has quit IRC10:28
*** mkollaro has quit IRC10:30
*** joeljwright has quit IRC10:32
cschwedemahatic: i would like to see a test too, i could add an example to the review if you like. or want to work on your own on this?10:33
*** ttrumm_ has joined #openstack-swift10:33
mahaticcschwede, sure, an example would be great. I'm not quite sure on how/where to add a test.10:34
cschwedemahatic: ok, review is online, i added a sample test and where to put it. let me know if you have questions on this10:35
*** ttrumm has quit IRC10:36
*** kopparam has quit IRC10:36
*** kopparam has joined #openstack-swift10:39
*** jd__ has quit IRC10:41
*** jd__ has joined #openstack-swift10:41
mahaticcschwede, great, thank you! looking at it.10:43
*** jokke__ has joined #openstack-swift10:47
*** jokke_ has quit IRC10:54
*** hhuang has quit IRC10:54
*** delattec has quit IRC10:54
*** mordred has quit IRC10:54
cschwedethat might become quite helpful for Swift: http://permalink.gmane.org/gmane.comp.db.sqlite.general/9054910:58
*** cdelatte has quit IRC10:59
*** dmsimard_away is now known as dmsimard10:59
*** bkopilov has joined #openstack-swift11:04
*** ttrumm_ has quit IRC11:06
*** ttrumm has joined #openstack-swift11:06
*** hhuang has joined #openstack-swift11:08
*** mordred has joined #openstack-swift11:08
*** foexle has quit IRC11:09
*** foexle has joined #openstack-swift11:11
*** cdelatte has joined #openstack-swift11:31
*** delattec has joined #openstack-swift11:31
*** jistr has quit IRC11:32
*** kopparam has quit IRC11:37
*** joeljwright has joined #openstack-swift11:45
*** mkollaro has joined #openstack-swift11:46
*** jistr has joined #openstack-swift11:52
*** cschwede has left #openstack-swift11:54
*** jistr is now known as jistr|english11:54
*** cschwede has joined #openstack-swift11:54
dmsimardWhat does python-swiftclient do exactly when you try to upload a file larger than 5GB ? It seems to do multiple uploads so I would guess it uploads chunks..11:56
*** cschwede has quit IRC11:57
*** cschwede has joined #openstack-swift11:57
*** delattec has quit IRC11:58
*** cdelatte has quit IRC11:58
*** cschwede has quit IRC11:59
*** cdelatte has joined #openstack-swift12:00
*** cschwede has joined #openstack-swift12:00
*** cschwede has quit IRC12:07
*** cschwede has joined #openstack-swift12:08
*** cschwede has quit IRC12:09
*** kopparam has joined #openstack-swift12:09
*** cschwede has joined #openstack-swift12:09
*** ttrumm has quit IRC12:13
*** ttrumm__ has joined #openstack-swift12:15
*** cschwede has quit IRC12:18
*** AnjuT has joined #openstack-swift12:18
*** cschwede has joined #openstack-swift12:18
*** cschwede has quit IRC12:19
*** ttrumm has joined #openstack-swift12:21
*** cschwede has joined #openstack-swift12:22
*** ttrumm__ has quit IRC12:24
*** cschwede has quit IRC12:26
*** cschwede has joined #openstack-swift12:27
*** cschwede has quit IRC12:27
*** cschwede has joined #openstack-swift12:28
*** ChanServ sets mode: +v cschwede12:29
*** davdunc has quit IRC12:35
*** fungi has left #openstack-swift12:39
*** bsdkurt1 has quit IRC12:39
*** NM has joined #openstack-swift12:40
*** kopparam has quit IRC12:45
*** kopparam has joined #openstack-swift12:45
*** AnjuT has quit IRC12:47
*** openstackgerrit has joined #openstack-swift12:49
*** kopparam has quit IRC12:50
cschwededmsimard: by default it does not split the object  - you need to use „--segment-size“ or „-S“ to specify the segment size12:50
dmsimardcschwede: Thanks.12:51
cschwededmsimard: but if you upload multiple objects, swiftclient uses multiple threads12:51
* cschwede is happy to have a working bouncer again12:52
*** geaaru has quit IRC12:53
*** geaaru has joined #openstack-swift13:00
*** jistr|english is now known as jistr13:07
*** miqui has joined #openstack-swift13:09
*** kopparam has joined #openstack-swift13:16
*** ppai has joined #openstack-swift13:20
*** kopparam has quit IRC13:22
*** nosnos has quit IRC13:24
dmsimardcschwede: How would I upload multiple files simultaneously to make use of those multiple threads ? (Feel silly asking that..)13:31
*** CaioBrentano has joined #openstack-swift13:35
dmsimard(I'm running into bottlenecks and trying to put the finger on it)13:36
*** mrsnivvel has quit IRC13:45
*** oomichi_ has quit IRC13:46
cschwededmsimard: for example, let’s assume you have files named obj_1, obj_2, …, obj_10. if you do a „swift upload test obj_*“ you will most likely see that these are uploaded not in the „correct“ order, because of different upload threads. so basically you just add multiple filenames for uploading13:51
dmsimardcschwede: Yeah, I kind of just started multiple "while true" uploads on different filenames to see what would happen13:52
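For reference, a couple of hedged command-line examples of what cschwede describes (container and file names are placeholders). -S/--segment-size splits a large object into segments, which is how objects over the 5 GB single-object limit get stored, and passing several files at once lets swiftclient use its parallel upload threads:

    # 1 GiB segments: stored in a <container>_segments container plus a manifest object
    swift upload -S 1073741824 mycontainer big_backup.tar

    # several objects in one command: uploaded concurrently, so completion order isn't guaranteed
    swift upload mycontainer obj_1 obj_2 obj_3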
*** tab____ has joined #openstack-swift13:58
*** foexle has quit IRC14:05
*** hhuang has quit IRC14:12
*** bkopilov has quit IRC14:15
NMGood morning guys.14:15
NMDoes anyone know if the memcached used by the object-expirer must be the same as the one used by the proxy servers? Or if there is any recommendation for that?14:16
*** kopparam has joined #openstack-swift14:18
*** nexusz99 has joined #openstack-swift14:23
*** kopparam has quit IRC14:23
*** bkopilov has joined #openstack-swift14:29
openstackgerritAlistair Coles proposed a change to openstack/python-swiftclient: Fix cross account upload using --os-storage-url  https://review.openstack.org/12575914:29
*** kyles_ne has joined #openstack-swift14:57
*** bkopilov has quit IRC14:59
*** elambert has joined #openstack-swift14:59
*** kyles_ne has quit IRC15:03
*** lpabon has joined #openstack-swift15:05
mahatichi, when i try to do any swift operation i get this at the end of the stack trace: "pkg_resources.DistributionNotFound: python-swiftclient==2.3.2.dev1.gc93df57"15:06
mahaticdoes it mean i will have to get a latest of python-swiftclient?15:06
*** nitika_ has joined #openstack-swift15:10
*** bkopilov has joined #openstack-swift15:11
cschwedemahatic: this worked for me in the past: „easy_install --upgrade pip“15:11
*** mrsnivvel has joined #openstack-swift15:13
mahaticcschwede, "easy_install --upgrade pip" gives me "pip 1.5.6 is already the active version in easy-install.pth"15:15
*** ttrumm has quit IRC15:19
*** kopparam has joined #openstack-swift15:19
*** ttrumm has joined #openstack-swift15:20
*** kenhui has joined #openstack-swift15:24
*** kopparam has quit IRC15:24
*** mrsnivvel has quit IRC15:24
*** bkopilov has quit IRC15:29
openstackgerritpaul luse proposed a change to openstack/swift: Merge master to feature/ec  https://review.openstack.org/12659515:29
notmynamegood morning15:31
*** hhuang has joined #openstack-swift15:32
pelusemorning15:32
*** mrsnivvel has joined #openstack-swift15:32
*** k4n0 has quit IRC15:33
pelusenotmyname, for some reason the topics for my gerrit patches (at least for merging master to ec) seem to randomly select bugs despite my commit message indicating the swift-ec blueprint... any idea what might be happening?15:37
mahaticcschwede, "python setup.py develop" this installed required dependencies15:37
mahaticcschwede, it's working alright now15:37
*** ttrumm has quit IRC15:37
notmynamepeluse: on https://review.openstack.org/#/c/126595/ ?15:43
*** lcurtis has joined #openstack-swift15:45
notmynamepeluse: or are you referring to the emails that say a bug fix was proposed to feature/ec?15:50
notmynamecschwede: ya, I think the new sqlite would be cool to check out15:54
*** hhuang has quit IRC15:58
notmynametorgomatic: openstack requirements are unfrozen, so it can be updated now (like new eventlet)15:58
*** kyles_ne has joined #openstack-swift15:58
*** kyles_ne has quit IRC16:02
*** mkollaro has quit IRC16:03
*** gyee has joined #openstack-swift16:08
pelusenotmyname, yes to the link above; the 'topic' there isn't something I added, and it then results in updating the bug and thus sending out emails saying the merge to feature/ec is a proposed fix for that bug16:12
notmynamepeluse: I do not know by what magic the "topic" string is chosen. I thought it came from your local branch name16:13
torgomaticI think git-review fishes out the topic string by examining your commit comment16:13
notmynamepeluse: the emails are fine (and expected), though. since you're merging one tree into another (master->ec), the bug fixes that were on master and not yet on feature/ec are now also proposed to feature/ec16:13
torgomaticif you manually push, it's $ git push gerrit $mybranch:refs/publish/master/$topic16:13
peluseyeah, but I specifically added a commit message to keep that from happening16:13
torgomaticor refs/publish/feature/ec/$topic16:14
pelusewell, I meant to specify the topic, I mean.  I just changed it manually on gerrit16:14
openstackgerritJohn Dickinson proposed a change to openstack/swift: Refer multi node install to docs.openstack.org  https://review.openstack.org/9378816:15
peluseman, I can't find where it's coming from... just annoying is all16:17
pelusenotmyname, yeah, maybe the one that ends up as the "topic" is like the last one in the auto-merge process or something....16:18
peluseit just started with these last 2 merges though, before then the topic would be whatever I selected in the commit message....16:19
*** zaitcev has joined #openstack-swift16:19
*** ChanServ sets mode: +v zaitcev16:19
notmynamethere is no coffee in my house. I must go fix that now...16:21
pelusetorgomatic, were you in on the 'approval pointer' discussion and free for 5-10 min to talk a bit about it?16:28
torgomaticpeluse: yeah, I've got some time16:28
*** jistr has quit IRC16:29
pelusetorgomatic, cool wanna buzz me at 480 554 3688? would be faster than typing :)16:29
peluseor I can call too, either way16:29
torgomaticpeluse: okay, give me just a minute here to go get situated16:29
pelusenp16:29
notmynametorgomatic: peluse: if you want to 3-way dial me in too, I'm available. or not. whatever :-)16:30
peluseahh, thought you were out for coffee.  Can do, I'll just send you guys a bridge to make it easy16:30
torgomatick16:31
elambertpeluse: mind if I lurk on that call?16:31
peluse916-356-2663,  Bridge: 1, Passcode: 465025616:31
peluseall are welcome16:31
pelusewell, I guess the default is 5 people, so if someone is unable to join let me know and I'll figure out how to change it16:33
pelusenotmyname, coming?16:35
*** marcusvrn_ has joined #openstack-swift16:38
*** zigo has quit IRC16:43
*** zigo has joined #openstack-swift16:46
notmynamepeluse: ah, sorry. just walked to get coffee16:46
*** mahatic has quit IRC16:51
*** NM has quit IRC16:55
*** lcurtis has quit IRC16:58
pelusecall over -- thanks guys!!17:05
notmynamepeluse: sorry, I had stepped away before you said something about a bridge. thanks for doing that17:06
*** NM has joined #openstack-swift17:06
notmynamereminder to those interested, the openstack TC nominations are open. if you want to run, you need to send an email to the mailing list. http://lists.openstack.org/pipermail/openstack-dev/2014-October/047749.html17:11
zaitcevtoo much work...17:12
*** mahatic has joined #openstack-swift17:13
*** kyles_ne has joined #openstack-swift17:16
*** alexpec has joined #openstack-swift17:20
*** alexpec has left #openstack-swift17:20
*** kopparam has joined #openstack-swift17:21
NMDoes anyone know if the memcached used by the object-expirer must be the same as the one used by the proxy servers? Or if there is any recommendation for that?17:21
*** kopparam has quit IRC17:25
notmynameNM: the reason it has a separate config file is because it instantiates an InternalClient (which is basically a simplified in-memory proxy server). therefore, if you share the memcache pool, then the internal client will be able to take advantage of the stuff the proxy has cached (account and container info, basically). but there is no hard requirement on that17:28
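A hedged sketch of the relevant pieces of an object-expirer.conf, modeled on the sample config shipped with Swift (the memcache addresses are placeholders). Pointing memcache_servers at the same pool the proxies use is what lets the expirer's InternalClient benefit from the proxies' cached account and container info:

    [object-expirer]
    interval = 300

    [pipeline:main]
    pipeline = catch_errors cache proxy-server

    [filter:cache]
    use = egg:swift#memcache
    # same memcache pool as the proxy servers (placeholder addresses)
    memcache_servers = 10.0.0.1:11211,10.0.0.2:11211

    [filter:catch_errors]
    use = egg:swift#catch_errors

    [app:proxy-server]
    use = egg:swift#proxy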
*** kopparam has joined #openstack-swift17:28
NMnotmyname: Do you think this sentence in the docs should be reviewed? "Only the proxy server uses memcache."17:31
notmynameNM: depends if we care about it being correct or not ;-)17:32
*** kopparam has quit IRC17:33
notmynameNM: heh, as of about an hour ago, that entire file that had that sentence in it gone removed ;-)17:38
notmynames/gone/got/17:38
notmynameNM: https://review.openstack.org/#/c/93788/17:39
*** kyles_ne has quit IRC17:39
*** kyles_ne has joined #openstack-swift17:40
NMnotmyname: Thanks! https://review.openstack.org/#/q/owner:tom%2540openstack.org+status:open,n,z seems to be pretty straightforward: "was so horribly outdated17:42
NM" :D17:42
NMs/big_wrong_string/Tom Fifield/17:42
notmynameeh17:42
notmynameheh17:42
notmynameyeah, fifieldt has got it right there :-)17:43
notmynamehurricanerix: ping17:43
hurricanerixnotmyname: hey17:43
notmynamehurricanerix: re your metadata patch. in functional/test_account.py what is test_bad_metadata3() testing?17:44
*** kyles_ne has quit IRC17:44
*** geaaru has quit IRC17:44
*** kyles_ne has joined #openstack-swift17:44
hurricanerixnotmyname: basically, once the fix was in place, the func tests failed because part of the test would fill the account metadata to its max constraints.17:45
openstackgerritOpenStack Proposal Bot proposed a change to openstack/python-swiftclient: Updated from global requirements  https://review.openstack.org/8925017:45
hurricanerixso the next test would try to add more, and fail when it asserted it would work.17:45
openstackgerritOpenStack Proposal Bot proposed a change to openstack/swift: Updated from global requirements  https://review.openstack.org/8873617:45
notmynamehurricanerix: yeah, I get that. just looking at that specific function17:45
zaitcevI mentioned that in comments but my -1 got lost in reuploads17:46
zaitcevJust run  SWIFT_TEST_IN_PROCESS=1 ./.functests, it's enough to trigger17:46
hurricanerixoh, let me look, i didn't change the tests, i just broke them up so that the tearDown would clean up and allow the tests to pass17:46
notmynamehurricanerix: the first check for 204 is to ensure that exactly the metadata max size works?17:47
zaitcevThanks to Portante you don't need to set up the whole functests thing anymore.17:47
*** echevemaster has joined #openstack-swift17:47
notmynamezaitcev: what are you triggering?17:47
zaitcevnotmyname: metadata overflow17:47
notmynamezaitcev: yeah, hurricanerix got that taken care of in the patch that landed. split the tests up so it cleans up properly17:48
hurricanerixnotmyname: it looks like it is adding metadata to MAX_META_OVERALL_SIZE (should be 4096), and verifies that it is successful, then tries to add one more key, which would make it exceed that value.17:48
notmynameI'm investigating backporting it to icehouse.17:48
hurricanerix*up to17:48
zaitcevoh17:48
notmynamehurricanerix: ah, thanks. just confirming I was reading it right. I'm getting 2 errors on my backport. one there and one in _bad_metadata2() where it's checking for just the max number of keys17:50
hurricanerixnotmyname: ahh.  you could check if your account you are running it on has tempurl keys set17:50
hurricanerixbecause the test assumes they are not set.17:51
hurricanerixneed to look at fixing it so those get cleaned up, or removed before the tests run to prevent that from causing problems.17:51
notmynamehurricanerix: looks like test_bad_metadata2 removes them for that test17:51
hurricanerixnotmyname: yeah, because they were causing that test to fail.  it's not a very good solution though because it assumes that test_bad_metadata2 runs before it.17:53
*** kenhui has quit IRC17:53
hurricanerixnotmyname: i think unittest runs them in order, but one test probably should not be dependent on another one i would think.17:54
notmynamehurricanerix: yeah. I'm getting fun* results17:54
notmyname*not fun17:55
hurricanerixnotmyname: also, wasn't sure if you were just running test_bad_metadata317:55
notmynamerunning the whole TestAccount class and both 2 and 3 fail. reset and run them individually and whichever I run first passes and the other fails17:55
hurricanerixnotmyname: if you reset and run one of them, then head the account, do you have any metadata still set?17:56
hurricanerixnotmyname: maybe it is not all getting cleaned up like i thought it was.17:56
swift_fanHi -- does anyone have experience using the "balance = source" configuration in HAProxy ?17:56
swift_fanI am wondering if,17:57
swift_fanI were to upload lots of big files from one machine, to my Swift cluster17:57
notmynamehurricanerix: yup. that was my next check. and it's not getting cleaned up17:57
swift_fanEven if HAProxy is inclined towards sending all of the files to one particular Swift proxy server (when the HAProxy configuration is set to "source")17:58
swift_fanwill it start to spread out some of the load to maybe some of the other Swift proxies ?17:58
torgomaticswift_fan: you're asking if, should your load balancer fail to balance the load, will Swift then balance the load itself?18:00
swift_fan(when it sees that a lot of big files are being uploaded by just one machine outside the cluster)18:00
torgomaticit will not; Swift proxies talk to storage nodes, not to other proxies.18:00
swift_fantorgomatic : Possibly, but also whether HAProxy does so as well.18:00
notmynameswift_fan: yeah, haproxy will spread it out based on the source IP when it's set to balance=source. see https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts and https://swiftstack.com/blog/2013/06/10/configuring-haproxy-as-a-swiftStack-load-balancer/18:00
torgomaticswift_fan: I can't speak to haproxy's behavior, but Swift does not include any method for moving load between proxy servers.18:01
hurricanerixnotmyname: strange, was the cleanup stuff added to the teardown after icehouse?18:01
swift_fannotmyname : I know that HAProxy will choose a destination server and try to stick with it, but what happens if there is a particular server that's sitting outside the Swift cluster, that wants to back up a LOT of data at once?18:01
notmynamehurricanerix: not sure18:01
mahaticnotmyname, hello, did you happen to finish your todo list yesterday? :)18:02
notmynameswift_fan: then you'll probably not want to use balance=source18:02
swift_fannotmyname : Will HAProxy (which is set to "balance = source") still try to send all of that data to just one Swift node ?18:02
swift_fannotmyname : In such an "extreme" case ?18:02
notmynamemahatic: almost. I'm working on a backport now and then I should be able to look at OPW stuff18:02
mahaticnotmyname, okay18:03
swift_fannotmyname : Usually, I would prefer HAProxy "balance = source", but I'm concerned about what HAProxy can do in the lots-of-large-objects coming from just one server, scenario ..........18:03
notmynameswift_fan: the haproxy docs say that if you set balance=source then each client will go to the same backend (swift proxy in this case). so I'm not sure what the question is18:04
swift_fantorgomatic : You said that Swift proxies don't talk to other Swift proxies, but is that always the case ? For instance, from the tcpdump utility, I think I was able to see the servers talking to each other for the account,container,object replication services ...........18:04
hurricanerixnotmyname: i don't see any tearDown in the icehouse code, and i am pretty sure that is where it is getting cleaned up.  https://github.com/openstack/swift/blob/stable/icehouse/test/functional/test_account.py18:04
notmynameswift_fan: use haproxy to balance client requests going to swift proxy servers. the swift proxy will choose the appropriate storage nodes in the cluster, but that's completely orthogonal to client load balancing18:05
swift_fannotmyname : I have 3 Swift nodes18:05
swift_fannotmyname : Each has the proxy,account,container,object services.18:05
notmynamehurricanerix: yeah, looks like I should add the tearDown to the icehouse backport18:06
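A hedged illustration of the setUp/tearDown pattern being discussed (not the actual patch; post_account is a hypothetical stand-in for whatever helper the functional tests use). Each test records the metadata keys it sets and clears them afterwards, so no test depends on another having run first:

    import unittest

    def post_account(headers):
        # hypothetical stand-in for the functional-test helper that POSTs to the account
        print('POST account with', headers)

    class TestAccountMetadata(unittest.TestCase):
        def setUp(self):
            self.created_meta = []

        def set_meta(self, key, value):
            post_account(headers={key: value})
            self.created_meta.append(key)

        def tearDown(self):
            # POSTing an empty value removes account metadata, leaving a clean slate
            post_account(headers={key: '' for key in self.created_meta})

        def test_tempurl_key(self):
            self.set_meta('X-Account-Meta-Temp-URL-Key', 'secret')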
torgomaticswift_fan: you are conflating the idea of "server" (an OS image with a bunch of processes in it) with "Swift proxy server" (/usr/bin/swift-proxy-server)18:07
torgomaticSwift proxy servers do not talk to one another.18:08
* portante they are a funny lot18:08
*** Trixboxer has joined #openstack-swift18:09
swift_fantorgomatic notmyname : So, can an object be downloaded from outside the cluster, even if the Swift cluster hasn't made 3 replicas of an uploaded object yet ?18:09
torgomaticswift_fan: yes18:09
swift_fantorgomatic notmyname : Or, however many replicas it aims to create.18:09
notmynamehurricanerix: yeah, thanks. looks like adding the setup and teardown makes it work. running full tests now and will propose if it works18:09
swift_fantorgomatic notmyname : How ?18:10
swift_fantorgomatic notmyname : What if the Swift proxy server that receives the download request is not able to find the requested object in one of the locations it needs to be replicated in ?18:10
hurricanerixnotmyname: nice, glad i could help.18:11
torgomaticswift_fan: it looks in another location18:11
notmynamean important-to-know summary of current happenings in the openstack TC: http://www.openstack.org/blog/2014/10/openstack-technical-committee-update-2/18:12
swift_fantorgomatic notmyname : Ah, I see. So what you're saying is that when a new object is first uploaded, and immediately downloaded by a client, the Swift proxy will first look at a particular node for the requested object,18:14
swift_fantorgomatic notmyname : Then will try another node,18:14
*** aix has quit IRC18:14
notmynamehurricanerix: torgomatic: acoles: https://review.openstack.org/#/c/126645/  <-- metadata backport to icehouse18:14
swift_fantorgomatic notmyname : Until it has either found the object or exhausted all the possible nodes,18:14
swift_fantorgomatic notmyname : Right ?18:14
notmynameswift_fan: ya, basically. but instead of just one location for the first lookup, there are <replica count> possibilities. if it isn't found in any of those locations, then it looks in the rest of the cluster (up to a limit of nodes--you don't want to look on 10000 drives for every 404)18:16
swift_fannotmyname : Okay, But if it's not a newly updated object, and just an update to one -- if a client requests to download that updated object, the Swift proxy that receives the download request will just return the first instance of the requested object that it can find ... ?18:17
swift_fannotmyname torgomatic : Even if the particular object/file replica that the Swift proxy retrieves hasn't been updated to the new version, yet ?18:18
notmynameswift_fan: correct. and that might happen when you have hardware failures in your cluster (eg servers or drives being down). swift doesn't require all the servers to be up in order to respond to a request. this is called eventual consistency18:20
swift_fannotmyname : Okay, thanks. Where in the code can I find how the Swift proxy server communicates with the Swift storage services ? (so that I can also verify what torgomatic said about the Swift proxies never being able to communicate with each other.)18:23
swift_fanas in, with other Swift proxies.18:23
swift_fan(No communications via Swift proxy <-----> Swift proxy) ?18:23
notmynameswift_fan: to make an analogy to biology, I feel like you've jumped from "what color are your eyes" straight into "show me the specific gene that controls my eye color"18:24
notmynameswift_fan: point is, the answer to "where in the code" is https://github.com/openstack/swift/tree/master/swift/proxy but that's not a small set of code18:25
swifterdarrellswift_fan: if you're going to read the code anyway, why didn't you just start there? ;)18:26
*** andreia_ has joined #openstack-swift18:28
*** kopparam has joined #openstack-swift18:29
notmynamethis looks interesting, for those of you who don't mind conferences http://events.linuxfoundation.org/events/vault/program/cfp18:29
notmynamepeluse: ^^18:29
lpabonnotmyname: that's the conference i was supposed to send you :-)18:31
notmynamelpabon: :-)18:31
swift_fannotmyname swifterdarrell : Sorry, what I meant to ask was where I could find the logic in the code that shows how the cluster retrieves+delivers a download request ?18:31
swift_fannotmyname swifterdarrell : Not necessarily to modify it, but hopefully to gather some more insights about this. Thanks18:32
notmynameswift_fan: same place. https://swiftstack.com/blog/2013/02/12/swift-for-new-contributors/ has some general pointers as to the high-level data flow in the code. that's where you should start18:32
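As a starting point for that reading, a hedged sketch of the ring lookup the proxy code builds on (simplified; the real request handling lives under swift/proxy/controllers/, and the path and names here are placeholders):

    from swift.common.ring import Ring

    object_ring = Ring('/etc/swift', ring_name='object')

    # primary locations: <replica count> devices, computed from the object's path
    part, primaries = object_ring.get_nodes('AUTH_test', 'photos', 'foo.jpg')

    # handoff locations, consulted only when the primaries can't serve the object
    handoffs = object_ring.get_more_nodes(part)

    for node in primaries:
        print(node['ip'], node['port'], node['device'])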
*** kopparam has quit IRC18:33
swift_fannotmyname torgomatic swifterdarrell : If the Swift proxies never communicate with the other Swift proxies -- then how does each Swift proxy receive updates on the rings (so that they know where certain objects and containers are stored) ?18:37
swift_fannewest updates* on the rings.18:37
notmynameswift_fan: that's the kind of thing talked about in http://docs.openstack.org/developer/swift/admin_guide.html (tl;dr you do it just like you do config management on your servers)18:38
swift_fannotmyname : config management ?18:40
*** mrsnivvel has quit IRC18:41
swift_fannotmyname : Well I mean besides the part where the Swift administrator copies the rings to each Swift proxy-server upon initial installation/configuration of the Swift cluster,18:42
swift_fannotmyname : how do the Swift proxy servers then continue to keep the most updated set of rings, when there isn't any communication between the Swift proxies ?18:42
swifterdarrellswift_fan: chef, puppet, home-grown system, etc...18:44
swift_fanswifterdarrell : You mean that Swift itself doesn't have a way for managing the rings ?18:45
notmynamemanaging != deploying18:46
swift_fanswifterdarrell : How do the Swift proxies know where newly uploaded objects are located (within the cluster), then ?18:46
swifterdarrellswift_fan: well, it has a way for managing the rings (swift-ring-builder) but not for distributing them18:46
swifterdarrellswift_fan: by using the ring18:46
swift_fanreferring to a cluster that's been up and running for a while, not one that's just been deployed.18:46
swifterdarrellswift_fan: to jump a question or two ahead: say you have 3 replicas; you set a "min part hours" >= the longest swift-object-replicator cycle time in your cluster; the swift-ring-builder won't move more than one of a partition's replica per "min part hours"18:48
*** andreia_ has quit IRC18:48
swifterdarrellswift_fan: and that maintains availability during the course of data shuffling (via the swift-object-replicator) as capacity gets added and partitions moved around18:49
*** shri has joined #openstack-swift18:49
pelusenotmyname, cool - yeah I think I saw something about that one.  Didn't notice the 'suggested' topics including Swift and SDS though.... thanks18:49
swifterdarrellswift_fan: if 1 of 3 replicas is in the wrong place, that's no big deal for GET requests; they'll get served from the other 2; new PUTs will go to the new 3 correct locations and the replicator will sort out that that orphaned 3rd location can be reaped18:50
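A hedged example of the knob swifterdarrell mentions (the numbers are illustrative, not recommendations); min_part_hours is the third argument when a builder is created and can be raised later if replication cycles turn out to be longer:

    # part_power=18, replicas=3, min_part_hours=24
    swift-ring-builder object.builder create 18 3 24

    swift-ring-builder object.builder set_min_part_hours 48
    swift-ring-builder object.builder rebalance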
*** brnelson has joined #openstack-swift18:50
zackmdavisswift_fan, re "how do proxies know where newly uploaded objects are located", if it wasn't already clear, you don't need a whole new ring to place and retrieve new objects18:51
swift_fanzackmdavis swifterdarrell : I know that the cluster doesn't need a whole new ring, but it needs to update the ring, though, right?18:52
swifterdarrellswift_fan: this might help https://swiftstack.com/blog/2012/04/09/swift-capacity-management/18:53
swifterdarrellswift_fan: I'm not 100% sure what you're asking18:53
swifterdarrellswift_fan: swift-proxy-server notices new rings on disk automatically18:54
*** andreia_ has joined #openstack-swift18:55
swift_fanzackmdavis swifterdarrell : So is it on the Swift proxy server where the rings are stored ?18:57
notmynameswift_fan: the ring is a file on disk18:58
notmynameand yes, it's on the proxy server machines18:58
swift_fannotmyname : a disk on the Swift proxy servers?18:58
openstackgerritA change was merged to openstack/python-swiftclient: Add tests for account listing using --lh switch  https://review.openstack.org/12540218:58
notmynameyes. to make it simple, yes it's a file on disk on all of the boxes in your swift cluster18:58
zackmdavisswift_fan, no, the ring specifies how to compute where objects should live; it doesn't explicitly store a record of, "foo.jpg lives on partitions bar and quux", but rather specifies a mathematical relationship so that given an object like "foo.jpg", you can derive that it's supposed to be on the "bar" and "quux" partitions, so the ring itself only needs to be updated when the cluster changes (new nodes, drives decomissioned, &c.) not when18:59
zackmdavisthe data stored in the cluster changes18:59
*** occup4nt is now known as occupant18:59
zackmdavisI wrote a little bit about this earlier this year http://zackmdavis.net/blog/2014/01/consistent-hashing/19:00
swifterdarrellswift_fan: this is pretty sweet, you should totally read it!  http://docs.openstack.org/developer/swift/overview_ring.html19:01
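A toy sketch of the idea zackmdavis describes, deliberately simplified (the real ring also mixes in a cluster-wide hash suffix and uses a prebuilt partition-to-device table): an object's placement is computed from its path, so nothing per-object is stored, and only hardware changes require a new ring:

    import hashlib

    PART_POWER = 8          # 2**8 = 256 partitions in this toy ring

    def partition(path):
        digest = hashlib.md5(path.encode('utf-8')).digest()
        # take the top PART_POWER bits of the hash as the partition number
        return int.from_bytes(digest[:4], 'big') >> (32 - PART_POWER)

    # the same path always maps to the same partition; the ring only maps
    # partitions to devices, so data churn never requires ring updates
    print(partition('/AUTH_test/photos/foo.jpg'))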
swift_fannotmyname zackmdavis swifterdarrell : I think zackmdavis's response hit the spot! :)19:01
swift_fanSorry if it wasn't clear what exactly I was trying to ask .....19:02
swift_fanBut anyways, yeah, that's exactly what I needed.19:02
*** nexusz99 has quit IRC19:03
swift_fannotmyname zackmdavis swifterdarrell : Thanks for all the helpful resources along the way :)19:03
*** acoles is now known as acoles_away19:09
*** tdasilva has joined #openstack-swift19:09
openstackgerritOpenStack Proposal Bot proposed a change to openstack/python-swiftclient: Updated from global requirements  https://review.openstack.org/8925019:09
swift_fanThis is somewhat of a loadbalancing question, but19:10
swift_fanin the /etc/haproxy/haproxy.cfg file here : http://paste.ubuntu.com/8516575/19:10
swift_fanlisten swift_proxy_cluster19:11
swift_fan  bind <Virtual IP>:808019:11
swift_fan  balance  source19:11
swift_fan  option  tcplog19:11
swift_fan  option  tcpka19:11
swift_fan  server controller1 10.0.0.1:8080 check inter 2000 rise 2 fall 519:11
swift_fan  server controller2 10.0.0.2:8080 check inter 2000 rise 2 fall 519:11
swift_fanwhat is19:11
*** Nadeem has joined #openstack-swift19:11
swift_fan"option tcplog"19:11
swift_fan"option tcpka"19:12
swift_fanand19:12
swift_faneach "check inter 2000 rise 2 fall 5"19:12
swift_fan?19:12
notmynameswift_fan: here's the first one: http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-option%20tcpka19:13
notmynameswift_fan: I'm pretty sure you could find the others there too19:13
ppaiswift_fan, http://www.haproxy.org/download/1.3/doc/configuration.txt19:13
ppaiswift_fan, you'll find all haproxy config there19:14
zackmdavisswift_fan, I usually find search engines, such as the ever-popular Google, useful for answering these questions. In this case, I found the same documentation that notmyname just linked by searching for _tcplog openstack_, which lead me to this page http://docs.openstack.org/high-availability-guide/content/ha-aa-haproxy.html which itself links to the HAProxy docs19:15
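For reference, a hedged annotation of the options from the pasted config, using standard HAProxy semantics (addresses are from the paste above). For the single heavy client raised earlier, balance roundrobin or balance leastconn would spread that client's connections across proxies, at the cost of losing source affinity:

    listen swift_proxy_cluster
      bind <Virtual IP>:8080
      # hash the client's source IP, so a given client sticks to one swift proxy
      balance source
      # log in HAProxy's TCP log format
      option tcplog
      # enable TCP keepalives on both the client and server sides
      option tcpka
      # check inter 2000 rise 2 fall 5: health-check every 2000 ms; 2 consecutive
      # successes mark a server up, 5 consecutive failures mark it down
      server controller1 10.0.0.1:8080 check inter 2000 rise 2 fall 5
      server controller2 10.0.0.2:8080 check inter 2000 rise 2 fall 5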
openstackgerritOpenStack Proposal Bot proposed a change to openstack/python-swiftclient: Updated from global requirements  https://review.openstack.org/8925019:17
notmynameI propose we push back tomorrow's Swift meeting by 30 minutes. I've got a conflict and won't be available until 1930UTC19:19
notmynamemattoliverau: acoles_away: cschwede: ^^ (since you are all asleep now)19:21
cschwedenotmyname: ok, fine with me. not completely asleep yet ;)19:22
notmynamecschwede: and by asleep I mean you probably shouldn't be lurking in IRC right now ;-)19:22
notmynameexcept for mattoliverau. he better be asleep now ;-)19:22
cschwedenotmyname: yeah, i probably should do other things at this time. but then, swift keeps me up atm ;)19:23
notmynamecschwede: yeah, I can sympathize.19:24
openstackgerritA change was merged to openstack/swift: Refer multi node install to docs.openstack.org  https://review.openstack.org/9378819:27
*** HenryG has quit IRC19:28
swift_fancschwede notmyname : What do you mean swift keeps you up ?19:28
swift_fanBecause it's fun, or because there are lots of tasks to do?19:28
notmynameyes19:28
swift_fan(or both)19:28
swift_fanWhat made you choose to work on Swift ?19:29
swift_fanas opposed to say,19:29
swift_fananother cluster file system ?19:29
swift_fansuch as Ceph19:29
*** kopparam has joined #openstack-swift19:30
notmynameI work on swift because I'm paid to. but I enjoy working on swift because I get to work with talented devs, solve hard problems, and build a storage engine that IMO has a real possibility of storing a large part of the world's data19:33
*** kyles_ne has quit IRC19:34
*** kopparam has quit IRC19:34
notmyname(that's the best 2-line summary of the last 5 years of my life I could come up with off the top of my head)19:34
swift_fannotmyname : You are employed by Rackspace, then?19:34
*** kyles_ne has joined #openstack-swift19:34
*** kyles_ne has quit IRC19:34
*** kyles_ne has joined #openstack-swift19:35
notmynameno. I used to work for rackspace. for the last 2+ years I've worked at swiftstack19:35
zackmdaviswoo SwiftStack19:35
swift_fannotmyname : Just curious, but what does the term "data management" mean to you ?19:35
notmynameswift_fan: not much. ie taken alone it's sorta meaningless19:35
swift_fannotmyname : How so?19:37
swift_fan(as in, why meaningless)19:37
notmyname"data management" doesn't tell you anything about any actual use case or problem being solved. therefore the phrase is general enough to apply to everything and nothing.19:39
swift_fannotmyname : How often in your everyday-work do you have to consider the actual use case / application that your storage systems are supporting ?19:41
notmynameI can't imagine not considering that19:42
swift_fannotmyname : For instance, when setting up and maintaining Swift clusters, do you consider the end application, often?19:42
notmynameotherwise what in the world are we even doing here?19:42
swift_fannotmyname : Well, anything you do can work for a very general case, right ?19:42
swift_fannotmyname : For instance, setting up and maintaining a Swift cluster.19:42
notmynamethere are two sets of users: the deployers running a cluster and the applications/users talking to it. both are always considered19:42
swift_fannotmyname : Pretty much any application that I can think of, that requires storing data, can utilize it.19:43
swift_fannotmyname : Why, aside from object storage, is the cloud *block* storage as well ?19:46
openstackgerritA change was merged to openstack/swift: Merge master to feature/ec  https://review.openstack.org/12659519:46
swift_fannotmyname : I've heard that it's faster, but is it really worth it for application designers to try to manage things on the *block* level ?19:46
notmynameswift_fan: are you asking out of curiosity or are you looking for something specific to an app you're writing?19:47
swift_fannotmyname : I've been wanting to ask this for a while.19:49
swift_fannotmyname : Mostly out of curiosity, but I imagine it could be very useful in the future.19:49
openstackgerritChristian Schwede proposed a change to openstack/swift: Add a reference to the OpenStack security guide  https://review.openstack.org/12670919:54
notmynameswift_fan: I'd suggest the following: https://dl.dropboxusercontent.com/u/21194/distributed-object-store-principles-of-operation.pdf and https://www.youtube.com/watch?v=Og0BHTMH66M and basically anything at https://www.google.com/#q=object+storage+vs+block+storage19:56
swift_fannotmyname : Ok, thanks.19:56
notmynameand I apologize for giving you a link to a google search, but there's a ton of info out there already that you'll be able to peruse at your own speed19:56
swift_fanok, no worries!!19:57
*** HenryG has joined #openstack-swift20:06
swift_fannotmyname : Does SwiftStack support analytics applications on top of SwiftStack's stored data ?20:14
swift_fannotmyname : Or is cloud *block* storage more suited for that ?20:14
notmynameswift_fan: what do you mean by "analytics applications"?20:15
notmynameso "could you store analytics data in swift?" yes, definitely. "is swift good for every use case of storing analytics data?" probably not20:16
swift_fannotmyname : I was trying to ask whether SwiftStack does analytics (e.g., data mining), or supports analytics, on the data that it stores ?20:17
zaitcevsearch for ZeroVM, that's probably the best bet today20:18
notmynameswift_fan: ah. a different question20:18
notmynameswift_fan: yes, swiftstack's product does include both time-series metrics about what's going on in the cluster and utilization info on a per-account basis.20:19
notmynameswift_fan: but note that swiftstack != swift20:19
swift_fannotmyname : Ok. But do SwiftStack customers ever store the data that they use for data analytics, on SwiftStack ?20:21
swift_fannotmyname : Or is SwiftStack mainly used for backup+archive purposes ?20:21
swift_fannotmyname : Or do I have my facts wrong? (e.g., backup+archive doesn't necessarily mean it's not used for data analytics?)20:21
notmynameswift_fan: if you're interested in swiftstack, this isn't really the place for it. you can go to swiftstack.com and get a free trial. but this channel is for swift dev work, and we try to leave product-pitches out :-)20:23
zaitcevDo Red Hat's customers ever store data on Red Hat. No, they store it in Swift. SwiftStack is a company and they offer a product that manages OpenStack Swift, but do not store the data themselves. That is the mental model I have anyway.20:24
notmynamezaitcev: correct20:24
swift_fannotmyname zaitcev : So, SwiftStack doesn't have data centers that store customer data ??20:25
swifterdarrellswift_fan: didn't notmyname just say this wasn't the place to talk about that?20:26
*** cdelatte has quit IRC20:26
swifterdarrellswift_fan: specifically, "if you're interested in swiftstack, this isn't really the place for it."20:26
swift_fanswifterdarrell : This was the last question I had about that, since it was brought up.20:27
swifterdarrellswift_fan: cloud block storage will have different performance characteristics vs. cloud object storage, different consistency/availability guarantees, as well as different scaling characteristics20:29
swift_fanswifterdarrell : I'm still a little uneasy about this concept of "eventual consistency".20:30
*** kopparam has joined #openstack-swift20:30
swift_fanswifterdarrell : Just the POSIX concept of "strong" consistency seems to make a lot more sense.20:31
swift_fanswifterdarrell : It seems kind of like, once you have eventual consistency, in a way you're losing the fidelity of the data.20:31
swift_fanswifterdarrell : Sort of like, (in a sense), losing the "true nature" of it.20:31
swifterdarrellswift_fan: Sometimes.  But sometimes it makes a lot of sense to be able to keep using your system when a switch dies or you otherwise get a network partition20:32
swift_fanswifterdarrell : Do you have any scenarios off the top of your head where it makes more sense to "keep using your system"?20:35
swift_fanin the case of a dead switch, or network partition, etc20:35
*** kopparam has quit IRC20:35
swift_fanswifterdarrell : It still seems dangerous to try to do anything, when you're not observing the most updated copy of your data.20:36
*** byeager_away has quit IRC20:38
*** aerwin has joined #openstack-swift20:39
*** thurloat has quit IRC20:40
*** thurloat has joined #openstack-swift20:41
*** byeager_away has joined #openstack-swift20:41
glangethere are other data stores that aren't eventually consistent if you need that20:41
glangeyou can't imagine a single use case where it would be better to retrieve an old version of an object than no version of the object ?20:43
swift_fanglange -- like what?20:43
swift_fan(what data store)20:43
swift_fanglange swifterdarrell -- Ok, I see what you are saying. Basically, it's very situational20:43
swift_fan(is what you're saying)20:43
swift_fanglange swifterdarrell -- I can think of some cases, but they seem somewhat contrived .....20:44
glangethere http://ceph.com/ <-- that is one example of a data store that has different properties from swift20:44
acorwinswift_fan: for one simple example, there are many cases where eventual consistency is irrelevant because data is stored and retrieved and never (or extremely rarely) updated20:44
glangeswift_fan: http://en.wikipedia.org/wiki/Doghouse <-- look at the images on that page, would it be ok to serve an old version of those pictures if you can't serve the new version?20:45
zackmdavisacorwin, you mean like backups, or archives?20:45
glangeswift_fan: would it be better to serve an old version than nothing?20:45
swifterdarrellglange: NEW DOGS OR NO DOGS, that's my motto20:45
swift_fanglange -- I see what you're saying20:45
acorwinswift_fan: you know what they say - you can't teach an old dog new tricks20:46
glangeswift_fan: and that is not a contrived situation :)20:46
claygglange: how can we know if we're even seeing the same dogs when we look at that page20:47
glangeclayg: we don't, and that is ok :)20:47
acorwinclayg: do any two people ever truly see the same dog?20:47
*** mkollaro has joined #openstack-swift20:48
elamberttorgomatic: got a sec to discuss range retrieval for EC? Would it be a problem if PyECLib, under the covers, always retrieved at least an entire segment (but still only returned just the requested range)?20:48
claygalso it's not just serving old data - it's also being able to eat new data even if you can't *guarentee* that you won't be able to serve that version20:49
claygI think it's the write characteristics in the face of failure that really differentiate AP from CP20:50
swift_fanclayg : What is AP and CP ?20:53
acorwinswift_fan: http://en.wikipedia.org/wiki/CAP_theorem20:54
swifterdarrellswift_fan: see also http://en.wikipedia.org/wiki/Consistency_model20:56
torgomaticelambert: if that's how it has to work, that's okay, I suppose20:57
torgomaticas long as the segment size isn't too large20:57
elambertuser definable iirc20:58
elambertbut standard seems to be 1MB20:58
torgomaticSwift can certainly throw away data the client didn't want, and now we've got some sanity checks on range requests, so that's probably alright20:58
elambertok, thanks20:59
torgomaticthere'll be some nasty edge cases around multiple ranges that don't overlap, but do require bytes from the same segment, and that'll need good tests and careful coding, but it should be okay20:59
swift_fanWhat kind of data is usually stored in the "Archive Tier" in the tiers of cloud data storage ?20:59
swift_fan(as opposed to the non-archive storage tiers right above it).21:00
elamberttorgomatic... and depending on the encoding scheme it may not be possible to decode w/o reading the entire segment21:00
torgomaticelambert: yep, exactly... so really, fetching the whole segment is probably fine; it's just some extra overhead21:00
* elambert nods21:01
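A hedged sketch of the arithmetic behind that trade-off (the segment size and example range are illustrative): a client's byte range gets widened to whole segments so they can be decoded, and the surplus bytes are discarded before the response goes out:

    SEGMENT_SIZE = 1024 * 1024      # the "standard" 1 MB mentioned above

    def bytes_to_fetch(first_byte, last_byte, segment_size=SEGMENT_SIZE):
        first_seg = first_byte // segment_size
        last_seg = last_byte // segment_size
        # widen the range to whole segments
        return first_seg * segment_size, (last_seg + 1) * segment_size - 1

    # a range straddling a segment boundary pulls in two full segments
    print(bytes_to_fetch(1048000, 1049000))   # -> (0, 2097151)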
NMGuys, sometimes I see this in my logs (account-replicator is one of them): "ERROR syncing" ending with "#012error: [Errno 1] Operation not permitted".21:09
NMHas anyone seen this?21:09
*** CaioBrentano1 has joined #openstack-swift21:11
claygNM: i only see that when I have some permissions messed up in /srv/node21:11
claygNM: sometimes it's cause I replaced a motherboard os on a node with old disks and the uid/gid's got messed up21:12
*** nellysmitt has quit IRC21:13
claygNM: sometimes permissions are just wrong (probably a operator error/misconfig or chef/puppet gone wild) and I just have to fix them21:13
*** CaioBrentano has quit IRC21:13
claygNM: the rest of the line leading up to the Operation not permitted might be helpful21:13
*** nitika__ has joined #openstack-swift21:14
NMclayg: That was my first thought. But $ find . ! -user swift didn't return anything21:14
*** chrisnelson has joined #openstack-swift21:14
claygyou sure the effective permissions of the daemon are running as swift?21:15
claygagain the traceback might even have the db file name in it21:15
*** nitika_ has quit IRC21:17
*** Trixboxer has quit IRC21:18
NMYes. I checked twice.21:19
NMI was looking at the exception but didn't find anything useful21:19
NMSometimes it works, sometimes it doesn't21:21
NMMost of the time it replicates with 0 failures21:22
*** mahatic has quit IRC21:23
mattoliverauMorning all!21:24
pelusemorning21:24
mattoliveraunotmyname: yay a 30 extra sleep in, I'm happy with that!21:24
NMMorning :)21:25
mattoliverau*minute (/me types well today)21:25
NMclayg: grep "Oct  7" aco.log|grep "account"|grep -c "0 failure" = 215221:25
NMgrep "Oct  7" aco.log|grep "account"|grep -c "1 failure" = 3521:25
notmynamemattoliverau: :-)21:25
NMclayg: Maybe I'm too much of a perfectionist?21:26
*** CaioBrentano1 is now known as CaioBrentano21:27
claygNM: well does the traceback at least reference a line number that's getting the EPERM?21:29
swift_fanDoes anyone know which lines/methods of the code specify where the Swift proxy tries to retrieve an object, which may or may not have been replicated yet ??21:30
swift_fan:)21:30
*** kopparam has joined #openstack-swift21:31
notmynameswift_fan: no. I suggest you look through the proxy source. doing so will help you find those answers, and also give you a good understanding of how things are put together. but know that it's something you won't fully understand with one 30 minute scan of the source21:32
portantenotmyname: wait, we don't have self-documenting code?21:33
portante;)21:33
notmynameportante: it's python, so it's executable pseudo-code right?21:33
portante=)21:33
notmynameportante: especially that best_response() method. that one is _so_ easy and straightforward21:33
* notmyname loves the mutable data type passed to a method executed by greenlets where the successful handling of the request requires the side-effects from each running instance of the method21:34
portanteyeah, what he said21:35
notmynameoh, make_requests() not best_response()21:36
*** kopparam has quit IRC21:36
NMclayg: Yes. It didn't help me :(   http://paste.openstack.org/show/119504/21:37
notmynameNM: interesting that it's a statsd message that is getting the error. ie sending a UDP packet21:40
notmynameNM: also it's interesting that you're using py26 ;-)21:40
claygnotmyname: oh i bet the socket got closed?21:41
claygcan't send datagram to a closed socket21:41
clayghow the hell do you close a datagram socket21:41
zaitcevbut ends with EPERM?21:41
claygtcp has spoiled me21:41
NMnotmyname: Don't mention that :(( When I started working with swift the doc pages said python 2.6. At the last summit I was surprised when the guys were running it with 2.7.21:41
glangemaybe those who want to understand swift should take a time machine to a simpler time :) https://github.com/openstack/swift/tree/2ee9b837b5a1e13681ca9359138451719f8641dd21:42
claygzaitcev: sure why not?  don't you get EPERM when you try to send to a closed tcp socket?21:42
zaitcevno wai21:42
zaitcevshould be EPIPE21:42
claygglange: hehe that's awesome21:42
dfgglange: thanks for making me want to cry...21:43
NMnotmyname: and clayg, for me it looks like I had a sync error and an exception while sending this sync error to statsd.21:43
notmynameNM: http://stackoverflow.com/questions/23859164/linux-udp-socket-sendto-operation-not-permitted21:44
notmynameNM: that says conntrack21:44
*** tab____ has quit IRC21:44
claygzaitcev: connect can raise EPERM, do you have to connect first to send to udp?21:46
claygnotmyname: oh good one!21:47
notmynameNM: the swift docs originally said py26 because it originally targeted lucid. depending on what OS you're running on, I'd suggest moving to py27 since py26 doesn't even get security updates any more21:49
* notmyname hopes the docs don't still say py2621:49
NMnotmyname: Well, that was at the beginning of the year.21:50
NMThe stackoverflow post is good.21:51
NMAnd I got the same error stracing the process21:51
NMBut I'm still not understanding why the syncing is failing.21:52
claygNM: well... it's not exactly - some code before return sync success is failing :)21:53
*** ppai has quit IRC21:54
NMBut reading the source code, that code runs only in case of exception, right?21:55
*** mkollaro has quit IRC21:57
claygself.logger.increment('no_changes') looks like success to me!21:58
claygjust check dmesg for dropped packets - if this is the only problem you're having you're getting wicked lucky21:59
claygNM: or maybe it's something else... but that'd be useful to know21:59
NMOk. I'll fix the nf_conntrack problem22:01
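A hedged example of how the conntrack theory is usually confirmed and worked around (the limit and port are illustrative assumptions, 8125 being the usual statsd default):

    # look for the kernel complaining about a full connection-tracking table
    dmesg | grep -i conntrack        # e.g. "nf_conntrack: table full, dropping packet"

    # compare current usage against the limit
    sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max

    # raise the limit...
    sysctl -w net.netfilter.nf_conntrack_max=262144

    # ...or stop tracking the statsd UDP traffic altogether
    iptables -t raw -A OUTPUT -p udp --dport 8125 -j NOTRACK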
NMWould it be a good idea if the log said "Sync Ok but error sending increment log…" ?22:06
notmynameNM: yes, that would be good22:07
NMShould I open an issue or let it up to you guys?22:08
notmynameNM: at least open a bug on it, please22:09
notmynameNM: https://bugs.launchpad.net/swift/+filebug22:09
*** nitika__ has quit IRC22:09
NMThank you clayg, thank you notmyname :)22:12
*** NM has quit IRC22:16
*** andreia_ has quit IRC22:23
*** aerwin has quit IRC22:31
*** kopparam has joined #openstack-swift22:32
*** kopparam has quit IRC22:36
*** marcusvrn_ has quit IRC22:43
*** Nadeem has quit IRC22:50
*** nitika2__ has joined #openstack-swift22:59
*** kyles_ne has quit IRC23:04
*** kyles_ne has joined #openstack-swift23:05
*** kyles_ne has quit IRC23:09
*** NM has joined #openstack-swift23:15
*** joeljwright has quit IRC23:29
*** joeljwright has joined #openstack-swift23:29
*** elambert has quit IRC23:32
*** kopparam has joined #openstack-swift23:33
*** joeljwright has quit IRC23:34
*** kopparam has quit IRC23:37
*** bsdkurt has joined #openstack-swift23:49
*** NM has quit IRC23:50
