Friday, 2015-05-08

00:04 <ho> good morning!
00:05 <openstackgerrit> Samuel Merritt proposed openstack/swift: EC: don't 503 on marginally-successful PUT  https://review.openstack.org/180795
00:05 <mattoliverau> ho: morning
00:07 <kota_> ho, mattoliverau: morning :)
00:07 <mattoliverau> kota_: morning!
00:46 <openstackgerrit> Merged openstack/swift: X-Auth-Token should be a bytestring.  https://review.openstack.org/180098
00:46 <notmyname> clayg: I'm running the reconstructor now and seeing errors in the logs. liberasurecode tracebacks and 404s
00:47 <notmyname> here's a fun one:
00:47 <notmyname> May  8 00:02:00 localhost.localdomain object-reconstructor: 192.168.12.15:6003/d33/2179 EXCEPTION in replication.Sender:
                  Traceback (most recent call last):
                    File "/usr/lib/pymodules/python2.7/swift/obj/ssync_sender.py", line 72, in __call__
                      self.connect()
                    File "/usr/lib/pymodules/python2.7/swift/obj/ssync_sender.py", line 133, in connect
                      self.node['index'])
                  KeyError: 'index'
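[Editor's note: the traceback above boils down to an unguarded dict lookup. A minimal sketch of the failure mode; the node dict and its values here are illustrative stand-ins, not Swift's actual data structures:]

```python
# Illustrative only: the ssync sender assumes every node dict carries an
# 'index' key (the EC fragment index), but some jobs hand it nodes
# without one, producing the KeyError seen in the log above.
node = {'ip': '192.168.12.15', 'port': 6003, 'device': 'd33'}

try:
    frag_index = node['index']  # raises KeyError, as in the traceback
except KeyError:
    frag_index = None

# a defensive lookup avoids the crash entirely
assert node.get('index') is None
```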
00:50 <clayg> nah, lp bug #1452619 is old news
00:50 <openstack> Launchpad bug 1452619 in OpenStack Object Storage (swift) "object-reconstructor crash in ssync_sender" [Undecided,Confirmed] https://launchpad.net/bugs/1452619
00:50 <clayg> notmyname: but the other ones might be something
00:51 <notmyname> clayg: same as we saw on the GETs: Reconstruct ERROR: Insufficient number of fragments.
00:53 <notmyname> also saw the "invalid literal for base 16" error
00:55 <notmyname> clayg: hmm.. this seems to kill the reconstructor
00:55 <clayg> what do you mean by "this" and "kill"?
00:57 <clayg> i'm not terribly surprised fragment *rebuild* isn't making great progress (yet?) given the fragment placement of the object I was looking at last night
00:57 <notmyname> clayg: are you logged in doing stuff?
00:57 <notmyname> onto 01
00:57 <clayg> notmyname: I just logged in and started tailing the all.log
00:58 <notmyname> ah ok. my grep on that box started looking weird
00:58 <notmyname> "who's this swiftqa user doing this grep?!"
00:58 <clayg> but I'm referencing my notes from last night about the crazy fragments everywhere - and thinking about the rebuild handling in the reconstructor
00:58 <ho> mattoliverau: kota_: morning! thanks!
00:58 <clayg> how did I change what *your* grep was looking like?
00:58 <notmyname> clayg: so by "this" and "kill" I mean that after a while the process ends
01:00 <notmyname> as in: "this kills the crab" http://i.imgur.com/gMmnR5p.jpg
01:01 <clayg> gross
01:01 <clayg> this is probably darrell's segfault bug! he was basically in the same state - fragments everywhere
01:01 <clayg> notmyname: so lp bug #1452553
01:01 <openstack> Launchpad bug 1452553 in OpenStack Object Storage (swift) "don't rebuild existing fragments" [Undecided,New] https://launchpad.net/bugs/1452553
01:02 <notmyname> so the one lesson I'm repeatedly learning in the world of swift: if there's a bug, everyone has seen it. doesn't matter who you are, we all see the same thing ;-)
01:02 <clayg> notmyname: too bad you picked isa-l - jerasure would have worked fine!
01:02 <notmyname> "too bad"
01:02 * clayg jabs at peluse for not responding to my email to him and tsg and kmgreen yet
01:03 <notmyname> ok, so leave reconstruction off for the next, low-concurrency run?
01:03 <notmyname> all in a new container?
01:03 <clayg> notmyname: nice thing about working in the open!
01:03 <notmyname> :-)
01:03 <clayg> notmyname: wait no - finding out if the cluster can right itself is way more interesting than benchmarks !!!! :D
01:04 <clayg> oh oh oh!
01:04 <clayg> handoffs_first!
01:04 <notmyname> I'm gonna go with "no" to that question
01:04 <notmyname> oh?
01:04 <notmyname> wat wat?
01:04 <clayg> if we can get reverts to fix things with the data we have - then we will segfault less - segfaulting less is good!
01:04 <notmyname> I can agree with that!
01:05 <clayg> i feel like reverts should be making some progress tho - i'm not seeing any "Removing partition" in the logs tho...
01:05 <notmyname> so you think 2.conf should have handoffs_first?
01:05 <clayg> notmyname: I'm going to try it - it's only "qa" - what's the worst that could happen? it won't make a black hole or anything
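[Editor's note: `handoffs_first` is a real object replicator/reconstructor option; a minimal sketch of what enabling it might look like. The file path and section name shown are the conventional defaults, not taken from the qa cluster's actual config:]

```ini
# /etc/swift/object-server.conf (sketch)
[object-reconstructor]
# process handoff partitions first, so reverts run before rebuilds
handoffs_first = True
```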
01:05 * clayg goes to get a beer first
01:06 <notmyname> all I ask is that you do it exactly the same on every server. I had to fix something like that earlier. it makes updates later really hard when the configs are different
01:06 <notmyname> maybe we should actually use some config management for this...
01:08 <clayg> notmyname: I know some people that could help you with that...
01:08 <notmyname> well, it is about 9am in taipei. charz should be online soon ;-)
01:09 <clayg> man, how did I leave that lexicographical revert sync sort in there :\
01:10 <clayg> i would have sworn I came up with something better... gd
01:10 <notmyname> my kids are getting crazy and need to be fed. I'm getting out of the qa cluster. I'll check in later about it
01:11 <mattoliverau> notmyname: time to teach your kids how to work with the qa cluster ;)
01:11 <mattoliverau> sorry, I've been distracted with sharding-at-large-scale issues, so I'm only leaving drive-by sarcastic comments
01:12 <notmyname> bah, I accidentally killed the screen session
01:13 <clayg> now every time I type reconstructor I think of the object-retransmogrifier
01:13 <clayg> zaitcev is such a comedian
01:13 <notmyname> what's the command to start a screen session and have the output go into a file?
01:13 <clayg> there's a *command* for that?
01:14 <clayg> typescript?
01:14 <notmyname> I have no idea
01:14 <notmyname> like how did the screen output go to screenlog.0 on bm01?
01:14 <mattoliverau> clayg: shame april fools is over, otherwise a patch to change the name of the reconstructor would be in order
01:14 <clayg> maybe just "script"
01:15 <clayg> notmyname: charz is a ninja - ninja's log
01:15 <notmyname> yeah, I tried that. didn't see anything. a sneaky ninja
01:17 <clayg> see: object-reconstructor: 192.168.12.13:6003/d22/855 Early disconnect
01:17 <clayg> ^ that means the remote node isn't being a good replication server
01:17 <clayg> what node is that?
01:18 <clayg> qa0X?
01:18 <mattoliverau> notmyname: looks like screen -L
01:18 <notmyname> clayg: last quad of the IP is 13, so that's qa03
01:18 <notmyname> they are .{11..15}
01:19 <notmyname> mattoliverau: yup. that was it. thanks!
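[Editor's note: both answers from the exchange above can be checked from a shell. `screen -L` appends the session to ./screenlog.0, and script(1) is the `typescript` route clayg was half-remembering; the echo command below is just a stand-in workload:]

```shell
# Log a whole screen session to ./screenlog.0 in the current directory:
#   screen -L
# Or capture one command's terminal output with script(1) from util-linux:
script -c 'echo hello from the qa cluster' /tmp/session.log > /dev/null
# the typescript file now contains the command's output
grep 'hello from the qa cluster' /tmp/session.log
```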
01:20 <notmyname> ok, gotta run now. I'll try to peek in later
01:21 <mattoliverau> notmyname: have fun
01:50 <charz> morning
02:07 <charz> notmyname: I'm going to do that (config management) today.
02:20 <clayg> redbo: lol - cross post - should have left it to the master as I am ye verily but the student
02:21 <clayg> redbo: not used to seeing you on the mailing list tho - I thought someone should say something ;)
02:24 <redbo> I was hoping it'd just go away, but Chuck brought it back to life.
02:25 <clayg> rofl
02:26 <clayg> i just pictured him sneezing and everyone's heads popping up over the cubicles
02:26 <zaitcev> and the rest reaching for their guns
02:27 <redbo> he's in a little conference room now, so it's contained
02:28 <redbo> Plus it took me a few days to not be snippy. I feel like all these people think they get a vote about whether or not I can work on something :)
02:29 <clayg> yeah I don't know - i was surprised when notmyname started the thread - but I guess it's better to be like "ok, let's get it out of your system people, because this is happening"
02:33 <clayg> also - me likes redbo's snippy
02:33 <clayg> \:-|
02:34 <clayg> no... maybe not that - I find it humerous when redbo makes snarky comments
02:35 <zaitcev> At least you didn't find it humerus.
02:37 <clayg> ^ see, comedian
02:38 <redbo> I have direct i/o implemented in a branch, maybe I'll do that once we finish some of the cleanup.
02:38 <zaitcev> Sounds nice.
02:41 <clayg> shit, handoffs_first doesn't work right because of the way we yield partitions... maybe...
02:51 <clayg> gah, poor ssync - why am I getting 409s - the missing check shouldn't be asking me to ship these objects - crap, I need acoles :D
02:54 <clayg> ok, must be swapped - missing check says "yeah I want that frag index", but the PUT 409s because of another frag with the same ts - and since revert to handoff isn't working...
04:20 <clayg> charz: notmyname: ok, I added a patch to avoid the segfault to lp bug #1452553 and applied it to all the nodes in the qa cluster - seems to be segfaulting less
04:20 <openstack> Launchpad bug 1452553 in OpenStack Object Storage (swift) "don't rebuild existing fragments" [Undecided,New] https://launchpad.net/bugs/1452553
04:21 <clayg> still seeing a lot of data on handoff nodes - but it may just be swapping fragments
04:21 <charz> clayg: Did you add this patch to all nodes in the qa cluster?
04:21 <clayg> charz: yeah
04:21 <clayg> "all"
04:21 <clayg> heh
04:21 <clayg> it's only 5
04:21 <clayg> acoles_away: so I ended up wanting to fix the keyerror and leave the 409 in place
04:22 <charz> clayg: Oh, that's why the reconstructors are working well
04:22 <charz> clayg: I was confused at that moment. :-)
04:23 <clayg> peluse: you know how we were thinking swaps should basically almost *never* happen - I'm not sure what notmyname did when this cluster was getting set up - and I know the reconstructor wasn't running *at all* for a number of rebalances...
04:23 <clayg> but swaps happen ;)
04:33 <kota_> notmyname, clayg: I might know something about the decoding error at liberasurecode.
04:34 <kota_> notmyname: I saw the decoding error when I was making a ton of POST requests (i.e. post as copy)
04:35 <kota_> notmyname: At that time, POST as COPY seemed to make some handoff frags in the cluster for some reason, like a connection timeout.
04:37 <kota_> notmyname: and then, when I tried to GET the object, it sometimes failed because the proxy retrieved fragments from both primaries and handoffs (maybe the GET also hits a connection timeout)
04:39 <kota_> notmyname: "ERROR: Insufficient number of fragments." will occur when the number of fragments is less than k (ndata).
04:39 <notmyname> yes. I saw that a lot
04:40 <kota_> notmyname: the current Swift proxy ensures n streams for decode, but the fragments can duplicate each other
04:40 <kota_> notmyname: i.e. [0,0,1,2,3] for k=5
04:41 <kota_> notmyname: in that example, it causes the Insufficient number of fragments.
04:41 <kota_> notmyname: the integer means Fragment Index.
04:46 <kota_> notmyname: e.g. k=5, m=2, primaries have [0, 1, 2, 3, 4, 5, 6] (the integer means fragment index)
04:46 <kota_> notmyname: assuming POST as copy (or update with the same object) and a node (e.g. 0) fails to connect
04:47 <kota_> notmyname: data layout will be primaries [0, 1, 2, 3, 4, 5] and handoff [0]
04:47 <kota_> notmyname: assuming no reconstructor is running.
04:49 <kota_> notmyname: no, data layout will be [0, 1, 2, 3, 4, 5, 6] and handoff [0]
04:50 <kota_> notmyname: and then when trying to GET and some nodes (e.g. 4,5,6) fail to connect
04:50 <kota_> notmyname: the proxy will get k nodes as [0, 1, 2, 3, 0(handoff)] for decode.
04:52 <kota_> notmyname: however, actually only 4 distinct fragments are gathered even though 5 *different* fragments are needed for decoding.
04:53 <kota_> notmyname: liberasurecode will raise "ERROR: Insufficient number of fragments." in such a situation.
04:55 <kota_> notmyname: for now, I'm thinking about whether we need a validation of fragment index uniqueness before decoding or not.
04:57 <kota_> imo, we need some implementation to handle the decoding failure...
04:57 <kota_> hum....
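[Editor's note: the uniqueness validation kota_ is weighing can be sketched in a few lines. This is a hedged illustration using kota_'s own example, not Swift's actual proxy code; the function name is hypothetical:]

```python
# Sketch: before handing fragments to the EC library for decode, verify
# we hold k *distinct* fragment indexes, since a duplicated handoff
# fragment does not add any new information.
def have_enough_distinct_frags(frag_indexes, k):
    """Return True if the gathered fragments cover k unique indexes."""
    return len(set(frag_indexes)) >= k

# kota_'s example: k=5, but the handoff duplicated fragment index 0,
# so [0, 1, 2, 3, 0] carries only 4 distinct fragments
assert have_enough_distinct_frags([0, 1, 2, 3, 4], k=5)
assert not have_enough_distinct_frags([0, 1, 2, 3, 0], k=5)
```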
05:05 <notmyname> kota_: that sounds very similar to what I saw. it wasn't POST-as-COPY, but the result was effectively the same: reading older data after a ring change. and the reconstructor wasn't running (because of the aforementioned segfaults)
05:06 <notmyname> but mostly I've been feeding clayg info on all this and being his rubber duck for debugging. ;-)
05:06 <notmyname> seems like he has some patches and ideas on the next step for clearing these up
05:07 <kota_> notmyname: cool
05:07 <kota_> notmyname: patches are already in gerrit? I'd love to see
06:42 <openstackgerrit> Christian Schwede proposed openstack/swift: Update my mailmap  https://review.openstack.org/181305
06:56 <ho> cschwede: hello, is there any rule for adding email addresses to the mailmap?
09:05 <acoles> notmyname: clayg: so i can see from a distance you guys are working hard on reconstructor stuff, which is great!
09:08 <acoles> notmyname: clayg: some thoughts from over here FWIW: the "invalid literal for base 16" error gets logged when ssync closes the connection early on failures (i.e. it's kinda 'normal', although noisy) - *some* of them get cleaned up by patch 177836
09:08 <patchbot> acoles: https://review.openstack.org/#/c/177836/
09:09 <cschwede> ho: indeed, the .mailmap is used by git, so the definition is in the manpage: ftp://www.kernel.org/pub/software/scm/git/docs/git-shortlog.html - and because i'll use my @redhat.com address from now on i added this
09:10 <acoles> clayg: and i am working my way towards avoiding ssync 409s when the rx has a different FI, by having the rx return a 'not in sync but don't send' semantic in the missing_check response
09:11 <acoles> clayg: but not quite there yet - will build on patch 138498
09:11 <patchbot> acoles: https://review.openstack.org/#/c/138498/
09:12 <acoles> notmyname: clayg: that's ^^ not meant to be self-promotion of my patches! just trying to help :)
09:46 <ho> cschwede: i see. thanks for the info.
09:47 <acoles> ho: hello! are you coming to vancouver?
09:48 <cschwede> ho: you're welcome!
09:48 <cschwede> acoles: i hope he will
09:48 <acoles> me too
09:49 <cschwede> acoles: when are you arriving?
09:49 <acoles> cschwede: heh, just typed the same question for you
09:49 <acoles> cschwede: saturday
09:50 <acoles> wondering if the 'panel' could meet sunday evening to prepare?
09:50 <cschwede> acoles: :) i'll arrive on sunday. will be in Denmark next week, coming back on saturday. will be at home then for exactly 12 hours ;)
09:50 <cschwede> acoles: that's a very good idea!
09:51 <acoles> cschwede: oh wow! Denmark for work or vacation?
09:51 <kota_> acoles, cschwede: hi
09:51 <acoles> kota_: hi!
09:51 <kota_> acoles, cschwede: FYI, I'll arrive on sunday
09:52 <cschwede> acoles: vacation. kids are off school next week, and it's just a few hours' drive - nice for relaxing :)
09:52 <cschwede> kota_: hello Kota!
09:52 <acoles> so maybe we can have a very jet-lagged meetup sunday evening
09:52 <cschwede> acoles: yes, let's do that!
09:53 <acoles> cschwede: sounds good. and it makes sense you have been writing specs this week ;)
09:53 <cschwede> i wanted to visit the capilano suspension bridge on sunday, hopefully i'm not too jetlagged then. if anyone else is interested: http://www.capbridge.com/
09:54 <acoles> kota_: cschwede: which hotels are you in? (thinking about venue selection for a possible dinner)
09:54 <cschwede> acoles: yes, wanted to have something to discuss in vancouver, and specs are a good start (hopefully)
09:54 <acoles> definitely!
09:55 <kota_> ya, absolutely
09:55 <acoles> cschwede: hang on, so what time does your flight get in on sunday??
09:55 <kota_> I'll stay at the Pinnacle Hotel Vancouver Harbourfront.
09:55 <cschwede> acoles: i'm in the "days inn vancouver downtown", 921 West Pender Street
09:56 <cschwede> acoles: 11:50 local time
09:56 <acoles> ah, early from europe; i'm in late saturday
09:57 <cschwede> kota_: i'm just 5 minutes walking distance from your hotel
09:57 <cschwede> probably it's easiest to meet at the convention center entrance?
09:57 <kota_> cschwede: great!
09:57 <kota_> I'll arrive in vancouver at 10:15 am local time.
09:58 <kota_> cschwede: sounds good to meet at the convention center.
09:58 <acoles> kota_: cschwede: i'll ask around the others and then send an email with a suggested time
09:59 <kota_> acoles: ok, thanks :D
09:59 <acoles> cschwede: i have been to capilano! it's good, iirc (was ~20 years ago)
10:00 <cschwede> acoles: nice, good to know!
10:00 <acoles> cschwede: my current plan is to visit friends on sunday though
10:01 <kota_> wow, capbridge looks so interesting...
11:07 <ho> acoles: hello, I will come to vancouver! yay!
11:12 <ho> let me join the dinner on sunday :-)
11:14 <ho> good night all! have a nice weekend
14:30 <openstackgerrit> Alistair Coles proposed openstack/swift: Add POST capability to ssync for .meta files  https://review.openstack.org/138498
14:30 <openstackgerrit> Alistair Coles proposed openstack/swift: Don't ssync fragments that conflict with another fragment  https://review.openstack.org/181407
14:31 <acoles> clayg: ^^ that gets rid of some ssync 409s
14:32 <acoles> clayg: somewhere in that patch chain i have a TODO inviting an argument about the message format ;)
15:58 <notmyname> good morning
16:02 <notmyname> cschwede: ping
16:03 <cschwede> notmyname: good morning!
16:03 <notmyname> hello
16:03 <notmyname> cschwede: on https://review.openstack.org/#/c/181305/ ... if you update the .mailmap you also need to update AUTHORS
16:03 <notmyname> cschwede: normally I'd just do that for you, but since you started.... ;-)
16:04 <cschwede> notmyname: ah, thx for the info - i knew i had probably forgotten something ;)
16:05 <cschwede> notmyname: i wanted to make it a tad easier for you, but then…
16:05 <openstackgerrit> Christian Schwede proposed openstack/swift: Update my mailmap entry  https://review.openstack.org/181305
16:07 <notmyname> cschwede: this has my own path names hard-coded in it, but I use https://github.com/notmyname/git-stats/blob/master/new_authors.sh to keep that stuff up to date
16:07 <notmyname> cschwede: thanks. looks good now
16:08 <cschwede> new_authors.sh looks interesting - looks like you automated all the recurring PTL duties :)
16:09 <notmyname> not quite :-)
16:09 <notmyname> extra awesome would be to actually patch AUTHORS. I still do that by hand
16:24 <minwoob> Hi. Are there some performance results for ISA-L compared to other EC libs (liberasurecode, Jerasure, etc)?
16:29 <notmyname> minwoob: no, not yet. at least not that I've seen, other than an Intel marketing video ;-) /cc peluse
16:31 <wbhuber> I'd like to add to Minwoo's point. In the latest release notes (kilo), there's a known issue published that EC support is lacking outside swiftstack.
16:32 <notmyname> wbhuber: ?
16:32 <wbhuber> And there is not much performance characterization going on. How can we make a contribution to that content?
16:32 <wbhuber> https://wiki.openstack.org/wiki/ReleaseNotes/Kilo
16:32 <notmyname> whew. "swiftstack" isn't found on that page
16:33 <wbhuber> Is there any preliminary performance testing that has been done, or not?
16:33 <wbhuber> Pardon me. Disregard "swiftstack".
16:33 <notmyname> wbhuber: yes, it's ongoing. if you want to help, the basic idea is to set up a cluster and run some tests. I know that I'm doing that with clayg on the community qa cluster, and we'll continue to be open with any results from that
16:33 <notmyname> wbhuber: ;-)
16:34 <wbhuber> what is the configuration like? beyond a single SAIO environment? if you have some documentation to lead us, that would be utmost terrific.
16:34 <notmyname> wbhuber: and mattoliverau said he was getting it in a cluster (IIRC) and looking. kota_ has done some stuff (I know because he's seen the same issues we've seen). I hope acoles has started looking at HP. and if you could look at IBM, that would be wonderful
16:35 <notmyname> wbhuber: the community qa cluster is 5 nodes, 12 drives each. we've configured 4 storage policies (20 drives for only replication, 20 drives for both repl and EC, and 20 drives for only EC)
16:36 <wbhuber> 5 nodes in addition to proxy and container nodes?
16:36 <notmyname> wbhuber: there isn't a particular config other than "deploy a cluster like normal". then we're all just adding an EC policy to it
16:36 <notmyname> no, it's 5 nodes total. they all run all the processes. I know that's not great for performance, but it's sufficient for relative tests and basic validation
16:37 <notmyname> I've got another larger cluster that I'll use later for some more testing, after most of the major issues are resolved and we have some baselines
16:38 <wbhuber> i am still digesting the whole cluster configuration to see if we could "replicate" it on our end for further testing and performance profiling.
16:39 <notmyname> I think the most helpful thing to start with is not to try to get every ounce of performance out of EC to see what's possible, but to explore it from the perspective of "when does EC make sense vs replication". ie what data size, what operations, etc
16:40 <notmyname> and further, to see where the particular hardware bottlenecks are so we have a general idea of cluster sizing. eg is it CPU or disk bound first? how do memory requirements compare? etc
16:40 <wbhuber> do you have some kind of performance test design, with questions that need to be answered, that we can use as a direction?
16:41 <notmyname> I'm planning on sharing any results we have at the summit. I hope we'll get the current issues sorted out soon (today?) so we can have some interesting relative numbers to share
16:42 <notmyname> wbhuber: two important questions that don't have good answers yet: 1) at what object size does it make sense to use EC vs replication? 2) given a particular hardware configuration, what are the appropriate EC parameters?
16:43 <notmyname> I don't know the answer to either of those, but they are some of the first questions asked. so that's my current goal
16:43 <notmyname> getting data points for that
16:43 <wbhuber> Speaking of 1), I have been hearing that replication makes the most sense for objects smaller than 1MB and EC for larger objects. I have to get to the sources and research them further.
16:43 <notmyname> that is, beyond "does this work?"
16:44 <notmyname> yeah, replication is definitely better for smaller objects. but what is the switch point? 5MB? 50MB? 100MB?
16:44 <wbhuber> Good questions.
16:44 <notmyname> and is it different for read-heavy workloads vs write-heavy workloads?
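[Editor's note: one input to the switch-point question above is raw storage overhead, which is simple arithmetic. This is an illustrative calculation only, not a benchmark; the k=5, m=2 values echo kota_'s example earlier in the log, and the functions are hypothetical helpers:]

```python
# Illustrative arithmetic: bytes stored on disk per byte of user data.
def replication_overhead(replicas=3):
    # each object is stored whole, `replicas` times
    return float(replicas)

def ec_overhead(k=5, m=2):
    # each object is split into k data fragments plus m parity fragments,
    # so on-disk usage is (k + m) / k of the object size
    return (k + m) / k

assert replication_overhead() == 3.0   # 3x for triple replication
assert ec_overhead(5, 2) == 1.4        # 1.4x for a 5+2 EC scheme
```

The disk savings are why EC is attractive for large objects; for small objects, per-fragment overheads and the extra requests per GET/PUT eat into that advantage, which is exactly the switch point being discussed.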
16:45 <acoles> jrichli: ping
16:46 <notmyname> wbhuber: so unfortunately, the best answer I have right now is "go deploy swift with EC and see what happens" :-)
16:46 <wbhuber> :-)
16:47 <notmyname> wbhuber: or rather, "I'm really looking forward to seeing your own test numbers in Vancouver at the summit" ;-)
16:48 <clayg> notmyname: I thought that's what the release notes said - that's what we're doing - what is everyone else doing?
16:48 <notmyname> clayg: I think we're all doing the same thing
16:48 <clayg> go team swift!
16:48 <notmyname> and one of the cool parts is that we're all seeing the same thing
16:48 <wbhuber> notmyname: Wish we could have done that sooner, but most likely before the next summit after Vancouver.
16:48 * clayg goes to see if the qa cluster unhozed itself with my patches
16:48 <notmyname> well, don't only do that for the next six months
16:49 <notmyname> there's nothing stopping you from sharing in here as soon as you have numbers :-)
16:49 <wbhuber> we will tune in to you guys every now and then before 6 months
16:49 <wbhuber> :)
16:49 <clayg> notmyname: we have to benchmark go, and object-server per disk, and pypy apparently :'(
16:49 <notmyname> the only difference with the summit is that at the summit we talk face to face.
16:50 <minwoob> wbhuber: It seems that EC generally performs better for writes than for reads.
16:50 <notmyname> clayg: yeah, testing Go is important. and object-server-per-disk (I expected to already have that done on the other lab cluster, but this EC work has taken longer).
16:51 <wbhuber> notmyname: yes, we'd like to get your results not only at the summit but also here. i'll ask jrichli to pick up the materials if the numbers aren't posted by then.
16:51 <notmyname> wbhuber: they'll all be online
16:52 <wbhuber> minwoob: are there also research statistics on migrating larger objects from replication to EC?
16:52 <wbhuber> notmyname: sounds good.
16:52 <minwoob> Haven't looked into migration yet.
17:19 <clayg> ok, so the cluster did not unhoze - the issue seems to be that the 409s causing early disconnects mean an inability to make progress with cleanup (because an early disconnect fails the whole ssync), which will eventually lead to a large variety of frag indexes in a part, and almost any node you might try to sync with will have a conflict at some point
17:46 <jrichli> acoles: just got back from lunch. reading scrollback.
17:49 <jrichli> acoles_away ^^
17:50 <openstackgerrit> Merged openstack/swift: Update my mailmap entry  https://review.openstack.org/181305
hub_caphello friendos. QQ about swift w/ memcache. Is it necessary? Is it stupid if I dont use it? I guess I dont really understand its need/use in the cluster18:36
hub_capfeel free to tell me im a dummy :D18:37
redboIf you don't run memcache, every operation will have to do a lookup on the container, to see if you have permissions at the minimum.  A few features won't work at all like rate limiting.18:47
redboYou'll have to hit the auth system for every request to see if the user is valid.  Stuff like that.18:48
redboI guess that might not be true for those giant encrypted tokens18:49
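[For context on redbo's answer: swift's memcache support is enabled via the `cache` middleware in the proxy pipeline. A minimal sketch of the relevant proxy-server.conf pieces — server addresses are placeholders for your deployment:]

```ini
[pipeline:main]
# cache must appear before middleware that uses it (e.g. ratelimit, authtoken)
pipeline = catch_errors healthcheck proxy-logging cache proxy-logging proxy-server

[filter:cache]
use = egg:swift#memcache
# placeholder; list your memcached servers here
memcache_servers = 127.0.0.1:11211
```

Without this, container/account info and token validation results can't be cached, and features that store state in memcache (like rate limiting) won't function.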
*** Guest___ has quit IRC18:53
hub_capMy use case is stupid small, like a 3 node cluster with almost no operations... I just need replication, an s3 api, and code that attacks split brain stupidity :)18:55
hub_capthx redbo for the info18:57
*** Guest___ has joined #openstack-swift19:01
*** wasmum has joined #openstack-swift19:07
*** lpabon has quit IRC19:14
*** tamizh_geek has joined #openstack-swift19:15
*** zaitcev has joined #openstack-swift19:17
*** ChanServ sets mode: +v zaitcev19:17
*** tamizh_geek has quit IRC19:17
*** tamizh_g_ has quit IRC19:17
openstackgerritMinwoo Bae proposed openstack/swift: The hash_cleanup_listdir function should only be called when necessary.  https://review.openstack.org/17831719:34
*** Guest___ has quit IRC19:40
*** Guest___ has joined #openstack-swift19:47
*** bkopilov has joined #openstack-swift19:49
*** Guest___ has quit IRC19:50
*** zaitcev has quit IRC19:56
*** rmcall has joined #openstack-swift20:14
*** shakamunyi has quit IRC20:17
*** breitz has quit IRC20:17
*** breitz has joined #openstack-swift20:17
*** thurloat is now known as thurloat_isgone20:18
*** jkugel has joined #openstack-swift20:20
*** annegentle has joined #openstack-swift20:25
*** annegentle has quit IRC20:26
*** annegentle has joined #openstack-swift20:26
jrichliHi all.  I was wanting to learn more about account creation and deletion under the hood.  I know that the identity API is used.  But I wanted to know more about the role that the middleware can play.20:32
jrichliFor example, I know that some accounts are "autocreated", and so an account PUT will not pass through the pipeline for that.20:32
jrichliDoes an account DELETE request pass through the pipeline?  I noticed that there is a direct_delete_account, which sounds like it would be deleting without any help from the pipeline.  Is this right?20:33
*** silor has quit IRC20:36
*** esker has quit IRC20:38
openstackgerritMerged openstack/swift: Functional test for SLO PUT overwriting one of its own segments  https://review.openstack.org/17455720:40
openstackgerritMerged openstack/swift: Bump up a timeout in a test  https://review.openstack.org/17995620:43
*** dencaval has quit IRC20:43
torgomaticjrichli: anything in direct_client.py speaks to the backend servers without going through the proxy20:46
torgomaticbut usually, an account DELETE will go through the proxy20:46
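[To illustrate torgomatic's point about direct_client bypassing the proxy — a hypothetical sketch, not swift's actual API: a direct client addresses a backend server's device, partition, and account itself, so no proxy middleware pipeline runs.]

```python
# Illustrative only: the shape of a backend path a direct_client-style
# request would use, versus the /v1/<account> path a proxy request uses.
def direct_path(node, part, account, container=None, obj=None):
    """Build the backend request path for a direct (proxy-bypassing) call."""
    path = "/%s/%s/%s" % (node["device"], part, account)
    if container:
        path += "/%s" % container
        if obj:
            path += "/%s" % obj
    return path

node = {"ip": "10.0.0.1", "port": 6002, "device": "d1"}
# direct_path(node, 1234, "AUTH_test") -> "/d1/1234/AUTH_test"
```

Because the request goes straight to the account server, anything middleware would normally do on an account DELETE (auth checks, logging, custom hooks) is skipped.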
*** shakamunyi has joined #openstack-swift20:47
*** bejorg has quit IRC20:50
jrichlitorgomatic: ok, thx.  we can talk more along these lines at summit.  I wanted to be sure that planning on some middleware actions for acct DELETE is reasonable.20:52
*** shakamunyi has quit IRC21:03
jrichlitorgomatic: is there any way for a client to create an account with user-meta without going through the proxy?21:04
torgomaticjrichli: not that I know of21:04
jrichlik, thx21:05
*** breitz has quit IRC21:08
*** aerwin has quit IRC21:16
*** rmcall has quit IRC21:17
*** shakamunyi has joined #openstack-swift21:21
openstackgerritTim Burke proposed openstack/swift: Properly re-raise exceptions in proxy_logging  https://review.openstack.org/18156621:26
*** esker has joined #openstack-swift21:36
jlkhey folks. I've got some weird stuff going on where if I have both authtoken and keystoneauth in my proxy pipeline I get failures to auth a token. BUT if I take out keystoneauth and just leave authtoken, things work fine. I'm having trouble wrapping my brain around where the misconfiguration can be, can somebody lend me a hand?21:39
jlknotmyname: you were very helpful last time, but I've got a new cluster with a new problem, hoping you'll be able to help again :)  ^^21:41
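[For reference on jlk's question: the commonly documented ordering puts authtoken immediately before keystoneauth, with authtoken configured to defer denials so keystoneauth can handle them. A sketch with placeholder values — the Keystone endpoints and credentials below are assumptions for illustration, not jlk's actual config:]

```ini
[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystoneauth proxy-server

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
# placeholders for this deployment's Keystone endpoints and service user
auth_uri = http://keystone:5000
identity_uri = http://keystone:35357
admin_tenant_name = service
admin_user = swift
admin_password = SECRET
# let keystoneauth make the final allow/deny decision
delay_auth_decision = True

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator
```

A missing `delay_auth_decision` or mismatched service credentials in the authtoken section is a frequent cause of "works without keystoneauth, fails with it" symptoms.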
*** erlon has quit IRC21:41
*** NM has quit IRC21:48
*** jrichli has quit IRC21:50
*** jkugel has left #openstack-swift21:51
*** wbhuber has quit IRC22:03
*** shakamunyi has quit IRC22:04
*** barra204 has joined #openstack-swift22:04
pelusewrt earlier conversation on EC perf - we (intel) are also setting up 2 14-node clusters with varying CPUs and running a set of tests w/ both 3x and EC, and plan to have those results available sometime after the summit unfortunately22:08
*** EmilienM is now known as EmilienM|afk22:08
*** zhill_ has quit IRC22:20
*** annegentle has quit IRC22:23
*** zhill_ has joined #openstack-swift22:27
*** annegentle has joined #openstack-swift22:36
*** vinsh has quit IRC22:42
notmynamewheeee. been in meetings since 10:30 am23:09
*** annegentle has quit IRC23:09
notmynamefinally done23:09
notmynamethe longest one was a new hire training, so that was fun :-)23:09
*** zaitcev has joined #openstack-swift23:21
*** ChanServ sets mode: +v zaitcev23:21
*** barra204 has quit IRC23:22
*** jrichli has joined #openstack-swift23:42
InAnimaTenotmyname: do you work at swiftstack?23:43
InAnimaTelol k just answered my own question23:44
InAnimaTethank you for having proper whois information23:44
zaitcevoops23:44
*** zhill_ has quit IRC23:54

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!