Monday, 2015-05-11

*** doxavore has joined #openstack-swift00:02
*** dmorita has joined #openstack-swift00:33
*** kota_ has joined #openstack-swift00:46
kota_morning00:47
mattoliveraukota_: morning!00:56
kota_mattoliverau: good morning! this week is the last one we can spend until OpenStack Summit Vancouver :P00:57
mattoliveraukota_: yup, which is why I've been frantically working on sharding so we can discuss it.. I think I finally have a version of the SPEC that relates to the current status of the POC (minus what I've done on the weekend).00:58
mattoliveraukota_: when are you flying out? Saturday?00:59
kota_mattoliverau: from vancouver? scheduled Saturday.01:00
kota_mattoliverau: you want the discussion01:01
kota_whoops01:01
kota_the weekend after summit?01:02
mattoliveraukota_: lol, I see my english failed there, I meant fly in.. but fly out is just as interesting ;)01:04
kota_mattoliverau: lol, I'll arrive on Sunday morning :)01:06
mattoliveraukota_: sharding discussions can happen at summit, I'm just frantically doing work on it, so there is something to discuss01:06
mattoliveraukota_: lol, you'll be nicely jetlagged for Monday ;)01:06
mattoliverauI arrive Sat night01:06
kota_mattoliverau: absolutely, I'm a bit worried whether I can talk well :(01:07
kota_mattoliverau: I guess you've worked hard on the sharding stuff, so I'm looking forward to discussing it at the summit ;)01:09
mattoliveraukota_: actually you might be ok Monday, as you'll sleep all Sunday night due to exhaustion flying all the way, and forcing yourself to stay up all Sunday long.. it'll be Tuesday when the jetlag stops you sleeping ;P01:09
kota_mattoliverau: exactly!01:10
mattoliveraukota_: we'll see, I have the sharding "working" and was hoping to get time to do some benchmarks.. but this week will be busy so we'll see how far I get with that :)01:10
kota_mattoliverau: :)01:15
*** jrichli_ has joined #openstack-swift01:20
*** orzel has quit IRC01:21
*** jrichli has quit IRC01:22
*** jrichli__ has joined #openstack-swift01:58
*** proteusguy has joined #openstack-swift01:59
*** jrichli_ has quit IRC02:01
*** ho has quit IRC02:04
*** kota_ has quit IRC02:06
*** ho has joined #openstack-swift02:06
*** doxavore has quit IRC02:06
*** proteusguy has quit IRC02:15
*** jrichli__ has quit IRC02:32
*** NM has joined #openstack-swift02:35
*** NM has quit IRC02:43
*** NM has joined #openstack-swift02:44
*** kota_ has joined #openstack-swift02:57
*** NM has quit IRC03:02
*** tobe has joined #openstack-swift03:20
*** NM has joined #openstack-swift03:23
*** NM has quit IRC03:24
*** ho_ has joined #openstack-swift03:26
*** ho has quit IRC03:27
*** ho_ has quit IRC03:28
*** kota_ has quit IRC04:08
*** ppai has joined #openstack-swift04:08
*** vinsh has joined #openstack-swift04:10
*** ho has joined #openstack-swift04:10
*** vinsh has quit IRC04:21
*** kota_ has joined #openstack-swift04:39
*** tamizh_geek has joined #openstack-swift04:43
*** kei_yama has quit IRC04:53
*** kei_yama has joined #openstack-swift04:55
*** tobe has quit IRC05:03
*** chlong has quit IRC05:09
*** silor has joined #openstack-swift05:27
*** rmcall has quit IRC06:00
*** kota_ has quit IRC06:09
*** rmcall has joined #openstack-swift06:14
openstackgerritMatthew Oliver proposed openstack/swift-specs: Large Containers - container sharding spec  https://review.openstack.org/13992106:17
*** dmorita has quit IRC06:23
*** dmorita has joined #openstack-swift06:23
*** ppai has quit IRC06:29
*** jlmendezbonini has joined #openstack-swift06:55
*** jlmendezbonini has left #openstack-swift06:58
*** vinsh has joined #openstack-swift07:22
*** vinsh has quit IRC07:26
*** ppai has joined #openstack-swift07:31
*** tobe has joined #openstack-swift07:42
*** geaaru has joined #openstack-swift07:44
*** rmcall has quit IRC07:48
*** rmcall has joined #openstack-swift07:52
*** fifieldt has joined #openstack-swift07:57
*** fifieldt has quit IRC07:58
*** jistr has joined #openstack-swift07:58
*** jordanP has joined #openstack-swift08:11
*** acoles_away is now known as acoles08:55
*** geaaru has quit IRC09:07
openstackgerritEmmanuel Cazenave proposed openstack/swift: X-Auth-Token should be a bytestring.  https://review.openstack.org/18183409:07
openstackgerritEmmanuel Cazenave proposed openstack/swift: X-Auth-Token should be a bytestring.  https://review.openstack.org/18183609:10
*** aix has joined #openstack-swift09:20
*** knl has joined #openstack-swift09:21
*** haomaiwa_ has quit IRC09:31
*** xnox has quit IRC09:42
*** xnox has joined #openstack-swift09:48
*** geaaru has joined #openstack-swift10:19
*** ho has quit IRC10:47
*** dmorita has quit IRC10:51
*** zhill has quit IRC11:07
*** EmilienM|afk is now known as EmilienM11:28
*** zul has quit IRC11:37
*** zul has joined #openstack-swift11:38
*** tobe has quit IRC11:53
*** tab___ has joined #openstack-swift11:55
tab___Following the installation manual for Swift and Keystone (http://docs.openstack.org/juno/install-guide/install/yum/content/ch_swift.html): is generating SSH keys needed for a multi-node deployment for rsync, or not? The manual does not cover it within the Swift chapter, but it does mention it when creating the nova network, which I do not need.11:57
tab___calling the command for syncing a local and remote dir by hand, it does ask for a password: rsync -a dir1 root@swift-paco2:/home/dir111:58
ctenniswhere does it describe the ssh key setup?12:00
ctennisah I see12:01
ctennisit doesn't mention it, and you're wondering if it's required12:01
ctennisno, it's not required.  typically folks use the rsync daemon, which listens on port 873 and doesn't use or require ssh12:02
*** ppai has quit IRC12:03
*** ppai has joined #openstack-swift12:04
tab___but if calling the command by hand it wants a password12:05
tab___so rsync by itself should work just fine without keys?12:05
tab___ok12:05
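
What ctennis describes above is rsync's daemon mode, which Swift replication relies on. As a minimal sketch (the module name and path below are illustrative, chosen to match tab___'s example rather than taken from the install guide), an /etc/rsyncd.conf on the target node might look like:

    uid = swift
    gid = swift
    log file = /var/log/rsyncd.log
    pid file = /var/run/rsyncd.pid

    [dir1]
    path = /home
    read only = false
    max connections = 4

With the daemon running on port 873, the same copy uses the module syntax and needs no SSH keys or password prompt:

    rsync -a dir1 rsync://swift-paco2/dir1
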
*** ppai has quit IRC12:09
*** esker has quit IRC12:14
*** ppai has joined #openstack-swift12:24
*** km has quit IRC12:27
*** ppai has quit IRC12:32
*** haomaiwa_ has joined #openstack-swift12:43
*** early has quit IRC12:56
*** early has joined #openstack-swift12:57
*** fthiagogv has joined #openstack-swift12:59
*** NM has joined #openstack-swift12:59
*** haomaiwa_ has quit IRC13:04
*** kei_yama has quit IRC13:07
*** fifieldt has joined #openstack-swift13:17
*** fifieldt has quit IRC13:17
*** chlong has joined #openstack-swift13:22
*** Gue______ has joined #openstack-swift13:30
jordanPHi guys. We're running a third-party CI with a third-party DiskFile, and we specifically test python 2.6 support. We'd appreciate support for merging these backports to kilo and juno: https://review.openstack.org/#/c/181836/ and https://review.openstack.org/#/c/181834/ Those cherry-picks are one-line changes. Thanks a lot!13:32
*** esker has joined #openstack-swift13:32
acolesjordanP: added my +1, I don't seem to get to +2 backports13:34
eikkeacoles: thanks!13:35
*** jkugel has joined #openstack-swift13:35
acoleseikke: np13:35
*** Gue______ has quit IRC13:41
*** Gue______ has joined #openstack-swift13:53
*** wbhuber has joined #openstack-swift14:03
openstackgerritOpenStack Proposal Bot proposed openstack/python-swiftclient: Updated from global requirements  https://review.openstack.org/8925014:04
openstackgerritOpenStack Proposal Bot proposed openstack/swift: Updated from global requirements  https://review.openstack.org/8873614:04
*** bkopilov has quit IRC14:05
*** openstackgerrit has quit IRC14:06
*** openstackgerrit has joined #openstack-swift14:06
tdasilvagood morning14:17
*** openstack has joined #openstack-swift14:20
*** minwoob has joined #openstack-swift14:24
*** breitz has joined #openstack-swift14:28
*** annegentle has joined #openstack-swift14:40
*** vinsh has joined #openstack-swift14:53
*** knl has quit IRC14:54
*** Gue______ has quit IRC15:01
wbhubernotmyname: What are the hardware specs for the community QA cluster?15:03
wbhuber(To test / analyze the performance for EC)15:04
acolestdasilva: you're back! how's things?15:07
tdasilvaacoles: hey! everything is going well :) getting used to the new life15:08
acolesgreat! good to have you back around15:11
tdasilvaacoles: how are things going here? Seems like you guys have been very busy even after EC landed. How's preparation for vancouver?15:12
*** aix has quit IRC15:12
acolestdasilva: looks like vancouver will be a busy and interesting summit for swift - lots on the agenda15:15
tdasilvaacoles: yes, there are many presentations and it seems like the design summit discussions are very interesting too15:16
tdasilvaacoles: been trying to follow the hummingbird emails discussions, would like to start looking at code soon...sounds very cool15:17
acolestdasilva: yup15:18
*** fifieldt has joined #openstack-swift15:22
*** lpabon has joined #openstack-swift15:23
*** fifieldt has quit IRC15:23
*** lpabon has quit IRC15:24
*** Gue______ has joined #openstack-swift15:26
*** bkopilov has joined #openstack-swift15:27
openstackgerritLouis Taylor proposed openstack/python-swiftclient: POC: Service token support  https://review.openstack.org/18193615:32
*** annegentle has quit IRC15:37
*** annegentle has joined #openstack-swift15:38
*** acoles is now known as acoles_away15:40
*** gyee has joined #openstack-swift15:41
*** dencaval has joined #openstack-swift15:48
*** annegentle has quit IRC15:59
*** annegentle has joined #openstack-swift16:00
*** annegentle has quit IRC16:06
*** Gue______ has quit IRC16:14
*** vinsh_ has joined #openstack-swift16:15
*** jistr has quit IRC16:17
*** G________ has joined #openstack-swift16:17
*** vinsh has quit IRC16:18
*** bkopilov has quit IRC16:29
*** rmcall has quit IRC16:31
*** rmcall has joined #openstack-swift16:32
*** G________ has quit IRC16:32
notmynamegood morning16:38
notmynametdasilva: welcome back!16:38
*** Guest___ has joined #openstack-swift16:39
notmynamepeluse: is tsg around today?16:40
*** bkopilov has joined #openstack-swift16:43
*** esker has quit IRC16:46
*** nadeem has joined #openstack-swift16:46
*** Guest___ has quit IRC16:49
*** annegentle has joined #openstack-swift17:05
*** nadeem has quit IRC17:09
pelusechecking...17:12
pelusenotmyname, he is in an offsite training class today.  I can text him if you need something?17:12
tdasilvanotmyname: hi! thanks! good to be back17:13
notmynamepeluse: ah thanks.17:19
notmynamepeluse: nothing extraordinarily urgent. I've seen a lot of people have trouble getting pyeclib installed - the differences between the pypi version and downloading/packaging the source17:20
notmynamepeluse: I was hoping he could write up a couple of paragraphs about it, being explicit in what needs to be done depending on the source, and add it to the EC overview doc17:21
notmynameof course, he doesn't have to be the one to do that, but I was guessing he'd be the one who knows the most17:21
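
For context, the two pyeclib install routes being contrasted look roughly like this (a hedged sketch; exactly what the PyPI release bundled at the time is an assumption to verify against the pyeclib docs, and the directory names are placeholders):

    # route 1: the release on PyPI
    pip install pyeclib

    # route 2: from source - build and install liberasurecode (the native
    # library pyeclib wraps) first, then build pyeclib against it
    cd liberasurecode/ && ./autogen.sh && ./configure && make && make install
    cd ../pyeclib/ && python setup.py install
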
*** haomaiwa_ has joined #openstack-swift17:22
*** tamizh_geek has quit IRC17:31
pelusenotmyname, OK I'l shoot him an email17:31
*** tamizh_geek has joined #openstack-swift17:32
notmynamepeluse: mind if I answer your EC perf email question in here?17:36
peluseno prob17:36
notmynamethe question was "are you hitting line rate, or are you cpu limited before you get there?"17:37
peluseyes, at the storage node specifically17:37
notmynamethe QA cluster is all-in-one PACO nodes. I've got a graph of CPU/process group17:38
notmynameso that helps, but it is fuzzed since they are PACO nodes17:38
pelusesure17:39
notmynamein general, I started with a pretty high test load and got a lot of errors. a couple of local patches later and after significantly reducing the concurrency, we're running with no errors17:39
pelusewe noticed at line rate CPU pinned and tried shutting down all non-essential services and saw no drop in CPU, which seemed a little surprising and, of course, has us questioning the ability to do EC reconstruction under heavy load17:39
-openstackstatus- NOTICE: We have discovered post-upgrade issues with Gerrit affecting nova (and potentially other projects). Some changes will not appear and some actions, such as queries, may return an error. We are continuing to investigate.17:40
*** ChanServ changes topic to "We have discovered post-upgrade issues with Gerrit affecting nova (and potentially other projects). Some changes will not appear and some actions, such as queries, may return an error. We are continuing to investigate."17:40
notmynameso I say that to make it clear that I'm only looking for relative performance over replication instead of "what's possible with EC performance"17:40
notmynamebut...17:40
notmynameyes. CPU usage is high17:40
peluseyeah, our test was with 3x17:40
peluselooking to see how much CPU we would have left at the SN running at line rate17:40
notmynameI'm not sure yet if it's CPU-limited, but it is high, and I suspect (but don't have numbers on) that it's with the REPLICATE verb requests (ie socket connections and md5 of dir listings)17:41
notmynameat least, that would be in line with what I've seen before17:41
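
To make the REPLICATE suspicion concrete, here is a rough Python illustration of the kind of work involved - not Swift's actual implementation (that lives in the object server and replicator code), just the shape of it:

    import hashlib
    import os

    def hash_suffix_listing(suffix_dir):
        # hash the sorted entry names under one suffix directory so that
        # replication peers can compare state without shipping full
        # listings; doing this across many partitions per REPLICATE
        # request adds up to noticeable CPU time
        md5 = hashlib.md5()
        for name in sorted(os.listdir(suffix_dir)):
            md5.update(name.encode('utf-8'))
        return md5.hexdigest()
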
notmynamecompletely unfiltered raw results from a recent run: https://gist.githubusercontent.com/charz/77dd1c3f73b228bdd6a1/raw/393e72ce70b24f960f61bfce4608b578d1767136/gistfile1.txt17:42
peluseI'd have to check w/Bill on our end but I assume he turned on the replicator since I asked him to shut down all non-essential services17:42
pelusein your output what specifically does concurrency map to?17:43
peluseand, also, do you have similar data for 3x repl?17:43
*** annegentle has quit IRC17:43
notmynameconcurrency is from the ssbench config.17:44
notmynameand yes, this is repl and ec. note the policy column17:44
peluseis that basically number of connections?  or, like, number of workers and each has some number of connections?17:44
notmynametotal connections, not per worker. IIRC there are 8 workers serving those17:44
peluseoh, duh on the policy column, wasn't scrolling down far enough :)17:44
notmynameso yes, it's super low. but that's ok for relative tests17:44
*** geaaru has quit IRC17:45
pelusedo you have net/cpu/mem util that you can correlate with the ssbench output?17:45
notmynamefor the 60 drives in the cluster, there are 20 for EC, 20 for repl, and 20 that are for both (4 per server). that's what the "isolation" vs "shared" means17:45
notmynameyeah, it's in the swiftstack controller data. I haven't actually extracted that for the run. but I also have all the logs and configs and rings for those17:46
*** fresh has joined #openstack-swift17:46
notmynameI'm going to talk to charz about getting the time-series data from the controller this week. and whatever we have will be available next week at the summit17:46
notmynameyou can see what the file sizes are (ie what the ssbench scenarios are) in this pull request: https://github.com/swiftstack/ssbench/pull/107/files17:47
pelusethat was going to be my next question :)17:48
notmynamesuper tiny ones (a few bytes) to multi-GB ones17:48
notmynamebasically, it's a few brackets that, I hope, will show a point at which EC and replication become "interesting"17:50
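
For anyone following along without ssbench handy: a scenario is a small JSON file describing size brackets and workload mix, roughly like the sketch below. The key names are from memory of ssbench's README, so treat the exact schema as an assumption and defer to the pull request linked above:

    {
        "name": "small bracket (illustrative)",
        "sizes": [
            {"name": "small", "size_min": 1000000, "size_max": 5000000}
        ],
        "initial_files": {"small": 100},
        "operation_count": 5000,
        "crud_profile": [6, 2, 1, 1],
        "user_count": 8
    }
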
notmynameand this is a 6+4 scheme with ISA-L17:50
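
For reference, a 6+4 ISA-L scheme is expressed as a storage policy in swift.conf along these lines (the policy index and name here are illustrative):

    [storage-policy:2]
    name = ec-6-4
    policy_type = erasure_coding
    ec_type = isa_l_rs_vand
    ec_num_data_fragments = 6
    ec_num_parity_fragments = 4
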
peluseyeah, when caleb comes out we'll have lots of comparison-analysis to do wrt test setup, methodology, results to date, settings, etc.  Will be great to spend time on this at the summit to better prepare for that trip17:51
notmynameright17:51
peluseand hopefully get some more folks engaged as well, I know kota showed some interest17:51
peluseand mark seger has emailed me as well but he won't be at the summit.  he wants to get involved though once we have our feet under us wrt what we're testing, how, what we're seeing17:52
*** jordanP has quit IRC17:52
notmynamehmm17:53
notmynameI'm just looking at those results for the first time myself17:53
notmynamelooks like the current inflection point is in the "small" tests. at that point, or right after, EC is faster than replication17:53
notmynamesmall == 1MB -> 5MB objects17:54
*** esker has joined #openstack-swift17:56
pelusemaybe throw those results in an etherpad or something so we can put markers in interesting places.  not sure which rows/cols are grabbing your attention17:57
notmynamelooking at req/sec17:57
notmynameI want to graph them and see what pops out17:57
notmynamedepending on the size, I want to bundle up some of the data and make a public link to it17:58
notmynameprobably not the logs, since those are multi-GB per run17:58
peluseyeah, better idea17:58
notmynameanyway, that's an interesting way to start monday, but I've got to move on to a couple of other things (defcore patches and summit talk prep)17:59
peluseditto17:59
*** vinsh has joined #openstack-swift18:08
*** vinsh_ has quit IRC18:11
*** annegentle has joined #openstack-swift18:12
wbhuberSorry if I missed anything, but what is a miniscule test as opposed to tiny & small test?18:16
notmynamewbhuber: check the pull request link above. mostly, miniscule < small18:16
notmyname(I had already used "small" and needed something smaller)18:17
claygdid everyone have a good weekend?18:17
notmynameclayg: yup. except when I had to take apart my washing machine. but actually that's kinda fun too, since you get to play with tools18:19
notmynamewbhuber: also, to your earlier question, the community QA cluster is 5 servers, 12 drives each. drives are donated by HGST. there are 4 servers with 6TB drives and 1 server with 8TB drives. the CPUs are donated by Intel and they are Avoton chips (SoC with 8 cores) and 8GB of RAM18:19
notmynameclayg: my oldest finished his spring soccer season this weekend with an undefeated record. so that was good :-)18:20
wbhubernotmyname: got it and thanks. will prolly come up with cascading questions after digesting them.18:22
*** morganfainberg has quit IRC18:25
*** clduser_ has quit IRC18:25
*** swifterdarrell has quit IRC18:25
*** torgomatic has quit IRC18:25
*** rsFF has quit IRC18:25
*** remix_auei has joined #openstack-swift18:25
*** nadeem has joined #openstack-swift18:26
claygnotmyname: your oldest is like 7 - do they even keep score?18:26
*** chrisnelson has quit IRC18:26
*** AbyssOne has quit IRC18:26
*** chrisnelson_ has joined #openstack-swift18:26
*** remix_tj has quit IRC18:26
*** eikke has quit IRC18:26
pelusenice!18:26
*** eikke has joined #openstack-swift18:26
pelusewbhuber, at Intel we are also setting up a perf cluster for EC and working with notmyname's company to coordinate testing18:27
*** morganfainberg has joined #openstack-swift18:27
*** clduser_ has joined #openstack-swift18:27
*** swifterdarrell has joined #openstack-swift18:27
*** torgomatic has joined #openstack-swift18:27
*** rsFF has joined #openstack-swift18:27
*** sendak.freenode.net sets mode: +vv swifterdarrell torgomatic18:27
*** hub_cap has left #openstack-swift18:27
*** silor has quit IRC18:27
peluse2 proxies, 10 storage nodes with 12 disks each + an SSD each.  Actually 2 clusters like this, one with Avoton (Atom-based) CPUs in the nodes and one with Xeon18:27
*** nadeem has quit IRC18:27
wbhuberpeluse: good to know.  how soon is the perf cluster ready for test?18:28
minwoobpeluse: Possibly, for performance testing, is it better to avoid using a PACO setup?18:28
*** annegentle has quit IRC18:29
*** annegentle has joined #openstack-swift18:29
minwoobIt seems that PACO may be more for functional testing than for stress tests.18:30
*** tamizh_geek has quit IRC18:30
*** AbyssOne has joined #openstack-swift18:31
pelusewbhuber, we're baselining it now w/plans to have real data coming out of it in mid-June (after the summit and after some key learnings from the work notmyname just mentioned)18:32
notmynameminwoob: yeah, I feel the same way. PACO for testing/validation or small clusters.18:32
peluseditto18:32
notmynameclayg: he's playing a year up in U8 ;-). will be in U9 competitive in the fall18:32
*** NM1 has joined #openstack-swift18:38
*** NM has quit IRC18:38
*** nadeem has joined #openstack-swift18:38
*** Gu_______ has joined #openstack-swift18:39
*** annegentle has quit IRC18:40
*** annegentle has joined #openstack-swift18:41
pelusenotmyname, tsg said no problem on the addtl pyeclib install docs.  I assume you mean on the pyeclib page - we already have a link there in the Swift overview docs but don't on, like, the SAIO instructions, so we could add it there too18:42
notmynamepeluse: I was thinking http://docs.openstack.org/developer/swift/overview_erasure_code.html but really just any place that's easily referencable is good18:43
wbhuberpeluse: do you have more specific HW specs for the perf cluster?  CPU?  Size of the disks?18:44
pelusewbhuber, on the Intel side, sure.  Will dig them up18:44
pelusenotmyname, so yeah I put the pyeclib section in that doc and purposely left off installation details so that the Swift docs wouldn't need to be updated if/when they change18:44
peluseso there's already a link there but no good installation directions at the other end :(18:45
*** fthiagogv has quit IRC18:46
peluseonce he updates the pyeclib site with installation details, I'll clarify the link in our docs to say "For isntallaation details, see.." but I'll spell installation correctly18:46
notmyname:-)18:47
*** harlowja has quit IRC18:52
*** harlowja has joined #openstack-swift18:52
*** Gu_______ has quit IRC18:57
*** Gu_______ has joined #openstack-swift18:59
minwoobpeluse: Are you reserving x number of disks for certain storage policies, or are all being used during each performance run?19:04
minwoobpeluse: And are your proxies using SSDs for the account/container databases?19:14
pelusessd for databases only, yes19:20
pelusewe plan to share all policies on all disks but run Ec tests separately from repl tests19:21
pelusealso note that we use cosbench as a workload generator (as opposed to ssbench) and zabbix on all nodes to gather mem/cpu/net stats19:21
peluseminwoob, but the SSDs holding those are on the storage nodes (1 each) not in the proxies19:23
peluseonly OS boot drive on proxies...19:24
*** nadeem has quit IRC19:24
*** lpabon has joined #openstack-swift19:43
*** dencaval has quit IRC19:52
*** Gu_______ has quit IRC19:53
openstackgerritMerged openstack/swift: Remove workaround for old eventlet version  https://review.openstack.org/18159719:53
*** fresh has quit IRC19:55
*** nadeem has joined #openstack-swift19:56
*** nadeem has quit IRC20:14
*** drwho has joined #openstack-swift20:17
openstackgerritMichael Barton proposed openstack/swift: go: clean up and add to obj tests  https://review.openstack.org/18206520:18
openstackgerritMichael Barton proposed openstack/swift: go: new tmp dir layout  https://review.openstack.org/18206620:19
notmynameredbo: any thoughts on doing that same tmp dir patch for python?20:20
*** nadeem has joined #openstack-swift20:22
redboI don't know, the discussion kind of seemed to stall.20:25
redboI guess I could take it away from whoever was working on it20:32
notmynameah. was there already a patch for it?20:36
*** MVenesio has joined #openstack-swift20:38
*** rmcall has quit IRC20:41
*** openstackgerrit_ has joined #openstack-swift20:43
*** zaitcev has joined #openstack-swift20:46
*** ChanServ sets mode: +v zaitcev20:46
*** Fin1te has joined #openstack-swift20:51
redbonotmyname: there was https://review.openstack.org/#/c/180883/20:53
notmynameredbo: oh yeah. thanks.20:56
notmynameshri normally gets on later in our day, IIRC. might be good to chat with him to see what's the best way to get something landed20:57
*** haomai___ has joined #openstack-swift21:02
*** wbhuber has quit IRC21:02
*** haomaiwa_ has quit IRC21:05
*** annegent_ has joined #openstack-swift21:10
*** esker has quit IRC21:11
*** annegentle has quit IRC21:15
*** Fin1te has quit IRC21:29
notmynamefun fact found while working on summit presentations. current sloc for swift is roughly 109k. 25% is in swift/. ie only about 1/4 of our codebase is the actual code. the rest is tests (with a little for bin files)21:32
*** mwheckmann has joined #openstack-swift21:34
mwheckmannhello. Does anyone know the status of the container-to-container sync feature? That is to say syncing containers between remote distinct clusters?21:36
mwheckmannas described here: http://docs.openstack.org/developer/swift/overview_container_sync.html21:36
mwheckmannIt works great at first but it seems to be fragile.21:37
mwheckmannI can break synchronization by issuing successive deletes of objects (in a for loop for example).21:37
mwheckmannThis is with Juno.21:37
mwheckmannConnectivity between my proxies and remote object nodes is fine21:38
claygmwheckmann: maybe this bug -> https://bugs.launchpad.net/swift/+bug/141361921:39
openstackLaunchpad bug 1413619 in OpenStack Object Storage (swift) "when container sync run, already deleted object is synced" [Undecided,New] - Assigned to Gil Vernik (gilv)21:39
mwheckmannclayg: hmmm.. maybe. that's not the error I'm seeing, but I haven't turned on debug or anything like that.21:41
*** MVenesio has quit IRC21:41
mwheckmannI don't have a specific container-sync log. Didn't turn that on.21:42
mwheckmannclayg: What I'm seeing in the logs is repetitive attempts to delete the objects but they're not actually getting deleted.21:44
claygwell the daemon will log - even if it's just to /var/log/syslog prefixed with container-sync - you might turn on DEBUG - if you have a reproducible error it'd be wonderful if you could include details on a bug report on launchpad21:44
mwheckmannwill try21:44
mwheckmanndefinitely 100% reproducible21:44
claygoh interesting, the except block may need to allow for 40921:45
claygI think a conflict on PUT is translated to accepted, deleting an object with the exact same timestamp probably blows up and prevents sync point from moving forward21:45
clayginteresting21:45
mwheckmannclayg: You're saying that I'm seeing this behaviour because I uploaded these objects at the same time *and* because I'm deleting them at the same time?21:47
claygmwheckmann: no I think it's just a bug for delete's21:48
claygthe acceptable status for sync_point 2 delete needs to be expanded to allow for a 409 status21:48
mwheckmannWell I'm happy to test a patch. This is a lab cluster. In the meantime, I'll try to get some more useful logs21:49
mwheckmann(lab clusters in the plural)21:50
*** annegent_ has quit IRC21:52
*** jrichli has quit IRC21:52
*** annegentle has joined #openstack-swift21:53
notmynameyay Swift! https://github.com/bouncestorage/swiftproxy21:54
*** NM1 has quit IRC21:56
*** annegentle has quit IRC22:00
*** jkugel has quit IRC22:13
mwheckmannclayg: with debug on I'm now getting the same error as bug #141361922:14
openstackbug 1413619 in OpenStack Object Storage (swift) "when container sync run, already deleted object is synced" [Undecided,New] https://launchpad.net/bugs/1413619 - Assigned to Gil Vernik (gilv)22:14
mwheckmann100% reproducible22:14
*** wbhuber has joined #openstack-swift22:18
claygyeah, if it's what I think it is, it should be trivial to write a probe test for22:18
claygmwheckmann: can you confirm the sync target is responding with a 409 to the second DELETE request?22:18
mwheckmannclayg: checking...22:21
minwoobQuick question - what determines when an approved patch gets merged?22:25
minwoobSpecifically, for the master branch.22:25
mwheckmannclayg: No 409's from the target proxy whatsoever22:26
*** drwho has quit IRC22:26
mwheckmannjust a bunch of 204 responses22:26
mwheckmannfollowed by 404's on subsequent attempts to delete the same object22:26
zaitcevminwoob: One of the core developers approves or "lands" it. Not sure about other projects, but in Swift we have a convention that one core has to review a patch first, and the second one may land it.22:27
mwheckmannclayg: there were 66 objects total in the container to be deleted. The target container proxy only 204'd 59 DELETE requests.22:29
minwoobzaitcev: Ah, I see. Thank you.22:30
mwheckmannclayg: some objects (about 9) were never even subjected to a DELETE22:30
mattoliverauMorning22:32
claygmwheckmann: hrmm... i wasn't expecting the 404's - everything you said makes sense, including it not working - but only if the subsequent deletes 409 - if they return 404 all the way to the sync client then it should have made progress22:34
*** nadeem has quit IRC22:40
mwheckmannclayg: It looks like the 404's are subsequent DELETE attempts on objects that *were* successfully deleted.22:42
mwheckmannThe problem is that there are outstanding objects that we *never* end up seeing a remote DELETE request for.22:42
*** esker has joined #openstack-swift22:42
claygmwheckmann: yeah, that would make sense if the sync_points aren't progressing because the sync_point2 sweep over the already-deleted objects blows up because of a 40922:48
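
A hedged sketch of the fix clayg is describing - not the actual patch, and the helper name here is made up; the real logic lives in swift/container/sync.py. The idea: a DELETE replayed past sync_point2 can hit a peer that already holds a tombstone for the same timestamp, which answers 409 Conflict, and that should be swallowed just like a 404 so the sync point keeps advancing:

    from swiftclient import ClientException, delete_object

    def replay_row_delete(sync_to, name, headers):
        # replay a DELETE at the sync target; 404 (already gone) and 409
        # (tombstone with the same timestamp) both mean the row is done
        try:
            delete_object(sync_to, name=name, headers=headers)
        except ClientException as err:
            if err.http_status not in (404, 409):  # previously only 404
                raise
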
*** occupant has joined #openstack-swift22:53
*** esker has quit IRC22:57
mwheckmannclayg: I created a new bug #1453993  as that other one wasn't clear23:09
openstackbug 1453993 in OpenStack Object Storage (swift) "container sync gets stuck after deleting all objects" [Undecided,New] https://launchpad.net/bugs/145399323:09
*** bill_az has joined #openstack-swift23:19
*** esker has joined #openstack-swift23:22
*** km has joined #openstack-swift23:26
*** esker has quit IRC23:37
*** jrichli has joined #openstack-swift23:48
*** ho has joined #openstack-swift23:49
*** minwoob has quit IRC23:50
hogood morning!23:53
-openstackstatus- NOTICE: Gerrit is going offline while we perform an emergency downgrade to version 2.8.23:55
*** ChanServ changes topic to "Gerrit is going offline while we perform an emergency downgrade to version 2.8."23:55
mattoliverauho: morning23:55
homattoliverau: morning!23:56

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!