Thursday, 2018-07-12

<paladox> notmyname: does memcached need to be on the swift servers if the proxy is not on them?  00:09
*** linkmark has quit IRC  00:23
<mattoliverau> oh, morning again. I forgot to say anything here. Was too busy trying to think of something to submit for the Berlin summit.  00:31
<mattoliverau> notmyname: thanks for the bug reports  00:31
<openstackgerrit> Joshua Harlow proposed openstack/python-swiftclient master: Log exceptions received during retrying  https://review.openstack.org/581921  00:51
*** labster has left #openstack-swift  01:01
*** itlinux has joined #openstack-swift  01:36
<openstackgerrit> Merged openstack/python-swiftclient master: Add ability to generate a temporary URL with an IP range restriction  https://review.openstack.org/581374  01:46
<openstackgerrit> Merged openstack/python-swiftclient master: Treat 404 as success when deleting segments  https://review.openstack.org/538349  01:46
<clayg> https://www.irccloud.com/pastebin/dM2SwkHz/  01:50
<clayg> well, that's kinda stupid...  01:51
<clayg> ContextualVersionConflict: (ipaddress 1.0.16 (/usr/lib/python2.7/site-packages), Requirement.parse('ipaddress>=1.0.17'), set(['swift']))  01:51
<clayg> ^ cc tdasilva: seeing some probetest CentOS 7 failures about that there  01:51
<clayg> just FYI  01:52
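The ContextualVersionConflict clayg pasted is raised by setuptools' pkg_resources when an installed distribution is older than a declared requirement. A minimal sketch of the same version check (versions here mirror the traceback; this only models the comparison, not swift's install):

```python
# Sketch of the check behind ContextualVersionConflict: an installed version
# is tested against a requirement's specifier; too-old versions fail, and
# working_set.resolve() turns that failure into the exception seen above.
from pkg_resources import Requirement

req = Requirement.parse('ipaddress>=1.0.17')

print('1.0.16' in req)  # False - the installed 1.0.16 does not satisfy it
print('1.0.17' in req)  # True  - upgrading the package resolves the conflict
```

The practical fix is usually upgrading the offending package (here, ipaddress) on the failing host or image.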
*** itlinux has quit IRC  02:02
<kota_> clayg: I'm currently with you at p 581913  02:19
<patchbot> https://review.openstack.org/#/c/581913/ - swift - Include s3api schemas in sdists  02:19
<clayg> heheh  02:20
<kota_> I can find the rng schema in my ubuntu distro.  02:20
* clayg shrugs  02:20
<clayg> packaging is always a cluster  02:20
<kota_> so I also don't know why it doesn't work with CentOS packaging.  02:20
*** yuxin_ has quit IRC  02:20
<clayg> I appreciate you confirming I'm not entirely crazy, though  02:21
<kota_> I should go look at your link [2]  02:21
*** yuxin_ has joined #openstack-swift  02:21
<kota_> it's sort of an "And creating “smart” built distributions, such as an RPM package or an executable installer for Windows, is far more convenient for users even if your distribution doesn’t include any extensions." thing?  02:23
<kota_> interesting  02:24
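Patch 581913's subject ("Include s3api schemas in sdists") points at the usual fix for this class of problem: non-Python data files must be declared in the packaging metadata or they silently go missing from the source distribution. A generic setup.cfg sketch of that declaration — the package path here is hypothetical, not swift's actual layout:

```ini
# Hypothetical setup.cfg fragment: ship RELAX NG schema files inside the
# sdist/wheel so they are present wherever the package is installed.
[options]
include_package_data = True

[options.package_data]
example_middleware.schema = *.rng
```

When distro packages (as on kota_'s Ubuntu) ship the schemas independently, the bug only surfaces on installs built from the sdist — which matches it working on one distro and failing on another.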
<openstackgerrit> Nguyen Hai proposed openstack/swift master: add lower-constraints job  https://review.openstack.org/556255  02:24
*** psachin has joined #openstack-swift  02:45
<openstackgerrit> Clay Gerrard proposed openstack/python-swiftclient master: Add more validation for ip_range args  https://review.openstack.org/581906  02:57
*** spsurya_ has joined #openstack-swift  03:32
<openstackgerrit> Clay Gerrard proposed openstack/swift master: Add unittest for slo_etag  https://review.openstack.org/581943  03:51
*** links has joined #openstack-swift  05:03
*** SkyRocknRoll has joined #openstack-swift  05:17
*** SkyRocknRoll has quit IRC  05:25
*** dr_gogeta86 has quit IRC  05:26
<openstackgerrit> Merged openstack/swift master: Include s3api schemas in sdists  https://review.openstack.org/581913  05:30
<openstackgerrit> Merged openstack/swift master: Include SLO ETag in container updates  https://review.openstack.org/337960  05:30
*** d0ugal_ has joined #openstack-swift  05:55
*** d0ugal has quit IRC  05:56
<kota_> thanks clayg for looking at patch 337960. I was worried I may be a blocker to getting it merged.  06:11
<patchbot> https://review.openstack.org/#/c/337960/ - swift - Include SLO ETag in container updates (MERGED)  06:11
<clayg> kota_: I'd still love to get your opinion on the follow-up - but I'm working (slowly) on getting my s3api setup  06:12
<tdasilva> clayg: trying to catch up, where do you see probetest failures?  06:15
<clayg> https://review.openstack.org/#/c/581913/  06:15
<patchbot> patch 581913 - swift - Include s3api schemas in sdists (MERGED)  06:15
*** armaan has joined #openstack-swift  06:18
*** cshastri has joined #openstack-swift  06:18
*** bkopilov has quit IRC  06:30
*** hseipp has joined #openstack-swift  06:41
*** hseipp has quit IRC  06:45
*** hseipp has joined #openstack-swift  06:45
*** hseipp has quit IRC  06:57
*** bkopilov has joined #openstack-swift  06:58
<kota_> clayg: alright, I'll do my best; perhaps I'll have time to look at it...  07:06
<clayg> s'ok, if you don't I'll get to it eventually ;)  07:07
*** gkadam has joined #openstack-swift  07:14
*** rcernin has quit IRC  07:20
<kota_> clayg: much appreciated that you made it progress XD  07:29
<clayg> kota_: timburke: did y'all see this one? p 580333  07:37
<patchbot> https://review.openstack.org/#/c/580333/ - swift - HEAD to check existence before container PUT  07:37
<openstackgerrit> Christian Schwede proposed openstack/swift master: Fix misleading error msg if swift.conf unreadable  https://review.openstack.org/581280  07:45
<kota_> clayg: let me check what p 580333 would solve, thanks for heads up  07:56
<patchbot> https://review.openstack.org/#/c/580333/ - swift - HEAD to check existence before container PUT  07:56
<kota_> head up  07:56
<kota_> i have only one head.  07:56
<kota_> :/  07:56
*** mikecmpbll has joined #openstack-swift  08:01
*** d0ugal_ has quit IRC  08:03
*** d0ugal has joined #openstack-swift  08:03
*** d0ugal has quit IRC  08:03
*** d0ugal has joined #openstack-swift  08:03
<kota_> hmmm... I'm wondering how we should estimate heavy container DB load (i.e. getting LockTimeout on the sqlite DB) for prod clusters.  08:10
<kota_> it sounds obviously unhealthy, and then 503 Service Unavailable is not such a bad status for us?  08:10
*** itlinux has joined #openstack-swift  08:16
*** ccamacho has joined #openstack-swift  08:17
<kota_> clayg: is there any idea how large the overhead difference is between HEAD container and PUT container?  08:20
*** hseipp has joined #openstack-swift  08:21
<kota_> I'm now looking at the container-server code; PUT container obviously updates the timestamp and commits the change to the DB, but it looks like HEAD container also calls commit_puts_stale_ok() to get the container info, which will make a commit to merge the pending file.  08:21
<kota_> if my eyes aren't deceiving me, changing PUT container to HEAD container is not so effective at mitigating the load.  08:22
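kota_'s observation can be modeled with a toy version of the pending-update pattern: writes append to a cheap pending file, and any request that needs fresh info — even a read like HEAD — must first merge the pending records, so the read is not automatically cheaper than the write. This is a simplified sketch under that assumption, not swift's actual ContainerBroker code:

```python
# Toy model of a container DB with a pending file. PUTs append cheap pending
# records; any request needing fresh info (even a HEAD-style read) first pays
# for merging them into the DB - the cost kota_ is pointing at.
class ToyContainerDB:
    def __init__(self):
        self.rows = []     # "committed" records
        self.pending = []  # append-only pending records (cheap to write)
        self.commits = 0   # count of (expensive) merge commits

    def put_object(self, name):
        self.pending.append(name)  # cheap: no DB commit yet

    def _commit_pending(self):
        if self.pending:
            self.rows.extend(self.pending)
            self.pending = []
            self.commits += 1      # the expensive part

    def get_info(self):
        # analogous to commit_puts_stale_ok(): a read triggers the merge too
        self._commit_pending()
        return {'object_count': len(self.rows)}


db = ToyContainerDB()
for i in range(3):
    db.put_object('obj%d' % i)
print(db.commits)     # 0 - the PUTs alone committed nothing
print(db.get_info())  # the HEAD-style read forces the merge commit
print(db.commits)     # 1
```

Under this model, swapping a PUT for a HEAD on a hot container DB saves the timestamp update but still pays for the pending-file merge, which supports kota_'s doubt about the mitigation.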
*** links has quit IRC  08:24
*** mikecmpb_ has joined #openstack-swift  08:24
*** mikecmpbll has quit IRC  08:25
*** links has joined #openstack-swift  08:26
*** mvk_ has quit IRC  08:30
*** tesseract has joined #openstack-swift  08:37
*** mvk_ has joined #openstack-swift  08:54
<acoles> good morning  08:55
<openstackgerrit> Christian Schwede proposed openstack/swift master: Fix misleading error msg if swift.conf unreadable  https://review.openstack.org/581280  09:01
*** cshastri_ has joined #openstack-swift  09:03
*** cshastri has quit IRC  09:06
*** links has quit IRC  09:10
*** links has joined #openstack-swift  09:12
*** mvk_ has quit IRC  09:16
*** hoonetorg has quit IRC  09:27
*** mvk_ has joined #openstack-swift  09:28
<openstackgerrit> Alistair Coles proposed openstack/swift master: Check other params preserved when slo_etag is extracted  https://review.openstack.org/582125  09:31
*** hoonetorg has joined #openstack-swift  09:39
*** mikecmpb_ has quit IRC  09:50
*** mikecmpbll has joined #openstack-swift  09:52
*** kei_yama has quit IRC  11:27
*** armaan has quit IRC  11:48
*** armaan has joined #openstack-swift  11:49
*** itlinux has quit IRC  12:01
*** armaan has quit IRC  12:41
*** armaan has joined #openstack-swift  12:41
*** zaitcev has joined #openstack-swift  13:37
*** ChanServ sets mode: +v zaitcev  13:37
*** psachin has quit IRC  13:46
*** links has quit IRC  13:50
*** psachin has joined #openstack-swift  13:51
*** armaan_ has joined #openstack-swift  14:01
*** armaan has quit IRC  14:02
*** linkmark has joined #openstack-swift  14:03
*** psachin has quit IRC  14:05
*** mikecmpbll has quit IRC  14:08
*** mikecmpbll has joined #openstack-swift  14:09
*** ccamacho has quit IRC  14:16
*** ccamacho has joined #openstack-swift  14:21
<clayg> acoles: good morning  14:57
<acoles> clayg: o/  14:57
<acoles> clayg: I +A'd your patch but it failed a test in the gate https://review.openstack.org/581943 :(  14:58
<patchbot> patch 581943 - swift - Add unittest for slo_etag  14:58
<clayg> thanks for trying, i guess that test is flaky?  14:58
<acoles> I think the JSON needs to be loaded to avoid key-order variations in the serialized version  14:59
<acoles> was going to fix it but got engrossed in PUT+POST  14:59
<clayg> k, i'll square it away  14:59
<acoles> IIRC you're comparing two serialized versions  14:59
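The flakiness acoles describes is a classic: two JSON serializations of the same data can differ in key order, so a test comparing raw strings passes or fails depending on dict ordering. Comparing the parsed objects (or canonicalizing with sort_keys=True) removes the flake. A minimal illustration:

```python
# Comparing serialized JSON strings is fragile: key order in the output is
# not guaranteed. Compare parsed objects, or canonicalize before comparing.
import json

a = '{"slo_etag": "abc", "etag": "xyz"}'
b = '{"etag": "xyz", "slo_etag": "abc"}'

print(a == b)                           # False: same data, different order
print(json.loads(a) == json.loads(b))   # True: dicts compare by content

def canon(s):
    return json.dumps(json.loads(s), sort_keys=True)

print(canon(a) == canon(b))             # True: canonical form is stable
```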
*** armaan_ has quit IRC  15:01
<openstackgerrit> Clay Gerrard proposed openstack/swift master: Add unittest for slo_etag  https://review.openstack.org/581943  15:02
<openstackgerrit> Clay Gerrard proposed openstack/swift master: Check other params preserved when slo_etag is extracted  https://review.openstack.org/582125  15:02
<acoles> whoosh  15:07
*** cshastri_ has quit IRC  15:23
*** ccamacho has quit IRC  15:26
*** gyee has joined #openstack-swift  15:30
*** gyee has quit IRC  15:34
*** gkadam has quit IRC  15:47
<notmyname> good morning  15:53
*** bharath1234 has joined #openstack-swift  16:03
<bharath1234> torgomatic, I am studying the unique-as-possible placement algorithm. I'm reading the code in the get_more_nodes function, which I believe is used to get the handoff nodes. I didn't get why you hashed the partition number and shifted it by the partition shift. The number of parts in my cluster is 1024, and when we hash the partition number and shift, I get 192. Could you elaborate on why that was done? Thank you  16:03
*** bharath1234 has quit IRC  16:04
<openstackgerrit> John Dickinson proposed openstack/swift master: added docker test target tools  https://review.openstack.org/577467  16:06
*** gyee has joined #openstack-swift  16:08
*** hseipp has quit IRC  16:20
<clayg> bharath1234: you mean specifically:         part = struct.unpack_from('>I', key)[0] >> self._part_shift ?  16:21
*** armaan has joined #openstack-swift  16:22
<clayg> I feel the easiest way to think of that step is just as a modulo? basically you're just placing the key into the bucket space - but maybe with a little fancy math.  16:24
*** itlinux has joined #openstack-swift  16:25
<clayg> oh... no, your question is more specific - in get_more_nodes we're not hashing a name - we already have a part - so why do the rehash?  16:26
*** spsurya_ has quit IRC  16:26
<clayg>         part_hash = md5(str(part).encode('ascii')).digest()  16:26
<clayg> ^ yeah, idk, that looks kind of weird?!  16:26
*** mikecmpbll has quit IRC  16:28
<clayg> I think we started hashing the part here: https://review.openstack.org/#/c/23404/  16:32
<patchbot> patch 23404 - swift - Updated get_more_nodes algorithm (MERGED)  16:32
*** itlinux has quit IRC  16:32
<clayg> it'd be awesome to ask gholt why that might have been - but he would claim he doesn't remember  16:33
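The two lines clayg quotes can be run standalone: md5 the key, take the first four digest bytes as a big-endian unsigned int, and right-shift away all but the top bits so the result lands in [0, 2**part_power). For a 1024-partition ring (part power 10) the shift is 32 - 10 = 22. This is a self-contained sketch mirroring the quoted lines, not swift's Ring class:

```python
# Standalone sketch of the hashing step quoted above. get_more_nodes applies
# the same transform to the partition *number* itself, giving a deterministic
# but scrambled starting point for walking candidate handoff nodes.
import struct
from hashlib import md5

part_power = 10                # 1024 partitions, as in bharath1234's cluster
part_shift = 32 - part_power   # 22

def key_to_part(key):
    digest = md5(key.encode('ascii')).digest()
    # first 4 digest bytes as big-endian uint, keep only top part_power bits
    return struct.unpack_from('>I', digest)[0] >> part_shift

part = 7
rehashed = key_to_part(str(part))      # the "weird" rehash of a known part
print(0 <= rehashed < 2 ** part_power) # True: always a valid partition index
```

The effect of re-hashing an already-known part is to decorrelate the handoff search order from the partition numbering, so neighboring partitions don't all spill onto the same handoff devices.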
<openstackgerrit> Alistair Coles proposed openstack/swift master: PUT+POST: Detect older object server by not sending content-type  https://review.openstack.org/582298  16:39
<clayg> timburke: do we have an example ~/.s3cfg for tempauth test:tester?  16:39
<clayg> like on a saio?  16:39
<clayg> https://docs.openstack.org/swift/latest/middleware.html#module-swift.common.middleware.s3api.s3api I guess  16:40
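For reference, an s3cmd ~/.s3cfg pointed at a SAIO's s3api endpoint generally looks like the sketch below. The credentials follow the common tempauth convention (access key "test:tester", secret "testing"), but every value here is an assumption to verify against your own proxy-server.conf:

```ini
# Hypothetical ~/.s3cfg for s3cmd against a SAIO running s3api + tempauth.
# Host/port and credentials must match your local proxy configuration.
[default]
access_key = test:tester
secret_key = testing
host_base = 127.0.0.1:8080
host_bucket = 127.0.0.1:8080
use_https = False
signature_v2 = True
```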
<clayg> acoles: you're on fire!  16:51
*** armaan has quit IRC  17:03
*** tesseract has quit IRC  17:16
*** mikecmpbll has joined #openstack-swift  17:21
<notmyname> tdasilva: timburke: kota_: looking at pyeclib, I don't think there's anything there that needs a release. however, a libec release (just an x.x.1) might be good. that will get out a patch with better crc32  17:31
<zaitcev> https://www.mail-archive.com/python-committers@python.org/msg05628.html  17:53
<zaitcev> clayg: Without seeing the code, I think you are right to be suspicious. All invocations of str() carry a danger of producing "b'foo'" silently. I would say, extreme danger even. We really should aim to exterminate all str() and not think of it as a handy way to coerce to a native string.  17:55
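zaitcev's warning is easy to demonstrate: on Python 3, str() applied to a bytes object does not decode it — it returns the repr, quietly embedding b'...' into whatever string you were building. An explicit decode is the safe coercion:

```python
# str() on bytes silently yields the repr on Python 3; the mangled value
# then sneaks into paths, headers, or log lines without raising anything.
raw = b'foo'

print(str(raw))             # "b'foo'"  - almost never what you wanted
print(raw.decode('utf-8'))  # "foo"     - explicit decode is the intent

path = '/v1/AUTH_test/%s' % str(raw)   # hypothetical path for illustration
print(path)                 # "/v1/AUTH_test/b'foo'"
```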
*** mikecmpbll has quit IRC  18:10
*** mikecmpbll has joined #openstack-swift  18:10
*** mikecmpbll has quit IRC  18:11
*** bkopilov has quit IRC  18:18
*** mikecmpbll has joined #openstack-swift  18:22
*** armaan has joined #openstack-swift  18:24
*** armaan has quit IRC  18:28
*** armaan has joined #openstack-swift  18:29
*** armaan has quit IRC  18:33
<openstackgerrit> Merged openstack/swift master: Add unittest for slo_etag  https://review.openstack.org/581943  18:48
*** jistr has quit IRC  18:50
*** mikecmpbll has quit IRC  18:51
*** jistr has joined #openstack-swift  19:25
*** mvk_ has quit IRC  19:27
*** mvk_ has joined #openstack-swift  19:56
<paladox> Hi, does anyone know how I can balance the storage across two nodes?  21:25
<paladox> so that half is on one and the other half on the other?  21:25
<zaitcev> make them the same size  21:26
<paladox> the file storage?  21:26
<paladox> they are both 150gb :)  21:26
<paladox> one is near to using 150gb  21:26
<paladox> the other is using 33gb.  21:27
<zaitcev> and the sums of device weights in the rings are the same for both nodes?  21:27
<paladox> yep  21:29
<paladox> 145  21:29
<zaitcev> Interesting.  21:30
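zaitcev's question is about the ring's device weights: each device's share of partitions is roughly proportional to its weight relative to the total, so two nodes with equal weight sums should each hold about half the data. A toy calculation of those expected shares — nothing here is swift's actual ring-builder code, just the proportionality it aims for:

```python
# Toy calculation of target partition counts from device weights. A real
# ring builder assigns each device ~ part_count * weight / total_weight
# partitions (subject to replica and dispersion constraints).
def expected_parts(devices, part_count):
    total = sum(w for _, w in devices)
    return {dev: part_count * w / total for dev, w in devices}

# Two nodes, one device each, equal weight (145 apiece, as in the chat):
shares = expected_parts([('node1/sda', 145), ('node2/sda', 145)], 1024)
print(shares)  # each device is targeted for half of the 1024 partitions
```

So with equal weights the *targets* are symmetric; a 150gb-vs-33gb split points at replication not completing rather than at the ring's math.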
<zaitcev> I'd start by making sure that the cluster is healthy otherwise and that the replicators and expirers run normally (by looking at logs).  21:30
<zaitcev> Then, I'd try to identify where all the data is  21:31
<paladox> ah ok.  21:31
<zaitcev> e.g., make sure nothing crazy is going on with respect to quarantine.  21:31
<paladox> hmm ok. I did set concurrency to 0 for replication, but that was yesterday and swift has been running for over a week.  21:32
<paladox> it replicated 33gb i think but never deleted any copied data from the other node.  21:33
<notmyname> paladox: what have you tried so far?  21:34
<openstackgerrit> Merged openstack/swift master: Check other params preserved when slo_etag is extracted  https://review.openstack.org/582125  21:35
<paladox> notmyname: i've been forced to try to reduce load, so I had to set concurrency to 0. I have the replication systemd service up. But I have tried leaving the swift replication to do its thing.  21:35
<paladox> but rsync kept failing, I guess when it used a lot of ram.  21:35
<paladox> my config: https://github.com/miraheze/puppet/tree/master/modules/swift/templates  21:35
*** wer has quit IRC  21:42
<notmyname> paladox: have you run `swift-recon --all`? any issues reported there?  21:42
* paladox runs that  21:42
<paladox> notmyname: it shows no errors  21:43
<paladox> paste: https://phabricator.wikimedia.org/P7364  21:43
<notmyname> your disk usage isn't reported there. shows an error. also shows your replication hasn't run in 20 hours  21:45
<paladox> notmyname: hmm. The server is on OpenVZ, so I guess that's why the disk usage won't be shown. and for replication, I guess it was affected by changing concurrency to 0?  21:46
<paladox> but even then it was replicating but not deleting  21:46
<paladox> stuff it replicated.  21:46
<notmyname> paladox: you could run `swift-object-replicator` directly from the command line against just one partition and see what happens. maybe that would show you any issues  21:56
<paladox> ah  21:56
<paladox> will try that!  21:57
<paladox> thanks!  21:57
*** rcernin has joined #openstack-swift  21:58
<paladox> notmyname: hi, I just set fallocate_reserve to 11% of available storage, which would affect swift1 as it has 4.1gb left. https://static.miraheze.org/traunstoanerwiki/thumb/a/ab/Bergham.jpg/180px-Bergham.jpg is now returning 503.  22:11
<paladox> but that exists on swift1.  22:11
<paladox> i'm guessing it's because the account / container folders are outdated on swift2?  22:11
<notmyname> why did you set fallocate_reserve?  22:13
<paladox> notmyname: we were low on storage on swift1 and wanted all PUTs to fall over to swift2.  22:13
<notmyname> remember that swift is not a "fill and spill" storage system  22:15
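For context, fallocate_reserve is a storage-server setting: once a disk's free space drops below the reserve, new writes to it are refused (507 Insufficient Storage), which is why it is not a "fill and spill" mechanism — the proxy still picks locations from the ring. A hedged sketch of the relevant object-server.conf fragment; the values are illustrative and the exact semantics should be checked against your release's deployment docs:

```ini
# Sketch of an object-server.conf reserve; values are illustrative only.
# Below the reserve, object writes to that disk get 507 Insufficient
# Storage rather than being transparently redirected to emptier disks.
[DEFAULT]
# absolute bytes, or a percentage of the disk in newer releases:
fallocate_reserve = 11%

[app:object-server]
use = egg:swift#object
```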
<notmyname> did you run the replicator against just one partition?  22:17
<paladox> notmyname: going to run that now  22:18
<paladox> notmyname: could it be because I have swift account and swift container rings defined for both swift*?  22:20
<notmyname> when you get a 503 on a GET, what do the logs say? grep logs on both servers for the transaction id  22:21
<notmyname> eg txcc762fcf0e8048fbba48b-005b47d3d7  22:21
<paladox> ok  22:22
<paladox> notmyname: from what I can see, swift says it's a 404 but the console log is saying 503  22:28
<paladox> "GET /simfs/0/AUTH_admin/traunstoanerwiki-mw/b/b5/Thalham_02.jpg" 404 - "GET http://185.52.3.121:8080/v1/AUTH_admin/traunstoanerwiki-mw/b/b5/Thalham_02.jpg" "tx96406ad6a280437fa2ab0-005b47cf0e" "proxy-server 5004" 0.0004 "-" 25645 0  22:28
<paladox> swift-object-replicator object-server.conf -p 1 -v  22:31
<zaitcev> The replicator is what actually deletes anything that's deleted. If you don't run it, you'll overflow your storage as all deleted objects accumulate. So it's imperative that you keep it operating.  22:32
<zaitcev> This does not explain the asymmetry, though. Unless it was running normally on the other node, but not on this one.  22:32
<paladox> zaitcev: is there a way to make replication not run so frequently? it causes very high load  22:33
<paladox> when rsync runs  22:33
<zaitcev> You can turn the reclaim time down and see if that helps.  22:33
<notmyname> zaitcev: for background, paladox has a 1-replica, 1-drive cluster, and he added a second drive on another server. replication has moved some data, but hasn't deleted anything from the first  22:33
<zaitcev> Good lord  22:34
<notmyname> yes :-)  22:34
<paladox> swift-object-replicator object-server.conf -p 1 -v only shows "swift-object-replicator: Starting object replication pass."  22:34
<paladox> notmyname: zaitcev: yeah, I wish I could have higher specs (SSDs, more cores) but the budget can't afford that :)  22:36
<paladox> we needed something so we could add more storage.  22:36
<paladox> we were previously on nfs  22:36
<paladox> zaitcev: notmyname: I wonder if having container and account on multiple servers is the problem? Doesn't the container track where the objects are (on which server)?  22:37
<paladox> because the images are working intermittently  22:37
<paladox> one minute they work and the next they are saying they do not exist.  22:38
<notmyname> that would suggest one server is misconfigured. so you could try to hit each one separately. or look at your nginx config to see how it's balancing requests  22:38
<paladox> notmyname: I have nginx configured to send requests to the swift proxy, which is on another server.  22:39
<notmyname> IMO your tests should go directly to a proxy server. when you see that working, then you move up a level. that will help you isolate where the issue may exist  22:41
<paladox> notmyname: it works on swift1  22:42
<paladox> swift2 doesn't  22:42
<paladox> i see this:  22:42
<notmyname> then swift2 is likely where your issue is  22:42
<paladox> yeah  22:42
<paladox> though shouldn't it try swift1, and if that fails then swift2?  22:43
<notmyname> no. stop. fix swift2. whatever is going on there is likely why you're not getting the balance you need  22:43
<paladox> notmyname: hmm.  22:44
<paladox> ok  22:44
<paladox> but the objects should not be on swift2 if they are on swift1  22:47
<mattoliverau> morning  22:57
<zaitcev> Could someone take a look at https://review.openstack.org/579227  23:01
<patchbot> patch 579227 - swift - PUT+POST: break out putter specific test classes  23:01
<zaitcev> I agreed with acoles about it, and made patch 427911 depend on it, but now it's _exceedingly_ inconvenient.  23:02
<patchbot> https://review.openstack.org/#/c/427911/ - swift - Replace MIME with PUT+POST for EC and Encryption  23:02
<zaitcev> I don't know how you guys deal with stacked patches  23:03
<zaitcev> I mean, sure... git rebase is slick. BUT what if I want to change anything? The only decent way I found is to try to commit something separately, and then git rebase -i and ask for a squash.  23:04
<zaitcev> Doable, but ewww  23:04
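The squash dance zaitcev describes can be made less painful with git's fixup machinery: record the change as a fixup of the target commit, then let an autosquash rebase fold it in without hand-editing the todo list. A self-contained demo in a throwaway repo (names and messages are placeholders):

```shell
# Demo: amend a middle commit in a stacked series using
# `git commit --fixup` + `git rebase -i --autosquash`.
# Runs entirely in a throwaway repo under a temp directory.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo base  > f.txt; git add f.txt; git commit -qm 'base'
echo v1    > f.txt; git add -u;    git commit -qm 'target: first draft'
echo other > g.txt; git add g.txt; git commit -qm 'later patch in the stack'

# Fix up the middle commit: this records a "fixup! target: ..." commit.
echo v2 > f.txt
git add -u
git commit -q --fixup HEAD~1

# --autosquash reorders and squashes the fixup into its target; a no-op
# sequence editor (GIT_SEQUENCE_EDITOR=:) makes the rebase fully automatic.
GIT_SEQUENCE_EDITOR=: git rebase -i --autosquash HEAD~3 >/dev/null

git log --oneline   # three commits remain, no separate fixup commit
cat f.txt           # v2 - the middle commit now carries the fix
```

The later commits in the stack are replayed on top automatically, which is exactly the "commit separately, then squash" flow without the manual `git rebase -i` editing.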
<paladox> maybe I'm having the same issue as https://ask.openstack.org/en/question/111731/swift-return-404-when-get-some-objects-after-adding-new-hdds/  23:05
*** kei_yama has joined #openstack-swift  23:13
*** SPF|Cloud has joined #openstack-swift  23:20
*** drewn3ss has quit IRC  23:47
*** mikecmpbll has joined #openstack-swift  23:50
*** mikecmpb_ has joined #openstack-swift  23:56
*** mikecmpb_ has quit IRC  23:58
*** mikecmpbll has quit IRC  23:58
*** mikecmpbll has joined #openstack-swift  23:59

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!