Tuesday, 2018-10-02

00:05 *** mvkr has quit IRC
00:56 *** hoonetorg has quit IRC
01:03 *** two_tired2 has joined #openstack-swift
01:10 *** hoonetorg has joined #openstack-swift
01:33 *** threestrands has quit IRC
01:40 *** _david_sohonet has quit IRC
01:49 *** hoonetorg has quit IRC
02:01 *** hoonetorg has joined #openstack-swift
03:29 *** mvkr has joined #openstack-swift
04:12 <kota_> hello world
04:16 <mattoliverau> kota_: o/
04:17 <kota_> mattoliverau: o/
04:18 *** two_tired2 has quit IRC
04:36 *** pcaruana has joined #openstack-swift
04:43 *** pcaruana has quit IRC
05:06 *** e0ne has joined #openstack-swift
05:06 *** e0ne has quit IRC
06:32 *** d0ugal has joined #openstack-swift
06:36 *** hoonetorg has quit IRC
07:00 *** e0ne has joined #openstack-swift
07:01 *** rcernin has quit IRC
07:01 *** pcaruana has joined #openstack-swift
07:35 *** psachin has joined #openstack-swift
07:53 *** d0ugal has quit IRC
08:48 *** gkadam has joined #openstack-swift
09:57 *** mvkr has quit IRC
09:59 *** psachin has quit IRC
10:22 <openstackgerrit> Sam Morrison proposed openstack/swift master: s3api: Ensure secret is utf8 in check_signature  https://review.openstack.org/605603
10:22 <openstackgerrit> Sam Morrison proposed openstack/swift master: s3 secret caching  https://review.openstack.org/603529
10:26 *** mvkr has joined #openstack-swift
11:01 *** d0ugal has joined #openstack-swift
11:10 *** e0ne has quit IRC
11:16 *** mvkr has quit IRC
11:17 *** mvkr has joined #openstack-swift
11:50 *** e0ne has joined #openstack-swift
12:29 *** SkyRocknRoll has joined #openstack-swift
12:57 *** pcaruana has quit IRC
13:00 *** d0ugal has quit IRC
13:02 *** frankkahle has joined #openstack-swift
13:10 *** frankkahle has quit IRC
13:21 *** d0ugal has joined #openstack-swift
13:27 *** pcaruana has joined #openstack-swift
13:32 *** e0ne has quit IRC
13:36 *** d0ugal has quit IRC
13:54 *** d0ugal has joined #openstack-swift
14:02 *** e0ne has joined #openstack-swift
14:48 *** frankie64 has joined #openstack-swift
14:50 <frankie64> Trying to do an installation of openstack-swift on a CentOS 7 VM. When I go to install the swift test dependencies I get "Cannot uninstall 'ipaddress'. It is a distutils installed.......", any ideas?
14:50 *** fultonj has joined #openstack-swift
14:52 *** SkyRocknRoll has quit IRC
14:54 *** SkyRocknRoll has joined #openstack-swift
15:01 <tdasilva> frankie64: i've noticed that before, i think you will need to uninstall python-ipaddress and then run a `pip install ipaddress` or something like that....
15:12 *** d0ugal has quit IRC
15:12 *** d0ugal has joined #openstack-swift
15:19 *** e0ne has quit IRC
15:25 <frankie64> ok, doing `pip install --ignore-installed ipaddress` first and then running the requirements worked
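For anyone hitting the same pip error on CentOS 7, the workaround frankie64 landed on can be sketched like this (the verification step is my addition, not from the log):

```shell
# pip refuses to remove the distutils-installed python-ipaddress package,
# so install over the top of it instead of uninstalling first:
pip install --ignore-installed ipaddress

# confirm the module imports cleanly afterwards:
python -c 'import ipaddress; print("ok")'
```

After this, re-running the swift test-dependency install should proceed past the "Cannot uninstall 'ipaddress'" error.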
15:49 *** gyee has joined #openstack-swift
15:50 *** sasha1 has joined #openstack-swift
15:50 <sasha1> Hi all
15:50 <sasha1> I need an equivalent of "swift upload" with the unified CLI. "swift upload" can upload directories, while "openstack object create", for example, only takes files. Does anybody know?
16:04 <DHE> that's usually how it goes. "openstack" covers all the basics, but the individual commands like "swift" and "nova" support additional features not covered
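As a rough illustration of the gap sasha1 describes (directory and container names here are invented, and the real `openstack` call is shown as a comment rather than executed):

```shell
# build a toy directory tree to stand in for the upload source:
mkdir -p mydir/sub
touch mydir/a.txt mydir/sub/b.txt

# "swift upload mycontainer mydir" would walk this tree itself; with the
# unified CLI you have to enumerate the files and create one object each:
find mydir -type f | while read -r f; do
    # against a real cluster this would be:
    #   openstack object create mycontainer "$f"
    echo "would create object: $f"
done
```

The loop prints one line per file found, which is exactly the set of objects `swift upload` would have created in a single command.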
16:05 <frankie64> Hi folks, now when I run the unit tests I get "ERROR: Failure: ImportError (liberasurecode.so.1: cannot open shared object file: No such file or directory)"
16:06 *** Baggypants12000 has joined #openstack-swift
16:43 <tdasilva> frankie64: do you have liberasurecode and pyeclib installed?
16:45 <DHE> or if you need something in /etc/ld.so.conf?
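DHE's ld.so.conf suggestion, sketched out (assuming the library was built from source into /usr/local/lib, which is a guess about the setup, not something stated in the log):

```shell
# see whether the dynamic linker already knows about the library
# (prints a match count; 0 means it is not on the search path):
ldconfig -p | grep -c liberasurecode || true

# if not, either register the directory system-wide (as root):
#   echo /usr/local/lib > /etc/ld.so.conf.d/liberasurecode.conf && ldconfig
# or point the linker at it just for the current shell:
export LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH:-}
```

The ld.so.conf.d route is the persistent fix; the environment variable is handy for confirming the diagnosis before touching system config.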
16:56 <notmyname> good morning
16:56 <openstackgerrit> Tim Burke proposed openstack/swift master: Listing of versioned objects when versioning is not enabled  https://review.openstack.org/575838
17:10 <openstackgerrit> Tim Burke proposed openstack/swift master: Support long-running multipart uploads  https://review.openstack.org/575818
17:11 <timburke> i'm still not sure whether i prefer a patch chain or a patch hydra :-/
17:13 <notmyname> tdasilva: ...closing a medusa bug guarded by a cerebus edge case?
17:13 <notmyname> timburke I mean. :-)
17:15 <tdasilva> frankie64: yeah! what DHE said ^^^
17:16 <timburke> notmyname: with the py3 stuff, i've generally gone with chains -- by fixing swob i can start trying to fix some middlewares, and from there i can try to get a working proxy-server
17:17 <notmyname> py3 ... 3 ... 3-headed dog ... cerebus!
17:17 <notmyname> all py3 issues are now "cerebugs"
17:18 <timburke> but the s3api patches generally don't really build on themselves like that -- they just sprawl all over the place, and since it'd be better to have *something* land than to risk it getting stuck waiting on another patch, i did my best to have them be separate...
17:18 <DHE> BRILLIANT!
17:19 <timburke> but now i have to resolve merge conflicts :P
17:21 <notmyname> "Netplan is a new command-line network configuration utility ... to manage and configure network settings easily in Ubuntu systems. It allows you to configure a network interface using YAML abstraction."
17:21 <notmyname> just what I always wanted!
17:21 <notmyname> (I'm sure it's amazing, but now I need to learn a new way to do network config)
17:22 <timburke> notmyname: so how many different ways do you think there will be to write an IP address?
17:22 <notmyname> 63! at least! :-)
17:22 <timburke> gotta be at *least* 20, right?
17:22 <timburke> even better
17:23 <tdasilva> ip: |
17:24 <timburke> notmyname: wait... cerebus? like, an aardvark? https://en.wikipedia.org/wiki/Cerebus_the_Aardvark
17:24 <timburke> :P
17:24 <notmyname> lol cerberus!
17:25 <notmyname> cerberbug? doesn't work as well :-(
17:25 <timburke> cerbugus? yeah, you're right...
17:25 <timburke> sorry to spoil the fun
17:27 <frankie64> tdasilva: yes, I have liberasurecode and pyeclib installed. now I am getting "No module named pbr.version" and I have pbr 4.2.0 installed
17:28 *** gkadam has quit IRC
17:31 *** mvkr has quit IRC
17:51 *** openstackgerrit has quit IRC
17:51 <tdasilva> frankie64: mmm... not sure about pbr
18:02 *** pcaruana has quit IRC
18:04 *** mordred has joined #openstack-swift
18:05 *** mvkr has joined #openstack-swift
18:05 <mordred> notmyname: if you happen to be bored in life ... I'm trying to track down a pile of 'random' test fail timeouts I've got in openstacksdk functional tests - there are a couple of swift ones that pop up occasionally. mostly pinging in case there is an actual swift issue that we're tickling
18:06 <mordred> I'm guessing the issue is just resource contention, so I'm also looking into splitting things better
18:06 <mordred> notmyname: http://logs.openstack.org/14/604414/7/gate/openstacksdk-functional-devstack-tips/a3fc28f/ is an example of one of them (issue in downloading an object)
18:07 <mordred> http://logs.openstack.org/17/604517/6/check/openstacksdk-functional-devstack-python2/ed4abc2/ is one with list
18:08 <mordred> http://logs.openstack.org/80/606980/1/check/openstacksdk-functional-devstack-tips-python2/636b657/testr_results.html.gz is container metadata
18:09 <notmyname> mordred: hmm... interesting. and thanks
18:09 <notmyname> I'm about to get in the car to drive to a customer meeting, so I won't have any more time today to look at it. but perhaps someone else in here will be able to see if anything looks suspicious
18:10 <mordred> notmyname: and you're saying that debugging things while driving isn't what you prefer to do?
18:18 <timburke> mordred: http://logs.openstack.org/14/604414/7/gate/openstacksdk-functional-devstack-tips/a3fc28f/controller/logs/syslog.txt.gz#_Sep_28_18_56_40 shows an OOM-kill
18:18 <timburke> i'm guessing pid 23458 was some test-runner worker? not sure
18:20 <mordred> timburke: see - there you go with pointing exactly to the problem that I've been staring at for days. THANK YOU
18:20 <timburke> similar story with http://logs.openstack.org/17/604517/6/check/openstacksdk-functional-devstack-python2/ed4abc2/controller/logs/syslog.txt.gz#_Oct_02_17_28_26
18:22 <mordred> timburke: I owe you a pile of beers
18:22 <timburke> heh, glad to help :-)
18:23 <timburke> fresh eyes can make all the difference
18:27 <mordred> ++
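The pattern timburke spotted in those syslogs is the kernel's OOM-killer report; a fabricated one-line sample (modeled on that style of kernel message, using the pid from the discussion) shows the kind of line to grep for in a downloaded job log:

```shell
# write a stand-in syslog line (illustrative, not copied from the real job logs):
cat > sample-syslog.txt <<'EOF'
Sep 28 18:56:40 host kernel: Out of memory: Kill process 23458 (python2) score 298 or sacrifice child
EOF

# scan for OOM activity:
grep -E 'Out of memory|oom-killer' sample-syslog.txt
```

A hit like this usually means the test node, not swift, is the thing to fix: less parallelism or more memory.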
18:29 *** SkyRocknRoll has quit IRC
19:32 *** e0ne has joined #openstack-swift
21:02 *** e0ne has quit IRC
22:09 *** openstackgerrit has joined #openstack-swift
22:09 <openstackgerrit> Sam Morrison proposed openstack/swift master: s3 secret caching  https://review.openstack.org/603529
22:16 <openstackgerrit> Merged openstack/swift master: Give better errors for malformed credentials  https://review.openstack.org/575836
22:26 <timburke> thanks tdasilva!
23:14 <mattoliverau> morning
23:29 <DHE> odd behaviour in my lab. one machine with 30 hard drives as pretty much everything short of the proxy server. running the object-reconstructor on it, I'm randomly getting "not enough nodes to reconstruct", but the files are visible and survive an audit pass
23:29 <DHE> and by "randomly" I mean between runs of the reconstructor
23:30 <notmyname> DHE: "files are visible"... what files? the logical objects via the API? or the EC fragments on disk?
23:31 <DHE> they're EC fragments and I'm using swift-get-nodes to identify what I'm up against
23:31 <notmyname> ok
23:32 <notmyname> ok. the auditor will only verify that the fragment files are valid on disk. the auditor doesn't check that there's enough fragments to reconstruct an object
23:33 <notmyname> is it the reconstructor that's showing the "not enough nodes" error?
23:33 <notmyname> or a client read?
23:33 <DHE> the reconstructor running on the local host
23:35 <DHE> honestly I haven't tried a client yet... I should do that...
23:38 <DHE> client download worked fine
23:39 <DHE> and the checksum is good
23:39 <notmyname> ok
23:39 <notmyname> from the get-nodes output, were you able to find all the fragments?
23:40 <DHE> yes, though I ran "find /srv/node -name <partition hash>" and found all the hits I expected
23:40 <notmyname> cool
23:41 <DHE> there's only a few thousand objects so far, so that works nicely
23:41 <notmyname> so given all that you've said, I'd suggest looking at the health of your network. stuff like your ports, switches, cables, etc. it could be that everything is durable but there's some network component flapping somewhere that's preventing a quorum from being read, thus giving the error
23:42 <DHE> I disagree though. there is only 1 storage node with 30 drives in it, and the replicator is running on that machine. it should be all loopback traffic, right?
23:42 <notmyname> ah
23:42 <DHE> *reconstructor
23:43 <notmyname> you're right. there wouldn't be a network issue on a single box
23:43 <notmyname> I'd remembered "30" and thought you had a 30-machine cluster ;-)
23:43 <DHE> 2019 Q1 :)
23:45 <notmyname> ok. so now I'm really guessing then. first actually check that you've got fragments. then check/track the IO load on the drives, or the CPU utilization. could be that a drive was overloaded and caused the request to time out. or, if CPU contention, could be an eventlet hub starvation (i.e. too many green threads running on one core)
23:51 <DHE> well, I have to go now. I'll check that out in more detail tomorrow. for now I'm loading the system up with more objects. 30 hard drives is very useful from a basic storage needs standpoint.
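For reference, the fragment checks DHE describes would look roughly like this (the ring filename, account, container, and object names are placeholders for illustration; `<partition hash>` is left as DHE wrote it):

```shell
# ask the ring where every fragment of an object should live; the output
# includes the partition and per-device paths:
swift-get-nodes /etc/swift/object-1.ring.gz AUTH_test mycontainer myobject

# on a single many-drive box, confirm the fragments actually exist on disk:
find /srv/node -name '<partition hash>'
```

If all fragments are present yet the reconstructor still reports "not enough nodes", notmyname's suggestion above points at per-drive IO overload or eventlet starvation causing individual fragment requests to time out.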
23:59 *** rcernin has joined #openstack-swift

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!