Tuesday, 2018-01-30

*** awkwardpenguin has joined #openstack-swift00:00
*** vint_bra has quit IRC00:23
<openstackgerrit> Tim Burke proposed openstack/python-swiftclient master: Update the old http doc links  https://review.openstack.org/536656  00:30
*** awkwardpenguin has quit IRC00:33
*** awkwardpenguin has joined #openstack-swift00:33
*** awkwardpenguin has quit IRC00:41
*** awkwardpenguin has joined #openstack-swift00:47
*** tovin07_ has joined #openstack-swift00:52
*** awkwardpenguin has quit IRC00:52
<openstackgerrit> Samuel Merritt proposed openstack/swift master: Disallow fractional replicas in EC policies  https://review.openstack.org/503454  00:57
*** awkwardpenguin has joined #openstack-swift00:58
<openstackgerrit> Tim Burke proposed openstack/swift master: Improve content negotiation  https://review.openstack.org/207276  01:02
*** awkwardpenguin has quit IRC01:02
*** awkwardpenguin has joined #openstack-swift01:08
*** gyee has quit IRC01:11
*** awkwardpenguin has quit IRC01:12
*** awkwardpenguin has joined #openstack-swift01:19
*** awkwardpenguin has quit IRC01:19
*** awkwardpenguin has joined #openstack-swift01:27
*** derekjhyang has joined #openstack-swift01:37
*** awkwardpenguin has quit IRC01:41
*** m_kazuhiro has joined #openstack-swift02:04
<m_kazuhiro> good morning  02:08
<mattoliverau> m_kazuhiro: morning  02:48
*** zaitcev__ has joined #openstack-swift02:50
*** ChanServ sets mode: +v zaitcev__02:50
*** d0ugal has quit IRC02:51
*** d0ugal has joined #openstack-swift02:52
*** zaitcev_ has quit IRC02:53
*** flwang has quit IRC02:55
*** cshastri has joined #openstack-swift02:59
*** cshastri has quit IRC03:15
*** two_tired has joined #openstack-swift03:15
*** psachin has joined #openstack-swift03:25
*** Supun has joined #openstack-swift03:30
*** kei_yama has joined #openstack-swift03:41
*** cshastri has joined #openstack-swift03:42
*** Supun has quit IRC03:47
*** tovin07 has quit IRC03:51
*** tovin07 has joined #openstack-swift03:52
<m_kazuhiro> mattoliverau: morning!  04:00
<Sourav> Please, guys, any thoughts on this?  04:04
<Sourav> Hello guys, I am facing a problem with Swift. I am using 5 data nodes; 4 of them are full and the last one has 1.3TB left, but Swift can't store files. While uploading it shows that the segments were uploaded, but the `stat` command can't find those segments. The error messages just show rsync errors and "write error: no disk space left". Please help!  04:04
*** links has joined #openstack-swift04:10
<mattoliverau> Sourav: assuming you're using 3x replication and all but one of your drives are full, then there is nowhere for Swift to put 2 of the replicas, so PUTs (uploads) will fail. If the node that still has space has enough non-full drives to make up a quorum of replicas, then it can potentially accept objects, but everything will land on it as handoffs (since that's the only place left to put them). That means Swift has durably saved them, but a stat might not work, because most of the primary partitions are full and everything is being dumped onto handoff (temporary) storage locations.  04:20
<mattoliverau> Sourav: if you're really out of space, you need to add more drives or servers so data can be moved around (full drives drained) and there is space for more objects.  04:21
<Sourav> @mattoliverau Thanks for the reply.  04:28
<Sourav> FYI, I am using 1x replication.  04:28
<mattoliverau> Sourav: wow, really? So no durability then.  04:29
<Sourav> Because I wanted to purchase more disks later. And now I have bought some disks to add.  04:29
<Sourav> But this error is happening.  04:29
<mattoliverau> With 1x replication, if the drive the object is expected to be on is full, then it'll write it elsewhere.. and a GET or HEAD (stat) won't know where it is.  04:30
<Sourav> Sure, I will let you know the stat.  04:30
<mattoliverau> More replicas means Swift has more places to put it and can keep it accessible.  04:30
<Sourav> Yes. For durability I would like to do that from now on. But for now, I am unable to retrieve my newly uploaded files.  04:32
<Sourav> This is one swift stat:  04:32
<Sourav>  04:32
    Containers: 11
    Objects: 571
    Bytes: 224942900188
    Containers in policy "policy-0": 11
    Objects in policy "policy-0": 571
    Bytes in policy "policy-0": 224942900188
    X-Account-Project-Domain-Id: default
    X-Openstack-Request-Id: txd086b604d0c345a184862-005a6ff526
    X-Timestamp: 1509513798.77325
    X-Trans-Id: txd086b604d0c345a18
<mattoliverau> Sourav: of course you can't. The Swift ring tells the proxy where to put an object, i.e. which device (drive) is responsible for it. With multiple replicas there will be a bunch of drives; with 1x there will only be one. If that drive happens to be full, it'll try to find a "handoff" drive that isn't full to place it on.  04:35
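The placement being described here can be inspected directly with swift-get-nodes; a minimal sketch, where the account/container/object names are made up and the default ring path is assumed:

```
# List the primary devices responsible for an object, followed by the
# handoff devices Swift will fall back to when primaries are full or down.
# AUTH_test/mycontainer/myobject are placeholder names.
swift-get-nodes /etc/swift/object.ring.gz AUTH_test mycontainer myobject
# The output prints the primary nodes first, then the handoff nodes,
# along with ready-made curl/ssh commands for checking each location.
```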
<Sourav> One of my nodes has 1.3TB of disk left, but it's still throwing errors:  04:35
    Jan 30 13:33:53 object-storage-5 account-replicator: Beginning replication run
    Jan 30 13:33:53 object-storage-5 account-replicator: Replication run OVER
    Jan 30 13:33:53 object-storage-5 account-replicator: Attempted to replicate 4 dbs in 0.01806 seconds (221.43340/s)
    Jan 30 13:33:53 object-storage-5 account-replicator: Removed 0 dbs
    Jan 30 13:33:53 object-storage-5 ac
*** m_kazuhiro_ has joined #openstack-swift04:35
<mattoliverau> Now, when you GET it (or stat the object), it won't be where Swift expects, so the stat can't find it.  04:35
<Sourav> Yes.  04:36
<Sourav> So, do you mean that if I use 1x replication, I can't use more than 1 disk??  04:36
<mattoliverau> The replicators are finding the handoff objects, containers and accounts and trying to put them where Swift expects them, but those locations are out of space, so they can't be moved.  04:36
*** m_kazuhiro has quit IRC04:37
<mattoliverau> Well, 1x replication means only storing 1 copy of each object in the cluster.. so it can only live on one disk.  04:37
<mattoliverau> So 1x isn't very useful unless you have an underlying filesystem that handles redundancy for you.  04:38
<Sourav> Initially, when I started using OpenStack, it was not approved to have more disks. So I had to use 1x replication across multiple disks.  04:39
<mattoliverau> The more replicas, the more durable your cluster and objects are, as you can lose disks and still get your data, or take a node down for patching without making objects disappear.  04:39
<Sourav> One more thing to confirm: while uploading, Swift doesn't throw any error on the console.  04:40
<mattoliverau> Swift with 1x replication works, but it's definitely not recommended.  04:40
<Sourav> It shows all the segments are uploaded, but actually they're not there.  04:40
<Sourav> > Swift with 1x replication works, but it's definitely not recommended.  04:40
<Sourav> But it's not stable, it seems.  04:41
<Sourav> :(  04:41
<mattoliverau> That's because you have 1x replication, and there is a disk in the cluster with space, so it'll write the object there as a handoff. So it is stored, but it can't be found until the replicators move it to the right place.  04:41
<mattoliverau> Yeah, not stable, because Swift is trying to keep your data distributed and replicated (so durable), but you're telling it not to.  04:41
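Once space is available again, a replication pass can also be kicked off by hand instead of waiting for the daemons' next cycle; a small sketch, assuming the standard swift-init service names:

```
# Run a single pass of each replicator on a storage node, pushing
# handoff partitions back to their primary locations (only useful
# once the primary devices have free space again).
swift-init object-replicator once
swift-init container-replicator once
swift-init account-replicator once
```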
<Sourav> Yes, true.  04:42
<Sourav> What do you recommend to get out of this situation?  04:43
<Sourav> Rebuild the cluster again?  04:43
<mattoliverau> If 3x replication is too much overhead (i.e. using 3x as much space), you could use erasure coding (EC). If you picked something like 4:2 then you'd need 6+ disks, but you'd only be storing 1.5x the data while still being able to lose 2 disks without losing anything.  04:43
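For reference, a 4:2 EC policy is declared as a storage policy in swift.conf; a rough sketch only, where the policy index, name and ec_type are assumptions and a matching object-1 ring still has to be built:

```
# Append a 4+2 erasure-coded policy (sketch; adjust index/name/ec_type
# for your cluster and installed liberasurecode backends).
cat <<'EOF' >> /etc/swift/swift.conf

[storage-policy:1]
name = ec42
policy_type = erasure_coding
ec_type = liberasurecode_rs_vand
ec_num_data_fragments = 4
ec_num_parity_fragments = 2
ec_object_segment_size = 1048576
EOF
# An object-1.builder/object-1.ring.gz for this policy must then be
# created with swift-ring-builder and distributed to every node.
```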
<mattoliverau> Sourav: is this a test cluster or does it have real data on it?  04:44
<Sourav> I built this for testing.  04:45
<Sourav> But now we have started using it for development purposes.  04:45
<Sourav> Meaning, it doesn't contain production data.  04:46
<mattoliverau> Oh, then a rebuild is fine :) If it were real data it would still be OK, but you'd need to add more drives, increase the replica count and rebalance the ring. And because the drives are full, there are things you can do to help speed up draining the full drives.  04:46
<mattoliverau> But if it's a test cluster, yeah, a rebuild is definitely easier.. unless you want to keep the data, then it's the rebalance game.  04:47
<Sourav> I would like to put the data.  04:48
<Sourav> Because the total data is around 3TB.  04:48
<mattoliverau> Put as in keep the data?  04:48
<Sourav> Sorry, I meant keep the data.  04:49
<mattoliverau> OK. Then the first step is to decide on the replica count. If you keep it at 1x then you don't have any durability (so I wouldn't use it for any production data). But you should just be able to add drives and rebalance the ring, and potentially play with the handoffs_first replicator option to force the pushing of handoffs (which with 1x replication is all it would be doing anyway).  04:51
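The handoffs_first knob mentioned here lives in the object replicator's config; a minimal sketch, assuming the default /etc/swift/object-server.conf path:

```
# In /etc/swift/object-server.conf:
#
#   [object-replicator]
#   # Push handoff partitions back to their primaries before anything
#   # else. Meant as a temporary crutch while draining full drives;
#   # turn it back off once the cluster is healthy.
#   handoffs_first = True
#
# Then restart the replicator so it picks up the change.
swift-init object-replicator restart
```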
<Sourav> 3x replication is quite expensive for me. What about 2x replication? If Swift can survive 1 damaged disk, that's fine for me.  04:51
<mattoliverau> Well, 2x is _much_ better than 1x, because at least you have a copy somewhere.  04:52
<Sourav> Yes.  04:52
<mattoliverau> But if you simply go to 2x on the existing drives, you'll still be out of space.  04:52
<mattoliverau> Because they'll make a copy of everything.  04:53
<Sourav> Oh, yes.  04:53
<Sourav> But I have some new disks now, around 24TB (6 x 4TB).  04:54
<Sourav> So for durability, if I use 2x replication and later add some more disks to the cluster, things should go well.  04:54
<Sourav> Right?  04:54
*** two_tired has quit IRC04:55
<mattoliverau> Yeah, if you keep adding disks, Swift will keep the data spread out across them. So you can keep adding space :)  04:56
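Growing the cluster like this is all swift-ring-builder work; a rough sketch, with placeholder IPs, ports, device names and weights:

```
cd /etc/swift
# Bump the object ring to 2 replicas (repeat the idea for the
# container and account rings).
swift-ring-builder object.builder set_replicas 2
# Add the new 4TB drives; region/zone, IP, port, device and weight
# below are placeholders for illustration.
swift-ring-builder object.builder add r1z1-10.0.0.6:6200/sdb 4000
swift-ring-builder object.builder add r1z1-10.0.0.6:6200/sdc 4000
# Recompute partition assignments, then copy the new .ring.gz files
# out to every proxy and storage node.
swift-ring-builder object.builder rebalance
```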
<Sourav> Thank you so much for your time.  04:56
<mattoliverau> Keep an eye on disk usage (you can use swift-recon and swift-recon-cron if you want Swift's tools, or whatever monitoring you already have), and when disks start getting close to full, add more disks :)  04:57
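On the swift-recon side, checking disk usage is a one-liner (assuming swift-recon-cron is populating the recon cache on the storage nodes):

```
# Cluster-wide disk usage: lowest / highest / average percent used.
swift-recon --diskusage
# Also rank the five fullest devices.
swift-recon --diskusage --top=5
# Broader health roll-up (async pendings, replication, quarantines, ...).
swift-recon --all
```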
<Sourav> I will try to rebuild with 2x replication.  04:58
<Sourav> BTW, I can find you here later on as well, right?  04:58
<Sourav> :)  04:58
<mattoliverau> Swift also has storage policies. So you could have a colder tier (not accessed as much) and use EC there. For important data use more replicas, and for less important data use 2x replication.. really, the sky's the limit once you understand Swift's mechanics ;)  04:58
<mattoliverau> Sourav: I'm based in Australia, so I'll be around for another hour or so. But if no one else is around, you can ping me (I'm always connected) and I'll get back to you when I can.  04:59
<Sourav> So, you mean for important data and less important data, we should create multiple clusters?  05:00
<Sourav> Or can the same cluster be used with multiple replication levels?  05:00
<mattoliverau> The same cluster can use multiple replication levels.  05:01
<Sourav> Sorry, how do I use that? Any documentation?  05:02
<Sourav> I am using the Ocata version.  05:02
<mattoliverau> Sourav: https://docs.openstack.org/swift/latest/overview_architecture.html#storage-policies  05:02
<mattoliverau> The architecture overview might be a good place to start.  05:03
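Once the extra policies exist in swift.conf (with rings built), picking one is just a header when the container is created; a small sketch with the python-swiftclient CLI, where the container and policy names are made up:

```
# Create a container pinned to a hypothetical "ec42" policy; objects
# uploaded into it follow that policy. The policy can only be chosen
# when the container is first created.
swift post -H 'X-Storage-Policy: ec42' cold-archive
# Containers created without the header use the cluster default policy.
swift post hot-data
# Container stat shows which policy it landed on.
swift stat cold-archive
```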
<Sourav> Thank you so much. I will let you know my progress. :)  05:03
<mattoliverau> The admin section could be useful too. The admin guide has tips on full drives: https://docs.openstack.org/swift/latest/#administrator-documentation  05:04
<mattoliverau> Sourav: you're welcome, and please do :)  05:05
<openstackgerrit> Kazuhiro MIYAHARA proposed openstack/swift master: Refactor expirer unit tests  https://review.openstack.org/537841  05:21
*** SkyRocknRoll has quit IRC05:25
*** SkyRocknRoll has joined #openstack-swift05:39
*** gkadam has joined #openstack-swift05:52
*** jappleii__ has quit IRC06:01
*** geaaru has quit IRC07:04
*** pcaruana has joined #openstack-swift07:10
*** d0ugal has quit IRC07:34
*** m_kazuhiro_ has quit IRC07:37
*** hseipp has joined #openstack-swift07:40
*** neonpastor has quit IRC08:00
*** neonpastor has joined #openstack-swift08:02
*** d0ugal has joined #openstack-swift08:02
*** tesseract has joined #openstack-swift08:17
*** armaan_ has quit IRC08:19
*** armaan has joined #openstack-swift08:19
*** bkopilov has joined #openstack-swift08:29
*** SkyRocknRoll has quit IRC08:37
*** cbartz has joined #openstack-swift08:42
*** cbartz has quit IRC08:48
*** kei_yama has quit IRC08:50
*** cbartz has joined #openstack-swift08:51
*** SkyRocknRoll has joined #openstack-swift08:54
<acoles> good morning  09:13
*** Sourav has quit IRC09:24
*** mvk has quit IRC09:29
*** itlinux has joined #openstack-swift09:50
*** tovin07_ has quit IRC09:51
*** tovin07_ has joined #openstack-swift09:51
*** mvk has joined #openstack-swift09:58
*** armaan has quit IRC09:59
*** itlinux has quit IRC10:02
*** itlinux has joined #openstack-swift10:07
*** armaan has joined #openstack-swift10:08
*** tovin07_ has quit IRC10:33
*** mvk has quit IRC10:39
*** itlinux has quit IRC10:41
*** silor has joined #openstack-swift10:43
*** itlinux has joined #openstack-swift10:46
*** itlinux has quit IRC10:46
*** mvk has joined #openstack-swift10:51
*** armaan has quit IRC10:52
*** armaan has joined #openstack-swift10:53
*** bkopilov has quit IRC11:08
*** d0ugal has quit IRC11:34
*** armaan has quit IRC11:35
*** armaan has joined #openstack-swift11:35
*** d0ugal has joined #openstack-swift11:54
*** d0ugal has quit IRC11:54
*** d0ugal has joined #openstack-swift11:54
*** itlinux has joined #openstack-swift11:54
*** itlinux has quit IRC12:00
*** geaaru has joined #openstack-swift12:01
*** itlinux has joined #openstack-swift12:01
*** itlinux has quit IRC12:18
*** itlinux has joined #openstack-swift12:18
*** bkopilov has joined #openstack-swift12:25
*** itlinux has quit IRC12:26
*** links has quit IRC12:49
*** zhurong_ has joined #openstack-swift13:03
*** silor1 has joined #openstack-swift13:17
*** silor has quit IRC13:18
*** silor1 is now known as silor13:18
*** zhurong_ has quit IRC13:19
*** zhurong has joined #openstack-swift13:19
*** zhurong has quit IRC13:20
*** ^andrea^ has joined #openstack-swift13:35
*** links has joined #openstack-swift13:43
-openstackstatus- NOTICE: Our ubuntu-xenial images (used for e.g. unit tests and devstack) are currently failing to install any packages; refrain from *recheck* or *approve* until the issue has been investigated and fixed.  13:44
*** psachin has quit IRC13:57
*** derekjhyang has quit IRC14:26
*** d0ugal has quit IRC14:44
*** d0ugal has joined #openstack-swift14:47
*** d0ugal has quit IRC14:58
*** links has quit IRC15:05
*** gkadam has quit IRC15:14
*** zhongjun has quit IRC15:18
*** d0ugal has joined #openstack-swift15:25
*** itlinux has joined #openstack-swift15:37
*** itlinux has quit IRC15:42
*** itlinux has joined #openstack-swift15:57
*** armaan has quit IRC16:04
<openstackgerrit> Merged openstack/swift feature/deep: enable single PUT of multiple shard ranges to container servers  https://review.openstack.org/535407  16:22
<openstackgerrit> Merged openstack/swift feature/deep: Use backend header rather than param to GET shard record type  https://review.openstack.org/535466  16:22
*** ukaynar_ has joined #openstack-swift16:23
<openstackgerrit> Merged openstack/swift feature/deep: simplify shrinking  https://review.openstack.org/536772  16:27
*** cbartz has quit IRC16:55
*** geaaru has quit IRC16:56
*** ukaynar_ has quit IRC16:56
*** gkadam has joined #openstack-swift16:58
*** ukaynar_ has joined #openstack-swift17:00
*** mvk has quit IRC17:10
*** hseipp has quit IRC17:25
<timburke> good morning  17:36
*** tesseract has quit IRC17:46
<notmyname> good morning  18:08
*** silor has quit IRC18:18
*** cshastri has quit IRC18:19
*** itlinux has quit IRC18:36
<notmyname> timburke: we don't seem to have any mechanism in swiftclient to expose the x-auth-token-expires header :-(  18:38
<timburke> notmyname: we do if we start using Sessions... https://github.com/openstack/python-swiftclient/blob/master/swiftclient/authv1.py#L277  18:39
<timburke> the other thing worth noting, though, is that there's no guarantee that there will be any indication of expiration time  18:40
*** armaan has joined #openstack-swift18:43
*** armaan has quit IRC18:58
*** armaan has joined #openstack-swift19:00
<notmyname> timburke: thanks!  19:02
<openstackgerrit> Merged openstack/python-swiftclient master: Update reno for stable/queens  https://review.openstack.org/538972  19:18
*** armaan has quit IRC19:21
*** armaan has joined #openstack-swift19:23
*** pcaruana has quit IRC19:52
*** mvk has joined #openstack-swift20:04
*** d0ugal has quit IRC20:26
*** sai is now known as sai-pto20:29
*** ukaynar_ has quit IRC20:38
*** d0ugal has joined #openstack-swift20:41
<mattoliverau> Well I saw it happen in an SAIO then tried to quickly reproduce in the test. I'll take a closer look after breakfast :)  21:06
*** SkyRocknRoll has quit IRC21:06
<mattoliverau> Oh and morning :)  21:06
<mattoliverau> It could very well be my env.  21:07
<timburke> i'll keep poking around... i've got the recursion-depth-exceeded message without data segments... now to try something similar with data segs :)  21:13
*** ukaynar_ has joined #openstack-swift21:14
<timburke> hrm... and our call_app in the test is supposed to be consuming the iter...  21:21
<timburke> no logs get captured... call_count is only 10...  21:24
*** ukaynar_ has quit IRC21:29
*** ukaynar_ has joined #openstack-swift21:29
*** ukaynar_ has quit IRC21:33
*** ianychoi has quit IRC21:34
*** ianychoi has joined #openstack-swift21:35
<timburke> oh! it has to do with the value of 'bytes' in the sub-manifest records... funny...  21:35
*** ukaynar_ has joined #openstack-swift21:36
<timburke> it makes sense that curl will hang -- it's expecting N bytes, but swift is throwing an exception after M (< N) are sent  21:37
<timburke> you can still get the content actually received by either using `--no-buffer` or `--max-time 1`  21:37
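For the record, a quick sketch of those curl invocations against a hypothetical deeply nested SLO (URL and token are placeholders):

```
# Stream the body as it arrives instead of waiting for the advertised
# Content-Length, so the bytes sent before the proxy gave up are visible.
curl --no-buffer -H "X-Auth-Token: $TOKEN" \
    "http://saio:8080/v1/AUTH_test/c/deep-manifest"
# Or give up after one second and keep whatever was received.
curl --max-time 1 -H "X-Auth-Token: $TOKEN" \
    "http://saio:8080/v1/AUTH_test/c/deep-manifest"
```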
*** threestrands has joined #openstack-swift21:40
*** armaan has quit IRC21:46
*** armaan has joined #openstack-swift21:47
<timburke> this all has me thinking about two interesting things that could be done  21:48
<timburke> one, track "slo depth" on manifests. if it doesn't have any sub-manifests, it's defined to be 1; otherwise it's 1 + max(x.slo_depth for x in sub-manifests); and if any manifests in the chain are old, slo-depth-less manifests, we just don't store it  21:50
<timburke> so we could error out at upload time for manifests that you won't immediately be able to fetch  21:51
<timburke> (there's still the issue of operators changing recursion depths over time, of course...)  21:51
<timburke> two, have something TCO-like that lets you have unlimited nesting provided the sub-manifests are at the *end* of the prior manifest. 'cause this was primarily intended to prevent memory growth when the proxy has to have too many manifests in its head, yeah?  21:53
<timburke> (well, sub-*manifest*. you'd only be able to have one; basically a linked list)  21:54
<torgomatic> timburke: I posted a diff that fixes the test; the 'bytes' value was wrong, but it happened to be wrong in the too-big direction, so it didn't matter and we could hit the recursion depth  22:07
<torgomatic> adding the b64-encoded data segment could make it wrong in the too-small direction, so we ran out of bytes before hitting recursion depth  22:07
*** ukaynar_ has quit IRC22:10
<torgomatic> I like the idea of tracking manifest depth on SLOs so users would know at PUT time that they've sent something that's too nested instead of waiting until GET time  22:14
*** rcernin has quit IRC22:18
*** itlinux has joined #openstack-swift22:48
*** rcernin has joined #openstack-swift22:52
*** itlinux has quit IRC22:52
*** ^andrea^ has quit IRC22:53
*** ianychoi has quit IRC23:05
*** ukaynar_ has joined #openstack-swift23:06
<openstackgerrit> Merged openstack/swift master: Remove the deprecated "giturl" option  https://review.openstack.org/533479  23:06
<openstackgerrit> John Dickinson proposed openstack/swift master: authors/changelog updates for 2.17.0 release  https://review.openstack.org/534528  23:22
*** kei_yama has joined #openstack-swift23:37
<openstackgerrit> John Dickinson proposed openstack/swift master: authors/changelog updates for 2.17.0 release  https://review.openstack.org/534528  23:41
*** armaan has quit IRC23:45
<andybotting> notmyname: we realised that we could get the PUT 500s from the proxy log (which we keep) and I just filtered out any objects that were glance images, as we know that glance would clean up the failed images. Looks like we're all clean :D  23:48
<notmyname> nice!!  23:48
<andybotting> thanks for your help mate  23:49
<notmyname> kudos to you for noticing the problem early and finding the cause. i'm glad it wasn't a more widespread issue  23:53
<notmyname> happy to help  23:53
*** rcernin has quit IRC23:57
*** ukaynar_ has quit IRC23:59
