Saturday, 2014-11-08

*** kyles_ne has quit IRC00:31
*** shri has quit IRC00:41
*** openstackgerrit has quit IRC00:50
*** kyles_ne has joined #openstack-swift00:50
*** lorrie has joined #openstack-swift00:53
lorrieHi, I have a question about using Swift. Currently I installed a swift server using SAIO. I split 4 nodes on the local storage, let's call it Hard_drive_1. Now Hard_drive_1 is almost full so I added Hard_drive_2 to it and created one node on that. I added it to the ring and rebalanced it, and it works fine. I have tested by uploading some files and see the used space of Hard disk 2 is increasing, which means there are really files being written00:57
lorrieBut my question is, it looks like the uploading also keeps writing to Hard Disk 1, which is almost full.00:58
lorrieSo I want to know, what will happen if Hard Disk 1 is really full? Will swift automatically notice that and then only write data to Hard Disk 2?00:59
lorrieBecause if it still keeps writing to a full disk, it will cause write failures for sure.00:59
lorriethanks00:59
MooingLemurlorrie: when you rebalance, it should start redistributing the data so that each drive is roughly equally full01:08
MooingLemurthat is, if you assigned the weights proportionally01:09
MooingLemurhow many devices total do you have configured now?01:10
lorriei have 2 devices, hard disk 1 has 4 nodes, hard disk 2 has 1 node01:11
lorrieand all the nodes are equally weighted01:11
MooingLemurI'm confused.  nodes should have disks, disks should not have nodes :)01:11
zaitcevshould've moved 2 devices to "Hard_drive_2" instead of adding the 5th01:12
zaitcevwithout any changes to the ring01:12
lorriesorry for my mistyping01:12
lorriei mean, disk 1 has 4 partitions01:13
*** kyles_ne has quit IRC01:13
lorriedisk 2 has 1 partition01:13
MooingLemurSAIO is good for learning concepts, but for a real swift cluster, you'll obviously want to have one physical disk mapped to one device in the ring01:14
lorriei only have one node; at first it only had one device with 4 partitions, and they are almost full01:14
lorrieso i add one new device with 1 partition to this node01:15
lorrieso now it has two devices01:15
lorrieand i do a rebalance01:16
zaitcevYou can still trick this around with symlinks from /srv/node. Or better yet, bind mounts.01:16
MooingLemuractually.. are you talking about fdisk partitions or swift partitions?01:16
zaitcevMove 2 of 4 to the new drive. Once you verify it works, remove the 5th device you added.01:16
lorrieswift partition01:16
zaitcevokay that escalated quickly01:17
MooingLemuryou don't directly specify how many swift partitions are on each drive, but you can specify the weight so that the rebalance fills them proportionally01:17
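
A rough sketch of the ring operations being described, assuming a SAIO-style layout where the builder files live in /etc/swift (the zone, port and device name are placeholders for whatever the new disk actually is):

    cd /etc/swift
    # add the new device to the object ring with an explicit weight
    # (syntax: add r<region>z<zone>-<ip>:<port>/<device> <weight>)
    swift-ring-builder object.builder add r1z5-127.0.0.1:6050/sdc1 1.0
    # weights are relative: a device with twice the weight is targeted
    # with roughly twice as many partitions after a rebalance
    swift-ring-builder object.builder rebalance
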
lorriebut i just did a test and found the used space of disk 1 is still increasing01:18
zaitcevof course, duh01:18
MooingLemurcan you pastebin the output of: swift-ring-builder object.builder01:18
lorriesure01:18
lorrieobject.builder, build version 6
      1024 partitions, 3.000000 replicas, 1 regions, 5 zones, 5 devices, 0.39 balance
      The minimum number of hours before a partition can be reassigned is 1
      Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
                   0       1     1       127.0.0.1  6010       127.0.0.1              6010      sdb1   1.00        615    0.1001:19
MooingLemurrun from whatever directory object.builder is in, usually /etc/swift01:19
*** occup4nt has joined #openstack-swift01:20
lorriesdb* is the old disk01:20
lorriesdc1 is the newly added device01:20
MooingLemurdidn't see sdc1 in your output01:20
lorrie 0       1     1       127.0.0.1  6010       127.0.0.1              6010      sdb1   1.00        615    0.1001:21
lorrie1       1     2       127.0.0.1  6020       127.0.0.1              6020      sdb2   1.00        615    0.1001:21
lorrie2       1     3       127.0.0.1  6030       127.0.0.1              6030      sdb3   1.00        615    0.1001:21
*** btorch_ has joined #openstack-swift01:21
*** occupant has quit IRC01:21
*** kevinbenton has quit IRC01:21
*** mordred has quit IRC01:21
*** dosaboy has quit IRC01:21
*** echevemaster has quit IRC01:21
*** otherjon has quit IRC01:21
*** btorch has quit IRC01:21
lorrie3       1     4       127.0.0.1  6040       127.0.0.1              6040      sdb4   1.00        615    0.1001:21
lorrie4       1     5       127.0.0.1  6050       127.0.0.1              6050      sdc1   1.00        612   -0.3901:22
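
(For reference, those partition counts line up with the equal weights: 1024 partitions x 3 replicas = 3072 replica assignments, and 3072 / 5 devices ≈ 614, which matches the 615/615/615/615/612 split shown above.)
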
*** kevinbenton_ has joined #openstack-swift01:22
*** mordred has joined #openstack-swift01:22
MooingLemuroh, so you do have that many "physical" disks..01:22
*** kevinbenton_ is now known as kevinbenton01:22
*** dosaboy has joined #openstack-swift01:22
lorrieyes01:22
MooingLemuryour weight is 1 on each, which means they will all be preferred equally01:22
lorrieyes that's what i expect01:23
MooingLemurso each partition is roughly the same size?01:23
lorrieyes01:23
MooingLemureach device, that is01:23
*** otherjon has joined #openstack-swift01:23
lorrieactually sdb* is partitioned by swift01:24
MooingLemurso the used space on each device is not roughly equal?01:24
lorrieunder /dev/ it only has sdb101:24
MooingLemurthere's a problem there then01:24
*** echevemaster has joined #openstack-swift01:24
MooingLemurwell, you have /srv/node/sdb1 /srv/node/sdb2, etc?01:25
lorriei configured 4 partitions for it in the /etc/swift/account-server/, container-server/ and object-server/ folders01:25
lorrieyes01:25
MooingLemurwhat devices are mounted on those?01:25
lorried01:26
lorriedo i disconnect01:26
MooingLemurif you only have a real sdb1, I don't understand what you're doing having a mountpoint /srv/node/sdb2 etc01:26
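
A quick way to answer that on the box itself, assuming the /srv/node layout discussed above:

    # show which filesystem backs each /srv/node entry
    df -h /srv/node/*
    # list only real mounts under /srv/node (directories that are not
    # mount points will not show up here)
    mount | grep /srv/node
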
zaitcevends right in root probably :-)01:27
*** lorrie_ has joined #openstack-swift01:27
zaitcevand he probably has mount_check=false because SAIO docs say that01:27
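
For context, mount_check is a [DEFAULT] option in the account/container/object server configs; the SAIO docs turn it off because the SAIO "devices" are plain directories. A minimal illustration, with a path and devices setting that may differ on a given install:

    # e.g. /etc/swift/object-server/1.conf
    [DEFAULT]
    devices = /srv/node
    mount_check = false   # SAIO: the device dirs are not separate mounts
    # on real hardware, set mount_check = true so a server answers
    # 507 Insufficient Storage for an unmounted device instead of
    # silently filling the root filesystem
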
lorrie_sorry looks like i just lost connection01:27
MooingLemur2014-11-07 18:25:39 < MooingLemur> what devices are mounted on those?01:27
MooingLemur2014-11-07 18:26:42 < lorrie> d01:27
MooingLemur2014-11-07 18:26:50 < lorrie> do i disconnect01:27
MooingLemur2014-11-07 18:26:53 < MooingLemur> if you only have a real sdb1, I don't understand what you're doing having a mountpoint /srv/node/sdb2 etc01:27
*** lorrie_ has quit IRC01:27
MooingLemuroh jeez01:28
MooingLemurI give up01:28
zaitcevMooingLemur: http://www.penny-arcade.com/comic/2014/09/19/take-it-from-my-hands01:28
lorriei m back01:28
lorriesorry, are u still there?01:28
lorriei just followed the saio instructions to do it, they create 4 mount points for one device01:29
zaitcevI am still holding out hope that he simply missed all I said about moving 2 of the original 4 to sdc1 and then dropping the 5th zone.01:29
zaitcevBut it's a very small hope01:30
lorriesure i can just use one mount point for one device though, that's what i did for disk 201:30
MooingLemurI don't have time to stick around, but I'm wondering if you simply don't have the rsyncds running properly, or the swift-object-replicator, etc01:30
zaitcevokay, headdesk time01:30
MooingLemurotherwise you're not understanding that all 5 of the mount points are each going to receive the same amount of data, which means your sdb device will be getting 4x the amount of data that sdc is.01:31
lorriersyncd runs ok, and i can see the additional disk works01:32
MooingLemurand if you tweak that, you're probably still not going to be properly balanced until you add even more devices, because the 3 replica policy is going to supersede your weighting01:32
zaitcevwell even with 100 to 1 balance it's gonna happen anyway if replication is 301:32
lorrieyes i understand what you mean01:32
lorriebut i just don't know: if sdb is full01:32
lorriebut sdc is empty01:32
lorriewill sdb still receive data?01:33
MooingLemurshouldn't be empty.01:33
zaitcevbecause of the unique-as-possible spread, some partitions get into each zone no matter what balance is ordered01:33
zaitcevand he's got 5 zones01:33
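
(This is the key point: with 3 replicas placed as-uniquely-as-possible across zones, at most one replica of any given partition can land in zone 5 (sdc1); the other two must go to the sdb zones, so the sdb device keeps receiving data no matter how the weights are skewed.)
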
MooingLemurif absolutely empty, make sure the permissions on /srv/node/sdc1 are set properly01:34
MooingLemurownership01:34
MooingLemurotherwise watch your syslog for clues :)01:34
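
A sketch of those checks (the ownership user depends on how the install was done; a SAIO usually runs everything as your own user, real deployments as swift):

    # confirm the new device directory is owned by the user the servers run as
    ls -ld /srv/node/sdc1
    # adjust ownership if needed (user shown here is illustrative)
    sudo chown -R swift:swift /srv/node/sdc1
    # then watch the logs for object-server / replicator errors
    tail -f /var/log/syslog        # or /var/log/messages, depending on distro
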
zaitcevnaah, no need. "I have tested by uploading some files and see the used space of Hard disk 2 is increasing, which means there are really files being written"01:34
lorrieor let's say, how can we expand the size of a swift server if the original disk space is almost full?01:34
MooingLemurto expand the size, you add more devices01:35
MooingLemuronce you rebalance, the usage on the existing disks will go down01:35
zaitcevlorrie: I told you twice, just move some of the old 4 devices to sdc, jeez. Don't touch the ring01:35
zaitcevwell yeah, but why let replicators struggle with that when you can just run cp -a once01:35
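
A rough sketch of what zaitcev is suggesting: relocate two of the existing device trees onto the new disk and leave the ring alone. The mount point /mnt/newdisk is a placeholder; stop the services first and re-check ownership afterwards:

    swift-init all stop
    # assumes the new disk is already formatted and mounted at /mnt/newdisk
    sudo cp -a /srv/node/sdb3 /mnt/newdisk/sdb3
    sudo cp -a /srv/node/sdb4 /mnt/newdisk/sdb4
    # clear the old locations and bind-mount the copies back into place,
    # so the ring still sees the same /srv/node/sdb3 and /srv/node/sdb4 paths
    sudo rm -rf /srv/node/sdb3/* /srv/node/sdb4/*
    sudo mount --bind /mnt/newdisk/sdb3 /srv/node/sdb3
    sudo mount --bind /mnt/newdisk/sdb4 /srv/node/sdb4
    swift-init all start
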
lorriethanks a lot to MooingLemur and zaitcev, my swift is working fine after i added the new device to it01:36
lorrie@MooingLemur, you mean i actually don't need to do anything else and the rebalance will handle that, right?01:37
zaitcevlook, a rebalance moves some of the partitions (of which you have 1024)01:38
zaitcevso yeah01:39
zaitcevBut even so you cannot balance it so that 4/5ths ends up on 1 device (sdc1)01:39
lorrie@zaitcev, when you say move some of the old 4 devices, what do you mean?01:39
lorrie@zaitcev, if i divide the newly added sdc1 into 4 swift partitions like i did for sdb1, will it then be rebalanced?01:41
zaitcevlook, they are all mounted under /srv/node, right? so... just relocate some of that stuff with plain Linux commands. Although I'm having second thoughts about it; considering the trouble you're having already, this may easily end in disaster.01:41
zaitcevYou don't even need to do that01:41
zaitcevI'd say just 2 is fine01:42
zaitcevThen make the weight 2.0 for each of the new 2... You already have 1.0 on each of the existing 4, right?01:42
lorrie@zaitcev, this is just a test system so actually i can just rebuild it01:42
lorrieso it is fine and i just need to know the right way to do it01:43
zaitcev3       1     4       127.0.0.1  6040       127.0.0.1              6040      sdb4   1.00        615    0.1001:43
zaitcevThat 1.00 is weight01:43
lorrie@zaitcev yes i have 1.0 for 4, i can use 2.0 for 2 more01:44
lorrieoh i see, in this way the new files will be written to the new disk more01:45
zaitcevyeah, you have "build version 6 1024 partitions, 3.000000 replicas", so R=3 (well, it's a float number nowadays)01:46
zaitcevso... if you have 6 zones, 4 of 1.0 and 2 of 2.0, it should spread itself01:46
zaitcevonce you've said "swift-ring-builder xxxxx rebalance", you push the ring into /etc/swift01:46
zaitcevthen replicators will copy stuff from overfilled zones to the new, underfilled ones01:47
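
Putting the weighting advice together as commands (device id d4 is sdc1 in the ring output above; the same change is normally made to the account and container builders as well):

    cd /etc/swift
    # raise the weight of the new device so it attracts more partitions
    swift-ring-builder object.builder set_weight d4 2.0
    # rebalance rewrites object.ring.gz next to the builder file; note the
    # builder above allows partition reassignment only once per hour
    swift-ring-builder object.builder rebalance
    # on a multi-node cluster the new *.ring.gz files are then copied to
    # every node; the daemons notice the changed ring and reload it
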
lorrieok so then i don't have to copy files from one device to another right?01:47
zaitcevthey delete whatever does not match the new ring, so the amount of stuff on the old drives should decrease01:47
lorriecool, got it!01:48
zaitcevassuming all your replicators and auditors run01:48
lorriethank you very much for your help!01:48
zaitceve.g. don't just do "swift-init main start", you must have 'swift-init rest start' too01:48
lorriecurrently i did: swift-init all stop01:49
lorriethen: startmain01:49
zaitcevoh, and make sure rsync works fine01:49
zaitcevI usually look with  tail -f /var/log/rsync.log01:50
lorrieyes it works fine01:51
lorrieso after rebalance i need to stop swift, start it, and do swift-init rest start?01:52
zaitcevno01:52
zaitcevswift is designed so you don't need to restart01:53
zaitcevwell, actually since you stopped already01:53
zaitcevjust do  swift-init all start01:53
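
For reference, the swift-init groups mentioned here: "main" is the proxy plus the account/container/object servers, "rest" is the background daemons (replicators, auditors, updaters, reapers, etc.), and "all" is both:

    swift-init main start    # wsgi servers only
    swift-init rest start    # background daemons
    swift-init all start     # everything at once
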
lorrieso i don't need to stop, just do swift-init main start, and swift-init rest start?01:53
zaitcevyou can stop... we used to do this 2 years ago, a restart to pick up the new ring01:54
zaitcevnow the daemons check the timestamps and load new rings01:55
lorrieok, stop and then start. thank you so much01:55
*** lorrie has quit IRC02:19
*** tsg has quit IRC03:03
*** goodes has quit IRC03:12
*** goodes has joined #openstack-swift03:14
*** zhiyan has quit IRC03:18
*** zhiyan has joined #openstack-swift03:21
*** abhirc has quit IRC03:33
*** echevemaster has quit IRC04:12
*** foexle has quit IRC04:27
*** zaitcev has quit IRC05:06
*** haomaiwang has joined #openstack-swift05:12
*** anticw has quit IRC05:19
*** SkyRocknRoll has joined #openstack-swift06:43
*** nottrobin has quit IRC06:44
*** nottrobin has joined #openstack-swift06:47
*** sfineberg has quit IRC08:08
*** nellysmitt has joined #openstack-swift08:19
*** kopparam has joined #openstack-swift11:17
*** exploreshaifali has joined #openstack-swift11:21
*** haomaiwang has quit IRC11:34
*** haomaiwang has joined #openstack-swift11:35
*** mkollaro has joined #openstack-swift11:40
*** nellysmitt has quit IRC12:06
*** kopparam has quit IRC12:25
*** mkollaro has quit IRC12:29
*** exploreshaifali has quit IRC12:30
*** haomaiw__ has joined #openstack-swift13:06
*** haomaiwang has quit IRC13:06
*** exploreshaifali has joined #openstack-swift13:14
*** haomaiwang has joined #openstack-swift13:20
*** tsg_ has joined #openstack-swift13:23
*** haomaiw__ has quit IRC13:24
*** kopparam has joined #openstack-swift13:26
*** kopparam has quit IRC13:31
*** kopparam has joined #openstack-swift13:37
*** kopparam has quit IRC13:43
*** kopparam has joined #openstack-swift13:54
*** kopparam has quit IRC13:58
*** SkyRocknRoll has quit IRC14:02
*** SkyRocknRoll has joined #openstack-swift14:03
*** tab____ has joined #openstack-swift14:14
*** kopparam has joined #openstack-swift14:21
*** kopparam has quit IRC14:26
*** nshaikh has joined #openstack-swift14:42
*** tgohad has joined #openstack-swift14:57
*** tsg_ has quit IRC15:00
*** tgohad has quit IRC15:11
*** kopparam has joined #openstack-swift15:18
*** exploreshaifali has quit IRC15:25
*** mkollaro has joined #openstack-swift15:33
*** kopparam has quit IRC15:58
*** haomaiwang has quit IRC16:02
*** haomaiwang has joined #openstack-swift16:02
*** nellysmitt has joined #openstack-swift16:11
*** tsg_ has joined #openstack-swift16:22
*** mkollaro has quit IRC16:29
*** SkyRocknRoll has quit IRC16:34
*** nellysmitt has quit IRC16:35
*** nellysmitt has joined #openstack-swift16:35
*** shakamunyi has joined #openstack-swift16:37
*** nshaikh has quit IRC16:43
*** 1JTAAXF1S has joined #openstack-swift16:48
*** haomaiwang has quit IRC16:52
*** nellysmitt has quit IRC16:53
*** nellysmitt has joined #openstack-swift16:57
*** exploreshaifali has joined #openstack-swift16:57
*** mkollaro has joined #openstack-swift17:05
*** kopparam has joined #openstack-swift17:20
*** kopparam has quit IRC17:26
*** tsg_ has quit IRC17:28
*** EmilienM has quit IRC17:39
*** EmilienM has joined #openstack-swift17:39
*** EmilienM has quit IRC17:53
*** EmilienM has joined #openstack-swift17:53
*** EmilienM has quit IRC17:55
*** EmilienM has joined #openstack-swift17:56
*** shakamunyi_ has joined #openstack-swift18:20
*** shakamunyi has quit IRC18:24
*** brnelson has quit IRC18:42
*** brnelson has joined #openstack-swift18:42
*** Tyger_ has joined #openstack-swift19:14
*** tsg_ has joined #openstack-swift19:14
*** nshaikh has joined #openstack-swift19:27
*** nshaikh has left #openstack-swift19:53
*** infotection has quit IRC20:02
*** infotection has joined #openstack-swift20:07
*** slDabbler has joined #openstack-swift20:11
*** slDabbler has quit IRC20:15
*** Tyger_ has quit IRC20:17
*** exploreshaifali has quit IRC20:19
*** tsg_ has quit IRC20:28
*** kopparam has joined #openstack-swift20:30
*** tsg has joined #openstack-swift20:31
*** kopparam has quit IRC21:22
*** tsg has quit IRC21:36
*** tab____ has quit IRC22:10
*** nellysmitt has quit IRC23:02
*** TaiSHi has quit IRC23:15
*** TaiSHi has joined #openstack-swift23:16
*** TaiSHi has joined #openstack-swift23:16
*** portante has quit IRC23:58

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!