From Glee

Hard disks fail. That's just (their) life. When that happens, there usually isn't much you can do other than double-check for errors and confirm that things look bad. A disk with errors should usually be replaced, since a few correctable errors usually mean that many more are waiting to show up.

To check for bad blocks, use the badblocks command.

Non-destructive examples, safe on disks whose data must be preserved:

# Non-destructive read-only test (data preserved; fastest, but least thorough)
badblocks -o ~/sdb.txt -s /dev/sdb
# Non-destructive read-write test (data preserved; the device must be unmounted)
badblocks -o ~/sdb.txt -s -n /dev/sdb
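badblocks refuses to run the -n (read-write) test on a mounted device, so it helps to check first. A minimal wrapper sketch, assuming the target is /dev/sdb (adjust to your disk); the mounted() helper is an illustration, not part of badblocks:

```shell
#!/bin/sh
# Refuse to run the non-destructive write test on a mounted device.
# DEV is an assumption; change it to the disk you actually want to test.
DEV=/dev/sdb

# Succeed if the given device node or mount point appears in /proc/mounts.
mounted() {
    awk -v t="$1" '$1 == t || $2 == t { found = 1 } END { exit !found }' /proc/mounts
}

if mounted "$DEV"; then
    echo "$DEV is mounted; unmount it before running badblocks -n" >&2
    exit 1
fi
badblocks -o ~/sdb.txt -s -n "$DEV"
```

Note that partitions (e.g. /dev/sdb1) can be mounted even when the whole-disk node is not, so checking each partition is the cautious approach.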

Destructive examples, use only when the data on the disk no longer matters:

# DESTROY ALL DATA by writing/re-reading random patterns to the whole disk (quite fast)
badblocks -o ~/sdb.txt -s -w -t random /dev/sdb
# DESTROY ALL DATA by writing/re-reading 4 different patterns to the whole disk (quite slow)
badblocks -o ~/sdb.txt -s -w /dev/sdb

Before throwing away a disk with errors, you might want to wipe the existing data with shred. If you've already run one of the destructive checks, that may be unnecessary.
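shred's behavior can be previewed safely on a scrap file before pointing it at a whole device; the same flags apply either way. A sketch (the temporary file is illustrative):

```shell
# Create a scrap file, overwrite it with 3 random passes, then unlink it.
tmpfile=$(mktemp)
echo "sensitive data" > "$tmpfile"
shred -v -n 3 -u "$tmpfile"   # -v shows progress, -u removes the file afterwards
# For a whole disk you would target the device node instead (DESTROYS ALL DATA):
#   shred -v -n 3 /dev/sdb
```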

For a 4TB SATA drive (WDC "Red" NAS) on an HP MicroServer, the default badblocks invocation would write at only 5MB/s. Specifying a block size of 4KiB (-b 4096) and raising the number of blocks tested at once from the default 64 to 2048 (-c 2048) brought the write speed up to 100-130MB/s:

badblocks -o ~/sdb.txt -b 4096 -c 2048 -s -w -t random /dev/sdb
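The log written with -o is simply one bad block number per line, and the ext2/3/4 tools can read it back so those blocks are never allocated. A sketch assuming the log from above and a partition /dev/sdb1 (hypothetical); note that the block size given to badblocks via -b must match the filesystem's block size for the numbers to line up:

```shell
# The badblocks log is one block number per line; count how many were found:
wc -l < ~/sdb.txt

# Feed the list to the ext tools (block sizes must match, hence -b 4096):
mke2fs -b 4096 -l ~/sdb.txt /dev/sdb1   # mark them bad in a new filesystem
e2fsck -l ~/sdb.txt /dev/sdb1           # add them to an existing filesystem's list
```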