Redundant Array of Independent Disks (revision 141125-r01KI)


In this lab, you will learn about the RAID concept. The type of RAID introduced in this lab is software-based RAID, also known as SoftRAID. The tools used in this lab are mdadm [1][2][3][5], dd [4], sudo, mkfs.ext4, mount, and umount.

mdadm is used to manage SoftRAID arrays on GNU/Linux.
dd is used to overwrite data on a disk.
mkfs.ext4 is used to create a filesystem on the RAID disk created by mdadm.
mount and umount are used to mount and unmount the RAID disk's filesystem onto a directory.
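
For reference, the mkRAID script used in this lab automates the array creation. A minimal sketch of the equivalent manual mdadm step, assuming the 4-disk RAID 5 layout this lab uses, would be the command below (the script additionally handles naming and configuration, so do not run this by hand here):

    $sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde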

You will learn how RAID recovers from a broken disk in the array. This process does not disturb the other files on the RAID disk.

Here are the steps to complete this lab:

First Part: Create a SoftRAID with a script, create a filesystem on the RAID, and mount it.

  1. Please check the four additional disks in your TinyCore VM using the command: cat /proc/partitions|grep sd. You will get output similar to the picture below. The disks named sdb, sdc, sdd, and sde will be used to create the RAID.
  2. Please save the output of the command above to "disks.txt" using the command:
    $cat /proc/partitions|grep sd > disks.txt
    
  3. Please download mkRAID using wget, then add execute permission to the script.
    $wget --no-check-certificate https://projects.ui.ac.id/attachments/6405/mkRAID
    $chmod +x mkRAID
    
  4. Run the mkRAID script. The script needs your token from the OS course apps as a parameter. For example, if your token is 0U4E9O, run mkRAID as follows:
    $sudo ./mkRAID 0U4E9O
    
  5. Please wait 3-4 minutes. The script will create the RAID disk, synchronize all disks in the RAID, create the RAID configuration file, and prepare the RAID disk; the output will look similar to the picture below.
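    While waiting, you can watch the synchronization progress yourself (optional; this assumes the watch applet is available in your TinyCore/BusyBox build):
    $watch -n 5 cat /proc/mdstat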



  6. Please copy the RAID configuration file /etc/mdadm.conf into your directory, which was created in the first step of the first part. Replace KELAS and NPM with your class and your NPM.
    $cp /etc/mdadm.conf /home/tc/KELAS/NPM/Lab10/mdadm.conf
    
  7. Please make sure that today's token is used in the name of your RAID disk. For example, if today's token is "0U4E9O", you can check your RAID disk with the command:
    $cat mdadm.conf|grep "0U4E9O" 
    ARRAY /dev/md0 metadata=1.2 name=box:ZCZC-0U4E9O UUID=5e89bfc7:ae79db99:4f775a87:aa4bf116
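    The ARRAY line above is the same kind of line mdadm emits in scan mode; to regenerate it for comparison (optional check), you can run:
    $sudo mdadm --detail --scan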
    
  8. If your token does not appear in the output above, please repeat steps 1 to 5 of this part.
  9. Please save the output of cat /proc/mdstat into mdstat.txt, in the directory created in the first step. Replace "KELAS" and "NPM" with your class and your NPM.
    $cat /proc/mdstat > /home/tc/KELAS/NPM/Lab10/mdstat.txt
    
  10. Please create a filesystem of type ext4 on the RAID disk /dev/md0 and wait until it finishes.
    $sudo mkfs.ext4 /dev/md0
    
  11. Please mount /dev/md0 to the mount point /mnt/raid.
    $sudo mount /dev/md0 /mnt/raid
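    To confirm the mount succeeded (optional check):
    $mount | grep md0
    $df -h /mnt/raid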
    

Second Part: Disk Failure Test and RAID Recovery

  1. Please download the Debian CD image with the command below. The file will automatically be saved in /mnt/raid under the name debian.iso.
    $sudo wget -v http://kambing.ui.ac.id/iso/debian/6.0.7/i386/iso-cd/debian-6.0.7-i386-businesscard.iso -O /mnt/raid/debian.iso
    
    Alternate link:
    
    $sudo wget -v http://opendata.ui.ac.id/os/debian-6.0.7-i386-businesscard.iso -O /mnt/raid/debian.iso
    
  2. Please mount debian.iso using the loopback option.
    $sudo mount -o loop /mnt/raid/debian.iso /mnt/cdrom
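    If mount complains that /mnt/cdrom does not exist, create the mount point first and retry (your TinyCore image may already provide it):
    $sudo mkdir -p /mnt/cdrom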
    
  3. Change your directory to /mnt/cdrom, then verify the md5sums of all of the directory's contents with the command:
    $cd /mnt/cdrom 
    $md5sum -c /mnt/cdrom/md5sum.txt | grep FAILED
    
  4. If your debian.iso is not corrupt, the command above will produce no output, because grep FAILED filters out the per-file OK lines.
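    As an optional cross-check (run from /mnt/cdrom, as in step 3), you can count the files that verify OK instead:
    $md5sum -c /mnt/cdrom/md5sum.txt | grep -c OK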
  5. Mark /dev/sdb as faulty using the mdadm command:
    $sudo mdadm --fail /dev/md0 /dev/sdb
    
  6. Please check the RAID array status with the command cat /proc/mdstat. In its output you will see that /dev/sdb is marked with (F).
  7. Save the output of the step above in mdstat-faulty.txt and put it into your directory. Replace KELAS and NPM with your class and your NPM.
    $cat /proc/mdstat > /home/tc/KELAS/NPM/Lab10/mdstat-faulty.txt
    
  8. Remove /dev/sdb from the array using the mdadm command below:
    $sudo mdadm --remove /dev/md0 /dev/sdb
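    Note that mdadm can also fail and remove a device in a single invocation (shown for reference only; this lab does it in two steps so each intermediate state can be captured):
    $sudo mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb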
    
  9. Please check the RAID array status again with cat /proc/mdstat. You can see that /dev/sdb is no longer part of the array.
  10. Please save the output of the step above in mdstat-removed.txt and put it into your directory. Replace KELAS and NPM with your class and your NPM.
    $cat /proc/mdstat > /home/tc/KELAS/NPM/Lab10/mdstat-removed.txt
    
  11. Your RAID is now in a degraded state. You can check this with the command sudo mdadm --detail /dev/md0|grep degraded. The output of that command looks like the example below.
    $sudo mdadm --detail /dev/md0|grep degraded
           State : clean, degraded
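    For a fuller view of the degraded array (optional), mdadm --detail also reports per-category device counts:
    $sudo mdadm --detail /dev/md0 | grep Devices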
    
  12. Repeat the md5sum check in the /mnt/cdrom directory as in step 3 of this part.
  13. Is data integrity still intact? Is there any relation between data integrity and the md5sum check result? Please explain it, based on your knowledge of RAID, in "WHAT-IS-THIS.txt".
  14. Please overwrite the data on /dev/sdb with the command below. That command erases all data on /dev/sdb. Please wait 1-2 minutes until it is done.
    $sudo dd if=/dev/zero of=/dev/sdb bs=1024
    2097153+0 records in
    2097152+0 records out
    2147483648 bytes (2.0GB) copied, 53.632729 seconds, 38.2MB/s
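    The numbers are consistent with the disk size: 2147483648 bytes / 1024 bytes per block = 2097152 blocks written, i.e. the entire 2 GB disk has been zeroed.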
    
  15. Please add /dev/sdb back into the RAID array using the command:
    $sudo mdadm --add /dev/md0 /dev/sdb
    
  16. Observe the output of the command cat /proc/mdstat and make sure the recovery process is running. Save the output in mdstat-recovery.txt in your directory. Replace KELAS and NPM with your class and your NPM.
    $cat /proc/mdstat 
    Personalities : [raid6] [raid5] [raid4] 
    md0 : active raid5 sdb[5] sdc[1] sde[4] sdd[2]
          6286848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [_UUU]
          [>....................]  recovery =  0.9% (19500/2095616) finish=1.7min speed=19500K/sec
    $cat /proc/mdstat > /home/tc/KELAS/NPM/Lab10/mdstat-recovery.txt
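    In the mdstat output above, [4/3] means that 3 of the array's 4 devices are currently up, and the _ in [_UUU] marks the slot that is being rebuilt. If you want to block until the rebuild completes (an optional sketch, assuming a POSIX shell):
    $while grep -q recovery /proc/mdstat; do sleep 10; done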
    
  17. Repeat step 3 of this part.
  18. Is data integrity still intact? Is there any relation between data integrity and the RAID recovery process? Please explain it, based on your knowledge of RAID, in "WHAT-IS-THIS.txt".
  19. Return to your directory.
    $cd /home/tc/KELAS/NPM/Lab10
    

References

[1] mdadm: http://neil.brown.name/blog/mdadm
[2] mdadm Cheat Sheet: http://www.ducea.com/2009/03/08/mdadm-cheat-sheet/
[3] Linux RAID Wiki: https://raid.wiki.kernel.org/index.php/RAID_setup
[4] dd Manual Page: http://linux.die.net/man/1/dd
[5] mdadm Manual Page: http://linux.die.net/man/8/mdadm
