Redundant Array of Independent Disks (revision 141125-r03)

Back to Lab 10

  1. Create a file named raid-Lab10.txt and write these lines inside it:
    ZCZC [KELAS] TGS LAB10 [TANGGAL]
    ZCZC [NPM] LOG [KETERANGAN]
    
  2. Replace [KELAS] with your class, [TANGGAL] with today's date in DD-MM-YY format, [NPM] with your student ID number, and [KETERANGAN] with a short comment:
    ZCZC TESTING TGS LAB10 13-05-15
    ZCZC 1202000818 LOG OS Bentar lagi UAS, HORE!
    
  3. Run the script command to record your console output:
    $ script -a raid-Lab10.txt
    

In today's session, you will learn about RAID concepts and how to implement them. The type of RAID used will be software-based RAID, commonly known as SoftRAID.

The new tools used in this session are mdadm [1][2][3][5] and dd [4].

mdadm is a tool used to manage SoftRAID arrays in GNU/Linux.
dd is used here to overwrite data (zero filling) on a disk.
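The zero-filling behaviour of dd can be tried safely on a regular file instead of a real disk. A minimal sketch (disk.img and its size are made-up names for illustration, not part of the lab):

```shell
# Zero-fill a regular file instead of a real disk (safe illustration only).
dd if=/dev/zero of=disk.img bs=1024 count=16 2>/dev/null   # write 16 KiB of zero bytes
wc -c < disk.img                                           # 16384
cmp -n 16384 disk.img /dev/zero && echo "all zeroes"       # no difference: file is zero-filled
```

The same `if=/dev/zero` pattern is used later in this lab, where `of=` points at the actual disk /dev/vdb.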

You'll learn how RAID performs recovery when one of the disks in the array is damaged. The recovery process won't affect the files stored on the RAID disk.
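Why recovery does not lose data: in parity-based RAID levels (such as RAID 5, which the /proc/mdstat output later in this lab shows), each parity block is the XOR of the data blocks in its stripe, so any single lost block can be recomputed from the survivors. A toy sketch with single-byte "blocks" (the values are arbitrary):

```shell
# Toy RAID-5 parity demo: parity = XOR of the data blocks in a stripe.
d1=37; d2=142; d3=209               # three data "blocks" (arbitrary byte values)
parity=$(( d1 ^ d2 ^ d3 ))          # parity block stored on a fourth disk
d2_rebuilt=$(( parity ^ d1 ^ d3 ))  # after losing d2, XOR the survivors
echo "$d2_rebuilt"                  # 142 -- identical to the lost block
```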

Here are the steps to finish the task in today's session:

Part One : Creating SoftRAID Using Script, Creating File System in RAID, and Mounting It

  1. Make sure vdb*, vdc*, vdd*, vde*, and vdf* are already unmounted, using the following command:
    $ sudo umount /mnt/vdb1
    
  2. Repeat the previous command for every mounted device (except vda).
  3. Check the locations of the 5 disks (except vda) inside the VM that you'll use with this command:
    $ cat /proc/partitions | grep vd
    
  4. You'll get output like the picture below. The disks named vdb, vdc, vdd, vde, and vdf will be used to create the RAID array.
  5. Save the output of the preceding command into disks.txt:
    $ cat /proc/partitions | grep vd > disks.txt
    
  6. Download the mkRAID script using wget, then make it executable:
    $ wget --no-check-certificate https://projects.ui.ac.id/attachments/download/8359/mkRAID
    $ chmod +x mkRAID
    
  7. Run the mkRAID script as the superuser. The script takes one parameter, NPM-KELAS. For example, if your NPM is 1202000818 and your KELAS is TESTING, run mkRAID like this:
    $ sudo ./mkRAID 1202000818-TESTING
    
  8. Wait for a while. The script will create the RAID disk, synchronize the disks incorporated into the RAID, create the RAID configuration file, and arrange for the RAID disk to come up at boot, as shown in the pictures below.
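The mkRAID script itself is not reproduced in this document. A script like it would presumably run commands along these lines (an assumption based on the five-disk RAID-5 array visible in the later /proc/mdstat output; the exact options mkRAID uses may differ). Do not run this by hand, it is only a sketch:

```shell
# Hypothetical sketch of what a script like mkRAID might do -- NOT the actual script.
NAME="$1"                                        # e.g. 1202000818-TESTING
sudo mdadm --create /dev/md0 --level=5 --raid-devices=5 \
     --name="ZCZC-$NAME" /dev/vdb /dev/vdc /dev/vdd /dev/vde /dev/vdf
sudo mdadm --detail --scan | sudo tee /etc/mdadm.conf    # persist the array configuration
```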

  9. Copy the RAID configuration file /etc/mdadm.conf to your Lab10 directory:
    $ cp /etc/mdadm.conf mdadm.conf
    
  10. Make sure your NPM-KELAS is listed in the name of the RAID disk that was created. For example, if your NPM is 1202000818 and your KELAS is TESTING, check it like this:
    $ cat mdadm.conf | grep 1202000818-TESTING
    ARRAY /dev/md0 metadata=1.2 name=OSLab:ZCZC-1202000818-TESTING UUID=5e89bfc7:ae79db99:4f775a87:aa4bf116
    
  11. If your data isn't listed in the output of the previous step, repeat the whole process of creating the RAID disk from step 1 to step 5.
  12. Save the status of the newly created RAID into mdstat.txt:
    $ cat /proc/mdstat > mdstat.txt
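The saved mdstat.txt can also be checked mechanically: for a healthy five-disk array, the status field shows [5/5] [UUUUU], one U per working member. A sketch on sample mdstat text (the sample lines are illustrative, modelled on the outputs shown later in this lab):

```shell
# Check array health from saved mdstat output (sample text, not a live system).
cat > mdstat.txt <<'EOF'
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 vdf[4] vde[3] vdd[2] vdc[1] vdb[0]
      6286848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
EOF
grep -c '\[UUUUU\]' mdstat.txt     # 1: all five members are up
```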
    
  13. Create an ext4 file system on the RAID disk /dev/md0 that was just created, and wait until the file-system creation completes:
    $ sudo mkfs.ext4 /dev/md0
    
  14. Mount /dev/md0 at the mount point /mnt/raid:
    $ sudo mount /dev/md0 /mnt/raid
    

Part Two: Disk Failure Test and RAID Recovery

  1. Download the Debian CD image with the following wget command. The file will be downloaded and saved into /mnt/raid under the name debian.iso.
    $ sudo wget -v http://kambing.ui.ac.id/iso/debian/6.0.7/i386/iso-cd/debian-6.0.7-i386-businesscard.iso -O /mnt/raid/debian.iso
    

    or alternatively
    $ sudo wget -v http://opendata.ui.ac.id/os/debian-6.0.7-i386-businesscard.iso -O /mnt/raid/debian.iso
    
  2. Mount the file debian.iso with the loopback option as follows:
    $ sudo mount -o loop /mnt/raid/debian.iso /mnt/cdrom
    
  3. Do md5sum checking on the contents of the /mnt/cdrom directory:
    $ cd /mnt/cdrom
    $ md5sum -c /mnt/cdrom/md5sum.txt | grep FAILED
    
  4. If the debian.iso file is in good condition, the preceding step will not give any output.
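How `md5sum -c` reports corruption can be seen with throwaway files (the file names here are hypothetical, not part of the lab deliverables):

```shell
# md5sum -c prints OK per intact file, and FAILED for any file whose checksum no longer matches.
cd "$(mktemp -d)"
echo "hello" > a.txt
echo "world" > b.txt
md5sum a.txt b.txt > md5sum.txt
md5sum -c md5sum.txt                              # a.txt: OK / b.txt: OK
echo "corrupted" > b.txt                          # simulate a damaged file
md5sum -c md5sum.txt 2>/dev/null | grep FAILED    # b.txt: FAILED
```

This is exactly why piping the check through `grep FAILED` in step 3 yields output only when some file on the mounted ISO is damaged.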
  5. Set the disk /dev/vdb to faulty (damaged) with this mdadm command:
    $ sudo mdadm --fail /dev/md0 /dev/vdb
    
  6. Check the status of the RAID array:
    $ cat /proc/mdstat
    

    You will find that the disk /dev/vdb is marked with (F). See the picture below.
  7. Save the output of the previous step into mdstat-faulty.txt. Replace KELAS and NPM with your class and student ID.
    $ cat /proc/mdstat > ~/KELAS/NPM/Lab10/mdstat-faulty.txt
    
  8. Remove the disk /dev/vdb from the array:
    $ sudo mdadm --remove /dev/md0 /dev/vdb
    
  9. Re-check the status of the RAID array:
    $ cat /proc/mdstat
    

    You can see that /dev/vdb is no longer part of the array. See the picture below.
  10. Don't forget to save the output of the previous step to mdstat-removed.txt. Replace KELAS and NPM with your class and student ID.
    $ cat /proc/mdstat > ~/KELAS/NPM/Lab10/mdstat-removed.txt
    
  11. Your RAID array will now be in a degraded state. You can check it with sudo mdadm --detail /dev/md0 | grep degraded:
    $ sudo mdadm --detail /dev/md0 | grep degraded
           State : clean, degraded
    
  12. Do md5sum checking on the /mnt/cdrom directory as in step 3 of Part Two.
  13. Is the integrity of the data preserved, as shown by the md5sum check producing no output? Explain it with your own understanding of RAID in WHAT-IS-THIS.txt inside the Lab10 folder!
  14. Overwrite the data (zero fill) on disk /dev/vdb with the command sudo dd if=/dev/zero of=/dev/vdb bs=1024. This command erases all data that has been written to /dev/vdb. Wait until it completes.
    $ sudo dd if=/dev/zero of=/dev/vdb bs=1024
    2097153+0 records in
    2097152+0 records out
    2147483648 bytes (2.0GB) copied, 53.632729 seconds, 38.2MB/s
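The byte count in dd's summary is just records × block size, which can be confirmed with shell arithmetic (the figures are taken from the sample output above):

```shell
# 2097152 output records of 1024 bytes each = 2^31 bytes = 2 GiB.
blocks=2097152; bs=1024
echo $(( blocks * bs ))    # 2147483648
```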
    
  15. Re-add the disk /dev/vdb to the RAID array:
    $ sudo mdadm --add /dev/md0 /dev/vdb
    
  16. Look at the output of cat /proc/mdstat and make sure the recovery process is progressing. Save the output of that command to mdstat-recovery.txt. Replace KELAS and NPM with your class and student ID.
    $ cat /proc/mdstat 
    Personalities : [raid6] [raid5] [raid4] 
    md0 : active raid5 vdb[5] vdc[1] vde[4] vdd[2] vdf[3]
          6286848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [_UUU]
          [>....................]  recovery =  0.9% (19500/2095616) finish=1.7min speed=19500K/sec
    $ cat /proc/mdstat > ~/KELAS/NPM/Lab10/mdstat-recovery.txt
    
  17. Do md5sum checking again on the /mnt/cdrom directory as in step 3 of Part Two.
  18. Is the integrity of the data preserved after the recovery process, as shown by the md5sum check producing no output? Explain it with your own understanding of RAID in WHAT-IS-THIS.txt inside the Lab10 folder!
  19. Return to the Lab10 folder. Replace KELAS and NPM with your class and student ID:
    $ cd ~/KELAS/NPM/Lab10/
    
  20. Delete the mkRAID script:
    $ rm -f mkRAID
    
  21. Stop recording the console output:
    $ exit
    
  22. This is the end of today's lab session.

References

[1] mdadm : http://neil.brown.name/blog/mdadm
[2] mdadm Cheat Sheet : http://www.ducea.com/2009/03/08/mdadm-cheat-sheet/
[3] Linux RAID Wiki : https://raid.wiki.kernel.org/index.php/RAID_setup
[4] dd Manual Page : http://linux.die.net/man/1/dd
[5] mdadm Manual Page : http://linux.die.net/man/8/mdadm


Attachment: mkRAID (3.12 KB), uploaded by Ahmad Rifai Ashari (ahmad.rifai31), 16/05/2016 16:49