(RAID0, RAID1) on Ubuntu, (RAID1+0) on CentOS: Concepts and Implementation

Hey Dear,

I'm writing here just to share what I have done.. and to keep it safe 😉 , as always.
In a UNIX/Linux environment, RAID can be implemented directly from the installer while you are installing the OS, or, once the OS is installed, with software like mdadm, also known as Multiple Devices ADMinistration.

About RAID.. we know it comes in 8 types. They consist of:
1). RAID0 : Striped block function; HDD 1 and HDD 2 will be merged. Example: HDD 1 has 100GiB and HDD 2 has 100GiB, so it becomes 200GiB, written into /dev/md0.
2). RAID1 : Mirrored function; we use at least 2 HDDs. If one HDD crashes, the data is still accessible from the other HDD, and the array is read as only ONE drive's size, unlike RAID0. Example: HDD 1 has 100GiB and HDD 2 has 100GiB, so it will be read as only 100GiB; the other HDD acts as a mirror. It is therefore recommended to implement this method with HDDs of identical size.
3). RAID2 : Striped with Hamming parity. Here we use at least 5 HDDs, following the formula (n+3, n>1). The three extra drives store the Hamming code produced from bit-level processing. Example: HDD 1 has 100GiB and HDD 2 has 100GiB, so the total is 200GiB, and HDD[3,4,5] act as storage for the Hamming parity information. If either HDD 1 or HDD 2 crashes, the data is still readable via the Hamming parity code on HDD[3,4,5].
4). RAID3 : Striped function, but with a lower parity overhead. The formula is (n+1, n>1), so it needs at least 3 HDDs; the last HDD acts as the parity storage. Example: we have three HDDs of 100GiB each, giving 300GiB of raw space, with the last HDD used as parity storage (so 200GiB is usable).
5). RAID4 : The same as RAID3, but striping uses blocks from the HDD, not bits.
6). RAID5 : The same as RAID4, but with distributed parity, which helps prevent a parity-disk bottleneck.
7). RAID6 : An advance on RAID5, with the parity doubled to 2 blocks (p+q). The formula is (n+2, n>1), with a minimum of 4 HDDs. With this method, 2 crashed HDDs can be tolerated.
8). RAID1+0 : A combination of RAID0 and RAID1, with striping for performance and mirroring for redundancy. The minimum number of devices needed to configure software RAID1+0 is 4.

Then.. we will start with RAID0 and RAID1 on Ubuntu 12.04 Server:
It is easy for a beginner to configure RAID0 and RAID1 on Ubuntu; if you just want to practice, you can use VirtualBox and attach additional storage through the available feature.
(screenshots: adding extra virtual disks in VirtualBox, then creating and checking the RAID0 array)
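Since the screenshots don't show the exact commands, here is a minimal sketch of how the RAID0 array could be built with mdadm on the Ubuntu guest, assuming the two extra VirtualBox disks show up as /dev/sdb and /dev/sdc (hypothetical device names):
~# apt-get install mdadm
~# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
~# cat /proc/mdstat
~# mkfs.ext3 /dev/md0 && mkdir /raid0 && mount /dev/md0 /raid0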

RAID1
(screenshots: creating and checking the RAID1 array) Then you can prove it this way:
(screenshot: RAID1 implementation) It looks like I tried copying all of the contents of the /lib folder on Linux, and the size of NewVirtualDisk2.vdi grew by the same amount as NewVirtualDisk1.vdi. That suggests the mirroring method succeeded.
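For RAID1, the equivalent sketch (again assuming /dev/sdb and /dev/sdc are the extra virtual disks and mdadm is already installed) would be:
~# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
~# mkfs.ext3 /dev/md1 && mkdir /raid1 && mount /dev/md1 /raid1
~# cp -r /lib /raid1/     # the copy test: both .vdi files on the host should grow by roughly the same amount
~# cat /proc/mdstat       # the mirror should show [UU] when both members are active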

RAID1+0
We are using CentOS 6.2 64-bit here, freshly installed, with two additional hard disks. There are two points that I will describe:
a. Create RAID1+0, then
b. Simulate a damaged or faulty storage device, then add or re-add it.

a. Create RAID1+0
We are focusing on two storage devices now (although you could instead use 4 additional disks, one for each RAID member): an 8GiB /dev/sda for our RAID1+0, partitioned with fdisk (as shown below, or you can use cfdisk, which is more user friendly), and an 8GiB /dev/sdb for the primary OS installation.
(screenshots: partitioning /dev/sda with fdisk and setting the partition type)
Always remember to type t to set each partition's type to 'Linux raid autodetect' (hex code fd) after you have created the partitions, and to type w to write the changes to disk. Finally:
~# partprobe <your_raid_device>
In our case we are using /dev/sda here. Don't forget to install this with yum:
~# yum install part*     # this pulls in parted, which provides partprobe
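For reference, the fdisk dialog for one partition looks roughly like this (a sketch only; the partition number and the 2GiB size are just the values used in this example):
Command (m for help): n      # create a new primary partition (+2G)
Command (m for help): t      # change the partition type
Partition number (1-4): 1
Hex code (type L to list codes): fd      # fd = Linux raid autodetect
Command (m for help): w      # write the partition table and exit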

So I have 4 partitions here, /dev/sda{1,2,3,4}, with 2GiB allocated to each of them. Now we can create the multiple device at level 10 and watch the statistics while the RAID is being built.
Note: we don't have a spare device here.


[root@centos usr]# mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sda{1,2,3,4}
[root@centos usr]# watch -n1 "cat /proc/mdstat"
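Not shown in the screenshots, but if you want the array to be assembled automatically at boot, it is common to record it in /etc/mdadm.conf (an extra step on my part, not part of the original walkthrough):
[root@centos usr]# mdadm --detail --scan >> /etc/mdadm.conf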


(screenshot: /proc/mdstat while the array is building)

Let's examine it, format it with a proper filesystem, make a directory, mount it, then register it in your fstab, with these commands:
[root@centos usr]# mdadm --detail /dev/md10
or
[root@centos usr]# mdadm --query /dev/md10
or
[root@centos usr]# mdadm --examine --scan
[root@centos usr]# mkfs.ext3 /dev/md10
[root@centos usr]# mkdir /raid10 && mount /dev/md10 /raid10/
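The fstab entry itself only appears in the screenshot; a minimal sketch, assuming the ext3 filesystem and the /raid10 mount point used above, would be:
/dev/md10   /raid10   ext3   defaults   0 0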
(screenshots: the mdadm --detail output and the resulting fstab entry)
The sizes might not match what I explained before (8GiB), but you can understand why the available space has been reduced 🙂

b. Simulate a damaged or faulty storage device, then add or re-add it.
First, we mark a device as faulty:
[root@centos usr]# mdadm /dev/md10 --fail /dev/sda4
(screenshot: the array state after marking /dev/sda4 as faulty)
Second, we will remove it.
[root@centos usr]# mdadm /dev/md10 --remove /dev/sda4
(screenshot: the array state after removing /dev/sda4)

Now we add (or re-add) /dev/sda4 to the md10 device:
[root@centos usr]# mdadm /dev/md10 --add /dev/sda4
(screenshot: the array rebuilding) If you want to stop and remove this RAID later, do these steps:
[root@centos usr]# umount /raid10/
[root@centos usr]# mdadm --stop /dev/md10
[root@centos usr]# mdadm --detail /dev/md10
[root@centos usr]# mdadm --remove /dev/md10
[root@centos usr]# fdisk -cu /dev/sda

Command (m for help): d
Partition number (1-4): 1
Command (m for help): d
Partition number (1-4): 2
Command (m for help): p
Command (m for help): w

# it will remove the partitions used in raid array
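One extra teardown step worth knowing (my addition, not part of the original steps): wiping the mdadm superblocks from the member partitions before reusing them, so the old array is not detected again on reboot.
[root@centos usr]# mdadm --zero-superblock /dev/sda{1,2,3,4}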

Actually, there's an alternative way to create RAID1+0.
First, make two RAID1 arrays, /dev/md0 and /dev/md1, then combine them, with these commands:
[root@centos usr]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sda2
[root@centos usr]# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sda4
[root@centos usr]# mdadm --create /dev/md10 --level=10 --raid-devices=2 /dev/md1 /dev/md0

(screenshot: the nested arrays after creation)
But then the hard disk space isn't merged; it stays the same 😦, as shown below. I am using 2GiB for each partition, /dev/sda{1,2,3,4}. So why did that happen???
(screenshot: the resulting array size)
HINT :
The FIRST level you build (whether RAID1 or RAID0) and the SIZE of the member storage determine the final space.
For example:
1). If we implement RAID1+0 with the direct method (a single command that produces /dev/md10 directly), even though we are using 4 HDDs of 8GiB each (32GiB in total), the output will be 16GiB (the RAID0 part); the other 16GiB is allocated for the mirror (RAID1).
2). If we implement RAID1+0 with the alternative method (first create RAID level 1 arrays as /dev/md0 and /dev/md1, then do the final step of creating level 10 (RAID1+0) from /dev/md0 and /dev/md1), the output keeps the size produced by the first level you chose.
Example: if you have 4 HDDs of 8GiB each and you create the RAID level 1 arrays first, with 2 devices for each /dev/md, you will get 8GiB as the final result.

This is because the system does the mirroring first (between the 8GiB /dev/sda1 and /dev/sda2), and likewise with /dev/sda3 and /dev/sda4.
Then, when you execute the final command 'mdadm --create /dev/md10 --level=10 --raid-devices=2 /dev/md{0,1}', the system checks whether the members were previously RAID1 or RAID0. Because we did level 1 first, the final result stays at 8GiB. It is impossible for the system to stripe it into 16GiB; if it did, where would we put the mirror for /dev/md0 and /dev/md1? 😉

The same rule applies if we do the first RAID at level 0. The system will merge them (/dev/sda1 and /dev/sda2 into /dev/md0), and likewise /dev/sda3 and /dev/sda4 into /dev/md1.
Then, when you execute the final command 'mdadm --create /dev/md10 --level=10 --raid-devices=2 /dev/md{0,1}', the system again checks whether the members were previously RAID1 or RAID0. Because we did level 0 first, the final result stays at 16GiB. It is impossible for the system to stripe it into 32GiB; if it did, where would we put the mirror for /dev/md0 and /dev/md1? 😉
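For completeness, a minimal sketch of that level-0-first variant, using the same /dev/sda{1,2,3,4} partitions and the same final combine command described above:
[root@centos usr]# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sda2
[root@centos usr]# mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda3 /dev/sda4
[root@centos usr]# mdadm --create /dev/md10 --level=10 --raid-devices=2 /dev/md0 /dev/md1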

3). What if we use an 'odd number of storage devices' now? Let's say we have 5 devices here, /dev/sd{b1,c1,d1,e1,f1}, 2GiB each.
A. If we use the direct method (with 'mdadm --create /dev/md10 --level=10 --raid-devices=5 /dev/sd{b1,c1,d1,e1,f1}'), it will produce only 5GiB.
(screenshot: the direct method result with 5 devices)
B. If we use the alternative method with 2 devices for /dev/md0 at RAID level 1 (/dev/sd{b1,c1}) and 3 devices for /dev/md1 at RAID level 1 (/dev/sd{d1,e1,f1}), the extra device in /dev/md1 is excess: each mirror can only be read as 2GiB, the size of the first 2 devices. The same applies vice versa (3 devices for /dev/md0 and 2 devices for /dev/md1).
(screenshots: the odd-numbered alternative layouts)
C. If we use the alternative method with 3 devices for /dev/md0 at RAID level 0 (/dev/sd{b1,c1,d1}) and 2 devices for /dev/md1 at RAID level 0 (/dev/sd{e1,f1}), /dev/md0 now has the excess: the result can only be read as 4GiB, the striped size of the 2 smaller devices, because the system chooses the smaller member. The same applies vice versa (2 devices for /dev/md0 and 3 devices for /dev/md1).
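Whichever of the cases above you end up with, you can confirm the resulting capacity the same way (a quick check, not from the original screenshots):
[root@centos usr]# mdadm --detail /dev/md10 | grep "Array Size"
[root@centos usr]# cat /proc/mdstat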

.:. So be careful with the choice you make; pick the right and proper method. 😉

PS: Steps for sharing a dial-up connection or modem with a guest in VirtualBox.
1. First, make sure your Windows Firewall is turned ON.
2. Share your dial-up connection.
3. Set up IPv4 manually on your VirtualBox host adapter.
4. Change the VM's network adapter in VirtualBox to 'Host-only Adapter', then choose the VirtualBox Host-Only Ethernet Adapter.
5. On your guest OS (if you are using CentOS), set up the eth0 interface manually in the same local range.
~# ifconfig eth0 <your_same_range_ip_with_host>
~# echo 'nameserver 8.8.8.8' >> /etc/resolv.conf
6. Look up the IP of the site you want to access and register it:
~# echo '<your_site_ip> <your_site_name>' >> /etc/hosts
~# route add -host <your_site_ip> gw <your_OS_Host_ip>
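As a concrete (hypothetical) example, assuming the shared connection put the host on 192.168.137.1 and the site resolves to 203.0.113.10, the guest-side commands would look like:
~# ifconfig eth0 192.168.137.10 netmask 255.255.255.0 up
~# echo 'nameserver 8.8.8.8' >> /etc/resolv.conf
~# echo '203.0.113.10 www.example.com' >> /etc/hosts
~# route add -host 203.0.113.10 gw 192.168.137.1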


Cheers


3 responses to “(RAID0, RAID1) on Ubuntu, (RAID1+0) on CentOS: Concepts and Implementation”

  1. Be mindful of hard errors which are the reason that RAID no longer lives up to its original promise. If one of the drives fails, then during the rebuild if you get an error – the entire array will die.

    http://www.zdnet.com/article/why-raid-5-stops-working-in-2009/

    SATA drives are commonly specified with an unrecoverable read error rate (URE) of 10^14. Which means that once every 12.5 terabytes, the disk will not be able to read a sector back to you.

    http://www.lucidti.com/zfs-checksums-add-reliability-to-nas-storage

  2. Surprised by that.. the entire array can die even though the parity check is supposed to protect the whole storage.
    Thanx @Adam

  3. Pingback: Failover with Heartbeat, DRBD, HTTPD, et al | TifosiLinux
