RAID 0, RAID 1, RAID 5 Explained with Diagrams:
RAID, which stands for Redundant Array of Independent Disks (originally Redundant Array of Inexpensive Disks), is a storage technology that provides increased reliability and performance through redundancy. This is achieved by combining multiple disk drives into one logical unit, where data is distributed across the drives in one of several ways called "RAID levels". RAID can be handled either by operating-system software or by a purpose-built RAID controller card, the latter requiring no configuration in the operating system at all.
In most situations you will be using one of the following RAID levels:
- RAID 0
- RAID 1
- RAID 5
In the figure, A, B, C, D, E and F represent data blocks, and p1, p2 and p3 represent parity.
RAID level 0 – Striping
In a RAID 0 system, data is split up into blocks that get written across all the drives in the array. By using multiple disks (at least 2) at the same time, this offers superior I/O performance. Performance can be enhanced further by using multiple controllers, ideally one controller per disk.
Advantages
- RAID 0 offers great performance, both in read and write operations. There is no overhead caused by parity controls.
- All storage capacity is used, there is no disk overhead.
- The technology is easy to implement.
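The round-robin distribution of blocks described above can be sketched with a few lines of shell (a toy illustration only; the block names and 2-disk layout are made up for this example):

```shell
# hypothetical sketch: round-robin striping of blocks A-F across 2 disks
map=""
i=0
for block in A B C D E F; do
  disk=$(( i % 2 + 1 ))        # alternate between disk 1 and disk 2
  map="$map$block$disk "
  echo "block $block -> disk $disk"
  i=$(( i + 1 ))
done
```

Because consecutive blocks land on different disks, sequential reads and writes can proceed on both drives in parallel, which is where the RAID 0 speed advantage comes from.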
Disadvantages
RAID 0 is not fault-tolerant. If one disk fails, all data in the RAID 0 array are lost. It should not be used on mission-critical systems.
Ideal use
RAID 0 is ideal for non-critical storage of data that have to be read/written at a high speed, such as on a Photoshop image retouching station.
RAID level 1 – Mirroring
Data are stored twice by writing them to both the data disk (or set of data disks) and a mirror disk (or set of disks). If a disk fails, the controller uses either the data drive or the mirror drive for data recovery and continues operation. You need at least 2 disks for a RAID 1 array.
Advantages
- RAID 1 offers excellent read speed and a write-speed that is comparable to that of a single disk.
- In case a disk fails, data do not have to be rebuilt; they just have to be copied to the replacement disk.
- RAID 1 is a very simple technology.
Disadvantages
- The main disadvantage is that the effective storage capacity is only half of the total disk capacity because all data get written twice.
- Software RAID 1 solutions do not always allow a hot swap of a failed disk (meaning it cannot be replaced while the server keeps running). Ideally a hardware controller is used.
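The capacity trade-off between the levels can be shown with a bit of shell arithmetic (the 500 GB disk size and disk counts are made-up example values, not from the setup below):

```shell
# hypothetical example: usable capacity per RAID level with 500 GB disks
size=500
raid0=$(( 2 * size ))          # RAID 0: all capacity of both disks usable
raid1=$(( size ))              # RAID 1: half of the 2 disks (data is written twice)
raid5=$(( (3 - 1) * size ))    # RAID 5: 3 disks, one disk's worth goes to parity
echo "RAID 0 (2 disks): ${raid0} GB usable"
echo "RAID 1 (2 disks): ${raid1} GB usable"
echo "RAID 5 (3 disks): ${raid5} GB usable"
```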
Ideal use
RAID-1 is ideal for mission-critical storage, for instance for accounting systems. It is also suitable for small servers in which only two disks will be used.
RAID level 3
On RAID 3 systems, data blocks are subdivided (striped) and written in parallel on two or more drives. An additional drive stores parity information. You need at least 3 disks for a RAID 3 array.
Advantages
- RAID-3 provides high throughput (both read and write) for large data transfers.
- Disk failures do not significantly slow down throughput.
Disadvantages
- This technology is fairly complex and too resource intensive to be done in software.
- Performance is slower for random, small I/O operations.
Ideal use
RAID 3 is not that common in prepress.
RAID level 5
RAID 5 is the most common secure RAID level. It is similar to RAID-3 except that data are transferred to disks by independent read and write operations (not in parallel). The data chunks that are written are also larger. Instead of a dedicated parity disk, parity information is spread across all the drives. You need at least 3 disks for a RAID 5 array.
A RAID 5 array can withstand a single disk failure without losing data or access to data. Although RAID 5 can be achieved in software, a hardware controller is recommended. Often extra cache memory is used on these controllers to improve the write performance.
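RAID 5's single-failure tolerance rests on XOR parity: the parity block is the XOR of the data blocks, so any one missing block can be recomputed from the others. A toy sketch with two data bytes (the byte values are invented for illustration):

```shell
# toy sketch: XOR parity lets one lost block be reconstructed
a=$(( 0xA5 )); b=$(( 0x3C ))       # two data "blocks" (single bytes here)
parity=$(( a ^ b ))                # parity block, spread across the drives in RAID 5
recovered=$(( parity ^ b ))        # suppose the drive holding 'a' failed
printf 'parity=0x%02X recovered=0x%02X\n' "$parity" "$recovered"
```

Since `recovered` equals the original `a`, the array can keep serving data with one drive missing; this same XOR is also why every write must update parity, which is the source of the write penalty mentioned below.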
Advantages
- Read data transactions are very fast, while write data transactions are somewhat slower (due to the parity that has to be calculated).
Disadvantages
- Disk failures have an effect on throughput, although this is still acceptable.
- Like RAID 3, this is complex technology.
Ideal use
RAID 5 is a good all-round system that combines efficient storage with excellent security and decent performance. It is ideal for file and application servers.
RAID level 10 – Combining RAID 0 & RAID 1
RAID 10 combines the advantages (and disadvantages) of RAID 0 and RAID 1 in one single system. It provides security by mirroring all data on a secondary set of disks (disks 3 and 4 in the drawing below) while using striping across each set of disks to speed up data transfers.
Necessary commands for RAID:
[root@localhost ~]# parted /dev/hda
GNU Parted 1.8.1
Using /dev/hda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print free
Model: SAMSUNG SP0802N (ide)
Disk /dev/hda: 80.1GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 32.3kB 22.7GB 22.7GB primary ntfs boot
2 22.7GB 80.1GB 57.3GB extended
5 22.7GB 54.3GB 31.6GB logical ntfs
6 54.3GB 70.0GB 15.7GB logical ext3
7 70.0GB 70.8GB 732MB logical linux-swap
8 70.8GB 71.0GB 206MB logical ext2
71.0GB 80.1GB 9089MB Free Space
[root@localhost ~]# fdisk -l
Disk /dev/hda: 80.0 GB, 80060424192 bytes
255 heads, 63 sectors/track, 9733 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 2764 22201798+ 7 HPFS/NTFS
/dev/hda2 2765 9733 55978492+ 5 Extended
/dev/hda5 2765 6602 30828703+ 7 HPFS/NTFS
/dev/hda6 6603 8514 15358108+ 83 Linux
/dev/hda7 8515 8603 714861 82 Linux swap / Solaris
/dev/hda8 8604 8628 200781 83 Linux
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/hda6 15G 7.1G 6.5G 53% /
tmpfs 124M 0 124M 0% /dev/shm
/dev/hda8 190M 1.6M 179M 1% /saifulpartion
RAID Configuration for Linux :
[root@localhost ~]# fdisk /dev/hda
The number of cylinders for this disk is set to 9733.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): n (create a new partition)
Command action
l logical (5 or over)
p primary partition (1-4)
l (for a logical partition)
First cylinder (8629-9733, default 8629): (press Enter to accept the default)
Using default value 8629
Last cylinder or +size or +sizeM or +sizeK (8629-9733, default 9733): +500M (partition size in MB)
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (8691-9733, default 8691):
Using default value 8691
Last cylinder or +size or +sizeM or +sizeK (8691-9733, default 9733): +500M
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (8753-9733, default 8753):
Using default value 8753
Last cylinder or +size or +sizeM or +sizeK (8753-9733, default 9733): +500M
Command (m for help): w (write the partition table and exit)
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@localhost ~]# partprobe /dev/hda (inform the kernel of the partition table changes without rebooting)
[root@localhost ~]# fdisk -l (verify the newly created partitions)
Disk /dev/hda: 80.0 GB, 80060424192 bytes
255 heads, 63 sectors/track, 9733 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 2764 22201798+ 7 HPFS/NTFS
/dev/hda2 2765 9733 55978492+ 5 Extended
/dev/hda5 2765 6602 30828703+ 7 HPFS/NTFS
/dev/hda6 6603 8514 15358108+ 83 Linux
/dev/hda7 8515 8603 714861 82 Linux swap / Solaris
/dev/hda8 8604 8628 200781 83 Linux
/dev/hda9 8629 8690 497983+ 83 Linux
/dev/hda10 8691 8752 497983+ 83 Linux
/dev/hda11 8753 8814 497983+ 83 Linux
Command (m for help): L (list the known partition type IDs)
0 Empty 1e Hidden W95 FAT1 80 Old Minix bf Solaris
1 FAT12 24 NEC DOS 81 Minix / old Lin c1 DRDOS/sec (FAT-
2 XENIX root 39 Plan 9 82 Linux swap / So c4 DRDOS/sec (FAT-
3 XENIX usr 3c PartitionMagic 83 Linux c6 DRDOS/sec (FAT-
4 FAT16 <32M 40 Venix 80286 84 OS/2 hidden C: c7 Syrinx
5 Extended 41 PPC PReP Boot 85 Linux extended da Non-FS data
6 FAT16 42 SFS 86 NTFS volume set db CP/M / CTOS / .
7 HPFS/NTFS 4d QNX4.x 87 NTFS volume set de Dell Utility
8 AIX 4e QNX4.x 2nd part 88 Linux plaintext df BootIt
9 AIX bootable 4f QNX4.x 3rd part 8e Linux LVM e1 DOS access
a OS/2 Boot Manag 50 OnTrack DM 93 Amoeba e3 DOS R/O
b W95 FAT32 51 OnTrack DM6 Aux 94 Amoeba BBT e4 SpeedStor
c W95 FAT32 (LBA) 52 CP/M 9f BSD/OS eb BeOS fs
e W95 FAT16 (LBA) 53 OnTrack DM6 Aux a0 IBM Thinkpad hi ee EFI GPT
f W95 Ext'd (LBA) 54 OnTrackDM6 a5 FreeBSD ef EFI (FAT-12/16/
10 OPUS 55 EZ-Drive a6 OpenBSD f0 Linux/PA-RISC b
11 Hidden FAT12 56 Golden Bow a7 NeXTSTEP f1 SpeedStor
12 Compaq diagnost 5c Priam Edisk a8 Darwin UFS f4 SpeedStor
14 Hidden FAT16 <3 61 SpeedStor a9 NetBSD f2 DOS secondary
16 Hidden FAT16 63 GNU HURD or Sys ab Darwin boot fb VMware VMFS
17 Hidden HPFS/NTF 64 Novell Netware b7 BSDI fs fc VMware VMKCORE
18 AST SmartSleep 65 Novell Netware b8 BSDI swap fd Linux raid auto
1b Hidden W95 FAT3 70 DiskSecure Mult bb Boot Wizard hid fe LANstep
1c Hidden W95 FAT3 75 PC/IX be Solaris boot ff BBT
[root@localhost ~]# fdisk /dev/hda
The number of cylinders for this disk is set to 9733.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): t (change a partition's type ID)
Partition number (1-11): 9 (the partition number to change)
Hex code (type L to list codes): fd (fd is the ID for Linux raid autodetect)
Changed system type of partition 9 to fd (Linux raid autodetect)
Command (m for help): t
Partition number (1-11): 10
Hex code (type L to list codes): fd
Changed system type of partition 10 to fd (Linux raid autodetect)
Command (m for help): t
Partition number (1-11): 11
Hex code (type L to list codes): fd
Changed system type of partition 11 to fd (Linux raid autodetect)
Command (m for help): w (write the partition table and exit)
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@localhost ~]# fdisk -l (verify the new partition IDs)
Disk /dev/hda: 80.0 GB, 80060424192 bytes
255 heads, 63 sectors/track, 9733 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 2764 22201798+ 7 HPFS/NTFS
/dev/hda2 2765 9733 55978492+ 5 Extended
/dev/hda5 2765 6602 30828703+ 7 HPFS/NTFS
/dev/hda6 6603 8514 15358108+ 83 Linux
/dev/hda7 8515 8603 714861 82 Linux swap / Solaris
/dev/hda8 8604 8628 200781 83 Linux
/dev/hda9 8629 8690 497983+ fd Linux raid autodetect
/dev/hda10 8691 8752 497983+ fd Linux raid autodetect
/dev/hda11 8753 8814 497983+ fd Linux raid autodetect
[root@localhost ~]# yum install mdadm (install the mdadm package)
For RAID 5 (-C creates the array, -l sets the RAID level, -n the number of member devices):
[root@localhost ~]# mdadm -C /dev/md0 -l 5 -n 3 /dev/hda{9,10,11}
For RAID 0:
[root@localhost ~]# mdadm -C /dev/md1 -l 0 -n 2 /dev/hda{9,10}
For RAID 1:
[root@localhost ~]# mdadm -C /dev/md2 -l 1 -n 2 /dev/hda{9,10}
mdadm: array /dev/md0 started.
[root@localhost ~]# mkfs.ext3 /dev/md0 (create an ext3 filesystem on the array)
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
124672 inodes, 248928 blocks
12446 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=255852544
8 block groups
32768 blocks per group, 32768 fragments per group
15584 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@localhost ~]# mkdir /raid5
[root@localhost ~]# mount /dev/md0 /raid5/
[root@localhost ~]# vim /etc/fstab
/dev/md0 /raid5 ext3 defaults 0 0
[root@localhost ~]# mdadm -D /dev/md0 (show details of the RAID array)
/dev/md0:
        Version : 0.90
  Creation Time : Mon Oct 24 00:03:36 2011
     Raid Level : raid5
     Array Size : 995712 (972.54 MiB 1019.61 MB)
  Used Dev Size : 497856 (486.27 MiB 509.80 MB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Oct 24 00:05:28 2011
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 58290efa:377bd1aa:c4e791dc:92062b3c
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0       3        9        0      active sync   /dev/hda9
       1       3       10        1      active sync   /dev/hda10
       2       3       11        2      active sync   /dev/hda11
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 hda11[2] hda10[1] hda9[0]
      995712 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
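To have the array assembled automatically at boot, its definition is usually recorded in /etc/mdadm.conf, e.g. with mdadm --detail --scan >> /etc/mdadm.conf. This produces a configuration line like the following (the UUID here is taken from the mdadm -D output above; yours will differ):

```
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=58290efa:377bd1aa:c4e791dc:92062b3c
```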