# How to Create RAID Arrays Using mdadm: Complete Guide to Software RAID Setup

## Table of Contents
1. [Introduction](#introduction)
2. [Prerequisites](#prerequisites)
3. [Understanding RAID and mdadm](#understanding-raid-and-mdadm)
4. [Preparing Your System](#preparing-your-system)
5. [Creating RAID Arrays Step-by-Step](#creating-raid-arrays-step-by-step)
6. [Practical Examples](#practical-examples)
7. [Monitoring and Management](#monitoring-and-management)
8. [Troubleshooting Common Issues](#troubleshooting-common-issues)
9. [Best Practices](#best-practices)
10. [Advanced Configuration](#advanced-configuration)
11. [Conclusion](#conclusion)
## Introduction
RAID (Redundant Array of Independent Disks) technology provides data redundancy, improved performance, or both by combining multiple physical drives into logical units. The `mdadm` (Multiple Device Administrator) utility is Linux's primary tool for creating and managing software RAID arrays, offering enterprise-level storage capabilities without requiring expensive hardware RAID controllers.
This comprehensive guide will teach you how to create, configure, and manage RAID arrays using mdadm, with a specific focus on the command `mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[ab]1`. You'll learn everything from basic RAID concepts to advanced troubleshooting techniques, enabling you to implement robust storage solutions for your Linux systems.
## Prerequisites

Before proceeding with RAID creation, ensure you meet the following requirements:

### System Requirements
- Linux system with root access or sudo privileges
- At least two physical drives of similar size
- mdadm utility installed (available in most Linux distributions)
- Basic understanding of Linux command line operations
- Backup of important data (RAID setup will destroy existing data on drives)
### Software Requirements
Install mdadm on your system:
Ubuntu/Debian:
```bash
sudo apt update
sudo apt install mdadm
```
CentOS/RHEL/Fedora:
```bash
sudo yum install mdadm # CentOS/RHEL 7
sudo dnf install mdadm # Fedora/RHEL 8+
```
### Hardware Considerations
- Drives should be of similar size and performance characteristics
- Ensure adequate power supply for multiple drives
- Verify SATA/SAS connections are secure
- Consider drive failure rates and manufacturer reliability
## Understanding RAID and mdadm

### RAID Levels Overview
RAID 0 (Striping):
- Combines drives for increased performance
- No redundancy - single drive failure destroys array
- Total capacity equals sum of all drives
RAID 1 (Mirroring):
- Creates exact copies across drives
- Provides redundancy - can survive single drive failure
- Total capacity equals smallest drive size
- Example: `mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[ab]1`
RAID 5 (Striping with Parity):
- Requires minimum 3 drives
- Can survive single drive failure
- Efficient use of storage space
RAID 6 (Striping with Double Parity):
- Requires minimum 4 drives
- Can survive two simultaneous drive failures
- Lower write performance due to double parity calculation
RAID 10 (Mirrored Stripes):
- Combines RAID 1 and RAID 0
- Requires minimum 4 drives
- High performance and redundancy
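As a rough rule of thumb, the usable capacity of each level can be computed from the drive count and (equal) drive size. A minimal sketch of the arithmetic; the `raid_capacity` helper is illustrative only, not part of mdadm:

```shell
#!/usr/bin/env bash
# Usable capacity (in TB) of an array of N equal-size drives.
# Usage: raid_capacity LEVEL NUM_DRIVES DRIVE_TB
raid_capacity() {
    local level=$1 n=$2 size=$3
    case $level in
        0)  echo $(( n * size )) ;;        # striping: sum of all drives
        1)  echo "$size" ;;                # mirroring: one drive's worth
        5)  echo $(( (n - 1) * size )) ;;  # one drive's worth of parity
        6)  echo $(( (n - 2) * size )) ;;  # two drives' worth of parity
        10) echo $(( n / 2 * size )) ;;    # mirrored pairs
        *)  echo "unknown level" >&2; return 1 ;;
    esac
}

raid_capacity 5 4 2   # four 2 TB drives in RAID 5 -> prints 6
```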
### mdadm Command Structure
The basic mdadm syntax follows this pattern:
```bash
mdadm [mode] [options] [array] [devices]
```
Key modes include:
- `--create`: Create new array
- `--assemble`: Assemble existing array
- `--manage`: Manage existing array
- `--monitor`: Monitor arrays for issues
- `--examine`: Examine individual devices
## Preparing Your System

### Identifying Available Drives
First, identify available drives using multiple methods:
```bash
# List all block devices
lsblk

# Show detailed disk information
sudo fdisk -l

# Display drive information with sizes
sudo parted -l

# Check current RAID status
cat /proc/mdstat
```
### Partitioning Drives
Create partitions on your drives before adding them to RAID arrays:
```bash
# Create partition table (GPT recommended for drives >2TB)
sudo parted /dev/sda mklabel gpt
sudo parted /dev/sdb mklabel gpt

# Create partitions
sudo parted /dev/sda mkpart primary 0% 100%
sudo parted /dev/sdb mkpart primary 0% 100%

# Set RAID flag
sudo parted /dev/sda set 1 raid on
sudo parted /dev/sdb set 1 raid on
```
### Verifying Partition Alignment
Ensure partitions are properly aligned for optimal performance:
```bash
# Check partition alignment
sudo parted /dev/sda align-check optimal 1
sudo parted /dev/sdb align-check optimal 1
```
## Creating RAID Arrays Step-by-Step

### Basic RAID 1 Creation
The command `mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[ab]1` creates a RAID 1 array. Let's break it down:
```bash
# Complete command, split across lines for clarity
sudo mdadm --create /dev/md0 \
    --level=1 \
    --raid-devices=2 \
    /dev/sda1 /dev/sdb1
```
Parameter Explanation:
- `--create`: Creates new RAID array
- `/dev/md0`: Array device name
- `--level=1`: Specifies RAID 1 (mirroring)
- `--raid-devices=2`: Number of active devices
- `/dev/sda1 /dev/sdb1`: Physical partitions to include
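After creation, the same parameters can be read back from `mdadm --detail`. A small parsing sketch, run here against sample output text rather than a live array (on a real system, pipe `sudo mdadm --detail /dev/md0` instead):

```shell
#!/usr/bin/env bash
# Sample lines from `mdadm --detail /dev/md0` output
detail='Raid Level : raid1
Raid Devices : 2'

# Extract the value after the " : " separator for each field
level=$(echo "$detail" | awk -F' : ' '/Raid Level/ {print $2}')
devices=$(echo "$detail" | awk -F' : ' '/Raid Devices/ {print $2}')

echo "$level $devices"   # prints: raid1 2
```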
### Detailed Creation Process
Step 1: Verify Drive Status
```bash
# Ensure drives are not mounted
sudo umount /dev/sda1 /dev/sdb1 2>/dev/null

# Check for existing RAID signatures
sudo mdadm --examine /dev/sda1 /dev/sdb1
```
Step 2: Create the Array
```bash
sudo mdadm --create /dev/md0 \
    --level=1 \
    --raid-devices=2 \
    --metadata=1.2 \
    /dev/sda1 /dev/sdb1
```
Step 3: Monitor Creation Progress
```bash
# Watch synchronization progress
watch cat /proc/mdstat

# Detailed array information
sudo mdadm --detail /dev/md0
```
### Alternative Creation Methods
Using Wildcards:
```bash
# Original command, using a shell wildcard that expands to /dev/sda1 /dev/sdb1
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[ab]1
```
With Spare Drives:
```bash
# Include a hot spare
sudo mdadm --create /dev/md0 \
    --level=1 \
    --raid-devices=2 \
    --spare-devices=1 \
    /dev/sda1 /dev/sdb1 /dev/sdc1
```
## Practical Examples

### Example 1: Basic RAID 1 Setup
Complete walkthrough for setting up RAID 1 for data storage:
```bash
# 1. Prepare drives
sudo parted /dev/sda mklabel gpt
sudo parted /dev/sda mkpart primary 0% 100%
sudo parted /dev/sda set 1 raid on
sudo parted /dev/sdb mklabel gpt
sudo parted /dev/sdb mkpart primary 0% 100%
sudo parted /dev/sdb set 1 raid on

# 2. Create RAID array
sudo mdadm --create /dev/md0 \
    --level=1 \
    --raid-devices=2 \
    /dev/sda1 /dev/sdb1

# 3. Create filesystem
sudo mkfs.ext4 /dev/md0

# 4. Mount array
sudo mkdir /mnt/raid1
sudo mount /dev/md0 /mnt/raid1

# 5. Add to fstab for persistent mounting
echo '/dev/md0 /mnt/raid1 ext4 defaults 0 2' | sudo tee -a /etc/fstab
```
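Because `/dev/mdX` names can change between boots, referencing the array by filesystem UUID in `/etc/fstab` is more robust than the device path. A sketch of building such an entry; the UUID shown is a placeholder, and on a real system you would obtain it with `sudo blkid -s UUID -o value /dev/md0`:

```shell
#!/usr/bin/env bash
# Placeholder UUID for illustration; use blkid on a real system
uuid="0a1b2c3d-0000-0000-0000-000000000000"

# Assemble the fstab entry from its fields
entry="UUID=$uuid /mnt/raid1 ext4 defaults 0 2"
echo "$entry"

# On a real system, append it and verify it parses before rebooting:
#   echo "$entry" | sudo tee -a /etc/fstab
#   sudo mount -a
```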
### Example 2: RAID 5 Configuration
Creating RAID 5 with three drives:
```bash
# Create RAID 5 array
sudo mdadm --create /dev/md1 \
    --level=5 \
    --raid-devices=3 \
    /dev/sdc1 /dev/sdd1 /dev/sde1

# Monitor build progress
watch -n 1 cat /proc/mdstat
```
### Example 3: RAID 10 Setup
Setting up RAID 10 with four drives:
```bash
# Create RAID 10 array
sudo mdadm --create /dev/md2 \
    --level=10 \
    --raid-devices=4 \
    /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
```
## Monitoring and Management

### Saving RAID Configuration
Create persistent configuration to survive reboots:
```bash
# Append current array definitions to mdadm.conf
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# Update initramfs (Ubuntu/Debian)
sudo update-initramfs -u

# Update initrd (CentOS/RHEL)
sudo dracut --force
```
### Monitoring Commands
Real-time Monitoring:
```bash
# Watch array status
watch -n 1 cat /proc/mdstat

# Detailed array information
sudo mdadm --detail /dev/md0

# Check all arrays
sudo mdadm --detail --scan
```
Health Checks:
```bash
# Start an array integrity check (redirection into /sys needs root, hence tee)
echo check | sudo tee /sys/block/md0/md/sync_action

# View check progress
cat /sys/block/md0/md/sync_completed

# Check for mismatched blocks after the check completes
cat /sys/block/md0/md/mismatch_cnt
```
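The checks above can be scripted. A minimal sketch that flags degraded arrays by looking for a `_` inside the member-status brackets (e.g. `[_U]`) in `/proc/mdstat`, demonstrated here against sample text; the `degraded_arrays` helper is hypothetical:

```shell
#!/usr/bin/env bash
# Reads /proc/mdstat-formatted text on stdin, prints names of degraded arrays
degraded_arrays() {
    awk '/^md/ { name=$1 } /\[[U_]*_[U_]*\]/ { print name }'
}

# Sample /proc/mdstat output (md1 has a missing member, shown as [_U])
sample='md0 : active raid1 sdb1[1] sda1[0]
      1048512 blocks [2/2] [UU]
md1 : active raid1 sdd1[1]
      1048512 blocks [2/1] [_U]'

echo "$sample" | degraded_arrays   # prints: md1
# On a real system: degraded_arrays < /proc/mdstat
```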
### Managing Array Members
Adding Drives:
```bash
# Add a spare drive
sudo mdadm --manage /dev/md0 --add /dev/sdc1

# Grow the array (RAID 1 with an additional mirror)
sudo mdadm --grow /dev/md0 --raid-devices=3
```
Removing Drives:
```bash
# Mark drive as failed
sudo mdadm --manage /dev/md0 --fail /dev/sda1

# Remove failed drive
sudo mdadm --manage /dev/md0 --remove /dev/sda1
```
Replacing Drives:
```bash
# Complete replacement process
sudo mdadm --manage /dev/md0 --fail /dev/sda1
sudo mdadm --manage /dev/md0 --remove /dev/sda1

# Physically replace the drive, then re-add it
sudo mdadm --manage /dev/md0 --add /dev/sda1
```
## Troubleshooting Common Issues

### Array Won't Start
Symptoms: Array doesn't assemble at boot
Solutions:
```bash
# Force assembly
sudo mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1

# Check superblock information
sudo mdadm --examine /dev/sda1 /dev/sdb1

# Regenerate the configuration (note: this overwrites mdadm.conf)
sudo mdadm --detail --scan | sudo tee /etc/mdadm/mdadm.conf
```
### Drive Failure Scenarios
Detecting Failed Drives:
```bash
# Check array status
cat /proc/mdstat

# Detailed failure information
sudo mdadm --detail /dev/md0 | grep -E "(State|Failed|Spare)"
```
Recovery Process:
```bash
# 1. Identify the failed drive
sudo mdadm --detail /dev/md0

# 2. Remove all failed drives ("failed" is an mdadm keyword)
sudo mdadm --manage /dev/md0 --remove failed

# 3. Add the replacement drive
sudo mdadm --manage /dev/md0 --add /dev/sdx1

# 4. Monitor the rebuild
watch cat /proc/mdstat
```
### Performance Issues
Optimization Settings:
```bash
# Increase stripe cache size (RAID 5/6 arrays only; writing to /sys needs root)
echo 8192 | sudo tee /sys/block/md0/md/stripe_cache_size

# Adjust read-ahead settings
sudo blockdev --setra 65536 /dev/md0

# Set the I/O scheduler on the member drives (md devices themselves have no
# scheduler; on newer kernels the name is mq-deadline)
echo deadline | sudo tee /sys/block/sda/queue/scheduler
```
### Metadata Corruption
Recovery Steps:
```bash
# Backup existing metadata
sudo mdadm --examine /dev/sda1 > /tmp/md0_sda1_backup.txt

# Attempt repair
sudo mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1

# Last resort: recreate the array in place (risks DATA LOSS!)
sudo mdadm --create /dev/md0 --assume-clean \
    --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
```
## Best Practices

### Design Considerations
Drive Selection:
- Use drives from different manufacturers to avoid batch failures
- Match drive sizes closely to avoid wasted space
- Consider drive RPM and interface speeds for performance
- Plan for spare drives in production environments
Partition Strategy:
```bash
# Leave a small gap at the end to accommodate slightly smaller replacement drives
sudo parted /dev/sda mkpart primary 1MiB 99%

# Use GPT for drives larger than 2TB
sudo parted /dev/sda mklabel gpt
```
### Maintenance Procedures
Regular Health Checks:
```bash
# Weekly integrity check (writing to /sys needs root, hence tee)
echo check | sudo tee /sys/block/md0/md/sync_action

# Monitor SMART data
sudo smartctl -a /dev/sda
sudo smartctl -a /dev/sdb

# Check system logs (the monitor unit may be named mdmonitor on some distributions)
sudo journalctl -u mdadm --since "1 week ago"
```
Backup Strategies:
```bash
# Backup mdadm configuration
sudo cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.backup

# Document the array layout (/root is not writable by your user, hence tee)
sudo mdadm --detail /dev/md0 | sudo tee /root/raid_config_backup.txt
```
### Performance Optimization
Filesystem Considerations:
```bash
# Tune ext4 for a striped array (stride/stripe-width depend on chunk size
# and the number of data-bearing drives; see below)
sudo mkfs.ext4 -E stride=128,stripe-width=256 /dev/md0

# Mount with reduced metadata overhead (data=writeback trades crash safety
# for write speed; omit it if in doubt)
sudo mount -o defaults,noatime,data=writeback /dev/md0 /mnt/raid
```
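The stride and stripe-width values follow from the array geometry: stride is the chunk size divided by the filesystem block size, and stripe-width is stride times the number of data-bearing drives. A small sketch of the arithmetic, assuming a 512 KiB chunk, 4 KiB blocks, and two data drives (e.g. a 3-drive RAID 5), which reproduces the values used above:

```shell
#!/usr/bin/env bash
# Assumed geometry (adjust for your array)
chunk_kib=512    # mdadm chunk size per drive
block_kib=4      # ext4 block size
data_drives=2    # drives carrying data, excluding parity/mirrors

stride=$(( chunk_kib / block_kib ))
stripe_width=$(( stride * data_drives ))

echo "stride=$stride stripe-width=$stripe_width"   # stride=128 stripe-width=256
# Matching command:
#   sudo mkfs.ext4 -E stride=$stride,stripe-width=$stripe_width /dev/md0
```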
System Tuning:
```bash
# Increase dirty ratios for better write performance (writing to /proc needs root)
echo 15 | sudo tee /proc/sys/vm/dirty_ratio
echo 5 | sudo tee /proc/sys/vm/dirty_background_ratio
```
## Advanced Configuration

### Email Notifications
Configure automatic email alerts for RAID events:
```bash
# Edit mdadm.conf
sudo nano /etc/mdadm/mdadm.conf

# Add these lines to the file:
MAILADDR admin@example.com
MAILFROM raid-monitor@server.local
```
Test Email Notifications:
```bash
# Send a test message for each array, then exit (--oneshot prevents the
# monitor from staying in the foreground)
sudo mdadm --monitor --test --oneshot /dev/md0
```
### Custom Scripts
Failure Response Script:
```bash
#!/bin/bash
# /usr/local/bin/raid-alert.sh
# mdadm invokes the PROGRAM from mdadm.conf with the event name first,
# then the array device, then (for some events) the member device.

EVENT=$1
ARRAY=$2
DEVICE=$3

case $EVENT in
    "Fail")
        echo "Drive $DEVICE failed in array $ARRAY" | \
            mail -s "RAID Failure Alert" admin@example.com
        ;;
    "DegradedArray")
        echo "Array $ARRAY is degraded" | \
            mail -s "RAID Degraded Alert" admin@example.com
        ;;
esac
```
### Integration with System Services
Systemd Service for Monitoring:
```bash
# Create the service file
sudo nano /etc/systemd/system/mdadm-monitor.service
```

Unit file contents:

```ini
[Unit]
Description=RAID Monitor
After=multi-user.target

[Service]
Type=forking
ExecStart=/sbin/mdadm --monitor --daemonise --mail=admin@example.com --delay=1800 /dev/md0
Restart=always

[Install]
WantedBy=multi-user.target
```
Enable Service:
```bash
sudo systemctl enable mdadm-monitor.service
sudo systemctl start mdadm-monitor.service
```
### Benchmark and Testing
Performance Testing:
```bash
# Sequential read test
sudo hdparm -t /dev/md0

# Random I/O test with fio
# WARNING: writing to the raw device destroys any filesystem on /dev/md0
sudo fio --name=random-rw --ioengine=libaio --iodepth=4 \
    --rw=randrw --bs=4k --direct=1 --size=1G --numjobs=1 \
    --runtime=60 --group_reporting --filename=/dev/md0
```
Failure Testing:
```bash
# Simulate a drive failure (testing only! writing to /sys needs root)
echo offline | sudo tee /sys/block/sda/device/state
# Alternatively, mark a member as failed via mdadm:
sudo mdadm --manage /dev/md0 --fail /dev/sda1

# Verify the array continues operating
cat /proc/mdstat
sudo mdadm --detail /dev/md0
```
## Conclusion
Creating and managing RAID arrays with mdadm provides powerful storage solutions for Linux systems. The command `mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[ab]1` demonstrates the straightforward approach to establishing redundant storage, but successful RAID implementation requires understanding of the underlying concepts, proper planning, and ongoing maintenance.
Key takeaways from this guide include:
- Proper Planning: Always backup data before RAID creation and carefully select appropriate RAID levels for your needs
- Configuration Management: Maintain proper mdadm.conf files and ensure arrays assemble correctly at boot
- Monitoring: Implement regular health checks and automated alerting to detect issues early
- Maintenance: Perform regular integrity checks and keep spare drives available for quick replacement
- Documentation: Maintain records of array configurations and replacement procedures
RAID technology significantly improves data availability and can enhance performance, but it's not a substitute for proper backup strategies. Regular backups remain essential even with RAID protection, as RAID cannot protect against data corruption, accidental deletion, or catastrophic failures affecting multiple drives simultaneously.
For production environments, consider implementing comprehensive monitoring solutions, maintaining spare hardware, and developing clear procedures for drive replacement and array recovery. With proper implementation and maintenance, mdadm-based RAID arrays provide reliable, cost-effective storage solutions that can serve your systems for years to come.
Remember that RAID setup is a critical system operation that can result in data loss if performed incorrectly. Always test procedures in non-production environments first and ensure you have verified backups before making changes to existing storage systems.