# How to Monitor File Integrity in Linux
File integrity monitoring (FIM) is a critical security practice that helps detect unauthorized changes to files and directories on Linux systems. This comprehensive guide will walk you through various methods and tools for implementing robust file integrity monitoring, from basic checksums to advanced enterprise-grade solutions.
## Table of Contents
1. [Introduction to File Integrity Monitoring](#introduction)
2. [Prerequisites and Requirements](#prerequisites)
3. [Understanding File Integrity Concepts](#concepts)
4. [Native Linux Tools for File Integrity](#native-tools)
5. [AIDE (Advanced Intrusion Detection Environment)](#aide)
6. [Tripwire Implementation](#tripwire)
7. [OSSEC File Integrity Monitoring](#ossec)
8. [Custom Scripts and Automation](#custom-scripts)
9. [Practical Examples and Use Cases](#examples)
10. [Troubleshooting Common Issues](#troubleshooting)
11. [Best Practices and Security Considerations](#best-practices)
12. [Conclusion and Next Steps](#conclusion)
## Introduction to File Integrity Monitoring {#introduction}
File integrity monitoring is the process of validating operating system and application files by comparing their current state against a known-good baseline. This security technique helps administrators detect unauthorized changes, potential malware infections, configuration drift, and compliance violations.
In this guide, you'll learn how to implement file integrity monitoring with tools ranging from simple command-line utilities to sophisticated enterprise solutions. Whether you're securing a single server or managing a large infrastructure, you'll find what you need to protect your Linux systems effectively.
## Prerequisites and Requirements {#prerequisites}
Before implementing file integrity monitoring, ensure you have:
### System Requirements
- Linux distribution (Ubuntu, CentOS, RHEL, Debian, or similar)
- Root or sudo access for installation and configuration
- Sufficient disk space for storing baselines and logs (typically 1-5% of monitored data)
- Basic understanding of Linux file permissions and system administration
### Knowledge Prerequisites
- Familiarity with Linux command line
- Understanding of file systems and permissions
- Basic knowledge of security concepts
- Experience with text editors (vim, nano, or similar)
### Network Requirements
- Internet access for downloading tools and updates
- Email server access (optional, for notifications)
- Network connectivity for centralized logging (if applicable)
## Understanding File Integrity Concepts {#concepts}
### Hash Functions and Checksums
File integrity monitoring relies heavily on cryptographic hash functions to create unique fingerprints of files. Common hash algorithms include:
- MD5: Fast but cryptographically weak, suitable for detecting accidental changes
- SHA-1: More secure than MD5 but deprecated for security applications
- SHA-256: Current standard, provides excellent security and performance balance
- SHA-512: Maximum security for highly sensitive environments
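The difference between these algorithms is easy to see with the standard coreutils hashing tools. The digests shown in the comments are the well-known hashes of the string `hello` (note: `printf` is used instead of `echo` so no trailing newline alters the hash):

```bash
# Compare md5sum and sha256sum output for the same input
printf 'hello' | md5sum
# 5d41402abc4b2a76b9719d911017c592  -

printf 'hello' | sha256sum
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824  -
```

Even a one-byte change to the input produces a completely different digest, which is what makes these fingerprints useful for change detection.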
### Baseline Creation
A baseline represents the known-good state of your system at a specific point in time. This includes:
- File checksums
- File permissions and ownership
- File sizes and timestamps
- Directory structures
- Extended attributes and ACLs
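As a minimal sketch of what a single baseline entry can capture, the helper below combines a checksum with `stat` attributes. The function name `baseline_record` is illustrative, not part of any tool covered later:

```bash
# Record checksum + permissions + owner/group + size + mtime for one file
baseline_record() {
    local f="$1"
    printf '%s %s\n' "$(sha256sum "$f" | cut -d' ' -f1)" \
                     "$(stat -c '%a %U %G %s %Y' "$f")"
}

# Demonstrate against a temporary file so the sketch is self-contained
tmp=$(mktemp)
printf 'hello' > "$tmp"
baseline_record "$tmp"
rm -f "$tmp"
```

A real baseline is just many such records, one per monitored file, stored somewhere an attacker cannot modify.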
### Change Detection Methods
File integrity monitoring systems use various methods to detect changes:
1. Scheduled scans: Regular comparisons against the baseline
2. Real-time monitoring: Immediate detection using kernel-level hooks
3. Event-driven checks: Triggered by specific system events
4. Hybrid approaches: Combination of multiple methods
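A scheduled scan reduces to "store checksums now, re-verify later." This self-contained sketch simulates one full cycle in a temporary directory:

```bash
# Scheduled-scan cycle in miniature: baseline, tamper, re-verify
dir=$(mktemp -d)
printf 'v1' > "$dir/app.conf"
sha256sum "$dir/app.conf" > "$dir/baseline.sha256"   # 1. create baseline

printf 'v2' > "$dir/app.conf"                        # 2. simulate a change

# 3. later scan: a non-zero exit from sha256sum -c means something changed
if ! sha256sum -c "$dir/baseline.sha256" --quiet >/dev/null 2>&1; then
    echo "change detected"
fi
rm -rf "$dir"
# prints: change detected
```

Real-time and event-driven approaches replace step 3's polling with kernel notifications (inotify), as shown later in this guide.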
## Native Linux Tools for File Integrity {#native-tools}
### Using md5sum and sha256sum
The simplest approach to file integrity monitoring uses built-in Linux utilities:
```bash
# Create checksums for important files
find /etc -type f -exec md5sum {} \; > /var/log/etc-baseline.md5

# Create SHA-256 checksums (more secure)
find /etc -type f -exec sha256sum {} \; > /var/log/etc-baseline.sha256

# Verify integrity against baseline
md5sum -c /var/log/etc-baseline.md5

# Check specific directories
sha256sum /etc/passwd /etc/shadow /etc/sudoers > /var/log/critical-files.sha256
```
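When a verification run fails, you usually want just the list of files that changed. This sketch filters `sha256sum -c` output for `FAILED` entries; the paths are temporary so the example is self-contained:

```bash
# List only the files that fail verification against a baseline
dir=$(mktemp -d)
printf 'a' > "$dir/one"
printf 'b' > "$dir/two"
sha256sum "$dir/one" "$dir/two" > "$dir/base.sha256"

printf 'changed' > "$dir/two"    # tamper with one file

# sha256sum -c prints "path: OK" or "path: FAILED"; keep only failures
sha256sum -c "$dir/base.sha256" 2>/dev/null | awk -F': ' '$2 == "FAILED" {print $1}'
rm -rf "$dir"
```

Feeding that list into an alerting mechanism is the core of every script in the rest of this guide.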
### Creating a Simple Monitoring Script
Here's a basic shell script for file integrity monitoring:
```bash
#!/bin/bash
# Simple File Integrity Monitor
# Usage: ./fim.sh [create|check]
BASELINE_DIR="/var/lib/fim"
LOG_FILE="/var/log/fim.log"
MONITOR_DIRS="/etc /usr/bin /usr/sbin"
create_baseline() {
echo "Creating baseline..." | tee -a $LOG_FILE
mkdir -p $BASELINE_DIR
for dir in $MONITOR_DIRS; do
if [ -d "$dir" ]; then
find "$dir" -type f -exec sha256sum {} \; > "$BASELINE_DIR/$(basename $dir).sha256"
echo "Baseline created for $dir" | tee -a $LOG_FILE
fi
done
}
check_integrity() {
echo "Checking file integrity..." | tee -a $LOG_FILE
for dir in $MONITOR_DIRS; do
baseline_file="$BASELINE_DIR/$(basename $dir).sha256"
if [ -f "$baseline_file" ]; then
echo "Checking $dir..." | tee -a $LOG_FILE
if ! sha256sum -c "$baseline_file" --quiet; then
echo "ALERT: Changes detected in $dir" | tee -a $LOG_FILE
# Send notification or trigger alert
fi
fi
done
}
case "$1" in
create)
create_baseline
;;
check)
check_integrity
;;
*)
echo "Usage: $0 {create|check}"
exit 1
;;
esac
```
### Using inotify for Real-time Monitoring
For real-time file monitoring, use inotify tools:
```bash
# Install inotify tools
sudo apt-get install inotify-tools   # Ubuntu/Debian
sudo yum install inotify-tools       # CentOS/RHEL

# Monitor directory for changes
inotifywait -m -r -e modify,create,delete,move /etc --format '%T %w%f %e' --timefmt '%Y-%m-%d %H:%M:%S'

# Create a monitoring script
#!/bin/bash
inotifywait -m -r -e modify,create,delete,move /etc /usr/bin /usr/sbin --format '%T %w%f %e' --timefmt '%Y-%m-%d %H:%M:%S' | while read date time file event; do
echo "$date $time: $file was $event" >> /var/log/file-changes.log
# Add alerting logic here
done
```
## AIDE (Advanced Intrusion Detection Environment) {#aide}
AIDE is a powerful, open-source file integrity monitoring tool that creates a database of file attributes and compares them against future scans.
### Installing AIDE
```bash
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install aide

# CentOS/RHEL
sudo yum install aide
# or, on newer releases
sudo dnf install aide

# Verify installation
aide --version
```
### Configuring AIDE
The main configuration file is typically located at `/etc/aide/aide.conf` or `/etc/aide.conf`:
```bash
# Basic AIDE configuration

# Define what to monitor
/bin R
/sbin R
/usr/bin R
/usr/sbin R
/etc R+a+sha256

# Custom rules
All=R+a+sha256+X
Binlib=R+sha256
ConfFiles=R+a+sha256

# Specific file monitoring
/etc/passwd All
/etc/shadow All
/etc/sudoers All
/boot All

# Exclude temporary directories
!/tmp
!/var/tmp
!/proc
!/sys
!/dev
```
### Rule Definitions
AIDE uses rule definitions to specify what attributes to monitor:
- R: Default group — p+i+n+u+g+s+m+c+md5 (permissions, inode, link count, user, group, size, mtime, ctime, and MD5 checksum)
- L: Attribute-only group — p+i+n+u+g (no size, timestamp, or checksum checks)
- p: Permissions
- i: Inode number
- n: Number of links
- u: User (owner)
- g: Group
- s: Size
- m: Modification time (mtime)
- a: Access time (atime)
- c: Inode change time (ctime) — not creation time
- S: Check for growing size (useful for log files)
- md5: MD5 checksum
- sha1: SHA-1 checksum
- sha256: SHA-256 checksum
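Attributes compose with `+`, so you can define site-specific groups in `aide.conf`. The group name below is illustrative, not an AIDE built-in:

```bash
# Custom group: like R but with SHA-256 instead of MD5, and no atime
# (atime changes on every read and generates constant noise)
StrictNoAtime = p+i+n+u+g+s+m+c+sha256

/etc/nginx StrictNoAtime
```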
### Initializing and Running AIDE
```bash
# Initialize the database
sudo aide --init

# Move the new database to the expected location
sudo mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db

# Run integrity check
sudo aide --check

# Update database after legitimate changes
sudo aide --update
sudo mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db
```
### Automating AIDE with Cron
Create a cron job for regular integrity checks:
```bash
# Edit crontab
sudo crontab -e

# Add a daily integrity check at 2 AM
0 2 * * * /usr/bin/aide --check | mail -s "AIDE Integrity Report" admin@example.com

# Weekly database update (run only after verifying changes are legitimate)
0 3 * * 0 /usr/bin/aide --update && mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db
```
## Tripwire Implementation {#tripwire}
Tripwire is a commercial file integrity monitoring solution with an open-source version available for Linux.
### Installing Open Source Tripwire
```bash
# Ubuntu/Debian
sudo apt-get install tripwire

# CentOS/RHEL (may require EPEL repository)
sudo yum install epel-release
sudo yum install tripwire

# Compile from source (alternative method)
wget https://github.com/Tripwire/tripwire-open-source/archive/master.zip
unzip master.zip
cd tripwire-open-source-master
make release
sudo make install
```
### Tripwire Configuration
During installation, Tripwire will prompt for passphrases for site and local keys. The main configuration files are:
- `/etc/tripwire/tw.cfg`: Main configuration
- `/etc/tripwire/tw.pol`: Policy file defining what to monitor
```bash
# Initialize Tripwire database
sudo tripwire --init

# Run integrity check
sudo tripwire --check

# Generate a report and review it interactively
sudo tripwire --check --interactive
```
### Sample Tripwire Policy
```bash
# Basic Tripwire policy rules
(
rulename = "System Configuration Files",
severity = $(SIG_HI)
)
{
/etc/passwd -> $(SEC_CONFIG);
/etc/shadow -> $(SEC_CONFIG);
/etc/group -> $(SEC_CONFIG);
/etc/sudoers -> $(SEC_CONFIG);
}
(
rulename = "System Binaries",
severity = $(SIG_MED)
)
{
/bin -> $(SEC_BIN);
/sbin -> $(SEC_BIN);
/usr/bin -> $(SEC_BIN);
/usr/sbin -> $(SEC_BIN);
}
```
## OSSEC File Integrity Monitoring {#ossec}
OSSEC is a comprehensive Host-based Intrusion Detection System (HIDS) that includes robust file integrity monitoring capabilities.
### Installing OSSEC
```bash
# Download OSSEC
wget https://github.com/ossec/ossec-hids/archive/master.tar.gz
tar -xzf master.tar.gz
cd ossec-hids-master

# Install dependencies
sudo apt-get install build-essential libevent-dev libpcre2-dev libz-dev libssl-dev

# Run installation script
sudo ./install.sh
# Follow the interactive installation prompts
```
### OSSEC Configuration for FIM
Edit `/var/ossec/etc/ossec.conf`:
```xml
<syscheck>
  <!-- Scan interval in seconds (every 2 hours) -->
  <frequency>7200</frequency>

  <!-- Directories to monitor -->
  <directories check_all="yes">/etc,/usr/bin,/usr/sbin,/bin,/sbin</directories>
  <directories check_all="yes" report_changes="yes">/etc/passwd,/etc/shadow</directories>

  <!-- Files that change frequently in normal operation -->
  <ignore>/etc/mtab</ignore>
  <ignore>/etc/mnttab</ignore>
  <ignore>/etc/hosts.deny</ignore>
  <ignore>/etc/mail/statistics</ignore>
  <ignore>/etc/random-seed</ignore>
  <ignore>/etc/adjtime</ignore>
  <ignore>/etc/httpd/logs</ignore>

  <!-- Windows agents only -->
  <windows_registry>HKEY_LOCAL_MACHINE\Software\Classes\batfile</windows_registry>

  <!-- Alert when new files are created -->
  <alert_new_files>yes</alert_new_files>
</syscheck>
### Starting OSSEC Services
```bash
# Start OSSEC
sudo /var/ossec/bin/ossec-control start

# Check status
sudo /var/ossec/bin/ossec-control status

# View logs
sudo tail -f /var/ossec/logs/ossec.log
```
## Custom Scripts and Automation {#custom-scripts}
### Advanced Monitoring Script with Email Alerts
```bash
#!/bin/bash
# Advanced File Integrity Monitoring Script
# Author: System Administrator
# Version: 2.0

# Configuration
CONFIG_FILE="/etc/fim/fim.conf"
BASELINE_DIR="/var/lib/fim/baselines"
LOG_FILE="/var/log/fim/fim.log"
TEMP_DIR="/tmp/fim"
EMAIL_RECIPIENT="admin@example.com"
CRITICAL_FILES="/etc/passwd /etc/shadow /etc/sudoers /etc/ssh/sshd_config"
# Load configuration
if [ -f "$CONFIG_FILE" ]; then
source "$CONFIG_FILE"
fi
# Logging function
log_message() {
local level="$1"
local message="$2"
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $message" | tee -a "$LOG_FILE"
}
# Send email alert
send_alert() {
local subject="$1"
local body="$2"
echo "$body" | mail -s "$subject" "$EMAIL_RECIPIENT"
}
# Create baseline
create_baseline() {
log_message "INFO" "Creating new baseline"
mkdir -p "$BASELINE_DIR" "$TEMP_DIR"
# Create checksums for different categories
find /etc -type f -exec sha256sum {} \; > "$BASELINE_DIR/etc.sha256" 2>/dev/null
find /usr/bin -type f -exec sha256sum {} \; > "$BASELINE_DIR/usr_bin.sha256" 2>/dev/null
find /usr/sbin -type f -exec sha256sum {} \; > "$BASELINE_DIR/usr_sbin.sha256" 2>/dev/null
# Critical files
sha256sum $CRITICAL_FILES > "$BASELINE_DIR/critical.sha256" 2>/dev/null
# Store file permissions and ownership
find /etc /usr/bin /usr/sbin -type f -exec stat -c "%n %a %U %G %s %Y" {} \; > "$BASELINE_DIR/attributes.txt" 2>/dev/null
log_message "INFO" "Baseline creation completed"
}
# Check integrity
check_integrity() {
log_message "INFO" "Starting integrity check"
local changes_detected=0
local alert_body=""
mkdir -p "$TEMP_DIR"
# Check critical files first
if [ -f "$BASELINE_DIR/critical.sha256" ]; then
if ! sha256sum -c "$BASELINE_DIR/critical.sha256" --quiet 2>/dev/null; then
log_message "CRITICAL" "Changes detected in critical system files"
alert_body="CRITICAL: Changes detected in critical system files\n"
changes_detected=1
fi
fi
# Check other directories
for category in etc usr_bin usr_sbin; do
if [ -f "$BASELINE_DIR/${category}.sha256" ]; then
if ! sha256sum -c "$BASELINE_DIR/${category}.sha256" --quiet 2>/dev/null; then
log_message "WARNING" "Changes detected in $category"
alert_body="${alert_body}WARNING: Changes detected in $category\n"
changes_detected=1
fi
fi
done
# Check file attributes
if [ -f "$BASELINE_DIR/attributes.txt" ]; then
find /etc /usr/bin /usr/sbin -type f -exec stat -c "%n %a %U %G %s %Y" {} \; > "$TEMP_DIR/current_attributes.txt" 2>/dev/null
if ! diff "$BASELINE_DIR/attributes.txt" "$TEMP_DIR/current_attributes.txt" > /dev/null; then
log_message "WARNING" "File attribute changes detected"
alert_body="${alert_body}WARNING: File permission or ownership changes detected\n"
changes_detected=1
fi
fi
# Send alert if changes detected
if [ $changes_detected -eq 1 ]; then
send_alert "File Integrity Alert - $(hostname)" "$alert_body"
else
log_message "INFO" "No changes detected"
fi
# Cleanup
rm -rf "$TEMP_DIR"
}
# Update baseline
update_baseline() {
log_message "INFO" "Updating baseline"
mv "$BASELINE_DIR" "${BASELINE_DIR}.backup.$(date +%Y%m%d)"
create_baseline
}
# Main execution
case "$1" in
create|init)
create_baseline
;;
check|scan)
check_integrity
;;
update)
update_baseline
;;
*)
echo "Usage: $0 {create|check|update}"
echo " create - Create initial baseline"
echo " check - Check integrity against baseline"
echo " update - Update baseline (creates backup)"
exit 1
;;
esac
```
### Systemd Service for Continuous Monitoring
Create a systemd service for automated monitoring:
```bash
# Create service file
sudo tee /etc/systemd/system/fim-monitor.service > /dev/null << EOF
[Unit]
Description=File Integrity Monitor
After=network.target
[Service]
Type=oneshot
User=root
ExecStart=/usr/local/bin/fim-advanced.sh check
[Install]
WantedBy=multi-user.target
EOF
# Create timer for periodic checks
sudo tee /etc/systemd/system/fim-monitor.timer > /dev/null << EOF
[Unit]
Description=Run FIM check every hour
Requires=fim-monitor.service
[Timer]
OnCalendar=hourly
Persistent=true
[Install]
WantedBy=timers.target
EOF
# Enable and start services
sudo systemctl daemon-reload
sudo systemctl enable fim-monitor.timer
sudo systemctl start fim-monitor.timer
```
## Practical Examples and Use Cases {#examples}
### Web Server Monitoring
For web servers, focus on monitoring web application files and configuration:
```bash
#!/bin/bash
# Web Server FIM Configuration

# Monitor web application files
WEBROOT="/var/www/html"
APACHE_CONFIG="/etc/apache2"
NGINX_CONFIG="/etc/nginx"
# Create web-specific baseline
create_web_baseline() {
find "$WEBROOT" -type f \( -name "*.php" -o -name "*.html" -o -name "*.js" -o -name "*.css" \) -exec sha256sum {} \; > /var/lib/fim/webfiles.sha256
if [ -d "$APACHE_CONFIG" ]; then
find "$APACHE_CONFIG" -name "*.conf" -exec sha256sum {} \; > /var/lib/fim/apache.sha256
fi
if [ -d "$NGINX_CONFIG" ]; then
find "$NGINX_CONFIG" -name "*.conf" -exec sha256sum {} \; > /var/lib/fim/nginx.sha256
fi
}
# Check for web shell uploads
check_web_shells() {
# Common web shell patterns
local suspicious_patterns="c99|r57|b374k|WSO|FilesMan|Cgishell"
find "$WEBROOT" -type f \( -name "*.php" -o -name "*.asp" -o -name "*.jsp" \) -exec grep -l -E "$suspicious_patterns" {} \; 2>/dev/null | while read file; do
echo "ALERT: Potential web shell detected: $file" | tee -a /var/log/fim/webshell.log
# Send immediate alert
echo "Potential web shell detected on $(hostname): $file" | mail -s "Web Shell Alert" admin@example.com
done
}
```
### Database Server Monitoring
For database servers, monitor configuration files and data directories:
```bash
#!/bin/bash
# Database Server FIM

# MySQL/MariaDB monitoring
MYSQL_CONFIG="/etc/mysql"
MYSQL_DATA="/var/lib/mysql"
# PostgreSQL monitoring
POSTGRES_CONFIG="/etc/postgresql"
POSTGRES_DATA="/var/lib/postgresql"
monitor_database_config() {
# Monitor MySQL configuration
if [ -d "$MYSQL_CONFIG" ]; then
find "$MYSQL_CONFIG" -name "*.cnf" -exec sha256sum {} \; > /var/lib/fim/mysql_config.sha256
fi
# Monitor PostgreSQL configuration
if [ -d "$POSTGRES_CONFIG" ]; then
find "$POSTGRES_CONFIG" -name "*.conf" -exec sha256sum {} \; > /var/lib/fim/postgres_config.sha256
fi
# Monitor critical database files (be careful with large data directories)
# Focus on configuration and schema files rather than data files
}
```
### Container Environment Monitoring
For containerized environments, monitor container images and configurations:
```bash
#!/bin/bash
# Container FIM

# Docker monitoring
DOCKER_CONFIG="/etc/docker"
DOCKER_COMPOSE_FILES="/opt/docker-compose"
monitor_containers() {
# Monitor Docker daemon configuration
if [ -d "$DOCKER_CONFIG" ]; then
find "$DOCKER_CONFIG" -type f -exec sha256sum {} \; > /var/lib/fim/docker_config.sha256
fi
# Monitor Docker Compose files
if [ -d "$DOCKER_COMPOSE_FILES" ]; then
find "$DOCKER_COMPOSE_FILES" \( -name "docker-compose.yml" -o -name "*.yaml" \) -exec sha256sum {} \; > /var/lib/fim/compose.sha256
fi
# Monitor running containers for file changes
docker ps --format "table {{.Names}}" | tail -n +2 | while read container; do
# Create checkpoint of container filesystem
docker diff "$container" > "/var/lib/fim/container_${container}_diff.txt"
done
}
```
## Troubleshooting Common Issues {#troubleshooting}
### False Positives
False positives are common in file integrity monitoring. Here's how to handle them:
```bash
# Common false positive sources:
#   1. Log files       - exclude or monitor separately
#   2. Temporary files - exclude temp directories
#   3. Cache files     - exclude cache directories
#   4. Database files  - monitor configuration, not data
#   5. System updates  - update baseline after legitimate changes

# Create exclusion rules
create_exclusions() {
cat > /etc/fim/exclusions.txt << EOF
# Temporary directories
/tmp/*
/var/tmp/*
/var/cache/*

# Log files
/var/log/*
*.log

# System generated files
/etc/mtab
/etc/resolv.conf
/proc/*
/sys/*
/dev/*

# Application specific
/var/lib/mysql/*
/var/lib/postgresql/*/main/base/*
EOF
}
# Apply exclusions in monitoring script
apply_exclusions() {
local file="$1"
local excluded=0
while read pattern; do
# Skip blank lines and comments in the exclusions file
case "$pattern" in ""|"#"*) continue ;; esac
if [[ "$file" == $pattern ]]; then
excluded=1
break
fi
done < /etc/fim/exclusions.txt
return $excluded
}
```
### Performance Issues
Large filesystems can cause performance problems:
```bash
# Optimize scanning performance
optimize_scanning() {
# Use parallel processing
find /etc -type f -print0 | xargs -0 -P 4 -n 100 sha256sum > baseline.sha256
# Limit scan depth
find /usr -maxdepth 3 -type f -exec sha256sum {} \;
# Use incremental scanning
find /etc -newer /var/lib/fim/last_scan -type f -exec sha256sum {} \;
touch /var/lib/fim/last_scan
# Compress baselines to save space
gzip /var/lib/fim/baseline.sha256
}
# Monitor resource usage
monitor_resources() {
# Check CPU usage during scan
top -b -n 1 | grep fim
# Check memory usage
ps aux | grep fim | awk '{sum+=$6} END {print "Memory usage: " sum/1024 " MB"}'
# Check I/O usage
iotop -a -o | grep fim
}
```
### Database Corruption
Handle baseline database corruption:
```bash
# Backup and recovery procedures
backup_baseline() {
local backup_dir="/var/backups/fim"
mkdir -p "$backup_dir"
# Create timestamped backup
tar -czf "$backup_dir/baseline_$(date +%Y%m%d_%H%M%S).tar.gz" /var/lib/fim/
# Keep only last 10 backups
find "$backup_dir" -name "baseline_*.tar.gz" | sort | head -n -10 | xargs rm -f
}
# Verify baseline integrity
verify_baseline() {
local baseline_file="$1"
# Check file format
if ! sha256sum -c "$baseline_file" --quiet 2>/dev/null; then
echo "ERROR: Baseline file is corrupted or invalid"
return 1
fi
# Check for duplicate entries
if [ $(cut -d' ' -f3- "$baseline_file" | sort | uniq -d | wc -l) -gt 0 ]; then
echo "WARNING: Duplicate entries found in baseline"
fi
return 0
}
# Rebuild corrupted baseline
rebuild_baseline() {
echo "Rebuilding baseline from backup..."
# Find latest backup
local latest_backup=$(find /var/backups/fim -name "baseline_*.tar.gz" | sort | tail -n 1)
if [ -n "$latest_backup" ]; then
tar -xzf "$latest_backup" -C /
echo "Baseline restored from backup: $latest_backup"
else
echo "No backup found, creating new baseline..."
create_baseline
fi
}
```
### Network and Email Issues
Troubleshoot notification problems:
```bash
# Test email functionality
test_email() {
if command -v mail >/dev/null 2>&1; then
echo "Test message from FIM system" | mail -s "FIM Test" admin@example.com
echo "Test email sent"
else
echo "Mail command not available, installing..."
sudo apt-get install mailutils
fi
}
# Alternative notification methods
send_notification() {
local message="$1"
# Try email first
if echo "$message" | mail -s "FIM Alert" admin@example.com 2>/dev/null; then
echo "Email notification sent"
# Fallback to syslog
elif logger -p security.alert "FIM: $message"; then
echo "Syslog notification sent"
# Fallback to SNMP trap (if configured)
elif snmptrap -v 2c -c public monitoring-server '' 1.3.6.1.4.1.2021.251 1.3.6.1.4.1.2021.251.1 s "$message" 2>/dev/null; then
echo "SNMP trap sent"
else
echo "All notification methods failed"
fi
}
```
## Best Practices and Security Considerations {#best-practices}
### Secure Baseline Storage
Protect your baselines from tampering:
```bash
# Store baselines on read-only media or remote systems.
# Use cryptographic signatures for baseline integrity.

# Sign baseline with GPG
sign_baseline() {
local baseline_file="$1"
gpg --detach-sign --armor "$baseline_file"
echo "Baseline signed: ${baseline_file}.asc"
}
# Verify baseline signature
verify_signature() {
local baseline_file="$1"
if gpg --verify "${baseline_file}.asc" "$baseline_file"; then
echo "Baseline signature valid"
return 0
else
echo "ERROR: Baseline signature invalid!"
return 1
fi
}
# Store baseline on remote system
backup_remote() {
local baseline_file="$1"
local remote_host="backup-server.example.com"
scp "$baseline_file" "backup@${remote_host}:/backups/fim/$(hostname)/"
echo "Baseline backed up to remote system"
}
```
### Access Control
Implement proper access controls:
```bash
# Set appropriate permissions
secure_fim_files() {
# Baseline directory - read-only for monitoring user
chown root:fim-group /var/lib/fim
chmod 750 /var/lib/fim
chmod 640 /var/lib/fim/*
# Log files - append only
chown root:adm /var/log/fim.log
chmod 640 /var/log/fim.log
# Configuration files - read-only
chmod 644 /etc/fim/fim.conf
# Scripts - executable by root only
chmod 700 /usr/local/bin/fim-*.sh
}
# Use dedicated user for monitoring
create_fim_user() {
# Create system user for FIM
useradd -r -s /bin/false -d /var/lib/fim -c "File Integrity Monitor" fim
# Add to necessary groups
usermod -a -G adm fim
# Configure sudo access for specific commands
echo "fim ALL=(root) NOPASSWD: /usr/bin/find, /usr/bin/sha256sum" >> /etc/sudoers.d/fim
}
```
### Monitoring Strategy
Develop a comprehensive monitoring strategy:
```bash
# Risk-based monitoring approach
define_monitoring_levels() {
cat > /etc/fim/monitoring-levels.conf << EOF
# Critical - real-time monitoring, immediate alerts
CRITICAL_FILES="/etc/passwd /etc/shadow /etc/sudoers /etc/ssh/sshd_config"
CRITICAL_DIRS="/etc/ssh /etc/pam.d"

# Important - hourly checks, email alerts
IMPORTANT_DIRS="/etc /usr/bin /usr/sbin /bin /sbin"

# Standard - daily checks, log only
STANDARD_DIRS="/usr/local /opt"

# Low priority - weekly checks
LOW_PRIORITY="/home /var/www"
EOF
}
# Implement tiered monitoring
implement_tiered_monitoring() {
# Critical files - real-time monitoring
inotifywait -m -e modify,create,delete $CRITICAL_FILES &
# Important directories - hourly checks
(crontab -l 2>/dev/null; echo "0 * * * * /usr/local/bin/fim.sh check important") | crontab -
# Standard directories - daily checks
(crontab -l 2>/dev/null; echo "0 2 * * * /usr/local/bin/fim.sh check standard") | crontab -
# Low priority - weekly checks
(crontab -l 2>/dev/null; echo "0 3 * * 0 /usr/local/bin/fim.sh check low") | crontab -
}
```
### Compliance and Reporting
Ensure compliance with security standards:
```bash
# Generate compliance reports
generate_compliance_report() {
local report_file="/var/log/fim/compliance_$(date +%Y%m%d).txt"
cat > "$report_file" << EOF
File Integrity Monitoring Compliance Report
Generated: $(date)
System: $(hostname)
=== Configuration Status ===
FIM Tool: $(which aide || which tripwire || echo "Custom Script")
Configuration File: $(test -f /etc/aide/aide.conf && echo "Present" || echo "Missing")
Baseline Status: $(test -f /var/lib/aide/aide.db && echo "Present" || echo "Missing")
=== Monitoring Coverage ===
Critical Files: $(echo $CRITICAL_FILES | wc -w) files
Monitored Directories: $(echo $IMPORTANT_DIRS | wc -w) directories
Exclusions: $(test -f /etc/fim/exclusions.txt && wc -l < /etc/fim/exclusions.txt || echo "0") patterns
=== Recent Activity ===
Last Scan: $(stat -c %y /var/log/fim.log 2>/dev/null || echo "Never")
Recent Alerts: $(grep -c "ALERT" /var/log/fim.log 2>/dev/null || echo "0")
System Changes: $(grep -c "Changes detected" /var/log/fim.log 2>/dev/null || echo "0")
=== Compliance Status ===
PCI DSS 11.5: $(test -f /var/lib/fim/baseline.sha256 && echo "COMPLIANT" || echo "NON-COMPLIANT")
NIST 800-53 SI-7: $(pgrep -f inotifywait >/dev/null && echo "COMPLIANT" || echo "PARTIAL")
ISO 27001 A.12.2.1: $(test -f /etc/cron.d/fim && echo "COMPLIANT" || echo "NON-COMPLIANT")
EOF
echo "Compliance report generated: $report_file"
}
# Archive and rotate logs
archive_logs() {
local log_dir="/var/log/fim"
local archive_dir="/var/log/fim/archive"
mkdir -p "$archive_dir"
# Compress logs older than 30 days
find "$log_dir" -name "*.log" -mtime +30 -exec gzip {} \;
# Move compressed logs to archive
find "$log_dir" -name "*.gz" -exec mv {} "$archive_dir/" \;
# Remove archives older than 1 year
find "$archive_dir" -name "*.gz" -mtime +365 -delete
}
```
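If the host already runs logrotate, the same rotation policy can be expressed declaratively instead of with a custom function. This is a sketch to drop into `/etc/logrotate.d/fim`; the path and retention values are assumptions to adjust for your environment:

```bash
# /etc/logrotate.d/fim - rotate FIM logs monthly, keep one year
/var/log/fim/*.log {
    monthly
    rotate 12
    compress
    missingok
    notifempty
}
```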
### Integration with SIEM Systems
Integrate FIM with Security Information and Event Management (SIEM) systems:
```bash
# Syslog integration
configure_syslog() {
# Configure rsyslog for FIM events
cat >> /etc/rsyslog.conf << EOF
# File Integrity Monitoring logs
local6.* /var/log/fim/fim.log
local6.* @@siem-server.example.com:514
EOF
systemctl restart rsyslog
}
# JSON format for SIEM consumption
log_json_event() {
local event_type="$1"
local file_path="$2"
local details="$3"
local json_log="{
\"timestamp\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",
\"hostname\": \"$(hostname)\",
\"event_type\": \"$event_type\",
\"file_path\": \"$file_path\",
\"details\": \"$details\",
\"source\": \"fim\"
}"
echo "$json_log" >> /var/log/fim/fim_events.json
logger -p local6.warning "$json_log"
}
# Splunk integration
splunk_forwarder_config() {
cat > /opt/splunkforwarder/etc/system/local/inputs.conf << EOF
[monitor:///var/log/fim/fim.log]
disabled = false
sourcetype = linux_fim
index = security
[monitor:///var/log/fim/fim_events.json]
disabled = false
sourcetype = json
index = security
EOF
/opt/splunkforwarder/bin/splunk restart
}
```
### Performance Optimization
Optimize FIM performance for large environments:
```bash
# Parallel processing for large scans
parallel_scan() {
local target_dir="$1"
local output_file="$2"
# Split directory listing into chunks
find "$target_dir" -type f | split -l 1000 - /tmp/fim_chunk_
# Process chunks in parallel
for chunk in /tmp/fim_chunk_*; do
{
while read file; do
sha256sum "$file" 2>/dev/null
done < "$chunk"
} &
done | sort > "$output_file"
wait
rm -f /tmp/fim_chunk_*
}
# Incremental scanning
incremental_scan() {
local last_scan_file="/var/lib/fim/last_scan_timestamp"
local current_time=$(date +%s)
# Find files modified since last scan
if [ -f "$last_scan_file" ]; then
find /etc -type f -newer "$last_scan_file" -exec sha256sum {} \; > /tmp/incremental_scan.sha256
# Compare with baseline for changed files only
compare_incremental_changes
else
# First scan - full baseline creation
create_baseline
fi
echo "$current_time" > "$last_scan_file"
}
# Optimize I/O operations
optimize_io() {
# Use ionice to limit I/O impact
ionice -c 3 -p $$ 2>/dev/null
# Limit concurrent file operations
export FIM_MAX_PROCS=2
# Use faster hash for initial detection, SHA256 for verification
fast_check() {
local file="$1"
local md5_hash=$(md5sum "$file" 2>/dev/null | cut -d' ' -f1)
local baseline_md5=$(grep "$file" /var/lib/fim/baseline.md5 2>/dev/null | cut -d' ' -f1)
if [ "$md5_hash" != "$baseline_md5" ]; then
# File changed, verify with SHA256
sha256sum "$file" >> /tmp/changed_files.sha256
fi
}
}
```
### Disaster Recovery and Business Continuity
Implement disaster recovery procedures:
```bash
# Backup and recovery procedures
create_disaster_recovery_plan() {
cat > /etc/fim/disaster_recovery.sh << 'EOF'
#!/bin/bash
# FIM Disaster Recovery Plan

# Backup FIM configuration and baselines
backup_fim_data() {
local backup_location="/backup/fim/$(date +%Y%m%d)"
mkdir -p "$backup_location"
# Backup baselines
cp -r /var/lib/fim/* "$backup_location/"
# Backup configuration
cp -r /etc/fim/* "$backup_location/"
# Backup scripts
cp /usr/local/bin/fim-*.sh "$backup_location/"
# Create manifest
find "$backup_location" -type f -exec sha256sum {} \; > "$backup_location/manifest.sha256"
echo "FIM data backed up to: $backup_location"
}
# Restore from backup
restore_fim_data() {
local backup_location="$1"
if [ ! -d "$backup_location" ]; then
echo "ERROR: Backup location not found: $backup_location"
return 1
fi
# Verify backup integrity
if ! sha256sum -c "$backup_location/manifest.sha256" --quiet; then
echo "ERROR: Backup integrity check failed"
return 1
fi
# Restore data
cp -r "$backup_location"/* /var/lib/fim/
cp -r "$backup_location"/* /etc/fim/
echo "FIM data restored from: $backup_location"
}
# Test restoration procedures
test_restoration() {
local test_dir="/tmp/fim_test"
mkdir -p "$test_dir"
# Create test backup
backup_fim_data
# Simulate data loss
rm -rf /var/lib/fim/*
# Restore from backup
restore_fim_data "$(ls -1d /backup/fim/* | tail -n1)"
# Verify restoration
if [ -f "/var/lib/fim/baseline.sha256" ]; then
echo "Restoration test: PASSED"
else
echo "Restoration test: FAILED"
fi
}
EOF
chmod +x /etc/fim/disaster_recovery.sh
}
# Automated offsite backup
setup_offsite_backup() {
# Configure automated backup to remote location
cat > /etc/cron.d/fim-backup << EOF
# Daily backup of FIM data
0 4 * * * root /etc/fim/disaster_recovery.sh backup_fim_data && rsync -az /backup/fim/ remote-backup:/fim-backups/\$(hostname)/
EOF
}
```
## Conclusion and Next Steps {#conclusion}
File integrity monitoring is a crucial component of a comprehensive security strategy. This guide has covered various approaches to implementing FIM on Linux systems, from simple command-line tools to sophisticated enterprise solutions. Here are the key takeaways and recommendations for moving forward:
### Key Takeaways
1. Choose the Right Tool: Select FIM tools based on your environment's complexity, compliance requirements, and available resources. Start with simple solutions for basic needs and scale up as requirements grow.
2. Implement Defense in Depth: Use multiple monitoring approaches - combine real-time monitoring for critical files with scheduled scans for comprehensive coverage.
3. Regular Maintenance: FIM systems require ongoing maintenance, including baseline updates, exclusion rule refinement, and performance optimization.
4. Integration is Key: Integrate FIM with existing security infrastructure, including SIEM systems, alerting mechanisms, and incident response procedures.
### Recommended Implementation Roadmap
#### Phase 1: Foundation (Weeks 1-2)
- Install and configure a basic FIM solution (AIDE or custom scripts)
- Create initial baselines for critical system files
- Set up basic alerting and logging
- Document procedures and train staff
#### Phase 2: Enhancement (Weeks 3-6)
- Implement real-time monitoring for critical files
- Integrate with centralized logging and SIEM systems
- Develop automated baseline update procedures
- Create compliance reporting mechanisms
#### Phase 3: Optimization (Weeks 7-12)
- Fine-tune exclusion rules to reduce false positives
- Implement performance optimizations for large environments
- Develop comprehensive incident response procedures
- Establish disaster recovery and business continuity plans
#### Phase 4: Advanced Features (Ongoing)
- Implement machine learning-based anomaly detection
- Develop custom dashboards and visualization tools
- Integrate with configuration management systems
- Establish metrics and KPIs for FIM effectiveness
### Best Practices Summary
1. Start Small: Begin with monitoring critical files and directories, then gradually expand coverage
2. Test Thoroughly: Validate FIM configurations in test environments before production deployment
3. Document Everything: Maintain comprehensive documentation of baselines, exclusions, and procedures
4. Regular Reviews: Periodically review and update FIM configurations to address changing requirements
5. Train Your Team: Ensure staff understand FIM alerts and response procedures
6. Monitor Performance: Keep track of FIM system resource usage and optimize as needed
7. Backup Baselines: Protect baseline data with regular backups and integrity checks
### Common Pitfalls to Avoid
- Over-monitoring: Don't monitor everything - focus on security-critical files and directories
- Ignoring False Positives: Address false positives promptly to avoid alert fatigue
- Neglecting Updates: Keep FIM tools and baselines current with system changes
- Poor Documentation: Maintain clear documentation of what's monitored and why
- Insufficient Testing: Test FIM configurations thoroughly before production deployment
### Future Considerations
As technology evolves, consider these emerging trends in file integrity monitoring:
1. Container Security: Adapt FIM strategies for containerized environments and microservices
2. Cloud Integration: Leverage cloud-native FIM solutions for hybrid and multi-cloud environments
3. AI and Machine Learning: Implement intelligent anomaly detection to reduce false positives
4. DevSecOps Integration: Integrate FIM into CI/CD pipelines for continuous security monitoring
5. Zero Trust Architecture: Align FIM strategies with zero trust security models
### Additional Resources
For continued learning and staying current with FIM best practices:
- NIST Cybersecurity Framework: Guidance on implementing detection capabilities
- CIS Controls: Specific recommendations for file integrity monitoring
- PCI DSS Requirements: Compliance requirements for payment card industry
- Open Source Communities: AIDE, OSSEC, and other tool communities for support and updates
- Security Conferences: DEF CON, RSA, and other venues for latest security research
### Final Recommendations
File integrity monitoring is not a set-and-forget security control. It requires ongoing attention, refinement, and adaptation to changing threat landscapes and business requirements. Start with the fundamentals covered in this guide, but don't stop there. Continuously evaluate and improve your FIM implementation to ensure it provides effective security monitoring while minimizing operational overhead.
Remember that the most sophisticated FIM solution is only as good as the processes and people supporting it. Invest in training, documentation, and procedures to ensure your FIM implementation delivers maximum security value to your organization.
By following the guidance in this comprehensive guide, you'll be well-equipped to implement robust file integrity monitoring that enhances your Linux system security posture and helps protect against unauthorized changes, malware, and other security threats.