# How to Log Script Output in Linux
Logging script output is a fundamental skill for Linux system administrators, developers, and power users. Whether you're running automated scripts, debugging applications, or maintaining system processes, capturing and storing script output is essential for monitoring, troubleshooting, and compliance purposes. This comprehensive guide will walk you through various methods to log script output in Linux, from basic redirection techniques to advanced logging strategies.
## Table of Contents
1. [Prerequisites and Requirements](#prerequisites-and-requirements)
2. [Understanding Linux Output Streams](#understanding-linux-output-streams)
3. [Basic Output Redirection Methods](#basic-output-redirection-methods)
4. [Using the tee Command for Simultaneous Display and Logging](#using-the-tee-command-for-simultaneous-display-and-logging)
5. [Advanced Logging Techniques](#advanced-logging-techniques)
6. [Logging with Timestamps](#logging-with-timestamps)
7. [Script-Based Logging Solutions](#script-based-logging-solutions)
8. [Real-World Examples and Use Cases](#real-world-examples-and-use-cases)
9. [Troubleshooting Common Issues](#troubleshooting-common-issues)
10. [Best Practices and Professional Tips](#best-practices-and-professional-tips)
11. [Conclusion](#conclusion)
## Prerequisites and Requirements
Before diving into logging script output, ensure you have:
- A basic understanding of the Linux command-line interface
- Access to a Linux system with the bash shell (the default on most distributions)
- Familiarity with a text editor (nano, vim, or gedit)
- An understanding of file permissions and directory structure
- Basic knowledge of shell scripting concepts
### Required Tools
Most logging techniques use built-in Linux utilities:
- `bash` shell (default on most systems)
- `tee` command (part of coreutils)
- `script` command (usually pre-installed)
- Text editors for creating scripts
- Standard output redirection operators
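You can confirm these utilities are present before starting; a quick sketch using `command -v`:

```shell
# Check that the utilities used in this guide are available on this system
for cmd in bash tee script logger; do
    if command -v "$cmd" >/dev/null 2>&1; then
        echo "$cmd: found"
    else
        echo "$cmd: MISSING"
    fi
done
```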
## Understanding Linux Output Streams
Linux processes have three standard data streams that are crucial for logging:
### Standard Output (stdout) - File Descriptor 1
This stream carries normal program output. When you run a command successfully, the results typically go to stdout.
```bash
echo "This goes to stdout"
ls /home
```
### Standard Error (stderr) - File Descriptor 2
This stream carries error messages and diagnostic information. Even when stdout is redirected, error messages may still appear on the terminal.
```bash
ls /nonexistent-directory # Error goes to stderr
```
### Standard Input (stdin) - File Descriptor 0
This stream provides input to programs, though it's less relevant for output logging.
Understanding these streams is essential because you'll often need to capture both normal output and error messages in your logs.
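A quick experiment makes the separation concrete. The compound command below writes one line to each stream, and each stream is routed to its own file:

```shell
# stdout and stderr are independent streams: capture each in its own file
{ echo "normal output"; echo "error output" >&2; } 1>out.txt 2>err.txt

cat out.txt   # contains only the stdout line
cat err.txt   # contains only the stderr line
```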
## Basic Output Redirection Methods

### Redirecting Standard Output to a File

The most fundamental logging technique uses the `>` operator to redirect stdout to a file:
```bash
# Redirect output to a file (overwrites existing content)
./myscript.sh > output.log

# Append output to a file (preserves existing content)
./myscript.sh >> output.log
```
Example Script (myscript.sh):
```bash
#!/bin/bash
echo "Script started at $(date)"
echo "Processing files..."
ls -la /tmp
echo "Script completed"
```
Running with output redirection:
```bash
chmod +x myscript.sh
./myscript.sh > script_output.log
```
### Redirecting Standard Error
To capture error messages, redirect stderr using `2>`:
```bash
# Redirect only errors to a file
./myscript.sh 2> error.log

# Redirect errors and append to an existing file
./myscript.sh 2>> error.log
```
### Redirecting Both stdout and stderr
Capture everything by redirecting both streams:
```bash
# Method 1: Redirect both streams to the same file
./myscript.sh > output.log 2>&1

# Method 2: Using the bash-specific shorthand
./myscript.sh &> output.log

# Method 3: Separate files for output and errors
./myscript.sh > output.log 2> error.log
```
Explanation of `2>&1`:
- `2>` redirects stderr
- `&1` refers to the current location of stdout
- This effectively merges stderr into stdout
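One subtlety worth noting: redirections are processed left to right, so `2>&1` only captures errors if it appears *after* the file redirection. A minimal sketch (the `emit` helper is purely illustrative):

```shell
# Helper that writes one line to each stream (illustrative only)
emit() { echo "normal line"; echo "error line" >&2; }

# Correct: stdout goes to the file first, then stderr follows it
emit > both.log 2>&1

# Wrong: 2>&1 duplicates stderr to the terminal (stdout's target at that
# moment), so the error line never reaches the file
emit 2>&1 > only_stdout.log
```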
## Using the tee Command for Simultaneous Display and Logging

The `tee` command is invaluable when you want to see output on the terminal while simultaneously saving it to a file.

### Basic tee Usage
```bash
# Display output on the terminal and save it to a file
./myscript.sh | tee output.log

# Append to the file instead of overwriting
./myscript.sh | tee -a output.log
```
### Capturing Both stdout and stderr with tee
```bash
# Method 1: Merge the streams, then pipe to tee
./myscript.sh 2>&1 | tee output.log

# Method 2: The |& shorthand (bash 4.0+)
./myscript.sh |& tee output.log
```
### Multiple Output Destinations

`tee` can write to several files simultaneously:
```bash
# Write to multiple log files at once
./myscript.sh | tee log1.txt log2.txt log3.txt

# Combine with process substitution for more complex logging
./myscript.sh 2>&1 | tee >(cat > full.log) >(grep ERROR > errors.log)
```
## Advanced Logging Techniques

### Using Process Substitution
Process substitution allows for sophisticated logging setups:
```bash
# Log each stream to its own file while keeping both visible
./myscript.sh > >(tee stdout.log) 2> >(tee stderr.log >&2)

# Filter and log specific patterns
./myscript.sh 2>&1 | tee >(grep -i error > error.log) >(grep -i warning > warning.log)
```
### Named Pipes (FIFOs) for Advanced Logging
Create named pipes for complex logging scenarios:
```bash
# Create a named pipe
mkfifo /tmp/script_pipe

# Start a background reader that timestamps each line from the pipe
while IFS= read -r line; do
    echo "$(date): $line" >> timestamped.log
done < /tmp/script_pipe &

# Run the script with its output directed to the pipe
./myscript.sh > /tmp/script_pipe 2>&1
```
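The `script` utility listed under required tools offers yet another capture method: it records everything written to the terminal, including output from interactive programs. The invocation below assumes the util-linux version of `script` (BSD and macOS use a different syntax):

```shell
# Run a single command and capture its terminal output
# (-q suppresses the "Script started/done" notices)
script -q -c "echo captured by script" session.log

cat session.log
```

Run `script session.log` with no command to record an entire interactive session until you type `exit`.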
### Using exec for Persistent Redirection
The `exec` command can redirect output for the entire script duration:
```bash
#!/bin/bash
# Redirect all output for the rest of the script to a log file
exec > script.log 2>&1
echo "This goes to the log file"
ls /some/directory
echo "This also goes to the log file"
```
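A related trick: duplicate the original stdout onto a spare file descriptor before the `exec` redirection, so the script can restore it later. A minimal sketch:

```shell
#!/bin/bash
# Divert stdout to a file temporarily, then restore it
exec 3>&1              # keep a copy of the original stdout on fd 3
exec > section.log     # from here on, stdout goes to section.log

echo "this line is logged"

exec 1>&3 3>&-         # restore stdout and close the spare descriptor
echo "this line appears on the terminal again"
```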
## Logging with Timestamps
Adding timestamps to log entries is crucial for debugging and monitoring:
### Simple Timestamp Logging
```bash
# Prefix each line of output with a timestamp
./myscript.sh | while IFS= read -r line; do
    echo "$(date '+%Y-%m-%d %H:%M:%S'): $line"
done > timestamped.log
```
### Advanced Timestamp Function
Create a reusable timestamp function:
```bash
#!/bin/bash
# Function to prefix each input line with a timestamp
log_with_timestamp() {
    while IFS= read -r line; do
        printf '%s: %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
    done
}

# Use the function
./myscript.sh 2>&1 | log_with_timestamp | tee detailed.log
```
### Using the logger Command
The `logger` command integrates with the system log:
```bash
# Send output to the system log, tagged for easy filtering
./myscript.sh 2>&1 | logger -t "MyScript"

# Check the system log (the path varies: /var/log/messages on RHEL-based systems)
tail -f /var/log/syslog | grep MyScript
```
## Script-Based Logging Solutions

### Creating a Logging Wrapper Script
```bash
#!/bin/bash
# log_wrapper.sh - A comprehensive logging wrapper

SCRIPT_NAME="$1"
LOG_DIR="/var/log/scripts"
LOG_FILE="$LOG_DIR/$(basename "$SCRIPT_NAME" .sh)_$(date +%Y%m%d_%H%M%S).log"

# Create the log directory if it doesn't exist
mkdir -p "$LOG_DIR"

# Function to log with levels
log_message() {
    local level="$1"
    local message="$2"
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $message" | tee -a "$LOG_FILE"
}

# Start logging
log_message "INFO" "Starting script: $SCRIPT_NAME"
log_message "INFO" "Log file: $LOG_FILE"

# Execute the script, timestamping every line of its output
{
    echo "=== SCRIPT OUTPUT START ==="
    "$SCRIPT_NAME" 2>&1
    echo "=== SCRIPT OUTPUT END ==="
} | while IFS= read -r line; do
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $line" | tee -a "$LOG_FILE"
done

log_message "INFO" "Script completed: $SCRIPT_NAME"
```
### Built-in Logging for Scripts
Add logging directly to your scripts:
```bash
#!/bin/bash
# example_script_with_logging.sh

# Configuration
LOG_FILE="/var/log/myscript.log"
DEBUG=true

# Logging functions
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}

debug_log() {
    if [ "$DEBUG" = true ]; then
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] [DEBUG] $*" | tee -a "$LOG_FILE"
    fi
}

error_log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [ERROR] $*" | tee -a "$LOG_FILE" >&2
}

# Script logic with logging
log "Script started"
debug_log "Processing configuration"

if [ ! -d "/tmp" ]; then
    error_log "Directory /tmp does not exist"
    exit 1
fi

log "Processing files in /tmp"
ls -la /tmp >> "$LOG_FILE" 2>&1
log "Script completed successfully"
```
## Real-World Examples and Use Cases

### Example 1: Database Backup Script Logging
```bash
#!/bin/bash
# db_backup.sh - Database backup with comprehensive logging

DB_NAME="production"
BACKUP_DIR="/backups"
LOG_FILE="/var/log/db_backup.log"

# Redirect all output to the log file while still displaying it on the console
exec > >(tee -a "$LOG_FILE") 2>&1

echo "=== Database Backup Started: $(date) ==="
echo "Database: $DB_NAME"
echo "Backup Directory: $BACKUP_DIR"

# Create the backup, logging success or failure
if mysqldump "$DB_NAME" > "$BACKUP_DIR/backup_$(date +%Y%m%d_%H%M%S).sql"; then
    echo "SUCCESS: Database backup completed"
else
    echo "ERROR: Database backup failed"
    exit 1
fi

echo "=== Database Backup Completed: $(date) ==="
```
### Example 2: System Monitoring Script
```bash
#!/bin/bash
# system_monitor.sh - System monitoring with rotating logs

LOG_DIR="/var/log/monitoring"
LOG_FILE="$LOG_DIR/system_monitor_$(date +%Y%m%d).log"
MAX_LOG_SIZE=10485760 # 10MB in bytes

# Create the log directory
mkdir -p "$LOG_DIR"

# Rotate the log if it grows too large (GNU stat, with a BSD fallback)
rotate_log() {
    local size
    size=$(stat -c%s "$LOG_FILE" 2>/dev/null || stat -f%z "$LOG_FILE" 2>/dev/null)
    if [ -f "$LOG_FILE" ] && [ "${size:-0}" -gt "$MAX_LOG_SIZE" ]; then
        mv "$LOG_FILE" "${LOG_FILE}.old"
        echo "Log rotated at $(date)" > "$LOG_FILE"
    fi
}

# Monitoring function with logging
monitor_system() {
    {
        echo "=== System Monitor Report: $(date) ==="
        echo "CPU Usage:"
        top -bn1 | grep "Cpu(s)" | head -1
        echo ""
        echo "Memory Usage:"
        free -h
        echo ""
        echo "Disk Usage:"
        df -h
        echo "=== End Report ==="
        echo ""
    } >> "$LOG_FILE"
}

# Main execution
rotate_log
monitor_system

# Also display the most recent entries
tail -20 "$LOG_FILE"
```
### Example 3: Automated Deployment Script
```bash
#!/bin/bash
# deploy.sh - Application deployment with detailed logging

APP_NAME="myapp"
DEPLOY_DIR="/opt/$APP_NAME"
LOG_FILE="/var/log/deploy_${APP_NAME}_$(date +%Y%m%d_%H%M%S).log"

# Send all output to both the console and the log file, with timestamps
exec > >(while IFS= read -r line; do echo "[$(date '+%Y-%m-%d %H:%M:%S')] $line" | tee -a "$LOG_FILE"; done) 2>&1

echo "Starting deployment of $APP_NAME"
echo "Deploy directory: $DEPLOY_DIR"
echo "Log file: $LOG_FILE"

# Deployment steps with error handling
deploy_step() {
    local step_name="$1"
    local step_command="$2"
    echo "STEP: $step_name"
    if eval "$step_command"; then
        echo "SUCCESS: $step_name completed"
    else
        echo "ERROR: $step_name failed"
        echo "Deployment aborted"
        exit 1
    fi
    echo ""
}

deploy_step "Stopping application" "systemctl stop $APP_NAME"
deploy_step "Backing up current version" "cp -r $DEPLOY_DIR ${DEPLOY_DIR}.backup.$(date +%Y%m%d_%H%M%S)"
deploy_step "Deploying new version" "rsync -av /tmp/new_version/ $DEPLOY_DIR/"
deploy_step "Starting application" "systemctl start $APP_NAME"
deploy_step "Verifying deployment" "systemctl is-active $APP_NAME"

echo "Deployment completed successfully"
```
## Troubleshooting Common Issues

### Issue 1: Permission Denied When Writing Log Files

Problem: Script fails with permission errors when trying to write to log files.

Solution:
```bash
# Check directory permissions
ls -ld /var/log

# Create a user-writable log directory instead
mkdir -p "$HOME/logs"
LOG_FILE="$HOME/logs/script.log"

# Or write to a system location via sudo — note that a plain redirection
# runs in your unprivileged shell, so use tee under sudo instead
./myscript.sh 2>&1 | sudo tee /var/log/script.log > /dev/null
```
### Issue 2: Log Files Growing Too Large

Problem: Log files consume excessive disk space.

Solution:
```bash
# Implement simple size-based log rotation
rotate_logs() {
    local log_file="$1"
    local max_size="$2"
    if [ -f "$log_file" ] && [ "$(stat -c%s "$log_file")" -gt "$max_size" ]; then
        mv "$log_file" "${log_file}.$(date +%Y%m%d_%H%M%S)"
        touch "$log_file"
    fi
}

# Use logrotate for system-wide management (sudo must apply to tee, not to
# the redirection, so the file is written with root privileges)
sudo tee /etc/logrotate.d/myscript > /dev/null << EOF
/var/log/myscript.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    create 644 user group
}
EOF
```
### Issue 3: Missing Error Messages in Logs

Problem: Error messages don't appear in log files.

Solution:
```bash
# Always redirect both stdout and stderr
./myscript.sh > output.log 2>&1

# Or use the bash shorthand
./myscript.sh &> output.log

# Verify the redirection worked and check the exit code
./myscript.sh > output.log 2>&1
echo "Exit code: $?"
```
### Issue 4: Buffering Issues with Real-Time Logging

Problem: Output doesn't appear in log files immediately.

Solution:
```bash
# Force line-buffered output
stdbuf -oL -eL ./myscript.sh | tee output.log

# Or flush to disk explicitly after writing
echo "Message" | tee -a log.log
sync # Force write to disk

# For Python scripts, disable output buffering
python -u script.py | tee output.log
```
### Issue 5: Timestamps Not Synchronized

Problem: Timestamps in logs don't match system time.

Solution:
```bash
# Verify the system time
date
timedatectl status

# Use a consistent timestamp format, including the time zone
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S %Z')
echo "[$TIMESTAMP] Message" >> log.file

# Enable NTP synchronization if the clock has drifted
# (ntpdate is deprecated on modern systemd distributions)
sudo timedatectl set-ntp true
```
## Best Practices and Professional Tips

### 1. Establish Logging Standards
Create consistent logging practices across your environment:
```bash
# Standard log format: timestamp, level, component, message
LOG_FORMAT="[%s] [%s] [%s] %s\n"

log_standard() {
    local level="$1"
    local component="$2"
    local message="$3"
    printf "$LOG_FORMAT" "$(date '+%Y-%m-%d %H:%M:%S')" "$level" "$component" "$message"
}

# Usage
log_standard "INFO" "DATABASE" "Connection established" | tee -a app.log
log_standard "ERROR" "NETWORK" "Connection timeout" | tee -a app.log
```
### 2. Implement Log Levels
Use different log levels for better organization:
```bash
#!/bin/bash
# Configurable log levels
LOG_LEVEL=${LOG_LEVEL:-INFO} # Default to INFO
LOG_FILE="/var/log/application.log"

# Log level priorities (associative array, bash 4.0+)
declare -A LOG_LEVELS=(
    [DEBUG]=0
    [INFO]=1
    [WARN]=2
    [ERROR]=3
)

should_log() {
    local message_level="$1"
    [ "${LOG_LEVELS[$message_level]}" -ge "${LOG_LEVELS[$LOG_LEVEL]}" ]
}

log_message() {
    local level="$1"
    local message="$2"
    if should_log "$level"; then
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $message" | tee -a "$LOG_FILE"
    fi
}

# Usage examples
log_message "DEBUG" "Variable X = $X"
log_message "INFO" "Process started"
log_message "WARN" "Deprecated function used"
log_message "ERROR" "Failed to connect to database"
```
### 3. Use Structured Logging
Implement structured logging for better parsing:
```bash
# JSON-style structured logging (note: values are not JSON-escaped, so
# avoid quotes and newlines in messages; %3N requires GNU date)
log_json() {
    local level="$1"
    local component="$2"
    local message="$3"
    local extra="$4"
    cat << EOF | tee -a structured.log
{
  "timestamp": "$(date -u +%Y-%m-%dT%H:%M:%S.%3NZ)",
  "level": "$level",
  "component": "$component",
  "message": "$message",
  "pid": $$,
  "user": "$USER",
  "hostname": "$HOSTNAME"${extra:+, $extra}
}
EOF
}

# Usage
log_json "INFO" "auth" "User login successful" '"user_id": "12345"'
```
### 4. Implement Log Monitoring
Set up monitoring for critical log events:
```bash
#!/bin/bash
# log_monitor.sh - Monitor logs for specific patterns

LOG_FILE="/var/log/application.log"
ALERT_PATTERNS=("ERROR" "CRITICAL" "FATAL")
NOTIFICATION_EMAIL="admin@company.com"

monitor_logs() {
    tail -F "$LOG_FILE" | while IFS= read -r line; do
        for pattern in "${ALERT_PATTERNS[@]}"; do
            if echo "$line" | grep -q "$pattern"; then
                # Send alert
                echo "ALERT: $line" | mail -s "Log Alert: $pattern detected" "$NOTIFICATION_EMAIL"
                # Log the alert
                echo "[$(date '+%Y-%m-%d %H:%M:%S')] ALERT: Pattern '$pattern' detected: $line" >> /var/log/alerts.log
            fi
        done
    done
}

# Run the monitor in the background
monitor_logs &
```
### 5. Security Considerations
Protect sensitive information in logs:
```bash
# Function to mask sensitive data before it reaches the log
sanitize_log() {
    local message="$1"
    # Mask potential passwords, tokens, and keys (GNU sed: I = case-insensitive)
    message=$(echo "$message" | sed -E 's/password=[^[:space:]]*/password=REDACTED/gI')
    message=$(echo "$message" | sed -E 's/token=[^[:space:]]*/token=REDACTED/gI')
    message=$(echo "$message" | sed -E 's/key=[^[:space:]]*/key=REDACTED/gI')
    echo "$message"
}

# Secure logging function
secure_log() {
    local level="$1"
    local message="$2"
    local sanitized_message
    sanitized_message=$(sanitize_log "$message")
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $sanitized_message" | tee -a secure.log
}

# Set appropriate permissions on log files
chmod 640 /var/log/application.log
chown app:log /var/log/application.log
```
### 6. Performance Optimization
Optimize logging for high-volume scenarios:
```bash
# Batch writes for high-frequency events
LOG_BUFFER=()
BUFFER_SIZE=100

buffered_log() {
    local message="$1"
    LOG_BUFFER+=("$message")
    if [ ${#LOG_BUFFER[@]} -ge "$BUFFER_SIZE" ]; then
        flush_log_buffer
    fi
}

flush_log_buffer() {
    if [ ${#LOG_BUFFER[@]} -gt 0 ]; then
        printf '%s\n' "${LOG_BUFFER[@]}" >> /var/log/buffered.log
        LOG_BUFFER=()
    fi
}

# Ensure the buffer is flushed on exit
trap flush_log_buffer EXIT
```
### 7. Cross-Platform Compatibility
Write portable logging solutions:
```bash
# Detect the operating system
detect_os() {
    case "$OSTYPE" in
        linux*)   echo "linux" ;;
        darwin*)  echo "macos" ;;
        freebsd*) echo "freebsd" ;;
        *)        echo "unknown" ;;
    esac
}

# OS-specific log paths
get_log_path() {
    local os=$(detect_os)
    case "$os" in
        linux|macos|freebsd) echo "/var/log" ;;
        *)                   echo "$HOME/logs" ;;
    esac
}

# Portable date command
portable_date() {
    if command -v gdate >/dev/null 2>&1; then
        gdate "$@" # GNU date (e.g. from coreutils on macOS)
    else
        date "$@" # System date
    fi
}
```
## Conclusion
Logging script output in Linux is a critical skill that enhances debugging capabilities, system monitoring, and operational visibility. This comprehensive guide has covered various approaches from basic redirection to advanced logging frameworks, each suited for different scenarios and requirements.
### Key Takeaways
1. Start Simple: Basic redirection with `>` and `>>` operators covers many use cases
2. Use tee for Visibility: The `tee` command allows simultaneous display and logging
3. Handle Both Streams: Always consider both stdout and stderr in your logging strategy
4. Add Timestamps: Timestamped logs are essential for debugging and analysis
5. Implement Standards: Consistent logging formats improve maintainability
6. Consider Security: Sanitize sensitive information and set appropriate permissions
7. Plan for Scale: Implement log rotation and monitoring for production environments
### Next Steps
To further enhance your logging capabilities:
1. Explore system logging with `rsyslog` and `journald`
2. Implement centralized logging with tools like ELK stack or Splunk
3. Learn about log analysis and monitoring tools
4. Study log management best practices for compliance requirements
5. Investigate container logging strategies for Docker and Kubernetes environments
### Final Recommendations
- Always test your logging setup before deploying to production
- Document your logging standards and share them with your team
- Regularly review and clean up old log files to manage disk space
- Consider using configuration files for logging parameters
- Implement monitoring and alerting based on log content
By mastering these logging techniques, you'll be well-equipped to handle script output management in any Linux environment, from simple personal scripts to complex enterprise applications. Remember that good logging practices are an investment in system reliability and operational efficiency.