How to Check Script Execution Time in Linux
Measuring script execution time is a fundamental skill for Linux system administrators, developers, and performance engineers. Whether you're optimizing system performance, debugging slow scripts, or monitoring automated processes, understanding how to accurately measure execution time is crucial for maintaining efficient systems and identifying bottlenecks.
This comprehensive guide will walk you through various methods to check script execution time in Linux, from simple command-line tools to advanced timing techniques. You'll learn multiple approaches suitable for different scenarios, from quick one-time measurements to continuous monitoring and detailed performance analysis.
Prerequisites and Requirements
Before diving into the various timing methods, ensure you have the following:
System Requirements
- A Linux distribution (Ubuntu, CentOS, RHEL, Debian, or similar)
- Terminal access with basic user privileges
- Basic understanding of command-line operations
- Text editor for creating and modifying scripts
Knowledge Prerequisites
- Familiarity with Linux command-line interface
- Basic understanding of shell scripting
- Knowledge of file permissions and execution
- Understanding of process management concepts
Tools and Commands
Most timing tools are pre-installed on standard Linux distributions, including:
- `time` command (external and built-in versions)
- `date` command
- Shell built-in timing features
- System monitoring utilities
Understanding Time Measurement in Linux
Linux provides several mechanisms for measuring execution time, each with specific use cases and levels of detail:
Types of Time Measurements
Real Time (Wall Clock Time): The actual elapsed time from start to finish, including time spent waiting for I/O operations, network responses, and system resource availability.
User Time: CPU time spent executing the process in user mode, excluding system calls and kernel operations.
System Time: CPU time spent in kernel mode on behalf of the process, including system calls and kernel operations.
Understanding these distinctions helps you choose the appropriate measurement method and interpret results accurately.
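A quick way to see the distinction is to time a command that mostly waits: sleeping consumes wall-clock time but almost no CPU.

```shell
# Timing a blocking command: real time accumulates, CPU time does not
time sleep 2
# real is about 2 seconds, while user and sys stay near zero because
# the process spent the interval blocked rather than executing code
```

A CPU-bound command shows the opposite pattern, with user time close to real time.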
Method 1: Using the External Time Command
The external `time` command is one of the most comprehensive tools for measuring script execution time in Linux.
Basic Usage
```bash
/usr/bin/time ./your_script.sh
```
Detailed Output Format
For more detailed information, use the verbose flag:
```bash
/usr/bin/time -v ./your_script.sh
```
This provides extensive statistics including:
- Elapsed real time
- User and system CPU time
- Memory usage statistics
- I/O operations count
- Context switches
- Page faults
Custom Output Format
Create custom time formats using format specifiers:
```bash
/usr/bin/time -f "Time: %E | CPU: %P | Memory: %M KB" ./your_script.sh
```
Common format specifiers include:
- `%E`: Elapsed real time (wall clock)
- `%U`: User CPU time
- `%S`: System CPU time
- `%P`: CPU percentage
- `%M`: Maximum resident set size (memory)
- `%I`: File system inputs
- `%O`: File system outputs
Example: Timing a File Processing Script
```bash
#!/bin/bash
# sample_script.sh - Process large files
echo "Starting file processing..."
find /var/log -name "*.log" -exec wc -l {} \; > /tmp/log_counts.txt
echo "Processing complete"
```
Time this script with detailed output:
```bash
/usr/bin/time -f "Execution Time: %E\nCPU Usage: %P\nMax Memory: %M KB\nFile Inputs: %I\nFile Outputs: %O" ./sample_script.sh
```
Method 2: Using Built-in Shell Time Command
Most shells include a built-in `time` command with a simpler output format.
Bash Built-in Time
```bash
time ./your_script.sh
```
The built-in version typically shows:
- Real time (elapsed wall clock time)
- User time (CPU time in user mode)
- System time (CPU time in system mode)
Customizing Built-in Time Format
Set the `TIMEFORMAT` variable to customize output:
```bash
TIMEFORMAT="Script completed in %R seconds (User: %U, System: %S)"
time ./your_script.sh
```
Format specifiers for `TIMEFORMAT`:
- `%R`: Real time
- `%U`: User time
- `%S`: System time
- `%P`: CPU percentage
- `%%`: Literal percent sign
Example: Database Backup Script Timing
```bash
#!/bin/bash
# backup_script.sh
TIMEFORMAT="Backup completed in %R seconds (CPU: %P%%)"
time {
    echo "Starting database backup..."
    mysqldump -u user -p database > backup_$(date +%Y%m%d).sql
    echo "Backup finished"
}
```
Method 3: Manual Timing with Date Command
For scripts that need internal timing or when you want to measure specific sections, use the `date` command.
Basic Date-based Timing
```bash
#!/bin/bash
start_time=$(date +%s)
# Your script operations here
sleep 5 # Example operation
end_time=$(date +%s)
execution_time=$((end_time - start_time))
echo "Script execution time: $execution_time seconds"
```
High-precision Timing with Nanoseconds
For more precise measurements:
```bash
#!/bin/bash
start_time=$(date +%s.%N)
# Your operations here
end_time=$(date +%s.%N)
execution_time=$(echo "$end_time - $start_time" | bc)
echo "Execution time: $execution_time seconds"
```
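If your script runs under Bash 5 or later, the `$EPOCHREALTIME` variable offers the same sub-second precision without spawning `date`, and `awk` can replace `bc` for the subtraction. A minimal sketch:

```shell
#!/bin/bash
# Requires Bash >= 5, which provides EPOCHREALTIME (seconds.microseconds)
start_time=$EPOCHREALTIME
sleep 1 # Your operations here
end_time=$EPOCHREALTIME
# awk performs the floating-point subtraction, so bc is not needed
execution_time=$(awk -v s="$start_time" -v e="$end_time" 'BEGIN { printf "%.6f", e - s }')
echo "Execution time: $execution_time seconds"
```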
Advanced Date-based Timing Function
Create a reusable timing function:
```bash
#!/bin/bash
# Measure the execution time of any command passed as arguments
measure_time() {
    local start_time=$(date +%s.%N)
    "$@" # Execute the passed command
    local end_time=$(date +%s.%N)
    local execution_time=$(echo "$end_time - $start_time" | bc)
    echo "Command '$*' executed in $execution_time seconds"
}

# Usage examples
measure_time ls -la /var/log
measure_time find /home -name "*.txt"
measure_time ./your_custom_script.sh
```
Method 4: Using Hyperfine for Benchmarking
Hyperfine is a modern command-line benchmarking tool that provides statistical analysis of execution times.
Installation
```bash
# Ubuntu/Debian
sudo apt install hyperfine

# CentOS/RHEL (with EPEL)
sudo yum install hyperfine

# From source
wget https://github.com/sharkdp/hyperfine/releases/download/v1.16.1/hyperfine-v1.16.1-x86_64-unknown-linux-gnu.tar.gz
tar -xzf hyperfine-v1.16.1-x86_64-unknown-linux-gnu.tar.gz
sudo cp hyperfine-v1.16.1-x86_64-unknown-linux-gnu/hyperfine /usr/local/bin/
```
Basic Hyperfine Usage
```bash
hyperfine './your_script.sh'
```
Advanced Benchmarking
Compare multiple scripts or different parameters:
```bash
hyperfine --warmup 3 --runs 10 './script1.sh' './script2.sh' './script3.sh'
```
Export results to various formats:
```bash
hyperfine --export-json results.json --export-csv results.csv './your_script.sh'
```
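The JSON export contains a `results` array with `command`, `mean`, and `stddev` fields (among others), all in seconds, which makes post-processing straightforward. For example, a short python3 snippet can summarize each benchmarked command (assuming at least two runs, so `stddev` is populated):

```shell
python3 - results.json <<'EOF'
import json, sys

with open(sys.argv[1]) as f:
    data = json.load(f)
# Each entry describes one benchmarked command; times are in seconds
for r in data["results"]:
    print(f'{r["command"]}: mean {r["mean"]:.3f}s, stddev {r["stddev"]:.3f}s')
EOF
```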
Example: Comparing Different Sorting Methods
```bash
# Compare different sorting approaches on the same shuffled input
hyperfine --prepare 'shuf -i 1-10000 > numbers.txt' \
'sort -n numbers.txt > sorted1.txt' \
'sort -g numbers.txt > sorted2.txt' \
--cleanup 'rm numbers.txt sorted*.txt'
```
Method 5: System Monitoring During Execution
For comprehensive performance analysis, monitor system resources during script execution.
Using htop with Script Execution
```bash
# Terminal 1: start monitoring
htop

# Terminal 2: run your script
./your_script.sh
```
Automated Resource Monitoring
Create a monitoring script that tracks resources:
```bash
#!/bin/bash
# monitor_script.sh
SCRIPT_TO_MONITOR="$1"
MONITOR_INTERVAL=1
OUTPUT_FILE="performance_$(date +%Y%m%d_%H%M%S).log"
echo "Monitoring script: $SCRIPT_TO_MONITOR" > "$OUTPUT_FILE"
echo "Timestamp,CPU%,Memory%,Load" >> "$OUTPUT_FILE"
# Start the target script in the background
"$SCRIPT_TO_MONITOR" &
TARGET_PID=$!

# Sample system resources while the script runs
while kill -0 "$TARGET_PID" 2>/dev/null; do
    TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
    CPU_USAGE=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1)
    MEMORY_USAGE=$(free | grep Mem | awk '{printf("%.2f", $3/$2 * 100.0)}')
    LOAD_AVG=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | tr -d ',')
    echo "$TIMESTAMP,$CPU_USAGE,$MEMORY_USAGE,$LOAD_AVG" >> "$OUTPUT_FILE"
    sleep "$MONITOR_INTERVAL"
done
echo "Monitoring complete. Results saved to $OUTPUT_FILE"
```
Method 6: Profiling with Perf
For detailed performance profiling, use the `perf` tool (part of linux-tools package).
Installation
```bash
# Ubuntu/Debian
sudo apt install linux-tools-common linux-tools-generic

# CentOS/RHEL
sudo yum install perf
```
Basic Perf Usage
```bash
perf stat ./your_script.sh
```
Detailed Profiling
```bash
perf record -g ./your_script.sh
perf report
```
Custom Performance Counters
```bash
perf stat -e cycles,instructions,cache-references,cache-misses ./your_script.sh
```
Practical Examples and Use Cases
Example 1: Web Server Log Analysis Script
```bash
#!/bin/bash
# log_analyzer.sh - Analyze web server logs
echo "Web Server Log Analysis Starting..."
TIMEFORMAT="Log analysis completed in %R seconds"
time {
    # Count unique IP addresses
    echo "Counting unique visitors..."
    UNIQUE_IPS=$(awk '{print $1}' /var/log/apache2/access.log | sort -u | wc -l)

    # Find most requested pages
    echo "Finding popular pages..."
    POPULAR_PAGES=$(awk '{print $7}' /var/log/apache2/access.log | sort | uniq -c | sort -nr | head -10)

    # Calculate bandwidth usage
    echo "Calculating bandwidth..."
    TOTAL_BYTES=$(awk '{sum += $10} END {print sum}' /var/log/apache2/access.log)

    echo "Analysis Results:"
    echo "Unique visitors: $UNIQUE_IPS"
    echo "Total bandwidth: $((TOTAL_BYTES / 1024 / 1024)) MB"
    echo "Top pages:"
    echo "$POPULAR_PAGES"
}
```
Example 2: Database Maintenance Script with Detailed Timing
```bash
#!/bin/bash
# db_maintenance.sh - Database maintenance with timing
log_with_time() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"
}

measure_operation() {
    local operation_name="$1"
    shift
    local start_time=$(date +%s.%N)
    log_with_time "Starting: $operation_name"
    "$@"
    local exit_code=$?
    local end_time=$(date +%s.%N)
    local duration=$(echo "$end_time - $start_time" | bc)
    if [ $exit_code -eq 0 ]; then
        log_with_time "Completed: $operation_name (${duration}s)"
    else
        log_with_time "Failed: $operation_name (${duration}s) - Exit code: $exit_code"
    fi
    return $exit_code
}
# Main maintenance operations
SCRIPT_START=$(date +%s.%N)
# Wrap the redirection in bash -c so it applies to mysqldump's output,
# not to measure_operation's log messages
measure_operation "Database backup" bash -c 'mysqldump -u root -p mydb > backup.sql'
measure_operation "Index optimization" mysql -u root -p -e "OPTIMIZE TABLE mydb.users;"
measure_operation "Log cleanup" find /var/log/mysql -name "*.log" -mtime +7 -delete
measure_operation "Statistics update" mysql -u root -p -e "ANALYZE TABLE mydb.users;"
SCRIPT_END=$(date +%s.%N)
TOTAL_TIME=$(echo "$SCRIPT_END - $SCRIPT_START" | bc)
log_with_time "All maintenance operations completed in ${TOTAL_TIME}s"
```
Example 3: System Health Check with Performance Metrics
```bash
#!/bin/bash
# system_health_check.sh - Comprehensive system check with timing

# Configuration
LOGFILE="/var/log/health_check_$(date +%Y%m%d).log"
THRESHOLD_CPU=80
THRESHOLD_MEMORY=90
THRESHOLD_DISK=85
# Timing function
timed_check() {
    local check_name="$1"
    local start_time=$(date +%s.%N)
    echo "[$check_name] Starting check..." | tee -a "$LOGFILE"
    # Execute the matching check function, e.g. cpu_check
    "${check_name}_check"
    local result=$?
    local end_time=$(date +%s.%N)
    local duration=$(echo "scale=3; $end_time - $start_time" | bc)
    echo "[$check_name] Check completed in ${duration}s" | tee -a "$LOGFILE"
    return $result
}
# Individual check functions
cpu_check() {
    local cpu_usage=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1 | cut -d',' -f1)
    echo "CPU Usage: ${cpu_usage}%" | tee -a "$LOGFILE"
    if (( $(echo "$cpu_usage > $THRESHOLD_CPU" | bc -l) )); then
        echo "WARNING: High CPU usage detected!" | tee -a "$LOGFILE"
        return 1
    fi
    return 0
}

memory_check() {
    local memory_usage=$(free | grep Mem | awk '{printf("%.1f", $3/$2 * 100.0)}')
    echo "Memory Usage: ${memory_usage}%" | tee -a "$LOGFILE"
    if (( $(echo "$memory_usage > $THRESHOLD_MEMORY" | bc -l) )); then
        echo "WARNING: High memory usage detected!" | tee -a "$LOGFILE"
        return 1
    fi
    return 0
}

disk_check() {
    local disk_usage=$(df / | awk 'NR==2 {print $5}' | cut -d'%' -f1)
    echo "Disk Usage: ${disk_usage}%" | tee -a "$LOGFILE"
    if [ "$disk_usage" -gt "$THRESHOLD_DISK" ]; then
        echo "WARNING: High disk usage detected!" | tee -a "$LOGFILE"
        return 1
    fi
    return 0
}
# Main execution with overall timing
OVERALL_START=$(date +%s.%N)
echo "System Health Check Started: $(date)" | tee -a "$LOGFILE"
# Run all checks
timed_check "cpu"
timed_check "memory"
timed_check "disk"
OVERALL_END=$(date +%s.%N)
TOTAL_DURATION=$(echo "scale=3; $OVERALL_END - $OVERALL_START" | bc)
echo "System Health Check Completed: $(date)" | tee -a "$LOGFILE"
echo "Total execution time: ${TOTAL_DURATION}s" | tee -a "$LOGFILE"
```
Common Issues and Troubleshooting
Issue 1: Time Command Not Found
Problem: Getting "command not found" when using `time`.
Solution:
```bash
# Use the full path to the external time binary
/usr/bin/time ./script.sh

# Or install it if missing
sudo apt install time # Ubuntu/Debian
sudo yum install time # CentOS/RHEL
```
Issue 2: Inaccurate Timing Due to System Load
Problem: Inconsistent execution times due to varying system load.
Solutions:
```bash
# Run multiple iterations and average
for i in {1..5}; do
    echo "Run $i:"
    time ./script.sh
done

# Use hyperfine for statistical analysis
hyperfine --runs 10 --warmup 2 './script.sh'

# Raise the process priority (a negative niceness requires root)
sudo nice -n -10 /usr/bin/time ./script.sh
```
Issue 3: Measuring Only Part of Script Execution
Problem: Need to time specific sections within a script.
Solution:
```bash
#!/bin/bash
# Internal timing for specific operations
time_operation() {
    local operation_name="$1"
    shift
    echo "Starting: $operation_name"
    time "$@"
    echo "Finished: $operation_name"
}
time_operation "File processing" find /var -name "*.log" -exec wc -l {} \;
time_operation "Database query" mysql -e "SELECT COUNT(*) FROM users;"
```
Issue 4: Memory Usage Not Captured
Problem: The shell's built-in `time` doesn't show memory usage details.
Solution:
```bash
# Use the external time with verbose output
/usr/bin/time -v ./script.sh

# Or create custom monitoring inside the script itself
#!/bin/bash
SCRIPT_PID=$$
(
    while kill -0 $SCRIPT_PID 2>/dev/null; do
        ps -o pid,ppid,cmd,%mem,%cpu -p $SCRIPT_PID
        sleep 1
    done
) &
MONITOR_PID=$!

# Your script operations here
sleep 10

kill $MONITOR_PID 2>/dev/null
```
Issue 5: Timing Network-dependent Scripts
Problem: Network latency affects timing consistency.
Solutions:
```bash
# Separate network time from processing time
network_start=$(date +%s.%N)
curl -s https://api.example.com/data > data.json
network_end=$(date +%s.%N)
network_time=$(echo "$network_end - $network_start" | bc)
processing_start=$(date +%s.%N)
process_data.sh data.json
processing_end=$(date +%s.%N)
processing_time=$(echo "$processing_end - $processing_start" | bc)
echo "Network time: ${network_time}s"
echo "Processing time: ${processing_time}s"
```
Best Practices and Tips
Performance Measurement Best Practices
1. Run Multiple Iterations: Always run scripts multiple times to account for system variations:
```bash
for i in {1..10}; do time ./script.sh; done
```
2. Use Appropriate Timing Granularity: Choose the right precision for your needs:
- Seconds: `date +%s` for long-running scripts
- Milliseconds: `date +%s%3N` for medium-duration operations
- Nanoseconds: `date +%s.%N` for precise measurements
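The three granularities can be compared directly; note that the `%N` specifier is a GNU `date` extension, not POSIX:

```shell
date +%s      # whole seconds since the epoch, e.g. 1718000000
date +%s%3N   # the same instant in milliseconds, e.g. 1718000000123
date +%s.%N   # nanosecond resolution, e.g. 1718000000.123456789
```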
3. Consider System State: Ensure consistent system conditions:
```bash
# Clear caches before timing (the write must happen as root, hence tee)
sudo sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
time ./script.sh
```
4. Document Timing Context: Include system information with timing results:
```bash
echo "System: $(uname -a)" > timing_results.txt
echo "CPU: $(lscpu | grep 'Model name')" >> timing_results.txt
echo "Memory: $(free -h | grep Mem)" >> timing_results.txt
echo "Load: $(uptime)" >> timing_results.txt
time ./script.sh 2>> timing_results.txt
```
Script Optimization Tips
1. Identify Bottlenecks: Use detailed timing to find slow sections:
```bash
#!/bin/bash
sections=("database_query" "file_processing" "network_calls")
for section in "${sections[@]}"; do
    echo "Timing: $section"
    time ${section}_function
done
```
2. Profile Before Optimizing: Establish baseline performance:
```bash
# Create performance baseline
hyperfine --export-json baseline.json './original_script.sh'
# After optimization
hyperfine --export-json optimized.json './optimized_script.sh'
```
3. Monitor Resource Usage: Track CPU, memory, and I/O:
```bash
/usr/bin/time -f "Time: %E | CPU: %P | Memory: %M KB | I/O: %I/%O" ./script.sh
```
Automation and Monitoring
1. Automated Performance Tracking:
```bash
#!/bin/bash
# performance_tracker.sh
SCRIPT="$1"
LOGFILE="performance_$(basename "$SCRIPT").log"
{
    echo "Date: $(date)"
    echo "Script: $SCRIPT"
    /usr/bin/time -f "Real: %E | User: %U | System: %S | CPU: %P | Memory: %M KB" "$SCRIPT"
    echo "---"
} >> "$LOGFILE" 2>&1   # time writes its report to stderr, so capture both streams
```
2. Performance Regression Detection:
```bash
#!/bin/bash
# Check if execution time exceeds threshold
THRESHOLD=60 # seconds
SCRIPT="./long_running_script.sh"
start_time=$(date +%s)
"$SCRIPT"
end_time=$(date +%s)
duration=$((end_time - start_time))
if [ $duration -gt $THRESHOLD ]; then
    echo "WARNING: Script execution took ${duration}s (threshold: ${THRESHOLD}s)" | mail -s "Performance Alert" admin@example.com
fi
```
Advanced Timing Techniques
1. Microsecond Precision Timing:
```bash
#!/bin/bash
# For operations requiring microsecond precision
start_time=$(python3 -c "import time; print(time.time())")
# Your operations here
end_time=$(python3 -c "import time; print(time.time())")
duration=$(python3 -c "print($end_time - $start_time)")
echo "Execution time: ${duration} seconds"
```
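Note that each separate `python3` invocation above pays tens of milliseconds of interpreter start-up, which swamps microsecond precision. Measuring inside a single python3 process with `time.perf_counter()` avoids that overhead; a sketch, where `sleep 0.1` stands in for your real command:

```shell
python3 - <<'EOF'
import subprocess
import time

start = time.perf_counter()        # monotonic, high-resolution clock
subprocess.run(["sleep", "0.1"])   # replace with the command to measure
elapsed = time.perf_counter() - start
print(f"Execution time: {elapsed:.6f} seconds")
EOF
```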
2. Conditional Timing:
```bash
#!/bin/bash
DEBUG_TIMING=${DEBUG_TIMING:-false}

# `time` is a shell keyword, so it cannot be invoked through a variable;
# branch explicitly instead
if [ "$DEBUG_TIMING" = "true" ]; then
    time ./your_script.sh
else
    ./your_script.sh
fi
```
Integration with System Monitoring
Log Integration
Integrate timing information with system logs:
```bash
#!/bin/bash
# log_integrated_timing.sh
log_timed_operation() {
    local operation="$1"
    shift
    local start_time=$(date +%s.%N)
    logger "Starting operation: $operation"
    "$@"
    local exit_code=$?
    local end_time=$(date +%s.%N)
    local duration=$(echo "scale=3; $end_time - $start_time" | bc)
    if [ $exit_code -eq 0 ]; then
        logger "Operation completed: $operation (${duration}s)"
    else
        logger "Operation failed: $operation (${duration}s) - Exit code: $exit_code"
    fi
    return $exit_code
}
# Usage
log_timed_operation "backup_database" mysqldump mydb > backup.sql
log_timed_operation "compress_logs" gzip /var/log/*.log
```
Metrics Collection
Send timing metrics to monitoring systems:
```bash
#!/bin/bash
# metrics_timing.sh
send_metric() {
    local metric_name="$1"
    local value="$2"
    local timestamp=$(date +%s)
    # Example: Send to StatsD
    echo "${metric_name}:${value}|ms" | nc -u -w1 localhost 8125
    # Example: Send to InfluxDB
    curl -XPOST 'http://localhost:8086/write?db=metrics' \
        --data-binary "${metric_name} value=${value} ${timestamp}000000000"
}
# Timed operation with metrics
timed_operation_with_metrics() {
    local operation_name="$1"
    shift
    local start_time=$(date +%s.%N)
    "$@"
    local exit_code=$?
    local end_time=$(date +%s.%N)
    # The trailing "/ 1" makes bc honor scale=0 and truncate to whole milliseconds
    local duration_ms=$(echo "scale=0; ($end_time - $start_time) * 1000 / 1" | bc)
    send_metric "script.${operation_name}.duration" "$duration_ms"
    send_metric "script.${operation_name}.exit_code" "$exit_code"
    return $exit_code
}
```
Conclusion
Measuring script execution time in Linux is essential for performance optimization, troubleshooting, and system monitoring. This comprehensive guide has covered multiple approaches, from simple command-line tools to advanced profiling techniques.
Key Takeaways
1. Choose the Right Tool: Use built-in `time` for quick measurements, external `/usr/bin/time` for detailed statistics, and specialized tools like `hyperfine` for benchmarking.
2. Consider Your Requirements: Different scenarios require different approaches - one-time measurements, continuous monitoring, or detailed profiling each have optimal solutions.
3. Account for Variables: System load, network conditions, and resource availability affect timing results. Use multiple runs and statistical analysis for accurate measurements.
4. Implement Best Practices: Document system conditions, use appropriate precision, and integrate timing with your monitoring infrastructure.
Next Steps
To further enhance your script timing capabilities:
1. Explore Advanced Profiling: Learn about `perf`, `strace`, and `ltrace` for detailed performance analysis
2. Implement Automated Monitoring: Set up continuous performance monitoring for critical scripts
3. Study Performance Optimization: Use timing data to identify and resolve performance bottlenecks
4. Integrate with CI/CD: Include performance regression testing in your deployment pipelines
Regular performance measurement and optimization ensure your Linux systems remain efficient and responsive. Start with the basic timing methods covered here, then gradually implement more sophisticated monitoring as your needs evolve.
Remember that consistent measurement and analysis are more valuable than perfect precision. Focus on establishing baselines, tracking trends, and identifying significant changes in script performance over time.