# lc status
Display the current status, health, and resource usage of all LocalCloud services.

## Usage

`lc status [flags]`

## Flags
| Flag | Short | Description | Default |
|------|-------|-------------|---------|
| `--details` | `-d` | Show detailed metrics | `false` |
| `--watch` | `-w` | Continuously monitor status | `false` |
| `--interval` | `-i` | Update interval in seconds (with `--watch`) | `5` |
## Examples

### Basic status

### Detailed metrics

### Continuous monitoring

### Custom interval
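The command block for this example appears to have been stripped during extraction; it was presumably just:

```shell
lc status
```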
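Presumably, given the flag table above:

```shell
lc status --details
```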
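Presumably:

```shell
lc status --watch
```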
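Presumably something like the following (the 10-second value is only illustrative):

```shell
lc status --watch --interval 10
```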
## Output Format

### Basic Output

### Status Indicators
| Symbol | Status | Description |
|--------|--------|-------------|
| ● | running | Service is running normally |
| ◐ | starting | Service is starting up |
| ◑ | stopping | Service is shutting down |
| ○ | stopped | Service is stopped |
### Health Indicators
| Symbol | Health | Description |
|--------|--------|-------------|
| ✓ | healthy | Service passed health checks |
| ✗ | unhealthy | Service failed health checks |
| … | starting | Health check in progress |
## Detailed Metrics

When using `--details`, additional information is shown:
### Resource Metrics

#### CPU Usage

- Percentage of allocated CPU
- Number of online CPUs
- System vs. user CPU time (detailed view)

#### Memory Usage

- Working set (active memory)
- Cache memory
- Total usage vs. limit
- Percentage utilization

#### Network I/O

- Bytes received/transmitted
- Packet counts
- Dropped packets (detailed view)

#### Disk I/O

- Bytes read/written
- I/O operations count
- Queue depth (detailed view)
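These figures come from the Docker stats API (see Performance Impact below). As an illustrative sketch of how the two headline percentages are conventionally derived from two consecutive stats samples (the deltas and limits below are made-up numbers, not real output):

```shell
# CPU %: container CPU-time delta over system CPU-time delta,
# scaled by the number of online CPUs (sample values are illustrative).
cpu_delta=2000000        # container CPU ns used since the previous sample
system_delta=100000000   # total system CPU ns over the same window
online_cpus=4
cpu_pct=$(awk -v c="$cpu_delta" -v s="$system_delta" -v n="$online_cpus" \
  'BEGIN { printf "%.1f", c / s * n * 100 }')
echo "CPU: ${cpu_pct}%"      # CPU: 8.0%

# Memory %: usage against the configured limit.
mem_usage=268435456      # 256 MiB
mem_limit=1073741824     # 1 GiB
mem_pct=$(awk -v u="$mem_usage" -v l="$mem_limit" \
  'BEGIN { printf "%.1f", u / l * 100 }')
echo "Memory: ${mem_pct}%"   # Memory: 25.0%
```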
### Service-Specific Metrics

#### Ollama (AI)

- Model load time
- Inference speed (tokens/second)
- Number of active models
- GPU utilization (if available)

#### PostgreSQL

- Active connections
- Transaction rate
- Query performance
- Cache hit ratio
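These PostgreSQL figures correspond to standard `pg_stat` views. Assuming the database runs in a container (the container name `localcloud-postgres` and user `postgres` are guesses, not confirmed by this page), two of them can be sampled directly:

```shell
# Active connections: pg_stat_activity has one row per backend.
docker exec localcloud-postgres psql -U postgres -tA \
  -c "SELECT count(*) FROM pg_stat_activity;"

# Cache hit ratio: shared-buffer hits over total block reads.
docker exec localcloud-postgres psql -U postgres -tA \
  -c "SELECT round(sum(blks_hit)::numeric / nullif(sum(blks_hit) + sum(blks_read), 0), 2) FROM pg_stat_database;"
```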
#### Redis

- Operations per second
- Memory fragmentation
- Connected clients
- Hit/miss ratio
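The Redis hit/miss ratio is conventionally computed from the `keyspace_hits` and `keyspace_misses` counters reported by `redis-cli INFO stats`; the arithmetic, with made-up sample counter values, is just:

```shell
# Counter values as they might appear in INFO stats (illustrative).
keyspace_hits=9500
keyspace_misses=500
hit_ratio=$(awk -v h="$keyspace_hits" -v m="$keyspace_misses" \
  'BEGIN { printf "%.2f", h / (h + m) }')
echo "hit ratio: $hit_ratio"   # hit ratio: 0.95
```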
#### MinIO

- Object count
- Total storage used
- API request rate
- Bandwidth usage
## Continuous Monitoring

The `--watch` mode provides real-time monitoring:
- Auto-refreshing display
- Clear screen between updates
- Interrupt handling (Ctrl+C)
- Resource trend indicators
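The same behavior can be approximated with a plain shell loop (assuming `lc` is on your `PATH`), which is handy in environments where `--watch` isn't available:

```shell
# Refresh the status display every 5 seconds until interrupted (Ctrl+C).
while true; do
  clear
  lc status
  sleep 5
done
```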
## Output Formats

### Human-Readable (Default)

Formatted tables with colors and symbols.

### JSON Format
## Error States

- Docker not running
- No project found
- No services running
## Performance Impact

The status command has minimal performance impact:

- Uses the Docker stats API
- Samples metrics every second
- Caches results between updates
- Negligible impact on running services
## Troubleshooting

### Metrics not available

Some metrics may show `-` if:

- The service just started
- Docker stats are not ready yet
- The container is restarting
### High resource usage

If you see high resource usage:

- Check the logs with `lc logs [service]`
- Review model sizes
- Check for memory leaks
- Consider setting resource limits