lc ps

List LocalCloud containers and their current status. By default only running containers are shown; use --all to include stopped containers. Each entry carries detailed information about the service it runs.

Usage

lc ps [flags]

Flags

Flag         Description                                Default
--all, -a    Show all containers (including stopped)    false

Output Format

The command displays containers in a table with the following columns (see the parsing sketch after this list):
  • Container ID: First 12 characters of the container ID
  • Name: Service name (e.g., localcloud-ollama-1)
  • Image: Docker image with version/model info
  • Status: Current status with color coding
  • Ports: Port mappings (host:container)
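
Because the output is plain text, it pipes cleanly into standard tools. A minimal parsing sketch, assuming the column layout and the localcloud- name prefix shown in the samples below:

# Print each container's name and status; data rows are matched by the
# localcloud- name prefix, which also skips the header lines
lc ps | awk '$2 ~ /^localcloud-/ {print $2, $4}'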

Examples

List Running Containers

lc ps
Sample Output:
LocalCloud Containers


CONTAINER ID   NAME                    IMAGE                          STATUS      PORTS
a1b2c3d4e5f6   localcloud-ollama-1     ollama/ollama:latest          Running     11434:11434
7g8h9i0j1k2l   localcloud-postgres-1   postgres:16-alpine            Running     5432:5432
3m4n5o6p7q8r   localcloud-redis-1      redis:7-alpine                Running     6379:6379
9s0t1u2v3w4x   localcloud-minio-1      minio/minio:latest            Running     9000:9000, 9001:9001

Running: 4 containers

List All Containers (Including Stopped)

lc ps --all
Sample Output:
LocalCloud Containers


CONTAINER ID   NAME                    IMAGE                          STATUS      PORTS
a1b2c3d4e5f6   localcloud-ollama-1     ollama/ollama:latest          Running     11434:11434
7g8h9i0j1k2l   localcloud-postgres-1   postgres:16-alpine            Running     5432:5432
3m4n5o6p7q8r   localcloud-redis-1      redis:7-alpine                Running     6379:6379
9s0t1u2v3w4x   localcloud-minio-1      minio/minio:latest            Running     9000:9000, 9001:9001
5y6z7a8b9c0d   localcloud-mongodb-1    mongo:7.0                     Exited      -

Running: 4 containers, Stopped: 1 container

Status with Model Information

lc ps
Output with AI Models:
LocalCloud Containers


CONTAINER ID   NAME                    IMAGE                          STATUS      PORTS
a1b2c3d4e5f6   localcloud-ollama-1     ollama/ollama:latest          Running     11434:11434
                                        Models: llama3.2:3b (active), mistral:7b
7g8h9i0j1k2l   localcloud-postgres-1   postgres:16-alpine            Running     5432:5432
                                        Extensions: pgvector, pg_trgm
3m4n5o6p7q8r   localcloud-redis-1      redis:7-alpine                Running     6379:6379
                                        Memory: 256MB, Persistence: enabled
9s0t1u2v3w4x   localcloud-minio-1      minio/minio:latest            Running     9000:9000, 9001:9001
                                        Buckets: 3, Total size: 1.2GB

Running: 4 containers
Memory usage: 3.2GB / 16GB available

Status Indicators

Status Colors

  • Running: Container is active and healthy
  • Exited: Container has stopped
  • Starting: Container is starting up
  • Restarting: Container is restarting
  • Created: Container created but not started
  • Paused: Container is paused

Container Health

lc ps
Output with Health Status:
CONTAINER ID   NAME                    IMAGE                STATUS           PORTS
a1b2c3d4e5f6   localcloud-ollama-1     ollama/ollama       Running (healthy) 11434:11434
7g8h9i0j1k2l   localcloud-postgres-1   postgres:16         Running (healthy) 5432:5432
3m4n5o6p7q8r   localcloud-redis-1      redis:7             Running           6379:6379
9s0t1u2v3w4x   localcloud-minio-1      minio/minio         Starting          9000:9000
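
The health states shown here come from Docker healthchecks, so they can also be read directly with docker inspect. A minimal sketch (the container name is taken from the examples above; services whose image defines no healthcheck report "none"):

# Query the health state of a single container; prints "none" if the
# image defines no healthcheck
docker inspect \
  --format '{{if .State.Health}}{{.State.Health.Status}}{{else}}none{{end}}' \
  localcloud-postgres-1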

Container Information

Service-Specific Details

AI Service (Ollama)

  • Active Models: Currently loaded models
  • Memory Usage: RAM consumed by models
  • API Status: Service availability

Database (PostgreSQL)

  • Extensions: Installed extensions (pgvector, pg_trgm)
  • Connections: Active database connections
  • Data Size: Database size on disk

MongoDB

  • Databases: Number of databases
  • Collections: Total collections
  • Replica Set: Replication status

Cache (Redis)

  • Memory Usage: Current memory consumption
  • Persistence: RDB/AOF status
  • Connected Clients: Active connections

Storage (MinIO)

  • Buckets: Number of buckets
  • Objects: Total object count
  • Storage Used: Disk usage
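
Most of these details can also be queried from the services themselves. The commands below are an illustrative sketch: they assume the default ports shown earlier and a postgres superuser inside the PostgreSQL container, both of which may differ in your setup.

# Ollama: list the models available to the AI service (default port 11434)
curl -s http://localhost:11434/api/tags

# PostgreSQL: list installed extensions (the postgres superuser is an assumption)
docker exec localcloud-postgres-1 psql -U postgres -c '\dx'

# Redis: report current memory usage
docker exec localcloud-redis-1 redis-cli INFO memory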

Detailed Container Inspection

Get Container Details

# Get detailed info about specific container
docker inspect localcloud-ollama-1

# View container logs
lc logs ollama

# Check resource usage
docker stats localcloud-ollama-1

Container Resource Usage

lc ps
Output with Resource Information:
LocalCloud Containers (Resource View)


NAME                    CPU%    MEMORY          NETWORK I/O       DISK I/O
localcloud-ollama-1     15.2%   2.1GB / 4GB     1.2MB / 890KB     45MB / 12MB
localcloud-postgres-1   2.1%    512MB / 1GB     234KB / 567KB     12MB / 89MB
localcloud-redis-1      0.8%    128MB / 256MB   45KB / 23KB       1MB / 2MB
localcloud-minio-1      1.2%    256MB / 512MB   789KB / 234KB     23MB / 45MB

Total CPU: 19.3%
Total Memory: 3.0GB / 5.9GB allocated
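
The same figures can be pulled straight from Docker when you want them outside of lc. A minimal sketch using docker stats, assuming the localcloud- container name prefix shown above:

# One-shot resource snapshot for LocalCloud containers; if no names match,
# docker stats falls back to showing every running container
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}" \
  $(docker ps --filter "name=localcloud-" --format '{{.Names}}')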

Container Management

Start Stopped Containers

# Start specific service
lc start mongodb

# Start all services
lc start

Stop Running Containers

# Stop specific service
lc stop ollama

# Stop all services
lc stop

Restart Containers

# Restart specific service
lc restart postgres

# Restart all services
lc restart

Filtering and Sorting

Filter by Status

# Show only running containers
lc ps

# Show only stopped containers
lc ps -a | grep Exited

# Show containers with issues
lc ps -a | grep -E "(Exited|Error|Unhealthy)"

Service-Specific Views

# View AI service containers
lc ps | grep ollama

# View database containers
lc ps | grep -E "(postgres|mongodb)"

# View infrastructure containers
lc ps | grep -E "(redis|minio)"

Common Patterns

Health Check Workflow

# 1. Check container status
lc ps

# 2. Identify issues
lc ps -a | grep -v Running

# 3. Check logs for problems
lc logs <service-name>

# 4. Restart if needed
lc restart <service-name>

# 5. Verify fix
lc ps

Resource Monitoring

# Check current resource usage
lc ps

# Monitor over time
watch -n 5 'lc ps'

# Check if containers are using too much memory
lc ps | grep -E "([0-9]+\.?[0-9]*GB)"

Troubleshooting Workflow

# 1. List all containers
lc ps -a

# 2. Find stopped containers
lc ps -a | grep Exited

# 3. Check exit codes
docker ps -a --filter "label=com.localcloud.service"

# 4. Investigate logs
lc logs <failed-service>

# 5. Restart if possible
lc start <failed-service>

Output Variations

Minimal Output

lc ps --quiet
Output:
localcloud-ollama-1
localcloud-postgres-1
localcloud-redis-1
localcloud-minio-1

Detailed Output

lc ps --verbose
Output:
LocalCloud Containers (Detailed View)


Container: localcloud-ollama-1 (a1b2c3d4e5f6)
• Image: ollama/ollama:latest
• Status: Running (Started 2 hours ago)
• Ports: 0.0.0.0:11434->11434/tcp
• Health: Healthy (last check: 30s ago)
• Models: llama3.2:3b (active), mistral:7b
• Memory: 2.1GB / 4GB limit

Container: localcloud-postgres-1 (7g8h9i0j1k2l)
• Image: postgres:16-alpine
• Status: Running (Started 2 hours ago)
• Ports: 0.0.0.0:5432->5432/tcp
• Health: Healthy (last check: 30s ago)
• Database: localcloud (34MB)
• Memory: 512MB / 1GB limit

Integration with Other Commands

Combined Workflows

# Check status and start missing services
lc ps -a
lc start

# Monitor logs of running containers
lc ps --quiet | xargs -I {} lc logs {}

# Restart unhealthy containers
lc ps | grep Unhealthy | awk '{print $2}' | xargs lc restart

Automation Scripts

#!/bin/bash
# Health check script

echo "Checking LocalCloud container health..."

# Get container count
RUNNING=$(lc ps --quiet | wc -l)
EXPECTED=4

if [ "$RUNNING" -lt "$EXPECTED" ]; then
    echo "�  Only $RUNNING/$EXPECTED containers running"
    lc ps -a
    exit 1
else
    echo " All $RUNNING containers running"
    exit 0
fi

Troubleshooting

No Containers Shown

# Check if Docker is running
docker version

# Check if in correct directory
lc status

# Check if services were started
lc start

Container Status Issues

Error: Container exited with code 1
Solutions:
  1. Check logs: lc logs <service-name>
  2. Check configuration: lc config validate
  3. Restart service: lc restart <service-name>
  4. Check resources: lc doctor
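
When a container keeps exiting, the Docker-level exit status usually narrows the problem down. A small sketch that reuses the com.localcloud.service label from the troubleshooting workflow above:

# Show the last known status (including exit code) of stopped LocalCloud containers
docker ps -a --filter "label=com.localcloud.service" --filter "status=exited" \
  --format '{{.Names}}: {{.Status}}'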

Permission Errors

Error: Cannot connect to Docker daemon
Solutions:
  1. Start Docker service
  2. Add user to docker group: sudo usermod -aG docker $USER
  3. Use sudo: sudo lc ps
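
To tell whether the problem is the Docker daemon itself or your user's permissions, a quick check with standard Docker commands (a sketch, not part of lc):

# Succeeds only if the daemon is running and your user may talk to it
if docker info > /dev/null 2>&1; then
    echo "Docker is reachable"
else
    echo "Cannot reach Docker: check that the daemon is running and that"
    echo "your user is in the docker group (log out and back in after adding)"
fi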