Services
LocalCloud provides a suite of integrated services that work together to create a complete local development environment. Each service runs in its own Docker container and is managed by the LocalCloud CLI.
Available Services
- AI Service (Ollama): Run language models locally with an OpenAI-compatible API
- PostgreSQL: Full-featured relational database with vector extensions
- MongoDB: NoSQL document database for flexible data storage
- Redis: High-performance caching and message queuing
- MinIO: S3-compatible object storage for files and media
Service Architecture
AI Service (Ollama)
Ollama enables you to run large language models locally with excellent performance.
Features
- OpenAI API Compatibility: Drop-in replacement for the OpenAI API (see the sketch after this list)
- Multiple Models: Support for Llama, Mistral, Qwen, and more
- GPU Acceleration: Automatic GPU detection and utilization
- Model Management: Easy model downloading and switching
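Because Ollama exposes an OpenAI-compatible endpoint at `/v1`, existing OpenAI client code can be pointed at it by changing only the base URL. A minimal sketch using the official `openai` Python package, assuming the `llama3` model has already been pulled:

```python
from openai import OpenAI

# Point the OpenAI client at the local Ollama endpoint.
# An API key is required by the client but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="llama3",  # any model previously pulled into Ollama
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```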
Default Configuration
Supported Models
Models are grouped as General Purpose, Code Models, and Small Models. The general-purpose group includes:
- llama2: Meta’s Llama 2 (7B)
- llama3: Meta’s Llama 3 (8B)
- mistral: Mistral AI’s model (7B)
- mixtral: Mistral’s MoE model (8x7B)
- qwen2.5: Alibaba’s Qwen models
API Endpoints
- Generate: POST http://localhost:11434/api/generate
- Chat: POST http://localhost:11434/api/chat
- Models: GET http://localhost:11434/api/tags
- Embeddings: POST http://localhost:11434/api/embeddings
PostgreSQL Database
Enterprise-grade relational database with powerful extensions for AI applications.
Features
- Version 16: Latest stable PostgreSQL
- Vector Storage: pgvector extension for embeddings
- Full-Text Search: Built-in FTS capabilities
- JSON Support: Native JSONB data type
- Extensions: pgvector, pg_trgm, and more
Default Configuration
Connection Details
Common Extensions
- pgvector: Store and query vector embeddings (see the sketch after this list)
- pg_trgm: Trigram-based text similarity
- uuid-ossp: UUID generation
- hstore: Key-value storage
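As an illustration of pgvector, the sketch below creates a small embeddings table and runs a nearest-neighbour query with `psycopg`. The connection string is an assumption; substitute the host, database, and credentials your LocalCloud instance reports:

```python
import psycopg  # psycopg 3

# Hypothetical DSN; replace with your instance's connection details.
DSN = "postgresql://postgres:postgres@localhost:5432/postgres"

with psycopg.connect(DSN) as conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
    cur.execute(
        """CREATE TABLE IF NOT EXISTS items (
               id serial PRIMARY KEY,
               embedding vector(3)
           )"""
    )
    cur.execute("INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]')")
    # Order by L2 distance to a query vector; <-> is pgvector's operator.
    cur.execute("SELECT id FROM items ORDER BY embedding <-> '[2,3,4]' LIMIT 1")
    print(cur.fetchone())
```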
Redis Cache
High-performance in-memory data store for caching and messaging.
Features
- Caching: Sub-millisecond response times
- Pub/Sub: Real-time messaging
- Queues: Job and task queuing
- Data Structures: Lists, sets, sorted sets, streams
Default Configuration
Use Cases
Response Caching
Cache AI model responses to improve performance:
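A minimal sketch with the `redis` Python package; the key scheme, TTL, and the `generate` callable are illustrative, not LocalCloud defaults:

```python
import hashlib
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def cached_generate(prompt: str, generate) -> str:
    """Return a cached AI response, calling `generate` only on a miss."""
    key = "ai:response:" + hashlib.sha256(prompt.encode()).hexdigest()
    cached = r.get(key)
    if cached is not None:
        return cached
    answer = generate(prompt)   # the expensive model call
    r.setex(key, 3600, answer)  # cache for one hour
    return answer
```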
Session Storage
Store user sessions and temporary data:
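Sessions map naturally onto Redis hashes with an expiry. A sketch, with illustrative key names and TTL:

```python
import uuid
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def create_session(user_id: str) -> str:
    sid = uuid.uuid4().hex
    r.hset(f"session:{sid}", mapping={"user_id": user_id, "theme": "dark"})
    r.expire(f"session:{sid}", 1800)  # session expires after 30 minutes
    return sid

sid = create_session("user-42")
print(r.hgetall(f"session:{sid}"))
```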
Job Queues
Implement background job processing:
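A simple work queue can be built from a Redis list: producers LPUSH jobs and workers BRPOP them. A sketch with an illustrative queue name:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Producer: enqueue a job as a JSON payload.
r.lpush("jobs:embed", json.dumps({"doc_id": 7, "action": "embed"}))

# Worker: block until a job arrives, then process it.
_queue, payload = r.brpop("jobs:embed")
job = json.loads(payload)
print(f"processing {job['action']} for document {job['doc_id']}")
```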
Real-time Updates
Pub/Sub for live updates:
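A sketch of Redis Pub/Sub; the channel name and message are illustrative:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Subscriber: listen for updates on a channel.
pubsub = r.pubsub()
pubsub.subscribe("updates:models")

# Publisher (normally another process): announce an event.
r.publish("updates:models", "llama3 download complete")

for message in pubsub.listen():
    if message["type"] == "message":
        print("received:", message["data"])
        break
```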
MinIO Storage
S3-compatible object storage for files, images, and documents.
Features
- S3 Compatible: Works with existing S3 SDKs
- Web Console: Visual file management
- Buckets: Organize files logically
- Versioning: Track file changes
Default Configuration
Access Points
- API Endpoint: http://localhost:9000
- Web Console: http://localhost:9001
- Default Credentials:
  - Access Key: minioadmin
  - Secret Key: minioadmin
SDK Examples
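A minimal sketch with the `minio` Python package, using the default credentials listed above; the bucket, object, and file names are illustrative:

```python
from minio import Minio

client = Minio(
    "localhost:9000",
    access_key="minioadmin",
    secret_key="minioadmin",
    secure=False,  # the local endpoint is plain HTTP
)

# Create a bucket on first use.
if not client.bucket_exists("uploads"):
    client.make_bucket("uploads")

# Upload a local file, then list the bucket's contents.
client.fput_object("uploads", "report.pdf", "./report.pdf")
for obj in client.list_objects("uploads"):
    print(obj.object_name, obj.size)
```

Because MinIO speaks the S3 protocol, the same endpoint and credentials also work with standard S3 SDKs such as boto3.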
Service Lifecycle
Starting Services
Services are started in dependency order:
- Network Creation: Docker network for inter-service communication
- Database: PostgreSQL starts first
- Cache: Redis starts next
- Storage: MinIO initializes
- AI: Ollama starts last (may download models)
Health Checks
Each service includes health checks.
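The exact checks live in the generated Docker configuration; as a rough illustration, each service can also be probed from the host, assuming the documented ports:

```python
import redis
import requests

def check_all() -> dict:
    """Rough liveness probes for each service, run from the host."""
    status = {}
    # Ollama answers /api/tags when it is ready.
    status["ollama"] = requests.get("http://localhost:11434/api/tags").ok
    # MinIO exposes a dedicated liveness endpoint.
    status["minio"] = requests.get("http://localhost:9000/minio/health/live").ok
    # Redis responds to PING.
    status["redis"] = redis.Redis(host="localhost", port=6379).ping()
    # PostgreSQL can be probed the same way by opening a client connection.
    return status

print(check_all())
```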
Data Persistence
All service data is persisted in Docker volumes:
- Ollama Models: localcloud_ollama_models
- PostgreSQL Data: localcloud_postgres_data
- Redis Data: localcloud_redis_data
- MinIO Data: localcloud_minio_data
Data persists across lc stop/start cycles. Use lc reset --hard to remove all data.
Service Communication
Services communicate over the Docker network:
- Internal DNS: Services can reference each other by name
- Port Mapping: Services are exposed to localhost
- Network Isolation: Services are isolated from external networks
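In practice, code running in another container on the same network addresses services by name, while code on the host uses the mapped localhost ports. A hedged sketch; the internal service names are assumptions, so verify them with `docker ps` or `docker network inspect` on your installation:

```python
import os

# A common heuristic: Docker creates /.dockerenv inside containers.
INSIDE_DOCKER = os.path.exists("/.dockerenv")

# Hypothetical internal hostnames; confirm the actual container names.
OLLAMA_URL = "http://ollama:11434" if INSIDE_DOCKER else "http://localhost:11434"
REDIS_HOST = "redis" if INSIDE_DOCKER else "localhost"
POSTGRES_HOST = "postgres" if INSIDE_DOCKER else "localhost"
```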