lc setup

Initialize and configure LocalCloud projects. Supports both interactive setup for humans and non-interactive setup perfect for AI coding assistants.

Usage

lc setup [project-name] [flags]

Arguments

  • project-name (optional) - Name for new project directory

Flags

| Flag | Description | Default |
| --- | --- | --- |
| `--add` | Components to add to existing project | |
| `--remove` | Components to remove from existing project | |
| `--components` | Components to configure (`llm,database,cache,storage`, etc.) | |
| `--models` | AI models to download (`llama3.2:3b,nomic-embed-text`) | |
| `--preset` | Preset configuration (`ai-dev`, `full-stack`, `minimal`) | |
| `-y, --yes` | Accept all defaults (non-interactive mode) | `false` |

Modes

1. Interactive Setup (Human Developers)

For human developers who want to choose components step by step:
# Setup in current directory
lc setup

# Create new project with wizard
lc setup my-ai-app
This launches an interactive wizard allowing you to:
  • Select project type (Chat Assistant, RAG System, Custom)
  • Choose components (AI, Database, Vector Search, Cache, Storage, etc.)
  • Pick AI models based on your hardware
  • Configure services
You’ll see a beautiful component selection interface:
? Select components you need: (Press <space> to select, <enter> to confirm)
❯ ◯ [AI] LLM (Text generation)
  ◯ [AI] Embeddings (Semantic search)
  ◯ [Database] Database (PostgreSQL)
  ◯ [Database] Vector Search (pgvector)
  ◯ [Infrastructure] Cache (Redis)
  ◯ [Infrastructure] Storage (MinIO)

2. Non-Interactive Setup (AI Assistants)

Perfect for AI coding assistants like Claude Code, Cursor, Gemini CLI:
# Quick presets
lc setup my-ai-app --preset=ai-dev --yes
lc setup my-app --preset=full-stack --yes
lc setup simple --preset=minimal --yes

# Custom configuration
lc setup my-app --components=llm,database,storage --models=llama3.2:3b --yes
Benefits for AI assistants:
  • ✅ No interactive prompts (no arrow keys/space bar needed)
  • ✅ Predictable command structure
  • ✅ Auto-generates CLAUDE.md guidance file
  • ✅ Clear preset options

Available Templates

| Template | Description | Services |
| --- | --- | --- |
| `chat` | ChatGPT-like interface with conversation history | Ollama, PostgreSQL, Redis |
| `code-assistant` | AI-powered code editor and assistant | Ollama, PostgreSQL, Redis, MinIO |
| `transcribe` | Audio/video transcription service | Whisper, PostgreSQL, MinIO |
| `image-gen` | AI image generation interface | Stable Diffusion, PostgreSQL, MinIO |
| `api-only` | REST API without frontend | Ollama, PostgreSQL, Redis |

Examples

Configure existing project

# In an initialized project
lc setup

Create chat application

lc setup chat
cd chat
# Project is ready with all services running!

Create API with custom settings

lc setup api-only \
  --name my-api \
  --port 8080 \
  --model mistral

Generate files without starting services

lc setup chat --skip-docker
# Manually start later with: lc start

Overwrite existing directory

lc setup chat --name existing-dir --force

Template Creation Process

When creating from a template, setup performs these steps:
  1. System Check
  • Verifies Docker is installed
  • Checks available RAM and disk space
  • Validates system requirements
  2. Interactive Configuration (if needed)
  • Project name (if not provided)
  • AI model selection
  • Port allocation
  3. File Generation
  • Creates project structure
  • Generates configuration files
  • Copies template files
  • Substitutes variables
  4. Service Startup (unless --skip-docker)
  • Pulls required Docker images
  • Starts all services
  • Waits for health checks
  5. Success Summary
  • Shows service URLs
  • Displays next steps
  • Provides example commands
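The system check in step 1 can be approximated by hand with ordinary tools. This is an illustration only, not LocalCloud's internal implementation:

```shell
# Manual approximation of the setup system check (illustrative only):
if command -v docker >/dev/null 2>&1; then
  echo "Docker installed: $(docker --version)"
else
  echo "Docker missing"
fi
# Available disk space in the current directory, in kilobytes
df -k . | awk 'NR==2 {print "free disk: " $4 " KB"}'
```

If either check fails, setup stops before generating any files, so running these by hand first can save a partial setup.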

Template Structure

Templates include:
template/
├── docker-compose.yml      # Service definitions
├── .env.template          # Environment variables
├── README.md              # Project documentation
├── backend/               # Backend code (if applicable)
├── frontend/              # Frontend code (if applicable)
└── config/                # Configuration files

Model Selection

During setup, you can choose from recommended models based on your hardware:

For Limited Resources (8GB RAM)

  • qwen2.5:3b - Fast and efficient
  • phi3 - Microsoft’s compact model
  • gemma2:2b - Google’s tiny model

For Standard Systems (16GB RAM)

  • llama2 - Meta’s popular model
  • mistral - Great for coding
  • codellama - Specialized for code

For High-End Systems (32GB+ RAM)

  • mixtral:8x7b - Powerful MoE model
  • llama3:70b - Large language model
  • qwen2.5:32b - Advanced reasoning

Port Management

LocalCloud automatically manages ports to avoid conflicts:
  1. Default Ports:
  • API: 3001
  • Frontend: 3000
  • AI: 11434
  • PostgreSQL: 5432
  • Redis: 6379
  • MinIO: 9000/9001
  2. Automatic Assignment: If a default is in use, finds the next available port
  3. Manual Override: Use flags to specify custom ports
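The automatic-assignment step can be sketched as a simple upward probe. This is an assumption about the strategy, not LocalCloud's actual code; `ss` is a standard Linux utility, not part of lc:

```shell
# Probe upward from the default until a port with no listener is found.
port=3000
while ss -ltn 2>/dev/null | grep -q ":$port "; do
  port=$((port + 1))
done
echo "next free port: $port"
```

The same idea applies to any of the default ports above: start from the default and walk upward until nothing is listening.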

Error Handling

No project found

Error: no LocalCloud project found. Run 'lc init' first
Solution: Initialize a project first with lc init

Template not found

Error: template 'unknown' not found
Solution: Use lc templates list to see available templates

Insufficient resources

Error: insufficient RAM: required 8GB, available 4GB
Solution: Choose a smaller model or free up system resources

Port already in use

Error: port 3000 is already in use
Solution: Use --port flag to specify a different port

Advanced Usage

Custom Template Variables

Templates can use these variables:
  • {{.ProjectName}} - Project name
  • {{.APIPort}} - API port number
  • {{.FrontendPort}} - Frontend port
  • {{.ModelName}} - Selected AI model
  • {{.DatabaseURL}} - PostgreSQL connection string
  • {{.RedisURL}} - Redis connection string
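As an illustration of how template files reference these variables before substitution, a template's docker-compose.yml might look like this (a hypothetical excerpt, not taken from an actual LocalCloud template):

```yaml
# Hypothetical template excerpt; {{.ProjectName}}, {{.APIPort}} and
# {{.DatabaseURL}} are replaced during the file-generation step.
services:
  api:
    container_name: {{.ProjectName}}-api
    ports:
      - "{{.APIPort}}:8080"
    environment:
      DATABASE_URL: {{.DatabaseURL}}
```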

Environment Configuration

Generated .env file includes:
PROJECT_NAME={{.ProjectName}}
API_PORT={{.APIPort}}
FRONTEND_PORT={{.FrontendPort}}
DATABASE_URL=postgresql://localcloud:localcloud@localhost:5432/localcloud
REDIS_URL=redis://localhost:6379
OLLAMA_URL=http://localhost:11434
AI_MODEL={{.ModelName}}
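To pick a single value out of the generated file from a script, plain shell tooling is enough. `grep`/`cut` here are ordinary utilities, not an lc feature, and the sample file is fabricated so the snippet is self-contained; in a real project the file is produced by `lc setup`:

```shell
# Write a minimal stand-in for the generated .env, then read one key.
printf 'OLLAMA_URL=http://localhost:11434\n' > /tmp/example.env
grep '^OLLAMA_URL=' /tmp/example.env | cut -d= -f2-
# → http://localhost:11434
```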