# Quick Start
Get your first LocalCloud project running in 5 minutes. We’ll create a simple AI chat application that runs entirely on your local machine.
## Create Your First Project
Choose your preferred setup method:
### 👨‍💻 Interactive Setup

Perfect for human developers who want to choose components step by step.
**Create project directory**

```bash
mkdir my-ai-app
cd my-ai-app
```
**Run interactive setup**

```bash
lc setup
```

You'll see a beautiful wizard:

```
? What would you like to build? (Use arrow keys)
❯ Chat Assistant - Conversational AI with memory
  RAG System    - Document Q&A with vector search
  Custom        - Select components manually
```
Then select services:

- **AI Model**: Choose a model (e.g., llama3.2:3b, mistral)
- **Database**: PostgreSQL for data storage
- **Vector Search**: pgvector for similarity search
- **Cache**: Redis for performance
- **Storage**: MinIO for file storage
**Start services**

```bash
lc start
```

LocalCloud will:

- Download required Docker images
- Start all services
- Configure networking
- Show connection details
### 🤖 AI Assistant Setup

Perfect for AI coding assistants like Claude Code, Cursor, and Gemini CLI.
**One-command setup**

```bash
# Quick AI development stack
lc setup my-ai-app --preset=ai-dev --yes
cd my-ai-app
```
This automatically configures:

- AI models via Ollama
- PostgreSQL database
- Vector search with pgvector
- An auto-generated CLAUDE.md file
**Start services**

```bash
lc start
```

Everything starts automatically - no user interaction needed!
**Check the generated guidance**

Your project now has complete AI assistant guidance with:

- All available commands
- Connection details
- Development workflows
- Export instructions
### Available Presets

```bash
# AI development stack
lc setup my-ai-app --preset=ai-dev --yes

# Full infrastructure stack
lc setup my-app --preset=full-stack --yes

# Minimal AI-only setup
lc setup simple-ai --preset=minimal --yes

# Custom components
lc setup my-app --components=llm,database,storage --models=llama3.2:3b --yes
```
## Verify Everything is Running

Check the status of your services:

```bash
lc status
```

You should see output like:

```
LocalCloud Status
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Project: my-ai-app
Status:  Running

Services:
  ✓ AI (Ollama)   http://localhost:11434   [Running]
  ✓ Database      localhost:5432           [Running]
  ✓ Cache         localhost:6379           [Running]
  ✓ Storage       http://localhost:9000    [Running]
```
## Test the AI Service

Let's test that the AI model is working (substitute whichever model you installed; the setup above pulled llama3.2:3b):

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2:3b",
  "prompt": "Hello! What is LocalCloud?",
  "stream": false
}'
```

The JSON reply's `response` field contains the model's answer - the same field the example apps below read.
## Connect to Services

Each service is now accessible locally:

### AI Model (Ollama)

- **API**: http://localhost:11434
- **Models endpoint**: http://localhost:11434/api/tags
- Compatible with the OpenAI API format (see the sketch below)
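Because the endpoint speaks the OpenAI API format, you can point any OpenAI client at it. A minimal sketch with the `openai` Python package (assumes `pip install openai`; the API key is required by the client but ignored by Ollama):

```python
from openai import OpenAI

# Ollama exposes an OpenAI-compatible API under /v1
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

reply = client.chat.completions.create(
    model="llama3.2:3b",  # use whichever model you pulled
    messages=[{"role": "user", "content": "Hello! What is LocalCloud?"}],
)
print(reply.choices[0].message.content)
```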
### PostgreSQL Database

- **Host**: localhost
- **Port**: 5432
- **Username**: localcloud
- **Password**: localcloud
- **Database**: localcloud

Connection string:

```
postgresql://localcloud:localcloud@localhost:5432/localcloud
```
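Since the setup includes pgvector for similarity search, here is a hedged sketch of using it from Python with psycopg2 (assumes the pgvector extension is available in the bundled Postgres image; the `items` table and vectors are just examples):

```python
import psycopg2

conn = psycopg2.connect(
    "postgresql://localcloud:localcloud@localhost:5432/localcloud"
)
with conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute(
        "CREATE TABLE IF NOT EXISTS items (id SERIAL PRIMARY KEY, embedding vector(3));"
    )
    cur.execute("INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]');")
    # <-> is pgvector's Euclidean-distance operator
    cur.execute("SELECT id FROM items ORDER BY embedding <-> '[1,2,4]' LIMIT 1;")
    print(cur.fetchone()[0])
conn.commit()
conn.close()
```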
### Redis Cache

- **Host**: localhost
- **Port**: 6379
- No authentication by default
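A quick connectivity check from Python (a sketch assuming `pip install redis`; since there is no authentication by default, host and port are enough):

```python
import redis

r = redis.Redis(host="localhost", port=6379)
r.setex("greeting", 60, "hello")    # value expires after 60 seconds
print(r.ping(), r.get("greeting"))  # True b'hello'
```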
### MinIO Storage

- **API**: http://localhost:9000
- **Console**: http://localhost:9001
- **Access Key**: minioadmin
- **Secret Key**: minioadmin
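To work with storage from code, a minimal sketch with the `minio` Python client (assumes `pip install minio`; the bucket name is just an example, and `secure=False` because the local endpoint is plain HTTP):

```python
from minio import Minio

client = Minio(
    "localhost:9000",
    access_key="minioadmin",
    secret_key="minioadmin",
    secure=False,
)
if not client.bucket_exists("my-ai-app"):
    client.make_bucket("my-ai-app")
print([bucket.name for bucket in client.list_buckets()])
```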
## Build a Simple Chat Application

Let's create a basic chat application using the services:
Create `app.py`:

```python
import requests
from flask import Flask, request, jsonify
import redis
import psycopg2

app = Flask(__name__)

# Connect to services
redis_client = redis.Redis(host='localhost', port=6379)
db = psycopg2.connect(
    "postgresql://localcloud:localcloud@localhost:5432/localcloud"
)

@app.route('/chat', methods=['POST'])
def chat():
    message = request.json['message']

    # Check cache first
    cached = redis_client.get(f"response:{message}")
    if cached:
        return jsonify({'response': cached.decode()})

    # Call AI model
    response = requests.post('http://localhost:11434/api/generate',
        json={
            'model': 'llama3.2:3b',
            'prompt': message,
            'stream': False
        }
    )
    ai_response = response.json()['response']

    # Cache the response for an hour
    redis_client.setex(f"response:{message}", 3600, ai_response)

    # Store in database
    with db.cursor() as cur:
        cur.execute(
            "INSERT INTO chats (message, response) VALUES (%s, %s)",
            (message, ai_response)
        )
        db.commit()

    return jsonify({'response': ai_response})

if __name__ == '__main__':
    # Create table if not exists
    with db.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS chats (
                id SERIAL PRIMARY KEY,
                message TEXT,
                response TEXT,
                created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
            )
        """)
        db.commit()
    app.run(port=5000)
```
Run the app:

```bash
pip install flask requests redis psycopg2-binary
python app.py
```
Create `app.js`:

```javascript
const express = require('express');
const { Client } = require('pg');
const redis = require('redis');
const axios = require('axios');

const app = express();
app.use(express.json());

// Connect to services
const redisClient = redis.createClient({ url: 'redis://localhost:6379' });
const pgClient = new Client({
  connectionString: 'postgresql://localcloud:localcloud@localhost:5432/localcloud'
});

async function init() {
  await redisClient.connect();
  await pgClient.connect();

  // Create table
  await pgClient.query(`
    CREATE TABLE IF NOT EXISTS chats (
      id SERIAL PRIMARY KEY,
      message TEXT,
      response TEXT,
      created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
  `);
}

app.post('/chat', async (req, res) => {
  const { message } = req.body;

  // Check cache
  const cached = await redisClient.get(`response:${message}`);
  if (cached) {
    return res.json({ response: cached });
  }

  // Call AI model
  const response = await axios.post('http://localhost:11434/api/generate', {
    model: 'llama3.2:3b',
    prompt: message,
    stream: false
  });
  const aiResponse = response.data.response;

  // Cache the response for an hour
  await redisClient.setEx(`response:${message}`, 3600, aiResponse);

  // Store in database
  await pgClient.query(
    'INSERT INTO chats (message, response) VALUES ($1, $2)',
    [message, aiResponse]
  );

  res.json({ response: aiResponse });
});

init().then(() => {
  app.listen(5000, () => {
    console.log('Chat app running on http://localhost:5000');
  });
});
```
Run the app:

```bash
npm init -y
npm install express pg redis axios
node app.js
```
Test your chat application:

```bash
curl -X POST http://localhost:5000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What is the capital of France?"}'
```

Run the same request twice: the second call returns instantly from the Redis cache.
## Managing Your Project

### View logs

```bash
# All services
lc logs

# Specific service
lc logs ai
```

### Stop services

```bash
# Stop all services
lc stop

# Stop specific service
lc stop postgres
```

### Add more models

```bash
# List available models
lc models list

# Pull a new model
lc models pull mistral
```
### Check resource usage

All services run in Docker containers, so standard Docker tooling such as `docker stats` will show their CPU and memory usage.
## What's Next?

Congratulations! You now have a fully functional local AI development environment. Here's what you can explore next:

- **CLI Commands** - Master all LocalCloud commands
- **Service Configuration** - Customize your services
- **AI Models** - Explore available AI models
- **Examples** - See more example applications
## Tips

- **Model Performance**: Smaller models like qwen2.5:3b or phi3 run faster on modest hardware. Start with these if you have limited resources.
- **First Start**: The initial `lc start` may take several minutes as Docker images are downloaded. Subsequent starts will be much faster.
- **Data Persistence**: All your data is stored in Docker volumes. Use `lc reset --hard` only if you want to completely remove all data.