Quick Start
Get your first LocalCloud project running in 5 minutes. We’ll create a simple AI chat application that runs entirely on your local machine.
Create Your First Project
Initialize the project
# Create a new directory and initialize
mkdir my-ai-app
cd my-ai-app
lc setup
This creates a .localcloud directory with your project configuration.
Configure services
# Run interactive setup
lc setup
Select the services you want:
- AI Model: Choose a model (e.g., llama2, mistral, qwen2.5)
- Database: PostgreSQL for data storage
- Cache: Redis for performance
- Storage: MinIO for file storage
Start services
# Start all configured services
lc start
LocalCloud will:
- Download required Docker images
- Start all services
- Configure networking
- Show connection details
Verify Everything is Running
Check the status of your services:
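# Show project and service status
lc status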
You should see output like:
LocalCloud Status
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Project: my-ai-app
Status: Running
Services:
✓ AI (Ollama) http://localhost:11434 [Running]
✓ Database localhost:5432 [Running]
✓ Cache localhost:6379 [Running]
✓ Storage http://localhost:9000 [Running]
Test the AI Service
Let’s test that the AI model is working:
curl http://localhost:11434/api/generate -d '{
"model": "llama2",
"prompt": "Hello! What is LocalCloud?",
"stream": false
}'
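With "stream": false, Ollama returns a single JSON object instead of a stream of tokens; the generated text is in the response field. Trimmed to the fields that matter here, the reply looks roughly like:
{
  "model": "llama2",
  "response": "LocalCloud is ...",
  "done": true
}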
Connect to Services
Each service is now accessible locally:
AI Model (Ollama)
- API: http://localhost:11434
- Models endpoint: http://localhost:11434/api/tags
- Compatible with the OpenAI API format (see the sketch below)
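Because the endpoint speaks the OpenAI wire format, you can point the official openai Python client at it. A minimal sketch (the api_key is required by the client but ignored by the local endpoint):
from openai import OpenAI

# Point the client at the local Ollama endpoint instead of api.openai.com
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

reply = client.chat.completions.create(
    model="llama2",
    messages=[{"role": "user", "content": "Hello! What is LocalCloud?"}],
)
print(reply.choices[0].message.content)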
PostgreSQL Database
- Host: localhost
- Port: 5432
- Username: localcloud
- Password: localcloud
- Database: localcloud
Connection string: postgresql://localcloud:localcloud@localhost:5432/localcloud
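For a quick connectivity check from Python (psycopg2 is also used by the chat example below):
import psycopg2

# Uses the connection string shown above
conn = psycopg2.connect(
    "postgresql://localcloud:localcloud@localhost:5432/localcloud"
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()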
Redis Cache
- Host: localhost
- Port: 6379
- No authentication by default
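A quick round-trip check from Python (pip install redis):
import redis

r = redis.Redis(host='localhost', port=6379)
r.set('hello', 'world')
print(r.get('hello'))  # b'world'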
MinIO Storage
- API: http://localhost:9000
- Console: http://localhost:9001
- Access Key: minioadmin
- Secret Key: minioadmin
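A minimal sketch using the official MinIO Python SDK (pip install minio); the demo bucket name is just an example:
from minio import Minio

client = Minio(
    'localhost:9000',
    access_key='minioadmin',
    secret_key='minioadmin',
    secure=False,  # the local endpoint is plain HTTP
)
if not client.bucket_exists('demo'):
    client.make_bucket('demo')
print([b.name for b in client.list_buckets()])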
Build a Simple Chat Application
Let’s create a basic chat application using the services:
Create app.py:
import requests
from flask import Flask, request, jsonify
import redis
import psycopg2

app = Flask(__name__)

# Connect to services
redis_client = redis.Redis(host='localhost', port=6379)
db = psycopg2.connect(
    "postgresql://localcloud:localcloud@localhost:5432/localcloud"
)

@app.route('/chat', methods=['POST'])
def chat():
    message = request.json['message']

    # Check the cache first
    cached = redis_client.get(f"response:{message}")
    if cached:
        return jsonify({'response': cached.decode()})

    # Call the AI model
    response = requests.post(
        'http://localhost:11434/api/generate',
        json={
            'model': 'llama2',
            'prompt': message,
            'stream': False
        }
    )
    ai_response = response.json()['response']

    # Cache the response for an hour
    redis_client.setex(f"response:{message}", 3600, ai_response)

    # Store the exchange in the database
    with db.cursor() as cur:
        cur.execute(
            "INSERT INTO chats (message, response) VALUES (%s, %s)",
            (message, ai_response)
        )
    db.commit()

    return jsonify({'response': ai_response})

if __name__ == '__main__':
    # Create the table if it does not exist
    with db.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS chats (
                id SERIAL PRIMARY KEY,
                message TEXT,
                response TEXT,
                created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
            )
        """)
    db.commit()
    app.run(port=5000)
Run the app:
pip install flask requests redis psycopg2-binary
python app.py
Test your chat application:
curl -X POST http://localhost:5000/chat \
-H "Content-Type: application/json" \
-d '{"message": "What is the capital of France?"}'
Managing Your Project
View logs
# All services
lc logs
# Specific service
lc logs ai
Stop services
# Stop all services
lc stop
# Stop specific service
lc stop postgres
Add more models
# List available models
lc models list
# Pull a new model
lc models pull mistral
Check resource usage
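The services are ordinary Docker containers, so if your version of lc has no built-in usage command, standard Docker tooling works:
# Live CPU and memory usage of the running containers
docker stats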
What’s Next?
Congratulations! You now have a fully functional local AI development environment. The tips below will help you get the most out of it.
Tips
Model Performance: Smaller models like qwen2.5:3b or phi3 run faster on modest hardware. Start with these if you have limited resources.
First Start: The initial lc start may take several minutes as Docker images are downloaded. Subsequent starts will be much faster.
Data Persistence: All your data is stored in Docker volumes. Use lc reset --hard only if you want to completely remove all data.