Run LLMs locally with Ollama integration
Pull any model from the Ollama library with:

```bash
lc models pull <model-name>
```

During `lc setup`, you can type any Ollama model name when asked which model to use.
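For example, to pull one of the small models recommended below:

```bash
lc models pull llama3.2:3b
```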
Ollama runs at `http://localhost:11434` and serves the standard Ollama HTTP API (for example `/api/generate`, `/api/chat`, and `/api/tags`). Pulled models are stored in the `localcloud_ollama_data` Docker volume, so they persist across restarts.
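You can hit the API directly to verify the service is up. This is a minimal sketch using Ollama's own endpoints; the model name is only an example and must already be pulled:

```bash
# List the models Ollama currently has available locally
curl http://localhost:11434/api/tags

# Run a single non-streaming generation against a small model
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5:3b",
  "prompt": "Reply with one short sentence.",
  "stream": false
}'
```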
Recommended models:

Small and fast:
- `qwen2.5:3b` - Fast and efficient
- `phi3:3.8b` - Great for basic tasks
- `llama3.2:3b` - Good balance of speed and capability

General purpose:
- `llama3.2:3b` - Fast, good quality
- `mistral:7b` - Excellent general purpose
- `mixtral:8x7b` - High quality but requires more resources

Code:
- `deepseek-coder:6.7b` - Specialized for code
- `codellama:7b` - Meta's code model
- `qwen-coder:7b` - Good for multiple languages

Chat and creative:
- `starling-lm:7b` - Good creative capabilities
- `neural-chat:7b` - Conversational focus

Lightweight:
- `phi3:3.8b` - Microsoft's efficient model
- `qwen2.5:3b` - Fast and capable
- `tinyllama:1.1b` - Ultra-light option
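If you want several of these up front, you can loop over `lc models pull`; the specific models below are just an illustration:

```bash
# Pull one small general model and one code model from the list above
for model in llama3.2:3b deepseek-coder:6.7b; do
  lc models pull "$model"
done
```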