# lc models
Manage AI models, including language models (LLMs) and embedding models. Download, list, remove, and get information about models running on Ollama.

## Usage

Aliases for `lc models`: `lc model`, `lc m`
## Commands
### lc models list

List all installed AI models. The listing shows:

- Model name and tag
- Model type (LLM, Embedding)
- Size and modification date
- Status (Active/Available)
### lc models pull

Download a model from the Ollama registry. The command provides:

- A progress bar during download
- Automatic model type detection
- Usage examples after a successful download
- Disk space validation
### lc models remove

Remove an installed model. The command:

- Prompts for confirmation before deletion
- Warns if the model is currently active
- Shows the disk space that will be freed
### lc models info

Show AI provider status and model information, including:

- Ollama service status
- Active models
- Total models installed
- Disk usage
- API endpoints
## Available Models

### Language Models (LLMs)

#### Small Models (2-4GB) - Recommended for Development
| Model | Size | Description | Use Cases |
|-------|------|-------------|-----------|
| qwen2.5:3b | 2.0GB | Alibaba’s efficient model | General chat, coding |
| phi3:3.8b | 2.2GB | Microsoft’s compact model | Reasoning, math |
| gemma2:2b | 1.6GB | Google’s tiny model | Quick responses |
| llama3.2:3b | 2.0GB | Meta’s latest compact model | General purpose |
#### Medium Models (4-8GB) - Balanced Performance
| Model | Size | Description | Use Cases |
|-------|------|-------------|-----------|
| llama3.2:7b | 4.1GB | Meta’s standard model | Chat, content creation |
| mistral:7b | 4.1GB | Mistral AI’s base model | Coding, reasoning |
| codellama:7b | 3.8GB | Code-specialized Llama | Programming assistance |
| neural-chat:7b | 4.1GB | Intel’s fine-tuned model | Conversations |
#### Large Models (10GB+) - High Performance
| Model | Size | Description | Use Cases |
|-------|------|-------------|-----------|
| mixtral:8x7b | 26GB | Mixture of Experts | Complex reasoning |
| llama3.1:70b | 40GB | Meta’s largest model | Advanced tasks |
| qwen2.5:32b | 19GB | Alibaba’s large model | Professional use |
### Embedding Models
| Model | Size | Dimensions | Description |
|-------|------|------------|-------------|
| all-minilm:l6-v2 | 91MB | 384 | Fast, general-purpose |
| nomic-embed-text | 274MB | 768 | High-quality text embeddings |
| mxbai-embed-large | 669MB | 1024 | Large context, high accuracy |
### Specialized Models
| Model | Size | Specialization |
|-------|------|----------------|
| codellama:7b | 3.8GB | Code generation |
| deepseek-coder:6.7b | 3.8GB | Programming |
| solar:10.7b | 6.1GB | Instruction following |
| wizardmath:7b | 3.8GB | Mathematical reasoning |
## Examples
### List Installed Models
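A plain invocation lists every installed model along with its type, size, and status:

```shell
lc models list
```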
### Download a Model
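For instance, pulling one of the small models from the tables above (the model name is only an example):

```shell
lc models pull qwen2.5:3b
```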
### Remove a Model
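Removal prompts for confirmation before deleting; for example:

```shell
lc models remove qwen2.5:3b
```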
### Check AI Service Status
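The info subcommand reports Ollama service status, active models, and disk usage:

```shell
lc models info
```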
## Model Selection Guide

### By Hardware Requirements

- **8GB RAM Systems**
- **16GB RAM Systems**
- **32GB+ RAM Systems**
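As a sketch, each hardware tier can be paired with models from the size tables above. These are suggestions based purely on model size, not official recommendations:

```shell
# 8GB RAM: stick to the small (2-4GB) tier
lc models pull qwen2.5:3b

# 16GB RAM: medium (4-8GB) models fit comfortably
lc models pull mistral:7b

# 32GB+ RAM: large (10GB+) models become practical
lc models pull mixtral:8x7b
```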
### By Use Case

- **General Chat Applications**
- **Code Generation**
- **RAG (Retrieval Augmented Generation)**
- **Mathematical Reasoning**
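One illustrative pull per use case, drawn from the model tables above (suggestions only):

```shell
lc models pull llama3.2:3b        # General chat applications
lc models pull codellama:7b       # Code generation
lc models pull nomic-embed-text   # RAG: pair an embedding model with an LLM
lc models pull wizardmath:7b      # Mathematical reasoning
```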
## Model Management

### Default Model Configuration

Set default models in your configuration file.

### Model Updates
### Batch Operations
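No batch subcommand is documented here, but a plain shell loop over `lc models pull` achieves the same effect:

```shell
# Pull a set of models in sequence; names come from the tables above.
for model in qwen2.5:3b nomic-embed-text codellama:7b; do
  lc models pull "$model"
done
```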
## Performance Optimization

- **Memory Management**
- **Disk Space Management**
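Both memory and disk pressure usually come down to pruning unused models. Using only the commands documented above, a sketch of that workflow:

```shell
# Check total disk usage and service status
lc models info

# List installed models to spot unused ones, then remove them
lc models list
lc models remove mixtral:8x7b   # example: reclaim a large model's space
```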
## Troubleshooting

### Model Download Issues

- Check your internet connection
- Verify disk space: `lc models info`
- Restart the Ollama service: `lc restart ai`
- Try a different model: `lc models pull phi3:3.8b`
### Model Not Found

- Check available models: `lc models list`
- Pull the model first: `lc models pull custom-model`
- Verify the model name spelling
### Out of Memory

- Use a smaller model: `lc models pull qwen2.5:3b`
- Close other applications
- Increase system RAM
- Use quantized models
### Ollama Service Issues

- Start the AI service: `lc start ai`
- Check service status: `lc status`
- Restart the service: `lc restart ai`
- Check logs: `lc logs ai`
## Model API Usage
### Direct API Calls
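A sketch of a direct call, assuming Ollama is listening on its default local port 11434 and the model has already been pulled:

```shell
# Generate a completion via Ollama's /api/generate endpoint
curl -s http://localhost:11434/api/generate \
  -d '{"model": "qwen2.5:3b", "prompt": "Why is the sky blue?", "stream": false}'
```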
### Integration Examples
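As one integration sketch, a script can request an embedding from Ollama's `/api/embeddings` endpoint and read the result with `jq`. This assumes the AI service is running locally on the default port and that `jq` is installed:

```shell
#!/usr/bin/env sh
# Request an embedding vector for a piece of text,
# then print its dimensionality (384 for all-minilm, per the table above).
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "all-minilm:l6-v2", "prompt": "Hello, world"}' \
  | jq '.embedding | length'
```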
## Related Commands

- `lc start ai` - Start AI service
- `lc status` - Check AI service status
- `lc logs ai` - View AI service logs
- `lc component add llm` - Add LLM component
- `lc info` - System resource information