Manage AI models for language processing and embeddings
Command: `lc model` (alias: `lc m`)

Subcommands:

```
lc models list
lc models pull
lc models remove
lc models info
```
**Lightweight models:**

| Model | Size | Description | Use Cases |
|---|---|---|---|
| qwen2.5:3b | 2.0GB | Alibaba’s efficient model | General chat, coding |
| phi3:3.8b | 2.2GB | Microsoft’s compact model | Reasoning, math |
| gemma2:2b | 1.6GB | Google’s tiny model | Quick responses |
| llama3.2:3b | 2.0GB | Meta’s latest compact model | General purpose |
**Standard 7B models:**

| Model | Size | Description | Use Cases |
|---|---|---|---|
| llama3.2:7b | 4.1GB | Meta’s standard model | Chat, content creation |
| mistral:7b | 4.1GB | Mistral AI’s base model | Coding, reasoning |
| codellama:7b | 3.8GB | Code-specialized Llama | Programming assistance |
| neural-chat:7b | 4.1GB | Intel’s fine-tuned model | Conversations |
**Large models:**

| Model | Size | Description | Use Cases |
|---|---|---|---|
| mixtral:8x7b | 26GB | Mixture of Experts | Complex reasoning |
| llama3.1:70b | 40GB | Meta’s largest model | Advanced tasks |
| qwen2.5:32b | 19GB | Alibaba’s large model | Professional use |
**Embedding models:**

| Model | Size | Dimensions | Description |
|---|---|---|---|
| all-minilm:l6-v2 | 91MB | 384 | Fast, general-purpose |
| nomic-embed-text | 274MB | 768 | High-quality text embeddings |
| mxbai-embed-large | 669MB | 1024 | Large context, high accuracy |
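The Dimensions column is the length of the vector each embedding model returns (e.g. 384 floats for all-minilm:l6-v2). A minimal sketch of how two such vectors are typically compared, assuming cosine similarity; the vectors below are made up and nothing here calls the `lc` CLI:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend outputs of a 384-dimensional model such as all-minilm:l6-v2.
v1 = [0.1] * 384
v2 = [0.1] * 384
print(round(cosine_similarity(v1, v2), 3))  # 1.0 for identical vectors
```

Higher-dimensional models (768 or 1024 above) generally trade speed and storage for retrieval accuracy; the comparison code itself is the same regardless of dimension.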
**Specialized models:**

| Model | Size | Specialization |
|---|---|---|
| codellama:7b | 3.8GB | Code generation |
| deepseek-coder:6.7b | 3.8GB | Programming |
| solar:10.7b | 6.1GB | Instruction following |
| wizardmath:7b | 3.8GB | Mathematical reasoning |
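A small sketch of choosing a specialized model by task and building the corresponding download command. The task-to-model mapping and the `pull_command` helper are illustrative assumptions; only `lc models pull <model>` and the model names come from this document.

```python
# Task keywords mapped to models from the specialized-models table above.
SPECIALIZED = {
    "code": "codellama:7b",
    "programming": "deepseek-coder:6.7b",
    "instructions": "solar:10.7b",
    "math": "wizardmath:7b",
}

def pull_command(task: str) -> str:
    """Build the lc command that downloads the model for a given task."""
    model = SPECIALIZED[task]
    return f"lc models pull {model}"

print(pull_command("math"))  # lc models pull wizardmath:7b
```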
```bash
lc models info
lc restart ai
lc models pull phi3:3.8b
lc models list
lc models pull custom-model
lc models pull qwen2.5:3b
lc start ai
lc status
lc restart ai
lc logs ai
```
- `lc start ai` - Start AI service
- `lc status` - Check AI service status
- `lc logs ai` - View AI service logs
- `lc component add llm` - Add LLM component
- `lc info` - System resource information