Setting Up Models
Learn how to install, configure, and manage models in LocalAI
This tutorial covers everything you need to know about installing and configuring models in LocalAI. You’ll learn multiple methods to get models running.
Prerequisites
- LocalAI installed and running (see Your First Chat if you haven’t set it up yet)
- Basic understanding of command line usage
Method 1: Using the Model Gallery (Easiest)
The Model Gallery is the simplest way to install models. It provides pre-configured models ready to use.
Via WebUI
- Open the LocalAI WebUI at http://localhost:8080
- Navigate to the “Models” tab
- Browse available models
- Click “Install” on any model you want
- Wait for installation to complete
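If you prefer to script gallery installs instead of clicking through the WebUI, LocalAI also exposes a model gallery API. The sketch below is only an assumption-flagged example: the endpoint path and the gallery identifier format ("localai@…") have varied between releases, so verify both against the gallery documentation for your version.
# Hypothetical sketch: install a gallery model through the API.
# The endpoint path and the "id" format are assumptions; check your LocalAI version's docs.
curl http://localhost:8080/models/apply \
  -H "Content-Type: application/json" \
  -d '{"id": "localai@llama-3.2-1b-instruct:q4_k_m"}'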
Method 1.5: Import Models via WebUI
The WebUI provides a powerful model import interface that supports both simple and advanced configuration:
Simple Import Mode
- Open the LocalAI WebUI at http://localhost:8080
- Click “Import Model”
- Enter the model URI (e.g., https://huggingface.co/Qwen/Qwen3-VL-8B-Instruct-GGUF)
- Optionally configure preferences:
- Backend selection
- Model name
- Description
- Quantizations
- Embeddings support
- Custom preferences
- Click “Import Model” to start the import process
Advanced Import Mode
For full control over model configuration:
- In the WebUI, click “Import Model”
- Toggle to “Advanced Mode”
- Edit the YAML configuration directly in the code editor (see the example below)
- Use the “Validate” button to check your configuration
- Click “Create” or “Update” to save
The advanced editor includes:
- Syntax highlighting
- YAML validation
- Format and copy tools
- Full configuration options
This is especially useful for:
- Custom model configurations
- Fine-tuning model parameters
- Building complex model setups
- Editing existing model configurations
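As a rough sketch, a minimal configuration you might paste into the advanced editor looks like the following. The model name and file name are placeholders; see the Model Configuration guide for the full set of options.
name: my-imported-model                    # placeholder; used as the "model" field in API requests
backend: llama-cpp
context_size: 4096
parameters:
  model: my-imported-model.Q4_K_M.gguf     # placeholder GGUF file name
  temperature: 0.7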
Via CLI
# List available models
local-ai models list
# Install a specific model
local-ai models install llama-3.2-1b-instruct:q4_k_m
# Start LocalAI with a model from the gallery
local-ai run llama-3.2-1b-instruct:q4_k_m
Browse Online
Visit models.localai.io to browse all available models in your browser.
Method 2: Installing from Hugging Face
LocalAI can directly install models from Hugging Face:
# Install and run a model from Hugging Face
local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
The format is: huggingface://<repository>/<model-file>
Method 3: Installing from OCI Registries
Ollama Registry
local-ai run ollama://gemma:2b
Standard OCI Registry
local-ai run oci://localai/phi-2:latest
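Both commands follow the same pattern; the placeholders below simply generalize the examples above:
local-ai run ollama://<model>:<tag>
local-ai run oci://<repository>/<image>:<tag>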
Method 4: Manual Installation
For full control, you can manually download and configure models.
Step 1: Download a Model
Download a GGUF model file. Popular sources include Hugging Face repositories such as TheBloke’s GGUF conversions.
Example:
mkdir -p models
wget https://huggingface.co/TheBloke/phi-2-GGUF/resolve/main/phi-2.Q4_K_M.gguf \
-O models/phi-2.Q4_K_M.gguf
Step 2: Create a Configuration File (Optional)
Create a YAML file to configure the model:
# models/phi-2.yaml
name: phi-2                     # name used as the "model" field in API requests
parameters:
  model: phi-2.Q4_K_M.gguf      # GGUF file name inside the models directory
  temperature: 0.7
context_size: 2048
threads: 4
backend: llama-cpp
Step 3: Start LocalAI
# With Docker
docker run -p 8080:8080 -v $PWD/models:/models \
localai/localai:latest
# Or with binary
local-ai --models-path ./models
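Once LocalAI is up, you can confirm the model is served with a quick test request. This assumes the optional configuration from Step 2, so the model is exposed under the name phi-2; without a config file, the model is typically registered under the GGUF file name instead.
# Test the configured model; "phi-2" comes from the "name" field in phi-2.yaml
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "phi-2",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'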
Understanding Model Files
File Formats
- GGUF: Modern format, recommended for most use cases
- GGML: Older format, still supported but deprecated
Quantization Levels
Models come in different quantization levels (quality vs. size trade-off):
| Quantization | Size | Quality | Use Case |
|---|---|---|---|
| Q8_0 | Largest | Highest | Best quality, requires more RAM |
| Q6_K | Large | Very High | High quality |
| Q4_K_M | Medium | High | Balanced (recommended) |
| Q4_K_S | Small | Medium | Lower RAM usage |
| Q2_K | Smallest | Lower | Minimal RAM, lower quality |
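As a rough rule of thumb (an approximation, not an exact figure), file size scales with parameter count times bits per weight: a 7B-parameter model at Q4_K_M (roughly 4.5 bits per weight) works out to about 7 billion × 4.5 / 8 ≈ 3.9 billion bytes, or around 4 GB on disk. Budget additional RAM on top of that for the context window and runtime overhead.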
Choosing the Right Model
Consider:
- RAM available: Larger models need more RAM
- Use case: Different models excel at different tasks
- Speed: Smaller, lower-bit quantizations run faster
- Quality: Higher-bit quantizations (e.g., Q6_K, Q8_0) produce better output
Model Configuration
Basic Configuration
Create a YAML file in your models directory:
name: my-model
parameters:
  model: model.gguf
  temperature: 0.7
  top_p: 0.9
context_size: 2048
threads: 4
backend: llama-cpp
Advanced Configuration
See the Model Configuration guide for all available options.
Managing Models
List Installed Models
# Via API
curl http://localhost:8080/v1/models
# Via CLI
local-ai models list
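The API returns an OpenAI-compatible model list. Exact fields vary by version, but the response looks roughly like this (model names are illustrative):
{
  "object": "list",
  "data": [
    { "id": "phi-2", "object": "model" },
    { "id": "llama-3.2-1b-instruct:q4_k_m", "object": "model" }
  ]
}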
Remove Models
Simply delete the model file and configuration from your models directory:
rm models/model-name.gguf
rm models/model-name.yaml # if exists
Troubleshooting
Model Not Loading
Check backend: Ensure the required backend is installed
local-ai backends list
local-ai backends install llama-cpp  # if needed
Check logs: Enable debug mode
DEBUG=true local-ai
Verify file: Ensure the model file is not corrupted
Out of Memory
- Use a smaller quantization (Q4_K_S or Q2_K)
- Reduce context_size in the model configuration (see the snippet below)
- Close other applications to free RAM
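For example, lowering the context window in the model’s YAML reduces memory use; the value below is illustrative, so pick one that fits your RAM and prompts:
# models/phi-2.yaml (excerpt)
context_size: 1024   # smaller context window = lower RAM usage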
Wrong Backend
Check the Compatibility Table to ensure you’re using the correct backend for your model.
Best Practices
- Start small: Begin with smaller models to test your setup
- Use quantized models: Q4_K_M is a good balance for most use cases
- Organize models: Keep your models directory organized
- Backup configurations: Save your YAML configurations
- Monitor resources: Watch RAM and disk usage
What’s Next?
- Using GPU Acceleration - Speed up inference
- Model Configuration - Advanced configuration options
- Compatibility Table - Find compatible models and backends