Install and Run Models
To install models with LocalAI, you can:
- Import via WebUI (Recommended for beginners): Use the WebUI’s model import interface to import models from URIs. It supports both a simple mode (with preferences) and an advanced mode (YAML editor). See the Setting Up Models tutorial for details.
- Browse the Model Gallery from the Web Interface and install models with a couple of clicks. For more details, refer to the Gallery Documentation.
- Specify a model from the LocalAI gallery during startup, e.g., local-ai run <model_gallery_name>.
- Use a URI to specify a model file (e.g., huggingface://..., oci://, or ollama://) when starting LocalAI, e.g., local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf.
- Specify a URL to a model configuration file when starting LocalAI, e.g., local-ai run https://gist.githubusercontent.com/.../phi-2.yaml.
- Manually install the models by copying the files into the models directory (--models).
Run and Install Models via the Gallery
To run models available in the LocalAI gallery, you can use the WebUI or specify the model name when starting LocalAI. Models can be browsed from the Web interface or the model gallery, or listed from the CLI with: local-ai models list.
To install a model from the gallery, use the model name as the URI. For example, to run LocalAI with the Hermes model, execute:
local-ai run hermes-2-theta-llama-3-8b
To install only the model, use:
local-ai models install hermes-2-theta-llama-3-8b
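If you prefer the HTTP API, the same installation can be triggered through the model gallery endpoint. The following is a minimal sketch, assuming LocalAI is listening on the default http://localhost:8080; depending on your gallery configuration, the id may need a gallery prefix (e.g., localai@hermes-2-theta-llama-3-8b):
# Install a gallery model via the API
curl http://localhost:8080/models/apply \
  -H "Content-Type: application/json" \
  -d '{"id": "hermes-2-theta-llama-3-8b"}'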
Note: The galleries available in LocalAI can be customized to point to a different URL or a local directory. For more information on how to set up your own gallery, see the Gallery Documentation.
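For example, a custom gallery can be passed at startup through the GALLERIES environment variable as a JSON list of name/url pairs. This is a minimal sketch (the gallery name and index URL below are placeholders; see the Gallery Documentation for the exact format):
# Start LocalAI with a custom gallery index
GALLERIES='[{"name":"my-gallery","url":"https://example.com/index.yaml"}]' \
  local-ai run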
Import Models via WebUI
The easiest way to import models is through the WebUI’s import interface:
- Open the LocalAI WebUI at http://localhost:8080
- Navigate to the “Models” tab
- Click “Import Model” or “New Model”
- Choose your import method:
  - Simple Mode: Enter a model URI and configure preferences (backend, name, description, quantizations, etc.)
  - Advanced Mode: Edit YAML configuration directly with syntax highlighting and validation
The WebUI import supports all URI types:
- huggingface://repository_id/model_file
- oci://container_image:tag
- ollama://model_id:tag
- file://path/to/model
- https://... (for configuration files)
For detailed instructions, see the Setting Up Models tutorial.
Run Models via URI (CLI)
To run models via URI from the command line, specify a URI to a model file or a configuration file when starting LocalAI. Valid syntax includes:
- file://path/to/model
- huggingface://repository_id/model_file (e.g., huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf)
- From OCI registries: oci://container_image:tag, ollama://model_id:tag
- From configuration files: https://gist.githubusercontent.com/.../phi-2.yaml
Configuration files can be used to customize the model defaults and settings. For advanced configurations, refer to the Customize Models section.
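As an illustration, the sketch below creates a minimal configuration file alongside a downloaded model. The field names (name, parameters.model, context_size) follow the common LocalAI model YAML layout and should be adapted to your model; see the Customize Models section for the full set of options:
# Create a minimal model configuration in the models directory
cat > models/phi-2.yaml <<'EOF'
name: phi-2
parameters:
  model: phi-2.Q8_0.gguf   # model file placed in the same models directory
  temperature: 0.2
context_size: 2048
EOF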
Examples
# Start LocalAI with the phi-2 model
local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
# Install and run a model from the Ollama OCI registry
local-ai run ollama://gemma:2b
# Run a model from a configuration file
local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
# Install and run a model from a standard OCI registry (e.g., Docker Hub)
local-ai run oci://localai/phi-2:latest
Run Models Manually
Follow these steps to manually run models using LocalAI:
1. Prepare Your Model and Configuration Files: Ensure you have a model file and, if necessary, a configuration YAML file. Customize model defaults and settings with a configuration file. For advanced configurations, refer to the Advanced Documentation.
2. GPU Acceleration: For instructions on GPU acceleration, visit the GPU Acceleration page.
3. Run LocalAI: Choose one of the following methods to run LocalAI:
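For example (a sketch: the image tag, flags, and paths below are assumptions and depend on your installation and hardware):
# Option 1: run the local-ai binary against a local models directory
local-ai run --models-path ./models
# Option 2: run the container image, mounting the models directory
docker run -p 8080:8080 -v $PWD/models:/models \
  localai/localai:latest-cpu --models-path /models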
For more model configurations, visit the Examples Section.