    On this page
      • Quick Start
      • Installation Options
      • Setting Up Models
      • What’s Next?
      • Need Help?
    Getting Started

    Install LocalAI and run your first AI model

    • Quickstart
    • Install and Run Models
    • Try it out
    • Customizing the Model
    • Build LocalAI from source
    • Run with container images
    • Run with Kubernetes
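    As a rough sketch of what the pages above cover (assuming the installer
    script and default API port described in the Quickstart), a first install
    and run might look like:

    ```shell
    # Sketch only: install LocalAI via the Quickstart's installer script,
    # then start the API server (listens on port 8080 by default).
    curl https://localai.io/install.sh | sh
    local-ai run

    # Once the server is up, query the OpenAI-compatible API,
    # e.g. list the models it currently serves:
    curl http://localhost:8080/v1/models
    ```

    The container-image and Kubernetes pages describe alternative ways to run
    the same server without installing a binary on the host.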


    © 2023-2025 Ettore Di Giacinto