
Installation

Ollama is available for macOS, Windows, Linux, and as a Docker image. Choose your platform below.

macOS

Download the app or install with Homebrew

Windows

Download the installer or use winget

Linux

One-line install script with systemd support

Docker

Official container images on Docker Hub

macOS

Download

Download Ollama for macOS from ollama.com/download and open the app. The app will:
  • Install to /Applications/Ollama.app
  • Add the ollama command to /usr/local/bin
  • Start the Ollama server automatically
Alternatively, install the CLI with Homebrew: brew install ollama

Requirements

  • macOS 11 Big Sur or later
  • Apple Silicon (M1/M2/M3) or Intel CPU
  • At least 8 GB RAM (16 GB recommended)
Apple Silicon Macs (M1/M2/M3) provide excellent performance with Metal acceleration.

Windows

Download

Download OllamaSetup.exe from ollama.com/download and run it, or install from the command line with winget:
winget install Ollama.Ollama
The installer will:
  • Install to %LOCALAPPDATA%\Programs\Ollama
  • Add ollama to your user PATH
  • Start Ollama automatically in the background at login

Requirements

  • Windows 10 or later
  • At least 8 GB RAM (16 GB recommended)
  • NVIDIA GPU with CUDA support (optional, for GPU acceleration)
WSL is not required for the native Windows app. If you run Ollama inside the Windows Subsystem for Linux instead, use WSL2; WSL1 is not supported.

GPU Support on Windows

For NVIDIA GPU acceleration:
  1. Install the current NVIDIA driver (which includes CUDA support)
  2. Restart Ollama; it will automatically detect and use your GPU
Verify GPU use (the PROCESSOR column of ollama ps shows whether a loaded model is running on GPU or CPU):
ollama run gemma3 --verbose
ollama ps

Linux

Install Script

The recommended way to install on Linux:
curl -fsSL https://ollama.com/install.sh | sh
The script automatically:
  • Detects your Linux distribution and architecture
  • Downloads the appropriate Ollama binary
  • Installs to /usr/local/bin/ollama
  • Creates an ollama system user
  • Sets up systemd service for automatic startup
  • Detects and installs GPU drivers if needed

Supported Distributions

  • Ubuntu 18.04+ (and derivatives like Pop!_OS)
  • Debian 10+
  • Fedora 38+
  • CentOS 7+
  • Red Hat Enterprise Linux 8+
  • Rocky Linux 8+
  • Arch Linux

Manual Installation

1. Download Ollama

Download the binary for your architecture:
# For x86_64
sudo curl -L https://ollama.com/download/ollama-linux-amd64 -o /usr/local/bin/ollama

# For ARM64
sudo curl -L https://ollama.com/download/ollama-linux-arm64 -o /usr/local/bin/ollama

sudo chmod +x /usr/local/bin/ollama
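If you script the manual install, the architecture choice above can be sketched as a small helper. This is an illustrative sketch, not part of Ollama; arch_to_asset is a hypothetical name:

```shell
#!/bin/sh
# Map the architecture reported by uname -m to the matching Ollama
# download asset name. arch_to_asset is a hypothetical helper.
arch_to_asset() {
  case "$1" in
    x86_64)        echo "ollama-linux-amd64" ;;
    aarch64|arm64) echo "ollama-linux-arm64" ;;
    *)             echo "unsupported: $1" >&2; return 1 ;;
  esac
}

ASSET=$(arch_to_asset "$(uname -m)") || exit 1
echo "Download URL: https://ollama.com/download/$ASSET"
```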
2. Create System User

Create a dedicated user for Ollama:
sudo useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama
sudo usermod -a -G render ollama  # GPU access
sudo usermod -a -G video ollama   # GPU access
3. Create Systemd Service

Create /etc/systemd/system/ollama.service:
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/bin:/usr/bin:/bin"

[Install]
WantedBy=default.target
Enable and start the service:
sudo systemctl daemon-reload
sudo systemctl enable ollama
sudo systemctl start ollama
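Common server settings can be configured as environment variables in the unit's [Service] section. OLLAMA_HOST and OLLAMA_MODELS are documented Ollama variables; the values below are examples:

```ini
[Service]
# Bind address and port for the API server (default is 127.0.0.1:11434)
Environment="OLLAMA_HOST=0.0.0.0:11434"
# Directory where pulled models are stored
Environment="OLLAMA_MODELS=/usr/share/ollama/.ollama/models"
```

After editing the unit, run sudo systemctl daemon-reload followed by sudo systemctl restart ollama.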

GPU Support on Linux

NVIDIA

The install script automatically detects NVIDIA GPUs and installs CUDA drivers. For manual installation:
# Ubuntu/Debian
sudo apt install nvidia-driver nvidia-cuda-toolkit

# Fedora (NVIDIA drivers come from RPM Fusion, not the default repos)
sudo dnf install akmod-nvidia xorg-x11-drv-nvidia-cuda

# Arch
sudo pacman -S nvidia cuda
Verify CUDA installation:
nvidia-smi

AMD (ROCm)

For AMD GPUs with ROCm support:
# Install ROCm (Ubuntu/Debian). apt-key is deprecated, so store the key in a keyring:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://repo.radeon.com/rocm/rocm.gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/rocm.gpg > /dev/null
echo 'deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/debian/ ubuntu main' | sudo tee /etc/apt/sources.list.d/rocm.list
sudo apt update
sudo apt install rocm-dkms
Ollama will automatically detect and use ROCm-compatible AMD GPUs.

Service Management

# Check status
sudo systemctl status ollama

# View logs
sudo journalctl -u ollama -f

# Restart service
sudo systemctl restart ollama

# Stop service
sudo systemctl stop ollama
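A quick way to confirm the service is actually serving requests is to hit the version endpoint. A sketch that prints a hint when the server is unreachable:

```shell
#!/bin/sh
# Query the local Ollama API; fall back to a hint if it is not running.
VERSION=$(curl -s http://localhost:11434/api/version) \
  || VERSION="server not responding; check: sudo systemctl status ollama"
echo "$VERSION"
```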

Docker

Official Image

Ollama provides official Docker images on Docker Hub:
docker pull ollama/ollama

CPU Only

Run Ollama with CPU only:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

NVIDIA GPU

Run with NVIDIA GPU support:
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
Requires the NVIDIA Container Toolkit to be installed on the host.

AMD GPU

Run with AMD GPU support (note the rocm image tag):
docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm

Using the Container

Run a model:
docker exec -it ollama ollama run gemma3
Access the API:
curl http://localhost:11434/api/generate -d '{
  "model": "gemma3",
  "prompt": "Why is the sky blue?"
}'

Docker Compose

services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    restart: unless-stopped

volumes:
  ollama:
Start with:
docker compose up -d
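To give the container an NVIDIA GPU under Compose, a device reservation can be added. This sketch uses the standard Compose deploy.resources syntax and assumes the NVIDIA Container Toolkit is installed on the host:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

volumes:
  ollama:
```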

Package Managers

Ollama is also available through various package managers:
# macOS (Homebrew)
brew install ollama

# Windows (winget)
winget install Ollama.Ollama

# Arch Linux
sudo pacman -S ollama

Verify Installation

After installation, verify Ollama is working:
# Check version
ollama --version

# Run a small test model
ollama run gemma3:1b "Hello!"
If successful, you should see a response from the model.

Updating Ollama

Download and install the latest version from ollama.com/download, or:
curl -fsSL https://ollama.com/install.sh | sh

Uninstalling

macOS

# Stop the app
pkill Ollama

# Remove application
rm -rf /Applications/Ollama.app

# Remove CLI
sudo rm /usr/local/bin/ollama

# Remove models and data
rm -rf ~/.ollama
Windows

  1. Open Settings → Apps → Installed Apps
  2. Find “Ollama” and click Uninstall
  3. Or run the uninstaller from %LOCALAPPDATA%\Programs\Ollama
To remove models:
Remove-Item -Recurse -Force $env:USERPROFILE\.ollama
Linux

# Stop and disable service
sudo systemctl stop ollama
sudo systemctl disable ollama

# Remove service file
sudo rm /etc/systemd/system/ollama.service

# Remove binary
sudo rm /usr/local/bin/ollama

# Remove user
sudo userdel ollama

# Remove models and data
sudo rm -rf /usr/share/ollama
Docker

# Stop and remove container
docker stop ollama
docker rm ollama

# Remove image
docker rmi ollama/ollama

# Remove volume (models)
docker volume rm ollama

Troubleshooting

Port already in use

Another process is using the default port (11434). Either:
  • Stop the other process
  • Change Ollama’s port:
    export OLLAMA_HOST=0.0.0.0:11435
    ollama serve
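To find out what is holding the port, ss (Linux) can list the listener. A sketch that degrades gracefully when nothing is bound:

```shell
#!/bin/sh
# Show whatever is listening on a given TCP port, or say so if nothing is.
port_listener() {
  ss -ltn 2>/dev/null | grep ":$1 " || echo "nothing is listening on $1"
}
port_listener 11434
```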
    
GPU permission errors (Linux)

Ensure your user has access to the GPU:
sudo usermod -aG render $USER
sudo usermod -aG video $USER
# Log out and back in
GPU not detected

  • NVIDIA: Install CUDA drivers and run nvidia-smi
  • AMD: Install ROCm and check rocm-smi
  • macOS: Metal acceleration works automatically on Apple Silicon
Running out of disk space

Models can be large. Remove unused models:
ollama list
ollama rm <model-name>

System Requirements

Minimum Requirements

  • RAM: 8 GB (for 7B models)
  • Disk Space: 4 GB per model (average)
  • OS: macOS 11+, Windows 10+, or modern Linux

Recommended

  • RAM: 16 GB+ (for 13B models)
  • Disk Space: 50 GB+ for multiple models
  • GPU: NVIDIA with 8 GB+ VRAM or Apple Silicon M1+
  • OS: Latest version for best performance

Model Size Guide

Model Size | RAM Required | GPU VRAM | Disk Space
2B-3B      | 4-8 GB       | 4 GB     | 2-4 GB
7B-8B      | 8-16 GB      | 8 GB     | 4-6 GB
13B        | 16-32 GB     | 16 GB    | 8-12 GB
70B+       | 64+ GB       | 48+ GB   | 40+ GB
Quantized models (Q4, Q5) require less memory and perform nearly as well as full precision models.
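The table's RAM figures follow roughly from parameters × bytes per weight, plus overhead for the KV cache and runtime. A back-of-envelope helper, assuming ~2 bytes/weight at FP16 and ~0.5 bytes/weight at Q4 (rough figures, not Ollama-specific):

```shell
#!/bin/sh
# Rough weight-memory estimate: billions of params * bytes per weight.
# Real usage is higher due to context (KV cache) and runtime overhead.
mem_gb() { awk -v p="$1" -v b="$2" 'BEGIN { printf "%.1f\n", p * b }'; }

echo "7B at FP16: $(mem_gb 7 2) GB"    # -> 14.0 GB
echo "7B at Q4:   $(mem_gb 7 0.5) GB"  # -> 3.5 GB
```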

What’s Next?

Quickstart Guide

Run your first model in minutes

CLI Reference

Learn all available commands

Model Library

Browse available models

API Documentation

Integrate Ollama into your apps
