# Installation
Ollama is available for macOS, Windows, Linux, and as a Docker image. Choose your platform below.

- **macOS**: Download the app or use the install script
- **Windows**: Download the installer or use PowerShell
- **Linux**: One-line install script with systemd support
- **Docker**: Official container images on Docker Hub
## macOS

### Download

Install using the automated script, or download the app manually from ollama.com/download. The automated install will:

- Download Ollama for macOS
- Install to `/Applications/Ollama.app`
- Add the `ollama` command to `/usr/local/bin`
- Start the Ollama server automatically
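One way to do a terminal install on macOS is Homebrew; this is a sketch assuming Homebrew is present, and installs the same `ollama` binary:

```shell
# Install the Ollama CLI and server via Homebrew
# (assumption: Homebrew is installed; the GUI app can instead be
# downloaded from https://ollama.com/download)
brew install ollama
```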
### Requirements
- macOS 11 Big Sur or later
- Apple Silicon (M1/M2/M3) or Intel CPU
- At least 8 GB RAM (16 GB recommended)
Apple Silicon Macs (M1/M2/M3) provide excellent performance with Metal acceleration.
## Windows

### Download

Install using PowerShell, or download the installer manually from ollama.com/download. The installer will:

- Download Ollama for Windows
- Install to `%LOCALAPPDATA%\Programs\Ollama`
- Add Ollama to the system PATH
- Start Ollama as a Windows service
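A sketch of a PowerShell install using winget (assumption: winget is available, and `Ollama.Ollama` is the package ID published on winget):

```shell
# Install Ollama from PowerShell via winget
winget install Ollama.Ollama
```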
### Requirements
- Windows 10 or later
- At least 8 GB RAM (16 GB recommended)
- NVIDIA GPU with CUDA support (optional, for GPU acceleration)
- WSL2 support for GPU acceleration (if using NVIDIA GPU)
### GPU Support on Windows

For NVIDIA GPU acceleration:

- Install NVIDIA CUDA drivers
- Ensure WSL2 is enabled (for GPU passthrough)
- Ollama will automatically detect and use your GPU
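The driver install can be sanity-checked with `nvidia-smi`, which ships with the NVIDIA driver and lists the GPUs it can see:

```shell
# Confirm the NVIDIA driver is installed and the GPU is visible
nvidia-smi
```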
## Linux

### Install Script
The recommended way to install on Linux is the official install script:
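This is the one-line install published at ollama.com:

```shell
# Download and run the official Ollama install script
curl -fsSL https://ollama.com/install.sh | sh
```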
### What the install script does
The script automatically:

- Detects your Linux distribution and architecture
- Downloads the appropriate Ollama binary
- Installs to `/usr/local/bin/ollama`
- Creates an `ollama` system user
- Sets up a systemd service for automatic startup
- Detects and installs GPU drivers if needed
### Supported Distributions
- Ubuntu 18.04+ (and derivatives like Pop!_OS)
- Debian 10+
- Fedora 38+
- CentOS 7+
- Red Hat Enterprise Linux 8+
- Rocky Linux 8+
- Arch Linux
### Manual Installation
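Manual installation downloads the release tarball and unpacks it; the sketch below assumes an amd64 system (arm64 systems use the `ollama-linux-arm64.tgz` tarball instead):

```shell
# Download the release tarball and unpack the binary under /usr
curl -LO https://ollama.com/download/ollama-linux-amd64.tgz
sudo tar -C /usr -xzf ollama-linux-amd64.tgz
```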
### GPU Support on Linux

#### NVIDIA

The install script automatically detects NVIDIA GPUs and installs CUDA drivers. For manual installation, install the NVIDIA driver packages for your distribution.

#### AMD (ROCm)
For AMD GPUs with ROCm support, also download the ROCm build of Ollama (`ollama-linux-amd64-rocm.tgz`).

### Service Management

The install script sets up a systemd service named `ollama`:
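The service responds to the usual systemctl commands:

```shell
# Check status, restart, and follow logs for the Ollama service
sudo systemctl status ollama
sudo systemctl restart ollama
journalctl -u ollama -f
```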
## Docker

### Official Image

Ollama provides official Docker images on Docker Hub as `ollama/ollama`.

### CPU Only
Run Ollama with CPU only, persisting models in a named volume: `docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama`

### NVIDIA GPU
Run with NVIDIA GPU support (requires the NVIDIA Container Toolkit):
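The same container as the CPU-only case, with all GPUs passed through:

```shell
# Run the Ollama server with access to all NVIDIA GPUs
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```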
### AMD GPU
Run with AMD GPU support using the `rocm` image tag: `docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm`

### Using the Container
Run a model inside the running container: `docker exec -it ollama ollama run llama3`

### Docker Compose
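A minimal compose file mirroring the CPU-only `docker run` example (a sketch; add GPU device mappings as needed):

```yaml
# docker-compose.yml: Ollama server with a persistent model volume
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
volumes:
  ollama:
```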
## Package Managers

Ollama is also available through various package managers:

- Homebrew (macOS/Linux)
- Arch Linux (AUR)
- NixOS
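Typical commands for each (a sketch; package names are assumptions where noted):

```shell
# Homebrew (macOS/Linux)
brew install ollama
# Arch Linux via an AUR helper (assumption: helper "yay", package "ollama")
yay -S ollama
# Nix (assumption: the nixpkgs attribute is "ollama")
nix profile install nixpkgs#ollama
```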
## Verify Installation

After installation, verify Ollama is working, for example by checking the version with `ollama --version` and listing models with `ollama list`.

## Updating Ollama
- macOS / Windows: download and install the latest version from ollama.com/download
- Linux: re-run the install script
- Docker: pull the latest image and recreate the container
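For Linux and Docker, the update commands are the same ones used to install:

```shell
# Linux: re-running the install script updates in place
curl -fsSL https://ollama.com/install.sh | sh
# Docker: pull the newest image
docker pull ollama/ollama
```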
## Uninstalling

### macOS
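A sketch of removing the app, the CLI, and downloaded models (paths match the default install locations above):

```shell
# Quit Ollama first, then remove the app bundle and CLI link
rm -rf /Applications/Ollama.app
sudo rm /usr/local/bin/ollama
# Optionally remove downloaded models and settings
rm -rf ~/.ollama
```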
### Windows
- Open Settings → Apps → Installed Apps
- Find “Ollama” and click Uninstall
- Or run the uninstaller from `%LOCALAPPDATA%\Programs\Ollama`
### Linux
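Reversing what the install script set up (a sketch mirroring the script's actions: service, binary, models, and the `ollama` system user):

```shell
# Stop and disable the service, then remove the unit file
sudo systemctl stop ollama
sudo systemctl disable ollama
sudo rm /etc/systemd/system/ollama.service
# Remove the binary, models, and the ollama system user/group
sudo rm /usr/local/bin/ollama
sudo rm -rf /usr/share/ollama
sudo userdel ollama
sudo groupdel ollama
```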
### Docker
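Remove the container, image, and model volume created in the examples above:

```shell
# Stop and delete the container, then the image and the model volume
docker stop ollama
docker rm ollama
docker rmi ollama/ollama
docker volume rm ollama
```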
## Troubleshooting

### Port 11434 already in use
Another process is using the default port. Either:

- Stop the other process
- Change Ollama’s port with the `OLLAMA_HOST` environment variable:
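For example, to serve on port 11435 instead:

```shell
# OLLAMA_HOST controls the address and port the server binds to
OLLAMA_HOST=127.0.0.1:11435 ollama serve
```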
### Permission denied (Linux)
Ensure your user has access to the GPU device files:
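A common fix is adding your user to the groups that own the GPU device nodes (the `render` and `video` groups apply mainly to AMD/ROCm; group names can vary by distribution):

```shell
# Grant the current user access to GPU device files,
# then log out and back in for the change to take effect
sudo usermod -aG render,video $USER
```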
### GPU not detected
- NVIDIA: Install CUDA drivers and run `nvidia-smi`
- AMD: Install ROCm and check `rocm-smi`
- macOS: Metal should work automatically on M1/M2/M3
### Out of disk space
Models can be large. Remove unused models:
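List what is installed and delete by name (replace `<model-name>` with a name from the list):

```shell
# Show installed models and their sizes, then remove unused ones
ollama list
ollama rm <model-name>
```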
## System Requirements

### Minimum Requirements
- RAM: 8 GB (for 7B models)
- Disk Space: 4 GB per model (average)
- OS: macOS 11+, Windows 10+, or modern Linux
### Recommended Requirements
- RAM: 16 GB+ (for 13B models)
- Disk Space: 50 GB+ for multiple models
- GPU: NVIDIA with 8GB+ VRAM or Apple Silicon M1+
- OS: Latest version for best performance
### Model Size Guide
| Model Size | RAM Required | GPU VRAM | Disk Space |
|---|---|---|---|
| 2B-3B | 4-8 GB | 4 GB | 2-4 GB |
| 7B-8B | 8-16 GB | 8 GB | 4-6 GB |
| 13B | 16-32 GB | 16 GB | 8-12 GB |
| 70B+ | 64+ GB | 48+ GB | 40+ GB |
Quantized models (Q4, Q5) require less memory and perform nearly as well as full precision models.
## What’s Next?

- **Quickstart Guide**: Run your first model in minutes
- **CLI Reference**: Learn all available commands
- **Model Library**: Browse available models
- **API Documentation**: Integrate Ollama into your apps