
Quick Install

Install Ollama on Linux with a single command:
curl -fsSL https://ollama.com/install.sh | sh
This script automatically detects your system and installs the appropriate version.

Manual Installation

If upgrading from a prior version, remove old libraries first:
sudo rm -rf /usr/lib/ollama

1. Download and Extract

Download the Ollama package for your architecture (extracting the .tar.zst archive requires a tar build with zstd support):
curl -fsSL https://ollama.com/download/ollama-linux-amd64.tar.zst \
    | sudo tar x -C /usr

2. Start Ollama

In a terminal, start the Ollama server:
ollama serve

3. Verify Installation

In another terminal, verify Ollama is running:
ollama -v
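Beyond the CLI version check, you can confirm the HTTP API itself is up; a quick probe of the version endpoint, assuming the default bind address of 127.0.0.1:11434:

```shell
# Ask the local API for its version; falls back to a notice if unreachable.
curl -s http://127.0.0.1:11434/api/version || echo "Ollama is not reachable"
```

A healthy server responds with a small JSON object containing the version string.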

GPU Support

AMD GPU Installation

If you have an AMD GPU, download and extract the ROCm package:
curl -fsSL https://ollama.com/download/ollama-linux-amd64-rocm.tar.zst \
    | sudo tar x -C /usr
While AMD has contributed the amdgpu driver upstream to the Linux kernel, the version may be older and not support all ROCm features. For best support, install the latest driver from AMD’s official site.

NVIDIA GPU Setup

Ollama automatically detects NVIDIA GPUs with CUDA support. Requirements:
  • NVIDIA GPU with compute capability 5.0+
  • Driver version 531 or newer
Install CUDA drivers:
  1. Download and install CUDA
  2. Verify installation:
nvidia-smi
This should display your GPU details.
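To check the compute-capability requirement directly, nvidia-smi can report it per GPU (the compute_cap query field needs a reasonably recent driver, so treat this as a sketch):

```shell
# Print each GPU's name and compute capability; Ollama needs 5.0 or higher.
nvidia-smi --query-gpu=name,compute_cap --format=csv || echo "no NVIDIA driver detected"
```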

Running as a System Service

Recommended for production use and automatic startup.

1. Create Ollama User

Create a dedicated user and group for Ollama:
sudo useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama
sudo usermod -a -G ollama $(whoami)
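Note that the usermod group change only applies to new login sessions. To sanity-check both accounts afterwards:

```shell
# Confirm the service account exists and inspect your own groups
id ollama || echo "ollama user not found"
groups "$(whoami)"
```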

2. Create Service File

Create /etc/systemd/system/ollama.service:
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=$PATH"

[Install]
WantedBy=multi-user.target

3. Enable and Start Service

Reload systemd and enable the service:
sudo systemctl daemon-reload
sudo systemctl enable ollama
sudo systemctl start ollama

4. Verify Service Status

Check that Ollama is running:
sudo systemctl status ollama

Configuration

Environment Variables

Customize Ollama by editing the systemd service:
sudo systemctl edit ollama
Or create an override file at /etc/systemd/system/ollama.service.d/override.conf:
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_DEBUG=1"
Environment="OLLAMA_MODELS=/mnt/models"

Common Environment Variables

Variable                   Description                    Default
OLLAMA_HOST                Server bind address            127.0.0.1:11434
OLLAMA_MODELS              Model storage location         /usr/share/ollama/.ollama/models
OLLAMA_DEBUG               Enable debug logging           0
OLLAMA_NUM_PARALLEL        Max parallel requests          1
CUDA_VISIBLE_DEVICES       Select specific NVIDIA GPUs    All GPUs
HSA_OVERRIDE_GFX_VERSION   AMD GPU override               Auto-detect
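OLLAMA_HOST is also honored by the client side of the CLI, so a one-off override can point commands at a remote server; a sketch (192.0.2.10 is a placeholder documentation address):

```shell
# List models on a remote Ollama instance instead of localhost
OLLAMA_HOST=http://192.0.2.10:11434 ollama list || echo "CLI missing or server unreachable"
```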

Installing Specific Versions

Use the OLLAMA_VERSION environment variable to install a specific version:
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.5.7 sh
Find available versions on the Ollama GitHub releases page.

Updates

Update Ollama by running the install script again:
curl -fsSL https://ollama.com/install.sh | sh
Or manually download the latest version:
curl -fsSL https://ollama.com/download/ollama-linux-amd64.tar.zst \
    | sudo tar x -C /usr
sudo systemctl restart ollama

Logs and Debugging

View Logs

For systemd service:
journalctl -e -u ollama
Follow logs in real-time:
journalctl -u ollama --no-pager --follow
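When hunting for a specific failure it helps to filter the journal down to error lines; for example (the grep pattern is just an illustration):

```shell
# Show only log lines mentioning errors
journalctl -u ollama --no-pager | grep -i error || echo "no matching lines (or journalctl unavailable)"
```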

Manual Server Logs

If running ollama serve manually, logs appear in the terminal.

Enable Debug Logging

sudo systemctl edit ollama
Add:
[Service]
Environment="OLLAMA_DEBUG=1"
Then restart:
sudo systemctl daemon-reload
sudo systemctl restart ollama

Troubleshooting

NVIDIA GPU Not Detected

If Ollama doesn’t detect your NVIDIA GPU:
  1. Verify drivers are loaded:
    nvidia-smi
    
  2. Load the UVM driver:
    sudo nvidia-modprobe -u
    
  3. Reload the nvidia_uvm driver:
    sudo rmmod nvidia_uvm
    sudo modprobe nvidia_uvm
    
  4. Check for errors in system logs:
    sudo dmesg | grep -i nvidia
    

AMD GPU Issues

For AMD GPU troubleshooting:
  1. Check device permissions:
    ls -l /dev/kfd /dev/dri
    
  2. Ensure user is in the correct groups:
    sudo usermod -a -G video,render ollama
    
  3. Enable debug logging:
    AMD_LOG_LEVEL=3 ollama serve
    
  4. Check for driver errors:
    sudo dmesg | grep -i amdgpu
    

Temp Directory Issues

If your system has noexec set on /tmp, point Ollama at an alternate temporary directory (create the directory first and make sure the ollama user owns it):
sudo systemctl edit ollama
Add:
[Service]
Environment="OLLAMA_TMPDIR=/usr/share/ollama/tmp"

Uninstallation

1. Stop and Disable Service

sudo systemctl stop ollama
sudo systemctl disable ollama
sudo rm /etc/systemd/system/ollama.service

2. Remove Binaries and Libraries

sudo rm $(which ollama)
sudo rm -r /usr/lib/ollama
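If Ollama was installed under a different prefix (the install script may use /usr/local), the matching library directory can be derived from the binary's location rather than hard-coded; a sketch using POSIX parameter expansion:

```shell
# Derive <prefix>/lib/ollama from wherever the binary lives,
# e.g. /usr/local/bin/ollama -> /usr/local/lib/ollama.
bin_path=$(command -v ollama || echo /usr/bin/ollama)  # fall back if not on PATH
lib_path="${bin_path%/bin/*}/lib/ollama"
echo "binary:    $bin_path"
echo "libraries: $lib_path"
```

Pass the two paths to sudo rm and sudo rm -r once you have confirmed they look right.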

3. Remove User Data

sudo rm -r /usr/share/ollama
sudo userdel ollama
sudo groupdel ollama

Next Steps

GPU Configuration

Learn about GPU support and optimization

API Reference

Integrate Ollama into your applications

Model Library

Browse available models

Docker Setup

Run Ollama in containers
