
Installation

OpenCLIP can be installed in multiple ways depending on your use case. This guide covers installation for inference, training, and development.

Basic Installation

For inference and basic usage, install OpenCLIP via pip:
pip install open_clip_torch
This installs the core package with dependencies for loading models and running inference.
The package name is open_clip_torch on PyPI, but you import it as open_clip in Python.
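Because the distribution name and the module name differ, a quick stdlib-only check can confirm the module is discoverable:

```python
import importlib.util

# The PyPI distribution is "open_clip_torch", but the importable module is "open_clip".
spec = importlib.util.find_spec("open_clip")
print("open_clip importable:", spec is not None)
```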

Optional Dependencies

Some models require additional packages depending on their architecture:

Image Encoders (timm)

Many modern models use timm-based image encoders (ConvNeXt, SigLIP, EVA, etc.). Install the latest timm for full model support:
pip install timm
If you see “Unknown model” errors for the image encoder, upgrade timm to the latest version:
pip install --upgrade timm

Tokenizers (transformers)

Models that use Hugging Face transformers tokenizers require the transformers library:
pip install transformers

Complete Installation

To install OpenCLIP with all optional dependencies:
pip install open_clip_torch timm transformers
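After installing, a short sketch like the following can report which of the optional packages are actually present (package names as listed in this guide):

```python
import importlib.util

# Check the core module plus the two optional dependencies described above.
for name in ("open_clip", "timm", "transformers"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'installed' if found else 'missing'}")
```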

Training Installation

For training CLIP models, install with training dependencies:
pip install 'open_clip_torch[training]'
This includes additional packages for:
  • Distributed training
  • Data loading and augmentation
  • Logging and monitoring
  • WebDataset support
Training requires PyTorch to be installed separately. Follow the PyTorch installation guide for your specific system configuration.
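As a rough sketch, a single-node training run might be launched like this. The module path (`open_clip_train.main`) and every flag and path below are assumptions that vary by version and dataset; consult the repository README for the current options:

```shell
# Hypothetical launch; the shard path and all hyperparameters are placeholders.
python -m open_clip_train.main \
    --train-data '/path/to/shards/{00000..00999}.tar' \
    --dataset-type webdataset \
    --model ViT-B-32 \
    --batch-size 128 \
    --precision amp \
    --epochs 30
```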

Development Installation

For contributors or those who want to modify the code:
1. Create a virtual environment

First, create and activate a virtual environment:
python3 -m venv .env
source .env/bin/activate
pip install -U pip
2. Clone the repository

Clone the OpenCLIP repository:
git clone https://github.com/mlfoundations/open_clip.git
cd open_clip
3. Install in development mode

Install the package in editable mode:
make install
Or install training dependencies:
make install-training
4. Install PyTorch

Install PyTorch according to your system:
# Visit https://pytorch.org/get-started/locally/
# Example for CUDA 11.8:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118

Testing Installation

To verify your installation and run tests:

Install test dependencies

make install-test

Run all tests

make test

Run specific tests

# Run training tests only
python -m pytest -x -s -v tests -k "training"

# Run inference tests
python -m pytest -x -s -v tests -k "inference"

Verification

Verify your installation by loading a pretrained model:
import open_clip
import torch

# List available pretrained models
print(open_clip.list_pretrained()[:5])  # Show first 5

# Load a model
model, _, preprocess = open_clip.create_model_and_transforms(
    'ViT-B-32',
    pretrained='laion2b_s34b_b79k'
)

print("Installation successful!")
If this runs without errors, your installation is complete.

Platform-Specific Notes

Linux

OpenCLIP works out of the box on most Linux distributions. For CUDA support, ensure you have:
  • NVIDIA drivers installed
  • CUDA toolkit matching your PyTorch version
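To confirm PyTorch actually sees the GPU, a minimal check (requiring only torch) is:

```python
import torch

# Report CUDA availability and, if present, the name of the first device.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```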

macOS

OpenCLIP works on macOS with CPU or MPS (Apple Silicon) acceleration:
import torch
import open_clip

# Use MPS on Apple Silicon
device = "mps" if torch.backends.mps.is_available() else "cpu"
model, _, preprocess = open_clip.create_model_and_transforms(
    'ViT-B-32',
    pretrained='openai',
    device=device
)

Windows

For Windows, we recommend using WSL2 (Windows Subsystem for Linux) for the best experience, especially for training. Native Windows installation works but may have some limitations.

Troubleshooting

ImportError or ModuleNotFoundError

Make sure you’re importing the correct name:
import open_clip  # Correct
# NOT: import open_clip_torch
CUDA out of memory

Reduce the batch size, use gradient accumulation, or load the model at lower precision:
import open_clip

# Use mixed precision to save memory
model, _, preprocess = open_clip.create_model_and_transforms(
    'ViT-B-32',
    pretrained='openai',
    precision='fp16'  # or 'bf16'
)
“Unknown model” errors

This usually means a timm-based image encoder was not found. Upgrade timm:
pip install --upgrade timm
Models re-download on every run

Models are cached after the first download. The cache location can be set explicitly:
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    'ViT-B-32',
    pretrained='openai',
    cache_dir='/path/to/cache'
)

Next Steps

Now that OpenCLIP is installed, proceed to the quickstart guide to run your first example:
  • Quickstart Guide - Run zero-shot classification
  • Model Zoo - Browse available models
  • Training Guide - Train your own CLIP models
