Llama is Meta’s family of open-source large language models. GitWhisper supports all Llama model variants, offering flexibility and strong performance for commit message generation and code analysis.

Available Models

GitWhisper supports the following Llama model variants:

Llama 3.3 Series

  • llama-3.3-70b-instruct - Latest 70B parameter model with improved capabilities

Llama 3.2 Series

  • llama-3.2-1b-instruct - Ultra-lightweight 1B model
  • llama-3.2-3b-instruct - Compact 3B model for efficient deployment

Llama 3.1 Series

  • llama-3.1-405b-instruct - Massive 405B parameter model (best quality)
  • llama-3.1-70b-instruct - Balanced 70B model
  • llama-3.1-8b-instruct - Efficient 8B model

Llama 3 Series

  • llama-3-70b-instruct ⭐ (default) - Production-ready 70B model
  • llama-3-8b-instruct - Fast and efficient 8B model

The default model llama-3-70b-instruct provides excellent performance for most development workflows.

Setup

1. Get API Key

Obtain your Llama API key from Meta’s platform or a Llama provider.

2. Save API Key

Save your API key using GitWhisper:
gitwhisper save-key --model llama --key "your-llama-api-key"
Or set it as an environment variable:
export LLAMA_API_KEY="your-llama-api-key"

3. Use Llama

Generate commits with Llama:
gitwhisper commit --model llama

Usage Examples

# Use default Llama model (llama-3-70b-instruct)
gitwhisper commit --model llama
gw commit -m llama

Model Comparison

| Model | Parameters | Quality | Speed | Best For |
| --- | --- | --- | --- | --- |
| llama-3.1-405b-instruct | 405B | Excellent | Slow | Complex analysis |
| llama-3.3-70b-instruct | 70B | Very Good | Medium | Latest features |
| llama-3-70b-instruct | 70B | Very Good | Medium | Production use |
| llama-3.1-70b-instruct | 70B | Very Good | Medium | Balanced |
| llama-3.1-8b-instruct | 8B | Good | Fast | Quick commits |
| llama-3.2-3b-instruct | 3B | Fair | Very Fast | Resource-constrained |
| llama-3.2-1b-instruct | 1B | Basic | Ultra Fast | Edge deployment |
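If you switch tiers often, a small wrapper can keep the variant names from the comparison table in one place. This is an illustrative sketch, not part of GitWhisper; the tier names (quality, balanced, fast) are made up for the example:

```shell
# Illustrative helper (not a GitWhisper command): map a tier name to a
# Llama variant from the comparison table above.
llama_variant_for() {
  case "$1" in
    quality)  echo "llama-3.1-405b-instruct" ;;  # best quality, slow
    balanced) echo "llama-3-70b-instruct" ;;     # GitWhisper's default
    fast)     echo "llama-3.1-8b-instruct" ;;    # quick commits
    *)        echo "llama-3-70b-instruct" ;;     # fall back to the default
  esac
}

# Usage:
#   gitwhisper commit --model llama --model-variant "$(llama_variant_for fast)"
```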

Features

Open Source Flexibility

Llama models can be deployed anywhere:
# Use via cloud API
gitwhisper commit --model llama --key "your-api-key"

# Or self-host with Ollama
gitwhisper commit --model ollama --model-variant llama3.2:latest

Multiple Size Options

Choose the right size for your needs:
# Maximum quality (requires powerful hardware or API)
gitwhisper commit --model llama --model-variant llama-3.1-405b-instruct

# Balanced (good for most use cases)
gitwhisper commit --model llama --model-variant llama-3-70b-instruct

# Fast (efficient for quick commits)
gitwhisper commit --model llama --model-variant llama-3.1-8b-instruct

Configuration

Set Default Variant

# Set 70B as default
gitwhisper set-defaults --model llama --model-variant llama-3-70b-instruct

# Set lightweight 8B as default
gitwhisper set-defaults --model llama --model-variant llama-3.1-8b-instruct

# Set massive 405B as default (requires API access)
gitwhisper set-defaults --model llama --model-variant llama-3.1-405b-instruct

API Key Management

Add to your shell configuration:
export LLAMA_API_KEY="your-llama-api-key"
Save permanently with GitWhisper:
gitwhisper save-key --model llama --key "your-llama-api-key"
This stores the key in ~/.git_whisper.yaml.
Pass key directly to the command:
gitwhisper commit --model llama --key "your-llama-api-key"
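Whichever option you use, a pre-flight check can fail fast in scripts or CI when the key is missing. A minimal sketch, assuming the LLAMA_API_KEY variable name used in this guide (the function name is hypothetical):

```shell
# Hypothetical pre-flight check: verify LLAMA_API_KEY is exported
# before invoking GitWhisper non-interactively.
check_llama_key() {
  if [ -n "${LLAMA_API_KEY:-}" ]; then
    echo "LLAMA_API_KEY is set"
  else
    echo "LLAMA_API_KEY is not set" >&2
    return 1
  fi
}
```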

Best Practices

Use llama-3-70b-instruct or llama-3.3-70b-instruct for daily development. They offer the best balance of quality and speed.
The llama-3.1-405b-instruct model requires significant resources (API or powerful hardware). It’s best suited for complex analysis tasks where maximum quality is needed.
For self-hosted deployments, consider using smaller models (8B or 3B) via Ollama for better performance on consumer hardware.

Self-Hosting with Ollama

Llama models work great with Ollama for complete privacy:
# Pull Llama model with Ollama
ollama pull llama3.2:latest

# Use with GitWhisper
gitwhisper commit --model ollama --model-variant llama3.2:latest
See the Ollama guide for detailed setup instructions.

Troubleshooting

Ensure your Llama API key is properly configured:
# Check configuration
gitwhisper show-config

# Save key if missing
gitwhisper save-key --model llama --key "your-key"
Verify the variant name is correct:
# List available variants
gitwhisper list-variants --model llama
If you’re getting memory or timeout errors with large models:
  • Use smaller variants (8B or 3B)
  • Switch to cloud API for 405B model
  • Consider self-hosting smaller models with Ollama
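A local sanity check can also catch typos in variant names before a request is made. This helper is illustrative, not part of GitWhisper; it simply matches against the variant names listed in this guide:

```shell
# Illustrative check (not a GitWhisper command): succeed only for the
# Llama variant names documented on this page.
is_known_llama_variant() {
  case "$1" in
    llama-3.3-70b-instruct | \
    llama-3.2-1b-instruct | llama-3.2-3b-instruct | \
    llama-3.1-405b-instruct | llama-3.1-70b-instruct | llama-3.1-8b-instruct | \
    llama-3-70b-instruct | llama-3-8b-instruct)
      return 0 ;;
    *)
      return 1 ;;
  esac
}

# Usage:
#   is_known_llama_variant "llama-3-70b-instruct" && echo "variant ok"
```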

All Variants

See complete list of Llama model variants

Ollama Setup

Self-host Llama models with Ollama

Model Overview

Compare all supported AI models

Configuration

Learn more about API key management
