Available Models
GitWhisper supports the following Llama model variants:

Llama 3.3 Series
- llama-3.3-70b-instruct - Latest 70B parameter model with improved capabilities
Llama 3.2 Series
- llama-3.2-1b-instruct - Ultra-lightweight 1B model
- llama-3.2-3b-instruct - Compact 3B model for efficient deployment
Llama 3.1 Series
- llama-3.1-405b-instruct - Massive 405B parameter model (best quality)
- llama-3.1-70b-instruct - Balanced 70B model
- llama-3.1-8b-instruct - Efficient 8B model
Llama 3 Series
- llama-3-70b-instruct ⭐ (default) - Production-ready 70B model
- llama-3-8b-instruct - Fast and efficient 8B model
The default model llama-3-70b-instruct provides excellent performance for most development workflows.
Model Comparison
| Model | Parameters | Quality | Speed | Best For |
|---|---|---|---|---|
| llama-3.1-405b-instruct | 405B | Excellent | Slow | Complex analysis |
| llama-3.3-70b-instruct | 70B | Very Good | Medium | Latest features |
| llama-3-70b-instruct | 70B | Very Good | Medium | Production use |
| llama-3.1-70b-instruct | 70B | Very Good | Medium | Balanced |
| llama-3.1-8b-instruct | 8B | Good | Fast | Quick commits |
| llama-3.2-3b-instruct | 3B | Fair | Very Fast | Resource-constrained |
| llama-3.2-1b-instruct | 1B | Basic | Ultra Fast | Edge deployment |
Features
Open Source Flexibility
Llama models can be deployed anywhere.

Multiple Size Options
Choose the right size for your needs.

Configuration
Set Default Variant
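The exact command isn't reproduced here; as a sketch, assuming a hypothetical `gitwhisper config` subcommand with `--model` and `--model-variant` flags (check `gitwhisper --help` for the real syntax):

```shell
# Hypothetical syntax -- the subcommand and flag names are assumptions,
# not confirmed GitWhisper CLI options
gitwhisper config --model llama --model-variant llama-3.1-8b-instruct
```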
API Key Management
Using Environment Variables
Add to your shell configuration:
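For example, in `~/.bashrc` or `~/.zshrc` (the variable name `LLAMA_API_KEY` is an assumption; confirm the name GitWhisper expects):

```shell
# Assumed variable name -- verify against GitWhisper's documentation
export LLAMA_API_KEY="your-api-key-here"
```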
Using Saved Configuration
Save the key permanently with GitWhisper; this stores it in ~/.git_whisper.yaml.

Per-Command Key
Pass key directly to the command:
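A sketch of what this looks like, assuming a hypothetical `--api-key` flag and `commit` subcommand (neither is confirmed here; check `gitwhisper --help`):

```shell
# Hypothetical flag and subcommand names -- adjust to the actual CLI
gitwhisper --api-key "your-api-key-here" commit
```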
Best Practices
For self-hosted deployments, consider using smaller models (8B or 3B) via Ollama for better performance on consumer hardware.
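With Ollama installed, pulling a smaller variant takes one command (the model tags below are Ollama registry tags, e.g. `llama3:8b` for the 8B model):

```shell
# Pull an 8B Llama model from the Ollama registry (requires Ollama installed)
ollama pull llama3:8b
# List locally available models to confirm the download
ollama list
```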
Self-Hosting with Ollama
Llama models work great with Ollama for complete privacy.

Troubleshooting
API Key Not Found
Ensure your Llama API key is properly configured:
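A quick sanity check from the shell (the env var name `LLAMA_API_KEY` and the config path `~/.git_whisper.yaml` are the ones assumed in this guide):

```shell
# Check the environment first, then the saved configuration file
if [ -n "$LLAMA_API_KEY" ]; then
  echo "API key found in environment"
elif [ -f "$HOME/.git_whisper.yaml" ]; then
  echo "saved configuration exists at ~/.git_whisper.yaml"
else
  echo "no API key configured"
fi
```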
Invalid Model Variant
Verify the variant name is correct:
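A quick way to catch typos is to check the name against the list at the top of this page; a minimal sketch:

```shell
# All variant names listed in this document
valid="llama-3.3-70b-instruct
llama-3.2-1b-instruct
llama-3.2-3b-instruct
llama-3.1-405b-instruct
llama-3.1-70b-instruct
llama-3.1-8b-instruct
llama-3-70b-instruct
llama-3-8b-instruct"

variant="llama-3-70b-instruct"   # the name you are about to configure
if printf '%s\n' "$valid" | grep -qx "$variant"; then
  echo "valid variant: $variant"
else
  echo "unknown variant: $variant"
fi
```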
Model Too Large
If you're getting memory or timeout errors with large models:
- Use smaller variants (8B or 3B)
- Switch to cloud API for 405B model
- Consider self-hosting smaller models with Ollama
Related Resources
All Variants
See the complete list of Llama model variants
Ollama Setup
Self-host Llama models with Ollama
Model Overview
Compare all supported AI models
Configuration
Learn more about API key management