Google’s Gemini models offer cutting-edge AI capabilities with exceptional context windows, making them ideal for analyzing large codebases and complex changes.

Available Models

GitWhisper supports the latest Gemini models:

Gemini 2.5 Series (Latest)

The most advanced Gemini models with thinking capabilities:
  • gemini-2.5-pro - Advanced reasoning with thinking mode
  • gemini-2.5-flash - Faster variant (updated September 2025)
  • gemini-2.5-flash-lite - Most cost-efficient option
  • gemini-2.5-flash-image - Specialized for image generation
  • gemini-2.5-computer-use - Agent interaction capabilities

Gemini 2.0 Series

  • gemini-2.0-flash ⭐ (default) - Optimized for speed and performance
  • gemini-2.0-flash-lite - Lowest latency option

Gemini 1.5 Series

Previous generation with proven reliability:
  • gemini-1.5-pro-002 - Supports up to 2M tokens
  • gemini-1.5-flash-002 - Supports up to 1M tokens
  • gemini-1.5-flash-8b - Most cost-effective option
The default model gemini-2.0-flash provides excellent performance for most use cases with fast response times.

Key Features

Massive Context Windows

Gemini models stand out with industry-leading context sizes:
  • Gemini 1.5 Pro: Up to 2 million tokens
  • Gemini 1.5 Flash: Up to 1 million tokens
  • Gemini 2.0/2.5: Up to 1 million tokens
This means you can:
  • Analyze extremely large diffs
  • Process entire repositories
  • Understand complex multi-file changes
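
A practical way to decide whether you need a large-context variant is to check how big the staged diff actually is before committing. The sketch below builds a throwaway repository purely so it is self-contained; in your own repo, only the final `git diff --cached --stat` line matters (the size heuristic is our suggestion, not a GitWhisper feature):

```shell
# Create a throwaway repo so the example is self-contained.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
printf 'line1\nline2\n' > app.txt
git add app.txt
git commit -qm 'initial commit'

# Stage a change, then summarize its size. A large total here is a hint
# to reach for a bigger-context variant such as gemini-1.5-pro-002.
printf 'line3\n' >> app.txt
git add app.txt
git diff --cached --stat | tail -1
```

The summary line reports files changed and insertions/deletions, which is a quick proxy for how many tokens the diff will consume.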

Thinking Mode (2.5 Pro)

gemini-2.5-pro includes advanced reasoning with thinking mode:
  • Extended analysis of code changes
  • Deeper understanding of implications
  • More thorough commit message generation

Cost Efficiency

Gemini models are among the most cost-effective options:
  • Competitive pricing per token
  • Large free tier available
  • Excellent value for performance

Usage

Basic Usage

Generate commit messages with Gemini:
# Use default Gemini model (gemini-2.0-flash)
gitwhisper commit --model gemini

# Shorthand
gw commit -m gemini

Specific Variant

Choose a specific Gemini model:
# Use Gemini 2.5 Pro with thinking mode
gitwhisper commit --model gemini --model-variant gemini-2.5-pro

# Use cost-efficient Flash Lite
gitwhisper commit --model gemini --model-variant gemini-2.0-flash-lite

# Use 1.5 Pro for massive context
gitwhisper commit --model gemini --model-variant gemini-1.5-pro-002

Set as Default

Make Gemini your default model:
# Set Gemini as default
gitwhisper set-defaults --model gemini

# Set specific variant as default
gitwhisper set-defaults --model gemini --model-variant gemini-2.5-pro

API Key Setup

You need a Google AI API key to use Gemini models. Get one at makersuite.google.com/app/apikey.
gitwhisper save-key --model gemini --key "AIza..."
The key is stored securely in ~/.git_whisper.yaml.
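
Because the key lives in a plain file in your home directory, it is worth tightening its permissions so only your user can read it. This is ordinary POSIX file hygiene, not a GitWhisper requirement; the `touch` below merely stands in for the file that `save-key` would create:

```shell
# Restrict the config file so only the owner can read or write it.
touch ~/.git_whisper.yaml        # stand-in; `gitwhisper save-key` creates it
chmod 600 ~/.git_whisper.yaml
ls -l ~/.git_whisper.yaml | cut -c1-10
```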

Model Comparison

| Model          | Capabilities | Speed      | Context   | Best For          |
|----------------|--------------|------------|-----------|-------------------|
| 2.5 Pro        | ⭐⭐⭐⭐⭐   | ⭐⭐⭐     | 1M tokens | Complex reasoning |
| 2.5 Flash      | ⭐⭐⭐⭐     | ⭐⭐⭐⭐⭐ | 1M tokens | Balanced use      |
| 2.5 Flash Lite | ⭐⭐⭐       | ⭐⭐⭐⭐⭐ | 1M tokens | Cost efficiency   |

Use Cases

Large Codebase Analysis

Gemini 1.5 Pro with its 2M token context is perfect for:
# Analyze massive refactoring across many files
gitwhisper commit --model gemini --model-variant gemini-1.5-pro-002
  • Repository-wide refactoring
  • Large-scale architectural changes
  • Multi-package updates

Fast Daily Commits

Gemini 2.0 Flash provides excellent speed:
# Quick commits for routine changes
gitwhisper commit --model gemini
  • Rapid iteration during development
  • Small to medium changes
  • Cost-effective for high volume

Complex Reasoning

Gemini 2.5 Pro with thinking mode excels at:
# Deep analysis of complex changes
gitwhisper commit --model gemini --model-variant gemini-2.5-pro
  • Subtle bug fixes
  • Complex business logic changes
  • Algorithm improvements

Code Analysis

Gemini models provide comprehensive code analysis:
# Analyze with default model
gitwhisper analyze --model gemini

# Use thinking mode for deep analysis
gitwhisper analyze --model gemini --model-variant gemini-2.5-pro

# Analyze large changes with max context
gitwhisper analyze --model gemini --model-variant gemini-1.5-pro-002

Analysis Capabilities

  1. Pattern Recognition: Identifies patterns and anti-patterns in code
  2. Impact Analysis: Assesses how changes affect the broader system
  3. Security Review: Spots potential security issues
  4. Performance Insights: Suggests performance optimizations

Best Practices

Use Gemini 1.5 Pro when:
  • Changes span dozens of files
  • Need to understand entire repository context
  • Working with monorepos
Use Gemini 2.5 Pro when:
  • Need deep reasoning about complex logic
  • Changes involve subtle implications
  • Want the highest quality analysis
Use Gemini 2.0 Flash when:
  • Standard daily commits
  • Speed is important
  • Cost-effective processing
Even with large context windows, organize your commits:
# Stage related changes together
git add src/auth/*.js
gitwhisper commit --model gemini

# Then commit other changes separately
git add src/api/*.js
gitwhisper commit --model gemini
For maximum benefit from Gemini 2.5 Pro’s thinking mode:
  • Use it for non-obvious changes
  • Allow extra time for processing
  • Review the detailed analysis
  • Use for architecture decisions

Pricing

Google AI offers competitive pricing:
  • Free tier: Generous quota for experimentation
  • Input tokens: Billed per token at model-specific rates
  • Output tokens: Typically lower rate
  • Flash models: Most cost-effective
Check Google AI pricing for current rates.
The free tier is often sufficient for individual developers and small teams.

Troubleshooting

Error: Invalid API key
Solution: Verify your API key:
  1. Visit Google AI Studio
  2. Create or verify your API key
  3. Save it:
gitwhisper save-key --model gemini --key "AIza..."
Error: Quota exceeded
Solution: You’ve hit the free tier limit. Either:
  • Wait for quota reset (usually daily)
  • Upgrade to paid tier
  • Use a different model temporarily
Error: Model not found
Solution: Some models may not be available in all regions. List the models available to you:
gitwhisper list-variants --model gemini

Advantages Over Other Models

vs OpenAI

  • Gemini: Much larger context (1-2M vs 128K)
  • Gemini: More cost-effective
  • OpenAI: Slightly more mature ecosystem

vs Claude

  • Gemini: 10x larger context (2M vs 200K)
  • Gemini: Better pricing
  • Claude: Superior reasoning in some cases

vs Ollama

  • Gemini: State-of-the-art capabilities
  • Ollama: Complete privacy (local)
  • Gemini: No hardware requirements

vs Free Model

  • Gemini: Much higher quality
  • Free Model: No API key needed
  • Gemini: More reliable and consistent

Next Steps

  • All Variants - Complete list of Gemini model variants
  • OpenAI Models - Compare with OpenAI models
  • Code Analysis - Deep dive into analysis features
  • Configuration - API key setup guide
