Prerequisites
- JetBrains IDE (IntelliJ IDEA, PyCharm, WebStorm, etc.) with AI Assistant plugin installed
- Ollama API Proxy running locally or on a remote server
- At least one provider API key configured (OPENAI_API_KEY, GEMINI_API_KEY, or OPENROUTER_API_KEY)
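A minimal `.env` sketch, assuming the key names listed above (the placeholder values are illustrative, not real keys):

```ini
# Configure at least one of these; the proxy loads whichever keys are present.
OPENAI_API_KEY=sk-your-key-here
GEMINI_API_KEY=your-key-here
OPENROUTER_API_KEY=your-key-here
```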
Configuration
Start the Ollama API Proxy
Ensure the proxy server is running. By default, it runs on http://localhost:11434. On startup you should see output listing the available providers and models.
Open JetBrains AI Assistant settings
In your JetBrains IDE:
- Navigate to Settings (or Preferences on macOS)
- Go to Tools → AI Assistant
- Click on Model Provider
Configure Ollama as the provider
In the Model Provider settings:
- Select Ollama from the provider dropdown
- Set the Server URL to your proxy location:
  - Local: http://localhost:11434
  - Remote: http://your-server:11434
- Click Test Connection to verify the connection
Select a model
Once connected, you’ll see all available models from the proxy:
- OpenAI models: `gpt-4o-mini`, `gpt-4.1-mini`, `gpt-4.1-nano`, `gpt-4o`, `gpt-5-nano`
- Google models: `gemini-2.5-flash`, `gemini-2.5-flash-lite`
- OpenRouter models: `deepseek-r1`, `kimi-k2`
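The IDE discovers models through Ollama's standard `/api/tags` endpoint, which the proxy emulates. A sketch of extracting model names from that style of response (the sample payload below is illustrative; the proxy's exact field values may differ):

```python
import json

def model_names(tags_response: str) -> list[str]:
    """Extract model names from an Ollama-style /api/tags JSON response."""
    payload = json.loads(tags_response)
    return [m["name"] for m in payload.get("models", [])]

# Illustrative response shape, not the proxy's literal output.
sample = '{"models": [{"name": "gpt-4o-mini"}, {"name": "gemini-2.5-flash"}]}'
print(model_names(sample))  # ['gpt-4o-mini', 'gemini-2.5-flash']
```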
The available models depend on which API keys you’ve configured in your `.env` file. Models will only appear if their corresponding provider is available.
Start using AI Assistant
You’re all set! Use JetBrains AI Assistant features:
- Code completion and suggestions
- Chat with AI about your code
- Explain code functionality
- Generate tests and documentation
- Refactor code with AI assistance
Switching between models
You can easily switch between different models:
- Open the AI Assistant chat panel
- Click the model selector at the top
- Choose a different model from the available options
Troubleshooting
Connection failed
If the test connection fails:
- Verify the proxy server is running (`curl http://localhost:11434` should return “Ollama is running in proxy mode.”)
- Check the server URL is correct
- Ensure no firewall is blocking the connection
- Check the proxy logs for any error messages
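The first check above can be scripted. A minimal sketch, assuming the default URL from this guide, that simply reports whether anything answers at the proxy's address:

```python
import urllib.error
import urllib.request

def proxy_reachable(url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an HTTP server responds successfully at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout, or non-2xx response.
        return False

if __name__ == "__main__":
    if proxy_reachable():
        print("Proxy is up.")
    else:
        print("Proxy unreachable - is the server running on port 11434?")
```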
No models available
If no models appear in the selection:
- Verify at least one provider API key is configured in your `.env` file
- Check the proxy startup logs to see which providers are loaded
- Ensure `models.json` exists and is properly formatted
- Restart the proxy server after adding new API keys
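A quick way to run the `models.json` check above. This sketch only verifies that the file exists and parses as valid JSON; it makes no assumption about the proxy's exact schema:

```python
import json
from pathlib import Path

def check_models_file(path: str = "models.json") -> str:
    """Report whether the models file exists and parses as valid JSON."""
    p = Path(path)
    if not p.exists():
        return f"{path} not found"
    try:
        json.loads(p.read_text())
    except json.JSONDecodeError as exc:
        return f"{path} is malformed: {exc}"
    return f"{path} looks OK"

print(check_models_file())
```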
Model not responding
If a selected model doesn’t respond:
- Check the proxy server logs for API errors
- Verify your API key for that provider is valid and has sufficient quota
- Try a different model to isolate the issue
- Check your network connection
Next steps
Supported Models
View all available models and their providers
Vision Support
Learn how to use vision models with images
