nanx supports multiple AI providers for features like commit message generation and smart release analysis. Configure one or more providers to enable AI-powered workflows.
## Supported Providers

- Anthropic - Claude models (3.5 Sonnet, Opus, Haiku)
- OpenAI - GPT models (GPT-4, GPT-3.5)
- Google - Gemini models
- OpenCode - auto-detected when running in the OpenCode environment (new in v0.3.0)
- Custom - OpenAI-compatible APIs (Ollama, local LLMs)
## Quick Start

Add a provider to your config file:

```yaml
# ~/.config/surkyl/nanx/config.yaml
providers:
  - name: claude
    type: anthropic
    api_key: sk-ant-api03-...
    model: claude-3-5-sonnet-20241022

repo:
  commit:
    generate_message:
      default_provider: claude
```

## Anthropic (Claude)
Status: Stable

### Getting an API Key

- Sign up at console.anthropic.com
- Navigate to API Keys in the dashboard
- Click Create Key and copy your API key
- Store it securely - it starts with `sk-ant-api03-`
### Configuration

```yaml
providers:
  - name: claude
    type: anthropic
    api_key: sk-ant-api03-YOUR_KEY_HERE
    model: claude-3-5-sonnet-20241022  # Recommended
```

### Available Models
| Model | Use Case | Cost |
|---|---|---|
| claude-3-5-sonnet-20241022 | Best balance (recommended) | $$ |
| claude-3-opus-20240229 | Most capable, slower | $$$ |
| claude-3-haiku-20240307 | Fastest, lower cost | $ |
## OpenAI (GPT)

Status: Stable

### Getting an API Key

- Sign up at platform.openai.com
- Go to the API Keys section
- Click Create new secret key
- Store it securely - it starts with `sk-`
### Configuration

```yaml
providers:
  - name: gpt
    type: openai
    api_key: sk-YOUR_KEY_HERE
    model: gpt-4-turbo  # or gpt-4, gpt-3.5-turbo
```

### Available Models
| Model | Use Case | Cost |
|---|---|---|
| gpt-4-turbo | Latest, most capable | $$$ |
| gpt-4 | High quality | $$$ |
| gpt-3.5-turbo | Fast, economical | $ |
## Google (Gemini)

Status: Beta

### Getting an API Key

- Go to Google AI Studio
- Click Get API Key
- Create or select a project
- Copy your API key
### Configuration

```yaml
providers:
  - name: gemini
    type: google
    api_key: YOUR_GOOGLE_API_KEY
    model: gemini-pro
```

### Available Models
| Model | Use Case |
|---|---|
| gemini-pro | Text generation |
| gemini-pro-vision | Text + images |
## OpenCode (new in v0.3.0)

Status: Beta

When running inside OpenCode, nanx can automatically detect and use the OpenCode AI environment. This provides seamless integration without requiring manual API key configuration.

### Automatic Detection

nanx automatically detects OpenCode when the OPENCODE_AI_* environment variables are present. No manual configuration is required - just use nanx commands as normal and they will use OpenCode's AI backend.
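You can check whether that detection precondition holds in your current shell by listing the relevant variables (a quick sanity check, not a nanx command):

```shell
# List any OPENCODE_AI_* variables nanx would detect;
# falls back to a message when none are set
env | grep '^OPENCODE_AI_' || echo "OPENCODE_AI_* not set - not running inside OpenCode"
```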
### Manual Configuration

If you want to explicitly configure OpenCode as a provider:

```yaml
providers:
  - name: opencode
    type: opencode
    # API key and model are auto-detected from the environment
```

### Using OpenCode
```shell
# When running in OpenCode, AI features work automatically
nanx r cgm  # Uses OpenCode's AI backend

# Or explicitly specify the provider
nanx r commit --gm --provider opencode
```

Note: OpenCode provider costs depend on the underlying AI model configured in your OpenCode environment.
## Custom Providers (Ollama, Local LLMs)

Status: Alpha

Use any OpenAI-compatible API endpoint, including local models via Ollama, LM Studio, or other OpenAI-compatible servers.
### Ollama Setup

- Install Ollama:

  ```shell
  curl -fsSL https://ollama.com/install.sh | sh
  ```

- Pull a model:

  ```shell
  ollama pull qwen2.5-coder:32b
  ```

- Start Ollama:

  ```shell
  ollama serve
  ```

- Configure nanx to use it:

  ```yaml
  providers:
    - name: local
      type: custom
      base_url: http://localhost:11434/v1
      api_key: ollama  # Can be any value for Ollama
      model: qwen2.5-coder:32b
  ```

### Recommended Models for Code
- qwen2.5-coder:32b - Excellent for code understanding
- deepseek-coder:33b - Strong coding capabilities
- codellama:34b - Meta's code-focused model
- mistral - Good general-purpose model
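Once `ollama serve` is running, you can confirm that the OpenAI-compatible endpoint nanx will call is reachable (the port is Ollama's default; the fallback message exists only for this sketch):

```shell
# Query Ollama's OpenAI-compatible model list; prints a message if unreachable
curl -sf --max-time 2 http://localhost:11434/v1/models || echo "Ollama is not reachable on localhost:11434"
```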
### Custom Headers

Some custom providers require additional headers:

```yaml
providers:
  - name: custom-api
    type: custom
    base_url: https://api.example.com/v1
    api_key: your-api-key
    model: custom-model
    headers:
      X-Custom-Header: value
      Authorization: Bearer token
```

## Using Multiple Providers
Configure multiple providers and switch between them:

```yaml
providers:
  - name: claude
    type: anthropic
    api_key: sk-ant-...
    model: claude-3-5-sonnet-20241022
  - name: gpt
    type: openai
    api_key: sk-...
    model: gpt-4-turbo
  - name: local
    type: custom
    base_url: http://localhost:11434/v1
    api_key: ollama
    model: qwen2.5-coder:32b

repo:
  commit:
    generate_message:
      default_provider: claude  # Use Claude by default
```

Override the default provider when needed:
```shell
# Use the default provider (Claude)
nanx r cgm

# Use a specific provider
nanx r commit --generate-message --provider gpt
nanx r commit --gm --provider local
```

## Security Best Practices
### Recommended Practices

- Use user-level config - store API keys in `~/.config/surkyl/nanx/config.yaml`, not in project configs
- Set file permissions - `chmod 600 ~/.config/surkyl/nanx/config.yaml`
- Use environment variables - reference env vars in config (future feature)
- Rotate keys regularly - regenerate API keys periodically
- Use .gitignore - add `.surkyl/` to your global gitignore
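The file-permissions step above can be verified after the fact; a small sketch on a throwaway file, using GNU coreutils `stat` (on macOS the equivalent flag is `stat -f '%Lp'`):

```shell
# Create a scratch file, lock it down, and confirm the resulting mode
touch /tmp/nanx-perms-demo.yaml
chmod 600 /tmp/nanx-perms-demo.yaml
stat -c '%a' /tmp/nanx-perms-demo.yaml   # prints: 600
rm /tmp/nanx-perms-demo.yaml
```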
### File Permissions

```shell
# Secure your config file
chmod 600 ~/.config/surkyl/nanx/config.yaml

# Add to global gitignore
echo ".surkyl/" >> ~/.gitignore_global
git config --global core.excludesfile ~/.gitignore_global
```

## Cost Management
AI API calls incur costs based on token usage. Typical costs for nanx usage:
### Estimated Costs per Operation

| Operation | Claude 3.5 | GPT-4 | GPT-3.5 |
|---|---|---|---|
| Commit message | $0.01-0.02 | $0.02-0.04 | $0.001-0.002 |
| Release analysis | $0.05-0.10 | $0.10-0.20 | $0.005-0.010 |
Note: Local models (Ollama) have no per-use costs after setup.
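To turn the per-operation figures into a rough budget, multiply by your volume; a sketch assuming 20 commit messages per day at the Claude 3.5 midpoint of about $0.015 each:

```shell
# Rough monthly estimate: 20 messages/day x ~$0.015 each x 30 days
awk 'BEGIN { printf "~$%.2f/month\n", 20 * 0.015 * 30 }'   # prints: ~$9.00/month
```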
### Usage Monitoring

Track your AI usage with the monitor command:

```shell
nanx monitor  # View AI usage dashboard
```

## Troubleshooting
### Authentication Errors
- Verify your API key is correct and active
- Check if the key has the required permissions
- Ensure no extra spaces or newlines in the key
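A stray newline or trailing space in a pasted key is easy to miss; one quick way to strip it before putting the key in your config (the key shown is a placeholder):

```shell
# A pasted key with an accidental trailing space (placeholder value)
key='sk-ant-api03-EXAMPLE '

# Strip all surrounding whitespace, including newlines
clean=$(printf '%s' "$key" | tr -d '[:space:]')
printf '%s\n' "$clean"   # prints: sk-ant-api03-EXAMPLE
```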
### Rate Limits
- Each provider has rate limits - check their documentation
- Consider using multiple providers for high-volume usage
- Local models (Ollama) have no rate limits
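For scripted high-volume use, transient rate-limit errors can be smoothed over with a retry wrapper; a generic shell sketch (not a nanx feature, purely illustrative):

```shell
# Retry a command up to 3 times with an increasing delay between attempts
retry() {
  local attempt=1
  until "$@"; do
    [ "$attempt" -ge 3 ] && return 1   # give up after the third failure
    sleep "$attempt"
    attempt=$((attempt + 1))
  done
}

retry echo "request succeeded"   # prints: request succeeded
```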
### Connection Issues

- Check your internet connection
- For custom providers, verify the `base_url` is accessible - test with: `curl -v <base_url>/models`
## Next Steps
- AI Commit Workflow - Learn how to use AI-generated commits
- Config File Reference - All configuration options
- Repo Commands - Repository command reference