🟡 Beta

nanx supports multiple AI providers for features like commit message generation and smart release analysis. Configure one or more providers to enable AI-powered workflows.

Supported Providers

  • Anthropic - Claude models (3.5 Sonnet, Opus, Haiku)
  • OpenAI - GPT models (GPT-4, GPT-3.5)
  • Google - Gemini models
  • OpenCode - Auto-detected when running in an OpenCode environment New in v0.3.0
  • Custom - OpenAI-compatible APIs (Ollama, local LLMs)

Quick Start

Add a provider to your config file:

# ~/.config/surkyl/nanx/config.yaml
providers:
  - name: claude
    type: anthropic
    api_key: sk-ant-api03-...
    model: claude-3-5-sonnet-20241022

repo:
  commit:
    generate_message:
      default_provider: claude
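
With this default in place, AI-backed commands just work; for example, the commit message shorthand covered later in this guide:

nanx r cgm  # Generate a commit message with the default provider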

Anthropic (Claude)

🟢 Stable

Getting an API Key

  1. Sign up at console.anthropic.com
  2. Navigate to API Keys in the dashboard
  3. Click Create Key and copy your API key
  4. Store it securely - it starts with sk-ant-api03-

Configuration

providers:
  - name: claude
    type: anthropic
    api_key: sk-ant-api03-YOUR_KEY_HERE
    model: claude-3-5-sonnet-20241022  # Recommended

Available Models

Model                        Use Case                     Cost
claude-3-5-sonnet-20241022   Best balance (recommended)   $$
claude-3-opus-20240229       Most capable, slower         $$$
claude-3-haiku-20240307      Fastest, lower cost          $
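
If cost or latency matters more than quality, the same provider entry can point at Haiku instead; only the model field changes (a variant of the config above, using a model from this table):

providers:
  - name: claude
    type: anthropic
    api_key: sk-ant-api03-YOUR_KEY_HERE
    model: claude-3-haiku-20240307  # Fastest, lower cost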

OpenAI (GPT)

🟢 Stable

Getting an API Key

  1. Sign up at platform.openai.com
  2. Go to API Keys section
  3. Click Create new secret key
  4. Store it securely - it starts with sk-

Configuration

providers:
  - name: gpt
    type: openai
    api_key: sk-YOUR_KEY_HERE
    model: gpt-4-turbo  # or gpt-4, gpt-3.5-turbo

Available Models

Model           Use Case               Cost
gpt-4-turbo     Latest, most capable   $$$
gpt-4           High quality           $$$
gpt-3.5-turbo   Fast, economical       $
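
With the provider named gpt as above, you can select it for a single command without changing your default:

nanx r commit --generate-message --provider gpt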

Google (Gemini)

🟡 Beta

Getting an API Key

  1. Go to Google AI Studio
  2. Click Get API Key
  3. Create or select a project
  4. Copy your API key

Configuration

providers:
  - name: gemini
    type: google
    api_key: YOUR_GOOGLE_API_KEY
    model: gemini-pro

Available Models

Model               Use Case
gemini-pro          Text generation
gemini-pro-vision   Text + images
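
As with the other providers, select the gemini entry per invocation when needed:

nanx r commit --gm --provider gemini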

OpenCode New in v0.3.0

🟡 Beta

When running inside OpenCode, nanx can automatically detect and use the OpenCode AI environment. This provides seamless integration without requiring manual API key configuration.

Automatic Detection

nanx automatically detects OpenCode when the OPENCODE_AI_* environment variables are present. No manual configuration is required - just use nanx commands as normal and they will use OpenCode's AI backend.
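
To confirm that detection will trigger, check for those variables in your shell (a plain shell check, not a nanx command):

env | grep '^OPENCODE_AI_'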

Manual Configuration

If you want to explicitly configure OpenCode as a provider:

providers:
  - name: opencode
    type: opencode
    # API key and model are auto-detected from environment

Using OpenCode

# When running in OpenCode, AI features work automatically
nanx r cgm  # Uses OpenCode's AI backend

# Or explicitly specify the provider
nanx r commit --gm --provider opencode

Note: OpenCode provider costs depend on the underlying AI model configured in your OpenCode environment.

Custom Providers (Ollama, Local LLMs)

🟠 Alpha

Use any OpenAI-compatible API endpoint, including local models via Ollama, LM Studio, or other OpenAI-compatible servers.

Ollama Setup

  1. Install Ollama: curl -fsSL https://ollama.com/install.sh | sh
  2. Pull a model: ollama pull qwen2.5-coder:32b
  3. Start Ollama: ollama serve
  4. Configure nanx to use it:

providers:
  - name: local
    type: custom
    base_url: http://localhost:11434/v1
    api_key: ollama  # Can be any value for Ollama
    model: qwen2.5-coder:32b
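
Once Ollama is running, you can verify the endpoint nanx will call (the same check suggested under Troubleshooting below):

curl http://localhost:11434/v1/models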

Recommended Models for Code

  • qwen2.5-coder:32b - Excellent for code understanding
  • deepseek-coder:33b - Strong coding capabilities
  • codellama:34b - Meta's code-focused model
  • mistral - Good general-purpose model
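
To switch models, pull the one you want and update the model field in your config to match:

ollama pull deepseek-coder:33b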

Custom Headers

Some custom providers require additional headers:

providers:
  - name: custom-api
    type: custom
    base_url: https://api.example.com/v1
    api_key: your-api-key
    model: custom-model
    headers:
      X-Custom-Header: value
      Authorization: Bearer token

Using Multiple Providers

Configure multiple providers and switch between them:

providers:
  - name: claude
    type: anthropic
    api_key: sk-ant-...
    model: claude-3-5-sonnet-20241022

  - name: gpt
    type: openai
    api_key: sk-...
    model: gpt-4-turbo

  - name: local
    type: custom
    base_url: http://localhost:11434/v1
    api_key: ollama
    model: qwen2.5-coder:32b

repo:
  commit:
    generate_message:
      default_provider: claude  # Use Claude by default

Override the default provider when needed:

# Use default provider (Claude)
nanx r cgm

# Use specific provider
nanx r commit --generate-message --provider gpt
nanx r commit --gm --provider local

Security Best Practices

Recommended Practices

  1. Use user-level config - Store API keys in ~/.config/surkyl/nanx/config.yaml, not project configs
  2. Set file permissions - chmod 600 ~/.config/surkyl/nanx/config.yaml
  3. Use environment variables - Reference env vars in config (future feature)
  4. Rotate keys regularly - Regenerate API keys periodically
  5. Use .gitignore - Add .surkyl/ to your global gitignore

File Permissions

# Secure your config file
chmod 600 ~/.config/surkyl/nanx/config.yaml

# Add to global gitignore
echo ".surkyl/" >> ~/.gitignore_global
git config --global core.excludesfile ~/.gitignore_global

Cost Management

AI API calls incur costs based on token usage. Typical costs for nanx usage:

Estimated Costs per Operation

Operation          Claude 3.5   GPT-4        GPT-3.5
Commit message     $0.01-0.02   $0.02-0.04   $0.001-0.002
Release analysis   $0.05-0.10   $0.10-0.20   $0.005-0.010

Note: Local models (Ollama) have no per-use costs after setup.
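
As a rough worked example from the table above: generating 200 commit messages a month costs about $2-4 with Claude 3.5, $4-8 with GPT-4, or $0.20-0.40 with GPT-3.5.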

Usage Monitoring

Track your AI usage with the monitor command:

nanx monitor  # View AI usage dashboard

Troubleshooting

Authentication Errors

  • Verify your API key is correct and active - see the standalone check below
  • Check if the key has the required permissions
  • Ensure no extra spaces or newlines in the key
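
If an OpenAI key keeps failing, test it outside of nanx against the standard OpenAI REST API (a generic check, independent of nanx):

curl -s https://api.openai.com/v1/models -H "Authorization: Bearer sk-YOUR_KEY_HERE"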

Rate Limits

  • Each provider has rate limits - check their documentation
  • Consider using multiple providers for high-volume usage
  • Local models (Ollama) have no rate limits

Connection Issues

  • Check your internet connection
  • For custom providers, verify the base_url is accessible
  • Test with: curl -v <base_url>/models

Next Steps