LLM Providers

Dorgu can use large language models to enhance its application analysis. LLM integration provides deeper code understanding, framework-specific recommendations, smarter resource sizing, and security best practices tailored to your stack.
LLM enhancement is optional. Dorgu’s static analysis and manifest generation work without any LLM provider configured. The LLM layer adds refinements on top of the base analysis.

Supported Providers

Provider    Models                     Env Variable                        Default Model
OpenAI      gpt-4, gpt-3.5-turbo       OPENAI_API_KEY                      gpt-4
Anthropic   claude-3-sonnet            ANTHROPIC_API_KEY                   claude-3-sonnet
Gemini      gemini-1.5-pro             GEMINI_API_KEY or GOOGLE_API_KEY    gemini-1.5-pro
Ollama      any locally hosted model   OLLAMA_HOST (endpoint URL)          configurable

What LLM Enhances

When an LLM provider is configured, Dorgu uses it to improve several areas of analysis:
  • Deeper code analysis — understands business logic, not just file structure
  • Framework-specific recommendations — tailored settings for Express, Spring Boot, Django, etc.
  • Resource sizing — more accurate CPU/memory recommendations based on application patterns
  • Security best practices — context-aware security recommendations beyond generic defaults

OpenAI

Setup

# Option 1: Environment variable
export OPENAI_API_KEY="sk-proj-..."

# Option 2: Global config
dorgu config set llm.provider openai
dorgu config set llm.api_key "sk-proj-..."

Usage

dorgu generate . --llm-provider openai

Model Override

dorgu config set llm.model gpt-3.5-turbo

Anthropic

Setup

# Option 1: Environment variable
export ANTHROPIC_API_KEY="sk-ant-..."

# Option 2: Global config
dorgu config set llm.provider anthropic
dorgu config set llm.api_key "sk-ant-..."

Usage

dorgu generate . --llm-provider anthropic

Gemini

Setup

# Option 1: Environment variable (either key works)
export GEMINI_API_KEY="AIza..."
# or
export GOOGLE_API_KEY="AIza..."

# Option 2: Global config
dorgu config set llm.provider gemini
dorgu config set llm.api_key "AIza..."

Usage

dorgu generate . --llm-provider gemini

Ollama

Ollama lets you run LLMs locally without an API key. You need a running Ollama instance with at least one model pulled.

Prerequisites

# Install Ollama (see https://ollama.ai)
# Pull a model
ollama pull llama3

Setup

# Option 1: Environment variable for custom endpoint
export OLLAMA_HOST="http://localhost:11434"

# Option 2: Global config
dorgu config set llm.provider ollama
dorgu config set llm.model llama3

Usage

dorgu generate . --llm-provider ollama
Ollama does not require an API key. The OLLAMA_HOST variable is only needed if your Ollama instance runs on a non-default address.
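Before pointing Dorgu at Ollama, it can help to confirm the instance is reachable. The snippet below is a quick sanity check, assuming the default address from the Setup example above; /api/tags is Ollama's REST endpoint for listing pulled models.

```shell
# Use OLLAMA_HOST if set, otherwise fall back to the default local address
host="${OLLAMA_HOST:-http://localhost:11434}"
# List the models this instance has pulled; print a hint if unreachable
curl -s "$host/api/tags" || echo "Ollama not reachable at $host"
```

If the response lists no models, run `ollama pull llama3` (or another model) first.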

API Key Resolution Order

When Dorgu needs an API key, it checks the following sources in order:
  1. Environment variable — OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY, or GOOGLE_API_KEY
  2. Global config — llm.api_key in ~/.config/dorgu/config.yaml
  3. Interactive prompt — if running in a terminal and no key is found, Dorgu prompts you to enter one
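The lookup above can be sketched in shell. This is a hypothetical illustration of the precedence, not Dorgu's actual implementation; the config path and key name follow the list above, using OpenAI as the example provider.

```shell
# Hypothetical sketch of the key lookup order -- not Dorgu's real code
resolve_api_key() {
  # 1. Environment variable wins
  if [ -n "$OPENAI_API_KEY" ]; then
    printf '%s\n' "$OPENAI_API_KEY"
    return 0
  fi
  # 2. Fall back to llm.api_key in the global config
  config="$HOME/.config/dorgu/config.yaml"
  if [ -f "$config" ]; then
    key=$(awk -F': *' '/^[[:space:]]*api_key:/ { gsub(/"/, "", $2); print $2 }' "$config")
    if [ -n "$key" ]; then
      printf '%s\n' "$key"
      return 0
    fi
  fi
  # 3. No key found: this is where the interactive prompt would appear
  return 1
}
```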

Setting Provider in .dorgu.yaml

You can also set the LLM provider at the workspace or app level:
# In .dorgu.yaml
llm:
  provider: "openai"
  model: "gpt-4"
This is useful when different projects require different providers or models.
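For example, a project that should stay fully local can point the same block at Ollama (a sketch reusing the provider and model names from the Ollama section above):

```yaml
# .dorgu.yaml for a project using a local Ollama model
llm:
  provider: "ollama"
  model: "llama3"
```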