Unified Interface Across AI Clients
One Locus command interface across Claude and Codex -- switch providers without changing workflows.
Locus gives you one CLI to plan, execute, review, and iterate -- regardless of whether you use Claude or Codex under the hood.
What "One Interface" Means
The workflow surface stays the same across providers:
Plan: locus plan
Execute: locus run
Observe: locus status, locus logs
Provider choice is a configuration toggle:
```shell
# Use Claude
locus config set ai.model claude-sonnet-4-6

# Switch to Codex
locus config set ai.model gpt-5.3-codex
```
When ai.model changes, Locus infers the provider automatically. All commands continue to work identically.
Side-by-Side Workflow Equivalence
The command surface is intentionally identical across clients:
| Workflow Stage | Claude | Codex |
| --- | --- | --- |
| Configure model | `locus config set ai.model claude-sonnet-4-6` | `locus config set ai.model gpt-5.3-codex` |
| Plan | `locus plan "Add webhook retries" --sprint "Sprint 7"` | `locus plan "Add webhook retries" --sprint "Sprint 7"` |
| Execute | `locus sprint active "Sprint 7"` then `locus run` | `locus sprint active "Sprint 7"` then `locus run` |
The only required change is the model selection; every other command is identical.
Supported Models
Claude (Anthropic)
claude-sonnet-4-6 or sonnet
claude-haiku-4-5-20251001 or haiku
Codex (OpenAI)
gpt-5.4
gpt-5.4-pro
gpt-5.3-codex
Automatic Provider Inference
Locus infers the provider from the model name:
Known Claude aliases (opus, sonnet, haiku) → Claude
Known Codex models (gpt-5.4, gpt-5.4-pro, gpt-5.3-codex, etc.) → Codex
Model name starts with claude- → Claude
Model name contains codex → Codex
You rarely need to set ai.provider directly -- setting ai.model is sufficient.
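The inference rules above can be sketched as a small function. This is an illustrative sketch that mirrors the documented rules, not Locus's actual implementation; the function name and return values are assumptions:

```python
def infer_provider(model: str) -> str:
    """Guess the provider from a model name, following the rules above
    (illustrative sketch -- not Locus's actual code)."""
    model = model.lower()
    claude_aliases = {"opus", "sonnet", "haiku"}          # known Claude aliases
    codex_models = {"gpt-5.4", "gpt-5.4-pro", "gpt-5.3-codex"}  # known Codex models
    if model in claude_aliases or model.startswith("claude-"):
        return "claude"
    if model in codex_models or "codex" in model:
        return "codex"
    raise ValueError(f"cannot infer provider for model {model!r}")

print(infer_provider("sonnet"))         # claude
print(infer_provider("gpt-5.4"))        # codex
```

Note that a model like gpt-5.4 does not contain the substring codex, which is why a known-models list is needed in addition to the substring rule.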
Per-Command Override
Override the model for a single run without changing your configuration:
The --model flag takes precedence for that execution only.
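For example, assuming `--model` is accepted by `locus run` (the flag placement here is illustrative):

```shell
# One-off run on Codex while ai.model stays set to a Claude model
locus run --model gpt-5.3-codex
```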
Environment Variables
AI provider authentication is handled through environment variables or provider CLI login, not through Locus:
Locus does not store provider API keys.
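As an illustration, assuming the standard provider variable names (ANTHROPIC_API_KEY for Claude, OPENAI_API_KEY for Codex -- confirm against your provider's documentation):

```shell
# Claude (Anthropic)
export ANTHROPIC_API_KEY="sk-ant-..."

# Codex (OpenAI)
export OPENAI_API_KEY="sk-..."
```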
Recommended Adoption Path
Standardize team docs around plan -> run -> review -> iterate.
Treat model/provider as a configuration toggle (ai.model), not a workflow fork.
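Under those guidelines, a provider-agnostic team runbook might look like the following sketch, using only commands shown on this page:

```shell
# Same workflow regardless of provider -- only this line varies
locus config set ai.model sonnet        # or: gpt-5.3-codex

locus plan "Add webhook retries" --sprint "Sprint 7"
locus sprint active "Sprint 7"
locus run
locus status && locus logs
```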