Configuration
Models
claude-haiku-4-5
  provider: anthropic
  model_id: claude-haiku-4-5-20251001
  api_key_env: ANTHROPIC_API_KEY
  context: 200,000
  timeout: 120s
  max out: 10,000 / 64,000
  cost in: 0.0008 per 1K tokens
  cost out: 0.004 per 1K tokens
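The field formats in these entries are compact strings: timeouts carry an "s" suffix and "max out" is a configured-limit / model-maximum pair. A small sketch of how such values could be parsed into numbers (the parser itself is hypothetical, not part of the tool):

```python
# Hypothetical parsers for the listing's compact field formats.

def parse_timeout(value: str) -> int:
    """'120s' -> 120 (seconds)."""
    assert value.endswith("s"), "timeouts in this listing use an 's' suffix"
    return int(value[:-1])

def parse_max_out(value: str) -> tuple[int, int]:
    """'10,000 / 64,000' -> (configured_limit, model_maximum)."""
    configured, maximum = (int(p.strip().replace(",", "")) for p in value.split("/"))
    return configured, maximum

print(parse_timeout("120s"))             # 120
print(parse_max_out("10,000 / 64,000"))  # (10000, 64000)
```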
claude-opus-4-5
  provider: anthropic
  model_id: claude-opus-4-5
  api_key_env: ANTHROPIC_API_KEY
  context: 200,000
  timeout: 900s
  max out: 19,200 / 64,000
  cost in: 0.005 per 1K tokens
  cost out: 0.025 per 1K tokens
claude-opus-4-6
  provider: anthropic
  model_id: claude-opus-4-6
  api_key_env: ANTHROPIC_API_KEY
  context: 200,000
  timeout: 900s
  max out: 38,400 / 128,000
  cost in: 0.005 per 1K tokens
  cost out: 0.025 per 1K tokens
claude-sonnet-4-5
  provider: anthropic
  model_id: claude-sonnet-4-5-20250929
  api_key_env: ANTHROPIC_API_KEY
  context: 200,000
  timeout: 900s
  max out: 19,200 / 64,000
  cost in: 0.003 per 1K tokens
  cost out: 0.015 per 1K tokens
claude-sonnet-4-6
  provider: anthropic
  model_id: claude-sonnet-4-6
  api_key_env: ANTHROPIC_API_KEY
  context: 200,000
  timeout: 900s
  max out: 19,200 / 64,000
  cost in: 0.003 per 1K tokens
  cost out: 0.015 per 1K tokens
deepseek-chat
  provider: openai
  model_id: deepseek-chat
  api_key_env: DEEPSEEK_API_KEY
  api_base: https://api.deepseek.com
  context: 128,000
  timeout: 900s
  max out: 49,152 / 163,840
  cost in: 0.00028 per 1K tokens
  cost out: 0.00042 per 1K tokens
deepseek-reasoner
  provider: openai
  model_id: deepseek-reasoner
  api_key_env: DEEPSEEK_API_KEY
  api_base: https://api.deepseek.com
  context: 128,000
  timeout: 500s
  max out: 8,100 / 64,000
  cost in: 0.00028 per 1K tokens
  cost out: 0.00042 per 1K tokens
gemini-2.5-pro
  provider: openai
  model_id: gemini-2.5-pro
  api_key_env: GOOGLE_API_KEY
  api_base: https://generativelanguage.googleapis.com/v1beta/openai
  context: 1,048,576
  timeout: 900s
  max out: 19,660 / 65,536
  cost in: 0.00125 per 1K tokens
  cost out: 0.01 per 1K tokens
gemini-3-flash-preview
  provider: openai
  model_id: gemini-3-flash-preview
  api_key_env: GOOGLE_API_KEY
  api_base: https://generativelanguage.googleapis.com/v1beta/openai
  context: 1,048,576
  timeout: 900s
  max out: 19,660 / 65,535
  cost in: 0.0005 per 1K tokens
  cost out: 0.003 per 1K tokens
gemini-3-pro-preview
  provider: openai
  model_id: gemini-3-pro-preview
  api_key_env: GOOGLE_API_KEY
  api_base: https://generativelanguage.googleapis.com/v1beta/openai
  context: 1,048,576
  timeout: 900s
  max out: 19,660 / 65,536
  cost in: 0.002 per 1K tokens
  cost out: 0.012 per 1K tokens
gemini-3.1-pro-preview
  provider: openai
  model_id: gemini-3.1-pro-preview
  api_key_env: GOOGLE_API_KEY
  api_base: https://generativelanguage.googleapis.com/v1beta/openai
  context: 1,048,576
  timeout: 900s
  max out: 19,660 / 65,536
  cost in: 0.002 per 1K tokens
  cost out: 0.012 per 1K tokens
gpt-5.1
  provider: openai
  model_id: gpt-5.1
  api_key_env: OPENAI_API_KEY
  api_base: https://api.openai.com/v1
  context: 272,000
  timeout: 120s
  max out: 38,400 / 128,000
  cost in: 0.00125 per 1K tokens
  cost out: 0.01 per 1K tokens
  use_completion_tokens: true
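Several OpenAI-provider entries set use_completion_tokens: true. A plausible reading is that the flag selects the newer OpenAI-style max_completion_tokens request field over the legacy max_tokens field; the tool's actual behavior is an assumption, but the field names are real OpenAI API parameters. A minimal sketch:

```python
# Sketch: what a use_completion_tokens flag likely selects when building a
# request body. The helper is hypothetical; max_completion_tokens and
# max_tokens are the actual OpenAI chat-completions parameter names.

def output_limit_params(max_out: int, use_completion_tokens: bool) -> dict:
    key = "max_completion_tokens" if use_completion_tokens else "max_tokens"
    return {key: max_out}

print(output_limit_params(38_400, True))   # {'max_completion_tokens': 38400}
print(output_limit_params(38_400, False))  # {'max_tokens': 38400}
```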
gpt-5.2
  provider: openai
  model_id: gpt-5.2
  api_key_env: OPENAI_API_KEY
  api_base: https://api.openai.com/v1
  context: 128,000
  timeout: 900s
  max out: 15,000 / 128,000
  cost in: 0.005 per 1K tokens
  cost out: 0.015 per 1K tokens
  use_completion_tokens: true
gpt-5.2-codex
  provider: openai
  model_id: gpt-5.2-codex
  api_key_env: OPENAI_API_KEY
  api_base: https://api.openai.com/v1
  context: 400,000
  timeout: 180s
  max out: 38,400 / 128,000
  cost in: 0.00175 per 1K tokens
  cost out: 0.014 per 1K tokens
  use_completion_tokens: true
gpt-5.2-pro
  provider: openai
  model_id: gpt-5.2-pro
  api_key_env: OPENAI_API_KEY
  api_base: https://api.openai.com/v1
  context: 400,000
  timeout: 900s
  max out: 38,400 / 128,000
  cost in: 0.00175 per 1K tokens
  cost out: 0.014 per 1K tokens
  use_completion_tokens: true
gpt-5.3-codex
  provider: openai
  model_id: gpt-5.3-codex
  api_key_env: OPENAI_API_KEY
  api_base: https://api.openai.com/v1
  context: 400,000
  timeout: 180s
  max out: 38,400 / 128,000
  cost in: 0.00175 per 1K tokens
  cost out: 0.014 per 1K tokens
  use_completion_tokens: true
gpt-5.3-codex-spark (disabled)
  provider: openai
  model_id: gpt-5.3-codex-spark
  api_key_env: OPENAI_API_KEY
  api_base: https://api.openai.com/v1
  context: 128,000
  timeout: 60s
  max out: 38,400 / 128,000
  cost in: 0.00175 per 1K tokens
  cost out: 0.014 per 1K tokens
  use_completion_tokens: true
grok-4-0709
  provider: openai
  model_id: grok-4-0709
  api_key_env: XAI_API_KEY
  api_base: https://api.x.ai/v1
  context: 256,000
  timeout: 900s
  max out: 39,321 / 131,072
  cost in: 0.003 per 1K tokens
  cost out: 0.015 per 1K tokens
grok-4-1-fast-reasoning
  provider: openai
  model_id: grok-4-1-fast-reasoning
  api_key_env: XAI_API_KEY
  api_base: https://api.x.ai/v1
  context: 2,000,000
  timeout: 900s
  max out: 15,000 / 30,000
  cost in: 0.003 per 1K tokens
  cost out: 0.015 per 1K tokens
grok-code-fast-1
  provider: openai
  model_id: grok-code-fast-1
  api_key_env: XAI_API_KEY
  api_base: https://api.x.ai/v1
  context: 256,000
  timeout: 900s
  max out: 3,000 / 10,000
  cost in: 0.0002 per 1K tokens
  cost out: 0.0015 per 1K tokens
kimi-k2-thinking
  provider: openai
  model_id: kimi-k2-thinking
  api_key_env: MOONSHOT_API_KEY
  api_base: https://api.moonshot.ai/v1
  context: 262,144
  timeout: 900s
  max out: 19,660 / 65,535
  cost in: 0.0006 per 1K tokens
  cost out: 0.0025 per 1K tokens
kimi-k2-thinking-turbo
  provider: openai
  model_id: kimi-k2-thinking-turbo
  api_key_env: MOONSHOT_API_KEY
  api_base: https://api.moonshot.ai/v1
  context: 262,144
  timeout: 900s
  max out: 19,660 / 65,535
  cost in: 0.00115 per 1K tokens
  cost out: 0.008 per 1K tokens
kimi-k2.5
  provider: openai
  model_id: kimi-k2.5
  api_key_env: MOONSHOT_API_KEY
  api_base: https://api.moonshot.ai/v1
  context: 262,144
  timeout: 900s
  max out: 15,000 / 65,535
  cost in: 0.0006 per 1K tokens
  cost out: 0.003 per 1K tokens
minimax-m2.5
  provider: minimax
  model_id: MiniMax-M2.5
  api_key_env: MINIMAX_API_KEY
  api_base: https://api.minimax.io
  context: 200,000
  timeout: 900s
  max out: 19,660 / 65,536
  cost in: 0.0003 per 1K tokens
  cost out: 0.0012 per 1K tokens
minimax-m2.5-fast
  provider: minimax
  model_id: MiniMax-M2.5-highspeed
  api_key_env: MINIMAX_API_KEY
  api_base: https://api.minimax.io
  context: 200,000
  timeout: 300s
  max out: 19,660 / 65,536
  cost in: 0.0006 per 1K tokens
  cost out: 0.0024 per 1K tokens
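The cost fields above price input and output tokens separately, per 1K tokens. A small sketch of how a per-request cost estimate follows from those rates (rates copied from the table; the helper itself is hypothetical):

```python
# Sketch: estimate request cost from the per-1K-token rates in the table.
# RATES holds a few illustrative (cost_in, cost_out) pairs from the listing.

RATES = {
    "claude-haiku-4-5": (0.0008, 0.004),
    "deepseek-chat": (0.00028, 0.00042),
    "gpt-5.1": (0.00125, 0.01),
}

def estimate_cost(model: str, in_tokens: int, out_tokens: int) -> float:
    cost_in, cost_out = RATES[model]
    return (in_tokens / 1000) * cost_in + (out_tokens / 1000) * cost_out

# 10,000 input + 2,000 output tokens on claude-haiku-4-5:
# 10 * 0.0008 + 2 * 0.004 = 0.008 + 0.008 = 0.016
print(round(estimate_cost("claude-haiku-4-5", 10_000, 2_000), 6))  # 0.016
```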
API Keys
Enter "demo" to use pre-selected models at no cost, or paste your own API keys for full control.
Keys are saved to /tmp/dvad-sessions/3oUGma4zk8JC6GtQV90wTyegkvvslCIk/.env (not yet created).
Validation
Configuration is valid.
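One check that "Configuration is valid" plausibly implies is that every configured api_key_env variable is actually set. A minimal sketch, using the env var names from the model listing (the validation logic itself is an assumption, not the tool's code):

```python
import os

# Env var names taken from the api_key_env fields above.
REQUIRED_KEYS = [
    "ANTHROPIC_API_KEY",
    "DEEPSEEK_API_KEY",
    "GOOGLE_API_KEY",
    "OPENAI_API_KEY",
    "XAI_API_KEY",
    "MOONSHOT_API_KEY",
    "MINIMAX_API_KEY",
]

def missing_keys(env=os.environ) -> list[str]:
    """Return the required key names that are unset or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]

# With only one key set, the other six are reported missing.
print(missing_keys({"OPENAI_API_KEY": "sk-..."}))
```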
Role Assignments
Author: gemini-3.1-pro-preview
Reviewer 1: claude-opus-4-6
Reviewer 2: kimi-k2.5
Dedup: claude-opus-4-5
Normalization: gemini-2.5-pro
Revision: gemini-2.5-pro
Integration: grok-code-fast-1
What do these roles do?
Author: Generates initial responses to reviewer findings and produces revised artifacts
Reviewer (x2): Independently analyzes input for issues, risks, and improvements
Dedup: Consolidates overlapping findings from multiple reviewers into groups
Normalization: Standardizes severity levels and categories across grouped findings
Revision: Creates the final revised artifact incorporating accepted feedback
Integration: Reviews cross-system integration concerns and component interactions
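The role descriptions above imply a stage order: two independent reviews, then dedup, then normalization, then the author's responses feeding a revision, with integration review last. A sketch of that flow with placeholder stubs (the real tool's interfaces are not shown here):

```python
# Sketch of the pipeline order implied by the role descriptions.
# All stage functions are hypothetical stubs.

def review(artifact, reviewer):
    return f"{reviewer}: finding about {artifact!r}"

def dedup(findings):
    # Consolidate overlapping findings into groups.
    return sorted(set(findings))

def normalize(groups):
    # Standardize severity/categories (represented here as lowercasing).
    return [g.lower() for g in groups]

def run_pipeline(artifact):
    findings = [review(artifact, r) for r in ("reviewer-1", "reviewer-2")]
    groups = normalize(dedup(findings))                       # Dedup -> Normalization
    responses = [f"author response to: {g}" for g in groups]  # Author
    revised = f"{artifact} (revised per {len(responses)} responses)"  # Revision
    return revised  # Integration review would inspect the revised artifact last

print(run_pipeline("design doc"))
```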
Development
live e2e tests: disabled (run without the -m live flag)