## Config file

Claudear uses a single TOML config file. Copy the example and fill in your values:

```shell
cp claudear.example.toml claudear.toml
```

By default, Claudear looks for `claudear.toml` in the current directory. You can override the path with `--config`:

```shell
claudear --config /path/to/config.toml poll
```

## Environment variables
Use environment variables for secrets instead of putting them in the config file. This is especially useful in containers and CI.
| Env var | Purpose |
|---|---|
| `CLAUDEAR_LINEAR_API_KEY` | Linear API key override |
| `CLAUDEAR_SENTRY_AUTH_TOKEN` | Sentry auth token override |
| `CLAUDEAR_GITHUB_TOKEN` | GitHub SCM token override |
| `CLAUDEAR_LINEAR_WEBHOOK_SECRET` | Linear webhook verification |
| `CLAUDEAR_SENTRY_CLIENT_SECRET` | Sentry webhook verification |
| `CLAUDEAR_GITHUB_WEBHOOK_SECRET` | GitHub webhook verification |
| `CLAUDEAR_EMBEDDING_MODEL` | Embedding model selection |
| `CLAUDEAR_EMBEDDING_CACHE_DIR` | Embedding cache directory |
| `CLAUDEAR_SENTRY_DSN` | Sentry DSN for error reporting |
| `CLAUDEAR_SENTRY_ENVIRONMENT` | Sentry environment name |
| `CLAUDEAR_TELEGRAM_WEBHOOK_SECRET` | Telegram webhook secret |
| `CLAUDEAR_WHATSAPP_WEBHOOK_VERIFY_TOKEN` | WhatsApp webhook verify token |
| `CLAUDEAR_VECTORLITE_PATH` | Path to vectorlite extension |
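For example, in a container you can supply secrets through the environment and keep them out of the config file entirely (the values below are placeholders):

```shell
# Placeholder values; in practice these come from your secret manager.
export CLAUDEAR_LINEAR_API_KEY="lin_api_xxxx"
export CLAUDEAR_GITHUB_TOKEN="ghp_xxxxxxxxxxxx"
export CLAUDEAR_GITHUB_WEBHOOK_SECRET="change-me"
# claudear poll   # the daemon reads the overrides at startup
```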
## Core settings: workspace, polling, concurrency

These top-level settings control where Claudear stores data and how fast it works.

- `workspace`: where repos are cloned to on disk.
- `db_path`: location of the SQLite database file.
- `webhook_port`: the port Claudear listens on for webhooks and the dashboard.
- `poll_interval_ms`: how often to check your issue sources for new work (in milliseconds).
- `max_concurrent`: how many issues to work on at the same time. Start with `1` and increase once you are comfortable.
- `known_orgs`: your GitHub or GitLab organizations. Claudear uses these to match issues to the right repo.
- `auto_discover_paths`: local directories to scan for existing repo clones.
### Tips

- Increase `max_concurrent` gradually: too high and you can saturate your AI provider or API rate limits.
- Keep `processing_delay_ms` above zero to process issues at a steady pace without hammering APIs.
- When running in Docker or as a system service, set explicit absolute paths for `db_path` and `workspace`.
## Retries (`[retry]`): exponential backoff configuration

When a fix attempt fails, Claudear can automatically retry with exponential backoff.

- `max_retries`: how many times to retry a failed attempt (default: 2).
- `base_delay_ms`: initial delay between retries in milliseconds. Each subsequent retry doubles this value.
- `max_delay_ms`: the upper limit on retry delay, so backoff does not grow forever.

For most teams, the defaults work well. Lower `base_delay_ms` if you want faster retries during development; raise it in production to avoid hammering APIs.
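With the documented defaults, the schedule looks like this:

```toml
[retry]
max_retries = 2          # two retries after the initial attempt
base_delay_ms = 60000    # 1st retry after 60s, 2nd after 120s (60s * 2)
max_delay_ms = 3600000   # backoff is capped at 1 hour
```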
## AI agent (`[agent]`): provider, model, instructions

Configure which AI provider Claudear uses and how it behaves.

- `default_provider`: which provider to use (default: `"claude"`).
- `timeout_secs`: maximum time an agent run can take before being stopped.
- `use_llm`: use the local LLM model as the agent runner instead of an external provider. Requires `[llm]` to be enabled. Fully offline but much slower. Creates PRs via the `gh` CLI (default: `false`).

### Claude provider settings (`[agent.providers.claude]`)

- `model`: the Claude model to use (e.g. `"sonnet"`, `"opus"`, `"haiku"`, or a full model ID).
- `classification_model`: cheaper model for repo classification (falls back to `model`). E.g., `"haiku"`.
- `instructions`: custom instructions appended to Claude's system prompt. Use this to enforce coding standards, style rules, or project-specific guidance.
- `instructions_file`: path to a file with additional instructions. If both `instructions` and `instructions_file` are set, the file content comes first.
- `permissions`: tool permissions granted without prompting (e.g. `["Bash(git *)", "Read", "Edit"]`).
- `skip_permissions`: skip all interactive permission prompts (default: `true`).
- `binary`: CLI binary name or absolute path (default: `"claude"`). Set the full path when the daemon cannot find the binary via PATH.
- `env`: extra environment variables for the agent process. Useful when running as a systemd service where PATH is limited (e.g. `env = { PATH = "/home/user/.local/bin:..." }`).
## User mapping (`[users.<slug>]`): cross-platform identity linking

Map your team members across platforms so notifications and assignments go to the right person. Each user gets a slug (like `[users.jake]`), and you fill in their identifiers for each service they use:

- Issue trackers: `linear_names`, `sentry_usernames`, `jira_usernames`
- Source control: `github_usernames`, `gitlab_usernames`
- Notifications: `discord_id`, `slack_id`, `email`, `push_user_key`, `sms_number`, `whatsapp_number`, `telegram_chat_id`
When an issue is assigned to someone, Claudear routes notifications directly to that person on their preferred channels.
## Questions (`[ask]`): AI ask loop and answer reuse

When the AI gets stuck or needs clarification, it can ask you a question. The question is sent to all your enabled notification channels, and the first reply wins.

- `enabled`: turn the ask loop on or off.
- `wait_timeout_secs`: how long to wait for a reply before giving up.
- `poll_interval_secs`: how often to check for replies while waiting.
- `max_rounds_per_attempt`: limit how many questions the AI can ask per issue attempt.
- `best_effort_on_timeout`: if no reply arrives in time, continue with a best-effort note instead of failing.

The `semantic_threshold_scoped` and `semantic_threshold_global` settings control automatic answer reuse. If a similar question was already answered, Claudear can reuse that answer instead of asking again. Lower thresholds mean more reuse; higher thresholds mean stricter matching.
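As a sketch, the documented defaults translate into a block like this:

```toml
[ask]
enabled = true
wait_timeout_secs = 900            # give up after 15 minutes
poll_interval_secs = 15
max_rounds_per_attempt = 2
semantic_threshold_scoped = 0.82   # looser matching within the same source+repo
semantic_threshold_global = 0.88   # stricter matching across everything
best_effort_on_timeout = true
```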
## Source control (`[scm.*]`): GitHub, GitHub App, GitLab

### GitHub (`[scm.github]`)

Connect Claudear to GitHub so it can create branches, open pull requests, and respond to review comments.

- `token`: a personal access token with repo access.
- `auto_resolve_on_merge`: automatically close the source issue when the PR merges.
- `review_trigger`: the tag that triggers Claudear to respond to PR review comments (default: `@claudear`).
- `allowed_bots`: bot usernames whose review comments are processed. Use the login name without the `[bot]` suffix (e.g. `"copilot"`).
- `use_ssh`: clone repos over SSH instead of HTTPS.
- `webhook_secret`: for verifying incoming GitHub webhooks.

### GitHub App (`[scm.github.app]`)

If you prefer GitHub App authentication over a personal access token, configure your App ID and private key here. The installation ID is auto-detected if not set.

- `app_id`: GitHub App ID.
- `private_key_path`: path to a PEM private key file.
- `private_key`: inline PEM private key content (alternative to `private_key_path`).
- `webhook_secret`: secret for verifying GitHub App webhook payloads.
- `installation_id`: installation ID (auto-detected when empty).
- `client_id`: OAuth client ID for manifest/user auth flows.
- `client_secret`: OAuth client secret.
- `base_url`: public base URL for the GitHub App manifest flow.

### GitLab (`[scm.gitlab]`)

Connect Claudear to GitLab for merge request automation.

- `enabled`: enable or disable the GitLab backend.
- `token`: GitLab personal access token.
- `base_url`: GitLab instance URL (default: `"https://gitlab.com"`; change for self-hosted).
- `groups`: GitLab groups to monitor.
- `trigger_labels`: labels that trigger automation.
- `trigger_states`: issue states that trigger automation.
- `poll_interval_ms`: MR status check interval in milliseconds.
- `auto_resolve_on_merge`: close source issues when MRs merge.
- `webhook_secret`: secret for verifying GitLab webhook payloads.
- `review_trigger`: tag that triggers Claudear on MR review comments.
- `allowed_bots`: bot usernames whose MR comments are processed.
- `use_ssh`: clone repos over SSH instead of HTTPS.
- `max_issues_per_cycle`, `max_concurrent`: per-source rate limiting overrides.
## Issue sources (`[issues.*]`): Linear, Sentry, Jira, Discord, Slack

Issue sources tell Claudear where to find work. You can enable as many as you need.

### Linear (`[issues.linear]`)

Pull issues from Linear based on labels, states, or assignee.

- `enabled`: enable or disable this source.
- `api_key`: Linear API key (required).
- `trigger_labels`: labels that trigger automation.
- `trigger_states`: workflow states that trigger automation.
- `trigger_assignee`: only process issues assigned to this user (display name).
- `team_id`: optional team filter.
- `project_id`: optional project filter.
- `webhook_secret`: secret for verifying Linear webhook payloads.
- `max_issues_per_cycle`, `max_concurrent`, `poll_interval_ms`: per-source overrides.
### Sentry (`[issues.sentry]`)

Automatically pick up escalating errors from Sentry.

- `enabled`: enable or disable this source.
- `auth_token`: Sentry auth token (required).
- `org_slug`: Sentry organization slug (required).
- `project_slugs`: optional list of project slugs to filter.
- `top_issues_count`: number of top issues to fetch (default: 100).
- `top_issues_period`: lookback period: `"1h"`, `"12h"`, `"24h"`, `"7d"`, `"30d"`.
- `min_event_count`: minimum event count before an issue is processed.
- `escalation_threshold_percent`: percentage increase to consider an issue escalating.
- `client_secret`: secret for verifying Sentry webhook payloads.
- `max_issues_per_cycle`, `max_concurrent`: per-source overrides.
### Jira (`[issues.jira]`)

Works with both Jira Cloud (email + API token) and Jira Server/Data Center (bearer token).

- `enabled`: enable or disable this source.
- `base_url`: Jira instance URL (required).
- `email`: email address for Basic auth (Jira Cloud).
- `api_token`: API token (Cloud) or personal access token (Server/DC).
- `auth_mode`: `"basic"` for Cloud, `"bearer"` for Server/DC.
- `project_keys`: project keys to monitor.
- `trigger_labels`: labels that trigger automation.
- `trigger_statuses`: issue statuses that trigger automation.
- `trigger_assignee`: only process issues assigned to this user.
- `issue_types`: filter by issue types (e.g. `["Bug", "Task"]`).
- `custom_jql`: additional JQL appended to the generated query.
- `max_results`: maximum results per search request.
- `max_issues_per_cycle`, `max_concurrent`, `poll_interval_ms`: per-source overrides.
### Discord and Slack (`[issues.discord]`, `[issues.slack]`)

Treat chat messages as issues. Credentials are inherited from the matching notifier section when not set here.

- `bot_token`: bot token for reading messages (inherits from the notifier if omitted).
- `listen_channel_id`: channel to monitor for messages.
- `guild_id` (Discord only): guild ID for constructing message URLs.
- `workspace` (Slack only): workspace name for constructing message URLs.
- `user_id` (Slack only): bot user ID for reply detection.
- `poll_interval_ms`: per-source polling override.
## Notifications (`[notifiers.*]`): Discord, Slack, Email, SMS, Push, WhatsApp, Telegram

Claudear sends status updates and ask-loop questions through your notification channels. Some channels also support receiving replies.

### Discord (`[notifiers.discord]`)

- `webhook_url`: Discord webhook URL for outbound notifications.
- `user_id`: Discord user ID to mention in notifications.
- `bot_token`: bot token for reply polling and ask-loop questions.
- `channel_id`: channel ID to poll for replies.
- `guild_id`: guild (server) ID for constructing message URLs.
### Slack (`[notifiers.slack]`)

- `bot_token`: Slack bot token (`xoxb-`) for API calls.
- `channel_id`: channel ID for notifications.
- `webhook_url`: incoming webhook URL (notification-only alternative to a bot token).
- `user_id`: Slack user ID to mention in notifications.
- `workspace`: workspace name for constructing message URLs.
### Email (`[notifiers.email]`)

- `smtp_host`, `smtp_port`: SMTP server address and port.
- `smtp_username`, `smtp_password`: SMTP credentials.
- `from_address`: sender email address.
- `to_addresses`: list of recipient email addresses.
- `use_tls`: use TLS for SMTP (default: `true`).
- `imap_host`, `imap_port`: IMAP server for reply polling.
- `imap_username`, `imap_password`: IMAP credentials.
- `imap_use_tls`: use TLS for IMAP (default: `true`).
- `imap_folder`: IMAP folder to scan for replies (default: `"INBOX"`).
### SMS (`[notifiers.sms]`)

- `account_sid`: Twilio account SID.
- `auth_token`: Twilio auth token.
- `from_number`: Twilio sender phone number.
- `to_numbers`: list of recipient phone numbers.

### Push (`[notifiers.push]`)

- `api_token`: Pushover application API token.
- `user_key`: Pushover user key.
- `device`: optional device name (sends to all devices when empty).
- `priority`: Pushover priority level (`-2` to `2`).
### WhatsApp (`[notifiers.whatsapp]`)

- `phone_number_id`: WhatsApp Business phone number ID.
- `access_token`: Meta Graph API access token.
- `to_numbers`: default recipient phone numbers.
- `source_enabled`: also treat incoming WhatsApp messages as issues (default: `false`).
- `listen_phone_number_id`: phone number ID for source mode (falls back to `phone_number_id`).
- `poll_interval_ms`: polling override for source mode.

### Telegram (`[notifiers.telegram]`)

- `bot_token`: Telegram Bot API token.
- `chat_id`: default chat ID for notifications.
- `to_chat_ids`: additional recipient chat IDs.
- `source_enabled`: also treat incoming Telegram messages as issues (default: `false`).
- `listen_chat_id`: chat ID for source mode (falls back to `chat_id`).
- `poll_interval_ms`: polling override for source mode.
## Regression monitoring (`[regression]`): post-deploy regression detection

After a fix ships, Claudear watches for regressions by monitoring Sentry error rates and matching new issues against recently merged fixes.

- `enabled`: enable or disable regression monitoring.
- `check_interval_hours`: how often to check for regressions, in hours.
- `monitoring_duration_hours`: how long to monitor after a release.
- `sentry_event_threshold`: minimum Sentry event count to consider a regression.
- `similarity_threshold`: semantic similarity threshold for matching related issues.
- `target_repos`: repositories whose merges indicate a release is live.
- `github_token`: optional GitHub token override (falls back to `scm.github.token`).
- `github_search_repos`: repositories to search for similar issues.
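A sketch of the section (the numeric values and repo name here are illustrative placeholders, not documented defaults):

```toml
[regression]
enabled = true
check_interval_hours = 6          # illustrative value
monitoring_duration_hours = 72    # illustrative value
sentry_event_threshold = 10       # illustrative value
similarity_threshold = 0.85       # illustrative value
target_repos = ["my-org/api"]     # merges here mean a release is live
```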
## Dependency cascades (`[cascade]`): downstream follow-up PRs

When a library fix is released, Claudear automatically opens follow-up PRs in downstream repos that depend on it.

- `enabled`: enable or disable cascade chaining.
- `max_depth`: maximum cascade depth (0 = unlimited).
- `[[cascade.rules]]`: per upstream/downstream pair rules with `upstream`, `downstream`, `trigger` (`"merge"` or `"release"`), `target_branch`, `version_update`, and `instructions`.
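A hypothetical rule, as a sketch only: the repo names, branch, and the exact value types (e.g. whether `version_update` is a boolean) are assumptions.

```toml
[cascade]
enabled = true
max_depth = 2   # 0 = unlimited

# When a fix merges in the upstream library, open a follow-up PR downstream.
[[cascade.rules]]
upstream = "my-org/shared-lib"   # placeholder repo
downstream = "my-org/web-app"    # placeholder repo
trigger = "merge"                # or "release"
target_branch = "main"
version_update = true
instructions = "Bump the shared-lib dependency and fix any resulting breakage."
```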
## Continuous learning (`[learning]`): auto-improve from outcomes

Claudear learns from execution logs, merged PR diffs, review feedback, and Q&A answers to improve future fix quality.

- `auto_extract_learnings`: extract learnings from Claude execution logs.
- `diff_analysis`: analyze PR diffs on merge.
- `qa_promotion`: promote repeated Q&A answers to standing instructions.
- `qa_promotion_threshold`: occurrences before a Q&A answer is promoted.
- `repo_knowledge`: accumulate per-repo knowledge from successful fixes.
- `review_classification`: classify review feedback patterns.
- `review_promotion_threshold`: occurrences before a review pattern is promoted.
- `strategy_fingerprinting`: track how Claude approaches fixes.
- `quality_scoring`: score fix quality based on merge velocity.
- `cluster_detection`: detect clusters of correlated issues.
- `cluster_window_minutes`: time window for cluster detection.
- `min_cluster_size`: minimum issues to form a cluster.
- `auto_agent_md`: auto-generate an `AGENT.md` from accumulated knowledge.
- `cross_repo_correlation`: detect cross-repo failure correlation.
- `cross_repo_window_hours`: time window for cross-repo correlation.
## Prioritisation (`[prioritisation]`): scoring, clustering, suppression

The prioritisation engine scores incoming issues based on severity, frequency, regression risk, blast radius, and content clustering.

- `enabled`: enable or disable the prioritisation engine.
- `severity_weight`, `frequency_weight`, `regression_weight`, `blast_radius_weight`, `cluster_weight`: scoring weights (should sum to ~1.0).
- `critical_paths`, `core_paths`, `infra_paths`, `test_paths`, `cosmetic_paths`: path patterns for blast radius classification.
- `content_clustering`: group similar issues by error type and title.
- `cluster_similarity_threshold`: similarity threshold for clustering.
- `min_content_cluster_size`: minimum issues to form a content cluster.
- `suppression_rules`: rules to skip known-noisy issues before they consume processing slots. Each rule specifies `name`, `field`, `pattern`, `match_mode` (`"contains"` or `"regex"`), and an optional `sources` filter.
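A hypothetical suppression rule, as a sketch: the rule name, matched field, and pattern are placeholders, and representing `suppression_rules` as an array of tables is an assumption.

```toml
# Drop noisy health-check errors from Sentry before they consume a slot.
[[prioritisation.suppression_rules]]
name = "ignore-healthcheck-noise"
field = "title"           # issue field to match against
pattern = "healthcheck"
match_mode = "contains"   # or "regex"
sources = ["sentry"]      # optional: only apply to Sentry issues
```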
## Code indexing (`[code_index]`): tree-sitter semantic search

Tree-sitter based code indexing for semantic search across repositories. Used to give Claude richer codebase context when working on a fix.

- `enabled`: enable or disable code indexing.
- `max_file_size_kb`: maximum file size to index, in KB.
- `batch_size`: embedding batch size.
## Self-evaluation (`[evaluation]`): test, lint, coverage deltas

Runs before/after comparisons (tests, lint, static analysis, coverage) to validate fixes before submitting PRs. Opt-in because it can be slow.

- `enabled`: enable or disable evaluation (default: `false`).
- `test_delta`: run a test suite before/after comparison.
- `lint_delta`: run a linter before/after comparison.
- `static_analysis_delta`: run a static analysis before/after comparison.
- `coverage_delta`: run a coverage before/after comparison (slowest).
- `tool_timeout_secs`: timeout per tool, in seconds.
- `total_timeout_secs`: total timeout across all tools.
- `post_pr_comment`: post evaluation results as a PR comment.
- `fail_on_regression`: fail the fix attempt if a regression is detected.
- `custom_test_cmd`, `custom_lint_cmd`, `custom_analysis_cmd`, `custom_coverage_cmd`: custom command overrides (auto-detected when empty).
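A sketch of a moderate setup that checks tests and lint but skips the slow coverage pass (the timeout value is illustrative, not a documented default):

```toml
[evaluation]
enabled = true            # opt-in; default is false
test_delta = true
lint_delta = true
static_analysis_delta = false
coverage_delta = false    # the slowest check; enable selectively
tool_timeout_secs = 300   # illustrative value
post_pr_comment = true
fail_on_regression = true
```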
## Local LLM (`[llm]`): offline repo classification and code chat

Optional local model for offline repo classification and code chat. Uses a GGUF model via llama-cpp-2.

- `enabled`: enable the local LLM (default: `false`).
- `model_path`: path to the GGUF model file.
- `model_url`: download URL for auto-download on startup if the model is missing.
- `context_length`: context window in tokens (default: 16384).
- `gpu_layers`: layers to offload to GPU; `0` = CPU only, `99` = all (default: 99).
- `threads`: inference threads; `0` = auto-detect (default: 0).
- `inference_timeout_secs`: maximum seconds per inference call; `0` = no limit (default: 120).
- `use_agent`: use the configured agent (claude/codex) for repo classification instead of the local model. Much faster but costs API credits (default: `false`).
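A sketch using the documented defaults (the model path is a placeholder):

```toml
[llm]
enabled = true
model_path = "./models/local-model.gguf"   # placeholder path
context_length = 16384                     # default
gpu_layers = 99                            # offload everything; 0 = CPU only
threads = 0                                # auto-detect
inference_timeout_secs = 120               # default; 0 = no limit
```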
## Dashboard (`[dashboard]`): display and cost settings

Display and cost estimation settings for the web dashboard.

- `max_plan_monthly_cost`: monthly cost of your Claude Max plan, used to estimate per-fix cost when token pricing is unavailable. Set to `0` to disable.
- `hourly_engineer_rate`: hourly engineer rate for cost-savings calculations (default: 75.0).
## TLS auto-provisioning (`[tls]`): Let's Encrypt ACME certificates

Automatically provision and renew TLS certificates using the ACME TLS-ALPN-01 challenge. No reverse proxy required. When disabled (the default), behaviour is unchanged: plain HTTP on `webhook_port`.

- `enabled`: enable automatic TLS certificate provisioning (default: `false`).
- `domains`: domain names to provision certificates for (required when enabled). Accepts a single string or a list.
- `email`: contact email for Let's Encrypt expiry notifications (recommended).
- `production`: use the Let's Encrypt production environment (default: `false` = staging). Staging is useful for testing; production issues real browser-trusted certificates.
- `cache_dir`: directory for caching ACME certificates; certificates persist across restarts (default: `./acme_cache`). In Docker, use a volume path like `/app/data/acme_cache`.
- `https_port`: HTTPS listen port (default: 443).
- `http_redirect_port`: HTTP port for the automatic HTTP-to-HTTPS redirect (default: 80). Set to `0` to disable the redirect listener.

Environment variable overrides: `CLAUDEAR_TLS_ENABLED`, `CLAUDEAR_TLS_DOMAINS` (comma-separated), `CLAUDEAR_TLS_EMAIL`, `CLAUDEAR_TLS_PRODUCTION`, `CLAUDEAR_TLS_CACHE_DIR`, `CLAUDEAR_TLS_HTTPS_PORT`, `CLAUDEAR_TLS_HTTP_REDIRECT_PORT`.
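A sketch of a staging-first setup (the domain and email are placeholders):

```toml
[tls]
enabled = true
domains = ["claudear.example.com"]   # placeholder domain
email = "ops@example.com"            # placeholder contact address
production = false                   # test against staging; set true for real certs
cache_dir = "./acme_cache"
https_port = 443
http_redirect_port = 80              # 0 disables the HTTP redirect listener
```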
## Minimal config: the smallest working configuration

You do not need to configure everything at once. A minimal setup to get Claudear running needs just:

- `workspace`: a directory for cloned repos
- One issue source (e.g. Linear with an API key and trigger labels)
- One SCM backend (e.g. GitHub with a token)
- One notifier (e.g. Discord or Slack)
- Agent provider settings (or just use the defaults)

Add `known_orgs` or `auto_discover_paths` if you have many repos and want Claudear to automatically figure out which repo an issue belongs to.
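Put together, a minimal `claudear.toml` along these lines should be enough to start (all tokens and URLs are placeholders):

```toml
workspace = "~/.claudear/repos"

[issues.linear]
api_key = "lin_api_xxxx"
trigger_labels = ["auto-implement"]

[scm.github]
token = "ghp_xxxxxxxxxxxx"

[notifiers.discord]
webhook_url = "https://discord.com/api/webhooks/..."
```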
## Full annotated example: complete reference config
# ============================================
# Claudear Configuration
# ============================================
# Copy this file to claudear.toml and fill in your values.
# Environment variables can override any setting (useful for secrets in containers).
# ============================================
# Core
# ============================================
# Working directory where repositories are cloned
workspace = "~/.claudear/repos"
# SQLite database path (default: ./claudear.db)
db_path = "./claudear.db"
# Webhook server port (default: 3100)
webhook_port = 3100
# Polling interval in milliseconds (default: 300000 = 5 minutes)
poll_interval_ms = 300000
# Max issues to process per poll cycle (default: 5)
max_issues_per_cycle = 5
# Max concurrent issue processing (default: 1)
max_concurrent = 1
# Delay between processing issues in ms (default: 5000)
processing_delay_ms = 5000
# Maximum number of activity entries kept in IPC memory (default: 10000)
max_activity_entries = 10000
# IPC request timeout in seconds (default: 30)
ipc_timeout_secs = 30
# GitHub organizations to track
# Repositories from these orgs will be indexed for issue-to-repo inference
known_orgs = [
"utopia-php",
"appwrite",
"appwrite-labs",
"open-runtimes"
]
# Paths to scan for local repository clones
# These directories are scanned to find repos matching known_orgs
auto_discover_paths = ["~/Local"]
[retry]
# Maximum retry attempts for failed fixes (default: 2)
max_retries = 2
# Base delay between retries in ms (default: 60000 = 1 minute)
# Uses exponential backoff: delay = base_delay * 2^retry_count
base_delay_ms = 60000
# Maximum delay between retries in ms (default: 3600000 = 1 hour)
max_delay_ms = 3600000
# ============================================
# Agent
# ============================================
# Supports multiple providers (Claude, Codex, etc.) and A/B experiments.
[agent]
# Default provider to use (default: "claude")
default_provider = "claude"
# Agent process execution timeout in seconds (default: 21600 = 6 hours)
timeout_secs = 21600
# Use the local LLM as the agent runner instead of an external provider (default: false)
# Requires [llm] to be enabled. Fully offline but much slower. Creates PRs via `gh` CLI.
use_llm = false
[agent.providers.claude]
# Model to use (default: Claude CLI default)
# Options: sonnet, opus, haiku, or full model ID (e.g., claude-sonnet-4-5-20250929)
model = "sonnet"
# Cheaper model for repo classification (falls back to model)
# classification_model = "haiku"
# Custom instructions appended to Claude's system prompt
instructions = "Always write tests. Follow existing code style."
# Path to a file containing custom instructions (relative to this config file)
# If both instructions and instructions_file are set, file content comes first
instructions_file = "./claude-instructions.md"
# Tool permissions granted without prompting (--allowedTools)
# See: https://docs.anthropic.com/en/docs/claude-code/settings
permissions = ["Bash(git *)", "Read", "Edit"]
# Skip all permission prompts (default: true)
skip_permissions = true
# ============================================
# Users
# ============================================
# Map team members to their identifiers across services.
# When an issue is assigned to a user, notifications are routed to them specifically.
# Global config fields (e.g. notifiers.discord.user_id) can reference user slugs instead of raw IDs.
[users.jake]
# Source identifiers
linear_names = "Jake Barnby"
github_usernames = "jakebarnby"
sentry_usernames = "jake"
jira_usernames = "jake.barnby"
gitlab_usernames = "jakebarnby"
# Notification channel IDs
discord_id = "123456789012345678"
slack_id = "U0123456789"
email = "jake@example.com"
push_user_key = "pushover_user_key"
sms_number = "+1234567890"
whatsapp_number = "+1234567890"
telegram_chat_id = "123456789"
# ============================================
# Human Q&A Ask Loop
# ============================================
# Claude can ask blocking questions through all enabled notifiers.
# Reply-capable channels: Discord, Slack, and Email.
# Delivery is fan-out to all enabled channels; first reply wins.
[ask]
# Enable human Q&A flow (default: true)
enabled = true
# Wait timeout for replies in seconds (default: 900)
wait_timeout_secs = 900
# Poll interval in seconds while waiting (default: 15)
poll_interval_secs = 15
# Maximum ask rounds per attempt before stopping (default: 2)
max_rounds_per_attempt = 2
# Semantic reuse threshold for source+repo matches (default: 0.82)
semantic_threshold_scoped = 0.82
# Semantic reuse threshold for global matches (default: 0.88)
semantic_threshold_global = 0.88
# Number of reuse candidates to include (default: 3)
max_reuse_candidates = 3
# On timeout, continue with uncertainty note instead of hard-fail (default: true)
best_effort_on_timeout = true
# ============================================
# SCM (Source Control Management)
# ============================================
[scm.github]
# GitHub personal access token (for checking PR status)
token = "ghp_xxxxxxxxxxxx"
# PR status check interval in milliseconds (default: 60000 = 1 minute)
poll_interval_ms = 60000
# Auto-resolve issues on Linear/Sentry when PRs merge (default: false)
auto_resolve_on_merge = false
# Optional: Webhook secret for verifying GitHub webhook signatures
# Set via the CLAUDEAR_GITHUB_WEBHOOK_SECRET env var for security
webhook_secret = ""
# Trigger tag for GitHub review comments (default: @claudear)
# Comments must include this tag to trigger Claude.
# Set to empty string to respond to all comments.
review_trigger = "@claudear"
# Bot usernames whose review comments are processed (default: [])
# Use login name without [bot] suffix
# allowed_bots = ["copilot"]
# Use SSH URLs for cloning instead of HTTPS (default: false)
# Set to true if SSH keys are configured for GitHub.
use_ssh = false
# GitHub App Authentication (Optional)
# Configure this section to use GitHub App authentication instead of PAT.
[scm.github.app]
# GitHub App ID
app_id = ""
# Path to private key PEM file (alternative to private_key)
private_key_path = ""
# Inline private key PEM content (alternative to private_key_path)
private_key = ""
# Optional: Webhook secret for GitHub App webhook verification
# Set via GITHUB_APP_WEBHOOK_SECRET env var for security
webhook_secret = ""
# Optional: Installation ID (auto-detected if empty)
installation_id = ""
# Optional: OAuth Client ID for manifest/user auth flows
client_id = ""
# Optional: OAuth Client Secret
client_secret = ""
# Optional: Public base URL for GitHub App manifest flow
base_url = ""
[scm.gitlab]
# Enable/disable GitLab source (default: false)
enabled = true
# GitLab personal access token
# Set via GITLAB_TOKEN env var for security
token = "glpat-xxxxxxxxxxxx"
# GitLab base URL (default: "https://gitlab.com")
# For self-hosted instances, change to your GitLab URL
base_url = "https://gitlab.com"
# GitLab groups to monitor for issues
groups = ["my-group", "my-group/subgroup"]
# Labels that trigger automation (default: ["auto-implement", "claude"])
trigger_labels = ["auto-implement", "claude"]
# States that trigger automation (default: ["opened"])
trigger_states = ["opened"]
# MR status check interval in milliseconds (default: 60000 = 1 minute)
poll_interval_ms = 60000
# Auto-resolve issues when MRs merge (default: false)
auto_resolve_on_merge = false
# Webhook secret for verifying GitLab webhook requests
# Set via GITLAB_WEBHOOK_SECRET env var for security
webhook_secret = ""
# Trigger tag for MR review comments (default: @claudear)
review_trigger = "@claudear"
# Bot usernames whose MR comments are processed (default: [])
# allowed_bots = []
# Use SSH URLs for cloning instead of HTTPS (default: false)
use_ssh = false
# Per-source rate limiting (overrides global values if set)
max_issues_per_cycle = 3
max_concurrent = 2
# ============================================
# Issue Sources
# ============================================
# To use Linear, provide an API key from Linear Settings > API
[issues.linear]
# Enable/disable Linear source (default: true if api_key provided)
enabled = true
# Linear API key (REQUIRED for Linear)
api_key = "lin_api_xxxx"
# Labels that trigger automation
trigger_labels = ["auto-implement", "claude"]
# Optional: Only process issues assigned to this user (display name, case-insensitive).
# When set, trigger_labels becomes optional — issues matching the assignee are
# processed regardless of labels if trigger_labels is empty.
trigger_assignee = "Jane Smith"
# States that trigger automation
trigger_states = ["backlog", "todo"]
# Optional: Filter by team ID
team_id = ""
# Optional: Filter by project ID
project_id = ""
# Optional: Webhook signature verification secret
# Set via the CLAUDEAR_LINEAR_WEBHOOK_SECRET env var for security
webhook_secret = ""
# Per-source rate limiting (overrides global values if set)
max_issues_per_cycle = 3
max_concurrent = 2
# Polling interval in milliseconds for Linear source (overrides global)
# poll_interval_ms = 300000
# To use Sentry, provide an auth token from https://sentry.io/settings/account/api/auth-tokens/
[issues.sentry]
# Enable/disable Sentry source (default: true if auth_token provided)
enabled = true
# Sentry auth token (REQUIRED for Sentry)
auth_token = ""
# Sentry organization slug (REQUIRED if using Sentry)
org_slug = "your-org"
# Optional: Filter by project slugs
project_slugs = []
# Number of top issues to fetch (default: 100)
top_issues_count = 100
# Time period for fetching top issues (default: 24h)
# Options: 1h (1 hour), 12h (12 hours), 24h (1 day), 7d (1 week), 30d (1 month)
top_issues_period = "24h"
# Minimum event count for issue to be processed (default: 10)
min_event_count = 10
# Percentage increase to consider issue escalating (default: 50)
escalation_threshold_percent = 50
# Optional: Webhook client secret for signature verification
# Set via the CLAUDEAR_SENTRY_CLIENT_SECRET env var for security
client_secret = ""
# Per-source rate limiting (overrides global values if set)
max_issues_per_cycle = 2
max_concurrent = 4
# Polling interval in milliseconds for Sentry source (overrides global)
# poll_interval_ms = 300000
# To use Jira Cloud, provide an API token from https://id.atlassian.com/manage-profile/security/api-tokens
# To use Jira Server/DC, provide a personal access token (PAT).
[issues.jira]
# Enable/disable Jira source (default: true if api_token provided)
enabled = true
# Jira base URL (REQUIRED for Jira)
# Cloud: "https://your-domain.atlassian.net"
# Server/DC: "https://jira.your-company.com"
base_url = ""
# Email address for Basic auth (REQUIRED for Jira Cloud)
email = ""
# API token (Cloud) or personal access token (Server/DC)
# Set via JIRA_API_TOKEN env var for security
api_token = ""
# Authentication mode: "basic" (email:token, Jira Cloud) or "bearer" (PAT, Server/DC)
auth_mode = "basic"
# Jira project keys to monitor (e.g., ["PROJ", "BACKEND"])
project_keys = []
# Labels that trigger automation (default: ["auto-implement", "claude"])
trigger_labels = ["auto-implement", "claude"]
# Statuses that trigger automation (default: ["To Do", "Backlog"])
trigger_statuses = ["To Do", "Backlog"]
# Optional: Only process issues assigned to this user (display name).
# When set, trigger_labels becomes optional — issues matching the assignee are
# processed regardless of labels if trigger_labels is empty.
trigger_assignee = "Jane Smith"
# Optional: Filter by issue types (e.g., ["Bug", "Task", "Story"])
issue_types = ["Bug", "Task"]
# Optional: Custom JQL appended to the generated query
custom_jql = "priority = High"
# Maximum results per search request (default: 50, max: 100)
max_results = 50
# Per-source rate limiting (overrides global values if set)
max_issues_per_cycle = 3
max_concurrent = 2
poll_interval_ms = 300000
# Discord as an issue source (messages become issues)
# Shared credentials (bot_token, channel_id) are inherited from notifiers.discord if not set here.
[issues.discord]
# Bot token for reading messages (inherited from notifiers.discord if omitted)
# bot_token = ""
# Channel to listen for issue messages
# listen_channel_id = ""
# Guild (server) ID for constructing message URLs
# guild_id = ""
# Polling interval in milliseconds (overrides global)
# poll_interval_ms = 300000
# Slack as an issue source (messages become issues)
# Shared credentials (bot_token, channel_id) are inherited from notifiers.slack if not set here.
[issues.slack]
# Bot token for reading messages (inherited from notifiers.slack if omitted)
# bot_token = ""
# Channel to listen for issue messages
# listen_channel_id = ""
# Workspace name for constructing message URLs
# workspace = "mycompany"
# Polling interval in milliseconds (overrides global)
# poll_interval_ms = 300000
# ============================================
# Notifiers
# ============================================
[notifiers.discord]
# Discord webhook URL for notifications
webhook_url = "https://discord.com/api/webhooks/..."
# Discord user ID to mention in notifications
user_id = ""
# Bot token for reply polling (required for Discord replies and ask-questions)
bot_token = ""
# Channel ID to poll for replies
channel_id = ""
# Guild (server) ID for constructing message URLs
guild_id = ""
[notifiers.slack]
# Slack Bot Token (xoxb-) for API calls
bot_token = ""
# Slack channel ID for notifications
channel_id = ""
# Incoming Webhook URL (optional, notification-only alternative to bot token)
# webhook_url = "https://hooks.slack.com/services/..."
# Slack user ID to mention in notifications
user_id = ""
# Workspace name for constructing message URLs
# workspace = "mycompany"
[notifiers.email]
# SMTP server host
smtp_host = "smtp.gmail.com"
# SMTP server port (default: 587)
smtp_port = 587
# SMTP username
smtp_username = ""
# SMTP password
smtp_password = ""
# Sender email address
from_address = ""
# Recipient email addresses
to_addresses = []
# Use TLS (default: true)
use_tls = true
# IMAP host for reply polling (required for Email replies)
imap_host = "imap.gmail.com"
# IMAP port (default: 993)
imap_port = 993
# IMAP username
imap_username = ""
# IMAP password
imap_password = ""
# Use TLS for IMAP (default: true)
imap_use_tls = true
# IMAP folder to scan for replies (default: INBOX)
imap_folder = "INBOX"
[notifiers.sms]
# Twilio Account SID
account_sid = ""
# Twilio Auth Token
auth_token = ""
# Twilio phone number (sender)
from_number = ""
# Recipient phone numbers
to_numbers = []
[notifiers.push]
# Pushover API token
api_token = ""
# Pushover user key
user_key = ""
# Optional: Device name (sends to all devices if empty)
device = ""
# Optional: Priority (-2 to 2)
priority = 0
[notifiers.whatsapp]
# WhatsApp Business phone number ID
# phone_number_id = ""
# Meta Graph API access token
# access_token = ""
# Default recipient phone numbers
# to_numbers = []
[notifiers.telegram]
# Telegram Bot API token
# bot_token = ""
# Default chat ID for notifications
# chat_id = ""
# Additional recipient chat IDs
# to_chat_ids = []
# ============================================
# Monitoring
# ============================================
# Sentry error monitoring (set via environment variables):
# CLAUDEAR_SENTRY_DSN - Sentry DSN for backend error reporting (disabled when empty/unset)
# CLAUDEAR_SENTRY_ENVIRONMENT - Environment name (e.g. "production", "staging")
# CLAUDEAR_SENTRY_RELEASE - Release tag (auto-detected if unset)
#
# Tracks post-fix regressions after releases.
[regression]
# Enable/disable regression monitoring (default: true)
enabled = true
# How often to check for regressions in hours (default: 1)
check_interval_hours = 1
# Monitoring window duration after release in hours (default: 24)
monitoring_duration_hours = 24
# Minimum Sentry event count to consider regression candidate (default: 1)
sentry_event_threshold = 1
# Semantic similarity threshold for matching related issues (default: 0.75)
similarity_threshold = 0.75
# Target repositories that indicate releases are live
target_repos = []
# Optional: GitHub token override for regression issue search
# Falls back to scm.github.token when empty
github_token = ""
# Optional: Repositories to search for similar issues
github_search_repos = []
# Optional: Repo name -> package name overrides when they differ
# Each repo can map to multiple package names.
# [regression.package_names]
# "utopia-php/database" = ["utopia-php/database"]
# Enables chained follow-up fixes across repositories.
[cascade]
# Enable/disable cascade chaining (default: false)
enabled = false
# Maximum cascade depth (default: 0 = unlimited)
max_depth = 0
# Per-dependency cascade rules (optional).
# Each rule controls how a specific upstream->downstream pair is cascaded.
# If no rule matches, default behavior applies (trigger on release, update version).
# [[cascade.rules]]
# upstream = "org/library"
# downstream = "org/application"
# trigger = "release"       # "merge" or "release" (default: "release")
# target_branch = "develop" # Override downstream branch (default: repo default)
# version_update = true # Update dependency version in downstream (default: true)
# instructions = "Run composer install after updating the dependency"
# ============================================
# Continuous Learning
# ============================================
# Accumulates knowledge from Claude's execution logs, PR diffs, Q&A answers,
# and review feedback to improve future fix quality.
[learning]
# Auto-extract learnings from Claude execution logs (default: true)
auto_extract_learnings = true
# Analyze PR diffs on merge (default: true)
diff_analysis = true
# Promote repeated Q&A answers to standing instructions (default: true)
qa_promotion = true
# Minimum occurrences before Q&A answer is promoted (default: 2)
qa_promotion_threshold = 2
# Accumulate per-repo knowledge from successful fixes (default: true)
repo_knowledge = true
# Classify review feedback patterns (default: true)
review_classification = true
# Minimum occurrences before review pattern is promoted (default: 3)
review_promotion_threshold = 3
# Track how Claude approaches fixes (default: true)
strategy_fingerprinting = true
# Score fix quality based on merge velocity (default: true)
quality_scoring = true
# Detect clusters of correlated issues (default: true)
cluster_detection = true
# Time window for cluster detection in minutes (default: 30)
cluster_window_minutes = 30
# Minimum issues to form a cluster (default: 3)
min_cluster_size = 3
# Auto-generate AGENT.md from accumulated knowledge (default: false)
auto_agent_md = false
# ============================================
# Prioritisation Engine
# ============================================
# Computes composite severity scores from multiple signals, classifies blast
# radius, clusters content-similar issues, and evaluates suppression rules.
[prioritisation]
# Enable/disable the prioritisation engine (default: true).
# When false, the legacy two-level sort (MatchPriority then IssuePriority) is used.
enabled = true
# Component weights (must sum to ~1.0 for intuitive scores)
severity_weight = 0.30
frequency_weight = 0.25
regression_weight = 0.20
blast_radius_weight = 0.15
cluster_weight = 0.10
# Path patterns for blast radius classification.
# Issues touching these paths are classified at the corresponding tier.
# Case-insensitive segment matching (split on /, \, ., _, -) against filename, function, and culprit metadata.
critical_paths = ["auth", "payment", "billing", "security", "login", "oauth"]
core_paths = ["api", "core", "middleware", "router", "handler"]
infra_paths = ["deploy", "infra", "ci", "docker", "terraform", "k8s", "database", "migration"]
test_paths = ["test", "spec", "fixture", "mock"]
cosmetic_paths = ["readme", "changelog", "license", "docs", "md"]
# Content clustering (groups similar issues by error type + culprit + title similarity)
content_clustering = true
cluster_similarity_threshold = 0.60
min_content_cluster_size = 2
# Suppression rules: skip known-noisy issues before they consume processing slots.
# Each rule matches a field against a pattern. First matching rule wins.
# suppression_rules = [
# { name = "flaky-ci", field = "title", pattern = "flaky", match_mode = "contains", reason = "Known flaky test" },
# { name = "rate-limits", field = "error_type", pattern = "RateLimitError", match_mode = "contains", sources = ["sentry"], reason = "Transient rate limit errors" },
# { name = "docs-typos", field = "filename", pattern = "readme", match_mode = "contains", reason = "Cosmetic documentation issues" },
# { name = "bot-noise", field = "title", pattern = "^\\[bot\\]", match_mode = "regex", reason = "Automated bot issues" },
# ]
# ============================================
# Code Indexing
# ============================================
# Tree-sitter based code indexing for semantic search across repositories.
[code_index]
# Enable tree-sitter code indexing (default: true)
enabled = true
# Maximum file size to index in KB (default: 1024)
max_file_size_kb = 1024
# Embedding batch size (default: 32)
batch_size = 32
# ============================================
# Self-Evaluation
# ============================================
# Runs before/after comparisons (tests, lint, static analysis, coverage)
# to validate fixes before submitting PRs.
[evaluation]
# Enable evaluation (default: false, opt-in — can be slow)
enabled = false
# Run test before/after comparison (default: true)
test_delta = true
# Run lint before/after comparison (default: true)
lint_delta = true
# Run static analysis before/after comparison (default: true)
static_analysis_delta = true
# Run coverage before/after comparison (default: false, slowest)
coverage_delta = false
# Timeout per tool in seconds (default: 300)
tool_timeout_secs = 300
# Total timeout for all tools in seconds (default: 900)
total_timeout_secs = 900
# Post evaluation results as PR comment (default: true)
post_pr_comment = true
# Fail the fix attempt on regression (default: false)
fail_on_regression = false
# Custom command overrides (auto-detected when empty)
# custom_test_cmd = "npm test"
# custom_lint_cmd = "npm run lint"
# custom_analysis_cmd = "phpstan analyse"
# custom_coverage_cmd = "npm run coverage"
# ============================================
# Local LLM
# ============================================
# Optional local model for offline repo classification and code chat.
[llm]
# Enable the local LLM (default: false)
enabled = false
# Path to the GGUF model file
model_path = "~/.cache/claudear/models/qwen2.5-coder-3b-instruct-q4_k_m.gguf"
# Context window length in tokens (default: 16384)
context_length = 16384
# Number of layers to offload to GPU (0 = CPU only, 99 = all; default: 99)
gpu_layers = 99
# Maximum time in seconds for a single LLM inference call (0 = no limit; default: 120)
inference_timeout_secs = 120
# Use the configured agent (claude/codex) for repo classification instead of
# the local model. Much faster but costs API credits. (default: false)
use_agent = false
# ============================================
# Dashboard
# ============================================
# Display and cost estimation settings for the web dashboard.
[dashboard]
# Monthly cost of Claude Max plan (used to estimate per-fix cost when
# total_cost_usd is not available from CLI). Set to 0 to disable. (default: 0.0)
max_plan_monthly_cost = 0.0
# Hourly engineer rate for cost-savings calculation (default: 75.0)
hourly_engineer_rate = 75.0
# ============================================
# TLS Auto-Provisioning (Let's Encrypt)
# ============================================
# Automatically provision and renew TLS certificates using the ACME
# TLS-ALPN-01 challenge. No reverse proxy required.
#
# When enabled, the server listens on https_port (default 443) for HTTPS
# and optionally redirects HTTP traffic from http_redirect_port (default 80).
# When disabled, behavior is unchanged (plain HTTP on webhook_port).
#
# Environment variable overrides:
# CLAUDEAR_TLS_ENABLED tls.enabled
# CLAUDEAR_TLS_DOMAINS tls.domains (comma-separated)
# CLAUDEAR_TLS_EMAIL tls.email
# CLAUDEAR_TLS_PRODUCTION tls.production
# CLAUDEAR_TLS_CACHE_DIR tls.cache_dir
# CLAUDEAR_TLS_HTTPS_PORT tls.https_port
# CLAUDEAR_TLS_HTTP_REDIRECT_PORT tls.http_redirect_port
[tls]
# Enable automatic TLS certificate provisioning (default: false)
enabled = false
# Domain names to provision certificates for (required when enabled)
# domains = ["claudear.example.com"]
# Contact email for Let's Encrypt notifications (recommended)
# email = "admin@example.com"
# Use Let's Encrypt production environment (default: false = staging)
# production = false
# Directory for caching ACME certificates (default: ./acme_cache)
# cache_dir = "./acme_cache"
# HTTPS port (default: 443)
# https_port = 443
# HTTP port for HTTP->HTTPS redirect (default: 80, set to 0 to disable)
# http_redirect_port = 80
#[embedding]
ONNX embedding model settings for code indexing and semantic search. GPU acceleration requires building with --features cuda.
| Field | Type | Default | Description |
|---|---|---|---|
gpu | bool | false | Try CUDA execution provider for GPU-accelerated embeddings. Falls back to CPU gracefully |
device_id | int | 0 | CUDA device index |
pool_size | int | 0 | Model instance pool size (0 = auto-detect from CPUs/RAM). GPU should use 1 |
sub_batch_size | int | 0 | Sub-batch size (0 = auto-detect). GPU can handle 64-256 vs CPU 4-16 |
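A GPU-enabled embedding setup might look like the following sketch (values are illustrative, not tuned recommendations):

```toml
[embedding]
# Try the CUDA execution provider; falls back to CPU gracefully
gpu = true
# CUDA device index
device_id = 0
# A single model instance is recommended on GPU
pool_size = 1
# Larger sub-batches are practical on GPU (CPU typically uses 4-16)
sub_batch_size = 128
```

Remember that GPU acceleration only takes effect in builds compiled with `--features cuda`.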
#[code_index]
Tree-sitter based code indexing for semantic search across repositories.
| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | true | Enable tree-sitter code indexing |
max_file_size_kb | int | 1024 | Maximum file size to index in KB |
batch_size | int | 32 | Embedding batch size |
reindex_interval_hours | float | 6.0 | Hours between re-indexing all repos (0 = disable) |
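As a sketch, a configuration that indexes larger files and re-indexes twice a day (values illustrative):

```toml
[code_index]
enabled = true
# Skip files larger than 2 MB
max_file_size_kb = 2048
# Number of chunks embedded per batch
batch_size = 64
# Re-index every 12 hours (0 disables periodic re-indexing)
reindex_interval_hours = 12.0
```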
#[evaluation]
Runs before/after comparisons (tests, lint, static analysis, coverage) to validate fixes before submitting PRs.
| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable self-evaluation (opt-in, can be slow) |
test_delta | bool | true | Run test before/after comparison |
lint_delta | bool | true | Run lint before/after comparison |
static_analysis_delta | bool | true | Run static analysis before/after comparison |
coverage_delta | bool | false | Run coverage before/after comparison (slowest) |
tool_timeout_secs | int | 300 | Timeout per tool in seconds |
total_timeout_secs | int | 900 | Total timeout for all tools |
post_pr_comment | bool | true | Post evaluation results as PR comment |
fail_on_regression | bool | false | Fail the fix attempt on quality regression |
custom_test_cmd | string | auto | Custom test command (auto-detected when empty) |
custom_lint_cmd | string | auto | Custom lint command |
custom_analysis_cmd | string | auto | Custom static analysis command |
custom_coverage_cmd | string | auto | Custom coverage command |
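For instance, a Node.js project that opts in to evaluation and pins its own commands instead of relying on auto-detection might use something like this (the `npm` commands are assumptions about the target project, not defaults):

```toml
[evaluation]
enabled = true
test_delta = true
lint_delta = true
# Coverage comparison is the slowest step; leave off unless needed
coverage_delta = false
# Explicit commands override auto-detection
custom_test_cmd = "npm test"
custom_lint_cmd = "npm run lint"
```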
#[llm]
Local inference model used for repo classification and code chat. Runs entirely offline using llama.cpp.
| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable the local LLM |
model_path | string | — | Path to the GGUF model file |
model_url | string | — | Download URL (auto-downloaded on startup if missing) |
context_length | int | 8192 | Context window length in tokens |
gpu_layers | int | 99 | Layers to offload to GPU (0 = CPU only, 99 = all) |
threads | int | 0 | Inference threads (0 = auto-detect) |
inference_timeout_secs | int | 120 | Max time per inference call in seconds |
use_agent | bool | false | Use the configured agent (claude/codex) for repo classification instead. Faster but costs API credits |
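A CPU-only local LLM setup could look like the following sketch (the model path mirrors the example config above; any llama.cpp-compatible GGUF model should work):

```toml
[llm]
enabled = true
model_path = "~/.cache/claudear/models/qwen2.5-coder-3b-instruct-q4_k_m.gguf"
context_length = 8192
# 0 layers offloaded = CPU-only inference
gpu_layers = 0
# Pin inference threads instead of auto-detecting
threads = 8
```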
#[chat]
Local code chat feature. Requires [llm] to also be enabled.
| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable the chat feature |
temperature | float | 0.7 | Generation temperature |
top_p | float | 0.9 | Top-p sampling |
max_tokens | int | 2048 | Max tokens per response |
max_context_chunks | int | 10 | Code chunks to retrieve per query |
max_history_messages | int | 20 | Conversation history messages to include |
session_ttl_days | int | 7 | Session TTL in days, cleaned by housekeeping |
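A minimal chat configuration, sketched with illustrative sampling values, might be:

```toml
# Requires [llm] to be enabled as well
[chat]
enabled = true
# Lower temperature for more deterministic answers
temperature = 0.3
max_tokens = 1024
# Retrieve fewer code chunks per query to keep prompts small
max_context_chunks = 8
```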
#[tls]
Automatic TLS certificate provisioning using Let’s Encrypt ACME TLS-ALPN-01 challenge. No reverse proxy required.
| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable automatic TLS provisioning |
domains | list | — | Domain names to provision certificates for |
email | string | — | Contact email for Let’s Encrypt notifications |
production | bool | false | Use production environment (false = staging) |
cache_dir | string | "./acme_cache" | Directory for caching ACME certificates |
https_port | int | 443 | HTTPS listen port |
http_redirect_port | int | 80 | HTTP→HTTPS redirect port (0 = disable) |
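Putting it together, a typical TLS setup starts against the staging environment (the domain and email below are placeholders):

```toml
[tls]
enabled = true
domains = ["claudear.example.com"]
email = "admin@example.com"
# Validate against staging first to avoid Let's Encrypt rate limits,
# then switch to production once certificates issue cleanly
production = false
```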