Agent Readiness CLI - Codebase Prep for AI Agents

DevTools · Hacker News
Score: 10/15

  • Demand: Some Interest
  • Build: 2-Week Build
  • Market: Wide Open

The Problem

AI coding agents like Claude Code, Cursor, and Copilot are mainstream, but most codebases aren't prepared for them, leading to poor agent performance. Factory.ai coined the term 'Agent Readiness', but their tool is paid and proprietary. AI tools have increased PR shipping volume by 98% while review time is up 91%, shifting the bottleneck from writing code to preparing it. Over 2 million repositories use adjacent review tools like CodeRabbit, suggesting millions of dev teams need agent-specific prep. Developers already spend $10-40/user/month on related AI devtools.

Real Demand Evidence

Found on Hacker News · 1 month ago

Factory.ai coined the term Agent Readiness, but their solution is proprietary, cloud-only, and paid. So we built a free open-source alternative.

Core Insight

An open-source CLI with 39 checks across 7 pillars, purpose-built for Claude Code/Cursor/Copilot readiness. It fills the gaps competitors leave open: proprietary/paid models, diff-only analysis, high false-positive rates, and no agent-specific benchmarking.
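The pillar/check structure described above can be sketched as a small script. This is a hedged illustration only: the pillar names, check names, and candidate files below are hypothetical, not the actual 39 checks or 7 pillars any shipping CLI uses.

```python
from pathlib import Path

# Hypothetical pillars and checks -- illustrative only. A check passes
# when any of its candidate files or directories exists in the repo.
CHECKS = {
    "context": [
        ("agent instructions file", ["CLAUDE.md", ".cursorrules", "AGENTS.md"]),
        ("project README", ["README.md", "README.rst"]),
    ],
    "verification": [
        ("CI workflow", [".github/workflows", ".gitlab-ci.yml"]),
        ("test directory", ["tests", "test"]),
    ],
}

def score_repo(root: str) -> dict:
    """Return {pillar: (checks_passed, checks_total)} for a repo path."""
    base = Path(root)
    report = {}
    for pillar, checks in CHECKS.items():
        passed = sum(
            1
            for _name, candidates in checks
            if any((base / c).exists() for c in candidates)
        )
        report[pillar] = (passed, len(checks))
    return report

if __name__ == "__main__":
    # Print a per-pillar readiness score for the current directory.
    for pillar, (passed, total) in score_repo(".").items():
        print(f"{pillar}: {passed}/{total}")
```

A real tool would add weighting, remediation hints, and a machine-readable output format for CI, but the core loop (filesystem probes grouped into pillars, rolled up into a score) stays this simple.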

Target Customer
Indie hackers and solo founders building with Cursor/Claude Code/Copilot (market: 10M+ individual devs, with AI adoption growing 50% YoY per 2026 rankings) who need quick codebase audits without enterprise pricing.
Revenue Model
Freemium: Free open-source CLI for core scans; Pro SaaS at $15-25/user/month for hosted dashboards, auto-fixes, CI integrations (undercutting $24-40 competitors while adding agent-focused value)

Competitive Landscape

Factory.ai

Paid (proprietary, specific pricing not publicly detailed; contact sales)

Direct

Their Agent Readiness tool is proprietary, lacks open-source transparency, and requires payment without the comprehensive 39 checks across 7 pillars offered by Kodus CLI. No mention of support for specific tools like Claude Code, Cursor, or Copilot in public docs.

Greptile

$30/developer/month for unlimited reviews (discounts for annual)[1]

Adjacent

Offers deep codebase analysis for bug detection, though users report high false-positive rates, and it does not benchmark readiness for AI agents like Claude Code or Cursor against standardized pillars. Limited to GitHub and GitLab.

CodeRabbit

$24-30/developer/month[1]

Indirect

Provides surface-level diff-based AI code reviews on PRs with multi-platform support, but lacks full codebase readiness scoring for AI agents and does not perform 39 targeted checks for tools like Copilot or Claude Code.

CodeAnt AI

$24/user/month (Premium); Enterprise contact sales[2]

Indirect

Offers bundled AI reviews, SAST, and metrics across git platforms, but emphasizes PR bottleneck reduction rather than pre-agent codebase preparation with pillar-based scoring for specific AI coding agents.

Graphite Agent

$40/user/month[1]

Adjacent

Provides deep full-codebase analysis for teams using stacked PRs, on GitHub only, but has no explicit focus on AI agent readiness metrics or compatibility checks for Cursor, Copilot, or Claude Code.

Willingness to Pay

  • "We replaced SonarQube, cut review time from hours to seconds, and now pay a flat per-developer price..."

    https://www.codeant.ai/blogs/best-ai-code-review-tools[2]

    $24/user/month
  • Cursor pricing and plan changes are also a top concern, with 'Cursor: pay more, get less...' threads garnering ample community engagement.

    https://www.faros.ai/blog/best-ai-coding-agents-2026[6]

    $20/month
  • At $30/developer/month with a $180M valuation after its Benchmark-led Series A...

    https://dev.to/heraldofsolace/the-best-ai-code-review-tools-of-2026-2mb3[1]

    $30/developer/month
