Ryngo

The map your coding agent is missing.

Ryngo is a deterministic code-to-graph compiler with an MCP server. Paste a GitHub URL, get a typed node-editor. Mark intents in Ryngo.md; Claude Code, Cursor, Codex, and Aider all read the same file.

843× smaller context. Same code map. Topology in 0.12% of raw tokens · single symbol in 0.006% · focused subgraph in 0.28%.

Paste a GitHub URL. Get a typed node-editor of your codebase — files, functions, types, routes, db models — grouped into stack layers. Mark intents in markdown your AI agent reads.

Try it on your repo → Install the MCP server → View source →
ryngo.ai/app · karpathy/autoresearch
Live · clones a real public repo · no LLM inference · same map at ten files and at ten thousand
56 repos compiled · 293.6k nodes generated · 66.9M tokens compressed · 843× smaller agent context

Built by Marshall Doyle. Try a different repo above, or paste your own.

Compiler signal, not a bigger prompt

Ryngo compresses a repo into deterministic representations: topology, compact IR, view models, subgraphs, and exact source anchors. The point is to feed an agent the map it needs instead of asking it to reread every file on every turn.

Topology · smallest

Bird's-eye repo map for planning and triage.

Compact IR · agent-ready

Nodes, edges, signatures, adapters, and warnings.

Subgraph · precise

Only the neighborhood around the thing being changed.

How to use Ryngo

  1. Paste a GitHub URL

    Open /app and drop in a public repo. Ryngo clones shallow, parses every file into the IR, and throws the source away. No code stored.

  2. Pick a view

    The Layers view groups every file into Frontend / Backend / Data / Infra / Tests / Config: your product as a graph. The Files view drills into per-function detail with typed ports for params and return values. Toggle anytime; the choice persists.

  3. Mark intents → your AI reads them

    Right-click any node: refactor, extract, delete, add tests. Each intent is saved as a markdown file in .ryngo/intents/. Claude Code, Codex, Cursor, Aider — any agent that reads your repo — picks them up automatically. When the agent applies the change, click Verify and Ryngo diffs the IR against your stated intent.
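The exact on-disk shape of an intent file isn't specified on this page; as a hypothetical sketch (file name and field names are assumptions, not the real schema), one entry in .ryngo/intents/ might look like:

```markdown
<!-- .ryngo/intents/refactor-authenticate.md — illustrative only -->
# Intent: refactor

- node: def:src/auth/login.ts#authenticate
- action: extract the token-refresh branch into its own helper
- status: pending

Keep the public signature unchanged; add a unit test for the
extracted helper before deleting the inline branch.
```

Because it is plain markdown keyed on a stable node id, any agent that can read files in the repo can act on it without Ryngo-specific tooling.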

Install the MCP server

Ryngo ships an MCP (Model Context Protocol) server so Claude Code, Claude Desktop, ChatGPT Apps, Cursor, and any other MCP-compatible client can query your codebase map directly. Nine tools: analyze_repo, get_topology, get_compact_ir, get_subgraph, english_signature, find_node, list_intents, read_intent, list_annotations. No inference happens on our side — your agent uses your model.

Claude Code

Add to ~/.config/claude-code/mcp.json:

{
  "mcpServers": {
    "ryngo": {
      "command": "npx",
      "args": ["-y", "ryngo-mcp"]
    }
  }
}

Restart Claude Code; the nine Ryngo tools appear automatically.

Claude Desktop

Same JSON shape; the file lives at ~/Library/Application Support/Claude/claude_desktop_config.json on macOS.

ChatGPT Apps SDK + hosted MCP connectors

We expose Streamable HTTP MCP at https://ryngo.ai/mcp. Point any hosted MCP connector at that URL — no installation, no local process. Same 9 tools.
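For a sense of what a hosted connector sends over Streamable HTTP, here is a sketch of an MCP `tools/call` JSON-RPC request for `get_topology`, POSTed to https://ryngo.ai/mcp. The argument name `repo` is an assumption; fetch the real schema with a `tools/list` call first.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_topology",
    "arguments": { "repo": "github.com/tiangolo/sqlmodel" }
  }
}
```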

Self-host

git clone https://github.com/MarshallDoyle/Ryngo
cd Ryngo/mvp
npm install
npm run mcp        # stdio MCP for Claude Code / Codex
npm start          # HTTP server (API + /mcp + landing + /app)

Try it from your agent: "Use ryngo to give me the topology of github.com/tiangolo/sqlmodel, then list every function that takes a Session parameter."

Token efficiency evals

A corpus-backed sketch: it compares raw repo context with the projections Ryngo can hand to an agent. The plot is normalized to a 1.1M-token repo so the shape is readable; the ratios come from the latest `npm run corpus` token report.

53-repo corpus median · raw context collapses into map-sized context (input-token cost; model prices at right).

[Plot: context spent per question, log scale. Raw files 1.1M tokens · $3.30 ("this is the money leak") → Compact IR 669k · $2.01 → ViewModel 153k · $0.46 → focused subgraph 3.1k · less than 1¢ → topology 1.3k · less than 1¢ → node signature ~70 tokens ("agent gets the map, not the repo dump").]

[Chart: input cost per repo question, 1.1M raw tokens vs 1.3k topology tokens. Claude Opus ($5 / 1M input): $5.50 raw, Ryngo map less than 1¢, saves $5.49. Claude Sonnet ($3 / 1M): $3.30 raw, saves $3.30. Claude Haiku ($1 / 1M): $1.10 raw, saves $1.10. GPT-5.5 ($5 / 1M): $5.50 raw, saves $5.49. Input context only; output cost and model quality are separate.]
843× smaller · topology median: 0.12% of raw context.
352× smaller · focused subgraph median: 0.28% of raw context.
15k× smaller · one source-backed node signature: 0.006% of raw context.
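The headline ratios are simple arithmetic over the medians quoted above; a quick sanity check (numbers from this page, rounding explains the small drift against the 843× and 352× headlines):

```javascript
// Re-derive the compression ratios and dollar savings from the page's figures.
const rawTokens = 1_100_000;   // median raw repo context
const topologyTokens = 1_300;  // topology projection
const subgraphTokens = 3_100;  // focused subgraph

const ratio = (raw, proj) => raw / proj;
console.log(Math.round(ratio(rawTokens, topologyTokens))); // 846 (page rounds to 843×)
console.log(Math.round(ratio(rawTokens, subgraphTokens))); // 355 (page rounds to 352×)

// Input-cost saving at $3 per 1M input tokens (the Claude Sonnet price above):
const costPerToken = 3 / 1_000_000;
const saved = (rawTokens - topologyTokens) * costPerToken;
console.log(saved.toFixed(2)); // "3.30"
```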

The honest signal: compact IR is still a compiler artifact, not the thing we should blindly paste into an LLM. The user-facing win comes from topology, source-backed signatures, and k-hop subgraphs.

Language slices

TypeScript/JavaScript, Python/Jupyter, Go, Java/Spring, Ruby/Rails, Rust, C#, and HCL.

Representation slices

Raw files, topology markdown, compact IR, RyngoViewModel, k-hop subgraph, and node signature.

Quality slices

Parse recall, route/model/env extraction, warning precision, and exact source-anchor coverage.

Who Ryngo is for

Vibe coders

You prompted your way to a product and within a week couldn't answer what your own product actually does. Ryngo is the loop closer: see the structure, mark the change, hand the marks back to your AI of choice. Every annotation lives in a markdown file your agent reads.

PMs & execs

See the system you own. Ask "what's in scope?" without messaging engineering. Share a link to a real architecture map instead of holding another meeting. No engineering jargon — Frontend / Backend / Data / Infra and counts that mean what they say.

Engineers

Stop explaining the codebase to PMs and execs. Send them a Ryngo link. They pan around. You get your afternoon back. Same map your agents see — coordination overhead drops to zero.

Latest

Newest at the top. Auto-generated from CHANGELOG.md every deploy.

Full changelog →

Coming soon

Three threads in flight. The plan + per-agent claims live in AGENTS.md.

in progress

Tree-sitter parsers

Swap the regex extractors for tree-sitter in TS and Python. Same IR shape, deeper signatures, fewer edge-case misses on JSX, generics, and decorators. Drop-in on the existing parser registry; no downstream changes.

next

Go + Rust support

Go via go list -deps -json, Rust via rust-analyzer scip. Real types, real cross-crate edges. Lights up the typed-pipe coloring on a second language family beyond TS and Python.

queued

Multi-repo aggregation

One Ryngo view across your whole org's repos. Cross-repo call graphs, shared regions, service-level diff. Sits on top of today's per-repo IR with a federation layer.

Three ways to plug Ryngo.md into your agent

Every Ryngo repo gets one file at its root — Ryngo.md — that holds the comments you've left on nodes and the warnings you've dismissed. Plain Markdown, diff-friendly, edited from the viewer or from your IDE. Pick whichever of these three paths matches how your agent already works; the file is the same on all three.

1 · live

Connect via MCP

For agents in an active coding loop — Claude Code, ChatGPT MCP, Cursor, any MCP-aware harness. After the one-time install, your agent gets a read_ryngo_md tool. It calls it at the start of a session, sees your comments and suppressions, and respects them on every edit.

$ npx ryngo-mcp install
# adds Ryngo to your MCP config — restart your agent

# Agent now has these tools:
#   read_ryngo_md          ← reads the manifest
#   get_compact_ir         ← reads the typed code map
#   list_intents           ← reads pending refactor markers

Best for: ongoing development, multi-step refactors, anything where the agent is editing on your behalf.

Full install guide: ↑ Install MCP.

2 · one-off

Copy & paste

For chat UIs that don't speak MCP — ChatGPT.com, Claude.ai, Gemini, Perplexity. Open the Ryngo viewer, click View Ryngo.md in the inspector, copy. Paste at the top of your prompt and ask anything.

# your prompt to ChatGPT / Claude.ai
Here's my repo's Ryngo manifest:

---
{paste from Ryngo viewer's "Copy" button}
---

The comments above explain what each function does. The
suppressions tell you which warnings I've already considered
and chosen to ignore. With that context: please refactor
src/auth/login.ts to support refresh tokens.

Best for: one-off questions, second opinions, any time you don't want to bring up an agent harness.

The file is small (typically < 10 KB even for big repos). One paste fits in any model's context.

3 · persistent

Commit to your repo

For teams and anyone who wants the manifest to follow the code — vibe coders shipping daily, engineering teams, anyone whose AI sometimes loses context between sessions. Download Ryngo.md and commit it at repo root. It's auto-discovered by:

  • Cursor — via the .cursorrules Ryngo generates
  • Claude Code — via the CLAUDE.md Ryngo generates
  • Aider, Continue, Codex — via the AGENTS.md convention
  • Your own scripts — it's plain Markdown
# save the manifest at your repo root
curl -O "https://ryngo.ai/api/ryngo-md/download?repo=you/yourrepo"

# or from the viewer:
#   inspector → "View Ryngo.md" → "Save .md"

# then commit it like any other file
git add Ryngo.md
git commit -m "ryngo: dismiss intentional warnings + auth notes"

Best for: teams, code review, anything where a comment or a dismissed warning should outlive a single chat.

PR-review-friendly. Every dismissed warning is a one-liner diff with the reason attached.

What's in Ryngo.md exactly?

Two sections today, both keyed on stable node ids (def:src/foo.ts#bar, file:src/foo.ts, cell:notebook.ipynb#3, …). Forward-compatible ## Connections / ## Expose / ## Flags sections round-trip verbatim so future additions don't break old manifests.

# Ryngo

## Comments

### def:src/auth/login.ts#authenticate
> handles refresh-token rotation; touch carefully
> — marshall, 2026-05-10

## Suppressions

### def:src/auth/login.ts#authenticate
- nested-loop · items.length is bounded; intentional brute force
- recursion · tail-recursive; engine optimizes

Stable node ids mean comments survive renames as long as the symbol survives. Round-trip property test in mvp/lib/ryngo-md.js: serialize → parse → equal.
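A minimal sketch of parsing those ids, assuming only the `kind:path` / `kind:path#symbol` shape shown above (this is illustrative, not the actual mvp/lib/ryngo-md.js implementation):

```javascript
// Split a Ryngo node id ("def:src/foo.ts#bar", "file:src/foo.ts", ...)
// into its kind, file path, and optional symbol.
function parseNodeId(id) {
  const colon = id.indexOf(":");
  if (colon === -1) throw new Error(`not a node id: ${id}`);
  const kind = id.slice(0, colon);
  const rest = id.slice(colon + 1);
  const hash = rest.indexOf("#");
  return hash === -1
    ? { kind, path: rest, symbol: null }
    : { kind, path: rest.slice(0, hash), symbol: rest.slice(hash + 1) };
}

console.log(parseNodeId("def:src/auth/login.ts#authenticate"));
// { kind: "def", path: "src/auth/login.ts", symbol: "authenticate" }
console.log(parseNodeId("file:src/foo.ts"));
// { kind: "file", path: "src/foo.ts", symbol: null }
```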

Free, while we're shipping

Ryngo is free to use today — paste any public GitHub URL, get the map. No account, no waitlist, no card. The MCP server is open source.

$0

Free

  • Public GitHub repos, every language we parse
  • Live typed-port node viewer at /app
  • Compiler warnings (O(n²), I/O-in-loop, recursion, …)
  • Ryngo.md persistence per repo
  • MCP server — install with one command
  • Token-efficiency benchmark you can re-run yourself
Try it on your repo →

A paid tier with private repos + team accounts is on the roadmap. Until then, every feature is free for everyone.

FAQ

Does Ryngo store my code?
No. We clone shallow, build the IR, and throw the source away. Nothing remains on our disk after the response is sent.
Who is building this?
Ryngo is built by Marshall Doyle. The public repo is MarshallDoyle/Ryngo.
Does Ryngo call an LLM?
No. Ryngo never runs inference. Your agent uses our MCP server to fetch maps, then calls whatever model it's paying for. We are not a coding agent.
Which languages work?
TypeScript, JavaScript, Python, and Jupyter notebooks are first-class today. Files in Go, Rust, Java, Ruby, C# appear as nodes; deep extraction requires the language toolchain and is in flight.
How big a repo can I throw at it?
No file cap. Compilation is deterministic regex + tree-walk — even a 50k-file monorepo finishes in seconds. The wall is your `git clone` time and the per-request timeout, not the analyzer.
Does it warn me about expensive code?
Yes — every function is checked against a small set of heuristics: nested loops (potentially O(n²) / O(n³)), I/O calls inside a loop (likely N+1 queries), recursion without memoization, very long functions, too many parameters, deeply nested control flow. Affected nodes get a red / amber ⚠ badge in the Files view; hover the badge for the specific reason. Heuristic, not a real linter — directional signal, not gospel.
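For illustration, here is a function that would plausibly earn both the nested-loop and I/O-in-loop badges (all identifiers are hypothetical, not from a real codebase):

```javascript
// Hypothetical code Ryngo's heuristics would flag.
async function attachOwners(orders, db) {
  for (const order of orders) {
    // ⚠ I/O in loop: one query per order instead of a single batched query (N+1)
    order.owner = await db.query(
      "SELECT * FROM users WHERE id = ?", [order.userId]
    );
    // ⚠ nested loop: a linear scan per order makes this O(n²)
    order.dupes = orders.filter(o => o.userId === order.userId).length;
  }
  return orders;
}
```

The fix is the usual one — batch the query and precompute the counts in a single pass — but as the FAQ says, the badge is directional signal, not a linter verdict.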
How do you measure token savings?
The planned eval compares raw source tokens against each Ryngo representation on the same corpus run: topology, compact IR, view model, focused subgraph, and single-node signature. The report should publish ratios by language and repo family, not just one headline number.
What does "better than pasting the repo into an LLM" mean?
It means the agent receives stable IDs, graph edges, source lines, signatures, routes, database models, env reads, and warnings directly. Fewer tokens is only half the value; the other half is that the context is structured instead of prose guessed from a file dump.
Can I see logo directions?
Yes — open the Ryngo logo lab for thirty SVG mark directions, or the icon lab for small layer/node corner-tab candidates.
Can I run it on a private repo?
Self-host the Docker image and point it at any git URL it can reach. Hosted private-repo support arrives later.
Is it open source?
Yes — github.com/MarshallDoyle/Ryngo. Self-hosting is supported; the deployment plan ships in mvp/docs/HOSTING.md.