Show HN: Graph-Oriented Generation – Beating RAG for Codebases by 89%

2026-03-06 20:39 · 122 points · github.com


Active Research Prototype — Contributions Welcome

Thank you to everyone who checked out the project after the Hacker News and Reddit posts.
The repository reached 24+ stars and 3 forks within the first day, which is very encouraging for an early research prototype.

GOG is currently under active development as part of an ongoing research effort (Paper #2 in progress). My primary focus right now is continuing work on the core mathematical engine, particularly:

  • deterministic dependency traversal
  • $O(1)$ plasticity concepts

Because of that, development time is mostly concentrated on the core algorithm and benchmark framework.

However, the surrounding ecosystem is intentionally open for collaboration. If you're interested in helping expand the project — whether through additional language parsers, benchmarking, or tooling improvements — contributions are very welcome.

Open issues highlight areas where help would be especially valuable.

GOG explores whether dependency graph traversal can replace vector retrieval for codebase reasoning in LLM workflows.

This repository evaluates the efficiency of Symbolic Reasoning Model (SRM) context isolation (GOG) compared to standard Retrieval-Augmented Generation (RAG) for large codebase understanding.

The benchmark consists of three core components:

  • Python Engine: Orchestrates the benchmark, parses the codebase, and interacts with the LLM API.
  • SRM Engine: Uses networkx to build a dependency graph of the codebase and isolate relevant files for a given prompt.
  • Benchmark Harness: A/B tests the context load and execution time between a full codebase dump (RAG) and isolated context (GOG).
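The isolation step described above can be sketched in a few lines. This is a hypothetical illustration (the function and variable names are not from the repository): build a directed dependency graph with networkx and keep only the files transitively reachable from the entry point named in the prompt.

```python
import networkx as nx

def isolate_context(dependencies, entry_file):
    """Hypothetical SRM-style isolation: dependencies maps each file to
    the files it imports; return the entry file plus everything it
    transitively depends on, dropping all unreachable noise files."""
    graph = nx.DiGraph()
    for src, targets in dependencies.items():
        graph.add_node(src)
        for dst in targets:
            graph.add_edge(src, dst)
    relevant = {entry_file} | nx.descendants(graph, entry_file)
    return sorted(relevant)

deps = {
    "main.py": ["auth.py", "db.py"],
    "auth.py": ["db.py"],
    "db.py": [],
    "unrelated.py": ["noise.py"],
    "noise.py": [],
}
print(isolate_context(deps, "main.py"))  # ['auth.py', 'db.py', 'main.py']
```

Only three of the five files reach the model's context; `unrelated.py` and `noise.py` are never considered because no dependency edge connects them to the entry file.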

Setup takes ~2–3 minutes on a typical machine.

  1. Install Dependencies:

    pip install -r requirements.txt
  2. Install OpenCode CLI: The benchmarking suite uses the opencode CLI for all LLM interactions. Install it via NPM:

  3. Generate the Maze: Inflate the target repository with 50+ dummy files and a hidden "needle" component.

    python3 generate_dummy_repo.py
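A minimal sketch of what a "maze" generator like this might do (file names and contents here are illustrative assumptions, not the script's actual output): write dozens of noise modules plus one needle module for the benchmark prompt to find.

```python
import os
import tempfile

def generate_maze(root, noise_count=50):
    """Hypothetical maze generator: noise_count filler modules plus one
    'needle' module hidden among them."""
    os.makedirs(root, exist_ok=True)
    for i in range(noise_count):
        with open(os.path.join(root, f"noise_{i}.py"), "w") as f:
            f.write(f"def filler_{i}():\n    return {i}\n")
    with open(os.path.join(root, "needle.py"), "w") as f:
        f.write("SECRET = 'needle'\n")

root = os.path.join(tempfile.mkdtemp(), "maze")
generate_maze(root)
print(len(os.listdir(root)))  # 51 files: 50 noise + 1 needle
```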

There are two primary ways to run the benchmark: via the Cloud-based OpenCode CLI or purely locally using an open-source Small Language Model (SLM) via Ollama.

Use this method to benchmark performance using state-of-the-art cloud models.

python3 benchmark_cloud_cli.py

Use this method to prove that GOG is so efficient that it can run entirely on local resources using small models like qwen. This removes API latency and costs completely.

Install Ollama & Prepare the Model:

  1. Download and install Ollama from ollama.com, or run:
    curl -fsSL https://ollama.com/install.sh | sh
  2. Pull the specified local LLM (e.g. qwen3.5:0.8b, or whichever you prefer):
    ollama pull qwen3.5:0.8b
  3. Run the local benchmark:
    python3 benchmark_local_llm.py

The SRM Engine should demonstrate a 70%+ reduction in token usage on average by deterministically tracing the precise dependency paths, ignoring the dozens of noise components that plague typical Vector RAG setups. Furthermore, the Local Compute Time metric will highlight the fundamental difference in overhead between $O(n)$ vector scaling and $O(1)$ graph traversal.
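The scaling contrast can be made concrete with a toy comparison (illustrative only, not the project's code): vector retrieval scores every one of $n$ chunk embeddings per query, while a dependency graph answers "what does this file import?" with a single adjacency lookup.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Vector RAG: O(n) scan over every embedding to find the best match.
embeddings = {
    "auth.py": [1.0, 0.0],
    "db.py": [0.0, 1.0],
    "noise.py": [0.5, 0.5],
}
query = [0.9, 0.1]
best = max(embeddings, key=lambda f: cosine(query, embeddings[f]))

# Graph traversal: one hash lookup into the adjacency list per step.
adjacency = {"main.py": ["auth.py", "db.py"]}
neighbors = adjacency["main.py"]

print(best)       # auth.py
print(neighbors)  # ['auth.py', 'db.py']
```

The scan grows linearly with repository size; the adjacency lookup does not, which is the intuition behind the overhead gap the Local Compute Time metric is meant to surface.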



Comments

  • By ysleepy 2026-03-07 9:10

    Is that a Paper without any citations?

    Also, how does it differ from providing a language-specific LSP MCP to the Agent?

    I dislike the hamfisted way Agents use grep to understand a statically typed codebase which has perfect code navigation in the IDE. So this is generally interesting, but it needs comparison to existing approaches.

    Also lol, seeking collaboration with frontier AI research labs.

    This has major crank vibes.

  • By jaen 2026-03-07 12:38

    Single testcase benchmark, no citations, inventing nonsense terms for trivial concepts like "Synaptic Plasticity", LLM-slop style writing.

    Nobody in their right mind would publish this to ArXiv. I suggest looking up and reading guides on how to write a research paper.
