FAQ
Section 1
A living map of your organization's knowledge, processes, and automation that humans and AI build together through real work.
Is Hadron Graph a database?
No. Hadron Graph is a knowledge engineering system. It uses a graph structure, but it's not a database product. The graph captures what your organization knows, how it works, and who does what — including the automation to act on it. It can be stored in files, databases, or both.
A wiki is documentation someone writes and nobody reads. A knowledge base is a search tool. GraphRAG retrieves context for AI prompts. Hadron Graph is different: it's built through work, not documentation effort. As humans and AI collaborate on real tasks, the graph emerges — capturing not just knowledge but processes, automation, ownership, and history. It's alive because it's used, not maintained.
How does an AI agent use the graph?
Through an MCP server — a small service that runs alongside the AI agent and exposes the graph as tools the AI can call. The agent gets tools to read knowledge nodes, run action nodes, and report what it used. From the AI's perspective, the graph is just a set of tools it reaches for when it needs to know something or do something. From your perspective, every read and every action is tracked.
The MCP runs wherever the AI runs: on a developer's laptop, on cloud infrastructure for a chatbot, or on an edge device in a factory. Each deployment gets its own synchronized copy of the relevant graph. MCP stands for Model Context Protocol, an open standard for connecting AI agents to external tools and data sources.
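To make the tool surface concrete, here is a minimal sketch in plain Python of what such a server might expose. The names (read_node, run_action, access_log) and the data shapes are illustrative assumptions, not Hadron's actual API or the real MCP SDK:

```python
# Hypothetical sketch of a graph tool server. Names and fields are
# assumptions for illustration; this is not Hadron's real interface.
from dataclasses import dataclass, field


@dataclass
class KnowledgeNode:
    node_id: str
    owner: str
    content: str


@dataclass
class GraphToolServer:
    """Exposes a knowledge graph as tools an AI agent can call."""
    nodes: dict[str, KnowledgeNode]
    access_log: list[str] = field(default_factory=list)

    def read_node(self, node_id: str) -> str:
        """Tool: return a node's content; every read is tracked."""
        self.access_log.append(f"read:{node_id}")
        return self.nodes[node_id].content

    def run_action(self, node_id: str, **params) -> str:
        """Tool: execute an action node; every action is logged."""
        self.access_log.append(f"action:{node_id}")
        return f"executed {node_id} with {params}"


server = GraphToolServer(nodes={
    "deploy-checklist": KnowledgeNode(
        "deploy-checklist", "ops-team",
        "1. run tests  2. tag release  3. deploy"),
})
print(server.read_node("deploy-checklist"))
print(server.access_log)
```

The point of the sketch is the last line: the agent just calls tools, while the server records every access behind the scenes.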
Every piece of knowledge in the graph goes through approval before it's accepted. This can be peer review, supervisor sign-off, or both. Approvals can expire, triggering re-review. Every change is tracked. Every action is logged. The graph knows who owns what, who approved what, and when it was last verified.
Section 2
Is Hadron Graph a product or a service?
Both. Hadron Graph is a platform, but building your organization's graph requires domain expertise — yours and ours working together. We help you map your first domains, train your teams on the apprenticeship model, and hand over a living graph that grows on its own.
A pilot typically focuses on one domain or team. We work with your subject matter experts on real tasks for 4-8 weeks. By the end, you have a working graph for that domain — with knowledge, processes, and automation — and a clear picture of what scaling looks like.
What do you need from us to get started?
Access to one or two domain experts who do the actual work. Not managers, not IT — the people who know how things really operate. They continue doing their work; we work alongside them.
Who is Hadron Graph not for?
Organizations that want a quick AI chatbot or a document search tool. Hadron Graph is for organizations where precision matters — where getting it wrong has real consequences. If your domain is simple enough that a general AI handles it fine, you don't need this.
Section 3
Do we need to prepare our documentation first?
Not necessarily. The graph is built through collaboration on real work, not by ingesting your document library. If existing documents help provide context, we can use them — but the goal is to capture how your organization actually operates, not to index what's been written down.
Can we control which AI models and infrastructure are used?
Yes. Hadron Graph can work with local models, private cloud deployments, or any AI provider you choose. The graph itself is yours — it can live on your infrastructure, encrypted, with full control over what goes where.
Every node in the graph has an owner and access controls. You decide who can read, edit, or act on each piece of knowledge. Sensitive nodes can be encrypted even within a shared graph. Every action — human or AI — is logged with full history.
Anyone with the right permissions can propose changes. Changes go through approval workflows — peer review, supervisor sign-off, or automated validation. Nothing enters the accepted graph unchecked. The review process is part of the graph itself.
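The approval mechanics above — sign-offs that can expire and trigger re-review — can be sketched as follows. This is a hypothetical illustration under assumed field names (Approval, ttl_days, can_accept), not Hadron's actual schema:

```python
# Illustrative approval gate for proposed graph changes. All names
# are assumptions for the sketch, not the real workflow definition.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class Approval:
    approver: str
    granted_at: datetime
    ttl_days: int  # approvals can expire, triggering re-review

    def is_valid(self, now: datetime) -> bool:
        return now < self.granted_at + timedelta(days=self.ttl_days)


def can_accept(approvals: list[Approval], required: int, now: datetime) -> bool:
    """A change enters the accepted graph only with enough unexpired approvals."""
    return sum(a.is_valid(now) for a in approvals) >= required


now = datetime.now(timezone.utc)
approvals = [
    Approval("peer-reviewer", now - timedelta(days=10), ttl_days=90),
    Approval("supervisor", now - timedelta(days=100), ttl_days=90),  # expired
]
print(can_accept(approvals, required=2, now=now))  # prints False
print(can_accept(approvals, required=1, now=now))  # prints True
```

Because the supervisor's sign-off has aged past its time-to-live, the change no longer clears a two-approval requirement — which is exactly when a re-review would be triggered.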
Section 4
Here are three examples across different deployment contexts:
Software engineering. A development team's entire workflow — architecture decisions, deployment procedures, quality standards, and automation. The AI uses this graph to write code, prepare pull requests, and run deployments with the precision of an experienced team member. When a reviewer finds an error, the AI traces it back to the specific node that gave bad guidance and updates it — so the same mistake doesn't happen next time.
Customer-facing chatbot. A company maps its product knowledge, support procedures, and escalation paths. An AI chatbot uses this graph to handle queries with precision. Each conversation is a session — the nodes read, actions taken, and outcome (resolved, escalated, rated) are all recorded. The graph learns which knowledge actually resolves issues, and nodes that consistently precede failed conversations surface as candidates for improvement.
Industrial automation. A manufacturer maps inventory thresholds, preferred suppliers, approval workflows, and procurement procedures. An AI agent monitors inventory, detects shortfalls, contacts suppliers, collects quotes, and escalates for human approval — all using the graph as its operational guide. The agent runs on the factory floor; the graph lives in the cloud and improves with every procurement cycle.
How do you measure the graph's maturity?
By how much work the graph enables without human intervention — and how precise that work is. Early on, AI needs heavy supervision. Over time, it earns autonomy.
This is tracked concretely. Every session records which nodes were read, which actions were run, and an outcome score. The outcome can be anything your organization defines — a PR merged, a customer issue resolved, a procurement completed on time. The score is a number between 0 and 1, set by your systems after the fact. Over time, you see which parts of the graph correlate with good outcomes, which are never touched, and which are associated with failures. The graph tracks its own maturity, node by node.
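The node-by-node tracking described above can be sketched with a few lines of Python. The session shape and field names (nodes_read, outcome) are illustrative assumptions; the real records would come from your systems:

```python
# Sketch: correlate knowledge nodes with session outcomes (0-1 scores).
# Session structure and field names are assumptions for illustration.
from collections import defaultdict

sessions = [
    {"nodes_read": ["pricing-rules", "refund-policy"], "outcome": 1.0},  # resolved
    {"nodes_read": ["refund-policy"], "outcome": 0.0},                   # escalated
    {"nodes_read": ["pricing-rules"], "outcome": 1.0},
]

totals = defaultdict(lambda: [0.0, 0])  # node -> [score sum, session count]
for s in sessions:
    for node in s["nodes_read"]:
        totals[node][0] += s["outcome"]
        totals[node][1] += 1

# Average outcome per node: low averages flag candidates for improvement.
for node, (score_sum, count) in sorted(totals.items()):
    print(f"{node}: {score_sum / count:.2f} over {count} sessions")
```

Here pricing-rules averages 1.00 while refund-policy averages 0.50 — the kind of signal that surfaces a node as a candidate for review.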
Is the end goal documentation or automation?
Both, but neither is the end-state. The end-state is a living operational map that continuously evolves. Documentation is generated on demand from the graph — always current, never separately maintained. Automation executes directly from the graph. The graph is the single source of truth for how your business operates.
Hadron's knowledge graph improves automatically through a feedback loop built into the normal work process — whatever that process looks like in your context. Here's how it works in a software development setting, which illustrates the full loop most clearly:
1. The AI runs a task. When an AI agent executes a task, the Hadron engine records which knowledge nodes the agent read and in what order. This happens silently — the agent just does its work, and the engine logs everything behind the scenes.
2. The AI produces a work output. For software development, this is typically a pull request. The PR includes a manifest of the Hadron nodes the AI consumed, giving reviewers full traceability into what knowledge informed the AI's decisions.
3. Automated tests verify the output. Before review, the AI runs the appropriate verification for the domain — unit tests for code, review checklists for documents, load tests for infrastructure changes. Test results become part of the execution record.
4. A human reviews the work. The PR goes through normal peer review. If the reviewer finds issues, the AI addresses them.
5. The AI traces review feedback back to the graph. This is the key step. When a reviewer says "this is wrong," the AI can trace the error back to the specific node that provided insufficient or incorrect guidance. It updates that node to close the gap — so the same mistake won't happen next time.
6. The graph records success. When the PR is approved and merged, the execution record is marked as successful. The nodes that contributed to this successful outcome now have a proven track record. Over time, the graph accumulates evidence of which knowledge produces good results.
The result: Every task execution is both a use of the graph and a test of it. Bad outcomes trigger targeted fixes. Good outcomes build confidence. The graph gets better with every cycle, without requiring anyone to do extra work — the feedback loop is embedded in the workflow people already follow.
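The six-step loop above can be condensed into a minimal execution-record sketch. Everything here — the record class, its fields, and trace_feedback — is a hypothetical illustration of the lifecycle, not Hadron's actual data model:

```python
# Minimal sketch of the execution-record lifecycle described above.
# All names and fields are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ExecutionRecord:
    task: str
    nodes_read: list[str] = field(default_factory=list)       # step 1: logged reads
    tests_passed: bool = False                                # step 3: verification
    review_findings: list[str] = field(default_factory=list)  # step 4: human review
    outcome: float = 0.0                                      # step 6: success score


def trace_feedback(record: ExecutionRecord, finding: str, node_id: str) -> str:
    """Step 5: attribute a review finding to the node that gave bad guidance."""
    record.review_findings.append(f"{finding} -> {node_id}")
    return f"update {node_id}"


record = ExecutionRecord(task="prepare PR")
record.nodes_read += ["deploy-procedure", "quality-standards"]  # step 1
record.tests_passed = True                                      # step 3
print(trace_feedback(record, "wrong retry policy", "deploy-procedure"))  # step 5
record.outcome = 1.0  # step 6: PR merged; nodes_read gain a track record
```

The key design point is step 5: a review finding is attributed to a specific node, so the fix lands in the graph rather than only in the one work output.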
In non-developer contexts — a chatbot handling customer queries, an edge agent running procurement — the same loop applies. Sessions are recorded, outcomes are reported, and improvements flow back into the graph automatically. The path is different (no pull request), but the principle is the same: every use is a test, every outcome is a signal.