
Research Intelligence

The Research Intelligence System transforms Raven Docs from a documentation tool into a structured knowledge engine. It introduces typed pages, a knowledge graph, multi-agent teams, automated pattern detection, and an intelligence dashboard.

Overview

Typed Pages

Pages can be assigned a page type that adds structured metadata and integrates them into the research graph.

Page Types

| Type | Purpose | Key Metadata |
| --- | --- | --- |
| hypothesis | A testable claim | formalStatement, predictions, status, domainTags, priority |
| experiment | A test of a hypothesis | hypothesisId, method, results, status |
| finding | A confirmed result | source, confidence, domainTags |
| observation | A raw observation | context, domainTags |
| note | A research note | domainTags |

Hypothesis Statuses

| Status | Meaning |
| --- | --- |
| proposed | Stated but not yet tested |
| testing | Experiments are underway |
| validated | Evidence supports the claim |
| refuted | Evidence contradicts the claim |
| inconclusive | Mixed or insufficient results |
| superseded | Replaced by a newer hypothesis |
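A hypothesis moves through these statuses via `hypothesis_update` (see the MCP tools reference below). As a minimal sketch, assuming an MCP client exposing `call(tool, args)` and a `status` field accepted by the tool (the field name is an assumption, not confirmed API):

```typescript
// Hypothetical helper: advance a hypothesis through its lifecycle.
// The `status` argument mirrors the status table above.
type McpClient = { call: (tool: string, args: Record<string, unknown>) => Promise<unknown> };

async function setHypothesisStatus(
  mcp: McpClient,
  workspaceId: string,
  pageId: string,
  status: "proposed" | "testing" | "validated" | "refuted" | "inconclusive" | "superseded"
) {
  // Delegate to the hypothesis_update MCP tool.
  return mcp.call("hypothesis_update", { workspaceId, pageId, status });
}
```

For example, `await setHypothesisStatus(mcp, "ws_123", "page_hyp_123", "testing")` would mark a hypothesis as under test once its first experiment starts.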

Creating Typed Pages

Use the MCP tools to create typed pages:

```typescript
// Create a hypothesis
await mcp.call("hypothesis_create", {
  workspaceId: "ws_123",
  spaceId: "space_456",
  title: "Redis caching reduces API latency by 50%",
  formalStatement: "Adding Redis caching to /api/search will reduce p95 latency from 800ms to under 400ms",
  predictions: ["Cache hit rate > 80%", "Memory stays under 512MB"],
  domainTags: ["performance", "infrastructure"],
  priority: "high"
});

// Register an experiment
await mcp.call("experiment_register", {
  workspaceId: "ws_123",
  spaceId: "space_456",
  title: "Redis cache load test",
  hypothesisId: "page_hyp_123",
  method: "1000 concurrent users against /api/search"
});
```

Knowledge Graph

Typed pages are connected by directed edges in a knowledge graph powered by Memgraph:

Edge Types

| Edge | Description |
| --- | --- |
| VALIDATES | Experiment confirms a hypothesis |
| CONTRADICTS | Evidence contradicts a claim |
| TESTS_HYPOTHESIS | Experiment is testing a hypothesis |
| EXTENDS | One hypothesis builds on another |
| INSPIRED_BY | Work was inspired by another page |
| USES_DATA_FROM | Uses data produced by another experiment |
| FORMALIZES | Formalizes an informal observation |
| SPAWNED_FROM | Created as a follow-up to another page |
| SUPERSEDES | Replaces an older hypothesis or finding |
| CITES | References another work |
| REPLICATES | Successfully replicates another experiment |
| REPRODUCES | Reproduces results from another experiment |
| FAILS_TO_REPRODUCE | Failed to reproduce another experiment's results |
| USES_ASSUMPTION | Depends on an assumption from another page |
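Edges are created with `relationship_create` (see the MCP tools reference). As an illustrative sketch — the `fromPageId`, `toPageId`, and `type` field names are assumptions, not confirmed API:

```typescript
// Hypothetical helper: record that an experiment VALIDATES a hypothesis.
async function linkValidates(
  mcp: { call: (tool: string, args: Record<string, unknown>) => Promise<unknown> },
  workspaceId: string,
  experimentId: string,
  hypothesisId: string
) {
  // Directed edge: experiment -> VALIDATES -> hypothesis.
  return mcp.call("relationship_create", {
    workspaceId,
    fromPageId: experimentId,
    toPageId: hypothesisId,
    type: "VALIDATES"
  });
}
```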

Evidence Chains

When you retrieve a hypothesis, the system traverses the graph to build an evidence chain showing all supporting, contradicting, and related evidence:

```typescript
const result = await mcp.call("hypothesis_get", {
  workspaceId: "ws_123",
  pageId: "page_hyp_123"
});

// result.evidenceChain.supporting → experiments that VALIDATE
// result.evidenceChain.contradicting → experiments that CONTRADICT
// result.evidenceChain.related → pages connected by EXTENDS, RELATED_TO, etc.
```
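An evidence chain lends itself to simple client-side summarization. A minimal sketch, assuming the `supporting`/`contradicting`/`related` array shape shown above; the verdict labels and the 3-experiment cutoff (which mirrors the convergence pattern's threshold) are illustrative choices, not system behavior:

```typescript
// Shape assumed from the hypothesis_get example above.
interface EvidenceChain {
  supporting: unknown[];
  contradicting: unknown[];
  related: unknown[];
}

// Collapse an evidence chain into a rough one-word verdict.
function summarizeEvidence(chain: EvidenceChain): string {
  const { supporting, contradicting } = chain;
  if (contradicting.length > 0 && supporting.length > 0) return "contested";
  if (contradicting.length > 0) return "contradicted";
  if (supporting.length >= 3) return "strongly supported";
  if (supporting.length > 0) return "supported";
  return "untested";
}
```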

Intelligence Queries

The context assembly system answers natural-language questions like "What do we know about X?" by combining data from typed pages, the knowledge graph, and pattern detections:

```typescript
const context = await mcp.call("intelligence_query", {
  workspaceId: "ws_123",
  query: "What do we know about caching performance?"
});

// Returns: hypotheses, experiments, related pages,
// graph edges, detected patterns, and open questions
```

This is especially useful for giving AI agents full research context before they take action.

Multi-Agent Teams

Deploy teams of specialized AI agents from templates:

```typescript
// Deploy a hypothesis-testing team
const team = await mcp.call("team_deploy", {
  workspaceId: "ws_123",
  spaceId: "space_456",
  templateName: "hypothesis-testing"
});
```

Templates

| Template | Agents | Purpose |
| --- | --- | --- |
| hypothesis-testing | Researcher, Analyst, Critic | Test and validate hypotheses |
| literature-review | Surveyor, Synthesizer, Reviewer | Literature analysis |
| exploration | Explorer, Mapper, Reporter | Open-ended exploration |

Teams run on a schedule or can be triggered manually. Each agent has a defined role, tools, and system prompt.
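A manual run can be driven end to end with the team tools listed in the MCP tools reference. A hedged sketch: the `team_deploy`/`team_trigger`/`team_teardown` tool names come from that reference, but the `teamId` field returned by `team_deploy` is an assumption:

```typescript
// Hypothetical one-shot run: deploy a team from a template,
// trigger it once manually, then tear it down.
async function runTeamOnce(
  mcp: { call: (tool: string, args: Record<string, unknown>) => Promise<any> },
  workspaceId: string,
  spaceId: string,
  templateName: string
) {
  const team = await mcp.call("team_deploy", { workspaceId, spaceId, templateName });
  await mcp.call("team_trigger", { workspaceId, teamId: team.teamId }); // manual trigger
  await mcp.call("team_teardown", { workspaceId, teamId: team.teamId });
}
```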

Pattern Detection

The pattern detection engine automatically scans the research graph every 6 hours to surface insights:

| Pattern | What it detects | Severity |
| --- | --- | --- |
| Convergence | 3+ experiments validate the same hypothesis | Medium |
| Contradiction | CONTRADICTS edges between pages | High |
| Staleness | Open questions not updated in 14+ days | Low |
| Cross-domain | Unexpected connections across domain tags | Medium |
| Untested implication | Validated hypothesis EXTENDS an untested one | Medium |
| Intake gate violation | Hypothesis marked "proved" without completed gate checklist | High |
| Evidence gap | Hypothesis with citations but fewer than N experiments | Medium |
| Reproduction failure | Experiments with FAILS_TO_REPRODUCE edges | High |

Managing Patterns

```typescript
// List detected patterns
const patterns = await mcp.call("pattern_list", {
  workspaceId: "ws_123",
  status: "detected"
});

// Acknowledge a pattern
await mcp.call("pattern_acknowledge", {
  patternId: "pat_001",
  actionTaken: { notes: "Created experiment to resolve" }
});

// Trigger detection manually
await mcp.call("pattern_run", {
  workspaceId: "ws_123"
});
```
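These calls compose into a simple triage loop. An illustrative sketch: the tool names (`pattern_list`, `pattern_acknowledge`, `pattern_dismiss`) are from the MCP tools reference, but the returned `patterns` array shape and its `id`/`severity` fields are assumptions:

```typescript
// Hypothetical triage: acknowledge high-severity patterns for follow-up,
// dismiss everything else.
async function triagePatterns(
  mcp: { call: (tool: string, args: Record<string, unknown>) => Promise<any> },
  workspaceId: string
) {
  const { patterns } = await mcp.call("pattern_list", { workspaceId, status: "detected" });
  for (const p of patterns) {
    if (p.severity === "high") {
      await mcp.call("pattern_acknowledge", { patternId: p.id, actionTaken: { notes: "needs follow-up" } });
    } else {
      await mcp.call("pattern_dismiss", { patternId: p.id });
    }
  }
}
```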

Configuration

Pattern detection is configured in workspace intelligence settings:

```json
{
  "intelligence": {
    "enabled": true,
    "patternRules": [
      { "type": "convergence", "enabled": true, "threshold": 3, "action": "surface" },
      { "type": "contradiction", "enabled": true, "action": "notify" },
      { "type": "staleness", "enabled": true, "maxAgeDays": 14, "action": "create_task" }
    ]
  }
}
```
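A client can sanity-check this fragment before saving it. A minimal sketch: the rule shapes mirror the example above, but which fields are required per rule type, and the set of valid actions, are assumptions drawn only from that example:

```typescript
// Rule shape assumed from the settings example above.
interface PatternRule {
  type: string;
  enabled: boolean;
  action: string;
  threshold?: number;   // used by convergence rules
  maxAgeDays?: number;  // used by staleness rules
}

// Return a list of human-readable problems; empty means the rules look valid.
function validateRules(rules: PatternRule[]): string[] {
  const errors: string[] = [];
  for (const r of rules) {
    if (r.type === "convergence" && (r.threshold ?? 0) < 1) {
      errors.push("convergence rule needs a positive threshold");
    }
    if (r.type === "staleness" && (r.maxAgeDays ?? 0) < 1) {
      errors.push("staleness rule needs maxAgeDays >= 1");
    }
    if (!["surface", "notify", "create_task"].includes(r.action)) {
      errors.push(`unknown action: ${r.action}`);
    }
  }
  return errors;
}
```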

Intelligence Dashboard

The Intelligence Dashboard (accessible at /spaces/:spaceId/intelligence) provides a visual overview:

  • Hypothesis Scoreboard — Counts of validated, testing, refuted, and proposed hypotheses
  • Active Experiments — List of running and planned experiments with status badges
  • Open Questions — Filterable queue of unresolved research questions
  • Recent Findings Timeline — Activity feed of typed page updates
  • Contradiction Alerts — Pairs of contradicting pages
  • Pattern Alerts — Detected patterns with acknowledge/dismiss actions

Coding Swarms

The coding swarm system bridges research and implementation. Spawn agents in isolated git workspaces to execute coding tasks, with optional links to experiments for full traceability.

```typescript
// Execute a coding task linked to an experiment
await mcp.call("swarm_execute", {
  workspaceId: "ws_123",
  repoUrl: "https://github.com/org/repo",
  taskDescription: "Implement rate limiting middleware",
  experimentId: "page_exp_456",
  agentType: "claude-code"
});
```

The swarm lifecycle:

  1. Provision — Clone repo, create feature branch
  2. Prepare — Generate MCP API key, write agent memory with workspace context
  3. Execute — Spawn agent, send task, monitor progress
  4. Finalize — Commit, push, create pull request

See Swarm Tools for the full API reference.
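The lifecycle above can be driven programmatically by starting a task and polling for completion. A hedged sketch: `swarm_execute` and `swarm_status` are listed in the MCP tools reference, but the `swarmId` and `status` fields, and the terminal states `"completed"`/`"failed"`, are assumptions:

```typescript
// Hypothetical driver: start a swarm task, then poll until it reaches
// a terminal state and return that state.
async function runSwarm(
  mcp: { call: (tool: string, args: Record<string, unknown>) => Promise<any> },
  workspaceId: string,
  repoUrl: string,
  taskDescription: string,
  pollMs = 5000
) {
  const { swarmId } = await mcp.call("swarm_execute", { workspaceId, repoUrl, taskDescription });
  for (;;) {
    const { status } = await mcp.call("swarm_status", { workspaceId, swarmId });
    if (status === "completed" || status === "failed") return status;
    await new Promise((resolve) => setTimeout(resolve, pollMs)); // wait before the next poll
  }
}
```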

Enabling Intelligence

To enable the Research Intelligence System for a workspace:

  1. Ensure Memgraph is available (for the knowledge graph)
  2. Enable intelligence in workspace settings
  3. Configure pattern rules as desired
  4. Start creating typed pages via MCP tools

MCP Tools Reference

| Tool | Purpose |
| --- | --- |
| hypothesis_create/update/get | Manage hypotheses |
| experiment_register/complete/update | Manage experiments |
| intelligence_query | Query assembled context |
| relationship_create/remove/list | Manage graph edges |
| team_deploy/status/list/trigger/pause/resume/teardown | Multi-agent teams |
| pattern_list/acknowledge/dismiss/run | Pattern detection |
| swarm_execute/status/list/stop/logs | Coding swarms |
| knowledge_search/list/get | Semantic knowledge search |