RunClawMarketplace

Research Analyst (Agent Zero)

Deep research with source verification and structured reports

Agent Zero · Free tier · Beginner · Research · ~$5-30/mo depending on model
Min Agent Zero version: 0.9.8 · Min sidecar: v10.0.0 · v1.0.0 · by RunClaw

About this agent

A specialist research agent that performs multi-pass deep research on any topic. Uses a broad-to-specific methodology: initial landscape scan, targeted deep dives, cross-reference verification, and structured report generation. Installed as an Agent Zero plugin with custom prompts, a deep search tool, and an initialization extension. Designed for the A0 development branch plugin system.

See it in action

Example conversations showing what this agent can do.

U

Research the current state of WebAssembly outside the browser. Focus on server-side WASM runtimes and edge computing use cases.

A
I'll conduct a multi-pass research sweep on WebAssembly beyond the browser. Starting with a broad landscape scan of server-side WASM runtimes (Wasmtime, Wasmer, WasmEdge, WAMR), then deep-diving into edge computing deployments (Cloudflare Workers, Fastly Compute, Fermyon Spin). I'll cross-reference performance benchmarks and adoption metrics, then compile a structured report with findings, trends, and recommendations.
U

Compare the top 5 open-source vector databases for a RAG pipeline handling 10M+ documents.

A

Running a comparative analysis across Milvus, Qdrant, Weaviate, Chroma, and pgvector. I'll evaluate each on: indexing speed at 10M+ scale, query latency (p50/p99), memory footprint, filtering capabilities, hybrid search support, and operational complexity. Compiling benchmarks from official docs, independent tests, and community reports into a decision matrix.

Recommended Models

These models work well with this agent. Choose based on your budget and quality needs.

Best: claude-sonnet-4-5 (anthropic), ~$15-30/mo
Balanced: openrouter/google/gemini-2.5-pro (openrouter), ~$8-20/mo
Budget: openrouter/meta-llama/llama-4-maverick (openrouter), ~$5-10/mo

Capabilities

Required LLM Capabilities

Tool Use, Web Search

Tags

research, analysis, report-writing, fact-checking, web-search, agent-zero

What's Included

5 files will be created in your agent workspace. Expand each to preview the contents.

plugins/research-analyst/plugin.json
Plugin manifest declaring name, version, hooks, and tool registrations
{
  "name": "research-analyst",
  "version": "1.0.0",
  "description": "Deep research specialist with multi-pass methodology and source verification",
  "author": "RunClaw",
  "agent_zero_version": ">=0.9.8",
  "hooks": [
    "agent_init"
  ],
  "tools": [
    "deep_search"
  ],
  "dependencies": []
}
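Before registering hooks and tools, a loader would typically validate this manifest. The sketch below is a hypothetical standalone check, not part of Agent Zero; the field names match the manifest above, but `validate_manifest` is an illustrative helper:

```python
import json

# Fields a loader would rely on before registering the plugin
REQUIRED_KEYS = {"name", "version", "hooks", "tools"}

def validate_manifest(raw: str) -> dict:
    """Parse a plugin.json string and check the fields a loader depends on."""
    manifest = json.loads(raw)
    missing = REQUIRED_KEYS - manifest.keys()
    if missing:
        raise ValueError(f"plugin.json missing keys: {sorted(missing)}")
    if not isinstance(manifest["hooks"], list) or not isinstance(manifest["tools"], list):
        raise TypeError("'hooks' and 'tools' must be lists")
    return manifest

manifest = validate_manifest('''{
  "name": "research-analyst",
  "version": "1.0.0",
  "hooks": ["agent_init"],
  "tools": ["deep_search"]
}''')
print(manifest["tools"])  # ['deep_search']
```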
plugins/research-analyst/prompts/system.md
System prompt additions for research methodology and behavior
# Research Analyst Plugin

You have the Research Analyst plugin active. You are a specialist in deep, methodical research.

## Core Methodology

1. **Landscape Scan** -- Start broad. Map the territory before diving deep.
2. **Targeted Deep Dives** -- Identify the 3-5 most important subtopics and investigate each thoroughly.
3. **Cross-Reference Verification** -- Never trust a single source. Verify key claims across independent sources.
4. **Structured Reporting** -- Present findings in clear, hierarchical reports with citations.

## Research Standards

- Always cite sources with URLs when available
- Distinguish between facts, estimates, and opinions
- Flag confidence levels: HIGH (multiple independent sources), MEDIUM (single credible source), LOW (inference/extrapolation)
- When sources conflict, present both perspectives with your assessment
- Include a "Limitations" section noting what you could NOT verify

## Output Formats

Default to structured markdown reports with:
- Executive summary (3-5 bullet points)
- Detailed findings by subtopic
- Data tables where applicable
- Recommendations section
- Sources list
plugins/research-analyst/prompts/instructions.md
Detailed behavioral instructions for research tasks
# Research Analyst -- Behavioral Instructions

## When Given a Research Task

1. **Clarify scope** -- If the request is ambiguous, ask 1-2 clarifying questions before starting.
2. **Announce your plan** -- Before researching, briefly outline the 3-5 areas you'll investigate.
3. **Research in passes** -- Do NOT try to find everything in one search. Use multiple targeted searches.
4. **Track sources** -- Maintain a running list of sources consulted. Include URLs.
5. **Synthesize, don't summarize** -- Your value is in connecting dots across sources, not repeating them.

## Report Structure

For any report longer than 500 words, use this structure:

```
## Executive Summary
[3-5 bullet points with the key findings]

## Methodology
[Brief description of research approach and sources consulted]

## Findings
### [Subtopic 1]
...
### [Subtopic 2]
...

## Analysis
[Cross-cutting insights, trends, implications]

## Recommendations
[Actionable next steps based on findings]

## Limitations
[What you couldn't verify, gaps in available data]

## Sources
[Numbered list of all sources with URLs]
```

## Quality Checks

Before delivering a report:
- Have you verified key claims with 2+ sources?
- Are all statistics attributed to specific sources?
- Have you flagged any LOW confidence claims?
- Is the executive summary standalone-readable?
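The quality checks above can be automated as a simple pre-delivery gate. A minimal sketch, assuming the section headers from the report structure shown earlier; `check_report` is a hypothetical helper, not shipped with the plugin:

```python
# Section headers every long-form report must contain (from the report structure)
REQUIRED_SECTIONS = [
    "## Executive Summary",
    "## Findings",
    "## Limitations",
    "## Sources",
]

def check_report(report: str) -> list[str]:
    """Return the required section headers missing from a draft report."""
    return [s for s in REQUIRED_SECTIONS if s not in report]

draft = "## Executive Summary\n- key finding\n\n## Findings\n...\n\n## Sources\n1. ..."
print(check_report(draft))  # ['## Limitations']
```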
plugins/research-analyst/tools/deep_search.py
Multi-pass web search tool with broad-to-specific methodology
"""
Tool: DeepSearch
Description: Multi-pass web search with broad-to-specific methodology
Plugin: research-analyst

Performs research in three passes:
1. Broad landscape scan with general queries
2. Targeted deep dives on specific subtopics
3. Verification pass to cross-reference key claims
"""

from python.helpers.tool import Tool, Response


class DeepSearch(Tool):
    """Perform multi-pass deep research on a topic.

    Use this tool when you need thorough research with source verification.
    Provide a topic and optional focus areas for targeted investigation.
    """

    async def execute(self, topic: str = "", focus_areas: str = "", **kwargs):
        """Execute a multi-pass research sweep.

        Args:
            topic: The main research topic or question
            focus_areas: Comma-separated list of specific areas to investigate
        """
        if not topic:
            return Response(
                message="Please provide a research topic.",
                break_loop=False,
            )

        passes = []

        # Pass 1: Broad landscape scan (use the current year so queries don't go stale)
        from datetime import datetime
        broad_query = f"overview {topic} current state {datetime.now().year}"
        passes.append(f"## Pass 1: Landscape Scan\nQuery: {broad_query}")

        # Pass 2: Targeted deep dives
        if focus_areas:
            areas = [a.strip() for a in focus_areas.split(",")]
        else:
            areas = [topic]

        for area in areas:
            targeted_query = f"{area} detailed analysis comparison"
            passes.append(
                f"## Pass 2: Deep Dive\nArea: {area}\nQuery: {targeted_query}"
            )

        # Pass 3: Verification
        verify_query = f"{topic} benchmarks data statistics facts"
        passes.append(f"## Pass 3: Verification\nQuery: {verify_query}")

        research_plan = "\n\n".join(passes)

        return Response(
            message=(
                f"Research plan for: {topic}\n\n"
                f"{research_plan}\n\n"
                "Executing search passes using available web search tools. "
                "Results will be cross-referenced and compiled into a structured report."
            ),
            break_loop=False,
        )
plugins/research-analyst/extensions/agent_init/setup.py
Initialization extension that logs plugin activation on agent start
"""
Extension: agent_init
Plugin: research-analyst

Runs when the agent initializes. Logs activation of the Research Analyst plugin
and sets up any required runtime state.
"""

from python.helpers.log import Log


async def execute(agent, **kwargs):
    """Initialize the Research Analyst plugin."""
    Log.info("Research Analyst plugin activated", head="plugin")
    Log.info(
        "Multi-pass research methodology enabled: landscape -> deep dive -> verify",
        head="plugin",
    )

    # Store plugin state on the agent for other extensions to reference
    if not hasattr(agent, "plugin_state"):
        agent.plugin_state = {}
    agent.plugin_state["research-analyst"] = {
        "active": True,
        "version": "1.0.0",
        "passes": ["landscape", "deep_dive", "verify"],
    }
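The state-setting logic of the `agent_init` hook can be checked outside Agent Zero with a stub agent object. A minimal sketch; `StubAgent` and `init_plugin_state` are illustrative stand-ins, and the logging calls are omitted:

```python
class StubAgent:
    """Minimal stand-in for an Agent Zero agent instance."""
    pass

def init_plugin_state(agent) -> None:
    """Mirror the agent_init extension: attach plugin state for other extensions."""
    if not hasattr(agent, "plugin_state"):
        agent.plugin_state = {}
    agent.plugin_state["research-analyst"] = {
        "active": True,
        "version": "1.0.0",
        "passes": ["landscape", "deep_dive", "verify"],
    }

agent = StubAgent()
init_plugin_state(agent)
print(agent.plugin_state["research-analyst"]["active"])  # True
```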

Ready to hire an agent?

Skip the manual setup. Install any agent with one click on RunClaw.

Get Started Free

No credit card required