Research Automation

Archivus research automation conducts autonomous web research with real-time accuracy verification. Every finding is validated against both your trusted documents and live external sources for maximum reliability.

Dual Validation Architecture

Pro+

What Makes It Unique:

Most AI research tools either hallucinate or provide unverified web results. Archivus is the only platform with dual validation:

  1. Internal Validation: Ground findings against your Source of Truth documents
  2. External Validation: Verify via Perplexity Sonar against live web data

Example Validation Flow:

Research Finding: "Company X acquired Company Y for $2B"

↓ Internal Validation (Phase 4)
├─ Compared against your trusted documents
├─ Grounding Score: 87%
└─ Notes: "Aligns with internal M&A briefing"

↓ External Validation (Phase 5)
├─ Cross-referenced against live web
├─ Status: "verified" (95% confidence)
├─ Correction: "Final value was $2.1B"
└─ Citations: [reuters.com, techcrunch.com, sec.gov]
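The finding in the flow above can be modeled as a small record. This is an illustrative sketch only; the field names (`grounding_score`, `status`, `correction`, `citations`) mirror the example, not an official schema:

```python
from dataclasses import dataclass, field

@dataclass
class InternalValidation:
    grounding_score: float          # 0-100, alignment with Source of Truth docs
    notes: str = ""

@dataclass
class ExternalValidation:
    status: str                     # "verified" | "contradicted" | "unverified" | "outdated"
    confidence: float               # 0-100, reported by the live-web check
    correction: str = ""            # non-empty when live sources correct the claim
    citations: list[str] = field(default_factory=list)

@dataclass
class Finding:
    claim: str
    internal: InternalValidation
    external: ExternalValidation

# The example from the flow above:
finding = Finding(
    claim="Company X acquired Company Y for $2B",
    internal=InternalValidation(grounding_score=87, notes="Aligns with internal M&A briefing"),
    external=ExternalValidation(
        status="verified",
        confidence=95,
        correction="Final value was $2.1B",
        citations=["reuters.com", "techcrunch.com", "sec.gov"],
    ),
)
```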

Research Workflow

5-Phase Process:

Phase 1: Topic Analysis

  • Parse research topic
  • Generate targeted search queries
  • Identify key entities and concepts
  • Estimate required depth

Cost: 3 credits

Phase 2: Web Search

Search providers:

  • Tavily: AI-optimized search API
  • Serper: Google search API
  • Perplexity: AI-powered search

Cost: 2 credits per query

Phase 3: Synthesis

  • Analyze search results
  • Extract key findings
  • Identify claims and evidence
  • Structure information

Phase 4: Internal Validation

Pro+

Ground findings against Source of Truth:

  • Compare to trusted document collections
  • Calculate grounding score (0-100%)
  • Flag contradictions with internal docs
  • Note supporting evidence

Cost: 1 credit per finding

Phase 5: External Validation

Pro+

Verify via Perplexity Sonar:

  • Cross-reference against live web data
  • Get validation status with confidence
  • Receive corrections if outdated
  • Obtain authoritative citations

Cost: 1 credit per finding

Validation Statuses

Each finding receives a validation status:

| Status       | Description                        | Action              |
|--------------|------------------------------------|---------------------|
| verified     | Confirmed accurate by live sources | ✓ High confidence   |
| contradicted | Conflicts with current information | ⚠️ Review required  |
| unverified   | Insufficient data to verify        | ℹ️ Use with caution |
| outdated     | Was accurate but has since changed | 🔄 Update needed    |
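The status-to-action mapping above can be expressed directly in code. A minimal sketch (names are illustrative, not an official SDK):

```python
# Recommended action for each validation status, as in the table above.
STATUS_ACTIONS = {
    "verified":     "High confidence",
    "contradicted": "Review required",
    "unverified":   "Use with caution",
    "outdated":     "Update needed",
}

def needs_review(status: str) -> bool:
    """Flag findings that should not be used as-is."""
    return status in {"contradicted", "outdated"}
```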

Research Reports

Two Report Modes:

Standard Mode

  • Focused findings (5-10 key points)
  • Internal validation only
  • Quick turnaround (~2 minutes)
  • Cost: ~10-15 credits

Power Mode

  • Comprehensive findings (15-30 points)
  • Dual validation (internal + external)
  • Deep analysis (~5-10 minutes)
  • Cost: ~25-40 credits
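The two modes can be summarized as a small configuration record. The numbers mirror the lists above; the type and field names are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReportMode:
    name: str
    findings_range: tuple[int, int]   # expected number of key findings
    external_validation: bool         # dual validation in Power mode only
    credit_range: tuple[int, int]     # approximate credit cost

STANDARD = ReportMode("standard", (5, 10), False, (10, 15))
POWER = ReportMode("power", (15, 30), True, (25, 40))

def pick_mode(critical: bool) -> ReportMode:
    # Power mode is recommended for critical business decisions.
    return POWER if critical else STANDARD
```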

Report Format:

# Research Report: [Topic]
Generated: [Timestamp]
Sources Analyzed: [Count]
Validation Mode: [Internal + External]

## Executive Summary
[2-3 paragraph overview]

## Key Findings

### Finding 1: [Title]
**Status**: Verified ✓
**Confidence**: 95%

[Detailed description]

**Internal Validation**:
- Grounding Score: 89%
- Supporting Docs: M&A-Briefing-Q4.pdf

**External Validation**:
- Status: Verified
- Citations:
  - [reuters.com/article...]
  - [sec.gov/filing...]

### Finding 2: [Title]
**Status**: Contradicted ⚠️
**Confidence**: 87%

[Description with contradiction details]

**Internal Validation**:
- Conflicts with: Company-Policy-2025.pdf
- Recommended Action: Update policy

**External Validation**:
- Status: Verified (external source correct)
- Correction: [Updated information]

Source of Truth Integration

Pro+

Define trusted document collections for grounding:

Setup:

  1. Create collection of authoritative documents
  2. Mark collection as "Source of Truth"
  3. Research automatically grounds against these docs
  4. Contradictions flagged for review

Example Collections:

  • Company policies and procedures
  • Approved methodologies
  • Compliance requirements
  • Industry standards
  • Historical precedents

Grounding Score:

Score = (Aligned Statements / Total Statements) × 100

95-100%: Fully aligned with Source of Truth
80-94%:  Mostly aligned, minor gaps
60-79%:  Partially aligned, review recommended
<60%:    Significant contradictions, caution advised
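The score and its interpretation bands translate directly into a helper. A minimal sketch using the formula and thresholds above:

```python
def grounding_score(aligned: int, total: int) -> float:
    """Score = (aligned statements / total statements) x 100."""
    if total == 0:
        raise ValueError("no statements to score")
    return aligned / total * 100

def grounding_label(score: float) -> str:
    """Bucket a score into the interpretation bands above."""
    if score >= 95:
        return "Fully aligned with Source of Truth"
    if score >= 80:
        return "Mostly aligned, minor gaps"
    if score >= 60:
        return "Partially aligned, review recommended"
    return "Significant contradictions, caution advised"
```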

Research Use Cases

Competitive Intelligence

Topic: "AI document management competitors 2026"

Findings:
✓ Market size: $4.2B (verified)
✓ Top competitors identified (8 companies)
⚠️ Pricing data partially outdated
✓ Feature comparison current (validated)

Regulatory Research

Topic: "HIPAA compliance requirements for cloud storage"

Findings:
✓ Current regulations (verified via HHS.gov)
✓ Recent enforcement actions (verified)
⚠️ Internal policy gaps identified
✓ Best practices (validated externally)

Due Diligence

Topic: "Acme Corporation financial stability"

Findings:
✓ Revenue: $450M (verified via SEC filings)
⚠️ Internal projection differs (flagged)
✓ Recent acquisition (validated)
✓ Credit rating: A- (verified)

Market Research

Topic: "Remote work trends 2026"

Findings:
✓ 67% hybrid adoption (verified)
✓ Industry breakdown (validated)
⚠️ Contradicts internal survey (noted)
✓ Future projections (multiple sources)

API Access

Pro+

Programmatic research via API:

# Start research
POST /api/v1/research/start
{
  "topic": "AI regulations financial services",
  "mode": "power",
  "source_of_truth_ids": ["coll-uuid-1", "coll-uuid-2"],
  "external_validation": true
}

# Get research status
GET /api/v1/research/{research_id}/status

# Download report
GET /api/v1/research/{research_id}/report.pdf
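The three endpoints above can be combined into a simple polling client. This is a hedged sketch: only the paths and request body come from the examples above, while the host, the bearer-auth scheme, and the response field names (`research_id`, `state`) are assumptions to adjust for your deployment:

```python
import json
import time
import urllib.request

BASE = "https://api.archivus.example/api/v1"  # placeholder host; use your deployment's URL
TOKEN = "YOUR_API_KEY"                        # bearer auth assumed

def start_payload(topic, mode, sot_ids, external):
    """Request body for POST /research/start, mirroring the example above."""
    return {
        "topic": topic,
        "mode": mode,
        "source_of_truth_ids": sot_ids,
        "external_validation": external,
    }

def _call(method, path, body=None):
    req = urllib.request.Request(
        f"{BASE}{path}",
        data=json.dumps(body).encode() if body is not None else None,
        headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
        method=method,
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def run_research(topic, sot_ids):
    # Start a Power-mode run with dual validation, poll until done, fetch the PDF.
    started = json.loads(_call("POST", "/research/start", start_payload(topic, "power", sot_ids, True)))
    rid = started["research_id"]                       # response field name assumed
    while json.loads(_call("GET", f"/research/{rid}/status")).get("state") != "completed":
        time.sleep(5)                                  # "state" field and value assumed
    return _call("GET", f"/research/{rid}/report.pdf")
```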

Research Templates

Pre-configured research templates:

  1. Competitive Analysis: Market sizing, competitor features, pricing
  2. Regulatory Update: New regulations, enforcement, compliance gaps
  3. Due Diligence: Company background, financials, risk factors
  4. Market Research: Trends, statistics, projections
  5. Technology Assessment: Features, adoption, vendors
  6. Customer Research: Reviews, sentiment, pain points

Cost Management

Credit Usage Breakdown:

  • Topic analysis: 3 credits
  • Web search: 2 credits per query (typically 5-10 queries)
  • Internal grounding: 1 credit per finding
  • External validation: 1 credit per finding
  • Report generation: 5 credits
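The breakdown translates into a simple cost estimator. A sketch using only the per-item costs listed above:

```python
def estimate_credits(queries: int, findings: int, external: bool) -> int:
    """Estimate total credits for one research run from the breakdown above."""
    total = 3                    # topic analysis
    total += 2 * queries         # web search, 2 credits per query
    total += 1 * findings        # internal grounding, 1 credit per finding
    if external:
        total += 1 * findings    # external validation, 1 credit per finding
    total += 5                   # report generation
    return total

# e.g. a standard run with 5 queries, 10 findings, internal validation only:
# estimate_credits(5, 10, external=False) -> 28, inside the 25-35 range below
```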

Typical Research Costs:

  • Simple topic (5 findings): 15-20 credits
  • Standard research (10 findings): 25-35 credits
  • Power research (20 findings): 45-65 credits

Optimization Tips:

  • Use specific topics (reduces query count)
  • Leverage Source of Truth (improves accuracy)
  • Standard mode for quick research
  • Power mode for critical decisions

Security & Compliance

Data Privacy:

  • Research queries isolated by tenant
  • Source of Truth access controls enforced
  • External validation via secure APIs
  • No sharing of internal documents

Audit Trail:

  • Complete research history
  • Query and result logging
  • Validation status tracking
  • Cost attribution per research

Getting Started

Web Interface:

  1. Navigate to Research tab
  2. Enter research topic
  3. Select mode (Standard or Power)
  4. Choose Source of Truth collections (optional)
  5. Enable external validation (Pro+)
  6. Start research
  7. Review findings and download report

Best Practices:

  • Be specific in research topics
  • Define Source of Truth for your domain
  • Review contradictions carefully
  • Update internal docs when external validation corrects them
  • Use Power mode for critical business decisions

View Research API Docs → See Research Examples →