Bring Your Own Bucket (BYOB)

Connect your own S3-compatible storage to Archivus for complete data sovereignty and control.


Overview

BYOB (Bring Your Own Bucket) allows Enterprise customers to use their existing cloud storage infrastructure with Archivus. You maintain full control over your storage, including encryption keys, access policies, and data residency.

Availability: Enterprise plan only


Supported Providers

Archivus supports any S3-compatible storage provider:

Provider               Endpoint Format                          Notes
AWS S3                 s3.{region}.amazonaws.com                Native S3
MinIO                  {your-minio-host}:9000                   Self-hosted
Wasabi                 s3.{region}.wasabisys.com                Hot storage
DigitalOcean Spaces    {region}.digitaloceanspaces.com          Spaces
Backblaze B2           s3.{region}.backblazeb2.com              B2 Cloud
Cloudflare R2          {account-id}.r2.cloudflarestorage.com    Zero egress fees
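The endpoint patterns above are simple templates. As a quick sanity check when scripting your setup, they can be filled in with a small helper like this (an illustrative sketch, not part of Archivus; the function name and provider keys are assumptions, and `account_id` replaces the table's `account-id` so it is a valid placeholder name):

```python
# Illustrative helper: build an S3-compatible endpoint URL from the
# templates in the table above. Not an Archivus API -- the mapping
# and names here are assumptions for this sketch.

ENDPOINT_TEMPLATES = {
    "aws": "s3.{region}.amazonaws.com",
    "wasabi": "s3.{region}.wasabisys.com",
    "digitalocean": "{region}.digitaloceanspaces.com",
    "backblaze": "s3.{region}.backblazeb2.com",
    "cloudflare_r2": "{account_id}.r2.cloudflarestorage.com",
}

def endpoint_url(provider: str, **params: str) -> str:
    """Return the HTTPS endpoint for a provider, given region/account_id."""
    template = ENDPOINT_TEMPLATES[provider]
    return "https://" + template.format(**params)

print(endpoint_url("wasabi", region="us-east-1"))
# https://s3.us-east-1.wasabisys.com
```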

Prerequisites

Before configuring BYOB:

  1. Enterprise Plan - Verify you’re on an Enterprise subscription
  2. S3-Compatible Bucket - Create a bucket with your chosen provider
  3. Access Credentials - Generate access key and secret key
  4. Bucket Permissions - Configure appropriate IAM/access policies

Step-by-Step Setup

Step 1: Create Your Bucket

Create a bucket in your chosen provider. Recommended settings:

Setting       Recommended Value
Access        Private (no public access)
Versioning    Enabled (recommended)
Encryption    Server-side encryption enabled
Region        Close to your users

Step 2: Configure Bucket Permissions

Create an IAM policy with minimum required permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ]
    }
  ]
}
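Before attaching the policy, it can be worth checking that all five required actions are actually granted. A stdlib-only script like the following (purely illustrative, not an Archivus or AWS tool) catches a missing action before it surfaces as a "Permission denied" during validation:

```python
import json

# The five actions the IAM policy above must allow.
REQUIRED_ACTIONS = {
    "s3:GetObject", "s3:PutObject", "s3:DeleteObject",
    "s3:ListBucket", "s3:GetBucketLocation",
}

def missing_actions(policy_json: str) -> set:
    """Return the required S3 actions absent from an IAM policy document."""
    policy = json.loads(policy_json)
    granted = set()
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") == "Allow":
            actions = stmt.get("Action", [])
            # "Action" may be a single string or a list of strings.
            granted.update([actions] if isinstance(actions, str) else actions)
    return REQUIRED_ACTIONS - granted
```

Feed it the policy document from Step 2; an empty result means nothing required is missing.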

Step 3: Generate Access Credentials

Create a dedicated access key for Archivus:

AWS S3:

  1. Go to IAM > Users > Create User
  2. Attach the policy from Step 2
  3. Create access key (programmatic access)
  4. Save the Access Key ID and Secret Access Key

MinIO:

  1. Go to Access Keys in MinIO Console
  2. Create a new access key
  3. Assign appropriate policy
  4. Save credentials

Other Providers: Follow your provider’s documentation for creating S3-compatible access keys.

Step 4: Contact Support

Provide your configuration to Archivus support:

  1. Email enterprise@archivusdms.com
  2. Subject: “BYOB Configuration Request”
  3. Include:
    • Tenant subdomain
    • Provider name
    • Bucket name
    • Region
    • Endpoint URL (for non-AWS providers)
    • Path prefix (optional)

Do NOT send credentials via email. Support will provide a secure channel for credential transmission.

Step 5: Validation

Once configured, Archivus validates your bucket:

  1. Connection test - Verify endpoint is reachable
  2. Authentication test - Verify credentials work
  3. Permission test - Verify read/write/delete operations
  4. Encryption test - Verify encryption is properly configured

You’ll receive confirmation once validation passes.
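The permission checks amount to a round trip through the bucket: write a probe object, read it back, then delete it. The sketch below mirrors that sequence against a hypothetical minimal client interface; the real Archivus validator is internal, so treat this as an outline of the idea, not its implementation:

```python
class InMemoryBucket:
    """Hypothetical stand-in for an S3 client, for illustration only."""
    def __init__(self):
        self._objects = {}
    def put(self, bucket, key, data):
        self._objects[(bucket, key)] = data
    def get(self, bucket, key):
        return self._objects[(bucket, key)]
    def delete(self, bucket, key):
        del self._objects[(bucket, key)]

def validate_bucket(client, bucket: str) -> dict:
    """Probe write/read/delete access using a temporary object.

    `client` is any object exposing put/get/delete -- an assumed
    interface standing in for a real S3 client.
    """
    key, payload = "_archivus_probe", b"probe"
    checks = {}
    try:
        client.put(bucket, key, payload)
        checks["write_permission"] = "passed"
        checks["read_permission"] = (
            "passed" if client.get(bucket, key) == payload else "failed"
        )
        client.delete(bucket, key)
        checks["delete_permission"] = "passed"
    except Exception as exc:  # any refused operation fails validation
        checks["error"] = str(exc)
    return checks

print(validate_bucket(InMemoryBucket(), "demo"))
```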

Step 6: Migration

If you have existing documents:

  1. Migration begins automatically (zero-downtime)
  2. Monitor progress in Settings > Storage
  3. New uploads go to BYOB immediately after migration starts
  4. Existing files are migrated in the background

Provider-Specific Guides

AWS S3

Bucket Configuration:

# Note: us-east-1 accepts no LocationConstraint; for any other region, add
#   --create-bucket-configuration LocationConstraint={region}
aws s3api create-bucket \
  --bucket my-archivus-bucket \
  --region us-east-1

# Enable encryption
aws s3api put-bucket-encryption \
  --bucket my-archivus-bucket \
  --server-side-encryption-configuration '{
    "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
  }'

# Block public access
aws s3api put-public-access-block \
  --bucket my-archivus-bucket \
  --public-access-block-configuration '{
    "BlockPublicAcls": true,
    "IgnorePublicAcls": true,
    "BlockPublicPolicy": true,
    "RestrictPublicBuckets": true
  }'

Configuration Values:

Provider: AWS S3
Region: us-east-1
Bucket: my-archivus-bucket
Endpoint: (leave empty for AWS)

MinIO (Self-Hosted)

Prerequisites:

  • MinIO server accessible from Archivus (public IP or VPN)
  • Valid TLS certificate (required for production)

Bucket Creation:

mc mb myminio/archivus-documents
mc anonymous set none myminio/archivus-documents

Configuration Values:

Provider: MinIO
Bucket: archivus-documents
Endpoint: https://minio.yourdomain.com:9000
Region: us-east-1 (can be any value for MinIO)

Wasabi

Configuration Values:

Provider: Wasabi
Region: us-east-1
Bucket: my-archivus-bucket
Endpoint: s3.us-east-1.wasabisys.com

Notes:

  • Wasabi has no egress fees
  • 90-day minimum storage duration
  • Ideal for archival use cases

DigitalOcean Spaces

Configuration Values:

Provider: DigitalOcean Spaces
Region: nyc3
Bucket: my-archivus-space
Endpoint: nyc3.digitaloceanspaces.com

Available Regions:

  • nyc3 (New York)
  • sfo3 (San Francisco)
  • ams3 (Amsterdam)
  • sgp1 (Singapore)
  • fra1 (Frankfurt)

Backblaze B2

Configuration Values:

Provider: Backblaze B2
Bucket: my-archivus-bucket
Endpoint: s3.us-west-004.backblazeb2.com
Region: us-west-004

Notes:

  • Use the S3-compatible endpoint, not native B2 API
  • Get endpoint from B2 bucket details page

Cloudflare R2

Configuration Values:

Provider: Cloudflare R2
Bucket: my-archivus-bucket
Endpoint: {account_id}.r2.cloudflarestorage.com
Region: auto

Benefits:

  • Zero egress fees
  • Global edge distribution
  • Integrated with Cloudflare CDN

Configuration Options

Option        Required        Description
bucket_name   Yes             Name of your S3 bucket
region        Yes             AWS region or provider region code
endpoint_url  If not AWS      S3-compatible endpoint URL
access_key    Yes             Access key ID
secret_key    Yes             Secret access key
path_prefix   No              Subfolder prefix for all files

Path Prefix

Use a path prefix to organize files within your bucket:

Without prefix: s3://bucket/documents/file.pdf
With prefix:    s3://bucket/archivus/tenant-123/documents/file.pdf

Useful if sharing a bucket across multiple applications.
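The layout above can be sketched as a small key-building helper. This is illustrative only (Archivus's real key scheme is internal; the function name and the fixed `documents/` segment are assumptions based on the example paths):

```python
def object_key(filename: str, prefix: str = "") -> str:
    """Join an optional path prefix onto a document key, as in the
    layout shown above. Illustrative sketch, not the Archivus scheme."""
    # Strip stray slashes so "archivus/tenant-123/" and "archivus/tenant-123"
    # produce the same key.
    parts = [p.strip("/") for p in (prefix, "documents", filename) if p]
    return "/".join(parts)

print(object_key("file.pdf"))                         # documents/file.pdf
print(object_key("file.pdf", "archivus/tenant-123"))  # archivus/tenant-123/documents/file.pdf
```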


Security Considerations

Credential Storage

Your S3 credentials are encrypted using:

  • AES-256-GCM encryption
  • Unique encryption key per tenant
  • Encrypted at rest in our database
  • Credentials never logged or exposed

Best Practices

  1. Use dedicated credentials - Create separate keys for Archivus
  2. Principle of least privilege - Only grant required permissions
  3. Enable bucket versioning - For accidental deletion recovery
  4. Use HTTPS endpoints - Never use unencrypted endpoints
  5. Enable access logging - Monitor bucket access
  6. Set up alerts - Notify on unusual access patterns

Custom KMS Keys

Enterprise customers can use their own KMS keys:

{
  "provider_type": "s3_byob",
  "bucket_name": "my-bucket",
  "kms_key_arn": "arn:aws:kms:us-east-1:123456789:key/abc-123"
}

API Reference

Configure BYOB Storage (Admin Only)

curl -X POST https://api.archivus.app/api/v1/admin/storage/config/{tenant_id} \
  -H "Authorization: Bearer YOUR_ADMIN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "provider_type": "s3_byob",
    "bucket_name": "my-archivus-bucket",
    "region": "us-east-1",
    "endpoint_url": "s3.us-east-1.wasabisys.com",
    "access_key": "YOUR_ACCESS_KEY",
    "secret_key": "YOUR_SECRET_KEY",
    "path_prefix": "archivus/production",
    "reason": "Migrating to customer-owned Wasabi storage"
  }'
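If you script this call instead of using curl, the request body can be built and checked before sending. A stdlib-only sketch (field names come from the example above; the notion of which fields are required is an assumption here, inferred from the configuration options table):

```python
import json

# Assumed-required fields, per the configuration options table.
REQUIRED_FIELDS = ("provider_type", "bucket_name", "region",
                   "access_key", "secret_key")

def build_config_payload(**fields) -> str:
    """Serialize a BYOB storage config, refusing obviously incomplete input."""
    missing = [f for f in REQUIRED_FIELDS if not fields.get(f)]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return json.dumps(fields, indent=2)
```

Pair this with `urllib.request` or your HTTP client of choice to POST the payload to the endpoint shown above.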

Validate Configuration

curl -X POST https://api.archivus.app/api/v1/admin/storage/config/{tenant_id}/validate \
  -H "Authorization: Bearer YOUR_ADMIN_API_KEY"

Response:

{
  "valid": true,
  "checks": {
    "connection": "passed",
    "authentication": "passed",
    "read_permission": "passed",
    "write_permission": "passed",
    "delete_permission": "passed"
  },
  "message": "Storage configuration is valid"
}
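When polling this endpoint from a script, a response like the one above is easy to reduce to a single pass/fail. A minimal stdlib-only check (the helper name is an assumption, not an Archivus SDK function):

```python
import json

def all_checks_passed(response_body: str) -> bool:
    """True if the validation response reports valid=true and every
    individual check passed."""
    resp = json.loads(response_body)
    return bool(resp.get("valid")) and all(
        status == "passed" for status in resp.get("checks", {}).values()
    )
```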

Troubleshooting

“Connection failed”

  • Verify endpoint URL is correct
  • Check if bucket region matches endpoint
  • Ensure no firewall blocking outbound connections

“Authentication failed”

  • Verify access key and secret key
  • Check if credentials are active (not disabled)
  • Ensure credentials have not expired

“Permission denied”

  • Review IAM policy for required permissions
  • Check bucket policy doesn’t block access
  • Verify path prefix matches policy Resource

“Bucket not found”

  • Verify bucket name spelling
  • Ensure bucket exists in the specified region
  • Check if bucket was recently created (propagation delay)

“SSL/TLS error”

  • Ensure endpoint uses HTTPS
  • Verify SSL certificate is valid
  • For self-hosted MinIO, use a valid CA-signed certificate

FAQ

Can I use existing buckets with data?

Yes, but Archivus will use its own path structure. Existing files won’t interfere with Archivus operations. Consider using a dedicated bucket or path prefix for clarity.

What happens if my bucket becomes unavailable?

Archivus will retry operations with exponential backoff. Extended outages will result in upload failures. Monitor your storage provider’s status.

Can I rotate credentials?

Yes. Contact support to update credentials. Plan for brief downtime during rotation.

Is my data replicated?

Archivus doesn’t automatically replicate data. Use your provider’s replication features (S3 Cross-Region Replication, etc.) if needed.

Can I switch back to dedicated storage?

Yes. Contact support to migrate from BYOB back to Archivus-managed dedicated storage.



Questions? Contact enterprise@archivusdms.com