# Uploading Documents

Learn how to upload documents to Archivus, including batch uploads, supported file types, and processing options.
## Quick Upload

The simplest way to upload a document:

```bash
curl -X POST https://api.archivus.app/api/v1/documents/upload \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "X-Tenant-Subdomain: your-tenant" \
  -F "file=@document.pdf"
```
## Upload Methods

### Single Document Upload

Upload one document at a time:

```bash
curl -X POST https://api.archivus.app/api/v1/documents/upload \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "X-Tenant-Subdomain: your-tenant" \
  -F "file=@contract.pdf" \
  -F "enable_ai=true" \
  -F "folder_id=folder_xyz"
```

Response:

```json
{
  "id": "doc_abc123",
  "filename": "contract.pdf",
  "status": "processing",
  "ai_status": "queued",
  "created_at": "2025-12-16T10:30:00Z"
}
```
### Batch Upload

Upload multiple documents at once (up to 10 files):

```bash
curl -X POST https://api.archivus.app/api/v1/documents/upload-batch \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "X-Tenant-Subdomain: your-tenant" \
  -F "files[]=@doc1.pdf" \
  -F "files[]=@doc2.pdf" \
  -F "files[]=@doc3.pdf" \
  -F "folder_id=folder_xyz"
```

Response:

```json
{
  "collection_id": "collection_xyz",
  "documents": [
    {"id": "doc_abc123", "filename": "doc1.pdf", "status": "processing"},
    {"id": "doc_def456", "filename": "doc2.pdf", "status": "processing"},
    {"id": "doc_ghi789", "filename": "doc3.pdf", "status": "processing"}
  ],
  "batch_job_id": "batch_xyz"
}
```
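From Python, the same batch request can be made with `requests`; here is a minimal sketch reusing the field names from the curl example above:

```python
import requests

def upload_batch(file_paths, api_key, tenant, folder_id=None):
    """Upload up to 10 files via the batch endpoint."""
    url = "https://api.archivus.app/api/v1/documents/upload-batch"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "X-Tenant-Subdomain": tenant
    }
    # requests sends a list of (field, file) tuples as repeated files[] parts
    files = [("files[]", open(path, "rb")) for path in file_paths]
    data = {"folder_id": folder_id} if folder_id else {}
    try:
        response = requests.post(url, headers=headers, files=files, data=data)
        response.raise_for_status()
        return response.json()
    finally:
        for _, f in files:
            f.close()

batch = upload_batch(
    ["doc1.pdf", "doc2.pdf", "doc3.pdf"],
    "YOUR_API_KEY", "your-tenant",
    folder_id="folder_xyz"
)
print(f"Batch job: {batch['batch_job_id']}")
```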
## Code Examples

### Python

```python
import requests

def upload_document(file_path, api_key, tenant, folder_id=None, enable_ai=True):
    url = "https://api.archivus.app/api/v1/documents/upload"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "X-Tenant-Subdomain": tenant
    }
    with open(file_path, "rb") as f:
        files = {"file": f}
        data = {"enable_ai": str(enable_ai).lower()}
        if folder_id:
            data["folder_id"] = folder_id
        response = requests.post(url, headers=headers, files=files, data=data)
    response.raise_for_status()  # raise HTTPError on 4xx/5xx (see Error Handling below)
    return response.json()

# Usage
document = upload_document(
    "contract.pdf",
    "YOUR_API_KEY",
    "your-tenant",
    folder_id="folder_xyz"
)
print(f"Document ID: {document['id']}")
```
### JavaScript

```javascript
async function uploadDocument(file, apiKey, tenant, options = {}) {
  const formData = new FormData();
  formData.append('file', file);
  if (options.folderId) {
    formData.append('folder_id', options.folderId);
  }
  formData.append('enable_ai', options.enableAI !== false);

  const response = await fetch('https://api.archivus.app/api/v1/documents/upload', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'X-Tenant-Subdomain': tenant
    },
    body: formData
  });
  if (!response.ok) {
    throw new Error(`Upload failed with status ${response.status}`);
  }
  return response.json();
}

// Usage -- don't name the result `document`, which would shadow the
// global `document` used by querySelector
const fileInput = document.querySelector('input[type="file"]');
const file = fileInput.files[0];
const uploadedDoc = await uploadDocument(file, 'YOUR_API_KEY', 'your-tenant', {
  folderId: 'folder_xyz',
  enableAI: true
});
console.log('Document ID:', uploadedDoc.id);
```
## Upload Options

### Enable AI Processing

Control whether AI analysis runs automatically:

```bash
# Enable AI (default) - the document will be analyzed
-F "enable_ai=true"

# Disable AI - faster upload, no analysis
-F "enable_ai=false"
```

When to disable AI:

- You are uploading many documents at once
- The documents don't need analysis
- You want to trigger processing manually later (see the sketch below)
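For example, reusing the `upload_document` helper from the Python example above to skip AI analysis at upload time:

```python
# Faster ingest: upload now, run AI analysis later
document = upload_document("scan.pdf", "YOUR_API_KEY", "your-tenant", enable_ai=False)
print(document["ai_status"])  # expected: "skipped" (see AI status values below)
```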
### Upload to Folder

Organize documents by uploading them to specific folders:

```bash
-F "folder_id=folder_xyz"
```

If the folder doesn't exist, Archivus creates it automatically.
### Add Tags

Add tags during upload:

```bash
-F "tags[]=important" \
-F "tags[]=contract" \
-F "tags[]=legal"
```

Tags help organize and filter documents later.

### Set Custom Filename

Override the original filename:

```bash
-F "filename=Q4-Contract-2025.pdf"
```
## Supported File Types

| Type | Extensions | Max Size | AI Processing |
|---|---|---|---|
| PDF | .pdf | 500 MB | Full support |
| Word | .docx, .doc | 100 MB | Full support |
| Text | .txt, .md | 50 MB | Full support |
| Images | .jpg, .png, .tiff | 50 MB | OCR support |
| Spreadsheets | .xlsx, .xls | 100 MB | Full support |
| PowerPoint | .pptx, .ppt | 100 MB | Full support |
| HTML | .html, .htm | 50 MB | Full support |
## Processing Stages

After upload, documents go through these stages in order:

1. **Upload** → File received and stored
2. **Text Extraction** → Text extracted from the file
3. **AI Processing** → Analysis, tagging, entity extraction (if enabled)
4. **Embedding Generation** → Vector embeddings for semantic search
5. **Completed** → Ready for use
### Check Processing Status

```bash
curl https://api.archivus.app/api/v1/documents/doc_abc123 \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "X-Tenant-Subdomain: your-tenant"
```

Status values:

- `processing` - Still being processed
- `completed` - Processing finished successfully
- `failed` - Processing failed (check the error message)

AI status values:

- `queued` - Waiting for AI processing
- `processing` - AI analysis in progress
- `completed` - AI analysis finished
- `skipped` - AI processing disabled
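To wait for processing from code, poll the GET endpoint above until the status settles. A minimal sketch; the 10-minute ceiling mirrors the troubleshooting advice below:

```python
import time
import requests

def wait_for_processing(document_id, api_key, tenant, timeout=600, interval=5):
    """Poll GET /documents/{id} until processing completes or fails."""
    url = f"https://api.archivus.app/api/v1/documents/{document_id}"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "X-Tenant-Subdomain": tenant
    }
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        doc = requests.get(url, headers=headers).json()
        if doc["status"] in ("completed", "failed"):
            return doc
        time.sleep(interval)
    raise TimeoutError(f"{document_id} still processing after {timeout}s")
```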
## Upload Limits

### Per Request

- Single upload: 1 file, max 500 MB
- Batch upload: up to 10 files, 500 MB each

### Rate Limits

- Free tier: 10 uploads/minute
- Starter tier: 20 uploads/minute
- Pro tier: 30 uploads/minute
- Team/Enterprise: 50+ uploads/minute
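Rate-limited requests typically come back as HTTP 429; a simple retry sketch (the status code and `Retry-After` header are assumptions here, so check your actual responses):

```python
import time
import requests

def post_with_retry(url, max_retries=3, **kwargs):
    """Retry POSTs that are rejected with 429 Too Many Requests."""
    response = None
    for _ in range(max_retries):
        response = requests.post(url, **kwargs)
        if response.status_code != 429:
            break
        # Retry-After is an assumption; fall back to a full one-minute window
        time.sleep(int(response.headers.get("Retry-After", 60)))
    return response
```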
### Storage Limits

- Free: 500 MB total
- Starter: 2 GB total
- Pro: 25 GB total
- Team: 100 GB total
- Enterprise: Unlimited
## Best Practices

### File Naming

Use descriptive filenames:

- Good: `Q4-2025-Service-Contract.pdf`
- Good: `Employee-Handbook-v2.1.docx`
- Avoid: `document1.pdf`
- Avoid: `scan_001.jpg`
### Batch Uploads

For multiple documents:

- Use the batch upload endpoint (more efficient; for more than 10 files, see the chunking sketch below)
- Upload to organized folders
- Add consistent tags
- Monitor batch job status
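A sketch that splits a long file list into groups of 10 (the per-request limit) and reuses the `upload_batch` helper from the batch upload example above:

```python
def upload_in_chunks(file_paths, api_key, tenant, folder_id=None, chunk_size=10):
    """Upload an arbitrarily long file list in batches of chunk_size."""
    results = []
    for i in range(0, len(file_paths), chunk_size):
        chunk = file_paths[i:i + chunk_size]
        results.append(upload_batch(chunk, api_key, tenant, folder_id=folder_id))
    return results
```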
### Large Files

For files > 100 MB:

- Compress PDFs if possible
- Split large documents into parts (see the sketch below)
- Consider using direct storage upload (Enterprise)
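If splitting is the right option, one way is the third-party `pypdf` library (`pip install pypdf`, not part of Archivus); a sketch:

```python
from pypdf import PdfReader, PdfWriter

def split_pdf(path, pages_per_part=200):
    """Write a large PDF out as numbered parts of pages_per_part pages each."""
    reader = PdfReader(path)
    parts = []
    for start in range(0, len(reader.pages), pages_per_part):
        writer = PdfWriter()
        for page in reader.pages[start:start + pages_per_part]:
            writer.add_page(page)
        part_path = f"{path.rsplit('.', 1)[0]}-part{start // pages_per_part + 1}.pdf"
        with open(part_path, "wb") as f:
            writer.write(f)
        parts.append(part_path)
    return parts
```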
### Error Handling

Always handle errors. Because `upload_document` calls `response.raise_for_status()`, HTTP errors surface as `HTTPError`:

```python
try:
    document = upload_document("file.pdf", api_key, tenant)
    print(f"Success: {document['id']}")
except requests.exceptions.HTTPError as e:
    if e.response.status_code == 413:
        print("File too large")
    elif e.response.status_code == 415:
        print("Unsupported file type")
    else:
        print(f"Upload failed: {e}")
except requests.exceptions.RequestException as e:
    # connection errors, timeouts, etc. carry no response object
    print(f"Request failed: {e}")
```
## Troubleshooting

### Upload Fails

**Error: File too large**

- Check the file size (max 500 MB)
- Compress the PDF or split the document
- Upgrade your plan for larger limits

**Error: Invalid file type**

- Verify the file extension is supported
- Check the file isn't corrupted
- Try converting to PDF

**Error: Rate limit exceeded**

- Wait 1 minute and retry
- Use batch upload for multiple files
- Upgrade your plan for higher limits
### Processing Stuck

**Document stays in "processing"**

- Wait 2-5 minutes (normal for large files)
- Check worker service status
- Retry the upload if stuck > 10 minutes

**AI status stuck in "queued"**

- Check AI credits available
- Verify `enable_ai=true` was set
- Check the worker service is running
### Storage Full

**Error: Storage limit exceeded**

- Delete unused documents
- Upgrade your plan for more storage
- Archive old documents
## Next Steps

- Organizing Documents - Learn about folders and organization
- Searching Documents - Find documents quickly
- Sharing Documents - Share with team members
- API Reference - Complete upload API documentation

Questions? Check the FAQ or contact support@ubiship.com