@repo/storage
Upload, store, and serve files at scale. Switch between Vercel Blob, Cloudflare R2, and Cloudflare Images without changing code. Built-in security, progress tracking, and automatic optimization.
Quick Start
Add file uploads in 10 minutes:
pnpm add @repo/storage

Handle files of any size with automatic chunking and progress bars.
Why @repo/storage?
File storage across cloud providers is inconsistent: each provider has its own API, authentication scheme, and limits. Managing large files, security, and CDN distribution adds further complexity.
@repo/storage solves this with a unified interface that works with any provider. Upload files with progress tracking, automatic security, and global CDN delivery.
Production-ready with multipart uploads for large files, rate limiting, CSRF protection, and automatic retries.
Use cases
- User avatars — Profile picture uploads with automatic resizing and optimization
- Document storage — PDFs, contracts, and files organized by user or organization
- Media galleries — Photos and videos with thumbnail generation and CDN delivery
- File attachments — Support files in chat, tickets, or project management
- Content platforms — User-generated content with moderation and organization
How it works
@repo/storage provides a provider-agnostic API for file operations with automatic optimization and security:
import { upload } from "@repo/storage/client";
// Upload with progress tracking
const blob = await upload(file.name, file, {
access: "public",
onUploadProgress: ({ percentage }) => {
console.log(`${percentage}% uploaded`);
}
});

Files are automatically chunked for large uploads, retried on failure, and served from a global CDN.
Supported providers
| Provider | Best for | Features |
|---|---|---|
| Vercel Blob | Vercel deployments | Auto CDN, 100GB free, simple auth |
| Cloudflare R2 | S3 replacement | Zero egress fees, S3-compatible |
| Cloudflare Images | Image-heavy apps | Auto optimization, responsive variants |
Key features
Multipart uploads — Files >100MB automatically split into chunks, uploaded concurrently
Progress tracking — Real-time progress callbacks (0-100%) for UI feedback
Security built-in — Rate limiting, CSRF protection, file validation, path sanitization
Direct uploads — Users upload straight to storage (faster, reduces server load)
Automatic retries — Transient failures retry with exponential backoff
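The retry-with-exponential-backoff behavior described above can be sketched generically. This is an illustrative helper only; the function name and defaults are assumptions, not the package's internals:

```typescript
// Illustrative sketch: retry a transient failure, doubling the delay
// between attempts (500ms, 1000ms, 2000ms, ...).
async function withRetry<T>(
  fn: () => Promise<T>,
  { retries = 3, baseDelayMs = 500 }: { retries?: number; baseDelayMs?: number } = {}
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt === retries) break; // out of attempts, give up
      // Exponential backoff before the next attempt
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```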
Quick Start
1. Install the package
pnpm add @repo/storage

2. Configure your storage provider
STORAGE_PROVIDER=vercel-blob
VERCEL_BLOB_READ_WRITE_TOKEN=vercel_blob_rw_xxxxx

3. Upload a file
"use client";
import { upload } from "@repo/storage/client";
export function FileUploader() {
async function handleUpload(file: File) {
const blob = await upload(file.name, file, {
access: "public",
handleUploadUrl: "/api/upload",
});
console.log("Uploaded:", blob.url);
}
return (
<input
type="file"
onChange={(e) => e.target.files?.[0] && handleUpload(e.target.files[0])}
/>
);
}

4. Create the upload API route
import { handleUpload } from "@repo/storage/server/next";
export async function POST(request: Request) {
return handleUpload({ request });
}

That's it! You now have file uploads with automatic progress tracking and CDN delivery.
Track upload progress
Add an onUploadProgress callback to show a progress bar:

onUploadProgress: ({ percentage }) => setProgress(percentage ?? 0);

Distribution
This package is available as @oneapp/storage for use outside the monorepo.
npm install @oneapp/storage

Build configuration: uses tsdown with createDistConfig('node', ...) for distribution builds.
Technical Details
For Developers: Technical implementation details
Production-grade, multi-provider storage abstraction layer for Next.js applications with full TypeScript support, edge compatibility, and 24+ server actions.
Installation
pnpm add @repo/storage

Multi-Provider
Switch between Vercel Blob, Cloudflare R2, and Cloudflare Images with just a configuration change.
Overview
| Property | Value |
|---|---|
| Location | packages/storage |
| Providers | Vercel Blob, Cloudflare R2, Cloudflare Images |
| Test Coverage | 38 test files |
| Edge Ready | Full edge runtime support |
Features
Core Capabilities
- ✅ Multi-Provider Support - Switch providers with configuration
- ✅ Type-Safe - Full TypeScript with strict mode
- ✅ Edge Compatible - Works in Vercel Edge and Cloudflare Workers
- ✅ Server Actions - 24+ Next.js server actions for App Router
- ✅ Multipart Uploads - Automatic chunking for large files (>100MB)
- ✅ Progress Tracking - Real-time upload progress with callbacks
- ✅ Abort Support - Cancel in-progress uploads
- ✅ Retry Logic - Automatic retry with exponential backoff
- ✅ Health Checks - Monitor provider availability
Security Features
- 🔒 Authentication - Built-in auth helpers with session management
- 🔒 Rate Limiting - Configurable rate limits per operation
- 🔒 Input Validation - File size, MIME type, and path validation
- 🔒 CSRF Protection - Origin checking and token validation
- 🔒 Path Sanitization - Prevent directory traversal attacks
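As an illustration of the path-sanitization idea, a hypothetical sanitizeKey helper (not the package's actual code) can strip traversal segments and control characters before a user-supplied name is used as a storage key:

```typescript
// Illustrative sketch: prevent directory traversal by dropping "..", ".",
// and empty segments, and stripping control characters from each segment.
function sanitizeKey(input: string): string {
  return input
    .replace(/\\/g, "/") // normalize Windows-style separators
    .split("/")
    .filter((seg) => seg !== "" && seg !== "." && seg !== "..") // drop traversal segments
    .map((seg) => seg.replace(/[\x00-\x1f]/g, "")) // strip control characters
    .join("/");
}
```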
Export Paths
| Path | Description |
|---|---|
@repo/storage | Environment variables |
@repo/storage/client | Client-side upload functions |
@repo/storage/server | Server-side storage operations |
@repo/storage/client/next | Next.js client hooks |
@repo/storage/server/next | Next.js server actions |
@repo/storage/server/edge | Edge runtime-compatible functions |
@repo/storage/types | TypeScript types |
@repo/storage/validation | Validation utilities |
Quick Start
1. Configure Environment
# Required: Choose your provider
STORAGE_PROVIDER=vercel-blob # or cloudflare-r2, cloudflare-images, multi
# Vercel Blob
VERCEL_BLOB_READ_WRITE_TOKEN=vercel_blob_rw_xxxxx
# Cloudflare R2
R2_ACCOUNT_ID=your-account-id
R2_ACCESS_KEY_ID=your-access-key
R2_SECRET_ACCESS_KEY=your-secret-key
R2_BUCKET=my-bucket
# Cloudflare Images
CLOUDFLARE_IMAGES_ACCOUNT_ID=your-account-id
CLOUDFLARE_IMAGES_API_TOKEN=your-api-token
# Security (recommended for production)
STORAGE_ENFORCE_AUTH=true
STORAGE_ENABLE_RATE_LIMIT=true
STORAGE_ENFORCE_CSRF=true
# File limits
STORAGE_MAX_FILE_SIZE=104857600 # 100MB
STORAGE_MAX_FILES_PER_UPLOAD=10

2. Basic Upload (Client)
"use client";
import { upload } from "@repo/storage/client";
import { useState } from "react";
export function FileUploader() {
const [uploading, setUploading] = useState(false);
async function handleUpload(file: File) {
setUploading(true);
try {
// highlight-start
const blob = await upload(file.name, file, {
access: "public",
handleUploadUrl: "/api/upload",
});
// highlight-end
console.log("Uploaded:", blob.url);
} catch (error) {
console.error("Upload failed:", error);
} finally {
setUploading(false);
}
}
return (
<input
type="file"
onChange={(e) => e.target.files?.[0] && handleUpload(e.target.files[0])}
disabled={uploading}
/>
);
}

3. Server Action Upload
"use server";
import { uploadMediaAction } from "@repo/storage/server/next";
export async function uploadFile(file: File) {
// highlight-start
const result = await uploadMediaAction(`uploads/${Date.now()}-${file.name}`, file, {
contentType: file.type,
maxFileSize: 10 * 1024 * 1024, // 10MB
allowedMimeTypes: ["image/jpeg", "image/png", "image/webp"]
});
// highlight-end
if (!result.success) {
throw new Error(result.error);
}
return result.data;
}

4. List Files
import { storage } from "@repo/storage/server";
export async function listFiles(prefix: string) {
// highlight-start
const files = await storage.list({
prefix,
limit: 100
});
// highlight-end
return files.map((file) => ({
key: file.key,
url: file.url,
size: file.size,
lastModified: file.lastModified
}));
}

Usage Patterns
Progress Tracking
Real-Time Updates
Use the onUploadProgress callback to show real-time upload progress to users.
"use client";
import { upload } from "@repo/storage/client/next";
import { useState } from "react";
export function ProgressUploader() {
const [progress, setProgress] = useState(0);
async function handleUpload(file: File) {
const blob = await upload(file.name, file, {
access: "public",
handleUploadUrl: "/api/upload",
// highlight-start
onUploadProgress: ({ percentage }) => {
setProgress(percentage ?? 0);
},
// highlight-end
});
return blob;
}
return (
<div>
<input type="file" onChange={(e) => e.target.files?.[0] && handleUpload(e.target.files[0])} />
{progress > 0 && <progress value={progress} max={100} />}
</div>
);
}

Multiple File Upload
"use client";
import { uploadMultipleFiles } from "@repo/storage/client/next";
export async function uploadFiles(files: File[]) {
// highlight-start
const results = await uploadMultipleFiles(
files.map((file) => ({
file,
key: `uploads/${Date.now()}-${file.name}`
})),
{
onProgress: (key, progress) => {
console.log(`${key}: ${progress.percentage}%`);
},
onError: (key, error) => {
console.error(`${key} failed:`, error);
}
}
);
// highlight-end
return results;
}

Multipart Upload (Large Files)
Large Files
Files >100MB automatically use multipart uploads with concurrent part uploads for optimal performance.
import { createMultipartUploadManager } from "@repo/storage/server";
import { storage } from "@repo/storage/server";
export async function uploadLargeFile(file: File) {
// highlight-start
const manager = createMultipartUploadManager(storage, `large-files/${file.name}`, file, {
partSize: 10 * 1024 * 1024, // 10MB parts
queueSize: 8, // 8 concurrent uploads
onProgress: (progress) => {
console.log(`Progress: ${progress.percentage}%`);
}
});
// highlight-end
const result = await manager.upload();
return result;
}

Presigned URLs (Direct Upload)
import { storage } from "@repo/storage/server";
export async function getUploadUrl(filename: string) {
// highlight-start
const presigned = await storage.getPresignedUploadUrl(`uploads/${filename}`, {
expiresIn: 3600, // 1 hour
contentType: "image/jpeg"
});
// highlight-end
return presigned;
}
// Client-side: Upload directly to presigned URL
import type { PresignedUploadUrl } from "@repo/storage/types";
async function uploadWithPresignedUrl(file: File, presigned: PresignedUploadUrl) {
const formData = new FormData();
Object.entries(presigned.fields).forEach(([key, value]) => {
formData.append(key, value);
});
formData.append("file", file);
const response = await fetch(presigned.url, {
method: "POST",
body: formData
});
return response.ok;
}

Delete Operations
import { deleteMediaAction, bulkDeleteMediaAction } from "@repo/storage/server/next";
// Delete single file
export async function deleteFile(key: string) {
const result = await deleteMediaAction(key);
if (!result.success) {
throw new Error(result.error);
}
}
// Delete multiple files
export async function deleteMultipleFiles(keys: string[]) {
// highlight-next-line
const result = await bulkDeleteMediaAction(keys);
console.log("Succeeded:", result.data?.succeeded);
console.log("Failed:", result.data?.failed);
}

Abort Uploads
"use client";
import { upload } from "@repo/storage/client";
import { useState } from "react";
export function AbortableUploader() {
const [controller, setController] = useState<AbortController | null>(null);
async function handleUpload(file: File) {
const abortController = new AbortController();
setController(abortController);
try {
// highlight-start
const blob = await upload(file.name, file, {
access: "public",
handleUploadUrl: "/api/upload",
signal: abortController.signal,
});
// highlight-end
console.log("Uploaded:", blob.url);
} catch (error) {
if (error instanceof Error && error.name === "AbortError") {
console.log("Upload cancelled");
}
}
}
function handleCancel() {
controller?.abort();
}
return (
<div>
<input type="file" onChange={(e) => e.target.files?.[0] && handleUpload(e.target.files[0])} />
<button onClick={handleCancel}>Cancel Upload</button>
</div>
);
}

Server Actions (24+ Actions)
Upload Actions
import { uploadMediaAction, uploadMultipleMediaAction, uploadWithProgressAction } from "@repo/storage/server/next";
// Single upload
const result = await uploadMediaAction(key, file);
// Multiple uploads
const results = await uploadMultipleMediaAction(files);
// With progress tracking
const blob = await uploadWithProgressAction(key, file, {
onProgress: (progress) => console.log(progress)
});

Delete Actions
import { deleteMediaAction, bulkDeleteMediaAction, deleteFolderAction } from "@repo/storage/server/next";
// Single delete
await deleteMediaAction(key);
// Bulk delete
await bulkDeleteMediaAction(keys);
// Delete entire folder
await deleteFolderAction(prefix);

List & Query Actions
import { listMediaAction, getMediaDetailsAction } from "@repo/storage/server/next";
// List files
const files = await listMediaAction({ prefix, limit: 100 });
// Get file details
const details = await getMediaDetailsAction(key);

Provider Configuration
Vercel Blob
Vercel Optimized
Best for Vercel deployments with automatic edge caching and global CDN distribution.
STORAGE_PROVIDER=vercel-blob
VERCEL_BLOB_READ_WRITE_TOKEN=vercel_blob_rw_xxxxx

Features:
- Global edge network
- Automatic CDN
- Generous free tier (100GB)
- Simple authentication
Cloudflare R2
S3-Compatible
Drop-in replacement for S3 with zero egress fees and global distribution.
STORAGE_PROVIDER=cloudflare-r2
R2_ACCOUNT_ID=your-account-id
R2_ACCESS_KEY_ID=your-access-key
R2_SECRET_ACCESS_KEY=your-secret-key
R2_BUCKET=my-bucket

Features:
- S3-compatible API
- Zero egress fees
- Global replication
- Lifecycle policies
Cloudflare Images
STORAGE_PROVIDER=cloudflare-images
CLOUDFLARE_IMAGES_ACCOUNT_ID=your-account-id
CLOUDFLARE_IMAGES_API_TOKEN=your-api-token

Features:
- Automatic image optimization
- Responsive variants
- WebP/AVIF conversion
- Global CDN
Multi-Provider Mode
Advanced Usage
Use multi-provider mode to distribute assets across multiple providers for redundancy.
STORAGE_PROVIDER=multi
MULTI_STORAGE_PRIMARY=vercel-blob
MULTI_STORAGE_FALLBACK=cloudflare-r2

Security
Authentication
import { withAuth } from "@repo/storage/server";
import { uploadMediaAction } from "@repo/storage/server/next";
// Require authentication
export const uploadFile = withAuth(async (file: File, userId: string) => {
// userId automatically provided from session
return await uploadMediaAction(`user/${userId}/${file.name}`, file);
});

Rate Limiting
import { withRateLimit } from "@repo/storage/server";
import { uploadMediaAction } from "@repo/storage/server/next";
// Apply rate limit (100 requests per minute)
export const uploadFile = withRateLimit(async (file: File) => {
return await uploadMediaAction(file.name, file);
});

File Validation
import { validateFile } from "@repo/storage/validation";
// Validate before upload
const validation = validateFile(file, {
maxSize: 10 * 1024 * 1024, // 10MB
allowedMimeTypes: ["image/jpeg", "image/png"],
allowedExtensions: [".jpg", ".jpeg", ".png"]
});
if (!validation.valid) {
throw new Error(validation.error);
}

CSRF Protection
import { withCSRF } from "@repo/storage/server";
import { uploadMediaAction } from "@repo/storage/server/next";
// Enforce CSRF token
export const uploadFile = withCSRF(async (file: File) => {
return await uploadMediaAction(file.name, file);
});

Edge Runtime Support
Edge Compatible
All core functions work in Vercel Edge Runtime and Cloudflare Workers.
import { storage } from "@repo/storage/server/edge";
// Edge route handler
export const runtime = "edge";
export async function GET(request: Request) {
const files = await storage.list({ prefix: "public/" });
return Response.json(files);
}

Health Checks
import { checkProviderHealth } from "@repo/storage/server";
export async function checkStorage() {
const health = await checkProviderHealth();
console.log(health.status); // 'healthy' | 'degraded' | 'down'
console.log(health.latency); // Response time in ms
console.log(health.provider); // Active provider
}

Testing
# Run tests
pnpm --filter @repo/storage test
# Watch mode
pnpm --filter @repo/storage test:watch
# Coverage
pnpm --filter @repo/storage test:coverage

Troubleshooting
Upload Fails Silently
Check that STORAGE_PROVIDER is correctly configured and credentials are valid.
Rate Limit Errors
Increase rate limits in environment variables or implement client-side retry logic.
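Client-side retry logic for rate-limit responses can be sketched as follows. This is a hypothetical helper, not part of @repo/storage: it retries on HTTP 429 and honors the Retry-After header when the server provides one.

```typescript
// Illustrative sketch: retry a request that was rate-limited (HTTP 429),
// waiting for the number of seconds the server suggests via Retry-After.
async function fetchWithRateLimitRetry(
  url: string,
  init?: RequestInit,
  maxRetries = 3
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const response = await fetch(url, init);
    // Return immediately on success, non-429 errors, or exhausted retries
    if (response.status !== 429 || attempt >= maxRetries) return response;
    // Fall back to 1 second if no Retry-After header is present
    const retryAfterSeconds = Number(response.headers.get("Retry-After") ?? 1);
    await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
  }
}
```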
CORS Issues
For presigned URLs, ensure your storage provider has CORS configured correctly.
Large File Timeouts
Use multipart uploads for files >100MB and increase timeout settings.
Related Packages
- @repo/observability - Logging and monitoring
- @repo/security - Security utilities
External Resources
- Vercel Blob Docs - Vercel Blob API reference
- Cloudflare R2 Docs - R2 documentation
- AWS S3 API - S3-compatible API reference