Setup
Learn how to set up S3-compatible storage for your application.
The starter kit supports any S3-compatible storage provider. This guide covers setting up storage with Cloudflare R2, AWS S3, DigitalOcean Spaces, and MinIO (for local development).
Cloudflare R2 Setup
1. Create an R2 Bucket
- Go to your Cloudflare Dashboard
- Navigate to R2 > Create bucket
- Enter a bucket name (e.g., my-app-images)
- Choose a location
- Click Create bucket
2. Create API Tokens
- Go to Manage R2 API Tokens
- Click Create API Token
- Set permissions:
- Object Read & Write (for uploads and downloads)
- Bucket Read (optional, for listing)
- Copy the Access Key ID and Secret Access Key
3. Configure Environment Variables
Add the R2 credentials to your .env file:
```
S3_ACCESS_KEY_ID=your-r2-access-key-id
S3_SECRET_ACCESS_KEY=your-r2-secret-access-key
S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
S3_REGION=auto
NEXT_PUBLIC_IMAGES_BUCKET_NAME=my-app-images
```

Finding Your Endpoint

Your R2 endpoint URL can be found in the R2 dashboard. It follows the pattern:

```
https://<account-id>.r2.cloudflarestorage.com
```
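These variables map directly onto an S3 client configuration. As a sketch of that mapping (the r2ClientOptions helper and the option shape are illustrative, not part of the starter kit's API):

```typescript
// Illustrative helper: builds S3-compatible client options for Cloudflare R2.
// Names here are assumptions for the sketch, not the starter kit's actual code.
interface S3ClientOptions {
  region: string;
  endpoint: string;
  credentials: { accessKeyId: string; secretAccessKey: string };
}

function r2ClientOptions(
  accountId: string,
  accessKeyId: string,
  secretAccessKey: string,
): S3ClientOptions {
  return {
    region: 'auto', // R2 ignores the region, but S3 clients require one
    endpoint: `https://${accountId}.r2.cloudflarestorage.com`,
    credentials: { accessKeyId, secretAccessKey },
  };
}
```

The same shape works for the other providers below; only the endpoint and region values change.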
4. Configure CORS (Optional)
If you need to upload files directly from the browser, configure CORS:
- Go to your bucket settings
- Navigate to Settings > CORS Policy
- Add the following policy:
```json
[
  {
    "AllowedOrigins": ["https://yourdomain.com", "http://localhost:3000"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3600
  }
]
```

AWS S3 Setup
1. Create an S3 Bucket
- Go to AWS S3 Console
- Click Create bucket
- Enter a bucket name
- Choose a region
- Disable public access (we'll use signed URLs)
- Click Create bucket
2. Create IAM User
- Go to IAM Console
- Navigate to Users > Create user
- Enable Programmatic access
- Attach the AmazonS3FullAccess policy (or create a custom policy with limited permissions)
- Copy the Access Key ID and Secret Access Key
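If you prefer least privilege over AmazonS3FullAccess, a custom policy scoped to a single bucket can look like this (substitute your bucket name; the action list is a typical minimum for uploads, downloads, deletes, and listing):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-app-images/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-app-images"
    }
  ]
}
```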
3. Configure Environment Variables
```
S3_ACCESS_KEY_ID=your-aws-access-key-id
S3_SECRET_ACCESS_KEY=your-aws-secret-access-key
S3_ENDPOINT=https://s3.amazonaws.com
S3_REGION=us-east-1
NEXT_PUBLIC_IMAGES_BUCKET_NAME=my-app-images
```

4. Configure CORS
- Go to your S3 bucket
- Navigate to Permissions > CORS
- Add the following configuration:
```json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
    "AllowedOrigins": ["https://yourdomain.com", "http://localhost:3000"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3600
  }
]
```

DigitalOcean Spaces Setup
1. Create a Space
- Go to DigitalOcean Spaces
- Click Create a Space
- Choose a name and region
- Disable public file listing
- Click Create a Space
2. Generate Access Keys
- Go to API > Spaces Keys
- Click Generate New Key
- Copy the Access Key and Secret Key
3. Configure Environment Variables
```
S3_ACCESS_KEY_ID=your-spaces-access-key
S3_SECRET_ACCESS_KEY=your-spaces-secret-key
S3_ENDPOINT=https://<region>.digitaloceanspaces.com
S3_REGION=nyc3
NEXT_PUBLIC_IMAGES_BUCKET_NAME=my-app-images
```

Replace <region> in the endpoint with your Space's region so it matches S3_REGION (nyc3 in this example).

MinIO Setup (Local Development)
1. Run MinIO with Docker
```yaml
services:
  minio:
    image: minio/minio
    container_name: minio
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    ports:
      - '9000:9000'
      - '9001:9001'
    volumes:
      - minio_data:/data
    command: server /data --console-address ":9001"
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
      interval: 30s
      timeout: 20s
      retries: 3

volumes:
  minio_data:
```

2. Create a Bucket
- Access the MinIO Console at http://localhost:9001
- Log in with minioadmin / minioadmin
- Click Create Bucket
- Name it (e.g., my-app-images)
3. Configure Environment Variables
```
S3_ACCESS_KEY_ID=minioadmin
S3_SECRET_ACCESS_KEY=minioadmin
S3_ENDPOINT=http://localhost:9000
S3_REGION=us-east-1
NEXT_PUBLIC_IMAGES_BUCKET_NAME=my-app-images
```

Storage Configuration
The storage service is configured in lib/storage/service.ts and automatically uses the environment variables you've set.
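The service's internals are not reproduced here; presigning is typically delegated to a library such as @aws-sdk/s3-request-presigner. To demystify what a signed upload URL actually is, here is a self-contained sketch of SigV4 query-string presigning for a PUT, using only node:crypto. This is illustrative, not the starter kit's actual code; all the providers in this guide (S3, R2, Spaces, MinIO) accept SigV4-signed requests.

```typescript
import { createHash, createHmac } from 'node:crypto';

const sha256hex = (s: string) => createHash('sha256').update(s).digest('hex');
const hmac = (key: Buffer | string, s: string) =>
  createHmac('sha256', key).update(s).digest();

// Sketch of SigV4 query-string presigning for a PUT upload (path-style URLs).
function presignPutUrl(opts: {
  endpoint: string; // e.g. 'http://localhost:9000'
  region: string;
  bucket: string;
  key: string;
  accessKeyId: string;
  secretAccessKey: string;
  expiresSeconds?: number;
  now?: Date; // injectable so the output is deterministic
}): string {
  const base = new URL(opts.endpoint);
  const now = opts.now ?? new Date();
  const amzDate = now.toISOString().replace(/[-:]/g, '').replace(/\.\d{3}/, ''); // e.g. 20240101T000000Z
  const dateStamp = amzDate.slice(0, 8);
  const scope = `${dateStamp}/${opts.region}/s3/aws4_request`;

  // Path-style addressing: /<bucket>/<key>, each segment URI-encoded.
  const canonicalUri =
    '/' + [opts.bucket, ...opts.key.split('/')].map(encodeURIComponent).join('/');

  // Query keys must be sorted; these happen to already be alphabetical.
  const query = [
    ['X-Amz-Algorithm', 'AWS4-HMAC-SHA256'],
    ['X-Amz-Credential', `${opts.accessKeyId}/${scope}`],
    ['X-Amz-Date', amzDate],
    ['X-Amz-Expires', String(opts.expiresSeconds ?? 3600)],
    ['X-Amz-SignedHeaders', 'host'],
  ]
    .map(([k, v]) => `${k}=${encodeURIComponent(v)}`)
    .join('&');

  const canonicalRequest =
    ['PUT', canonicalUri, query, `host:${base.host}\n`, 'host', 'UNSIGNED-PAYLOAD'].join('\n');
  const stringToSign =
    ['AWS4-HMAC-SHA256', amzDate, scope, sha256hex(canonicalRequest)].join('\n');

  // Derive the signing key: HMAC chain over date, region, service, terminator.
  const signingKey = hmac(
    hmac(hmac(hmac(`AWS4${opts.secretAccessKey}`, dateStamp), opts.region), 's3'),
    'aws4_request',
  );
  const signature = createHmac('sha256', signingKey).update(stringToSign).digest('hex');

  return `${base.protocol}//${base.host}${canonicalUri}?${query}&X-Amz-Signature=${signature}`;
}
```

A PUT request to the resulting URL within the expiry window uploads the object without any further credentials, which is why signed URLs must be generated server-side and kept short-lived.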
Security Note
Never commit your access keys to version control. Always use environment
variables and ensure .env is in your .gitignore.
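A cheap guard against a half-configured environment is to check the required variables before the app boots. A sketch (the variable names match this guide; the helper itself is illustrative, not part of the starter kit):

```typescript
// The five storage variables this guide sets up.
const REQUIRED_STORAGE_VARS = [
  'S3_ACCESS_KEY_ID',
  'S3_SECRET_ACCESS_KEY',
  'S3_ENDPOINT',
  'S3_REGION',
  'NEXT_PUBLIC_IMAGES_BUCKET_NAME',
] as const;

// Returns the names of variables that are missing or blank.
function missingStorageVars(env: Record<string, string | undefined>): string[] {
  return REQUIRED_STORAGE_VARS.filter((name) => !env[name]?.trim());
}

// e.g. at startup:
// const missing = missingStorageVars(process.env);
// if (missing.length > 0) {
//   throw new Error(`Missing storage env vars: ${missing.join(', ')}`);
// }
```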
Testing the Setup
You can test your storage setup by checking if the service initializes correctly:
```typescript
import { storageService } from '@/lib/storage';

// This will throw an error if credentials are missing or invalid
const testUrl = await storageService.getSignedUploadUrl(
  'test/file.txt',
  process.env.NEXT_PUBLIC_IMAGES_BUCKET_NAME!
);

console.log('Storage setup successful!');
```