File uploads are one of the riskiest features in any web application. Users upload malicious files disguised as images, oversized files that crash your server, and formats you never intended to support. At the same time, upload UX expectations are high — drag-and-drop, progress bars, instant previews, and resumable uploads are baseline.
This guide covers both sides: the security practices that prevent uploads from becoming attack vectors, and the UX patterns that make uploads feel polished and reliable.
Frontend Validation: First Line of Defense
Frontend validation is for UX, not security. It gives users immediate feedback before wasting time on an upload. But never trust it — all validation must be repeated server-side.
```html
<input
  type="file"
  accept="image/jpeg,image/png,image/webp,.pdf"
  id="upload"
  multiple
>
```

The `accept` attribute filters the file picker dialog but doesn't actually prevent other file types. Users can bypass it by typing a filename or using drag-and-drop.
```javascript
const MAX_SIZE = 10 * 1024 * 1024; // 10MB
const ALLOWED_TYPES = ['image/jpeg', 'image/png', 'image/webp', 'application/pdf'];

function validateFile(file) {
  const errors = [];

  // Check MIME type (reported by the browser, not fully reliable)
  if (!ALLOWED_TYPES.includes(file.type)) {
    errors.push(`${file.name}: File type ${file.type || 'unknown'} is not allowed`);
  }

  // Check size
  if (file.size > MAX_SIZE) {
    errors.push(`${file.name}: File is ${(file.size / 1024 / 1024).toFixed(1)}MB (max ${MAX_SIZE / 1024 / 1024}MB)`);
  }

  // Check extension as a secondary signal
  const ext = file.name.split('.').pop().toLowerCase();
  const allowedExts = ['jpg', 'jpeg', 'png', 'webp', 'pdf'];
  if (!allowedExts.includes(ext)) {
    errors.push(`${file.name}: .${ext} files are not allowed`);
  }

  return errors;
}
```
Drag-and-Drop Upload
Drag-and-drop is expected UX for file uploads. The File API and DataTransfer API make it straightforward:
```javascript
const dropzone = document.getElementById('dropzone');

// Prevent the browser's default drag behavior (navigating to the file)
['dragenter', 'dragover', 'dragleave', 'drop'].forEach(event => {
  dropzone.addEventListener(event, e => {
    e.preventDefault();
    e.stopPropagation();
  });
});

// Visual feedback (note: dragleave also fires when the cursor moves over child elements)
dropzone.addEventListener('dragenter', () => dropzone.classList.add('drag-over'));
dropzone.addEventListener('dragleave', () => dropzone.classList.remove('drag-over'));
dropzone.addEventListener('drop', () => dropzone.classList.remove('drag-over'));

// Handle dropped files
dropzone.addEventListener('drop', (e) => {
  const files = [...e.dataTransfer.files];
  files.forEach(file => {
    const errors = validateFile(file);
    if (errors.length > 0) {
      showErrors(errors);
      return;
    }
    uploadFile(file);
  });
});

// Also support click-to-browse (validate these files too, same as dropped ones)
dropzone.addEventListener('click', () => {
  const input = document.createElement('input');
  input.type = 'file';
  input.multiple = true;
  input.accept = 'image/*,.pdf';
  input.addEventListener('change', (e) => {
    [...e.target.files].forEach(file => {
      const errors = validateFile(file);
      if (errors.length > 0) {
        showErrors(errors);
        return;
      }
      uploadFile(file);
    });
  });
  input.click();
});
```

Paste support is often overlooked but valuable — users can paste screenshots straight from their clipboard:
```javascript
document.addEventListener('paste', (e) => {
  const files = [...e.clipboardData.files];
  if (files.length > 0) {
    e.preventDefault();
    files.forEach(uploadFile);
  }
});
```
Upload Progress Tracking
Users need feedback during uploads, especially for large files. XMLHttpRequest provides progress events (fetch API does not for uploads):
```javascript
function uploadFile(file) {
  const formData = new FormData();
  formData.append('file', file);

  const xhr = new XMLHttpRequest();

  xhr.upload.addEventListener('progress', (e) => {
    if (e.lengthComputable) {
      const percent = Math.round((e.loaded / e.total) * 100);
      updateProgressBar(file.name, percent);
    }
  });

  xhr.addEventListener('load', () => {
    if (xhr.status === 200) {
      showSuccess(file.name);
    } else {
      showError(file.name, xhr.statusText);
    }
  });

  xhr.addEventListener('error', () => showError(file.name, 'Upload failed'));

  xhr.open('POST', '/api/upload');
  xhr.send(formData);
}
```

What about fetch? Upload progress via a ReadableStream request body has only limited browser support, and as of 2026 fetch still doesn't expose upload progress everywhere. Use XMLHttpRequest for upload progress tracking, or use a library like Uppy or tus-js-client that handles this for you.
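If the rest of your code is promise-based, the XHR uploader is easy to wrap so callers can `await` it. A minimal sketch; `updateProgressBar` is the same (assumed) helper used above:

```javascript
// Promise wrapper around XMLHttpRequest so callers can `await` uploads
function uploadFileAsync(file, url = '/api/upload') {
  return new Promise((resolve, reject) => {
    const formData = new FormData();
    formData.append('file', file);

    const xhr = new XMLHttpRequest();
    xhr.upload.addEventListener('progress', (e) => {
      if (e.lengthComputable) {
        updateProgressBar(file.name, Math.round((e.loaded / e.total) * 100));
      }
    });
    xhr.addEventListener('load', () => {
      if (xhr.status >= 200 && xhr.status < 300) {
        resolve(xhr.response);
      } else {
        reject(new Error(`Upload failed with status ${xhr.status}`));
      }
    });
    xhr.addEventListener('error', () => reject(new Error('Network error during upload')));

    xhr.open('POST', url);
    xhr.send(formData);
  });
}
```

This also makes retries and sequencing trivial: `await uploadFileAsync(file)` inside a loop uploads files one at a time.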
File Type Verification: Magic Bytes
File extensions lie. MIME types from the browser lie. The only reliable way to verify file type is by reading the file's magic bytes — the first few bytes that identify the file format.
```javascript
// Client-side magic byte checking
async function getFileType(file) {
  const buffer = await file.slice(0, 12).arrayBuffer();
  const bytes = new Uint8Array(buffer);

  // JPEG: FF D8 FF
  if (bytes[0] === 0xFF && bytes[1] === 0xD8 && bytes[2] === 0xFF) return 'image/jpeg';
  // PNG: 89 50 4E 47
  if (bytes[0] === 0x89 && bytes[1] === 0x50 && bytes[2] === 0x4E && bytes[3] === 0x47) return 'image/png';
  // WebP: 52 49 46 46 ... 57 45 42 50 ("RIFF....WEBP")
  if (bytes[0] === 0x52 && bytes[1] === 0x49 && bytes[2] === 0x46 && bytes[3] === 0x46 &&
      bytes[8] === 0x57 && bytes[9] === 0x45 && bytes[10] === 0x42 && bytes[11] === 0x50) return 'image/webp';
  // PDF: 25 50 44 46 ("%PDF")
  if (bytes[0] === 0x25 && bytes[1] === 0x50 && bytes[2] === 0x44 && bytes[3] === 0x46) return 'application/pdf';
  // GIF: 47 49 46 38 ("GIF8")
  if (bytes[0] === 0x47 && bytes[1] === 0x49 && bytes[2] === 0x46 && bytes[3] === 0x38) return 'image/gif';

  return null; // Unknown
}
```

```javascript
// Server-side (Node.js) with the file-type package
import fs from 'node:fs/promises';
import { fileTypeFromBuffer } from 'file-type';

const ALLOWED_MIMES = ['image/jpeg', 'image/png', 'image/webp', 'application/pdf'];

const buffer = await fs.readFile(uploadedFilePath);
const type = await fileTypeFromBuffer(buffer);
if (!type || !ALLOWED_MIMES.includes(type.mime)) {
  throw new Error('Invalid file type');
}
```

Always verify server-side. Client-side magic byte checking improves UX (instant feedback) but can be bypassed. The server must re-verify using a library like file-type (Node.js), python-magic (Python), or by reading the bytes directly.
Presigned URLs: Direct-to-Storage Uploads
Instead of uploading through your server (which consumes bandwidth and CPU), generate a presigned URL that lets the client upload directly to cloud storage.
```javascript
// Server: Generate presigned URL
import crypto from 'node:crypto';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3Client = new S3Client({});

app.post('/api/upload-url', async (req, res) => {
  const { filename, contentType } = req.body;
  const key = `uploads/${crypto.randomUUID()}/${filename}`;

  const command = new PutObjectCommand({
    Bucket: 'my-bucket',
    Key: key,
    ContentType: contentType,
  });

  const url = await getSignedUrl(s3Client, command, { expiresIn: 300 }); // 5 min
  res.json({ url, key });
});
```

```javascript
// Client: Upload directly to S3/R2
async function uploadDirect(file) {
  // 1. Get a presigned URL from your server
  const { url, key } = await fetch('/api/upload-url', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ filename: file.name, contentType: file.type })
  }).then(r => r.json());

  // 2. Upload directly to storage
  const xhr = new XMLHttpRequest();
  xhr.upload.onprogress = (e) => updateProgress(e.loaded / e.total);
  xhr.open('PUT', url);
  xhr.setRequestHeader('Content-Type', file.type);
  xhr.send(file);
}
```

Works with AWS S3, Cloudflare R2, Google Cloud Storage, MinIO, or any S3-compatible storage. The pattern is identical — your server generates the signed URL, the client uploads directly.
Benefits: Zero upload bandwidth on your server. No file proxying. Scales with the storage provider's infrastructure, not yours.
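One caveat: a presigned PUT URL gives you no hard, storage-enforced size limit. If you need one, presigned POST policies support a `content-length-range` condition. A sketch, assuming the `@aws-sdk/s3-presigned-post` package and the same `s3Client` and `key` names as the example above:

```javascript
import { createPresignedPost } from '@aws-sdk/s3-presigned-post';

// S3 itself rejects any upload outside the signed content-length-range
const { url, fields } = await createPresignedPost(s3Client, {
  Bucket: 'my-bucket',
  Key: key,
  Conditions: [
    ['content-length-range', 0, 10 * 1024 * 1024], // hard 10MB cap, enforced by S3
  ],
  Expires: 300, // seconds
});
// The client then submits a multipart/form-data POST containing `fields` plus the file
```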
Chunked and Resumable Uploads
For files over 50-100MB, single-request uploads are fragile — a network hiccup at 95% means starting over. Chunked uploads split the file into parts and upload them independently. If a chunk fails, only that chunk is retried.
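The core mechanics are simple: chunking is just a mapping from a file size to byte ranges. An illustrative sketch (not the tus wire format):

```javascript
// Split a total size into [start, end) byte ranges of at most chunkSize bytes.
// Each range maps to file.slice(start, end) and can be uploaded and retried independently.
function chunkRanges(totalSize, chunkSize = 5 * 1024 * 1024) {
  const ranges = [];
  for (let start = 0; start < totalSize; start += chunkSize) {
    ranges.push({ start, end: Math.min(start + chunkSize, totalSize) });
  }
  return ranges;
}
```

A 12MB file with 5MB chunks yields three ranges; if the third chunk fails, only the final 2MB are re-sent. In practice, use a protocol that also tracks resume state across sessions rather than rolling your own.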
The tus protocol is the open standard for resumable uploads:
```javascript
import * as tus from 'tus-js-client';

function uploadLargeFile(file) {
  const upload = new tus.Upload(file, {
    endpoint: '/api/tus',
    chunkSize: 5 * 1024 * 1024, // 5MB chunks
    retryDelays: [0, 1000, 3000, 5000], // Retry on failure
    metadata: {
      filename: file.name,
      filetype: file.type,
    },
    onProgress: (bytesUploaded, bytesTotal) => {
      const percent = ((bytesUploaded / bytesTotal) * 100).toFixed(1);
      updateProgress(percent);
    },
    onSuccess: () => {
      console.log('Upload complete:', upload.url);
    },
    onError: (error) => {
      console.error('Upload failed:', error);
    }
  });

  // Check for a previous incomplete upload and resume it
  upload.findPreviousUploads().then(previousUploads => {
    if (previousUploads.length > 0) {
      upload.resumeFromPreviousUpload(previousUploads[0]);
    }
    upload.start();
  });
}
```

Server-side tus implementations: tus-node-server (Node.js), tusd (Go, the reference implementation), django-tus (Python). The tus protocol handles chunk reassembly, resume state, and expiration automatically.
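On the Node side, the server wiring is short. A sketch using the `@tus/server` and `@tus/file-store` packages from the tus-node-server project (option names are assumptions; check the current docs):

```javascript
import { Server } from '@tus/server';
import { FileStore } from '@tus/file-store';

// Store upload chunks and resume state on local disk
const tusServer = new Server({
  path: '/api/tus',
  datastore: new FileStore({ directory: './uploads' }),
});

// Hand every /api/tus request to the tus server (Express shown)
app.all('/api/tus', (req, res) => tusServer.handle(req, res));
app.all('/api/tus/*', (req, res) => tusServer.handle(req, res));
```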
Client-Side Preprocessing Before Upload
Resizing images before upload saves bandwidth and server processing. A 12MP phone photo is 4-8MB; resizing to 2000px max width brings it under 500KB.
```javascript
async function resizeImage(file, maxWidth = 2000, quality = 0.85) {
  return new Promise((resolve, reject) => {
    const img = new Image();
    const objectUrl = URL.createObjectURL(file);

    img.onload = () => {
      URL.revokeObjectURL(objectUrl); // Avoid leaking the object URL

      let { width, height } = img;
      // Only resize if larger than max
      if (width > maxWidth) {
        height = Math.round((height * maxWidth) / width);
        width = maxWidth;
      }

      const canvas = document.createElement('canvas');
      canvas.width = width;
      canvas.height = height;
      canvas.getContext('2d').drawImage(img, 0, 0, width, height);
      canvas.toBlob(resolve, 'image/jpeg', quality);
    };

    img.onerror = () => {
      URL.revokeObjectURL(objectUrl);
      reject(new Error(`Could not decode ${file.name}`));
    };

    img.src = objectUrl;
  });
}

// Usage: resize before upload
async function handleImageUpload(file) {
  const resized = await resizeImage(file, 2000, 0.85);
  uploadFile(new File([resized], file.name, { type: 'image/jpeg' }));
}
```

For format conversion before upload (e.g., HEIC to JPG for iPhone photos), use libraries like heic2any or leverage ChangeThisFile's client-side conversion engine.
Server-Side Security Checklist
- Verify file type by magic bytes — never trust the extension or Content-Type header
- Enforce size limits server-side — multer's `limits: { fileSize: 10 * 1024 * 1024 }`, nginx's `client_max_body_size`
- Rename uploaded files — use UUIDs or hashes, never the original filename (prevents path traversal attacks)
- Store outside the web root — uploaded files should never be directly accessible via URL unless intentionally public
- Scan for malware — ClamAV for server-side scanning, or cloud services (AWS Macie, Google Cloud DLP)
- Sanitize SVG uploads — SVGs can contain JavaScript via `<script>` tags. Use DOMPurify or a dedicated SVG sanitizer
- Strip EXIF metadata from images — EXIF can contain GPS coordinates, camera serial numbers, and other PII. Use sharp's `.rotate()` (auto-rotates then strips orientation EXIF) or explicit metadata removal
- Set Content-Disposition — serve uploaded files with `Content-Disposition: attachment` unless they're images/PDFs you want to display inline
- Set `X-Content-Type-Options: nosniff` — prevents browsers from MIME-sniffing uploaded files into executable types
File uploads touch every layer of your stack: frontend UX (drag-and-drop, progress), transport (chunking, presigned URLs), validation (magic bytes, size limits), storage (cloud, local), and security (malware scanning, path traversal). Cutting corners on any layer creates either a bad user experience or a security vulnerability.
Start with the basics: accept attribute for UX, magic byte verification for security, and size limits on both client and server. Then add presigned URLs when you need to scale, and chunked uploads when you need to handle large files. Each improvement is independent — you can implement them incrementally.