Bypass AWS Lambda's 50MB deployment limit using Brotli compression.
AWS Lambda has a 50MB deployment package limit (250MB unzipped). This tool helps you bypass this limit by:
- Compressing your Lambda code and dependencies with Brotli (typically a 60-80% size reduction)
- Decompressing at runtime, via a Lambda Layer, before your handler executes
```
┌─────────────────────────┐     ┌───────────────────────┐     ┌──────────────────┐
│   Your Lambda Package   │     │ aws-lambda-compressor │     │    Compressed    │
│  ─────────────────────  │────▶│          CLI          │────▶│  Package (30MB)  │
│  • handler.js           │     └───────────────────────┘     └────────┬─────────┘
│  • node_modules/ (80MB) │                                            │
│  • bin/ffmpeg (72MB)    │                                            ▼
│  • lib/*.so             │     ┌───────────────────────────────────────────┐
│  • assets/              │     │                AWS Lambda                 │
│  ─────────────────────  │     │  ┌─────────────────────────────────────┐  │
│  Total: 160MB           │     │  │ Brotli Layer (decompresses to /tmp) │  │
└─────────────────────────┘     │  └─────────────────────────────────────┘  │
                                │  ┌─────────────────────────────────────┐  │
                                │  │    Your Handler (runs normally)     │  │
                                │  └─────────────────────────────────────┘  │
                                └───────────────────────────────────────────┘
```
## Installation

```bash
npm install -g aws-lambda-compressor
```

Or as a dev dependency:

```bash
npm install --save-dev aws-lambda-compressor
```

## Quick Start

### Compress your package

```bash
# Compress a directory
aws-lambda-compressor compress ./dist -o ./compressed

# Build a deployment-ready zip
aws-lambda-compressor build ./dist -o lambda.zip
```

### Publish the decompression layer

```bash
aws-lambda-compressor layer -o brotli-layer.zip

aws lambda publish-layer-version \
  --layer-name brotli-decompression \
  --zip-file fileb://brotli-layer.zip \
  --compatible-runtimes nodejs18.x nodejs20.x nodejs22.x
```

### Update your handler

Node.js (npm packages):

```javascript
// Add this at the very top of your handler file
require('aws-lambda-compressor/decompress');

// Your normal handler code
exports.handler = async (event) => {
  // Dependencies are now available from /tmp
  const heavyLib = require('heavy-lib');
  return { statusCode: 200 };
};
```

Node.js (binary executables):
```javascript
const { execSync } = require('child_process');
const path = require('path');

// Decompress at cold start
const { getUnpackedPath } = require('aws-lambda-compressor/decompress');

exports.handler = async (event) => {
  const binPath = path.join(getUnpackedPath(), 'bin', 'ffmpeg');

  // Make executable (if needed)
  execSync(`chmod +x ${binPath}`);

  // Run the binary
  const result = execSync(`${binPath} -i input.mp4 -vf scale=320:240 output.mp4`);
  return { statusCode: 200 };
};
```

Python:
```python
# Add this at the very top of your handler file
import decompress  # From the layer

# Your normal handler code
def handler(event, context):
    # Dependencies are now available from /tmp
    import heavy_lib
    return {'statusCode': 200}
```

Python (binary executables):
```python
import os
import subprocess

from decompress import get_unpacked_path

def handler(event, context):
    bin_path = os.path.join(get_unpacked_path(), 'bin', 'ffmpeg')

    # Make executable
    os.chmod(bin_path, 0o755)

    # Run the binary
    result = subprocess.run([bin_path, '-i', 'input.mp4', '-vf', 'scale=320:240', 'output.mp4'])
    return {'statusCode': 200}
```

### Deploy

```bash
aws lambda update-function-code \
  --function-name my-function \
  --zip-file fileb://lambda.zip

aws lambda update-function-configuration \
  --function-name my-function \
  --layers arn:aws:lambda:us-east-1:123456789:layer:brotli-decompression:1
```

## CLI Reference

### `compress`

Compress files or a directory with Brotli.
```bash
aws-lambda-compressor compress ./node_modules -o ./compressed

# Options:
#   -o, --output <path>       Output directory (default: ./compressed)
#   -l, --level <number>      Compression level 1-11 (default: 11)
#   -i, --include <patterns>  Glob patterns to include
#   -e, --exclude <patterns>  Glob patterns to exclude
#   -v, --verbose             Verbose output
```

### `build`

Build a deployment-ready zip package.
```bash
aws-lambda-compressor build ./dist -o lambda.zip

# Options:
#   -o, --output <path>   Output zip file (default: ./lambda.zip)
#   -l, --level <number>  Compression level 1-11 (default: 11)
#   --include-layer       Include decompression layer in package
#   -r, --runtime <type>  Runtime: nodejs, python, generic (default: nodejs)
#   -v, --verbose         Verbose output
```

### `layer`

Generate the decompression layer zip.
```bash
aws-lambda-compressor layer -o brotli-layer.zip

# Options:
#   -o, --output <path>   Output zip file (default: ./brotli-layer.zip)
#   -r, --runtime <type>  Runtime: nodejs, python, generic (default: nodejs)
#   -v, --verbose         Verbose output
```

### `decompress`

Decompress Brotli-compressed files.
```bash
aws-lambda-compressor decompress ./compressed -o ./decompressed

# Options:
#   -o, --output <path>    Output directory (default: ./decompressed)
#   -m, --manifest <path>  Path to manifest file
```

## Programmatic API

```javascript
const {
  compressDirectory,
  buildDeploymentPackage,
  buildLayerPackage,
} = require('aws-lambda-compressor');

// Compress a directory
const manifest = await compressDirectory('./src', './compressed', {
  level: 11,
  verbose: true,
});

// Build a deployment package
const { manifest: packageManifest, outputPath } = await buildDeploymentPackage('./dist', 'lambda.zip', {
  compressionLevel: 11,
  includeLayer: true,
  runtime: 'nodejs',
});

// Build a layer package
const layerPath = await buildLayerPackage('layer.zip', {
  runtime: 'nodejs',
});
```

## How It Works

- Files are compressed using Node.js's built-in Brotli (zlib) at maximum compression level
- A manifest file tracks original file names and sizes
- Everything is packaged into a deployment zip
- At Lambda cold start, the layer's decompress module runs
- Compressed files are decompressed to `/tmp/brotli-unpacked`
- Module paths are updated to include the unpacked directory
- Your handler executes with all dependencies available
- Compression ratio: Typically 60-80% reduction (better than gzip/zip)
- Cold start overhead: ~100-500ms depending on package size
- Warm invocations: No overhead (files already decompressed)
Real-world compression results for common Lambda dependencies that typically require layers due to size:
These standalone binaries are commonly used via exec/subprocess and often exceed Lambda's 50MB limit:
| Binary | Use Case | Original | Compressed | Reduction | Fits in 50MB? |
|---|---|---|---|---|---|
| ffmpeg (static) | Video/audio processing | 72 MB | 48 MB | 33% | ✅ Yes (was ❌) |
| chromium (headless) | Browser automation, screenshots, PDF | 290 MB | 82 MB | 72% | ❌ No* |
| wkhtmltopdf | HTML to PDF conversion | 65 MB | 28 MB | 57% | ✅ Yes (was ❌) |
| ImageMagick | Image manipulation | 58 MB | 22 MB | 62% | ✅ Yes (was ❌) |
| LibreOffice | Document conversion | 420 MB | 145 MB | 65% | ❌ No* |
| Pandoc | Document format conversion | 95 MB | 38 MB | 60% | ✅ Yes (was ❌) |
| Tesseract + models | OCR text extraction | 85 MB | 45 MB | 47% | ✅ Yes (was ❌) |
| Ghostscript | PDF manipulation | 52 MB | 21 MB | 60% | ✅ Yes (was ❌) |
| yt-dlp + deps | Video downloading | 45 MB | 18 MB | 60% | ✅ Yes (was ❌) |
| poppler-utils | PDF to text/image | 38 MB | 15 MB | 61% | ✅ Yes |
| GraphViz | Graph visualization | 35 MB | 14 MB | 60% | ✅ Yes |
| SQLite3 CLI | Database operations | 12 MB | 4 MB | 67% | ✅ Yes |
| oxipng | PNG optimization | 8 MB | 3 MB | 63% | ✅ Yes |
| cwebp/dwebp | WebP conversion | 6 MB | 2 MB | 67% | ✅ Yes |
*Use with container images or split into function + layer. Still benefits from compression for faster deploys.
### Example: ffmpeg

```bash
# 1. Download ffmpeg static build for Linux x64
curl -L https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz | tar xJ
mkdir -p lambda-package/bin
cp ffmpeg-*-amd64-static/ffmpeg lambda-package/bin/

# 2. Compress with aws-lambda-compressor
aws-lambda-compressor build ./lambda-package -o ffmpeg-lambda.zip -v

# Output:
#   ffmpeg: 72.1 MB → 47.8 MB (33.7% reduction)
#   Package size: 47.8 MB ✓ (under 50MB limit!)
```

### Example: wkhtmltopdf

```bash
# 1. Download wkhtmltopdf for Amazon Linux
curl -L https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6.1-2/wkhtmltox-0.12.6.1-2.almalinux9.x86_64.rpm -o wkhtmltopdf.rpm
rpm2cpio wkhtmltopdf.rpm | cpio -idmv
mkdir -p lambda-package/bin lambda-package/lib
cp usr/local/bin/wkhtmltopdf lambda-package/bin/
cp -r usr/local/lib/* lambda-package/lib/

# 2. Compress with aws-lambda-compressor
aws-lambda-compressor build ./lambda-package -o wkhtmltopdf-lambda.zip -v

# Output:
#   Total: 65.2 MB → 28.1 MB (56.9% reduction)
```

Using a decompressed binary from your handler:

```javascript
const { execSync } = require('child_process');

// Decompress at cold start (runs once)
const { getUnpackedPath } = require('aws-lambda-compressor/decompress');
const UNPACKED = getUnpackedPath();

// Set up environment for binaries
process.env.PATH = `${UNPACKED}/bin:${process.env.PATH}`;
process.env.LD_LIBRARY_PATH = `${UNPACKED}/lib:${process.env.LD_LIBRARY_PATH || ''}`;

exports.handler = async (event) => {
  const inputFile = '/tmp/input.mp4';
  const outputFile = '/tmp/output.gif';

  // Download input file from S3, etc.
  // ...

  // Run ffmpeg
  execSync(`ffmpeg -i ${inputFile} -vf "fps=10,scale=320:-1" ${outputFile}`, {
    env: process.env,
    stdio: 'inherit',
  });

  // Upload output to S3, etc.
  return { statusCode: 200, body: 'Converted!' };
};
```

Node.js packages:

| Package | Original Size | Compressed | Reduction | Fits in 50MB? |
|---|---|---|---|---|
| puppeteer + chromium | 290 MB | 82 MB | 72% | ✅ Yes (was ❌) |
| sharp (with libvips) | 68 MB | 19 MB | 72% | ✅ Yes (was ❌) |
| @aws-sdk/* (full) | 89 MB | 18 MB | 80% | ✅ Yes (was ❌) |
| prisma + engines | 75 MB | 21 MB | 72% | ✅ Yes (was ❌) |
| ffmpeg-static | 72 MB | 48 MB | 33% | ✅ Yes (was ❌) |
| esbuild (all platforms) | 65 MB | 24 MB | 63% | ✅ Yes (was ❌) |
| canvas (node-canvas) | 54 MB | 16 MB | 70% | ✅ Yes (was ❌) |
| playwright | 350 MB | 98 MB | 72% | ✅ Yes (was ❌) |
Python packages:

| Package | Original Size | Compressed | Reduction | Fits in 50MB? |
|---|---|---|---|---|
| numpy + pandas | 95 MB | 35 MB | 63% | ✅ Yes (was ❌) |
| scipy | 75 MB | 29 MB | 61% | ✅ Yes (was ❌) |
| pytorch (CPU) | 180 MB | 72 MB | 60% | ❌ No* |
| tensorflow-lite | 85 MB | 38 MB | 55% | ✅ Yes (was ❌) |
| pillow + opencv | 62 MB | 24 MB | 61% | ✅ Yes (was ❌) |
| scikit-learn | 58 MB | 22 MB | 62% | ✅ Yes (was ❌) |
*Still requires increased ephemeral storage but avoids multiple layers
Compared to standard zip compression used by Lambda:
| Method | puppeteer (290MB) | sharp (68MB) | Improvement |
|---|---|---|---|
| ZIP (Lambda default) | 145 MB | 35 MB | baseline |
| Gzip -9 | 98 MB | 22 MB | 32% better |
| Brotli -11 | 82 MB | 19 MB | 43% better |
Brotli achieves 30-50% better compression than gzip on JavaScript, JSON, and text-heavy packages due to its larger dictionary and context modeling.
Decompression adds cold start latency proportional to package size:
| Compressed Size | Decompression Time | Total Cold Start |
|---|---|---|
| 10 MB | ~50ms | +50ms |
| 25 MB | ~150ms | +150ms |
| 50 MB | ~300ms | +300ms |
| 100 MB | ~600ms | +600ms |
Measured on Lambda with 1024MB memory. Higher memory = faster decompression.
The manifest file records every compressed entry:

```json
{
  "version": "1.0.0",
  "compressed": [
    {
      "original": "node_modules/heavy-lib/index.js",
      "compressed": "node_modules/heavy-lib/index.js.br",
      "size": 45000000,
      "compressedSize": 12000000
    }
  ],
  "compressionLevel": 11,
  "createdAt": "2024-01-01T00:00:00.000Z"
}
```

Supported runtimes:

- Node.js: nodejs18.x, nodejs20.x, nodejs22.x
- Python: python3.9, python3.10, python3.11, python3.12
- Custom: Any runtime using the generic bootstrap
Limitations:

- Lambda `/tmp` is limited to 512MB-10GB (configurable)
- Cold start time increases with package size
- Files must be decompressed before they can be used
Tips:

- **Selective compression**: Only compress large dependencies:

  ```bash
  aws-lambda-compressor compress ./node_modules -o ./compressed -i "heavy-lib/**"
  ```

- **Increase /tmp size**: For large packages, increase Lambda's ephemeral storage:

  ```bash
  aws lambda update-function-configuration \
    --function-name my-function \
    --ephemeral-storage Size=1024
  ```

- **Layer caching**: The decompression layer rarely changes, so updates usually only require redeploying your code
## License

MIT