Puppeteer Lambda Cold Start
Why it takes 8-15 seconds and what you can do about it
The Problem
Your Lambda function works fine locally, but in production the first request after idle time takes 8-15 seconds:
// Typical cold start timeline:
// 0.0s - Lambda container starts
// 0.5s - Node.js runtime initializes
// 1.5s - Your code loads, Puppeteer imports
// 3.0s - browser.launch() called
// 8-12s - Chromium binary extracts and starts
// 12-15s - First page.goto() begins
//
// User has been waiting 15 seconds for a screenshot...
Why Cold Starts Are So Slow
Chromium Extraction
The compressed Chromium binary must be extracted to /tmp on every cold start. This alone takes 3-5 seconds.
Process Spawning
browser.launch() spawns multiple Chromium processes, and Lambda's container filesystem is slower than a real server's.
Memory Pressure
Chromium needs 500MB-1GB of RAM, and Lambda ties CPU allocation to memory. An under-provisioned function is slower at everything.
No Shared State
Each Lambda instance starts fresh. No warm browser pool. Every cold start pays the full initialization cost.
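If you want to see where the time goes in your own function, log a timestamp after each phase. Here's a rough sketch assuming the usual puppeteer-core + @sparticuz/chromium setup; the handler shape, event fields, and log labels are only illustrative:
// Cold-start timing sketch (assumes puppeteer-core + @sparticuz/chromium)
const chromium = require("@sparticuz/chromium");
const puppeteer = require("puppeteer-core");

exports.handler = async (event) => {
  const t0 = Date.now();

  // Extracts the compressed Chromium binary to /tmp on a cold start
  const executablePath = await chromium.executablePath();
  console.log(`chromium extracted: ${Date.now() - t0}ms`);

  // Spawns the Chromium processes
  const browser = await puppeteer.launch({
    args: chromium.args,
    executablePath,
    headless: chromium.headless,
  });
  console.log(`browser launched: ${Date.now() - t0}ms`);

  const page = await browser.newPage();
  await page.goto(event.url);
  console.log(`first goto done: ${Date.now() - t0}ms`);

  const screenshot = await page.screenshot();
  await browser.close();
  return {
    statusCode: 200,
    isBase64Encoded: true,
    body: Buffer.from(screenshot).toString("base64"),
  };
};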
Common Mitigation Attempts
1. Provisioned Concurrency
Keep Lambda instances warm to avoid cold starts.
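Enabling it is one SDK call (or a line in your IaC tool). A sketch using @aws-sdk/client-lambda, where the function name and alias are placeholders:
// Sketch: keep 5 instances of a placeholder "screenshot-fn" alias warm
const { LambdaClient, PutProvisionedConcurrencyConfigCommand } = require("@aws-sdk/client-lambda");

const lambda = new LambdaClient({});
await lambda.send(new PutProvisionedConcurrencyConfigCommand({
  FunctionName: "screenshot-fn",      // placeholder
  Qualifier: "live",                  // provisioned concurrency targets a version or alias
  ProvisionedConcurrentExecutions: 5,
}));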
# Cost for 5 provisioned instances:
# $0.000004646 per GB-second
# 1GB * 5 instances * 86400 seconds/day * 30 days
# = ~$60/month just to keep Lambdas warm
# + actual execution costs on top
- Expensive for sporadic workloads
- Still get cold starts if traffic exceeds provisioned capacity
- Paying 24/7 for something you might use a few times per hour
2. Keep-Warm Pings
CloudWatch scheduled events to ping the Lambda every 5 minutes.
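The handler side is simple: detect the ping and return before touching Chrome. A sketch, where keepWarm is just whatever marker you choose to put in the scheduled event's payload:
// Keep-warm handler sketch: the scheduled rule sends { "keepWarm": true }
// (the flag name is arbitrary; any marker in the event payload works)
exports.handler = async (event) => {
  // Short-circuit the scheduled ping so it never launches Chrome
  if (event.keepWarm) {
    return { statusCode: 200, body: "still warm" };
  }

  // ...normal screenshot path goes here (browser launch, page.goto, etc.)
};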
- Only keeps one instance warm (one concurrent request)
- Second concurrent request still gets cold start
- Adds complexity and cost
3. Increase Memory
More memory = more CPU = faster initialization.
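Bumping memory is a one-line configuration change. A sketch with @aws-sdk/client-lambda and a placeholder function name:
// Sketch: raise the function to 3008 MB (placeholder function name)
const { LambdaClient, UpdateFunctionConfigurationCommand } = require("@aws-sdk/client-lambda");

const lambda = new LambdaClient({});
await lambda.send(new UpdateFunctionConfigurationCommand({
  FunctionName: "screenshot-fn",
  MemorySize: 3008,
}));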
# Memory vs approximate cold start:
# 1024 MB: 12-15 seconds
# 2048 MB: 8-12 seconds
# 3008 MB: 6-10 seconds
#
# Cost increases linearly with memory
- Helps but doesn't solve the fundamental issue
- Still 6+ seconds at maximum memory
- 2-3x cost increase for marginal improvement
Alternative: No Cold Start at All
Instead of optimizing Lambda cold starts, avoid them entirely by not running Chrome in Lambda:
// Before: 8-15 second cold starts
// (imports assume puppeteer-core with the @sparticuz/chromium Lambda build)
const chromium = require("@sparticuz/chromium");
const puppeteer = require("puppeteer-core");
const browser = await puppeteer.launch({
args: chromium.args,
executablePath: await chromium.executablePath(),
});
const page = await browser.newPage();
await page.goto(url);
const screenshot = await page.screenshot();
// After: Consistent 3-4 seconds, no cold start
const response = await fetch("https://api.riddledc.com/v1/run", {
method: "POST",
headers: {
"Authorization": `Bearer ${process.env.RIDDLE_API_KEY}`,
"Content-Type": "application/json"
},
body: JSON.stringify({ url })
});
const screenshot = await response.arrayBuffer();
When Self-Hosting Still Makes Sense
Self-hosted Puppeteer on Lambda can work if:
- You have consistent, high-volume traffic (warm instances stay warm)
- You can afford provisioned concurrency costs
- You need features not available via API (custom Chrome extensions, specific versions)
- Compliance requires running in your own AWS account
For sporadic workloads, prototypes, or when you just need screenshots without infrastructure overhead, an API is simpler.
Skip the Cold Start Problem
Get consistent 3-4 second screenshots without managing Chrome.