## What’s Measured
The benchmark tests end-to-end webhook flow execution: an HTTP request hits a Catch Webhook trigger, runs a Code action, and returns a response via Return Response. This measures the full request lifecycle, including routing, execution, and response delivery.

## Results
| Configuration | Throughput (req/s) | Mean Latency | P50 | P99 |
|---|---|---|---|---|
| 1 App, 2 Workers | 95.1 | 20.8 ms | 15.4 ms | 62.7 ms |
| 2 Apps, 4 Workers | 167.6 | 23.4 ms | 20.0 ms | 65.1 ms |
| 3 Apps, 6 Workers | 231.8 | 24.8 ms | 21.2 ms | 71.5 ms |
| 1 App, 4 Workers | 174.5 | 22.4 ms | 19.0 ms | 56.7 ms |
| 2 Apps, 8 Workers | 286.5 | 26.0 ms | 22.3 ms | 89.5 ms |
| 3 Apps, 12 Workers | 336.6 | 35.1 ms | 34.8 ms | 51.5 ms |
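The summary statistics in the table above (mean, P50, P99) can be reproduced from raw per-request latency samples with a short sketch like the following. The sample data here is illustrative, not the benchmark's actual output.

```python
import statistics

def summarize(latencies_ms):
    """Return mean, P50, and P99 latency from a list of
    per-request latencies in milliseconds."""
    ordered = sorted(latencies_ms)
    # quantiles with n=100 yields 99 cut points for P1..P99
    cuts = statistics.quantiles(ordered, n=100)
    return {
        "mean": statistics.fmean(ordered),
        "p50": statistics.median(ordered),
        "p99": cuts[98],  # 99th percentile
    }

# Illustrative samples only
samples = [12.1, 15.4, 14.9, 20.8, 18.2, 62.7, 16.0, 15.1, 19.7, 13.3]
stats = summarize(samples)
print({k: round(v, 1) for k, v in stats.items()})
```

Note that P99 is sensitive to sample size: with only 500 requests per configuration, it is determined by roughly the five slowest requests.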
## Test Environment
- Runner: `depot-ubuntu-24.04-16` (16 cores)
- App container: 2 CPU / 4 GB memory
- Worker container: 0.5 CPU / 1 GB memory
- Execution mode: `SANDBOX_CODE_ONLY`
- Rate limiter: disabled (`AP_PROJECT_RATE_LIMITER_ENABLED=false`)
- Total requests: 500 per configuration
## How to Reproduce

### Via GitHub Actions UI
- Go to the Actions tab in the repository
- Select the Benchmark workflow
- Click Run workflow
- Optionally adjust `total_requests` (default: 500)
- All 6 configurations (1:2 and 1:4 worker-to-app ratios at 1x/2x/3x scale) run in parallel automatically
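If you prefer the command line, the same workflow can also be dispatched with the GitHub CLI. The workflow file name `benchmark.yml` below is an assumption; check `.github/workflows/` in the repository for the actual file name.

```shell
# Dispatch the benchmark workflow with a custom request count.
# "benchmark.yml" is an assumed file name -- verify it in
# .github/workflows/ before running.
gh workflow run benchmark.yml -f total_requests=500

# Follow the most recent run of that workflow until it finishes
gh run watch "$(gh run list --workflow benchmark.yml --limit 1 \
  --json databaseId --jq '.[0].databaseId')"
```

Both commands require an authenticated `gh` session with access to the repository.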