- Rust on arm64 completed CPU-intensive tasks up to 5 times faster than x86
- Arm64 reduces cold start latency across all runtimes by up to 24%
- Python 3.11 on arm64 outperforms newer versions in memory-intensive workloads
Benchmarking AWS Lambda this year shows that the arm64 architecture consistently outperforms x86 across most workloads.
Tests covered CPU-intensive, memory-intensive, and lightweight workloads across Node.js, Python, and Rust runtimes.
For CPU-bound tasks, Rust on arm64 completed SHA-256 hashing loops 4-5x faster than x86 Rust when architecture-specific assembly optimizations came into play.
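As a rough illustration of this kind of CPU-bound test, a chained SHA-256 loop like the one below exercises raw hashing throughput. This is a hypothetical Python sketch, not the benchmark's actual harness; the payload size and iteration counts are assumptions:

```python
import hashlib
import time

def sha256_loop(iterations: int, payload: bytes = b"x" * 1024) -> float:
    """Chain SHA-256 over a 1 KiB payload and return elapsed seconds."""
    digest = payload
    start = time.perf_counter()
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()  # feed each digest back in
    return time.perf_counter() - start

# Deploying the same loop to arm64 and x86 functions and comparing elapsed
# times yields a simple per-architecture throughput ratio.
```

Rust builds that enable architecture-specific hashing optimizations widen this gap further, which is where the 4-5x figure above comes from.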
Python 3.11 on arm64 also outperformed newer Python versions in these memory-intensive tests, while Node.js 22 ran significantly faster than Node.js 20 on x86.
These results show that arm64 not only improves raw computing performance, but also maintains consistency under varying memory configurations.
Cold start and warm start efficiency
Cold-start latency plays a critical role in serverless applications, and arm64 delivers clear improvements over x86.
Across all runtimes, arm64 delivered 13–24% faster cold boot initialization.
Rust in particular recorded almost imperceptible cold start times of 16ms, making it suitable for latency-sensitive applications.
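Cold starts can be observed from inside a handler because module-scope code runs once per execution environment. The sketch below is an illustrative pattern, not the benchmark's measurement method; the handler name and fields are assumptions:

```python
import time

# Module scope runs once per execution environment, i.e. on a cold start.
_INIT_START = time.perf_counter()
# ...heavy imports and client setup would normally happen here...
INIT_MS = (time.perf_counter() - _INIT_START) * 1000.0

_seen_invocation = False

def handler(event, context=None):
    """Report whether this invocation paid the cold-start cost."""
    global _seen_invocation
    cold = not _seen_invocation
    _seen_invocation = True
    return {"cold_start": cold, "init_ms": INIT_MS}
```

The first invocation in each environment reports `cold_start: True`; subsequent warm invocations skip the initialization entirely.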
Warm-start performance also favored arm64, and memory-intensive workloads benefited from the architecture’s ability to handle larger memory allocations more efficiently.
Python and Node.js showed slightly more variability, although the gains from arm64 remained.
These performance improvements are amplified in production environments where frequent cold starts occur.
The cost analysis shows that arm64 delivers an average of 30% lower computational cost compared to x86.
For memory-heavy workloads, cost savings reached up to 42%, especially for Node.js and Rust.
Light workloads, which rely heavily on I/O latency rather than raw computation, showed minimal performance differences between architectures.
In these I/O-bound scenarios, choosing arm64 becomes a pure cost decision rather than a performance one.
Across CPU-intensive and memory-intensive workloads, arm64 delivered stronger cost-to-performance ratios, confirming its value in production deployments.
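Lambda compute cost scales with memory times duration (GB-seconds), so a faster architecture at a lower per-GB-second rate compounds. A back-of-the-envelope sketch, using illustrative numbers rather than current AWS prices:

```python
def compute_cost(duration_ms: float, memory_mb: float, price_per_gb_s: float) -> float:
    """Lambda compute charge: memory (GB) x duration (s) x unit price."""
    return (memory_mb / 1024) * (duration_ms / 1000) * price_per_gb_s

# Illustrative assumption: arm64 runs 20% faster and is priced ~20% lower
# per GB-second than x86 (check current AWS pricing for real rates).
x86_cost = compute_cost(duration_ms=1000, memory_mb=1024, price_per_gb_s=1.00)
arm_cost = compute_cost(duration_ms=800, memory_mb=1024, price_per_gb_s=0.80)
saving = 1 - arm_cost / x86_cost  # 0.8 * 0.8 = 0.64, i.e. 36% cheaper
```

Under these assumed numbers the two discounts multiply to a roughly 36% saving, which is in the same range as the averages reported above.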
These benchmarks indicate that arm64 should be the default CPU target for most Lambda workloads unless specific library compatibility issues arise.
Rust workloads on arm64 maximize both performance and cost savings, while Python 3.11 and Node.js 22 provide solid alternatives for other use cases.
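Targeting arm64 is a one-line configuration change rather than a code change. With boto3, for example, the CPU target is selected via the `Architectures` field of `create_function`; the sketch below builds the arguments only, and the names and values are hypothetical:

```python
# Hypothetical function config; Architectures selects the CPU target
# (Lambda defaults to ["x86_64"] when the field is omitted).
def arm64_function_config(name: str, role_arn: str, zip_bytes: bytes) -> dict:
    return {
        "FunctionName": name,
        "Runtime": "python3.11",
        "Role": role_arn,
        "Handler": "app.handler",
        "Code": {"ZipFile": zip_bytes},
        "Architectures": ["arm64"],
    }

# In a real deployment these kwargs would feed:
#   boto3.client("lambda").create_function(**arm64_function_config(...))
```

Note that any native dependencies bundled in the deployment package must also be built for arm64, which is where the library-compatibility caveat above applies.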
Organizations that rely on Lambda for enterprise-scale applications or run multiple functions in a single data center are likely to see clear efficiency gains.
From a workstation perspective, the results suggest that developers compiling locally for CPU-intensive workloads can also benefit from arm64-native builds.
Although these benchmarks are comprehensive, individual workloads and dependency configurations may lead to different results, so further testing is advisable before full-scale adoption.
Via Chris Ebert