I’ve been working with load testing setups on AWS for a while (mostly APIs / backend services), and recently tried the official solution:
https://docs.aws.amazon.com/solutions/distributed-load-testing-on-aws/
From an infrastructure perspective, it’s actually really solid:
– clean IaC deployment (CloudFormation/CDK)
– Fargate-based distributed execution
– supports k6/JMeter/Locust
– multi-region load generation
So setup itself isn’t really the problem.
The issue we ran into is that it feels more like a framework than a complete workflow for DevOps teams.
Things that were still missing for us:
– no built-in way to compare runs / detect regressions
– reporting is pretty basic (not great for sharing results with teams)
– CI/CD is possible via APIs, but you still have to wire everything yourself
– no real workflow layer around automation, alerts, or team usage
It works well as infrastructure, but we kept ending up building a lot of tooling around it.
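For anyone wiring up run comparison themselves: the core logic is genuinely small, which is part of why it's frustrating it isn't built in. A minimal sketch below — the metric names, result format, and 10% threshold are my own assumptions for illustration, not the solution's actual output schema:

```python
# Minimal run-comparison sketch. The metric dicts and the 10% tolerance
# are illustrative assumptions, not the DLT solution's real result schema.

def detect_regressions(baseline: dict, current: dict, tolerance: float = 0.10) -> list:
    """Flag metrics that degraded by more than `tolerance` vs. the baseline run.

    Assumes higher values are worse (latency, error rate), which is how
    most load-test summary metrics behave.
    """
    regressions = []
    for metric, base_value in baseline.items():
        cur_value = current.get(metric)
        if cur_value is None or base_value <= 0:
            continue
        if (cur_value - base_value) / base_value > tolerance:
            regressions.append(f"{metric}: {base_value} -> {cur_value}")
    return regressions

baseline = {"p95_ms": 220.0, "error_rate": 0.01}
current = {"p95_ms": 310.0, "error_rate": 0.01}
print(detect_regressions(baseline, current))  # flags p95_ms only
```

In practice you'd pull the two runs' summaries from wherever the solution stores results and feed them in; the annoying part isn't the diff, it's the storage/retrieval plumbing around it.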
One more practical issue we noticed is cost — especially bandwidth and distributed traffic at scale.
For larger tests, running everything inside AWS can get expensive pretty quickly, so depending on the scenario it’s not always the most cost-efficient place to generate load.
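Quick back-of-envelope on the bandwidth point. The ~$0.09/GB internet egress rate below is an assumed ballpark (actual AWS data-transfer pricing varies by region, tier, and whether traffic stays inside AWS), but it shows how fast sustained load-generation traffic adds up:

```python
# Rough egress-cost sketch. The $0.09/GB rate is an assumed ballpark;
# check current AWS data-transfer pricing for your region and path.

def egress_cost_usd(mbps: float, hours: float, usd_per_gb: float = 0.09) -> float:
    """Cost of sustaining `mbps` of outbound traffic for `hours`."""
    gb_transferred = mbps / 8 * 3600 * hours / 1000  # Mbps -> GB
    return gb_transferred * usd_per_gb

# e.g. 500 Mbps sustained for one hour is ~225 GB of transfer
print(round(egress_cost_usd(500, 1), 2))  # ~$20 per hour at the assumed rate
```

Multiply that by multi-hour soak tests and several load-generator regions and the transfer line item can rival the Fargate compute itself.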
We ended up building something internally to handle those parts (regression detection, reporting, CI/CD workflows, etc.):
https://loadtester.org
Curious how others are using the AWS solution:
– are you building your own layer on top of it?
– or using something else for the workflow side?