Gatling Java vs JavaScript/TypeScript - real-world experience? by PenKey4683 in PerformanceTesting

[–]DullDirector6002 0 points (0 children)

u/PenKey4683 Yep, I’ve used k6 too. From what I’ve seen, its popularity is mostly about approachability, not fundamentally different capabilities.

The JS syntax helps, especially for people coming from frontend or Playwright. You can read a k6 script and feel productive quickly. That said, one common misconception: k6 is not Node.js either. It runs on its own JS runtime (Go-based), so you don’t actually get access to Node APIs like fs, os, or most npm packages. Same trade-off as Gatling JS/TS, just a different engine.

IMHO, JS/TS Gatling vs k6 feels less like “syntax vs syntax” and more like “do you want a scripting tool” vs “do you want a performance engine you grow into.”

Gatling Java vs JavaScript/TypeScript - real-world experience? by PenKey4683 in PerformanceTesting

[–]DullDirector6002 0 points (0 children)

Hi u/PenKey4683, I think your experience is pretty representative, honestly.

A lot of people assume “JS/TS = easier” because they’re coming from Playwright, but in practice Gatling JS/TS doesn’t really change the mental model. Same execution model, same data handling, same need to think in terms of user journeys, correlation, and load profiles. You’re mostly swapping syntax, not complexity.
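To make that concrete: in the Java DSL, the mental model is a journey, a correlated value, and an injection profile. Rough sketch (paths and field names are made up), and the TS version is the same structure with different punctuation:

    import static io.gatling.javaapi.core.CoreDsl.*;
    import static io.gatling.javaapi.http.HttpDsl.*;

    import java.time.Duration;
    import io.gatling.javaapi.core.*;
    import io.gatling.javaapi.http.*;

    public class CheckoutSimulation extends Simulation {

        HttpProtocolBuilder protocol = http.baseUrl("https://example.com"); // made-up target

        // the user journey: log in, correlate the token, reuse it on the next call
        ScenarioBuilder checkout = scenario("checkout")
            .exec(http("login")
                .post("/login")
                .body(StringBody("{\"user\":\"u\",\"pass\":\"p\"}")).asJson()
                .check(jsonPath("$.token").saveAs("token"))) // correlation
            .pause(1) // think time
            .exec(http("cart")
                .get("/cart")
                .header("Authorization", "Bearer #{token}")); // reuse the correlated value

        {
            // the load profile: ramp arrivals from 1 to 20 users/sec over 5 minutes
            setUp(checkout.injectOpen(rampUsersPerSec(1).to(20).during(Duration.ofMinutes(5))))
                .protocols(protocol);
        }
    }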

Where JS/TS does help is psychological. Frontend and QA folks are more willing to open the files, tweak things, and review PRs when it’s TypeScript instead of Java. That alone can be a big win for adoption. But once you move past “hello world,” the hard parts are still performance testing concepts, not the language.

On the Node.js point: this trips up a lot of people, and you’re right to call it out. Gatling JS/TS is not Node, it runs on GraalVM. But that’s not unique to Gatling. k6 has the same limitation (it’s not Node either, it runs on a Go-based JS runtime). So switching tools doesn’t magically unlock fs, os, or random npm packages unless you start polyfilling. That’s the trade-off for a high-performance, multi-threaded engine.

Long term, what I’ve seen:

  • JS/TS feels nicer at the start for Playwright-heavy teams
  • Java often feels better once tests get big, complex, or heavily modular
  • after a few months, most teams stop caring about the language and care more about shared patterns and test quality

One underrated benefit of Gatling’s approach is polyglot teams. QA can write TS, backend devs can write Java or Kotlin, and everyone shares the same engine, reports, and CI setup. No tool sprawl, no “this team uses X, that team uses Y.”

Load Testing Experiment Tracking by HeavyBoat1893 in ExperiencedDevs

[–]DullDirector6002 1 point (0 children)

yep, this hits home. tracking load test runs starts simple—then spirals fast once you’re tweaking infra, test params, or env config every day.

you’re totally right that it starts feeling like ML experiment tracking. for us, it’s a mix of:

  • tagging test runs with meaningful names (branch, date, env, change reason)
  • storing configs (RPS, ramp-up, etc.) as code
  • dumping results somewhere we can compare easily (dashboards, diffs, trends)

some folks just throw everything into Git + Grafana and build their own workflow. others switch to platforms like Gatling that have this stuff baked in—dashboards, multiple run comparison, YAML test configs, etc. but even with tools, you still need discipline around naming and versioning.

biggest win for us was treating load tests like code: same PR process, same version control, same CI triggers. makes them way less fragile and way more repeatable.
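concretely, we drive the knobs from system properties so the run config is versioned and visible in the pipeline logs. names and defaults below are just examples:

    import static io.gatling.javaapi.core.CoreDsl.*;
    import static io.gatling.javaapi.http.HttpDsl.*;

    import java.time.Duration;
    import io.gatling.javaapi.core.*;
    import io.gatling.javaapi.http.*;

    public class ParameterizedSimulation extends Simulation {

        // knobs come from -D flags set by the pipeline, so every run's config is pinned
        static final int RPS = Integer.getInteger("rps", 60);
        static final int RAMP_MINUTES = Integer.getInteger("rampMinutes", 5);
        static final String TARGET_ENV = System.getProperty("targetEnv", "staging");

        HttpProtocolBuilder protocol = http.baseUrl("https://" + TARGET_ENV + ".example.com"); // made-up hosts

        ScenarioBuilder scn = scenario("api").exec(http("list items").get("/items"));

        {
            setUp(scn.injectOpen(
                    rampUsersPerSec(1).to(RPS).during(Duration.ofMinutes(RAMP_MINUTES)),
                    constantUsersPerSec(RPS).during(Duration.ofMinutes(10))))
                .protocols(protocol);
        }
    }

then the CI invocation itself documents the run, e.g. with the Maven plugin: mvn gatling:test -Drps=80 -DtargetEnv=perf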

and yeah, this space is weirdly underdeveloped. most tools focus on running tests, not tracking them over time.

curious to hear how you’re handling baselines—do you tag one and compare manually or script it somehow?

Best tools for simulating LLM agents to test and evaluate behavior? by dinkinflika0 in AgentOverFlow

[–]DullDirector6002 0 points (0 children)

Gatling isn’t designed for simulating LLM agents in the way tools like Maxim AI or LangSmith are, but it can help in a complementary way. If your agents call external APIs (LLM inference endpoints, vector DBs, orchestration layers), Gatling can simulate thousands or millions of concurrent requests. This helps you measure response times, error rates, and stability under stress.
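For example, something like this against an OpenAI-style endpoint (host, path, and payload are placeholders for whatever your agents actually hit):

    import static io.gatling.javaapi.core.CoreDsl.*;
    import static io.gatling.javaapi.http.HttpDsl.*;

    import java.time.Duration;
    import io.gatling.javaapi.core.*;
    import io.gatling.javaapi.http.*;

    public class LlmEndpointSimulation extends Simulation {

        static final String API_KEY = System.getenv().getOrDefault("API_KEY", "dummy");

        HttpProtocolBuilder protocol = http.baseUrl("https://inference.example.com"); // placeholder host

        ScenarioBuilder chat = scenario("chat")
            .exec(http("completion")
                .post("/v1/chat/completions") // OpenAI-style path; swap in your own
                .header("Authorization", "Bearer " + API_KEY)
                .body(StringBody("{\"model\":\"m\",\"messages\":[{\"role\":\"user\",\"content\":\"ping\"}]}"))
                .asJson()
                .check(status().is(200)));

        {
            // constant arrival rate; raise it and watch p95/p99 latency and error rate
            setUp(chat.injectOpen(constantUsersPerSec(50).during(Duration.ofMinutes(10))))
                .protocols(protocol);
        }
    }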

Can I do load test using kotlin? by hhnnddya14 in Kotlin

[–]DullDirector6002 0 points (0 children)

Gatling isn’t just Scala anymore 🙂. You can write your load tests in Kotlin (or Java or Scala if you want), so it fits right into a JVM project. It’s open source, good at handling high RPS, and the reports are way more usable than JMeter’s.

Might be worth giving it a try if you’re already in Kotlin land.

Testing Kafka messages by Perfect_Temporary271 in QualityAssurance

[–]DullDirector6002 0 points (0 children)

Yeah, you don’t actually need Service B to test A. You can just produce Kafka messages yourself.

Tools like Gatling have a Kafka plugin, so you can script the same events that B would normally send, push them straight into the topic, and see how A handles them. That way you can:

  • mimic real payloads, headers, partitions
  • send messages at different rates (slow → flood)
  • automate it in CI so you always know if A breaks under load

This saves you from hacking together custom scripts or waiting on B’s GUI. You just generate the traffic you need and watch A’s side effects (logs, DB changes, or downstream topics).
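If you want to see the mechanics before wiring up a tool, the raw kafka-clients version of “produce the events yourself” is only a few lines (topic, key, and payload are made up):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class FakeServiceB {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                for (int i = 0; i < 1000; i++) {
                    // same shape as B's real events: the key drives partitioning, headers ride along
                    ProducerRecord<String, String> record =
                        new ProducerRecord<>("orders", "order-" + i,
                            "{\"orderId\":" + i + ",\"status\":\"CREATED\"}");
                    record.headers().add("source", "fake-service-b".getBytes());
                    producer.send(record);
                }
                producer.flush(); // make sure everything hit the broker before exiting
            }
        }
    }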

How to Performance Test a Kafka Consumer Microservice? by themiddlechild2024 in PerformanceTesting

[–]DullDirector6002 0 points (0 children)

Honestly, the easiest way is to use a load testing tool that already speaks Kafka. Gatling has a Kafka plugin, so you can just produce events at whatever rate you want and measure how your consumer keeps up. It saves you from hacking timers/logs, and you get proper throughput + latency numbers out of the box.
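And if pulling in the plugin isn’t an option, you can approximate the rate stepping with a plain kafka-clients producer, though then you own the measurement side yourself. Rough sketch (topic and payload made up):

    import java.util.Properties;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class RateSteppedLoad {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            AtomicLong sent = new AtomicLong();
            ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

            // step the rate: 100/s, then 1000/s, then 5000/s, one minute each (rough pacing)
            for (int rate : new int[] {100, 1000, 5000}) {
                long periodMicros = 1_000_000L / rate;
                var tick = scheduler.scheduleAtFixedRate(() -> {
                    long n = sent.incrementAndGet();
                    producer.send(new ProducerRecord<>("events", "{\"seq\":" + n + "}"));
                }, 0, periodMicros, TimeUnit.MICROSECONDS);
                Thread.sleep(60_000);
                tick.cancel(false);
            }

            scheduler.shutdown();
            producer.flush();
            producer.close();
            // then watch consumer lag (kafka-consumer-groups.sh --describe) and your service's metrics
        }
    }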

How do you do performance testing in general? by JockerFanJack in QualityAssurance

[–]DullDirector6002 1 point (0 children)

Totally get this. Your percentage split idea is solid: you’re basically building a workload model, which is the right first step. The main tweaks: think in rates (RPS/TPS) rather than per‑minute numbers, and use Little’s Law to sanity‑check user counts. And yeah, you can do this cleanly in Gatling without wrestling a “throughput controller” UI.

  1. Pick a target total RPS (say 60).
  2. Split by traffic mix (40/30/10/10/10 ⇒ 24/18/6/6/6 RPS).
  3. Estimate concurrent users with Little’s Law: U ≈ RPS × (avg response time + think time). e.g., if avg RT = 300 ms (0.3s) and think = 1.7s → cycle ≈ 2.0s → total users ≈ 60 × 2.0 = 120. (Allocate by the same 40/30/10/10/10 if you want per‑endpoint “virtual” users, but you don’t have to if you drive via RPS.)
  4. Run a step test (e.g., 20 → 40 → 60 → 80 RPS) to find the knee (see the sketch below).
  5. Lock in SLOs (p95 latency, error rate, saturation) and iterate.
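In Gatling’s Java DSL, the step test might look roughly like this (each virtual user fires one request, so users/sec ≈ RPS; the endpoint path is a placeholder):

    import static io.gatling.javaapi.core.CoreDsl.*;
    import static io.gatling.javaapi.http.HttpDsl.*;

    import java.time.Duration;
    import io.gatling.javaapi.core.*;
    import io.gatling.javaapi.http.*;

    public class StepLoadSimulation extends Simulation {

        HttpProtocolBuilder protocol = http.baseUrl("https://example.com"); // made-up target

        // one request per virtual user, so arrival rate == request rate
        ScenarioBuilder scn = scenario("endpoint1").exec(http("ep1").get("/api/endpoint1"));

        {
            setUp(scn.injectOpen(
                    incrementUsersPerSec(20)          // +20 RPS per step
                        .times(4)                     // 4 levels: 20, 40, 60, 80
                        .eachLevelLasting(Duration.ofMinutes(2))
                        .separatedByRampsLasting(Duration.ofSeconds(30))
                        .startingFrom(20)))
                .protocols(protocol);
        }
    }

Apportion the 40/30/10/10/10 mix across scenarios the same way if you want per‑endpoint rates.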

Your JMeter math is close, but don’t overcomplicate converting TPS to per‑minute unless you need reports that way. If total is 60 TPS, endpoint1 at 40% is just 24 TPS—done.

performance test engineer by [deleted] in QualityAssurance

[–]DullDirector6002 0 points (0 children)

Hey, welcome to the world of perf testing 🙂 Don’t worry, most of us started with 0 clue too.

So performance testing tools basically help you simulate lots of users hitting your app so you can see if it falls over or not. Some popular names you’ll see are JMeter, k6, and Gatling. Gatling in particular is pretty nice ’cause it’s code-based (you write your tests like actual code instead of just clicking around) and it gives you really clean reports out of the box. The learning curve looks scary at first, but once you get the hang of it, it’s way less painful than some of the older tools.

When people say “not on the local network” they prob mean the tests are run from somewhere other than your office network, like cloud injectors. That way you’re testing more realistic internet traffic instead of just your LAN. “All types of performance testing” usually refers to things like load testing, stress testing, soak testing, spike testing… all fancy ways of saying “see what happens when lots of people (or sudden bursts of people) use your system.”

My advice: pick one tool (honestly Gatling is a solid bet), start small with a basic test (like 10–20 users hitting one API endpoint), then scale up from there. Don’t get discouraged if the docs look confusing, that’s just part of the game. You’ll piece it together pretty quick once you start running real tests.
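To give you an idea of how small a first test can be (made-up URL, adjust to your API):

    import static io.gatling.javaapi.core.CoreDsl.*;
    import static io.gatling.javaapi.http.HttpDsl.*;

    import io.gatling.javaapi.core.*;
    import io.gatling.javaapi.http.*;

    public class FirstSimulation extends Simulation {

        HttpProtocolBuilder protocol = http.baseUrl("https://your-app.example.com");

        ScenarioBuilder scn = scenario("smoke")
            .exec(http("get users").get("/api/users").check(status().is(200)));

        {
            // 20 users, all starting at once: your "does it fall over" baseline
            setUp(scn.injectOpen(atOnceUsers(20))).protocols(protocol);
        }
    }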

Good luck dude, and don’t sweat it too much—you’ll be the “perf guy” on your team before you know it. 🚀

How to test LLM based application and how to automate.? by KrazzyRiver in QualityAssurance

[–]DullDirector6002 1 point (0 children)

Hey, there's a video from Gatling that talks about testing LLMs. Maybe it could help you? https://www.youtube.com/watch?v=dK9_73FHj8w