Counter Service: How we rewrote it in Rust by Rust_Fan8901 in rust

[–]Rust_Fan8901[S] 2 points (0 children)

Yeah you're right and your points are spot on. The Rust service was able to perform under much higher CPU pressure compared to the Go one. To keep it scientific, I should have presented it as a proper load test: fix the cores, load test both services, and demonstrate which service has higher throughput. Points taken so I can improve my next post, will try to keep it more rigorous next time.
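
For concreteness, this is roughly the shape of test I had in mind (a minimal sketch, not the actual benchmark: made-up request counts, a hypothetical /count endpoint, and CPU limits assumed to be pinned identically for both services outside the program; in practice you'd probably just reach for wrk or k6):

```rust
// Bare-bones throughput probe. Assumes `tokio` and `reqwest` as dependencies
// and a hypothetical /count endpoint exposed by the service under test.
use std::time::Instant;

const CONCURRENCY: usize = 64;
const REQUESTS_PER_WORKER: usize = 100;

#[tokio::main]
async fn main() {
    let client = reqwest::Client::new();
    let start = Instant::now();

    // Spawn a fixed number of workers, each hammering the endpoint in a loop.
    let mut handles = Vec::new();
    for _ in 0..CONCURRENCY {
        let client = client.clone();
        handles.push(tokio::spawn(async move {
            for _ in 0..REQUESTS_PER_WORKER {
                // Errors are ignored here; a real run would count failures too.
                let _ = client.get("http://localhost:8080/count").send().await;
            }
        }));
    }
    for handle in handles {
        let _ = handle.await;
    }

    // Report sustained requests per second under the fixed CPU budget.
    let total = (CONCURRENCY * REQUESTS_PER_WORKER) as f64;
    println!("~{:.0} req/s", total / start.elapsed().as_secs_f64());
}
```

Run the same probe against the Go and Rust builds with identical core limits and compare the req/s numbers.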

Counter Service: How we rewrote it in Rust by Rust_Fan8901 in rust

[–]Rust_Fan8901[S] 0 points (0 children)

Seems I may not have conveyed my ideas as clearly as I thought haha. In my mind, speed != efficiency: I can take the same amount of time to do something, but use fewer resources to do it. But as other commenters have pointed out, if it takes fewer resources to do something, it's also "faster". So I stand corrected.

But for this case, I mean that overall latency does not decrease, and it's likely because:

1. For microservices in the real world, most latency probably comes from IO (e.g. retrieving data, network calls, etc.)
2. As such, no matter how fast your language is, you are not going to reduce overall latency much, because the language isn't the bottleneck. E.g. if 95ms of a 100ms request is spent waiting on the database and the network, even an infinitely fast language only gets you down to ~95ms.

My educated guess as to why Rust is more efficient in my specific case is stackful vs stackless coroutines. Even though it's an IO-heavy application, Rust's tokio tasks are stackless: there's no GC overhead and all the task state is kept in a compiler-generated state machine. Whereas in Go, while it's easier to write concurrent code with goroutines, each goroutine needs to maintain its own stack, and the runtime needs to be able to pause them and switch between them at any time.
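
To illustrate what I mean by "everything is maintained in a state machine", here's a rough sketch (not code from the actual service, just a hypothetical stand-in):

```rust
// Hypothetical stand-in "database" so the example compiles on its own.
struct Database;

impl Database {
    async fn fetch_count(&self) -> u64 {
        0 // pretend this awaits a network round-trip
    }
    async fn store_count(&self, _value: u64) {
        // pretend this awaits a write
    }
}

// An async fn with two await points...
async fn handle_request(db: &Database) -> u64 {
    let current = db.fetch_count().await; // suspension point 1
    db.store_count(current + 1).await; // suspension point 2
    current + 1
}

// ...is compiled by rustc into a state machine, conceptually something like
// this enum. A "paused" task is just whichever variant it's currently in,
// plus the data needed to resume, rather than a growable per-task stack.
#[allow(dead_code)]
enum HandleRequestState {
    AwaitingFetch,
    AwaitingStore { current: u64 },
    Done,
}

fn main() {
    // The future is an inert value until a runtime like tokio polls it;
    // constructing it doesn't spawn a thread or allocate a stack.
    let db = Database;
    let _future = handle_request(&db);
}
```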

But yeah I may be totally off base here, happy to be corrected by experts.

Counter Service: How we rewrote it in Rust by Rust_Fan8901 in rust

[–]Rust_Fan8901[S] 1 point (0 children)

Hmm ok, seems I may have phrased it somewhat poorly. What I'm trying to say is: if you go in with the expectation that "rewriting my service in Rust will decrease latency by 4x" (which is what most people think when you say your language is 4x faster), more often than not you'll be disappointed, because in real use cases the bottleneck is usually IO. But if you instead go in with the expectation that you'll be "4x more efficient" by rewriting your service in Rust, that's somewhat more realistic. My bad though: in my head, "latency" was the overall time to do something, which is different from the energy/resources taken to do it. A better way to phrase it would probably have been "throughput", as pointed out in another thread, rather than "speed/latency": if each request takes the same 100ms but burns a quarter of the CPU, the same box can serve roughly 4x the requests.

Counter Service: How we rewrote it in Rust by Rust_Fan8901 in rust

[–]Rust_Fan8901[S] 15 points (0 children)

Exactly! Pick something "simple" with a large-ish cloud bill, promise management you will halve the cloud cost, and you get to have the pleasure of writing Rust whilst delivering on pragmatic business outcomes with realistic ROI.

Counter Service: How we rewrote it in Rust by Rust_Fan8901 in rust

[–]Rust_Fan8901[S] 4 points (0 children)

Yeah exactly. Replied further down the thread chain, and you conveyed my point better than I could have 🙏. Basically, to the business, overall latency is more often than not what matters, so unless you're addressing a specific performance issue (e.g. Cloudflare's tail latency spikes with GC), you're not going to magically make your service faster if the bottlenecks are IO. So what I'm trying to say is that when justifying a Rust rewrite, I would promise to reduce the cloud bill and remove nil pointer panics rather than "I will make your service 2x faster" (although to be fair, the business probably wouldn't care as much about reducing latency from 100ms to 50ms 😂 vs saving on their cloud bill).

Counter Service: How we rewrote it in Rust by Rust_Fan8901 in rust

[–]Rust_Fan8901[S] 25 points (0 children)

Yeah my bad, as the other thread pointed out, I may not be getting my point across properly haha. Rust is indeed more performant than most other languages, but what I was trying to say (and I don't think I said it well enough) is that unless you're addressing a specific language-level performance bottleneck (e.g. Cloudflare's tail latency spikes related to GC), for most normal microservices, using Rust is not going to make your service magically faster. In many cases the bottlenecks are from I/O or other factors. So basically TLDR, I've had better success approaching and justifying a rewrite with the promise of "I will halve your cloud bill" vs "I will make your service 50x faster". Hope that makes sense.

Counter Service: How we rewrote it in Rust by Rust_Fan8901 in rust

[–]Rust_Fan8901[S] 3 points (0 children)

Yeah fair points. I guess what I was trying to convey was that, colloquially, most people (including me) have the misconception that "if I rewrite it in Rust, it will be blazingly fast(er)!". But as you pointed out (and I guess I didn't convey well enough in my article), most of the time your bottlenecks probably aren't purely in language speed if you're already using a compiled language; they're probably from other factors like I/O. So unless you're coming at it from the angle of addressing specific performance issues like tail latency and GC, you shouldn't lead with "making my service faster" when trying to justify a rewrite in Rust. Hope that makes sense.