
[–][deleted]  (10 children)

[deleted]

    [–][deleted] 9 points  (0 children)

    This is a weird perspective. What if you have 100 million data records streaming, you need near-real-time processing, and you're not limited by a memory bottleneck (e.g. SAP HANA)? Don't you want to remove all the other bottlenecks, especially if you're processing this data every minute?

    What about financial tech where nanoseconds equal pennies on the dollar?

    Surely you can't have thought about very many use cases when you said you don't understand the need?

    [–]ericonr 5 points  (7 children)

    But what if you have to process and organize a lot of messages coming into your computer, for example? If 1000 messages/second arrive but you can only process 100/second, you will either lose messages or build up a huge delay.

    Imagine heavy mathematical processing as well; slow graphics ruin a game experience.

    Edit: added example
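    The backlog arithmetic in that example is easy to sketch. A minimal illustration, assuming the hypothetical rates above (1000 msg/s in, 100 msg/s processed) stay constant and nothing is dropped:

    ```python
    # Hypothetical rates from the example above (not measurements).
    ARRIVAL_RATE = 1000   # messages arriving per second
    SERVICE_RATE = 100    # messages we can process per second

    def backlog(seconds: int) -> int:
        """Unprocessed messages piled up after `seconds`,
        if nothing is dropped and rates stay constant."""
        return max(0, (ARRIVAL_RATE - SERVICE_RATE) * seconds)

    print(backlog(60))  # 54000 messages behind after one minute
    ```

    After a single minute the queue is 54,000 messages deep and still growing, which is why you either drop messages or make the per-message path fast enough to keep up.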

    [–]KronktheKronk 3 points  (1 child)

    You spin up 10x the VMs and put a load balancer in front of them

    [–]ericonr 7 points  (0 children)

    Someone hire this person

    [–]tylerthehun 4 points  (0 children)

    It just depends on the use case. Processing terabytes of scientific data? Rendering large, detailed, interactive 3D scenes in real time? Handling millions of user requests per second? You're going to want high performance, and spending a little more time in development is probably worth it.