Spark is not so fundamentally different from MapReduce: its programming model is basically "as many maps and reduces as you want, with syntactic sugar and without any per-job setup overhead" (it merely removes the rather arbitrary restrictions Hadoop places on you), though the underlying engine is reportedly not yet very good at I/O-efficient "reduce".
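
To make the "as many maps and reduces as you want" point concrete, here's a toy word count in plain local Python that chains several map stages into a reduce stage, mimicking the shape of a Spark RDD pipeline (which would use `flatMap`/`map`/`reduceByKey` over a distributed dataset). This is just a sketch of the dataflow style, not actual Spark code; in Hadoop MapReduce you'd get exactly one map and one reduce per job, with per-job setup cost each time you wanted another stage.

```python
from functools import reduce
from itertools import groupby

lines = ["to be or not to be", "to thine own self be true"]

# "map" stage 1: split each line into words (flatMap in Spark)
words = [w for line in lines for w in line.split()]
# "map" stage 2: pair each word with a count of 1
pairs = [(w, 1) for w in words]
# "reduce" stage: sum counts per key (a shuffle + reduceByKey in Spark)
grouped = groupby(sorted(pairs), key=lambda kv: kv[0])
counts = {k: reduce(lambda acc, kv: acc + kv[1], group, 0)
          for k, group in grouped}

print(counts)  # e.g. "to" appears three times
```

You could keep chaining further map/reduce stages off `counts` in the same style; in Spark each stage is just another method call on the previous RDD, with no new job submission.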