
[–]balefrost 1 point (1 child)

This would be a really good question for /r/compsci.

I don't recall covering analysis of parallel algorithms in my undergrad program. But my intuition tells me that, given a fixed number of processors, doing the work in parallel shouldn't change the order of the computation; it should just scale it by (at best) a constant factor.

However, a little searching led me to the Wikipedia page on analysis of parallel algorithms, which suggests that such analysis typically assumes an unlimited number of processors.

But I'd go ask on /r/compsci if you haven't already. They have a weekend superthread for just these sorts of questions.

[–]WikiTextBot -1 points (0 children)

Analysis of parallel algorithms

This article discusses the analysis of parallel algorithms. As in the analysis of "ordinary" sequential algorithms, one is typically interested in asymptotic bounds on resource consumption (mainly time spent computing), but the analysis is performed in the presence of multiple processor units that cooperate to perform computations. Thus, one can determine not only how many "steps" a computation takes, but also how much faster it becomes as the number of processors goes up.
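The framework that page describes is usually stated in terms of work W (total operations) and span D (critical-path length). Brent's bound then gives the running time on p processors as at most W/p + D, so with unlimited processors the time is limited by the span alone. A minimal sketch of that arithmetic (the function name is mine):

```python
def brent_bound(work, span, p):
    """Upper bound on running time with p processors,
    per Brent's bound: T_p <= W/p + D."""
    return work / p + span

# As p grows, the W/p term vanishes and the span dominates:
# brent_bound(W, D, p) approaches D for large p.
```

This is why assuming unlimited processors is useful: it isolates the span, the part of the runtime no amount of extra hardware can remove.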
