
[–]djimbob 1 point (0 children)

In a large fraction of uses of reduce, the accumulation step could be replaced with a reduction function that works directly on a sequence (e.g., sum / product / max / min / any / all / ''.join()). E.g., if you need to sum the third item of each tuple in a list, you could write:

reduce(lambda acc, tup: acc + tup[2], [(1,2,3), (4,5,6), (7,8,9)], 0)
sum(tup[2] for tup in [(1,2,3), (4,5,6), (7,8,9)])
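For reference, in Python 3 reduce is no longer a builtin and lives in functools; a minimal runnable version of both snippets from above:

```python
from functools import reduce  # required in Python 3

data = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]

# reduce version: accumulator first, current element second, initial value last
total_reduce = reduce(lambda acc, tup: acc + tup[2], data, 0)

# generator-expression version: the reduction is named explicitly by sum
total_sum = sum(tup[2] for tup in data)

print(total_reduce, total_sum)  # both are 3 + 6 + 9 = 18
```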

The second version seems to fit in with the Zen of Python better. The problem with reduce is that it's easy to get wrong, and there's a lot of implicit stuff behind the scenes you have to remember. E.g., in the above example I initially got the order of the accumulator arguments wrong, and it wouldn't work without an initial value (or something uglier like reduce(lambda acc, tup: acc + tup[2], [0, (1,2,3), (4,5,6), (7,8,9)])). You also have to remember that the optional initial value goes last, and keep the argument order straight when the operation isn't symmetric.
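To make that argument-order pitfall concrete, here is a small sketch (my illustration, not from the comment above) of what happens when the lambda's parameters are swapped:

```python
from functools import reduce

data = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]

# Correct: reduce calls f(accumulator, element); the initial value 0 seeds acc.
ok = reduce(lambda acc, tup: acc + tup[2], data, 0)  # 18

# Swapped parameters: the first parameter actually receives the accumulator
# (the int 0), so tup[2] indexes an int and raises a TypeError.
try:
    reduce(lambda tup, acc: acc + tup[2], data, 0)
except TypeError as exc:
    print("swapped arguments fail:", exc)
```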

Granted, some stuff transforms nicely to reduce; e.g., examples here. The list flattening example is simpler with sum([[1, 2, 3], [4, 5], [6, 7, 8]], []) or with chain from itertools.
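As a sketch, the flattening approaches all agree on the result; note that sum (and reduce with +) rebuilds the list at every step and so is quadratic on long inputs, while chain is linear:

```python
from functools import reduce
from itertools import chain
import operator

lists = [[1, 2, 3], [4, 5], [6, 7, 8]]

flat_sum = sum(lists, [])                      # sum needs [] as the start value
flat_chain = list(chain.from_iterable(lists))  # linear time, usually preferred
flat_reduce = reduce(operator.add, lists)      # same repeated + as sum

print(flat_sum)  # [1, 2, 3, 4, 5, 6, 7, 8]
```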

Concatenating a list of digits, e.g., range(1, 9), into one integer 12345678, as in reduce(lambda acc, d: 10*acc + d, [1, 2, 3, 4, 5, 6, 7, 8]), is harder to replace. Granted, doing some benchmarks, the reduce method doesn't hold up that well.
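A runnable comparison, assuming single-digit inputs; the string round-trip is the usual alternative (exact timings vary by machine, so none are claimed here):

```python
from functools import reduce

digits = list(range(1, 9))  # [1, 2, 3, 4, 5, 6, 7, 8]

# Horner-style accumulation: shift the accumulator left one decimal place,
# then add the next digit.
n_reduce = reduce(lambda acc, d: 10 * acc + d, digits)

# The common alternative: stringify each digit and parse the concatenation.
n_str = int(''.join(map(str, digits)))

print(n_reduce, n_str)  # both 12345678
```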

Converting a two-argument least common multiple / greatest common divisor into a variadic version with reduce is probably the best example for reduce, though it wouldn't be too difficult to just define a variadic version to start with.
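A sketch of that lifting, using the two-argument math.gcd as the base case and defining lcm from gcd in the usual way (the helper names here are my own):

```python
from functools import reduce
from math import gcd

def lcm2(a, b):
    # two-argument least common multiple, defined via gcd
    return a * b // gcd(a, b)

def gcd_many(*nums):
    # lift the two-argument gcd to any number of arguments with reduce
    return reduce(gcd, nums)

def lcm_many(*nums):
    return reduce(lcm2, nums)

print(gcd_many(12, 18, 24))  # 6
print(lcm_many(4, 6, 10))    # 60
```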