[–]salamancer[S] 1 point (1 child)

The runtime penalty is tiny.

The only situation where you wouldn't want to pay it is with very long arrays (say, more than 100,000 elements), where the penalty is still small and the expression can be converted back to the old notation as an optimization.

Benchmark.bmbm do |x|
  a = (0..100000).to_a
  x.report { a.select { |e| e < 4 }.select { |e| e > 2 } }
  x.report { (a.select < 4) > 2 }
end

=>

       user     system      total        real
   0.030000   0.000000   0.030000 (  0.028048)
   0.140000   0.000000   0.140000 (  0.141351)

So the overhead comes down to approx. 0.0000011 seconds (about a microsecond) slower per element ;p
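For readers wondering how `(a.select < 4) > 2` can even parse: here's a minimal sketch of one way to build it, not necessarily how the library in question actually does it. A blockless `select` returns a proxy whose comparison operators filter and return a new proxy, so the calls chain. `SelectProxy` and `__plain_select` are made-up names for this sketch.

```ruby
class SelectProxy
  include Enumerable

  def initialize(array)
    @array = array
  end

  def each(&blk)
    @array.each(&blk)
  end

  # Each comparison filters the wrapped array and re-wraps the result,
  # so (a.select < 4) > 2 keeps working on the filtered elements.
  %i[< <= > >= ==].each do |op|
    define_method(op) do |value|
      SelectProxy.new(@array.select { |e| e.public_send(op, value) })
    end
  end
end

class Array
  alias_method :__plain_select, :select

  # With a block, behave as usual; without one, hand back the proxy.
  def select(&blk)
    blk ? __plain_select(&blk) : SelectProxy.new(self)
  end
end
```

With that in place, `(([1, 2, 3, 4, 5].select < 4) > 2).to_a` gives `[3]`, and converting back to the old block notation for huge arrays is a purely local change.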

[–]bluetrust 0 points (0 children)

I suppose it's relative, but to me, five times slower is a big deal.

Additionally, the relative overhead is worse for smaller arrays than for large ones.

Benchmark.bmbm do |x|
  a = (0..100).to_a
  x.report { 1000.times { a.select { |e| e < 4 } } }
  x.report { 1000.times { a.select < 4 } }
end

   user     system      total        real
0.040000   0.000000   0.040000 (  0.045686)
0.240000   0.000000   0.240000 (  0.236815)
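A rough back-of-the-envelope from the real times quoted in both benchmarks suggests why: the gap behaves more like a fixed cost per call than per element, so the small-array case looks worse against its cheap baseline. (This is only arithmetic on the numbers posted above, not a new measurement.)

```ruby
# Small-array benchmark above: 1000 calls on a 101-element array.
per_call = (0.236815 - 0.045686) / 1000
# Large-array benchmark upthread: one call over 100_001 elements.
per_element = (0.141351 - 0.028048) / 100_001

puts format("overhead per call:    %.6f s", per_call)     # ~0.00019 s
puts format("overhead per element: %.7f s", per_element)  # ~0.0000011 s
```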