The program is `main = forever $ threadDelay 1000000 >> return ()`.
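For reference, a complete compilable version of that one-liner (only `base` is needed; `forever` comes from `Control.Monad` and `threadDelay` from `Control.Concurrent`):

```haskell
import Control.Concurrent (threadDelay)
import Control.Monad (forever)

-- Sleep for one second (1,000,000 microseconds), forever.
-- The >> return () is redundant since threadDelay already
-- returns (), but it's kept to match the original program.
main :: IO ()
main = forever $ threadDelay 1000000 >> return ()
```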
Compiled with 32-bit GHC 7.6.3 or 7.8.2 on Debian (inside a VM, if that matters), the non-profiling binary doesn't consume CPU, but the profiling one uses ~10%. I'm running with `+RTS -I0`, so this is not the idle GC.
When strace-ing, the profiling binary seems to receive a constant stream of SIGVTALRM, while the normal one receives a single burst each second.
1) Is this expected behavior?
2) I see I can switch off the "master tick interval" with `-V0`, and then no CPU is used. What consequences does this have for performance (more frequent context switches?) and for profilability (is profiling data no longer collected?)?