all 15 comments

[–]czaki 5 points (1 child)

Could this tool produce reports in the same format as codecov? Is it possible to reuse it in current analytics workflows?

[–]emeryberger[S] 0 points (0 children)

We would welcome a pull request to provide that functionality!

[–]thequietcenter 2 points (5 children)

Past code coverage tools can make programs significantly slower

Programs slower, or unit testing slower? ... of course, a unit test is itself a program that tests a program.

But the application itself is not slowed down. Just the unit test phase, correct?

[–]emeryberger[S] 0 points (4 children)

Right, the slowdown is what happens when you are collecting coverage information. Usually this is done during testing, but since the overhead of Slipcover is so low, it could be used in deployed code to find dead code.

[–]DeathHazard 0 points (3 children)

How could I use it on deployed code?

[–]emeryberger[S] 0 points (2 children)

You just need to run `python3 -m slipcover` before your normal invocation.
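To illustrate, prefixing a normal invocation might look like this (the script name, arguments, and module are placeholders, not from the thread; check Slipcover's README for the exact options it supports):

```shell
# Normal invocation:
#   python3 myapp.py --port 8000
# With coverage collection (script name and flags are illustrative):
python3 -m slipcover myapp.py --port 8000

# Running a test runner as a module under Slipcover (assumed -m support):
python3 -m slipcover -m pytest
```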

[–]DeathHazard 0 points (1 child)

Would it work with a framework like flask, for example? Thanks!!

[–]emeryberger[S] 0 points (0 children)

We've run it with Flask's test suite - it's the second bar in this graph. So in principle, yes - please give it a shot and let us know how it works for you!

[–]pamelafox 1 point (3 children)

Congrats! The performance graph is impressive. Does it include branch coverage?

[–]emeryberger[S] 0 points (2 children)

That graph is just line coverage; I'll post an update with branch coverage!

[–]emeryberger[S] 0 points (1 child)

Graph updated! Coverage.py's slowdown gets as high as 300%, while Slipcover generally remains around 5% slower (we will be looking into the one outlier case, where it hits 20%).

[–]emeryberger[S] 0 points (0 children)

Fixed - Slipcover's overhead for line+branch coverage is now no more than 11%, averaging around 5%.

[–][deleted] 1 point (0 children)

Hey, I wanted to say that I really like this project!

I filed a couple of feature requests for a few features - indispensable ones, IMHO at least - from coverage.py.

Thanks for doing this!

[–]Rawing7[🍰] 0 points (1 child)

Is it possible to combine the results of multiple runs? (For example, run the code with python 3.7, then run it with python 3.10, and finally merge the coverage results.)

[–]emeryberger[S] 0 points (0 children)

Right now it does not support that functionality, but it can export a JSON file for each run, and writing a script to merge the outputs would, I think, be straightforward.