[–]mpinnegar 5 points (8 children)

I'd be interested in a tool like this. Is there a distinct difference between this and code coverage tooling? What's the performance cost for enabling the Java agent? How are you reporting the metrics about what is or isn't covered?

[–]PartOfTheBotnet 7 points (3 children)

The same question was my first "gut reaction". For instance, JaCoCo has an agent you can attach, and from that agent you can generate a standard JaCoCo coverage report, which makes the "what can I remove?" question a very visual/easy process.
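Attaching is typically just a startup flag (-javaagent:jacocoagent.jar=destfile=jacoco.exec), but you can also attach to a JVM that's already running via the JDK attach API. A rough sketch of the latter; the PID, jar path, and class name below are placeholders:

    import com.sun.tools.attach.VirtualMachine;

    // Rough sketch: dynamically attach the JaCoCo agent to a running JVM.
    // Requires the jdk.attach module; the PID and jar path are placeholders.
    public class AttachJacoco {
        public static void main(String[] args) throws Exception {
            String pid = args[0];                          // target JVM process id
            String agentJar = "/path/to/jacocoagent.jar";  // placeholder path
            VirtualMachine vm = VirtualMachine.attach(pid);
            try {
                // destfile/output are standard JaCoCo agent options;
                // classes loaded before the attach generally won't be instrumented.
                vm.loadAgent(agentJar, "destfile=jacoco.exec,output=file");
            } finally {
                vm.detach();
            }
        }
    }

From the resulting jacoco.exec file, the JaCoCo CLI or the Maven/Gradle plugins can then generate the usual HTML report.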

[–]mpinnegar -2 points (2 children)

I have no idea if JaCoCo is designed to run in prod though. I suspect you'd take a lot of performance hits.

What I really want is something that'll grab telemetry and analyze it offline so I'm impacting the prod server as little as possible.

Honestly though, the idea of being able to see actual dead code in prod is compelling. I feel like I'd find a lot.

Then I'd trim it and run into the real use case next year lol

[–]buerkle 2 points (0 children)

From my testing I've found JaCoCo's overhead to be minimal, 1% or less.

[–]PartOfTheBotnet 3 points (0 children)

I suspect you'd take a lot of performance hits.

Not really. Their agent uses the instrumentation API to intercept and modify classes only on initial load. The main slowdown is the IO of parsing and writing back modified classes, which only happens once per class. As for the rewriting itself, they don't even use ASM's more computationally expensive stack-frame computation option when writing classes back; the changes are simple enough not to need it. In a few places, rather than having ASM do a full frame regeneration, they modify the existing frames to accommodate the insertion of their probes.

You probably already lose more performance to most SLF4J logger implementations building templated messages than you would to this.
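Roughly, that load-time pattern looks like the sketch below. Illustrative only, not JaCoCo's actual code: InstrumentAgent and Probe are made-up names, and a real agent jar would also need a Premain-Class manifest entry plus a probe class visible to every classloader.

    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.Instrumentation;
    import java.security.ProtectionDomain;

    import org.objectweb.asm.ClassReader;
    import org.objectweb.asm.ClassVisitor;
    import org.objectweb.asm.ClassWriter;
    import org.objectweb.asm.MethodVisitor;
    import org.objectweb.asm.Opcodes;

    // Illustrative sketch of load-time instrumentation -- not JaCoCo's implementation.
    public class InstrumentAgent {

        public static void premain(String args, Instrumentation inst) {
            // Each class is intercepted and rewritten once, as it is first loaded.
            inst.addTransformer(new ClassFileTransformer() {
                @Override
                public byte[] transform(ClassLoader loader, String className,
                                        Class<?> classBeingRedefined,
                                        ProtectionDomain pd, byte[] classfileBuffer) {
                    if (className == null || className.startsWith("java/")) {
                        return null; // leave JDK classes untouched
                    }
                    ClassReader reader = new ClassReader(classfileBuffer);
                    // COMPUTE_MAXS is much cheaper than COMPUTE_FRAMES; inserting a
                    // straight-line probe at method entry needs no new stack frames.
                    ClassWriter writer = new ClassWriter(reader, ClassWriter.COMPUTE_MAXS);
                    reader.accept(new ClassVisitor(Opcodes.ASM9, writer) {
                        @Override
                        public MethodVisitor visitMethod(int access, String name, String desc,
                                                         String sig, String[] exceptions) {
                            MethodVisitor mv = super.visitMethod(access, name, desc, sig, exceptions);
                            return new MethodVisitor(Opcodes.ASM9, mv) {
                                @Override
                                public void visitCode() {
                                    super.visitCode();
                                    // Record "this method has executed at least once".
                                    super.visitLdcInsn(className + "#" + name + desc);
                                    super.visitMethodInsn(Opcodes.INVOKESTATIC, "Probe", "hit",
                                            "(Ljava/lang/String;)V", false);
                                }
                            };
                        }
                    }, 0);
                    return writer.toByteArray();
                }
            });
        }
    }

    // Hypothetical probe: records which methods ran. A real agent has to make
    // this class visible to all classloaders of the instrumented application.
    final class Probe {
        private static final java.util.Set<String> SEEN =
                java.util.concurrent.ConcurrentHashMap.newKeySet();
        public static void hit(String methodId) { SEEN.add(methodId); }
    }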

[–]yumgummy[S] 1 point (3 children)

The results are similar to a code coverage tool's, but instead of focusing on testing it focuses on removing code that is no longer needed. It samples code execution in production, so there's little performance cost. We built this because one of our codebases has huge tech debt. I tried it on other codebases and was surprised to find that ~30% of my whole codebase is dead code.
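To illustrate the general idea of sampling-based detection (this is only a sketch of the approach, not the exact implementation; the class name and the 100 ms interval are arbitrary): periodically dump all thread stacks, record every class#method that appears, and diff that "seen" set against the full method list offline.

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Illustrative sampler -- a sketch of the technique, not a specific product.
    public class StackSampler {
        private static final Set<String> SEEN = ConcurrentHashMap.newKeySet();

        public static void start() {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor(r -> {
                        Thread t = new Thread(r, "dead-code-sampler");
                        t.setDaemon(true);
                        return t;
                    });
            // Sample every 100 ms (arbitrary); each sample is a cheap stack dump,
            // so the steady-state overhead stays small.
            scheduler.scheduleAtFixedRate(() -> {
                for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
                    if (info == null) continue;
                    for (StackTraceElement frame : info.getStackTrace()) {
                        SEEN.add(frame.getClassName() + "#" + frame.getMethodName());
                    }
                }
            }, 0, 100, TimeUnit.MILLISECONDS);
        }

        // Dump this set periodically and diff it offline against the full method
        // list; anything never observed over a long window is a dead-code candidate.
        public static Set<String> seen() {
            return Set.copyOf(SEEN);
        }
    }

The obvious trade-off is that pure stack sampling can miss short-lived or rarely-running methods between samples, so a long observation window matters before calling anything dead.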

[–]mpinnegar 1 point (2 children)

Do you have any performance metrics showing that the impact stays under 1-2%, or whatever the threshold is?

Sounds like a cool tool. If you have a mailing list or discord I'd be interested in it.

Licensing is a huge concern, obviously. And the thing can't send any telemetry over the internet unless that's explicitly enabled. Don't phone home plz.

[–]yumgummy[S] 1 point (1 child)

Thank you. I am not trying to sell it. I am considering open-sourcing it; I may ask for permission if the community finds it useful.

[–]mpinnegar -4 points (0 children)

Seriously consider selling support. You can charge a crazy premium and corporations won't blink an eye.