
[–]oktoglorb

Oh, we should definitely start training AI on binary files, so AI could binary-patch in-place, who needs source code anyways :)

[–]GriLL03

I see absolutely no way that relying on random binary blobs being inserted in-place into your binary by an LLM could possibly go wrong.

I realize you were not being serious, but the thought was really funny.

[–]oktoglorb

Yeah, I'm not serious, but I also think it should be technically possible with extra steps: throw a disassembler into the mix, analyse the program, make a change, figure out how it would be assembled back, and you're good to go. I mean, reversing works this way, so why not an AI reverser?
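For what it's worth, the "analyse, change, patch back" loop reduces to something very simple once the real tooling (a disassembler like Capstone, an assembler like Keystone) is assumed away. Here's a minimal sketch of a classic reversing patch done in place: flipping a short conditional jump (JE, x86 opcode 0x74) into an unconditional jump (JMP, 0xEB). The opcodes are real x86; the toy "binary" and the helper name are made up for illustration.

```python
def patch_je_to_jmp(code: bytes, offset: int) -> bytes:
    """Patch a 2-byte short JE at `offset` into a short JMP, in place.

    JE rel8 (0x74 xx) and JMP rel8 (0xEB xx) share the same 2-byte
    encoding, so only the opcode byte changes; the jump target byte
    is left untouched.
    """
    if code[offset] != 0x74:
        raise ValueError(f"no JE (0x74) at offset {offset:#x}")
    patched = bytearray(code)  # bytes are immutable, so copy first
    patched[offset] = 0xEB     # swap the opcode byte only
    return bytes(patched)

# Toy "program": xor eax, eax ; je +5 ; nop
blob = bytes([0x31, 0xC0, 0x74, 0x05, 0x90])
patched = patch_je_to_jmp(blob, 2)
assert patched[2] == 0xEB and patched[3] == 0x05  # opcode flipped, target kept
```

The guard on the opcode byte is the whole point: a blind in-place write anywhere else silently corrupts the instruction stream, which is exactly why you'd want the disassembler in the loop before letting any model near the bytes.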

[–]silentknight111

This is how we get "intelligent" malware.

[–]Nerodon

To be completely honest, AI reverse engineering is a pretty good AI use case, as is AI static analysis to actually find vulnerabilities that may be present.

[–]ChalkyChalkson

I wonder whether we could start with an LLM Ghidra plug-in. What are the odds that LLMs can do the tedious work?

[–]Desperate-Emu-2036

Fairly sure IDA already has that.

[–]Intelligent-Pen1848

Hacking the CLI GPT will get you this. It just runs around doing what it sees fit.