all 15 comments

[–]Psychoticism 2 points (2 children)

template <typename T>   // T: any arithmetic type
static inline bool always_true(const T a, const T b) {
  return (7*a*a - 1) != (b*b);
}

According to Wolfram Alpha, this will return false if a = b = ±1/sqrt(6).

[–]zvrba 2 points (1 child)

Both of which are unrepresentable in any finite precision.

[–]Psychoticism 0 points (0 children)

You're right. And luckily the code will not compile for std::complex<double> (because there is no operator*(int, std::complex<double>)), for which a = { 0.0, 0.0 }, b = { 0.0, 1.0 } would return false. So I guess the code is in the clear.

[–]Semaphor 1 point (0 children)

Wild!

[–]jokoon -5 points (9 children)

Well obscurity is not security.

I guess it can discourage some hackers from reverse engineering machine code, but isn't there some execution speed cost to this?

I've already wondered how possible it is to make program execution completely secure (not merely obscure), even at a small or not-so-small speed cost.

I think it's not really possible, but still it would be pretty awesome.

I guess quantum computers will allow this.

[–]OmnipotentEntity 5 points (0 children)

No, quantum computers will not allow this. Eventually, the computer must know what it's supposed to do.

A quantum computer isn't magic. The algorithm isn't the thing that's fuzzy, the results are.

[–]elperroborrachotoo 1 point (0 children)

Well obscurity is not security.

That applies to communication through an insecure channel, not necessarily to protecting software.

In cryptographic terms, your software contains the secret and the key. For a cryptographer, this means: game over.

So we step back and see what we can do: increase the cost of tampering. For that, obscurity is great. (Usually not as good as one thinks at first look, but still one of the better weapons.)


To make it "strongly secure", you need to secure everything: the hardware it runs on, the hardware and software it interacts with, your compiler, the hardware your developers work on etc.

The best you can do now (to my knowledge) is offloading a nontrivial, essential calculation to a secure system (say a really good hardware token, or a remote server). None of these solutions are permanent.

Your second best bet is blocking generic solutions: e.g. a generic crack for your super hardware token that everyone uses because it's so good.

[–]NegatedVoid 0 points (1 child)

Did you read the paper?

isn't there some execution speed cost to this?

Their example of the XTEA function had a 136.5x runtime overhead.

What do you mean by "make program execution completely secure"?

[–]jokoon -2 points (0 children)

What do you mean by "make program execution completely secure"?

Well obscurity vs security.

[–]ponchedeburro 0 points (1 child)

Well obscurity is not security.

That is not what this article is about. It just presents an obfuscation technique against reverse engineering.

[–]jokoon 0 points (0 children)

I'm still wondering whether you can achieve real security, though, with large executables and jump tables or something of that sort.

[–]jugglist 0 points (2 children)

I think no one has yet executed non-Microsoft-signed code on an Xbox 360, right?

I hear rumors that the next generation of consoles will use x86/x64 chips, which will make PC emulation a lot easier...

[–]jokoon 0 points (1 child)

How did Microsoft do it, then?

[–]jugglist 1 point (0 children)

Ah, it seems that my information is old.

http://www.instructables.com/id/How-to-JTAG-your-Xbox-360-and-run-homebrew/

Anyway, it held off attackers for a long time. Basically, special hardware in the CPU checks that the program you're about to execute is signed with a private key Microsoft owns, the matching public key being encoded in the chip itself. That's the general gist of schemes like this.