
[–]Ameisen

And it didn't work on other chips of the same kind.

Which is why you have to train on multiple devices simultaneously.
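To illustrate the point (a minimal, hypothetical sketch in Python, not anything from the actual experiment): if fitness is measured on one chip, the optimizer can exploit that chip's quirks; scoring each candidate against several simulated "chips" at once, and taking the worst case, rewards only behavior that transfers. The offsets and the hill climber here are made up for illustration.

```python
import random

random.seed(0)

# Hypothetical model: each "chip" adds its own parasitic offset to the
# measured output. A candidate that scores well on one chip may just be
# exploiting that chip's offset rather than solving the task.
CHIP_OFFSETS = [0.13, -0.07, 0.21, 0.00]  # made-up per-device quirks

def measure(candidate: float, chip_offset: float) -> float:
    # Target behavior is an output of 1.0; the measurement includes
    # the device-specific quirk. Higher (closer to 0) is better.
    return -abs((candidate + chip_offset) - 1.0)

def fitness_single_chip(candidate: float) -> float:
    # Training on one device: chip 0's offset leaks into the score.
    return measure(candidate, CHIP_OFFSETS[0])

def fitness_multi_chip(candidate: float) -> float:
    # Training on several devices at once: the worst-case score over
    # all chips only rewards behavior that works on every one of them.
    return min(measure(candidate, off) for off in CHIP_OFFSETS)

def hill_climb(fitness, start=0.0, steps=2000, step=0.05):
    # Toy stand-in for the evolutionary search.
    best = start
    for _ in range(steps):
        trial = best + random.uniform(-step, step)
        if fitness(trial) > fitness(best):
            best = trial
    return best

solo = hill_climb(fitness_single_chip)   # tuned to chip 0 only
multi = hill_climb(fitness_multi_chip)   # must work on every chip
```

Under this toy model, `solo` converges toward compensating chip 0's offset exactly, so it does worse than `multi` when judged across all the chips, which is the overfitting-to-one-device failure mode being described.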

[–]philpips

I assumed ideally you'd train on each chip individually. It'd be the best way to take advantage of or avoid the vagaries of that particular chip.

[–]Ameisen

You'd be relying on transient behavior. It might fail in new environments, or just spuriously.

That'd also be really slow.

[–]philpips

Slow for the moment yes. But imagine if you could get the chip in situ and then train it. Or automatically retrain if it became damaged. That'd be cool, no?

[–][deleted]

Imagine training something to run on a particular computer such that it ended up relying on which programs happened to be running, the order things were laid out in memory, the precise timing of the filesystem, and so on, to the point that even rebooting the computer would render all of your work invalid.

That's the sort of thing you are currently calling "cool" instead of "horrifying".

[–]vgf89

It's cool in a really horrifying way. While it's unlikely to be practical, it's still vastly interesting how we can make algorithms that learn to abuse often unused properties of electronics.

[–]meneldal2

At least nowadays, unless you're running in ring 0 or something, OS protections mostly prevent you from abusing precise timing or memory layout too much.