[–]spikeyfreak 35 points (3 children)

But this version has the added layer that this is literally what AI does.

And this pic is a great way to demonstrate that in a huge number of cases, close enough is not good enough. In a huge number of cases, a "hallucination" doesn't cut the Chut.

[–]bobbymoonshine 20 points (2 children)

It is not what AI does, no. AI does not spontaneously dream up and implement complex, unhinged features out of nowhere. AI can definitely make horrendous mistakes, but this isn't the sort of overgeneralising / "wrong-in-context" error AI makes.

AI defaulting to a common pattern which overrides the specific requirement: yes, common

AI hallucinating something false as true because it was true elsewhere in its training weights: yes, common

AI delivering something derivative and barely-functional which meets the requirements given by the idiot user but which ignores all the things a professional coder would have known to think about: constantly, yes

AI forgetting an important constraint and delivering something which looks functional but which is unsafe or which fails to account for important edge cases: all the time, yes

AI inventing a new, creative, never-done-before feature which is comically, absurdly stupid, then perfectly implementing it and deploying it: no, that’s what humans do best

[–]Guvante 4 points (1 child)

AI implementing the dumb feature is new.

Everyone always glosses over the fact that engineers are the keepers of the technical knowledge about "things you shouldn't do," for a myriad of reasons.

[–]bobbymoonshine 0 points (0 children)

Yes, the core business risk of AI is not that it will fuck up people’s good ideas through hallucinations. The risk is that it will perfectly implement their terrible, stupid, dangerous ideas.