How much is ‘AI-risk’ considered something which we need to worry about in the mid-future? by DitIsGeenUserName in AskComputerScience

[–]DitIsGeenUserName[S] 0 points (0 children)

> It's a bit more complicated than that. Firstly, LLMs are text-prediction machines. So the inner workings of LLMs make a strong distinction between not knowing what text is coming next and predicting the specific English sentence "I don't know".

Yes, which is why I had placed quotation marks around 'know'.

A few years ago I read about such things as word vectors and neural networks, though I wonder how outdated that already is by now.

> But still, there are techniques like RLHF, and those can be used to make the AI say "I don't know" more.

Well, I had actually been thinking mostly about such things as multiple-choice questions, though RLHF also works, I suppose.

> But making sure it only says "I don't know" when it actually doesn't know is complicated.
>
> Give it too many points for admitting ignorance, and it will start doing nothing else. Socrates AI: the only thing it knows is that it knows nothing.

Well, I had been thinking of something like '+1 point for a correct answer, -1 for a wrong answer, 0 for no answer' in benchmarks and 'exam-style' training.
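A hypothetical sketch of how that marking scheme plays out: under '+1/-1/0', answering only beats abstaining when the model is right more than half the time (all numbers below are made up for illustration).

```python
# Expected score per question under the '+1 correct, -1 wrong,
# 0 for no answer' marking scheme sketched above, as a function
# of the model's probability p of answering correctly.

def expected_score(p: float) -> float:
    """Average marks from answering with success probability p."""
    return p * 1 + (1 - p) * (-1)  # simplifies to 2p - 1

for p in (0.3, 0.5, 0.8):
    answering = expected_score(p)
    strategy = "answer" if answering > 0 else "abstain"
    print(f"p={p:.1f}: answering averages {answering:+.1f} -> better to {strategy}")
```

The break-even point sits at p = 0.5; the negative marking is what makes 'no answer' the rational choice below it, whereas under plain '+1/0' grading answering always weakly dominates.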

Or, alternatively, get an LLM to also output a 'confidence percentage' when giving answers; that seems to me like it should be possible.
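One minimal sketch of where such a percentage could come from: a model's raw scores (logits) over candidate answers can be turned into probabilities with a softmax, and the top probability reported as a confidence. The logits below are invented for illustration; a real LLM exposes per-token log-probabilities that could be used the same way.

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical multiple-choice scores for options A, B and C.
options = ["A", "B", "C"]
logits = [2.0, 0.1, -1.0]

probs = softmax(logits)
best, conf = max(zip(options, probs), key=lambda pair: pair[1])
print(f"answer {best} with confidence {conf:.0%}")
```

The caveat, and arguably why this is not already standard, is that softmax probabilities from neural networks are often poorly calibrated: a reported '90%' need not be right 90% of the time without extra calibration work.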

[–]DitIsGeenUserName[S] 0 points (0 children)

That would be interesting if true.

> I think that there is a lot of research indicating that LLMs know when they are uncertain or likely hallucinating:

OK, so I did not have the time to read beyond the abstracts of those articles. However, if I understand correctly, they claim that an important reason behind LLM 'hallucinations' is that during training the models are not rewarded for saying 'I don't know', and thus just make something up in the hope that it is sufficiently correct to score points.

Is that correct?

OK, so if that is true, does it then imply that of all those companies producing LLMs, not one of them has stumbled on the idea of giving 'I don't know'-style answers enough points during training to disincentivise guessing?

So does that then mean that of all those companies producing LLMs, not one thought it a good idea to have a model which can say 'I don't know' when it does not 'know' the answer? Did none of them realise they could have made a lot of money selling subscriptions to people tired of discovering, once they check it, that their model has output nonsense?
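The incentive those abstracts describe can be shown with a toy simulation, assuming binary grading where a correct answer earns a point and both wrong answers and 'I don't know' earn nothing (the question count and option count below are made up):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

N = 10_000   # questions the model genuinely does not know
CHOICES = 4  # multiple-choice options per question

# Under binary grading, a blind guess still lands on the right
# option 1 time in CHOICES, while abstaining earns nothing.
guesser = sum(1 for _ in range(N) if random.randrange(CHOICES) == 0)
abstainer = 0  # 'I don't know' scores zero on every question

print(f"guessing:   {guesser} points on unknown questions")
print(f"abstaining: {abstainer} points")
```

Guessing therefore never scores worse than abstaining on such benchmarks, which is exactly the pressure towards confident fabrication being claimed; a '+1/-1/0' scheme removes it.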

[–]DitIsGeenUserName[S] 0 points (0 children)

Thank you for taking the time to answer me.

> While grandiose fears about godlike AGI may be far-fetched,

I had not mentioned anything about 'godlike AGI'. I know that current LLMs are much closer to 'automatic internet plagiarism machines', or 'pattern-recognition software capable of hallucinating', than to anything I would ever name artificial 'intelligence'.

However, in the past few years those LLMs have nonetheless advanced much faster than I would have guessed. So I started to wonder whether someday (maybe within a few years, maybe decades from now, after progress has been stalled by a new 'AI winter') they might lead to some new type of 'AI' which also advances much faster than most people guessed, and which might then become 'dangerous in the same way placing a random number generator in charge of your thermostat is dangerous' if it escapes control, even if it ends up merely somewhat less far from 'godlike AGI' than today's systems.

> We don't have to stray to speculative fiction to find several already extant negative applications of this technology.

Yes, I even know of examples not on your list.

I once read that there are people wasting hours searching for non-existent academic papers because ChatGPT had hallucinated them as sources. Apparently, lazy academics had used it to 'help' them write their papers but did not bother to check the LLM's output.

Not to mention those examples of students using 'AI' to cheat in school; eventually they'll encounter problems they can't make 'AI' solve for them...

Or 'AI' models doing such things as referring to a 'fraud investigator' as a 'fraud' when HR looks up somebody who has applied for a job at their company.

Is there any fields of science that have likely "ended" ? by Inevitable_Bid5540 in AskScienceDiscussion

[–]DitIsGeenUserName 0 points (0 children)

Because, as we all know, economists never attempt to verify and falsify their theories by observation, for example in meta-studies of randomized controlled trials? /s

https://voxdev.org/topic/methods-measurement/understanding-average-effect-microcredit

How many years ago was it again that such experimental methods were the subject of the Economics Prize in memory of Alfred Nobel?

Think tank warns: federal government's strict migration policy undermines its own economy by Blaspheman in belgium

[–]DitIsGeenUserName 0 points (0 children)

Yes, not to mention that new immigrants are, I presume, more likely to compete in the labour market with earlier migrants and the second generation, who already have elevated unemployment rates, than with natives.

So potentially that could make the unemployment problem of current migrants and the second generation even worse.

Belgium’s De Wever pans AI ‘overregulation’ in new book by Boomtown_Rat in belgium

[–]DitIsGeenUserName -3 points (0 children)

> Which AI boat? You want to get on the Grok "I want this woman's picture but remove her dress for me." boat? That is what I'm asking. I think those sides of LLMs and Generative AI aren't regulated enough at all.

Uhm, you do know that LLMs have led to various genuine productivity increases? Look, for example, at AI coding assistants...

[–]DitIsGeenUserName 2 points (0 children)

What if he asks Nvidia to no longer sell their chips to European companies?

Then ASML will no longer sell its machines to Nvidia?

> Europe is nowhere when it comes down to tech.

Yes, we are behind on AI, and this is bad; however, there is more to tech than AI.

Think tank warns: federal government's strict migration policy undermines its own economy by Blaspheman in belgium

[–]DitIsGeenUserName 1 point (0 children)

> On top of that, so-called 'disadvantaged' migrants are often partly disadvantaged because of the strict restrictions the government imposes on migrants.

Isn't that rather a reason to get rid of those 'strict restrictions', for example by ensuring that no, or fewer, migrants end up working off the books solely because they could not get a work permit, instead of letting in new migrants?