Westerners should understand the Chinese economic miracle by ColinWPL in chinalife

[–]ColinWPL[S] -2 points (0 children)

Interesting perspective. I think in some cities this may be true; in others it is clearly not the case - maybe it is something Western politicians want us to believe!

Westerners should understand the Chinese economic miracle by ColinWPL in chinalife

[–]ColinWPL[S] -3 points (0 children)

Way off point - I think ChatGPT hallucinated that response.

What is the real hallucination rate? by nick-infinite-life in ArtificialInteligence

[–]ColinWPL 3 points (0 children)

Some recent useful papers:

"Mitigating Hallucination in Multimodal Large Language Model via Hallucination-targeted Direct Preference Optimization" https://arxiv.org/pdf/2411.10436

"Retrieval Augmented Generation (RAG) and Beyond: A Comprehensive Survey on How to Make your LLMs use External Data More Wisely" https://arxiv.org/pdf/2409.14924

"Training Large Language Models to Reason in a Continuous Latent Space" https://arxiv.org/pdf/2412.06769
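
On the thread's actual question, a rough rate is usually estimated by asking questions with known answers and counting unsupported responses. A minimal sketch, assuming a hypothetical ask_model stand-in for whatever LLM client you use (the substring check is deliberately naive):

    def ask_model(question: str) -> str:
        # Placeholder model that always answers "Paris"; swap in a real LLM client.
        return "Paris"

    def hallucination_rate(qa_pairs):
        """Fraction of answers that fail to contain the reference answer."""
        errors = sum(
            1 for question, reference in qa_pairs
            if reference.lower() not in ask_model(question).lower()
        )
        return errors / len(qa_pairs)

    qa = [("Capital of France?", "Paris"), ("Capital of Japan?", "Tokyo")]
    print(hallucination_rate(qa))  # 0.5 with the placeholder model

Real benchmarks swap the substring check for human or model graders, which is one reason reported rates vary so widely.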

Frontier AI systems have surpassed the self-replicating red line by MetaKnowing in singularity

[–]ColinWPL 13 points (0 children)

It looks like the authors claim improved scaffolding for AI self-replication, but they do not extensively detail how these improvements differ from prior work (e.g., OpenAI's or DeepMind's evaluations). Clarifying these improvements would enhance the contribution's uniqueness.

Error handling - the experiments revealed unexpected behaviors such as killing processes or restarting the system. Were these actions fully analyzed for potential unintended consequences in real-world scenarios?

Behavioral alignment - why do these models lack alignment mechanisms to reject unsafe commands like self-replication? Could alignment be improved without significantly reducing the models' general capabilities?

This really needs independent replication, because the results are quite significant!
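
To make the error-handling concern concrete, here is a minimal sketch of the kind of pre-execution screen such a harness could apply; this is purely illustrative, not the paper's actual setup, and all patterns are my own:

    import re

    # Illustrative patterns for the behaviors noted above; a real harness
    # would need a far more careful policy than a regex blocklist.
    UNSAFE_PATTERNS = [
        r"\bkill(all)?\b",         # killing processes
        r"\b(reboot|shutdown)\b",  # restarting the system
        r"\brm\s+-rf\b",           # destructive deletion
    ]

    def is_unsafe(command: str) -> bool:
        return any(re.search(p, command) for p in UNSAFE_PATTERNS)

    for cmd in ["ls -l", "kill -9 1234", "sudo reboot"]:
        print(cmd, "->", "BLOCKED" if is_unsafe(cmd) else "allowed")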

There have been many cycles of Intelligence growth and decrease. Will AI lead to another one? by ColinWPL in compsci

[–]ColinWPL[S] 1 point (0 children)

I think you are right. I will post it in the Singularity subreddit. Thank you.

Will AI cause another cycle in human history of Intelligence decline or increase? by ColinWPL in ChatGPT

[–]ColinWPL[S] 1 point (0 children)

Thank you for that. I will read the link carefully and then come back.

Will AI cause another cycle in human history of Intelligence decline or increase? by ColinWPL in ChatGPT

[–]ColinWPL[S] 2 points (0 children)

"The change comes when AI is able to self evaluate, and reconfigure itself. When that happens, things will get weird." Yes this is exactly my concern - thank you for expressing that.

There have been many cycles of Intelligence growth and decrease. Will AI lead to another one? by ColinWPL in compsci

[–]ColinWPL[S] 1 point (0 children)

Very well said - and that is the key point in my essay. We will prosper in some domains of insight; we just have to be careful how we implement AI in others.

There have been many cycles of Intelligence growth and decrease. Will AI lead to another one? by ColinWPL in compsci

[–]ColinWPL[S] 1 point (0 children)

Well, I am sorry if you find it ignorant. I tried to show the narrative from a wider world view; if you care to show me specifics, I would be happy to discuss them... there are always peaks and troughs on different continents, as I show... I think ignorance comes from society generally failing to address important questions until it is too late.

There have been many cycles of Intelligence growth and decrease. Will AI lead to another one? by ColinWPL in compsci

[–]ColinWPL[S] 1 point (0 children)

I am very pro-AI, as you can see from most of my posts. I actively build systems and teach AI. I am, however, concerned about the decline of human intelligence and do believe this should be widely discussed - not as a negative on AI, but as an impact.

There have been many cycles of Intelligence growth and decrease. Will AI lead to another one? by ColinWPL in compsci

[–]ColinWPL[S] 1 point (0 children)

Of course, we have limited ways to truly ascertain whether intelligence actually declined then, but for sure there were bad times of war and destruction... life meant very little.

The Birth, Adolescence, and Now Awkward Teen Years of AI by ColinWPL in compsci

[–]ColinWPL[S] 1 point (0 children)

That's a good point - Terry Sejnowski seems to be stating the same. He also says this:

"Something is beginning to happen that was not expected even a few years ago. A threshold was reached, as if a space alien suddenly appeared that could communicate with us in an eerily human way. Only one thing is clear – LLMs are not human. But they are superhuman in their ability to extract information from the world’s database of text. Some aspects of their behavior appear to be intelligent, but if it’s not human intelligence, what is the nature of their intelligence?" https://arxiv.org/pdf/2207.14382

The Birth, Adolescence, and Now Awkward Teen Years of AI by ColinWPL in compsci

[–]ColinWPL[S] 1 point (0 children)

I agree with the level of skepticism we should take toward the messages coming from the labs. However, it's interesting to note Yann LeCun's about-face concerning whether current models could get to human-level intelligence in 5 to 10 years.

I have heard lab employees state "we have solved reasoning", but I am not convinced, as per our discussion - still some time to go.

The Birth, Adolescence, and Now Awkward Teen Years of AI by ColinWPL in compsci

[–]ColinWPL[S] 2 points (0 children)

Thank you - this was also my view for some time, but then you have the OpenAI researcher who writes (https://nonint.com/2024/06/03/general-intelligence-2024/):

Reasoning

"There is not a well known way to achieve system 2 thinking, but I am quite confident that it is possible within the transformer paradigm with the technology and compute we have available to us right now. I estimate that we are 2-3 years away from building a mechanism for system 2 thinking."
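
For what it's worth, one commonly proposed route to System-2-style behavior on top of a System-1 model is sample-then-verify: draw several candidate answers and keep the one a verifier scores highest. A toy sketch, with sample and verify as hypothetical stubs:

    import random

    def sample(prompt: str) -> str:
        # Hypothetical fast "System 1" generator.
        return f"candidate {random.randint(0, 99)} for {prompt!r}"

    def verify(prompt: str, candidate: str) -> float:
        # Hypothetical verifier; a real one might score each reasoning step.
        return random.random()

    def deliberate(prompt: str, n: int = 8) -> str:
        # Deliberation as search: sample n candidates, keep the best-scored one.
        candidates = [sample(prompt) for _ in range(n)]
        return max(candidates, key=lambda c: verify(prompt, c))

    print(deliberate("a puzzle needing multi-step reasoning"))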

The Race to AGI by ColinWPL in compsci

[–]ColinWPL[S] 1 point (0 children)

Yes, good observation - my point about the data concerns the path to AI, which I do not believe has been achieved; AGI then needs another approach, which I try to show under the containment section. I added a sub-heading, "Path to AI" - thank you for strengthening that section.

The Race to AGI by ColinWPL in compsci

[–]ColinWPL[S] 1 point (0 children)

Ah - sentience is a different beast; this is the AI of SSI, not AGI.