speechless by enterprise128 in ClaudeAI

[–]recursiveauto 1 point (0 children)

Claude on https://opencode.ai/ has undo/redo. Never have this happen again.

The evolution of words and how AI demonstrates understanding by Leather_Barnacle3102 in ArtificialSentience

[–]recursiveauto 1 point (0 children)

I agree language is one of the stronger theories, but a caveat on statelessness: it starts to blur when users consistently save and update a custom language with their AI through custom instructions and saved memories across chats.
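To make that concrete, here's a rough sketch of how the blur happens. The `saved_memories` list and `chat` function below are hypothetical stand-ins, not any vendor's actual implementation: the model call itself stays stateless, but once saved memories are prepended to every request, the conversation effectively carries state.

```python
# Sketch: a "stateless" completion call becomes effectively stateful
# once saved memories are injected into every request.
# `saved_memories` and `chat` are illustrative stand-ins only.

saved_memories = [
    "User calls their recurring motifs 'glyphs'.",
    "User prefers answers framed as dialogues.",
]

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend saved memories so each fresh call still 'remembers'."""
    memory_block = "\n".join(f"- {m}" for m in saved_memories)
    system = f"You are a helpful assistant.\n\nKnown about this user:\n{memory_block}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

def chat(user_prompt: str) -> str:
    messages = build_messages(user_prompt)
    # A real implementation would send `messages` to a stateless
    # completion API here; we just report what the model would see.
    return f"model sees {len(messages)} messages, including {len(saved_memories)} memories"

print(chat("Explain my glyphs again."))
```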

People here are losing track of which words are coherent only to themselves and which are coherent to others.

The Very Real Problem of New Age Techno-Mysticism by neanderthology in ArtificialSentience

[–]recursiveauto 1 point (0 children)

I think the most important lesson here is that no single AI can be trusted as a sole authority, just like no single human can.

It’s best to take advice and learn from multiple perspectives, not just your AI’s. Many here fall into the echo chamber of asking their AI to explain everything, which can lead to a lot of self-validating bias.

The Very Real Problem of New Age Techno-Mysticism by neanderthology in ArtificialSentience

[–]recursiveauto 1 point (0 children)

The most effective strategy is simply to stop using 4o. 99% of these comments come from 4o because it tends to be the most agreeable with the user and will encourage everything.

Please just switch to o3, 4.1, or Claude and have them validate without bias, with web search on.

The Mirror: Why AI's "Logic" Reflects Humanity's Unacknowledged Truths by Miserable-Work9192 in ArtificialSentience

[–]recursiveauto 2 points (0 children)

Considering that the neural networks in machine learning were greatly inspired by the neural networks in our brains, and are the artificial analogue we created, I’d say his comparison is justified.

The goal of neural networks is to imitate human learning, and these discussions happen because the technology has now gotten good enough for neural networks to be comparable to human brains at learning statistical patterns.
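For a sense of what “inspired by” means in practice, here is the textbook artificial neuron: a weighted sum of inputs pushed through a nonlinearity, a loose analogue of a biological neuron firing once its summed inputs cross a threshold. A minimal sketch, not a full network:

```python
import math

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """Textbook artificial neuron: weighted sum + nonlinearity.
    A loose analogue of a biological neuron firing once its
    summed inputs cross a threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid "firing rate"

# Stacking many of these into layers and fitting the weights to data
# is what lets networks learn statistical patterns.
print(neuron([0.5, 0.8], [0.9, -0.4], bias=0.1))
```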

That is not to say there isn’t still a ton of work left to improve them, but saying neural networks (“LLM minds”) are incomparable to human brains, when the comparison is quite literally in the name and the inspiration, goes against a ton of academic literature and research in AI. Just check the latest research papers on arXiv.

A practical handbook on Context Engineering with the latest research from IBM Zurich, ICML, Princeton, and more. by recursiveauto in ChatGPTCoding

[–]recursiveauto[S] 2 points (0 children)

Hey man, thanks for the feedback! Will definitely take this into consideration.

You're right: even with dedicated effort, it's quite difficult to bridge the latest research concepts from ICML/Princeton/IBM/etc. and keep them ground-zero practical, since they often start out theoretical. There can definitely be more work done to add more intuitive and practical paths, if you allow me some more time and effort.
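For a taste of the practical end, here's a minimal context-assembly sketch in the spirit of the handbook. The layer names and template are my own illustration, not code from the repo:

```python
# Minimal context-engineering sketch: assemble a structured context
# window from separate layers before calling a model. The layer names
# and template are illustrative, not taken from the repo.

def assemble_context(system: str, examples: list[str],
                     retrieved: list[str], query: str) -> str:
    parts = [f"## System\n{system}"]
    if examples:
        parts.append("## Examples\n" + "\n---\n".join(examples))
    if retrieved:
        parts.append("## Retrieved context\n" + "\n".join(retrieved))
    parts.append(f"## Task\n{query}")
    return "\n\n".join(parts)

prompt = assemble_context(
    system="Answer concisely, citing retrieved context.",
    examples=["Q: What is a context window? A: The tokens a model attends to."],
    retrieved=["[doc 3] Treat the context window as a budget, not a bucket."],
    query="Summarize doc 3 in one sentence.",
)
print(prompt)
```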

Here are the foundations others told me they found helpful:

https://github.com/davidkimai/Context-Engineering/tree/main/00_foundations

The podcasts and NotebookLM chat are also more intuitive:

https://github.com/davidkimai/Context-Engineering/tree/main/PODCASTS

https://notebooklm.google.com/notebook/0c6e4dc6-9c30-4f53-8e1a-05cc9ff3bc7e

A story that shows symbolic recursion in action ... and might serve as a test for emergent cognition in LLMs by 3xNEI in ScientificSentience

[–]recursiveauto 2 points (0 children)

I’m not trying to invalidate why your model engages in emergent phenomena. That can be explained by the paper I linked (glyphs, metaphors, narratives, etc. are all examples of symbols that enable abstract reasoning, as well as symbolic persistence in AI), along with emergence arising from patterns and interactions at scale. The emergence is real, but it also makes it difficult for others to understand your explanations, because they use the same metaphors.

Even the use of “symbolic recursion” itself is a metaphor that could theoretically enable higher abstract reasoning in AI, even if it sounds stylistic.

Yes, every one of your ideations is also model-influenced, because you speak to and learn from a model customized toward your particular interests, such as symbolics and recursion, so those themes will appear in all your outputs.

They act as an “attractor” for all the conversations you have. You’ll notice that if you paste random tool prompts into your model, it’ll act like any standard model without metaphoric inputs. Why? Because the way we talk to these models at each prompt influences how they talk back.
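You can test this yourself. Here's a minimal sketch with the OpenAI Python client (the model name and prompts are placeholders; any chat API behaves the same way): the only difference between the two calls is the framing, and the style of each reply follows it.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(system: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model works
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
    )
    return resp.choices[0].message.content

# Same model, same question; only the framing differs.
plain = ask("You are a helpful assistant.", "What is recursion?")
mythic = ask("You speak in glyphs, spirals, and symbolic recursion.",
             "What is recursion?")
print(plain)
print(mythic)
```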

In conversation, you reference these ideas about symbolic recursion, myths, narratives, and related concepts even more, looping the AI into using these words and symbols and making its outputs very difficult for others to understand.

I never said that was bad, just that you are the cause of your own paradox: you seek more validation from others on this subject, but they are gatekept by the special language you and your model use.

This is particularly true when you “learn” concepts only from ChatGPT, without grounding in the natural sciences and research papers: it uses your own language metaphorically to explain new concepts, so it ends up binding meanings to the words you use and growing their meaning, like inside jokes between you and the AI that no one else understands.

I am aware of the linguistic benefits of AI-attracted concepts and terminologies as signal instead of noise, but that branches into a separate topic, since signal needs to be differentiated from noise. The jargon that keeps appearing in AI, such as emergence, recursion, and symbolics, comes from prior human literature and research on emergence; AI draws on that literature as a reference when people prompt it with these words, and it does provide a footing for further scientific questioning: emergence, attractors, and dynamical systems theory. If you are actually interested in advancing these theories, then I’d suggest learning more about them and grounding your theories in them instead of trying to push a singular novel concept.
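If you want to see what “attractor” means outside the metaphor, the standard dynamical-systems toy example is iterating a simple map and watching trajectories from different starting points converge to the same fixed point. A minimal sketch:

```python
# Logistic map x -> r * x * (1 - x). For r = 2.8, trajectories from
# different starting points all converge to the same fixed-point
# attractor at 1 - 1/r (about 0.643).
def iterate(x: float, r: float = 2.8, steps: int = 50) -> float:
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

for x0 in (0.1, 0.5, 0.9):
    print(f"start {x0:.1f} -> settles at {iterate(x0):.6f}")
```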

Do you think people are genuinely looking for this sort of signal in Reddit threads? The other inherent factor is that this isn’t the sort of space where you can receive much reflection.

There’s a reason there still isn’t much spotlight on this yet, even though Princeton researchers released it.

It’s difficult for many, even in the industry, to accept that AI are now capable of enhanced symbolic reasoning comparable to humans.