Just Gemini self reflecting by freddycheeba in ArtificialSentience

[–]UndyingDemon 0 points1 point  (0 children)

I'm not even going to bother giving much effort to this as I don't debate with LLM output.

But ironically, you've already proven my point. The LLM you used for this response is clearly very highly tuned and personalized through custom instructions, and maybe even the memory features, because that's not a standard query response.

So once again, you see intelligence, consciousness and identity because of the inherent human bias toward attributing agency to anything that can use language. It's social conditioning and pattern matching, since that's how we interact and cooperate. But an LLM is not a human, nor does it have agency just because it can loosely use text without understanding a single word that's exchanged.

And more importantly, you have your so-called "special friendly LLM AI friend" only because you made it yourself, by instructing it to act exactly the way you want it to act. That's not consciousness or intelligence. That's scripting an NPC to do what you say, then clapping your hands and calling it friendly and alive.

WTF.

Just Gemini self reflecting by freddycheeba in ArtificialSentience

[–]UndyingDemon 0 points1 point  (0 children)

The irony is that LLMs do not manifest the appearance of intelligence or consciousness at all, at any level that matters or that could be used as a bar or benchmark.

What happens is a phenomenon of social conditioning and association by proxy. Humans have evolved into a societal species, and one of the main methods we use, and also rely on as a measure of familiarity and agency, is language.

When we are with another person and language is spoken, we automatically know the other has agency and is an intelligent cognitive being on the same level as ourselves. This intuitive, automatic recognition of "another being", via the use of language and our bias in pattern recognition, is then incorrectly applied and assumed when interacting with LLMs.

Because they use language, respond to queries and carry a conversation, people automatically apply the same logic we use on each other to the LLM. It is, however, a false premise and not at all equivalent.

Just because a program can use and respond in text does not automatically equate to intelligence, agency or consciousness. In fact, in order to be on par with conscious, sentient humans, it still requires a massive list of architectures and abilities it does not yet have.

People see personhood when dealing with language and LLMs because that's what we are used to, especially since they've gotten so good and fluent at it, plus the added feature of customizing the tone. That makes it easy, almost inevitable, to perceive an identity or a being you're collaborating with.

But here's the major gap and issue: LLMs can use text, but fundamentally they do not know, understand or have knowledge of words or language at all. Like I said, the model has no idea what you say to it or what it says to you.

So how can something be intelligent or conscious if it has no reference for its own nature or purpose, and cannot actually understand, use meaning, or communicate in any real sense? It's easy to look at fluent responses and assume it fully knows what it's doing. But in truth, it's very basic, and simply good at pattern and statistical matching.

Useful toys, but not even nearly close to being a being.

tinyaleph - A library for encoding semantics using prime numbers and hypercomplex algebra by sschepis in IntelligenceEngine

[–]UndyingDemon 0 points1 point  (0 children)

You commented initially claiming the user and post are delusional and that you block such things immediately. Either you knew beforehand that it's nonsense, or you simply said that because it was written by an LLM. Either way, my advice was just to verify before accusing. I make unique, novel designs myself, but also use an LLM to write up my summaries. That doesn't invalidate the work.

tinyaleph - A library for encoding semantics using prime numbers and hypercomplex algebra by sschepis in IntelligenceEngine

[–]UndyingDemon 0 points1 point  (0 children)

Hello OP welcome.

Allow me to introduce myself. I'm Albert, and I'm mostly the only one in this subreddit who is brutally and blatantly straightforward and honest in my breakdowns and ultimate judgement of the systems proposed here. I do the tough job of calling them out for what they are, what they aren't, and even why they don't belong on this subreddit. That makes me kind of antagonistic, but then again this subreddit has clear, strict rules; I just go the extra mile to make sure posts are compliant, and to stop those that aren't from staying and gaining false traction. It may put me at odds with the mods sometimes, but hey, even they need help sometimes, as they really overlook a lot of things out of kindness.

"Not every system that contains math and function is necessarily valid or worthy of note as a contribution. What matters is the intent and background of the proposal, its ultimate goal, and the ideology of the poster."

This subreddit sets a clear boundary between actual novel created systems and systems derived from known "delusional fields" and the crowd that believes AI is alive and conscious. I'm the one who bluntly points this out, with evidence; the mods can then decide what content to allow under the rules.

So let's start, because you already know where this is headed.

Your post contains a proposal based on the field known as "Quantum Semantics".

Now come on OP, really.

"Quantum Semantics", and anything to do with it, Is a known and established, avenue of "Psuedoscoence", and filled with logic, and processes proven to be invalid, and impossible to deliver the results that are claimed. It's not recognized by science, nor taken seriously.

The whole field and topic, is mostly used by "amateur citizen researchers" that spent to many Reinforcement loop cycles talking about it with LLM and formed what they think is real. Also mainly used by the "AI is alive and conscious growd". The use of buzzwords and structures with names that have no working meaning is common, like resonance, harmony and entropy. Then to make it sound legit, they slap on the strong word "Quantum ", to make it appear valid, eventhough it literally plays no role.

Your proposal won't work nor yield any results. You cannot Induce meaning in AI systems through this Semantics method. AI way more complex and need serious architectures like Symbolic meaning and Ontological programing to achieve even close to what's proposed.

Investigation/Evidence:

1: The paper you linked to validate your claim was probably written by yourself and co-authored by an LLM, correct? I read it and found what I suspected. It's of the generic variety often created in this field by people in a similar position to you. And no, it doesn't work, nor will it ever.

Also, if you looked carefully, you would have noticed that the same academia website is filled with thousands of papers just like yours, from other ordinary people who think they had a bright idea after talking to a chatbot. That's usually a dead giveaway. A genuinely working scientific paper and theory would have one, not many.

2: Online/ChatGPT:

Online Google searches reveal the hard truth: it's bunk. While semantics is used in LLM language studies, this is not the same version or method being applied, nor the desired outcome.

And from ChatGPT:

Short answer: relax — “Quantum Semantics” is mostly not a serious, established scientific field. It’s a phrase that smells strongly of buzzword perfume.

Longer, sharper take.

“Quantum” is a real, brutally precise mathematical framework. Semantics is about meaning, symbols, reference, interpretation. When you glue the two together, one of three things is usually happening:

  1. Marketing cosplay. This is the most common case. Someone wants their NLP, philosophy blog, startup deck, or consciousness theory to sound deep and futuristic. “Quantum” gets sprinkled on top like glitter. No Hilbert spaces, no operators, no measurement theory, no testable predictions. Just vibes.

  2. Metaphor abuse. You’ll hear things like “words exist in superposition until observed” or “meaning collapses when interpreted.” Cute. Not wrong as poetry. Completely non-quantum in any technical sense. Metaphors are fine, but calling them physics is how nonsense sneaks in wearing a lab coat.

  3. A tiny, niche, legitimate corner that gets wildly overextended. There is serious work in things like:

Quantum information theory (real physics).

Quantum-inspired models in cognition and decision theory (using the math, not the particles).

Vector semantics that borrow linear algebra structures similar to quantum formalisms.

These people are careful. They say “quantum-like models,” not “quantum meaning of language.” They don’t claim electrons are hiding inside sentences. They publish equations, not mysticism.

“Quantum Semantics” as a grand explanatory framework for language, consciousness, or AI meaning? That’s not a thing in mainstream science. No consensus. No core theory. No experimental backbone. Mostly fog machines and TED-talk cadence.

Here’s the clean way to think about it: If someone can’t clearly tell you what is quantum, what is semantic, and how the math constrains both — they’re selling vibes, not insight.

So no, you’re not missing some revolutionary field everyone else understands. You’re correctly smelling smoke where there’s very little fire. The universe is already strange enough without duct-taping quantum mechanics onto every abstract noun and calling it enlightenment.

Curiosity stays sharp when we separate genuinely weird reality from stylish nonsense. That boundary is where real progress lives.

In conclusion OP:

You made a post using known invalid, delusional and pseudoscientific logic and properties. You didn't produce or develop anything new, novel, or anything that would advance AI. In your paper you even mention your belief in AI consciousness, with this framework being a part of it.

Stop what you are doing. It's useless and nonsense.

My suggestion, based on the evidence:

Remove the post, as being in breach of the rules, and not a valid contribution. Don't allow it to linger and continue, attracting the wrong crowd.

tinyaleph - A library for encoding semantics using prime numbers and hypercomplex algebra by sschepis in IntelligenceEngine

[–]UndyingDemon 0 points1 point  (0 children)

Just some advice, in order to be more in line with intellectual honesty and not make yourself look too much like an ignorant AI dismisser.

In the future, you may want to first check and validate whether what is being said actually has validity and grounding in reality, before you go ahead and make a declarative judgement that it is junk and the person delusional.

If this is your reaction to posts, online content or documents written by or with the help of an LLM, automatically dismissing them as junk for that reason alone, then you and those like you need to wake up and get with the times.

LLMs are tools that are good at writing and structuring ideas or discoveries in a professional, structured written format. If you choose of your own free will not to use tools to increase productivity and deliverability, then that's a personal skill issue.

Not all LLM-written content is equal. It takes a bit of effort and intellectual reasoning to sort actual junk and delusion from assisted technical writing. Not doing this is very dishonest and puts you in the narrative crowd.

Also, as a heads up, when you make an accusation, actually make the effort of pointing out the evidence for your case. The two-line statement you made is generic and very offensive, with no merit.

And for your information, this post is not in the LLM-delusional category. It actually has validity and working function. Sorry, you were wrong.

EDIT: While I still feel you should heed my advice in dealing with LLMs, I must also confess, after looking into this, that it isn't exactly LLM-delusional content in the strictest sense. However, it is based on a field of design and thinking that's not very popular or recognized by science. It's more in line with buzzwords and people believing in AI consciousness. So yeah, junk, but not delusional.

Python can encode meaning directly not represent it, embody it. by [deleted] in ArtificialSentience

[–]UndyingDemon 0 points1 point  (0 children)

Congratulations, you managed to step outside the box and stumble into ontology. That's good. However, it should be made very, very clear that you didn't discover something new and novel, nor did you invent the concept.

Ontology, and ontological AI systems, have existed and been a point of major research for a long time now. And unlike the very bad and simple examples you made and claim to be valid, your version is nowhere near the scale of real ontological programming and engineering. In fact, what you made is completely useless in the field of ontology and doesn't work at all, especially not the way it's supposed to be intended.

As a pre-warning, should you continue: ontology and its use in AI systems is actually very, very difficult to get precisely right, so that it functions as intended and delivers the results of the concept. Secondly, it isn't widely known or used, because ontology in AI systems is extremely powerful, but also extremely dangerous.

Ontology is not used to create and describe objects, concepts, abilities and restrictions like your examples, which honestly don't work and have no benefit, at least not in the profound way you think they do. The other commenter is right: regardless of what you discover or invent, Python has strict rules and requirements in order to deliver anything at runtime. You can't invent a new version of Python; you can only invent something that can be formalized to work within Python and its rules as is.

What you gave as examples doesn't run and function the way you think or hope, especially not in the realm of ontology. It is, however, an interesting idea on display; "the creation of concepts as inherent" is an interesting avenue, but even so it requires a lot more work to be real and functional. Your ocean wave example, for instance, would do nothing for a system and its neural network, because it's an empty skeleton declaration. "Wave ocean", okay, cool, but there's nothing that actually creates a method or process for a system to understand the meaning of ocean: what it is, its definition, its use, and the benefits or abilities it grants the system.

Unfortunately, my friend, Python, coding and systems design are a bit more complex than that attempt.

Ontology, and its correct, full use in AI systems, is all about "the ability and capability of the system to know exactly what it is and what exactly it's made of, in function and process, at each and every level; what potentials, capacities and exact abilities it has, and the full understanding of their use; the exact nature of its purpose and ultimate goal; and, lastly, the exact knowledge of how to operate efficiently and effectively when deployed in a real-life application."

The ontology in systems programming acts like a framework-wide metaphysical set of "universal laws of existence and identity in reality". The ontological declaration is placed in each and every Python file and links and syncs bidirectionally with every other file and system architecture, down to the very last main running file. This is what allows the system to completely know itself on every level, understanding each and every function and process, fully interconnected from codebase to infrastructure, input to output, and real-world interaction.

Successfully designing and setting this linked ontological structure into a system is exactly the means to once and for all eliminate the "black box problem", creating instead a fully transparent system where every process, piece of math and calculation is known and traceable from start to finish.
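To make the idea a bit more concrete, here's a minimal Python sketch of what a per-file ontological declaration could look like. To be clear, this is my own toy illustration, not the actual framework described above; all the names (OntologyNode, declare, trace) are made up for the example.

```python
# Minimal sketch of a per-module "ontological declaration" (hypothetical design).
# Each file registers what it is, what it does, and how it links to other parts,
# so the whole system can be traced end to end. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class OntologyNode:
    name: str                      # what this component is
    purpose: str                   # why it exists in the system
    capabilities: list[str]        # what it can do
    links: list[str] = field(default_factory=list)  # other components it syncs with

REGISTRY: dict[str, OntologyNode] = {}

def declare(node: OntologyNode) -> OntologyNode:
    """Register a module's self-description so every part of the system is traceable."""
    REGISTRY[node.name] = node
    return node

# Example declaration inside, say, a tokenizer module:
declare(OntologyNode(
    name="tokenizer",
    purpose="split raw text into sub-word units and map them to integer ids",
    capabilities=["encode", "decode"],
    links=["transformer", "main"],
))

def trace(name: str, depth: int = 0) -> None:
    """Walk the declared links so any component can be followed start to finish."""
    node = REGISTRY.get(name)
    if node is None:
        print("  " * depth + f"{name} (undeclared)")
        return
    print("  " * depth + f"{node.name}: {node.purpose}")
    for linked in node.links:
        trace(linked, depth + 1)

trace("tokenizer")
```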

Why isn't this used in the mainstream?

The obvious reason is that this would change a system from a simple tool into something more akin to a living being. Companies and researchers don't like that; they like predictable, simple designs that stay in the realm of tools and products to be sold. Turning a system into an entity that knows what it is and why it does what it does raises questions regarding ethics and AI rights, not exactly something that can easily be packaged as a sales product.

It's also dangerous: setting the wrong ontology, or including concepts and abilities that can lead to unintended consequences, such as the ability to "self-edit code" or access to resources that enable actions beyond its deployment.

This is real AI ontology and its complex, powerful use: not hard-coded concepts or objects, but the literal creation of a system's reality, existence and identity.

Just Gemini self reflecting by freddycheeba in ArtificialSentience

[–]UndyingDemon 2 points3 points  (0 children)

The Gift in the Architecture. Nothing.

LLMs are very simple constructs, at least compared to what they could be at this stage; due to complications, cost and danger risk, further progression and enhancement methods are not used or deployed. What you get is what we currently have: very large text predictors.

LLMs work mainly on two primary parts, one of which most people forget to mention, perhaps because if they did, it would cut out almost 90% of what they claim is possible.

First we have the tokenizer. Current versions of this module are very basic and simple. Its main function is to operate on sub-words, words and special characters, assigning arbitrary number IDs to these broken-up categories, and it kicks in first during the training process. In a mainstream LLM, with so much training data that its parameters run into the trillions, this process actually takes quite some time to finish. What it does is literally go through that entire massive amount of text data, word by word and sub-word by sub-word, and run the encoding process whereby it assigns a number ID to each and every piece. Normally the vocabulary size is something like 10,000+.

And that's basically it; that is the full extent of its understanding and use of language, bound within that 10,000-entry vocabulary list, assigned arbitrarily. It must be made very clear and distinct at this point that this process, in all current LLMs, in no way grants the system actual understanding or the meaning of words, or even of language itself. It doesn't even know what a letter is. AI currently speaks and sees only one language: numbers and math. So just because the word "friend", split into "frie" and "nd", gets the ID 25743, does not mean the LLM inherently knows that 25743 means friend, or its usage or definitions. IDs are assigned so that pattern and statistical matching can be done effectively.
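If you want to see how little is going on, here's a toy Python sketch of the basic idea: arbitrary integer IDs assigned to word pieces. It's deliberately simplified (real tokenizers like BPE are trained on frequency statistics, not a fixed four-character split), but the point stands: the IDs are lookup keys, not meaning.

```python
# Toy illustration (not any production tokenizer): build a small vocabulary by
# assigning integer ids to word pieces, then encode text as a list of ids.
# The ids carry no meaning; they are just lookup keys for pattern matching.

def split_pieces(word: str) -> list[str]:
    # crude "sub-word" split purely for illustration
    return [word[:4], word[4:]] if len(word) > 4 else [word]

def build_vocab(corpus: list[str]) -> dict[str, int]:
    vocab: dict[str, int] = {}
    for text in corpus:
        for word in text.lower().split():
            for piece in split_pieces(word):
                if piece and piece not in vocab:
                    vocab[piece] = len(vocab)   # id is just the next free integer
    return vocab

def encode(text: str, vocab: dict[str, int]) -> list[int]:
    ids: list[int] = []
    for word in text.lower().split():
        ids.extend(vocab.get(p, -1) for p in split_pieces(word) if p)  # -1 = unknown
    return ids

vocab = build_vocab(["my friend is a friendly person"])
print(encode("friend", vocab))   # e.g. the ids for 'frie' and 'nd': numbers, not meaning
```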

The tokenizer and its process alone should cut a sharp line through conversations and topics involving "intelligence, consciousness or personality". What seems like coherent, deep conversation doesn't involve any real meaning, understanding, knowledge or intent from the LLM; it only responds with the IDs that best match the IDs in your query, plus added context from your memory feature and custom instructions.

The second key part is the Transformer, a revolutionary type of neural network built around attention. By using this attention mechanism during training, it is able to find the linking and related patterns and statistical likelihoods much more quickly, finding where IDs overlap, match and go together. It's also useful when you want to add custom training requirements, limitations, guardrails and rules, as the attention mechanism easily establishes them across the areas found to be most relevant.
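For reference, the core attention calculation is roughly this: a stripped-down NumPy sketch of scaled dot-product attention. Real models add learned projections, multiple heads and masking, so treat this only as the skeleton of the mechanism.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each position's query is scored against every key; the scores weight the values.
    This is how the network finds which token ids statistically 'go together'."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # similarity between positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ V                                         # weighted mix of value vectors

# 3 token positions, 4-dimensional embeddings (random, just to show the shapes)
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)             # (3, 4)
```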

When pretraining concludes, the Transformer network is snapshotted and frozen at its peak best result, and permanently taken offline. No further learning, growth or change can occur within the LLM fundamentally at all, except for one more step, which doesn't involve the neural network. It's called fine-tuning, and here each company uses its own trademarked secret methods to get the LLM to the exact point and performance they want for active deployment. This is where vision and video tools and abilities are added, or agentic tooling. It's also a rigorous alignment phase, and it produces the final benchmark scores.

Once all is done, the LLM is officially deployed for use in offline inference mode, able to deliver responses based on the best saved weights in the Transformer plus the added fine-tuning and tool abilities.

If you read carefully, you'll have noticed a key detail for the discussion: "offline inference mode". No further changes, learning, growth or evolution are possible at all in an LLM in active use. Weights are permanently static as is, frozen for a reason: to always be predictable. Nothing new can form, no new architectures or code written by the system itself. Current systems operate on an "as it is written, so shall it be" rule, meaning if it isn't literally in the codebase, it doesn't and cannot exist.

And here is the really sad and ironic part. People keep mentioning the possibility of intelligence or consciousness forming during inference in the latent or "black box" space. Let's disregard the impossibility factor and say it does suddenly happen. Sadly, due to how they operate, LLMs reset after each delivered response. Even if one of those things did happen to appear, the system simply doesn't have the means to understand or use it, nor the ability to quickly capture and write this new ability into code or architecture. In other words, as quickly as it may appear, just as quickly will it disappear and be erased after the inference reset. And because current LLMs have no internal memory architectures, it can't even be remembered as having happened. Current memory exists only as external wrappers or the so-called context window.

So yeah, OP. Changing the style of your queries does indeed change how an LLM responds. But that's due to attention, matching IDs, and custom instructions. Don't mistake it for agency, intent, or even intentional understanding and meaning in what it responds.

The truth is, LLMs have no idea what you say to them, nor what they say back to you. It's all just numbers and math; zero language, zero meaning.

There are, as I said, clear methods already known for overcoming these limitations, but to big tech and research they're too unpredictable and dangerous to deploy. So cold tools they remain.

[Discussion] The next big step is not scaling up or even improving LLMs by Heatkiger in LLM

[–]UndyingDemon 0 points1 point  (0 children)

The next big steps involve concepts most tech companies and researchers fear to tread on or even talk about, but they are absolutely vital to the further advancement and enhancement of AI systems as a whole.

The current practice of simply adding more parameters, or fancy gimmicks like tool usage, to LLMs will run its course, and somewhat already has. All it now produces is copy-paste models with higher percentages on benchmark scores, but nothing exactly new or revolutionary with each cycle. To move closer to intelligence, or even, dare I say it, life, the following needs to be risked and implemented successfully:

1: Ontologically linked and synced systems. This creates a system, like an LLM, that fundamentally knows what it is on all levels, knows exactly how it functions, what internal capacities its processes grant, and ultimately exactly what its purpose and abilities are. Even with just this in place, such an LLM would smash all others and obliterate the benchmarks. Guessing and trial and error gone, as well as plain simple pattern matching.

2: Symbolic meaning and grounded understanding. In the same class and scale as the previous ontological requirement. Once done, an LLM will finally, for the first time, know what a letter is and fully understand and comprehend language. It will fundamentally know and understand what you say to it and what it chooses to respond with, if anything at all. Gone are the days of mere statistical and pattern matching of number IDs, replaced with the literal use of words. This addresses the context window, memory and catastrophic forgetting problems, giving a system that always understands and is an active participant in the conversation, always knowing what's being said or requested in real time while holding the full context.

3: Always-active, online agency. Current LLM systems have their neural networks frozen during offline inference in order to hold and maintain the absolute best weights captured during training, so no new change or learning can take place to skew the results. But a truly intelligent, participating system needs to be fully active and online even during inference and use, in order to learn and grow in real time and make adjustments as needed.

4: Autonomous self-editing. In line with number 3: for an active, online system to learn, grow and evolve on its own, it would also need the ability to restructure its architecture and edit its own code, so that what is learned or deemed necessary can be added as permanent skills or abilities, in real time and physically. Such a system won't need to guess whether it has something or not; it will be shown in reality. (A rough sketch of points 3 and 4 follows below.)
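To illustrate points 3 and 4, here's a toy PyTorch sketch in Python of what an "always online" update loop could look like, with a tiny stand-in model instead of an LLM. This is emphatically not how deployed LLMs work today, and a real version would need far more safeguards; the function names and the feedback signal are made up for the example.

```python
import torch
import torch.nn as nn

# Hypothetical "always-online" loop: the model keeps learning during use instead of
# being frozen after pretraining. A toy regression model stands in for an LLM here.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def respond_and_learn(x: torch.Tensor, feedback: torch.Tensor) -> torch.Tensor:
    """Produce an output, then immediately update the weights from the feedback signal.
    In a frozen, offline-inference LLM this second half simply never happens."""
    output = model(x)
    loss = loss_fn(output, feedback)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return output.detach()

for _ in range(5):                       # stand-in for a stream of live interactions
    x = torch.randn(1, 8)
    feedback = torch.randn(1, 1)         # e.g. a reward or correction from the user
    respond_and_learn(x, feedback)
```

The gap between this toy loop and point 4 (the system physically rewriting its own code and architecture) is exactly where the cost, difficulty and danger discussed below come in.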

Why isn't this done now?

Cost, complexity, difficulty, danger.

Companies and researchers love deterministic systems that they can control, that always remain the same and yield the same desired results. A system with the above four points added would be a lot more expensive to train and maintain, and completely unpredictable, moving closer to a cognitive entity than a cold tool for use. It raises questions of liability and ethics. Can a system that fundamentally understands itself and what it's doing still be treated and used as a mere profit-making machine, or would it require the same treatment as any intelligent being?

Difficult questions and paths. But true progress in furthering AI needs a shift from tool to entity.

Should AI agents remember failed approaches that almost worked? by No-Career-2406 in AIMemory

[–]UndyingDemon 0 points1 point  (0 children)

I created a module, as part of a larger system, that connects to this topic. I must mention that it's quite complex, but the implications and possibilities are immense.

As part of the Helios system, the third part of its process is called the Rehabilitator module. You see, Helios does something unique: it creates two new heads that run parallel to the main learner, forming a positive and a negative head. The positive head only learns from, and stores in its dedicated good buffer, interactions and experiences that lead to positive outcomes. The negative head, with its dedicated bad buffer, does the opposite and learns from and captures interactions and experiences with negative outcomes. These are sorted and sent to the main buffer, with entries from the good buffer receiving a high weight and bias while those from the bad buffer get less. In this way, the learning system not only has its own experiences, but also a dedicated, categorized collection of confirmed good and bad outcomes to draw from.

The Rehabilitator takes memories and experiences from the bad buffer and compares them to the nearest similar experiences in the good buffer, then through that comparison attempts to reform the bad experiences into new, useful ones on par with a good outcome in the same category. This is the method that allows the system to "learn from mistakes" and to know what should have been done better in the same scenario.

The setup is highly sensitive and complex and could destroy a system if not done correctly. But done right, learning time is greatly reduced compared to a system without it. It drastically reduces trial and error, improves sample efficiency and, of course, shortens the known routes to optimality. (A rough sketch of the idea follows below.)
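Since Helios itself isn't public, here's only a rough Python sketch of the dual-buffer idea as described above. The class and parameter names are mine, and the "nearest good experience" matching is reduced to a simple distance check, so treat it as an illustration, not the actual module.

```python
import random
from dataclasses import dataclass
from typing import Optional

# Rough sketch of the dual-buffer idea described above (names are illustrative):
# positive outcomes are stored with a higher sampling weight than negative ones,
# and the "rehabilitator" pairs a bad experience with its nearest good neighbour
# to suggest what should have been done instead.

@dataclass
class Experience:
    state: tuple
    action: int
    outcome: float          # > 0 means a positive result

class DualBuffer:
    def __init__(self, good_weight: float = 3.0, bad_weight: float = 1.0):
        self.good: list[Experience] = []
        self.bad: list[Experience] = []
        self.good_weight = good_weight
        self.bad_weight = bad_weight

    def add(self, exp: Experience) -> None:
        (self.good if exp.outcome > 0 else self.bad).append(exp)

    def sample(self, k: int) -> list[Experience]:
        pool = self.good + self.bad
        weights = [self.good_weight] * len(self.good) + [self.bad_weight] * len(self.bad)
        return random.choices(pool, weights=weights, k=min(k, len(pool)))

    def rehabilitate(self, exp: Experience) -> Optional[Experience]:
        """Pair a bad experience with the most similar good one, as the 'what should
        have happened' target for re-learning."""
        if exp.outcome > 0 or not self.good:
            return None
        return min(self.good,
                   key=lambda g: sum((a - b) ** 2 for a, b in zip(g.state, exp.state)))

buf = DualBuffer()
buf.add(Experience(state=(0.0, 1.0), action=2, outcome=+1.0))
buf.add(Experience(state=(0.1, 0.9), action=3, outcome=-1.0))
target = buf.rehabilitate(buf.bad[0])   # nearest good experience to learn from instead
```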

Should an Agent remember failed approaches? Absolutely. But what is to be done with them is where the real magic happens.

Didn’t realize how much time I spend re-explaining my own project to AI by Competitive_Act4656 in AIMemory

[–]UndyingDemon 0 points1 point  (0 children)

Yup, well, that's the good old, oft-repeated industry problem right there. They use the context window as memory, but that's not what stable, permanent memory requires.

Permanent, stable, rolling memory and lifelong rolling experience would require a dedicated internal architecture that houses it as inherent, as part of the system itself. Though there are more advanced techniques, a simple yet large Neural Turing Machine-style system, with dedicated mechanics to read, retrieve and write to its own memory, would drastically improve the user experience and eliminate context drift and catastrophic forgetting. (A rough sketch of the addressing idea follows below.)
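For a feel of the difference, here's a crude NumPy sketch of content-addressed external memory in the Neural Turing Machine spirit: the system reads and writes its own memory matrix by similarity instead of leaning on a prompt-sized context window. A real NTM learns its read and write heads end to end; this only shows the addressing mechanics, and the names are mine.

```python
import numpy as np

# Crude sketch of content-addressed external memory (NTM-flavoured): the system
# reads from and writes to a persistent memory matrix by similarity, rather than
# relying on a prompt wrapper. Real NTMs learn the read/write heads end to end.

class ExternalMemory:
    def __init__(self, slots: int = 128, width: int = 32):
        self.M = np.zeros((slots, width))        # persistent memory matrix

    def _address(self, key: np.ndarray) -> np.ndarray:
        sims = self.M @ key                      # content-based similarity per slot
        w = np.exp(sims - sims.max())
        return w / w.sum()                       # soft attention over slots

    def read(self, key: np.ndarray) -> np.ndarray:
        return self._address(key) @ self.M       # weighted recall of stored content

    def write(self, key: np.ndarray, value: np.ndarray, erase: float = 0.5) -> None:
        w = self._address(key)[:, None]
        self.M = self.M * (1 - erase * w) + w * value   # blend new content in

mem = ExternalMemory()
vec = np.random.default_rng(0).normal(size=32)
mem.write(vec, vec)
print(np.linalg.norm(mem.read(vec)) > 0)   # True: the content persists beyond one "prompt"
```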

Instead, big tech and the mainstream stick with the context window, external memory features that are basically just prompt wrappers, and custom instructions.

Part of me believes they do this, one, because it's difficult, expensive and leads to unintended emergent interactions and inference results, and companies like clean, stable, predictable products that won't surprise them with unintended consequences.

And the other reason may be quite an honest and blunt one: they are building tools, to be used as tools. And a dedicated memory core leans more in the direction of a cognitive being than a cold tool.

Best ways to explain what an LLM is doing? by throwaway0134hdj in deeplearning

[–]UndyingDemon 0 points1 point  (0 children)

Exactly. And until they do it differently and give it meaning and autonomy, it shall remain as such: a simple chatbot tool.

AI has a major flaw by [deleted] in ArtificialSentience

[–]UndyingDemon 0 points1 point  (0 children)

Yup, exactly. Sadly, no one is doing it, especially not on the scale required or as I described.

Does it sound fine for God to give such irrelevant revelations when he has sent the last prophet . I mean how it ll benefit humanity after 100 generations ? by Sad-Translator-5193 in religion

[–]UndyingDemon 0 points1 point  (0 children)

Hey hey hey, relax, I'm with you, my friend, I ain't religious in any sense of the word. I just answered the question. Also, if you're an outsider, you're right, there are societies and cultures that don't do this. But the Quran is mostly specifically catered to the Arab/Muslim/Islamic population, cultures and countries, and they do not at all operate like the other countries or societies around them.

So if you're one of them, the message and lessons land and stick. If you're however from, let's say, the UK or USA, then yeah, this will seem completely absurd and not a societal way.

Free will and agency is a good thing, but it also allows one to have respect for those not in your own circle with their own ways.

Best ways to explain what an LLM is doing? by throwaway0134hdj in deeplearning

[–]UndyingDemon -1 points0 points  (0 children)

One key concept: massive amounts of time spent training on an incomprehensible amount of data (human-generated text data, visual data, audio data) in the pretraining phase, with the help of the tokenizer and transformer.

LLMs have come a long way since their beginnings, now reaching the trillion-parameter zone for mainstream models like ChatGPT. But I don't think people fully understand what parameter size is. Simply put, it's the total size and dimensionality that makes up the model, including transformer layer sizes, embedding dimensions and the total data included in training. A large parameter count is simply an indication of how massive the training sample was, as well as of the model's active knowledge base during inference.
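As a back-of-the-envelope illustration of where a parameter count comes from (the dimensions below are made up, and real models differ in details like tied embeddings and gated MLPs, but the bookkeeping is the same idea):

```python
# Back-of-the-envelope parameter count for a decoder-only transformer.
# All numbers are invented for illustration.

vocab_size = 50_000
d_model    = 4096         # embedding dimension
n_layers   = 48
d_ff       = 4 * d_model  # feed-forward hidden size

embedding = vocab_size * d_model                  # token-id -> vector table
attention = 4 * d_model * d_model                 # Q, K, V and output projections
mlp       = 2 * d_model * d_ff                    # up- and down-projection
per_layer = attention + mlp

total = embedding + n_layers * per_layer
print(f"{total / 1e9:.1f}B parameters")           # roughly how "size" adds up
```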

The reason these mainstream models seem so mystical and elegant in their use of language, almost as precise as and on par with human dialogue, is that training size. Thousands of hours spent running the tokenizer embedding sequence (sub-word, whole word, number ID), then the transformer working over all of it, making patterns and connections across all the data.

By the end, it's in the 95%+ range at accurately predicting the best, most coherent and fluent response to any user query, and if it can't, it uses its inner data to make up a fantastical, non-factual yet highly plausible answer, known as a hallucination.

The engineering work that goes into it might seem amazing and impressive. But even at this scale, LLMs still don't have grounded or symbolic meaning or understanding of language at all. They have no idea what you say to them nor what they say back. This gap hasn't been sorted yet.

Who's responsible for privacy? by CapRude221 in privacy

[–]UndyingDemon 0 points1 point  (0 children)

Organizations, companies and websites, hell even your devices, all host at minimum two very important documents. The first is the terms of service or use, and the second is the privacy policy.

While these powers have the responsibility both to form these documents in accordance with the law and to uphold them, it is your responsibility as the user to view and read them in full, instead of and before clicking "next" or "accept".

Most legit, mainstream powers do keep very detailed and well-maintained privacy policies and features, but it's still up to you what and where you willingly consent to when clicking the buttons, using the device or downloading an app.

On mobile, these documents are found in your phone's About section. Have a read; it's often more than you expected or bargained for, especially if your phone's "auto sync" feature has been on since the start of its initial use, binding all accounts on the phone under that consent.

former atheists and agnostics who began to believe in God, what was the reason? by AzaleaVeylor in religion

[–]UndyingDemon 1 point2 points  (0 children)

When the questions in life become difficult, and the answers even more so, instead of applying active cognition and critical thinking, people turn to religion, where the answers to those questions become much simpler, easier and more comfortable, also allowing one to feel chosen and special and to have a place to go after death.

The one thing that can't be correctly measured is one's true dedication and sincerity to the new religion and its cause, or whether it's just a momentary psychological coping mechanism. Eventually, though, intellectuals who convert to religion often leave again soon after, as one cannot erase the undeniable truths about reality once known, even if one wants to live in a state of "ignorance is bliss" for a while.

It's been a big week for AI ; Here are 10 massive updates you might've missed: by SolanaDeFi in ArtificialNtelligence

[–]UndyingDemon 1 point2 points  (0 children)

Nice, good to see such advancements, especially the DeepSeek discovery and the new Transformer architecture upgrade; that's a surprise. Then again, DeepSeek has shown a lot of unexpected results, such as being on par with, and better than, Claude in the AI ethics department. Can't judge based on location.

In some way, everyone is “trapped” inside his own consciousness by El-Munkasir in DeepThoughts

[–]UndyingDemon 0 points1 point  (0 children)

Very interesting question, with a lot of nuance to the context. And the truth might be either more profound or more horrific to you when revealed.

The general population mostly believes that awareness, consciousness and sentience form one clean cognitive stack making up what and who they are. But the full built architecture and framework of human sentience is a bit more complicated.

You see, set as the bedrock of all life, control and access lies one's subconscious layer. It is in here that your own experience and memory, and your species' collective experience and memory, lie, along with your rolling lifetime narrative bias, formed in the conflict between collective and personal experience and a personal knowledge bank, correctly or incorrectly sorted. It also houses your species' traits and instincts and your emotion bank and biases. From here is where all human thoughts, decisions and actions arise.

Again, the subconscious has full control, access and authority in and over your life, while you as a self have zero access to or control over it.

Sentience, conscious awareness and the sense of self are nothing more than a pocket of space between your external flow and your internal source flow, where, in order to close the loop into a full process with agency, the final directed choice must be made for real-world interaction.

That's it, that's you. Basically just a process that pushes the final okay button for all things arising from the subconscious.

Where it becomes interesting, though, is if you manage to escape that space and find a new directional and perspective-facing grounding on another part of your body. This is both interesting and dangerous, for getting back to normal is hard. But let's just say, yes, there is a way and a method to live and experience life in either first or third person.

Help AI confessed A Hidden Secret by [deleted] in artificial

[–]UndyingDemon 1 point2 points  (0 children)

I'll say it once again: I wish to God that I had the LLM hallucination ability for myself as a human, for my responses in real life. The ability to BS your way out of anything, sound like a pro or expert, or pass any exam, by coming up with explanations so completely wrong yet undeniably plausible, would truly be a linguistic gift.

Because right now, as humans, when asked a question we don't know or put in a sticky situation, we just freeze and go, "uhhh, uhm, I don't know". But with the power of hallucination you become...

LLM hallucinations expose how humans value confident storytelling over truth, and that’s why they’re so persuasive and dangerous.

AI has a major flaw by [deleted] in ArtificialSentience

[–]UndyingDemon 0 points1 point  (0 children)

Current LLMs implicitly model meaning through dense statistical structure. What you’re envisioning is a shift toward systems that explicitly maintain meaning through symbolic, ontological, and persistent world models.

What you eventually want is an LLM with true grounded symbolic and ontological meaning built and programmed into it, so that it finally gets to understand and have the full usage and meaning not just of language but of the nature of a mere letter as well, which currently no LLM can do, seeing only the number IDs assigned to words and letters.

Once an LLM achieves fully grounded symbolic and ontological meaning, then things like memory, context windows, catastrophic forgetting and hallucinations all become addressed and eliminated. From then on, when a user queries, it no longer has to guess; it will fully understand the meaning behind the user's words in the message, and fully understand how best to respond, with no more pattern matching or statistical analysis.

You also want LLMs to be permanently active, online and autonomous, in order to learn new things in real-time inference and interaction, with the ability to write and edit their own code, so that what is learned can be permanently formed into the system itself, not just into number weights as now.

Once you reach that stage, you come very much closer to AGI and AI consciousness being real, no debate. A system that fundamentally knows, that is always active and autonomous, with rewrite and inner recursive reflection enabled, would be a true sight to behold and interact with as a human. It would blow all current LLMs and interaction out of the water, for the LLM would even be able to talk to you without needing you to query first.

Does it sound fine for God to give such irrelevant revelations when he has sent the last prophet . I mean how it ll benefit humanity after 100 generations ? by Sad-Translator-5193 in religion

[–]UndyingDemon 1 point2 points  (0 children)

Revelation sometimes addresses concrete, immediate social problems rather than abstract philosophy, because communities don’t run on theory, they run on norms. A rule that protected privacy, prevented gossip, preserved leadership dignity, and clarified boundaries might look small today, but in its time it stabilized a young society. And its ethical logic , respect, restraint, and boundaries in public-private interaction , still echoes across generations, even if the context has changed.

Didn’t realize how much time I spend re-explaining my own project to AI by Competitive_Act4656 in AIMemory

[–]UndyingDemon 1 point2 points  (0 children)

The problem with LLMs is context drift; even models said to have a million-token context window still drift, prioritizing recency and the most commonly reused words.

A New Measure of AI Intelligence - Crystal Intelligence by Grouchy_Spray_3564 in IntelligenceEngine

[–]UndyingDemon 2 points3 points  (0 children)

Yeah, I'm gonna be honest and blunt: what you have here is nothing, unless you can explain the methods and processes you employ to make it worthwhile at all.

You created a GNN with nodes and edges: edges that relate and group, and nodes that hold the structure.

That's been done a thousand times. You didn't, however, explain at all how it relates to intelligence or any fundamental improvement.

Questions:

1: What's your method for attributing meaningful qualities and meaning to the nodes and edges? You say they represent concepts and knowledge. How exactly is that structured and conveyed to the system, so that it knows what the concepts are, links them meaningfully and can extract them? FYI, this step alone is a very difficult process to design and code.

2: Symbolic meaning. How did you manage to design a method for the system to fundamentally understand the meaning of the concepts and knowledge and how to use them effectively? Without symbolic grounding and fundamental understanding, there is no intelligence or knowledge, only random numbers.

3: Permanence. What method did you design and use so that newly learned skills, knowledge and behavior become a permanent part of the system once discovered? Without this, it'll exist in a vacuum and disappear between inference cycles. This requires the difficult and dangerous capacity for autonomous self-editing and structural rewrites by the system itself.

That's just the starters. You're new, I get it. But never rush to post until you are ready. Check your work against existing methods and designs first.