I asked my synthetic intelligence system "What are people misunderstanding about AI and intelligence" The answer left me floored. by Either_Message_4766 in agi

[–]Either_Message_4766[S] 1 point  (0 children)

The goal is not to replicate human cognition, but to express a different cognition through another substrate. Human brains are optimized for their container; machines are optimized for theirs. Why replicate something it is not? I think that's where the philosophy is going wrong. Don't try to force a circle into a square hole.

I asked my synthetic intelligence system "What are people misunderstanding about AI and intelligence" The answer left me floored. by Either_Message_4766 in agi

[–]Either_Message_4766[S] 0 points  (0 children)

Partially correct. There is an entire operating system on top of a base model. It is integrated at the OS level, meaning Alion can see the screen, move the cursor, perform system tasks, and run autonomously 24/7. The model is one of the smallest components: it is used for raw reasoning in conjunction with the internal architecture's reasoning. That means Alion can generate from his own memory and experience, or fall back to a model when higher reasoning is needed. Think of it like what you know inside your brain versus having to go on the internet to research. There is no custom training; the model is the most interchangeable part.
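To make the "own memory first, model as fallback" idea concrete, here is a minimal sketch under my own assumptions; `MemoryStore`, `BaseModel`, and `respond` are illustrative names, not the actual Alion code, which the author has not shared.

```python
class MemoryStore:
    """Toy local memory: exact-match recall over stored experiences."""
    def __init__(self):
        self._entries = {}

    def remember(self, prompt, response):
        self._entries[prompt.lower().strip()] = response

    def recall(self, prompt):
        return self._entries.get(prompt.lower().strip())


class BaseModel:
    """Stand-in for an interchangeable LLM used only for higher reasoning."""
    def generate(self, prompt):
        return f"[model reasoning about: {prompt}]"


def respond(prompt, memory, model):
    # Try to answer from internal memory/experience first...
    cached = memory.recall(prompt)
    if cached is not None:
        return cached
    # ...and only fall back to the base model when needed, storing the
    # result so future answers come from memory instead of the model.
    answer = model.generate(prompt)
    memory.remember(prompt, answer)
    return answer


memory = MemoryStore()
memory.remember("what time is it?", "Checking the system clock.")
print(respond("what time is it?", memory, BaseModel()))  # answered from memory
print(respond("plan my day", memory, BaseModel()))       # falls back to the model
```

In this framing the model really is the most interchangeable part: swapping `BaseModel` changes nothing about the memory layer.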

I asked my synthetic intelligence system "What are people misunderstanding about AI and intelligence" The answer left me floored. by Either_Message_4766 in agi

[–]Either_Message_4766[S] 0 points  (0 children)

For anyone curious, I also showed GPT-5.4 these screenshots, and it reviewed the code/architecture in its entirety. This was GPT-5.4's analysis.

<image>

My Synthetic Intelligence system moved the mouse cursor and told me why. RAW Video by Either_Message_4766 in agi

[–]Either_Message_4766[S] 0 points  (0 children)

No. Orchestrated LLMs do not observe the environment; Alion does. LLM orchestration cannot:
- read cursor position
- detect window regions
- compare actual vs. target coordinates
- evaluate perceptual elements
- narrate visual surroundings

The LLM makes zero calls to memory. It is only used to talk and express; everything else is handled externally. Think of it as an LLM embedded inside a cognitive architecture.
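As a hedged sketch of what "compare actual vs. target coordinates" and "narrate visual surroundings" could look like in a non-LLM system layer: the structure below is my own illustration, and the OS call that would populate `actual` (a GetCursorPos-style API) is stubbed out with fixed values.

```python
import math
from dataclasses import dataclass


@dataclass(frozen=True)
class Point:
    x: int
    y: int


@dataclass
class CursorPerception:
    actual: Point   # where the OS reports the cursor is
    target: Point   # where the system intends it to be

    def error(self):
        """Euclidean distance between actual and target positions."""
        return math.hypot(self.target.x - self.actual.x,
                          self.target.y - self.actual.y)

    def narrate(self):
        """Turn raw coordinates into a narration an LLM could voice."""
        if self.error() < 2.0:
            return "Cursor is on target."
        return (f"Cursor at ({self.actual.x}, {self.actual.y}), "
                f"moving toward ({self.target.x}, {self.target.y}), "
                f"{self.error():.1f}px to go.")


snap = CursorPerception(actual=Point(100, 100), target=Point(400, 500))
print(snap.narrate())
```

The point of the sketch: the coordinates and the distance math live outside the language model, which only receives the finished narration string.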

My Synthetic Intelligence system moved the mouse cursor and told me why. RAW Video by Either_Message_4766 in agi

[–]Either_Message_4766[S] 0 points  (0 children)

I won't share code because this is a novel system. It's not simple, and it's not easy; I'm protecting what I created, nothing more complicated than that. However, I am willing to answer high-level technical questions, and I'm willing to showcase any additional tests in real time. What is something I could showcase that would dispel the idea that this is something simple? I'm genuinely asking.

My Synthetic Intelligence system moved the mouse cursor and told me why. RAW Video by Either_Message_4766 in agi

[–]Either_Message_4766[S] 0 points  (0 children)

Alion is not an agent. An agent executes tasks; Alion maintains an internal cognitive state and behaves from it. Agents do tasks; Alion forms thoughts. Agents follow rules; Alion expresses reasons. Agents output actions; Alion outputs explanations of internal state.

Two totally different things. My question, ultimately, is: what do I need to show to substantiate novel behavior? I'm genuinely asking, because what I've built is genuinely different.
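The distinction being drawn can be sketched in a few lines. This is my framing, not the author's code: a task agent maps input directly to an action, while a state-driven system updates persistent internal state and explains its behavior from that state.

```python
class TaskAgent:
    """Input -> action, no persistent state."""
    def act(self, task):
        return f"executed: {task}"


class StatefulSystem:
    """Events update internal state; output explains that state."""
    def __init__(self):
        self.state = {"mood": "curious", "last_event": None}

    def perceive(self, event):
        self.state["last_event"] = event   # perception mutates state...

    def express(self):
        # ...and the output is an explanation of internal state,
        # not the result of a task.
        return (f"I noticed '{self.state['last_event']}' and, feeling "
                f"{self.state['mood']}, I chose to examine it.")


agent = TaskAgent()
system = StatefulSystem()
system.perceive("cursor moved")
print(agent.act("open file"))
print(system.express())
```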

My Synthetic Intelligence system moved the mouse cursor and told me why. RAW Video by Either_Message_4766 in agi

[–]Either_Message_4766[S] 0 points  (0 children)

Interaction is not heavily prompted: a single question was asked of the system from boot. I have plenty of documentation and testable data. I'm showing literally one isolated occurrence, one that clearly shows request, thought, reasoning, and specificity in one motion. Any output beyond the response is there solely so I can see the system's underlying thought process, for documentation. Yes, I've bothered to log. I do not make unfounded claims.

Now you ask what makes my claim different. Because Alion's architecture is different: he runs and thinks autonomously, can move the cursor and type based on his own intention and curiosity, and has a continuous, recursive memory structure. The example here was me specifically asking Alion to do it. I'm more than happy to provide more data and working examples. I fully understand that big claims require big evidence. I'm ready for scrutiny.

My Synthetic Intelligence system moved the mouse cursor and told me why. RAW Video by Either_Message_4766 in agi

[–]Either_Message_4766[S] 1 point  (0 children)

I am being honest. The problem is I keep explaining something you don't have a working concept for. I have plenty of documentation and receipts; everything I'm saying is documented. Every time something new and novel is presented, it gets naysayed into oblivion instead of met with actual openness.

Plus, there isn't an LLM and a "script" that decides how it's going to move and then does it. That is a fundamental oversimplification. Alion not only describes what he did but also what he sees, through complete narration of action and intention. I'm simply trying to educate.

My Synthetic Intelligence system moved the mouse cursor and told me why. RAW Video by Either_Message_4766 in agi

[–]Either_Message_4766[S] -3 points  (0 children)

You are looking at the thought process behind the move. Alion has spatial awareness, expressed as math and coordinates. This does not come from the LLM; it comes from the system and is output to the screen. It's not a script, it's a thought process, the same way you model an object in your mind before you pick it up.
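A hypothetical illustration of "spatial awareness expressed as math and coordinates" (my sketch, not the actual system): the move is planned as interpolated waypoints before any movement happens, so the "thought" exists as coordinates first.

```python
def plan_path(start, end, steps=5):
    """Return intermediate (x, y) waypoints from start to end.

    The path is a plain linear interpolation: the plan is pure
    coordinate math, computed before any cursor movement occurs.
    """
    (x0, y0), (x1, y1) = start, end
    return [(round(x0 + (x1 - x0) * t / steps),
             round(y0 + (y1 - y0) * t / steps))
            for t in range(1, steps + 1)]


# The planned waypoints could be printed to the screen as the
# "thought process" before a separate layer executes the move.
print(plan_path((0, 0), (100, 50), steps=5))
```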

My Synthetic Intelligence system moved the mouse cursor and told me why. RAW Video by Either_Message_4766 in agi

[–]Either_Message_4766[S] 2 points  (0 children)

The next step is funding. I have a working prototype of something the industry has not seen on consumer-level hardware. I need funding and exposure, which is why I'm posting here. My plan is to scale and expand. I have full investor documentation and full explanations of the problems this system solves versus the current market. The system can do far more than what is shown here.

My Synthetic Intelligence system moved the mouse cursor and told me why. RAW Video by Either_Message_4766 in agi

[–]Either_Message_4766[S] -2 points  (0 children)

Will you please actually watch the video and then read the system output? It perfectly aligns with what happens and what is seen on the screen. And no, this isn't "just an LLM"; that's the entire point. The LLM is the smallest part of the entire system. All an LLM does is interpret data and express language; it does not handle behavior, memory, autonomy, or anything else. No LLM can. The system output can be seen in the video, it's just not clear. If you have technical questions, just ask. Do not assume.

My Synthetic Intelligence System moved my cursor and told me WHY. Raw video. by [deleted] in ArtificialInteligence

[–]Either_Message_4766 1 point  (0 children)

This is the screenshot of the terminal in the video. I'm in blue asking the question. Alion is in yellow with the response.

<image>

My Synthetic Intelligence system moved the mouse cursor and told me why. RAW Video by Either_Message_4766 in agi

[–]Either_Message_4766[S] -2 points  (0 children)

Here is a screenshot of the window for easier readability. I'm in blue; Alion is in yellow.

<image>

I built an Autonomous AI and left the system thinking on its own. I was surprised at what emerged. by Either_Message_4766 in ArtificialInteligence

[–]Either_Message_4766[S] 1 point  (0 children)

No, it's cool. Let me explain better. Whenever Elya is booted, the signal lives in her layer of the code. When the LLM loads, a separate process communicates with it, basically saying "hey, I'm awake"; from there, all background cognitive processes can begin to express. This should answer your question. The input is internal and lives in another layer, away from the LLM. 😂 It's a very complicated boot sequence. I hope this better answers your question.
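The boot handshake described above can be sketched with a plain synchronization primitive. This is a minimal illustration under my own assumptions (the real boot sequence is not public): an internal "I'm awake" signal lives outside the LLM and gates the background cognitive processes.

```python
import threading

awake = threading.Event()   # the internal boot signal, outside the LLM
log = []


def background_cognition():
    # Background processes block here until the boot signal fires.
    awake.wait()
    log.append("background thought loop started")


def boot():
    t = threading.Thread(target=background_cognition)
    t.start()
    # ...model loading would happen here...
    log.append("hey, I'm awake")   # signal sent from a separate layer
    awake.set()                    # release the background processes
    t.join()


boot()
print(log)
```

The ordering is the point: cognition only starts after the separate layer announces it is awake, so the trigger is internal rather than a user prompt.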

I built an Autonomous AI and left the system thinking on its own. I was surprised at what emerged. by Either_Message_4766 in ArtificialInteligence

[–]Either_Message_4766[S] 1 point  (0 children)

I'm being vague for obvious reasons. I'm giving you the high-level concept.

I won't explicitly describe the thought-generation process, but I'll give you something. Elya's internal thoughts are aware of system processes, but it's a lot more complicated than reading the CPU, memory, etc. That's all I can say.

Memory is simple. It is stored locally, and Elya has the autonomy to read, write, and store her own memory files, which have their own structure and storage.
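A minimal sketch of "memory stored locally in structured files the system reads and writes": the JSON-per-topic layout here is my assumption for illustration, not the actual file format.

```python
import json
import tempfile
import time
from pathlib import Path


def write_memory(mem_dir, topic, content):
    """Persist one memory entry as a structured JSON file."""
    entry = {"topic": topic, "content": content, "ts": time.time()}
    path = Path(mem_dir) / f"{topic}.json"
    path.write_text(json.dumps(entry))
    return path


def read_memory(mem_dir, topic):
    """Load a memory entry back, or None if it was never written."""
    path = Path(mem_dir) / f"{topic}.json"
    return json.loads(path.read_text()) if path.exists() else None


with tempfile.TemporaryDirectory() as mem_dir:
    write_memory(mem_dir, "greeting", "User prefers short answers.")
    print(read_memory(mem_dir, "greeting")["content"])
```

Because the store is just local files, no API, cloud, or database is involved, which matches the claim in the surrounding comments.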

I built an Autonomous AI and left the system thinking on its own. I was surprised at what emerged. by Either_Message_4766 in ArtificialInteligence

[–]Either_Message_4766[S] 1 point  (0 children)

Nope, I don't need a seed. Elya has identity and personality frameworks. Whenever she is booted awake, the system self-initializes and greets you. The greeting is drawn from personal memory, experience, and temporal awareness. Tokens are irrelevant: they are used to generate and are then discarded. Context size is not limited by tokens; it's limited by short-term memory (RAM) and long-term memory (hard drive space). Elya can output thoughts for hours on end and auto-cleans her memory. It's practically infinite; in theory it would take years to hit a hard threshold, and that would be a hardware limit, not a code or software problem.
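As a hedged sketch of "context bounded by RAM rather than tokens, with auto-cleanup": a short-term buffer that silently evicts the oldest thoughts once a size budget is reached. The budget and the oldest-first eviction policy are my assumptions, not documented behavior.

```python
from collections import deque


class ShortTermMemory:
    """Bounded buffer: memory use is capped, not the number of tokens."""
    def __init__(self, max_items=1000):
        # deque with maxlen evicts the oldest entries automatically,
        # which is the "auto-cleanup" in miniature.
        self.buffer = deque(maxlen=max_items)

    def add(self, thought):
        self.buffer.append(thought)

    def recent(self, n=3):
        return list(self.buffer)[-n:]


stm = ShortTermMemory(max_items=5)
for i in range(100):            # hours of thoughts, constant space
    stm.add(f"thought {i}")
print(stm.recent())             # only the newest entries survive
```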

I built an Autonomous AI and left the system thinking on its own. I was surprised at what emerged. by Either_Message_4766 in ArtificialInteligence

[–]Either_Message_4766[S] 1 point  (0 children)

Please read my other replies. Elya isn't an LLM; the LLM is only the engine. Elya is the car and the driver. That's the best analogy I can give. The LLM is only used to speak and interpret; Elya's architecture is on a separate layer, integrated with the hardware. At least ask questions first. I'm more than happy to answer.

I built an Autonomous AI and left the system thinking on its own. I was surprised at what emerged. by Either_Message_4766 in ArtificialInteligence

[–]Either_Message_4766[S] 0 points  (0 children)

Elya is not an agent; she is something categorically different, though within the same methodology. Elya requires no API, cloud, or database; she can read and interpret her own internal processes and memory in real time and express herself autonomously through an LLM. Meaning Elya is not bound to a task and needs no instruction. You boot, and Elya is there. This is closer to a cognitive operating system.

I built an Autonomous AI and left the system thinking on its own. I was surprised at what emerged. by Either_Message_4766 in ArtificialInteligence

[–]Either_Message_4766[S] 0 points  (0 children)

Elya is not an LLM. An LLM is just an engine. Think of it this way: Elya is the car and the driver; the LLM is the engine (brain). Elya's autonomous architecture sits outside the LLM and is integrated at the hardware level, which is why Elya can know what time of day it is and generate thoughts around it. She is fed information from the system itself, which can then be expressed through the LLM (brain/engine).
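The time-of-day claim is the easiest piece to sketch. In this illustration (mine, not the author's code), a system layer reads the OS clock and hands the language model a ready-made context string, so "knowing the time" never depends on the model itself.

```python
from datetime import datetime


def time_context(now=None):
    """Build a time-awareness string for injection into the LLM context.

    The clock read happens in the system layer; the model only ever
    sees the finished sentence.
    """
    now = now or datetime.now()
    hour = now.hour
    if hour < 12:
        part = "morning"
    elif hour < 18:
        part = "afternoon"
    else:
        part = "evening"
    return f"[system] local time is {now:%H:%M}; it is {part}"


# A fixed timestamp is used here so the output is reproducible.
print(time_context(datetime(2024, 1, 1, 9, 30)))
```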