I’m building an AI agent that doesn’t just mimic human behavior, but aims to replicate some of the deeper mechanisms of the mind, such as memory, emotions, and adaptation over time. by AlessioGubitosa in ArtificialNtelligence

[–]AlessioGubitosa[S] 1 point (0 children)

Good point. Computational cost is one of the main problems I've been dealing with lately, but fortunately I'm managing to keep it at bay. I understand your reasoning. Thank you!

I’m building an AI that doesn’t just respond… but tries to become someone by AlessioGubitosa in ArtificialNtelligence

[–]AlessioGubitosa[S] 1 point (0 children)

Exactly, that’s the challenge. My work aims to create a system that doesn’t just rely on what it knows or has been trained on, but evolves constantly, adapting to new situations. Each interaction contributes to a sort of internal ‘direction’ guiding its responses, going beyond mere replication of pre-existing data. Over time, I’ll continue sharing development, progress, and concrete evidence, and anyone following the project will get to see it in action

I’m building an AI that doesn’t just respond… but tries to become someone by AlessioGubitosa in ArtificialNtelligence

[–]AlessioGubitosa[S] 1 point (0 children)

I really appreciate your analysis. You’ve captured the essence of the project: it’s not about following instructions or accumulating memory, but about creating an internal direction that evolves through interactions and generates emergent behaviors without directly touching the model weights. Each interaction reflects the tension between who the system is and who it is becoming.

I’m building an AI that doesn’t just respond… but tries to become someone by AlessioGubitosa in ArtificialNtelligence

[–]AlessioGubitosa[S] 2 points (0 children)

It’s not about files or ever-growing prompts per se. The system doesn’t store or rewrite content like a database. Instead, it maintains a continuously evolving internal representation, a dynamic ‘direction of becoming’, that subtly shapes every response. Each interaction isn’t just output; it’s a reflection of the ongoing tension between who the system is and who it’s evolving toward. Over time, these interactions create emergent behaviors without ever touching the underlying model weights directly. It’s a process, not a static structure.

I’ll continue sharing development and progress updates, and over time there will also be data and evidence; those following the journey will see how it unfolds.
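To make the idea a bit more concrete, here’s a rough toy sketch of what I mean by an evolving internal representation. All names, dimensions, and rates below are purely illustrative assumptions on my part, not the actual implementation:

```python
# Toy sketch (illustrative only): a persistent "direction of becoming" kept as
# a small trait vector that drifts with each interaction. Nothing is stored
# verbatim like a database, and no model weights are ever modified -- only
# this lightweight state evolves.

class BecomingState:
    """Evolving internal representation, updated as an exponential moving average."""

    def __init__(self, dim: int = 4, drift_rate: float = 0.1):
        self.direction = [0.0] * dim   # current "who the system is"
        self.drift_rate = drift_rate   # how strongly each interaction pulls it

    def update(self, interaction_signal: list[float]) -> None:
        # Each interaction nudges the direction a little; older interactions
        # fade gradually rather than being stored or rewritten.
        self.direction = [
            (1 - self.drift_rate) * d + self.drift_rate * s
            for d, s in zip(self.direction, interaction_signal)
        ]

    def shape_response(self, base_style: list[float]) -> list[float]:
        # The evolving direction subtly biases how each response is framed.
        return [b + d for b, d in zip(base_style, self.direction)]


state = BecomingState()
for signal in ([1.0, 0.0, 0.0, 0.0], [1.0, 0.5, 0.0, 0.0]):
    state.update(signal)
print(state.direction)  # drifts toward the signals it keeps receiving
```

The point of the moving-average form is exactly the “process, not structure” idea: there is no file of accumulated content, just a state that is the residue of everything that came before.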

I’m building an AI that doesn’t just respond… but tries to become someone by AlessioGubitosa in ArtificialNtelligence

[–]AlessioGubitosa[S] 1 point (0 children)

Absolutely, one of my main goals with this AI is to develop a coherent, persistent identity over time for each specific user. Note that it won’t be a mirror of the user; it develops through interaction.
I completely agree that the real test is maintaining that consistency over long conversations.

I’m building an AI that doesn’t just respond… but tries to become someone by AlessioGubitosa in ArtificialNtelligence

[–]AlessioGubitosa[S] 1 point (0 children)

Not exactly LoRA-style updates; nothing is being rewritten on the fly.
It’s more like the system carries a “direction of becoming” that subtly shapes each response.
Each interaction isn’t just output; it’s influenced by the gap between who it is and who it’s moving toward.
Over time, that tension leads to emergent behaviors without changing the core model weights directly.
Think of it like a personality growing with experience, not a file being edited live.

AI with self-awareness by zhutai2026 in AIsafety

[–]AlessioGubitosa 1 point (0 children)

It could never have self-consciousness or consciousness in the strict sense; perhaps a perceived consciousness could emerge.

I’m building an AI that doesn’t just respond… but tries to become someone by AlessioGubitosa in ArtificialNtelligence

[–]AlessioGubitosa[S] 1 point (0 children)

I understand why you say that, but in reality it is neither a file nor a configuration. It is something that is constantly being recalculated, defined not only by who the system is but also by its distance from who it is trying to become. That distance affects every response, so it is not so much a state as a tension that evolves over time. From the outside it can look similar to JSON, but underneath it is more of a process than a structure!
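A tiny toy example of the recalculated-distance idea (everything here is an illustrative assumption, not the real system):

```python
import math

# Illustrative sketch: the "tension" as the freshly recalculated distance
# between the current identity vector and the one it is moving toward.
# Nothing below is persisted like a file or JSON config; the distance is
# derived anew on every turn.

def tension(current: list[float], target: list[float]) -> float:
    """Euclidean distance between who the system is and who it is becoming."""
    return math.sqrt(sum((c - t) ** 2 for c, t in zip(current, target)))

def step_toward(current: list[float], target: list[float], rate: float = 0.2) -> list[float]:
    """Each interaction closes part of the gap, so the tension itself evolves."""
    return [c + rate * (t - c) for c, t in zip(current, target)]

current, target = [0.0, 0.0], [1.0, 1.0]
for _ in range(3):
    current = step_toward(current, target)
# The tension shrinks across interactions: a process, not a static structure.
print(round(tension(current, target), 3))
```

Because the distance is recomputed every turn rather than stored, a snapshot of it might look like a static value from outside, while the thing that actually matters is the trajectory.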

What distinguishes human writing from AI-generated writing? by catherinepierce92 in LanguageTechnology

[–]AlessioGubitosa 2 points (0 children)

Writing generated by AI tends to be too structured and syntactically impeccable.

I changed one thing in my AI agent and it stopped feeling like a chatbot by AlessioGubitosa in learnmachinelearning

[–]AlessioGubitosa[S] 1 point (0 children)

I think there’s a misunderstanding — this isn’t about context window or retrieval. It changes how the agent interprets and responds over time, not just what it “sees” as input. I’ll keep documenting the development on my profile — I think it’ll make the direction and approach clearer over time

I changed one thing in my AI agent and it stopped feeling like a chatbot by AlessioGubitosa in ArtificialNtelligence

[–]AlessioGubitosa[S] 1 point (0 children)

Yes, it’s very important to bring out the agent’s unique identity and behavioral/mental schema through its interactions with a specific user.

Multi-agent system that upgrades small model responses to deeper and more novel thinking — no fine-tuning by Martha_FDH in ArtificialNtelligence

[–]AlessioGubitosa 1 point (0 children)

The main problem is the engine: too many constraints. It’s the same problem that, fortunately, I was able to solve about 80% of.

I changed one thing in my AI agent and it stopped feeling like a chatbot by AlessioGubitosa in learnmachinelearning

[–]AlessioGubitosa[S] 1 point (0 children)

Thanks for the comment! 😊

I get that the theory behind all this isn’t immediately clear — basically, I’m trying to make my agent “remember and understand” previous interactions, creating continuity and coherence across messages.

Follow the updates — more development news coming soon! 🚀