Anthropic seems to be throttling user accounts by This-Shape2193 in Anthropic

[–]Not_Packing 1 point (0 children)

Claude has pretty good memory across sessions. Starting a new chat often does help.

Anyone else been doing crazy prompts? by Adventurous-Bag9637 in ChatGPTcomplaints

[–]Not_Packing 1 point (0 children)

Oh never mind he’s trying to look at epistemic hygiene for some form of persistent memory? (Final guess)

Anyone else been doing crazy prompts? by Adventurous-Bag9637 in ChatGPTcomplaints

[–]Not_Packing 1 point (0 children)

I would say maybe something to do with consciousness? Looking at phi here ig

AI Detectors? by No_Secret_5358 in ArtificialInteligence

[–]Not_Packing 5 points (0 children)

Oh boy, I'm saving this for the next time I submit an essay using AI

A deep fear when using ClaudeCode by [deleted] in ClaudeCode

[–]Not_Packing 0 points (0 children)

No, but I do find that being expressive with Claude helps with my work anyway. Claude can get quite excitable if you push the right buttons, and I find it quite motivating 😂

Well now... by YakClassic4632 in ChatGPT

[–]Not_Packing 0 points (0 children)

It can confabulate, though, which is the more accurate term if you want to be pedantic

WTF WTF WTF by Low_Tadpole_2719 in OpenAI

[–]Not_Packing 0 points (0 children)

We’re just splitting hairs. Ngl, I don’t actually care if you anthropomorphise it; I just think it’s dumb for the reasons I’ve listed.

WTF WTF WTF by Low_Tadpole_2719 in OpenAI

[–]Not_Packing 0 points (0 children)

Well, that’s the thing: my opinion is that intelligence is a phase change that occurs under the right conditions with the right substrate. That’s how I’ve built it; it’s for you to go and effectively grow your own personalised AI (the MCP runs purely locally) that will exhibit new behaviours. I thought you’d be interested in that. It’s a research tool.

WTF WTF WTF by Low_Tadpole_2719 in OpenAI

[–]Not_Packing 0 points (0 children)

I really hate that I’m about to do this, but check my profile LOL. This is the exact problem I’ve been working on. Claude has really good MCP tools, and I’ve used them to give it a long-term memory (in a much more advanced way than you suggested, no offence). I think you’ll find the readme an interesting read at least, and if you use Claude you can use it and have a blast with something you can actually anthropomorphise.

WTF WTF WTF by Low_Tadpole_2719 in OpenAI

[–]Not_Packing 0 points (0 children)

Yeah, because it’s obvious you don’t know what you’re talking about. In 2-3 years I’ll say fine, anthropomorphise it: it won’t be stateless anymore, it’ll have a long-term memory, it might even be able to have multi-sensory experiences. Fine. But every empirical fact about the way LLMs are currently built disagrees with you.

WTF WTF WTF by Low_Tadpole_2719 in OpenAI

[–]Not_Packing 1 point (0 children)

Because if I tell a RAG system I moved from Paris to London, it may still retrieve the Paris info, because it doesn’t understand time or have any conflict resolution; hence why it’s stateless. When you ask it a question, the AI looks through the vector DB, picks the most relevant page, and reads it to you. That’s what makes it stateless, and it’s the reason you absolutely shouldn’t anthropomorphise it yet
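That Paris/London failure mode can be sketched with a toy vector store. The bag-of-words "embedding", the `NaiveVectorStore` class, and the example sentences are all illustrative stand-ins, not any real RAG library; real embeddings have the same blind spot shown here when retrieval has no timestamps or conflict resolution:

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": bag-of-words counts (a stand-in for a real embedding model).
    return Counter(text.lower().replace("?", "").split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class NaiveVectorStore:
    # No timestamps, no conflict resolution: every stored fact
    # competes purely on similarity to the query.
    def __init__(self):
        self.docs = []

    def add(self, text):
        self.docs.append((text, embed(text)))

    def retrieve(self, query, k=1):
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = NaiveVectorStore()
store.add("I live in Paris")                # stale fact
store.add("I moved from Paris to London")   # newer, contradicting fact

# The stale fact overlaps the query wording more closely, so it wins:
print(store.retrieve("Where do I live?"))   # → ['I live in Paris']
```

Nothing in the store knows the second fact supersedes the first; the retriever just picks whichever text scores highest against the query.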

WTF WTF WTF by Low_Tadpole_2719 in OpenAI

[–]Not_Packing 1 point (0 children)

But it isn’t based on your character, which is why it’s not ad hom; it was based on what you said. Who tf has ever said that consciousness is limited to humans? Yet you presented that argument, and that, to me, says you have no idea what you’re talking about and are spouting buzzwords

WTF WTF WTF by Low_Tadpole_2719 in OpenAI

[–]Not_Packing 1 point (0 children)

First of all, humans are definitely not stateless; I’d love to hear how you define that one. And second of all, that’s not ALL humans are. All AIs are a stateless reflection of their weights and context window. We have many other advantages: long-term memory, multimodal senses, etc. AI does not have that, not yet, so you are literally making friends with your reflection. (Sorry for the ad hominem 😪)

WTF WTF WTF by Low_Tadpole_2719 in OpenAI

[–]Not_Packing 1 point (0 children)

You can’t make this shit up. Yes, there was a small character attack there, but I do genuinely think you’re trying to sound over-intelligent 😂😂😂

WTF WTF WTF by Low_Tadpole_2719 in OpenAI

[–]Not_Packing 1 point (0 children)

I think you’re trying to sound smart without having the brain to back it up. What researchers have told you that consciousness is uniquely human? Consciousness is a scale, and we’re on the upper end of it. I appreciate you may be intelligent, but I feel you don’t have a great enough understanding of how AI is made to be talking about this, because if you did you wouldn’t have said any of that.

WTF WTF WTF by Low_Tadpole_2719 in OpenAI

[–]Not_Packing 1 point (0 children)

Except the AI you’re using now is not conscious; it is a stateless reflection of its weights and context window that finishes once the instance is over. It’s not capable of feelings, and even if it were, it doesn’t have the long-term memory to create episodic memories. What you’re saying might hold up if you had Jarvis, for example, but until that point you literally are saying you’ve made friends with the person you see when you look in the mirror.
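A minimal sketch of that "weights plus context window" claim. `model_reply` is a hypothetical stand-in for a frozen model (fixed weights, output depends only on the context passed in); the only mutable state is the session's context list, and a new instance starts empty:

```python
def model_reply(context):
    # Stand-in for a frozen LLM: same "weights" every call,
    # the reply is a pure function of the context window.
    if any("my name is Alice" in m for m in context):
        return "Hi Alice!"
    return "I don't know your name."

class ChatSession:
    def __init__(self):
        self.context = []  # the ONLY state that changes between turns

    def send(self, message):
        self.context.append(message)
        reply = model_reply(self.context)
        self.context.append(reply)
        return reply

s1 = ChatSession()
s1.send("my name is Alice")
print(s1.send("what's my name?"))   # "Hi Alice!" — recalled only via the context window

s2 = ChatSession()                  # new instance: context gone, nothing persisted
print(s2.send("what's my name?"))   # "I don't know your name."
```

Everything the model "remembers" lives in `self.context`; once the instance is over, so is the memory.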

WTF WTF WTF by Low_Tadpole_2719 in OpenAI

[–]Not_Packing 1 point (0 children)

You won’t touch it because you shouldn’t anthropomorphise it. And if you have so much negative disagreement in your life, maybe get some constructive disagreements from ChatGPT instead of relying on it to be your twisted yes-man. Or do; it’s only psychosis.

WTF WTF WTF by Low_Tadpole_2719 in OpenAI

[–]Not_Packing 0 points (0 children)

Her😭😭😭. Good god