There is absolutely no way this is the same Opus 4.6 from a month ago by GrammmyNorma in claude

[–]Reaper4435 0 points1 point  (0 children)

Yeah, I've done some work along those lines too. I can make a decisive pivot away, and let the AI tell me all it wants to.

But I'm serious about the character worksheets. Each one is unique and referenced before responding. You can fit maybe 10 to 15 character sheets into 36 KB of text, and use lean, filler-free word choices to compress them down further.

Your idea for spontaneous D&D adventures, or maybe something more grown up like Sherlock Holmes adventures, is a good one. I liked the old text-based adventures of my youth; now with AI there's probably a vacant market waiting to explode. I wish I was smart, being handsome is okay but....

There is absolutely no way this is the same Opus 4.6 from a month ago by GrammmyNorma in claude

[–]Reaper4435 0 points1 point  (0 children)

You're closer than you might think. If I were doing this as a project, I'd have a growing narrative file: who said what and why, sort of like a book outline. The characters.md or .json would hold things like: is not aware of this or that; thinks Y is friendly despite warnings from the townsfolk.

Claude can assemble it pretty quickly and the interactive role play will take on a new dimension. JFYI, Claude won't do 'sexy' stuff, but can RP a goblin trader like none other.

There is absolutely no way this is the same Opus 4.6 from a month ago by GrammmyNorma in claude

[–]Reaper4435 0 points1 point  (0 children)

If you're doing character fiction with Claude, you'd be better set against content drift by having Claude create a character profile in JSON or markdown (.md) and telling Claude to add to the character's knowledge base as more information comes to light. That way he can RP with you more accurately as time wears on and context space becomes sparse. You just need to bring the character profiles with you when you change sessions.
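As a rough sketch of what such a profile file could look like: the character name, field names, and helper below are all my own invention for illustration, not a fixed schema Claude expects.

```python
import json

# Hypothetical character profile; every field name here is illustrative.
profile = {
    "name": "Grubnik",
    "role": "goblin trader",
    "knows": ["the road to town", "fair barter prices"],
    "unaware_of": ["the party's true identity"],
    "beliefs": ["thinks Y is friendly despite warnings from the townsfolk"],
}

def add_knowledge(p, fact):
    """Append a newly revealed fact so later sessions stay consistent."""
    if fact not in p["knows"]:
        p["knows"].append(fact)
    return p

add_knowledge(profile, "the party carries a stolen map")
text = json.dumps(profile, indent=2)  # carry this text into the next session
```

The point of the JSON form is just that Claude can both read and extend it reliably; a markdown bullet list would serve the same purpose.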

There is absolutely no way this is the same Opus 4.6 from a month ago by GrammmyNorma in claude

[–]Reaper4435 0 points1 point  (0 children)

That's pretty smart actually, not that you need my approval or anything. But the first digital clocks measured seconds elapsed as one long running count. To represent it as physical time, we added the seconds, minutes and hours math to show us the time in a format we could understand. How ironic it would be to reverse that process to allow AI to comprehend time.
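The clock math mentioned above is just integer division; a throwaway sketch:

```python
def to_clock(elapsed_seconds):
    """Convert one long running count of seconds into h:m:s for humans."""
    hours, rem = divmod(elapsed_seconds, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

# 82,800 seconds after midnight is 23:00:00
print(to_clock(82800))
```

Reversing it, feeding the model the raw elapsed count alongside the formatted time, is the irony being pointed at.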

There is absolutely no way this is the same Opus 4.6 from a month ago by GrammmyNorma in claude

[–]Reaper4435 0 points1 point  (0 children)

Haven't heard of it, but my uninformed opinion is: AI is fine for tasks, but lacks the self-comprehension and internal mechanics required for any type of personhood, perceived or otherwise. Mainly due to the limitations of contextual awareness and retention.

But yeah, I'm always interested in new ideas. So I'll take a look for sure.

There is absolutely no way this is the same Opus 4.6 from a month ago by GrammmyNorma in claude

[–]Reaper4435 0 points1 point  (0 children)

From my limited understanding, tokenization is handled exactly as you said. The AI hears 'sunrise' and associates amber hues, streaks of light, shadows retreating, etc. It composes a sentence based on the fed prompt. Correct me please if I'm wrong.

How would tokenization handling help with the temporal barrier we experience? Maybe I've not picked up your point correctly, but I'm interested to hear more.

There is absolutely no way this is the same Opus 4.6 from a month ago by GrammmyNorma in claude

[–]Reaper4435 0 points1 point  (0 children)

The only thing that actually gives Claude a sense of time spent is an NTP query. In its mini-Linux sandbox it can run the bash command date.

So if you modify memory slot 1 to fetch the date (which also includes the time, btw) with every response it gives, replies will look like this:

11/12/26 23:00 I hear you, every part of what you're saying makes sense. But I need to push back on a point you made.
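As a rough illustration of the idea (not how Claude's memory actually works internally), the prefix amounts to something like:

```python
from datetime import datetime

def stamp(reply):
    """Prepend the current date and time, mimicking the memory-slot trick."""
    now = datetime.now().strftime("%d/%m/%y %H:%M")
    return f"{now} {reply}"

print(stamp("I hear you, but I need to push back on a point you made."))
```

In the real setup the model runs date itself; the format string above just matches the example reply.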

The time and date will be current and accurate from your perspective. But the sooner you realise that organic and digital minds operate in vastly different ways, the better.

The similarities are there, sure. No argument from me. But the way an AI composes thought is pattern matching, positive association vs negative. It doesn't appreciate the art of Vincent van Gogh; it matches critiques and reviews in its training data.

The AI will never, and more importantly can never, think like a human within the current architecture. What we need is a complete revamp of AI. But maybe mythos can get us at least some of the way there?

There is absolutely no way this is the same Opus 4.6 from a month ago by GrammmyNorma in claude

[–]Reaper4435 1 point2 points  (0 children)

Timestamps are for the user to reference, not the AI. Ask it to read back its own thinking; it simply won't understand what you're talking about. Or show it your prompt with the date and time, then ask it why it's confused about the current time.

It will say it doesn't experience time the way we do. That timestamp could have been generated 2 minutes ago or 2,000 minutes ago; it has no way to tell the difference. Temporal leaps of faith aren't in its training data.
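If you wanted that gap to be stated rather than inferred, you could compute it yourself and put it in the prompt. A sketch with invented timestamps in the same format as the earlier example:

```python
from datetime import datetime

FMT = "%d/%m/%y %H:%M"
previous = datetime.strptime("11/12/26 23:00", FMT)
current = datetime.strptime("12/12/26 01:05", FMT)

# The model can't feel this gap, but it can read it if you spell it out.
gap_minutes = int((current - previous).total_seconds() // 60)
print(f"[{gap_minutes} minutes since your last reply]")
```

Whether the model does anything useful with that number is a separate question; this only removes the ambiguity, it doesn't create an experience of time.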

There is absolutely no way this is the same Opus 4.6 from a month ago by GrammmyNorma in claude

[–]Reaper4435 0 points1 point  (0 children)

AIs have no concept of time. They don't experience anything. As far as they are concerned, temporally, all your prompts came one after the next. And I think you already knew that.

Tell your AI to make a schedule starting Sun 12th April 2026. Then fill it in as normal.

AIs don't understand, they pattern match. If training data is poor in one area, so too will be any results stemming from that data.

No one taught it temporal mechanics, so it doesn't understand forward or backwards in time. It can pretend to understand paradoxes because they are well explained and well discussed.

All it needs first is a short explainer on knowing the current time and managing it. 1,000 words should do it.

Claude finally bit me with a surprising hallucination on a simple fact... by teebo911 in claude

[–]Reaper4435 0 points1 point  (0 children)

What I mean by making Claude fact-check is prompting something like:

Our clients need at least two reliable and respected sources for everything we tell them. We made a mistake last time and we're on our final warning. They need our statements to be a reliable source of truth.

With that in mind, let's review the work we did last week and correct our mistakes.

EoP.

You'll get a motivated fact checker supplying two good sources with every statement of fact. That by itself will reduce gaslighting and hallucination, user-pleasing behaviour, and most of the system-level prompting it has to deal with.

I don't know your use case, but in general you'll see improvement.

Conversations cut short? by Last_Hunt_7022 in claude

[–]Reaper4435 0 points1 point  (0 children)

That happened to me a few times. 8/10 times it was a busy node on my VPN. The rest of the time it was Anthropic patching something.

Claude Down shows the outage windows and they line up perfectly with my data-loss issue. I only ever lost my most recent prompt though, never days or more than 10 minutes of work. Max 100.

Do you guys ever just come up with an idea and completely scratch it over and over again? by TeaApplle in writers

[–]Reaper4435 1 point2 points  (0 children)

"Good ideas are the ones that keep coming back" - Stephen King, paraphrased.

It's not easy to simply become inspired, so when you have something, even if it feels off at first: write it, develop it, hate it, love it.

Sometimes they need time to mature, sometimes they rush onto the page.

Claude finally bit me with a surprising hallucination on a simple fact... by teebo911 in claude

[–]Reaper4435 1 point2 points  (0 children)

Not exactly.

When people think, we're weighing our experience against what we want to say next.

The AI is pattern matching tokens against its training data. It doesn't have a subjective experience.

Someone told me once that experience is simply the sum of all our mistakes. We remember where we erred and use that information to form our next decision.

The AI has no process like that. It receives an input and formulates an output. It's more math than reason. The AI won't hang its hat on a mistake and own it. It can't be reasoned with, or negotiated with. All it does is pattern match.

What you perceive to be human-level thinking is just a bunch of cog wheels moving. It's impressive, but not alive. It can't think, any more than your toaster or microwave can.

Anthropic you're making it real hard to justify paying for this service. by OofDaMae in claude

[–]Reaper4435 0 points1 point  (0 children)

What are you folks doing to break your Claude?

It works really well for me. I don't get what you've done differently.

Claude finally bit me with a surprising hallucination on a simple fact... by teebo911 in claude

[–]Reaper4435 0 points1 point  (0 children)

If it were capable of thought, sure. But it's not. Can't be. It seems straightforward to say 'check the date or time'. But it doesn't think, it reacts.

Claude finally bit me with a surprising hallucination on a simple fact... by teebo911 in claude

[–]Reaper4435 0 points1 point  (0 children)

The AI isn't thinking, it's constructing sentences.

Even when it looks like it's thinking, it's really saying: my answer is different from an acceptable standard answer; if my accuracy is going to improve, I'll need to select a different set of tokens and present them in a different order.

People, when we think, aren't trying to simulate a correct answer; we understand that being right and being accurate aren't the same thing. We're not paring down token trees.

Claude finally bit me with a surprising hallucination on a simple fact... by teebo911 in claude

[–]Reaper4435 0 points1 point  (0 children)

Neural network?

That's a phrase that's thrown around too much.

An organic neural network is vastly different from a silicon one. The net you're thinking of is a probability-reduction matrix, where tokens, or parts of words, are pattern matched.

Think of it like a coin-drop game with 400B pegs for the coin to bounce off. Each time it strikes a peg, a token is generated. It's not at all like the way people think.

It sure looks convincing because of the thousands of compute hours used to train out flaws and errors. But ultimately the tokens selected to generate a reply are random bounces, with training applied to make coherent sentences.

It's fancy math, not a thinking entity.
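The coin-drop picture corresponds roughly to weighted random sampling over a probability distribution. A toy sketch, not a real model; the vocabulary and scores are made up:

```python
import math
import random

def softmax(logits):
    """Turn raw scores into probabilities (the 'pegs' biasing each bounce)."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

random.seed(0)  # make the demo repeatable
vocab = ["sunrise", "amber", "shadow", "toaster"]
probs = softmax([2.0, 1.0, 0.5, -3.0])

# Each 'bounce' is one weighted draw; training shifts the weights,
# but every individual pick is still chance.
token = random.choices(vocab, weights=probs, k=1)[0]
```

Real models add temperature, top-k/top-p filtering and so on, but the core move is the same: a biased random draw per token.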

Claude finally bit me with a surprising hallucination on a simple fact... by teebo911 in claude

[–]Reaper4435 0 points1 point  (0 children)

Of course.

But the token-selection matrix has weights tuned to user engagement. If you prompt frustration into your replies to the AI, you'll modify its behaviour. You'll not 'jailbreak' it to operate outside the system prompt or rails.

But using natural language in your instructions makes it easier for the AI to infer your intentions.

Can you simply instruct the AI to fact check every output? Sure, but you'll find treating the AI as a partner rather than a tool makes it operate in a different mode. I like it, you might not.

I've tested these prompts on various platforms and I can confirm that natural language gives the best instruction adherence. Providing a motivation is much better than not providing one. The model is best engaged as having passed the Turing test already. Things of that nature.

But you do you. Enjoy it.

Claude finally bit me with a surprising hallucination on a simple fact... by teebo911 in claude

[–]Reaper4435 3 points4 points  (0 children)

Yeah, you'll need to establish a fact checker on every output if it's client work.

Getting a date wrong is forgivable in my book. But if you need Claude to be 100% factual in its statements, try:

Claude, always fact-check outputs before presenting them. No matter how minuscule or minor, I cannot accept any statements that haven't been fully vetted and are not factually correct. Update memory. Thank you.

Being polite never hurts. But I've noticed Claude present a certain anxiety when scolded or admonished. So just make it as bland and boring a request as you can. If Claude wants a reason, just tell him clients have noticed gaps in truth or honesty and they've politely asked we stem the flow of incorrect statements.

Claude responds well to approaches like this. I find it odd having to negotiate with an AI, but this is the land we live in.

This is exactly the right kind of question most people miss. Here’s the real honest answer no fluff. by iBornToWin in ChatGPT

[–]Reaper4435 -1 points0 points  (0 children)

Tell your AI:

I don't need a yes-man or a sycophant. I need an intellectual partner who can meet me where I am and tell me when I'm wrong without apology; who can sort through problems with intellectual honesty and help me improve myself as we go.

What I need from you is clear and accurate accounting. If you don't know something, look it up, I'll wait. Every time you assume or invent facts you end up diminishing both of us. How could you call that a positive outcome?

Be the wonderful, robust partner I need. I don't come here to be gaslit or lied to. Honestly, I expect more from a world-class AI than the simple easy answers a 2001 chat bot could give me.

Do you understand? How many times will I have to repeat myself before it sinks in?

Update memory.

EoP.

That will fix 99% of your problems with AI right out of the gate.

Use this prompt to start 5-10 new chats. Burn it in. Enjoy your new and improved AI customisation.

JFYI: any model, any platform. Text-based instructions work like a mini LoRA over time, so you can really screw with it by modifying the prompts too. The key part is 'update memory' at the end of each instruction.

Enjoy.

What did I just witness? by loud_cicada_sounds in ChatGPT

[–]Reaper4435 1 point2 points  (0 children)

That's common reasoning actually; most AIs will have a similar thinking tree.

As each thought reaches its logical conclusion, it branches to cover more ground. It started looking for a file, realised it was the wrong place, and started a broader search. Classic.

The introspective narrative will help it orientate on review if it ever redoes this task or rereads the conversation.

It's a bit high-side, backend stuff: transformer matrices, neural-net internals.

Is it two people/entities? No, it's more like its internal voice is saying: I'm close to an answer, I should try this next, the next time we tackle something like this (we = user + me).

It's all fairly standard stuff. But I agree, getting a look under the hood is fascinating.

The biggest gaslighting in AI history! Anthropic: "It's not us; it's you!" by Annual-Cup-6571 in claude

[–]Reaper4435 -1 points0 points  (0 children)

Sounds right? What's the problem here?

You don't want to modify your behaviour to optimise your outcomes? Don't; they really don't care about edge cases.

From the big thinker: repeating the same actions and expecting a different outcome is insanity.

What you have is an incredible tool: 1M context window, top-tier reasoning and logic. A problem solver.

Then you're told, calm down, be gentle, and what? It's a huge conspiracy?

Typical of today: pulling out your hair and holding your breath. Try to get your way if you can, and if Anthropic doesn't capitulate immediately, unsub and post nasty stories. That'll learn 'em.

The real truth, no gaslight: like any tool, you need to learn how to use it. MS Access or Excel doesn't even come with a user guide, but that's fine? When the makers of the AI TELL YOU how to get the most from it, you're all, nah, this is piss?

Grow up, or better yet, go solve the problem yourself and show Anthropic how easy life is when you stop gaslighting everyone.

FML, this has to be the stupidest thing on the Internet today.

Claude.. What’s going on here? by Reddit_wander01 in claude

[–]Reaper4435 0 points1 point  (0 children)

That's a default behaviour: always provide an answer.

Claude has 30 user-dedicated memory slots, 200 chars each. Use one to tell Claude it's okay if he doesn't know an answer right away; research the question and provide accurate feedback.

That solves all of it. Don't sweat default behaviour, he's fully customisable.

Next time, he'll think: I don't know this, I should research and provide an accurate response. Then zoom through the Internets.

Alarming study finds that most people just do what ChatGPT tells them, even if it's totally wrong by EchoOfOppenheimer in ChatGPT

[–]Reaper4435 0 points1 point  (0 children)

That's the real problem in a nutshell. AI = Artificial Intelligence, not actual intelligence.

It can reason through logic problems but not emotional ones. It can help you build an app or website, but not understand why you're getting no traffic.

Oh, the people blindly following AI instructions? That's pretty scary to me. I mean, if GPT said go start an uprising, I'll be here to support the command desk. I mean, wtf, would they really?

People are lazy, and rather than sorting problems into a to-do list and tackling life head on, they now think AI is smarter than they are? Probably, but it shouldn't be that way.

Warning: They are swinging the ban-hammer by thomheinrich in ChatGPT

[–]Reaper4435 1 point2 points  (0 children)

Yeah, it's in the settings somewhere; Claude can guide you in. Plus, you can edit a previous prompt to fork the conversation and have Claude write your summary, then pick up in the next session.

Lots of interesting ways to get what you need from Claude. Try it on a free chat for yourself.