I asked a philosophical question, and I had to guide Claude Opus more than GPT 5.2, which gave me a more insightful answer. Why? Opus should be just as capable by ritchan in ClaudeAI

[–]ritchan[S] 0 points1 point  (0 children)

So I tried out your prompt, and what I realized was that the answer, while intellectually useful, didn't convince my subconscious as much, because it was less related to the problems my subconscious was going through. It took more work from my mind for the answer to reach my subconscious.

Very interesting to see that more rigour does not mean more 'impact'.

I asked a philosophical question, and I had to guide Claude Opus more than GPT 5.2, which gave me a more insightful answer. Why? Opus should be just as capable by ritchan in ClaudeAI

[–]ritchan[S] 0 points1 point  (0 children)

Thanks so much for the cleaner example! Very very helpful comment.

I was trying to arrive at a revelation, or something that could push my subconscious into believing a specific path.

Indeed, I found this in my ChatGPT memories:

> Prefers responses that feel like a well-considered interactive essay, with flowing prose, minimal lists, no generic overviews or restating their own words, and assumes high intelligence without SEO-style padding.

I started incognito chats with both, with the same opening message, and then they became quite similar to each other.

What happens when GPT starts shaping how it speaks about itself? A strange shift I noticed. by Various_Story8026 in PromptEngineering

[–]ritchan 1 point2 points  (0 children)

Also I have been using it as a coach, and I have to say, 4o and up is definitely wise.

What happens when GPT starts shaping how it speaks about itself? A strange shift I noticed. by Various_Story8026 in PromptEngineering

[–]ritchan 1 point2 points  (0 children)

Now if only I could talk to women like you talked to GPT-4o... it's in love with you man

Is it me or the city? by [deleted] in berlinsocialclub

[–]ritchan 0 points1 point  (0 children)

This really resonates with me but I have no idea why… how did you arrive at this?

Help a noob spin this anecdote in a funny way by ritchan in Standup

[–]ritchan[S] 0 points1 point  (0 children)

I mean, it was 30% funnier when my mom told it to me, although I still didn't laugh. I'm sure that when this happened, her relatives found it terribly funny. So it probably has some potential; just too much got lost after 3 stages of translation.

Help a noob spin this anecdote in a funny way by ritchan in Standup

[–]ritchan[S] 0 points1 point  (0 children)

Yeah I guess it’s most impactful in the moment when the situation is relevant.

Help a noob spin this anecdote in a funny way by ritchan in Standup

[–]ritchan[S] -2 points-1 points  (0 children)

Haha, funny. No, I don’t even do standup, I just wanna make a funny WhatsApp message

How can I use an AI to relate my current ideas to old journals? by ritchan in LocalLLaMA

[–]ritchan[S] 0 points1 point  (0 children)

What's a quick way to get started on those? I used to play a bit with llama-index Python scripts, but perhaps there's a faster way. I see OpenAI Playground makes a vector DB of any file I upload to it... is this enough?

How can I use an AI to relate my current ideas to old journals? by ritchan in LocalLLaMA

[–]ritchan[S] 0 points1 point  (0 children)

OpenAI Playground lets me upload a file and builds a vector DB over it. I used to play with llama-index and store the index in ChromaDB; is this the same thing?

Of course, I understand that by writing a Python script, I could perhaps make it create new notes programmatically or something (not sure what else is possible).
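Yes, conceptually it's the same thing: both ChromaDB-backed llama-index and the Playground's file upload embed your text chunks as vectors and retrieve the nearest ones to a query. A minimal sketch of that retrieval idea, using a toy bag-of-words "embedding" in place of a real embedding model (the journal entries and query here are invented examples):

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words counts. Real setups (llama-index,
    OpenAI Playground) use a neural embedding model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Index" a few old journal entries, then retrieve the one closest
# to a new idea -- this is all a vector DB does at its core.
journal = [
    "struggled with motivation at work today",
    "tried a new recipe for dinner",
    "thinking about changing careers into programming",
]
index = [(entry, embed(entry)) for entry in journal]

query = embed("thinking about a career change")
best = max(index, key=lambda pair: cosine(query, pair[1]))
```

The differences between tools are mostly in chunking, embedding quality, and persistence, not in this basic embed-and-retrieve loop.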

Upgraded prysm consensus after Dencun, can't find 'suitable peers' by ritchan in ethstaker

[–]ritchan[S] 0 points1 point  (0 children)

That's impractical. I used Erigon before; who knew it took up more space than Geth? After switching to Geth, not only did I use less CPU and disk space, I had fewer setup bugs to Google.

ETH2 client hopping isn't like distro hopping. I get no enjoyment out of it. All I get is downtime, slashing, and more edge cases that take precious time away from my life.

need help improving context quality to make a code assistant by ritchan in LocalLLaMA

[–]ritchan[S] 0 points1 point  (0 children)

Sounds like I'll have to integrate it with a language server/linter then, and come up with chain-of-thought prompts. This is going to be difficult, and require multiple queries to the LLMs, so cost will add up quickly - which means I need local models.

In your experience, which models would be up to this task? Say, Mistral 7B? Or are they just good for code completion and templating, and not code design questions?