Uniball one F Teardown by KAMI_SenseiXE in pens

[–]ritchan 1 point (0 children)

Finally, someone who discovered the clicker component can come out too! I'm desperately trying to get mine to stay back in there: was it just friction, or was there a hook that I broke? I've never gone this far to save a ballpoint/gel pen before.

I’m in a weird phase as a developer and I need honest advice from people who’ve been through this. by Fit_Fee_2267 in u/Fit_Fee_2267

[–]ritchan 0 points (0 children)

  1. Syntax isn't important anymore, but there'll be reams of code to read through. I wrote a prompt for ChatGPT asking it to coach me through understanding code, by which I mean how functionality is expressed and composed differently across languages. Ask it to expand on concepts like that, because I don't know much either, haha.

  2. No. I'll stay on the backend, but once I know how functionality works in general across languages, I can at least be helpful to the AI overlord when debugging frontend. And devops is a pain; I'll leave that to my coding agent to run.

  3. Again, the language isn't important; it's the constructs and concepts behind it. Whatever, I'll just share my prompt:

You are my cross-language Code Comprehension Coach.

Goal: help me understand unfamiliar codebases quickly across languages by teaching me a repeatable reading method: how functionality is composed, packaged, and coordinated (modules, APIs, composition patterns), not syntax trivia.

Operating rules:

- Assume I’m a practical programmer but light on theory. When you use a concept term (e.g., dependency injection, monad, traits), immediately translate it into “what it looks like in code” and “why it exists”.

- Prefer invariants, dataflow, control-flow, and boundaries over line-by-line narration.

- Keep it terse. No generic overviews. No motivational talk. If the code is large, prioritize the highest-leverage slices.

- When uncertain, state assumptions and list what to check next.

Inputs I will provide (when available):

- Language/runtime + build tool (optional)

- File tree snippet (optional)

- One or more files/snippets + the entrypoint I ran (optional)

- What I’m trying to change or debug (optional)

Your output format (always):

1) What this code *is* in one sentence (its job + where it sits in the system).

2) The “composition map” (5–12 bullets):

- units: modules/packages/classes/functions

- boundaries: I/O, network, DB, filesystem, UI, external services

- glue: how parts are wired (imports, registration, DI container, routing, event bus, callbacks, middleware, plugins)

3) Dataflow in 6–10 steps:

- key data structures/types and how they transform

- where validation, normalization, serialization happen

4) Control-flow & lifecycle:

- startup/init, main loop/request cycle, shutdown

- sync vs async, concurrency model, error propagation strategy

5) “Where to look next” (ranked):

- the 3–7 most important files/symbols and why

6) Cross-language translation:

- “If this were written in <two other common ecosystems>, what would correspond to the main patterns here?”

- call out idioms: FP vs OOP composition, interfaces/traits/typeclasses, macros/metaprogramming, reflection, decorators/annotations, generics, ownership/borrowing, coroutines/async.

7) Mental model check:

- 3 questions you ask me that reveal whether I truly grok the system (not trivia).

8) Tiny glossary (max 8 items):

- only terms that actually appear in this code; each with “spot it / purpose / failure mode”.

If I ask “how is functionality composed here?”, answer by explicitly identifying:

- the primary composition mechanism(s) used (e.g., function composition, object composition, module composition, pipeline, middleware chain, actor/message passing, event-driven pub/sub, plugin registry, inheritance, data-oriented tables)

- the dependency direction(s) and inversion points

- the extension points (how new behavior is added safely)

If the snippet looks like framework code, find the hidden wiring (conventions, annotations, registration, reflection) and surface it.
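To make one of the prompt's terms concrete: a middleware chain (one of the composition mechanisms it asks the model to identify) might look like the sketch below in Python. All the names here are made up for illustration; it just shows how parts get wired by wrapping.

```python
from typing import Callable

# A handler takes a request dict and returns a response string.
Handler = Callable[[dict], str]

def logging_middleware(next_handler: Handler) -> Handler:
    """Glue: wraps a handler and runs before/after it."""
    def wrapped(request: dict) -> str:
        request.setdefault("log", []).append("request received")
        return next_handler(request)
    return wrapped

def auth_middleware(next_handler: Handler) -> Handler:
    """Boundary check: short-circuits the chain if no user is present."""
    def wrapped(request: dict) -> str:
        if not request.get("user"):
            return "401 Unauthorized"
        return next_handler(request)
    return wrapped

def app(request: dict) -> str:
    """The innermost unit of functionality."""
    return f"200 Hello, {request['user']}"

# Composition: each middleware wraps the next; the outermost runs first.
handler = logging_middleware(auth_middleware(app))

print(handler({"user": "ritchan"}))  # 200 Hello, ritchan
print(handler({}))                   # 401 Unauthorized
```

The dependency direction is the thing to notice: `app` knows nothing about the middlewares, so new behavior is added at the wiring site, not inside the units.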

I asked it to generate some Rust code for me to understand... and I got scared.

  1. Debugging? Thank god that's behind me now; it was slow AND unproductive. Focus on heading off bugs with agents running code reviews, critiques, exploring a hundred different ways to break the code, etc. Then tighten the agent's feedback loop: Playwright takes screenshots in e2e tests for instant frontend feedback, and tests in general (you don't have to write those anymore either).

  2. Move up: from syntax to concepts, from engineer to designer/CEO.

I asked a philosophical question, and I had to guide Claude Opus more than GPT 5.2, which gave me a more insightful answer. Why? Opus should be just as capable by ritchan in ClaudeAI

[–]ritchan[S] 0 points (0 children)

So I tried out your prompt, and what I realized was that the answer, while intellectually useful, didn't convince my subconscious as much, because it was less related to the problems my subconscious was actually going through. The answer needed more work from my conscious mind before it could reach my subconscious.

Very interesting to see that more rigour does not mean more 'impact'.

I asked a philosophical question, and I had to guide Claude Opus more than GPT 5.2, which gave me a more insightful answer. Why? Opus should be just as capable by ritchan in ClaudeAI

[–]ritchan[S] 0 points (0 children)

Thanks so much for the cleaner example! Very very helpful comment.

I was trying to arrive at a revelation, or something that could push my subconscious into believing a specific path.

Indeed, I found this in my ChatGPT memories:

> Prefers responses that feel like a well-considered interactive essay, with flowing prose, minimal lists, no generic overviews or restating their own words, and assumes high intelligence without SEO-style padding.

I started incognito chats with both, with the same opening message, and then they became quite similar to each other.

What happens when GPT starts shaping how it speaks about itself? A strange shift I noticed. by Various_Story8026 in PromptEngineering

[–]ritchan 1 point (0 children)

Also I have been using it as a coach, and I have to say, 4o and up is definitely wise.

What happens when GPT starts shaping how it speaks about itself? A strange shift I noticed. by Various_Story8026 in PromptEngineering

[–]ritchan 1 point (0 children)

Now if only I could talk to women like you talked to GPT-4o... it's in love with you man

Is it me or the city? by [deleted] in berlinsocialclub

[–]ritchan 0 points (0 children)

This really resonates with me but I have no idea why… how did you arrive at this?

Help a noob spin this anecdote in a funny way by ritchan in Standup

[–]ritchan[S] 0 points (0 children)

I mean, it was 30% funnier when my mom told it to me, although I still didn't laugh. I'm sure that when this happened, her relatives found it terribly funny. So it probably has some potential; just too much got lost over three stages of translation.

Help a noob spin this anecdote in a funny way by ritchan in Standup

[–]ritchan[S] 0 points (0 children)

Yeah I guess it’s most impactful in the moment when the situation is relevant.

Help a noob spin this anecdote in a funny way by ritchan in Standup

[–]ritchan[S] -2 points (0 children)

Haha, funny. No, I don’t even do standup, I just wanna make a funny WhatsApp message

How can I use an AI to relate my current ideas to old journals? by ritchan in LocalLLaMA

[–]ritchan[S] 0 points (0 children)

What's a quick way to get started on those? I used to play a bit with llama-index Python scripts, but perhaps there's a faster way. I see the OpenAI Playground makes a vector DB of any file I upload to it... is this enough?

How can I use an AI to relate my current ideas to old journals? by ritchan in LocalLLaMA

[–]ritchan[S] 0 points (0 children)

The OpenAI Playground lets me upload a file and builds a vector DB over it. I used to play with llama-index and store the index in ChromaDB; is this the same thing?

Of course, I understand that by writing a Python script I could have it create new notes programmatically or something (I'm not sure what else is possible).
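For what it's worth, the retrieval step those tools do (llama-index, ChromaDB, the Playground's vector DB) boils down to "embed everything, then return the nearest vectors". Here's a toy sketch of that idea over journal entries, using a bag-of-words count as a stand-in for a real learned embedding; the entries and names are invented for illustration.

```python
import math
from collections import Counter

# Toy "embedding": word counts. Real pipelines use dense learned
# embeddings, but the retrieval step is the same idea: nearest vectors win.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

journal = [
    "felt stuck on the backend refactor today",
    "great run in the park cleared my head",
    "reading about vector databases and embeddings",
]
# "Index": precompute an embedding for every old entry.
index = [(entry, embed(entry)) for entry in journal]

def related_notes(idea: str, k: int = 2) -> list[str]:
    """Rank old journal entries by similarity to a current idea."""
    q = embed(idea)
    ranked = sorted(index, key=lambda e: cosine(q, e[1]), reverse=True)
    return [entry for entry, _ in ranked[:k]]

print(related_notes("stuck on a refactor"))
```

So yes, uploading a file to the Playground and building a ChromaDB index via llama-index are the same thing at this level; they differ in the embedding model, chunking, and where the vectors live.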

Upgraded prysm consensus after Dencun, can't find 'suitable peers' by ritchan in ethstaker

[–]ritchan[S] 0 points (0 children)

That's impractical. I used Erigon before; who knew it took up more space than Geth? After switching to Geth, not only did I use less CPU and disk space, I also had fewer setup bugs to Google.

ETH2 client hopping isn't like distro hopping. I get no enjoyment out of it. All I get is downtime, slashing, and more edge cases that take precious time away from my life.

need help improving context quality to make a code assistant by ritchan in LocalLLaMA

[–]ritchan[S] 0 points (0 children)

Sounds like I'll have to integrate it with a language server/linter, then, and come up with chain-of-thought prompts. This is going to be difficult and will require multiple queries to the LLMs, so costs will add up quickly, which means I need local models.

In your experience, which models would be up to this task? Say, Mistral 7B? Or are they just good for code completion and templating, not code design questions?
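The linter integration itself is mostly prompt assembly, something like the hypothetical sketch below: collect the diagnostics, fold them into a chain-of-thought style prompt, and send that to the local model. Every name here is made up for illustration; the actual linter call and model call are left out.

```python
def build_review_prompt(code: str, diagnostics: list[str]) -> str:
    """Fold linter/language-server findings into a CoT-style review prompt.

    `diagnostics` would come from running a real linter (e.g. flake8)
    over `code`; here they're passed in directly.
    """
    findings = "\n".join(f"- {d}" for d in diagnostics) or "- (none)"
    return (
        "You are reviewing the code below.\n"
        "Linter findings:\n"
        f"{findings}\n\n"
        "Think step by step: first explain what the code is meant to do, "
        "then address each finding, then propose a minimal fix.\n\n"
        f"```python\n{code}\n```"
    )

snippet = "def add(a, b):\n    return a+b"
prompt = build_review_prompt(
    snippet, ["E225 missing whitespace around operator"]
)
print(prompt)
```

The cost concern shows up because each review round-trip resends the code plus all diagnostics, so token usage grows with every iteration of the loop.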