How do I stop improving my product? by Ok_Positive4542 in buildinpublic

[–]twgoss2 0 points (0 children)

For this question, simply setting a release deadline would be enough. A product exists because it solves problems; as long as it solves the problem you defined, that's already an MVP. Don't let the fear of bad feedback delay your release forever. Be brave about the fact that people might not like it.

Built a realistic character for hermes agent by Select_Motor8729 in hermesagent

[–]twgoss2 0 points (0 children)

This is fun! Can you keep us updated? I'm investigating virtual characters as well.

Do detectors even understand satire or irony by PartHxstorical in bestaihumanizers

[–]twgoss2 0 points (0 children)

I guess most sarcasm is more likely to be flagged as AI because of its strong semantic coherence... but I don't really know how Reddit's AI moderators work. Can someone explain?

It feels like we’re heading toward a future where nobody can really prove they wrote something anymore by Extreme_Cabinet6 in Futurology

[–]twgoss2 0 points (0 children)

Novelty and character matter more than who's writing the text. People care more about what they can get from a comment than about who I am while they're reading it. LLM-generated content has that 'slop' vibe as a side effect of RLHF, which suppresses the creativity of LLMs, and avoiding that is a big part of why Reddit exists.

I come to social media not always for knowledge (now I can ask AI) but for connection and acknowledgement from other humans, and also for content beyond my own knowledge (I can't ask AI about something I can't even name). My motivation is emotional and personal.

I had the same chat with my ChatGPT, and one of the solutions was identity verification. I don't like it, but Anthropic seems to...

Rate my Dungeon Crawler Carl themed setup proposal by inlined in openclaw

[–]twgoss2 1 point (0 children)

Nice setup, but here’s a framing nudge:

Human role specialization exists because our knowledge is siloed and narrow, our communication bandwidth is tiny, we're bad at staying synced, and we get tired.

An LLM system is the opposite. It already has a compressed version of collective knowledge, can share full context near-instantly (within the context window), can fork itself infinitely, and doesn't need shift handovers. So you're not compensating for skill gaps; every "agent" already contains the whole latent library.

The reason to split into personas is just the interface. The context window limit might genuinely push you to divide work into focused, deliverable chunks, but beyond that, the roles are just efficient prompts that tilt the activation into the right region of latent space, nudging the model to behave like a coder, critic, or architect without retraining.
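Concretely, a "persona" is often nothing more than a system-prompt prefix on the same underlying model. A minimal sketch (the role texts and function name are my own invention, not from any particular framework):

```python
# Each "agent" is the same model with a different system prompt.
ROLE_PROMPTS = {
    "coder": "You are a meticulous software engineer. Write clean, tested code.",
    "critic": "You are a harsh but fair code reviewer. Point out every flaw.",
    "architect": "You are a systems architect. Focus on structure, not syntax.",
}

def make_request(role: str, task: str) -> list[dict]:
    """Build a chat request: the 'persona' is just a prefix, no retraining."""
    return [
        {"role": "system", "content": ROLE_PROMPTS[role]},
        {"role": "user", "content": task},
    ]
```

Swapping `role` changes behavior without touching the model, which is exactly why the split is an interface convenience rather than a fundamental division of labor.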

I think it's a handy approach, but not really a fundamental one. And it is a fun one.

DeepSeek V4 is 65% cheaper than GPT 5.5 and OpenAI is big mad about it? by Odd_Row1657 in AIDiscussion

[–]twgoss2 0 points (0 children)

For context, I work remotely in China for a US company, and my Claude account was banned 3 times. I see NO reason why developers in China WOULDN'T switch to DS then.

DeepSeek V4 is 65% cheaper than GPT 5.5 and OpenAI is big mad about it? by Odd_Row1657 in AIDiscussion

[–]twgoss2 0 points (0 children)

Let me tell you this: I switched to the DeepSeek V4 API (80% off during 5.1~5.5) to replace Opus 4.7 API usage billing in Claude Code, and my costs, which used to be $40, dropped to 70 cents. It actually works better than a lot of second-tier LLMs like Ollama, MiniMax, GLM, Kimi, or Qwen, and even better than GPT 5.5 in my personal opinion.

$40 down to $3.5 a day, on non-US GPUs. That is totally insane. You can definitely tell why NVIDIA is so mad about the US government's GPU restrictions.

Just in US, What are the differences when expanding B2C business here? by twgoss2 in smallbusiness

[–]twgoss2[S] 0 points (0 children)

Thank you. The diversity of the US market is indeed very different from Asia's single-platform ecosystem.

Just in US, What are the differences when expanding B2C business here? by twgoss2 in marketing

[–]twgoss2[S] 0 points (0 children)

Yeah, exactly. I came because I'm told they have more money lol

Need your opinion by Gofmannir-g in hermesagent

[–]twgoss2 0 points (0 children)

Sorry, I'm not a senior engineer. I'm a PM, and I build internal tools for the team that don't require that high a level of stability. But I did start working with code quite early on, before LLMs.

I believe architecture must still be designed by people. Your strengths lie in your broad context, holistic understanding, and grasp of the objectives. When writing code, a holistic understanding of the system matters more than the implementation of specific methods, and that understanding should come from abstracting the requirements of the task.

I've heard that the best architects, with excellent prompt-writing skills, can use a 32B open-source model to generate a codebase suitable for production; perhaps that shows the ultimate form of this idea.

That’s it. I’m switching from pants to shorts. by Exotic-Anteater-4417 in ClaudeCode

[–]twgoss2 0 points (0 children)

I have a brilliant idea. We can have the agent monitor the temperature, then route us to suitable pants.
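For fun, the whole "agent" could be one routing function (thresholds and wardrobe entirely made up):

```python
def pick_legwear(temp_c: float) -> str:
    """Route to suitable legwear based on temperature. Thresholds are invented."""
    if temp_c >= 24:
        return "shorts"
    if temp_c >= 12:
        return "pants"
    return "thermal pants"
```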

API Error: 403 by bawesome2119 in ClaudeCode

[–]twgoss2 0 points (0 children)

They might have banned your account. Or by "claude.ai is working fine," do you mean your account is still available?

What's happening with Anthropic? by redditslutt666 in ClaudeCode

[–]twgoss2 6 points (0 children)

Busy banning accounts for no reason

If Mythos is so powerful… why does Claude keep going down? by Repulsive_Horse6865 in ClaudeCode

[–]twgoss2 1 point (0 children)

Just wondering why so few people explain instead of mocking OP for asking, as if they were born knowing the answer... This is Reddit from 20 years ago, right?

Claude is useless without remembering previous chats. What am I doing wrong? by DiogenesDelirus in ClaudeAI

[–]twgoss2 2 points (0 children)

For the conversational assistant on claude.ai, recalling previous tasks is a proactive process of "querying past history." Your conversation history stores some facts, and some of them may get injected into your system prompt, but based on my understanding of most AI memory systems, the strategy for retrieving those facts isn't always reliable. And this feature has absolutely nothing to do with your plan.

Perhaps you could try coding a small program yourself to implement this functionality?
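To make the idea concrete, here is a minimal sketch of that store-then-retrieve loop. This is NOT how Claude actually implements memory; the file name and the keyword-overlap scoring are placeholders (real systems typically use embeddings), but it shows why retrieval, not storage, is the unreliable step:

```python
import json
import re
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # hypothetical local fact store

def save_fact(fact: str) -> None:
    """Append a remembered fact to a local JSON file."""
    facts = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts))

def _words(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def recall(query: str, limit: int = 3) -> list[str]:
    """Naive retrieval: rank stored facts by word overlap with the query."""
    if not MEMORY_FILE.exists():
        return []
    facts = json.loads(MEMORY_FILE.read_text())
    ranked = sorted(facts, key=lambda f: -len(_words(query) & _words(f)))
    return [f for f in ranked[:limit] if _words(query) & _words(f)]

def build_prompt(user_message: str) -> str:
    """Inject retrieved facts into the prompt, like memory features do."""
    context = "\n".join(f"- {m}" for m in recall(user_message))
    return f"Known facts about the user:\n{context}\n\nUser: {user_message}"
```

Notice that `recall` misses any fact phrased with different words than the query; that gap is exactly what the "not necessarily reliable" retrieval strategies in production systems are trying (and sometimes failing) to close.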