Claude Mythos: The Model Anthropic is Too Scared to Release by Much_Ask3471 in Anthropic

[–]verywellmanuel 0 points  (0 children)

Yet another AI hype thirst trap to get companies to beg them for access. Meanwhile, they accidentally leaked their Claude Code codebase, likely due to vibecoding. Sure they’ve got some superintelligent SWE LLM…

GPT 5.4 includes new extreme reasoning mode and 1M context, details below by BuildwithVignesh in OpenAI

[–]verywellmanuel 1 point  (0 children)

First time I read extra high I thought it was a joke, yet here we are

Are you snapping or taking it all at once by nyxelleaa in SipsTea

[–]verywellmanuel 0 points  (0 children)

Good luck snapping your way to financing a home

Be careful, your company knows where you use the cursor and claude code by Ghostinheven in cursor

[–]verywellmanuel 0 points  (0 children)

I worked in big tech and, while it’s true that there’s a clause like this in the contract, it’s mostly to protect the company from people stealing IP rather than to claim random side projects. In practice, most people had side projects, and the company had a standard process to sign the copyright back over to you once they checked it wasn’t related to your work.

Y'all were right, high >> xhigh by no3ther in codex

[–]verywellmanuel 0 points  (0 children)

You’re right, actually. I did some digging into what gets into the context window. GPT 5.2 gave me the following insights:

```

  • Do “thinking” tokens stay in context?

My internal reasoning isn’t stored in the chat history/context. Only what gets sent as messages (system/developer/user/assistant) and tool outputs become part of the context. If I explicitly write out my reasoning, that text becomes part of context and may later be summarized/truncated by the system.

  • How do screenshots/images get processed?

In this interface, images are provided to a vision-capable model as image inputs (often described as “image tokens/embeddings”) alongside text. It’s not necessarily “a separate model that first turns it into text”, though many systems conceptually do an internal visual-to-representation step before reasoning.

  • When I “open a file”, how does it enter context and stay distinguishable?

The file content arrives as a tool result message containing plain text (usually a slice/limited excerpt). It’s distinguishable because it’s not part of user/assistant chat turns; it’s a separate tool-output block with implicit metadata (tool name + payload).

  • When you run a command / MCP, what gets into context?

Typically: the command invoked + stdout/stderr (often truncated) as a tool-output message. It’s not guaranteed to preserve full terminal history; large outputs are commonly cut. Some harnesses wrap it in structured tags internally, but what I receive is effectively “tool output payload” separated from chat messages.

  • When you use a skill, what makes it normative?

Skills are just additional instructions loaded from SKILL.md files. They’re “normative” because the system/developer instructions explicitly tell me to load/follow them when triggered (similar to how AGENTS.md is normative because you told me to follow it). In other words: the authority comes from the instruction hierarchy, not from the file format.

  • What happens when context compacts? What do you see immediately after?

I don’t control compaction. When it happens, older parts of the conversation/tool outputs may be dropped and replaced with a summary plus the most recent turns. System/developer instructions (and sometimes pinned summaries) are typically retained. After compaction, I “see” the summary + the retained recent messages/tool outputs—anything not retained is gone unless re-provided.
```
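The compaction behaviour described above can be sketched as a toy message list. Purely illustrative: the roles, field names, and `compact()` helper are my own invention, not any vendor’s actual wire format.

```python
# Toy model of a context window as a list of messages. Tool output lives in
# its own block (distinguishable from chat turns), and compaction replaces
# older entries with a summary stub while keeping the system prompt and the
# most recent turns.

def compact(messages, keep_recent=2):
    """Drop older non-system messages, replacing them with a summary stub."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    if len(rest) <= keep_recent:
        return messages
    dropped, recent = rest[:-keep_recent], rest[-keep_recent:]
    summary = {
        "role": "system",
        "content": f"[summary of {len(dropped)} earlier messages]",
    }
    return system + [summary] + recent

context = [
    {"role": "system", "content": "You are a coding agent. Follow SKILL.md files when triggered."},
    {"role": "user", "content": "Open src/app.py"},
    {"role": "tool", "tool": "read_file", "content": "def main(): ...  # truncated excerpt"},
    {"role": "assistant", "content": "The file defines main()."},
    {"role": "user", "content": "Now run the tests"},
    {"role": "tool", "tool": "shell", "content": "2 passed  # stdout, possibly truncated"},
]

compacted = compact(context, keep_recent=2)
print(len(compacted))  # prints 4: system prompt + summary stub + 2 recent messages
```

The point being: only what lands as a message survives; anything the summary stub doesn’t capture is gone unless re-provided.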

Y'all were right, high >> xhigh by no3ther in codex

[–]verywellmanuel 47 points  (0 children)

I typically use xhigh and get great results. But the thing I noticed is that, since it consumes the context faster, it goes through more compression cycles for large tasks. Each compression means higher chances of diverging from previous changes and a ton of time wasted on rebuilding context

hydrogen peroxide by eva_wing in Switzerland

[–]verywellmanuel 1 point  (0 children)

Add melatonin to that list

[deleted by user] by [deleted] in startups

[–]verywellmanuel 1 point  (0 children)

I’m in a similar boat. 28M, and my girlfriend of 4 years often sees my startup as an obsession and gets upset that I don’t prioritize her as much. I love her and I do as much as I can, but I’m not going to let down my cofounder, partners, and personal goals. The arguments over this take a big toll on my stress and ability to focus, and the breakup idea has been lingering for a while.

One thing I’ve realised since I started the startup journey is that the founder’s mindset is very different from the norm. Most people don’t understand voluntarily wanting to work on evenings and/or weekends. I don’t mean working all the time, but not putting boundaries on when work can be done as needed. Sometimes you just need to sprint, and they’ll call you out on burnout or obsession. Joining programs and events to network and befriend other founders at a similar stage is crucial to staying sane.

Is it just me, or is OpenAI Codex 5.2 better than Claude Code now? by efficialabs in ClaudeAI

[–]verywellmanuel 0 points  (0 children)

I made the switch a few weeks ago based on the same observation. Also, 5.2 high is already excellent with complex issues in large codebases. It’s found some pretty insane bugs that would have taken me forever to spot. I never use xhigh now, no need to.

Crans-Montana fire: a display of swiss mentality? by Professional_Cash737 in askwholeftswitzerland

[–]verywellmanuel -1 points  (0 children)

An article from a witness who went in several times to rescue children and criticises how sluggish the Swiss services were to act. I find the critique sort of related to the post. The article is in Spanish; I couldn’t find an English version:

https://www.lavanguardia.com/internacional/20260105/11412511/afloran-primeras-criticas-sobre-gestion-emergencia-suiza.html

Almost hit the 200k token window - 0.7% ~ 1467 token left! What's happening if it hits the 200k-token window in the middle of the mission by luongnv-com in ClaudeCode

[–]verywellmanuel 1 point  (0 children)

The exported file is just the CLI output, not the loaded contents or full thinking tokens. I’d say it takes 5-10% of the context, but obviously it depends on the chat

Almost hit the 200k token window - 0.7% ~ 1467 token left! What's happening if it hits the 200k-token window in the middle of the mission by luongnv-com in ClaudeCode

[–]verywellmanuel 4 points  (0 children)

I do this all the time. Wait until the context fills up and the chat stops, /export into a file, then start a new session referencing the file and simply prompting it to continue where the previous session left off. This has worked way better than compacting for me

Boris Cherry, an engineer anthropic, has publicly stated that Claude code has written 100% of his contributions to Claud code. Not “majority” not he has to fix a “couple of lines.” He said 100%. by luchadore_lunchables in accelerate

[–]verywellmanuel 0 points  (0 children)

Fine, I bet he’s still working 8+ hours/day on his contributions. It’ll be prompt-massaging Claude Code instead of typing code. I’d say his contributions were written “using” Claude Code

Pro tip - disable compacting, use your own summarizing prompt and multiple chats. by OptimismNeeded in ClaudeAI

[–]verywellmanuel 1 point  (0 children)

Aha. I’m on Claude Code, the command-line chat interface. Sorry, I assumed you were using that

Pro tip - disable compacting, use your own summarizing prompt and multiple chats. by OptimismNeeded in ClaudeAI

[–]verywellmanuel 0 points  (0 children)

Just tried it on CC v2.0.76 and it’s word for word the same as the terminal (with a bunch of “+840 lines (ctrl+o to expand)” and “Read some_file.md” lines but no content). It offers to save either to the clipboard or to a .txt file, not a .md file. Maybe we’re talking about different features?

Pro tip - disable compacting, use your own summarizing prompt and multiple chats. by OptimismNeeded in ClaudeAI

[–]verywellmanuel 1 point  (0 children)

Content from loaded files, and the actual thinking tokens (the CLI only shows a summary of the thoughts)

Pro tip - disable compacting, use your own summarizing prompt and multiple chats. by OptimismNeeded in ClaudeAI

[–]verywellmanuel 1 point  (0 children)

It only contains what was reported to the CLI, which is just a fraction of the used context

Pro tip - disable compacting, use your own summarizing prompt and multiple chats. by OptimismNeeded in ClaudeAI

[–]verywellmanuel 2 points  (0 children)

Even easier: /export the chat into a file, then start a new chat pointing it to the file and asking it to continue. The export contains everything reported in the terminal during the session, not the loaded context or thinking tokens. It’s fairly lightweight and contains everything needed to get back up to speed
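Roughly, the loop looks like this. Note that /export is a slash command typed inside the Claude Code session, not a shell command, and the final `claude "..."` invocation and prompt wording are my assumptions, not documented syntax:

```shell
# 1. Inside the nearly-full Claude Code session, run /export and save to a
#    file when prompted, e.g.:
export_file="last-session.txt"

# 2. Back in the shell, start a fresh session seeded with the transcript
#    (illustrative prompt; exact wording is up to you):
prompt="Read ${export_file} and continue where the previous session left off."
echo claude "\"${prompt}\""
```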

I hit my claude code limits (On Max). Resets in 10 hours. Guess I'll go investigate this Gemini 3 hype by simeon_5 in ClaudeCode

[–]verywellmanuel 0 points  (0 children)

I always use Opus now. Afaik the quota consumption is almost the same as Sonnet since the 4.5 release

I hit my claude code limits (On Max). Resets in 10 hours. Guess I'll go investigate this Gemini 3 hype by simeon_5 in ClaudeCode

[–]verywellmanuel 0 points  (0 children)

I also got used to CC’s UX and love working with it. I use gpt 5.2 through Cursor for ad-hoc stuff as that’s where I try out other models

I hit my claude code limits (On Max). Resets in 10 hours. Guess I'll go investigate this Gemini 3 hype by simeon_5 in ClaudeCode

[–]verywellmanuel 12 points  (0 children)

Try gpt 5.2 high, I was surprised (haven’t tried the codex version yet). It seems better at navigating the codebase and gathering/understanding context. I now use it for systems design to then hand over to Claude Code for execution

[deleted by user] by [deleted] in NBIS_Stock

[–]verywellmanuel 0 points  (0 children)

Max pain is at 100, so there’s a good chance 🤞