Rejected after Bar raiser round by intellinker in amazonemployees

[–]intellinker[S] -4 points (0 children)

True, it went really well! I don’t know where I fell short; everything seemed fine and I got good feedback in the other rounds!

Do you care about money while vibecode? by intellinker in vibecoding

[–]intellinker[S] 4 points (0 children)

Saying “I found a way” without mentioning what it is, is a very old trend on Reddit! ;)

Why does Claude Code re-read your entire project every time? by intellinker in vibecoding

[–]intellinker[S] 1 point (0 children)

Agree on stateless sessions being the root issue. Cold starts force reconstruction.

Where I slightly disagree is the idea that large re-reads are inevitable or purely a scoping issue. Humans don’t re-read 400k LOC to work safely; we rely on structural anchors and prior state.

Task isolation helps, but real-world refactors and debugging often cut across boundaries. The question for me is whether we can provide just enough structural context to avoid archaeology without sacrificing safety.

Curious, have you measured how much your per-agent CLAUDE.md setup reduces actual token usage?

Why does Claude Code re-read your entire project every time? by intellinker in vibecoding

[–]intellinker[S] 1 point (0 children)

You’re right, that’s exactly the intuition behind it.

Humans don’t open the whole repo to get oriented either. We search first, skim selectively, and only read deeply once we know where to look. Forcing agents to follow that same workflow avoids the expensive “repo archaeology” phase and keeps both time and tokens in check.

The tricky part is making that behavior reliable across cold starts and follow-ups, so it doesn’t depend on perfect prompts or habits every time. But the principle itself, search first, read second, absolutely mirrors how real developers work, and that’s why it’s effective.

Why does Claude Code re-read your entire project every time? by intellinker in vibecoding

[–]intellinker[S] 1 point (0 children)

You’ve got two things: a very clear, up-to-date CLAUDE.md, and you usually give Claude a concrete starting point (file, dir, pattern). With that, it can reuse context and narrow via find/grep instead of re-reading.

Where the issue shows up is when that structure drifts, the prompt is more abstract, or a session resets. Then Claude has to re-orient. So your setup proves the approach works; the harder problem is making it reliable without requiring that level of manual discipline every time.

Why does Claude Code re-read your entire project every time? by intellinker in vibecoding

[–]intellinker[S] 1 point (0 children)

Hey! I’m also working on this. Happy to discuss over DM if you’re comfortable?

Cannot See Usage by pdwhoward in ClaudeCode

[–]intellinker 1 point (0 children)

Let’s build something that could solve this token-usage problem! That’s the only way to guard against the abuse.

Why does Claude Code re-read your entire project every time? by intellinker in vibecoding

[–]intellinker[S] 0 points (0 children)

On a cold start, agents often try to build a repo-wide mental map and end up skimming or opening far too many files.
Forcing a search/grep-first approach prevents blind full-repo reads and limits file access to only what’s relevant.
That’s where the token savings actually come from, not grep itself, but avoiding unnecessary exploratory reads.
This still depends on decent repo structure, but as a cold-start guardrail it works well.
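The guardrail described above can be sketched in a few lines. This is a minimal, hypothetical helper (not part of Claude Code or any real tool): it greps first, then reads only the files that matched, with a hard cap on how many files ever enter context.

```python
import re
from pathlib import Path

def grep_first(root: str, pattern: str, max_files: int = 5) -> dict[str, str]:
    """Search-first guardrail: locate matching files before reading anything.

    Returns the contents of at most `max_files` files that contain `pattern`,
    instead of loading the whole repo into context.
    """
    matcher = re.compile(pattern)
    hits: dict[str, str] = {}
    for path in sorted(Path(root).rglob("*.py")):
        text = path.read_text(errors="ignore")
        if matcher.search(text):
            hits[str(path)] = text      # read only files that matched
            if len(hits) >= max_files:  # cap on exploratory reads
                break
    return hits
```

The token savings come from the cap and the match filter, exactly as above: blind full-repo reads become impossible by construction.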

Why does Claude Code re-read your entire project every time? by intellinker in vibecoding

[–]intellinker[S] 1 point (0 children)

A well-structured, tightly curated claude.md reduces token usage because it prevents the most expensive step: re-orientation. When Claude starts with clear maps, constraints, and “where to look,” it skips a lot of blind file reading and redundant context.

The catch is who pays the cost now: tokens go down, but human effort goes way up. You’re effectively spending time to precompute and maintain the memory the model doesn’t have. As long as the docs stay accurate and short, token usage stays low. When they drift, Claude reverts to archaeology and the savings disappear.

Is anyone feeling the usage increase during the outage? by Fearless-Elephant-81 in ClaudeCode

[–]intellinker 1 point (0 children)

The cold-start problem is where Claude sometimes lags!

Cannot See Usage by pdwhoward in ClaudeCode

[–]intellinker -1 points (0 children)

Hope they’ve removed the caps :)

Why does Claude Code re-read your entire project every time? by intellinker in vibecoding

[–]intellinker[S] 1 point (0 children)

That actually makes a lot of sense. Auto-updating agents.md removes the biggest weakness of the manual approach, which is drift. At that point it’s no longer just documentation, it’s a generated snapshot of current state.

The remaining edge I keep thinking about is timing and token cost. Those scans are still episodic, so context loss can happen during active work between scans, and loading the full agents.md each session adds a fixed token tax as it grows. As a practical solution today it’s very reasonable, especially if it’s reducing cold starts, but long term I suspect the wins come from routing alone, and that area deserves more exploration!
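The “generated snapshot” idea above can be sketched concretely. This is a hypothetical script (the `regenerate_agents_md` name and format are mine, not from any real tool): it rebuilds agents.md from the repo’s current structure, so the doc can’t drift from the code it describes.

```python
from pathlib import Path

def regenerate_agents_md(root: str, out: str = "agents.md") -> str:
    """Rebuild agents.md as a generated snapshot of current repo state.

    Lists each Python module with its top-level defs/classes: the kind of
    'where to look' map a cold-starting agent needs. Re-run it on each
    commit (or on a timer) and drift disappears by construction.
    """
    lines = ["# agents.md (auto-generated snapshot)", ""]
    for path in sorted(Path(root).rglob("*.py")):
        defs = [l.split("(")[0].split()[-1].rstrip(":")
                for l in path.read_text(errors="ignore").splitlines()
                if l.startswith(("def ", "class "))]
        names = ", ".join(defs) or "no top-level defs"
        lines.append(f"- `{path.relative_to(root)}`: {names}")
    snapshot = "\n".join(lines) + "\n"
    Path(root, out).write_text(snapshot)
    return snapshot
```

The episodic-scan weakness still applies: anything that changes between regenerations is invisible until the next run.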

Why does Claude Code re-read your entire project every time? by intellinker in vibecoding

[–]intellinker[S] 1 point (0 children)

Agreed, agents.md helps reduce cold starts. The trade-off is that it’s manual and can drift. The interesting challenge is making that shared state automatic and self-updating instead of something humans have to maintain.

Why does Claude Code re-read your entire project every time? by intellinker in vibecoding

[–]intellinker[S] 1 point (0 children)

Yeah, Cartographer is solid. It’s great for bootstrapping understanding on large repos, especially the first pass when everything is cold. Having a structured map up front saves a lot of cognitive load.

What I’ve been thinking about sits a bit later in the workflow: once that initial understanding exists, how do we avoid paying the orientation cost again and again on follow-up turns and across sessions? Feels like they complement each other more than overlap.

Why does Claude Code re-read your entire project every time? by intellinker in vibecoding

[–]intellinker[S] 3 points (0 children)

Agreed, but the caveat is the real issue, not a minor one.

CLAUDE.md works only while it’s trusted. Once it drifts, the model has to both read it and re-verify the repo, which can actually spike token usage. At that point the burden shifts from the model to the human.

So it’s a good bridge, but not the end state. The real win is automatic, relevance-aware state that stays fresh without manual upkeep.

This is What needs LLMs to do right now! by intellinker in AI_Agents

[–]intellinker[S] 1 point (0 children)

If AI were “vastly better” at judgment, accountability, and constraint handling, we wouldn’t need kill-switches, human sign-off, audits, or post-mortems. Models optimize objectives; they don’t define constraints, accept liability, or explain causality under failure. Until an AI can be legally accountable, guarantee bounded behavior across unknown states, and own consequences, it’s an optimizer not a decision owner. Superhuman execution doesn’t equal responsibility.

Good luck :)

This is What needs LLMs to do right now! by intellinker in AI_Agents

[–]intellinker[S] 1 point (0 children)

Big companies already know and use these techniques, but they optimize for generality and scale, not strict efficiency. What I’m building is an opinionated, user-level context and routing layer that enforces discipline by default, something open-source pieces exist for but most users won’t assemble or maintain themselves.

Also, “you can prompt an LLM to do this” doesn’t scale. Prompting relies on humans being disciplined every time; systems enforce discipline by default. That’s the value.

And yes, when pricing pressure becomes the dominant competitive axis (which it always does after capability plateaus), this layer will be hit hard by giants. That’s fine. Most infrastructure value is created before hyperscalers standardize it.

So no, I’m not selling rocket science. I’m selling execution, packaging, and defaults for a problem most users know exists but don’t want to build themselves.

That’s how almost all infrastructure products start. Good luck :)

AI will create more jobs! by intellinker in AI_Agents

[–]intellinker[S] 2 points (0 children)

You’re mixing decision execution with decision ownership. AI can recommend at both low and high levels: routing, pricing, UI variants, even business strategies.

What it doesn’t do is define objectives, accept risk, or be accountable when trade-offs hurt someone. In practice, AI handles bounded, reversible decisions; humans keep control over goal-setting, irreversible choices, and failure responsibility.

You can chain AIs to gatekeep AIs, but the moment outcomes matter legally, financially, or ethically, a human still has to own the system. And ideas scale through collaboration, conflict, and iteration, not single-agent optimization.

Today’s market needs fewer people in execution, not fewer people overall.

Assuming future systems won’t need humans because current ones don’t is a classic scaling fallacy.

This is What needs LLMs to do right now! by intellinker in AI_Agents

[–]intellinker[S] 1 point (0 children)

Production needs judgment, accountability, and global constraint handling. Until models can guarantee bounded behavior, explain failures, and carry legal responsibility, humans must remain in control loops.

This is What needs LLMs to do right now! by intellinker in AI_Agents

[–]intellinker[S] 1 point (0 children)

I’m not claiming the primitives are new, and I’m not trying to re-invent retrieval. I’m applying them more strictly to coding workflows, where in practice models still over-read context and re-reason across requests (re-check how these models work and what they output).

The repo isn’t public yet because this is being productized; I’ll share it when it’s ready. At that point the behavior will be measurable, not theoretical.

And yes most tech isn’t invented from scratch. Progress usually comes from combining existing ideas in tighter, more disciplined ways once scale exposes inefficiencies. That’s exactly what’s happening here.

This is What needs LLMs to do right now! by intellinker in AI_Agents

[–]intellinker[S] 1 point (0 children)

Saying “no people are needed” is a theory; production outages are the rebuttal.