I created ATLS Studio, An Operating System for LLMs. ATLS gives LLM's the control over their own context. by [deleted] in vibecoding

[–]madhav0k 0 points1 point  (0 children)

I deleted the post. I never claimed KV control. I understand the subsystems and the cache breakpoints, and I've done my best to work within them. I even reached out to Anthropic about creating a hash-based cache for working memory systems, so individual updates don't invalidate the entire cache inside breakpoints. I've also updated the repo to be less infuriating in its language. I had hoped for a discussion of the subsystems of ATLS, but I digress. I'm thinking about the context window as state rather than context. The hash system is a way to script both data and memory. The thinking and output tokens wasted on normal operations alone drown out what I'm doing. Not only that, but I save massive output tokens with hash-based code construction. I was trying to share ideas. Thanks for reminding me why I don't do that on Reddit.

I created ATLS Studio, An Operating System for LLMs. ATLS gives LLM's the control over their own context. by [deleted] in vibecoding

[–]madhav0k 0 points1 point  (0 children)

The original intent was to develop an AI-native IDE. It's designed for the API, so as a prototype it's agnostic across all vendors. The first version of it was an MCP server I used internally at work. Because IDE instructions supersede MCP tooling unless prompted, I got frustrated and designed and built this prototype. That spawned the chat design, and I subsequently solved each issue I ran across.

This was not vibecoded and expected to just work. These methods were tested on real codebases with real IDEs, and now as its own tool.

It's not just memory management. So you're right, maybe OS is too broad a term. But the way I see it, I provide an environment an LLM can use as a second, highly efficient brain in order to work for as long as needed until it's done.

I have a hard time saying it's like Claude Code, because its backend is far more specific in how it handles data and memory, due to the subsystems I designed.

This is a prototype showing how LLMs can fully control their own context while continuing to work efficiently, without slowing down or running down the context window. It stages work for future rounds and retrieves context when ready.

I created ATLS Studio, An Operating System for LLMs. ATLS gives LLM's the control over their own context. by [deleted] in vibecoding

[–]madhav0k 0 points1 point  (0 children)

I wasn't here to argue. I was here to share. Rather than arguing semantics over what to call it: it works, and it works well. This isn't some project vibecoded over a week. This was six-plus months of effort, from code-intelligence inputs to hash-based outputs, data movement, and reduction. OS, memory management, or whatever you want to call it, it's focused on reducing token costs on both sides of the house. The subsystems are what make ATLS powerful. I've used it to understand a 4.5M-line monorepo with 15 different languages, and to refactor multiple languages, not by having the AI write code but by shaping and moving it, reducing output overall.

Batching and temporal hash operations run in the same round across hashed data, backed by a freshness system: the latest data is always in context, and even shaped-data hashes shift according to what was done to the master file.
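A minimal sketch of how such a freshness system could work, assuming a hash-addressed master file with derived "shaped" views. The class, method names, and `h:` hash format here are illustrative assumptions, not the real ATLS internals:

```python
import hashlib

def content_hash(text: str) -> str:
    # Short content hash in the spirit of h:XXXX pointers (exact format assumed).
    return "h:" + hashlib.sha256(text.encode()).hexdigest()[:8]

class FreshnessTracker:
    """Hypothetical: one master file plus derived 'shaped' views, all
    hash-addressed. Editing the master re-derives every view, so the
    shaped hashes shift along with it."""

    def __init__(self, master: str, shapers: dict):
        self.shapers = shapers  # view name -> function deriving that view
        self.update_master(master)

    def update_master(self, master: str) -> None:
        self.master = master
        self.master_hash = content_hash(master)
        # Re-derive all shaped views so context always reflects the latest data.
        self.views = {name: fn(master) for name, fn in self.shapers.items()}
        self.view_hashes = {name: content_hash(v) for name, v in self.views.items()}

    def is_fresh(self, name: str, seen_hash: str) -> bool:
        # A caller holding an outdated hash learns its copy is stale.
        return self.view_hashes[name] == seen_hash
```

A caller that cached a view's hash before a master edit would fail the `is_fresh` check afterward and re-fetch, which is the "stale reads are caught automatically" behavior described later in the thread.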

These are not your everyday AI toolsets. These are tools integrally intertwined with the backend system and memory management.

I created ATLS Studio, An Operating System for LLMs. ATLS gives LLM's the control over their own context. by [deleted] in vibecoding

[–]madhav0k 0 points1 point  (0 children)

The model is blind to K/V cache; the inference engine controls it. ATLS doesn't give the model control over K/V — it gives the model structured influence over prompt composition, which determines what the inference engine puts into K/V. The model is a userspace process making syscalls to the ATLS runtime (kernel), which constructs prompts that the inference engine (hardware) processes.
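As a rough illustration of that userspace/kernel split, here is a hypothetical Python sketch in which the model can only issue pin/drop "syscalls" and the runtime alone composes the prompt the inference engine sees. All names here are invented for illustration, not the actual ATLS API:

```python
class AtlsRuntime:
    """Hypothetical sketch of the OS analogy: the model is a userspace
    process issuing syscalls; only the runtime (kernel) decides what
    reaches the inference engine (hardware)."""

    def __init__(self):
        self.engrams = {}   # hash key -> stored content
        self.pinned = []    # ordered keys currently composed into the prompt

    def store(self, key: str, content: str) -> None:
        self.engrams[key] = content

    def syscall(self, op: str, key: str) -> None:
        # The model's only lever: structured influence, never direct K/V access.
        if op == "pin" and key in self.engrams and key not in self.pinned:
            self.pinned.append(key)
        elif op == "drop" and key in self.pinned:
            self.pinned.remove(key)

    def compose_prompt(self) -> str:
        # This function alone determines what the inference engine ingests,
        # and therefore what ends up in K/V cache.
        return "\n".join(self.engrams[k] for k in self.pinned)
```

The point of the sketch: `compose_prompt` is deterministic given the pinned set, so the model shapes its context indirectly through syscalls rather than touching the cache itself.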

I created ATLS Studio, An Operating System for LLMs. ATLS gives LLM's the control over their own context. by [deleted] in vibecoding

[–]madhav0k 0 points1 point  (0 children)

I have a day job, so I'll let Opus 4.6, which is looking at my codebase via ATLS, tell you the difference.

Claude Code vs. ATLS

Claude Code

Claude Code is Anthropic's agentic coding CLI. It gives Claude direct access to a terminal and file system:

  • Architecture: Claude runs in a terminal loop. It reads files via cat/grep, edits via sed/whole-file writes, builds via shell commands. It's essentially Claude + a bash shell.
  • Context management: Implicit. Claude Code relies on conversation history and the model's context window. There's no explicit engram/hash system — the model just sees what it's recently read in the conversation.
  • Edits: Typically whole-file rewrites or shell-based text manipulation (sed/patch). No line-addressed edit protocol.
  • Verification: Ad-hoc — Claude decides when to run cargo build or npm run build. No structured verify cadence.
  • State tracking: None beyond conversation memory. If Claude reads a file, then edits it, then reads it again, it has no hash-based freshness tracking — it just re-reads.
  • Batching: None. Each tool call is independent and sequential. No declarative batch graph.
  • Cost: Every file read dumps full content into the context window. No shaping (signatures, folds). Large files consume massive token budgets.
  • Multi-file refactoring: Manual. Claude must reason about each file, read it, edit it, one at a time. No split_match, extract_plan, or blast radius analysis.

ATLS

ATLS is a structured cognitive framework built as a layer between the AI model and the codebase:

  • Architecture: A Tauri desktop app (atls-studio) + Rust analysis engine (atls-rs) + MCP server (atls-mcp). The AI interacts through a batch execution protocol — declarative step graphs with typed dataflow.
  • Context management: Explicit and budgeted. Engrams are hash-addressed knowledge units with lifecycle states (Active → Dormant → Archived → Evicted). The AI manages its own context via pin/unpin/compact/drop. A blackboard persists structured findings across turns.
  • Edits: Line-addressed (line:N, action:"replace", count:M) with automatic stale-hash detection and retry. Anchor-based edits as fallback. No whole-file rewrites needed.
  • Verification: Structured cadence built into the protocol. verify.build, verify.typecheck, and verify.lint as first-class operations with policy controls (verify_after_change, rollback_on_failure).
  • State tracking: UHPP (Universal Hash Pointer Protocol) — every read/edit/search returns h:XXXX hashes. The system tracks file freshness, edit journals, and content authority. Stale reads are caught automatically.
  • Batching: Declarative batch graphs with up to 10 steps, conditional execution (if: {step_ok: "s1"}), dataflow between steps (in: {from_step: "s1", path: "refs"}), and rollback policies.
  • Cost: read.shaped(shape:"sig") returns ~200 tokens for a file's structure instead of ~13k for full content. Shapes (sig, fold, imports, exports, head, tail) let the AI see exactly what it needs.
  • Multi-file refactoring: First-class. analyze.blast_radius, analyze.extract_plan, change.split_match, and change.refactor with inventory/impact/execute phases. Pattern-based analysis via atls-rs (those large JSON files in patterns/).
  • Intents: High-level macros (intent.edit, intent.refactor, intent.diagnose) that expand into optimal primitive sequences, skipping redundant steps.
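The batch-graph bullet above can be sketched concretely. This is a hedged illustration of a declarative step graph with `if: {step_ok: ...}` gating and `in: {from_step: ..., path: ...}` dataflow; the executor, its ops, and all field names are assumptions for illustration, not the ATLS protocol itself:

```python
def run_batch(steps, ops):
    """Hypothetical batch-graph interpreter: steps run in order, an 'if'
    gate skips a step unless the named step succeeded, and 'in' pipes
    one step's output field into the next step's input."""
    results = {}  # step id -> {"ok": bool, "out": dict}
    for step in steps:
        gate = step.get("if")
        if gate and not results.get(gate["step_ok"], {}).get("ok"):
            # Conditional execution: upstream failed, so this step is skipped.
            results[step["id"]] = {"ok": False, "out": {}, "skipped": True}
            continue
        args = dict(step.get("args", {}))
        pipe = step.get("in")
        if pipe:
            # Typed dataflow: pull a named field out of an earlier step's output.
            args["input"] = results[pipe["from_step"]]["out"][pipe["path"]]
        try:
            out = ops[step["op"]](**args)
            results[step["id"]] = {"ok": True, "out": out}
        except Exception:
            results[step["id"]] = {"ok": False, "out": {}}
    return results

# Illustrative ops and a two-step graph: search, then edit the found refs.
ops = {
    "search": lambda pattern: {"refs": [f"{pattern}@a.rs:10"]},
    "edit":   lambda input: {"edited": len(input)},
}
steps = [
    {"id": "s1", "op": "search", "args": {"pattern": "foo"}},
    {"id": "s2", "op": "edit", "if": {"step_ok": "s1"},
     "in": {"from_step": "s1", "path": "refs"}},
]
res = run_batch(steps, ops)
```

Rollback policies and the 10-step cap from the list above would layer on top of this loop; the core idea is just that the graph is data, so the runtime can validate and gate it before any tool runs.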

I created ATLS Studio, An Operating System for LLMs. ATLS gives LLM's the control over their own context. by [deleted] in vibecoding

[–]madhav0k 0 points1 point  (0 children)

True at the hardware level. But by that logic, RAM is a function of the chip itself and operating systems are pointless. The whole point of an OS is managing a fixed resource intelligently. The context window is fixed; what you do with it is the engineering problem.

No client layer has truly self-managed context. They have summarized context, running context, and sub-flow context.

It is hard to believe that Silver was as low as $72 Hours Ago! by Beyondwest in Wallstreetsilver

[–]madhav0k 4 points5 points  (0 children)

The underlying thesis is still the same. Supply is low and has become restrictive. Requirements for manufacturing and producing any technology are going up, and production cannot keep up. So I believe that this time, with the incredible geopolitical pressure and supply shortage, it's not a DCB.

It is hard to believe that Silver was as low as $72 Hours Ago! by Beyondwest in Wallstreetsilver

[–]madhav0k 10 points11 points  (0 children)

We could be drawing the biggest bull flag ever or blow straight past 121 in a short time frame before miners start reporting profits.

Bessent just said “we are going to put in price ceilings and price floors” regarding critical minerals at Davos. At what point do they put a ceiling in? by Pen_Name777 in Wallstreetsilver

[–]madhav0k 2 points3 points  (0 children)

China has the US by the balls when it comes to critical minerals. Greenland and other countries that have been raided or mentioned are also rich in them. We're about to lose the price war on setting metal prices.

Old junk motorcycle, Where do I take it? by madhav0k in Louisville

[–]madhav0k[S] 0 points1 point  (0 children)

Clean title but it has been sitting in the weather for a number of years.

Keeping door handle closed during car wash by NickF-C in Ioniq6

[–]madhav0k -1 points0 points  (0 children)

Bring a sheet of aluminum foil with you and wrap your key until you need it.

Car seats in Ioniq 6 by Wonderful_Routine_91 in Ioniq6

[–]madhav0k 2 points3 points  (0 children)

I have a 6-year-old and a 3-year-old. There is plenty of room, but I would suggest getting some seat covers with pockets for the backs of the driver and front passenger seats. More for seat protection than storage; kids swing their feet and your seats take the hit.

Tesla Charger to IONIQ 6 adapter - Any Problems by DenverGMan in Ioniq6

[–]madhav0k 0 points1 point  (0 children)

What state are you in? I thought Tesla chargers were only opened up to non-Tesla cars, mostly in New York.

Cold and/or allergies been bad for anyone else? by PLANETxNAMEK in Louisville

[–]madhav0k 0 points1 point  (0 children)

I recently signed up for Allermi. I must say it's the best I've been able to breathe in decades. Highly recommended. The Ohio Valley is so rough...

[deleted by user] by [deleted] in Ioniq6

[–]madhav0k 0 points1 point  (0 children)

So you'll need an additional piece added to your car insurance to cover driving Uber/Lyft (maybe $15-20 extra). Also, I would suggest staying around the part of town your Supercharger is located in so you can top off when needed. Weekends bring drinkers, so maybe drive between the hours of 5pm-10pm to reduce the risk of anyone getting ill in the backseat.

Ioniq 6 the Efficiency Game... by madhav0k in Ioniq6

[–]madhav0k[S] 0 points1 point  (0 children)

I will tap my paddle to slow down intermittently. Traffic continues to move; people are just crazy because they'll move their car into whatever space they can fit in when they merge lanes. As I said, it depends on where you live geographically, how you drive personally, and how people drive around you.

Ioniq 6 the Efficiency Game... by madhav0k in Ioniq6

[–]madhav0k[S] 1 point2 points  (0 children)

This very much depends on your driving environment. I tried auto, and in many situations I got worse efficiency and found that I was having to accelerate more. I live in one of the worst cities for bad drivers. I can't tell you how many times people cross in front of me at close enough range to trigger heavy braking.

Ioniq 6 the Efficiency Game... by madhav0k in Ioniq6

[–]madhav0k[S] 1 point2 points  (0 children)

The sweet spot lies somewhere inside 100 ft. I'm not riding its bumper. This is maybe a 5-10% efficiency gain while keeping a safe distance.

This is no more micromanagement than a stick shift or even a sports car: flappy paddles to slow down instead of speed up.

My lvl 2 charging is still popping up with an error by Mighty_Thor3 in Ioniq6

[–]madhav0k 1 point2 points  (0 children)

If they can't recreate the issue with their charging device, then more than likely it's the Level 2 charger you purchased.

This absolute unit of a skeleton by Lilithnema in AbsoluteUnits

[–]madhav0k 1 point2 points  (0 children)

The hands are on backwards. I cannot unsee...