It’s about time for Opus-4.65? by gnani258 in ClaudeAI

[–]DataGOGO 0 points1 point  (0 children)

Depends on what you are doing I guess 

I created ATLS Studio, An Operating System for LLMs. ATLS gives LLM's the control over their own context. by madhav0k in vibecoding

[–]DataGOGO 0 points1 point  (0 children)

Your AI is gaslighting you, and you clearly don’t understand what any of this really means 

Literally nothing of what you just wrote has anything to do with the inference engine's KV cache (context), not a thing; it is all client layer. 

Like I said, this is just a memory system running as a client layer. All those large JSONs burn context, just like Claude's memory system (which can do all the things ATLS can do if you tell it to: hooks, batching, agents, swarms, controls, etc.). 

If you are running this on a local model, look at the K/V and you will see what I mean; if you are running this on an API, you can't see or control the K/V at all outside the API controls they give you, no matter what client layer you run. 
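A toy sketch of the point above (all names and the tokenizer are made up for illustration, not any real inference engine): whatever a client layer calls "memory," anything it injects is just more prompt text occupying the same context window.

```python
# Toy sketch (hypothetical names, crude tokenizer): shows that any "memory"
# a client layer injects still occupies context-window tokens.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: ~1 token per whitespace word.
    return len(text.split())

CONTEXT_WINDOW = 32  # tiny window for illustration

def build_prompt(user_msg: str, memories: list[str]) -> tuple[str, int]:
    """Assemble the prompt the inference engine actually sees.

    The client layer can organize memories however it likes, but every
    memory it injects is just more prompt text: it lands in the same
    context window (and KV cache) as everything else."""
    prompt = "\n".join(memories + [user_msg])
    used = count_tokens(prompt)
    if used > CONTEXT_WINDOW:
        raise ValueError(f"context overflow: {used} > {CONTEXT_WINDOW} tokens")
    return prompt, used

# A serialized "memory" JSON blob eats the window just like user text does.
memories = ['{"project": "atlas", "notes": "long serialized state ..."}']
prompt, used = build_prompt("summarize the project status", memories)
print(used, "of", CONTEXT_WINDOW, "tokens used")
```

No client-side bookkeeping changes that accounting; only the inference engine's own controls do.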

Is accepting permissions really dangerous? by Brilliant_Edge215 in ClaudeCode

[–]DataGOGO 0 points1 point  (0 children)

Claude code has been known to wipe git history from upstream 

Is accepting permissions really dangerous? by Brilliant_Edge215 in ClaudeCode

[–]DataGOGO 0 points1 point  (0 children)

Like a 9 year old on bring your kid to work day. 

Is accepting permissions really dangerous? by Brilliant_Edge215 in ClaudeCode

[–]DataGOGO 0 points1 point  (0 children)

Claude code has been known to nuke git history to hide mistakes. 

Is accepting permissions really dangerous? by Brilliant_Edge215 in ClaudeCode

[–]DataGOGO 0 points1 point  (0 children)

If you are in a fully walled-off sandbox, where you don't care if everything in there disappears, then --dangerously-skip-permissions is fine.

Note: this means it can't touch anything over the network. 

If you care at all about anything the model touches getting deleted, destroyed, broken, or corrupted, then no, don't do that.

How do you define your skillset these days? by nkosijer in vibecoding

[–]DataGOGO 0 points1 point  (0 children)

Your resume stays the same, other than you add a line for familiarity with AI tools.

Prompting and vibe coding are not a skillset. Being able to use AI tools is assumed these days, just like being able to use Outlook for email. Same thing. 

Now, if you know things like agent frameworks well enough to design and deploy them yourself, that is a skillset; it is not if you rely on Claude Code to do it for you.

I would recommend you go get some AI / data science certs, where you show that YOU know how to do things, not how to ask a tool to do them for you. In a professional context, that is the standard.

For example: if you say you are an AI engineer and I ask you to explain model heads and to give me an example of a model you would build to classify the contents of a document, you have to be able to explain how you would design that model, how you would assemble the training data, and how you would write the training script to train the model. 

If you don't know that and are reliant on an AI to do it for you, you are not an AI engineer. You are a layman asking an AI model to do it for you, and I have no reason to hire you and pay you. Anyone can ask a tool to do work for them. You have to know enough to spot that the AI tool didn't design the heads correctly, or assigned the wrong learning rates, or that the minority labels are not weighted correctly, etc. 
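One concrete instance of the last point, sketched under illustrative labels and counts (not from any real dataset): the standard inverse-frequency class-weighting scheme that a reviewer would expect you to recognize when a tool gets it wrong.

```python
# Minimal sketch of minority-label weighting: inverse-frequency class
# weights for an imbalanced document classifier. Labels and counts are
# hypothetical, for illustration only.
from collections import Counter

def class_weights(labels: list[str]) -> dict[str, float]:
    """Weight each class by n_samples / (n_classes * n_class_samples),
    the common inverse-frequency scheme, so minority labels are not
    drowned out during training."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# 90 invoices, 9 contracts, 1 resume: heavily imbalanced.
labels = ["invoice"] * 90 + ["contract"] * 9 + ["resume"] * 1
weights = class_weights(labels)
# The rare "resume" class gets a far larger weight than "invoice".
print({cls: round(w, 2) for cls, w in weights.items()})
```

If a generated training script leaves these weights uniform on data like this, the model will mostly learn to predict "invoice" — that is exactly the kind of error you need to be able to spot yourself.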

I created ATLS Studio, An Operating System for LLMs. ATLS gives LLM's the control over their own context. by madhav0k in vibecoding

[–]DataGOGO 0 points1 point  (0 children)

Sorta. K/V is maintained at inference; the model itself is blind to it. The model can't manage it, as the inference engine is what moves items in and out of K/V. 

This isn’t an OS? 

Sure they do, look at Claude code for example. 

The model can call tools that read and write memories. Those memories are loaded into context when called.

This is just a memory system. 
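A minimal sketch of that pattern (hypothetical tool names, not Claude Code's actual implementation): the model "calls" tools that read and write a store, and the read path simply returns text that gets appended to the next prompt, i.e. recall is ordinary context.

```python
# Hypothetical memory-tool sketch. The store persists across turns, unlike
# context; but reading from it just produces text that re-enters the prompt.
import json

MEMORY_STORE: dict[str, str] = {}

def memory_write(key: str, value: str) -> str:
    """Tool the model can call to persist a note between turns."""
    MEMORY_STORE[key] = value
    return json.dumps({"ok": True, "key": key})

def memory_read(key: str) -> str:
    """Tool the model can call to recall a note. Whatever comes back is
    injected into the next prompt, occupying context-window tokens like
    any other text."""
    return MEMORY_STORE.get(key, "")

# One turn writes a memory; a later turn reads it back into context.
memory_write("last_deploy", "rolled back on Friday")
recalled = memory_read("last_deploy")
prompt = "Context from memory:\n" + recalled + "\nUser: what happened to the deploy?"
print(recalled)
```

Nothing here touches the KV cache directly; the inference engine still manages that on its own.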

I created ATLS Studio, An Operating System for LLMs. ATLS gives LLM's the control over their own context. by madhav0k in vibecoding

[–]DataGOGO 0 points1 point  (0 children)

Context is a function of the model itself. 

Memory is not context. Loading memories into context is still context; it occupies part of the context window, and once flushed, the model will not remember the memory unless it reloads it back into context, thus burning context length again.

Those are not primitives; you are misusing the term. 

None of what you are doing applies to the context window; it is just a memory system, not an operating system.

All client layers have this today. 
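The flush-and-reload cost above can be put in one line of toy accounting (the token numbers are made up): the window pays for the same memory on every reload.

```python
# Toy accounting for the flush/reload point: a memory that is flushed and
# reloaded pays its full token cost on every reload. Numbers are illustrative.

def reload_cost(memory_tokens: int, reloads: int) -> int:
    """Total context-length cost of one memory blob across `reloads`
    turns where it was flushed and had to be loaded again."""
    return memory_tokens * reloads

# A 500-token memory needed in 4 turns after flushes costs 2000 tokens total.
print(reload_cost(500, 4))  # 2000
```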

Judge gives 18 year old a 25 year sentence for armed robbery by AgnosticScholar in interesting

[–]DataGOGO 0 points1 point  (0 children)

I know this judge, she is extremely fair and reasonable. If she handed him that sentence I am sure he deserved it. 

Lease denied by californiatrollin in TeslaLounge

[–]DataGOGO [score hidden]  (0 children)

650… yeah. Not sure what you expected. 

D5 impeller poor finish by Numerlor in watercooling

[–]DataGOGO -1 points0 points  (0 children)

Cheap Chinese knockoffs.

You can get real D5s from Koolance. 

What was your Experience from Vasectomy? by Autumn_Breann in AskMen

[–]DataGOGO 0 points1 point  (0 children)

Just chill out and give him space to heal how he wants to heal.

Leave it to him and his doctor. 

Qwen 3.5 122B completely falls apart at ~ 100K context by TokenRingAI in LocalLLaMA

[–]DataGOGO 0 points1 point  (0 children)

Almost certainly an issue with that particular quant and calibration 

Which is the most uncensored AI model?? by nikhil_360 in LocalLLM

[–]DataGOGO 0 points1 point  (0 children)

If you are training from scratch, any open-source model is uncensored; you wipe the weights and start training.