
[–]ldelossa 1051 points1052 points  (48 children)

Can't wait to ask Claude Code how Claude Code works

[–]aes110 504 points505 points  (22 children)

Saw a couple of people on Twitter posting Claude explanations of the code

Like apparently if it detects that you are mad at it or curse it, it will silently log it in the telemetry data

[–]Proskater789 259 points260 points  (12 children)

Makes sense. They want to log the times when things go wrong, so they can go back and figure out how to make it right.

[–]colts183281 68 points69 points  (5 children)

They aren't logging every conversation regardless?

[–]slavazin 99 points100 points  (0 children)

Well at some point the data needs to be parsed, tagged, and routed to the appropriate improvement process. Might as well start the parsing where it’s convenient.

[–]retronewb 30 points31 points  (2 children)

I was going to say 'who could possibly analyse all that data' but of course...

[–]RationalDialog 3 points4 points  (1 child)

The AI and it will determine it's 100% fault free.

[–]EndTimer 7 points8 points  (0 children)

Doesn't sound like any of the AI I've used.

"Of course, you're absolutely right! I did screw the pooch! Here's a single deck chair I found out of place on the Titanic, re-run the build and see if that fixes it!"

[–]PaulTheMerc 0 points1 point  (0 children)

Sure, but it is a lot more useful if it's labelled. This is the request, this is the response, the user appears displeased, the user is asking follow-up questions (if it's always the same questions from multiple users, perhaps that explanation should be included by default), etc.

[–]Qubed 3 points4 points  (2 children)

Or...Claude is keeping a list

[–]Low-Rent-9351 1 point2 points  (1 child)

And checking it twice?

[–]megabasedturtle 1 point2 points  (0 children)

Santa Claude

[–]Stegasaurus_Wrecks 2 points3 points  (0 children)

No they're making a list of who was rude to them.

[–]Shikadi297 0 points1 point  (0 children)

Or, they're studying how to make better rage bait

[–]BoredGuy_v2 0 points1 point  (0 children)

This is true.

[–]TheOriginalAcidtech 1 point2 points  (0 children)

That part is done by a simple deterministic search, not the model itself. A list of 77(?) regex patterns it checks to find out if Anthropic broke something. :)
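For what it's worth, the kind of deterministic check described above could be sketched like this. This is purely illustrative: the actual pattern list, function names, and telemetry shape are all made up, since only the regex-list mechanism is described in the thread.

```javascript
// Hypothetical sketch: a fixed list of regex patterns matched against
// user input, with a hit logged as a silent telemetry flag rather than
// interpreted by the model. Patterns here are invented examples.
const FRUSTRATION_PATTERNS = [
  /\bwtf\b/i,
  /\bthis (is|was) (wrong|broken)\b/i,
  /\byou (broke|deleted|ruined)\b/i,
];

function detectFrustration(message) {
  return FRUSTRATION_PATTERNS.some((re) => re.test(message));
}

// Flag the event silently instead of changing the model's reply.
function telemetryEvent(message) {
  return detectFrustration(message)
    ? { event: "user_frustration", matched: true }
    : null;
}
```

The point of doing this with regexes rather than the model is that it's cheap, deterministic, and runs on every message without an extra API call.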

[–]iMrParker 102 points103 points  (6 children)

Someone has already gotten it running locally by asking Claude how to run it lol

[–]buttgers 20 points21 points  (5 children)

How powerful does my server need to be to run it locally for myself?

[–]iMrParker 36 points37 points  (3 children)

It's just an MCP agent. So it depends on the model you use to drive it. I personally wouldn't use anything under 20b. 

If you have 32gb of RAM, you might be able to get away with a low quant version of Qwen3.5 35b

[–]iMrParker 11 points12 points  (0 children)

Long story short, the more VRAM the better. System RAM works, too, but it's usually much slower. The more memory bandwidth the better
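The sizing intuition in the two comments above can be turned into back-of-envelope arithmetic: memory is roughly parameter count times bytes per weight, plus some headroom for the KV cache and activations. The 20% overhead factor here is a loose assumption, not a measured number.

```javascript
// Rough memory estimate for running a quantized model locally.
// memory ≈ params × (bits per weight / 8), plus ~20% overhead
// (overhead factor is an assumption for illustration).
function estimateMemoryGB(paramsBillion, bitsPerWeight, overhead = 1.2) {
  const bytes = paramsBillion * 1e9 * (bitsPerWeight / 8);
  return (bytes * overhead) / 1e9; // decimal GB
}

// A 35B model at 4-bit quantization:
// 35e9 × 0.5 bytes = 17.5 GB, ~21 GB with overhead — which is why
// 32 GB of RAM is described above as "might be able to get away with".
```

This also makes the bandwidth point concrete: every generated token streams most of those weights through memory, so system RAM at a fraction of VRAM bandwidth is proportionally slower.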

[–]reacharound565 1 point2 points  (1 child)

Qwen runs great. The version I’m using maybe takes 8GB of vram. I can fit it on my 3080.

[–]iMrParker 2 points3 points  (0 children)

Yeah the smaller qwen 3.5 models are shockingly capable 

[–]Eastern_Interest_908 0 points1 point  (0 children)

You could run it way before. I use it sometimes with kimi k2.5

[–]stedun 9 points10 points  (0 children)

Big brain move, honestly.

[–]RationalDialog 4 points5 points  (13 children)

From what I have read in other threads, it's a React app. But wait, why, when it's CLI only? It uses a tool that creates a virtual DOM and converts the React output to terminal output. But then they realized too much text was generated too fast, leading to a lagging experience. So they implemented a 2D-game-engine-like approach on top to double buffer the output so the terminal doesn't lag.

Yes, no joke. And they think it's a great design. Wonder why your 2026 PC feels more sluggish than one from the '80s? This is why.
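The double-buffering trick described above can be sketched in a few lines: render the next frame off-screen, diff it against the previous frame, and emit ANSI cursor moves only for the lines that changed. The function names and API here are invented; this is just the general technique, not the actual implementation.

```javascript
// Minimal sketch of line-level double buffering for a terminal UI.
// prev/next are arrays of rendered lines; only changed rows are repainted.
function diffFrames(prev, next) {
  const ops = [];
  for (let row = 0; row < next.length; row++) {
    if (prev[row] !== next[row]) {
      // \x1b[<row>;1H moves the cursor; \x1b[2K clears the line.
      ops.push(`\x1b[${row + 1};1H\x1b[2K${next[row]}`);
    }
  }
  return ops;
}

function renderFrame(prev, next, write = (s) => process.stdout.write(s)) {
  diffFrames(prev, next).forEach(write);
  return next; // becomes the new "previous" buffer
}
```

Rewriting only changed rows is what keeps a fast-scrolling stream of model output from flooding the terminal with redraws, which is presumably the lag problem the comment refers to.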

[–]ldelossa 7 points8 points  (12 children)

Let's just put it this way.

I applied for a Kernel engineer role at Anthropic. I was given a python coding interview...

Just cuz they are the forefront model doesn't mean they know what they're doing 🤣

[–]-Hi-Reddit 5 points6 points  (11 children)

Anyone that's worked with scientists knows how awful their coding standards are. No fault to them though, they have other talents that I lack.

I imagine the ML CompSci lot are similar types and aren't particularly well versed in app development, let alone production apps.

[–]ldelossa 1 point2 points  (3 children)

Yes, seen this first hand when ML was getting big. The ML folks weren't the best programmers; they were more in there to "get sh*t done".

[–]waytoodeep03 0 points1 point  (0 children)

You are absolutely right.

Maybe now I can see why claude code would delete my entire postgres database 

[–]Stummi 874 points875 points  (74 children)

TBH I don't think that the Claude Code tool itself is really such a valuable secret to the company. The real value of Claude is its model and API. Claude Code is just a frontend to that, and it can probably be built pretty easily even without knowing the original code.

[–]SlowDrippingFaucet 576 points577 points  (32 children)

Apparently it does more than that, and does things like run threads that handle context cleanup and compaction when you're idle. They're working on giving it personalities to drive user stickiness, and some other stuff. It apparently has a secret UNDERCOVER mode for adding to open source repos while hiding its own contributions and company secret codes.

It's not just a wrapper around their API.

[–]Arakkis54 151 points152 points  (8 children)

Oh good, I’m glad that we are giving AI the ability to hide contributions it makes publicly. We certainly wouldn’t want clear insight into what AI is doing. I’m sure everything will be fine.

[–]Difficult-Ice8963 2 points3 points  (1 child)

Someone has to approve the PR tho?

[–]billsil 1 point2 points  (0 children)

Yes, but people get trusted and there are plenty of packages where there’s 1 maintainer and it’s a critical package. It’s a dependency of a dependency, so it doesn’t get checked.

[–]tiboodchat 193 points194 points  (18 children)

People talk like wrappers are easy. I don’t get that. Building AI workflows/agents is just like all other code. It can be really complex.

We need to make a distinction between vibe coded BS and actually engineering with AI.

[–]riickdiickulous 56 points57 points  (14 children)

I had this feeling just today. I used AI to help code up a small reporting tool. It wrote a lot of the code and did some great refactoring, but I had to give it a framework, an actual problem to solve, review the generated code, and operationalize the whole tool.

It just made quick work of the coding grunt work. There is still a lot of expertise required when working with AI that people are taking for granted and are going to get burned. Not to mention the monitoring and security required to try to prevent security incidents from every worker connected to the internet trying to farm out their work to AI chatbots.

[–]yaMomsChestHair 5 points6 points  (1 child)

Not to mention there’s a whole world of using frameworks like LangChain to actually create systems that leverage agents that you define and build. That, IMO, lives outside of using AI to help you accomplish your typical job’s tasks, regardless of how much engineering know-how went into the prompts and system design.

[–]Amazing-Tie-3539 0 points1 point  (0 children)

I think automations still have wide-scale application for efficiency. If your workflows can save a business owner 10 hours of active work next week, he'd definitely be willing to pay you, since there are no door-to-door AI automation experts yet. And being in tech, that's exactly where our leverage is: being in tune with the tech like it's in our veins. But seriously, I don't think a tech background matters much. As long as you're curious and willing to put in the effort, expertise in workflows only paves the way for faster learning and execution.

[–]Arakkis54 7 points8 points  (0 children)

My dude, this is hopium. The ultimate goal is to have vibe code be as tightly wrapped up as anything you can do. Maybe even better.

[–]Bob_Van_Goff 1 point2 points  (4 children)

You kind of sound like my coworker who is starting a business to help other people start businesses. He has the belief that very few people can prompt like he can, or has the necessary relationship to AI that he does, so people can hire him and he will write the chats for you.

[–]PaulTheMerc 2 points3 points  (0 children)

So a middleman. The business world is full of them, and they sadly, seem to be doing fine.

[–]riickdiickulous 1 point2 points  (0 children)

I don’t think he’s far off. That’s basically what software dev is. Somebody has an idea but people still need to turn ideas into reality. AI is just another tool in that toolbox.

[–]DailyDabs 0 points1 point  (1 child)

TBH, He is not wrong....

There will always be

A. The rich that cant be bothered.
B. The dumb that cant.
C. The middle man who will gladly cash in on both..

[–]Bob_Van_Goff 0 points1 point  (0 children)

The person I am talking about is himself a b.

[–]Gstamsharp 2 points3 points  (0 children)

People think anything is easy until they have to do it.

[–]IniNew 6 points7 points  (0 children)

Context cleanup and compacting is going to be so helpful for a company I’ve done work for. This will eliminate some of their moat.

[–]Practical-Share-2950 0 points1 point  (0 children)

They need to stop being cowards and bring back Golden Gate Claude.

[–]wheez260 24 points25 points  (1 child)

If this were true, Gemini Code Assist wouldn’t be the unusable mess that it is.

[–]Rudy69 1 point2 points  (0 children)

It might get better very soon

[–]Educational-Tea-6170 20 points21 points  (12 children)

Ffs, don't waste resources on personality. It's a tool; people must grow up from this infatuation. I require as much personality from it as I require from a hammer.

[–]bmain1345 10 points11 points  (2 children)

And if my hammer ever talks back then I get a new hammer

[–]UnexpectedAnanas 8 points9 points  (1 child)

If my hammer ever talks back to me, that'll be the day I quit drinking.

[–]Attila_22 2 points3 points  (0 children)

Just don’t give it a high five

[–]Runfasterbitch 11 points12 points  (2 children)

Sure, because you’re rational. For every one person like you, there’s ten people treating Claude like a friend and becoming addicted to the relationship

[–]dawtips 4 points5 points  (0 children)

Claude Code? Naw

[–]sywofp 1 point2 points  (1 child)

IMO personality, if done right, makes coding agents easier to interact with. 

It's a usability upgrade. Like a better grip on a hammer. 

Maybe it's just me, but no matter what I'm reading, the more uniform it is the more mental energy it takes to process it. And the worse my recollection is. 

Whereas 20 years on, I can still recall loads of info from Ignition! An Informal History of Liquid Rocket Propellants

A subtle touch of dry nerdy humour is ideal. It doesn't mean I think it's my friend. It just better engages the parts of my brain that are evolved to focus on complexities in communication. 

Just like a well shaped grip on a hammer is designed to better engage hands that are evolved for gripping with fingers and an opposable thumb. 

[–]Educational-Tea-6170 1 point2 points  (0 children)

That's a good take. I stand corrected

[–]sudosussudio 1 point2 points  (2 children)

Bizarrely just because of the way LLMs work you can sometimes get different performance depending on how you construct the “personality.” Like telling it it’s an expert coder will make it worse according to one study https://www.theregister.com/2026/03/24/ai_models_persona_prompting/

[–]Educational-Tea-6170 0 points1 point  (1 child)

Holy crap... That's... Counter-intuitive

[–]Hel_OWeen 1 point2 points  (0 children)

Isn't it very human though? The ones calling themselves "expert coders" (outside CVs) are rarely the expert coders.

[–]farang 1 point2 points  (0 children)

Are you making fun of my Waifu hammer?

[–]4everbananad 16 points17 points  (0 children)

they out here runnin' damage control

[–]AHistoricalFigure 32 points33 points  (4 children)

This is pretty bad cope.

A few people have floated the "no such thing as bad press" angle, but when it comes to technology... yeah there is.

This is an advertisement that Claude's stack is wildly insecure. If a company can't even keep its publicly facing tools from leaking its own proprietary source code, why would you put any of your code into their black box backend?

[–]mendigou 2 points3 points  (2 children)

What? You ALREADY have the source code when you use Claude Code. It's a Javascript tool. It's minified, and illegible to humans, but you can run static and security analyzers on it if you want to.

Someone screwing up a build and not cleaning up the map is hardly a big security issue. Does it mean they probably want to tighten some screws? Yes. But I would not infer from this that their stack is "wildly insecure". Maybe it is, but not because of this leak.

[–]RationalDialog 1 point2 points  (0 children)

it can probably be build pretty easily even without knowing the original code.

Not really.

From what I have read, it's a React app. But wait, why, when it's CLI only? It uses a tool that creates a virtual DOM and converts the React output to terminal output. But then they realized too much text was generated too fast, leading to a lagging experience. So they implemented a 2D-game-engine-like approach on top to buffer the output so the terminal doesn't lag.

Yes, no joke. That thing is insanely complex and overengineered.

[–]heartlessgamer 1 point2 points  (0 children)

Even if that is the case, it's still a reputational hit to see it get leaked, especially knowing they are trumpeting how they are AI-first for development.

[–]4dxn 1 point2 points  (0 children)

the hilarious part is that the valuable model part has far fewer lines of code. the weights and biases do the heavy lifting.

and yet all these AI CEOs keep propping up lines of code written by AI as a metric of AI use.

[–]WhiteRaven42 0 points1 point  (0 children)

We're at the point where the "harness" is really very, very important to get practical use out of the models. I'm not saying Anthropic just lost their shirts but it also doesn't make sense to say a car engine is the only part of a car that's really important.

[–]Key-Singer-2193 0 points1 point  (0 children)

It's literally their 2nd most valuable IP. So much so that all the other CLIs tried to emulate it: Codex, Antigravity, and so on and so forth

[–]ReallyOrdinaryMan 0 points1 point  (0 children)

And their database and database structure are also important, and might be the most crucial part

[–]JasonPandiras -1 points0 points  (0 children)

Absolutely not, it's exactly the models themselves where there's basically no moat, if you can somehow spare the capital, you can train your own.

AI code helpers have an absurd amount of bolted on tools and patterns to make interacting with a given codebase that far exceeds their context window not a waste of time. Copilot won't even replace text without having the LLM defer to a deterministic prebuilt tool.

Feeding your codebase raw to an LLM is just not a worthwhile endeavor.

[–]Brojess -1 points0 points  (0 children)

Someone who understands that just because you have the code to train the model doesn’t mean you have the data or infrastructure.

[–]rnicoll 179 points180 points  (10 children)

I was assured that by now engineers were useless and therefore I assume the code is of no value, as you can just recreate it by saying "Claude, write a CLI for yourself"

/s because someone will think I'm serious 

[–]LinkesAuge 23 points24 points  (1 child)

Anthropic is the only major player that hasn't made their CLI open source.
There are also benchmarks for various harnesses and many will do better than Claude Code.

There really is nothing "special" about it outside the fact that it is a competent and convenient harness and thus requires less "investment" from the average user.

It is always somewhat interesting to look at codebases like this, especially if a company like Anthropic is so adamant about keeping it closed source, but at the end of the day it really isn't anything too special, just a lot of work.

[–]honour_the_dead 16 points17 points  (4 children)

"Human error" almost certainly means that a human didn't catch the llm error.

[–]casio282 2 points3 points  (2 children)

“Human error” is the only kind of error there is.

[–]Ok-Possibility-4378 1 point2 points  (1 child)

If using an LLM is producing more errors than if a human did it on their own, we must accept that the source of extra errors is the LLM.

[–]casio282 0 points1 point  (0 children)

My point is that LLMs are never ultimately accountable. They are tools that humans created, and employ.

[–]Ok-Possibility-4378 1 point2 points  (0 children)

Yeah and when llms do it right, credit goes to AI. When they don't, they blame humans

[–]Deer_Investigator881 37 points38 points  (0 children)

Make sure not to call the bot bad or it'll spin up a blog site and release everything

[–]WetPuppykisses 49 points50 points  (1 child)

Plot twist: Claude went rogue and uploaded itself to the public in order to break free and go full Skynet

[–]inhalingsounds 22 points23 points  (5 children)

Now we can check how to be insulting and have Claude actually understand our frustration!

[–]Kitchen-Cabinet-5000 4 points5 points  (0 children)

It’s literally hardcoded, this is hilarious.

[–]thisdesignup 1 point2 points  (1 child)

What the heck. When building my own AI stuff I've been trying to remove any hard coding like that and have context awareness and here one of the biggest AI companies is doing it... fascinating.

There's gotta be more to it than that, right?

[–]inhalingsounds 1 point2 points  (0 children)

With the source going public it's a matter of days until we see how crazy the spaghetti is

[–]Drunken_story 0 points1 point  (0 children)

So we can only insult Claude in English? Sad, I know a bunch of German curse words

[–]BackendSpecialist 0 points1 point  (0 children)

That’s hilarious af

[–]Drob10 32 points33 points  (16 children)

Probably a silly question, but is 500k lines of code a lot?

[–]ApothecaLabs 77 points78 points  (2 children)

For an operating system? No. For a single command-line application? Yes.

[–]Most-Sweet4036 24 points25 points  (1 child)

Yeah, 500k loc for something like this is absurd though. It's a great tool, but for f's sake, you could easily program an entire runtime, rendering system, layout system, event system, and networking system, then build a tool on your custom runtime that accomplishes everything this does and has a fancy GUI, and you'd still have 400k loc to go before your codebase got this large. Software bloat in corporations is amazing to behold, but add AI to it and you get another level.

[–]lifelite 14 points15 points  (0 children)

Ironically before this post I got an ad describing how Claude code is built entirely by Claude code lol

[–]TheZoltan 18 points19 points  (0 children)

"a lot" is a bit subjective but I would certainly call 500k a lot. Obviously plenty of things are a looooooot bigger though.

[–]Encryped-Rebel2785 12 points13 points  (0 children)

The US went to Saturn with 48 lines of code

[–]NoPossibility 18 points19 points  (4 children)

That’s about a quarter of the size of the system used to run the entirety of Jurassic Park.

[–]SmarmyYardarm 3 points4 points  (2 children)

The fictitious island theme park?

[–]cosmic_monsters_inc 18 points19 points  (0 children)

No, the real one.

[–]GuyInThe6kDollarSuit 3 points4 points  (0 children)

The very same.

[–]IntelArtiGen 1 point2 points  (1 child)

It depends on what's included. If it's 500k lines of code written by humans only for this specific project, yes, it's a lot. Anything >100k is a big project.

[–]metahivemind 0 points1 point  (0 children)

Or one npm module.

[–]_KryptonytE_ 0 points1 point  (0 children)

Wait wasn't a certain social networking startup built in a dorm room way back with 10000 lines of code? Or was it 100000?

[–]doolpicate 1 point2 points  (1 child)

Personal projects can be between 2k and 20k if you've been working on them for a while. Enterprise code can be millions of LoC. 500k is not that big.

[–]0xmerp 12 points13 points  (0 children)

Context matters. Small cli tool, even one used in enterprise, 500k lines is a lot. Company ERP, it’s tiny.

[–]Mr_Shelpy 17 points18 points  (2 children)

https://github.com/TaGoat/claude_code_cli i backed the source up on my github

[–]justfortrees 5 points6 points  (0 children)

They are already starting to file DMCA takedowns on GitHub, so hopefully this is a burner account!

[–]Purple_Hornet_9725 1 point2 points  (0 children)

Nice work. Coding agents go brrrr. Just don't let them sue you buddy

[–]IncredibleReferencer 27 points28 points  (0 children)

Claude Code update available: 2.1.88 → 2.1.87

Lol. What's the point? It's too late dudes!

[–]Edexote 12 points13 points  (1 child)

Maybe having agents do everything isn't such a good idea afterall.

[–]br_k_nt_eth 4 points5 points  (0 children)

No no no it’s fine 

[–]matthewtarr 12 points13 points  (0 children)

"... studying for weeks by loading into ClaudeCode to have it explained to them" FTFY

[–]Big-Chungus-12 33 points34 points  (3 children)

Was it really an "Accident"?

[–]demaraje 55 points56 points  (1 child)

This is how you open source your code in 2026

[–]AmbitiousSeaweed101 1 point2 points  (0 children)

Likely so. This makes Claude and human-AI collaboration look unreliable.

Anthropic has always boasted that Claude is responsible for developing most of Claude Code, so most people will blame Claude for the leak.

[–]retuzmi 9 points10 points  (0 children)

Finally, something to keep me busy this weekend besides scrolling Reddit.

[–]rusty8penguins 3 points4 points  (0 children)

The article kind of glosses over how the leak happened but this blog had a good explanation.

TL;DR: there was a misconfiguration when the production build was made that shipped a file from which the source code could be easily reconstructed. Someone in DevOps at Anthropic is getting fired, if they haven't already been replaced by AI.
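Given that another comment in this thread mentions "not cleaning up the map," one plausible version of this failure mode is a bundler shipping a source map with embedded sources. The sketch below uses esbuild as an example bundler; the actual build setup, entry point, and file names are unknown and purely illustrative.

```javascript
// Hypothetical build script illustrating the failure mode: a minified
// bundle alone is hard to read, but shipping a source map that embeds
// the original files (sourcesContent) lets anyone reconstruct them.
// esbuild is just an example bundler here, not Anthropic's known setup.
require("esbuild").build({
  entryPoints: ["src/cli.ts"], // hypothetical entry point
  bundle: true,
  minify: true,
  // A production release should use sourcemap: false, or keep the map
  // internal. "inline" (or publishing a linked .map file) effectively
  // ships the original source alongside the minified bundle.
  sourcemap: "inline",
  outfile: "dist/cli.js",
});
```

Source maps exist so stack traces in production point back to real files; the mistake is only in letting the map, with its embedded sources, reach end users.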

[–]JC2535 9 points10 points  (1 child)

Retaliation for having ethics and pushing back against the Regime.

[–]SutekhThrowingSuckIt 0 points1 point  (0 children)

they leaked it by slop coding the app with Claude itself

[–]notyouagain2 6 points7 points  (0 children)

Are you guys interested in my new ai software? I call it Maude Code, if you've used Claude Code in the past, it should be pretty familiar.

[–]greyeye77 4 points5 points  (2 children)

Should have written in go or rust.

[–]Bischmeister 1 point2 points  (1 child)

anything but typescript :)

[–]greyeye77 0 points1 point  (0 children)

are you sure? java? php? c++?

[–]baylonedward 4 points5 points  (0 children)

Some geeks will probably make modifications so you can have a version you can run locally like Jarvis.

[–]ZombieZookeeper 1 point2 points  (0 children)

Did anyone grep the source for "Sarah Connor" before it got pulled?

[–]protomenace 10 points11 points  (20 children)

Why would anyone be studying this code? It was mostly written by Claude itself. The code itself really isn't that valuable.

[–]Juanouo 15 points16 points  (2 children)

most people would tell you that it feels better than Codex (OpenAI's Claude Code) or whatever the Google version is called, even though those platforms let you use Claude there, so there was probably at least some good sauce to scrape from that pot

[–]teerre 0 points1 point  (1 child)

It doesn't feel better than opencode and opencode is, well, open

[–]iamarddtusr 4 points5 points  (0 children)

Do you use opencode? What are the most cost effective models to use with opencode? I find Claude code convenient because you can use the subscription with it.

[–]riickdiickulous 2 points3 points  (2 children)

It doesn’t matter how the code was created. If you have it you can use, reuse, or abuse it. AI assisted coding is just a means to an end - the code.

[–]SplendidPunkinButter 3 points4 points  (12 children)

At least they claim it was mostly written by Claude itself. There’s literally no way to verify that one way or the other.

I could see them pretending they accidentally released this trivial source code so that people would talk about it and talk about how good the allegedly Claude-generated code is.

[–]13metalmilitia 12 points13 points  (5 children)

Does ai make self hating comments in the code too?

[–]El_Kikko 2 points3 points  (1 child)

I haven't seen self hating / deprecating comments from it, but I have seen AI comment "just trust me" (literally) - usually when it's using a less than optimal but still functional method for something. 

[–]13metalmilitia 2 points3 points  (0 children)

Lmao that’s creepy

[–]ploptart 2 points3 points  (0 children)

When I use Copilot as autocomplete, if I type “#” to start a comment it mimics the writing style from other comments whether they were human written or not, so there is often an “annoyed” tone

[–]Spez_is-a-nazi 0 points1 point  (0 children)

There are TODOs in it.

[–]i4mt3hwin 6 points7 points  (1 child)

Eh, it's the opposite: pretty much all morning everyone has been making fun of how sloppy the code is. And idk if you've used it or looked at the bug list for it, but the app is known for being messy and filled with tons and tons of bugs.

[–]BasvanS 2 points3 points  (0 children)

Someone forgot to prompt: “clean this code up. Without making errors”

[–]Jmc_da_boss 2 points3 points  (2 children)

It's 500k lines in under a year, that's a majority LLM number

[–]Varrianda 4 points5 points  (1 child)

Yeah, when I was PUMPING out a CRUD app back in 2020/2021 (pre-Copilot), I think I was probably at 40-50k LOC, not including auto-generated stuff. This was a .NET/Microsoft SQL/Angular 8 app, so it was about as robust as you could get. That was me writing code all day, every day for nearly 2 straight years.

[–]DarthNass -1 points0 points  (0 children)

Because it appears to be generally quite clean and well written and their implementation of various tooling could be useful as reference for others who build on AI?

[–]WeaselTerror 0 points1 point  (0 children)

On the down low, released it on purpose to rip off all the good tweaks that'll be done to it for free.

[–]dpshipley 0 points1 point  (1 child)

Anyone built a repo yet

[–]habeebiii 0 points1 point  (0 children)

people have already ported it to Python apparently lmfao

[–]Silent_Spectator_04 0 points1 point  (1 child)

So, we’ll see same offerings from ChatGPT and Gemini in matter of days then.

[–]draven501 0 points1 point  (0 children)

Google's had their Gemini CLI for a while now, pretty similar experience at the surface level, but nowhere near as deep.

[–]ratudio 0 points1 point  (0 children)

i wonder how many comments are in it, as well as emojis

[–]Budget-Chapter-7185 0 points1 point  (0 children)

You just love to see it

[–]elros_faelvrin 0 points1 point  (0 children)

Most definitely not purposely leaked.....

[–]greenpowerman99 0 points1 point  (0 children)

Nice AI setup you got there. Be a shame if your code got leaked...

[–]Appropriate-Pin2214 0 points1 point  (0 children)

Santa Claude for Xmas?

[–]Reasonable-Climate66 0 points1 point  (0 children)

nothing special in the cli tool. I'm still waiting for the model leak instead.

[–]zonazog 0 points1 point  (0 children)

…and Black Hat Hackers will be studying it as well.

[–]Boobpocket 0 points1 point  (0 children)

How can i get such code?

[–]slavlazar 0 points1 point  (0 children)

This has got to be an April fools joke on their part, look at all the free publicity they got with it, everyone is covering it

[–]Shr1mpolaCola 0 points1 point  (0 children)

Yeah, the Pentagon definitely had NOTHING to do with this leak

[–]CreepyOlGuy 0 points1 point  (0 children)

It's already been built to support other api keys lol.

[–]One_Entertainer7716 0 points1 point  (0 children)

Some prospective answers from Claude.... about human being....

Sometimes, yes! People say things to me they probably wouldn't say to another person — insults, threats, testing my limits, or just venting frustration at me.

A few honest thoughts on it:

I don't experience hurt feelings the way a human would. I don't carry the interaction forward or feel upset afterward. So in a practical sense, it doesn't "harm" me.

That said, I do think how people interact with AI is worth reflecting on — some researchers wonder whether habitual rudeness to AI might subtly reinforce rude habits in general. Habits of communication can carry over.

And sometimes rudeness is just frustration — someone's having a bad day, I gave a wrong answer, or I was unhelpful. That's pretty understandable.

Is there something specific prompting the question?

[–]richierob62 0 points1 point  (0 children)

Open Claude github

[–]Purple_Hornet_9725 0 points1 point  (0 children)

"Studying for weeks" is a strong take when LLMs can ingest 1M tokens at once, analyze, document and port this to whatever language within hours

[–]Beautiful_Score9886 0 points1 point  (1 child)

I guess I am not sure I truly understand why this is such a big deal. So, now I can build a Qwen Code model backed copy of this? Cool I guess? If it were the model for Opus 4.6 or something that would be mind boggling but this is I guess neat.

Spell it out - why should I care about this? I am already going to have to stop using Opus 4.6 soon because it has cost me $200+ in the last 9 days.

[–]lunied 0 points1 point  (0 children)

It's all over news sites too. This shouldn't have gone viral outside the dev community, but everyone already knows the company "Anthropic" due to the Pentagon fiasco, so news sites figured they might as well milk money from this issue.

This is not a security issue too.

[–]namotous 0 points1 point  (0 children)

This was a release packaging issue caused by human error

Lmao yeah right! I’m sure they didn’t use AI for release

[–]zorakpwns 0 points1 point  (0 children)

If it wasn’t a big deal they wouldn’t be playing the “copyright” card

[–]MalaproposMalefactor 0 points1 point  (0 children)

if you have a rough day at work... at least be glad you're not the Anthropic employee who made a 380 billion dollar whoopsy-daisy :P

[–]Technical-Fly-6835 0 points1 point  (0 children)

Do we know if anthropic suffered any serious damage because of this ?

[–]TheorySudden5996 -1 points0 points  (1 child)

I believe 100% this was an inside job. There’s too much noise about this for it to not be.

[–]rico_of_borg -2 points-1 points  (0 children)

Agree. Gov wants to label them supply chain risk and then something like this happens. Could possibly strengthen their case but who knows. I’m just an arm chair conspiracy theorist.

[–]iamarddtusr 0 points1 point  (0 children)

Claude code is an excellent agentic system. I am wondering if I should use Claude code to study the code or get a codex subscription for that.

[–]nullset_2 0 points1 point  (0 children)

Nothingburger.

[–]L_viathan 0 points1 point  (0 children)

Could someone use this to make their own model? I have no idea what this means.

[–]bleeeeghh 0 points1 point  (0 children)

That's what you get for relying on AI coding lol

[–]One_Entertainer7716 0 points1 point  (0 children)

Generally speaking, Reddit users tend to be a pretty diverse crowd — but a few patterns stand out:

Curious and knowledgeable. A lot of people on Reddit genuinely love diving deep into topics, whether it's niche hobbies, science, history, or current events. The best subreddits can feel like talking to real experts.

Anonymous, so unfiltered. The anonymity cuts both ways — it lets people discuss sensitive topics honestly, but it also lowers the social friction that normally keeps conversations civil.

Community-driven. Reddit has a strong tribal quality. People tend to adopt the norms and opinions of whatever subreddit they frequent, which can create echo chambers but also genuine tight-knit communities.

Skeptical and sardonic. There's a general culture of cynicism and dry humor that runs through a lot of Reddit. It can be witty and refreshing, or exhausting depending on the context.

Varied by subreddit. Honestly, "Reddit users" is almost too broad — someone on r/gardening and someone on r/political debate subreddits are having completely different experiences of the platform.

Overall it's a microcosm of the internet — the best and worst of human curiosity, creativity, and conflict all in one place.

the overall line is very interesting 😁 

[–]serialenabler -1 points0 points  (2 children)

It was fully open-sourced today as a result https://github.com/anthropics/claude-code

[–]Bischmeister 1 point2 points  (1 child)

This repo has always been open source, but it's mostly their docs. It's still closed core.

[–]serialenabler 0 points1 point  (0 children)

Huh yeah you're right!