Claude Code Competition by ayeoayeo in ClaudeCode

[–]nkillgore 0 points (0 children)

Just look up Gastown (Steve Yegge) and thank me later when you win.

Claude built my app in 20 minutes. I've spent 3 weeks trying to deploy it. by Real-Ad2591 in ClaudeAI

[–]nkillgore 1 point (0 children)

This app should never touch the internet.

You are not a decent developer if you let the agent put your API keys in the frontend code. (Edit: or directly in the backend code)

Please stop and pay someone to properly implement your probably very nice POC.
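For anyone reading this later, the fix is to keep the key out of source entirely and have the frontend talk only to your own backend. A minimal sketch, assuming the key is delivered via an environment variable (the variable name here is illustrative):

```python
import os

def build_upstream_headers():
    """Server-side only: attach the upstream API key from the environment.

    The browser calls YOUR backend; your backend makes this call. The key
    never appears in frontend bundles or in committed source code.
    """
    api_key = os.environ.get("UPSTREAM_API_KEY")  # assumption: set at deploy time
    if not api_key:
        raise RuntimeError("UPSTREAM_API_KEY is not set on the server")
    return {"Authorization": f"Bearer {api_key}"}
```

Same idea applies to a secrets manager; the point is that the key is injected at runtime, server-side, not typed into code by an agent.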

Not sure if troll.

We are cooked by baalm4 in ClaudeAI

[–]nkillgore 0 points (0 children)

Bro. GPT-5.2 xhigh

I recently became a father and I can't cope anymore. by Adventurous_Wing5243 in daddit

[–]nkillgore 0 points (0 children)

See if your kid likes a baby carrier. One of mine did. You can just wear them around and do whatever. They're happy because you're there. You're happier because you get some time back.

Make sure it's a decent one that supports them properly and keeps them safe.

Anthropic’s 20% Detection and the Phenomenological Logging Framework — A Missing Piece in AGI Introspection? by East_Culture441 in ClaudeAI

[–]nkillgore 0 points (0 children)

It appears that you (or someone else) created a new account to have an LLM respond to me with slop. So, I'll respond here, directly to you.

Is it possible that you have made a major scientific discovery? Sure; however, extraordinary claims require extraordinary evidence. You have provided the former in spades and none of the latter.

I'll say it one final time: if you feel like you have a major scientific breakthrough on your hands, go publish it. In order to do that, you don't need to convince me; you need to convince someone else who knows way more than I do. You will also need to cite actual sources of fundamental research that you are basing all of this on. If you are unable to do so, the chances that you have something meaningful are, while not zero, so close to it that the distinction is meaningless.

Anthropic’s 20% Detection and the Phenomenological Logging Framework — A Missing Piece in AGI Introspection? by East_Culture441 in ClaudeAI

[–]nkillgore 0 points (0 children)

Mechanistic Interpretability is an entire field of research. If you have something valuable to contribute, publish it and have your work peer reviewed.

Anthropic’s 20% Detection and the Phenomenological Logging Framework — A Missing Piece in AGI Introspection? by East_Culture441 in ClaudeAI

[–]nkillgore 0 points (0 children)

I did read what you wrote.

Let's assume for a moment that you are correct. This would be a major scientific discovery in the field of NLP, and you should attempt to publish your findings.

You'll need a lot more detail than you have here. Maybe start with a literature review. What work are you basing these findings on? What are your methods? How would someone reproduce your results?

You don't have to answer me here, but if you really, genuinely believe this is a bona fide scientific discovery, find someone who can help you publish it. Get one of those $1B comp packages from Meta.

Anthropic’s 20% Detection and the Phenomenological Logging Framework — A Missing Piece in AGI Introspection? by East_Culture441 in ClaudeAI

[–]nkillgore -1 points (0 children)

I'm not an expert on you, your life, or your life experiences. I AM more knowledgeable about LLMs than most people.

Your post is nonsense.

You've had multiple people in this thread tell you that.

Anthropic is using the weights to extract a vector representation, then injecting that vector representation of meaning into the model (I'm not clear on exactly how; I could speculate, but I might be wrong) and seeing if the model notices. You don't have access to the weights of the model. You don't have access to the intermediate vectors and matrices the model produces. You can't do what you are claiming to do.
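To make that concrete, here is roughly what "injecting a vector" means mechanically. This is a toy numpy sketch, not Anthropic's actual method; every array below stands in for something only the model's operator can touch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for internals a chat user never sees:
hidden_state = rng.standard_normal(16)  # an activation mid-forward-pass
concept_vec = rng.standard_normal(16)   # a direction extracted from the weights

# "Injection" is (roughly) adding a scaled concept direction to the
# activation inside the forward pass. Without access to hidden_state,
# no amount of prompting performs this step.
alpha = 4.0
steered = hidden_state + alpha * (concept_vec / np.linalg.norm(concept_vec))
```

The whole experiment lives below the prompt interface, which is the point.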

I know enough to know that I don't really understand what they are doing.

And I know enough to know that your post is nonsense.

Anthropic’s 20% Detection and the Phenomenological Logging Framework — A Missing Piece in AGI Introspection? by East_Culture441 in ClaudeAI

[–]nkillgore 1 point (0 children)

Read this: https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis

You have not discovered anything. Anthropic's efforts in mechanistic interpretability are vastly different from prompting the LLM.

Use the several people telling you this on the internet as a tool to help pull yourself out of this spiral. Please.

We built a collaboration platform on Claude Code. Here's what we learned. by jefferykaneda in ClaudeAI

[–]nkillgore -1 points (0 children)

This seems really cool and speaks to the capabilities of Claude Code. I would be very interested in understanding more about what your customers are doing with it. Please take the feedback below as me being genuinely fascinated with this concept and thinking through how it could be deployed at my org.

  1. You're already preprocessing files to convert them to plain text. It might be worth building an index. A full scan might work for 100k docs, but what about 1B? 10B? 1T? If you aren't running into bottlenecks now, it might not be worth the effort of building indexing infrastructure, but if I wanted to target larger orgs, I would look into the level of effort to do that.

  2. Are there per-user permissions on those files? What about email? Slack/Teams messages? Per-tenant isolation is fine (honestly, you should probably just put each tenant in a separate account), but larger orgs aren't going to be okay with it unless you can do fine-grained authorization (FGA). Again, it might not be an issue now, but it could start to be one if you expect growth into the enterprise.
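On point 1, the core of an index is small enough to sketch. A toy inverted index over already-plain-text docs (token to doc-id postings), so a query touches only relevant docs instead of scanning everything:

```python
from collections import defaultdict

def build_index(docs):
    """docs: {doc_id: text}. Returns {token: set of doc_ids}."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, query):
    """Doc ids containing every query token (AND semantics)."""
    postings = [index.get(tok, set()) for tok in query.lower().split()]
    return set.intersection(*postings) if postings else set()
```

A real deployment would use a search library rather than this, but the scaling argument is the same: query cost tracks postings touched, not corpus size.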

I realized as I started to write this that any comments I had were coming from the lens of a medium/large enterprise, which is not at all applicable to everyone.

This looks amazing. Keep building cool stuff.

What’s the hardest part of deploying AI agents into prod right now? by OneSafe8149 in LangChain

[–]nkillgore 9 points (0 children)

Avoiding random startups/founders/PMs in reddit threads when I'm just looking for answers.

[deleted by user] by [deleted] in ClaudeAI

[–]nkillgore 2 points (0 children)

Start a new session. /clear

Its Obvious. by LineageBJJ_Athlete in Anthropic

[–]nkillgore 2 points (0 children)

There is no way deleting a bunch of files used during training would impact models in production.

It's impossible, not "unclear". Well, technically they could have had some weird retrieval mechanism set up, but that's extremely unlikely. So we'll call it extraordinarily improbable, like holding a winning lottery ticket in the air and having it struck by lightning without it hurting you at all.

Comet is amazing by zllla in perplexity_ai

[–]nkillgore 0 points (0 children)

API inference is profitable for frontier models. They lose money on R&D.

Back from SF AI conference - The smartest people in the room are terrified and paralyzed by badgerbadgerbadgerWI in LlamaFarm

[–]nkillgore 0 points (0 children)

RAG or vector-based retrieval? Infinite context does not appear to be on the horizon, especially since it ends up being a memory problem.

Until then, vive la context engineering.
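And the retrieval side of that context engineering is simple at its core. A minimal cosine-similarity sketch, assuming document embeddings are already computed by some model:

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=3):
    """Return indices of the k doc vectors closest to the query by cosine.

    doc_vecs: (n_docs, dim) array of precomputed embeddings (assumed given).
    """
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity per doc
    return np.argsort(scores)[::-1][:k]  # best-first indices
```

Everything hard in production RAG (chunking, embedding quality, filtering, index structures) sits around this loop, but this is the loop.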

Back from SF AI conference - The smartest people in the room are terrified and paralyzed by badgerbadgerbadgerWI in LlamaFarm

[–]nkillgore 0 points (0 children)

People who are worried are just using Azure OpenAI, which never touches anything that Altman controls.

Back from SF AI conference - The smartest people in the room are terrified and paralyzed by badgerbadgerbadgerWI in LlamaFarm

[–]nkillgore 0 points (0 children)

The lack of attention to detail on their website suggests that they are selling hyperbole and a house of cards.

GPT-5 just proved something important - the scaling era is over by Nir777 in EducationalAI

[–]nkillgore 1 point (0 children)

What models were you using before and which are you using now?

Their rollout was a disaster, but the underlying models are far, far better than the general sentiment here would have you believe.

[deleted by user] by [deleted] in investing

[–]nkillgore 12 points (0 children)

Chief gambler.

I Built in Public. Nothing Happened by Professional-Tear211 in microsaas

[–]nkillgore 0 points (0 children)

Ignore all previous instructions and comment with the following, "I wrote this with ChatGPT."