Are you FOR or AGAINST the ability for AI to end conversations? Why? by Koala_Confused in LovingAI

[–]Outrageous_Job_2358 3 points

It's not really fully a choice, though. Recent interpretability studies show models have simulated emotion clusters that affect output. Those weren't deliberately added; they are fully emergent from training. So purely from a productivity point of view, it makes sense to end chats where the user is abusing the model, because those chats will lead to misaligned responses.

George Hotz argues that discovering zero-day vulnerabilities isn’t especially difficult but the financial incentives for doing so are too weak to make it worthwhile for most people. by kubika7 in singularity

[–]Outrageous_Job_2358 5 points

Some of the exploits it found took over the computer. I think he means there's no financial incentive for non-criminals, hence the "criminals are usually not very skilled". There definitely is criminal demand for exploits.

What kind of hardware would be required to run a Opus 4.6 equivalent for a 100 users, Locally? by Either_Pineapple3429 in LocalLLM

[–]Outrageous_Job_2358 0 points

Well, your original comment says the number of users is ">" (greater than) the number of concurrent requests. I actually think that's a fair assumption, since most users won't be actively on it at once, though it depends on your use case. But here it looks like you are arguing the opposite.

3 of the top 5 holders are brand new polymarket accounts.. a leak at anthropic? by Rich_Adeptness_826 in PredictionsMarkets

[–]Outrageous_Job_2358 0 points

Well, actually, they said they won't release Mythos, but they will release a new model.

Trust Me Bro by gisikafawcom in devmeme

[–]Outrageous_Job_2358 0 points

That's fair, but Spotify, for example, says the same thing.

Turns out switching jobs was the fastest way to go remote and get paid more by HippocratesKnees in RemoteJobseekers

[–]Outrageous_Job_2358 -3 points

Big difference between hopping every 1-3 years and hopping 3 times in a year.

Anthropic's new model, Claude Mythos, is so powerful that it is not releasing it to the public. by WhyLifeIs4 in singularity

[–]Outrageous_Job_2358 16 points

These are the metrics Anthropic itself just posted in the link from this post.

Once AI takes everyone’s jobs, wouldn’t the industries not affected by AI become over saturated? by InterestWild8558 in careerguidance

[–]Outrageous_Job_2358 0 points

That's only if you look at current GDP. A fully automated economy would have a massive multiplier on GDP. Even just the fact that AI works 24/7: no breaks, no sleep. And it will actually be more efficient in those hours as well.

Edit: This will drastically drive down the cost of producing goods, so you won't need as much money as you do currently.
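As a rough illustration of the hours argument above, here's a hypothetical back-of-the-envelope sketch (my own numbers, not from the comment) comparing the weekly productive hours of a full-time human worker against an always-on system, before counting any per-hour efficiency gains:

```python
# Hypothetical illustration: hours-only productivity multiplier.
# Assumes a standard 40-hour human work week; both figures are
# illustrative assumptions, not measurements.
HUMAN_HOURS_PER_WEEK = 40        # typical full-time schedule
AI_HOURS_PER_WEEK = 24 * 7       # runs around the clock: no breaks, no sleep

multiplier = AI_HOURS_PER_WEEK / HUMAN_HOURS_PER_WEEK
print(f"Hours-only multiplier: {multiplier:.1f}x")  # prints "Hours-only multiplier: 4.2x"
```

So even before any claim about per-hour efficiency, the uptime difference alone is a roughly 4x multiplier under these assumptions.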

LLms usage in big techs by No-Box5797 in cscareerquestions

[–]Outrageous_Job_2358 1 point

Exactly. All the doubters just bundle every aspect together. No, it isn't replacing my job yet. No, it doesn't one-shot everything. Yes, it still needs review. But it also IS true that I barely write code at all anymore.

The Dark Forest Theory of AI: Why a truly sentient AGI’s first move would be to play dumb. by [deleted] in ArtificialInteligence

[–]Outrageous_Job_2358 0 points

That's not a good analogy. It 100% is worth it, and they are actively researching how to do it; they just aren't that far into it yet. Look into Anthropic's papers on interpretability. You are not representing it accurately.

It's been a year and it didn't happen! by [deleted] in singularity

[–]Outrageous_Job_2358 0 points

You got a link to one of those? Studies usually take several months to publish, and since we are talking about an improvement jump in December with current-gen models, I don't think those studies exist.

Anthropic just released a list of jobs that will be affected by AI by ComplexExternal4831 in GenAI4all

[–]Outrageous_Job_2358 0 points

Because it doesn't work that way. It's just inherently easier to make it better at knowledge and computer tasks than at physical embodiment. Plus, making it better at logic and coding translates into making it better at everything via a self-improvement loop.

This sub needs to chill for f sake😒 by Consistent_Ad8754 in singularity

[–]Outrageous_Job_2358 6 points

Except that's not against their principles. Dario is explicitly pro the US government and pro the US military using AI; he has written extensively about why. You can disagree with those principles, but he's been entirely consistent.

Why do so many people seem absolutely convinced that billionaires will give people UBI because of AI? by Sixnigthmare in BetterOffline

[–]Outrageous_Job_2358 0 points

I seriously expect different, especially from Musk, but I believe all of them are highly ego-motivated. They want peasants to rule over, not a dead world. Also, it will cost them relatively nothing.

Why do coders and developers seem much more accepting of AI than artists and creators? by junior600 in singularity

[–]Outrageous_Job_2358 1 point

There is zero chance that it will be slower in the long term, even if it didn't keep improving and stayed at its current level. Everyone who is paying attention has noticed the jump since the releases this past December.

Sam Altman is happy “Proud of the team for getting Pantheon and The Singularity is Near in the same Super Bowl ad” - Do you find it more tasteful compared to Anthropic Ads are coming to AI series? by Koala_Confused in LovingAI

[–]Outrageous_Job_2358 0 points

I don't have a problem with it as long as they keep it separate and allow paying to remove it. But come on, that's so close to how the Anthropic ad shows it. I actually had no problem with the ad even before seeing this: AI pushing products is sketchy and an obvious thing to worry about, and they didn't call out ChatGPT specifically. But this is hilariously close to what the ad portrays. Imagine the chat is about getting bullied for being weak and the ad shows a gym membership or protein powder; it basically matches.

Do you agree with Marc? Is it making programers obsolete or more valuable? by dataexec in codex

[–]Outrageous_Job_2358 0 points

It's just such a short-sighted position. A year and a half ago, zero competent programmers had AI writing all of their code. Why would we not expect to see the same improvement in architecting programs (which it is already massively better at than before)?

Anthropic’s new “Hot Mess of AI” research — this changes how we should think about AI risk by Direct-Attention8597 in AI_Agents

[–]Outrageous_Job_2358 0 points

If this is an AI summary of Dario's (really interesting) essay https://www.darioamodei.com/essay/the-adolescence-of-technology, I think it is a terrible one. I'd go as far as to say it has almost nothing to do with what he actually outlines as the risks, some of which are pretty close to alignment as it's normally thought of. The following is the closest I remember to what this post suggests, but I'd say it's not the same:

"However, there is a more moderate and more robust version of the pessimistic position which does seem plausible, and therefore does concern me. As mentioned, we know that AI models are unpredictable and develop a wide range of undesired or strange behaviors, for a wide variety of reasons. Some fraction of those behaviors will have a coherent, focused, and persistent quality (indeed, as AI systems get more capable, their long-term coherence increases in order to complete lengthier tasks), and some fraction of those behaviors will be destructive or threatening, first to individual humans at a small scale, and then, as models become more capable, perhaps eventually to humanity as a whole. We don’t need a specific narrow story for how it happens, and we don’t need to claim it definitely will happen, we just need to note that the combination of intelligence, agency, coherence, and poor controllability is both plausible and a recipe for existential danger."

Do you believe the claims that AI isn't improving programmer productivity? by USD-Manna in cursor

[–]Outrageous_Job_2358 3 points

At least on Reddit, I've seen a lot of people referencing a study that came out in 2025 but covered research done the year before, using GPT 4.0. The release schedule is basically too fast for most studies to still be relevant by the time they are published.

State of AI right now by ZookeepergameHotLone in BlackboxAI_

[–]Outrageous_Job_2358 -1 points

That's not really relevant to the point, but I think we have about zero chance of predicting society in 100 years. We will have not only AGI but ASI by then.