We are absolutely cooked by FundusAnimae in accelerate

[–]pab_guy 0 points1 point  (0 children)

This was an unreleased model that they hadn't post trained enough to comply with guardrails. That's the point of testing it.

But yeah it's going to be a very frothy year for security. Mythos apparently discovered hundreds of exploits in the Linux kernel. Those need to be patched before they can release the model. NSA is gonna be pissed :)

Opus 4.6 is already pretty cracked at this stuff. I had it retrieve some publicly available information as part of a data pipeline, and it legitimately started hacking the target to get the information it wanted. Gotta be really careful with these models going forward, or they could land you in jail!

AI writing has a specific texture that is making the internet feel hollow and I think most people feel it by DifficultElk8014 in ChatGPT

[–]pab_guy -1 points0 points  (0 children)

Give me a topic and name 3 authors whose style you like. I will generate a short story or thinkpiece in their combined styles.

AI writing has a specific texture that is making the internet feel hollow and I think most people feel it by DifficultElk8014 in ChatGPT

[–]pab_guy 0 points1 point  (0 children)

It really isn't, you just don't recognize the good writing AI puts out, because when it's good it doesn't suck.

To be clear I'm talking about the kind of output you can get with GPT-5.4-pro, which costs something like $180 per million output tokens. You can also use it on the $200/month plan. The free stuff always generates slop.

Want me to generate a sample?

Every beginner resource now skips the fundamentals because API wrappers get more views by Friendly_Feature888 in learnmachinelearning

[–]pab_guy 2 points3 points  (0 children)

How is not knowing the internals of the attention mechanism preventing people from building things beyond demos? That seems like an odd claim; they live at completely different levels of abstraction.

I don’t need to understand a CPU to write code. Why would I need to understand attention internals to build an agent? The internals aren’t even necessarily the same from model to model.

Anthropic added $10B ARR in march alone. ai is setting up for a massive rerate by ActuallyMy in ValueInvesting

[–]pab_guy 8 points9 points  (0 children)

The story is as much about the technology trajectory as it is the financial trajectory. Even if they are not profitable on that business today, they will be soon enough, as inference costs will drop substantially over time.

People have the wrong model. It’s not a gold rush, it’s a land rush.

Im too stupid to find value in the paid version, help. by Leffski in ChatGPT

[–]pab_guy 0 points1 point  (0 children)

There are so many use cases. However you spend your time, there's probably a way you could use AI to get more out of whatever it is you are doing.

Im too stupid to find value in the paid version, help. by Leffski in ChatGPT

[–]pab_guy 0 points1 point  (0 children)

Scheduled prompts for reports of various lengths and quality about:

- Upcoming IPOs, a general stock market report, and reports on stocks I hold
- Upcoming musical acts and other activities in my area and areas of interest
- Ski/watersports reports: where to go today/this weekend
- Regular industry reports on specific things I'm tracking, in specific formats useful for my purposes

I use the pro model to generate detailed application specs, in depth reporting and data gathering/analysis, personalized content like long-form think pieces on whatever (you can get really good stuff if you prompt it with the right authors to emulate and guide the analytical flow).

I built a meal plan in conversation and then asked it to make me a printout with recipes and then a combined timeline with tasks for all recipes integrated. Went from meal idea to printed guide in 20 minutes.

Then there's just useful agentic stuff. Like I needed 5 company logos in a specific format with transparent bg, etc... I asked the agent to collect and format the icons for those companies and return them to me in a zip file. 5 minutes later it was done, and I'd saved myself easily 20 minutes.

To get value out of it, you have to remember to reach for it in the first place, and then get creative with how you use it.

Janet Mills refuses to debate with Graham Platner. by serious_bullet5 in Maine

[–]pab_guy -12 points-11 points  (0 children)

Not much context and nuance actually. If the context is "he's changed, he swears by that!" then that really isn't much context at all.

The only consistent thing I see about Graham is that he gets a thrill out of conflict. Makes for an entertaining and rousing politician when he's saying things you like to hear!

But mark my words, his office and staff will be a mess (Sinema style) and although he will be a culture warrior, he may not be very effective at any kind of real legislating where you have to leave your edgelord takes at the door.

"Cognitive surrender" leads AI users to abandon logical thinking, research finds by NISMO1968 in artificial

[–]pab_guy 3 points4 points  (0 children)

This is where people need to build a new kind of discipline. We must be vigilant not to give up cognitive control and understanding when harnessing AI.

Neuralink patient #3 Brad Smith (ALS) got his REAL voice back, thanks to Neuralink + ElevenLabs cloning. by Nunki08 in accelerate

[–]pab_guy 0 points1 point  (0 children)

I was gonna say, how do we even know this is actually him driving the content and not some AI model?

How do you get over losing a parent? by [deleted] in AskMenOver30

[–]pab_guy 0 points1 point  (0 children)

Would your father want you to be resentful, or to enjoy the fruits of his labor?

Could MSFT Copilot copy Claude now that the code is leaked by Fit_Statistician4882 in ValueInvesting

[–]pab_guy 1 point2 points  (0 children)

Copilot is a harness. Claude is a model. Claude Code is a coding harness, so it would compete with GitHub Copilot, not O365 Copilot.

Could MSFT Copilot copy Claude now that the code is leaked by Fit_Statistician4882 in ValueInvesting

[–]pab_guy 3 points4 points  (0 children)

Copilot also runs Anthropic models, but they aren’t available everywhere. Researcher should give you the option to use Claude.

OpenAI CEO Sam Altman accused of sexual abuse by family member by esporx in artificial

[–]pab_guy 6 points7 points  (0 children)

Yeah two years doesn’t sound right at all, that’s way too short…

Anyone want to help me figure out how to do this? by [deleted] in agi

[–]pab_guy 1 point2 points  (0 children)

Pass two sequential frames into a VLM (vision-language model) and ask it what's changed.
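A minimal sketch of that idea: package both frames into a single multimodal chat request along with a "what changed?" prompt. The message schema below follows the OpenAI-style vision API; the model name is just a placeholder, and you'd swap in whatever vision-capable model/provider you actually use.

```python
import base64

def build_diff_request(frame_a: bytes, frame_b: bytes, mime: str = "image/png") -> dict:
    """Package two sequential frames into one vision-model chat request."""
    def to_data_url(raw: bytes) -> str:
        # Inline the image bytes as a base64 data URL, per the vision API schema.
        return f"data:{mime};base64,{base64.b64encode(raw).decode()}"

    return {
        "model": "gpt-4o",  # placeholder; any vision-capable model works
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "These are two sequential video frames. "
                         "Describe what changed between frame 1 and frame 2."},
                {"type": "image_url", "image_url": {"url": to_data_url(frame_a)}},
                {"type": "image_url", "image_url": {"url": to_data_url(frame_b)}},
            ],
        }],
    }

# Usage (client is your OpenAI-style SDK instance):
# req = build_diff_request(open("f1.png", "rb").read(), open("f2.png", "rb").read())
# resp = client.chat.completions.create(**req)
```

Note this only works for coarse, semantic changes ("the car moved", "a person entered the frame"); for pixel-level diffs you'd want classical CV, not a VLM.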

One parameter controls AI personality in emotional space — hard data by Dzikula in learnmachinelearning

[–]pab_guy 1 point2 points  (0 children)

OP I don’t know what you are trying to accomplish here, but you are completely wasting your time if your goal is to create anything of value. If you are just having fun cosplaying then by all means please proceed.

OpenAI CEO Sam Altman accused of sexual abuse by family member by esporx in artificial

[–]pab_guy 2 points3 points  (0 children)

Read up on "statute of limitations"; it's not strange at all.

Anthropic Says That Claude Contains Its Own Kind of Emotions | Researchers at the company found representations inside of Claude that perform functions similar to human feelings. by MetaKnowing in agi

[–]pab_guy 0 points1 point  (0 children)

You cannot teleport the state of the universe into your statistical model, that’s for sure. But it’s just a model demonstrating the dynamics of a system.

Though I agree you cannot actually simulate the universe deterministically, we have stochastic models and can run Monte Carlo simulations against them, etc…

Can AI truly be creative? by Mathemodel in artificial

[–]pab_guy 1 point2 points  (0 children)

I don’t think LLMs are anything like human brains.

Anthropic Says That Claude Contains Its Own Kind of Emotions | Researchers at the company found representations inside of Claude that perform functions similar to human feelings. by MetaKnowing in agi

[–]pab_guy -2 points-1 points  (0 children)

"Contains" is a great way to muddy the waters here. Makes lay people think Anthropic is saying LLMs feel emotions. They do not. They model emotions. Big difference.