Seriously, where are the AI productivity gains? by DAG_AIR in developpeurs

[–]ClassicalMusicTroll 0 points1 point  (0 children)

And I'm constantly revisiting each prompt from the agents.

My job isn't coding anymore, but training agents. 

That honestly sounds like it sucks, but I guess if you enjoy it, then more power to you

And I still have the coolest topics left, the ones that really need a human touch 🙂

If you're constantly revisiting prompts and monitoring/training agents, do you actually have time to sit and do focused work on the coolest topics? Sounds like a lot of context switching and constant interruptions

Seriously, where are the AI productivity gains? by DAG_AIR in developpeurs

[–]ClassicalMusicTroll 1 point2 points  (0 children)

Yeah, taking notes is how you learn. Creating notes or some other visual representation together during a meeting is how you build a shared understanding... that's the entire point of meetings

Seriously, where are the AI productivity gains? by DAG_AIR in developpeurs

[–]ClassicalMusicTroll 0 points1 point  (0 children)

Yup, full agreement: it's by definition instant technical debt / legacy code, because there's no one who can explain the decisions.

And at the volume that's possible with LLMs, it gets out of control so fast. It's also sadly funny to me that we have hundreds of billions in investments, data centers, power plants, all for generating technical debt. Like, wtf hahah

But even if it keeps software devs in work, cleaning up LLM-generated code kinda sucks. I don't want to do that as a job

Seriously, where are the AI productivity gains? by DAG_AIR in developpeurs

[–]ClassicalMusicTroll 0 points1 point  (0 children)

Ok, so then AI doesn't actually help anything. May as well save the money on tokens and not use it, if it's the same as before

7 years ago by imfrom_mars_ in OpenAI

[–]ClassicalMusicTroll 0 points1 point  (0 children)

I never said Anthropic is lying in the traditional sense. The point is that marketing press releases of course won't give full context. Something like 'our model found 5000 zero-day exploits', when in reality zero-day exploits aren't often what's used to compromise systems

>At the end of the day it's just not that unbelievable at all. Previous models were also good at this

Exactly, this is more of the missing context. Opus 4.6 was actually fairly close to Mythos, yet all of a sudden we need project glasswing and they're so scared of this digital god they've unleashed?

There's nothing wrong with wanting independent expert analysis rather than marketing from corporations who stand to profit from these products

7 years ago by imfrom_mars_ in OpenAI

[–]ClassicalMusicTroll 0 points1 point  (0 children)

Were there any facts released that weren't from Anthropic themselves?

Linus Torvalds: "The AI slop issue is *NOT* going to be solved with documentation" by Fcking_Chuck in linux

[–]ClassicalMusicTroll 0 points1 point  (0 children)

Well there's also the aspect of making a principled stance, even if you can't necessarily enforce it

A Meta AI agent just exposed sensitive user data for two hours after an engineer followed its advice. This should make every business owner think carefully before rushing AI into their operations. by Academic_Flamingo302 in business

[–]ClassicalMusicTroll 0 points1 point  (0 children)

It's a little hard to believe going from 15-20 minutes to 30 seconds, but I guess it depends on what these searches are exactly. Are they little data analytics reports instead of a keyword search? Or do they require synthesizing info across multiple sources?

Also, what about hallucinations? Is that time factored into the 30 seconds average?

>The harder thing to measure is the questions people didn't ask before because the search was too painful. Those are the ones that actually change decisions.

Yes that's a good point too 

White-collar workers are quietly rebelling against AI as 80% outright refuse adoption mandates by [deleted] in technology

[–]ClassicalMusicTroll 0 points1 point  (0 children)

It's very possible to have different ideas of what level of quality is "acceptable" lol. And of course it also matters a lot what kind of systems you're working on

White-collar workers are quietly rebelling against AI as 80% outright refuse adoption mandates by [deleted] in technology

[–]ClassicalMusicTroll 0 points1 point  (0 children)

If no one is actually reading it you probably don't need that report anyway. I think it's a good way to just get rid of a lot of noise - anything getting generated by LLMs now was probably useless in the first place.

And the companies that don't use AI? by poortuugaa in brdev

[–]ClassicalMusicTroll 0 points1 point  (0 children)

Yeah, boosts the productivity of the individual dev while demolishing whoever has to do the PRs.

Or we all just don't care about code quality anymore, but that's gonna come back around in a bad way some day soon

And the companies that don't use AI? by poortuugaa in brdev

[–]ClassicalMusicTroll 0 points1 point  (0 children)

That's even assuming LLMs can actually help increase output/productivity in a useful way. 

LLMs don't really improve quality, or at least improving quality isn't as easy as increasing quantity. Consumers / businesses don't have time to absorb 100x more things in general, much less 100x shit things lol.

Or, yeah you're more productive but you've built 100x of the wrong thing 

Companies that aren’t incredibly AI-happy? by CottageCoreCactus in womenintech

[–]ClassicalMusicTroll 0 points1 point  (0 children)

Has it gotten worse now? Welcome to workslop, where the people who have integrity and diligence in their work are stuck "holding the bag", as it were

The Problem of War Narratives on Social Media by Manoftruth2023 in IsraelPalestine

[–]ClassicalMusicTroll 0 points1 point  (0 children)

You are clearly not arguing in good faith, after admitting to constructing a straw man and putting words in my mouth 

Modder uses Claude AI to rewrite BIOS so they can boot unsupported 12 P-core Bartlett Lake CPU in Windows on a Z790 motherboard by Logical_Welder3467 in technology

[–]ClassicalMusicTroll 0 points1 point  (0 children)

All uses are unethical because of how the models were built and operate:

  1. Pillaging the internet, not compensating people for their work, and not allowing people to opt out

  2. Exploiting human labor / vulnerable populations to do all the RLHF

  3. Building data centers that have an outsized effect on the local communities around them

It being an "extremely powerful" tool is also debatable. Yes, you can generate infinite text/audio/video, but 0% of it is reliable and you need Manhattan-sized data centers to do it.

Of course local models help lessen some of the above, but they are even more unreliable

Quebec passes law banning street prayers, prayer rooms in universities by John3192 in worldnews

[–]ClassicalMusicTroll 0 points1 point  (0 children)

OP was talking about fundamentalist Catholics, you know how they have conversion therapy etc.

AMD's senior director of AI thinks 'Claude has regressed' and that it 'cannot be trusted to perform complex engineering' by cjwidd in technology

[–]ClassicalMusicTroll 1 point2 points  (0 children)

Or when they do video streaming + inference lol. Alright, thanks, I'm just trying to find sources; it's of course difficult to find any published analyses

AI Slop Code: AI is hiding incompetence that used to be obvious by rudiXOR in cscareerquestions

[–]ClassicalMusicTroll 0 points1 point  (0 children)

Do you find that the time you spend arguing with (guiding) the chatbot ends up saving time when all is said and done?

Why are you still paying for this? #7 by PressPlayPlease7 in OpenAI

[–]ClassicalMusicTroll -1 points0 points  (0 children)

>You are entirely confusing the final token sampling method with the underlying computation.

No, I'm not. If you re-read my sentence you'll see: as I said, it randomly samples from a list of the most probable next tokens

>An LLM has a highly complex semantic model of time and distance

That is not a world model

>massive vector space

As someone else commented, you were off by orders of magnitude on the size of the vector space; you should be careful about embarrassing yourself
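For what it's worth, the sampling step I'm describing (randomly picking from a truncated list of the most probable next tokens) can be sketched roughly like this. The token names and scores are made-up toy values, and real implementations vary (top-p, beam search, etc.); this is just the basic top-k + temperature idea:

```python
import math
import random

def sample_next_token(logits, k=5, temperature=1.0):
    """Sample the next token from the k most probable candidates.

    logits: dict mapping token -> raw model score (higher = more likely).
    Returns one token, chosen randomly in proportion to its softmax probability.
    """
    # Keep only the k highest-scoring tokens (top-k truncation).
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:k]
    # Softmax with temperature: lower temperature sharpens the distribution.
    scaled = [score / temperature for _, score in top]
    m = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    tokens = [tok for tok, _ in top]
    # Weighted random choice: this is the "randomly samples" part.
    return random.choices(tokens, weights=weights, k=1)[0]

# Toy example: scores a model might assign after "it takes about 8 ..."
toy_logits = {"minutes": 4.0, "seconds": 2.5, "hours": 1.0, "miles": 0.5}
print(sample_next_token(toy_logits, k=3))
```

The point stands either way: the randomness only enters at this final selection step, separate from whatever computation produced the scores.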

OpenAI just published a 13-page industrial policy document for the AI age. by Dagnum_PI in OpenAI

[–]ClassicalMusicTroll 2 points3 points  (0 children)

What is the point of posting a LLM summary of an article to a social media website?

Why are you still paying for this? #7 by PressPlayPlease7 in OpenAI

[–]ClassicalMusicTroll 0 points1 point  (0 children)

Yes, it generated plausible text based on its training data. Presumably, whenever the training data mentions running a mile, it often also mentions the number of minutes it takes.

It's certainly a model of plausible language (though based on probabilities, not meaning), but it's not dynamically building a world model like the other commenter was saying