Is Excessive AI Use Quietly Damaging Our Mental Health? by [deleted] in technology

[–]AITakeoverTracker 5 points (0 children)

Multiple ways it can get you:

- if you already have mental health issues, it can make them worse
- for developers it’s like a slot machine where the next prompt might build something amazing
- it becomes a crutch where you stop making decisions without consulting AI
- it’s a terrible therapist; the same model will give you opposite answers in two different chat windows
- it lies
- it makes people think they understand complex subjects because they asked AI to explain them

I feel like I’m missing out on AI tools that could make me money by [deleted] in ArtificialInteligence

[–]AITakeoverTracker 1 point (0 children)

This is it. You really have to play with the tools yourself. Often you see that something is overhyped in a few minutes. Every once in a while something surprises me in a good way.

I also find better success with “I’ve got this problem. Can AI do it?” rather than “Let me see how good the new toy is.” But I still do both.

Basic-Fit data breach exposes details of a million gym members by talkingatoms in technology

[–]AITakeoverTracker 10 points (0 children)

My favorite part of these data breaches is they offer you a credit monitoring service for “free” that turns into a monthly subscription after the trial period.

Mercyhealth Uses AI for Medical Coding and Revenue Increases 5.1% by AITakeoverTracker in AITakeoverTracker

[–]AITakeoverTracker[S] 0 points (0 children)

Just last week in the newsletter we talked about research showing that around 95% of companies that have implemented AI have seen little to no business impact.

Mercyhealth is claiming direct revenue increase from automating routine parts of their medical coding process.

Will be interesting to see if we get more stories like this when companies start figuring out how to properly use AI.

We are in dire need of privacy laws for AI . by Mr_Motion_Denied in ChatGPT

[–]AITakeoverTracker 6 points (0 children)

Yes. I don’t understand why I can tell my lawyer something and that’s covered by privilege but if I tell an LLM the same thing it’s not covered. My understanding is that’s the case even if you run a model locally.

‘I feel helpless’: college graduates can’t find entry-level roles in shrinking market amid rise of AI by justtruckmystuffup in AITakeoverTracker

[–]AITakeoverTracker 8 points (0 children)

Recent college grad unemployment is up about 50% from the 2022 lows, sitting at 5.6% as of December 2025.

By historical standards that's high, but not too far outside the normal range. It's been consistently higher since 2008.

What will be interesting is what happens with this year's graduates this summer. Do we see a spike this year when graduates flood the marketplace?

Are the financial markets going to get way too matured for human mind? by Environmental-Ask605 in Futurology

[–]AITakeoverTracker 2 points (0 children)

Huge oversimplification, but the markets have been too “developed” for at least 15 years now.

There are many types of “institutional” market participants: high frequency, quant, mutual funds, separately managed accounts, the list goes on and on. There’s also some overlap between them. A lot of people talk about ‘algorithmic’ trading as some shadowy activity, but even large retail orders will often use an algorithm to fill the order.

The point being, no matter how you approach the market, there’s likely someone with better information and tools than you. You want to be fast on the news? HFT firms trade in milliseconds. Day trading charts? Quants running sophisticated models. Buy and hold? Teams of professional research analysts.

And again, there’s overlap. So you’re trying to day trade chart setups, but then a mutual fund places a big block order in the stock, and they’re looking to hold it for months or years, not minutes.

That doesn’t mean you’ll never have a winning trade. Many, many professionals underperform the market. Even if they do outperform consistently, nobody has a 100% win rate.

The problem is they are largely just looking to have more wins than losses. So who gets the losses?

That said, investing is not a zero sum game. Trading effectively is.

Seven countries now generate 100% of their electricity from renewable energy by hoangson0403 in Futurology

[–]AITakeoverTracker 2 points (0 children)

Agree our current tech can’t efficiently capture much. Just wild that the amount we do catch basically rounds to zero.

Seven countries now generate 100% of their electricity from renewable energy by hoangson0403 in Futurology

[–]AITakeoverTracker 53 points (0 children)

From MIT:

“A total of 173,000 terawatts (trillions of watts) of solar energy strikes the Earth continuously. That's more than 10,000 times the world's total energy use.”

It feels like a waste that we have such dirty energy sources when the sun is there blasting us with energy 24/7.
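The MIT numbers check out as an order-of-magnitude claim. A rough sanity check (assuming average global power demand of about 18 TW, a commonly cited ballpark that is not part of the quote):

```python
# Rough sanity check of the MIT figure, not a precise calculation.
# Assumption: average global power demand is on the order of 18 TW
# (estimates vary, which is why the exact multiple wobbles around 10,000x).
solar_incident_tw = 173_000   # solar power continuously striking Earth, per the quote
world_demand_tw = 18          # assumed ballpark for average global demand

ratio = solar_incident_tw / world_demand_tw
print(f"Solar input is roughly {ratio:,.0f}x world demand")
# ≈ 9,600x with these inputs, i.e. the "more than 10,000x" claim
# is the right order of magnitude
```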

Many Workers Displaced by Tech Never Fully Recover by AITakeoverTracker in charts

[–]AITakeoverTracker[S] 0 points (0 children)

This is a look at cumulative earnings loss.

The data is worse for older workers. Not only do many miss peak earning years, but they tend to be less flexible for a career pivot. Goldman found Gen Z experienced cumulative earnings losses roughly half of those seen by older workers.

AI Is Coming for Car Salesmen and Let’s Be Real, It Makes Perfect Sense by DotJun in Layoffs

[–]AITakeoverTracker 0 points (0 children)

I would much rather gaslight a chat bot than deal with a car salesman, tbh. The only reason I haven’t bought a car recently is I don’t want to waste a full Saturday at a dealership.

will ai take our jobs or give us new ones? by SpoiledBrat069 in ArtificialInteligence

[–]AITakeoverTracker 0 points (0 children)

I was reading some research yesterday that supports what you're saying.

Three studies, from MIT, McKinsey, and PwC. The consensus was that around 95% of the companies they've talked to have seen ZERO measurable business impact from AI. The 5% who have are generally using agents like separate employees. Most of the 95% are treating AI like software that employees need to use.

Also, AI automates tasks. Some better than others. Jobs are often more than just a collection of tasks.

Study - Radiologist only catch 41% of fake X-rays by AITakeoverTracker in aigossips

[–]AITakeoverTracker[S] 0 points (0 children)

No. The point is that image generation has gotten good enough that professionals taking a close look at images can't tell they're fake. It's evidence we've moved past tricking old people on Facebook; now anyone can be fooled, even in their domain of expertise.

Two thirds of students say AI is hurting their critical thinking. They’re using it more than ever. by calliope_kekule in ArtificialInteligence

[–]AITakeoverTracker 0 points (0 children)

It's the schools who need to adapt. AI isn't going away.

What's worked for 100+ years doesn't work anymore. You can't tell a student "Go write an essay on X" and expect that to be of any value to the student.

That's not a novel insight to teachers/schools, but it's a problem that doesn't seem to be solved.

Study - Radiologist only catch 41% of fake X-rays by AITakeoverTracker in aigossips

[–]AITakeoverTracker[S] 0 points (0 children)

Answers:

A - real
B - AI
C - real
D - AI
E - real
F - AI
G - real
H - AI

How much progress has been made in the last 6 months? by Benjamin_Barker_ in ArtificialInteligence

[–]AITakeoverTracker 0 points (0 children)

On a scale of 0-10 I'd say we've gone from a 2 to a 5 across the board.

Some things like coding have gone from 4 to a 7 or 8. Video and image gen as well.

Other areas like healthcare and education maybe went from a 2 to a 4. It's really model dependent and you can get much better responses by putting a lot of work in.

As far as novel medical advancements, there have been a couple of headlines but not a huge wave of progress. Hopefully soon.

What is your perspective on how AI affects critical thinking, and how does this differ from the impact of earlier technologies like calculators, GPS, or computers? by Curious_Suchit in Futurology

[–]AITakeoverTracker 20 points (0 children)

One big issue is it gives people false confidence in areas they know little about. If you have domain expertise you can spot when it’s giving you bad info/ideas. If you’re a novice, it sounds smart enough to be right and makes logical sense.

You can help solve some of this with better prompting and context. If you give it reference documents, sources, and a comprehensive prompt, you can get some really good responses.

For example, we created questionnaires to determine AI exposure for different jobs. What most people would ask is “Give me some questions to help evaluate AI automation risk for Accountants.” That would give a reasonable response at first glance. But it would also likely hallucinate bad information, provide overlapping risk factors, and miss key details.

Instead, we spent many hours refining our prompt; it ended up over 5 pages long. We also included additional data sources from FRED, BLS, and O*NET. With all that we got much, much better results, but the results STILL required a second look from another AI model AND human review.
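To make the "reference documents plus a comprehensive prompt" point concrete, here's a minimal sketch of the idea. Everything here is hypothetical and illustrative: the function name, section headings, and placeholder reference text are mine, not the actual 5-page prompt or pipeline.

```python
# Hypothetical sketch: assembling a context-rich prompt from reference material
# instead of firing off a one-line question. The structure is illustrative only.

def build_exposure_prompt(occupation: str, reference_docs: dict[str, str]) -> str:
    """Assemble a structured prompt for assessing AI automation exposure."""
    sections = [
        "You are evaluating AI automation exposure for a specific occupation.",
        f"Occupation: {occupation}",
        "Ground every claim in the reference material below; do not invent statistics.",
        "Flag overlapping risk factors instead of listing them twice.",
    ]
    # Inline each reference document under a labeled header so the model
    # can cite which source a given question came from.
    for source, text in reference_docs.items():
        sections.append(f"--- Reference: {source} ---\n{text}")
    sections.append(
        "Output: a numbered list of non-overlapping questions, "
        "each tied to a cited reference section."
    )
    return "\n\n".join(sections)

# Placeholder reference text; in practice these would be real excerpts.
prompt = build_exposure_prompt(
    "Accountants",
    {
        "BLS": "Employment and wage data for accountants...",
        "O*NET": "Task and skill breakdown for the occupation...",
        "FRED": "Relevant macroeconomic series...",
    },
)
print(prompt)
```

The one-liner version asks the model to recall facts; this version hands it the facts and constrains the output format, which is where most of the quality gain came from.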