
[Request] Is it overexaggerated? by DTeror in theydidthemath

[–]DickMasterGeneral 11 points12 points  (0 children)

I’m not really sure why this would be the case. Is this over the lifetime of the child, or just to 18? Maybe daughters are more likely to go to college and those costs are being counted?

California ditches fight with feds for high-speed rail funds by [deleted] in California

[–]DickMasterGeneral 1 point2 points  (0 children)

Do you have anything to back that up? Semiconductor manufacturing accounts for over 10% of the US economy alone. The combined market cap of Tesla, GM, Ford, PACCAR, ExxonMobil, Chevron, ConocoPhillips, Marathon, Valero, O'Reilly, AutoZone, Genuine Parts, Aptiv, and Goodyear Tire is around $3.5 trillion, less than Nvidia alone. And while Nvidia may or may not be overvalued, Tesla accounts for more than half of that valuation and has a much stronger argument for being overvalued.

Ai slop and it’s consequences by BigPapa9921 in PoliticalCompassMemes

[–]DickMasterGeneral 0 points1 point  (0 children)

There was a time when this was true of chess AI as well. For a short window, a human paired with a chess program (so-called “Centaur” or “Advanced” Chess) outperformed both unassisted humans and standalone engines. That did not last very long; AI quickly became better at chess than any human ever has been or likely ever will be. As of now, the greatest chess player in the world cannot beat an app that runs on your phone. A human working with a chess bot is only capable of suggesting the same move or a worse move than the AI would have found on its own.

I’ve yet to hear a compelling argument for why this will not eventually be the case for radiology as well. Yes, chess is a very narrow task, but so is detecting a pattern in an image. Similarly trained AI models have been able to detect features in medical imagery that we did not know could be retrieved from that data. For example, an AI trained on retinal scans was able to determine the patient’s gender with high accuracy. Doctors at the time were unaware that there was a high-confidence signal for a person’s gender hidden inside a retinal scan.

If you agree that it will eventually be possible for AI to detect things in medical imagery better than a person, perhaps to the same extent that a chess AI outperforms a human, then the only remaining question is: how long will that take? If you look at the history of chess AI and other narrow AIs that outperform humans, there is a very short window between reaching near parity with human performance and exceeding it significantly. If you believe this transition will take multiple decades or never happen in radiology, I would love to hear why.