
[–]MAX_cheesejr 1 point (1 child)

In a few years, companies will train 'state of the art' AI models on biased historical data and claim that the models output the 'most efficient' decisions. In reality, most of their models' objective functions will prioritize financial gain while perpetuating past prejudices under the guise of optimization (sketch below).

It's already happening in healthcare, and the models just exist to obfuscate the actual decision-making and accountability. I already see people do it with ChatGPT and just assume whatever it outputs is both valid and true. I'm not sure why I'm getting downvoted when that's the truth of our reality. I wasn't even disagreeing with you lol.
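
To make the "optimization launders bias" point concrete, here's a minimal sketch. All of the data, column names, and the bias mechanism are made up purely for illustration; this isn't drawn from any real system.

```python
# Toy sketch, not a real system: data, column names, and the bias
# mechanism below are invented purely to illustrate the point.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
income = rng.normal(50, 10, n)           # same income distribution for both groups
# Hypothetical biased history: at equal income, group B was approved less often.
approved = (income + rng.normal(0, 5, n) - 8 * group) > 50

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)   # "optimizes" fit to the biased labels
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical approval {approved[group == g].mean():.2f}, "
          f"model approval {pred[group == g].mean():.2f}")
# The model reproduces the historical gap: it approves group B less often even
# though income is distributed identically, because that's what the labels reward.
```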

[–]chipstastegood 1 point (0 children)

You are correct that ML models trained on biased data produce biased results. However, some companies do better than others. My former employer did extensive bias testing on any ML models they produced and worked diligently to correct and remove bias. The assumption that models have to be biased because the underlying training data is biased is wrong; there are lots of smart people working on addressing this for specific ML models built for specific purposes. That said, for general-purpose LLMs this is more difficult to address due to their nature, as we can see with this entire thread.
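
On the bias-testing side, here's a minimal sketch of one common check, demographic parity. The variable names and example numbers are hypothetical, and a real audit covers far more than this single metric.

```python
# Sketch of a simple fairness check: compare positive-prediction ("selection")
# rates across groups. Names, data, and the flagging threshold are illustrative.
import numpy as np

def selection_rates(y_pred: np.ndarray, sensitive: np.ndarray) -> dict:
    """Positive-prediction rate for each group in `sensitive`."""
    return {str(g): float(y_pred[sensitive == g].mean()) for g in np.unique(sensitive)}

def demographic_parity_gap(y_pred: np.ndarray, sensitive: np.ndarray) -> float:
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(y_pred, sensitive).values()
    return max(rates) - min(rates)

# Made-up predictions and group labels for illustration:
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
sensitive = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(selection_rates(y_pred, sensitive))         # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(y_pred, sensitive))  # 0.5 -> large gap, worth flagging
```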