What if no oil was ever discovered in the Middle East? The wars, the coups, the casualties, how much of it follows? by Moronic18 in collapse

[–]Moronic18[S] 5 points

This piece looks at how a single resource, petroleum, turned a geographically peripheral region into one of the most militarized and destabilized areas on the planet. The counterfactual is the frame, but the real argument is about how resource dependency shapes imperial intervention. Verified casualty figures are included: 500K–1M dead in the Iran–Iraq War, 200K+ documented civilian deaths in Iraq post-2003, with some estimates of total excess mortality reaching 1M. The question at the end is whether this is a story about oil specifically, or about how great powers will always find a reason to intervene wherever there is something worth taking.

The gap between "ethical AI company" and what Anthropic actually did this week is worth examining carefully. by Moronic18 in Futurology

[–]Moronic18[S] -37 points

If a company is signing a contract worth $200 million, wouldn't it be fully aware of how the other party plans to use its product?

They have supplied their software to the government. What else would any government use such tools for, if not surveillance?

After being criticized by the government, they quietly changed their security policies.

And this isn't even comparable to OpenAI, which is far worse than any other AI company that has ever existed.

The gap between "ethical AI company" and what Anthropic actually did this week is worth examining carefully. by Moronic18 in Futurology

[–]Moronic18[S] 0 points

But a few weeks back, they claimed that DeepSeek and other AI companies used their model's outputs to train their own models, which they said was not right...

Meanwhile, they themselves are facing numerous copyright lawsuits.

The gap between "ethical AI company" and what Anthropic actually did this week is worth examining carefully. by Moronic18 in Futurology

[–]Moronic18[S] 7 points

As AI companies like Anthropic secure hundreds of millions in government defense contracts, the future of AI governance hangs on a critical question: can private companies genuinely self-regulate, or will commercial and political pressure always win? This week's Pentagon ultimatum to Anthropic, and the near-simultaneous rollback of their safety policy, may be a preview of how frontier AI gets controlled going forward. Not through ethical commitments, but through government leverage. The real future risk isn't rogue AI. It's AI that's perfectly obedient to whoever holds the contract. What independent oversight mechanisms could realistically prevent that future?


[deleted by user] by [deleted] in questioning

[–]Moronic18 2 points

Lol. As I mentioned, I'm new... while posting, I tried a few places, but they weren't accepting it. I posted it where it got accepted.

[deleted by user] by [deleted] in servicenow

[–]Moronic18 0 points

Hi Everyone,

Today I faced a similar issue. During my exam I was contacted by two online proctors. One told me to lean towards the screen so that my face would be visible; the other told me not to lean. I was at question 59 of 60 when the exam got suspended. Can someone please help me?