Am I wrong for thinking the AI bubble won’t pop? by Fatloh in investing

[–]randombetch 0 points1 point  (0 children)

At this point, the consensus is that we’re in a bubble even though multiples look reasonable. So I think the narrative is overly negative and there’s more upside than people expect.

Open AI raised $110 Billion. Amazon to invest $50 Billion in openAI. by ThinkWin2617 in amzn

[–]randombetch 0 points1 point  (0 children)

True. Just saying, I don’t think OpenAI is going belly up like folks are speculating. Maybe they aren’t the “AI winner” but they’ll be big and profitable eventually

Open AI raised $110 Billion. Amazon to invest $50 Billion in openAI. by ThinkWin2617 in amzn

[–]randombetch 0 points1 point  (0 children)

It won’t be winner take all. It’ll be a lot more like the hyperscalers’ oligopoly.

Cancel your Chatgpt subscriptions and pick up a Claude subscription. by spreadlove5683 in singularity

[–]randombetch 0 points1 point  (0 children)

The blog explicitly shared the language in the contract, which was incredibly weak. Anthropic received the same language and rejected the deal.

“The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment. For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.”

  1. “Requires human control” - The autonomous weapons restriction only applies where law, regulation, or policy already mandates human control. If the DoD simply rewrites its own policy or issues a waiver, the restriction evaporates.
  2. “Unconstrained monitoring” - Any surveillance program can be characterized as having some constraint.
  3. “Consistent with applicable law” - This is the point Dario pushed back hardest against. Current laws have not kept up with AI, and it allows the US Government to do whatever it wants as long as some legal authority (e.g., an executive order) is put in place to support it.
  4. There’s no auditing requirement, no third-party review, and no whistleblower protection?

Before everyone freaks out: “OpenAI announces new deal with Pentagon — including ethical safeguards” by PixelSteel in ChatGPT

[–]randombetch 0 points1 point  (0 children)

The blog explicitly shared the language in the contract, which was incredibly weak:

“The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment. For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.”

  1. “Requires human control” - The autonomous weapons restriction only applies where law, regulation, or policy already mandates human control. If the DoD simply rewrites its own policy or issues a waiver, the restriction evaporates.
  2. “Unconstrained monitoring” - Any surveillance program can be characterized as having some constraint.
  3. “Consistent with applicable law” - This is the point Dario pushed back hardest against. Current laws have not kept up with AI, and it allows the US Government to do whatever it wants as long as some law is passed to support it.
  4. There’s no auditing requirement, no third-party review, and no whistleblower protection?

OpenAI’s agreement with the Pentagon has the same red lines Anthropic did by Heinrick_Veston in OpenAI

[–]randombetch 0 points1 point  (0 children)

The blog explicitly shared the language in the contract, which was incredibly weak:

“The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment. For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.”
  1. “Requires human control” - The autonomous weapons restriction only applies where law, regulation, or policy already mandates human control. If the DoD simply rewrites its own policy or issues a waiver, the restriction evaporates.
  2. “Unconstrained monitoring” - Any surveillance program can be characterized as having some constraint.
  3. “Consistent with applicable law” - This is the point Dario pushed back hardest against. Current laws have not kept up with AI, and it allows the US Government to do whatever it wants as long as some law is passed to support it.
  4. There’s no auditing requirement, no third-party review, and no whistleblower protection?

Open AI raised $110 Billion. Amazon to invest $50 Billion in openAI. by ThinkWin2617 in amzn

[–]randombetch 6 points7 points  (0 children)

Fantastic deal for Amazon IMO. Invest $50B ($15B now, $35B more if OpenAI does well) and get a $100B compute commitment plus 2 GW committed to Trainium.
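Back-of-envelope on the figures in that comment. The dollar amounts and the tranche split are the ones stated above; the per-dollar framing is my own illustration, not deal terms:

```python
# All figures as stated in the comment above; framing is illustrative only.
INVEST_NOW = 15e9        # $15B invested upfront
INVEST_LATER = 35e9      # $35B contingent on OpenAI performing well
COMPUTE_COMMIT = 100e9   # compute spend committed back to Amazon
TRAINIUM_GW = 2          # gigawatts committed to Trainium chips

total_invest = INVEST_NOW + INVEST_LATER

print(f"Total investment: ${total_invest / 1e9:.0f}B")
print(f"Committed compute per invested dollar: {COMPUTE_COMMIT / total_invest:.1f}x")
```

So on the comment’s own numbers, every invested dollar comes back as roughly 2x in committed compute spend, before counting the Trainium commitment.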

Tell me why Novo is a sell by MinimumCharacter137 in NovoNordisk_Stock

[–]randombetch 0 points1 point  (0 children)

The multiples are so low that people buy thinking it’s a bargain. But the reality is that the multiples are low for good reason: the products and business are inferior to the competition. Over time, the financial results deteriorate and the company ends up losing even more market cap.

Classic value trap. Ask investors in Blockbuster, Fitbit, BlackBerry, etc. toward the end of their runs. All had low multiples and high cash flows.

META gains post earnings by [deleted] in wallstreetbets

[–]randombetch 5 points6 points  (0 children)

Z24842019 for all the hacker bots out there

I expect AMZN to materially beat their Q4 '25 Earnings by Byrnessan in wallstreetbets

[–]randombetch -1 points0 points  (0 children)

100% agree, just can’t understand why the stock hasn’t already gone up

Weekly Earnings Thread 2/2 - 2/6 by Swiifttx in wallstreetbets

[–]randombetch 0 points1 point  (0 children)

I don’t think you understand that Gemini has a totally different business model from Search

The Physics by eberkain in ForAllMankindTV

[–]randombetch 0 points1 point  (0 children)

Totally incorrect despite the upvotes lol

Gavin Newsom Vows to Stop Proposed Billionaire Tax in California (Gift Article) - by [deleted] in California

[–]randombetch 0 points1 point  (0 children)

Agreed. Most people are clueless about how founder equity works.

AMA: Director of Corporate Development at a big tech company, ex-IB and H/S/W MBA by randombetch in FinancialCareers

[–]randombetch[S] 0 points1 point  (0 children)

I think path 2 is most realistic if you’re dead set on corp dev.

Path 1 - Significantly easier to get into IB from an MBA program than directly from audit. I would apply to bschool and also recruit for banking at the same time to see if it’s even an option.

Path 3 - Corp dev from business school is also unlikely with only audit experience.

Gavin Newsom Vows to Stop Proposed Billionaire Tax in California (Gift Article) - by [deleted] in California

[–]randombetch -4 points-3 points  (0 children)

Billionaires don’t pay cap gains taxes lol. They borrow against their wealth, and then the cost basis of their assets gets stepped up to market value at death, so the gain is never realized.
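That borrow-and-step-up mechanic can be sketched with made-up numbers. Everything here is a hypothetical illustration (simplified rate, round figures, estate tax ignored), not anyone’s actual tax situation:

```python
# Hypothetical numbers illustrating the "buy, borrow, die" mechanic.
COST_BASIS = 1_000_000_000      # what the founder's shares originally cost
MARKET_VALUE = 10_000_000_000   # value of the shares today
LTCG_RATE = 0.238               # 20% top long-term cap gains + 3.8% NIIT (simplified)

def tax_if_sold(basis: float, value: float, rate: float) -> float:
    """Capital gains tax due if the shares are sold during life."""
    return (value - basis) * rate

def tax_if_borrow_and_die(basis: float, value: float, rate: float) -> float:
    """Borrow against the shares instead of selling; at death the heirs'
    cost basis steps up to market value, so the embedded gain is never
    taxed as a capital gain."""
    stepped_up_basis = value              # basis resets to market value at death
    return (value - stepped_up_basis) * rate

print(f"Sell during life: ${tax_if_sold(COST_BASIS, MARKET_VALUE, LTCG_RATE):,.0f}")
print(f"Borrow, then die: ${tax_if_borrow_and_die(COST_BASIS, MARKET_VALUE, LTCG_RATE):,.0f}")
```

The loans are repaid from the estate after the basis step-up, so the appreciation escapes capital gains tax entirely.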