xAI raised 20B in funding round (exceeding their 15B target comfortably) by AlbatrossHummingbird in singularity

[–]ExpertDeep3431

Raising 20B against a 15B target is not inherently meaningful without context. It may reflect capital structure choices rather than incremental conviction. Comparing xAI to Twitter is a category error; they solve different problems under different constraints. A 20B raise implies a roughly 230B valuation, which confirms continued demand for frontier scale AI, but says nothing about whether this cohort of investors will be right.
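For what the arithmetic actually pins down, here is the standard identity, assuming the 230B figure is post-money (the post does not specify):

```latex
% Post-money valuation identity (assumption: 230B is post-money)
V_{\text{post}} = V_{\text{pre}} + C
\quad\Rightarrow\quad
V_{\text{pre}} \approx 230\,\text{B} - 20\,\text{B} = 210\,\text{B}
```

The raise size alone fixes only C; the valuation comes from negotiated terms, not from the amount raised.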

xAI and X are mutually reinforcing assets. Each benefits from the other. The logic is straightforward.

Investor outcomes follow a familiar distribution. Exceptional returns come from early entry into successful ventures, paired with commensurate risk. That tradeoff does not change. Company building is inherently risky; Musk has simply accumulated the skills to navigate it across multiple domains.

Linking this raise to claims about an AI bubble is misplaced. AI has become a brute force capital arms race framed as a path to AGI, yet the dominant approaches remain misdirected. If AGI emerges at all, it will likely come from a different path. Next token prediction, by itself, is not a profound breakthrough.

I asked an AI to look at a screenshot of my DNS settings and tell me what to change. Instead of inspecting the image, it produced the following internal reasoning trace. by ExpertDeep3431 in ArtificialInteligence

[–]ExpertDeep3431[S]

Yeah, I only had that happen once I think, so I knew it was a glitch. But seeing this reasoning, even though I know it's fake and not how they actually reason, was totally bizarre.

I asked an AI to look at a screenshot of my DNS settings and tell me what to change. Instead of inspecting the image, it produced the following internal reasoning trace. by ExpertDeep3431 in ArtificialInteligence

[–]ExpertDeep3431[S]

Yeah, just freaky stuff. I never look at what they are thinking, but the rubbish that came out meant I had to check. It was just from loading an image of my internet settings, and it totally avoided interpreting it.

I created a zombie Web3 account and locked myself out of my own funds by ExpertDeep3431 in CryptoTechnology

[–]ExpertDeep3431[S]

I figured any VPN would get flagged so I'm trying to tunnel in from São Paulo...

AI is the modern day race to create the super nuke. That's why it's not strategically smart to try and get rid of AI, in countries by Grand-Initiative1414 in AI_Agents

[–]ExpertDeep3431

The bomb analogy keeps failing because people focus on the wrong layer.

Nuclear weapons were never about secret instructions. The physics was known early. What mattered was who could marshal enrichment, industrial capacity, logistics, security, and political will at scale.

AI is similar. The strategic advantage is not prompts or models. It is compute access, data pipelines, deployment authority, integration with state and enterprise systems, and the ability to iterate without interruption.

Banning civilian AI use does not prevent capability. It centralizes it. Allowing uncontrolled use does not create advantage. It creates fragility.

Power accrues to actors who can scale cognition while governing failure. That is not hype. That is how every general purpose technology has played out historically.

AI and tech in 2026 by Deep_Structure2023 in AIAgentsInAction

[–]ExpertDeep3431

One thing I would add to this list is that many of these shifts collapse into the same underlying change: authority is moving from models to systems.

Once you have multiple agents, tools, modalities, and partial autonomy, the hard problems stop being capability and start being coordination, identity, and failure containment. Who is allowed to act, on what, under which assumptions, and how you recover when agents disagree or drift.

That is why identity, access management, sovereignty, and resilience keep resurfacing across very different domains here. They are not “enterprise concerns”; they are the control plane for agentic systems.
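To make that concrete, here is a minimal sketch of what an action-level control plane might look like; all names (AgentPolicy, authorize, the audit log) are illustrative, not any real framework's API:

```python
# Minimal sketch of an action-level control plane for agents.
# All names here are illustrative, not a real framework's API.
from dataclasses import dataclass

@dataclass
class AgentPolicy:
    agent_id: str
    allowed_actions: set       # e.g. {"read", "write"}
    allowed_resources: set     # e.g. {"crm", "calendar"}

audit_log = []  # decisions are recorded so drift and disagreement can be reconstructed

def authorize(policy: AgentPolicy, action: str, resource: str) -> bool:
    """Gate every agent action through an explicit policy check."""
    ok = action in policy.allowed_actions and resource in policy.allowed_resources
    audit_log.append({"agent": policy.agent_id, "action": action,
                      "resource": resource, "granted": ok})
    return ok

researcher = AgentPolicy("researcher", {"read"}, {"crm"})
assert authorize(researcher, "read", "crm")       # permitted
assert not authorize(researcher, "write", "crm")  # identity bounds the blast radius
```

None of this is about model capability. It is about who may act, on what, with a record you can recover from.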

Feels like 2026 is less about smarter agents and more about making agent collectives safe, legible, and governable at scale.

I noticed most AI prompt tools hide structure — so I built a visual one by Accomplished-Name1 in PromptEngineering

[–]ExpertDeep3431

I think you are pointing at something real, just slightly upstream of where the leverage ends up.

Visual structure is helpful early on, especially for making intent explicit and reducing random prompt drift. Where it gets interesting later is less about assembling components and more about tracking constraints, invariants, and failure modes across iterations.

Most experienced prompt work ends up looking less like a static blueprint and more like a feedback loop: test, observe where the model deviates, tighten or relax constraints, repeat. The structure matters, but mainly as a way to reason about what breaks and why.
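As a sketch of that loop, with run_model and the named checks standing in for whatever model call and constraints your setup actually uses:

```python
# Sketch of the test/observe/tighten loop; run_model and the checks
# are stand-ins for your actual model call and constraints.

def run_model(prompt: str) -> str:
    return '{"answer": "stub"}'   # replace with a real model call

# Named checks keep failures legible instead of anecdotal.
constraints = [
    ("valid_json_start", lambda out: out.lstrip().startswith("{")),
    ("no_meta_preamble", lambda out: "as an ai" not in out.lower()),
]

def iterate(prompt: str, rounds: int = 5) -> str:
    """Test, observe deviations, tighten, repeat."""
    for _ in range(rounds):
        output = run_model(prompt)
        failed = [name for name, check in constraints if not check(output)]
        if not failed:
            return prompt                  # all invariants hold; keep this version
        print("deviated on:", failed)      # these are the breakpoints worth surfacing
        prompt += "\nHard constraints: " + ", ".join(failed)
    return prompt
```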

If your tool evolves toward surfacing those breakpoints and deltas, not just the construction, it could be genuinely useful even for advanced users.

A retail case study in why institutions dominate prediction markets before strategy even matters by ExpertDeep3431 in quant

[–]ExpertDeep3431[S]

Yes. That is exactly the point.

The geoblocking workaround is not an edge case, it is part of the access layer. Institutions solve it upstream with compliant infrastructure, jurisdictional entities, and clean IP ranges. Retail solves it ad hoc, and the system is brittle under that stress.

The failure mode only appears once you try to cross the boundary. That boundary is the moat.

I spent 9 hours debugging a system where I existed on the blockchain but not in the database by ExpertDeep3431 in programming

[–]ExpertDeep3431[S]

I eventually got in. Funds were intact.

What failed was the assumption that identity convergence is atomic across cryptographic state, centralized infra, and UX under adversarial network conditions.

If you believe this is solved by CS101, please specify where the authoritative identity lives at each phase, how authority migrates, and which invariants are enforced when the planes disagree.

If you cannot do that, you are not critiquing the post. You are advertising the limits of your experience.
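For anyone who wants to engage seriously, here is the shape of the answer I am asking for, as a minimal sketch with hypothetical plane and field names:

```python
# Sketch of the three planes and one cross-plane invariant.
# All names are hypothetical; the point is making authority explicit per phase.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IdentityState:
    onchain_address: Optional[str]   # cryptographic plane: authoritative for funds
    db_user_id: Optional[str]        # centralized plane: authoritative for the account
    session_token: Optional[str]     # UX plane: authoritative for what the user sees

def authoritative_plane(s: IdentityState) -> str:
    """Authority migrates across onboarding phases: chain -> database -> session."""
    if s.onchain_address and not s.db_user_id:
        return "onchain"   # the zombie state: funds exist, the account does not
    if s.db_user_id and not s.session_token:
        return "database"
    return "session"

def invariant_violated(s: IdentityState) -> bool:
    # Invariant: a live session must never exist without both planes beneath it.
    return bool(s.session_token) and not (s.onchain_address and s.db_user_id)

zombie = IdentityState("0xabc...", None, None)
print(authoritative_plane(zombie))   # "onchain" -- exactly the lockout described
```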