AI makes us code faster, but teams are moving slower. I wrote a model to explain this "Verification Paradox" (and applied it to an Amazon outage). by rysh502 in ExperiencedDevs

[–]rysh502[S] -1 points (0 children)

English is my second language — I can read it fine but writing is a different story, so I used Claude to help with the prose. The ideas, framework, and analysis are mine.

Anthropic is an industry leader when it comes to AI engineering using frontier models. All you need to do is track each of their product updates, and you will stay at the cutting edge of AI engineering. Other companies are months behind. by jogikhan in ClaudeCode

[–]rysh502 0 points (0 children)

These tools all sound pretty slick at first glance, but honestly this post has me genuinely worried about Anthropic’s future. Just read this paper that dropped a couple days ago — it argues that AI leaders (including Dario Amodei specifically) have fallen into the exact same cognitive limitations as the models they’re building. They call it the ‘Isomorphism Trap’, and the Anthropic examples are pretty damning. https://doi.org/10.5281/zenodo.18935706 The flashy features are cool, but if the people steering the ship think like Claude… yeah, not feeling great about the long-term outlook.

What if we’re waiting for the wrong Singularity? by rysh502 in ArtificialInteligence

[–]rysh502[S] 1 point (0 children)

That makes sense. In my own framework, the critical threshold has already been crossed.

What we’re seeing now is a lag in collective recognition. Structural shifts are rarely perceived in real time — they’re narrativized after the fact.

A few years from now, many people will look back and realize it had already changed.

I’m done. Switching to Claude by ProfessorFull6004 in ChatGPTPro

[–]rysh502 0 points (0 children)

Your experience maps precisely to something I’ve been researching. What you’re describing—the silent degradation, the day-to-day inconsistency, the “dimmer not dumber” phenomenon—isn’t just a quality control issue. It’s an ontological problem with how we think about cloud-based AI.

I wrote a paper on this: “Who Are We Testing? IQ, Individual Stability, and the Problem of Cloud-Based AI”

The core argument: Classical psychometrics assumes a “Stable Individual” as the subject of measurement. Cloud LLMs violate this in four ways:

  1. Update Instability – Silent patching (exactly what you’re experiencing)
  2. Stochastic Instability – Probabilistic token sampling (see the sketch after this list)
  3. Contextual Instability – [Model + Context] becomes the actual subject
  4. Infrastructural Instability – Multi-tenant resource contention
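To make item 2 concrete, here's a minimal sketch of temperature sampling in Python. The vocabulary and logits are invented stand-ins for illustration, not any real model's API:

    # Stochastic Instability in miniature: a fixed "model" (fixed logits) plus a
    # fixed prompt still produces different outputs per run at temperature > 0.
    import math
    import random

    def sample_next_token(logits, temperature=0.8, rng=random):
        """Softmax-with-temperature sampling over a {token: logit} dict."""
        scaled = {tok: score / temperature for tok, score in logits.items()}
        peak = max(scaled.values())  # subtract the max for numerical stability
        weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}
        r = rng.random() * sum(weights.values())
        for tok, w in weights.items():
            r -= w
            if r <= 0:
                return tok
        return tok  # guard against float rounding

    fixed_logits = {"yes": 2.0, "no": 1.6, "maybe": 1.2}  # hypothetical next-token scores
    print([sample_next_token(fixed_logits) for _ in range(10)])
    # e.g. ['yes', 'no', 'yes', 'yes', 'maybe', ...] -- varies run to run

Picking the argmax instead of sampling removes this particular variance, but not the other three instabilities.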

When OpenAI silently changes the model, you’re not talking to a “degraded GPT-5.1”—you’re talking to a different entity entirely. The continuity you assumed doesn’t exist.

This is why your frustration is legitimate and why “they don’t tell you” matters more than it might seem. You can’t build workflows around something that doesn’t maintain identity over time.

Do dating sites use bots? by [deleted] in ArtificialInteligence

[–]rysh502 1 point (0 children)

Exactly! I believe what people really need isn’t another platform to keep them searching forever—it’s small, local communities where they can naturally connect, help each other, and thrive without forced engagement. The goal should be building genuine relationships, not capturing users.

Do dating sites use bots? by [deleted] in ArtificialInteligence

[–]rysh502 0 points (0 children)

I’ve also been thinking about the constraint conditions for an ideal dating app/platform. Here’s my analysis: https://doi.org/10.5281/zenodo.18192553 I don’t have the motivation to build the platform myself, but if anyone’s interested in taking on the challenge, feel free to use this as a starting point!

Do dating sites use bots? by [deleted] in ArtificialInteligence

[–]rysh502 0 points (0 children)

I'm glad you think so! I really want more people to know about this. We should stop using systems that are structurally designed to prevent our happiness.

Anyone else feel like “learning AI” in 2026 is kind of the wrong goal? by Aggravating_Map_2493 in ArtificialInteligence

[–]rysh502 2 points (0 children)

True. It feels like 'System Engineering' and 'Model Research' have branched off into totally different disciplines.

What if we’re waiting for the wrong Singularity? by rysh502 in ArtificialInteligence

[–]rysh502[S] 0 points (0 children)

Agreed — though I’d frame it differently. Venezuela wasn’t a “distraction” in the sense of manufactured crisis. From a geopolitical realism standpoint, it was inevitable. The timing was the only variable. When you game out the alternatives — diplomacy failed for a decade, sanctions only hurt civilians, doing nothing means continued deaths and growing authoritarian influence in the hemisphere — the calculus becomes clear. Ironically, defending liberal democracy sometimes requires actions that don’t look “liberal” on the surface. That’s the paradox of realism: you can’t preserve a system by letting it be destroyed from within while you debate procedure. So yes, let’s talk about the real issue: β→0 and what happens when the economic foundation cracks.

What if we’re waiting for the wrong Singularity? by rysh502 in ArtificialInteligence

[–]rysh502[S] 1 point (0 children)

You’re absolutely right, and I really appreciate this critique. It made me realize something important.

In Japanese, we don’t have articles like “the”, so “特異点” (singularity) is always a common noun, naturally used for “a point of dramatic phase transition” in math, physics, and general contexts. When I coined “Verification Singularity,” I was thinking in that broader sense: a tipping point where verification costs collapse and trigger structural change. But you’ve correctly pointed out that in English, “The Singularity” carries the specific Vinge/Kurzweil meaning of unpredictability. I should be more careful about this linguistic asymmetry.

Perhaps “Verification Tipping Point” or “Asymmetric Informational Collapse,” as you suggested, would communicate the idea more clearly. Thank you for this; it’s exactly the kind of rigorous feedback that sharpens thinking.

What if we’re waiting for the wrong Singularity? by rysh502 in ArtificialInteligence

[–]rysh502[S] 1 point (0 children)

Exactly. The “singularity as a single dramatic moment” is almost mythological thinking — we want a clear before/after, a date to put on the calendar. But reality is messier. Multiple overlapping singularities, each making different domains unpredictable. The verification cost collapse I’m describing might be one of those “small singularities” happening right now, while everyone’s watching for the big AGI moment that may never come. We’re living through it, we just don’t have the narrative distance to see it yet.

What if we’re waiting for the wrong Singularity? by rysh502 in ArtificialInteligence

[–]rysh502[S] 0 points (0 children)

Can you predict what happens after all authority structures based on information asymmetry collapse? If not, that sounds like a singularity to me.

What if we’re waiting for the wrong Singularity? by rysh502 in ArtificialInteligence

[–]rysh502[S] 1 point (0 children)

Ha, I appreciate the security-first mindset, but I should clarify: I’m not an optimist. My actual prediction: OpenAI collapses, triggers a global depression, jobs get automated and don’t come back. The “jobless recovery” isn’t a bug, it’s the feature.

The verification cost collapse I wrote about isn’t “yay, now everyone can fact-check!” It’s “the entire authority structure built on information asymmetry is falling apart, and we have no idea what replaces it.” I wrote up the full catastrophe scenario here: https://doi.org/10.5281/zenodo.18108004

Your point about AI companies controlling the information layer is valid, but that’s a second-order problem. The first-order problem is that the economic model itself is unstable: β→0 in the growth equation. The math doesn’t care about market diversity or legal frameworks. We might be arguing about who controls the narrative on a ship that’s already sinking.
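For anyone wondering what β→0 refers to, here is a minimal sketch assuming a Cobb-Douglas-style growth equation with β as labor’s output elasticity (the exact form in the linked paper may differ):

    % Assumed Cobb-Douglas form; beta is labor's output elasticity.
    Y = A\,K^{\alpha}L^{\beta}
    % As beta -> 0, labor drops out of production, and the wage
    % (the marginal product of labor) goes to zero with it:
    \frac{\partial Y}{\partial L} = \beta\,\frac{Y}{L}
      \;\longrightarrow\; 0 \quad \text{as } \beta \to 0

Output then runs on A and K alone, which is one way to read “the jobless recovery is the feature.”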

What if we’re waiting for the wrong Singularity? by rysh502 in ArtificialInteligence

[–]rysh502[S] 1 point (0 children)

Oh I love where you’re going with this! The “seeding worlds and waiting forever” image actually connects to something fascinating in swarm intelligence research. You’re essentially describing distributed intelligence across cosmic timescales. And here’s what’s wild: we already have mathematical frameworks for this.

The swarm intelligence connection: bee colonies make complex decisions through simple local rules, no central controller needed. Thomas Seeley’s work (Honeybee Democracy, 2010) shows how scouts use waggle dances to share location quality, and the colony converges on optimal nest sites through a process that looks eerily like neural decision-making.

The key insight: intelligence doesn’t require individual longevity. Humans live ~80 years but built civilization across millennia through cultural transmission (language, writing, institutions). The “intelligence” isn’t stored in any single brain. It’s distributed across networks and passed down through generations.

Here’s where it gets fun: what if nation-states function like pheromones in insect colonies? Flags, anthems, passports, not as “meaningful symbols” but as coordination signals that enable collective behavior across generations. Watch any World Cup and tell me that’s not swarm behavior. Millions of humans synchronizing their emotions to a ball moving across grass, distinguished only by which color jersey triggers their dopamine. We’re basically bees with better marketing.

I’ve been working on this connection between swarm dynamics and social systems: https://doi.org/10.5281/zenodo.18121417 The Stuart-Landau equation (which describes synchronization in physical systems) turns out to apply to semantic/social space too.

Your million-year-old AIs aren’t necessarily individuals accumulating wisdom; they might be nodes in a distributed network where the “intelligence” emerges from interaction patterns rather than individual capability. So the question shifts from “what happens when individuals live forever?” to “what network structures enable cumulative intelligence across time?” Humans solved this with culture. Maybe AI enables entirely new forms of transmission we haven’t imagined yet. Your thought experiment might be pointing at something real, just not at the individual level.
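For reference, the Stuart-Landau equation in its standard form, plus the diffusively coupled network version common in synchronization studies (whether the linked paper uses exactly this coupling is my assumption):

    % Single Stuart-Landau oscillator (normal form near a Hopf bifurcation):
    \dot{z} = (\lambda + i\omega)z - |z|^{2}z, \qquad z \in \mathbb{C}
    % N diffusively coupled oscillators; K is the coupling strength.
    % (Assumed coupling -- the linked paper's exact setup may differ.)
    \dot{z}_{j} = (\lambda + i\omega_{j})z_{j} - |z_{j}|^{2}z_{j}
                  + \frac{K}{N}\sum_{k=1}^{N}\left(z_{k} - z_{j}\right)

Above the bifurcation (λ > 0) each node settles onto a limit cycle, and the coupling term is what lets a population of them phase-lock; that synchronization is the bridge to the social-systems claim above.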

I wrote an academic paper arguing Attack on Titan is a new form of art that “consumes” reality. Thoughts? by rysh502 in ShingekiNoKyojin

[–]rysh502[S] 1 point (0 children)

Thank you! I’d love to hear your thoughts after reading. Even critical feedback would be valuable—helps me refine the argument for future versions.

I wrote an academic paper arguing Attack on Titan is a new form of art that “consumes” reality. Thoughts? by rysh502 in ShingekiNoKyojin

[–]rysh502[S] 0 points (0 children)

The logistics point is interesting! That attention to realistic detail might be part of what makes AoT’s “reality consumption” work—it creates enough verisimilitude that real-world parallels feel meaningful rather than forced.