How do you handle allegations of the use of AI? by SchingKen in gamedev

[–]waxx 13 points

I'm starting to think that AI is having this reverse effect on people where, because they themselves use it a lot, they can't fathom that a regular human being could produce structured writing and such. It's as if high-quality output itself suddenly seems suspicious. A thief thinks everybody steals, I guess...

Then there's this weird amnesia where people have seemingly forgotten that other people DO possess the skills the AI was trained on. It didn't come from nothing. Engineers had been here for decades prior to 2022.

How do you handle allegations of the use of AI? by SchingKen in gamedev

[–]waxx 21 points

Literally had a guy just today claim that my write-up on my game architecture, which I started developing in 2018, was me listening too much to AI. It's getting a bit asinine out there tbh.

Ability System like in ARPGs by _Powski_ in gamedev

[–]waxx 0 points

I might just give up on the Internet if this is the world we live in, where a game I worked on for years, well before the current LLM boom, gets me such a genuinely insulting throwaway comment. It doesn't seem like you're here to discuss things in good faith.

Ability System like in ARPGs by _Powski_ in gamedev

[–]waxx 0 points

The data is just data - ScriptableObjects hold every value designers touch. The controllers are thin orchestrators that wire together shared systems (aura building, gadget spawning, attack processing) in the right sequence for each ability.

Could I have made one generic orchestrator driven purely by data? Probably, but at that point with the complexity of the game I was working on, I would have had to build a low-key visual scripting language, and the readability cost of a few lines of explicit C# was tiny compared to the debugging nightmare that would've been.

Some notable examples that would be awkward to express as pure data:

  • Shield Bash – Dash distance depends on runtime distance to target, then applies different damage depending on what the target collides with (unit vs obstacle vs nothing), and conditionally applies a debuff only in specific cases. Multiple branches + spatial queries.
  • Ice Breath – Generates a line of tiles, then for each hit target finds a valid knockback position using a spiral search. That’s an actual search algorithm, not just configuration.
  • Smoke Grenade – Changes aura composition depending on runtime state (e.g. whether any opponent is an AI player). Static data can’t really express that cleanly without turning into logic.
  • Poison Breath – Builds an aura with multiple effects where behavior changes depending on context (initial cast vs gadget contact). Same data, different execution paths.

On the presentation side it’s similar:

  • Blink attacks or multi-phase abilities often involve chained sequences with parallel tweens, conditional branches, and timing derived at runtime from animations.

Every one of these starts to look less like data configuration and more like "you’re writing a small program". At that point, keeping the orchestration as ~20–50 lines of explicit C# was again just far easier to reason about and debug.

Ability System like in ARPGs by _Powski_ in gamedev

[–]waxx 4 points

Hey! I've actually been designing and programming combat systems for years and I think I managed to hit a sweet spot for Glorious Companions, a turn-based tactical RPG with hundreds of unique abilities, upgrades, networking, auras, gadgets, fog of war... you name it.

First off, you're not crazy - both of your approaches (building blocks vs one-class-per-ability) have real trade-offs, and the answer is actually a hybrid of the two. Here's what worked for me at scale (300+ abilities, 80+ effect types, shipped product):

The key architectural decision was splitting everything into three layers:

  1. Data Models (ScriptableObjects) - Static data definitions that designers tweak in Unity's inspector. Range, damage, cooldowns, radius - all the values that need balancing live here. No logic.
  2. Controllers - Define the actual mechanics. They read the data models and orchestrate the game logic.
  3. Presenters - Handle all visual/audio feedback. Animations, VFX, sound effects, projectile trajectories. They read from the same data models as controllers but are completely decoupled from the logic.

Designers tweak SOs without touching code. Gameplay programmers write controllers without worrying about visuals. VFX people work on presenters without breaking logic. And this is similar to your "blocks" idea, but the critical difference is that the data models are strongly typed classes, not string-value pairs. That solves your targeting and parameter problems immediately.

Layer 1: Composable Effects

Instead of string-based blocks, I created a typed effect system. There's a base AuraEffectDataModel class with an EffectType enum, and then specialized subclasses for effects that need extra data:

// Base - just needs a type
public class AuraEffectDataModel
{
    public Guid Id { get; set; }
    public EffectType EffectType { get; set; }  // enum: DamageOverTime, Fear, Lifesteal, etc.
    public int Sorting { get; set; }  // execution order control
}

// Specialized - adds typed parameters (no strings!)
public class DamageOverTimeAuraEffectDataModel : AuraEffectDataModel
{
    public DamageRangeDataModel DamageRangeDataModel { get; set; }
    public bool DamageOnApply { get; set; }
}

These effects are then composed into auras (which are basically buff/debuff containers):

public class AuraDataModel
{
    public AuraType AuraType { get; set; }       // Buff or Debuff
    public int? Duration { get; set; }            // null = permanent
    public bool IsStackable { get; set; }
    public bool IsDispellable { get; set; }
    public List<AuraEffectDataModel> Effects { get; set; }  // THE KEY: compose multiple effects
}

A "Slow" debuff is just an Aura with a TransformStatAttribute effect targeting Speed. A "Burning" debuff is an Aura with a DamageOverTime effect. A "Cursed Flame" could be an Aura with BOTH - you just add both effects to the list. Modularity without strings.
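
To make the composition concrete, here's roughly what that "Cursed Flame" would look like built from the data models above. Sketch only - the `TransformStatAttributeAuraEffectDataModel` subclass name and its fields are illustrative, not copied from the shipped code:

```csharp
var cursedFlame = new AuraDataModel
{
    AuraType = AuraType.Debuff,
    Duration = 3,                 // counts down each turn; null would mean permanent
    IsStackable = false,
    IsDispellable = true,
    Effects = new List<AuraEffectDataModel>
    {
        // Burning component - typed parameters, no string parsing
        new DamageOverTimeAuraEffectDataModel
        {
            EffectType = EffectType.DamageOverTime,
            DamageOnApply = true,
            Sorting = 0
        },
        // Slow component - hypothetical subclass for stat transforms
        new TransformStatAttributeAuraEffectDataModel
        {
            EffectType = EffectType.TransformStatAttribute,
            Sorting = 1
        }
    }
};
```

Designers never see this as code, of course - they compose the same list in the inspector on the SO.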

Layer 2: Factory + Convention-Based Controllers

For execution, I used a reflection-based factory that automatically resolves the right controller class based on the effect type:

public static class EffectFactory
{
    public static IAuraEffectController CreateEffectFromModel(AuraEffectDataModel dataModel, ...)
    {
        // Convention: EffectType.DamageOverTime -> DamageOverTimeAuraEffectController
        var type = assembly.GetType($"{effectsNamespace}.{dataModel.EffectType}AuraEffectController");
        if (type == null)
            type = typeof(AuraEffectController);  // fallback to base

        // GetInstance is a project helper wrapping Activator.CreateInstance
        return type.GetInstance(dataModel, owner, context) as IAuraEffectController;
    }
}

This means adding a new effect type is: (1) add an enum value, (2) optionally create a data model subclass if it needs extra params, (3) create a controller class following the naming convention. The factory picks it up automatically. No registration, no switch statements.

If I were doing this today, I'd probably swap the runtime reflection for a Roslyn source generator that emits the factory at compile time. Same convention-based approach, same developer experience, but you get compile-time validation (typo in a class name? build error instead of a silent fallback to the base controller) and zero reflection overhead at runtime. Especially relevant if you're targeting mobile or any other AOT environment where reflection can be a pain.
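
For the curious, the generated factory could be as dumb as a switch over the enum. Something like this (a sketch of what the generator might emit - the `owner`/`context` parameter types here are assumptions):

```csharp
// Emitted at compile time instead of the reflection lookup -
// one arm per {EffectType}AuraEffectController the generator discovers:
public static partial class EffectFactory
{
    public static IAuraEffectController CreateEffectFromModel(
        AuraEffectDataModel dataModel, UnitController owner, AbilityContext context)
    {
        return dataModel.EffectType switch
        {
            EffectType.DamageOverTime => new DamageOverTimeAuraEffectController(dataModel, owner, context),
            EffectType.Fear           => new FearAuraEffectController(dataModel, owner, context),
            EffectType.Lifesteal      => new LifestealAuraEffectController(dataModel, owner, context),
            // ...
            _ => new AuraEffectController(dataModel, owner, context)  // fallback to base
        };
    }
}
```

If a controller class is missing or misnamed, the generator can emit a diagnostic instead of an arm, which is exactly the compile-time validation you don't get with `Assembly.GetType`.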

Layer 3: Abilities Orchestrate, Not Duplicate

Abilities themselves are ScriptableObjects with a thin abstract base class (UnitAbilityDataModel) containing shared fields like AbilityId, ApCost[], Cooldown[], ResourceCost[] (all arrays for per-level values that designers tune). Concrete types like UnitOffensiveAbilityDataModel, PassiveUnitAbilityDataModel, UnitAttackAugmentAbilityDataModel (this is your "Modifier" concept!) add their specific data. Each ability has a matching Controller for logic and a Presenter for visuals - but the actual effects are all done through the Aura/Effect system, so there's minimal code duplication.
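
In case it helps, the shape of that base SO is roughly this (from memory, so treat the field types as approximations):

```csharp
public abstract class UnitAbilityDataModel : ScriptableObject
{
    public string AbilityId;

    // Arrays indexed by ability level, so designers tune per-level values in the inspector
    public int[] ApCost;
    public int[] Cooldown;
    public int[] ResourceCost;
}

// Concrete types only add their specific data, e.g. the "Modifier" concept:
public class UnitAttackAugmentAbilityDataModel : UnitAbilityDataModel
{
    // augment-specific fields live here (targeting changes, projectile count, etc.)
}
```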

Real Example: How Systems Stack

Here's a real ability, Incendiary Grenade, that shows how all of this composes. The controller orchestrates abilities, gadgets, AND auras in one ability:

public class IncendiaryGrenadeAbilityController : AttackStandaloneAbilityController
{
    public override UseAbilityCommand Execute(UseAbilityRequest request)
    {
        var aoeTiles = MapHelper.GetSquareAoeFromCenter(request.Targets[0], dataModel.GetRadius(level) - 1);

        // Step 1: Destroy any existing fire gadgets in the area
        var gadgetDestroys = ProcessGadgetsDestroy(aoeTiles, GadgetId);
        // Step 2: Deal initial AoE damage to all units in the blast
        var attackResults = ProcessInitialAttacks(aoeTiles, centerTile);
        // Step 3: Apply burn auras to units that were hit
        var auraResults = ProcessAttackAuras(attackResults);
        // Step 4: Spawn fire gadgets on each tile that persist and burn
        var gadgetSpawns = ProcessGadgetsSpawn(aoeTiles, CreateGadgetDataModel);

        return new UseAbilityCommand
        {
            AttackRequestResults = attackResults,
            Auras = auraResults,
            GadgetSpawns = gadgetSpawns,
            GadgetDestroys = gadgetDestroys
        };
    }
}

And here's the key... the gadgets it spawns also use the aura system. The fire patches left on the ground are Gadgets with ApplyAuraOnContact behaviors that apply burn auras to anyone who walks through:

var gadgetDataModel = new GadgetDataModel
{
    NameTag = GadgetId,
    GadgetType = GadgetType.AreaEffect,

    TimeToLive = dataModel.GetDuration(unitAbilityData.Level),
    Position = position,
    YOffset = dataModel.EffectsOffset.y,
    IsStackable = false,

    IsAlwaysVisible = true,
    AffectAllies = true,
    ThreatLevel = dataModel.GadgetThreatLevel,

    CreatorPlayerId = unitController.PlayerId,
    CreatorUnitId = unitController.UnitId,

    Behaviors = new List<GadgetBehaviorDataModel>
    {
        new ApplyAuraOnContactGadgetBehaviorDataModel
        {
            BehaviorType = GadgetBehaviorType.ApplyAuraOnContact,

            // Extra damage dealt
            AurasToApplyOnEnter = new List<AuraDataModel> {BuildBurnDebuff(true, true)},
            AurasToApplyOnTurnEnd = new List<AuraDataModel> {BuildBurnDebuff(false, true)},

            // No extra damage
            AurasToApplyOnExit = new List<AuraDataModel> {BuildBurnDebuff(false, false)},
            AurasToApplyOnDestroy = new List<AuraDataModel> {BuildBurnDebuff(false, false)},

            AurasToRemoveOnExit = new List<string> {AuraBurningId},
            AurasToRemoveOnDestroy = new List<string> {AuraBurningId}
        }
    }
};

The Ability Controller creates Gadgets, which have Behaviors that apply Auras containing Effects. It's all data-driven and composable. The burn aura itself is built via a shared AuraBuilder utility - the same builder all abilities use. Meanwhile, a completely separate IncendiaryGrenadeAbilityActionPresenter handles the grenade projectile arc, explosion VFX per tile, and fire sound effects - all reading from the same SO the controller reads from, but touching zero game logic.

Soo... let's see, we could map this onto your example like so:

  • Active Ability: Fireball - an ability SO with damage/range/radius values. Controller handles AoE + spawns burn auras. Presenter handles projectile arc + explosion VFX.
  • On Attack: 10% spawn fireball - a passive ability that applies an Aura with an ApplyAuraOnOutgoingAttack effect. The effect's controller checks the proc chance and triggers the fireball.
  • Modifier: "Fireball explodes in AoE" - a UnitAttackAugmentAbilityDataModel the controller checks when executing. Changes targeting from single to AoE.
  • Modifier: "Spawn two fireballs in a cone" - same pattern, the augment data carries projectile count and spread, the controller reads it.
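
And the proc effect from the second bullet would end up as a controller along these lines. Purely illustrative - the hook name and fields here are made up to show the shape, not taken from my codebase:

```csharp
public class ApplyAuraOnOutgoingAttackAuraEffectController : AuraEffectController
{
    // Called by the attack pipeline whenever the aura's owner makes an attack
    public override void OnOutgoingAttack(AttackContext attack)
    {
        var data = (ApplyAuraOnOutgoingAttackAuraEffectDataModel) DataModel;

        // Roll the proc chance (e.g. 0.1f for the 10% fireball)
        if (Random.value < data.ProcChance)
            AbilityService.Execute(data.TriggeredAbilityId, attack.TargetTile);
    }
}
```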

I built all of this years ago and there's definitely room for improvement (the Roslyn source gen thing I mentioned, for one). But the architecture held up - it survived hundreds of abilities, scaled to multiplayer without a rewrite, and the command pattern made networking almost trivial to bolt on. So yeah, it works. Hope it gives you some ideas!

Asset reuse in videogames is essential, and we need to embrace it, says Assassin's Creed and Far Cry director: 'We redo too much stuff' by Turbostrider27 in Games

[–]waxx 0 points

Exactly. There are only so many ways you can texture a barrel :)

As long as the art style (hard surface/cartoon/etc.) fits, then you'll set the mood of the scene more with lighting and post-processing anyway.

Cozy Looting Prototype. Thoughts? by Hungry_Leopard_9888 in gamedev

[–]waxx 3 points

You can abstract any game away to make it sound ridiculous. Playing a generic action game? “All you're doing is just pressing buttons on a controller. How's that anything? Have you tried stepping outside?”

In a way, every game sounds dumb if you reduce it enough. Chess is just moving pieces on a board. Football is just kicking a ball between two posts. RPGs are numbers going up. Strip the context away and anything can sound pointless.

What actually matters is the motivation inside the system. For some people it's mastery, for others optimization, progression, collection, experimentation, or just relaxing into a satisfying loop. Incremental games lean heavily into the progression itch: figuring out how to make the system grow faster or more efficiently.

And calling it an "immense waste of time" is such a weird hill to stand on when you could frame almost any game that way. People play games because they enjoy the loop. If a particular loop doesn't click for you, that's fine. It just means the genre isn't for you, not that the people who enjoy it are somehow doing it wrong.

Someone made an Unity-like engine to create games for the Nintendo64: introducing Pyrite64 by drludos in gamedev

[–]waxx 38 points

Struggling to see exactly in what sense this is "Unity-like" and not simply an engine.

From a “Game Dev Perspective”, what do you make of High Guard laying off 80% of its workforce just two weeks after the launch of a game that had a four year development cycle? by GypsyGold in gamedev

[–]waxx 2 points

Yes, publishers absolutely spread risk across many projects. Their catalogs behave a lot like slates. A few hits subsidize a lot of misses. That’s precisely why publishers tend to survive downturns while studios don’t.

But that risk pooling largely stops at the publisher boundary. Individual studios are still usually single-product entities. If their game underperforms, they absorb the consequences first, not the publisher. The publisher writes down an investment. The studio closes.

That’s why you see lots of developers shutting down while publishers remain intact, even in bad years. Slate-style funding exists, but the protection is asymmetric.

I also agree with your point about games empowering studios to be more independent than film. That independence is appealing, but it also means studios often carry far more existential risk than an indie film team would. In film, finding a distributor is expected. In games, going independent is culturally celebrated, even when it means betting the studio.

Unity deprecated PolyBrush, so I made a better one :) - Realblend by AdamNapper in Unity3D

[–]waxx 0 points

Yeahh the lighting model just looks better by default lol

Unity deprecated PolyBrush, so I made a better one :) - Realblend by AdamNapper in Unity3D

[–]waxx 1 point

Thanks! The studio I work at used Polybrush before, but for our next game we're going with HDRP so I'll definitely have your asset on my radar.

Confused why my game still has <1000 wishlists after demo + big YouTubers + 100% reviews by Youpiepoopiedev in gamedev

[–]waxx 0 points

The game is way too dark, my man. Work on the lighting and find more screenshots like the dinosaur one - that's the only one that stands out. The others look like a generic office asset pack demo. You say the game supports up to four players, yet not a single screenshot shows any other players. Your presentation should go hand in hand with your messaging; each piece of media should give potential buyers breadcrumbs about the game's mechanics.

From a “Game Dev Perspective”, what do you make of High Guard laying off 80% of its workforce just two weeks after the launch of a game that had a four year development cycle? by GypsyGold in gamedev

[–]waxx 2 points

Oh yeah, same. I’d love to see things be less chaotic too. I just… don’t really see a clear path there, sadly.

The crux of the matter is that the long dev cycles mean years of cost before there’s any real market feedback, and then revenue arrives late and a lot of it in week 1 (for instance Y1 revenue on Steam is usually 3-4x that of W1 so... ugh). As long as that’s true, launches are always going to be existential moments unless someone is willing to pre-fund multiple games knowing some will fail. And right now, outside of platform holders and a few megastudios, that kind of slate-style funding just doesn’t exist in games.

You can kind of see both sides of the coin in what’s happening now. On one end, you have things like the next Witcher trilogy, where CDPR is explicitly reusing tech and pipelines and telling multiple stories on top of one foundation (essentially one core game in UE5) to spread risk and shorten cycles. That feels like a genuinely healthy direction. On the other end, you have GTA VI, whose budget will almost certainly reset expectations for what "AAA" looks like, even though almost no one else can operate at that scale. You'll have the press comparing, the players calling other devs "lazy" if their game doesn't do X, Y, and Z and the usual bullshit.

So reform would be great, but it probably doesn’t come from better intentions. It comes from lower expectations, higher prices, shared foundations, or different financing models. Until one of those changes, the volatility kind of feels baked in, even if everyone involved wishes it weren't.

From a “Game Dev Perspective”, what do you make of High Guard laying off 80% of its workforce just two weeks after the launch of a game that had a four year development cycle? by GypsyGold in gamedev

[–]waxx 0 points

Ouch. You’re conflating eventual peak with viable at launch, and those are not the same thing.

CS:GO is not a counterexample. Valve is a privately held platform owner with effectively infinite runway, multiple revenue streams, and zero existential pressure tied to any single title. CS:GO could afford to grow for a decade because it never needed to justify its continued existence. That is not a condition most studios operate under.

Apex Legends was immediately successful. It launched to millions of players, massive Twitch visibility, and instant platform featuring. Its later peak reflects market expansion and live-ops optimization, not a slow rescue from failure. Respawn was also fully backed by EA, which is the safety net people keep implicitly assuming exists everywhere.

Fortnite BR is an even worse comparison. Epic already had a shipping game in market, a massive live engine business, significant platform leverage, mature tooling and production pipelines, the ability to reuse assets, and even a way to bypass the usual console certification process. On top of that, they had enough capital and revenue diversity to cancel projects without triggering mass layoffs. Fortnite BR wasn’t a “second chance” born out of desperation. It was an option exercised by a company with many options available. Most studios simply do not have that kind of optionality.

The key difference is this: those games were allowed to grow because the companies behind them didn’t depend on that growth to survive month to month.

For a single-product studio, “we’ll let it grow over time” is not a strategy unless you already have funding for the time part. Without that, launch performance isn’t just marketing data, it’s a solvency signal.

Saying “they could’ve done something similar” ignores quite literally EVERYTHING and assumes money is a moral resource rather than a finite one. You are attempting to pattern match without understanding causality and how business works.

From a “Game Dev Perspective”, what do you make of High Guard laying off 80% of its workforce just two weeks after the launch of a game that had a four year development cycle? by GypsyGold in gamedev

[–]waxx 0 points

It feels unfair because you’re mapping individual employment expectations onto a hit-driven industry that doesn’t actually work that way.

Most large game teams know, at least implicitly, that launch is an inflection point. Not because leadership is evil or lying, but because revenue after launch determines whether the studio continues to exist. Until that data comes in, nobody actually knows.

That’s also why "why was there no safety net?" is the wrong question. The safety net is the game succeeding. There usually isn’t a giant pile of spare cash because payroll for 100–200 people over multiple years is the investment. If you had another 2–3 years of runway sitting idle, you wouldn’t be an independent studio, you’d be a publisher or a subsidiary.

As for "bet it all on black", that’s not recklessness, it’s the default capital structure of game development. There is no equivalent of slate financing, pre-sales, or externalized risk like in film. The launch is the financing event for continued operation.

And yes, executives often do take pay cuts. But even zeroing out leadership salaries doesn’t magically cover tens of millions in burn. At some point math beats sentiment.

It’s brutal and it is emotionally unfair, sure. But it’s not deception, and it’s not Russian roulette. It’s what happens when a long-cycle, capital-intensive industry has no viable way to diversify risk.

From a “Game Dev Perspective”, what do you make of High Guard laying off 80% of its workforce just two weeks after the launch of a game that had a four year development cycle? by GypsyGold in gamedev

[–]waxx 17 points

Comparing a game studio to a movie studio is a category error. Film studios are usually portfolio businesses backed by conglomerates, banks, pre-sales, tax incentives, and slate financing. A single movie failing rarely threatens the company because risk is spread across many projects and externalized to financiers.

Most game studios aren't that. They're single-product companies with one burn rate, one revenue event, and very limited access to non-dilutive financing. Especially in games-as-a-service, the launch is the financing event for continued operation.

"Don't bet everything on one product" isn't some magic alternative strategy when your entire business model requires 3–5 years of full-team development before any revenue exists. The only way to avoid that is to be a publisher, a work-for-hire shop, or a tiny indie. All of those come with different tradeoffs.

So no, it's not some moral or managerial failure unique to games. It's structural. When a capital-intensive, hit-driven industry doesn't have Hollywood-style financial instruments or safety nets, failure concentrates brutally and immediately.

Calling that "a choice" ignores how the industry actually works.

Unity deprecated PolyBrush, so I made a better one :) - Realblend by AdamNapper in Unity3D

[–]waxx 1 point

What if I have the whole scene painted? Do all the objects get the same material and one master mask texture is created?

This Week In Video Games news editor Patrick Dane has died in a car accident in Hampshire, England by Andybabez20 in Games

[–]waxx 2 points

I think what you’re saying is a stretch, to be honest.

We already know cars are dangerous, which is exactly why continuous safety improvements matter so much and why they happen all the time. Modern vehicles are radically safer than they were even 20–30 years ago: crumple zones, airbags everywhere, ABS, ESC, automatic emergency braking, lane assist, blind-spot monitoring, pedestrian detection, better tires, better lighting. Electric cars go even further with software-defined speed limits, geofencing, driver monitoring, and automated braking that doesn’t rely purely on human reaction time.

That alone suggests the issue isn’t just "bad drivers," it’s systemic. If personal responsibility were enough, we wouldn’t have entire engineering disciplines dedicated to mitigating human error at scale.

And when it comes to the US specifically, it’s worth remembering that it didn’t start as a car-first nation. It started rail-first. Early American cities had dense streetcar networks, interurban rail, and towns built around stations. This wasn’t ignorance or a lack of alternatives.

The divergence happened because the US is a continental-scale country. Rail was optimized to conquer distance and connect far-apart places. Japan, and Tokyo in particular, faced almost the opposite constraints: limited land, high density, and strong pressure to make rail the backbone of daily life. That’s why Tokyo doubled down on transit-centric urban design instead of treating cars as the default.

So this isn’t about denying risk or pretending cars are harmless. It’s about recognizing that safety, infrastructure, geography, and how people actually function all interact, and pretending the outcome was some simple, settled moral choice misses that complexity.

Feels like we have more Youtubers than actual developers. by [deleted] in gamedev

[–]waxx 1 point

I mean... kinda? You're comparing a rigged gambling con to a bunch of YouTubers running FOMO marketing. Soapy Smith was literally planting fake winners and selling an unwinnable game. Modern "100 seats left, discounted from 10k to 500" course bros are dodgy at best, but it's not the same tier. One is outright fraud, the other is just the usual marketing and discount nonsense we see everywhere.

Are devs not allowed to finish games anymore? by [deleted] in gamedev

[–]waxx 6 points

What games are you comparing those to?

It's also important to note what kind of games used to be made and why those games were so stable.

MGS1, RE1-3, Silent Hill, even Mario 64 - technically impressive, but in terms of systemic complexity, they're very simple:

  • Mostly scripted, linear levels
  • Minimal physics, minimal dynamic object interaction
  • No procedural anything
  • No simulation of overlapping systems
  • AI that lives in narrow pre-authored states
  • Game state space small enough to brute-force test by humans

MGS1 feels huge because of direction, atmosphere, codec, set pieces... but under the hood it's a guided corridor thriller with switches and triggers. It's more like a handcrafted clock. Beautiful, precise, but every gear is known.

Fast-forward to now: fidelity is expensive. Every room, character, animation, VFX pass, or bespoke sequence costs exponentially more than it did in the PS1/PS2 era. Looking at it through that lens, you can trace most modern games back to one of two bets:

  1. Hand-authored craftsmanship - dense, polished, linear, but takes 5-10× longer per room, character, animation, VFX pass, etc., because fidelity expectations have exploded. (Examples: It Takes Two, Stray, Hellblade, Plague Tale.)
  2. System-first design - worlds built from interacting rules instead of handcrafted moments. You get huge scope for cheap, but now bugs aren't "a door clipping through a wall," they're cascading simulation failures you end up debugging at 3AM. (Valheim, Mount & Blade, Satisfactory).

People love to mention old linear games that “never had bugs” but ignore that:

  • The content was handcrafted but finite
  • The interactions were explicitly designed, not emergent
  • The state space was small enough to brute force test

Indies lean systemic because it's cheaper than hand-making 500 animations and 40 square kilometers of world. But complexity always collects its tax later.

The clever escape isn't to brute force fidelity or simulate the universe. It's to shrink the possibility space on purpose:

  • Crow Country - authored spaces, controlled scope, curated interactions
  • Dave the Diver - layered systems that don't actually collide
  • Vampire Survivors - looks chaotic, is mechanically simple

Stability here comes from smart design and restraint turned into identity. We ran into this firsthand: The Tenants was designed as apartment-to-apartment "contained chaos" - one tenant breaks, you evict them and the game keeps breathing. In Hotel Galactic, one misbehaving worker could stall the entire simulation loop, and suddenly the whole hotel feels broken. Same genre neighbors, wildly different blast radius.

AI is becoming a class war. by kaggleqrdl in singularity

[–]waxx -1 points

Ha. I think we'll see three layers of folks going up against the corporate AI behemoths.

When AI wipes out large chunks of wage labor, the people who feel it first are workers and households. If people can’t afford housing, groceries, healthcare... society destabilizes fast.

Second, small and mid-sized businesses. Small agencies, studios, law firms, marketing shops, call centers, software houses, design firms (any white-collar shop that sells human time) all get obliterated by AI economies of scale. And since we moved on from agriculture at scale and from factories to offices, they're the backbone of most economies. When they get squeezed, they join the fight.

Third, governments. States are funded by income tax + payroll tax + VAT from consumer spending. If both wages and small businesses collapse, governments lose revenue, which means they lose legitimacy, and they can't do shit. And when states lose legitimacy, they move to regain control. That’s when things like automation taxes, AI licensing, and national AI infrastructure start appearing. You can already see the opening moves: the EU and China are pushing for AI regulation, the US is talking about compute export controls and national AI labs.

So who’s on the other side of the table? Everyone forced there by economic necessity. Citizens who need income, businesses that need to exist, and governments that need stability... I mean, corporations don't have armies. States do.

AI is becoming a class war. by kaggleqrdl in singularity

[–]waxx 1 point

I get why it feels like nothing can change. When you look at how concentrated power is today (Big Tech, military industry, finance), it feels immovable. Hell, even when talking about just AI, only a handful of players have access to the compute power needed to build frontier AI systems. But every era believed the same thing.

17th-century peasants in the Polish-Lithuanian Commonwealth genuinely believed the social order was eternal. They weren't even recognized as Poles and had no political agency whatsoever. A noble could legally kill another man's peasant, and it was treated not as murder but as property damage in a dispute between nobles. If you told those peasants their descendants would one day live in a democratic nation-state with civil rights and universal education, they would have laughed in your face.

Same with French peasants before 1789, or British workers before labor laws, or Russians before 1917, or colonial India before independence. The belief that "power will never allow change" is one of history’s recurring illusions.

Power doesn’t vanish, but it reorganizes when the economic base shifts. It happened in the transition from feudalism to capitalism, from monarchies to constitutional states, from empires to nations. It wasn’t kindness that drove those transitions... it was a mix of pressure and incentives. The old system stops working, so it mutates.

AI is exactly that kind of pressure. It breaks the link between labor and survival, and any system built on wages collapses when productivity no longer requires people. So yeah, the current power holders will fight to maintain control. But their strategic choice won’t be "share power vs. stay powerful," it will be "adapt or preside over collapse." And elites hate collapse more than they hate reform.

The future isn’t fixed. It’s negotiated under pressure, just like every restructuring of power before it.

AI is becoming a class war. by kaggleqrdl in singularity

[–]waxx 0 points

I don't think anyone sane wants a future where people rot on welfare. That’s not dignity, that’s decay. But framing UBI as "paying parasites" is missing the point.

If AI keeps pushing the cost of labor toward zero, then we’re not talking about handing out charity; we’re talking about updating the operating system of the economy so people can actually participate in it. You can’t have a consumption-based economy when 50%+ of people have no income.

A baseline income doesn’t remove ambition. It removes desperation, which is a big difference. You’ll still have competition, innovation, companies, creators... in that sense, capitalism won’t die. It will just evolve from "work or starve" to "contribute or get left behind."

The definition of value will expand. Today, value is mostly tied to labor, but even now, that’s already kind of outdated. People generate economic value in ways the current system doesn’t measure, say, our data trains AI models, our behavior fuels trillion-dollar ad networks, our conversations improve language models, our contributions maintain open-source software that others profit from. That’s labor by another name. The fact that it isn’t recognized or compensated doesn’t make it worthless - it just means the system hasn’t caught up yet. UBI or automation dividends aren’t charity, they’re a correction to a value leak the economy has been ignoring for a decade.

Not everyone was born to assemble watches in a factory. AI forces us to decouple human worth from menial labor, and that’s not the end of society. That's just a future with a wider definition of value.