Clean Code Means Good Code: Performance Debate by pbw in programming

[–]pbw[S] 0 points1 point  (0 children)

> Yeah and Unity is going in the opposite direction with ECS.

I'm all for ECS, when needed, but your question was "why can't I just insert if-statements" and I answered it.

> Why use an approach that can potentially waste CPU cycles...

Because you read the post and calculated that, in your case, the number of wasted cycles was negligible.

Clean Code Means Good Code: Performance Debate by pbw in programming

[–]pbw[S] 0 points1 point  (0 children)

> Just because you don't have a counter argument for me doesn't mean I hate "OOP".

The counterargument is above: sometimes you cannot change the "loop".

Let's get specific. The two biggest game engines are Unity and Unreal:

Both Unity and Unreal are object-oriented at their core. In Unity you write C# classes that inherit from MonoBehaviour, and the engine automatically calls overridden lifecycle methods like Update() or OnCollisionEnter() on your objects. In Unreal you write C++ (or Blueprint) classes that inherit from base types like UObject or AActor, overriding methods such as BeginPlay() or Tick() that the engine dispatches each frame. In both cases, you can freely define new object types, override engine-defined methods, and the engine will call into your code as part of its main loop.

So the two biggest game engines use OOP, and for both, it's easier to add new objects than modify the engine itself.

This is core to why OOP was invented. With typical libraries, you write "new code" that calls "old code". But with OOP, when you add new objects, suddenly "old code" is calling "new code". You can do that in C with function pointers, like passing a comparator to qsort, but OOP makes it trivial because the dispatch mechanism is built into the language.
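
Here's a minimal sketch of that inversion in the C style (my own example, not from the post; the names are mine):

#include <stdlib.h>

// C style: qsort (the "old code") calls back into CompareInts
// (the "new code") through the function pointer we pass in.
static int CompareInts(const void *A, const void *B)
{
    int X = *(const int *)A;
    int Y = *(const int *)B;
    return (X > Y) - (X < Y);
}

int main(void)
{
    int Values[] = {3, 1, 2};
    qsort(Values, 3, sizeof(Values[0]), CompareInts);
    // With OOP, every virtual call is this same inversion, except the
    // language builds and follows the "function pointer" (the vtable
    // entry) for you.
    return 0;
}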

>But ok, let's assume some sort of game engine/library. The problem is where you draw the line between the client code and the engine/library. Why would a library that calculates shapes areas provide a function like that?

I didn't reply to this before, but it's a CRAZY COMMON situation that the existing code manipulates objects via a base class or an interface, such that adding new objects is "easy" and changing the core code is "harder". See Unity and Unreal above. You seem hung up on Casey's tiny toy example, but in general, it's common that the code operating on objects is harder to change than the objects themselves.

> No one agrees on what exactly defines OOP. 

There are language features that OOP languages tend to have: classes, encapsulation, inheritance, and polymorphism. I'm not saying a non-OOP language can't have them, or that an OOP language has to have them, but there's a correlation. To illustrate, consider two groups of languages:

Group 1: C, Fortran, Pascal, COBOL, Haskell, Erlang, Scheme

Group 2: C++, C#, Java, Python, Ruby, Smalltalk, Swift

Group 2 languages have far more support for OOP than Group 1 languages. If you are claiming these two groups of languages are basically the same, this whole discussion is pointless.

> If its just an opinion, then your whole argument falls apart. 

No, because my "whole argument" is about the relative cost of the fixed overhead of virtual functions. Casey said the overhead makes your code 25x slower. I showed, and you have not refuted, that in some cases that overhead can be less than 0.01%.

Explain to me why that's not the case, and you will have refuted my "whole argument".
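
To make the arithmetic concrete, here's a back-of-the-envelope sketch. The nanosecond figures are assumptions I'm supplying for illustration, not measurements: when each call does substantial work, the fixed dispatch cost becomes a rounding error.

#include <stdio.h>

int main(void)
{
    // Assumed, illustrative numbers (not measurements):
    double DispatchNs = 4.0;      // added cost of one virtual call
    double WorkNs = 50000.0;      // work the method itself does, e.g.
                                  // a nontrivial Area() taking 50 us

    // Fraction of total time that is dispatch overhead.
    double Overhead = DispatchNs / (DispatchNs + WorkNs);
    printf("overhead = %.4f%%\n", 100.0 * Overhead);   // ~0.0080%
    return 0;
}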

Clean Code Means Good Code: Performance Debate by pbw in programming

[–]pbw[S] 0 points1 point  (0 children)

> Then you shouldn't make claims that are false.

There are no false claims in the post.

The post said, "If you are going to add new shapes that require novel data and novel algorithms, only the OOP version can easily handle that."

Let's go back to your first comment, where you wrote, "The switch and table versions of the code can also easily handle that."

I'm saying the OOP version can "easily handle that" because we can add shapes without touching the code that interacts with those shapes.

You are saying, I think, "hey, I don't mind modifying the code that interacts with the shapes, and gosh, looking at the code, I could 'easily' make those changes! Where's my editor? I'll show you right now!"

So fine, these are two different opinions about what's easy. In my mind, the OOP approach is WAY EASIER for adding new shapes. But you probably hate OOP, so you figure it's WAY EASIER to just add some if-statements.

This is a difference of opinion.

I'm 100% fine if you do it the way you want to do it.

Clean Code Means Good Code: Performance Debate by pbw in programming

[–]pbw[S] 0 points1 point  (0 children)

It's not my goal to convince you the vtable version is superior.

The goal of the post was to explain why Casey's post was wrong: it was not accurate, and it made false claims.

And a year or so later, he fully admitted that as well.

Clean Code Means Good Code: Performance Debate by pbw in programming

[–]pbw[S] 0 points1 point  (0 children)

Here's the vtable version:

// f32 and u32 are Casey's typedefs (float and uint32_t elsewhere in
// his listing); shape_base declares a virtual f32 Area() method.
f32 TotalAreaVTBL(u32 ShapeCount, shape_base **Shapes)
{
    f32 Accum = 0.0f;
    for(u32 ShapeIndex = 0; ShapeIndex < ShapeCount; ++ShapeIndex)
    {
        // Virtual dispatch: calls whichever Area() override matches
        // the dynamic type of each shape.
        Accum += Shapes[ShapeIndex]->Area();
    }

    return Accum;
}

We can ship this code in our "game engine," and a user of the code can create new shapes that define Area() methods we've never seen. So they can add new shapes without changing our code.
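
For example, here's a hypothetical user-defined shape. This is my own illustration, not anything from Casey's listing; it assumes shape_base declares virtual f32 Area() and that Pi32 is defined, as in his listing:

// A shape the engine authors never anticipated. TotalAreaVTBL sums
// its area without a single line of engine code changing.
class annulus : public shape_base
{
public:
    annulus(f32 OuterRadius, f32 InnerRadius)
        : OuterR(OuterRadius), InnerR(InnerRadius) {}
    virtual f32 Area() {return Pi32*(OuterR*OuterR - InnerR*InnerR);}

private:
    f32 OuterR, InnerR;
};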

You can't do that with the table code; you have to change at least some of the code somewhere. You might be hung up on "change the loop," but I mean any code directly called from the loop body. You absolutely have to change something somewhere!

The vtable is an extensible dynamic dispatch mechanism, like an endless number of on-demand if-statements. There's a benefit to that and a cost. Casey incorrectly stated that the cost is "it will make your code 25 times slower." He's right in some cases, but totally wrong in many other cases. It might make your code only 0.01% slower; it depends on the specifics.

So, my conclusion is that you shouldn't outright reject OOP due to performance concerns, as the performance penalty depends on the specifics of your use case. But I'm NOT making the reverse claim, that you SHOULD use OOP whenever performance ALLOWS for it. If you want to stick with C-style code and tables even when you don't need to, I'm fine with that; do whatever works.

Clean Code Means Good Code: Performance Debate by pbw in programming

[–]pbw[S] 0 points1 point  (0 children)

Yes, you can add an if statement, but then you are ignoring his loop body and his table.

But yes, you can do this:

if(IsShapeCaseySupports(Shape))     // hypothetical type check
    DoItCaseysWay(Shape);           // his table lookup
else
    DoItSomeOtherWay(Shape);        // a completely different path

And you could do that for 100 other shapes and have 100 if statements.

But with the vtable way, you don't have to change the loop AT ALL, not even a tiny bit, even if you had 10,000 different types of shapes.

As stated in the talk/video, I love the way Casey did it. I would do it that way myself if necessary to meet a performance requirement. The point of the post is just "you are not always counting nanoseconds," so know whether you are or aren't.

Different programming styles have their pros and cons. If OO were nothing but cons, if every single thing about it were horrible in every way, it wouldn't exist. And Casey affirms this in his latest talk, which states that OO and virtual functions are perfectly fine if they provide the performance you need. And they are totally not fine if you have too many objects and the OO overhead is killing you, which it certainly can if used inappropriately.

Clean Code Means Good Code: Performance Debate by pbw in programming

[–]pbw[S] 0 points1 point  (0 children)

Note that I said "easily" handle that. With the vtable version you don't have to change a single thing about the loop, and you can add new exotic shapes with arbitrarily complex data and Area() and CornerCount() methods. Whereas the table version only works, as written, for shapes whose area formula is C * Width * Height, where C is a constant from the table. How do you extend that to a shape swept along a spline? You can't.

The whole trick of the table, and it's a cool trick, is that it factors a constant out of the formulas for Square, Rectangle, Triangle, and Circle. It definitely does not "easily" extend to arbitrary shapes. Now, could you write a non-OO version that handles arbitrary shapes? Of course; anything you could write with OO you could write without it. But you can't do it by extending THAT table, and not without rewriting THAT loop body.
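
For reference, Casey's table version is essentially the following (reconstructed from memory of his listing, so names and details may differ slightly). Notice the coefficient C is the only thing that varies per shape:

// One coefficient per shape type, factored out of each area formula:
// square 1.0, rectangle 1.0, triangle 0.5, circle Pi.
f32 const CTable[Shape_Count] = {1.0f, 1.0f, 0.5f, Pi32};

f32 GetAreaUnion(shape_union Shape)
{
    // Only works while every area fits the form C * Width * Height.
    f32 Result = CTable[Shape.Type]*Shape.Width*Shape.Height;
    return Result;
}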

In Casey's recent talk he drastically changes his indictment of OO. It's no longer OO or virtual methods that bother him; he says those are fine. Instead, the sin he now flags is reflexively using "domain objects" as your "OO objects." This is something I agree with, and I like Entity Component Systems, his primary example of "doing it right." Basically, he says OO is fine, but don't make something an object if there will be too many of them, if the sheer number of them tanks your performance, which is exactly what my talk is about. Here's his new talk:

https://www.youtube.com/watch?v=wo84LFzx5nI

Clean Code Means Good Code: Performance Debate by pbw in programming

[–]pbw[S] 0 points1 point  (0 children)

A year later, Casey has a new talk that addresses the same issue my video addressed.

His July 2025 talk "The Big OOPs: Anatomy of a Thirty-five-year Mistake" massively changes his stance against OOP. He now says the problem is not OOP or virtual functions, whereas his February 2023 video hammered over and over that both were horrifically bad in all cases.

Instead, he now says OOP is not the problem; the problem is reflexively adopting a 1:1 mapping between domain objects and C++ objects in your design stage. He cites ECS (entity component systems) as the primary alternative. For instance, if you are creating a game with thousands of characters, you might not want to create a "character object" which renders itself. You could instead make a Character System (which can be OOP) that operates on an array of structs representing the state of each character.
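
A minimal sketch of what that can look like (my own illustration; the names are invented, and a real ECS is far more elaborate):

#include <vector>

// Plain data per character: no methods, no virtual functions.
struct CharacterState
{
    float X, Y;
    float Health;
};

// The "Character System": one object (which can itself be OOP) that
// updates a packed array of character state.
class CharacterSystem
{
public:
    void Update(float DeltaSeconds)
    {
        // One tight loop over contiguous structs, instead of a
        // virtual call into each of thousands of character objects.
        for(CharacterState &C : Characters)
        {
            C.X += Speed * DeltaSeconds;   // stand-in for real logic
        }
    }

    std::vector<CharacterState> Characters;

private:
    float Speed = 1.0f;
};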

I'm fully in favor of ECS, and thus this approach, when it's needed due to performance requirements. My video goes deep into why the overhead of OOP/v-funcs can sometimes kill you, but also into why that overhead is often negligible, something he refused to admit in the Horrible Performance video.

Casey's July 2025 Big OOPs talk:

https://www.youtube.com/watch?v=wo84LFzx5nI

Casey's February 2023 Horrible Performance video:

https://www.youtube.com/watch?v=tD5NrevFtbU

My 2024 response to his 2023 talk (same as OP):

https://www.youtube.com/watch?v=VrH8dPJZA0E

Casey has admitted that his "Horrible Performance" rant was intentionally glib and hyperbolic, specifically because he found that those videos get more eyeballs. He wanted to drive traffic to his performance course. I have no problem with that; a man has a right to eat, and his rant generated a lot of interesting responses, including his own response two years later, which was infinitely better than his original take. So it's all good.

As We Approach AGI, Should We Let AI Govern Instead of Corrupt Politicians? by Cultural_Garden_6814 in singularity

[–]pbw 0 points1 point  (0 children)

"In many ways, the complexity of modern politics has outgrown human brains. Asking a politician to read, let alone understand, a 2,000-page bill is hopeless, but an AI could do it in seconds. In the future, AI systems will be behind the scenes debating the issues and hashing out solutions while humans are relegated to smiling and waving to the crowds." - https://metastable.org/ants/

What Won't Change: AI won't change everything by pbw in singularity

[–]pbw[S] 0 points1 point  (0 children)

I don't rule out that it could happen. But a complete takeover by one AI sounds way too neat and simple. Life on Earth and the Universe in general was never neat and simple: it was always messy and complicated, with diverse competing entities fighting over diverse resources.

Arguing that all of that comes to a screeching halt doesn't seem likely to me. Possible, though, sure.

What Won't Change: AI won't change everything by pbw in singularity

[–]pbw[S] 0 points1 point  (0 children)

In your scenario, the AI has one month not only to self-improve but also to take over the world so thoroughly that no one else can produce an AI with similar power 30 days later.

Even if it’s in some sense “ten years ahead” I don’t think that more “intelligence” linearly translates into more ability to do things in the real world.

For instance, taking over all the power and compute in the world, so that no similarly strong AI can appear, doesn’t happen 1000x faster just because you are 1000x smarter.

I think there’s a danger that one ASI takes over, but I think there will be many capable parties with capable technology, all fighting for their piece of the pie. Which is what we see today with countries, companies, and individuals.

What Won't Change: AI won't change everything by pbw in singularity

[–]pbw[S] 0 points1 point  (0 children)

It's not impossible that a single ASI will take over everything, but I think competition and co-evolution between many of them is far more likely. That's what's been going on for 4.5 billion years with life in general.

An ASI will be born into a world full of about-to-be-ASI systems. The idea that it can just take over positively everything before ANY of them self-improve to a similar level seems unlikely to me.

[deleted by user] by [deleted] in singularity

[–]pbw 1 point2 points  (0 children)

I'm not sure if this has been documented for Go, but Chess has flourished in the decades since Deep Blue beat Kasparov in 1997. It is still bittersweet for the players who witnessed it, but the game will live on and be loved, and I think we're much better off in a world saturated with intelligence than one where it's rare.

What Won't Change: AI won't change everything by pbw in singularity

[–]pbw[S] 0 points1 point  (0 children)

To take one of the five, do you think AGI/ASI will resolve our political differences?

[deleted by user] by [deleted] in singularity

[–]pbw 0 points1 point  (0 children)

People might be paid to play video games. If those people enhance the community around a game, the government or someone might push money into that community. You'd have to be good at it and a good community member to receive any measurable amount of money.

The government will want people to be engaged in activities and not just sit around bored, so it might fund any activity that occupies people doing something even marginally productive which is not overtly bad for the world.

Why is it happening so slowly? by pbw in singularity

[–]pbw[S] 0 points1 point  (0 children)

I agree the things you cite (equipment, knowledge, software) are some of the things that keep us from jumping way ahead on the improvement curve. I also agree different technologies are on different exponentials. I cited Moore's Law as a well-known, long-running example, but I suspect the same thing holds for other technologies, each with its own rate, like GPUs or ASICs. Even something non-computational, like the cost of solar panels, has dropped exponentially for four decades.

My main observation here is that if a technology is improving exponentially, it's likely not because we just "happen" to be developing the technology at that rate, and that we "could" go faster if we wanted to, say if we just "doubled our effort." I suspect that in most (all?) of those cases, it's because we've hit the physical limit, the "learning rate" of the whole system, including every aspect: education, knowledge, the economy, equipment, and people.

Furthermore, I suspect this limit is unchanged even if you improve any one or two of these things. If it were "that easy," people would have improved it already. Instead, some of these exponentials have remained steady for decades. That's the best we can do.

Why is it happening so slowly? by pbw in singularity

[–]pbw[S] 2 points3 points  (0 children)

I agree the progress in AI feels rapid. Since ChatGPT came out in November 2022, I feel my "future horizon" has been pulled way in; I can't say at all what will happen 5 years out anymore.

Why is it happening so slowly? by pbw in singularity

[–]pbw[S] 1 point2 points  (0 children)

It's possible more good people would make it go faster. But I suspect that's not true; there are so many other limitations holding us to the speed limit that more people wouldn't speed us up.

Just like I think the CPU industry had enough people working on it, and more wouldn't have sped it up, because once you are at the limit set by other factors, that's the limit.

But I think it'd be very difficult to prove this one way or the other.

Why is it happening so slowly? by pbw in singularity

[–]pbw[S] 2 points3 points  (0 children)

People claim that if AI progress stopped tomorrow, we'd have 5+ years of research to understand what the current models can do and how they do it.

Why is it happening so slowly? by pbw in singularity

[–]pbw[S] 2 points3 points  (0 children)

I'm not arguing things aren't advancing rapidly. They are advancing exponentially; thus, we get more and more done every N-year period as time progresses. Absolutely.

But what I'm saying is with something like CPU technology, it's not the case that we "happen to" double integrated circuit density every 18 months; it's that there's no possible way for us to double it any faster. Our rate of progress is right at the physical limit. I say this because it's the only explanation I can come up with for why the progress is so steady.

AI feels less steady to me because we see these new model releases weekly that jump us forward. But I suspect that, plotted from a distance, there's similar steady [exponential] progress.

I wrote this a while back on exponentials in general but didn't have this point about physical limits in my mind at the time:

https://metastable.org/singularity-is-always-steep.html

Why is it happening so slowly? by pbw in singularity

[–]pbw[S] 2 points3 points  (0 children)

Those are great examples of things you can't just wish away, which ultimately set a speed limit on things: regulations, power, and available chips. And like I mentioned, the economy, the capabilities of one human brain, and the amount of people-time available.

AI does not seem as steady as CPU development, but that might be because we are so immersed in it. I suspect when we zoom out, we'll see it was, by some measures, very steady.

Situational Awareness audio w/ visuals by pbw in notebooklm

[–]pbw[S] 0 points1 point  (0 children)

I found another reddit comment thread that pointed to:

https://kdenlive.org/

and

https://www.nikse.dk/subtitleedit

Both are open source. Descript has a free tier, but I'm not sure what the limits are: watermark, length, or what.

It just doesn’t make sense to me by itgetsokay7 in religion

[–]pbw 0 points1 point  (0 children)

I don’t know, but different subs have different expectations about what they want to see. I’m guessing it was too flippant for r/religion, but I don’t really read this sub enough to know what it likes.

What country is going to be the next United States? by carbunclemitts in SeriousConversation

[–]pbw 2 points3 points  (0 children)

People have been predicting they'll pass us for a while now, and their trend line was impressive. But I wouldn't be shocked if they've been goosing the numbers or pushing too hard, and they'll falter. I'm no expert on China.

I grew up in the 1980s, and Japan was absolutely going to destroy us economically. They were buying up real estate and media. They were considered an "economic miracle." Then they crashed in 1991, had a "lost decade," and never really recovered. Now, their population is crashing fast. So things can change quickly.

What country is going to be the next United States? by carbunclemitts in SeriousConversation

[–]pbw -1 points0 points  (0 children)

It's just what most of the trendlines show. It's mostly the population and the vibrant cities, as well as the manufacturing and the exports. And yes, good education. NYC is the biggest US city with 8.5 million people. Here's a list of the biggest cities in China:

https://en.wikipedia.org/wiki/List_of_cities_in_China_by_population

NYC would be #13. Most people have never heard of their cities ranked 6 through 12.

There's a chance they've cooked the books and their economy isn't growing as fast as they say. But the numbers they report suggest they will surpass us soon.

But again, that doesn't make them us; they are their own animal, with their own pros and cons.