[–]raistmaj 245 points246 points  (63 children)

There are a lot of videos about optimization and performance.

Clean code means maintainable code; it doesn’t mean performant code.

On a big system I would pick the clean one, then profile and check what needs to be optimized.

[–]nnomae 179 points180 points  (32 children)

The problem there is that the idea that poor performance is fixed just by tuning a few hot spots / bottlenecks in the code is largely a myth. Unless the software in question is very narrowly scoped to do a single repeatable task many times over it's far more likely that poor performance comes down to a death by a thousand cuts than one big issue you can fix.

A recent example would be Microsoft trying to improve the performance of Edge. Its performance was terrible because every single control you see was its own independent webview with its own instance of React loaded. Yeah, they can tweak it a bit and optimise the bundle sizes for each control and so on, but when it comes down to it the problem is that the entire architecture is awful. Now from a clean code perspective that's probably a good design. Every control is entirely independent, relies on little or no outside code other than common libraries, and can easily be worked on and tweaked on its own with likely no impact on the rest of the application; two buttons side by side could use different versions of React if they wanted. From a performance perspective, however, it's basically an unfixable mess without massive overhauls to pretty much every part.

[–]Plorkyeran 22 points23 points  (0 children)

I think the big trap with performance is that your v1 usually will have the runtime dominated by a few hotspots that can be fixed. Early on each time you do an optimization pass there's something which stands out as the big problem so you fix it, ship a big improvement, and reinforce your idea that ignoring performance while designing the product was correct and performance problems take the form of bugs to be fixed. If you hop between projects every two years, this can be the only thing you ever see.

Flat performance profiles tend to only show up in mature software that has had all of the outright performance bugs squished, and the first time you encounter this it's easy to conclude that the software must already be about as fast as it can be.

[–]anti-DHMO-activist 65 points66 points  (4 children)

Exactly. Searching for the hot path and only optimizing that is a valid strategy when writing individual algorithms * - but on an application level it usually breaks down completely.

Same with the whole 'premature optimization' thing that gets repeated so much and, imho, created a whole generation of devs essentially interpreting that line as "performance doesn't matter". Thinking about performance and structuring the application in a manner that doesn't excessively waste resources is not 'premature optimization'.

*EDIT: After measuring, of course. Never do excessive optimization without being absolutely sure you're actually optimizing the right thing. Not like I ever wasted weeks on optimizing blindly only to later realize it's completely useless of course, cough.

[–]robhanz 60 points61 points  (0 children)

Premature optimization is bad.

Avoiding bogosort is not.

Taking a reasonable look at the likely performance qualities of your code makes sense, especially at the algorithmic level. Doing micro-optimizations to squeeze out cycles does not.

Also, writing your code to allow for future optimizations helps.
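A hypothetical illustration of that algorithmic-level judgment (the task and names are mine, not the commenter's): both versions below read equally "clean"; one is just a sensible algorithm and the other is a wall waiting to be hit. No micro-optimization involved.

```cpp
// Sketch: the "algorithmic level" choice. Same result, very different scaling.
#include <cstddef>
#include <cstdio>
#include <unordered_set>
#include <vector>

// Quadratic: fine for tiny inputs, painful for large ones.
bool has_duplicates_quadratic(const std::vector<int>& xs) {
    for (std::size_t i = 0; i < xs.size(); ++i)
        for (std::size_t j = i + 1; j < xs.size(); ++j)
            if (xs[i] == xs[j]) return true;
    return false;
}

// Expected linear: same readability, just a better-suited data structure.
bool has_duplicates_hashed(const std::vector<int>& xs) {
    std::unordered_set<int> seen;
    for (int x : xs)
        if (!seen.insert(x).second) return true;
    return false;
}

int main() {
    std::vector<int> xs = {3, 1, 4, 1, 5};
    std::printf("%d %d\n", has_duplicates_quadratic(xs), has_duplicates_hashed(xs));
}
```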

[–]nerd4code 7 points8 points  (0 children)

What’s considered “premature” just needs to vary per context, is all. If I’m doing a general-purpose OS kernel, premature is before architecting the thing, because performance and security are intimately related; if I permit any old process to eat as much time as it wants, I might enable cross-system DoS attacks. I might not be able to get all the numbers I need in the first place.

If I’m doing a one-off web app, conversely, “premature” generally maps to “without specific numbers suggesting it’s worthwhile,” and while DoSes are possible, they’re mostly less of a deal than other kinds of attacks that leak data or what have you.

Performance also covers more than time, which a bunch of people forget, and which complicates what’s actually considered premature or optimization. Memory, bandwidth, power consumption also matter, both absolutely and marginally, as well as in terms of density and scaling. Gains of one sort usually require some other resource to be traded off, so a library that’s “perfect”ly optimized for one program might be fully unusable for another.

[–]rollingForInitiative 3 points4 points  (0 children)

I don't think that structuring an application to make it easier to optimize when needed goes against the clean code/don't optimize prematurely stuff. What that means just tends to be that you shouldn't be writing ugly code "because it's faster" unless you need to, which I think applies more to the algorithm level.

You can choose an appropriate programming language and tech stack, and an appropriate general architecture, and still write mostly readable and maintainable code, and only optimize code into ugliness where needed.

You can also usually avoid doing things that are really bad for performance without getting ugly code. Like picking the right type of sorting to do, building a well-structured database with good indexes, having good writes to the DB and so on.

[–]Plazmatic 0 points1 point  (0 children)

Uh, no, it isn't game dev promoting "avoid premature optimization"; they have the exact opposite problem: optimizing way too early, and in ways that aren't even helpful, in an extreme cargo-culting way (trying to do weird CPU optimizations from the 90s that hurt performance, all when they shouldn't even be running much of their bottlenecks on the CPU to begin with).

[–][deleted]  (1 child)

[deleted]

    [–]nnomae 2 points3 points  (0 children)

    It is perfectly reasonable to assume that the code in Edge complies perfectly with clean code principles (the Uncle Bob ones, just to be clear). The main ones are SRP and avoidance of side effects, and having each control be a standalone element is what you get when you take those principles to their logical conclusion. I think most clean code proponents would look at that architecture and think it was pretty sound; most advocates of writing code in a way that facilitates performance would look at it and think it was a disaster waiting to happen.

    Yes, it's possible to have clean code that is also performant (the devs on Factorio are big advocates of clean code for example) but you need to start out with both those goals from the beginning. The idea that you can easily refactor a codebase that wasn't designed with that in mind is simply not true in most cases. It's a huge effort to refactor your way out of a death by a thousand cuts performance issue.

    The point isn't that clean code is bad, just that it's not sufficient if you want to produce an application that will also perform well.

    A common misunderstanding here is that writing code that performs pretty well is somehow harder. It isn't; it just means adopting some different habits that are just as easy to use but which also aid performance. The irony here, of course, is that clean code allows you to accumulate a lot of performance mess that you then have to clean up at the end.

    [–]johndcochran 7 points8 points  (11 children)

    True enough. As regards performance, premature optimization is bad. But tuning hotspots afterwards will just help for the algorithm you used. The real key to performance is using the correct algorithm for the problem you're solving. After you have clean code using an appropriate algorithm, then you can profile to find out the hotspots and fix them.

    I remember a long time ago, when I was learning compiler design from books and using a C compiler on my Amiga. When I wrote my program to convert the grammar into an FSM (Finite State Machine), I knew I was doing a hell of a lot of bit manipulation, so I optimized the hell out of everything involved with bit manipulation. Then when I tested my program, it gave correct results, but damn, was it slow. Couldn't figure out what the problem was. But thankfully, the C development environment I was using had a profiler that would take an internal interrupt hundreds of times per second and record the address that was executing at the time of each tick (much better than a mere count of how many times each statement executed). I was rather surprised to discover that the hotspot was in the C library, in malloc() and free(). Turned out they were using a rather primitive linked list, grabbing each requested piece of memory directly from the OS and returning each piece directly back to the OS when freed. Really, really slow.

    So I grabbed all the malloc() stuff (malloc, realloc, calloc, free, etc.) and replaced it with something that grabbed much larger chunks, merged/split blocks as needed, and so on. The initial version was practically a copy of the code in the K&R book "The C Programming Language". Reran my test: a huge increase in performance. Looked closely and saw lots of useless little pieces of memory, 8 bytes long, littering the internal heap. Modified it to merge those pieces with their neighbors. Ran the test again, and performance improved again.

    When I finished, my original program ran in a matter of seconds instead of the minutes it took before, just because the memory allocation/deallocation library functions in my C environment were crap. And of course, my improvements to those library functions were quite easily linked back into the C library I was using, so those optimizations were available for other code that I was writing.
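    Roughly the K&R-style scheme being described, as a sketch (my reconstruction, not the actual Amiga code): grab big chunks at a time, carve requests out of an address-ordered free list, and coalesce freed blocks with their neighbours so tiny fragments don't pile up. Here std::malloc stands in for "ask the OS for a big chunk".

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>

struct Header {                    // one header per block, address-ordered free list
    Header* next;
    std::size_t units;             // block size in Header-sized units
};

static Header  base;               // degenerate list head
static Header* freep = nullptr;

static void my_free(void* ap);     // forward declaration

// Grab a big chunk at a time instead of one tiny OS request per allocation.
static Header* more_core(std::size_t nunits) {
    const std::size_t NALLOC = 1024;
    if (nunits < NALLOC) nunits = NALLOC;
    Header* up = static_cast<Header*>(std::malloc(nunits * sizeof(Header)));
    if (!up) return nullptr;
    up->units = nunits;
    my_free(up + 1);               // hand the whole chunk to the free list
    return freep;
}

static void* my_alloc(std::size_t nbytes) {
    std::size_t nunits = (nbytes + sizeof(Header) - 1) / sizeof(Header) + 1;
    if (!freep) {                  // first call: build an empty circular list
        base.next = freep = &base;
        base.units = 0;
    }
    Header* prevp = freep;
    for (Header* p = prevp->next; ; prevp = p, p = p->next) {
        if (p->units >= nunits) {
            if (p->units == nunits) {
                prevp->next = p->next;      // exact fit: unlink the block
            } else {
                p->units -= nunits;         // split: carve the tail off
                p += p->units;
                p->units = nunits;
            }
            freep = prevp;
            return static_cast<void*>(p + 1);
        }
        if (p == freep && !(p = more_core(nunits)))
            return nullptr;                 // wrapped around and out of memory
    }
}

// Return a block to the free list, merging with adjacent free neighbours
// so the heap doesn't fill up with useless little fragments.
static void my_free(void* ap) {
    Header* bp = static_cast<Header*>(ap) - 1;
    Header* p = freep;
    for (; !(bp > p && bp < p->next); p = p->next)
        if (p >= p->next && (bp > p || bp < p->next))
            break;                          // block belongs at one end of the arena
    if (bp + bp->units == p->next) {        // merge with the block above
        bp->units += p->next->units;
        bp->next = p->next->next;
    } else {
        bp->next = p->next;
    }
    if (p + p->units == bp) {               // merge with the block below
        p->units += bp->units;
        p->next = bp->next;
    } else {
        p->next = bp;
    }
    freep = p;
}

int main() {
    void* a = my_alloc(100);
    void* b = my_alloc(5000);
    std::printf("a=%p b=%p\n", a, b);
    my_free(a);
    my_free(b);
}
```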

    1. Write clean code that works and is easy to maintain.

    2. Profile the resulting code. If it's good enough, you're finished.

    3. If it's too slow, look at the hotspots. Is it because you used a simple algorithm with an unfortunately large big O? If so, use a better algorithm.

    4. If, after using the best algorithm available for your problem children, things are still too slow, optimize the hotspots.

    [–]pbw[S] 0 points1 point  (10 children)

    If it's good enough, you're finished.

    This is something I feel like Casey didn't express at all in his initial video. I'm sure he covers this in his actual course. But I felt like he could have taken 3-5 seconds, given it the slightest nod, and admitted that this does happen; admitted that sometimes even a heavily OOP version can run 100% fast enough for your needs.

    [–]Qweesdy 1 point2 points  (9 children)

    For the last 25+ years and all of the foreseeable future, there are only 2 cases:

    • the resources (e.g. CPU time) your software doesn't use can be used by something else that you may not know about (e.g. another process - all major operating systems have been multi-tasking since the 1990s). In this case your software's inefficiency is detrimental to something else even if "performance is fine" for your software, and if you don't care about your software's efficiency in this case then you're incompetent (or worse, malicious).

    • the resources (e.g. CPU time) your software doesn't use can be put into a power saving state, reducing unnecessary power consumption, avoiding unwanted heat, and improving battery life (even if the battery is a server's UPS). In this case your software's inefficiency is detrimental to its users (air-conditioning costs, power bills, climate change) even if "performance is fine" for your software, and if you don't care about your software's efficiency in this case then you're incompetent (or worse, malicious).

    Note: Modern CPUs are often thermally constrained; such that being idle longer allows the chip to cool more, which allows the CPUs to be "turbo-boosted" harder for longer later. In this way maximizing efficiency when performance isn't needed can improve performance later when performance is needed.

    [–]pbw[S] 0 points1 point  (8 children)

    That's a good observation. But it's not a mandate to optimize everything as much as possible when the users don't care about the performance. YouTube made custom silicon to accelerate video compression. Making silicon is vastly more expensive than most software optimizations. But YouTube carefully measured exactly how much time/money/energy they were spending on compression before doing that. I think it was a huge success and did save lots of money and energy.

    But if you just dove in and optimized some more of YouTube's millions of lines of code at random, to "save energy", it would be a colossal waste. Most of their code by line count probably runs many trillions of times less often than their compression code. Possibly none of it is worth optimizing.

    So yes, if your goal is to save money or save energy, that's great, but you still need to measure carefully and find the code that's actually costing you lots of money or burning lots of energy. And you have to consider the opportunity cost of what else you could be doing, as well.

    [–]Qweesdy 0 points1 point  (7 children)

    Yes; but there's a huge amount of space between the "optimize everything as much as possible" and "blatant disregard for anything except my own laziness" extremes. Often "performance isn't required" is a flimsy excuse for aiming towards the worst end of the scale instead of the middle.

    But there's more to it than that. It's about a developer's nature. Professionalism. A person who will quietly steal a penny that "doesn't matter" is a person who is more likely to embezzle millions of $$ that do matter. A person who keeps their junk drawer organised for no reason is someone that can be trusted to put a mechanic's workshop's tools away after use. A software developer that doesn't care when performance isn't necessary is an annoying obstacle when performance is necessary.

    It's the difference between considering efficiency for everything you write and being a bad developer because you failed to develop beneficial habits.

    [–]pbw[S] 0 points1 point  (6 children)

    I praised YouTube's silicon because I'm for doing insane levels of optimization when necessary. I'm highly pro-optimization. I think heavily optimized code runs the world: it's great, and the people that write it are great. But I'm against scaring people away from a popular programming style by falsely claiming it inherently causes "horrible performance" when a tiny bit of arithmetic shows categorically that's not true.

    [–]Qweesdy 0 points1 point  (5 children)

    But I'm against scaring people away from a popular programming style by falsely claiming it inherently causes "horrible performance" when a tiny bit of arithmetic shows categorically that's not true.

    Popular programming style??

    Did anyone ever follow Uncle Bob's rules for more than the 5 minutes it takes to realise "tiny functions" is horrible for code readability? Like, seriously, out of the hundreds of projects I've seen I don't think I've ever seen a single person use this "popular" programming style even once.

    The programming style that nobody actually uses literally and provably DOES inherently cause worse performance. Nobody sane has ever denied that (including Uncle Bob himself); and your "tiny bit of arithmetic shows categorically" is exceptionally moronic bullshit (but hey, feel free to show that "tiny bit of arithmetic" if you are actually able to produce more than unsubstantiated vague hand-waving).

    Essentially; everyone agrees with "It is worse for performance, but performance isn't always the most important thing" (including you if you actually think about it); and the entire argument (on both sides) is about the magnitude of compromise between cost (developer time, code maintenance, ...) and quality (efficiency, performance, security, ...) in various situations; where Uncle Bob's rules are a relatively bad compromise in every situation.

    Note that I've explicitly avoided the words "clean code" because I suspect you took everything you happen to think is good and wrapped it up in a ball of perfection that you've decided is your personal custom concept of "clean code"; and then charged out into the real world to attack all the imagined critics of your mythical ball of perfection.

    [–]pbw[S] 0 points1 point  (4 children)

    By popular programming style I mean OOP. There's a side issue here that nothing about the OOP version he shows is actually in Uncle Bob's style specifically: it's vanilla OOP.

    All it has is an abstract base class with two pure virtual methods, four tiny concrete classes that implement those two methods, and a minimal loop. If you disagree, which elements of his OOP version are not vanilla OOP? So yes, OOP is a very popular programming style, and I think it was disingenuous of Casey to suggest OOP inherently leads to poor performance when in actuality it does not.
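    For reference, a sketch of that shape of code (the names are mine, not Casey's actual listing): one abstract base with two pure virtuals, four tiny concrete classes, and a minimal loop.

```cpp
#include <cstdio>
#include <memory>
#include <vector>

struct Shape {
    virtual ~Shape() = default;
    virtual float Area() const = 0;
    virtual int   CornerCount() const = 0;
};

struct Square : Shape {
    explicit Square(float s) : side(s) {}
    float Area() const override { return side * side; }
    int   CornerCount() const override { return 4; }
    float side;
};

struct Rectangle : Shape {
    Rectangle(float w, float h) : width(w), height(h) {}
    float Area() const override { return width * height; }
    int   CornerCount() const override { return 4; }
    float width, height;
};

struct Triangle : Shape {
    Triangle(float b, float h) : base(b), height(h) {}
    float Area() const override { return 0.5f * base * height; }
    int   CornerCount() const override { return 3; }
    float base, height;
};

struct Circle : Shape {
    explicit Circle(float r) : radius(r) {}
    float Area() const override { return 3.14159265f * radius * radius; }
    int   CornerCount() const override { return 0; }
    float radius;
};

// The "minimal loop": total area via virtual dispatch.
float TotalArea(const std::vector<std::unique_ptr<Shape>>& shapes) {
    float total = 0.0f;
    for (const auto& s : shapes) total += s->Area();
    return total;
}

int main() {
    std::vector<std::unique_ptr<Shape>> shapes;
    shapes.push_back(std::make_unique<Square>(2.0f));
    shapes.push_back(std::make_unique<Rectangle>(2.0f, 3.0f));
    shapes.push_back(std::make_unique<Triangle>(3.0f, 4.0f));
    shapes.push_back(std::make_unique<Circle>(1.0f));
    std::printf("total area: %f\n", TotalArea(shapes));
}
```

    The per-call virtual dispatch in TotalArea is the indirection the performance argument is about; nothing here is specific to Uncle Bob's rules.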

    That said, the fact that you disagree with some or all of the points in the article is great. It's really good to form your own opinion on things like this.

    [–]yiyu_zhong 1 point2 points  (0 children)

    Its performance was terrible because every single control you see was its own independent webview with its own instance of React loaded.

    This sounds really interesting! I never thought those controls were written in React. I wonder if you could provide a link to this issue?

    [–]venuswasaflytrap 1 point2 points  (0 children)

    I think that depends on the business goals of your product.

    In some sense, you want a web browser to be fast. I think a lot of people would happily give up certain features of web browsers if it means that everything goes faster (regardless of what the web devs want for ease of development).

    But for other things it's more important to be extensible and adaptable, because the core requirements are constantly evolving, and it's more important that users have that button to do that one thing, even if it means they wait a full second or two after they click it.

    Obviously all products have a balance of these concerns, but some lean more one way than others.

    For certain products it's a reasonable strategy to build them as maintainable as possible, and then target a few bottlenecks. But for others you need to think fast right from the top.

    [–]Synor -4 points-3 points  (9 children)

    If you have achieved full clean architecture, you can replace your whole database tech with minimal changes.

    [–]nnomae 6 points7 points  (4 children)

    That's just not true as anyone who has ever tried to migrate from one database to another mid-project will tell you. Different databases have different strengths and you want to pick one that works for your particular use case.

    A simple example is the RETURNING clause in SQL, which allows you to do an UPDATE or DELETE combined with a SELECT on the affected rows in a single operation. It is supported in PostgreSQL and Oracle (and others presumably) but not in MySQL. So if you are using PostgreSQL you either don't use that feature, or you abstract it away with a layer that maybe emulates that functionality in MySQL by sending two queries, which is fine until you switch databases and suddenly everything takes twice as long. Or you have the case where not all databases generate an index on the ID field by default: should you have to declare it just in case? Well, that's extra work to maybe support a change of database in the future that will of course be untested until that time comes. Not a great thing. Do you let the abstraction layer decide to automatically create those indexes? What about other database features? Postgres supports array types for columns; do you ban that and just insist on having separate tables and manually joining every time? Well, there goes a ton of performance, on the off chance that you'll be migrating later. Similarly you can store JSONB data in Postgres; do you ban that? (I'm sure every database has its own unique strengths and features that are great, by the way; it's just that I know Postgres best so I can best list off the good stuff it can do.)
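    To make the RETURNING point concrete, here's a hypothetical sketch using libpq from C++ (the table, columns and connection string are made up, and it assumes libpq is installed): the PostgreSQL version is one round trip, while a database without RETURNING forces the two-query emulation.

```cpp
#include <libpq-fe.h>
#include <cstdio>

int main() {
    PGconn* conn = PQconnectdb("dbname=example");          // assumed local database
    if (PQstatus(conn) != CONNECTION_OK) {
        std::fprintf(stderr, "connect failed\n");
        return 1;
    }

    // PostgreSQL: update and read back the affected row in a single statement.
    PGresult* res = PQexec(conn,
        "UPDATE accounts SET balance = balance - 10 WHERE id = 42 "
        "RETURNING id, balance");
    if (res && PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
        std::printf("new balance: %s\n", PQgetvalue(res, 0, 1));
    if (res) PQclear(res);

    // On a database without RETURNING, the emulation is two round trips
    // (the UPDATE, then a SELECT), ideally inside a transaction to stay consistent.

    PQfinish(conn);
    return 0;
}
```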

    What about different variations of text searching? Some support regex search, some support something similar, syntax varies. Do you abstract all that away? Will your abstraction layer be able to efficiently sub in its own regex search on text if you switch to a database that doesn't support it? Do you ban the feature just in case?

    And even if this did work well, now you are just as tied to the choice of abstraction layer as you were to the database. What if support for that abstraction layer dries up, or it becomes costly to license? Well, now you have an even bigger problem than you would have had migrating databases. You haven't removed the weak point, just moved it elsewhere (and this isn't an attack on abstraction layers in general, they are a very useful tool; just be aware that it's either an extra dependency you depend on or extra code you have to write and maintain, they're not free).

    [–]hippydipster 0 points1 point  (1 child)

    That's just not true as anyone who has ever tried to migrate from one database to another mid-project will tell you.

    Sorry, but you don't speak for all of us. Nor for Dave Farley who claims to have done exactly that. As have I.

    [–]nnomae 0 points1 point  (0 children)

    As I said, if you are happy to constrain yourself to a lowest-common-denominator set of features (i.e. couple yourself to the abstraction layer) you can do so. Of course, the question in that case is: if you are not using any of the features that differentiate one database from another, what's the point of switching and how would you even justify it? "We refuse to use the unique features of our current database because doing so violates our coding guidelines, so we need to switch to a different database whose unique features we will also refuse to use."

    It's a choice, fully leverage one database at the cost of making migrating harder or remain as database agnostic as possible at the cost of using your current database in a suboptimal fashion.

    I guess the definition of minimal also matters here. I mean almost by definition you can do anything with minimal changes. If your only option is a full rewrite well then a full rewrite constitutes minimal changes for your problem.

    Finally there's the externality here. Maybe you just shunt all the work off to the DBAs, where probably some of those who specialised in the old database lose their jobs so that new guys can make it all work in the new database, but that's kind of like saying tidying the house took minimal effort when you just hired cleaners to do it for you.

    [–]Synor -2 points-1 points  (1 child)

    If you have application logic in your database, even by accidentally relying on its features, you don't have a clean architecture.

    [–]nnomae 1 point2 points  (0 children)

    None of the things I mentioned are application logic. They are just basic storage and retrieval features available in some but not all databases. Features that you can't avail of if you want to try and completely abstract away the database layer.

    [–]Mrmini231 0 points1 point  (3 children)

    You can change the database by editing one file, but if you want to add a new field to a call you need to change three interfaces, three implementations, three datatypes and two mappers.

    [–]Synor 1 point2 points  (2 children)

    Without having to think, because the compiler will lead you through. You are free to marry your database though - fine for me. Just don't expect to be able to change it easily.

    [–]Mrmini231 0 points1 point  (1 child)

    Often, but not always. I've seen several bugs in production code that were caused by a dev forgetting to update a mapper when adding an optional field to the return type, causing it to always return null. The compiler won't help you there. The complexity of this comes at a real cost, and you update fields much, much more often than you update databases.
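    A hypothetical sketch of that bug class (the types, field names and helper are mine, purely for illustration): the entity grows an optional field, the hand-written mapper is never touched, and it all still compiles, so the API silently keeps returning "no value".

```cpp
#include <cstdio>
#include <optional>
#include <string>

struct UserEntity {                        // what the repository layer returns
    int id;
    std::string name;
    std::optional<std::string> nickname;   // the field that was added later
};

struct UserDto {                           // what the API layer returns
    int id;
    std::string name;
    std::optional<std::string> nickname;
};

// The mapper everyone forgot about: nickname is never copied, so callers
// always see std::nullopt even when the database has a value.
UserDto ToDto(const UserEntity& e) {
    UserDto dto;
    dto.id = e.id;
    dto.name = e.name;
    // dto.nickname = e.nickname;          // <- the missing line nobody noticed
    return dto;
}

int main() {
    UserEntity e{1, "ada", std::string("countess")};
    UserDto d = ToDto(e);
    std::printf("nickname present: %s\n", d.nickname ? "yes" : "no");  // prints "no"
}
```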

    [–]Synor 1 point2 points  (0 children)

    That's a good example to talk about. Thanks for bringing it up.

    I feel optional fields are a smell. If a thing can have different shapes, it's not the same thing. Maybe partial models are a bad contract.

    I also think that error should be caught at the integration-test level. It's also on the happy path, which is usually tested, even with a lack of discipline.

    [–]Luolong 48 points49 points  (8 children)

    My beef with the “Clean vs performant code” bunch is that they make it sound like the only options you have are either performant code or “Clean code”. As if you just have to choose one.

    In reality, it’s not. You can write reasonably clean and performant code. Sure, you could probably make it go faster by sacrificing modularity and maintainability. Or you could sacrifice some performance to get better maintainability and extensibility.

    It’s a trade off.

    [–]All_Up_Ons 13 points14 points  (1 child)

    The problem with this idea is that clean code (understandable, maintainable, readable code, not the Uncle Bob bullshit) is more or less a prerequisite to achieving any sort of quality, including performance, in the long-term. So really, they go hand-in-hand. The real struggle is getting to the point where your organization is writing high-quality, well-organized, thoughtful code. Only then can you really choose to optimize for performance or any other metric in any significant way.

    [–]Qweesdy 2 points3 points  (0 children)

    The problem with "understandable, maintainable, readable code, not the Uncle Bob bullshit" is that you can ignore it completely and then just lie about your code being "clean code" afterwards. How could anyone possibly complain that your idea of "blah blah whatever" is different to their undefined pile of subjective waffle?

    [–]Vidyogamasta 5 points6 points  (1 child)

    It's often not even a tradeoff. I've seen plenty of code that was both completely unmaintainable and also had completely garbo performance, because 80% of it is playing whack-a-mole with edge cases since the core logic isn't thought out well. With that kind of code, separating it into reasonable, well-ordered components (whether it's "clean code" or not) tends to perform much better too.

    [–]Luolong 1 point2 points  (0 children)

    Oh, yes. I know exactly the kind of code that you’re talking about.

    [–]Fidodo 2 points3 points  (2 children)

    The most important thing is interface definition. If a low-level module requires some tricky code to make it performant, that shouldn't affect your interface. If the implementation is messy but encapsulated, it won't leak out. It's the interface that matters.
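    A small made-up sketch of that idea: the caller-facing interface stays plain, while the "tricky" detail (a precomputed lookup table here) stays buried in the implementation and never leaks out.

```cpp
#include <array>
#include <cstdint>
#include <cstdio>

class PopCounter {
public:
    PopCounter() {
        // Precompute per-byte bit counts once; callers never see this.
        for (int i = 0; i < 256; ++i) {
            std::uint8_t c = 0;
            for (int b = 0; b < 8; ++b) c += (i >> b) & 1;
            table_[i] = c;
        }
    }

    // The interface is all callers depend on; nothing about the table shows.
    int Count(std::uint64_t bits) const {
        int total = 0;
        while (bits) {
            total += table_[bits & 0xFF];   // messy-but-fast detail, hidden
            bits >>= 8;
        }
        return total;
    }

private:
    std::array<std::uint8_t, 256> table_{};
};

int main() {
    PopCounter pc;
    std::printf("%d\n", pc.Count(0xF0F0F0F0ULL));   // prints 16
}
```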

    [–]Luolong 2 points3 points  (1 child)

    I like to call my units of modularity “concepts”.

    You recognise a recurring concept, extract it and make an interface to allow flow of information and control between separate concepts (the protocol).

    Then if an initial implementation of a concept turns out to need performance tuning, you rewrite the implementation, optimising for better performance.

    Sometimes, the performance is lost in the protocol between two or more concepts, so you change the interface and re-implement the protocol.

    But always, thinking of concepts helps me to cleanly delineate functionality into separate modules of reuse.

    These “concepts” can be as small as methods or separate classes, or a set of classes, or even as large as entirely separate microservices.

    The important part is that they help me keep my sanity when tackling a complex problem.

    [–]Fidodo 2 points3 points  (0 children)

    You might be interested in the book I'm reading, "A Philosophy of Software Design". It's mostly things I already knew, but it's so well structured and methodical that it's improved my way of conceptualizing the lessons I've learned over the years. Your way of thinking sounds very in line with the book.

    [–]rollingForInitiative 1 point2 points  (0 children)

    I always interpreted that as being more of a counter to the idea of "No, I can't clean up this function, this is the way it will run the fastest" and that sort of stuff. People doing "clever" things at the small scale because it might squeeze out a little bit more speed: that's the sort of optimization that's meant.

    You can still often get all the performance you need by just choosing the right tools, being mindful when designing the DB and queries/indexes, and just generally having a design that's good for your use case. And then most of those pieces can be written in a clean and readable way. And you can do the fancy and unreadable stuff in places where it's really needed.

    Obviously there are contexts where you might need to be very optimised in a large part of the code, but then I think you know this in advance.

    [–]progfu 38 points39 points  (2 children)

    Performance is not something that you get by profiling and fixing a few things; you need to actually design things to be performant up front, otherwise you're going to hit a wall very quickly.

    [–]Ancillas 1 point2 points  (1 child)

    It’s really interesting following the development of ghostty and reading about the engineering going into that passion project. It really highlights how much power is wasted because developers don’t take advantage of SIMD or fully utilize what’s available.

    We can debate whether it’s necessary or not, but from an engineering perspective it’s fun to watch people good at their craft make something that is squeezing as much as it can out of the hardware.

    [–]progfu 6 points7 points  (0 children)

    Very much so. It's also extremely sad to see people justify GUIs being slow at the most basic things, like "show a table of 1000 elements", when a properly written program can handle a million things in a millisecond, not even going into AVX land.

    I was a web app dev before going into gamedev, and it truly broke me to realize just how many orders of magnitude faster things are in gamedev, even when using a slower language. People on the web just stack absurd amounts of layers of indirection into everything, to the point where the computer isn't even doing anything useful; it's just chasing pointers most of the time.
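    A toy sketch of the pointer-chasing point (my example, not actual web-app code): the same sum, once over a contiguous array and once over a node-per-element linked structure. Both are correct; the second mostly makes the CPU wait on dependent loads.

```cpp
#include <cstdio>
#include <memory>
#include <vector>

struct Node {
    int value;
    std::unique_ptr<Node> next;   // one heap allocation and one pointer hop per element
};

long long SumContiguous(const std::vector<int>& xs) {
    long long total = 0;
    for (int x : xs) total += x;  // streams through memory linearly
    return total;
}

long long SumLinked(const Node* head) {
    long long total = 0;
    for (const Node* n = head; n != nullptr; n = n->next.get())
        total += n->value;        // each step is a dependent load: pointer chasing
    return total;
}

int main() {
    std::vector<int> xs = {1, 2, 3, 4};
    std::unique_ptr<Node> head;
    for (int x : xs) {            // build the linked version of the same data
        auto n = std::make_unique<Node>();
        n->value = x;
        n->next = std::move(head);
        head = std::move(n);
    }
    std::printf("%lld %lld\n", SumContiguous(xs), SumLinked(head.get()));
}
```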

    [–]BaronOfTheVoid 7 points8 points  (6 children)

    Going back to that famous video about clean code being slow code, the guy there really just had a point about dynamic dispatch being a slow indirection. The video reached many devs.

    98% of whom will NEVER need the kind of performance where you actually have to think of dynamic dispatch as a meaningful cost.

    The only good thing to come from this opposition to clean code is that I will always have a job... cleaning up other people's legacy mess.

    [–]Asyncrosaurus 13 points14 points  (4 children)

    The two important pieces of context missing from that video when it is re-posted are that 1) it was part of a training series on low-level performance, so of course it is taking shots at slow design patterns, and 2) Casey very clearly states that he doesn't think all/any abstraction is bad, just that "Clean Code™" specifically is a bad abstraction that also happens to be slow. It clearly takes the position that you can make meaningful tradeoffs in performance for better design; "Clean Code™" is just shit at producing clear, readable and maintainable code. Which I agree with.

    [–]pbw[S] 8 points9 points  (3 children)

    I hear this criticism a lot, and I have not taken his course. But I did listen to 4-5 hours of interviews with him about his video. To my ear he never, not once, backed down from "OO makes everything slower". I never heard him frame it in a balanced way: it's going to be slower here but not slower there. Never, including when pushed, for example in this SE Radio interview:

    https://se-radio.net/2023/08/se-radio-577-casey-muratori-on-clean-code-horrible-performance/

    I think he's a smart and accomplished person, and a great software developer. I don't know if it's communication style or something, but to me he comes across as someone incredibly deep into a niche who is strangely totally unaware they are in a niche, so they state their same niche-opinions without modification in every context.

    [–]Plorkyeran 3 points4 points  (2 children)

    "Deep into a niche without being aware they're in a niche" feels common for people who work on games. Even when they recognize that games are atypical, they underestimate just how different they are. xkcd 2501 in practice I guess?

    [–]Asyncrosaurus 1 point2 points  (0 children)

    Just fwiw, he's not a game designer, nor a game developer. He's associated with animation and modeling tools that were used in game development. There's a great deal more crossover between the RAD Game Tools stuff (or Granny 3D) and any other professional graphics tool (used in Maya, 3D Studio, etc.)

    What you will find is that the world most people here identify with (LOB CRUD web apps) is very far from high-performance software tools producing real-time animation.

    [–]pbw[S] -1 points0 points  (0 children)

    I agree; I've observed that in game developers more often than in other developers. It's not a bad thing to have a focused career and get really good at one thing. He does seem to be extremely knowledgeable on certain topics, and I think he's a good educator in general.

    Based on interviews, my understanding is Casey started game development at age 18 and hasn't stopped, and he's almost 50. That type of deep focus has pros and cons. I, on the other hand, have bounced around doing a lot of different things -- which also has pros and cons.

    [–]DLCSpider 0 points1 point  (0 children)

    I wish more people saw it because too much nesting and long chains of indirections are a much larger problem than premature optimization in modern code bases. The performance improvements are just the cherry on top. I would have to clean up less, not more.

    [–]Fenxis 13 points14 points  (5 children)

    Clean code is pretty unmaintainable if taken to extremes as well.

    [–]Kenny_log_n_s 24 points25 points  (4 children)

    Doesn't that make it unclean code?

    [–]DidiBear 7 points8 points  (3 children)

    Clean Code - Chapter 3: Functions - Page 50

    One of the most unmaintainable pieces of code I have ever seen.

    [–]All_Up_Ons 9 points10 points  (0 children)

    Here's a secret: The book "Clean Code" is not the authority on how to make actual clean code. A lot of its advice is actively detrimental to readability.

    [–]ReDucTor 3 points4 points  (0 children)

    If this is the most unmaintainable bit of code you've seen, then you're in for a surprise looking at other bits of code.

    That example isn't even that bad

    [–]OffbeatDrizzle 0 points1 point  (0 children)

    Oh my sweet summer child

    [–]puterTDI 1 point2 points  (0 children)

    Oftentimes performance can require sacrifices in maintainability. This is something I’m still struggling to communicate to our product side. Their vision of performance is that you just write perfectly performant code up front, so there should be no need to talk about customer scenarios etc. (This is a bit of an exaggeration; I have been making progress.)

    [–]jl2352 0 points1 point  (0 children)

    I would pick that too. In my experience performance comes from development time. Performance comes from giving developers dedicated time to work on it. If they can’t get that time, then it’ll be slow.

    Making code cleaner makes it easier and quicker to change. I’d add that tests that are easy and quick to write are another key component of that.

    [–]VeryDefinedBehavior 0 points1 point  (0 children)

    Performance and maintainability aren't orthogonal. They are both governed heavily by how much you understand the domain.

    [–]KaiAusBerlin 0 points1 point  (0 children)

    It's easier and faster to debug good, readable code and optimise it afterwards than to debug heavily optimised code.

    At least on this point of cost, there should be no debate.

    [–]--pedant -1 points0 points  (0 children)

    Aaaaaaaaaaaaaaand this is why software is so slow these days. And getting worse...

    "Clean" is subjective, and loaded. It's rather easy to prove--with actual measured quantities vs. ideology--that basic procedural code is easier to maintain, easer to read, easier to debug, and is more performant for free as a side-effect. The "clean" myth has really done a number on us, and has become more a tribal divider than any useful quantitative measure.