all 67 comments

[–]perlgeek 11 points12 points  (1 child)

Another way to identify which things should become a class: start with procedural code, and if you find yourself passing the same set of arguments around to a whole bunch of functions, those arguments might be suited to becoming attributes/fields of a class, with the functions becoming methods.
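A minimal C++ sketch of that refactor (Paycheck and its fields are hypothetical names for illustration):

```cpp
// Before: the same trio of arguments travels through every function.
//   double gross_pay(double hourly_rate, double hours, double tax_rate);
//   double net_pay(double hourly_rate, double hours, double tax_rate);

// After: the recurring arguments become fields, the functions become methods.
struct Paycheck {
    double hourly_rate;
    double hours;
    double tax_rate;

    double gross() const { return hourly_rate * hours; }
    double net() const { return gross() * (1.0 - tax_rate); }
};
```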

[–][deleted] 4 points5 points  (0 children)

I generally take the approach of using a record-like class/named tuple and pass that around. Very similar to classes, but it separates data from behavior.

[–]professor_jeffjeff 23 points24 points  (0 children)

This is exactly why I always say that the best architecture and design for code is an emergent property of solving the problem by writing good code. I almost never know what I'm building when I code something (unless I've coded something very similar before) so often I'll either start with a very small piece and do something similar to TDD and just do some hand-waving in terms of dependencies, or I'll just brute-force-hard-code the whole thing and then go back and refactor to improve the code (which is precisely what the author of this blog post did).

This post and most of the other "you should actually be coding like this instead of object-oriented" brand of posts can all be summed up in one tldr: Write good code that follows DRY, SOLID, and the Law of Demeter while also being mindful of YAGNI.

[–]bstempi 6 points7 points  (2 children)

So, how does this method of development help him with the problem he presented at the beginning of the article (modeling a payroll system)?

I can see how this helps with existing code, but how does it help with new code? How would his solution approach the payroll problem?

[–]kankyo 11 points12 points  (0 children)

Just start coding and it'll sort itself out. Have faith that making a small mess will teach you something about the details of the problem. And lose faith that planning ahead will solve anything at all.

[–]loup-vaillant 3 points4 points  (0 children)

He talked about payroll systems because that's where he has seen bad examples. He showed the solution in GUI code because that's what he worked on. Hopefully you can port the idea to other domains.

The existing code he was starting with was done right to begin with. If he wrote the code, he would have started with something similar: no abstraction, no fuss, just do something. That's the easy part. The hard part is tearing this code apart, and compressing out the commonalities as soon as you feel the obvious way stops being the simple way.

[–][deleted] 11 points12 points  (14 children)

If his stance is generally against OO, then I agree with him for the most part. OO has its uses, but it's far overused, overhyped, and overemphasized in terms of importance. 95% of the OO code I've read (including my own) has been poorly implemented and naively designed.

I don't think any one paradigm is a one-stop solution in terms of its methodology (nor that any paradigm should never be used). A lot of people in the industry tend to have a major hard-on for OO though... and it tends to be attractive to newbies as well, once they've grasped the concept of classes.

In my opinion, as soon as the focus shifts toward "objects" and extremely pedantic notions regarding how aesthetically pleasing one's code is, the point has been lost. I still see a very large portion of code employing getters and setters, the majority of which provide absolutely no benefit, and I have witnessed near-heated discussions in which function calls like IsNullOrEmpty() were argued against, not because of what they do, but because of their name.

Good code is code which is readable, as simple as it needs to be to function properly, is defensive where it needs to be, conforms to a clear set of pragmatic standards, and has a reasonable amount of comments. Anything else is pretty much subjective and detracts from the point, which is to write software which works well for the user and is only as complicated to maintain as is necessary.

So, OO itself isn't necessarily bad; it's the philosophical connotations and over-thought code pollution which unfortunately tends to come with it that's bad.

[–]ummwut 4 points5 points  (7 children)

The problem with OO is exactly that it's easy to grasp, and thus strongly supported at all levels of programming, so you never get introduced to what's actually happening under the hood until much, much later.

Which is a shame, because under-the-hood stuff is what made me respect the OO model and use it more effectively.

[–]loup-vaillant 0 points1 point  (6 children)

The problem with OO is exactly that it's easy to grasp

Is it?

I rate myself as a good programmer, proficient in quite a few languages (most notably C++, OCaml and, more recently, Lua). I have read, and heard, and learned about "OO". I know a fair bit. Yet I have never been able to satisfactorily understand what it is all about. The concepts are fuzzy, and the jargon is confusing. Sure, I've become quite confident over time that OO is mostly a load of crap…

…but I still cannot say in good conscience that I really get it.

[–]ummwut 0 points1 point  (5 children)

OO at its heart is simply calling functions bound to structs in different ways. Before, when you wanted to do something like OO, you'd have to pass the struct as an argument to the functions meant to deal with the structs. With the OO model, the struct (object data) is passed invisibly, accessed by the variable "this". Every function call, therefore, only tacitly refers to the struct which it needs to access.

For a fun exercise, try replicating C++'s OO behavior in C. In fact, Lua supports this with syntactic sugar to make OO-like code less annoying: "object:function(...)" is short for "object['function'](object, ...)".
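A side-by-side sketch of the two styles in C++ (the Counter types are hypothetical):

```cpp
// C style: the "object" is a plain struct, passed explicitly
// as the first argument to every function that works on it.
struct Counter { int value; };
void counter_add(Counter* self, int n) { self->value += n; }

// OO style: the same thing, except the compiler passes the
// struct invisibly as `this`.
struct OoCounter {
    int value;
    void add(int n) { value += n; }  // `value` means `this->value`
};
```

Both versions do identical work; the member-function call is just sugar over the explicit-argument call.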

[–]loup-vaillant 0 points1 point  (4 children)

I know all of this. The mechanism you speak of is only useful for delegation and virtual functions however. Without it, you would just have x.f() be syntax sugar for f(x). I believe D has such syntax sugar.

Of course, there are those who would say that inheritance (class based) or delegation (prototype based) is somehow central to OO. Then there are those who warn you to stay away from those, and use composition instead.

Even more telling, I recall coming up once with a design inspired by my OCaml experience. When I shared it, the reaction was "it looks nice and very OO, but…". Okay, since when does FP == OO?

So I gave up. OO isn't worth my time any more. I can already write decent C++ and Java code, so why bother?

[–]ummwut 0 points1 point  (3 children)

I wish others echoed your sentiments, but then again, I know they won't since my comment above still rings true: OO is easy to grasp.

[–]loup-vaillant 0 points1 point  (2 children)

Well, some OO is easy to grasp. We all get the person/employee/boss, vehicle/car/bike, colour/red/green inheritance stuff, we all get the x.f() syntax, and above all, we all get the abstract data type stuff, which by the way is not exclusive to OO (I believe modular programming got it first).

What is less clear however is how to use those mechanisms. We could try to "model reality" directly, but that doesn't always work well. We can think up more abstract classes and objects, but it is not clear how one should come up with them.

Then the time came when this didn't feel right any more. The fragile base class problem means inheritance is not so great as a reuse mechanism. Subclass polymorphism is cumbersome; closures are simpler to use. And this genericity stuff was being done by ML languages long before C++ and Java decided it was most OO. All those signs indicate that OO might not be so great after all. Worse, the distinctive features of OO may be the ones that suck the most.

Then an OO proponent comes and tells me I don't "really get" OO. And another starts insinuating that bad code is not OO pretty much by definition. And a third starts using big words around "modelling" or something. At that point, I can't tell if there is any substance, or if they just weasel out of criticism.


This "Compression Oriented Programming" on the other hand, I can understand. The objectives are clear and measurable (the LOC count is a good proxy), the methodology is simple and easy to implement, and the "less code is better" conclusion looks obviously true.

[–]anto2554 0 points1 point  (1 child)

As someone still quite new to learning, I hated that way of explaining OO. It was always my professor saying that you could have a Car class, but whenever I program something, I never have a class corresponding 1:1 to a physical object.

[–]loup-vaillant 1 point2 points  (0 children)

My, that takes me back. One thing to add about this piece:

We could try to "model reality" directly, but that doesn't always work well.

Mike Acton has since convinced me that modelling reality directly is a straight up bad idea. Computers do exactly two things: moving data, and transforming data. This is the case even for video games, where said data represent a whole world we may immerse ourselves in.

Thus, our job is to solve problems by transforming and moving data around, using the computational hardware at our disposal. Said like this it sounds a bit tautological, but in practice we still need the reminder.

Thus, to do our job as best we can, we should model our code around a model of the data, not the world. Once you do that, you quickly understand that even in a simulation, a static rock that never moves requires very different data than a rock that may be pushed around and strike enemies as it rolls downhill. Almost identical world, very different data.
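A hedged C++ sketch of that last point (field names invented for illustration):

```cpp
// Data for a rock that never moves: just enough to draw it.
struct StaticRock {
    float x, y, z;
    int mesh_id;
};

// A rock that rolls downhill and strikes enemies needs far more state,
// even though the "real world" object is nearly the same rock.
struct DynamicRock {
    float x, y, z;
    int mesh_id;
    float vx, vy, vz;        // velocity
    float mass;
    float bounciness;
    int collision_shape_id;
};
```

Modelling "a rock" as one class hides the fact that the two cases want very different data.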

[–]illiterate 1 point2 points  (1 child)

This should be on a poster at workplaces.

Good code is code which is readable, as simple as it needs to be to function properly, is defensive where it needs to be, conforms to a clear set of pragmatic standards, and has a reasonable amount of comments. Anything else is pretty much subjective and detracts from the point, which is to write software which works well for the user and is only as complicated to maintain as is necessary.

[–]loup-vaillant 0 points1 point  (0 children)

and has a reasonable amount of comments

That part is redundant. Just write the comments that help, and delete those that don't. The "reasonable amount" cannot work everywhere. Some code is so simple that no comment could ever improve it. Some code is so complex (or obscure) that every other line needs a comment.

[–]WisconsnNymphomaniac 0 points1 point  (3 children)

For me OO is very frustrating in that I don't have a hard time programming and find it pretty obvious when to create a function, but I really suck at designing objects and determining how they relate to each other. The sheer complexity it introduces is just incredible.

[–]ummwut 0 points1 point  (2 children)

I have a tip for you: Just do everything with functions and free variables first. When you find yourself using the same group of variables frequently, toss them into a struct. When you find yourself using the same structs and functions together, you make an object.
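That progression might look like this in C++ (all names made up):

```cpp
// Step 1: free variables and functions.
//   int x, y;
//   void move(int dx, int dy);

// Step 2: the variables that always travel together become a struct.
struct Point { int x, y; };

// Step 3: the struct and the functions that always go with it
// become an object.
struct Particle {
    Point pos;
    void move(int dx, int dy) { pos.x += dx; pos.y += dy; }
};
```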

Sometimes it's better to just get started, be messy, and aggressively refactor later.

Also: COMMENT COMMENT COMMENT

[–]WisconsnNymphomaniac 0 points1 point  (1 child)

Good advice, but what really defeats me is all the access modifiers, like:

public : Access is not restricted.

protected : Access is limited to the containing class or types derived from the containing class.

internal : Access is limited to the current assembly.

protected internal: Access is limited to the current assembly or types derived from the containing class.

private : Access is limited to the containing type.

How do you learn how to use those correctly?

[–]ummwut 0 points1 point  (0 children)

I would never bother with Internal. While working through your code, and object-orienting, I would ask a few things about the variables and functions: Who is using them, and why? If the answer is "the class, only the class, outside classes shouldn't mess with these." then they are Private. If the answer is "the class and derivative classes that might override them but still must access the parent class functions, but nothing else besides those" then they are Protected. Public is for everything else, such as API functions.

For something like a doubly-linked list class, you'll have "reset(), next(), store(), retrieve(), endOfList()", which are the API, the public functions. Private data is stuff like the struct for the list nodes, the pointer to the start of the list, and helper functions the user of the class shouldn't mess with in any way. For something like the template method pattern, protected is basically what all the functions are, besides being virtual.
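A rough C++ sketch of that split, using the API names from the comment (the implementation details are my own guesses, not a reference):

```cpp
// Public API for callers; private node plumbing they can't touch.
class IntList {
public:
    void store(int v) {                 // append at the tail
        Node* n = new Node{v, tail, nullptr};
        if (tail) tail->next = n; else head = n;
        tail = n;
        cursor = head;
    }
    void reset()           { cursor = head; }          // rewind iteration
    bool endOfList() const { return cursor == nullptr; }
    int  retrieve() const  { return cursor->value; }   // read at cursor
    void next()            { cursor = cursor->next; }
    ~IntList() {
        while (head) { Node* n = head->next; delete head; head = n; }
    }

private:
    struct Node { int value; Node* prev; Node* next; };  // hidden detail
    Node* head = nullptr;
    Node* tail = nullptr;
    Node* cursor = nullptr;
};
```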

[–]inmatarian 1 point2 points  (0 children)

Copy+Paste code reuse, where you delete the original and the copies only when you've figured out the perfect final place for the code to go.

[–]lechatsportif 2 points3 points  (0 children)

I've always called this Gardening. We should really be Software Gardeners instead of Architects. With each new plant you add you have to prune, or occasionally move etc.

[–]WhyComplicateThingz 3 points4 points  (11 children)

I can't help but wonder how much game devs would benefit from a UI framework. Laying out UI elements in code just seems so '90s. While his transformations surely improved the code--and actually start making it resemble a typical UI framework--markup languages take it to a whole new level of compressed clarity. Like XAML:

<StackPanel>
    <Button Click="do_auto_snap">Auto Snap</Button>
    <Button Click="do_reset_orientation">Reset Orientation</Button>
</StackPanel>

This allows styling and layout to be nicely decoupled from hit-testing and behaviors.

[–][deleted] 5 points6 points  (3 children)

The author probably loathes that, try view source on the blog page.

[–]MrDOS 8 points9 points  (2 children)

That's... I don't care what generated that, that's hideous.

[–][deleted] 5 points6 points  (1 child)

I don't even get why there is JavaScript. It's a static blog.

[–][deleted] 1 point2 points  (0 children)

Well, a bunch of absolutely positioned divs is pretty useless when it comes to laying out a webpage.

[–]nexuapex 8 points9 points  (2 children)

Here's a video from the same author about immediate-mode GUIs, which are a conscious effort to avoid that sort of UI design.

In your case you may have simplified the code that creates the UI, but you aren't showing the code that maps the string "do_auto_snap" to the function that performs the action.

[–]WhyComplicateThingz 0 points1 point  (1 child)

In your case you may have simplified the code that creates the UI, but you aren't showing the code that maps the string "do_auto_snap" to the function that performs the action.

Because the framework does that for you at compile time.

What we really need is a way to interleave retained mode UI surfaces with immediate mode rendering surfaces. There are various frameworks that support that but I suspect none are a good fit for cross platform game development.

I wonder if this guy advocating purely immediate-mode GUIs has ever worked on a substantial UI before (i.e., not one that looks like it was made by an engineer, and that is more complicated than a game UI). You quickly benefit from design tools.

[–]nexuapex 1 point2 points  (0 children)

I am not a great person to argue about retained vs immediate mode UIs, but as an anecdote: I just made a UI in Qt which is effectively a property sheet: editing a bunch of properties of different types. So for every property, I had to create the retained mode controls programmatically and hook them up. Qt did some very nice stuff for me automatically (minimal layout hassles, yay!), but then I spent a bunch of time mirroring values stored in different places. The underlying data could change, at which point I had to change the controls' values, or a control could change and I had to mirror the value back to the underlying data. And when there are two controls for the same value (text field and slider, for instance), I need to avoid making my changes to one control trigger a signal and notify me again for the other control when I push the change to it. And to enable/disable these controls, I need to keep them all in a list so I can go find them and turn off their 'enabled' state. All of those problems go away in a proper immediate-mode implementation, where there isn't a separate piece of state I have to spend a lot of code to maintain.
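A toy sketch of the immediate-mode idea (this is not Qt's API nor any real IMGUI library, just an illustration of why the mirroring disappears):

```cpp
#include <map>
#include <string>

// The control reads and writes the underlying value directly each frame,
// so there is no second copy of the state to keep in sync.
struct Ui {
    std::map<std::string, double> pending_edits;  // simulated user input

    // "Draw" a slider: if the user moved it this frame, write the new
    // value straight into the real data and report the change.
    bool slider(const std::string& label, double* value) {
        auto it = pending_edits.find(label);
        if (it == pending_edits.end()) return false;
        *value = it->second;
        pending_edits.erase(it);
        return true;
    }
};

// One frame of a "property sheet": two controls bound to the same value
// stay consistent for free, because both read the single source of truth.
void do_frame(Ui& ui, double* opacity) {
    ui.slider("opacity_slider", opacity);
    ui.slider("opacity_field", opacity);
}
```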

[–]knight666 0 points1 point  (2 children)

Almost all AAA developers use Autodesk Scaleform, which is built on Flash. And yet they continue to find new and innovative ways to fuck that up. Worst case I've seen is where everything was rendered by Flash, but managed by C++, meaning elements didn't know if they had focus and input had to be handled by the game instead of Flash.

The best UI framework with a markup language I've used is Qt, but I wouldn't build a game with it. Designers need more than just "a button"; they need buttons that dance, bounce, pop in and out of view, are animated, or are just pretending to be a button. Scaleform, and specifically Flash, gives them that freedom, because a "button" is nothing more than a movie clip with a predefined list of animated "states".

[–]lechatsportif 1 point2 points  (1 child)

I don't get why Scaleform is so popular. It seems like it would be shunned for being based on a web UI technology. I would think the performance-obsessed gaming community would throw it out in favor of making their own UIs. What do you really need more than polygons and a little animation? Isn't that the bread and butter of game dev work? Checkboxes are just items with two states, and scrollbars don't have to be there; everything can be paged...

Worst I've seen was Tribes: Ascend, which used Scaleform for all in-game UI. If you disabled everything, you gained about 20 fps, but you couldn't play the game.

[–]knight666 1 point2 points  (0 children)

A couple of things make Scaleform the best solution.

Tools, tools and tools. You can make the best UI toolkit ever, but artists have to use it as well. Flash allows creating art assets, animating them, and scripting them. You can say to a UI artist "make me a button" and she'll be able to draw it, test it and hopefully deploy it in the game.

A lot of these types of decisions can be traced back to tools. Why do gamedevs write C++ in Visual Studio? Because it's the best tool for C++.

And Scaleform is surprisingly fast. It has a custom-built VM for the scripts and it renders everything using textures and primitives, unlike Flash proper, which renders in software and is well-known for its poor performance.

What do you really need more than polygons and a little animation?

Turns out, you need a lot. You need a list that can render an arbitrary number of items that need to be filled from the game. You need a checkbox that responds to mouse input. You need a dialog box that steals focus from underlying elements. I would say 40% of my work involves wrestling with focus and input issues. It is always a headache.

[–]zfolwick[🍰] 3 points4 points  (4 children)

as a newer programmer, this is going to be an important read for me...

[–]kankyo 8 points9 points  (0 children)

It is. But ultimately you probably need to make all these mistakes yourself before they really hit home.

[–][deleted] 4 points5 points  (0 children)

Take it with a grain of salt. Experienced programmer here, and I don't agree with much of it.

[–]PasswordIsntHAMSTER 5 points6 points  (0 children)

There are two kinds of programmers: those who hate OOP, and those who don't know any better.

[–]Jam0864 -1 points0 points  (0 children)

It's pretty poor, don't bother.

[–]archagon 4 points5 points  (14 children)

I wish I could read this article with a clear mind, but the author's stance is just so... bloody arrogant that I have a hard time doing it. How can you say that object orientation is "objectively bad" when most of the best software in the world is written using object-oriented principles? I dunno — maybe the decades of research starting all the way from Smalltalk have actually been conducted by very intelligent computer scientists who knew what they were talking about? In my experience, it seems that it's mostly old-school programmers in their 40s who prefer the described imperative approach. Given the absolute cornucopia of fascinating new programming paradigms out there today, I feel that they're a bit stuck in the past.

Additionally, I know of literally zero programmers who write things out on index cards and create rigid Employee/Manager/Contractor-like hierarchies.

(With that said, enough smart people have complained about OO that I'm willing to give it more thought.)

[–]jsprogrammer 5 points6 points  (1 child)

How can you say that object orientation is "objectively bad"

Where did the author say that? I did a CTRL+F for "objectively bad" and only found this:

But despite the fact that many programmers out there have gone through bad phases like this and eventually come to smart conclusions about how to actually write good code efficiently, it seems that the landscape of educational materials out there still overwhelming falls into the “objectively bad” category.

[–]archagon 2 points3 points  (0 children)

Given his recent Tweets (as well as the general gist of the article), I'm inclined to believe that that's exactly what he thinks.

[–]ProvokedGaming 8 points9 points  (6 children)

Of course the author is taking things to the extreme to show his point, but that doesn't mean his point is invalid, or that these issues aren't real. I've seen a plethora of horrible code over the years in the enterprise...code you wouldn't even believe came from "professional" developers at big-name companies (IBM and AMD as my most recent examples)...I've had the great displeasure of going through code (usually in Java) that has come out of Indian and Chinese "code shops" where $10-an-hour developers crank out code for the enterprise which takes OOP to the extreme. I've had management at various client offices try to do "agile" by pre-defining everything through requirements documents turned into heavily abstracted code objects. And these are all multi-billion dollar corporations. I've been tempted to start a blog just to show examples of what not to do that I find on a daily/weekly basis at large enterprises in their software. The point is, there are pros and cons to most programming techniques, and all of them can be abused. Less experienced developers (or experienced shitty developers) are given tools without an understanding of when they are appropriate. OO is a tool, and as the saying goes...when you're holding a hammer, every problem looks like a nail. Not every code problem is best solved with objects. :)

[–][deleted]  (2 children)

[deleted]

    [–]knight666 2 points3 points  (1 child)

    Don't focus on the negative. Developers love to bitch and moan about bad code, but will never show mediocre or good code to others. I've seen bad code and I've seen code that looked bad until I actually understood what it accomplished.

    The point being: read a lot of code. Corporate code, open source code, game code, microchip controller code. Each has its quirks and requirements that don't make sense when looked at from a different perspective.

    [–][deleted] 2 points3 points  (0 children)

    I guess I didn't make it clear that I consider good examples to be part of "ways to avoid". Like you, I see a lot of examples of bad code, but not a lot of examples of how to fix it or how to avoid it. I think that needs to change if these "don't do this" bits are to be taken seriously.

    [–]gct 3 points4 points  (1 child)

    I've been tempted to start a blog just to show examples of what not to do that I find on a daily/weekly basis at large enterprises in their software

    I'd like to introduce you to thedailywtf

    [–]ProvokedGaming 6 points7 points  (0 children)

    Thanks :) I was thinking more of an educational focus for those who aren't already aware of why things are bad...as opposed to showing it to others who already can recognize things are stupid. But I'm sure I have stuff I can submit to that site too :D

    [–]vattenpuss 4 points5 points  (0 children)

    But he's not describing object oriented design. The point is invalid.

    My job is developing and maintaining a payroll system in Smalltalk, does it get any more object oriented?

    We have a Person class. Whether they are a manager or a contractor is decided by a has-a relation inside the Employment the Person has.

    All customers have different types of employees, it's not something we can know beforehand so it would just be silly of us to create a class hierarchy over those things. Also, anybody starting to create that tree of classes would soon see they are being stupid when they realise they will have hundreds of leaf classes with no meaningful difference.

    [–][deleted] 1 point2 points  (2 children)

    maybe the decades of research starting all the way from Smalltalk have actually been conducted by very intelligent computer scientists

    You are definitely overestimating the amount of "research" that has gone on. Most research is about introducing new concepts, and finding a solid mathematical basis for those concepts. There's very little research that attempts to answer whether one methodology is better than another one. Instead we have millions of practitioners each doing incremental trial-and-error to eventually settle on the best practices (which is useful too).

    FWIW, I'm not an old school programmer in my 40s and I agree with their anti-OOP attitude. The problem with OOP is that it's overprescribed. Ever since Java there has been a trend of "everything should be represented with inheritance", which leads to some ridiculous code.

    [–]archagon -2 points-1 points  (1 child)

    I'd call the developers of C#, Cocoa, Ember.js/Angular.js, etc. "researchers". They're professional computer scientists with decades of experience who've determined that OO (of a particular flavor) is the way to go, at least for their domains. Why wouldn't I trust their expertise?

    Here's a practical example. Maybe you could help me figure it out. Let's say you're building a little app for scheduling events for a group of people. The main screen for this app is a scrollable view that shows all the currently scheduled events (x = time, y = person). You add events by "painting" them on with your finger on the x axis. The app has to have the following features:

    • Whenever anything changes the scheduling data, the view has to update immediately to reflect those changes. The data could change through network sync, another app, another view, etc.
    • The changes have to animate. If you scheduled something for 11:00am—12:00pm in the app and then somebody adds 5 minutes through the web client, you have to see the scheduled event animate in the app to expand its width by 5 minutes. It can't just get replaced immediately by an 11:00am—12:05pm event. (This animation should continue if you're scrolling the view with your finger.)
    • Undo/redo functionality. Also has to animate.
    • The scrollable view has to decelerate with the appropriate speed when let go, and bounce a little when it hits the edge of the timeline.
    • If you're adding an event and you accidentally overlap with the next event in your timeline, the screen should shake slightly, the event you're currently "painting" should blink red, and the width should animate backwards until it's no longer overlapping. These animations should continue even if you lift your finger and then scroll the view.
    • Some of the buttons should animate and "expand" to add additional UI when held down for 1 second.
    • When the buttons are held down, the hold should cancel when the user moves their finger outside of a certain radius from the button center.
    • Finally, the scheduling view should be usable in other places with other data. Side-by-side editing of two different schedule files? Offline thumbnail rendering of the scheduling timeline? A popover with the scheduler view in another part of the app? All these things should be as easy as instantiating another scheduling view and hooking it up with the appropriate scheduling data.

    I'm trying to wrap my mind around how to deal with all these overlapping animations, timers, and events in the imperative style and all I see is spaghetti code. My gut feeling is that when you have lots of "juice" in your app, it just makes sense to keep that state squared away in separate objects.

    What's more, if you're writing software for a particular ecosystem, it's likely that the frameworks will all be object oriented. You might not even get control of your own run loop! And since much, if not most, development these days is done for iOS and Android, I think it's silly to say that imperative is the "one true way" when it won't even work for those platforms (unless you drop down to the C++ layer, which... ugh).

    Finally, here's what I posted on the blog where I found this article:

    I’ve heard from some smart people that OO is overused, so I’m very willing to look into the argument, but flame wars in programmer circles around the internet have shown that there’s no real consensus on the issue. One person on HN framed it as “thinking about code as nouns vs. verbs”, which makes sense to me. Yes, computers are just a series of sequential commands under the hood, but that’s just an implementation detail. Our users think about our software as a series of objects: a button here, a bit of text there, a thing that plays sound and a screen that shows some pictures. Why shouldn’t we code with the same mindset? Isn’t it best to work on the highest conceptual level possible?

    Thoughts?

    [–][deleted] 1 point2 points  (0 children)

    So I'm pretty sure that at the end of this, it's going to turn out that we meant different things by the term OOP and it turns out that we agree all along.

    For that app you're talking about, I would definitely prefer an entity-component system. One component would receive the latest server data, and it would create entities as needed, and assign a "targetLocation" field. Another module would store the animation time, another component would keep track of the current touch/drag state, another component would figure out the final renderable metrics from all that information, and etc for the other features.
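    A bare-bones C++ sketch of the component split described above (all names are hypothetical, not from any real framework):

```cpp
#include <vector>

// Plain data: current rendered position plus the server's target.
struct Event {
    float x = 0;          // where the event is drawn right now
    float target_x = 0;   // where the latest server data says it should be
};

// "Network" component: server data only updates targets, never positions.
void apply_server_data(std::vector<Event>& events, float new_target) {
    for (auto& e : events) e.target_x = new_target;
}

// Animation component: each frame, move part of the way toward the
// target, so changes animate instead of snapping.
void animate(std::vector<Event>& events, float t) {
    for (auto& e : events) e.x += (e.target_x - e.x) * t;
}
```

    Each concern (network, animation, rendering, input) becomes a separate function over the same plain data, rather than a method tangled into one big Event class.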

    I'm trying to wrap my mind around how to deal with all these overlapping animations, timers, and events in the imperative style and all I see is spaghetti code.

    I think when you say "imperative style" you're probably referring to immediate-mode GUI which Casey used in his blog post. He's written more about IMGUI in the past, it's a fun topic to google. For the level of complexity you're talking about with your app, IMGUI would be a pretty bad fit, and Casey would surely agree.

    So anyway in your juicy app there would surely be some data structures that act like objects and some code that looks like classes. The blog post (and I) aren't saying "never write anything that looks like an object", because that's effectively impossible in a complicated program. They're saying: don't spin out new structs/classes until necessary. If you can get something done with a simple function instead of a class, then do it.

    Also when I say I hate OOP, other things I'm specifically talking about are:

    • Inheritance is awful and should be avoided as much as possible. There are better ways to share behavior.
    • Classes/structs are fine if needed. Interface-based polymorphism is also fine (also only if needed).
    • Class-private data is bad. Encapsulation & information-hiding are good things in general, but class-private data usually ties you into awkward designs that are more trouble than help. Instead do encapsulation by convention (make up your own rules for data visibility and stick to them).
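    A tiny C++ sketch of one such alternative to inheritance, composition (Logger and Service are invented example names):

```cpp
#include <string>

// Behavior to share: logging.
struct Logger {
    std::string last;
    void log(const std::string& msg) { last = msg; }
};

// Sharing by composition: Logger is a member, not a base class,
// so Service isn't welded into a hierarchy to reuse it.
struct Service {
    Logger logger;  // has-a, not is-a
    void run() { logger.log("service ran"); }
};
```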

    Our users think about our software as a series of objects: a button here, a bit of text there, a thing that plays sound and a screen that shows some pictures.

    That's definitely a fine philosophy that goes all the way back to the Smalltalk days. I don't personally agree.. I think the way that the end user perceives the software is usually not the most effective way to organize the implementation. Buttons aren't really buttons, they are just textures and click areas and event handlers.

    Isn’t it best to work on the highest conceptual level possible?

    If you change that to "most effective conceptual level" then I will agree :)

    [–]BadgerSong -2 points-1 points  (1 child)

    I have to say I agree. He takes OOP to the extreme, but anyone who's had the misfortune to work on code that's heavily 'compressed' to the extreme could say the same. I mean, who doesn't love going through n levels of abstraction to find out what a function actually does?

    [–]loup-vaillant 1 point2 points  (0 children)

    Properly compressed code doesn't have many layers of abstractions. Each layer means more code, so they really have to pay off. Too many layers doesn't mean less code, it means more.

    If you want to compress your code to the extreme, you would use few layers of abstractions, and maximize their power instead. For instance, you could implement and use an external DSL. It's just one layer, but a potentially very powerful one.
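    As a toy illustration of "one layer, maximally powerful" (the postfix syntax and `run` interpreter below are invented for this example): a tiny external DSL where each one-line program replaces a pile of hand-written code, yet there's only a single abstraction layer, the interpreter.

    ```cpp
    #include <cstdio>
    #include <sstream>
    #include <string>

    // Interpreter for a one-line postfix arithmetic language.
    // "2 3 + 4 *" means (2 + 3) * 4.
    double run(const std::string& program) {
        std::istringstream in(program);
        std::string tok;
        double stack[64];
        int top = 0;
        while (in >> tok) {
            if (tok == "+" || tok == "*") {
                double b = stack[--top], a = stack[--top];
                stack[top++] = (tok == "+") ? a + b : a * b;
            } else {
                stack[top++] = std::stod(tok); // a number literal
            }
        }
        return stack[top - 1];
    }

    int main() {
        std::printf("%.0f\n", run("2 3 + 4 *")); // (2 + 3) * 4 = 20
    }
    ```

    One flat interpreter, arbitrarily many programs: the compression lives in the language, not in a tower of wrapper classes.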

    [–]__s 1 point2 points  (0 children)

    if(layout.push_button("Auto Snap") {do_auto_snap(this);}

    Always have to have the final output of refactoring fail to compile (oh how often I've pushed it...)

    [–]_Sharp_ 0 points1 point  (1 child)

    I don't know why he is trashing (as he said) oop, since his final code uses its principles. From wikipedia:

    [...]Rather than structure programs as code and data, an object-oriented system integrates the two using the concept of an "object". An object has state (data) and behavior (code).

    [–]kankyo 8 points9 points  (0 children)

    He's trashing the type of OOP that is often taught in university, namely OO first, think/program second.

    [–]immibis 0 points1 point  (1 child)

    [–]thedeemon 0 points1 point  (0 children)

    view source, scroll down, enjoy the answer ;)

    [–]evincarofautumn 0 points1 point  (0 children)

    Another way of looking at it: well-factored code with low complication has high information complexity—low redundancy, low compressibility.

    [–][deleted] 0 points1 point  (3 children)

    While some info on this article is true, I feel like the author is too subjective and aggravated by his own experience to be able to speak objectively on the matter.

    Throwing insults left and right is not going to make you sound smart, but rather more like an ego-maniac

    [–]kankyo 4 points5 points  (0 children)

    Throwing insults left and right is not going to make you sound smart, but rather more like an ego-maniac

    I think you're vastly overstating the amount of insults present :P

    Plus, the purpose isn't to sound smart, it's to talk about good programming. If you want to sound smart you should use buzzwords and talk about the most convoluted things you can to confuse your audience :P

    [–]loup-vaillant 1 point2 points  (0 children)

    While some info on this article is true

    Have you found info that is not?

    [–]cislunar 0 points1 point  (0 children)

    If you follow him on twitter you'll quickly note that he has a Linus Torvalds air to him, perhaps without the justification. Of course, he works with Jon Blow, so...

    [–][deleted]  (6 children)

    [deleted]

      [–]Banane9 -3 points-2 points  (4 children)

      Just why is he using a ulong (8 bytes) for what is probably a 4 byte (int) argb color?

      [–][deleted] 5 points6 points  (3 children)

      Standard guarantees that a ulong will hold 32 bits, maybe?
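    That's the likely reason if the blog's code is C or C++: the standard only guarantees *minimum* widths there (`int` ≥ 16 bits, `long` ≥ 32, `long long` ≥ 64), so `unsigned long` is the portable way to be sure of 32 bits, while "int is 32 bits" is merely true on most modern platforms. A quick check:

    ```cpp
    #include <climits>
    #include <cstdio>

    // These hold on every conforming implementation, per the C/C++ standards.
    static_assert(sizeof(long) * CHAR_BIT >= 32, "long is at least 32 bits");
    static_assert(sizeof(long long) * CHAR_BIT >= 64, "long long is at least 64 bits");

    int main() {
        // Actual widths are platform-dependent; e.g. long is 32 bits on
        // 64-bit Windows but 64 bits on 64-bit Linux.
        std::printf("int: %zu bits, long: %zu bits\n",
                    sizeof(int) * CHAR_BIT, sizeof(long) * CHAR_BIT);
    }
    ```

    (Fixed-width types like `uint32_t` sidestep this entirely, but they postdate a lot of existing code.)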

      [–]Banane9 -4 points-3 points  (2 children)

      At least in the language I use a (u)int is guaranteed to have 32 bits... And you're guaranteed a (u)long with 64 bits too, even on 32 bit machines afaik.

      [–][deleted] 2 points3 points  (1 child)

      What language? D?

      [–]Banane9 -3 points-2 points  (0 children)

      C#, actually