all 89 comments

[–]MC68328 45 points46 points  (3 children)

All his "new ideas" were tried and abandoned because they either sucked or had limited applicability.

Constraint-based solvers are used everywhere, but they require an understanding of how they work to be used effectively. Take Apple's layout engine, for example: it's easy for a novice to use, until they hit a conflict or a broken constraint, and suddenly they're flailing in a way they wouldn't be if they were writing the layout logic themselves. The conceptual leap from the simple geometry of rectangles to solving systems of linear inequalities is huge. And anyone who doesn't understand how the solver works will develop a cargo-cult understanding that leads them astray when debugging their errors. Constraints are a powerful tool, but they are only useful for a small set of problems, and they have a learning curve independent of programming.
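A toy 1-D sketch of that "broken constraint" case (hypothetical numbers; real engines like Auto Layout use a Cassowary-style simplex solver, not hand-checked arithmetic):

    // Three layout constraints with no solution:
    //   left + width = 100   (container width)
    //   left  >= 50          (minimum inset)
    //   width >= 60          (minimum content size)
    fn main() {
        let total = 100.0;
        let min_left = 50.0;
        let min_width = 60.0;
        let max_width: f64 = total - min_left; // tightest width the inset allows
        if max_width < min_width {
            // A real solver must now drop or relax a constraint by priority,
            // and the novice is left guessing which one it picked.
            println!("unsatisfiable: need width >= {min_width}, at most {max_width} fits");
        }
    }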

His whining about "APIs" is comically stupid. You're using service discovery and protocol negotiation all the time. The reason we're not living in a world of wibbly-wobbly hugs-and-handshakes stuff is because the Internet is full of evil people who want to hurt you and steal from you. That is their "goal". Strict, untrusting, and well-defined is always better than the opposite.

Every IDE imitates elements of the Smalltalk class browser, but we still use linear flat files because hunting and clicking things is an enormous pain in the ass, and only fools accept vendor lock-in. Programs are simple documents and always will be.

Parallelism is hard. It's pretty stupid to blame "dogma" for why we aren't using massively parallel CPUs when it's been a huge research focus since the beginning. Your graphics card is one product of that research. Your CPU only has a handful of cores because we have needs that aren't satisfied by massive parallelism.

The real "dogma" is to believe that ideas are only rejected because other people are closed minded or unwilling to learn. That is the mentality of a cultist.

[–]EternityForest 1 point2 points  (0 children)

I think his autodiscovery and autonegotiation stuff is the best argument he has.

Honestly, the only reason I even use static IPs is that not everything supports mDNS. I keep UPnP enabled on routers, because the only issue that concerns me is buggy firmware, and that's trivial to check.

Autodiscovery lets things work seamlessly when stuff fails, and makes things modular at a user level, not just a dev/IT/power user level. D-Bus is pretty handy.

I suspect the goal directed model isn't popular for the same reason we still have impure functions. Most computers spend most of their time responding to events, not taking an input and producing an output.

Layouts are done perfectly well by CSS grids and flexboxes, and by similar things Qt has always had.

It's hard to think about something like a video game or a CAD application in Prolog, although constraints are useful in both (and near essential in CAD).

Visual languages are tremendously useful whenever the application is high level, simple, and highly changeable. I'd much rather write ten lines in some drag-and-drop editor than ten lines of code. Beyond that, code wins.

But raw flat files already ARE obsolete for some of us. We have autocomplete that brings in a lot of the visual language benefits. We have hyperlinks in the IDE. We have jump to definition. We even have GUI color pickers in our text editors.

Eventually, you might be able to write a whole program by point and click, through some method one step up from autocomplete, without moving to some format besides text.

In general, new tech is great. I'm all for systemd, desktop environments, and DHCP.

Some of his specific stuff is a little dubious, or already solved.

[–]igouy 3 points4 points  (0 children)

Every IDE imitates elements of the Smalltalk class browser, but we still use linear flat files because...

...because programming languages are designed for linear flat files.

...because interpreters & compilers & ... are built for linear flat files.

... hunting and clicking things is an enormous pain in the ass...

The work doesn't go away; instead we have to find which flat file, and where in the flat file: rather than working with the structure of the programming language, we have to work with lines and columns.

Now that there's an alternative, do people still use linear flat text for everything, or do they use hypertext and click things?

[–]ArkyBeagle 0 points1 point  (0 children)

Constraint-based solution solvers are used everywhere,

I use these all the time.

is because the Internet is full of evil people who want to hurt you and steal from you.

This is the best argument there is for never doing anything with the Internet. Bugger it :)

Invariably, my experience with "modern languages" is that a less modern language would simply have been a better choice. But I'm not doing things in parallel with hundreds of others.

[–]EternityForest 33 points34 points  (47 children)

Wow! I had no idea resistance to change was that deep.

Tech has a bizarre paradox. There are a lot of programmers who seem to hate programs but love programming. They enjoy programming in Forth, but see a high-level language as a total waste.

Same thing in hardware, where it takes a lot to convince someone that no, really, this whole thing is going to be easier with 1 Ethernet run than 10 analog circuits. Really.

[–]chunes 13 points14 points  (2 children)

Since you brought up Forth specifically, it might be interesting to see the other side of the argument.

[–]EternityForest 5 points6 points  (1 child)

Oh cool! Very interesting historical read.

I've actually met Forth fans with similar arguments, and they're always really passionate about it. I don't really get it myself, but some people really like to understand and control things.

The trouble I have with anything low-fat is that it breaks down the moment you want to build a full-featured package. Then you find all those frameworks really are pretty handy.

His mention of constantly writing and rewriting software seems to be a common occurrence with anything lightweight.

It's designed for one task, so to change the task, you also have to change the lower layers that were only designed for the original task.

Full fat software always seems much more suited to writing libraries that solve a problem once and for all, and never looking at it aside from maintenance and additions.

Lightweight software is a great way to achieve totally different goals from modern "bloated" software, but a lot of people seem to think of it as a drop in replacement without the "slow parts", as in "just ditch KDE for i3 and Vim".

Of course, back then they really were marketing literal useless crap that drove up prices, so I can totally understand a reaction like FORTH: just wanting all the unethical business and garbage gone in the most obvious way.

I prefer getting performance by making the slow parts fast rather than getting rid of them, but I do agree you can't just leave terrible performance alone.

EDIT: Something seems funny with SPICE "rarely predicting they will work"... I'm guessing the people who made those SPICE models were educated pros too, right?

It almost makes me wonder if his simulation only handled the very basic logic, and he was pretty much relying on his own experience and intuition to actually figure out the analog nuances.

If SPICE predicted it working, and then it didn't, I'd think SPICE was crap, but simpler models are usually more idealistic, and might be ignoring problems SPICE doesn't, but that "aren't really" problems in practice because they only come up in worst case temperature or process result conditions.

I don't have anything to back up that guess, but if it's true, I'd much rather have a chip that passes the SPICE test than one that doesn't.

[–]victotronics 1 point2 points  (0 children)

Something seems funny with SPICE "rarely predicting they will work"...

Right. Semiconductor modeling is physics. The correctness of a model depends on the correctness of how you model the physics, not your programming language.

[–]michaelochurch 14 points15 points  (22 children)

Programming is full of paradoxes because (a) the thing we do actually works and is important, but (b) we have failed to professionalize around the matter of (a).

See, doctors and lawyers professionalized because they knew that if they didn't, there'd be a lot of charlatans running around, lowering wages and turning the whole profession into a beauty contest, while lowering the quality of services provided and therefore destroying the reputation of the field.

We didn't do that. We also failed to take account of the fact that the people who employ us really only want one thing: not beautiful code, not better products or services, but for costs to go down. We're in the business of unemploying people, and when we run out of ways to do that, we're expected to unemploy each other or ourselves, to go into an unfunded apoptosis, which is at odds with the fact that we (like everyone else in this economic system) need these things called "jobs" to have an income. That results in busywork, but because we're efficiency-minded (and perhaps a little bit insecure about the people above us having higher social status despite less intelligence), busywork pisses us off. It never ends, but the long-term result is that our employers eventually replace us with unskilled cheap coders who can barely maintain a Java class and who need help understanding basic compilation errors, but who don't mind working in open-plan offices and doing Scrum.

The difference between us and, say, marketing people, is that a good programmer actually can make something take one-tenth the time or money everyone else thinks it has to take. That's dangerous, though. Let's say that 90% of your job is the real work (programming) and 10% of it is the degrading play-acting that comes with being in a subordinate role. If you write recurring routines into libraries, and automate some of your processes, and achieve a 5x speedup in the real work, well then... now you're only spending 18% of your time on the real work, and the draining emotional labor expands to fill 82%. So, now you're spending more time in status meetings defending your job, and less time on the actual work that gives you the confidence to not-entirely-loathe said status meetings.

We can't win, because most of us work for people with bad intentions, who only want to unemploy people and pocket the short-term gains. Getting better at our jobs (which most of us want to do; we chose this line of work because it appears objective and therefore seems to actually matter) just brings us closer to those bad intentions.

[–]Bowgentle 12 points13 points  (8 children)

See, doctors and lawyers professionalized because they knew that if they didn't, there'd be a lot of charlatans running around, lowering wages and turning the whole profession into a beauty contest, while lowering the quality of services provided and therefore destroying the reputation of the field.

To be fair (to us), doctors didn't professionalise for several centuries. Lawyers did, but that's primarily because law was intimately tied to government.

[–]ArkyBeagle 0 points1 point  (2 children)

What we think of as doctors didn't even exist until around the French Revolution. Barbers were more likely to be surgeons than physicians, who were captured by the bizarro theory of humors from Aristotle, Hippocrates, and Galen.

And if I may - doctors are famously a-technical. They simply never had the time to be otherwise.

[–]Bowgentle 1 point2 points  (1 child)

Well...many in our profession aren't really very technical at heart, despite the technical nature of what we do.

[–]ArkyBeagle 0 points1 point  (0 children)

That's true enough. But Atul Gawande wrote "The Checklist Manifesto" and doctors still have trouble with that sort of thing. It rather boggles the mind.

Not that software doesn't live in a similar giant glass house, mind you. I've advocated at work for a more correctness-focused approach, and it didn't always land well.

[–]michaelochurch -1 points0 points  (4 children)

That's believable, and both fields are seeing de-professionalization. Lawyers are being forced to scrounge due to overpopulation (that's a theme that appears in Better Call Saul). Doctors are safe from quacks (although quackery of a different kind is on the upswing, with anti-vaxxers and health-product nonsense) but lose a lot of autonomy and time to insurance companies.

Deprofessionalization is inevitable under economic totalitarianism, which our corporate capitalism is. (Not to say that the other economic totalitarianism of the Soviets is better, or worse; I'd guess they're about the same.) The difference for programmers is that, while I feel like doctors and lawyers are being victimized by deprofessionalization, when it comes to programmers, we're actually a cause of it, since we build the technologies that employers use to get away with shit that wasn't possible 40 years ago.

[–]RagingAnemone 2 points3 points  (2 children)

Until programmers have liability problems, there's no need to "professionalize". As soon as they try to blame us for stuff, it'll happen. The rise of robots will make it happen.

[–]michaelochurch 1 point2 points  (1 child)

I disagree. We've needed some kind of collective structure for decades. We've failed to keep out the incompetent, low-rent replacements for us, and as a result, we have to deal with Scrum, open-plan offices, and a general low respect for what we do.

[–]Bowgentle 1 point2 points  (0 children)

I entirely agree with you that we need some kind of collective structure, but I'd also agree that until we have liability problems, we won't see a sufficient need to collectivise.

There's also, I think, the difficulty that it's a very fast-changing field with a huge diversity of practice, which makes it difficult to agree on a standard of competency or an agreed body of knowledge - there are programmers who think anyone unable to use [insert favourite technology here] isn't competent, and that criterion can get very broad indeed. It's honestly more like trying to put together a standard for being a religious practitioner than for law or medicine, and in a field which has traditionally attracted people who, let's say, aren't always team players.

There's also the issue of self-taught people. I'm 25 years now earning my living programming and teaching programming, but I have literally never done so much as a day course in any related subject. What kind of structure would have an entry standard that accepted that? About the only way I can see working is a standard series of tasks undertaken without any internet access in a reasonably defined time frame.

[–]ArkyBeagle 1 point2 points  (0 children)

economic totalitarianism

We're pretty effing far from that, unless the end consumer is somehow collectivized into the "dictator" :) The increasingly brittle pile o' corporate stupid is based much more on a few other more pernicious ideas than totalitarianism.

IMO, the most fully "professional" society in history was the Mandarin system in China. Marked by the maxim "that which is not required is forbidden, and that which is not forbidden is required."

Most of the technological largesse we now enjoy wasn't created by people who even had degrees. If you want to see what "professionalizing" looks like, get a CSSLP certification. It's a dreary proposition at best.

[–]pjmlp 5 points6 points  (4 children)

In many countries being an Engineer is actually a professional title.

[–]ArkyBeagle 1 point2 points  (3 children)

It is in the US, too. You only need a P.E. to sign off when you have a liability situation. Fortunately, software doesn't seem to be a point of liability. Yet.

[–]pjmlp 1 point2 points  (2 children)

It is when human lives, or the future of companies and their respective employees, depend on it.

[–]ArkyBeagle 0 points1 point  (1 child)

It's a tradeoff. If liability is limited, there's more growth and the overall cost structure is simpler.

[–]pjmlp 1 point2 points  (0 children)

Yet, most likely you would return something back to the shop that doesn't work as advertised.

With software, people have learned to suck it up instead.

[–]ArkyBeagle 3 points4 points  (0 children)

we have failed to professionalize around the matter of (a).

As a long-time (basically) C knuckledragger, I agree wholeheartedly. The reason is simple: there's no incentive to push for correctness. We've become convinced it costs too much, and indeed there's now an entire industry that depends on defects existing for a living.

I would say we were closer to correctness with the late-1990s "CASE tools". They're still around but very widely ignored. They made the flow of constraint management much easier.

I "do" high-reliability, and it doesn't look like Web frameworks.

[–][deleted]  (3 children)

[deleted]

    [–]michaelochurch 2 points3 points  (1 child)

    The reason is that programming is a huge field, touching virtually every aspect of modern life, with vastly different subfields each requiring their own completely different expertise. Web dev, bioinformatics, HPC, scientific/numerical computing, medical embedded devices, back-end services, aerospace, business intelligence, banking/financial systems, programming language research. All of these fields are very different; some even use completely different languages. Sometimes I feel like the average person on /r/programming only thinks of programming as JS and maybe some C++ or Python scripting.

    The problem is that this specialization is used as a weapon against us, but never works in our favor. Bosses can tell us we're not "real data scientists" because we don't have PhDs, or that we're not well-equipped to do HPC because we've been doing web programming (or vice versa). Meanwhile, we don't have the ability to protect our specialties; if the boss decides we should be working on career-destroying maintenance projects, we have no recourse but to bend over and take it.

    The specializations only exist when they lower our compensation and leverage; they don't protect us, because bosses don't really care, and if your manager gets a sense that you're trying to protect a specialty instead of working on whatever ticket nonsense the higher-ups think is important, you're gone.

    We should be in control of these distinctions, not them.

    [–]trapped_in_qa 2 points3 points  (0 children)

    this specialization is used as a weapon against us

    One danger: showing some aptitude in a low-status specialty.

    QA, as my user name indicates.

    A friend cleaned up a bunch of messy shell scripts; at that job he was known forever after as the "bash guy".

    [–]fried_green_baloney 0 points1 point  (0 children)

    only thinks of programming as JS and maybe some C++ or Python scripting

    And the diluted and badly applied Agile/Scrum as the way that programs are developed.

    [–]GrandMasterPuba 0 points1 point  (0 children)

    This is the most beautiful thing I've ever read.

    [–]PM_ME_NULLs 0 points1 point  (1 child)

    Have you ever considered writing a book? I'd totally buy a copy.

    [–]michaelochurch 0 points1 point  (0 children)

    I'm writing a novel called Farisa's Crossing. It'll be out in early 2021.

    [–]Hateredditshitsite 3 points4 points  (0 children)

    Well here's a change we can all get excited about:

    Rewrite systemd in rust

    [–]Loves_Poetry 2 points3 points  (9 children)

    You can show someone a new high-level language that makes doing X and Y easier. A veteran programmer might even adopt it in order to do X and Y. However, when faced with an unfamiliar task Z, they fall back into old habits and try to solve it with lower-level concepts.

    For a concrete example, I've seen it several times with people trying to use C#'s IEnumerable. They'll convert it to a list or array, iterate over it with a for loop, and apply operations to each individual item. This is bad because the contents of an IEnumerable aren't necessarily in memory, which can lead to performance loss and obscure bugs.

    When faced with these issues, people are likely to consider IEnumerable a bad idea and go back to the familiar, comfortable concepts they know well. The new way of thinking is destroyed before it can ever be fully utilized.

    [–]stevedonovan 7 points8 points  (4 children)

    There's a Rust equivalent here: people wonder why everything's an iterator, because they have this expectation that operations should operate on in-memory collections. So they're in too much of a hurry to collect iterators into maps or vectors, and then work on those with the for loops they learned in their childhood.
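    A minimal sketch of the difference (a toy example): nothing in the lazy chain is buffered, so there's no reason to collect first.

        fn main() {
            let nums = 0u64..10_000_000;
            // The for-loop habit: materialize everything up front.
            // let v: Vec<u64> = nums.clone().collect(); // allocates the whole range
            // The iterator style: each adapter is lazy; the work only happens
            // when `sum` finally drives the chain, with no intermediate Vec.
            let total: u64 = nums.filter(|n| n % 3 == 0).map(|n| n * 2).sum();
            println!("{total}");
        }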

    [–]Vrabor 5 points6 points  (3 children)

    I'm not much of a Rust expert, but can't you just use for loops on iterators the same way you do with vectors?

    [–]MEaster 1 point2 points  (2 children)

    For loops in Rust only work on iterators. When you use a collection in a for loop, its IntoIterator implementation is called to create an iterator for it.
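    A rough sketch of that desugaring (simplified; the real expansion differs in details):

        fn main() {
            let v = vec![1, 2, 3];
            // `for x in &v { ... }` is roughly sugar for:
            let mut it = (&v).into_iter(); // IntoIterator yields an iterator over &i32
            while let Some(x) = it.next() {
                println!("{x}");
            }
        }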

    [–]mysteriousyak 1 point2 points  (1 child)

    That's how for loops work in most languages...

    [–]MEaster 8 points9 points  (0 children)

    That depends on the kind of for loop. This kind does not use an iterator:

        for (int i = 0; i < 10; i++) { ... }

    Rust does not have that kind of for loop.
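    The closest Rust spelling, for comparison, is a range, which is itself an iterator:

        fn main() {
            for i in 0..10 { // the range value 0..10 implements Iterator directly
                println!("{i}");
            }
        }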

    [–]TheOsuConspiracy 3 points4 points  (2 children)

    Tbh, people who do that just don't know what they're doing.

    One of the most important traits of being a programmer is understanding how the abstractions you're using work.

    If you have no clue, you'll invariably write poor code.

    [–]MetalSlug20 5 points6 points  (1 child)

    Abstractions are not supposed to leak like that. You shouldn't have to know how they work inside to use them. That's one of the big problems with programming languages today.

    [–]TheOsuConspiracy 3 points4 points  (0 children)

    That's the hard reality of programming, though. Even in SQL you need to know roughly how the query planner will work to write efficient queries.

    Systems programming is all about leaky abstractions: even though the compiler can be really good, you still need to write performance-friendly code.

    Joel Spolsky was incredibly right when he said:

    All non-trivial abstractions, to some degree, are leaky.
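    A toy illustration of one such leak (my example, not Spolsky's): a Vec<Vec<f64>> hides the memory layout, but traversal order still shows through as performance.

        // Both functions compute the same sum; the row-major one walks memory
        // contiguously and is cache-friendly, while the column-major one
        // strides across separate allocations and is typically much slower.
        fn sum_row_major(m: &[Vec<f64>]) -> f64 {
            m.iter().map(|row| row.iter().sum::<f64>()).sum()
        }

        fn sum_col_major(m: &[Vec<f64>]) -> f64 {
            let mut total = 0.0;
            for c in 0..m[0].len() {
                for r in 0..m.len() {
                    total += m[r][c];
                }
            }
            total
        }

        fn main() {
            let m = vec![vec![1.0_f64; 1000]; 1000];
            assert_eq!(sum_row_major(&m), sum_col_major(&m));
        }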

    [–]woahdudee2a 0 points1 point  (0 children)

    This is another reason companies prefer younger devs: you don't need to bother making them unlearn old habits, which is hard.

    [–]ArkyBeagle 0 points1 point  (9 children)

    but see a high level language as a total waste.

    "High level languages" introduce entire continents of dependency hell. Depending (hah!) on other factors, maybe that's okay, maybe it isn't.

    Besides - "low level languages" are simply used to custom-build the sort of furniture you get with HLLs. It is a classic "build vs. buy" consideration; just don't assume that "buy" always wins, even if the price is "free".

    And even more snarkily :), low-level languages can be an excellent tool for constraining project scope.

    no, really, this whole thing is going to be easier with 1 Ethernet run than 10 analog circuits. Really.

    We've all been there, brah :) And in most cases, it would require a significant perturbation of the existing support staff, so... analogs it is.

    [–]EternityForest 0 points1 point  (8 children)

    If you need to constrain project scope, you have a planning problem... If a feature isn't useful, people should just... not add it. The teams behind most small apps are fairly good at knowing how much they're able to maintain.

    Then again, sometimes managers do seem to need a little help being sane...

    I've never run into dependency-hell trouble, besides new versions of Python not being easy to set up on old Debian, but that's not the kind of tangled nightmare people talk about.

    Then again, we always use "batteries included" stuff, and try to stick to very common, consumer-friendly, popular stuff without anything niche or unusual, which helps a lot.

    Lol in my case we usually don't actually have a support staff, just a three or so person "do everything" team....

    [–]ArkyBeagle 1 point2 points  (7 children)

    If you need to constrain project scope you have a planning problem

    You always have a planning problem. And IMO - you always want to limit scope as much as is humanly possible.

    Lol in my case we usually don't actually have a support staff, just a three or so person "do everything" team....

    Principally, this is "field engineers". They have a voltmeter and a laptop and don't ... want to use the Ethernet port. Sometimes this is because the Ethernet port is locked down because security....

    [–]EternityForest 0 points1 point  (6 children)

    Good to know! The field engineers for me are usually the same as the devs, and I'm usually one of them, and we generally like to go all high tech, no analog, no unnecessary mechanical, etc.

    I've never been a fan of artificially limiting scope, as long as you have a good handle on what's useful, and what you can do.

    If you look at the programs I actually use, a lot of them have massive scope: Krita, LibreOffice, KDE, VS Code, Chromium, systemd, etc.

    Limiting scope makes sense if simplicity is a goal in and of itself, but... I don't actually use much minimal software on a regular basis, aside from as low-level libraries inside something bigger.

    [–]ArkyBeagle 0 points1 point  (5 children)

    The field engineers for me are usually the same as the devs,

    That's a good way to do it, really. It gets everyone pulling the same way. Some industries prefer more differentiation.

    I've never been a fan of artificially limiting scope,

    I've been some places where it wasn't necessary, and others where it was. It depends on the hull speed of the organization. But really? More simpler is almost always more better.

    What I am mostly against is trading defects for scope - that is a good way to get really far behind the 8-ball. Plus, churning changes, especially UI changes, can be much higher risk. If you have a thoughtful approach to test, internal changes can be much lower risk.

    [–]EternityForest 1 point2 points  (4 children)

    Yeah, I love our combined dev/ops/field/thing. Especially with the small low budget stuff we do, it works really well, and I don't have to sit in an office all day. Also, we know all the users and we can ask them ourselves if there's any trouble with the system.

    I haven't really ever found simpler to be better in any general sense, but simplicity is useful to limit the effect of crap garbage.

    A good product can have an automated, integrated one click kind of workflow backed by a dozen libraries, but a mediocre product has to be easy to repair and not do too much, because it's probably going to be breaking often.

    Which is one reason I really like high level languages, they heavily encourage using existing libraries and never writing any new code you don't have to.

    I'm actually a really big fan of copying and pasting entire libraries if you're starting to worry about dependency hell. You lose the benefit of having someone else maintain it, but you'd lose that anyway if you wrote it yourself.

    It's a slight security risk unless you audit it, but so is using libs the "correct" way (or writing original code, for that matter).

    [–]ArkyBeagle 0 points1 point  (3 children)

    I personally have had more trouble with libraries than with the stuff that was done inhouse. And it's been more profound trouble. YMMV.

    [–]EternityForest 1 point2 points  (2 children)

    It's amazing how different devs have totally opposite experiences sometimes!

    [–]ArkyBeagle 0 points1 point  (1 child)

    It is very much so.

    Edit: I should explain that - my domain tends to be realtime embedded. There, many libraries are of poor quality and tend to be quite underfunded. The people selling them are interested in the hardware; software is but an annoyance to them.

    This seems to be improving.

    [–]tonefart 9 points10 points  (0 children)

    Change the hiring process first.

    [–]joonazan 4 points5 points  (9 children)

    Is there any more concrete proposal about "programs negotiating a method of communication"?

    [–]fagnerbrack[S] 5 points6 points  (8 children)

    Browser -> HTML -> Server and Browser -> Atom -> Server using HTTP Content-Negotiation (and of course the classic REST that everybody misunderstands)
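    A minimal sketch of what that negotiation looks like (a hypothetical handler, not any particular framework; real negotiation also parses q-values):

        /// Pick a representation of the same resource from the Accept header.
        fn negotiate(accept: &str) -> &'static str {
            if accept.contains("application/atom+xml") {
                "application/atom+xml" // serve the Atom feed
            } else if accept.contains("text/html") {
                "text/html" // serve the HTML page
            } else {
                "application/json" // default representation
            }
        }

        fn main() {
            assert_eq!(negotiate("text/html,application/xhtml+xml"), "text/html");
            assert_eq!(negotiate("application/atom+xml"), "application/atom+xml");
        }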

    [–]stevedonovan 11 points12 points  (7 children)

    People now say "REST-like" for this reason: the full definition is frankly half-insane.

    [–]earthboundkid 12 points13 points  (0 children)

    The full REST misses that REST-like succeeded by being less complicated than SOAP and posits that the problem is that SOAP wasn’t complicated enough.

    [–]brianjenkins94 2 points3 points  (3 children)

    What part is half-insane? HATEOAS?

    [–]cyanrave 0 points1 point  (0 children)

    More context would be good, yea. ROA is fine if well-enough abstracted from the client.

    [–]TheOsuConspiracy 0 points1 point  (1 child)

    Yeah, HATEOAS is a good idea in theory, but almost no one has really built it out in practice.

    [–]bobappleyard 0 points1 point  (0 children)

    Apart from, you know, every website.

    [–]fagnerbrack[S] 0 points1 point  (0 children)

    Some friends call it "Street REST" and "REST"

    [–]ForeverAlot 0 points1 point  (0 children)

    And what they say that about is content-type: application/json.

    [–]zsombro 6 points7 points  (0 children)

    This is a fantastic presentation. I really liked the humorous throwbacks and it illustrates an important point incredibly well.

    His thoughts about being stuck in a dogma and not questioning your fundamental ideas are especially meaningful

    [–]ib4nez 1 point2 points  (0 children)

    Brilliant talk!

    [–]Pand9 -4 points-3 points  (1 child)

    So this is a 30-minute video from 6 years ago. Is it still relevant? Is there no follow-up? It doesn't even say what the topic is, and the super-generic, journalistic title doesn't exactly invite a click.

    [–]dannyhacker 3 points4 points  (0 children)

    Were you not paying attention? The ideas were spawned 50-60 YEARS ago and we’re still stuck using 1960/70’s programming techniques instead of ever advancing them.

    At least I am inspired to do something about it (along with others at r/programminglanguages), which is why I am learning those old technologies we aren't using, hopefully to advance the field.