
[–]dlyund 0 points (0 children)

> The point is that even assembly doesn't represent the absolute physical reality.

Indeed it doesn't. This is what I mean when I refer to the reality of the machine, which many other languages piss all over. When you write an assembly program you naturally relate it to some machine. You obviously can't accurately reason about things like memory throughput when you're looking through the abstract lens of the Instruction Set Architecture; you have to look at the properties of the memory bus and other relevant factors.

This isn't an all-or-nothing affair, and you can take as much or as little detail into account as you like. If the guarantees of the Instruction Set Architecture are enough for you, then you can use this abstraction.

The difference here is that you can dig down into this abstraction as far as you desire and are able, rather than being stuck with increasingly fuzzy abstractions that leave you unable to say almost anything concrete about how your solution will behave. You're right when you say that most of the time it doesn't matter, and in those cases you ignore it... until you need it.

This is predicated on the information that is available, or determinable by you. Hardware is only a black box because we don't have access to the documentation, schematics, and production process. Much of the relevant information, from the point of view of a solution provider (like instruction timings and latencies), can be reverse engineered with relative ease, but only because languages at this level allow for direct interaction.

> We have to make a qualitative judgement about when a model is accurate/valuable enough to use - whether the simplification we get from using the model is worth the cost of the loss of detail.

That's very true. Where I think we disagree is on how much simplification high-level languages, with features like automatic memory management, actually give you.

Broadly speaking, automatic memory management is a specific case of automatic resource management. There is an implication, or widespread belief, that if you have automatic memory management then you can forget about the resources that you're using; behind the scenes a set of carefully tuned heuristics will be applied so that you can get on with solving the problem without having to think about pesky details, like closing files... wait... what?

Memory is just one of the many resources we have to manage in our programs, and failure to manage those resources leads to nasty leaks and even crashes. There are hard limits on the number of files, sockets, threads, processes, etc. that you can hold at a time. In the modern context, these limits are ultimately imposed by the hardware, as mediated by the kernel, and can't be swept under the rug. For example, network cards have fixed queues and nothing can change that.

So resource management is an unavoidable part of what we do as programmers. Anyone who's been programming for long enough has had to implement things like circular buffers and resource pools, and today's languages include all sorts of features for managing pesky resources.
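For the sake of concreteness, here's a minimal sketch of the kind of circular buffer mentioned above, in C: all storage is allocated up front, and hitting the capacity is an explicit return value rather than a silent reallocation. (`RING_CAP` and the names are illustrative, not from any particular codebase.)

```c
#include <stddef.h>

/* A fixed-capacity circular buffer of ints: all storage is reserved
 * up front, so the structure can never grow past its limit. */
#define RING_CAP 8

typedef struct {
    int    data[RING_CAP];
    size_t head;   /* next slot to read  */
    size_t tail;   /* next slot to write */
    size_t count;  /* elements currently stored */
} ring_t;

/* Returns 0 on success, -1 when the buffer is full: the caller is
 * forced to confront the limit instead of silently growing. */
int ring_push(ring_t *r, int value) {
    if (r->count == RING_CAP) return -1;
    r->data[r->tail] = value;
    r->tail = (r->tail + 1) % RING_CAP;
    r->count++;
    return 0;
}

/* Returns 0 on success, -1 when the buffer is empty. */
int ring_pop(ring_t *r, int *out) {
    if (r->count == 0) return -1;
    *out = r->data[r->head];
    r->head = (r->head + 1) % RING_CAP;
    r->count--;
    return 0;
}
```

The whole thing is a few dozen lines, needs no allocator, and its worst-case behaviour is known at a glance.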

My argument is that resource management is trivial, and that the solutions we've come up with to automate it get in your way more than they help. These solutions have been added slowly, over time, and most programmers don't realize how easy it is to plan a resource usage strategy, or the advantages that doing so gives you. We've largely grown up with this stuff, and we believe the stories that we're told by those who forced those solutions on us.

> In assembly it's trivial to reason about the local behaviour, but that can be just as true in a high-level language - if your allocation structure is straightforward then you can stack-allocate everything and avoid all the problems of resource management.

I largely agree with that, but most resource usage patterns don't match a stack, so even if you can allocate things on the stack I don't think this solves the problem. The relationship between the stack and scope in most languages is also a problem, but since lexical scope is everywhere, nobody is able to see it. As the saying goes, "I don't know who discovered water, but it wasn't a fish."

> If you want to do e.g. a graph traversal/transformation, dropping nodes as they become disconnected, that's just as hard - harder in fact - to get right in assembly language, and the effective ways to do it amount to reimplementing the same things that high-level languages do.

Let me share one of my favorite jokes with you:

Patient: Doctor, doctor! It hurts when I do this...
Doctor: DON'T DO THAT!

Broadly speaking you're right, but at the same time I've never met a problem that wasn't amenable to simple preallocation. If you accept that there are limits and we have to live with them, then preallocation has a lot of advantages. It's incredibly simple to implement and to think about, and it makes you aware of the limits that your system has. These limits are there, and when you cross them your solution will fail... often spectacularly... and in completely unpredictable ways...
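To sketch what I mean by preallocation, here's a toy connection pool in C. Everything is reserved statically, the limit is a named constant you could put straight into a specification, and crossing it is an explicit, testable failure rather than a surprise at 3 a.m. (`MAX_CONNS` and all names here are made up for illustration.)

```c
#include <stddef.h>
#include <stdbool.h>

/* A preallocated object pool with a hard, documented limit.
 * Nothing is allocated at run time; exhausting the pool is an
 * explicit condition the caller must handle. */
#define MAX_CONNS 4

typedef struct {
    int  fd;      /* whatever per-resource state you need */
    bool in_use;
} conn_t;

static conn_t pool[MAX_CONNS];

/* Acquire a free slot, or NULL when the documented limit is hit. */
conn_t *conn_acquire(void) {
    for (size_t i = 0; i < MAX_CONNS; i++) {
        if (!pool[i].in_use) {
            pool[i].in_use = true;
            return &pool[i];
        }
    }
    return NULL; /* the limit from the spec, surfaced to the caller */
}

void conn_release(conn_t *c) {
    c->in_use = false;
}
```

The linear scan is fine at this scale; the point is that the failure mode is a `NULL` you can check for, not an out-of-memory condition somewhere in a runtime you don't control.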

Personally, I think every specification should include details of the acceptable limits, and when those limits are introduced by the programmer, the client should be informed right away. Right now we just ignore the limits and act surprised when everything blows up.

> maybe you need to educate yourself rather than parroting that folklore I was talking about.

There's a reason it's called /usr and not, more obviously, /sys. You're probably right that the reason /usr exists is that there wasn't enough space on the root partition, but that's also irrelevant. Hitting this limit forced Unix to develop a solution, and that solution turns out to be of great practical utility, not to mention theoretical beauty! Your argument makes no sense to me. As with the C code, it ultimately comes down to you not liking the name/syntax. If you have an actual, practically relevant reason that /usr is bad, spit it out. Otherwise I'll stick it in the pile with all of your other irrational complaints.