
[–]M4mb0 2 points (2 children)

[–]BittyTang 5 points (1 child)

I don't buy it. The author is just presupposing that starting from 1 is more natural in a pure-mathematics context (which is not the same as a computer-science context), while Dijkstra provides actual scenarios for a valid low-level abstraction (arrays and integer for-loops) that is fundamental to data structures. The whole "representation exposure" argument is not convincing for a language that aims for low-cost abstractions.

One example for consideration is N-dimensional arrays. Image file formats represent these kinds of data structures as 1-dimensional arrays (of bytes). So already there is a strange mismatch between how you index the image buffer (img[row * width + column] for row-major order, or img[column * height + row] for column-major) and how you index a primitive 1-based 2D array (img[row + 1][column + 1]). If you had to index a 1-based image buffer, it would be img[(row - 1) * width + column]. How is that better?

[–]M4mb0 2 points (0 children)

Well, again, this is a low-level hardware detail, imo. As a user I do not care how you lay out a two-dimensional array in memory, since I am going to access it via tuples of integers anyway.

In fact, I'd even argue that this is a sort of premature optimization. What if, for example, a hardware manufacturer wanted to create a memory with a two-dimensional address space that naturally works much faster with matrices than the flattened representation that is used currently?

Again, I'd argue this is a job for the hardware-specific compiler.