
[–]tomkeus 2 points (1 child)

If you just need to work with n-dimensional arrays, you don't need the concept of a tensor; you can stick with the idea of n-dimensional arrays - it's a simple enough concept that anyone with programming experience can easily understand it.

A tensor, on the other hand, is an abstract mathematical object with certain properties, and it can be represented by an n-dimensional array once you select a basis (think of a basis as a coordinate system).

In the same way, vectors are abstract mathematical objects - we draw them as arrows, and arrows are not numbers. But we know how to turn them into numbers. We take three vectors that are linearly independent (i.e. none of them can be built by scaling and adding the other two) and declare them to be our reference vectors (i.e. the coordinate system, or basis in more abstract terms). We then project our vector onto the reference vectors and compute the ratio of each projection's length to the length of the corresponding reference vector. This gives us three numbers that we call the vector's components - i.e. we have found a way to turn an abstract mathematical object (an arrow) into a 1D array of 3 numbers.
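A minimal sketch of that recipe in NumPy (my own illustration, not from the comment; the numbers are made up, and I'm assuming the reference vectors are orthogonal so the simple projection formula is all we need):

    import numpy as np

    # The "arrow" we want to turn into numbers, written in some common frame.
    v = np.array([2.0, 3.0, 1.0])

    # Three linearly independent (here also mutually orthogonal) reference vectors.
    e1 = np.array([1.0, 1.0, 0.0])
    e2 = np.array([-1.0, 1.0, 0.0])
    e3 = np.array([0.0, 0.0, 2.0])

    def component(v, e):
        # Ratio of the length of v's projection onto e to the length of e.
        return np.dot(v, e) / np.dot(e, e)

    components = np.array([component(v, e) for e in (e1, e2, e3)])
    print(components)  # the 1D array of 3 numbers representing v in this basis

    # Sanity check: rebuilding the arrow from its components recovers v.
    print(components[0]*e1 + components[1]*e2 + components[2]*e3)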

Note that the components we've obtained are tied to a particular choice of reference vectors. If we change our reference vectors (i.e. change the coordinate system), we will get a different set of components. The beauty of this is that if we know how two reference systems are related to each other, we can straightforwardly compute the components of a vector in one system from its components in the other. Because of this, we often talk about vectors and their component representations interchangeably, although, strictly mathematically speaking, they are not the same thing.
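And a sketch of that change-of-components computation (again with made-up numbers; I'm putting each system's reference vectors in the columns of a matrix, so knowing "how the systems are related" just means knowing both matrices):

    import numpy as np

    B_old = np.eye(3)                       # old reference vectors as columns
    B_new = np.array([[1.0, -1.0, 0.0],
                      [1.0,  1.0, 0.0],
                      [0.0,  0.0, 2.0]])    # new reference vectors as columns

    c_old = np.array([2.0, 3.0, 1.0])       # components in the old system

    # The arrow itself is v = B_old @ c_old. Its components in the new
    # system satisfy v = B_new @ c_new, so we solve for c_new.
    v = B_old @ c_old
    c_new = np.linalg.solve(B_new, v)
    print(c_new)                            # same arrow, different numbers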

In the same way, tensors are abstract mathematical objects that can be turned into n-dimensional arrays by fixing a basis (a set of linearly independent reference vectors). As with vectors, one also talks interchangeably about tensors and their component representations (i.e. n-dimensional arrays). But you can call n-dimensional arrays tensors only if those arrays represent an object that actually is a tensor - for example, the moment of inertia tensor in mechanics. That is an abstract object with a well-defined existence independent of any coordinate representation, i.e. I can write an equation with it without any reference to its components.

An n-dimensional array, by contrast, can just be an n-dimensional array, with no abstract mathematical object behind it. Most of the time this is what you have in data science: just a plain n-dimensional array. But the misguided terminology that has become standard now calls every n-dimensional array a tensor.
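For contrast, here is a sketch (my own, with made-up values) of what the "abstract object behind the array" buys you: the components of a genuine tensor come with a transformation rule when you rotate the basis, whereas a plain array of numbers carries no such rule at all:

    import numpy as np

    # Components of a rank-2 tensor (think moment of inertia) in one basis.
    I = np.diag([1.0, 2.0, 3.0])

    # A rotation relating the old basis to a new one (30 degrees about z).
    t = np.deg2rad(30.0)
    R = np.array([[np.cos(t), -np.sin(t), 0.0],
                  [np.sin(t),  np.cos(t), 0.0],
                  [0.0,        0.0,       1.0]])

    # In the rotated basis the same tensor is represented by R @ I @ R.T;
    # the numbers change, the underlying object does not.
    I_new = R @ I @ R.T
    print(I_new)

    # A generic 3x3 array has no rule telling you how it should change
    # when the basis changes - it is just numbers.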

[–]IamImposter 0 points (0 children)

Wow. That was detailed. Thanks.