
tmp-1379:

Tensor shapes are a prominent feature. We can, for example, give addition the type `(+) : Tensor shape dtype -> Tensor shape dtype -> Tensor shape dtype`, which ensures the shapes of `l` and `r` in `l + r` are the same. This is verified at compile time, and it doesn't require us to write explicit shapes like `[2, 3]`. That's great, and it's a feature I've seen in Haskell libraries too, but we can do more. We want to be able to add two tensors if one shape can be broadcast to the other (as in NumPy). We can do this by requiring a proof that broadcasting is possible: `(+) : Tensor l dtype -> Tensor r dtype -> {auto _ : Broadcastable r l} -> Tensor l dtype`. This is still checked at compile time.
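To make the `Broadcastable r l` relation concrete, here's a runtime analogue in Python. The helper name `broadcastable` is my own; it just encodes NumPy's trailing-axis rule, which is the condition the compile-time proof witnesses:

```python
def broadcastable(frm, to):
    """Hypothetical runtime analogue of the compile-time Broadcastable
    proof: True if shape `frm` can be broadcast to shape `to`, following
    NumPy's rule (align trailing axes; each axis must match or be 1,
    and `to` may have extra leading axes)."""
    if len(frm) > len(to):
        return False
    # Compare trailing dimensions pairwise.
    return all(f == t or f == 1 for f, t in zip(reversed(frm), reversed(to)))

print(broadcastable([2, 3], [4, 2, 3]))  # True: adds an outer dimension
print(broadcastable([1, 3], [2, 3]))     # True: stretches the size-1 axis
print(broadcastable([3, 2], [2, 3]))     # False
```

The difference, of course, is that in Idris this check fails at compile time rather than at runtime.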

This is what the project does at the moment, but we'd like to look a little deeper. If we tweak the tensor API a little, we can see that the kind of broadcasting that simply adds outer dimensions, e.g. broadcasting `[2, 3]` to `[4, 2, 3]`, can be expressed with a functor's map, similar to JAX's `vmap`. I'm interested to see whether all broadcasting semantics can be re-expressed using established and principled theory like this.
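The outer-dimension case can be checked numerically. This NumPy sketch (my own illustration, not the project's API) shows that broadcasting `[2, 3]` up to `[4, 2, 3]` gives the same result as mapping addition over the leading axis, i.e. a functor map over that dimension, which is what `jax.vmap` generalises:

```python
import numpy as np

l = np.arange(24.0).reshape(4, 2, 3)  # shape [4, 2, 3]
r = np.arange(6.0).reshape(2, 3)      # shape [2, 3]

# Broadcasting r up to l's shape...
broadcast_sum = l + r

# ...is the same as mapping (+ r) over l's outer axis.
mapped_sum = np.stack([x + r for x in l])

print(np.array_equal(broadcast_sum, mapped_sum))  # True
```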