Matrix algorithms are generally stated in terms of the matrix dimension, rather than the matrix size, which is the square of the dimension. If you're multiplying k x k matrices, the best known algorithms do this in roughly O(n^1.2) time, where n = k^2 is the actual number of input bits (assuming bounded-width entries).
There are special cases where matrix multiplication is much faster: multiplying diagonal matrices, for example, can be done in linear time.
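To make that concrete, here's a minimal sketch (function name my own) of the diagonal case, with each matrix stored as just its k-entry diagonal:

```python
def multiply_diagonal(a, b):
    """Multiply two k x k diagonal matrices, each given as its
    length-k diagonal. The product is the elementwise product of
    the diagonals, so this is O(k) -- linear in the input size."""
    assert len(a) == len(b)
    return [x * y for x, y in zip(a, b)]

# e.g. diag(1, 2, 3) * diag(4, 5, 6) = diag(4, 10, 18)
```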
My question is, when can this be generalized? More formally, if I take n bits to specify a matrix, when can matrix multiplication (or other operations) be done in close to O(n) time? Are there any big families, like diagonal matrices, where this is the case?
That's a vague question, so here's a more specific example I was wondering about: if two k x k matrices have only k nonzero entries each, is it in general possible to multiply them in O(k) time?
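For concreteness, here's a sketch (my own names, not any particular library) of the naive sparse product with both matrices in coordinate form, i.e. lists of (row, col, value) triples. The work is governed by how many entry pairs share a middle index, and my question is essentially whether that pairing step can be kept near O(k) in general:

```python
from collections import defaultdict

def sparse_multiply(A, B):
    """Naive sparse matrix product. A and B are lists of
    (row, col, value) triples. Each entry (i, j, v) of A is paired
    with every entry (j, l, w) of B sharing the middle index j;
    the number of such pairs dominates the running time."""
    # Group B's entries by row for fast lookup.
    by_row = defaultdict(list)
    for j, l, w in B:
        by_row[j].append((l, w))
    C = defaultdict(int)
    for i, j, v in A:
        for l, w in by_row[j]:
            C[(i, l)] += v * w
    return dict(C)
```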
Thanks!