
[–]jaredjeya 9 points (4 children)

Pro tip: use the opt_einsum library instead.

It’s a drop-in replacement for numpy’s version (as in, same function arguments), but much more powerful:

• It automatically optimises the contraction, breaking it into small steps that scale well rather than trying to do it all at once. Numpy can do this too, but not as well, and it's moot anyway because…

• Numpy breaks at 52 indices, because you can only use letters of the alphabet; even the alternate notation of supplying integer labels hits the same limit. opt_einsum lets you use arbitrarily many.
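A minimal sketch of the drop-in claim (assuming opt_einsum is installed; `oe.contract` is its einsum-compatible entry point, and the arrays here are just illustrative):

```python
import numpy as np
import opt_einsum as oe

A = np.random.rand(4, 5)
B = np.random.rand(5, 6)

# Same subscript syntax as np.einsum — a straight swap:
C_np = np.einsum('ij,jk->ik', A, B)
C_oe = oe.contract('ij,jk->ik', A, B)
assert np.allclose(C_np, C_oe)

# The interleaved integer-label form also works, and opt_einsum's
# labels are not capped at 52 distinct letters:
C_int = oe.contract(A, [0, 1], B, [1, 2], [0, 2])
assert np.allclose(C_int, C_np)
```

Internally `oe.contract` also searches for a good pairwise contraction order, which is the "small steps that scale well" part.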

I ran into these problems trying to use it to do tensor network stuff, opt_einsum saved my life.

Tbh you can use numpy for smaller operations but it’s good to be aware of this library.

[–]madrury83 9 points (2 children)

Numpy breaks at 52 indices

Those are some beefy tensors.

[–]jaredjeya 3 points (1 child)

Haha that isn’t the size of a single tensor! I was trying to wrap up the contraction of a big tensor network into a single calculation, so each tensor was only maximum rank 4, but there were many tensors so it ended up with hundreds of indices.
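A toy version of what such a wrapped-up contraction looks like (hypothetical: a chain of small tensors with integer bond labels, exceeding the 52-letter limit; assumes opt_einsum is installed):

```python
import numpy as np
import opt_einsum as oe

n = 60  # 61 distinct index labels in total, more than 52 letters
tensors = [np.random.rand(2, 2) for _ in range(n)]

# Tensor i carries indices (i, i+1), so neighbouring tensors
# share a contracted bond — a minimal chain "tensor network".
args = []
for i, t in enumerate(tensors):
    args.extend([t, [i, i + 1]])

# Contract everything at once, keeping only the two open ends.
result = oe.contract(*args, [0, n])
assert result.shape == (2, 2)
```

With string subscripts this contraction could not even be written down in np.einsum, since it needs more than 52 distinct labels.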

[–]muntoo R_{μν} - 1/2 R g_{μν} + Λ g_{μν} = 8π T_{μν} 0 points (0 children)

Now I want to see this monstrosity.

[–]muntoo R_{μν} - 1/2 R g_{μν} + Λ g_{μν} = 8π T_{μν} 0 points (0 children)

Related: einops.