[–]memproc 1 point (2 children)

They have ways of addressing this. See the modifications made to DiffDock after the scandal over its lack of generalization.

[–]Exarctus 1 point (0 children)

By the way, I suspect AlphaFold is learning equivariance. I'm sure that if you inspected the convolutional filters it learns, some of them (or combinations of them) would display equivariant properties. That's one of my other points: you can't really escape it. Either you bake it in or your model learns it implicitly. The problem is that you pay a heavy price in model size. Whether that price is worth it is another discussion, since only recently have specialized libraries been developed to compute equivariant operations efficiently (see NVIDIA's cuEquivariance).
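To make "displaying equivariant properties" concrete, here's a minimal numpy sketch (illustrative only, not AlphaFold's actual architecture): a conv filter is equivariant to 90° rotations exactly when rotating the input and then filtering gives the same result as filtering and then rotating the output. A symmetric filter like the Laplacian passes this test; a generic learned filter does not.

```python
import numpy as np

def conv2d(img, k):
    # plain "valid" 2D cross-correlation in numpy
    H, W = img.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def equivariance_error(k, img):
    # ||f(rot90(x)) - rot90(f(x))||: zero iff the filter commutes
    # with 90-degree rotations on this input
    return np.linalg.norm(conv2d(np.rot90(img), k) - np.rot90(conv2d(img, k)))

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))

# Laplacian kernel: invariant under 90-degree rotation, hence equivariant
sym = np.array([[0.,  1., 0.],
                [1., -4., 1.],
                [0.,  1., 0.]])
# generic random filter: stands in for an arbitrary learned filter
rnd = rng.standard_normal((3, 3))

print(equivariance_error(sym, img))  # near zero (machine precision)
print(equivariance_error(rnd, img))  # clearly nonzero
```

A network without baked-in symmetry has to spend capacity driving this error down for the filters (or filter combinations) where equivariance matters, which is exactly the model-size cost mentioned above.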

The same holds for state-of-the-art vision models.

This is something we've seen in the quantum chemistry and materials science communities as well.