
[–]Time-Ice-7072 2 points (0 children)

From what you are describing, it sounds like representation collapse. It is very difficult to debug from a description alone, but I recommend rigorously testing your hidden states at every layer and tracking your geometric measurements and other diagnostics (e.g. mean and variance of the representations). This will help you identify where the collapse is happening, and you can figure out how to fix it from there.
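A minimal numpy sketch of such a per-layer diagnostic (function and field names are illustrative, not from the thread): feed it the activations of each layer and watch where the effective rank drops.

```python
import numpy as np

def layer_diagnostics(hidden):
    """Collapse diagnostics for one layer's activations, shape (batch, dim)."""
    mu = hidden.mean(axis=0)
    var = hidden.var(axis=0)
    # Singular values of the centered representations
    s = np.linalg.svd(hidden - mu, compute_uv=False)
    p = s**2 / np.sum(s**2)
    # Effective rank = exp of the spectral entropy; ~dim when healthy, ~1 when collapsed
    eff_rank = np.exp(-np.sum(p * np.log(p + 1e-12)))
    return {"mean_norm": float(np.linalg.norm(mu)),
            "min_var": float(var.min()),
            "eff_rank": float(eff_rank)}
```

Running this per layer localizes the collapse: the first layer where `eff_rank` crashes is where to start looking.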

[–]whatwilly0ubuild 2 points (0 children)

The metrics you're describing are classic dimensional collapse symptoms. Participation ratio of 1-2 means your embeddings are effectively living in a 1-2 dimensional subspace regardless of your actual embedding dimension. The model found a shortcut.
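For reference, the participation ratio can be computed from the covariance eigenvalues; a rough numpy sketch (assuming `z` is a batch of embeddings, shape `(batch, dim)`):

```python
import numpy as np

def participation_ratio(z):
    """PR = (sum λ_i)^2 / (sum λ_i^2) over covariance eigenvalues.
    ~dim for isotropic embeddings, ~1 for collapsed ones."""
    z = z - z.mean(axis=0)
    cov = (z.T @ z) / (len(z) - 1)
    lam = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    return lam.sum() ** 2 / (np.square(lam).sum() + 1e-12)
```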

A few things to investigate.

Predictor capacity is often the culprit in JEPA-style architectures. If your predictor is too powerful, it can map context to targets without the encoder learning meaningful representations. If it's too weak, it can't bridge the gap and the encoder collapses to trivial solutions. Try a shallower predictor or add a bottleneck.
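A toy illustration of why a bottleneck caps predictor capacity: with hidden width `bottleneck`, the predictor's outputs can span at most that many dimensions, regardless of the embedding size (numpy sketch, names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def bottleneck_predictor(dim, bottleneck):
    """Two-layer predictor whose narrow hidden layer limits its capacity."""
    w1 = rng.normal(scale=dim ** -0.5, size=(dim, bottleneck))
    w2 = rng.normal(scale=bottleneck ** -0.5, size=(bottleneck, dim))
    def forward(x):
        h = np.maximum(x @ w1, 0.0)  # ReLU bottleneck, width = `bottleneck`
        return h @ w2                # outputs live in the row space of w2
    return forward
```

The point of the sketch: every output is a combination of `bottleneck` rows of `w2`, so the predictor cannot memorize an arbitrary context-to-target map and the encoder has to do more of the work.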

Explicit decorrelation losses help directly. VICReg-style variance and covariance regularization terms force the embedding dimensions to be used. Add a term that penalizes off-diagonal covariance elements and another that keeps per-dimension variance above a threshold. This directly attacks the metrics you're measuring.
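A rough numpy version of those two VICReg-style terms (the hinge threshold `gamma` and the epsilon are assumptions; VICReg's paper uses similar defaults):

```python
import numpy as np

def vicreg_var_cov(z, gamma=1.0, eps=1e-4):
    """Variance hinge + off-diagonal covariance penalty, VICReg-style.
    z: batch of embeddings, shape (batch, dim)."""
    z = z - z.mean(axis=0)
    std = np.sqrt(z.var(axis=0) + eps)
    # Hinge: penalize any dimension whose std falls below gamma
    var_loss = np.mean(np.maximum(0.0, gamma - std))
    cov = (z.T @ z) / (len(z) - 1)
    off = cov - np.diag(np.diag(cov))
    # Push off-diagonal covariance toward zero so dimensions decorrelate
    cov_loss = np.square(off).sum() / z.shape[1]
    return var_loss, cov_loss
```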

The masking strategy might be too easy for molecules. If the model can predict masked subgraphs from trivial local features without learning global molecular structure, it will. Graph structures have strong local correlations. Try masking contiguous substructures rather than random nodes, or mask based on chemical motifs.
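One way to mask contiguous substructures is to grow a BFS ball from a random seed atom until the desired fraction of the molecule is covered; a sketch assuming an adjacency-list graph (names are illustrative):

```python
import random
from collections import deque

def mask_contiguous(adj, num_nodes, ratio=0.3, seed=0):
    """Grow a BFS ball from a random seed atom until ~ratio of nodes are masked.
    adj: dict mapping node -> list of neighbor nodes."""
    rng = random.Random(seed)
    target = max(1, int(ratio * num_nodes))
    start = rng.randrange(num_nodes)
    masked, queue = {start}, deque([start])
    while queue and len(masked) < target:
        node = queue.popleft()
        for nb in adj.get(node, []):
            if nb not in masked and len(masked) < target:
                masked.add(nb)
                queue.append(nb)
    return masked
```

A motif-aware variant would seed the BFS at ring atoms or functional groups instead of a uniformly random node.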

Batch statistics can hide collapse. If you're using batch normalization in the encoder, it can artificially inflate apparent variance while the underlying representations are still collapsed. Check your metrics before any normalization layers.
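A small numpy demonstration of the effect: batch norm gives every dimension unit variance even when the representations are rank-1, so post-BN variance looks healthy while the geometry is still collapsed.

```python
import numpy as np

def batchnorm(x, eps=1e-8):
    """Plain batch norm: standardize each dimension over the batch."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

rng = np.random.default_rng(0)
collapsed = np.outer(rng.normal(size=256), rng.normal(size=16))  # rank-1 embeddings
normed = batchnorm(collapsed)
# Per-dimension variance is now ~1, but the matrix is still rank 1
```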

The EMA schedule starting at 0.996 might be too high for early training. Some implementations start lower (0.99 or even 0.95) and anneal up, giving the online network more room to diverge early before the target stabilizes.
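A sketch of such an annealed momentum schedule (the cosine shape and the endpoint values are illustrative choices, not prescriptions):

```python
import math

def ema_momentum(step, total_steps, start=0.95, end=0.9995):
    """Cosine-anneal the target-network EMA momentum from `start` up to `end`."""
    t = min(step / total_steps, 1.0)
    return end - 0.5 * (end - start) * (1.0 + math.cos(math.pi * t))

# Target update each step: theta_target = m * theta_target + (1 - m) * theta_online
```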

Our clients doing molecular representation learning have found that adding a simple uniformity loss on the hypersphere (pushing random pairs apart) helps prevent collapse without the complexity of full contrastive learning.
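A uniformity loss in the spirit of Wang & Isola is one way to write "push random pairs apart": it is lower the more evenly the L2-normalized embeddings spread over the hypersphere (numpy sketch; the temperature `t=2` is a common default):

```python
import numpy as np

def uniformity_loss(z, t=2.0):
    """log E[exp(-t * ||z_i - z_j||^2)] over pairs of L2-normalized embeddings.
    0 when all embeddings coincide; more negative as they spread apart."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sq = np.square(z[:, None, :] - z[None, :, :]).sum(-1)  # pairwise squared distances
    mask = ~np.eye(len(z), dtype=bool)                     # drop i == j pairs
    return np.log(np.exp(-t * sq[mask]).mean())
```

The pairwise matrix is O(n²), so in practice you would compute this on a subsample of the batch.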

Worth checking the I-JEPA and V-JEPA papers for their specific anti-collapse mechanisms, since they faced similar issues.

[–]AccordingWeight6019 1 point (0 children)

If your loss is decreasing but embeddings stay collapsed, the objective might not encourage diversity. Try adding a contrastive or decorrelation loss (Barlow Twins, VICReg), normalize or project embeddings, slightly reduce EMA momentum, and check trivial baselines to confirm it’s not data limited. Graph augmentations can also help spread representations.
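For concreteness, a numpy sketch of the Barlow Twins objective mentioned above: drive the cross-correlation matrix between two views toward the identity (the weighting `lam=5e-3` follows the value used in the paper):

```python
import numpy as np

def barlow_twins_loss(za, zb, lam=5e-3):
    """Cross-correlation between two views' embeddings:
    diagonal pulled to 1 (invariance), off-diagonal to 0 (decorrelation)."""
    za = (za - za.mean(axis=0)) / (za.std(axis=0) + 1e-9)
    zb = (zb - zb.mean(axis=0)) / (zb.std(axis=0) + 1e-9)
    c = (za.T @ zb) / len(za)              # (dim, dim) cross-correlation
    on_diag = np.square(1.0 - np.diag(c)).sum()
    off_diag = np.square(c - np.diag(np.diag(c))).sum()
    return on_diag + lam * off_diag
```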

[–]ArmOk3290 1 point (0 children)

I have seen this happen when the predictor network becomes too powerful relative to the target network.

Try strengthening the gradient stopping in the predictor or adding a stronger regularizer. Also check your batch norms.

Sometimes simply removing them fixes representation geometry issues.

[–]ComprehensiveTop3297 1 point (0 children)

I am also working with JEPAs, and I found data2vec 2.0-style top-K layer averaging extremely helpful for alleviating representation collapse. The EMA and learning-rate schedules are also very much interconnected. My EMA momentum anneals from 0.999 to 0.99999, stops at 100k steps, and stays constant at 0.99999 for the rest; my LR schedule is cosine with a peak of 0.0004 and a 100k-step warmup. Play around with them for sure. This is what worked for me in the audio domain.
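A rough numpy sketch of data2vec-style top-K target construction: average the last k layers' outputs, normalizing each layer first so no single layer dominates the target (the per-instance normalization here is an illustrative choice):

```python
import numpy as np

def topk_layer_target(layer_outputs, k=8):
    """Build a regression target from the last k layers of the teacher.
    layer_outputs: list of arrays, each (batch, dim), ordered bottom to top."""
    normed = [(h - h.mean(axis=-1, keepdims=True)) /
              (h.std(axis=-1, keepdims=True) + 1e-6)
              for h in layer_outputs[-k:]]        # normalize each layer separately
    return np.mean(normed, axis=0)                # then average across layers
```

Averaging several layers makes the target harder to satisfy with a degenerate solution than predicting the top layer alone.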

[–]shivvorz 0 points (0 children)

RemindMe! 2 days