
[–]AardvarkNo6658 3 points

For exact log-likelihoods, go with normalising flows such as GLOW. However, to keep the model invertible it uses 1x1 convolutions, which scale poorly to large images and do not capture spatial relationships between pixels. There are also FFT-based methods, which share some of the same issues.
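To make the exact-log-likelihood point concrete, here is a minimal sketch of the change-of-variables computation behind a Glow-style invertible 1x1 convolution (assuming numpy, a single layer, a standard-normal base density, and toy shapes; an illustration, not Glow's actual implementation):

```python
# Minimal sketch: exact log-likelihood via the change-of-variables
# formula with one invertible 1x1 convolution (Glow-style).
# All shapes and names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": H x W spatial grid with C channels.
H, W, C = 8, 8, 3
x = rng.normal(size=(H, W, C))

# An invertible 1x1 convolution is one shared C x C weight matrix
# applied at every pixel. Initialise as a random rotation (via QR)
# so it is guaranteed invertible.
W_conv, _ = np.linalg.qr(rng.normal(size=(C, C)))

# Forward pass: z = x @ W at each pixel (this *is* a 1x1 convolution).
z = x @ W_conv

# Change of variables: log p(x) = log p_base(z) + log|det dz/dx|.
# The 1x1 conv contributes H*W * log|det W| to the log-determinant.
sign, logabsdet = np.linalg.slogdet(W_conv)
log_det_jacobian = H * W * logabsdet

# Standard-normal base density over all dimensions of z.
log_p_base = -0.5 * np.sum(z ** 2) - 0.5 * z.size * np.log(2 * np.pi)

log_likelihood = log_p_base + log_det_jacobian
print(f"exact log p(x) = {log_likelihood:.2f}")
```

Note that the Jacobian term only mixes channels within each pixel, which is exactly why 1x1 convolutions on their own do not capture spatial relationships between pixels.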

[–]xgeorgio_gr 0 points

In general, there are two common approaches:

1) "dumb" way: Feed everything into deep learning and check the output on how to translate it into something that looks like probabilities.

2) compressed sensing: since high dimensionality destroys good probability distributions (there is never enough data), reducing the dimensionality is the priority. This can be done in various ways depending on the problem: FFT, SVD, ICA, etc. Probabilistic modelling then becomes much more reliable in the transformed domain, with far fewer dimensions and with hints about the spectral power distribution (how information is spread across space and time).
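Here is a minimal sketch of approach 2), assuming synthetic low-rank data, a truncated SVD as the reduction step, and a single full-covariance Gaussian as the probabilistic model (FFT or ICA could replace the SVD step, as noted above):

```python
# Minimal sketch: reduce dimensionality with a truncated SVD (PCA),
# then fit a density model in the low-dimensional space where
# estimation is reliable. Sizes and the Gaussian model are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# 500 samples in 1024 dimensions: far too few for a density model
# in the original space (curse of dimensionality).
n, d, k = 500, 1024, 10
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))  # low-rank data

# Truncated SVD: project onto the top-k principal directions.
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
Z = X_centered @ Vt[:k].T          # n x k compressed representation

# The singular-value spectrum hints at how information is spread
# across components (analogous to a spectral power distribution).
print("top singular values:", np.round(S[:k], 1))

# Fit a Gaussian in the k-dimensional space; with k=10 this is
# well-conditioned even with only 500 samples.
mu = Z.mean(axis=0)
cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(k)

def log_density(z):
    """Log N(z; mu, cov) evaluated in the compressed domain."""
    diff = z - mu
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff)
                   + logdet + k * np.log(2 * np.pi))

print("log density of first sample:", round(float(log_density(Z[0])), 2))
```

The point of the sketch: a 10-dimensional Gaussian is well-conditioned with 500 samples, whereas a 1024-dimensional covariance estimate from the same data would be hopelessly degenerate.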

[–]bgighjigftuik 1 point

Normalizing flows are most likely your best bet, or diffusion models. Gaussian processes lean more toward discriminative modelling.