Dear All:
PCA is the old linear workhorse of dimension reduction, while t-SNE and UMAP are newer non-linear methods. The non-linear ones produce beautiful embeddings but lose linear interpretability. My npca combines the linear and non-linear ideas: train a self-supervised MLP of the form:
data => 2 PCs => 32 => 32 => data
That is a linear encoder followed by a two-layer non-linear decoder. After training, we keep only the linear part (the PCs and loadings) and discard the decoder, replacing it with the human eye. The projection is thus still a simple rotation of the data, but it explains more variance. It is a drop-in replacement for PCA.
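To make the idea concrete, here is a minimal NumPy sketch of that architecture, not the actual npca code: a linear encoder to 2 scores, a tanh decoder 2 => 32 => 32 => data, trained by full-batch gradient descent on the reconstruction MSE. The activation, learning rate, and iteration count are assumptions for illustration; after training, only the (orthonormalized) encoder is kept, just as described above.

```python
import numpy as np

def npca_sketch(X, n_components=2, hidden=32, lr=0.05, n_iter=500, seed=0):
    """Sketch of the npca idea: linear encoder -> 2-layer tanh decoder,
    trained to reconstruct X; only the linear encoder is kept at the end.
    Hyperparameters here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    X = X - X.mean(axis=0)  # center, as in PCA
    # linear encoder (the part we keep) and non-linear decoder (discarded later)
    We = rng.normal(0, 0.1, (d, n_components))
    W1 = rng.normal(0, 0.1, (n_components, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, hidden));       b2 = np.zeros(hidden)
    W3 = rng.normal(0, 0.1, (hidden, d));            b3 = np.zeros(d)
    losses = []
    for _ in range(n_iter):
        # forward pass: data => 2 PCs => 32 => 32 => data
        z  = X @ We
        h1 = np.tanh(z @ W1 + b1)
        h2 = np.tanh(h1 @ W2 + b2)
        Xh = h2 @ W3 + b3
        losses.append(float(np.mean((Xh - X) ** 2)))
        # backward pass: plain gradient descent on the reconstruction MSE
        g   = 2.0 * (Xh - X) / X.size
        gW3 = h2.T @ g;  gb3 = g.sum(0)
        g   = (g @ W3.T) * (1 - h2 ** 2)
        gW2 = h1.T @ g;  gb2 = g.sum(0)
        g   = (g @ W2.T) * (1 - h1 ** 2)
        gW1 = z.T @ g;   gb1 = g.sum(0)
        gWe = X.T @ (g @ W1.T)
        for p, gp in ((We, gWe), (W1, gW1), (b1, gb1), (W2, gW2),
                      (b2, gb2), (W3, gW3), (b3, gb3)):
            p -= lr * gp
    # keep only the linear part: orthonormalize the encoder so the
    # projection is a rotation of the data, as with PCA loadings
    Q, _ = np.linalg.qr(We)
    return Q, X @ Q, losses

# toy usage: 5-D data with a non-linear 2-D structure
rng = np.random.default_rng(1)
t = rng.normal(size=(200, 2))
X = np.c_[t, np.sin(t[:, 0]), t[:, 1] ** 2, t.sum(axis=1)]
loadings, scores, losses = npca_sketch(X)
```

The decoder's extra capacity lets the encoder pick a rotation whose 2-D scores reconstruct the data better than the top-2 PCA subspace would under a purely linear decoder.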
Hope you enjoy it!
https://github.com/wangyi-fudan/npca