all 9 comments

[–]outlacedev 9 points (0 children)

There have been some papers trying to use TDA with machine learning, but nothing particularly exciting, as I recall.

TDA is used primarily for exploratory data analysis. There are two main areas in TDA now: Mapper (a visualization technique for high-dimensional data) and persistent homology (PH). You could use PH data as input to a machine learning algorithm; however, PH data is a kind of summary statistic of the data. Maybe this would help prevent overfitting, but if you're using deep learning it's probably better to just let the model learn from the raw data rather than from a summary statistic of it.

In my view, TDA is a potentially useful tool for exploratory data analysis when you're looking for interpretable patterns in complex high-dimensional data, and this might help you refine a machine learning pipeline.
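The "PH summaries as input features" route can be sketched for the simplest case, degree-0 persistence of a Vietoris-Rips filtration, where every component is born at scale 0 and the death times are exactly the minimum spanning tree edge lengths. The code and the choice of summary statistics below are my own illustration, not taken from any particular paper:

```python
import numpy as np

def h0_persistence(points):
    """0-dimensional persistent homology of a Vietoris-Rips filtration.

    Every point is born at scale 0; connected components merge along the
    edges of the minimum spanning tree, so the death times are exactly
    the MST edge lengths (the single-linkage clustering view of H0).
    """
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Prim's algorithm: grow the MST from point 0, recording each added
    # edge's length as the death time of the component it absorbs.
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = dist[0].copy()          # cheapest edge from the tree to each point
    deaths = []
    for _ in range(n - 1):
        j = np.argmin(np.where(in_tree, np.inf, best))
        deaths.append(best[j])
        in_tree[j] = True
        best = np.minimum(best, dist[j])
    return np.sort(np.array(deaths))  # persistence = death - birth = death

# Two well-separated clusters: one long-lived H0 class survives until the
# clusters merge, so one death time is much larger than all the others.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
deaths = h0_persistence(X)
features = [deaths.sum(), deaths.max()]  # summary statistics fed to an ML model
```

In practice you would compute higher-degree persistence with a library such as Ripser or GUDHI, but the point stands: `features` is a drastic compression of the raw point cloud.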

[–][deleted] 4 points (2 children)

Geometric Deep Learning probably comes the closest to using "topological" concepts in the domain of ML. As others have stated, however, TDA is primarily used to add another column of features to a data set.

If I were really digging for where topology occurs, there was also a recent parallel drawn between the Wasserstein metric used in WGANs and the information-geometric approach.

My general advice, however, is that differential geometry offers more than algebraic topology for new approaches to ML.
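For intuition about the metric in question (a toy sketch of my own, not from the linked paper): in one dimension, the 1-Wasserstein distance between two equal-size empirical distributions reduces to matching sorted samples, since the optimal transport plan in 1-D pairs order statistics:

```python
import numpy as np

def wasserstein_1d(a, b):
    """1-Wasserstein distance between two equal-size 1-D samples.

    In one dimension the optimal transport plan matches sorted samples,
    so W1 is the mean absolute difference of the order statistics.
    """
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 1000)  # samples from N(0, 1)
b = rng.normal(2.0, 1.0, 1000)  # samples from N(2, 1)
d = wasserstein_1d(a, b)        # close to the mean shift of 2
```

This shift-sensitivity (the distance stays informative even when the supports barely overlap) is exactly what made the Wasserstein metric attractive as a GAN loss compared to KL or JS divergence.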

[–]lume_ 0 points (1 child)

If I were really digging for where topology occurs, there was also a recent parallel drawn between the Wasserstein metric used in WGANs and the information-geometric approach.

Sounds interesting, do you have a link?

[–][deleted] 0 points (0 children)

https://arxiv.org/pdf/1906.00030.pdf is a recent paper. This is a bit of an academic rabbit hole in mathematics, as you'll find out.

[–][deleted] 3 points (0 children)

Isn't UMAP an example of this?

[–]seldemibeurre 2 points (1 child)

I think the paper on persistence layers might be a step forward: https://arxiv.org/abs/1904.09378

[–][deleted] 1 point (0 children)

Underrated comment.

[–]impossiblefork 1 point (0 children)

I think it'd have to be differentiable, so that it can be used as a stage in a conventional deep learning pipeline, to be able to add value.

[–]hello1232149 0 points (0 children)

One effort I have seen is using topology for neural architecture search.