A reader’s guide to Govind Menon’s research program on manifold learning.

In order to develop a concrete understanding of Govind Menon's
research program on manifold learning, I have recently been exploring
the UMAP algorithm with several other mathematicians. While I have often
found the original UMAP paper lacking, the 2021 paper *On UMAP's
true loss function* has been illuminating. Moreover, I have found it
useful not only to zoom out but also to consider the manifold learning
problem from complementary perspectives.
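As a concrete reference point for these discussions, the local, graph-building step of UMAP can be sketched in a few lines of NumPy: for each point, find its k nearest neighbors, record the distance ρ to the closest one, solve for a bandwidth σ so that the smoothed weights sum to log₂(k), and assign fuzzy membership weights exp(−(d − ρ)/σ). This is a simplified sketch of the construction described in the UMAP paper, not the reference implementation:

```python
import numpy as np

def fuzzy_knn_graph(X, k=5, n_iters=64):
    """Simplified sketch of UMAP's fuzzy k-NN graph construction."""
    n = X.shape[0]
    # Pairwise Euclidean distances (dense, fine for a toy example).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.zeros((n, n))
    target = np.log2(k)  # desired "effective" neighborhood size
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]    # k nearest neighbors, excluding i
        d = D[i, nbrs]                      # sorted ascending
        rho = d[0]                          # distance to the closest neighbor
        lo, hi = 1e-6, 1e3 * (d[-1] + 1.0)  # bracket for the bandwidth sigma
        for _ in range(n_iters):            # binary search: weights sum to log2(k)
            sigma = 0.5 * (lo + hi)
            s = np.exp(-np.maximum(d - rho, 0.0) / sigma).sum()
            if s > target:
                hi = sigma
            else:
                lo = sigma
        W[i, nbrs] = np.exp(-np.maximum(d - rho, 0.0) / sigma)
    # Symmetrize with the probabilistic t-conorm (fuzzy set union).
    return W + W.T - W * W.T

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))  # toy point cloud
W = fuzzy_knn_graph(X, k=5)
```

The resulting symmetric weight matrix is what UMAP's optimization stage then tries to reproduce with a low-dimensional layout.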

What follows is a collection of publications that offer useful perspectives on the challenging problem of manifold learning, which subsumes, in principle, all of machine learning.

Carlsson, Gunnar. "Topology and Data." Bulletin of the American Mathematical Society, vol. 46, no. 2, 2009, pp. 255–308.

Menon, Govind. "Information Theory and the Embedding Problem for Riemannian Manifolds." International Conference on Geometric Science of Information, 2021.

Linderman, George C., Gal Mishne, Yuval Kluger, and Stefan Steinerberger. "Randomized Near Neighbor Graphs, Giant Components, and Applications in Data Science." arXiv, 2017.

Pelillo, Marcello, Ismail Elezi, and Marco Fiorucci. "Revealing Structure in Large Graphs: Szemerédi's Regularity Lemma and Its Use in Pattern Recognition." Pattern Recognition Letters, 2016.

Kahle, Matthew. "Random Simplicial Complexes." Handbook of Discrete & Computational Geometry, 2016.

Caticha, Ariel. "Geometry from Information Geometry." MaxEnt 2015, the 35th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, 2015.

Loh, Lay Kuan, and Mihovil Bartulovic. "Efficient Coding Hypothesis and an Introduction to Information Theory." 2014.

Chalk, Matthew, Olivier Marre, and Gašper Tkačik. "Toward a Unified Theory of Efficient, Predictive, and Sparse Coding." PNAS, 2017.

Candès, Emmanuel, Justin Romberg, and Terence Tao. "Stable Signal Recovery from Incomplete and Inaccurate Measurements." arXiv, 2005.

Crane, Keenan. "Discrete Differential Geometry: An Applied Introduction." 2023.

While this collection of papers isn't exhaustive, it has provided me with a number of useful and complementary perspectives from which to explore the UMAP algorithm and manifold learning in general. One emerging perspective is that of stable signal recovery through efficient coding of incomplete and inaccurate measurements.
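In the spirit of Candès, Romberg, and Tao, this perspective can be made tangible with a toy compressed-sensing experiment: recover a sparse signal x from m < n random linear measurements y = Ax by solving the basis pursuit program min ‖x‖₁ subject to Ax = y. Below is a minimal sketch using SciPy's linear-programming solver; the split x = u − v with u, v ≥ 0 is the standard LP reformulation, and the dimensions and seed are illustrative choices, not values from any of the papers above:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, s = 60, 30, 3               # ambient dimension, measurements, sparsity
x_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_true[support] = rng.normal(size=s)

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian measurement matrix
y = A @ x_true                            # incomplete measurements (m < n)

# Basis pursuit: min ||x||_1  s.t.  A x = y, as an LP with x = u - v, u, v >= 0.
c = np.ones(2 * n)                        # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                 # A(u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print(np.max(np.abs(x_hat - x_true)))    # recovery error
```

With m well above the s·log(n/s) scaling suggested by the compressed-sensing literature, the ℓ₁ program typically recovers the sparse signal exactly from half as many measurements as unknowns.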

I believe that a practical theory built around this perspective will lead to important advances in theoretical neuroscience.

For attribution, please cite this work as

Rocke (2023, April 16). Kepler Lounge: A reader's guide to Manifold Learning. Retrieved from keplerlounge.com

BibTeX citation

@misc{rocke2023a,
  author = {Rocke, Aidan},
  title = {Kepler Lounge: A reader's guide to Manifold Learning},
  url = {keplerlounge.com},
  year = {2023}
}