no code implementations • 14 Nov 2023 • Keaton Hamm, Caroline Moosmüller, Bernhard Schmitzer, Matthew Thorpe
This paper aims to build the theoretical foundations for manifold learning algorithms in the space of absolutely continuous probability measures on a compact and convex subset of $\mathbb{R}^d$, metrized with the Wasserstein-2 distance $W$.
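For reference, the Wasserstein-2 distance between two such measures $\mu$ and $\nu$ on a domain $\Omega \subset \mathbb{R}^d$ can be written as
$$W(\mu,\nu) = \left( \inf_{\pi \in \Pi(\mu,\nu)} \int_{\Omega \times \Omega} |x-y|^2 \, \mathrm{d}\pi(x,y) \right)^{1/2},$$
where $\Pi(\mu,\nu)$ denotes the set of couplings with marginals $\mu$ and $\nu$.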
no code implementations • 25 Jul 2023 • Xinran Liu, Yikun Bai, Huy Tran, Zhanqi Zhu, Matthew Thorpe, Soheil Kolouri
In this paper, we introduce partial transport $\mathrm{L}^{p}$ distances as a new family of metrics for comparing generic signals, benefiting from the robustness of partial transport distances.
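For background (and not necessarily the exact construction used in this paper), one common way to set up partial optimal transport between measures $\mu$ and $\nu$ of possibly unequal mass is to relax the marginal constraints and penalise mass that is not transported, e.g.
$$\inf_{\gamma \ge 0,\ \pi_{1\#}\gamma \le \mu,\ \pi_{2\#}\gamma \le \nu} \ \int_{\Omega\times\Omega} |x-y|^p \, \mathrm{d}\gamma(x,y) + \lambda\big(\mu(\Omega) + \nu(\Omega) - 2\gamma(\Omega\times\Omega)\big),$$
where $\lambda > 0$ controls the cost of creating or destroying mass rather than moving it.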
no code implementations • 6 Sep 2022 • Nicolás García Trillos, Ryan Murray, Matthew Thorpe
In the (special) smoothing spline problem one considers a variational problem with a quadratic data fidelity penalty and Laplacian regularisation.
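Schematically, and only as a sketch with a hypothetical regularisation weight $\tau > 0$ on a domain $\Omega$, such a functional takes the form
$$\min_{u} \ \frac{1}{n}\sum_{i=1}^{n} \big(u(x_i) - y_i\big)^2 + \tau \int_{\Omega} |\Delta u(x)|^2 \, \mathrm{d}x,$$
where the $(x_i, y_i)$ are the data, the first term is the quadratic fidelity penalty, and the Laplacian term penalises roughness; the precise scalings are the subject of the paper.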
no code implementations • 16 Jun 2022 • Tolou Shadbahr, Michael Roberts, Jan Stanczuk, Julian Gilbey, Philip Teare, Sören Dittmer, Matthew Thorpe, Ramon Vinas Torne, Evis Sala, Pietro Lio, Mishal Patel, AIX-COVNET Collaboration, James H. F. Rudd, Tuomas Mirtti, Antti Rannikko, John A. D. Aston, Jing Tang, Carola-Bibiane Schönlieb
Classifying samples in incomplete datasets is a common aim for machine learning practitioners, but is non-trivial.
no code implementations • ICLR 2022 • Matthew Thorpe, Tan Minh Nguyen, Hedi Xia, Thomas Strohmer, Andrea Bertozzi, Stanley Osher, Bao Wang
We propose GRAph Neural Diffusion with a source term (GRAND++) for graph deep learning with a limited number of labeled nodes, i.e., a low labeling rate.
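Schematically (as a sketch of the general idea only, not the paper's exact formulation), GRAND-style models evolve node features $X(t)$ by a diffusion ODE, and GRAND++ augments the dynamics with a source term supported on the labeled nodes, e.g.
$$\frac{\partial X(t)}{\partial t} = \big(A(X(t)) - I\big)\,X(t) + S,$$
where $A(\cdot)$ is a learned (attention-based) diffusivity matrix and $S$ encodes the labeled-node information; see the paper for the precise form of the source term.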
no code implementations • 22 Apr 2021 • Matthew Thorpe, Bao Wang
Graph Laplacian (GL)-based semi-supervised learning is one of the most widely used approaches for classifying nodes in a graph.
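As a minimal illustration of the classical GL approach (standard Laplace/harmonic label propagation, not this paper's specific contribution), the labels are extended to unlabeled nodes by solving a graph Laplace equation with the labeled nodes acting as boundary conditions. The function name and arguments below are hypothetical:

```python
import numpy as np

def laplace_learning(W, labeled_idx, labels):
    """Harmonic label propagation: solve L u = 0 on the unlabeled nodes,
    with u fixed to the given labels on the labeled nodes.

    W           : (n, n) symmetric non-negative weight matrix of the graph
    labeled_idx : indices of labeled nodes
    labels      : (len(labeled_idx), k) one-hot (or real-valued) label matrix
    """
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                       # unnormalised graph Laplacian
    unlabeled_idx = np.setdiff1d(np.arange(n), labeled_idx)

    u = np.zeros((n, labels.shape[1]))
    u[labeled_idx] = labels
    # Harmonic extension: solve L_UU u_U = -L_UL u_L for the unlabeled block
    L_UU = L[np.ix_(unlabeled_idx, unlabeled_idx)]
    L_UL = L[np.ix_(unlabeled_idx, labeled_idx)]
    u[unlabeled_idx] = np.linalg.solve(L_UU, -L_UL @ labels)
    return u.argmax(axis=1)                              # predicted class for each node
```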
1 code implementation • 17 Feb 2021 • Tianji Cai, Junyi Cheng, Bernhard Schmitzer, Matthew Thorpe
Working with the local linearization and the corresponding embeddings gives access to the advantages of the Euclidean setting, such as faster computations and a plethora of data analysis tools, whilst approximately retaining the descriptive power of the Hellinger–Kantorovich metric.
no code implementations • 1 Jan 2021 • Matthew Thorpe, Bao Wang
Within a certain adversarial perturbation regime, we prove that GL with a $k$-nearest neighbor graph is intrinsically more robust than the $k$-nearest neighbor classifier.
no code implementations • 23 Sep 2020 • Oliver M. Crook, Mihai Cucuringu, Tim Hurst, Carola-Bibiane Schönlieb, Matthew Thorpe, Konstantinos C. Zygalakis
The transportation $\mathrm{L}^p$ distance, denoted $\mathrm{TL}^p$, has been proposed as a generalisation of Wasserstein $\mathrm{W}^p$ distances motivated by the property that it can be applied directly to colour or multi-channelled images, as well as multivariate time-series without normalisation or mass constraints.
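For signals $f$ and $g$ with associated measures $\mu$ and $\nu$, the $\mathrm{TL}^p$ distance can be written as
$$ d_{\mathrm{TL}^p}\big((f,\mu),(g,\nu)\big)^p = \inf_{\pi \in \Pi(\mu,\nu)} \int \big(|x-y|^p + |f(x)-g(y)|^p\big) \, \mathrm{d}\pi(x,y), $$
i.e. an optimal transport problem whose cost couples both the base locations and the signal values.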
no code implementations • 14 Aug 2020 • Michael Roberts, Derek Driggs, Matthew Thorpe, Julian Gilbey, Michael Yeung, Stephan Ursprung, Angelica I. Aviles-Rivero, Christian Etmann, Cathal McCague, Lucian Beer, Jonathan R. Weir-McCall, Zhongzhao Teng, Effrossyni Gkrania-Klotsas, James H. F. Rudd, Evis Sala, Carola-Bibiane Schönlieb
Machine learning methods offer great promise for fast and accurate detection and prognostication of COVID-19 from standard-of-care chest radiographs (CXR) and computed tomography (CT) images.
1 code implementation • ICML 2020 • Jeff Calder, Brendan Cook, Matthew Thorpe, Dejan Slepcev
We propose a new framework, called Poisson learning, for graph based semi-supervised learning at very low label rates.
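In outline, rather than imposing the labels as hard boundary conditions, Poisson learning places sources and sinks at the labeled nodes and solves a graph Poisson equation. With labeled pairs $(x_j, y_j)$, $j = 1,\dots,m$, and label mean $\bar{y} = \frac{1}{m}\sum_{j=1}^m y_j$, the problem takes the form (up to the normalisation conventions of the paper)
$$ \mathcal{L} u(x_i) = \sum_{j=1}^{m} (y_j - \bar{y})\,\delta_{ij}, \qquad \sum_{i} d_i\, u(x_i) = 0, $$
where $\mathcal{L}$ is the graph Laplacian and $d_i$ are the node degrees.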
no code implementations • 4 Jun 2020 • Jeff Calder, Dejan Slepčev, Matthew Thorpe
The proofs of our well-posedness results use the random walk interpretation of Laplacian learning and PDE arguments, while the proofs of the ill-posedness results use $\Gamma$-convergence tools from the calculus of variations.
no code implementations • 20 Apr 2020 • Nicolas Garcia Trillos, Ryan Murray, Matthew Thorpe
In this work we study statistical properties of graph-based clustering algorithms that rely on the optimization of balanced graph cuts, the main example being the optimization of Cheeger cuts.
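For instance, one common form of the Cheeger cut of a graph with vertex set $V$ and edge weights $w_{ij}$ balances the cut value against the size of the smaller side:
$$ \min_{\emptyset \ne A \subsetneq V} \ \frac{\mathrm{Cut}(A, A^c)}{\min\big(|A|, |A^c|\big)}, \qquad \mathrm{Cut}(A, A^c) = \sum_{i \in A,\, j \in A^c} w_{ij}. $$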
no code implementations • 23 Sep 2019 • Oliver M. Crook, Tim Hurst, Carola-Bibiane Schönlieb, Matthew Thorpe, Konstantinos C. Zygalakis
In this paper we extend the labels by minimising the constrained discrete $p$-Dirichlet energy.
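Concretely, on a weighted graph with weights $w_{ij}$, the constrained problem is of the form
$$ \min_{u} \ \sum_{i,j} w_{ij}\, |u_i - u_j|^p \quad \text{subject to } u_i = \ell_i \text{ for all labeled nodes } i, $$
where $\ell_i$ are the given labels (up to the precise weighting and normalisation used in the paper).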
no code implementations • CVPR 2018 • Serim Park, Matthew Thorpe
With experiments using 4 different datasets, we show that the generative tangent plane model in the optimal transport (OT) manifold can be learned from a small number of images and can be used to create infinitely many 'unseen' images.
no code implementations • 23 May 2018 • Matthew M. Dunlop, Dejan Slepčev, Andrew M. Stuart, Matthew Thorpe
Scalings in which the graph Laplacian approaches a differential operator in the large graph limit are used to develop an understanding of a number of algorithms for semi-supervised learning; in particular, the extensions to this graph setting of the probit algorithm, level set methods, and kriging are studied.
no code implementations • 19 Jul 2017 • Dejan Slepčev, Matthew Thorpe
The task is to assign real-valued labels to a set of $n$ sample points, provided a small training subset of $N$ labeled points.
no code implementations • 27 Sep 2016 • Matthew Thorpe, Serim Park, Soheil Kolouri, Gustavo K. Rohde, Dejan Slepčev
Transport based distances, such as the Wasserstein distance and earth mover's distance, have been shown to be an effective tool in signal and image analysis.
no code implementations • 15 Sep 2016 • Soheil Kolouri, Serim Park, Matthew Thorpe, Dejan Slepčev, Gustavo K. Rohde
Transport-based techniques for signal and data analysis have received increased attention recently.