no code implementations • 13 Dec 2023 • Ilana Sebag, Muni Sreenivas Pydi, Jean-Yves Franceschi, Alain Rakotomamonjy, Mike Gartrell, Jamal Atif, Alexandre Allauzen
Safeguarding privacy in sensitive training data is paramount, particularly in the context of generative modeling.
1 code implementation • NeurIPS 2023 • Jean-Yves Franceschi, Mike Gartrell, Ludovic Dos Santos, Thibaut Issenhuth, Emmanuel de Bézenac, Mickaël Chen, Alain Rakotomamonjy
Particle-based deep generative models, such as gradient flows and score-based diffusion models, have recently gained traction thanks to their striking performance.
1 code implementation • 22 Feb 2023 • Song Duong, Alberto Lumbreras, Mike Gartrell, Patrick Gallinari
Our model is designed to handle the tasks of D2T and T2D jointly.
1 code implementation • 1 Jul 2022 • Insu Han, Mike Gartrell, Elvis Dohmatob, Amin Karbasi
In this work, we develop a scalable MCMC sampling algorithm for $k$-NDPPs with low-rank kernels, thus enabling runtime that is sublinear in $n$.
2 code implementations • ICLR 2022 • Insu Han, Mike Gartrell, Jennifer Gillenwater, Elvis Dohmatob, Amin Karbasi
However, existing work leaves open the question of scalable NDPP sampling.
no code implementations • 26 Jul 2021 • Imad Aouali, Sergey Ivanov, Mike Gartrell, David Rohde, Flavian Vasile, Victor Zaytsev, Diego Legrand
In this paper, we formulate several Bayesian models that incorporate the reward signal (Reward model), the rank signal (Rank model), or both (Full model), for non-personalized slate recommendation.
no code implementations • NeurIPS Workshop LMCA 2020 • Lucas Anquetil, Mike Gartrell, Alain Rakotomamonjy, Ugo Tanielian, Clément Calauzènes
Through an evaluation on a real-world dataset, we show that our Wasserstein learning approach provides significantly improved predictive performance on a generative task compared to DPPs trained using MLE.
2 code implementations • ICLR 2021 • Mike Gartrell, Insu Han, Elvis Dohmatob, Jennifer Gillenwater, Victor-Emmanuel Brunel
Determinantal point processes (DPPs) have attracted significant attention in machine learning for their ability to model subsets drawn from a large item collection.
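To make the DPP definition concrete, here is a minimal numpy sketch (an illustrative kernel, not code from any of the papers above): a DPP with kernel L assigns each subset S of the ground set a probability det(L_S) / det(L + I), where L_S is the principal submatrix indexed by S.

```python
import itertools

import numpy as np

# A symmetric PSD kernel L over a ground set of n = 3 items,
# built as B @ B.T so it is PSD by construction (illustrative values).
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
L = B @ B.T
n = L.shape[0]

# A DPP assigns P(S) = det(L_S) / det(L + I) to every subset S.
Z = np.linalg.det(L + np.eye(n))

def subset_prob(S):
    """Probability that the DPP with kernel L draws exactly subset S."""
    if not S:
        return 1.0 / Z  # det of the empty principal minor is 1
    idx = np.ix_(S, S)
    return np.linalg.det(L[idx]) / Z

# Sanity check: probabilities over all 2^n subsets sum to 1,
# since sum_S det(L_S) = det(L + I).
total = sum(subset_prob(list(S))
            for r in range(n + 1)
            for S in itertools.combinations(range(n), r))
print(round(total, 6))  # 1.0
```

The exhaustive sum over all 2^n subsets is only feasible for tiny n; scalable learning and sampling for large item collections is precisely what the entries above address.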
no code implementations • 21 Jun 2019 • Syrine Krichene, Mike Gartrell, Clément Calauzènes
For example, applying constraints a posteriori can result in incomplete recommendations or low-quality results for the tail of the distribution (i.e., less popular items).
1 code implementation • NeurIPS 2019 • Mike Gartrell, Victor-Emmanuel Brunel, Elvis Dohmatob, Syrine Krichene
Our method imposes a particular decomposition of the nonsymmetric kernel that enables such tractable learning algorithms, which we analyze both theoretically and experimentally.
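A tiny hand-built example helps show why nonsymmetry adds expressive power (the specific matrix below is illustrative, not from the paper): a symmetric PSD kernel can only encode negative correlations between items, whereas a nonsymmetric kernel whose principal minors are all nonnegative can make a pair of items more likely to co-occur than independence would allow.

```python
import numpy as np

# A 2x2 nonsymmetric kernel with nonnegative principal minors
# (1, 1, and det = 3); illustrative values, not from the paper.
L = np.array([[1.0, 2.0],
              [-1.0, 1.0]])

Z = np.linalg.det(L + np.eye(2))        # normalizer det(L + I) = 6
p_both = np.linalg.det(L) / Z           # P(S = {0, 1}) = 3/6
p_0 = (L[0, 0] + np.linalg.det(L)) / Z  # marginal P(0 in S) = 4/6
p_1 = (L[1, 1] + np.linalg.det(L)) / Z  # marginal P(1 in S) = 4/6

# Positive correlation: the pair co-occurs more often than under
# independence. A symmetric PSD kernel can never achieve this, since
# there det(L_S) <= L_ii * L_jj forces negative correlation.
print(p_both > p_0 * p_1)  # True (0.5 > 0.444...)
```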
no code implementations • ICLR 2019 • Ugo Tanielian, Flavian Vasile, Mike Gartrell

This is often the case for applications such as language modeling, next-event prediction, and matrix factorization, where many of the potential outcomes are not mutually exclusive, but are instead conditionally independent given the state.
no code implementations • 25 Mar 2019 • Jason Shuo Zhang, Mike Gartrell, Richard Han, Qin Lv, Shivakant Mishra
In this paper, we present GEVR, the first Group Event Venue Recommendation system that incorporates mobility via individual location traces and context information into a "social-based" group decision model to provide venue recommendations for groups of mobile users.
no code implementations • 17 Nov 2018 • Mike Gartrell, Elvis Dohmatob, Jon Alberdi
While DPPs have substantial expressive power, they are fundamentally limited by the parameterization of the kernel matrix and their inability to capture nonlinear interactions between items within sets.
no code implementations • 24 May 2018 • Romain Warlop, Jérémie Mary, Mike Gartrell
Determinantal point processes (DPPs) have received significant attention in recent years as an elegant model for a variety of machine learning tasks, due to their ability to jointly capture set diversity and item quality or popularity.
no code implementations • 22 May 2018 • Ugo Tanielian, Mike Gartrell, Flavian Vasile
In recent years, the Word2Vec model trained with the Negative Sampling loss function has achieved state-of-the-art results in a number of machine learning tasks, including language modeling tasks such as word analogy and word similarity, as well as recommendation tasks through Prod2Vec, an extension that models user shopping activity and preferences.
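The Negative Sampling objective mentioned here can be sketched numerically as follows (a toy illustration with placeholder embeddings, not the authors' code): for a (target, context) pair with embeddings u and v, and K sampled negative contexts, the per-pair loss is −log σ(u·v) − Σ_k log σ(−u·v_k).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def negative_sampling_loss(u, v_pos, V_neg):
    """Word2Vec Negative Sampling loss for one (target, context) pair.

    u: target word embedding; v_pos: true context embedding;
    V_neg: (K, d) matrix of K sampled negative context embeddings.
    """
    pos = -np.log(sigmoid(u @ v_pos))          # push true pair together
    neg = -np.log(sigmoid(-(V_neg @ u))).sum()  # push negatives apart
    return pos + neg

# Toy check with random vectors (illustrative dimensions).
rng = np.random.default_rng(0)
d, K = 8, 5
u = rng.standard_normal(d)
loss = negative_sampling_loss(u, u, np.zeros((K, d)))
# With v_pos = u the positive term is small; each zero-vector
# negative contributes exactly log 2, so loss > K * log 2.
```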
no code implementations • 15 Feb 2018 • Zelda Mariet, Mike Gartrell, Suvrit Sra
To address this issue, which reduces the quality of the learned model, we introduce a novel optimization problem, Contrastive Estimation (CE), which encodes information about "negative" samples into the basic learning model.
no code implementations • 15 Aug 2016 • Mike Gartrell, Ulrich Paquet, Noam Koenigstein
Determinantal point processes (DPPs) are an elegant model for encoding probabilities over subsets of a ground set, such as shopping baskets drawn from an item catalog.
1 code implementation • 17 Feb 2016 • Mike Gartrell, Ulrich Paquet, Noam Koenigstein
In this work we present a new method for learning the DPP kernel from observed data using a low-rank factorization of this kernel.
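The low-rank factorization described here, writing the kernel as L = V Vᵀ with V of size n×k and k ≪ n, has two useful consequences worth sketching (dimensions below are illustrative): the DPP assigns zero probability to subsets larger than k, since det(L_S) vanishes whenever |S| exceeds rank(L), and the normalizer can be computed in the k-dimensional space.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 5          # 100 items, rank-5 kernel (illustrative sizes)
V = rng.standard_normal((n, k))
L = V @ V.T            # low-rank DPP kernel, rank at most k

# det(L_S) = 0 for any subset with more than k items, so the DPP
# never produces subsets larger than the rank of the kernel.
S = list(range(k + 1))                 # a subset of size k + 1
minor = np.linalg.det(L[np.ix_(S, S)])
print(abs(minor) < 1e-8)  # True

# The normalizer can be computed in the low-rank space:
# det(L + I_n) = det(V.T @ V + I_k), an O(n k^2) computation
# instead of O(n^3).
lhs = np.linalg.det(L + np.eye(n))
rhs = np.linalg.det(V.T @ V + np.eye(k))
print(np.isclose(lhs, rhs))  # True
```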