no code implementations • 1 Dec 2023 • Sulthana Shams, Douglas Leith
This paper proposes a novel method for detecting shilling attacks in Matrix Factorization (MF)-based Recommender Systems (RS); in such attacks, attackers inject false user-item feedback to promote a specific item.
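The detection method itself is not described in this snippet; as context for the threat model, the following minimal Python sketch (all names and parameter values are illustrative assumptions, not taken from the paper) builds the classic "push"-style shilling profiles that such detectors must flag: fake users who give the target item the maximum rating and mimic average behaviour on random filler items.

    import numpy as np

    # Hypothetical "push" shilling attack: fake profiles rate the target
    # item with the maximum score and copy near-average ratings on a
    # random set of filler items to blend in with genuine users.
    rng = np.random.default_rng(0)
    n_items, n_fakes, n_fillers = 1000, 50, 30
    item_means = rng.uniform(1, 5, size=n_items)  # stand-in for observed item averages
    target = 42                                   # item the attacker wants promoted

    fake_profiles = []
    for _ in range(n_fakes):
        profile = {target: 5.0}                   # maximum rating pushes the target
        for j in rng.choice(n_items, size=n_fillers, replace=False):
            if j != target:
                profile[int(j)] = float(np.clip(round(item_means[j]), 1, 5))
        fake_profiles.append(profile)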
no code implementations • 12 May 2023 • David Young, Douglas Leith
We develop a novel latent-bandit algorithm for tackling the cold-start problem for new users joining a recommender system.
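The snippet does not spell out the algorithm, so the following is a hedged Python sketch of the generic latent-bandit idea it builds on (the cluster count, reward model, and greedy item choice are assumptions): a new user is assumed to belong to one of a few known preference clusters, and the learner tracks a posterior over clusters while recommending the best item for the currently most likely one.

    import numpy as np

    # Generic latent-bandit sketch: K preference clusters with assumed-known
    # click probabilities p[k, a]; keep a log-posterior over clusters and
    # recommend the best item for the currently most likely cluster.
    rng = np.random.default_rng(1)
    K, n_items = 3, 5
    p = rng.uniform(0.05, 0.9, size=(K, n_items))  # per-cluster click probabilities
    true_cluster = rng.integers(K)                 # the new user's hidden cluster

    log_post = np.zeros(K)                         # uniform prior over clusters
    for t in range(200):
        k_hat = np.argmax(log_post)                # most likely cluster so far
        a = int(np.argmax(p[k_hat]))               # its best item
        reward = rng.random() < p[true_cluster, a]
        # Bayes update: likelihood of the observed click under each cluster
        log_post += np.log(np.where(reward, p[:, a], 1.0 - p[:, a]))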
no code implementations • 8 May 2023 • Sulthana Shams, Douglas Leith
In practice, users of a Recommender System (RS) fall into a few clusters based on their preferences.
no code implementations • 1 Feb 2023 • David Young, Douglas Leith, George Iosifidis
We show that a kernel estimator using multiple function evaluations can be easily converted into a sampling-based bandit estimator with expectation equal to the original kernel estimate.
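A toy Python illustration of this conversion (the quadratic objective and the kernel weights are placeholders): sampling one of the m evaluation points uniformly at random yields a one-point estimator whose expectation is exactly the m-point kernel estimate.

    import numpy as np

    # Toy check: an m-point kernel estimate equals, in expectation, a
    # one-point estimator that samples a single evaluation uniformly.
    rng = np.random.default_rng(2)
    f = lambda x: float((x ** 2).sum())        # placeholder objective
    x, delta = np.array([1.0, -2.0]), 0.1
    U = rng.normal(size=(8, 2))                # m = 8 fixed evaluation directions

    # Kernel estimator: average of m weighted function evaluations.
    kernel_est = np.mean([f(x + delta * u) * u for u in U], axis=0)

    # Bandit estimator: one evaluation per round, index drawn uniformly,
    # so its expectation matches kernel_est exactly.
    samples = [f(x + delta * U[i]) * U[i] for i in rng.integers(len(U), size=20000)]
    print(kernel_est, np.mean(samples, axis=0))  # agree up to Monte Carlo noise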
no code implementations • 30 Oct 2022 • Mohamed Suliman, Douglas Leith
We illustrate the effectiveness of the attacks against the next word prediction model used in Google's GBoard app, a widely used mobile keyboard app that has been an early adopter of federated learning for production use.
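The snippet does not describe the attack mechanism, so the following Python sketch illustrates only one well-known leakage channel in federated language models (whether this matches the paper's attacks is an assumption on my part): the gradient of a token-embedding layer is non-zero only on rows of tokens that actually appear in the user's text, so a single model update can reveal which words were typed.

    import numpy as np

    # Toy leakage channel: only embedding rows of typed tokens receive a
    # non-zero gradient, so the federated update exposes the typed words.
    vocab = ["the", "cat", "sat", "on", "mat", "dog"]
    rng = np.random.default_rng(3)
    emb_grad = np.zeros((len(vocab), 4))       # gradient w.r.t. the embedding table

    typed = [1, 2, 3]                          # user typed "cat sat on"
    for tok in typed:                          # toy backprop touches only used rows
        emb_grad[tok] += rng.normal(size=4)

    leaked = [vocab[i] for i in np.flatnonzero(np.abs(emb_grad).sum(axis=1))]
    print(leaked)                              # ['cat', 'sat', 'on']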
no code implementations • 20 Apr 2022 • Naram Mhaisen, George Iosifidis, Douglas Leith
We build on the Follow-the-Regularized-Leader (FTRL) framework, extending it to incorporate predictions of future file requests, and design online caching algorithms for bipartite networks with pre-reserved or dynamic storage, subject to time-average budget constraints.
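As a rough illustration of the optimistic FTRL idea (the Euclidean regularizer, step size, and capped-simplex projection below are my assumptions, not necessarily the paper's choices), a fractional cache of capacity C over N files can be updated by playing the regularized leader on the accumulated request gains plus a prediction of the next request:

    import numpy as np

    def project_capped_simplex(y, C):
        """Euclidean projection onto {x in [0,1]^N : sum(x) = C} by bisection."""
        lo, hi = y.min() - 1.0, y.max()
        for _ in range(50):
            mu = 0.5 * (lo + hi)
            if np.clip(y - mu, 0.0, 1.0).sum() < C:
                hi = mu                        # shift too large, cache underfull
            else:
                lo = mu
        return np.clip(y - 0.5 * (lo + hi), 0.0, 1.0)

    N, C, eta = 100, 10, 0.1
    rng = np.random.default_rng(4)
    cum_gain = np.zeros(N)                     # accumulated caching gains
    for t in range(1000):
        g_pred = 0.1 * rng.random(N)           # stand-in for a request prediction
        x = project_capped_simplex(eta * (cum_gain + g_pred), C)  # optimistic FTRL step
        g = np.zeros(N)
        g[rng.integers(N)] = 1.0               # the actual request: one file
        cum_gain += g                          # gain for having that file cached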
1 code implementation • 22 Feb 2022 • Naram Mhaisen, George Iosifidis, Douglas Leith
The design of effective online caching policies is an increasingly important problem for content distribution networks, online social networks and edge computing services, among other areas.
no code implementations • NeurIPS 2021 • Daron Anderson, Douglas Leith
We study Online Lazy Gradient Descent for optimisation on a strongly convex domain.
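For reference, a minimal Python sketch of the Lazy (dual-averaging) Online Gradient Descent update on one example of a strongly convex domain, the unit Euclidean ball (step size and cost model are placeholders): the gradient sum is accumulated unprojected, and only the played point is projected each round.

    import numpy as np

    def project_ball(y):
        """Projection onto the unit Euclidean ball, a strongly convex domain."""
        n = np.linalg.norm(y)
        return y if n <= 1.0 else y / n

    rng = np.random.default_rng(5)
    d, eta = 3, 0.05
    y = np.zeros(d)                            # unprojected gradient accumulator
    for t in range(1000):
        x = project_ball(y)                    # play the projected lazy iterate
        g = rng.normal(size=d)                 # subgradient of the round-t cost
        y -= eta * g                           # lazy step: no re-projection of x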
no code implementations • 3 Apr 2020 • Daron Anderson, Douglas Leith
We prove that the familiar Lazy Online Gradient Descent algorithm is universal on polytope domains.
no code implementations • 10 Sep 2019 • Daron Anderson, Douglas Leith
We show that the Subgradient algorithm is universal for online learning on the simplex, in the sense that it simultaneously achieves $O(\sqrt{N})$ regret for adversarial costs and $O(1)$ pseudo-regret for i.i.d. costs.
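A hedged Python sketch of this setup (the step sizes and the sorting-based simplex projection are standard choices, not necessarily those analysed in the paper): projected subgradient descent on the probability simplex run against i.i.d. costs, with the pseudo-regret measured against the best fixed vertex.

    import numpy as np

    def project_simplex(y):
        """Euclidean projection onto {x >= 0 : sum(x) = 1} (sorting method)."""
        u = np.sort(y)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u + (1.0 - css) / (np.arange(len(y)) + 1) > 0)[0].max()
        return np.maximum(y + (1.0 - css[rho]) / (rho + 1), 0.0)

    rng = np.random.default_rng(6)
    d, N = 5, 10000
    x = np.full(d, 1.0 / d)
    mean_cost = rng.random(d)                  # i.i.d. cost means
    total = 0.0
    for t in range(1, N + 1):
        c = mean_cost + 0.1 * rng.normal(size=d)   # i.i.d. cost vector
        total += float(c @ x)
        x = project_simplex(x - c / np.sqrt(t))    # step size 1/sqrt(t)
    # Pseudo-regret against the best fixed vertex; per the paper's claim,
    # this stays O(1) for i.i.d. costs.
    print(total - N * mean_cost.min())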