no code implementations • 26 Sep 2023 • Kemal Oksuz, Selim Kuzucu, Tom Joy, Puneet K. Dokania
Combining the strengths of many existing predictors to obtain a Mixture of Experts that is superior to its individual components is an effective way to improve performance without having to develop new architectures or train a model from scratch.
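In its simplest form, such a combination is a gated weighted average of the experts' predictive distributions; the sketch below illustrates that generic idea (it is not the specific detector-combination method of the paper, and the fixed gate weights are an assumption for illustration):

```python
import numpy as np

def mixture_of_experts(expert_probs, gate_weights):
    """Combine per-expert class probabilities into one prediction.

    expert_probs: (n_experts, n_classes) array, each row a softmax output.
    gate_weights: (n_experts,) non-negative weights summing to 1.
    Returns the gated weighted average, which is again a valid distribution.
    """
    expert_probs = np.asarray(expert_probs, dtype=float)
    gate_weights = np.asarray(gate_weights, dtype=float)
    return gate_weights @ expert_probs  # weighted average over experts

# Two hypothetical experts on a binary task, weighted equally.
probs = mixture_of_experts([[0.7, 0.3], [0.4, 0.6]], [0.5, 0.5])
# probs = [0.55, 0.45]
```

Because each expert's row sums to 1 and the gate weights sum to 1, the combined output always sums to 1, so no retraining or renormalisation is needed.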
Ranked #1 on Oriented Object Detection on DOTA 1.0
1 code implementation • CVPR 2023 • Kemal Oksuz, Tom Joy, Puneet K. Dokania
The current approach for testing the robustness of object detectors suffers from serious deficiencies, such as improper out-of-distribution detection and the use of calibration metrics that do not consider both localisation and classification quality.
1 code implementation • 13 Jul 2022 • Tom Joy, Francesco Pinto, Ser-Nam Lim, Philip H. S. Torr, Puneet K. Dokania
The most common post-hoc approach to compensate for this is temperature scaling, which adjusts the confidence of the predictions on any input by scaling the logits by a fixed value.
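Concretely, temperature scaling divides the logits by a single scalar T before the softmax; T > 1 softens the resulting confidences and T < 1 sharpens them, while the predicted class is unchanged. A minimal NumPy sketch (the logit values are illustrative, not from the paper):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temperature_scale(logits, T):
    """Rescale confidences by dividing logits by a fixed temperature T."""
    return softmax(np.asarray(logits, dtype=float) / T)

logits = [2.0, 1.0, 0.1]           # hypothetical network outputs
p_orig = temperature_scale(logits, 1.0)  # T = 1 leaves confidences unchanged
p_soft = temperature_scale(logits, 2.0)  # T > 1 lowers the top confidence
```

In practice T is fitted on a held-out validation set (typically by minimising negative log-likelihood), so the ranking of classes is preserved and only the confidence values change.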
1 code implementation • ICLR 2022 • Tom Joy, Yuge Shi, Philip H. S. Torr, Tom Rainforth, Sebastian M. Schmon, N. Siddharth
Here we introduce a novel alternative, the MEME, that avoids such explicit combinations by repurposing semi-supervised VAEs to combine information between modalities implicitly through mutual supervision.
2 code implementations • ICLR 2021 • Tom Joy, Sebastian M. Schmon, Philip H. S. Torr, N. Siddharth, Tom Rainforth
We present a principled approach to incorporating labels in VAEs that captures the rich characteristic information associated with those labels.