no code implementations • 8 Jan 2024 • Chengjie Huang, Vahdat Abdelzad, Sean Sedwards, Krzysztof Czarnecki
We consider the problem of cross-sensor domain adaptation in the context of LiDAR-based 3D object detection and propose Stationary Object Aggregation Pseudo-labelling (SOAP) to generate high-quality pseudo-labels for stationary objects.
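The core idea lends itself to a short sketch: detections whose centroids stay in place across a LiDAR sequence are treated as stationary, and their points are aggregated over frames into denser pseudo-labels. The NumPy code below is a minimal illustration of that idea under stated assumptions; the `frames` structure, the thresholds, and the omitted box-fitting step are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def soap_pseudo_labels(frames, cell=1.0, min_frames=5, radius=2.0):
    """Sketch of the SOAP idea: detections whose centroids persist in the
    same place across a sequence are treated as stationary, and their LiDAR
    points are aggregated across frames into one dense pseudo-label.
    All names and thresholds here are illustrative assumptions."""
    groups = {}  # coarse BEV grid cell -> list of (frame_idx, centroid)
    for t, frame in enumerate(frames):
        for c in frame["centroids"]:          # (3,) world-frame centroid
            key = tuple(np.floor(c[:2] / cell).astype(int))
            groups.setdefault(key, []).append((t, c))

    pseudo_labels = []
    for obs in groups.values():
        # An object must be detected in enough distinct frames to count
        # as stationary; transient or moving detections are skipped.
        if len({t for t, _ in obs}) < min_frames:
            continue
        center = np.mean([c for _, c in obs], axis=0)
        # Aggregate points from every frame within `radius` of the center.
        pts = np.concatenate([
            f["points"][np.linalg.norm(f["points"] - center, axis=1) < radius]
            for f in frames
        ])
        pseudo_labels.append({"center": center, "points": pts})
    return pseudo_labels
```

A final box-fitting step over the aggregated points (not shown) would turn each entry into a full pseudo-label for retraining on the target sensor.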
1 code implementation • 20 Oct 2022 • Sunsheng Gu, Vahdat Abdelzad, Krzysztof Czarnecki
We evaluate the effectiveness of XC scores via the task of distinguishing true positive (TP) and false positive (FP) detected objects in the KITTI and Waymo datasets.
Tasks: 3D Object Detection, Explainable Artificial Intelligence (XAI), +1
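If, as the snippet suggests, an XC score measures how concentrated a detection's attribution (explanation) mass is inside its predicted box, then TP/FP separation reduces to thresholding that concentration. A minimal sketch under that assumption; the score definition, mask representation, and threshold are illustrative, not necessarily the paper's.

```python
import numpy as np

def xc_score(attribution, box_mask):
    """Fraction of total attribution mass falling inside the detected box.
    `attribution` is a 2-D saliency/attribution map; `box_mask` is a boolean
    array of the same shape marking the box interior (both assumed)."""
    total = np.abs(attribution).sum()
    inside = np.abs(attribution[box_mask]).sum()
    return inside / total if total > 0 else 0.0

def classify_detection(attribution, box_mask, threshold=0.5):
    # A well-supported (TP) detection should have its explanation
    # concentrated inside the box; an FP's attribution tends to spread out.
    return "TP" if xc_score(attribution, box_mask) >= threshold else "FP"
```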
no code implementations • 28 Sep 2022 • Chengjie Huang, Van Duong Nguyen, Vahdat Abdelzad, Christopher Gus Mannes, Luke Rowe, Benjamin Therien, Rick Salay, Krzysztof Czarnecki
Detecting OOD inputs is challenging and essential for the safe deployment of models.
no code implementations • 1 Jun 2022 • Matthew Pitropov, Chengjie Huang, Vahdat Abdelzad, Krzysztof Czarnecki, Steven Waslander
The estimation of uncertainty in robotic vision, such as 3D object detection, is an essential component in developing safe autonomous systems aware of their own performance.
no code implementations • 30 Aug 2021 • Rick Salay, Krzysztof Czarnecki, Hiroshi Kuwajima, Hirotoshi Yasuoka, Toshihiro Nakae, Vahdat Abdelzad, Chengjie Huang, Maximilian Kahn, Van Duong Nguyen
In this paper, we propose the Integration Safety Case for Perception (ISCaP), a generic template for such a linking safety argument specifically tailored for perception components.
no code implementations • 25 Jun 2020 • Vahdat Abdelzad, Krzysztof Czarnecki, Rick Salay
In addition to comparing several OODD approaches using our proposed robustness score, we demonstrate that some optimization methods provide better solutions for OODD approaches.
Tasks: Out-of-Distribution Detection, Out of Distribution (OOD) Detection
2 code implementations • 23 Oct 2019 • Vahdat Abdelzad, Krzysztof Czarnecki, Rick Salay, Taylor Denounden, Sachin Vernekar, Buu Phan
Several approaches have been proposed to detect OOD inputs, but the detection task is still an ongoing challenge.
1 code implementation • 9 Oct 2019 • Sachin Vernekar, Ashish Gaurav, Vahdat Abdelzad, Taylor Denouden, Rick Salay, Krzysztof Czarnecki
By design, discriminatively trained neural network classifiers produce reliable predictions only for in-distribution samples.
Tasks: Out-of-Distribution Detection, Out of Distribution (OOD) Detection
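The claim is easy to demonstrate: a discriminative model's confidence typically grows with distance from the decision boundary, so inputs far from the training data can receive arbitrarily confident predictions. A self-contained illustration with scikit-learn; the toy data and model are ours, not the paper's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two in-distribution classes: 2-D Gaussian blobs around (-2, 0) and (+2, 0).
X = np.vstack([rng.normal([-2, 0], 0.5, (200, 2)),
               rng.normal([+2, 0], 0.5, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
clf = LogisticRegression().fit(X, y)

# An in-distribution point gets a sensible, confident prediction...
print(clf.predict_proba([[2.0, 0.0]]))    # ~[[0.0, 1.0]]
# ...but a far-away OOD point is even *more* confident, because the logit
# grows linearly with distance from the boundary, so p -> 1.
print(clf.predict_proba([[50.0, 50.0]]))  # ~[[0.0, 1.0]]
```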
1 code implementation • 25 Sep 2019 • Sachin Vernekar, Ashish Gaurav, Vahdat Abdelzad, Taylor Denouden, Rick Salay, Krzysztof Czarnecki
In the context of OOD detection for image classification, one recent approach trains a classifier called the “confident classifier” by minimizing the standard cross-entropy loss on in-distribution samples while also minimizing the KL divergence between the predictive distribution on OOD samples drawn from the low-density “boundary” of the in-distribution and the uniform distribution (i.e., maximizing the entropy of the outputs).
Tasks: Out-of-Distribution Detection, Out of Distribution (OOD) Detection
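The objective described above translates directly into code: cross-entropy on in-distribution batches, plus a term pushing OOD predictions toward the uniform distribution by minimizing KL(p_ood || uniform), which is equivalent to maximizing output entropy. A minimal PyTorch sketch; the weighting factor `beta` is an assumed hyperparameter, not from the paper.

```python
import math
import torch
import torch.nn.functional as F

def confident_classifier_loss(logits_in, labels_in, logits_ood, beta=1.0):
    # Standard cross-entropy on in-distribution samples.
    ce = F.cross_entropy(logits_in, labels_in)
    # KL(p_ood || uniform) = sum_k p_k log p_k + log K, which is minimized
    # when the predictive distribution on OOD samples is uniform
    # (i.e., the output entropy is maximized).
    log_p = F.log_softmax(logits_ood, dim=1)
    k = logits_ood.size(1)
    kl_to_uniform = (log_p.exp() * log_p).sum(dim=1).mean() + math.log(k)
    # beta is an assumed trade-off hyperparameter.
    return ce + beta * kl_to_uniform
```

In training, `logits_ood` would come from boundary OOD samples (e.g., generated near the low-density region of the in-distribution), while `logits_in` and `labels_in` come from the labelled training set.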
1 code implementation • 27 Apr 2019 • Sachin Vernekar, Ashish Gaurav, Taylor Denouden, Buu Phan, Vahdat Abdelzad, Rick Salay, Krzysztof Czarnecki
Discriminatively trained neural classifiers can be trusted only when the input data comes from the training distribution (in-distribution).
no code implementations • 6 Dec 2018 • Taylor Denouden, Rick Salay, Krzysztof Czarnecki, Vahdat Abdelzad, Buu Phan, Sachin Vernekar
There is an increasingly apparent need for validating the classifications made by deep learning systems in safety-critical applications like autonomous vehicle systems.
no code implementations • 27 Nov 2018 • Buu Phan, Rick Salay, Krzysztof Czarnecki, Vahdat Abdelzad, Taylor Denouden, Sachin Vernekar
In many safety-critical applications such as autonomous driving and surgical robots, it is desirable to obtain prediction uncertainties from object detection modules to help support safe decision-making.