no code implementations • 26 Apr 2024 • Emmanouil Seferis, Stefanos Kollias, Chih-Hong Cheng
Randomized smoothing (RS) has successfully been used to improve the robustness of predictions for deep neural networks (DNNs) by adding random noise to create multiple variations of an input, followed by deciding on a consensus prediction.
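The mechanism described above (noisy copies plus a consensus vote) can be sketched minimally as follows; the toy classifier and all parameter values here are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def smoothed_predict(classify, x, n_samples=100, sigma=0.25, rng=None):
    """Randomized-smoothing prediction: classify noisy copies of x
    and return the majority (consensus) class label."""
    rng = np.random.default_rng(rng)
    votes = {}
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=np.shape(x))
        label = classify(noisy)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical toy classifier on a 1-D input vector:
# class 0 if the mean is negative, class 1 otherwise.
toy = lambda v: int(v.mean() >= 0.0)
x = np.full(8, 0.9)  # input far from the decision boundary
print(smoothed_predict(toy, x, n_samples=50, sigma=0.25, rng=0))  # → 1
```

Because the input sits far from the toy decision boundary, the Gaussian perturbations almost never flip the label, so the consensus is stable.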
1 code implementation • 25 Apr 2024 • Chih-Hong Cheng, Changshun Wu, Harald Ruess, Xingyu Zhao, Saddek Bensalem
The risk of reinforcing or even exacerbating societal biases and inequalities will increase significantly as generative AI increasingly produces useful artifacts, from text to images and beyond, for the real world.
no code implementations • 27 Mar 2024 • Changshun Wu, WeiCheng He, Chih-Hong Cheng, Xiaowei Huang, Saddek Bensalem
Nevertheless, integrating OoD detection into state-of-the-art (SOTA) object detection DNNs poses significant challenges, partly due to the complexity of SOTA OoD construction methods, which require modifying the DNN architecture and introducing complex loss functions.
no code implementations • 20 Mar 2024 • Brian Hsuan-Cheng Liao, Chih-Hong Cheng, Hasan Esen, Alois Knoll
This paper presents safety-oriented object detection via a novel Ego-Centric Intersection-over-Union (EC-IoU) measure, addressing practical concerns when applying state-of-the-art learning-based perception models in safety-critical domains such as autonomous driving.
1 code implementation • 10 Feb 2024 • Chih-Hong Cheng, Paul Stöckel, Xingyu Zhao
Modeling and calibrating the fidelity of synthetic data is paramount in shaping the future of safe and reliable self-driving technology by offering a cost-effective and scalable alternative to real-world data collection.
no code implementations • 6 Oct 2023 • Chih-Hong Cheng, Michael Luttenberger, Rongjie Yan
Deep neural networks (DNNs) are instrumental in realizing complex perception systems.
no code implementations • 11 Aug 2023 • Chih-Hong Cheng, Venkatesh Prasad Venkataramanan, Pragya Kirti Gupta, Yun-Fei Hsu, Simon Burton
We study challenges using reinforcement learning in controlling energy systems, where apart from performance requirements, one has additional safety requirements such as avoiding blackouts.
no code implementations • 24 Jul 2023 • Chih-Hong Cheng, Harald Ruess, Konstantinos Theodorou
The reshaped test set reflects the distribution of neuron activation values as observed during operation, and may therefore be used for re-evaluating safety performance in the presence of covariate shift.
no code implementations • 20 Jul 2023 • Saddek Bensalem, Chih-Hong Cheng, Wei Huang, Xiaowei Huang, Changshun Wu, Xingyu Zhao
Machine learning has made remarkable advancements, but confidently utilising learning-enabled components in safety-critical domains still poses challenges.
no code implementations • 14 Jun 2023 • Chih-Hong Cheng, Changshun Wu, Harald Ruess, Saddek Bensalem
Out-of-distribution (OoD) detection techniques are instrumental for safety-related neural networks.
no code implementations • 28 May 2023 • Utku Ayvaz, Chih-Hong Cheng, Hao Shen
While autonomous vehicles (AVs) may perform remarkably well in generic real-life cases, their irrational actions in some unforeseen cases lead to critical safety concerns.
no code implementations • 6 Mar 2023 • Monish R. Nallapareddy, Kshitij Sirohi, Paulo L. J. Drews-Jr, Wolfram Burgard, Chih-Hong Cheng, Abhinav Valada
In this work, we propose EvCenterNet, a novel uncertainty-aware 2D object detection framework using evidential learning to directly estimate both classification and regression uncertainties.
no code implementations • 14 Nov 2022 • Nguyen Anh Vu Doan, Arda Yüksel, Chih-Hong Cheng
This work aims to explore and identify tiny and seemingly unrelated perturbations of images in object detection that will lead to performance degradation.
no code implementations • 21 Sep 2022 • Brian Hsuan-Cheng Liao, Chih-Hong Cheng, Hasan Esen, Alois Knoll
We consider the safety-oriented performance of 3D object detectors in autonomous driving contexts.
no code implementations • 16 May 2022 • Chih-Hong Cheng, Changshun Wu, Emmanouil Seferis, Saddek Bensalem
We consider the definition of "in-distribution" characterized in the feature space by a union of hyperrectangles learned from the training dataset.
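A membership test against a union of hyperrectangles, as characterized above, can be sketched as follows; the 2-D feature space and the hard-coded boxes are illustrative assumptions (in the paper's setting, the boxes are learned from the training dataset):

```python
import numpy as np

def in_distribution(feature, boxes):
    """Return True iff the feature vector lies inside at least one
    axis-aligned hyperrectangle; each box is a (lower, upper) pair."""
    f = np.asarray(feature, dtype=float)
    return any(np.all(lo <= f) and np.all(f <= hi) for lo, hi in boxes)

# Two hypothetical boxes over a 2-D feature space.
boxes = [(np.array([0.0, 0.0]), np.array([1.0, 1.0])),
         (np.array([2.0, 2.0]), np.array([3.0, 3.0]))]
print(in_distribution([0.5, 0.5], boxes))  # → True  (inside the first box)
print(in_distribution([1.5, 1.5], boxes))  # → False (between the boxes)
```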
no code implementations • 10 Feb 2022 • Tobias Schuster, Emmanouil Seferis, Simon Burton, Chih-Hong Cheng
We address a special sub-type of performance limitations: the prediction bounding box cannot be perfectly aligned with the ground truth, but the computed Intersection-over-Union metric is always larger than a given threshold.
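The Intersection-over-Union metric referenced above can be computed for axis-aligned boxes as sketched below; the box coordinates and the 0.5 threshold are illustrative assumptions, chosen only to show an imperfectly aligned prediction that still clears a given threshold:

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

pred, truth = (0, 0, 10, 10), (1, 1, 11, 11)  # shifted by one unit
print(iou(pred, truth))        # → 81/119 ≈ 0.68
print(iou(pred, truth) >= 0.5) # imperfect alignment, yet above the threshold
```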
no code implementations • 8 Feb 2022 • Brian Hsuan-Cheng Liao, Chih-Hong Cheng, Hasan Esen, Alois Knoll
As an emerging type of Neural Networks (NNs), Transformers are used in many domains ranging from Natural Language Processing to Autonomous Driving.
no code implementations • 4 Nov 2021 • Chih-Hong Cheng, Tobias Schuster, Simon Burton
We investigate the issues of achieving sufficient rigor in the arguments for the safety of machine learning functions.
no code implementations • 21 May 2021 • Chih-Hong Cheng, Alois Knoll, Hsuan-Cheng Liao
Within the context of autonomous driving, safety-related metrics for deep neural networks have been widely studied for image classification and object detection.
no code implementations • 29 Mar 2021 • Yuhang Chen, Chih-Hong Cheng, Jun Yan, Rongjie Yan
While object detection modules are essential functionalities for any autonomous vehicle, the performance of such modules that are implemented using deep neural networks can be, in many cases, unreliable.
no code implementations • 8 Mar 2021 • Chih-Hong Cheng, Rongjie Yan
Continuous engineering of autonomous driving functions commonly requires deploying vehicles in road testing to obtain inputs that cause problematic decisions.
no code implementations • 24 Nov 2020 • Chih-Hong Cheng
For deep neural networks (DNNs) to be used in safety-critical autonomous driving tasks, it is desirable to monitor at operation time whether the input to the DNN is similar to the data used in DNN training.
no code implementations • 12 Oct 2020 • Chih-Hong Cheng, Rongjie Yan
Deploying deep neural networks (DNNs) as core functions in autonomous driving creates unique verification and validation challenges.
no code implementations • 25 Mar 2020 • Chih-Hong Cheng
We study how state-of-the-art neural networks for 3D object detection using a single-stage pipeline can be made safety aware.
no code implementations • 30 Sep 2019 • Chih-Hong Cheng
We further extend the loss function and define a new provably robust criterion that is parameterized by the allowed output tolerance $\Delta$, the layer index $\tilde{l}$ at which perturbation is considered, and the maximum perturbation amount $\kappa$.
1 code implementation • 9 Apr 2019 • Chih-Hong Cheng, Chung-Hao Huang, Thomas Brunner, Vahid Hashemi
We study the problem of safety verification of direct perception neural networks, where camera images are used as inputs to produce high-level features for autonomous vehicles to make control decisions.
no code implementations • 27 Feb 2019 • Chih-Hong Cheng, Dhiraj Gulati, Rongjie Yan
We provide a summary over architectural approaches that can be used to construct dependable learning-enabled autonomous systems, with a focus on automated driving.
1 code implementation • 16 Nov 2018 • Chih-Hong Cheng, Chung-Hao Huang, Georg Nührenberg
Can engineering neural networks be approached in a disciplined way similar to how engineers build software for civil aircraft?
no code implementations • 18 Sep 2018 • Chih-Hong Cheng, Georg Nührenberg, Hirotoshi Yasuoka
For using neural networks in safety-critical domains, it is important to know whether a decision made by a neural network is supported by prior similarities in training.
no code implementations • 6 Jun 2018 • Chih-Hong Cheng, Georg Nührenberg, Chung-Hao Huang, Harald Ruess, Hirotoshi Yasuoka
Artificial neural networks (NN) are instrumental in realizing highly-automated driving functionality.
no code implementations • 11 May 2018 • Chih-Hong Cheng, Chung-Hao Huang, Hirotoshi Yasuoka
Systematically testing models learned by neural networks remains a crucial unsolved barrier to justifying the safety of autonomous vehicles engineered using a data-driven approach.
no code implementations • 9 Oct 2017 • Chih-Hong Cheng, Georg Nührenberg, Chung-Hao Huang, Harald Ruess
We study the problem of formal verification of Binarized Neural Networks (BNN), which have recently been proposed as an energy-efficient alternative to traditional learning networks.
no code implementations • 4 Sep 2017 • Chih-Hong Cheng, Frederik Diehl, Yassine Hamza, Gereon Hinz, Georg Nührenberg, Markus Rickert, Harald Ruess, Michael Troung-Le
We propose a methodology for designing dependable Artificial Neural Networks (ANN) by extending the concepts of understandability, correctness, and validity that are crucial ingredients in existing certification standards.
no code implementations • 28 Apr 2017 • Chih-Hong Cheng, Georg Nührenberg, Harald Ruess
The deployment of Artificial Neural Networks (ANNs) in safety-critical applications poses a number of new verification and certification challenges.