Search Results for author: Runqi Lin

Found 3 papers, 2 papers with code

Layer-Aware Analysis of Catastrophic Overfitting: Revealing the Pseudo-Robust Shortcut Dependency

no code implementations • 25 May 2024 • Runqi Lin, Chaojian Yu, Bo Han, Hang Su, Tongliang Liu

Catastrophic overfitting (CO) presents a significant challenge in single-step adversarial training (AT), manifesting as highly distorted deep neural networks (DNNs) that are vulnerable to multi-step adversarial attacks.
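For context, CO is typically studied in the fast/single-step AT setup sketched below: the model is trained only on one-step FGSM adversarial examples, and robustness is then checked with a multi-step PGD attack. This is a minimal illustrative sketch of that standard setting, not the paper's code; `model`, the data tensors, and the hyperparameter defaults (eps = 8/255, alpha = 2/255, 10 PGD steps) are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    """Single-step (FGSM) adversarial example, as used in fast AT."""
    delta = torch.zeros_like(x, requires_grad=True)
    F.cross_entropy(model(x + delta), y).backward()
    return (x + eps * delta.grad.sign()).clamp(0.0, 1.0).detach()

def pgd_example(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Multi-step (PGD) adversarial example, used to expose CO."""
    x_adv = x.detach().clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient step, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()

def fgsm_at_step(model, optimizer, x, y, eps=8 / 255):
    """One single-step AT update: train on FGSM examples only."""
    x_adv = fgsm_example(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

A model undergoing CO typically keeps high accuracy on `fgsm_example` inputs while its accuracy under `pgd_example` collapses to near zero, which is the failure mode the paper analyzes layer by layer.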

Eliminating Catastrophic Overfitting Via Abnormal Adversarial Examples Regularization

2 code implementations • NeurIPS 2023 • Runqi Lin, Chaojian Yu, Tongliang Liu

Specifically, we design a novel method, termed Abnormal Adversarial Examples Regularization (AAER), which explicitly regularizes the variation of AAEs to hinder the classifier from becoming distorted.
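The snippet above names the method but not its exact form. As a loose illustration only: one plausible reading is that "abnormal" adversarial examples (AAEs) are those whose loss decreases after the attack, and that the regularizer penalizes how much their outputs vary. The abnormality criterion, the penalized statistic, and the weight `lam` below are all assumptions made for this sketch, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def aae_regularizer(model, x, x_adv, y, lam=1.0):
    """Speculative AAER-style penalty: flag adversarial examples whose
    loss DECREASES under attack (assumed criterion for 'abnormal') and
    penalize the variation of their outputs."""
    logits_clean = model(x)
    logits_adv = model(x_adv)
    loss_clean = F.cross_entropy(logits_clean, y, reduction="none")
    loss_adv = F.cross_entropy(logits_adv, y, reduction="none")
    abnormal = (loss_adv < loss_clean).detach()  # assumed AAE criterion
    if not abnormal.any():
        return logits_adv.new_zeros(())
    # Penalize how far AAE outputs drift from their clean counterparts.
    variation = (logits_adv[abnormal] - logits_clean[abnormal]).pow(2).mean()
    return lam * variation
```

In use, such a penalty would be added to the single-step AT loss; consult the paper and its two released implementations for the actual terms AAER regularizes.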

Adversarial Robustness

On the Over-Memorization During Natural, Robust and Catastrophic Overfitting

1 code implementation • 13 Oct 2023 • Runqi Lin, Chaojian Yu, Bo Han, Tongliang Liu

In this work, we adopt a unified perspective by solely focusing on natural patterns to explore different types of overfitting.

Memorization
