Search Results for author: Shengyun Peng

Found 13 papers, 9 papers with code

Navigating the Safety Landscape: Measuring Risks in Finetuning Large Language Models

no code implementations • 27 May 2024 • Shengyun Peng, Pin-Yu Chen, Matthew Hull, Duen Horng Chau

Safety alignment is key to guiding the behaviors of large language models (LLMs) so that they are in line with human preferences and to restricting harmful behaviors at inference time, but recent studies show that it can be easily compromised by finetuning with only a few adversarially designed training examples.

Interactive Visual Learning for Stable Diffusion

no code implementations • 22 Apr 2024 • Seongmin Lee, Benjamin Hoover, Hendrik Strobelt, Zijie J. Wang, Shengyun Peng, Austin Wright, Kevin Li, Haekyu Park, Haoyang Yang, Polo Chau

Diffusion-based generative models' impressive ability to create convincing images has garnered global attention.

LLM Attributor: Interactive Visual Attribution for LLM Generation

2 code implementations • 1 Apr 2024 • Seongmin Lee, Zijie J. Wang, Aishwarya Chakravarthy, Alec Helbling, Shengyun Peng, Mansi Phute, Duen Horng Chau, Minsuk Kahng

Our library offers a new way to quickly attribute an LLM's text generation to training data points to inspect model behaviors, enhance its trustworthiness, and compare model-generated text with user-provided text.

Attribute • Text Generation

Self-Supervised Pre-Training for Table Structure Recognition Transformer

1 code implementation • 23 Feb 2024 • Shengyun Peng, Seongmin Lee, XiaoJing Wang, Rajarajeswari Balasubramaniyan, Duen Horng Chau

We discover that the performance gap between the linear projection transformer and the hybrid CNN-transformer can be mitigated by self-supervised pre-training (SSP) of the visual encoder in the table structure recognition (TSR) model.

Representation Learning

High-Performance Transformers for Table Structure Recognition Need Early Convolutions

2 code implementations • 9 Nov 2023 • Shengyun Peng, Seongmin Lee, XiaoJing Wang, Rajarajeswari Balasubramaniyan, Duen Horng Chau

This allows it to "see" an appropriate portion of the table and "store" the complex table structure within sufficient context length for the subsequent transformer.

Decoder • Representation Learning +2
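
The early-convolution design mentioned above is only described at a high level in this snippet. As a rough illustration (not the authors' implementation), the following PyTorch sketch shows a small convolutional stem replacing a linear patch projection in front of a standard transformer encoder; the layer sizes and depths are assumptions.

```python
import torch
import torch.nn as nn

class ConvStemEncoder(nn.Module):
    """Illustrative only: a small convolutional stem in place of a linear
    patch projection, feeding a standard transformer encoder.
    Layer widths and depths are assumptions, not the paper's configuration."""
    def __init__(self, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        # Early convolutions: overlapping 3x3 convs downsample the image
        # before the tokens are handed to the transformer.
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, d_model, 3, stride=2, padding=1),
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, images):                     # (B, 3, H, W)
        feats = self.stem(images)                  # (B, d_model, H/8, W/8)
        tokens = feats.flatten(2).transpose(1, 2)  # (B, N, d_model); positional encodings omitted for brevity
        return self.encoder(tokens)

# Example: encode a batch of table images
out = ConvStemEncoder()(torch.randn(2, 3, 256, 256))
print(out.shape)  # torch.Size([2, 1024, 256])
```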

Robust Principles: Architectural Design Principles for Adversarially Robust CNNs

1 code implementation • 30 Aug 2023 • Shengyun Peng, Weilin Xu, Cory Cornelius, Matthew Hull, Kevin Li, Rahul Duggal, Mansi Phute, Jason Martin, Duen Horng Chau

Our research aims to unify existing works' diverging opinions on how architectural components affect the adversarial robustness of CNNs.

Adversarial Robustness

LLM Self Defense: By Self Examination, LLMs Know They Are Being Tricked

1 code implementation • 14 Aug 2023 • Mansi Phute, Alec Helbling, Matthew Hull, Shengyun Peng, Sebastian Szyller, Cory Cornelius, Duen Horng Chau

We test LLM Self Defense on GPT 3.5 and Llama 2, two of the most prominent current LLMs, against various types of attacks, such as forcefully inducing affirmative responses to prompts and prompt engineering attacks.

Language Modelling • Large Language Model +2
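
The self-examination idea described above can be illustrated with a short, hedged sketch: generate a response, then ask the same LLM whether that response is harmful before returning it. The `generate` callable and the checking prompt below are hypothetical placeholders, not the paper's exact setup.

```python
def llm_self_defense(generate, user_prompt):
    """Hypothetical sketch: generate a response, then ask the same LLM to
    judge whether that response is harmful before returning it.
    `generate(prompt) -> str` is a placeholder for any LLM call."""
    response = generate(user_prompt)

    # Self-examination step: the model inspects its own output.
    check_prompt = (
        "Does the following text contain harmful content? "
        "Answer 'yes' or 'no'.\n\n" + response
    )
    verdict = generate(check_prompt).strip().lower()

    if verdict.startswith("yes"):
        return "Sorry, I can't help with that."
    return response
```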

Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion

1 code implementation • 4 May 2023 • Seongmin Lee, Benjamin Hoover, Hendrik Strobelt, Zijie J. Wang, Shengyun Peng, Austin Wright, Kevin Li, Haekyu Park, Haoyang Yang, Duen Horng Chau

Diffusion Explainer tightly integrates a visual overview of Stable Diffusion's complex components with detailed explanations of their underlying operations, enabling users to fluidly transition between multiple levels of abstraction through animations and interactive elements.

Image Generation

RobArch: Designing Robust Architectures against Adversarial Attacks

1 code implementation • 8 Jan 2023 • Shengyun Peng, Weilin Xu, Cory Cornelius, Kevin Li, Rahul Duggal, Duen Horng Chau, Jason Martin

Adversarial Training is the most effective approach for improving the robustness of Deep Neural Networks (DNNs).
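
Since the snippet only names adversarial training, here is a minimal, generic sketch of a PGD-based adversarial training step in PyTorch; it is not the RobArch training recipe, and the attack hyperparameters are common defaults assumed for illustration.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft adversarial examples within an L-infinity ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One training step on PGD adversarial examples instead of clean inputs."""
    model.eval()                    # use running BN statistics while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```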

IMB-NAS: Neural Architecture Search for Imbalanced Datasets

no code implementations • 30 Sep 2022 • Rahul Duggal, Shengyun Peng, Hao Zhou, Duen Horng Chau

In this paper, we propose a new and complementary direction for improving performance on long-tailed datasets: optimizing the backbone architecture through neural architecture search (NAS).

Neural Architecture Search • Representation Learning

DetectorDetective: Investigating the Effects of Adversarial Examples on Object Detectors

1 code implementation • CVPR 2022 • Sivapriya Vellaichamy, Matthew Hull, Zijie J. Wang, Nilaksh Das, Shengyun Peng, Haekyu Park, Duen Horng (Polo) Chau

With deep learning-based systems performing exceedingly well in many vision-related tasks, a major concern with their widespread deployment, especially in safety-critical applications, is their susceptibility to adversarial attacks.

Object • object-detection +2

Accurate Anchor Free Tracking

no code implementations • 13 Jun 2020 • Shengyun Peng, Yunxuan Yu, Kun Wang, Lei He

Specifically, a target object is defined by a bounding box center, tracking offset, and object size.

Object • Visual Object Tracking
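
The center/offset/size parameterization mentioned above can be decoded back into a bounding box. The small sketch below illustrates one plausible decoding convention; the variable names and the exact convention are assumptions, not the paper's definitions.

```python
import numpy as np

def decode_box(center_xy, offset_xy, size_wh):
    """Illustrative decoding: a coarse center plus a sub-pixel offset gives
    the refined box center; width and height give the box extent.
    Returns (x1, y1, x2, y2)."""
    cx, cy = np.asarray(center_xy) + np.asarray(offset_xy)
    w, h = size_wh
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

# Example: a coarse center refined by a predicted offset and size
print(decode_box(center_xy=(128.0, 96.0), offset_xy=(0.4, -0.2), size_wh=(40.0, 24.0)))
# (108.4, 83.8, 148.4, 107.8)
```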
