1 code implementation • 30 May 2024 • Zhenxing Niu, Yuyao Sun, Haodong Ren, Haoxuan Ji, Quan Wang, Xiaoke Ma, Gang Hua, Rong Jin
Finally, we convert embJS into the text space to facilitate jailbreaking the target LLM.
1 code implementation • 4 Feb 2024 • Zhenxing Niu, Haodong Ren, Xinbo Gao, Gang Hua, Rong Jin
This paper focuses on jailbreaking attacks against multi-modal large language models (MLLMs), seeking to elicit objectionable responses from MLLMs to harmful user queries.