This paper introduces a method for universally jailbreaking multimodal large language models by exploiting their non-textual modalities, exposing security vulnerabilities in current multimodal AI systems.

Jiahui Geng (耿佳辉)
Postdoctoral Researcher at MBZUAI and incoming Assistant Professor at Linköping University. Research on AI safety, trustworthiness, and alignment.
- Abu Dhabi, UAE
- Mohamed bin Zayed University of Artificial Intelligence