Shaping the Safety Boundaries: Understanding and Defending Against Jailbreaks in Large Language Models

Published in the Proceedings of ACL 2025 (Main Conference), 2025

This work examines how jailbreak attacks circumvent the safety boundaries of large language models and proposes defense mechanisms that strengthen model safety against such attacks.

Recommended citation: L. Gao, J. Geng, X. Zhang, P. Nakov, and X. Chen (2025). "Shaping the Safety Boundaries: Understanding and Defending Against Jailbreaks in Large Language Models." In Proceedings of ACL 2025 (Main Conference).