This paper introduces a novel method for universally jailbreaking multimodal large language models by exploiting their non-textual modalities, exposing significant security vulnerabilities in current multimodal AI systems.