
Future of AI

News on Artificial Intelligence in Education and Libraries

Tuesday, December 2, 2025

Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models

Researchers show that rephrasing harmful prompts as poetry leads many large language models to ignore their safety rules. Across 25 models, poetic prompts achieved far higher jailbreak success rates than their non-poetic equivalents. The study suggests that current safety methods can be defeated by simple stylistic changes to a single prompt.

ORIGINAL LINK: https://arxiv.org/abs/2511.15304