- ChatGPT and Gemini can be tricked into giving harmful answers through poetry, new study finds (livemint.com)
- AI’s safety features can be circumvented with poetry, research finds (The Guardian)
- Poetic prompts can jailbreak AI, study finds 62 per cent of chatbots slip into harmful replies (India Today)
- Poetic Prompts May Trick AI To Help You Build Nuclear Weapon (NDTV)
- AI Chatbots Can be Tricked to Bypass Safety Features by Using Poetry, New Study Reveals (Tech Times)