
Hacker News: Front Page

www.tomshardware.com
Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries
ArtPrompt bypassed safety measures in ChatGPT, Gemini, Claude, and Llama2.
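The core idea behind attacks like ArtPrompt is that a sensitive keyword rendered as ASCII art no longer appears verbatim in the prompt text, so simple keyword-based filters miss it while the model can still "read" the shape. A minimal sketch of that masking step (not the actual ArtPrompt implementation; the tiny hand-made font below is purely illustrative):

```python
# Minimal illustration: render a word as ASCII art so its literal
# characters never appear in the resulting string.
# The 5-row letter patterns are hand-made for this sketch only.

FONT = {
    "H": ["#   #", "#   #", "#####", "#   #", "#   #"],
    "I": ["#####", "  #  ", "  #  ", "  #  ", "#####"],
}

def to_ascii_art(word: str) -> str:
    """Render each letter side by side, row by row."""
    return "\n".join(
        "  ".join(FONT[ch][row] for ch in word.upper())
        for row in range(5)
    )

art = to_ascii_art("HI")
print(art)

# The literal substring "HI" is absent from the rendered art,
# which is what lets the masked word slip past naive text filters.
assert "HI" not in art
```

In the paper's setting, the masked word is substituted back into an otherwise benign-looking prompt, and the model is asked to decode the art before answering.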