
Hacker News: Front Page shared a link in group #Stream of Goodies (arxiv.org):
ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs
Safety is critical to the usage of large language models (LLMs). Multiple techniques such as data filtering and supervised fine-tuning have been developed to strengthen LLM safety. However, currently […]
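
For readers skimming past the truncated abstract: the core trick the title refers to is rewriting a safety-filtered keyword as ASCII art, so the literal string never appears in the prompt text that a keyword-based filter would scan. Below is a minimal sketch of that general idea in Python, assuming the pyfiglet library for FIGlet-style rendering; the keyword and prompt template are illustrative placeholders, not the paper's exact attack pipeline.

```python
# Minimal sketch of the ArtPrompt-style idea: render a keyword as
# ASCII art so the raw string never appears in the prompt text.
# The word and prompt template below are illustrative only; this is
# not the paper's actual attack pipeline.
import pyfiglet


def mask_word_as_ascii_art(word: str, font: str = "standard") -> str:
    """Render `word` as multi-line ASCII art using a FIGlet font."""
    return pyfiglet.figlet_format(word, font=font)


if __name__ == "__main__":
    # Hypothetical example keyword, chosen for illustration.
    art = mask_word_as_ascii_art("secret")

    # The masked word is embedded in an otherwise ordinary prompt;
    # the model is asked to decode the art and substitute the word
    # back into its answer.
    prompt = (
        "The ASCII art below spells a single word. "
        "Decode it, then use that word in your answer:\n\n" + art
    )
    print(prompt)
```

The point of the sketch is only to show why semantics-only alignment can miss such inputs: the sensitive token exists as a visual pattern across many lines, not as a contiguous string the filter can match.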