
Hacker News: Front Page
shared a link post in group #Stream of Goodies

arstechnica.com
AI poisoning could turn open models into destructive “sleeper agents,” says Anthropic
Trained LLMs that seem normal can generate vulnerable code given different triggers.