Researchers find font-rendering trick that hides malicious commands

Researchers have published a proof-of-concept (PoC) that uses custom fonts to fool many popular Artificial Intelligence (AI) assistants, including ChatGPT, Claude, Copilot, Gemini, Leo, Grok, Perplexity, Sigma, Dia, Fellou, and Genspark.

Imagine a book where the visible text is harmless, but a second message is hidden between the lines in special, human-only ink. A human reader sees both layers. An AI assistant sees only the first, because it reads the underlying text rather than the rendered page. That means the AI is working with an incomplete picture, while the human may act on instructions the AI never even saw.
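To make the analogy concrete: a web font is essentially a lookup table from character codes to glyph shapes, and an attacker who controls the font can make the drawn glyphs differ from the underlying codepoints. The sketch below is purely illustrative and is not the researchers' published PoC; the glyph table, strings, and function names are invented for the example, with a dict standing in for the font's character map.

```python
# Illustrative sketch (not the published PoC): if a page ships a custom
# font whose glyphs don't match their codepoints, the DOM text (what an
# AI assistant extracts) and the on-screen text (what a human reads)
# diverge. A dict stands in for the malicious font's glyph table.

GLYPH_MAP = {  # hypothetical cmap: underlying codepoint -> drawn glyph
    "a": "R", "b": "u", "c": "n", "d": " ", "e": "e", "f": "v",
    "g": "i", "h": "l", "i": ".", "j": "s", "k": "h",
}

def extract_text(dom_text: str) -> str:
    """What an AI assistant 'reads': raw codepoints, font ignored."""
    return dom_text

def render_text(dom_text: str) -> str:
    """What a human 'sees': each codepoint drawn with the font's glyph."""
    return "".join(GLYPH_MAP.get(ch, ch) for ch in dom_text)

dom_text = "abcdefghijk"        # looks meaningless/harmless to a scanner
print(extract_text(dom_text))   # the AI's view: abcdefghijk
print(render_text(dom_text))    # the human's view: Run evil.sh
```

The point of the sketch is that no amount of scanning the extracted text reveals the instruction; only rendering the page with the attacker's font does.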

Why this matters

We’ve written before about different ClickFix-type attacks, where cybercriminals trick people into infecting their own devices. Suppose you land on a suspicious-looking webpage and ask…
