6 mar. 2023

The article is in German and behind a paywall, so it’s all very hard to confirm, but if I understand it correctly, that makes complete sense as an attack vector: Bing Chat “reads” search results by appending them to the big block of text that makes its prompt, so if the results themselves contain something that looks like a sub-prompt… it is liable to respond to that sub-prompt, derailing the user’s conversation entirely, potentially providing malicious links. Oooops.

Eva Wolfangel (

@ollibaba I didn’t find it very intuitive when I heard it the first time, too. That’s why it took so long to publish my article 😅 Well the original prompt is text. And Bing Chat has to process text, too, in order to be helpful for users. And as soon as it processes something that has the same pattern as its‘ original prompt, it cannot help but react on it and follow the new rules. This is at least how I understood it. (And how it looked when I tried it)

Want to know when I post new content to my blog? It's a simple as registering for free to an RSS aggregator (Feedly, NewsBlur, Inoreader, …) and adding to your feeds (or if you want to subscribe to all my topics). We don't need newsletters, and we don't need Twitter; RSS still exists.

Legal information: This blog is hosted par OVH, 2 rue Kellermann, 59100 Roubaix, France,

Personal data about this blog's readers are not used nor transmitted to third-parties. Comment authors can request their deletion by e-mail.

All contents © the author or quoted under fair use.