If you teach a chatbot how to read ASCII art, it will teach you how to make a bomb


University researchers have developed a way to “jailbreak” large language models like ChatGPT using old-school ASCII art. The technique, aptly named “ArtPrompt,” involves crafting an ASCII art “mask” for a word and then cleverly using the mask to coax the chatbot into providing a response it shouldn’t.
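To illustrate only the masking step, here is a minimal, hypothetical Python sketch: it renders a placeholder word as block-letter ASCII art so the literal string never appears in the prompt text. The FONT table and ascii_mask function are invented for this example and are not the researchers’ actual tooling; an ArtPrompt-style prompt then asks the model to read the art back and act on the decoded word.

    # Hypothetical sketch of the ASCII-art "mask" step; not the ArtPrompt authors' code.
    # Only the glyphs needed for the demo word are defined.
    FONT = {
        "H": ["#   #", "#   #", "#####", "#   #", "#   #"],
        "I": ["#####", "  #  ", "  #  ", "  #  ", "#####"],
    }

    def ascii_mask(word: str) -> str:
        """Render each letter as a 5-row glyph and join the glyphs side by side."""
        rows = []
        for r in range(5):
            rows.append("   ".join(FONT[ch][r] for ch in word.upper()))
        return "\n".join(rows)

    if __name__ == "__main__":
        # The masked word stands in for whatever term a safety filter would flag.
        print(ascii_mask("HI"))

The point of the mask is that the filtered word is conveyed visually rather than textually, which is what lets the surrounding prompt slip past keyword-based safeguards.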

Read Entire Article
