17 Oct
The researchers say that if the attack were carried out in the real world, people could be socially engineered into believing the unintelligible prompt might do something useful, such as improve their CV. The researchers point to numerous websites that provide people with prompts they can use. They tested the attack by uploading a CV to conversations with chatbots, and it was able to return the personal information contained within the file.

Earlence Fernandes, an assistant professor at UCSD who was involved in the work, says the attack approach is fairly complicated, as the obfuscated prompt needs to identify personal information,…