Andy Patel from WithSecure Labs joins Dave to discuss his team's study demonstrating how GPT-3 can be misused through malicious and creative prompt engineering. The research looks at how this technology, GPT-3 and GPT-3.5, can be used to trick users into scams.
GPT-3 is a user-friendly tool that uses autoregressive language modeling to generate versatile natural language text from a small amount of input, a capability that could inevitably interest cybercriminals. The research examines possible misuse of the tool, including phishing content, social opposition, social validation, style transfer, opinion transfer, prompt creation, and fake news.
The research can be found here:
Creatively malicious prompt engineering