Please login or sign up to post and edit reviews.
Prompts gone rogue. [Research Saturday] - Publication Date |
- Aug 10, 2024
- Episode Duration |
- 00:25:44
Shachar Menashe, Senior Director of Security Research at JFrog, is talking about "When Prompts Go Rogue: Analyzing a Prompt Injection Code Execution in
Vanna.AI." A security vulnerability in the
Vanna.AI tool, called CVE-2024-5565, allows hackers to exploit large language models (LLMs) by manipulating user input to execute malicious code, a method known as prompt injection.
This poses a significant risk when LLMs are connected to critical functions, highlighting the need for stronger security measures.
The research can be found here:
When Prompts Go Rogue: Analyzing a Prompt Injection Code Execution in
Vanna.AI
Learn more about your ad choices. Visit
megaphone.fm/adchoicesThis episode could use a review!
This episode could use a review! Have anything to say about it? Share your thoughts using the button below.
Submit Review