Please login or sign up to post and edit reviews.
A dark side to LLMs. [Research Saturday]
Podcast |
CyberWire Daily
Publisher |
The CyberWire
Media Type |
audio
Podknife tags |
Cybersecurity
Tech News
Technology
Categories Via RSS |
Daily News
News
Tech News
Technology
Publication Date |
Apr 08, 2023
Episode Duration |
00:17:46
Sahar Abdelnabi from CISPA Helmholtz Center for Information Security sits down with Dave to discuss their work on "A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models." There is currently a large advance in the capabilities of Large Language Models or LLMs, as well as being integrated into many systems, including integrated development environments (IDEs) and search engines. The research states, "The functionalities of current LLMs can be modulated via natural language prompts, while their exact internal functionality remains implicit and unassessable." This could lead them to be susceptible to targeted adversarial prompting, as well as making them adaptable to even unseen tasks. Researchers demonstrated these said attacks to see if the LLMs needed new techniques for more defense. The research can be found here: More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models Learn more about your ad choices. Visit megaphone.fm/adchoices

This episode currently has no reviews.

Submit Review
This episode could use a review!

This episode could use a review! Have anything to say about it? Share your thoughts using the button below.

Submit Review