A New Trick Uses AI to Jailbreak AI Models—Including GPT-4
Publisher | WIRED
Media Type | Audio
Podknife Tags | Cybersecurity, Tech News, Technology
Categories via RSS | Technology
Publication Date | Dec 11, 2023
Episode Duration | 00:05:28

Adversarial algorithms can systematically probe large language models like OpenAI’s GPT-4 for weaknesses that can make them misbehave. Read the story here.
