The Language Model Too Dangerous to Release
Media Type | audio
Categories Via RSS | Technology
Publication Date | Mar 25, 2019
Episode Duration | 00:21:01
OpenAI recently created a cutting-edge new natural language processing model, GPT-2, but unlike all their previous projects, they have not released it to the public. Why? It seems to be a little too good. It can answer reading comprehension questions, summarize text, translate between languages, and generate realistic fake text. This last capability, in particular, raised concerns inside OpenAI that the raw model could be dangerous in the hands of bad actors, so researchers will spend the next six months studying the model (and reading comments from you, if you have strong opinions here) before deciding what to do next. Regardless of where this lands from a policy perspective, it’s an impressive model, and the released snippets of auto-generated text are striking. We’re covering the methodology, the results, and a bit of the policy implications in our episode this week.
