Most machine learning models are used in roughly the same way: they take a complex, high-dimensional input (like a data table, an image, or a body of text) and return something very simple (a classification or regression output, or a set of cluster centroids). That makes machine learning ideal for automating repetitive tasks that might historically have been carried out only by humans.
But this strategy may not be the most exciting application of machine learning in the future: increasingly, researchers and even industry players are experimenting with generative models, which produce much more complex outputs like images and text from scratch. These models are effectively carrying out a creative process, and mastering that process hugely widens the scope of what can be accomplished by machines.
My guest today is Xander Steenbrugge, and his focus is the creative side of machine learning. In addition to consulting with large companies to help them put state-of-the-art machine learning models into production, he’s focused a lot of his work on more philosophical and interdisciplinary questions, including the interaction between art and machine learning. For that reason, our conversation went in an unusually philosophical direction, covering everything from the structure of language, to what makes natural language comprehension more challenging than computer vision, to the emergence of artificial general intelligence, and how all these things connect to the current state of the art in machine learning.