Dr Chris Mitchell is the CEO and Founder of Audio Analytic, which develops sound recognition software that adds context-based intelligence to consumer technology products. In this conversation, you'll learn what state-of-the-art sound recognition systems can do, the types of sound events typically recognised, which consumer products the technology is integrated into, and the many benefits and new possibilities it affords developers and users.
We discover the difference between sound recognition and speech recognition, how sound recognition provides the all-important context for voice-enabled devices to make the right decisions, and how smart devices can take advantage of this contextual knowledge. We then dive into some of the technical details of how it all works, including 'better than real-time' processing, edge computing vs the cloud, the need to train custom acoustic models, and how these machine learning models can run on low-resource devices like headphones using TinyML. Chris briefly explains the process of integrating the AI3 framework into your products, then we tackle the all-important question of data privacy and security.
Many of the smart devices of the future will rely on sound recognition to understand the context of their environments. Chris and his team are at the cutting edge of the sound recognition field and are long-time experts in the domain, so there's no better person to introduce us to this important technology. This is a time-limited preview. To hear the full episode, and to access the full catalogue of episodes and bonus content, become a Voice Tech Pro: https://voicetechpodcast.com/pro
Links from the show:
Subscribe to get future episodes:
Join the discussion: