Google scientists have developed a machine-learning tool called Health Acoustic Representations (HeAR) that can detect and monitor health conditions by analyzing sounds such as coughing and breathing. Trained on millions of audio clips of human sounds, HeAR shows promise in diagnosing diseases like COVID-19 and tuberculosis, as well as in assessing lung function. Unlike traditional supervised learning approaches, the researchers used self-supervised learning on unlabelled data. They converted more than 300 million sound clips into visual representations of the audio, and the resulting model can then be adapted to multiple downstream tasks. HeAR outperformed existing models and, thanks to its diverse training data, is broadly applicable. The field of health acoustics holds potential for diagnosing, screening, and monitoring health conditions.
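The sketch below illustrates the general pattern the episode describes, not Google's actual HeAR pipeline: an audio clip is turned into a spectrogram "image", a frozen encoder maps it to an embedding, and a small supervised head is fit on top for a downstream task. The encoder here is a placeholder (a fixed random projection), the helper names such as embed_clip are hypothetical, and librosa plus scikit-learn are assumed to be installed.

```python
# Minimal sketch of the spectrogram -> frozen embedding -> linear probe pattern.
# This is NOT the HeAR model; the encoder is a stand-in random projection.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

SR = 16_000                         # assumed sample rate for two-second clips
RNG = np.random.default_rng(0)

def to_spectrogram(clip: np.ndarray) -> np.ndarray:
    """Convert a raw waveform into a log-mel spectrogram (the 'visual representation')."""
    mel = librosa.feature.melspectrogram(y=clip, sr=SR, n_mels=64)
    return librosa.power_to_db(mel, ref=np.max)

# Stand-in for a frozen, self-supervised encoder: flatten the spectrogram and
# apply a fixed random projection to a 512-dimensional embedding.
PROJECTION = RNG.normal(size=(64 * 63, 512))   # 64 mel bins x 63 frames for 2 s of audio

def embed_clip(clip: np.ndarray) -> np.ndarray:
    spec = to_spectrogram(clip)
    return spec.reshape(-1) @ PROJECTION

# Synthetic "cough" vs "breath" clips so the example runs without real data.
def fake_clip(label: int) -> np.ndarray:
    noise = RNG.normal(scale=0.1, size=SR * 2)
    tone = np.sin(2 * np.pi * (200 if label else 60) * np.arange(SR * 2) / SR)
    return (noise + 0.3 * tone).astype(np.float32)

labels = RNG.integers(0, 2, size=40)
X = np.stack([embed_clip(fake_clip(y)) for y in labels])

# Downstream adaptation: a simple linear probe trained on the frozen embeddings.
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", probe.score(X, labels))
```

In practice the random projection would be replaced by an encoder pretrained with a self-supervised objective on unlabelled audio, and only the lightweight head would be retrained per task (cough detection, tuberculosis screening, lung-function estimation, and so on).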
--- Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message