September 2021 OES Beacon

Webinar Series on Machine Learning and its applications to Oceanic Engineering: Talk by Dr. Shyam Madhusudhana

Dr. Gopu R. Potty, Chair, Technology Committee on Data Analytics, Integration and Modelling (DAIM)

Figure 1: A screen shot from Dr. Madhusudhana’s webinar showing one of the topics (Artificial Neural Network) covered during the talk. The full webinar is available at the OES YouTube channel.

Dr. Shyam Madhusudhana gave a webinar on the topic Machine learning in marine bioacoustics on 21 July 2021. Dr. Madhusudhana is a postdoctoral researcher at the K. Lisa Yang Center for Conservation Bioacoustics (CCB) within the Cornell Lab of Ornithology. His research involves developing deep-learning techniques for realizing effective and efficient machine listening in the big-data realm, with applications in the monitoring of both marine and terrestrial fauna. He is actively associated with OES as Coordinator of Technology Committees and as co-Chair of the Student Poster Competitions at the biannual OCEANS conferences. This talk was the second in the series organized by the Technology Committee on Data Analytics, Integration and Modelling (DAIM) on Machine Learning and its applications to Oceanic Engineering. The first talk in the series, Introduction to machine learning in acoustics: theory and applications, was given by Dr. Michael Bianco, Assistant Project Scientist, Marine Physical Laboratory, University of California San Diego (UCSD), La Jolla, CA, USA. Like the first webinar, this one was well attended, with approximately fifty attendees online. The recordings of both talks are available on the OES YouTube channel (https://www.youtube.com/channel/UC6wjVnDY2-BmzdS8LzxrdHQ).

Passive acoustic monitoring (PAM) methods are used for monitoring and studying a wide variety of marine mammals and fishes based on their vocalizations. Previously, the identification and classification of these vocalizations were carried out manually, which can be highly labour intensive and time consuming given the amount of data being collected on platforms such as sonobuoys, moored recorders, cabled observatories, and mobile platforms such as AUVs, drifters, and ships. This led to active research into developing automatic detection algorithms based on a variety of techniques (see the archived webinar Passive acoustic monitoring overview-Applications for marine mammals and fishes by Dr. Sofie Van Parijs). The use of these automatic recognition techniques has greatly improved the ease and repeatability of analyses. Over the past decade, the adoption of machine learning (ML) based recognition techniques has brought improved accuracy and reliability to the analysis of large acoustic datasets. In his talk, Dr. Madhusudhana provided an overview of PAM undertakings, briefly surveyed the various automation techniques in use, and contrasted them with modern ML-based techniques. He also provided a gentle introduction to ML concepts as they apply to acoustic event recognition, for the benefit of attendees who are not ML experts.
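As a simple illustration of the kind of pre-ML automation technique contrasted in the talk (this sketch is not from the webinar, and all parameter values are illustrative), a classic energy detector flags time windows whose energy rises well above the background level:

```python
import numpy as np

def energy_detector(x, fs, win_s=0.1, thresh_db=6.0):
    """Flag windows whose energy exceeds the median background level
    by thresh_db decibels. A toy stand-in for the classical detection
    approaches that preceded ML-based recognizers."""
    n = int(win_s * fs)                 # samples per analysis window
    nwin = len(x) // n
    e = np.array([np.sum(x[i * n:(i + 1) * n] ** 2) for i in range(nwin)])
    e_db = 10.0 * np.log10(e + 1e-12)   # window energies in dB
    return e_db > (np.median(e_db) + thresh_db)

# Synthetic example: weak noise with one loud 100 Hz "call" in the middle
fs = 1000
t = np.arange(2 * fs) / fs
x = 0.01 * np.random.default_rng(1).standard_normal(len(t))
x[500:700] += np.sin(2 * np.pi * 100 * t[500:700])
hits = energy_detector(x, fs)           # True where a call is detected
```

Detectors of this kind are cheap and easy to tune, but as the talk noted, their repeatability and accuracy degrade in variable noise conditions, which is part of what motivated the shift to learned recognizers.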

Dr. Madhusudhana provided a brief introduction to Convolutional Neural Networks (CNNs) and gave an overview of the various resources available for implementing them. He then explained how to successfully implement a CNN for bioacoustics applications, discussing the pre-processing steps and transformations that can be applied to the raw waveform to highlight different features that the CNN can be trained to ‘learn’. He also emphasized the factors to be considered when deciding on the architecture and training approach of the CNN.
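A common pre-processing pipeline of the kind discussed is to convert the raw waveform into a normalized log-magnitude spectrogram before it is fed to a CNN. The sketch below (illustrative only; the window sizes and normalization are assumptions, not the settings used in the webinar) shows one such transformation with SciPy:

```python
import numpy as np
from scipy.signal import spectrogram

def waveform_to_spectrogram(x, fs, nperseg=256, noverlap=192):
    """Convert a raw waveform into a log-magnitude spectrogram and
    normalize it to zero mean and unit variance, a typical input
    representation for a CNN-based call recognizer."""
    f, t, sxx = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    sxx_db = 10.0 * np.log10(sxx + 1e-12)              # decibel scale
    sxx_db = (sxx_db - sxx_db.mean()) / (sxx_db.std() + 1e-9)
    return f, t, sxx_db

# Example: one second of a synthetic 440 Hz tone sampled at 4 kHz
fs = 4000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
f, t, s = waveform_to_spectrogram(x, fs)   # s is the CNN input "image"
```

The choice of window length and overlap trades time resolution against frequency resolution, one of the feature-highlighting decisions the talk emphasized.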

The high point of the webinar was a hands-on demonstration of developing an ML model using real underwater acoustic recordings, in which participants followed along while building an ML-based solution for automatic recognition of North Atlantic Right Whale (NARW) calls. The dataset used for this exercise is part of the publicly available annotated NARW recordings from the 2013 Detection, Classification, Localization and Density Estimation (DCLDE) challenge. The demonstration utilized Google Colaboratory, a free (for non-commercial use) platform offering cloud computation facilities.
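For illustration only (this is not the webinar's notebook, and a real detector would be trained on the NARW data rather than use random weights), the core operations of such a CNN classifier, convolution, ReLU, pooling, and a sigmoid output, can be sketched in plain NumPy:

```python
import numpy as np

def conv2d(img, kern):
    """Valid-mode 2-D cross-correlation, the core CNN operation."""
    kh, kw = kern.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def tiny_cnn_score(spec, kern, weight, bias):
    """One conv layer + ReLU + global average pooling + sigmoid output:
    a toy stand-in for a binary call/no-call classifier."""
    feat = np.maximum(conv2d(spec, kern), 0.0)          # conv + ReLU
    pooled = feat.mean()                                 # global avg pool
    return 1.0 / (1.0 + np.exp(-(weight * pooled + bias)))  # probability

rng = np.random.default_rng(0)
spec = rng.standard_normal((64, 64))   # stand-in spectrogram patch
p = tiny_cnn_score(spec, rng.standard_normal((3, 3)), 1.5, 0.0)
```

In practice, frameworks such as those available in Colaboratory handle these operations with many stacked layers and learned weights; the sketch only shows the forward pass that turns a spectrogram patch into a detection score.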

DAIM-TC is planning the third talk in the series for late September or early October 2021; please be on the lookout for the announcement in September. We would also like to hear your feedback, including suggestions for topics and potential speakers for future webinars.