Biodiversity Information Science and Standards : Conference Abstract
Corresponding author: Jessie Barry (jb794@cornell.edu)
Received: 06 Jun 2018 | Published: 06 Jun 2018
© 2018 Jessie Barry
This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Citation: Barry J (2018) Applications of Deep Learning in Ornithology. Biodiversity Information Science and Standards 2: e27251. https://doi.org/10.3897/biss.2.27251
Earth’s ecosystems are threatened by anthropogenic change, yet relatively little is known about biodiversity across broad spatial (i.e. continental) and temporal (i.e. year-round) scales. At these scales there is a significant gap in our understanding of species distribution and abundance, which is a precursor to effective conservation.
Our approach to filling this gap is through partnerships between our non-profit organization, computer science faculty, and industry leaders. By leveraging deep learning technologies and engaging an array of stakeholders, we are able to process data that would take years to analyze using traditional methods.
Methods.
We use 28 years of Next-Generation Radar (NEXRAD) imagery, which captures birds aloft during nocturnal migration. Using convolutional neural networks (CNNs), we assess the density of birds in radar images to count the number of individuals crossing the continental U.S. each spring and fall. For acoustic analysis of birds vocalizing during nocturnal migration, we use recorders to monitor the calling activity of birds aloft and CNNs to detect and classify bird vocalizations in noisy landscapes. We gathered more than 6 million images from the eBird community, archived them in the Macaulay Library at the Cornell Lab of Ornithology, and crowdsourced millions of annotations to train models that classify more than 5,000 species of birds in images; we are now applying this approach to video. These projects have used both supervised and unsupervised learning techniques. With supervised learning and elaborate training datasets, we have made tremendous headway in bird photo identification. Unsupervised learning was used to successfully remove rain from NEXRAD images with little training data. We expect advances in unsupervised learning to open new possibilities in the future.
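To make the supervised photo-identification workflow concrete, the sketch below fine-tunes a CNN pretrained on ImageNet for a many-species classification task. It is a minimal illustration under stated assumptions, not the Lab's production pipeline: the PyTorch/torchvision stack, the data directory, and the 5,000-class output size are assumptions chosen for the example.

```python
# Minimal sketch (illustrative assumptions, not the Lab's actual code):
# fine-tune a pretrained CNN to classify bird photos into many species.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_SPECIES = 5000  # assumption: one output unit per species

# Standard ImageNet-style preprocessing
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes a hypothetical ImageFolder-style directory of labelled photos
train_set = datasets.ImageFolder("data/bird_photos/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

# Start from a CNN pretrained on ImageNet and replace the classification head
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One pass over the crowdsourced annotations (one epoch shown for brevity)
model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The same pattern (a pretrained backbone with a task-specific head) applies, with different inputs, to spectrograms of nocturnal flight calls or to frames of radar imagery.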
Conclusions.
The Cornell Lab pioneered the concept of autonomous recording units for monitoring biodiversity two decades ago, but without AI to process the data, discoveries were limited by human processing time. Today, we can combine radar findings with acoustic monitoring and sightings from citizen scientists for a more complete understanding of bird populations. We expect that, in the near future, AI will be able to identify birds with high confidence in images, audio recordings, and video. Furthermore, whereas conventional approaches require separate neural networks whose outputs are combined in a separate process, we now perform multi-modal sensor integration within a single CNN, removing the need to pre-process data for AI pattern recognition. Our vision is to continue applying these techniques to create a ‘real-time global bird monitoring network’ built from a combination of humans and automated sensors. This network of sensors (or robots) will have ability comparable to a human's to detect, identify, and count birds, gathering information systematically and in places humans cannot reach.
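The single-network, multi-modal integration mentioned above can be sketched as a CNN with one branch per sensor stream, whose features are fused before a shared classifier. The architecture below is an illustrative assumption: the layer sizes, input shapes, and class count are invented for the example and do not describe the Lab's actual model.

```python
# Hedged sketch of multi-modal fusion in a single network: one branch for
# audio spectrograms, one for image/radar data, fused before a shared head.
import torch
import torch.nn as nn

class MultiModalBirdNet(nn.Module):
    def __init__(self, num_classes: int = 5000):
        super().__init__()
        # Convolutional branch for 1-channel spectrogram input (assumed shape)
        self.audio_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Convolutional branch for 3-channel image or radar input (assumed shape)
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Shared classifier over the concatenated branch features
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, spectrogram, image):
        a = self.audio_branch(spectrogram)
        v = self.image_branch(image)
        return self.head(torch.cat([a, v], dim=1))

# Example forward pass with dummy tensors (batch of 2)
model = MultiModalBirdNet(num_classes=10)
logits = model(torch.randn(2, 1, 128, 128), torch.randn(2, 3, 128, 128))
```

The design choice is that both sensor streams feed one set of shared weights after fusion, so no separate post-hoc combination step is needed.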
Keywords: artificial intelligence, deep learning