Commit 048f00c: Update README.md
AgaMiko authored May 28, 2021 (parent a3d5b97)
Showing 1 changed file (README.md) with 15 additions and 0 deletions.
# Papers

![Flying bird](http://www.kuwaitbirds.org/sites/default/files/files-misc/birding-bird-shapes-1.jpg)
## 2020
- Priyadarshani, Nirosha, et al. ["Wavelet filters for automated recognition of birdsong in long‐time field recordings."](https://besjournals.onlinelibrary.wiley.com/doi/abs/10.1111/2041-210X.13357) Methods in Ecology and Evolution 11.3 (2020): 403-417.
&nbsp;&nbsp;&nbsp;&nbsp; <details><summary> Abstract </summary>
Ecoacoustics has the potential to provide a large amount of information about the abundance of many animal species at a relatively low cost. Acoustic recording units are widely used in field data collection, but the facilities to reliably process the data recorded – recognizing calls that are relatively infrequent, and often significantly degraded by noise and distance to the microphone – are not well-developed yet.
We propose a call detection method for continuous field recordings that can be trained quickly and easily on new species, and degrades gracefully with increased noise or distance from the microphone. The method is based on the reconstruction of the sound from a subset of the wavelet nodes (elements in the wavelet packet decomposition tree). It is intended as a preprocessing filter, therefore we aim to minimize false negatives: false positives can be removed in subsequent processing, but missed calls will not be looked at again.
We compare our method to standard call detection methods, and also to machine learning methods (using as input features either wavelet energies or Mel-Frequency Cepstral Coefficients) on real-world noisy field recordings of six bird species. The results show that our method has higher recall (proportion detected) than the alternative methods: 87% with 85% specificity on >53 hr of test data, resulting in an 80% reduction in the amount of data that needed further verification. It detected >60% of calls that were extremely faint (far away), even with high background noise.
This preprocessing method is available in our AviaNZ bioacoustic analysis program and enables the user to significantly reduce the amount of subsequent processing required (whether manual or automatic) to analyse continuous field recordings collected by spatially and temporally large-scale monitoring of animal species. It can be trained to recognize new species without difficulty, and if several species are sought simultaneously, filters can be run in parallel.

</details>
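The core idea above, reconstructing a signal from a subset of the nodes in a wavelet packet decomposition tree, can be sketched in plain NumPy. This is not the AviaNZ implementation: it uses the simple Haar wavelet and an energy-based node selection purely for illustration; `filter_by_top_nodes`, the wavelet choice, and the selection rule are all assumptions, not the paper's method.

```python
import numpy as np

def haar_step(x):
    # one Haar analysis step: split a signal into (approximation, detail)
    s = np.sqrt(2.0)
    return (x[0::2] + x[1::2]) / s, (x[0::2] - x[1::2]) / s

def haar_inv(a, d):
    # exact inverse of haar_step (Haar is orthogonal)
    s = np.sqrt(2.0)
    out = np.empty(a.size * 2)
    out[0::2] = (a + d) / s
    out[1::2] = (a - d) / s
    return out

def wp_decompose(x, levels):
    # full wavelet packet tree: both bands are split at every level,
    # returning the list of leaf-node coefficient arrays
    nodes = [x]
    for _ in range(levels):
        nodes = [band for n in nodes for band in haar_step(n)]
    return nodes

def wp_reconstruct(nodes):
    # merge sibling leaves pairwise back up the tree
    while len(nodes) > 1:
        nodes = [haar_inv(nodes[i], nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

def filter_by_top_nodes(x, levels=3, keep=2):
    # illustrative filter (hypothetical selection rule): keep only the
    # highest-energy leaf nodes, zero the rest, reconstruct the signal
    leaves = wp_decompose(x, levels)
    energies = [float(np.sum(l ** 2)) for l in leaves]
    keep_idx = set(np.argsort(energies)[-keep:])
    kept = [l if i in keep_idx else np.zeros_like(l)
            for i, l in enumerate(leaves)]
    return wp_reconstruct(kept)
```

In practice one would train the node subset per species (as the paper does) rather than pick nodes by energy, and use a smoother wavelet than Haar; the sketch only shows the decompose/select/reconstruct structure.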

- Brooker, Stuart A., et al. ["Automated detection and classification of birdsong: An ensemble approach."](https://www.sciencedirect.com/science/article/pii/S1470160X2030546X) Ecological Indicators 117 (2020): 106609.
&nbsp;&nbsp;&nbsp;&nbsp; <details><summary> Abstract </summary>
The avian dawn chorus presents a challenging opportunity to test autonomous recording units (ARUs) and associated recogniser software in the types of complex acoustic environments frequently encountered in the natural world. To date, extracting information from acoustic surveys using readily-available signal recognition tools (‘recognisers’) for use in biodiversity surveys has met with limited success. Combining signal detection methods used by different recognisers could improve performance, but this approach remains untested. Here, we evaluate the ability of four commonly used and commercially- or freely-available individual recognisers to detect species, focusing on five woodland birds with widely-differing song-types. We combined the likelihood scores (of a vocalisation originating from a target species) assigned to detections made by the four recognisers to devise an ensemble approach to detecting and classifying birdsong. We then assessed the relative performance of individual recognisers and that of the ensemble models. The ensemble models out-performed the individual recognisers across all five song-types, whilst also minimising false positive error rates for all species tested. Moreover, during acoustically complex dawn choruses, with many species singing in parallel, our ensemble approach resulted in detection of 74% of singing events, on average, across the five song-types, compared to 59% when averaged across the recognisers in isolation; a marked improvement. We suggest that this ensemble approach, used with suitably trained individual recognisers, has the potential to finally open up the use of ARUs as a means of automatically detecting the occurrence of target species and identifying patterns in singing activity over time in challenging acoustic environments.
</details>
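A minimal sketch of the score-combination idea, assuming each recogniser emits a per-detection likelihood score: fuse the scores (here by a simple mean, since the paper's exact combination rule is not reproduced above) and threshold the fused value. The function name and the 0.5 threshold are illustrative assumptions.

```python
import numpy as np

def ensemble_detect(scores, threshold=0.5):
    """Fuse likelihood scores from several recognisers.

    scores: 2-D array-like, rows = recognisers, columns = candidate
    detections; values = likelihood that the sound is the target species.
    Returns a boolean array of accepted detections.
    """
    scores = np.asarray(scores, dtype=float)
    combined = scores.mean(axis=0)   # one fused score per detection
    return combined >= threshold
```

For example, with four recognisers scoring three candidate detections, `ensemble_detect([[0.9, 0.2, 0.6], [0.8, 0.1, 0.4], [0.7, 0.3, 0.5], [0.95, 0.05, 0.7]])` accepts the first and third detections. Averaging tends to suppress false positives that only one recogniser scores highly, which matches the reduction in false positive rates the abstract reports.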

## 2019
- Stowell, Dan, et al. ["Automatic acoustic detection of birds through deep learning: the first Bird Audio Detection challenge."](https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.13103) Methods in Ecology and Evolution 10.3 (2019): 368-380.

# Competitions
![Flying bird](http://www.kuwaitbirds.org/sites/default/files/files-misc/birding-bird-shapes-1.jpg)
- [BirdCLEF 2021 - Birdcall Identification](https://www.kaggle.com/c/birdclef-2021) - identify which birds are calling in long recordings, given training data generated in meaningfully different contexts. This is the exact problem facing scientists trying to automate the remote monitoring of bird populations. This competition builds on the previous one by adding soundscapes from new locations, more bird species, richer metadata about the test set recordings, and soundscapes to the train set.
- [kaggle - Cornell Birdcall Identification](https://www.kaggle.com/c/birdsong-recognition/overview) - Build tools for bird population monitoring. Identify a wide variety of bird vocalizations in soundscape recordings. Due to the complexity of the recordings, they contain weak labels. There might be anthropogenic sounds (e.g., airplane overflights) or other bird and non-bird (e.g., chipmunk) calls in the background, with a particular labeled bird species in the foreground. Bring your new ideas to build effective detectors and classifiers for analyzing complex soundscape recordings!
- [LifeCLEF 2020 - BirdCLEF](https://www.imageclef.org/BirdCLEF2020) - Two scenarios will be evaluated: (i) the recognition of all specimens singing in a long sequence (up to one hour) of raw soundscapes that can contain tens of birds singing simultaneously, and (ii) chorus source separation in complex soundscapes that were recorded in stereo at a very high sampling rate (250 kHz). The training set for the challenge will be a version of the 2019 training set enriched with new contributions from the Xeno-canto network and a geographic extension. It will contain approximately 80K recordings covering between 1,500 and 2,000 species from North, Central and South America, as well as Europe, making it the largest bioacoustic dataset used in the literature.
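The soundscape competitions above score predictions per fixed-length segment (BirdCLEF uses 5-second windows), so a common first preprocessing step is slicing a long recording into such windows. A minimal sketch, assuming mono audio and a 32 kHz sample rate (the rate and zero-padding of the last window are assumptions, not competition requirements):

```python
import numpy as np

def frame_soundscape(audio, sr=32000, win_s=5.0):
    """Split a long mono recording into fixed-length windows.

    Trailing samples that do not fill a final window are zero-padded,
    so every row of the result has the same length.
    """
    win = int(sr * win_s)                       # samples per window
    n = int(np.ceil(audio.size / win))          # number of windows
    padded = np.zeros(n * win, dtype=float)
    padded[:audio.size] = audio
    return padded.reshape(n, win)               # shape: (windows, win)
```

Each row can then be fed to a classifier independently, and the row index maps directly back to a start time of `row * win_s` seconds in the original recording.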
