Files are recorded but there is no sound
I have two AudioMoths. They were both working fine. Now one seems to work okay in every respect except that the files it records are empty, with no sound at all. This is true for both Custom and Default settings. The file sizes are appropriate (i.e. 39296 KB for a 55-second recording) and the LEDs behave appropriately (flashing red while recording, flashing green during the sleep cycle). The battery voltage reads 4.6 V. The card is a SanDisk Extreme 128GB V30, which works fine in the other AudioMoth. A second, identical SD card also works in the other AudioMoth but gives the same result in the problem unit, i.e. correct file sizes but no sound. The results are the same when the files are “played” on different computers; the only sound is the noise floor.

In desperation I have tried to re-flash the device with firmware 1.5.0 (which it already has). The flash app sees the AudioMoth with 1.5.0 and says “switching to flash mode”, but it ultimately turns the AudioMoth off and returns the error “failed to switch AudioMoth to flash mode”. Disconnecting and reconnecting the USB produces the same result.
Both devices are less than 6 months old and both were working a week ago.
Any suggestions on what is wrong with it and what else I might try to get it working?
Thanks.
Wayne Hall, Anchorage
We have recently experienced similar issues at a large scale. We have been deploying ~35 AudioMoths year-round since 2020 (batteries changed and SD cards swapped every 3 months) in the official cases. Some have died (trampled by cattle or stolen) and have been replaced over the course of the study. In total we have deployed more than 60 unique units across more than 305 deployments.

Upon inspection and analysis after several years of data collection, we discovered that hundreds of thousands of the files did not contain any sound data. We determined that ~30% of the data we collected had no sounds recorded (91 of 305 deployments). I have not had access to any of the units to inspect for physical damage, but I figured I would post here in case others have been experiencing anything similar. We developed two metrics of mic quality/sensor function and did some QA/QC. Below are some details of the protocol we used and a figure showing the extent of the failures. Any thoughts/suggestions would be appreciated!

----
Smoothed Species Richness
We calculated the BirdNET species richness for each day of deployment at each site and took a smoothed mean over ten days, so the richness for a given day was the average of the richness on that day and the next nine days. We considered a mean richness of zero an indicator of possible sensor malfunction, as it is very unlikely that zero birds would be detected over a period of ten days, even at a location with low diversity.
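For anyone wanting to reproduce the smoothing step, here is a minimal pandas sketch of the forward-looking ten-day mean. The column names (`site`, `date`, `richness`) are placeholders rather than our actual pipeline, and the handling of partial windows at the end of a deployment is an assumption.

```python
# Minimal sketch of the forward-looking 10-day smoothed richness.
# Assumes a DataFrame with placeholder columns 'site', 'date', 'richness'.
import pandas as pd

def flag_zero_richness(daily: pd.DataFrame, window: int = 10) -> pd.DataFrame:
    """Add a forward-looking mean richness and a possible-malfunction flag."""
    daily = daily.sort_values(["site", "date"]).copy()

    def forward_mean(s: pd.Series) -> pd.Series:
        # Mean of the current day and the next (window - 1) days.
        # min_periods=1 allows partial windows near the end of a deployment.
        return s[::-1].rolling(window, min_periods=1).mean()[::-1]

    daily["richness_smooth"] = (
        daily.groupby("site")["richness"].transform(forward_mean)
    )
    daily["possible_malfunction"] = daily["richness_smooth"] == 0
    return daily
```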
Entropy of the Coefficient of Variation of the Frequency Spectrum (ECV)
We calculated the Entropy of the Coefficient of Variation of the Frequency Spectrum (ECV; Towsey, 2018) for each day of deployment using the Python library `scikit-maad` (Ulloa et al., 2021). This metric is sensitive to acoustic signals, especially birds and insects (Conservation Metrics, unpublished data). We also smoothed this metric over ten days. We visually compared the long-term spectral average (LTSA), the smoothed mean species richness, and the ECV values, as well as listened to randomly sampled sound clips, and determined that an ECV below 0.01 indicated a possible sensor malfunction.
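For illustration, here is a sketch of a per-file ECV calculation with `scikit-maad`. The spectrogram window and overlap settings are placeholders, not necessarily our parameters, and you should check the `scikit-maad` docs for the exact signature of `features.spectral_entropy`, which (as I understand it) returns the Towsey entropy indices.

```python
# Sketch of a per-file ECV calculation with scikit-maad.
# Spectrogram settings below are illustrative placeholders.
from maad import sound, features

def file_ecv(wav_path: str) -> float:
    """Return the Entropy of the Coefficient of Variation (ECV) for one file."""
    s, fs = sound.load(wav_path)
    Sxx_power, tn, fn, ext = sound.spectrogram(s, fs, window="hann",
                                               nperseg=1024, noverlap=512)
    # spectral_entropy returns the Towsey entropy indices:
    # EAS, ECU, ECV, EPS, EPS_KURT, EPS_SKEW
    EAS, ECU, ECV, EPS, EPS_KURT, EPS_SKEW = features.spectral_entropy(Sxx_power, fn)
    return ECV
```

The daily ECV values can then be smoothed over ten days in the same way as the richness metric.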
Manual Review of Deployment Recordings
To confirm that these two metrics were valid indicators of recording quality, we also listened to a subset of sound files manually. We listened to the “deployment recordings” made by the field worker servicing the units (the survey protocol requests that, upon deployment, “audio notes” be made by the field worker indicating who is deploying the unit, along with the site, sensor, time, etc.). These deployment recordings existed for almost every deployment, and we qualitatively classified the sound quality of each file (good, moderate, poor). We found that the deployment recording quality matched well with the two metrics, supporting them as valid indicators of sound file quality.
Sound Inclusion Protocol
When a date at a site was found to have either a mean richness of zero or a mean ECV below 0.01, we removed that site-date and the remainder of that sensor’s deployment from further analysis. Of the 6,403 site-dates removed, the majority (5,440) were removed due to low values for both metrics. An additional 691 site-dates were removed due to a mean richness of zero (but an ECV above 0.01), and 272 site-dates were removed due to a mean ECV below 0.01 (but a richness above zero).
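The exclusion rule itself is simple to express in pandas. Here is a sketch assuming a DataFrame of smoothed daily metrics with placeholder columns `deployment`, `date`, `richness_smooth`, and `ecv_smooth`:

```python
# Sketch of the inclusion protocol: drop a failing site-date and every later
# date within the same deployment. Column names are placeholders.
import pandas as pd

def apply_inclusion_protocol(df: pd.DataFrame,
                             ecv_threshold: float = 0.01) -> pd.DataFrame:
    df = df.sort_values(["deployment", "date"]).copy()
    failed = (df["richness_smooth"] == 0) | (df["ecv_smooth"] < ecv_threshold)
    # Once a deployment has a failing date, flag that date and all later dates.
    drop = failed.groupby(df["deployment"]).cumsum() > 0
    return df[~drop]
```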
Below is a figure showing the extent of the AudioMoth failures.