
Research: Using Artificial Intelligence to help hedgehogs

2nd February 2022

Dylan Carbone is a Master’s student at ZSL and a PTES intern. He has recently been studying whether camera trap surveys can be made easier by using artificial intelligence to automatically sort the footage.

Camera traps are increasingly popular, as they help us discover more about wildlife populations. They’re unobtrusive, can capture images day and night, and can be left out in the field for weeks. Since 2016, the London HogWatch project, run by the Zoological Society of London (ZSL), has conducted camera trap surveys in many green spaces across London.

Dylan strapping a camera trap to a tree.

However, surveys of this scale can collect millions of images of park visitors, including humans, dogs and squirrels, as well as species of interest such as hedgehogs, foxes and badgers. Such large volumes of images can take months to check manually: in a process known as ‘hand tagging’, someone must look at each image and record which animal is present. It’s a huge undertaking, which raises the question of whether image recognition algorithms could be used instead. If the process of classifying species can be automated, it will offer a much faster alternative to hand tagging. Dylan and his supervisor, Dr Robin Freeman of ZSL, have been investigating whether this is possible.

Image classification

Dylan has been helping develop the ‘Species Classifier’ method, which allows users to create image recognition algorithms (or species classifiers) to identify and filter common species from a directory of images. He trialled the Species Classifier method using images he’d collected in 2020 surveying Brockwell Park in south London, which he’d previously hand tagged. Dylan used the results from his analysis to determine which factors influence species classifier accuracy.
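To illustrate the workflow described above, here’s a minimal sketch of the filtering step: once every image has a predicted species label, the common species can be set aside so only the images of interest need checking by hand. All names here (the function, filenames and labels) are illustrative, not part of ZSL’s actual Species Classifier tool.

```python
# Hypothetical sketch of the filtering step, not ZSL's actual tool.
def filter_common_species(predictions, species_to_remove):
    """Split classified images into 'keep' and 'discard' piles.

    predictions: dict mapping image filename -> predicted species label
    species_to_remove: set of common labels, e.g. {"human", "dog"}
    """
    keep, discard = [], []
    for image, label in predictions.items():
        (discard if label in species_to_remove else keep).append(image)
    return keep, discard

# Example: three images already tagged by a classifier
preds = {"img_001.jpg": "human", "img_002.jpg": "hedgehog", "img_003.jpg": "dog"}
keep, discard = filter_common_species(preds, {"human", "dog"})
print(keep)     # species of interest, still needing a human check
print(discard)  # common species removed from manual review
```

The point of the sketch is simply that the classifier’s output reduces the manual workload to whatever is left in the ‘keep’ pile.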

The camera trap in action.


Dylan discovered that the species classifier was extremely accurate at identifying humans, the most common species recorded. People appeared in 78% of the survey images, and the species classifier identified them correctly 94% of the time. Being able to remove nearly three-quarters of all images automatically would greatly reduce the time needed to process the camera trap footage. Dylan suspected the high accuracy was due partly to people being larger than any other species appearing in the photos, and partly to the sheer abundance of humans in the training images. He also found that the classifier rarely confused images of dogs and foxes despite their similar size and build, something he’d been concerned about when starting the project.
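A quick back-of-envelope calculation shows what those two figures mean in combination, assuming the rates simply multiply: about 73% of all survey images could be set aside automatically. The million-image total here is an illustrative round number, not a figure from Dylan’s survey.

```python
# Back-of-envelope check of the figures above (rates assumed independent):
total = 1_000_000          # illustrative survey size, not a real figure
human_share = 0.78         # 78% of images contain people
human_recall = 0.94        # 94% of those correctly flagged as human

auto_removed = total * human_share * human_recall
fraction = human_share * human_recall
print(f"{auto_removed:,.0f} images removed automatically ({fraction:.0%})")
```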

Limits for the classifier

The species classifier method did show limitations too. Currently, it’s not possible to separate multiple species within one image when training the classifier. This undermined the accuracy of the dog and squirrel classifiers, which predicted dogs or squirrels in images where only humans were present. Dylan also discovered that anything reducing the visibility of an animal reduced classifier accuracy. For example, squirrels, being small and well camouflaged, were frequently missed by the squirrel classifier. Likewise, the fox classifier had difficulty distinguishing foxes, which are nocturnal, from the surrounding foliage in night-time images.
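The multi-species limitation can be sketched in a few lines. A single-label classifier must pick one species per image, so when a photo contains both a person and a dog, one of them is inevitably lost; a multi-label approach, which tags every species scoring above a threshold, avoids this. The scores below are invented for illustration and don’t come from Dylan’s classifier.

```python
# Hypothetical per-species scores for one image showing a human AND a dog
scores = {"human": 0.90, "dog": 0.62, "squirrel": 0.05, "fox": 0.03}

# Single-label: keep only the highest-scoring species (one tag per image)
single_label = max(scores, key=scores.get)
print(single_label)   # the dog in the image goes unrecorded

# Multi-label: tag every species scoring above a threshold
threshold = 0.5
multi_label = sorted(s for s, p in scores.items() if p >= threshold)
print(multi_label)    # both animals in the image are recorded
```

This is one reason images containing people plus another animal can drag down the accuracy of the other species’ classifiers, as described above.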

A muntjac and fox photographed by camera traps deployed in Claybury Park and Brockwell Park respectively.

Future implications

Image recognition can be a powerful tool for conservation. Dylan’s work has demonstrated its potential for species classification in urban camera trap surveys. He’s also worked to make the new image recognition tools more accessible to conservationists by building them in software that’s free to download and easy to use.

Dylan’s enthusiastic about the potential of camera traps for conservation, and is keen to explore the issue further, in collaboration with PTES and ZSL. We’ll keep you up to date on Dylan’s progress whilst he continues to further his education and career.