Motion-triggered cameras set in forests, grasslands, and mountain ranges have transformed wildlife conservation. Cheaper every year and small enough to mount on a tree trunk, they can capture tens of thousands of images over months of deployment, recording the private lives of animals that would vanish the moment a human researcher appeared.
The problem has always been the same: somebody has to look at all those images.
A project with 200 cameras might generate a million photographs in a season. Sorting through them manually — identifying species, counting individuals, discarding false triggers from blowing leaves — has traditionally required enormous teams of volunteers and months of work. Data arrived late. Conservation decisions were made on old information.
**Enter SpeciesNet**
Google has open-sourced **SpeciesNet**, an AI model trained on **65 million labelled wildlife images** from conservation partners around the world. It can automatically classify animals in camera trap photos across nearly **2,500 species categories** — from common deer to rare Amur leopards, from elephants to musky rat-kangaroos — at a speed no human team could match.
The model, originally built as part of Google's **Wildlife Insights** platform, was released as open-source software a year ago, allowing conservation teams anywhere in the world to download, adapt, and refine it for their own environments and species lists.
**What It's Already Done**
In the twelve months since its open-source release, SpeciesNet has been deployed by research groups on six continents:
- 🐆 **Colombia** — helping identify pumas and ocelots in camera traps set across complex montane forests
- 🦌 **Idaho** — tracking elk and black bear populations through seasonal migration corridors
- 🦅 **Australia** — spotting cassowaries and musky rat-kangaroos in dense tropical rainforest
- 🐘 **Tanzania's Serengeti** — supporting population monitoring of lions, elephants, and wildebeest
For each of these teams, the AI doesn't replace the biologist — it handles the sorting so the biologist can focus on the analysis. Instead of spending three months identifying species in photos, a researcher can now have preliminary classifications in hours, with flagged uncertain cases ready for human review.
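The triage workflow described above can be sketched in a few lines. This is a minimal, hypothetical illustration — the field names, scores, and the 0.80 review threshold are assumptions for the example, not part of the SpeciesNet API:

```python
# Hypothetical triage of classifier output: split predictions into
# auto-accepted results and a queue for human review.
# Field names and the threshold are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.80  # assumed cutoff below which a human checks the image

def triage(predictions):
    """Partition per-image predictions by classifier confidence."""
    accepted, needs_review = [], []
    for p in predictions:
        if p["score"] >= CONFIDENCE_THRESHOLD:
            accepted.append(p)
        else:
            needs_review.append(p)
    return accepted, needs_review

# Toy sample standing in for a model's per-image output.
sample = [
    {"image": "cam07/IMG_0012.jpg", "label": "puma", "score": 0.97},
    {"image": "cam07/IMG_0013.jpg", "label": "ocelot", "score": 0.55},
]
accepted, needs_review = triage(sample)
```

In practice the threshold would be tuned per project: a rare-species survey might route far more images to review than a common-species count.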
'Larger projects are now collecting thousands or even millions of wildlife images that could take decades to identify manually,' Google noted in its recent blog post. SpeciesNet makes that data useful in near-real time.
**Part of Google Earth AI**
SpeciesNet sits within **Google Earth AI**, a growing collection of geospatial tools, datasets, and AI models designed for what Google calls 'deep planetary intelligence.' The programme is oriented around empowering communities and nonprofits to address the planet's most pressing environmental challenges — from deforestation monitoring to coral reef mapping.
The model is free to use, free to adapt, and the underlying code is publicly available on GitHub. Any conservation organisation, university, or government agency with camera trap data can apply it.
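Once a team has run the model over its images, turning the output into species tallies is straightforward. The sketch below assumes a simplified predictions file — a JSON list of per-image records with `prediction` and `score` fields; the real SpeciesNet output schema may differ:

```python
import json
from collections import Counter

# Assumed, simplified predictions file: a JSON list of per-image records.
# The real SpeciesNet output format may differ; field names here are illustrative.
predictions_json = '''
[
  {"filepath": "cam01/0001.jpg", "prediction": "lion",  "score": 0.98},
  {"filepath": "cam01/0002.jpg", "prediction": "blank", "score": 0.91},
  {"filepath": "cam02/0001.jpg", "prediction": "lion",  "score": 0.88}
]
'''

records = json.loads(predictions_json)
# Tally detections per species, dropping empty (false-trigger) frames.
counts = Counter(r["prediction"] for r in records if r["prediction"] != "blank")
print(counts.most_common())  # [('lion', 2)]
```

A real pipeline would read the JSON from disk and feed the tallies into occupancy or population models, but the shape of the step is the same.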
**Why It Matters**
We are living through a biodiversity crisis. Roughly **one million species** face extinction threats. Effective conservation requires knowing which animals are where, in what numbers, and how those numbers change over time. Camera traps provide the raw data. AI like SpeciesNet makes that data usable at scale.
For decades, the bottleneck in wildlife conservation wasn't cameras or field teams. It was the sheer human cost of processing what those cameras recorded. That bottleneck is now dissolving.
Elephants photographed in Tanzania at 2 AM on a Wednesday. A jaguar crossing a river in Bolivia. A wolverine moving through high alpine terrain in Norway. All logged, all identified, all feeding into population models and conservation plans — automatically.
The animals are still out there. Now we can finally see them clearly enough to protect them. 🐾🌍
*Sources: Google Research Blog · Wildlife Insights · Google Earth AI (March 2026)*