Computers programmed with algorithms intended to mimic neural connections "learned" to recognise cats after being shown a sampling of YouTube videos, Google fellow Jeff Dean and visiting faculty member Andrew Ng said in a blog post.
"Our hypothesis was that it would learn to recognise common objects in those videos," the researchers said.
"Indeed, to our amusement, one of our artificial neurons learned to respond strongly to pictures of... cats," they continued. "Remember that this network had never been told what a cat was, nor was it given even a single image labeled as a cat."
The computer, essentially, discovered for itself what a cat looked like, according to Dean and Ng. The computations were spread across an "artificial neural network" of 16,000 processors and a billion connections in Google data centers.
The small-scale "newborn brain" was shown YouTube images for a week to see what it would learn. "It 'discovered' what a cat looked like by itself from only unlabeled YouTube stills," the researchers said. "That's what we mean by self-taught learning."
Google researchers are building a larger model and are working on ways to apply the artificial neural network approach to improve technology for speech recognition and natural language modeling, according to Dean and Ng.
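The idea behind "self-taught learning" can be sketched in miniature. The toy autoencoder below (not Google's system, which spanned 16,000 processors, and with made-up sizes and data) learns feature detectors from unlabeled inputs purely by trying to reconstruct them, with no labels ever supplied:

```python
# Minimal sketch of unsupervised ("self-taught") feature learning:
# a tiny autoencoder reduces reconstruction error on unlabeled data,
# so its hidden units come to act as learned feature detectors.
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled "images": 200 samples of 64 pixels each (e.g. 8x8 patches).
X = rng.random((200, 64))

n_hidden = 16  # hidden "artificial neurons" that will learn features
W1 = rng.normal(0, 0.1, (64, n_hidden))   # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, 64))   # decoder weights

losses = []
lr = 0.01
for _ in range(200):
    H = np.tanh(X @ W1)          # encode: hidden feature activations
    X_hat = H @ W2               # decode: reconstruct the input
    err = X_hat - X
    losses.append((err ** 2).mean())
    # Gradient descent on the reconstruction error (backpropagation).
    dW2 = H.T @ err / len(X)
    dH = err @ W2.T * (1 - H ** 2)   # tanh derivative
    dW1 = X.T @ dH / len(X)
    W2 -= lr * dW2
    W1 -= lr * dW1

# Reconstruction error falls as the hidden units learn useful features,
# without any label ever being provided.
print(losses[0] > losses[-1])
```

Scaled up by many orders of magnitude, the same principle lets individual neurons specialise on recurring patterns in the data, such as faces or cats.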
"Someday this could make the tools you use every day work better, faster, and smarter," they said.
Dean and Ng conceded that there is a long road ahead: the network's billion connections pale beside the roughly 100 trillion in an adult human brain.
The Google X Lab, headed by company co-founder Sergey Brin, is known for innovations such as a self-driving car and "Terminator"-film-style glasses that overlay Internet information on what the wearer sees.