If you feel like some arcane new tech term pops up every day just to foil and befuddle you, you're in good company. We've hit an era when technology and culture are so intertwined, and changing so rapidly, that it seems impossible to keep up. And one of the fastest-moving, most influential—and most intriguing—categories is image recognition.
A subset of artificial intelligence (AI), image recognition is already in widespread use in a wide array of settings, from the high-tech to the mundane. And it will only become more important in the coming years — the worldwide market for image recognition is expected to grow to $29.98 billion USD by 2020 — as our world gets ever smarter and more connected. Here's a look at what goes into image recognition, how we use it, and what to expect in the near future.
What does image recognition mean?
There are a number of different types of artificial intelligence, and one major flavor of AI is called Computer Vision. It refers to the ability of computers to acquire, process, and analyze data coming primarily from visual sources—the ability to track or predict movement, for instance—but it can also include data from heat sensors and other similar sources.
You might call image recognition a subset of computer vision, in that it refers to the ability of a computer to "see": to decipher and understand the information fed to it from an image, be it a still, video, graphic, or even a live feed. This is no small feat. If you've ever scratched your head at a bizarre spelling or grammar correction that Google, Siri, or Microsoft Word suggests, then you have an idea of how tough it is for computers to understand the rules of written language, even though those rules are predictable and consistent. It gets even more complicated when computers tackle the visual.
Consider that a photo, image, or video is infinitely more complex and open-ended than the words that make up a sentence. Think of a newborn dazzled by light and color, and you begin to grasp the experience of a computer that has no predefined way of understanding what all the various data in an image represent. In fact, to a computer, a photo is simply a bunch of tiny colored dots arranged in a pattern (pixels, to be more precise). In order to make sense of what all those dots mean, the computer first needs to understand that patterns make up things called objects, that objects exist in space and have dimensions, and so on. That's a pretty steep learning curve. (In fact, as humans we use about half our brain power to process visual information!)
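To make the "bunch of tiny colored dots" idea concrete, here is a minimal sketch in Python. The tiny 3x3 "image" below is made up for illustration; the point is simply that, to a computer, a picture is nothing but a grid of numbers.

```python
# A tiny 3x3 grayscale "image" represented as nested lists of pixel
# intensities (0 = black, 255 = white). This is all a computer sees:
# numbers in a grid, with no built-in notion of objects, light, or depth.
image = [
    [0,   50, 255],
    [30, 200, 120],
    [10,  90,  60],
]

# The computer can do arithmetic on the grid, e.g. find the brightest pixel,
# but "understanding" what the image shows requires far more work.
brightest = max(value for row in image for value in row)
print(brightest)  # → 255
```

Everything image recognition does, from face detection to self-driving cars, ultimately starts from grids of numbers like this one (with three such grids, for red, green, and blue, in a color photo).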
How do you teach computers to see?
In order to teach computers to process visual data, you have to teach them to recognize patterns. In the early days of computing, researchers created a number of ways to detect numbers and letters, dubbed optical character recognition—this is the technology that allows scanned books and papers to be converted into usable text on a computer, and that lets today's smartphones do the same from a photo.
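The core idea behind early character recognition can be sketched in a few lines: compare a scanned glyph, pixel by pixel, against known character templates. Real OCR engines are far more sophisticated; the 5x3 bitmaps below are invented purely for illustration.

```python
# A toy illustration of template-based character recognition.
# Each character is a 5-row, 3-column bitmap flattened to a string
# ("#" = ink, "." = blank). These templates are made up for this sketch.
TEMPLATES = {
    "1": ".#." "##." ".#." ".#." "###",
    "7": "###" "..#" ".#." ".#." ".#.",
}

def recognize(glyph):
    """Return the character whose template exactly matches the glyph."""
    for char, template in TEMPLATES.items():
        if glyph == template:
            return char
    return "?"  # unrecognized

scanned = ".#." "##." ".#." ".#." "###"
print(recognize(scanned))  # → 1
```

Exact matching like this is brittle: a single smudged pixel defeats it. That brittleness is exactly why the more flexible pattern-matching techniques described next were such a breakthrough.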
Other types of complex programming that have emerged over the last half century have allowed computers to learn that some patterns of pixels actually define the edges of an object, that there is such a thing as dimension (in fact, a few of them!), and that patches of color might actually belong to the same object. This process of refinement has been supercharged over the past decade thanks to extremely fast but cheap computers, powerful graphics processors, and the internet, among other technologies. For instance, through various techniques dubbed "machine learning," computers—or rather, giant clusters of them connected together—can now be fed thousands and thousands of images, even millions. Within minutes to hours they are able to process the images, find patterns, match the various patterns to each other, and output a meaningful analysis—it gets complex fast, but a silly example might be finding all the images that show people holding cats while on a boat.
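A minimal sketch of the flexible pattern matching at the heart of machine learning is nearest-neighbor classification: label a new "image" by finding the labeled example its pixels most closely resemble. The 3x3 templates and test images below are invented for illustration, but unlike exact template matching, this approach tolerates noise.

```python
# Classify a tiny 3x3 binary "image" (flattened to 9 values, 1 = ink)
# by finding the labeled template with the smallest pixel difference.
# This is nearest-neighbor matching, a simple form of machine learning.
examples = {
    "vertical":   [0, 1, 0,  0, 1, 0,  0, 1, 0],
    "horizontal": [0, 0, 0,  1, 1, 1,  0, 0, 0],
}

def classify(pixels):
    """Return the label of the template closest to the given pixels."""
    def distance(template):
        # Sum of squared pixel-by-pixel differences.
        return sum((p - t) ** 2 for p, t in zip(pixels, template))
    return min(examples, key=lambda label: distance(examples[label]))

# A vertical bar with one noisy extra pixel still matches "vertical".
print(classify([0, 1, 0,  0, 1, 0,  1, 1, 0]))  # → vertical
```

Scale this idea up from two 9-pixel templates to millions of labeled photos, and from simple pixel distances to learned features computed on clusters of machines, and you arrive at the modern systems that can find all the people holding cats on boats.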
How will image recognition tech be a part of my life?
Image recognition isn't just for sorting large batches of photos in search of funny cats. It's the underlying technology behind a ton of existing software and services. For instance, you've almost certainly benefited from your phone's ability to detect faces to take better pictures. Or from Facebook's ability to automatically recognize your friends and family. Or from Google being able to search for something random, like pictures of hockey players wearing flesh-colored skates. Behind those seemingly simple processes lie vast computing power housed in server farms, massive storehouses of billions of photos, and an awful lot of clever engineering.
More cutting-edge uses are everywhere, though. Cars with "self-driving" modes like those from Tesla are equipped with cameras that analyze their surroundings and make sure they don't bump into other cars, people, walls, or deer, for that matter. Consumer-level drones now have cameras that not only keep them from crashing into trees and buildings, but also from getting lost when a GPS signal is weak. And the medical field uses image recognition for a host of applications, like analyzing medical imaging such as mammograms to diagnose patients more accurately.
On a business level, image recognition is used for everything from serving dynamically customized, contextual ads inside relevant images, to analyzing social media shares, to calculating the true value of sports sponsorships across multiple platforms.
So what’s next?
That's the answer everyone wants to know. As computing moves away from text-based interactions toward voice-based ones, and as images become an increasingly viable way to communicate, to search, and to interact with computers, we're turning a major corner in the realm of what's possible. Without peering too far down the future-telescope, it's easy to pick a few close-to-home developments that are all but certain.
For instance, the idea of completely autonomous cars is no longer the stuff of dreams but coming soon. In fact, taxi upstart Uber has publicly stated its intent to convert its fleet to self-driving cars as quickly as possible, Ford believes it will have its own car ready by 2021, and Google has mature plans as well. A recent report from BI Intelligence predicts that 10 million self-driving cars will be on roads by 2020, less than five years away!
Similarly, we could see highway culture become dominated by fleets of trucks that drive 24/7 without fear of driver fatigue. What's more, within a few decades, commuting will be a totally different experience, with drivers becoming simply passengers.
Cities large and small will likewise be transformed by the ability to manage resources, traffic, pollution, even road maintenance, thanks to image-recognition technologies. Think of a smart grid of traffic lights that communicate directly with cars—which also talk with each other—and therefore manage traffic with perfect efficiency, so that there are never accidents or gridlock, and deliveries to stores and homes are always on time.
Or Amazon’s vaunted plan to use a fleet of drones to cut out delivery trucks altogether and have your orders zip through the air in minutes, not days, on drones that use image recognition technology to prevent accidents.
Surgeons may eventually be replaced by robotic ones so sensitive and precise that they're able to operate quickly, around the clock, in any condition, anywhere on earth. They might identify individual cancer cells mid-surgery, or create perfect sutures and vastly safer outcomes, with few chances of complications or infections. Or think of diagnostic tools that detect cancers and other conditions far in advance, making them far more treatable. Google's computer vision AI was able to detect false positives and negatives in diagnosing diabetic retinopathy 90 percent of the time, versus 80 percent for human doctors.
More than 128,000 medical images were used to train its machines. If it sounds like the fantastic stuff of sci-fi, you're only partly right. It's the stuff of decades' worth of science and research finally coming to fruition. Thanks to image recognition, we're finally seeing things we never dreamed were possible.
See how Image Recognition is used in In-Image advertising and social media intelligence.
For the latest in artificial intelligence, computer vision and image recognition, visit www.thevisionary.com