
More Cool Uses of Computer Vision

In the past year, computer vision has gone from a modest place in the technology sector to a must-have component of any AI offering. From health and automotive to security and entertainment, most sectors are using computer vision in major ways. Marketing is no exception. The same innovations that are helping users sign into their phones faster or apply special effects inexpensively are also useful to brands and marketers. We’ve outlined some of the biggest trends in computer vision today, highlighting where and how marketers can make the most of them. 

Creativity

All those cameras taking over the world aren’t just capturing still images; video of all kinds is also proliferating. Computer vision is increasingly being used to make sense of the endless hours of footage generated 24/7 across the globe. Most recently, IBM’s Watson AI analyzed the expressions and gestures of the crowd during an estimated 20,000 shots at the 2019 Masters golf tournament to auto-generate daily highlight reels for each of the 90 players. Bay Area startup Minute has developed a tool that scours individual videos to dynamically generate five-second auto-preview (APV) trailers, those little looping thumbnails users see when scrolling over online clips, which increase clicks and views by 13 percent over thumbnails that feature only still images. The technology is designed to look for the most alluring footage, whether that’s a winning play or dramatic reactions from athletes and fans.
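How a tool decides which five seconds are most alluring is proprietary, but the selection step itself reduces to a familiar problem: score every moment of the video, then keep the best contiguous window. Below is a minimal sketch in Python; the per-second scores are placeholders for whatever learned engagement model a tool like Minute’s actually uses.

```python
# Sliding-window selection of a five-second auto-preview. The scores are
# placeholders for a model's per-second "allure" ratings (action, faces,
# crowd reaction, and so on).
def best_preview_window(per_second_scores, window=5):
    """Return (start, end) seconds of the highest-scoring contiguous window."""
    if len(per_second_scores) <= window:
        return 0, len(per_second_scores)
    best_start = 0
    best_sum = current = sum(per_second_scores[:window])
    for start in range(1, len(per_second_scores) - window + 1):
        current += per_second_scores[start + window - 1] - per_second_scores[start - 1]
        if current > best_sum:
            best_start, best_sum = start, current
    return best_start, best_start + window

scores = [0.1, 0.2, 0.9, 0.8, 0.7, 0.9, 0.3, 0.2, 0.1, 0.4]
print(best_preview_window(scores))  # (2, 7): seconds 2-7 score highest
```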

Sometimes computer vision is used in simpler ways to manage computing resources. In BMW’s M Virtual Experience for the new HTC VIVE Pro Eye VR headset, eye-tracking is used to sharpen the rendering on whichever part of the 360-degree scene a user is looking at, in real time, while reducing the computing power spent on everything else, a technique known as foveated rendering. The eye-tracking data is also used to analyze how users interact with BMW’s products, albeit virtual ones.
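The underlying logic is easy to express: render at full detail near the gaze point and progressively cheaper farther away. Here is a minimal sketch of that falloff; the radius and scale floor are made-up numbers, and real headsets implement this inside the GPU pipeline rather than in application code.

```python
# Foveated-rendering falloff: full resolution in the foveal region around the
# gaze point, reduced resolution in the periphery. Values are illustrative.
import math

def render_scale(tile_center, gaze, full_radius=200.0):
    """Resolution scale in (0, 1] for a screen tile, given gaze position in pixels."""
    dist = math.dist(tile_center, gaze)
    if dist <= full_radius:
        return 1.0                         # foveal region: render at full detail
    return max(0.25, full_radius / dist)   # periphery: cheaper, floored at 25%

gaze = (960, 540)  # where the eye tracker says the user is looking
for tile in [(960, 540), (1400, 540), (100, 100)]:
    print(tile, round(render_scale(tile, gaze), 2))  # 1.0, 0.45, 0.25
```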

The same technology that helps spot trends can also be used by designers to create future fashions, either in collaboration with humans or entirely without human intervention. Submit a photo of a dress, pair of pants, or shirt to IBM Watson’s Cognitive Prints search engine, for example, and it will return images of similar items, which can be filtered down by specifics such as type of print, year, or designer. The tool can also create original patterns based on user input (flowers, plaid, cats, for example), and those patterns can be modified further by humans, if desired. The idea is not only to help designers spot trends or mine historical fashions for inspiration, but also to make sure what they’re working on hasn’t been done before. Already, designers such as Shane and Falguni Peacock and Tommy Hilfiger have worked with these models to design collections. While Amazon is already pairing computer vision with human stylists to offer fashion advice via its Echo Look camera, it’s also working on technology that might one day serve up actual fashions on demand. At Lab126, Amazon’s Bay Area R&D center, generative adversarial networks (GANs), which pit two neural networks against each other, one generating images and the other judging whether they look real, are used to create original outfits based on whatever style is fed into the network for training.
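To make the adversarial setup concrete, here is a minimal GAN training loop in PyTorch. The tiny networks and random “swatch” batches are stand-ins, not Amazon’s actual architecture or data; the point is the two-step loop in which the discriminator learns to spot fakes while the generator learns to produce patterns that fool it.

```python
# Minimal GAN sketch (PyTorch). Illustrative only: tiny MLPs and random
# tensors stand in for a real pattern-generation architecture and dataset.
import torch
import torch.nn as nn

LATENT = 64          # random noise vector fed to the generator
PATTERN = 28 * 28    # flattened grayscale swatch as a stand-in for real images

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, PATTERN), nn.Tanh(),    # fake pattern with values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(PATTERN, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                     # real-vs-fake logit
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, PATTERN) * 2 - 1              # placeholder "real" swatches

    # 1) Discriminator: learn to label real swatches 1 and generated ones 0.
    fake = generator(torch.randn(32, LATENT)).detach()  # detach: don't update G here
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake), torch.zeros(32, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Generator: learn to make the discriminator call its fakes real.
    fake = generator(torch.randn(32, LATENT))
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```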

Movie making is also benefiting from advances in computer vision. That pricey Super Bowl spot may soon be a thing of the past thanks to startups such as Arraiy, which has developed technology that separates actors and objects from the backgrounds in footage so they can be placed into different shots. The process, an automated take on rotoscoping, is the result of neural networks trained on hours of special effects footage, and it replaces the need for green screens along with the many hours actors must spend in motion capture suits. Already, Arraiy’s technology has been used in music videos such as The Black Eyed Peas’ “Street Livin’,” in which band members’ moving mouths were dynamically superimposed onto pictures of people from the civil rights era.

Even graphic design is getting the AI treatment. Need a new logo, business card, or product label? Just feed a few keywords about your brand into Brandmark, and the AI tool will generate a logo, complete with icon, fonts, and color mix. The service provides all the assets a brand would need, from logo design files and business card layouts to Facebook covers and social profile icons.

What’s Cool For Marketers: Whether the scarce resource is time or computing power, computer vision is making both the creation and the consumption of content more efficient. Brands can optimize their content while simultaneously gathering new insights about consumers. From TV spots to swag to branding, computer vision and AI expand the creative capabilities of brands both established and emerging while also reducing costs.

Augmented Ads and Activations

As the AR space continues to heat up in mobile content and gaming, it also creates a fertile environment for advertising and marketing. Why rely solely on 2D banner ads and real-world billboards when the AR scenes you see through your phone’s viewfinder can be augmented with fun, contextual ads? Santa Monica-based Hype AR is an advertising network that uses object recognition to serve relevant 3D augmented ads inside AR apps. For example, shoppers might toggle through a set of AR chairs placed virtually in their living rooms on, say, the Wayfair app, and one of those chairs just might be the spiky Game of Thrones Iron Throne. Or they might see a spinning 3D globe advertising fare sales on United after finishing a Tetris-like AR game. Some brands are building their own AR experiences. In the Great Oreo Cookie Quest, users collect virtual Oreos that appear when phones are pointed at the right objects (a watch face or a window, for example). Users get hints via social media, email, and text, and the app uses object recognition to identify the real-world items that activate the virtual cookies, some of which unlock prizes like smartphones or promo codes.
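Under the hood, object-triggered ad serving amounts to a lookup from what the camera recognizes to which 3D asset to overlay. A conceptual sketch follows; the detection class, asset paths, and catalog are hypothetical stand-ins, not Hype AR’s actual API.

```python
# Object-triggered AR ad serving, conceptually: map recognized objects in the
# camera frame to 3D ad assets. All names here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "chair", "globe", "watch"
    confidence: float

# Hypothetical campaign catalog: recognized object -> 3D ad asset to render.
AD_CATALOG = {
    "chair": "assets/iron_throne.glb",   # furniture context -> themed chair ad
    "globe": "assets/united_fares.glb",  # travel context -> airline fare ad
}

def pick_ar_ads(detections, min_confidence=0.7):
    """Return the 3D assets to overlay for sufficiently confident detections."""
    return [AD_CATALOG[d.label] for d in detections
            if d.label in AD_CATALOG and d.confidence >= min_confidence]

# In practice, detections would come from an on-device object recognition model.
frame = [Detection("chair", 0.92), Detection("lamp", 0.88)]
print(pick_ar_ads(frame))  # ['assets/iron_throne.glb']
```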

Augmented reality is also being implemented in experiential marketing scenarios. At its booth at the IFA consumer technology show in Berlin last year, Samsung collaborated with the creators of Family Guy and ad agency BBH to create ‘Doorways,’ an AR experience that generated an animated version of the Griffins’ home, complete with virtual versions of the Korean tech giant’s smart home appliances. Users could interact directly with the AR versions of Samsung’s devices, sending, say, a song playing on Spotify on the smart refrigerator in the kitchen to the smart TV in the living room.

Snapchat and Facebook offer some of the most robust AR tools for users and brands alike. Snapchat’s new “Landmarker” tool lets developers create animated filters that transform real-world points of interest: confetti shooting out of the Capitol building in Washington, DC, for example, or a dragon landing atop New York’s Flatiron Building and covering it with ice to promote the final season of Game of Thrones. Similarly, Facebook’s Spark AR platform was used to augment a massive wall billboard for the Dallas Mavericks in which a virtual Dennis Smith Jr. jumps out of the ad and slam-dunks a basketball. And given Facebook’s new emphasis on its messaging platforms, it’s no surprise that Nike developed an AR reveal experience in which special edition sneakers are unveiled to users who unlock it by collecting special emojis on Facebook Messenger. The limited edition Kyrie 4 “Red Carpet” kicks sold out in an hour.

What’s Cool For Marketers: AR experiences have improved to the point where they are easier to develop, reach wider audiences, and deliver serious ROI, from buzz to sales.

Emotion, Gesture and Action Recognition

Interactive content can be hit or miss, but the success of Black Mirror’s “Bandersnatch” episode, not to mention YouTube’s subsequent foray into the genre, demonstrates that the appetite for branching storylines is alive and well. Now, emotion recognition has the potential to make the experience more seamless. Italian startup MorphCast has developed a suite of tools that detect everything from age and gender to emotion and head position in order to change content on the fly. While the technology uses computer or mobile phone cameras, all processing happens in the browser and no footage is ever sent to the cloud, making it fully GDPR compliant. The company is aiming its Studio app at creators in broadcast, media, entertainment, and advertising, where it offers potential not only to improve engagement but also to supercharge interactive storytelling.
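The content-switching side of this is straightforward once an emotion label is available: a branch table maps the viewer’s dominant emotion to the next scene. The sketch below is conceptual and written in Python; MorphCast’s actual SDK is a JavaScript library that runs its models in the browser, which is why frames never have to leave the device.

```python
# Emotion-adaptive branching, conceptually: pick the next scene based on the
# dominant emotion detected on-device. The classifier here is a placeholder.
import random

BRANCHES = {
    "joy": "scene_upbeat.mp4",
    "surprise": "scene_twist.mp4",
    "boredom": "scene_action.mp4",   # viewer drifting? cut to the action
}
DEFAULT_SCENE = "scene_neutral.mp4"

def detect_dominant_emotion(frame):
    """Placeholder for an on-device emotion classifier; returns a label."""
    return random.choice(["joy", "surprise", "boredom", "neutral"])

def next_scene(frame):
    return BRANCHES.get(detect_dominant_emotion(frame), DEFAULT_SCENE)

print(next_scene(frame=None))
```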

Meanwhile, digital avatars, from in-store assistants to storytelling characters, are increasingly equipped with eye-tracking, emotion recognition, and gesture recognition to react to users and customers in real time. In the interactive experience Whispers in the Night, a “virtual being” named Lucy uses natural language processing and computer vision to react in real time to human users, learning about each person over time in order to deliver more compelling interactions. Think of it as a narrative version of Google Assistant or Alexa. On a more practical, point-of-sale level, TwentyBN’s Millie AI is an interactive digital sales assistant that uses computer vision to see what shoppers are looking at or trying on, then showers them with compliments, when appropriate, much as a human sales assistant would. Residing on kiosks with portrait screens to amp up the life-size realism, Millie can also use her gesture, action, and language recognition to serve as a store greeter, dance instructor, or in any other role that involves seeing, evaluating, and reacting. What’s more, her learning is continuous: the videos she collects daily are used to retrain the models that power her.

Emotion recognition is also helpful in an analytics context. French startup Datakalab has developed tools that use facial expression analysis and eye-tracking to evaluate the emotional reactions of people clicking through retail websites. These data points are then blended with traditional behavioral metrics such as page views, bounce rates, and click rates to deliver an “Emotional Product Ranking” for specific SKUs, as well as to surface success and pain points along the customer journey. Testing is opt-in and GDPR compliant.
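Datakalab doesn’t publish its formula, but blending emotion scores with behavioral metrics into a per-SKU ranking can be pictured as a weighted score. In the hypothetical sketch below, the field names and weights are invented for illustration; a real ranking would be calibrated against outcomes like sales.

```python
# Hypothetical "emotional product ranking": blend emotion and behavior metrics
# into one score per SKU. Fields and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class SkuStats:
    sku: str
    positive_emotion: float  # 0..1, share of testers showing positive affect
    click_rate: float        # 0..1
    bounce_rate: float       # 0..1, lower is better

WEIGHTS = {"positive_emotion": 0.5, "click_rate": 0.3, "bounce_rate": -0.2}

def score(s: SkuStats) -> float:
    return (WEIGHTS["positive_emotion"] * s.positive_emotion
            + WEIGHTS["click_rate"] * s.click_rate
            + WEIGHTS["bounce_rate"] * s.bounce_rate)

stats = [
    SkuStats("DRESS-001", positive_emotion=0.8, click_rate=0.12, bounce_rate=0.4),
    SkuStats("SHOE-042", positive_emotion=0.5, click_rate=0.30, bounce_rate=0.2),
]
for s in sorted(stats, key=score, reverse=True):
    print(s.sku, round(score(s), 3))  # DRESS-001 0.356, SHOE-042 0.3
```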

What’s Cool For Marketers: Computer vision’s ability to track visual behavior in an anonymized way not only makes interactions with virtual assistants and avatars more immersive and realistic, but also adds new analytics capabilities and categories for brands looking to optimize their content, websites, and ads.

Illustrations by Turgay Mutlay