While Facebook previewed its creative toolset, what wasn't as clear is what kinds of AR-related data will be available for marketers.
"We're making the camera the first augmented reality platform," Facebook CEO Mark Zuckerberg said at F8, referring to the camera on a phone. The comment was a jab at Snap, which went public after rebranding itself as "a camera company." To demonstrate the platform, Zuckerberg showed images of people taking selfies with the Nike Run Club app, with the app adding automatically-sized digital headbands and an overlay of runners' stats.
Beyond facial detection, Facebook is developing object recognition within images. When this work is complete, Facebook will be able to identify coffee mugs and offer imagery of virtual steam, or recognize a bottle of wine and trigger an overlay with its rating and description.
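The flow described above can be sketched as a simple pipeline: a vision model produces labeled detections from an image, and the app maps confident labels to AR overlays. This is an illustrative sketch only; the detector here is a stub, and the labels, thresholds, and overlay names are assumptions, not Facebook's actual implementation.

```python
# Illustrative sketch of an object-recognition-to-overlay pipeline.
# A real system would run a trained vision model over image pixels;
# here the detector is stubbed to return (label, confidence) pairs.

OVERLAYS = {
    "coffee_mug": "virtual_steam",
    "wine_bottle": "rating_and_description_card",
}

def detect_objects(image):
    """Stand-in for a vision model; returns (label, confidence) pairs."""
    return image.get("mock_detections", [])

def overlays_for(image, min_confidence=0.8):
    """Keep only confident detections that have a registered overlay."""
    return [
        OVERLAYS[label]
        for label, confidence in detect_objects(image)
        if confidence >= min_confidence and label in OVERLAYS
    ]

photo = {"mock_detections": [("coffee_mug", 0.93), ("table", 0.88)]}
print(overlays_for(photo))  # ['virtual_steam']
```

The confidence threshold matters in practice: triggering an overlay on a low-confidence detection risks showing steam over something that isn't a mug.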
Once visual identification becomes reliable, one can imagine fun, creative concepts building on the Nike example. For instance, when someone shares a photo of their Froot Loops, Facebook could recognize that the colorful rings are Kellogg's brand, and a toucan could fly into the frame. Or when someone takes a photo of their home, Facebook could recognize the sofa from Crate and Barrel, and link to the product page. Perhaps people will be able to customize which kinds of contextual information they receive; some people may request nutritional information to appear next to any food, while others may want track information for the songs playing in ads.
Marketers will benefit not just from the identification of objects within images but from intelligence about the objects that appear. Many elements of an image can be parsed; algorithms may be able to identify logos, product categories, objects, and scene backgrounds, to name just a few.
All of this is far more powerful than the analytics from mining textual information alone. When searching a platform such as Instagram, Twitter, or YouTube, what typically comes up are results related to keywords and hashtags describing content. This creates a "tree falling in the woods" challenge for social media. That is, when someone publicly posts an image but doesn't describe it, the content and context are mostly invisible to marketers. Parsing visual elements of images and videos changes that.
After tapping into AR's creative potential, the next phase for marketers will be to access analytics that offer more information than today's textual analysis can. Reports could surface findings such as these hypothetical examples:
- In Los Angeles, cars in photos are most commonly white, black, or silver, while in Boston, cars are most commonly black, gray, or red.
- XYZ Lite cola most commonly appears in photos and videos with breakfast foods and cupcakes, while XYZ's full-calorie soda is most commonly pictured with burgers and pizza.
- Chez Bagz handbags are twice as likely to be photographed at night compared to other brands in the category.
- Images and videos with O'Lucky beer typically show someone alone, while scenes with Der Beermeister tend to show two or more people together.
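Reports like those above would boil down to aggregation over detection metadata: once each photo carries machine-generated labels, co-occurrence counts are straightforward. A minimal sketch, using made-up records for the article's hypothetical XYZ brand (the record shape and labels are assumptions for illustration):

```python
from collections import Counter

# Hypothetical detection records: each analyzed photo yields the brand
# and the co-occurring object labels a vision model found in it.
records = [
    {"brand": "XYZ Lite", "labels": ["cupcake", "coffee"]},
    {"brand": "XYZ Lite", "labels": ["pancakes", "cupcake"]},
    {"brand": "XYZ Classic", "labels": ["burger", "fries"]},
]

def co_occurrence(records, brand):
    """Count which objects appear alongside a brand across photos."""
    counts = Counter()
    for record in records:
        if record["brand"] == brand:
            counts.update(record["labels"])
    return counts

print(co_occurrence(records, "XYZ Lite").most_common(1))  # [('cupcake', 2)]
```

The same counting approach extends to any parsed attribute, such as car color by city, time of day a handbag is photographed, or the number of people in frame with a beer.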
The applications will extend far beyond creative development and campaign ideation. They will create new avenues for media planning, influencer marketing, rights management, crisis management, customer service, product development and other disciplines.
The technology needed for object identification exists and is rapidly improving. Facebook, Google, Pinterest, Snapchat, and a slew of other companies are tackling this. The questions for platform providers like Facebook and others revolve around access and analytics. How much data will they provide directly? How much of their content will they offer to third-party analytics providers? Will they offer any aggregate data from privately shared content, or will public posts be the only accessible content?
None of this will be resolved right away. The one thing that should be clear after F8 is that with social media looking so much richer, the analytics will need to follow suit.