Video recommendation and discovery are hot topics for video publishers looking to drive higher returns on video lifetime value. Attracting a consumer to watch more videos isn't simple in the attention-deficit society we live in. Gone in 90 seconds, according to Netflix: your audience is one swipe away from another experience, and fluid media shifting is simply life now. However, video publishers are finding ways to keep consumers engaged using higher video intelligence. Want to make an impact on your consumer experience? Then make it simple to discover and surface relevant video content your audience finds interesting. In this post we'll explore the intelligence behind visual recommendation and what's being leveraged to increase video lifetime value. VIDEO: Chase McMichael gives a talk at Intel on how to process massive amounts of video on a budget and why visual computing attracts more attention to video.
Don’t be fooled
Enough with the buzzwords around Artificial Intelligence, Machine Learning, and Deep Learning. What problem are you solving? Is there a learning system and an automated method that creates a better solution? Last year we posted on Search Engine Journal How Deep Learning Powers Video SEO, describing the advantages of video image labeling. Since then, Google announced at Next17 a full video annotation platform called Video Intelligence. (InfiniGraph was honored to be selected as a Google Video Intelligence beta tester.) Google has huge cloud systems running on chips designed for deep learning (TPUs) to pull this off, but that massive video processing capability comes at a cost. We're still in the very early days of video analysis. The major challenge with Google's cloud offering is that you must push all your video over to Google Cloud, and second, you must let your data be used as part of their training set. This is problematic on many levels, both for content rights and because Google ends up smarter about your video than you are. How do you achieve similar results without all this overhead?
Not all data is created equal
All video publishers have standard metadata attached to their videos when loading them into a CMS. Behavior tracking is very powerful if you have the consumer's consent, but many consumers don't want to be tracked if they are not logged into your property. Complicating matters, many homes have communal devices. The mobile device (iPhone, etc.) is very personal, and tracking is possible, but Apple and Google have taken steps to block third-party tracking. First-party tracking will remain in place; however, a standard has yet to be fully adopted. Gone are the good old days of "dropping a cookie". Creating a truly personalized experience is ideal, but it depends on the consumer authorizing it and receiving value in exchange for giving up privacy. OTT apps provide the best path to robust personalization. We have learned a great deal from innovative companies like Netflix, Hulu, YouTube, and Amazon, which have all come a long way in their approach to advanced video discovery. So how do you leverage these innovations on a budget?
See how “Netflix knows it has just 90 seconds to convince the user it has something for them to watch before they abandon the service and move on to something else”.
Video recommendation platforms
Not all video recommendation platforms are created equal. The main challenge is that virtually every mousetrap uses the same metadata, and behavior tracking alone does not create meaningful discovery of new content. The heavy reliance on what people have already played assumes popular videos must be what everyone wants to watch. Right? Popularity is not a barometer of relevance, and the vast majority of your video content isn't seen by the majority of your audience. Good video content that lacks engagement will not be surfaced at all. This is your most expensive content. What's the most expensive table in a restaurant? The empty table.
To exacerbate the problem, trending videos are a self-fulfilling prophecy: trending is artificially amplified and doesn't indicate relevance. Surfacing the right video at the right time can make all the difference in whether people stay or go. Which videos got played, time on video, and completion rate indicate watchability and captured interest. There is so much more to a video than raw insights. Whether someone watched a video is important, but understanding the why, in the context of other videos with similar content, is intelligence. YouTube has recommended videos for a long time but only recently started leveraging AI to build intelligent personalized video playlists, as have Netflix, Hulu, and Amazon to some extent. There are a few third-party platforms in the video recommendation space, but very few have tapped into visual insights to achieve higher intelligence. Companies like Iris.tv, an early entrant in video recommendation, and later entrants like Prizma and BABATOR all have unique metadata-tracking algorithms designed to entice more people to stay longer, mostly via desktop auto-play video. Now, with rising viewability demands and the requirement to verify viewability, more advanced methods of assuring people are actually watching the content are required. Hence, a new way of thinking about video recommendation was mandated.
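As a rough illustration of combining plays, time on video, and completion into a single watchability signal, here is a minimal sketch. The weights and the popularity cap are hypothetical choices for the example, not a published formula:

```python
def watchability_score(plays, avg_watch_seconds, duration_seconds, completions,
                       w_play=0.2, w_depth=0.5, w_complete=0.3):
    """Blend raw play metrics into a single 0-1 watchability score.

    Weights are illustrative; a real system would tune them against
    engagement outcomes.
    """
    if plays == 0 or duration_seconds == 0:
        return 0.0
    depth = min(avg_watch_seconds / duration_seconds, 1.0)   # how far viewers get
    completion_rate = min(completions / plays, 1.0)          # fraction who finish
    play_signal = min(plays / 1000.0, 1.0)                   # crude popularity cap
    return w_play * play_signal + w_depth * depth + w_complete * completion_rate

# Example: 400 plays, viewers average 72s of a 120s clip, 150 completions
score = watchability_score(400, 72, 120, 150)
print(round(score, 4))
```

Weighting depth and completion above raw plays reflects the point made here: popularity alone is not a barometer of relevance.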
An Intelligent Visual Approach
A definitive differentiation is using the images and video segments within the video itself to build relevance. Consumers know what they like when they see it. Understanding this visual ignition process was key to unlocking the potential of visual recommendation. A visual psychographic map can now be created based on video consumption. How do you really know what people would like to play if you don't know much about the video content? Understanding the video's content and context is the next stage in intelligent video recommendation and personalized discovery. Dissecting the video's content and context opens up a new data set that was otherwise trapped behind a play button.
3 Ways Visual Video Recommendation Drives Video Lifetime Value
1. Visual recommendation – Visual information within video creates higher visual affinity to amplify discovery. Content likeness beyond mere metadata opens up more video content to select from. Mapping what people watch is based on past observation; predicting what people will watch requires understanding video context.
2. Video scoring – A much deeper approach to video had to be invented, where the video is scored based on visual attribution inside the video and human behavior on those visuals. This scoring lets the content speak for itself and enables ordering playlists relative to what was watched.
3. Personalized selection – Enhancing discovery requires greater intelligence and context about what content is being consumed. A video publisher's environment, like OTT or a mobile app, can enable high levels of personalization. For consumers on the web, a more general approach that clusters consumers into content-preference groups powers better results while honoring privacy.
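To make the visual-recommendation idea above concrete, here is a minimal sketch. Assume each video has been reduced to a visual embedding vector (for example, averaged frame-label scores from an annotation pipeline); cosine similarity can then rank the catalog against what a viewer just watched. The video names and vectors below are made up for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(watched_vec, catalog, top_n=2):
    """Rank catalog videos by visual similarity to the last watched video."""
    ranked = sorted(catalog.items(),
                    key=lambda kv: cosine(watched_vec, kv[1]),
                    reverse=True)
    return [video_id for video_id, _ in ranked[:top_n]]

# Hypothetical 4-dim visual embeddings (e.g. beach, city, food, sports)
catalog = {
    "surf_trip":   [0.9, 0.1, 0.0, 0.3],
    "street_food": [0.1, 0.6, 0.9, 0.0],
    "marathon":    [0.2, 0.5, 0.0, 0.9],
}
just_watched = [0.8, 0.2, 0.1, 0.4]  # resembles another beach/sports video

print(recommend(just_watched, catalog))  # → ['surf_trip', 'marathon']
```

Because the ranking is driven by what is inside the videos rather than play counts, content that has never trended can still surface when it visually matches a viewer's taste.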
The Future is Amazing for Video Discovery
Google, Amazon, Facebook, and Apple are going head to head with deep video analysis in the cloud. Large-scale video publishers have a grand opportunity to embrace this new technology wave and stay relevant while creating a visually engaging consumer experience. Video annotation has a very bright future thanks to deep learning; we have come a long way from single-image labeling via ImageNet. A major challenge going forward is the speed at which video publishers must adapt if they wish to stay competitive. With advanced technologies designed for video publishers, there is hope. Take advantage of this movement and increase your video lifetime value.
Top image from post melding real life with recommendations.