How Deep Learning Video Sequence Drives Profits

Beyond the deep learning hype, digital video sequencing (clipping) powered by machine learning is driving higher profits. Video publishers use various images (thumbnails, or poster images) to attract readers and get them to watch more video. These thumbnail images are critical: the visual information has a great impact on video performance, and in many cases the lead visual matters more than the headline. More views equal more revenue; it's that simple. Deep learning is having a significant impact on everything from video visual search to video optimization. Here we explore video sequencing and the power of deep learning.

Having great content is required, but if your audience isn't watching the video, you're losing money. Understanding which images resonate with your audience and produce higher watch rates is exactly what KRAKEN does. That's right: show the right image, sequence, or clip to your consumers and you'll increase the number of videos played. This is proven, measurable behavior, as outlined in our case studies. An image really is worth a thousand words.

Below are live examples of KRAKEN in action, each powered by a machine learning selection process. We describe the use cases for apex image, image rotation, and animation clip.

Animation Clip:

KRAKEN "clips" the video at the point of apex. Sequences are assembled into a full animation of one or more scenes. Boost rates are comparable to those from image rotation and can be much higher depending on the content type.

  • PROS:
    • Consumer-created clipping points within the video
    • Conveys more visual information than a static image
    • Highlights action scenes
    • Great for mobile and OTT previews
  • CONS:
    • More than one on a page can cause distraction
    • Overuse can turn off consumers
    • Too many on a page can slow page loading (due to file size)
    • Slow mobile LTE connections can produce choppy images instead of smooth video

Image Rotation:

Image rotation tells a more complete visual story than a static image, giving consumers a better idea of the video's content. KRAKEN determines the four most engaging images and cycles through them. We are seeing mobile video boost rates above 50%.

  • PROS:
    • Smooth visual transitions
    • Consumer-selected top images
    • Builds a visual story rather than a single image, engaging more consumers
    • Ideal for mobile and OTT
    • Less bandwidth-intensive (mobile LTE)
  • CONS:
    • Similar to animated clips, publishers should limit multiple placements on a single page
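The selection-and-rotation loop described above can be sketched in a few lines of Python. This is only an illustration: the frame ids, the counts, and the play-rate metric are invented, not KRAKEN's actual API.

```python
def top_images(events, k=4):
    """Rank candidate thumbnails by measured play rate and keep the top k.

    `events` maps an image id to (plays, impressions); both are
    hypothetical stand-ins for live engagement data.
    """
    rates = {
        img: plays / impressions
        for img, (plays, impressions) in events.items()
        if impressions > 0
    }
    return sorted(rates, key=rates.get, reverse=True)[:k]

def rotation_cycle(images, step):
    """Return the image to show at a given rotation step (round-robin)."""
    return images[step % len(images)]

# Invented engagement counts per candidate frame: (plays, impressions).
stats = {
    "frame_012": (90, 1000),   # 9.0% play rate
    "frame_144": (150, 1000),  # 15.0%
    "frame_260": (40, 1000),   # 4.0%
    "frame_381": (120, 1000),  # 12.0%
    "frame_499": (60, 1000),   # 6.0%
}
best = top_images(stats)
print(best[0])  # frame_144, the highest play rate
```

In practice the counts would stream in from live impressions, and the rotation order itself would be tuned, but the core idea is exactly this: let measured play rates, not an editor's guess, pick the four images that cycle.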

Apex Image:

KRAKEN always finds the best lead image for any placement. This apex image alone produces high play rates, especially in click-to-launch placements. Average boost rates are between 20% and 30%.

  • PROS:
    • Audience-chosen top image for each placement
    • Can be placed everywhere (including social media)
    • Ideal for desktop
    • Good with mobile and OTT
  • CONS:
    • Static thumbnails have limited visual information
    • Once the apex is found, the image will never be substituted

Below are live KRAKEN animation clip examples. All three animations start with the audience choosing the apex image. KRAKEN then identifies clipping points via deep learning and uses machine learning to adjust to the optimal clipping sequence.

HitFix video: deep learning video clipping to action, powered by machine learning

HitFix video: deep learning video clipping to action, with machine learning adjusting in real time

Video players have transitioned to HTML5, and mobile consumption of video is the fastest-growing medium. Broadcasters that embrace advanced technologies that adapt to consumer preference will achieve higher returns while creating a better consumer experience. The value proposition is simple: if you boost your video performance by 30% (for a video publisher doing 30 million video plays per month), KRAKEN will drive an additional $2.2 million in revenue (see the KRAKEN revenue calculator). This happens with existing video inventory and without additional headcount. KRAKEN creates a win-win scenario and will improve its performance as more insights are used to bring prediction and recommendation to consumers, further increasing video engagement.
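The arithmetic behind that value proposition can be reproduced under an assumed effective CPM. The post does not state which CPM the KRAKEN revenue calculator uses; $20 here is purely an illustrative figure.

```python
def incremental_revenue(monthly_plays, boost, cpm, months=12):
    """Extra annual revenue from boosted plays at a given
    effective CPM (dollars per 1,000 plays)."""
    extra_plays = monthly_plays * boost
    return extra_plays * (cpm / 1000.0) * months

# 30M plays/month boosted 30% at an assumed $20 effective CPM:
# 9M extra plays/month * $0.02/play * 12 months = $2.16M/year,
# roughly the $2.2 million figure quoted in the post.
annual = incremental_revenue(30_000_000, 0.30, 20.0)
print(f"${annual:,.0f}")  # $2,160,000
```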

AdTech? Think “User Tech” For a Better Video Experience

How and why did ad tech become a bad word? Ad tech has become associated with, and blamed for, everything from damaging the user experience (slow load rates) to creating a series of tolls that the advertiser pays for, ultimately at the expense of publishers' margins. Global warming has a better reputation. Even the VCs are investing more in marketing tech than in ad tech.

The Lumascape is denser than ever and, even with consolidation, it will take years before there is clarity. And the newest threats to the ad ecosystem, like viewability, bots, and ad blocking, will continue to motivate scores of new "innovative" companies to help solve these issues. This is in spite of the anemic valuations ad tech companies are currently seeing from Wall Street and venture firms. The problem is that the genesis of almost all of these technologies begins with the race for the marketing dollar, while the user experience remains an afterthought. A wise man once said, "Improve the user experience and the ad dollars will follow." Few new companies are born out of this philosophy. The ones that are (Facebook, Google, and Netflix; see How Netflix does A/B testing) are massively successful.

One of the initial promises for publishers to engage their readers on the web was to provide an "interactive" experience: a two-way conversation. The user would choose what they wanted to consume, and editors would serve up more of it, resulting in a happier, more highly engaged user. Serve and respect the user and you, the publisher, will be rewarded.

This is what my company does. We have been trying to understand why the vast majority of users don't click on a video when, in fact, they are there to watch one! How can publishers make the experience better? Editors often take great care to select a thumbnail image that they believe their users will click on to start a video, and then…nothing. On average, 85% of videos on publishers' sites never get started.

We believe that giving the user control and choice is the answer to this dilemma. So we developed a patented machine learning platform that responds to the wisdom of the crowd by serving up thumbnail images from publisher videos that the user, not the editor, determines are best. By respecting the user experience with our technology, users are 30% more likely to click on videos when the thumbnails are user-curated.

What does this mean for publishers?   Their users have a better experience because they are actually consuming the most compelling content on the site.  Nothing beats the sight, sound and motion of the video experience. Their users spend more time on the site and are more likely to return to the site in the future to consume video. Importantly from a monetization standpoint, InfiniGraph’s technology “KRAKEN” creates 30% more pre-roll revenue for the publisher.

We started our company with the goal of improving the user experience, and as a result, monetization has followed. This, by the way, enables publishers to create even more video for their users. There are no tricks. No additional load time. No videos that follow you down the page to satisfy the viewability requirements of proposals from the big holding companies. Just an incredibly sophisticated machine learning algorithm that helps consumers have a more enjoyable experience on their favorite sites. Our advice? Forget about "ad tech" solutions. Think about "user tech". The "ad" part will come.

The live example above demonstrates KRAKEN in action on the movie trailer "Interstellar", achieving a 16.8X improvement over the traditional static thumbnail image.

Deep Learning Methods Within Video – An End Game Application

We'll explore the use cases of using deep learning to drive higher video views. The coming Valhalla of video deep learning is being realized in visual object recognition and image classification within video. Mobile video has transformed, and continues to transform, the way video is distributed and consumed.


Big moves

(Adobe stats from its report on mobile video.)

We're witnessing the largest digital land grab in video history. Mobile video advertising is the fastest-growing segment, projected to account for $25 billion of ad spend by 2021. Deep learning and artificial intelligence are also growing within the very same companies who are jockeying for your cognitive attention. This confluence of video and deep learning has created a new standard in higher-performing video content, driving greater engagement, views, and revenue. In this post we'll dive deep into how video intelligence is changing the mobile video game. Many studies show tablet and smartphone viewing accounted for nearly 40 minutes of daily viewing in 2015, with mobile video continuing to dominate in 2016. Moreover, digital video is set to outpace TV for the first time, and social video (Instagram, Snapchat) is experiencing explosive growth.

 

The Interstellar trailer is a real example of KRAKEN in action, achieving a 16X improvement in video starts. Real-time A/B testing between the poster image (thumbnail) and selected images pulled from the visual training set provides simultaneous measurement of which images induce engagement. All data and actions feed a video machine learning algorithm (KRAKEN), enabling real-time optimization and sequencing of the right images to achieve the maximum human engagement possible.
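One minimal way to sketch that kind of real-time A/B test is an epsilon-greedy bandit: mostly serve the best-observed image, occasionally explore another candidate. This is a simplified stand-in, not KRAKEN's actual algorithm, and all counts here are invented.

```python
import random

def choose_thumbnail(stats, candidates, epsilon=0.1, rng=random):
    """Epsilon-greedy selection over candidate thumbnails."""
    if rng.random() < epsilon:
        # Explore: try a random candidate.
        return rng.choice(candidates)
    # Exploit: pick the highest observed click-through rate.
    def ctr(img):
        clicks, views = stats.get(img, (0, 0))
        return clicks / views if views else 0.0
    return max(candidates, key=ctr)

def record(stats, img, clicked):
    """Update (clicks, views) counts after each impression."""
    clicks, views = stats.get(img, (0, 0))
    stats[img] = (clicks + int(clicked), views + 1)

# Invented counts: the default poster vs. two frames from the video.
stats = {"poster": (30, 1000), "frame_a": (90, 1000), "frame_b": (55, 1000)}
pick = choose_thumbnail(stats, ["poster", "frame_a", "frame_b"], epsilon=0.0)
print(pick)  # frame_a (highest observed CTR)
```

With `epsilon=0` the choice is pure exploitation; a live system keeps a small exploration rate so newly added frames still get measured.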

How it works

Processing video at large scale requires advanced learning algorithms designed to ingest real-time data. We have now entered the next phase of data insights, going beyond the click and the video play. Video opens the door to insights about consumption habits, and using machine learning on them creates a competitive advantage.

Consumer experience and time on site are paramount when video is the primary revenue source, as it is for most broadcasting and over-the-top (OTT) sites today, including Netflix, Hulu, Comcast X1, and Amazon. Netflix has already put into production its own version of updating poster images to improve play starts, discovery, and completions.

It’s All Math

Images with higher object density have proven to drive higher engagement. The graph demonstrates that images with high entropy (explained in this video) generated the most attraction. Knowing which images produce a cognitive response is fundamental for video publishers looking to maximize their video assets.
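Entropy of the kind referenced here can be computed from an image's grayscale histogram. This is a generic Shannon-entropy sketch over a flat list of pixel values, not KRAKEN's exact metric.

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    """Shannon entropy (in bits) of a flat sequence of grayscale
    pixel values; busier, more detailed images score higher."""
    counts = Counter(pixels)
    n = len(pixels)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

flat = [128] * 64          # a uniform gray patch
busy = list(range(64))     # 64 distinct values
print(shannon_entropy(flat), shannon_entropy(busy))  # 0.0 6.0
```

A real pipeline would first decode the frame and flatten it (e.g. with an imaging library), then rank candidate frames by this score alongside the engagement data.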

The top three video priorities we're hearing from customers:

1) Revenue is very important, and showing more video increases revenue (especially during peak hours when inventory is already sold out)

2) More video starts means more user time on site

3) Mobile is becoming very important. Increasing mobile video plays is a top priority.

While this is good news overall, it presents a number of new challenges for video publishers in 2016. One is managing consumer access to content on their terms and across many points. Video is increasingly accessed through multiple entry points throughout the day, and these entry points, by their very nature, have context.

Deep Learning

Broadcasters and publishers must consider consumer visual consumption a key insight. These eyeballs (neurons firing) are worth billions of dollars, but it's no longer a game of looking at web logs. Determining which images work with customers requires more advanced image analysis and insight into consumers' video consumption habits. For digital broadcasters, enabling intelligence where the consumer engages isn't new. Deep convolutional neural networks power the image identification and other prioritization algorithms. More details are in the main video.

Motivation

Visual consumer engagement tracking is not something random. Tracking engagement on video has been done for many years, but when it comes to "what" within the video, there was a major void. InfiniGraph created KRAKEN to fill that void: machine learning within the video optimizes which images are shown to achieve the best response rates. Interstellar's 16X boost is a great example of using KRAKEN to drive higher click-to-launch for autoplay on desktop and click-to-play on mobile, resulting in higher revenue and greater video efficiency. Think of KRAKEN as the Optimizely for video.

One question that comes up often is: "Is the image rotation the only thing causing people to click play?" The short answer is no. Rotating arbitrary images is annoying and distracting. KRAKEN finds what the customer likes first and then sequences the images based on measurable events. The right set of images is everything: once you have the right images, you can find the right sequence, and this combination makes all the difference in maximizing play rates. Not using the best visuals will cause higher abandonment rates.
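That two-stage approach (find the right images first, then the right order) can be sketched as below. The CTR numbers and the toy sequence-scoring function are invented for illustration; a real system would score sequences from measured events.

```python
from itertools import permutations

def best_sequence(images, ctr, seq_score):
    """Keep the top images by individual CTR, then brute-force the
    ordering with the best sequence score. Brute force is only viable
    for the handful of images actually rotated."""
    top = sorted(images, key=lambda i: ctr[i], reverse=True)[:3]
    return max(permutations(top), key=seq_score)

# Invented per-image click-through rates.
ctr = {"a": 0.12, "b": 0.09, "c": 0.15, "d": 0.03}
# Toy sequence metric: reward sequences that open on a strong image.
score = lambda seq: ctr[seq[0]] * 2 + sum(ctr[i] for i in seq[1:])
best = best_sequence(list(ctr), ctr, score)
print(best[0])  # c, the highest-CTR image, leads the sequence
```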

Conclusion

Further advances in deep learning are opening the doors to continuously learning, self-improving systems. One area we're very excited about is visual prediction and recommendation for video. We see a great future in mapping the collective human cognitive response to visuals that stimulate and create excitement. Melding the human mind with video intelligence is the next phase for publishers to deliver a better consumer experience.

FORBES: InfiniGraph Invents Video Thumbnail Optimization

Bruce Rogers, FORBES STAFF
I’m Forbes’ Chief Insights Officer & write about thought leadership.
Originally posted on Forbes

A Series of Forbes Insights Profiles of Thought Leaders Changing the Business Landscape: Chase McMichael, Co-Founder and CEO, InfiniGraph

Optimizing web content to drive higher conversion rates, for a long time, meant only focusing on boosting the number of click-throughs, or figuring out what kinds of static content got shared most often on social media sites.

But what about videos? This key component of many sites went largely overlooked, because there simply wasn’t a good way to determine what actually made viewers want to click on and watch a given video.


Chase McMichael, Co-Founder and CEO, Infinigraph

In an effort to remedy this problem, says entrepreneur Chase McMichael, brand managers may have, at most, tried to simply improve the video’s image quality. Or, in a move like a Hail Mary pass, they might have splashed up even more content, in the hopes that something, anything, would score higher click-to-play rates. Yet even after all that, McMichael says, brands often found that some 90% of viewers still did not watch the videos posted on their sites.

As it turns out, the "thumbnail" image (a static visual frame from the video footage) has everything to do with online video performance. And while several ad tech companies were already out there, using so-called A/B testing to determine how to optimize the user experience, no one had focused on optimizing video thumbnail images. Given video's sequencing speed, with thousands of images flashed up for milliseconds at a time, measuring the popularity of thumbnails was simply too complex.

Sensing a challenge, McMichael, a mathematician and physicist with an ever-so-slight east Texas drawl, set out to tackle this issue. He'd already started InfiniGraph, an ad tech firm aimed at tracking and measuring people's engagement with brand content. But as his company grew, he found that customers began asking more and more about how they might best optimize web videos in order to boost viewership.

Viewership, of course, is key: higher video viewership translates into more shares; more shares mean increased engagement. And that all translates into more revenue for the website. Premium publishers are limited in their ability to create more inventory because the price of entry is so high. These new in-house studios are producing quality content, but getting scale is a huge challenge.

When he started looking into it, McMichael says, he often found that the thumbnails posted to attract viewers usually fell flat; the process for choosing thumbnails hasn't changed in 15 years. And the realization that the images gained little to no traction among viewers came as something of a surprise: most of the time, the publishers and brand managers themselves had selected specific images for posting, with no thought at all to optimizing the image.

According to McMichael, the company’s technology (called “Kraken”) solves for two critical areas for publishers: it creates inventory and the corresponding revenue while also increasing engagement and time spent on site.

Timing, it turns out, was everything for McMichael and InfiniGraph. Image- and object-recognition software had been improving to the point where those milliseconds-at-a-time thumbnails could be slowed down and evaluated more cheaply than in the past. Using that technology along with special algorithms, McMichael created Kraken, a program that breaks down videos into “best possible” thumbnails. Using an API, Kraken monitors which part of the video, or which thumbnail, viewers click on the most. Using machine learning, Kraken then rotates through and posts the best thumbnails to increase the chances that new users will also click on the published thumbnail in order to watch an entire video.

This process is essentially crowd-sourced, says McMichael—the images that users click on the most are those that Kraken pushes back to the sites for more clicks. “What’s fascinating is we’ve had news content, hard news, shocking, all the way up to entertainment, music, sports and it’s pretty much universal,” he says, “that no one [person] picks the right answer”—only the program will provide the best image or images that draw in the most clicks. On its first few experimental runs, InfiniGraph engineers discovered something huge: By repeatedly testing and re-posting certain images, InfiniGraph saw rates of click-to-play increase by, in some cases, 200%. Says McMichael: “It was like found money.”

InfiniGraph is a young and small company, even for a start-up: the Silicon Valley firm has eight employees in addition to a network of technicians and specialty consultants he scales on an as-needed basis, and has bootstrapped itself to where it is today. McMichael says he's built a "very revenue-efficient company" because "everything is running in two data centers and images are distributed across a global CDN." His goal is to be cash-flow positive by this summer. Right now InfiniGraph works exclusively with publishers, but the market is ripe for growth, especially on mobile devices, McMichael says.

Recently, Tom Morrissy, a publishing leader with extensive experience in both publishing (Entertainment Weekly, SpinMedia Group) and video ad tech (Synaptic Digital, Selectable Media) joined InfiniGraph as a Board Advisor.

"So many companies claim to bring a 'revenue-generating solution that is seamlessly integrated.' This product creates inventory for premium publishers and is the lightest tech integration I've seen. I was completely impressed with Chase's vision because he truly thought through the technology from the mindset of a publisher. Improve the consumer experience and the ad dollars always follow," says Tom Morrissy.

The son of a military officer father and a registered nurse mother, McMichael grew up in the small town of New Boston, Texas, located just outside the Red River Army Depot. A self-described "brainiac kid," McMichael says he was always busying himself with science experiments, with a special interest in superconductors, materials that conduct electricity with zero resistance. Though he'd been accepted to North Texas, McMichael still took a tour of the University of Houston, mainly because the work of one physics professor, who had discovered high-temperature superconductivity, had grabbed his attention. "So I went to Paul Chu's office and said, 'Hey, I want to work for you.' It was the craziest thing, but growing up I was always told, 'If you don't ask for it, you won't know.'"

That spawned the beginning of a seven-year partnership with Chu, during which time the university built a ground-breaking science center. McMichael spent seven years in DARPA-funded applied science but decided to leave for the business world. A friend of McMichael's worked at Sun Microsystems and encouraged him to leverage his programming knowledge. His first job out of college was creating the ad banner management system for Hearst. "So I got sucked into the whole internet wave and left the hard-core science field," he says. He also worked at Chase Manhattan Bank in the '90s, building out its online banking business.

As for the future for InfiniGraph?

McMichael says his mission is “to improve the consumer experience on every video across the globe, and it’s an ambitious plan. But we know that there are billions of people holding a phone right now looking at an image. And their thumb is about to click ‘play,’ and we want to help that experience.”

Bruce H. Rogers is the co-author of the recently published book Profitable Brilliance: How Professional Service Firms Become Thought Leaders - Originally posted on Forbes

The Force Awakens Video Machine Learning – Star Wars

Star Wars: The Force Awakens trailer achieves a massive boost (41% gain) using visual sequence storytelling. Optimizing video is now a must for publishers looking to maximize their video assets and engage customers with content relevant to them. Embrace the "FORCE".

Above is a live example of KRAKEN's "Image Rotation" in action, powered by video machine learning, as seen on NYDailyNews. The image sequencing is created by KRAKEN and is integrated directly inside the video player via the KRAKEN API.

The Problem

The impression a video makes on a consumer is everything, especially on mobile. What consumers typically see is a still image with a large play-button overlay. This thumbnail image has been stuck in a static world for over 15 years. The old-school static thumbnail on video is dead, and autoplay is frankly annoying.

There have been recent advancements in image processing using deep neural nets.  Finding quality and clarity is great but can be expensive at scale.

Google's thumbnailer: quality selection using neural networks

Image quality is important, but our findings show that consumers prefer not the highest-quality image but the ones that intrigue the human mind.

However, static thumbnail selection still depends on the person who uploads the video. This process does not scale to thousands of videos over a short period of time. That is why the majority of commercial video platforms auto-select a fixed time slice from the video and hope for the best.

Image selection in YouTube (note: KRAKEN-enabled video machine learning). Static thumbnail selection with customized thumbnail upload: all video platforms provide this manual feature, along with an auto-selected default.

Humans cannot optimize or adjust creative on the fly to increase video performance. Many attempts at A/B testing have proven helpful, but they produce limited results due to their manual nature.

The Solution

Video machine learning has come of age because it is cost-effective and enables publishers to use the FORCE. Image sequencing is not a new idea; it has been used for centuries in visual storytelling.

Video machine learning makes it possible to scale image sequencing over thousands of video placements and millions of plays. Video has gone from a static world to a dynamic and intelligent world. Star Wars: The Force Awakens Trailer benefited tremendously from video machine learning with a lift of 41%.

Force Awakens KRAKEN Video Machine Learning International

Another major bonus of video machine learning is the ability to scale and combat image fatigue (decreasing engagement over time).
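Image fatigue can be detected with a simple heuristic. This sketch, with invented thresholds, flags an image whose recent click-through rate has slipped well below its own peak, signaling that it should be rotated out.

```python
def is_fatigued(daily_ctr, window=3, drop=0.25):
    """Flag an image as fatigued when its recent average CTR has fallen
    `drop` (here 25%) below its historical peak. The window size and
    drop threshold are illustrative, not production values."""
    if len(daily_ctr) < window:
        return False
    recent = sum(daily_ctr[-window:]) / window
    peak = max(daily_ctr)
    return recent < peak * (1 - drop)

fresh = [0.10, 0.11, 0.10, 0.10]        # holding steady
tired = [0.12, 0.11, 0.08, 0.07, 0.06]  # clearly decaying
print(is_fatigued(fresh), is_fatigued(tired))  # False True
```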

Conclusion

Capturing a consumer's attention has never been harder than it is now. Consumers are glued to their smartphones, and every millisecond counts. Publishers are reverting to the annoying autoplay tactic; however, consumers are pushing back and complaining. Fox has responded to consumer feedback by offering a feature to turn autoplay off. The growth of mobile video will continue to be massive for publishers who optimize video. Machine learning will continue to help them benefit from and maximize their valuable video assets.

Do you want to learn more about KRAKEN and hear what others are saying about video machine learning? Check out our testimonials and intro below. Thanks for your input and thoughts on our journey in video machine learning.

Ryan Shane, VP of Sales

Want to increase your video play rates and revenue? Contact us for a 1:1 demo, access customer use cases, and see live examples of both mobile and desktop implementations.

 

Video Machine Learning Skyrockets Mobile Engagement by 16.8X (Case Study)

Video machine learning technology called KRAKEN skyrockets mobile consumer engagement by 16.8X for the Interstellar Trailer (case study).

COMPANY

Social networking for influential moms
SocialMoms began in 2008 as a popular community site for moms looking to build their reach and influence through social networking, traditional media opportunities, and brand sponsorships. It now boasts over 45,000 bloggers, reaches more than 55 million people each month, and has a network of influencers with more than 300 million followers on Twitter.

CHALLENGE

Create engaging mobile digital media campaigns for women 25-49

SocialMoms brings top brands to millions of women each month. They are responsible for ensuring that each campaign not only reaches the intended audience but is also engaging and meaningful. However, it was challenging to get meaningful audience engagement with video campaigns on smartphones.

SOLUTION

Responsive visuals optimized for mobile

KRAKEN replaces a video's old, static default thumbnail with a responsive set of "Lead Visuals" taken from the video. It treats each endpoint differently, so it can optimize a movie with one set of visuals for a desktop site and another set for a mobile site, because people respond differently depending on which device they use for viewing.
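That per-endpoint treatment boils down to keeping a separately learned visual set per device class. A minimal sketch, with hypothetical endpoint keys and image names:

```python
def visuals_for(endpoint, learned, fallback):
    """Serve the visual set learned for a given endpoint (desktop,
    mobile, ...), falling back to a default set when nothing has been
    learned for that endpoint yet."""
    return learned.get(endpoint, fallback)

# Invented example: desktop and mobile audiences converged on
# different lead visuals for the same video.
learned = {
    "desktop": ["wide_shot", "dialogue_closeup"],
    "mobile":  ["closeup", "action_frame"],
}
print(visuals_for("mobile", learned, ["poster"]))  # ['closeup', 'action_frame']
print(visuals_for("tablet", learned, ["poster"]))  # ['poster']
```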

RESULT

Maximum lift of 16.8X on mobile for the Interstellar campaign

After KRAKEN's "Lead Visuals" optimization, engagement via mobile skyrocketed. SocialMoms saw 16.8X higher engagement compared to the original default thumbnail that was chosen for the desktop site. They also reported higher completion rates when running KRAKEN.

 

"We're seeing the highest engagement levels for our customers using InfiniGraph's KRAKEN-powered content."
– Jim Calhoun, COO
SocialMoms

 

 

Download InfiniGraph’s Interstellar Case Study (PDF)

Read our Birdman Case Study

Would you like to learn more about video machine learning?  Request a demo!

5 Ways Machine Learning Accelerates Mobile Video

The "Birdman" case study demonstrates video engagement lift powered by machine learning. In "5 Ways Machine Learning Accelerates Mobile Video", we dive into why brands are embracing video as a key marketing and storytelling tool and how machine learning can be used to drive higher engagement.

The hard reality is that video is STILL LINEAR. Even so, some are attempting to make videos interactive, like Jack White's interactive video that lets viewers choose their own adventure.

While the majority of brand videos are still stuck in a 15- or 30-second pre-roll, force-fed content model, we're starting to see a clear migration to long-form and sponsored content that is not just an interruption: it IS the story. Video machine learning is new, and millions of videos can benefit from programmatic visual control. Why machine learning? Marketers don't care what algorithms you're using; they just want to see:

Case study on the movie trailer "Birdman": a 3,000% click-to-play lift achieved using machine learning technology.

  • Revenue
  • Efficiency
  • Effectiveness

Publishers are looking to achieve high KPIs in order to increase overall spend, while the media buyer is looking to lower CPA without increasing costs. Publishers are trying to increase inventory and get the most out of their customers' engagement. Machine learning enables both parties to achieve their goals by impacting revenue, efficiency, and effectiveness simultaneously. With this technology, publishers are empowered to keep user video engagement high over significantly longer periods of time, which is proving to be an invaluable tool that will become imperative to all successful video marketing efforts.

What do marketers want to see?

  • Viewability
  • Video watch time
  • Audio on or off
  • When did the consumer stop watching
  • Was the video paused
Video Viewability Across the Web

Google research finds only 53 percent of PC video advertising is viewable.

Gone are the days of simply tracking web page hits. A more sophisticated marketer has emerged, and data is king. However, video distribution and analytics are complicated. Machine learning gives the system the ability to learn behavior and automatically adjust marketing efforts based on active feedback loops. This virtual neural network, driven by human interaction with video content, creates a meaningful data set that provides the foundation for mobile video intelligence.

Programmatic Explosion

Graph shows real-time A/B testing of a static image against a KRAKEN image driven by machine learning. Machine learning makes it possible to stabilize and sustain the lift.

Programmatic targeting has reached an all-time high of sophistication with its own machine learning and big data approach. Companies like Rocket Fuel, Turn, and eXelate have all perfected audience-based targeting with advanced machine learning methods, aggregating massive sums of data to ensure that the right content is placed in front of the right people at the right time. The following are examples of machine learning techniques being used to enhance content engagement levels.

1. Algorithmic learning is used to determine which demographic segment responds well to specific content (e.g. videos).

2. Identification of habitual responses to visual objects by region allows for higher confidence of consumer engagement with content.

3. The type of content greatly affects the reaction of a targeted segment. Machine learning can track the visual preference of the video segments to give brands and content creators a new level of understanding as to what an audience will find most appealing.

4. Machine Learning can predict audience consumption. Plotting audience behavior across video types creates a consumption map, which can be used to predict things like video placement and cycle times.

5. Reduce video fatigue and increase engagement by rotating a video's static thumbnail images. A single static starting image suffers from image fatigue because nothing about it changes: no new visuals, color, or motion. Continuously and dynamically changing the starting image keeps audience interest and results in higher click-to-play rates as well as completion rates.
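Thumbnail rotation of this kind is commonly framed as a multi-armed bandit problem: mostly show the best-performing image, but keep exploring alternatives. A minimal epsilon-greedy sketch (the thumbnail names and stats are hypothetical; KRAKEN's actual selection logic is proprietary and not described in this post):

```python
import random

def pick_thumbnail(stats, epsilon=0.1):
    """Epsilon-greedy selection over candidate thumbnails.
    stats maps thumbnail id -> [plays, impressions]."""
    if random.random() < epsilon:
        return random.choice(list(stats))  # explore a random candidate
    # exploit: highest observed play rate (unseen thumbnails tried first)
    return max(stats, key=lambda t: stats[t][0] / stats[t][1]
               if stats[t][1] else float("inf"))

def record(stats, thumb, played):
    """Update counts after serving a thumbnail."""
    plays, views = stats[thumb]
    stats[thumb] = [plays + int(played), views + 1]

stats = {"apex": [30, 1000], "wide": [12, 1000], "closeup": [25, 1000]}
choice = pick_thumbnail(stats, epsilon=0.0)  # pure exploitation
print(choice)  # "apex" has the best observed play rate (3%)
```

Keeping epsilon above zero is what prevents the rotation from freezing on one image and re-introducing the fatigue the technique is meant to avoid.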

Visual Programmatic

Netflix has the capability to “predict” what you will watch next based on past viewing habits. Information like show/movie title and genre is compiled to help select Netflix’s recommendations. These algorithms pull from surface-level metadata rather than the actual content within the video.

Visual content marketing is a very powerful method of attracting and retaining customers. Building a content story arc is key to perpetuating engagement, and video is the most effective means to accomplish this. Publishers that leverage their audience to tune the video will achieve higher levels of revenue on their existing assets.

How do you see machine learning impacting video in the future, and what video KPIs do you track that aren’t on the list? Let us know in the comments!

Mobile Video Play Rate Boosted by 200% to 3000% – KRAKEN Release – Case Study

Problem:

Video is the largest and fastest growing segment in online marketing. Unfortunately, a consumer’s first impression of those videos is more than likely a static image, and there isn’t a simple way to programmatically adjust it based on audience intelligence. This leaves billions on the table in un-played videos and lost engagement due to the lack of compelling starting visuals.


Baglan Nurhan Rhymes, SVP of Revenue, AnchorFree

“In a highly competitive Ad Tech space, where videos drive the lion’s share of revenues, InfiniGraph’s technology, Kraken, is the first real breakthrough we have seen in many years.

I can see Kraken being implemented by digital broadcast networks, publishers, ad networks and video player platforms in the very near future. Early adopters will turbo-charge their video ad revenues on desktop and mobile.”

Initial Results:

Our beta customers (Disney, Paramount, Microsoft and AnchorFree) have experienced between 10% and 3000% lifts in click-through and play rates on their video content using InfiniGraph’s patented Kraken technology.

Example – AnchorFree before and after on “50 Shades of Grey”:

  • 200%+ boost in play rate
  • 10X increase in private viewing sales

InfiniGraph’s machine learning technology achieves scale by producing the highest possible response to mobile video based on audience behavior.

Learning algorithms matter:

Improving mobile video play rates is more of a science, as seen in the second example on AnchorFree running “Birdman”. Is your video “thumb stopping” when a consumer scrolls through their feed? In the case of Birdman, the results were remarkable.

Overall video play rates grew by:

  • 3000%+ boost in play rate
  • 2.6% peak click-through rate

 

Solution:

The Kraken machine learning system continuously analyzes the video and user interactions at every content distribution endpoint, over many sequences. This decision making happens in near real time.

Value:

InfiniGraph’s mission is to help video content owners, publishers, and agencies deliver the most relevant video experience. This boosts video starts and video completion rates, and increases page visit depth by eliminating creative burn and waste. It translates into higher revenues for existing content through higher video starts, higher VCRs, and higher brand engagement.

Our clients, who have implemented our proprietary video click-through and play-rate enhancement technology, Kraken, have experienced upwards of 3000% lift in content plays.
As an advertising, content production, or brand management professional, you know that video creates the greatest impact in your online marketing. The challenge, and the measure of success, continues to be execution on play rates and video completion rates.

Want to see more or run your own test? Contact Us

 

Mobile Native Video Interstellar 14X Hyper Play Rate Jump

Jim Calhoun, COO SocialMoms


“We’re seeing the highest engagement levels for our customers using InfiniGraph’s native content.”

Paramount’s Interstellar used mobile native video and distributed native advertising units to hyper-jump its play rates by 14X. Mobile video is exploding, and brands are leveraging advanced methods of distribution beyond the old-school static native ads we see today. We are now in the age of intelligence, machine learning, and responsive design. Why can’t this intelligence be applied to mobile native video? Learn how to hyper-drive your own mobile native video: sign up, request a demo, and see how intelligence is applied.
Sign up for Mobile Native Video

Above is an example of such intelligence integrated within the mobile native advertising unit for on- and off-domain content amplification. The mobile video leverages “deep linking” to launch video on a smartphone, creating a seamless consumer experience. The content inside the native unit is dynamically updated based on consumer actions, which increases overall engagement rates and exposure.

The Takeaway:

  • 50% play rate boost
  • Maximize content marketing spend
  • Mobile first content rendering
  • Intelligent content distribution
Rich Media CTR example

DoubleClick data showing industry-average CTRs of 0.06% to 0.18% vs. 0.84% for mobile native (more on rich media benchmarks).

Most brands are sitting on tons of great content. This content is usually stuck in silos and has no simple way of traveling with the consumer across other media sources, such as websites/blogs (owned) and other paid mediums. Ad re-targeting is extremely effective; however, a major issue is visual ad fatigue: the consumer sees the same media over and over, which eventually reduces effectiveness. Industry benchmarks on rich media content average around 0.06% to 0.10% CTR, compared to native ads, which perform much higher. With new intelligent data techniques harnessing machine learning, these technologies are pushing engagement rates up by as much as 50%.
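The gap between those benchmarks is easy to put in perspective. Using the DoubleClick figures cited above:

```python
# CTRs in percent, from the DoubleClick benchmarks cited above
rich_media_ctr = 0.06
mobile_native_ctr = 0.84

multiple = mobile_native_ctr / rich_media_ctr
print(f"Mobile native outperforms rich media by {multiple:.0f}x")  # 14x
```

That 14x multiple is the same order of lift the Interstellar native campaign reported for play rates.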


Interstellar Mobile Native Video

Content marketing has truly taken on a new life, and the quality of content within videos has leaped to extraordinary levels. As more publishers and brands seek to control the content on their sites, white-label native ad platforms will continue to take hold.

Native unit for Interstellar launching the mobile native video


The example on the left demonstrates what an embedded native unit looks like when deployed over a publisher network or brand website. The code behind these units is designed with a mobile-first strategy, assuring optimal rendering on mobile. InfiniGraph sources content from the existing published Interstellar movie content, scores it, and transforms it into intelligent mobile native units. What’s unique about this approach is that consumer actions on content are tracked, and the brand’s content inventory is managed per individual unit to maximize content marketing spend.

Keeping content fresh is a major factor in reducing image fatigue, repetitiveness, and disengagement. The human brain can process images in as little as 13 milliseconds; how fast your visuals resonate with your consumer is key to amplifying brand content on domain as well as enhancing off-domain engagement. The same mobile native unit can be deployed over other third-party native networks, ad networks, and programmatic exchanges, and on the brand’s domain, simultaneously.
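Scoring content so the freshest, best-performing pieces rotate into the unit can be as simple as combining play rate, completion rate, and an age decay. A minimal sketch of the idea (InfiniGraph's actual scoring model is proprietary; the weights and half-life here are illustrative assumptions):

```python
def engagement_score(plays, impressions, completions, age_days,
                     half_life_days=7.0):
    """Illustrative content score: play rate, weighted up by completion
    rate and decayed by age so fresher content surfaces first."""
    if impressions == 0 or plays == 0:
        return 0.0
    play_rate = plays / impressions
    completion_rate = completions / plays
    freshness = 0.5 ** (age_days / half_life_days)  # halves every half-life
    return play_rate * (0.5 + 0.5 * completion_rate) * freshness

# A week-old piece with a 3% play rate and 50% completion rate:
score = engagement_score(plays=300, impressions=10_000,
                         completions=150, age_days=7)
print(round(score, 5))  # 0.01125
```

The decay term is what keeps a once-popular image from dominating the unit forever, which is the fatigue problem described above.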

Apply some intelligence to your mobile native video: sign up, request a demo, and see how intelligence is applied.
Sign up for Mobile Native Video