Making More Donuts

Being a publisher is a tough gig these days. It’s become a complex world for even the most sophisticated companies. And the curve balls keep coming. Consider just a few of the challenges facing your average publisher today:

  • Ad blocking.
  • Viewability and measurement.
  • Declining display CPMs, compounded by audience migration to mobile, where CPMs are even lower.
  • Maturing traffic growth on O&O sites.
  • Pressure to build an audience on social platforms (e.g., Snapchat), often requiring additional headcount, without any certainty that the audience will be sufficiently monetizable.
  • The sad realization that native ads (last year’s savior!) are inefficient to produce, difficult to scale, and not easily renewable with advertising partners.

The list goes on…

The Challenge

Of course, the biggest opportunity (and challenge) for publishers is video. Nothing shows more promise for publishers from both a user engagement and business perspective than (mobile) video. It’s a simple formula. When users watch more video on a publisher’s site, they are, by definition, more engaged. More video engagement drives better “time spent” numbers and, of course, higher CPMs.

But the barrier to entry is high, particularly for legacy print publishers. They struggle to convert readers into viewers because creating a consistently high volume of quality video content is expensive and not necessarily part of their core DNA. Don’t get me wrong: they are certainly creating compelling video, but they have not yet been able to produce it at enough scale to satisfy their audiences. At the other end of the spectrum, video-centric publishers like TV networks that live and breathe video run out of inventory on a continuous basis.

The result of publishers’ struggle to keep up with consumer demand for quality video is a collective dearth of quality video supply in the market. To put it in culinary terms, premium publishers would sell more donuts if they could, but they just can’t bake enough to satisfy the demand.

So how can you make more donuts?
Trust and empower the user! 


Rise of Artificial Intelligence

The majority of the buzz at CES this year was about Artificial Intelligence and Machine Learning.  The potential for Amazon’s Alexa to enhance the home experience was the shining example of this.  In speaking with several seasoned media executives about the AI/machine learning phenomenon, however, I heard a common refrain:  “The stuff is cool, but I’m not seeing any real applications for my business yet.”  Everyone is pining to figure out a way to unlock user preferences through machine learning in practical ways that they can scale and monetize for their businesses.  It is truly the new Holy Grail.

The Solution

That’s why we at InfiniGraph are so excited about our product KRAKEN. KRAKEN has an immediate and profound impact on video publishing. KRAKEN lets users curate the thumbnails publishers serve and optimizes toward user preference through machine learning in real time. The result? KRAKEN increases click-to-play rates by 30% on average, creating corresponding additional inventory and revenue.

It is a revolutionary application of machine learning that, in execution, makes a one-way, dictatorial publishing style an instant relic. With KRAKEN, users literally collaborate with the publisher on which images they find most engaging. KRAKEN actually helps you, the publisher, become more responsive to your audience. It’s a better experience and outcome for everyone.

The Future…Now!

In a world of cool gadgets and futuristic musings, KRAKEN works today in tangible and measurable ways to improve your engagement with your audience.  Most importantly, KRAKEN accomplishes this with your current video assets. No disruptive change to your publishing flow. No need to add resources to create more video. Just a machine learning tool that maximizes your video footprint.  

In essence, you don’t need to make more donuts.  You simply get to serve more of them to your audience.  And, KRAKEN does that for you!

 

For more information about InfiniGraph, you can contact me at tom.morrissy@infinigraph.com or read my last blog post, “AdTech? Think ‘User Tech’ For a Better Video Experience.”

 

How Deep Learning Video Sequence Drives Profits

Beyond the deep learning hype, digital video sequencing (clipping) powered by machine learning is driving higher profits. Video publishers use various images (thumbnails, or poster images) to attract readers to watch more video. These thumbnail images are critical, and their visual information has a great impact on video performance. In many cases the lead visual is more important than the headline. More views equal more revenue; it’s that simple. Deep learning is having a significant impact on everything from video visual search to video optimization. Here we explore video sequencing and the power of deep learning.

Having great content is required, but if your audience isn’t watching the video then you’re losing money. Understanding what images resonate with your audience and produce higher watch rates is exactly what KRAKEN does. That’s right: show the right image, sequence or clip to your consumers and you’ll increase the number of videos played. This is proven and measurable behavior as outlined in our case studies. An image is really worth a thousand words.

Below are live examples of KRAKEN in action. Each form is powered by a machine learning selection process. Below we describe the use cases for apex image, image rotation and animation clip.

Animation Clip:

KRAKEN “clips” the video at the apex point. Sequences are combined to create a full animation of a scene (or scenes). Boost rates are equal to those from image rotation and can be much higher depending on the content type.

  • PROS
    • Consumer created clipping points within video
    • Creates more visual information vs. a static image
    • Highlights action scenes
    • Great for mobile and OTT preview
  • CONS:
    • More than one on page can cause distraction
    • Overuse can turn off consumers
    • Too many on page can slow page loading performance (due to size)
    • Slow mobile LTE connections can lead to choppy images instead of a smooth animation

Image Rotation:

Image rotation allows for a more complete visual story to be told when compared to a static image. This results in consumers having a better idea of the content in the video. KRAKEN determines the top four most engaging images and then cycles through them. We are seeing mobile video boost rates above 50%.

  • PROS:
    • Smooth visual transition
    • Consumer selected top images
    • Creates a visual story vs. one image to engage more consumers
    • Ideal for mobile and OTT
    • Less bandwidth intensive (Mobile LTE)
  • CONS:
    • Similar to animated clips, publishers should limit multiple placements on a single page

Apex Image:

KRAKEN always finds the best lead image for any placement. This apex image alone creates high play rates, especially in a click-to-launch placement. Average boost rates are between 20% and 30%.

  • PROS:
    • Audience-chosen top image for each placement
    • Can be placed everywhere (including social media)
    • Ideal for desktop
    • Good with mobile and OTT
  • CONS:
    • Static thumbnails have limited visual information
    • Once the apex is found, the image will never be substituted

Below are live KRAKEN animation clip examples. All three animations start with the audience choosing the apex image. Then, KRAKEN identifies clipping points (via deep learning) and uses machine learning to adjust toward the optimal clipping sequence.

HitFix video: deep learning video clipping to action; machine learning adjusts in real time

Video players have transitioned to HTML5, and mobile consumption of video is the fastest-growing medium. Broadcasters that embrace advanced technologies that adapt to consumer preference will achieve higher returns while creating a better consumer experience. The value proposition is simple: if you boost your video performance by 30% (for a video publisher doing 30 million video plays per month), KRAKEN will drive an additional $2.2 million in revenue (see the KRAKEN revenue calculator). This happens with existing video inventory and without additional headcount. KRAKEN creates a win-win scenario and will improve its performance as more insights are used to bring prediction and recommendation to consumers, thereby increasing video plays.
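As a back-of-envelope check on that value proposition: the math works out if you assume an effective pre-roll CPM of about $20, annualized. That CPM is our assumption, chosen only to land near the cited figure; for real inputs, use the KRAKEN revenue calculator.

```python
MONTHLY_PLAYS = 30_000_000      # video plays per month (from the example above)
BOOST_PERCENT = 30              # 30% click-to-play lift
ASSUMED_CPM = 20.0              # assumed effective revenue per 1,000 plays (USD)

extra_monthly_plays = MONTHLY_PLAYS * BOOST_PERCENT // 100   # additional plays/month
extra_annual_plays = extra_monthly_plays * 12                # additional plays/year
extra_annual_revenue = extra_annual_plays / 1000 * ASSUMED_CPM

print(f"extra plays/year: {extra_annual_plays:,}")
print(f"extra revenue/year: ${extra_annual_revenue:,.0f}")   # about $2.2M
```

Swap in your own play counts and CPM to see what the lift is worth on your inventory.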

How Deep Learning Powers Visual Search

The elusive goal of video search, whereby you can search the visual content of the video itself, is now possible with advanced technologies like deep learning. It’s very exciting to see video SEO becoming a reality thanks to amazing algorithms and massive computing power. We truly can say a picture is worth 1,000 words!

Content creators have fantasized about doing video search. For many years, major engineering challenges were a roadblock to comprehending video images directly.

Originally posted on SEJ

Video visual search opens up a whole new field where video is the new HTML, and the new visual SEO is what’s inside the image. We’re in exciting times, with new companies dedicated to video visual search. In a previous post, Video Machine Learning: A Content Marketing Revolution, we demonstrated image analysis within video to improve video performance. After one year, we’re now embarking on video visual search via deep learning.

Behind the Deep Curtain

Video clipping powered by KRAKEN video deep learning: identifying relevance within video frames to drive more plays

Many research groups have collaborated to push the field of deep learning forward. Using an advanced image labeling repository like ImageNet has elevated the field. The ability to take video, identify what’s in the video frames, and apply descriptions opens up a huge set of visual keywords.

What is deep learning? It is probably the biggest buzzword around, along with AI (Artificial Intelligence). Deep learning came from advanced math applied to large data sets, processing information in a way loosely similar to the human brain. The human brain is made up of billions of neurons, and we have long attempted to mimic how those neurons work. Previously, only humans and a few other animals could do what machines can now do. This is a game changer.

The evolution of what’s called a Convolutional Neural Network (CNN), aka deep learning, was driven by thought leaders like Yann LeCun (Facebook), Geoffrey Hinton (Google), Andrew Ng (Baidu), and Li Fei-Fei (Director of the Stanford AI Lab and creator of ImageNet). Now the field has exploded, and all the major companies have open-sourced their deep learning platforms for running convolutional neural networks in various forms. In an interview with the New York Times, Fei-Fei said, “I consider the pixel data in images and video to be the dark matter of the Internet. We are now starting to illuminate it.” That was back in 2014. For more on the history of machine learning, see the post by Roger Parloff at Fortune.

Big Numbers

KRAKEN video deep learning: images for high video engagement

Image reduction is key to video deep learning. Image analysis is achieved through big number crunching. Photo: image created by Chase McMichael

Think about this: video is a collection of images linked together and played back at 30 frames per second. Analyzing a massive number of frames is a major challenge.

As humans, we see video all the time and our brains are processing those images in real-time. Getting a machine to do this very task at scale is not trivial. Machines processing images is an amazing feat and doing this task in real-time video is even harder. You must decipher shapes, symbols, objects, and meaning. For robotics and self-driving cars this is the holy grail.

Creating a video image classification system required a slightly different approach. You must first handle the enormous number of single frames in a video file to understand what’s in the images.

Visual Search

On September 28th, 2016, a seven-member Google research team announced YouTube-8M, leveraging state-of-the-art deep learning models. YouTube-8M consists of 8 million YouTube videos, equivalent to 500K hours of video, all labeled against 4,800 Knowledge Graph entities. This is a big deal for the video deep learning space. YouTube-8M’s scale required pre-processing to pull frame-level features first. The team used the Inception-V3 image annotation model trained on ImageNet. What makes this such a great thing is that we now have access to a very large video labeling system, and Google did the massive heavy lifting to create 8M.

Top-level numbers of YouTube-8M. Photo created by Chase McMichael.

The secret to handling all this big data was reducing the number of frames to be processed. The key is extracting frame-level features at 1 frame per second, creating a manageable data set. This resulted in 1.9 billion video frames, enabling reasonable handling of the data. At this size you can train a TensorFlow model on a single Graphics Processing Unit (GPU) in one day! In comparison, processing the full 8M natively would have required a petabyte of video storage and 24 CPUs of computing power for a year. It’s easy to see why pre-processing was required for video image analysis, and why frame sampling was used to create a manageable data set.
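The frame-budget arithmetic is easy to sanity-check. A tiny sketch, using the rounded figures from the text (so totals are approximate; the dataset’s reported count is 1.9 billion frames):

```python
HOURS_OF_VIDEO = 500_000     # rounded YouTube-8M total from the text
SECONDS_PER_HOUR = 3600
NATIVE_FPS = 30              # typical playback frame rate
SAMPLED_FPS = 1              # YouTube-8M samples one frame per second

native_frames = HOURS_OF_VIDEO * SECONDS_PER_HOUR * NATIVE_FPS
sampled_frames = HOURS_OF_VIDEO * SECONDS_PER_HOUR * SAMPLED_FPS

print(f"frames at 30 fps: {native_frames:,}")
print(f"frames at 1 fps:  {sampled_frames:,}")   # ~1.8B, near the 1.9B reported
print(f"reduction factor: {native_frames // sampled_frames}x")
```

Sampling at 1 fps shrinks the workload by a factor of 30 while keeping roughly one representative frame per second of footage.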

Big Deep Learning Opportunity

 

Chase McMichael gives a talk on video hacking to the ACM, Aug 29th. Photo: Sophia Viklund, used with permission

Google has beautifully created two big parts of the video deep learning trifecta. First, they opened up a video-based labeling system (YouTube-8M). This gives everyone in the industry a leg up in analyzing video; without a labeling system like ImageNet, you would have to do the insane visual analysis on your own. Second, Google open-sourced TensorFlow, their deep learning platform, creating a perfect storm for video deep learning to take off. This is why some call it an artificial intelligence renaissance. The third part of the trifecta is access to a big data pipeline. For Google this is easy, as they have YouTube. Companies that create large amounts of video or user-generated videos will benefit greatly.

The deep learning code and hardware are becoming democratized, and it’s all about the visual pipeline. Having access to a robust data pipeline is the differentiator. Companies that have the data pipeline will create a competitive advantage from this trifecta.

Big Start

Following Google’s lead with TensorFlow, Facebook launched its own open AI platform, FAIR, followed by Baidu. What does this all mean? The visual information disruption is in full motion. We’re in a unique time when machines can see and think. This is the next wave of computing. Video SEO powered by deep learning is on track to be what keywords are to HTML.

Visual search is driving opportunity and lowering technology costs, propelling innovation. Video discovery is no longer bound by what’s in a video description (the meta layer). The use cases for deep learning range from medical image processing to self-flying drones, and that is just a start.

Deep learning will have a profound impact on our daily lives in ways we never imagined.

Both Instagram and Snapchat are using sticker overlays based on facial recognition, and Google Photos sorts your photos better than any other app out there. Now we’re seeing purchases linked with object recognition at Houzz, leveraging product identification powered by deep learning. The future is bright for deep learning and content creation. Very soon we’ll see artificial intelligence producing and editing video.

How do you see video visual search benefiting you, and what exciting use cases can you imagine?

Feature image is a YouTube-8M web interface screenshot taken by Chase McMichael on September 30th.

AdTech? Think “User Tech” For a Better Video Experience

How and why did ad tech become a bad word? Ad tech has become associated with, and blamed for, everything from damaging the user experience (slow load rates) to creating a series of tolls that the advertiser pays for, ultimately at the expense of publishers’ margins. Global warming has a better reputation. Even the VCs are investing more in marketing tech than in the ad tech space.

The Lumascape is denser than ever and, even with consolidation, it will take years before there is clarity. And the newest threats to the ad ecosystem, like viewability, bots, and ad blocking, will continue to motivate scores of new “innovative” companies to help solve these issues, in spite of the anemic valuations ad tech companies are currently seeing from Wall Street and venture firms. The problem is that the genesis of almost all of these technologies begins with the race for the marketing dollar, while the user experience remains an afterthought. A wise man once said, “Improve the user experience and the ad dollars will follow.” Few new companies are born out of this philosophy. The ones that are (Facebook, Google, and Netflix; see How Netflix does A/B testing) are massively successful.

One of the initial promises for publishers engaging their readers on the web was to provide an “interactive” experience, a two-way conversation: the user would choose what they wanted to consume, and editors would serve up more of it, resulting in a happier, more highly engaged user. Serve and respect the user, and you, the publisher, will be rewarded.

This is what my company does.  We have been trying to understand why the vast majority of users don’t click on a video when, in fact, they are there to watch one!  How can publishers make the experience better?   Editors often take great care to select a thumbnail image that they believe their users will click on to start a video and then…nothing.  On average, 85% of videos on publishers’ sites do not get started.

We believe that giving the user control and choice is the answer to this dilemma.  So we developed a patented machine learning platform that responds to the wisdom of the crowds by serving up thumbnail images from publisher videos that the user—not the editor—determines are best. By respecting the user experience with our technology, users are 30% more likely to click on videos when the thumbnails are user-curated.

What does this mean for publishers?   Their users have a better experience because they are actually consuming the most compelling content on the site.  Nothing beats the sight, sound and motion of the video experience. Their users spend more time on the site and are more likely to return to the site in the future to consume video. Importantly from a monetization standpoint, InfiniGraph’s technology “KRAKEN” creates 30% more pre-roll revenue for the publisher.

We started our company with the goal of improving the user experience, and as a result, monetization has followed. This, by the way, enables publishers to create even more video for their users. There are no tricks.  No additional load times.  No videos that follow you down the page to satisfy the viewability requirements for proposals from the big holding companies. Just an incredibly sophisticated machine learning algorithm that helps consumers have a more enjoyable experience on their favorite sites. Our advice?   Forget about “ad tech” solutions.  Think about “User Tech”.   The “ad” part will come.

The live example above demonstrates KRAKEN in action on the movie trailer “Interstellar,” achieving a 16.8X improvement over the traditional static thumbnail image.

FORBES: InfiniGraph Invents Video Thumbnail Optimization

Bruce Rogers, FORBES STAFF
I’m Forbes’ Chief Insights Officer & write about thought leadership.
Originally posted on Forbes

A Series of Forbes Insights Profiles of Thought Leaders Changing the Business Landscape: Chase McMichael, Co-Founder and CEO, InfiniGraph

Optimizing web content to drive higher conversion rates, for a long time, meant only focusing on boosting the number of click-throughs, or figuring out what kinds of static content got shared most often on social media sites.

But what about videos? This key component of many sites went largely overlooked, because there simply wasn’t a good way to determine what actually made viewers want to click on and watch a given video.

Chase_McMichael_Video_Machine_Learning_Headshot_2_2016

Chase McMichael, Co-Founder and CEO, Infinigraph

In an effort to remedy this problem, says entrepreneur Chase McMichael, brand managers may have, at most, tried to simply improve the video’s image quality. Or, in a move like a Hail Mary pass, they might have splashed up even more content, in the hopes that something, anything, would score higher click-to-play rates. Yet even after all that, McMichael says, brands often found that some 90% of viewers still did not watch the videos posted on their sites.

As it turns out, the “thumbnail” image (static visual frame from the video footage) has
everything to do with online video performance. And while several ad tech companies were already out there using so-called A/B testing to optimize the user experience, no one had focused on optimizing video thumbnail images. Given video’s sequencing speed, with thousands of images flashed up for milliseconds at a time, measuring the popularity of thumbnails was simply too complex.

Sensing a challenge, McMichael, a mathematician and physicist with an ever-so-slight east Texas drawl, set out to tackle this issue. He’d already started InfiniGraph, an ad tech firm aimed at tracking and measuring people’s engagement with brand content. But as his company grew, he found that customers began asking more and more about how they might best optimize web videos in order to boost viewership.

Viewership, of course, is key: higher video viewership translates into more shares; more shares mean increased engagement. And that all translates into more revenue for the website. Premium publishers are limited in their ability to create more inventory because the price of entry is so high. Their new in-house studios are producing quality content, but achieving scale is a huge challenge.

When he started looking into it, McMichael says, he often found that the thumbnails posted to attract viewers usually fell flat, and that the process for choosing thumbnails hadn’t changed in 15 years. The realization that the images gained little to no traction among viewers came as something of a surprise: most of the time, the publishers and brand managers themselves had selected specific images for posting with no thought at all to optimizing the image.

According to McMichael, the company’s technology (called “Kraken”) solves for two critical areas for publishers: it creates inventory and the corresponding revenue while also increasing engagement and time spent on site.

Timing, it turns out, was everything for McMichael and InfiniGraph. Image- and object-recognition software had been improving to the point where those milliseconds-at-a-time thumbnails could be slowed down and evaluated more cheaply than in the past. Using that technology along with special algorithms, McMichael created Kraken, a program that breaks down videos into “best possible” thumbnails. Using an API, Kraken monitors which part of the video, or which thumbnail, viewers click on the most. Using machine learning, Kraken then rotates through and posts the best thumbnails to increase the chances that new users will also click on the published thumbnail in order to watch an entire video.

This process is essentially crowd-sourced, says McMichael—the images that users click on the most are those that Kraken pushes back to the sites for more clicks. “What’s fascinating is we’ve had news content, hard news, shocking, all the way up to entertainment, music, sports and it’s pretty much universal,” he says, “that no one [person] picks the right answer”—only the program will provide the best image or images that draw in the most clicks. On its first few experimental runs, InfiniGraph engineers discovered something huge: By repeatedly testing and re-posting certain images, InfiniGraph saw rates of click-to-play increase by, in some cases, 200%. Says McMichael: “It was like found money.”

InfiniGraph is a young and small company, even for a start-up: the Silicon Valley firm has eight employees, in addition to a network of technicians and specialty consultants that McMichael scales on an as-needed basis, and has boot-strapped itself to where it is today. McMichael says he’s built a “very revenue-efficient company” because “everything is running in two data centers and images distributed across a global CDN.” His goal is to be cash-flow positive by this summer. Right now InfiniGraph works exclusively with publishers, but the market is ripe for growth, especially on mobile devices, McMichael says.

Recently, Tom Morrissy, a publishing leader with extensive experience in both publishing (Entertainment Weekly, SpinMedia Group) and video ad tech (Synaptic Digital, Selectable Media) joined InfiniGraph as a Board Advisor.

“So many companies claim to bring a ‘revenue-generating solution that is seamlessly integrated.’ This product creates inventory for premium publishers and is the lightest tech integration I’ve seen. I was completely impressed with Chase’s vision because he truly thought through the technology from the mindset of a publisher. Improve the consumer experience and the ad dollars always follow,” says Tom Morrissy.

The son of a military officer father and a registered nurse mother, McMichael grew up in the small town of New Boston, Texas, located just outside the Red River Army Depot. A self-described “brainiac kid,” McMichael says he was always busying himself with science experiments, with a special interest in superconductors, materials that conduct electricity with zero resistance. Though he’d been accepted to North Texas, McMichael still took a tour at the University of Houston, mainly because the work of one physics professor, who discovered high-temperature superconductivity, had grabbed his attention. “So I went to Paul Chu’s office and said, ‘Hey, I want to work for you.’ It was the craziest thing, but growing up I was always told, ‘If you don’t ask for it, you won’t know.’”

That spawned a seven-year partnership with Chu, during which time the university built a ground-breaking science center. McMichael spent seven years in DARPA-funded applied science but decided to leave for the business world. A friend of McMichael’s worked at Sun Microsystems and encouraged him to leverage his programming knowledge. His first job out of college was creating the ad banner management system for Hearst. “So I got sucked into the whole internet wave and left the hard-core science field,” he says. He also worked at Chase Manhattan Bank in the 90s, building out its online banking business.

As for the future for InfiniGraph?

McMichael says his mission is “to improve the consumer experience on every video across the globe, and it’s an ambitious plan. But we know that there are billions of people holding a phone right now looking at an image. And their thumb is about to click ‘play,’ and we want to help that experience.”

Bruce H. Rogers is the co-author of the recently published book Profitable Brilliance: How Professional Service Firms Become Thought Leaders - Originally posted on Forbes

Think You’ve Picked the Best Video Thumbnail? Think Again — 52 Videos that Prove Video Machine Learning can Double Play Rates

I’ve never met anyone who intentionally picked a bad video thumbnail—but they’re everywhere.

To be clear, bad ≠ ugly. Bad thumbnails are sometimes beautiful. Bad means that people don’t WANT to click on them. After all, the point of a thumbnail is to get people to click “play” or stop scrolling long enough for the video to start playing.

Editors and content creators with years of experience spend a lot of time picking the “best” thumbnails. And publishers posting hundreds of videos daily rely on content management systems (CMS) that suggest or auto-pick thumbnails.

Guess what? They’re usually wrong.

Almost always, there is a better thumbnail for any given video or set of thumbnails.

How? Why?

Because “best” is defined by your audience, not you. You bring your experience and baggage with you every time you pick a thumbnail, and you are different from your audience. Why not take the guesswork out of the equation and use data, not opinion, to choose the right thumbnails every time?

Real Example

Let’s say you’re an editor in LA picking a thumbnail for a video about the latest breaking news topic. You might choose the image to the right:

Now what if your viewer is from Texas? What if that image doesn’t speak to them at all? That doesn’t mean they’re not interested in the topic or wouldn’t want to see the video content, it means that the thumbnail doesn’t make them WANT to click “play.”

KRAKEN Video Machine Learning Teacher
If you had asked your viewers, they would have told you that they preferred seeing the images on the left—all taken from the very same video.

 

 

Our recent post “The Force Awakens” shows another great example and the science behind data-chosen thumbnails.

Your audience isn’t one-size-fits-all. Your thumbnails shouldn’t be either.

Here are 52 videos from last month that prove intelligent selection of images can greatly improve video play rates.  Each has an optimized set of thumbnails that performed 101%–425% BETTER than the original thumbnail.
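For clarity on those percentages: “performed X% better” means relative lift in click-to-play rate. A minimal sketch with hypothetical play rates (the function name and numbers are ours, not from the case studies):

```python
def lift_percent(original_rate: float, optimized_rate: float) -> float:
    """Relative improvement of the optimized thumbnails over the original."""
    return (optimized_rate - original_rate) / original_rate * 100

# Hypothetical: original thumbnail played 8% of the time,
# the optimized set played 16.8% of the time -> roughly a 110% lift.
print(lift_percent(0.08, 0.168))
```

So a “425% better” video means the optimized thumbnails drove more than five times the plays of the original.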

KRAKEN Video Machine Learning Rikers

Quickly though—what is an optimized thumbnail?

Optimized thumbnails are dynamic and rely on machine learning and audience feedback. Our product, KRAKEN, does this all in real time.

So, what the heck does that mean in English?

It means that our computers examine a video and pick a bunch of “best possible” thumbnails, then A/B test them to determine which ones people actually click on. KRAKEN will serve different images to different people depending on a variety of factors, including device and placement. Hey, it’s a patented process!

Said another way, we crowdsource what thumbnails people actually engage with, then show them to future visitors.
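That crowdsourcing loop behaves like a classic multi-armed bandit: serve, observe clicks, shift traffic toward the winner. Here is a minimal epsilon-greedy sketch of the idea; this is purely our illustration, with made-up names and click rates, not KRAKEN’s patented process.

```python
import random

class ThumbnailBandit:
    """Epsilon-greedy sketch of audience-driven thumbnail selection.

    Each candidate thumbnail is an "arm"; a click on the served
    thumbnail is the reward. Illustrative only.
    """

    def __init__(self, thumbnails, epsilon=0.1):
        self.epsilon = epsilon                    # exploration rate
        self.serves = {t: 0 for t in thumbnails}  # times each thumbnail was shown
        self.clicks = {t: 0 for t in thumbnails}  # times it was clicked

    def choose(self):
        # Occasionally explore a random thumbnail; otherwise exploit the
        # one with the best observed click-through rate so far.
        if random.random() < self.epsilon:
            return random.choice(list(self.serves))
        return max(self.serves, key=lambda t: self.clicks[t] / max(self.serves[t], 1))

    def record(self, thumbnail, clicked):
        self.serves[thumbnail] += 1
        self.clicks[thumbnail] += int(clicked)

# Simulated audience: thumbnail "b" genuinely gets clicked most often.
random.seed(42)
true_ctr = {"a": 0.05, "b": 0.20, "c": 0.04}   # hypothetical click rates
bandit = ThumbnailBandit(list(true_ctr))
for _ in range(20_000):
    t = bandit.choose()
    bandit.record(t, random.random() < true_ctr[t])

best = max(bandit.serves, key=bandit.serves.get)
print(best)  # the crowd's favorite ends up served most often
```

The key property is the feedback loop: the more evidence accumulates that one thumbnail outperforms the rest, the more traffic it receives, without ever fully abandoning the alternatives.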

Results – Before & After

Think sports fans will click on any video related to their team? Think again. Optimized thumbnails performed 198% better than the original:
KRAKEN Video Machine Learning NY Giants Original Thumbnail KRAKEN Video Machine Learning NY Giants
                    Original Thumbnail                                 KRAKEN Optimized Visuals

Optimized thumbnails work for ‘hard’ news videos, too. This video about Enrique Marquez’s ties to the San Bernardino gunmen had a 205% lift:
KRAKEN Video Machine Learning Enrique Original Thumbnail KRAKEN Video Machine Learning Enrique
                    Original Thumbnail                                 KRAKEN Optimized Visuals

Kardashians—love them or hate them, right? It turns out that optimized thumbnails can produce a 128% lift in video play rates:
KRAKEN Video Machine Learning Kardashian Original Thumbnail KRAKEN Video Machine Learning Kardashian
                    Original Thumbnail                                 KRAKEN Optimized Visuals

From earlier in the article: the Rikers Island guard video saw a 157% lift, while the video of a teacher under fire for her lesson on Islam saw a 127% lift.

Our top performing video of December saw a 425% lift.  Here’s an overview of all 52:
[Chart: all 52 December 2015 videos with 100%+ lift]

 

What could you do with double the video plays (or 3X or 4X)?

Would it double your video revenue? Satisfy your audience because more of them are seeing your awesome video content (after all, that’s why they’re on your site in the first place)?

The good news is your “best” thumbnails already exist and are buried in your existing videos. You just need to release the KRAKEN and get them to the surface.

Leave a comment below and tell us your thoughts. If you are interested in links to all 52 top performing videos, send me an email at ryan.shane@infinigraph.com—I like talking with new people.

Video Machine Learning Skyrockets Mobile Engagement by 16.8X (Case Study)

Video machine learning technology called KRAKEN skyrockets mobile consumer engagement by 16.8X for the Interstellar Trailer (case study).

COMPANY

Social networking for influential moms
SocialMoms began in 2008 as a popular community site for moms looking to build their reach and influence through social networking, traditional media opportunities, and brand sponsorships. It now boasts over 45,000 bloggers, reaches more than 55 million people each month, and has a network of influencers with more than 300 million followers on Twitter.

CHALLENGE

Create engaging mobile digital media campaigns for women 25-49
[Image: Interstellar movie poster]
SocialMoms brings top brands to millions of women each month. They are responsible for ensuring that each campaign not only reaches the intended audience, but is also engaging and meaningful. However, getting meaningful audience engagement with video campaigns on smartphones proved challenging.

SOLUTION

Responsive visuals optimized for mobile

KRAKEN replaces a video’s old, static default thumbnail with a responsive set of “Lead Visuals” taken from the video. It treats each endpoint differently, so it can optimize a movie with one set of visuals for a desktop site and another set for a mobile site, because people respond differently depending on which device they use for viewing.
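The "treat each endpoint differently" idea can be roughly illustrated by keeping play-rate stats separately per (endpoint, visual) pair and picking a winner per endpoint. A minimal sketch, with hypothetical names, and again not KRAKEN's actual implementation:

```python
from collections import defaultdict

# Play-rate stats kept separately per (endpoint, visual) pair, since the
# same frame can perform very differently on desktop vs. mobile.
stats = defaultdict(lambda: {"shows": 0, "plays": 0})

def record(endpoint, visual, played):
    s = stats[(endpoint, visual)]
    s["shows"] += 1
    s["plays"] += int(played)

def best_for(endpoint, candidates):
    """Pick the best-performing visual for this specific endpoint."""
    def play_rate(visual):
        s = stats[(endpoint, visual)]
        return s["plays"] / s["shows"] if s["shows"] else 0.0
    return max(candidates, key=play_rate)
```

With feedback flowing in per device, mobile and desktop can converge on different winning visuals for the same movie.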

RESULT

Maximum lift of 16.8X on mobile for the Interstellar campaign
[Chart: Interstellar campaign engagement lift]
After KRAKEN’s “Lead Visuals” optimization, engagement via mobile skyrocketed: SocialMoms saw more than 16.8X the engagement of the original default thumbnail that had been chosen for the desktop site. They also reported higher completion rates when running KRAKEN.

 

“We’re seeing the highest engagement levels for our customers using InfiniGraph’s KRAKEN-powered content.”
– Jim Calhoun, COO
SocialMoms

 

 

Download InfiniGraph’s Interstellar Case Study (PDF)

Read our Birdman Case Study

Would you like to learn more about video machine learning?  Request a demo!

Mobile Video Play Rate Boosted by 200% to 3000% – KRAKEN Release – Case Study

Problem:

Video is the largest and fastest-growing segment in online marketing. Unfortunately, a consumer’s first impression of a video is more than likely a static image, and there isn’t a simple way to adjust that image programmatically based on audience intelligence. This problem leaves billions on the table in un-played videos and lost engagement, all for lack of compelling starting visuals.


Baglan Nurhan Rhymes, SVP of Revenue, AnchorFree

“In a highly competitive Ad Tech space, where videos drive the lion’s share of revenues, InfiniGraph’s technology, Kraken, is the first real breakthrough we have seen in many years.

I can see Kraken being implemented by digital broadcast networks, publishers, ad networks and video player platforms in the very near future. Early adopters will turbocharge their video ad revenues on desktop and mobile.”

Initial Results:

Our beta customers (Disney, Paramount, Microsoft and AnchorFree) have experienced between 10% and 3000% lifts in click-through and play rates on their video content using InfiniGraph’s patented Kraken technology.
[Chart: AnchorFree mobile video play rate boost]

Example –  AnchorFree before and after on “50 Shades of Grey”:

  • 200%+ boost in play rate
  • 10X increase in private viewing sales

InfiniGraph’s machine learning technology achieves scale by producing the highest possible response to mobile video based on audience behavior.

Learning algorithms matter:

[Image: Birdman running on AnchorFree]
Improving mobile video play rates is more of a science than an art, as seen in the second example on AnchorFree running “Birdman”.  Is your video “thumb-stopping” when a consumer scrolls through their feed?  In the case of Birdman, the results were amazing.

Overall video play rates grew by:

  • 3000%+ boost in play rate
  • 2.6% peak click-through rate

 

[Chart: Birdman mobile video performance with KRAKEN]
Solution:

The Kraken machine learning system continuously analyzes the video and user interactions at every content distribution endpoint, over many sequences. This decision making happens in near real time.

Value:

InfiniGraph’s mission is to help video content owners, publishers, and agencies deliver the most relevant video experience. This boosts video starts and video completion rates, and increases page-visit depth by eliminating creative burn and waste. It translates into higher revenues from existing content through higher video starts, higher VCRs, and stronger brand engagement.

Our clients, who have implemented our proprietary video click-through / play-rate enhancement technology called Kraken, have experienced upwards of 3000% lift in content plays.

As an advertising, content production, or brand management professional, you know that video creates the greatest impact in your online marketing. The ongoing challenge, and the measure of success, is execution on play rates and video completion rates.

Want to see more or run your own test? Contact Us