So, you’ve developed an OTT app and marketed it to your viewers. Now your focus is on keeping them watching. How can machine learning drive more engagement? Let’s face it: they may have a favorite show or two, but to keep them engaged for the long term, they need to be able to discover new shows. Because OTT is watched on TVs, you have a lot of real estate to engage with your viewers. A video’s thumbnail has more impact on OTT than on any other platform, so choose your thumbnails carefully!
Discovery is different on different platforms
On desktop, most video views start with either a search (e.g. Google) or a social share (e.g. Facebook). Headlines and articles provide additional information to get a viewer to cognitively commit to watching a video. Autoplay runs rampant, removing the decision to press “play” from the user.
TVs have a lot more real estate than smartphones
On a smartphone, small screen size is an issue. InfiniGraph’s machine learning data shows that more than three objects in a thumbnail will reduce play rates. Again, social plays a huge role in the discovery of new content, with some publishers reporting that almost half of their mobile traffic originates from Facebook.
OTT Discovery is Unique
The discovery process on OTT is unique because the OTT experience is unique. Most viewers already have something in mind when they turn on their OTT device. In fact, Hulu claims it can predict with 70% accuracy the top three shows each of its users is tuning in to see. But what about the other 30%? What about the discovery of new shows?
Netflix AB Test Example
Netflix has said that if a user can’t find something to watch in 30 seconds, they’ll leave the platform. So Netflix started A/B testing their thumbnails to see what impact it would have, and discovered that different audiences engage with different images. They were able to increase view rates by 20-30% for some videos just by using better images! In the on-demand world of OTT, the right image is the difference between a satisfied viewer and a user who abandons your platform. If you’re interested in increasing engagement on your OTT app, reach out to us at InfiniGraph to learn more about KRAKEN, our machine learning technology that chooses the best images for the right audience, every single time. Also, check out our post about increasing your video ad inventory!
We had the unique opportunity to talk with Tom Morrissy, our Board Advisor, in Times Square, NY. Below is the transcript of Tom and Chase discussing his perspective on publishers and KRAKEN, our video machine learning technology.
Chase: Hi, I’m Chase McMichael, CEO and Co-founder of InfiniGraph, and I’m here today with Tom Morrissy, our Board Advisor. Hi Tom.
Tom: Hi Chase.
Chase: So Tom, tell me a little about your experience working with us. Obviously you’re a media guy, ex-SpinMedia, and you know more about the media space than we do. We’d really like your viewpoint on the space and some of the issues. What do we do to really break in? Obviously, cracking mobile video is a big deal. So tell me what you think.
Tom: One thing we’re finding as we talk to different publishers, and it’s one of the reasons I joined the company, is they all have very similar pain points. There’s a certain barrier to entry with video. There’s only so much video they can create, but getting it seen and consumed by consumers is a huge challenge. So, how can we get the consumer excited enough about the content they are viewing to actually click play, actually watch it, and engage more deeply on these websites? As a result, we create more inventory for the publishers to monetize their video plays.
Really, what it comes down to is starting with a better user experience and then being able to monetize that experience in a much more accelerated way. That’s the magic of what this company has done, in my eyes, because it took the publisher’s view through the user experience. Most ad tech starts with how to make more money off the user; what our technology does is empower the user, and that’s a vital difference relative to all the other technologies I’ve seen.
Having been on the publisher’s side, then on the ad tech side, and then back to the publisher’s side, I was getting email after email, LinkedIn request after LinkedIn request, saying “I’ve got a seamless, revenue-generating integration opportunity for you.” The truth of the matter is there is no seamless integration. Anybody who promises that is not telling the full truth, because we all know that’s not the case.
So, the question becomes: as a publisher, what do you want to do? Where do you want to spend your time and resources to integrate? What kind of impact is it going to have on your audience, and what kind of impact will it ultimately have on your business? That’s what InfiniGraph takes into account. We created the most frictionless integration opportunity I’ve seen, built from the viewpoint of a publisher, figuring out a way to excite the reader and therefore grow the business.
Chase: Yeah, we’re just embedding it in the video player, about as low-touch as we can get. We’re not touching the CMS, we don’t change the workflow; that’s a death knell. One of the things you say a lot is “it’s about the content, especially premium content.” I like that we’re really going at this from a content angle, not the ad angle of how to juice the user and get the most money out of them. That matters for some publishers, but the reality is, if they don’t play an incredibly strong content game, they’re going to lose the audience. So talk a little more about that.
Tom: The truth of the matter is, you don’t have that much time to engage the audience. You can have killer content, but the audience may not choose to view it for whatever reason. You’ve got to figure out ways to engage them and get them into what we call cognitive thinking, making the commitment to watch the content. You can have the best movie, the best video clip, but if nobody watches it, it’s like a tree falling in the forest, right?
Chase: Yeah, that’s a great point, because on mobile we’ve been measuring it: you’ve got 0.25 seconds, man, and if you’re not “thumbstopping,” the viewer just keeps scrolling and you’re going to miss them. The video is a linear body of work with so much content in it, but you’re starting out with just one image. What do you do?
Tom: Right, you have to figure out what’s going to motivate that person. Think about it: they have chosen to go to that space to watch a video, and ninety percent of them, on average, are not watching. So why?
Chase: Yes, they’re bailing.
Tom: How can we create a better experience and make them more motivated to watch the video they came there to watch in the first place? That’s the weirdest thing about this market. That’s what keeps me up at night, and what keeps you up at night, trying to solve for that. The promise of what our company is trying to do is really help lead the consumer to the choice they always told themselves they wanted to make.
Tom: And if you can do that, then you have a much more robust relationship with your consumer. They spend more time on your site, they watch more video, which is why they’re there, and then you, as a publisher, make more money. It’s as simple as that.
Chase: Right. Thanks, Tom. It’s great to have you on board, and awesome to be here in the Big Apple. And hey viewers, click on the (i) up on the right to see some more information, and we’ll be back at you soon. Thank you.
VIDEO – Better User Experience, Time on Site and Converting Readers into Viewers.
Video Optimization With Machine Learning is now a reality and publishers are intelligently making the most out of their O&O digital assets. The digital video industry is undergoing a transformation and machine learning is advancing the video user experience. Mobile, combined with video, is truly the definitive on-demand platform making it the fastest growing sector in digital content distribution.
Video machine learning is a new field. The ability to crowdsource massive numbers of human interactions with video content has created a new data set. We’re tapping into a small part of the human collective consciousness for the first time. Publishers and media broadcasters are now going beyond video views, clicks, and completions to gain insight into the video objects, orientations, and types of movement that induce a positive cognitive response. This human cognitive response is the ultimate measurement of relevance, where humans interact with video in a much more profound way. In this article, we will dive deep into the four drivers of video machine learning.
Video by its nature is linear; however, several companies are working to personalize the video experience as well as make it live. We’re now at the peak of the hype that Virtual Reality / Augmented Reality will provide the most immersive experience. All of these forms of video have two things in common: moving sights and sound. Humans by nature prefer video because this is how we see the world around us. The bulk of video consumed globally is designed around a linear body of work that tells a story. People don’t think much about the fact that a video is just a series of images connected together. In the days of film, seeing a real film strip from a movie reel made it obvious that each frame was in fact a still image. Fast forward to today: digital video still has frames, but those frames are made up of 1s and 0s. “Digital” opens the door to advanced mathematics and image / object recognition technologies that process these images into more meaning than just a static picture.
It’s hard to believe how important images really are. For videos placed “above the fold,” you have to wonder why so many have such a low play rate to begin with (Video Start CTR). Consumers process objects in images within 13 milliseconds (0.013 seconds). That’s FAST! Cognitive attention has to be captured extremely quickly for a human to commit to watching a video, and the first image is important, but not everything. More than one image is sometimes required to ensure a positive cognitive response. The reality is that people are flat-out dismissive, and some decide not to play the video. This is evident when you have a 10% CTR, which means 90% of your audience OPTED OUT OF PLAYING THE VIDEO. What happened? The first image may have been great, but it didn’t create a full mental picture of what was possible in the linear body of work. You’re not going to get 100% play rates; however, providing greater cognitive stimulation that builds relevance gives viewers greater reason to commit time to watching a linear piece of video.
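The arithmetic behind that opt-out figure is simple. A tiny illustrative sketch (function names are ours, not InfiniGraph’s):

```python
# Hypothetical illustration of play rate (Video Start CTR) and the
# opt-out share described above. Names and numbers are ours.

def play_rate(plays: int, impressions: int) -> float:
    """Fraction of impressions that resulted in a video start."""
    if impressions == 0:
        return 0.0
    return plays / impressions

impressions = 1000
plays = 100  # the 10% CTR example from the text

ctr = play_rate(plays, impressions)
opt_out = 1 - ctr

print(f"Play rate: {ctr:.0%}")   # Play rate: 10%
print(f"Opted out: {opt_out:.0%}")  # Opted out: 90%
```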
Machine Learning and Algorithms
In the last four years, machine learning / artificial intelligence has exploded with new algorithms, and advances in computing power have greatly reduced the cost of complex computations. Machine learning is transforming the way information is interpreted and used to gain actionable insights. With the recent open-sourcing of TensorFlow from Google and advances in Torch from Facebook, these machine learning platforms have truly disrupted the entire artificial intelligence industry.
Feature extraction and classification are key to learning what it is in the image that achieves a positive response.
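To make that concrete, here is a deliberately toy sketch of the pipeline: extract features from an image, then classify with a nearest-centroid rule. Real systems (e.g. the TensorFlow / Torch models mentioned above) use learned deep features; our stand-in uses a brightness histogram purely to illustrate the two steps.

```python
# Toy feature-extraction + classification sketch (our assumption of the
# general pipeline shape, not InfiniGraph's actual implementation).

def histogram_features(pixels, bins=4):
    """Bucket grayscale pixel values (0-255) into a normalized histogram."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels) or 1
    return [c / total for c in counts]

def nearest_centroid(features, centroids):
    """Assign the feature vector to the closest class centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Toy "images": flat lists of grayscale values.
dark_image = [10, 20, 30, 40] * 8
bright_image = [220, 230, 240, 250] * 8

centroids = {
    "dark": histogram_features(dark_image),
    "bright": histogram_features(bright_image),
}

print(nearest_centroid(histogram_features([15, 25, 35, 45] * 8), centroids))  # dark
```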
Major hardware providers, such as NVIDIA, have ushered in massive advancements in the machine learning and AI fields that would otherwise have been out of reach. The democratization of machine learning is opening the door for many small teams to propel product development around meaningful algorithmic approaches.
The unique properties of digital video, specifically in a consumer’s mobile feed delivered from a video publishing site, create a perfect window into how consumers snack on content. If you want to see hyper-snacking, ride a train into a city or watch kids on their smartphones. Digital content consumption has never been more interactive than it is now. All digital publishers and broadcasters have to ask themselves, “How is my content going to get traction with this type of behavior?” If your audience is Snapchatters, YouTubers, or Instagrammers, you’re going to have to provide more value in your content VISUALLY, or you will lose them in a split second.
Graphs – Video views (mobile, KMView / desktop, KDView) vs. minutes in a day (1440 min = 24 hrs). Mobile dominates the weekend, whereas during the work week usage skyrockets during the commute and after work. Is your video content adapting to this behavior?
Video Publishing Conundrum
A big conundrum is why people are not playing videos. This required further investigation. We found that the lead image (the old-school “thumbnail,” or “poster image”) had a huge impact on inducing a cognitive response. In the mobile world, video is still a consumer-driven response, and we hope it will stay a click-to-play world. We believe consumer choice and control will always win the day. Consumers will quickly tire of publishers, under the revenue gun, resorting to native ad content tricks, in-stream (auto-play) video, and the bludgeoning force-feeding of video on the desktop. No wonder ad blocking is at an all-time high! A whole industry is cropping up around blocking ads, and it’s an all-out war. The sad part is that the consumer is stuck in the middle.
Many publishers use desktop video auto-play to reduce friction; however, the FRONT of the page, the video carousel, or the gallery is a click-to-launch environment, making the images on the published page even more important. Those fronts are the main traffic driver, ahead of any social-share amplification. As for mobile video, it’s still a click-to-play world for the majority of broadcasters and publishers. Video is the most engaging vehicle at a publisher’s disposal, and it is why so many publishers are pushing themselves to create more video content. Publishing more video-oriented content is great; however, the lack of knowledge of what consumers emotionally respond to has been a major gap. A post-and-pray (or post-and-measure-later) system is currently prevalent throughout the publishing industry.
Video Quality matters
Creating a better consumer experience is everything if you want your content consumed in days when auto-play is rampant and content is force-fed to induce engagement. More brands demand measured engagement. Video engagement quality is measured by starts, length of time on the video, and physical actions taken. Capturing human attention is very hard amid so many distractions, especially on a mobile device. We’re in a phase where the majority of connected humans are digital natives in this digital deluge, and ADD is at an all-time high. With less than 0.25 seconds to get consumers to engage before they have formulated the video’s story line in their minds, it’s a hard task. A quick peek at the video thumbnail, a fast read of a headline, and a glance at some keywords could be standing between you and a revenue-generating video play. People are pressed for time and unwilling to commit to a video play unless it induces a real cognitive response. Translating readers into video viewers is important, and keeping them is even more important.
Mobile Video and Machine Learning
Mobile is becoming the prevalent method of on-demand video access. The combination of video and mobile is an explosive pair, and most likely the most powerful marketing conduit ever created. Here we have investigated how machine learning algorithms applied to images can provide real-time insight and decision support to catch the consumer’s attention and recover video yield that would otherwise be lost. The big challenge with video is that it is created in a linear format, loaded into a CMS, put up for publishing, and then you pray it gets traction. Promotion helps and placement matters; however, there is really nothing a publisher can do to adjust the video content once it’s out. Enter video intelligence. The ability to measure video engagement in real time is a game changer. Enabling intelligence within video seems intuitive; however, the complexity of encoding and decoding video has created a sufficient barrier to entry that this area of video intelligence has been otherwise untapped.
How and Why KRAKEN Works
Here we dive deep into how consumers interact with certain visual objects to create a positive response before a video is played. InfiniGraph invented a technology called KRAKEN that shows a series of images. The series itself, what we call “image rotation,” is not really new. What’s new is the selection of those images using machine learning algorithms, allowing us to adjust them to achieve the highest human response possible.
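One simple way such data-driven selection could work is an epsilon-greedy bandit over candidate thumbnails: mostly serve the best-performing image, occasionally explore the others. This is our illustrative approximation, not KRAKEN’s actual (patented) algorithm.

```python
# Hedged sketch: epsilon-greedy thumbnail selection over observed
# play-rate statistics. Illustrative only, not KRAKEN's real method.

import random

def choose_thumbnail(stats, epsilon=0.1, rng=random):
    """stats: {image_id: [plays, impressions]}. Explore with probability
    epsilon; otherwise exploit the image with the best observed play rate."""
    if rng.random() < epsilon:
        return rng.choice(list(stats))
    return max(stats, key=lambda img: stats[img][0] / max(stats[img][1], 1))

def record(stats, image_id, played):
    """Update the running counts after serving image_id once."""
    plays, impressions = stats[image_id]
    stats[image_id] = [plays + int(played), impressions + 1]

# Invented example data: three candidate frames with observed stats.
stats = {"frame_a": [30, 100], "frame_b": [55, 100], "frame_c": [12, 100]}
best = choose_thumbnail(stats, epsilon=0.0)  # pure exploitation
print(best)  # frame_b
```

Over many impressions, the exploration step keeps testing weaker images so the system can notice if audience preference shifts.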
GRAPH – Lift by KRAKEN on mobile (KMLIFT) vs. desktop (KDLIFT) on the same day. NOTE the groupings before and after lunch show an overall higher boost from KRAKEN. We attribute this behavior to less distraction.
As more images are processed by KRAKEN, the system becomes smarter, selecting better lead images and driving higher video efficiency. Choosing the best order in which to sequence the images is another part of the learning mechanism. An image sequence is derived from a collection of 1 to 4 images, selected based on KRAKEN’s ranking linked with human actions. The visuals that achieve the highest degree of engagement receive a higher KRAKEN rank. The sequence also creates a visual story, maximizing the limited time available to capture a consumer’s attention.
KRAKEN in Action
KRAKEN determines the best possible thumbnails for any video using machine learning and audience testing. Once it finds the top 1-4 images, it rotates through them to further increase click-to-play rates. It also A/B tests against the original thumbnail to continually demonstrate its benefit. Here are two real examples:
KRAKEN thumbnails with 273% lift below. What makes a good video lead image unique? We’re asked this question all the time. Why would someone click on one image versus another? These questions are extremely context- and content-dependent. The number of visual objects in the frame has a great deal to do with how humans determine relevance and feel intrigue or desire. The human brain sees shapes first, in black and white; color is a third-order response, though red has its own visual alerting system. The human brain can process vast sums of visual information quickly, but the digital real estate on mobile and desktop can be vastly different. A great example is what we call information packaging: a smaller image on a mobile phone may only support 2 or 3 visual objects that a human will quickly recognize and respond to positively, whereas desktop can support up to 5. Remember, one size doesn’t fit all, especially in mobile video. KRAKEN thumbnails with 217% lift to the left. (Trick your brain: black and white photo turns to colour! Colour: The Spectrum of Science, BBC)
4 drivers of video machine learning
Who benefits from video machine learning? The consumer benefits most, through an improved experience: a more visually accurate compilation of the video content’s best moments. It’s critical that people get a sense of the video so they commit to playing it and stick around. Obviously the publisher or broadcaster benefits financially from more video consumption, which also yields higher social shares.
Color depth: remember, bright colors don’t always yield the best results. Visuals that depict action or motion elicit a higher response. The background can greatly alter color perception, so images with a complementary background let the human eye pick up the colors that best represent what viewers are looking at, creating greater intrigue.
Image sequencing: sequencing the wrong or bad images together doesn’t help; it turns viewers off. The right collection is everything and could be anywhere from 1 to 4 images. Knowing when to alter or shift the sequence is key to obtaining the highest degree of engagement. The goal is to create a visual story that improves the consumer experience.
Visual processing: the human brain can process vast amounts of visual information fast, but the digital real estate on mobile and desktop differs. A great example is what we call “information packaging”: a smaller image on a mobile phone screen may only support 2 or 3 visual objects in view that humans can quickly recognize and respond to positively, whereas desktop can support up to 5. One size doesn’t fit all, especially in mobile video.
Object classification: understanding what’s in an image and classifying those images provides a library of top-performing images. With the right classification, these images create a unique data set for use in everything from recommendation to prediction. Knowing what’s in the image is just as important as knowing it was acted on.
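The image-sequencing driver above can be sketched in a few lines: rank candidate frames by an engagement score and keep the top 1 to 4 as the rotation sequence. The scores here are invented stand-ins, not KRAKEN’s real ranking.

```python
# Illustrative sketch of building a 1-to-4 image rotation sequence from
# per-frame engagement scores (invented data, not KRAKEN's ranking).

def build_sequence(scores, max_len=4):
    """Return up to max_len image ids, best score first."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:max_len]

# Hypothetical engagement scores for five candidate frames.
scores = {"f1": 0.08, "f2": 0.31, "f3": 0.22, "f4": 0.05, "f5": 0.19}
print(build_sequence(scores))  # ['f2', 'f3', 'f5', 'f1']
```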
The first impression is everything, or maybe the second or third if you are showing a sequence of images. For publishers and digital broadcasters, adapting to their customers’ content consumption preferences and being on the platforms that yield the most will be an ongoing saga. Nurturing your audience and improving their viewing experience will be key as more and more consumers move to mobile. KRAKEN is just the start of using machine learning to create a better user experience in mobile video. We see video intelligence expanding into prediction and into VR / AR in the not-too-distant future. As this unique dataset expands, we look forward to getting your feedback on other exciting use cases and finding ways to increase the overall yield on your existing video assets.
Tell us what you think and where you see mobile video going in your business.
I’ve never met anyone who intentionally picked a bad video thumbnail—but they’re everywhere.
To be clear, bad ≠ ugly. Bad thumbnails are sometimes beautiful. Bad means that people don’t WANT to click on them. After all, the point of a thumbnail is to get people to click “play,” or at least to stop scrolling long enough for the video to start playing.
Editors and content creators with years of experience spend a lot of time picking “best” thumbnails. And publishers posting hundreds of videos daily rely on content management systems (CMS) that suggest or auto-pick thumbnails.
Guess what? They’re usually wrong.
Almost always, there is a better thumbnail for any given video or set of thumbnails.
Because “best” is defined by your audience, not you. You bring your experience and baggage with you every time you pick a thumbnail, and you are different from your audience. Why not take the guesswork out of the equation and use data, not opinion, to choose the right thumbnails every time?
Let’s say you’re an editor in LA and pick a thumbnail for a video about the latest breaking news topic. You might choose this image to the right:
Now what if your viewer is from Texas? What if that image doesn’t speak to them at all? That doesn’t mean they’re not interested in the topic or wouldn’t want to see the video content, it means that the thumbnail doesn’t make them WANT to click “play.”
If you had asked your viewers, they would have told you that they preferred seeing the images on the left—all taken from the very same video.
Our recent post “The Force Awakens” shows another great example and the science behind data-chosen thumbnails.
Your audience isn’t one-size-fits-all. Your thumbnails shouldn’t be either.
Here are 52 videos from last month that prove intelligent selection of images can greatly improve video play rates. Each has an optimized set of thumbnails that performed 101%–425% BETTER than the original thumbnail.
Quickly though—what is an optimized thumbnail?
Optimized thumbnails are dynamic and rely on machine learning and audience feedback. Our product, KRAKEN, does all of this in real time.
So, what the heck does that mean in English?
It means that our computers examine a video and pick a bunch of “best possible” thumbnails, then A/B test them to determine which ones people actually click on. KRAKEN will serve different images to different people depending on a variety of factors, including device and placement. Hey, it’s a patented process!
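The “performed X% better” figures in the results below come from a straightforward lift calculation on the A/B test. A toy version with invented numbers:

```python
# Toy A/B lift calculation: percentage improvement of the optimized
# thumbnail set over the original. Numbers are invented for illustration.

def lift(original_ctr: float, optimized_ctr: float) -> float:
    """Percentage improvement of optimized over original click-to-play rate."""
    return (optimized_ctr - original_ctr) / original_ctr * 100

original = 50 / 1000      # 5% play rate on the original thumbnail
optimized = 149 / 1000    # 14.9% on the optimized set

print(f"{lift(original, optimized):.0f}% lift")  # 198% lift
```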
Said another way, we crowdsource what thumbnails people actually engage with, then show them to future visitors.
Results – Before & After
Think sports fans will click on any video related to their team? Think again. Optimized thumbnails performed 198% better than the original: Original Thumbnail KRAKEN Optimized Visuals
Optimized thumbnails work for ‘hard’ news videos, too. This video about Enrique Marquez’s ties to the San Bernardino gunmen had a 205% lift: Original Thumbnail KRAKEN Optimized Visuals
Kardashians—love them or hate them, right? It turns out that optimized thumbnails can produce a 128% lift in video play rates: Original Thumbnail KRAKEN Optimized Visuals
From earlier in the article: the Rikers Island Guard video saw a 157% lift, while the video of a Teacher under fire for her lesson on Islam saw a 127% lift.
Our top performing video of December saw a 425% lift. Here’s an overview of all 52:
What could you do with double the video plays (or 3X or 4X)?
Would it double your video revenue? Satisfy your audience because more of them are seeing your awesome video content (after all, that’s why they’re on your site in the first place)?
The good news is your “best” thumbnails already exist and are buried in your existing videos. You just need to release the KRAKEN and get them to the surface.
Leave a comment below and tell us your thoughts. If you are interested in links to all 52 top performing videos, send me an email at firstname.lastname@example.org—I like talking with new people.
The Star Wars: The Force Awakens trailer achieves a massive boost (41% gain) with video machine learning, using visual-sequence storytelling. Optimizing video is now a must for publishers looking to maximize their video assets and engage customers with content relevant to them. Embrace the “FORCE.” Above is a live example of KRAKEN’s “image rotation” in action, powered by video machine learning, as seen on NYDailyNews. The image sequencing is created by KRAKEN and integrated directly inside the video player via the KRAKEN API.
The impression a video makes on a consumer is everything, especially on mobile. What you typically see in a video player is a still image with a large play-button overlay. This thumbnail image has been stuck in a static world for over 15 years. The old-school static thumbnail on video is dead, and auto-play is frankly annoying.
Image quality is important, but our findings show that consumers don’t select the objectively best image; they prefer the ones that intrigue the human mind.
However, static thumbnail selection still depends on the person who uploads the video. This process does not scale to thousands of videos over a short period of time. That is why the majority of commercial video platforms auto-select a fixed time slice from the video and hope for the best.
Static thumbnail selection with customized thumbnail upload. All video platforms provide this manual feature, along with an auto-selected default.
Humans cannot optimize or adjust creative on the fly to increase video performance. Many attempts at A/B testing have proven helpful; however, they produce limited results due to their manual nature.
Video machine learning has come of age because it is cost-effective and enables publishers to use the FORCE. Image sequencing is not a new idea; it has been used for centuries in visual storytelling.
Video machine learning makes it possible to scale image sequencing over thousands of video placements and millions of plays. Video has gone from a static world to a dynamic and intelligent world. Star Wars: The Force Awakens Trailer benefited tremendously from video machine learning with a lift of 41%.
Another major bonus of video machine learning is the ability to scale and combat image fatigue (decreasing engagement over time).
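One simple way to model image fatigue is an exponential decay applied to an image’s engagement score the longer it stays in rotation. This is purely our assumption for illustration; the article does not describe KRAKEN’s actual fatigue handling.

```python
# Hypothetical image-fatigue model: halve an image's effective score
# every half_life_days it stays in rotation (our assumption, not a
# documented KRAKEN formula).

def fatigued_score(base_score: float, days_in_rotation: float,
                   half_life_days: float = 7) -> float:
    """Exponentially decay base_score over time in rotation."""
    return base_score * 0.5 ** (days_in_rotation / half_life_days)

print(round(fatigued_score(0.30, 7), 3))   # 0.15 after one half-life
print(round(fatigued_score(0.30, 14), 3))  # 0.075 after two
```

When a fatigued image’s decayed score drops below a fresher candidate’s, the rotation would swap it out, which is one way scaling could combat decreasing engagement over time.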
Capturing a consumer’s attention has never been harder than now. Consumers are glued to their smartphones and every millisecond counts. Publishers are reverting to the annoying auto play tactic, however, consumers are pushing back and complaining. Fox has responded to consumer feedback by offering a feature to turn auto play off. The growth of mobile video will continue to increase massively for publishers optimizing video. Machine learning will continue to help them benefit and maximize their valuable video assets.
Do you want to learn more about KRAKEN and hear what others are saying about video machine learning? Check out our testimonials and intro below. Thanks for your input and thoughts on our journey in video machine learning.
Ryan Shane, VP of Sales
Want to increase your video play rates and revenue? Contact us for a 1:1 demo to access customer use cases and see live examples of both mobile and desktop implementations.
Introducing Baglan Rhymes, Chief Digital Officer at AnchorFree, with Chase McMichael, CEO of InfiniGraph, discussing the recent success of the video machine learning technology KRAKEN on AnchorFree’s video ads page. Video machine learning customer testimonial: the case studies discussed in this video are Fifty Shades of Grey, American Sniper, and Birdman.
Chase: Hi, I’m Chase McMichael, CEO and Co-Founder of InfiniGraph, and I’m here today with Baglan Rhymes, the Chief Digital Officer of AnchorFree. Hi Baglan.
Baglan: Hi Chase.
Chase: So tell us a little about AnchorFree.
Baglan: Of course. AnchorFree is the world’s largest internet freedom platform, and our mission is to provide secure and uncensored access to the world’s information for every single person on the planet. To date, we’ve been installed 300 million times. We have 30 million monthly active users, and we secure approximately 5 billion page views.
Chase: That’s excellent. Obviously, we got connected around our video machine learning technology, KRAKEN.
Baglan: Yes.
Chase: And one of the things was that you’re using a monetization page with video on the free sites.
Baglan: Correct.
Chase: Tell us a little more about that.
Baglan: Yes. We have a free service and a subscription-based service, and the revenue stream for the free service is our content sponsors, be it movie studios or news organizations. We have our own content discovery platform with tiles of video content and static content that we present to users upon connect. We don’t make any revenue off the videos unless users click on them. So how do we get users to click on a video when we have maybe 5 or 10 seconds of their attention right upon connection? That’s when we connected. We partnered with you on click-to-play videos to increase the click-to-play rate, because unless those videos are played we don’t get paid, and through your machine learning algorithms we were able to increase the click rate.
Baglan: Click-to-view rates grew 20 to 30 times on videos overall. We ran a test on Fifty Shades of Grey and American Sniper, and afterwards we did Birdman, where we got that ridiculous number: a 3,000% increase in click-to-play rate. A fight scene in tighty-whities. I actually remember asking you to remove that one; I said we can’t show it there, and you kept it, that tighty-whities fight scene.
Chase: That was the best one!
Baglan: Exactly. A 3,000% increase in click-to-play rates, and I’m so happy we kept it.
Chase: That’s the one that boosted the most revenue. So where do you see things going, especially around the consumer in mobile?
Baglan: Video is the way users consume content now. Whenever we see a video associated with a brand, we see a 96% increase in purchase intent and a 139% increase in brand recall. Even our conversations with friends are now in the form of video. The whole of communication is changing from voice to audio, visuals, and emotions, which is video.
Chase: Thank you so much, Baglan. Please be sure to click on the (i) above to get more information. Thank you.
Video marketing is being revolutionized by machine learning, fast data, and artificial intelligence. The dawn of data-driven video is upon us. Video takes the lion’s share of marketing spend, and fast-growing mobile video is surpassing all other marketing methods. Understanding behavior and content consumption is key to optimizing mobile video. Brands have an insatiable appetite for consumer engagement, as evidenced by the brand adoption of video reported by YouTube, Facebook, and InMobi.
Video industry leaders who embrace these advanced technologies will establish a formidable competitive advantage.
The market is moving away from the video-interruption ad model, and premium video is taking center stage. A battle for Middle-earth is being waged among video networks, publishers, and content creators. Those who have intelligent data will win the video marketing Thunderdome.
With few exceptions, old-school person-to-person media buying is fading fast. Machine learning is being used to ensure the optimal deal is always reached in programmatic video placement. We are seeing a torrent of data coming in from ad platforms, beacons, wearables, IoT, and so forth. This data tsunami is compounding daily, creating what the industry calls “fast data.” Processing video, and human actions on video, is a big challenge due to the sheer volume of consumption. The competitive weapons are now speed and agility when building an intelligent video arsenal.
In July, I attended the launch of Miip by InMobi, an intelligent video and ad unit experience. These units are like Facebook’s left and right slider units, but Miip has also implemented discovery. Check out the video to see more of what I’m talking about:
All programmatic networks use fast data composed of human personas, actions, and connected devices. This data explosion is forming big data at massive scale. It’s no surprise that programmatic targeting leverages machine learning and big data management. There is a lot of hype around Real-Time Bidding (RTB) and programmatic targeting.
With all this technology, the one thing that remains true is content still must resonate with the consumer – and machine learning is creating a huge opportunity to match the right content with the right consumer.
Video creation tools like Magisto, PowToon, and iMovie are simplifying the process. The decreasing hardware costs have also lowered the barrier of entry. The iPhone 6, Hero4, and video drone technology are great leaps forward in video capture.
Low-cost, broadcast-quality video is here with iMovie HD and Camtasia Studio 8. Full commercials are now edited entirely on iPhones. There is an explosion of professional content. What was once cost-prohibitive is now the industry norm. With all this video technology unleashed, hundreds of YouTube stars have been born. Cord-cutting is accelerating upon the cable networks. As more high-quality digital video hits the scene, it will fuel greater choice on the consumer’s terms.
Peter Fasano, from Ogilvy, and Allison Stern of Tubular, did a great job presenting The Rise of Multi-Platform Video. Here they reveal the differing advantages of Facebook and YouTube.
This year, Cannes Lions was all about VIDEO storytelling, with a big focus on data. Visual and mobile content experiences are personal. I am seeing a massive shift to data-driven journalism. Google News Lab, Facebook’s Publishing Garage, and Truffle Pig (a content creation agency formed by Snapchat, Daily Mail, and WPP) are all powering scaled content creation.
“The power of digital allows content, platform, and companies to test and learn in real time before scaling.” -Max Kalehoff
Hear more on this movement from David Leonhardt of the New York Times’ The Upshot, Mona Chalabi from the Facebook Garage, and Ezra Klein and Melissa Bell from Vox:
Video is Not Spandex
Consumers are not one-size-fits-all when it comes to how they consume content. Content creation is a natural progression for artificial intelligence (AI) technology. Machine learning can connect many data elements and test many hypotheses in real time. Training algorithms on human-labeled examples is “supervised learning.” “Unsupervised learning,” a self-learning, constantly improving system that finds structure without labels, is the holy grail of AI.
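To make the distinction concrete, here is a minimal, self-contained Python sketch using invented thumbnail-engagement numbers: a nearest-centroid classifier learns from human-labeled examples (supervised), while a tiny k-means finds the same two groups with no labels at all (unsupervised). This is an illustration only, not any vendor’s implementation.

```python
# Toy contrast between supervised and unsupervised learning.
# Data points are (play_rate, completion_rate) pairs; all numbers invented.

def nearest_centroid_train(samples, labels):
    """Supervised: learn one centroid per human-labeled class."""
    groups = {}
    for s, lab in zip(samples, labels):
        groups.setdefault(lab, []).append(s)
    return {lab: tuple(sum(col) / len(col) for col in zip(*pts))
            for lab, pts in groups.items()}

def predict(centroids, point):
    """Classify a new point by its nearest class centroid."""
    return min(centroids, key=lambda lab: sum(
        (a - b) ** 2 for a, b in zip(centroids[lab], point)))

def kmeans(points, k, iters=20):
    """Unsupervised: discover k clusters without any labels."""
    cents = list(points[:k])  # naive init: first k points
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: sum(
                (a - b) ** 2 for a, b in zip(cents[i], p)))
            buckets[j].append(p)
        cents = [tuple(sum(col) / len(col) for col in zip(*b)) if b else cents[j]
                 for j, b in enumerate(buckets)]
    return cents

data = [(0.9, 0.8), (0.85, 0.9), (0.1, 0.2), (0.15, 0.1)]
labels = ["engaged", "engaged", "bored", "bored"]

model = nearest_centroid_train(data, labels)  # uses the human labels
print(predict(model, (0.8, 0.85)))            # → engaged

print(sorted(kmeans(data, 2)))                # recovers the same two groups
```

The supervised model needs a human to label every training point; the k-means pass finds the “engaged” and “bored” clusters on its own, which is why self-improving unsupervised systems are the harder prize.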
Getting the right message to the right person is critical to obtaining a positive response. The delivery process and decisions will impact responsiveness. Each platform requires a different strategy. Companies like TubeMogul, Tremor Video, and Hulu all offer programmatic video management.
Now broadcasters are starting to embrace data, which enables advertisers to target a more specific audience. Soon we’ll have AI video distribution based on the actual content inside the video.
This graph shows real-time A/B testing from video launch, with KRAKEN machine learning optimization in action. Machine learning makes it possible to stabilize engagement and achieve lift.
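KRAKEN’s internals aren’t public, but continuous thumbnail A/B testing is commonly framed as a multi-armed bandit problem. A minimal epsilon-greedy sketch, with hypothetical frame names and click-through rates, shows how serving can stabilize on the best-performing image while still exploring alternatives:

```python
import random

def epsilon_greedy(thumbnails, true_ctr, impressions=10000, eps=0.1, seed=7):
    """Pick a thumbnail per impression: explore eps of the time, else exploit."""
    rng = random.Random(seed)
    shows = {t: 0 for t in thumbnails}
    plays = {t: 0 for t in thumbnails}
    for _ in range(impressions):
        if rng.random() < eps:
            t = rng.choice(thumbnails)  # explore: try a random frame
        else:
            # exploit: best observed play rate (unseen frames default high)
            t = max(thumbnails,
                    key=lambda x: plays[x] / shows[x] if shows[x] else 1.0)
        shows[t] += 1
        if rng.random() < true_ctr[t]:  # simulated viewer response
            plays[t] += 1
    best = max(shows, key=shows.get)    # the most-served frame
    return best, shows

# Hypothetical candidate frames and their (unknown to the system) CTRs.
ctr = {"frame_A": 0.02, "frame_B": 0.05, "frame_C": 0.11}
best, shows = epsilon_greedy(list(ctr), ctr)
print(best)  # serving typically converges on the highest-CTR frame
```

The 10% exploration floor is what fights the engagement decay seen with a single static thumbnail: the system keeps re-measuring every candidate instead of locking in one image forever.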
The following are three examples of machine learning techniques being used to enhance video engagement levels:
Fast data requires advanced algorithmic learning to process. Identify which demographics respond well to which content types (e.g. video). Segment your audience by the type of content consumed. Look at what was shared and when the most comments were generated. Combine these data points to see what drove the most action. These steps will help you learn which logical groupings achieve the highest targeting response.
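The steps above can be sketched in a few lines of Python. The field names, demographics, and counts here are invented purely for illustration:

```python
from collections import defaultdict

# Hypothetical engagement log: one record per (demographic, content type).
events = [
    {"demo": "18-24", "content": "trailer",  "shares": 40, "comments": 22, "plays": 900},
    {"demo": "18-24", "content": "news",     "shares": 5,  "comments": 3,  "plays": 300},
    {"demo": "25-34", "content": "trailer",  "shares": 18, "comments": 9,  "plays": 700},
    {"demo": "25-34", "content": "tutorial", "shares": 30, "comments": 25, "plays": 500},
]

# Segment the audience by (demographic, content type) and total the counts.
segments = defaultdict(lambda: {"shares": 0, "comments": 0, "plays": 0})
for e in events:
    seg = segments[(e["demo"], e["content"])]
    for field in ("shares", "comments", "plays"):
        seg[field] += e[field]

# Combine shares and comments into a single actions-per-play score.
def action_rate(seg):
    return (seg["shares"] + seg["comments"]) / seg["plays"]

# The logical grouping that achieved the highest targeting response.
best = max(segments, key=lambda s: action_rate(segments[s]))
print(best)  # → ('25-34', 'tutorial')
```

At production scale the same group-score-rank loop runs over streaming data with learned models instead of a hand-written rate, but the logic of combining signals per segment is the same.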
Identify which visual objects induce habitual responses: Which visual objects drive higher consumer engagement? Visual content can then be grouped, and that knowledge reused again and again in later videos.
Machine learning predicts video consumption habits: What people watch tells you a great deal about their preferences. Measuring audience behavior across video types creates a consumption map. Consumption maps predict things like video placement and cycle times.
The type of visual content affects the reaction of a targeted segment. Machine learning can track the visual preferences of video segments. Each brand and content creator can reach a new level of understanding. What does the audience find most appealing? Is there a large-scale pattern you can identify?
The next frontier of mobile video is intelligence – the ability to predict, as well as adapt, content based on all the data available. We are seeing companies like IRIS.TV indexing video libraries to recommend content. Netflix and Amazon have the capability to “predict” using supervised learning with human curators. All this metadata in video is providing a treasure trove of information, and connecting it with the social graph is changing the game.
Finding content that viewers will enjoy is the ultimate goal, and extended, deep video engagement is a big opportunity. Achieving this level of nirvana has its challenges: see Why Websites Still Can’t Predict Exactly What You Want. We are just scratching the surface of artificial intelligence’s learning algorithms.
As technology advances, more intelligent visual content marketing will emerge. Machine learning will soon dominate the data-driven marketing landscape. We are moving toward story creation with technologies like Dramatis. People like Brian O’Neill at Western New England University are leading the way (see With Expanding Roles, Computers Need To Add ‘Storyteller’ To Resume). Video networks, content creators, and publishers have a grand opportunity, but all are going to need to collaborate and build a more sophisticated offering if they plan on competing with Facebook and YouTube. The big question is, will they maintain control of their content destiny?
In the age of intelligent data, audience insight is always a winning strategy. Those who tune their video content with intelligence will achieve higher levels of revenue.
Video machine learning technology called KRAKEN boosts consumer engagement by 309% for the Fifty Shades of Grey Trailer (case study).
AnchorFree: The most trusted VPN service in the world!
With a monthly active user base of over 25 million and 350 million installs to date, AnchorFree’s Hotspot Shield VPN is the largest free VPN service in the world. It has an unparalleled ability to protect users’ IP from spammers, snoopers, and hackers, provide Wi-Fi security, and detect and protect against malware.
Increase revenue from limited inventory
In order to keep Hotspot Shield free, AnchorFree relies on advertising. With finite inventory and users, increasing consumer engagement is very important, as this results in a higher yield for each video. They are constantly looking to generate more interest and engagement with each long-form video placement to increase advertising revenue.
Responsive visuals at programmatic scale
KRAKEN uses machine learning technology to replace static thumbnails with a programmatically optimized set of “Lead Visuals.” This directly results in higher user engagement. AnchorFree is therefore able to increase yield from a finite user base and inventory.
Consumer engagement increased 309% with the Fifty Shades of Grey Campaign
Over the course of the campaign, KRAKEN was able to increase consumer engagement by 309% when compared to the trailer using a standard default thumbnail. AnchorFree was able to generate additional revenue leveraging existing customers and without having to add inventory.
“Without KRAKEN running, we would be leaving money on the table. I can’t imagine why anyone would run video without first optimizing it with KRAKEN.” – Baglan Nurhan Rhymes, Chief Digital Officer and SVP Global Revenue, AnchorFree
Video machine learning technology called KRAKEN drives 40% additional revenue for the Birdman Trailer (case study).
Increase revenue from long-form video placements
In order to keep Hotspot Shield free, AnchorFree is constantly looking for ways to increase their customers’ engagement levels and average revenue per user (ARPU). Despite premium placement on the AnchorFree launch page, the video ads were producing lower-than-desired click-to-start and completion rates. Before KRAKEN, AnchorFree tested various forms of static default thumbnails attached to the video promos.
Responsive visuals at programmatic scale
KRAKEN uses machine learning technology to optimize “Lead Visuals” in a programmatic structure, enabling the highest video engagement possible. KRAKEN became the preferred platform to maximize video revenue yield from their current advertiser base.
40% revenue gain for the Birdman campaign
KRAKEN boosted click-to-play rates for the Birdman trailer video campaign by a staggering 3,000%. This increase in click-to-play rates directly resulted in a 40% gain in revenue. After realizing such profound revenue gains, AnchorFree does not run high-value video campaigns without KRAKEN.
“InfiniGraph’s Kraken technology is the first real breakthrough we have seen in many years. I can see Kraken being implemented by digital broadcast networks, publishers, ad networks and video player platforms in the very near future. Early adopters will turbo charge their video ad revenues on desktop and mobile.”
– Baglan Nurhan Rhymes, Chief Digital Officer and SVP Global Revenue, AnchorFree
Video machine learning technology called KRAKEN sustains a 378% video play rate lift for the American Sniper Trailer over 48 Days (case study).
Maintain engagement over long periods of time with the same media
AnchorFree shows movie trailers as part of their advertising campaigns. A single campaign with various video content might last two months. Before KRAKEN, AnchorFree would see engagement peak when videos were launched, but steadily decrease over time. Engagement levels decreased as users saw the same thumbnail over and over, slowly becoming blind to it. This phenomenon is called video fatigue.
Responsive visuals at programmatic scale
KRAKEN replaces a video’s old, static default thumbnail with a responsive set of “Lead Visuals” taken from the video. Since it is powered by machine learning, KRAKEN continually optimizes the set of “Lead Visuals” to ensure a consistently high engagement rate, even over long periods of time.
Average lift of 378% for the forty-eight-day American Sniper campaign
Over forty-eight days, KRAKEN was able to increase engagement by an average of 378% for a single American Sniper video trailer. With a consistently high yield, AnchorFree was able to run the campaign longer to maximize revenue versus using a standard, static thumbnail.
“We run trailers for weeks, even months at a time. Only after optimizing with KRAKEN have we been able to see consistent and high levels of engagement from the beginning of a campaign to the end.” – Baglan Nurhan Rhymes, Chief Digital Officer and SVP Global Revenue, AnchorFree