How and why did Ad Tech become a bad word? Ad tech has become associated with, and blamed for, everything from damaging the user experience (slow load times) to creating a series of tolls that the advertiser pays for but that ultimately come at the expense of publisher margins. Global warming has a better reputation. Even the VCs are investing more in marketing tech than in the ad tech space.
The Lumascape is denser than ever and, even with consolidation, it will take years before there is clarity. And the newest threats to the ad ecosystem, like viewability, bots, and ad blocking, will continue to motivate scores of new “innovative” companies to help solve these issues. This is in spite of the anemic valuations ad tech companies are currently seeing from Wall Street and venture firms. The problem is that the genesis of almost all of these technologies begins with the race for the marketing dollar, while the user experience remains an afterthought. A wise man once said, “Improve the user experience and the ad dollars will follow.” Few new companies are born out of this philosophy. The ones that are—Facebook, Google and Netflix (How Netflix does A/B testing)—are massively successful.
One of the initial promises for publishers to engage their readers on the web was to provide an “interactive” experience—a two-way conversation. The user would choose what they wanted to consume, and editors would serve up more of what they wanted resulting in a happier, more highly engaged user. Service and respect the user and you—the publisher—will be rewarded.
This is what my company does. We have been trying to understand why the vast majority of users don’t click on a video when, in fact, they are there to watch one! How can publishers make the experience better? Editors often take great care to select a thumbnail image that they believe their users will click on to start a video and then…nothing. On average, 85% of videos on publishers’ sites do not get started.
We believe that giving the user control and choice is the answer to this dilemma. So we developed a patented machine learning platform that responds to the wisdom of the crowds by serving up thumbnail images from publisher videos that the user—not the editor—determines are best. By respecting the user experience with our technology, users are 30% more likely to click on videos when the thumbnails are user-curated.
What does this mean for publishers? Their users have a better experience because they are actually consuming the most compelling content on the site. Nothing beats the sight, sound and motion of the video experience. Their users spend more time on the site and are more likely to return to the site in the future to consume video. Importantly from a monetization standpoint, InfiniGraph’s technology “KRAKEN” creates 30% more pre-roll revenue for the publisher.
We started our company with the goal of improving the user experience, and as a result, monetization has followed. This, by the way, enables publishers to create even more video for their users. There are no tricks. No additional load times. No videos that follow you down the page to satisfy the viewability requirements for proposals from the big holding companies. Just an incredibly sophisticated machine learning algorithm that helps consumers have a more enjoyable experience on their favorite sites. Our advice? Forget about “ad tech” solutions. Think about “User Tech”. The “ad” part will come.
The live example above demonstrates KRAKEN in action on the movie trailer “Interstellar,” achieving a 16.8X improvement over the traditional static thumbnail image.
Deep Learning Methods Within Video – An End Game Application. We’ll explore the use cases of using deep learning to drive higher video views. The coming Valhalla of video deep learning is being realized in visual object recognition and image classification within video. Mobile video has transformed, and continues to transform, the way video is distributed and consumed.
We’re witnessing the largest digital land grab in video history. Mobile video advertising is the fastest-growing segment, projected to account for $25 billion worth of ad spend by 2021. Deep learning and artificial intelligence are also growing within the very same companies who are jockeying for your cognitive attention. This confluence of video and deep learning has created a new standard in higher-performing video content, driving greater engagement, views, and revenue. In this post we’ll dive deep into how video intelligence is changing the mobile video game. Many studies show that tablet and smartphone viewing accounted for nearly 40 minutes of daily viewing in 2015, with mobile video continuing to dominate in 2016. Moreover, digital video is set to outpace TV for the first time, and social video on Instagram and Snapchat is experiencing explosive growth.
The Interstellar trailer is a real example of KRAKEN in action and achieved a 16X improvement in video starts. Real-time A/B testing between the poster image (thumbnail) and candidate images pulled from the visual training set provides a simultaneous measurement of which images induce engagement. All data and actions are linked with a video machine learning algorithm (KRAKEN), enabling real-time optimization and sequencing of the right images to achieve the maximum human engagement possible.
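The lift numbers quoted throughout this post come down to a simple comparison of click-to-play rates between a control (the static poster image) and a variant (the machine-selected thumbnails). A minimal sketch of that calculation, using hypothetical impression and play counts chosen to mirror the 16.8X Interstellar result:

```python
from collections import namedtuple

Variant = namedtuple("Variant", ["impressions", "plays"])

def ctr(v):
    """Click-to-play rate: plays divided by impressions."""
    return v.plays / v.impressions if v.impressions else 0.0

def lift(control, variant):
    """Multiplicative lift of the variant's CTR over the control's."""
    base = ctr(control)
    return ctr(variant) / base if base else float("inf")

# Hypothetical numbers: a static poster image vs. a crowd-selected thumbnail.
poster = Variant(impressions=10_000, plays=100)    # 1.0% CTR
kraken = Variant(impressions=10_000, plays=1_680)  # 16.8% CTR

print(f"{lift(poster, kraken):.1f}x lift")  # → 16.8x lift
```

The key design point is that both arms are measured simultaneously on the same audience, so the comparison is not skewed by time-of-day or traffic-source effects.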
How it works
Processing video at large scale, and learning from it, requires advanced algorithms designed to ingest real-time data. We have now entered the next phase of data insights, going beyond the click and the video play. Video opens the door to understanding consumption habits, and applying machine learning to them creates a competitive advantage.
Consumer experience and time on site are paramount when video is the primary revenue source, as it is for most broadcasting and over-the-top (OTT) sites today, including Netflix, Hulu, Comcast X1, and Amazon. Netflix has already put into production its own system for updating poster images to drive higher play starts, discovery, and completions.
It’s All Math
Images with higher object density have proven to drive higher engagement. The graph demonstrates that images with high entropy (explained in this video) generated the most attraction. Knowing which images produce a cognitive response is fundamental for video publishers looking to maximize their video assets.
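Entropy here is the standard Shannon entropy of an image's intensity histogram: a flat, featureless frame scores near zero, while a busy, detail-rich frame scores close to the maximum. A minimal sketch of that measurement (the synthetic frames are stand-ins for real thumbnails):

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy (in bits) of a grayscale image's intensity histogram.
    Flat, uniform images score near 0; busy, high-detail images score higher."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

# A flat gray frame vs. a noisy, detail-rich frame.
flat = np.full((64, 64), 128, dtype=np.uint8)
rng = np.random.default_rng(0)
busy = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

print(image_entropy(flat))  # 0.0 — a single intensity value
print(image_entropy(busy))  # close to 8 bits — a near-uniform histogram
```

In practice entropy is only one signal; it flags visually dense candidate frames, which are then validated against actual audience response.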
Top 3 video priorities we’re hearing from customers.
1) Revenue is very important, and showing more video increases revenue (especially during peak hours when inventory is already sold out)
2) More video starts means more user time on site
3) Mobile is becoming very important. Increasing mobile video plays is a top priority.
While this is good news overall, it does present a number of new challenges facing video publishers in 2016. One challenge is managing consumer access to content on their terms and across many entry points. Video consumption is increasingly accessed through multiple entry points throughout the day, and these entry points, by their very nature, have context.
Broadcasters and publishers must consider consumer visual consumption as a key insight. These eyeballs (neurons firing) are worth billions of dollars, but it’s no longer a game of looking at web logs. Determining which images work with customers requires more advanced image analysis and insight into consumers’ video consumption habits. For digital broadcasters, enabling intelligence where the consumer engages isn’t new. Deep convolutional neural networks power the image identification and other priority algorithms. More details are in the main video.
Visual consumer engagement tracking is not something random. Tracking engagement on video has been done for many years, but when it comes to “what” within the video, there was a major void. InfiniGraph created KRAKEN to fill that void: machine learning within the video optimizes which images are shown to achieve the best response rates. Interstellar’s 16X boost is a great example of using KRAKEN to drive higher click-to-launch for autoplay on desktop and click-to-play on mobile, resulting in higher revenue and greater video efficiency. Think of KRAKEN as the Optimizely for video.
One question that comes up often is: “Is the image rotation the only thing causing people to click play?” The short answer is NO. Rotating arbitrary images is annoying and distracting. KRAKEN finds what the customer likes first and then sequences the images based on measurable events. The right set of images is everything. Once you have the right images you can then find the right sequence and this combination makes all the difference in maximizing play rates. Not using the best visuals will cause higher abandonment rates.
Further advances in deep learning are opening the doors to continuous learning and self-improving systems. One area we’re very excited about is visual prediction and recommendation of video. We see a great future in mapping the human collective cognitive response to visuals that stimulate and create excitement. Melding the human mind with video intelligence is the next phase for publishers to deliver a better consumer experience.
Chase McMichael, NAB VIDEO Intro – Top Video Platforms and Video Machine Learning made a big splash at NAB 2016.
The event was all about digital video, video production, VR, drones and every other technology you could imagine. Think of NAB as the CES of digital and video broadcasting. Everywhere you looked there was drone technology, robotics and even a full area dedicated to VR. The future of video publishing is bright for sure as new technology simplifies quality capture and distribution. We took the time to connect with some of our video platform partners at NAB. Our one-on-one interviews were with Ooyala, Brightcove, and Kaltura. Each video platform provided a comprehensive walkthrough of their latest developments and demos. What stood out the most was the big push in Over The Top (OTT) support for broadcasters. OTT was a big theme for many video platforms, and all showed amazing on-demand video technology. Everyone has seen the Netflix and Hulu interfaces, and the platforms are now getting serious about OTT. Visuals are everything in OTT interfaces, and using the power of intelligence is a key differentiator. Netflix identifies this fact in “Selecting the best artwork for videos through A/B testing”
The consumer has gone mobile in a big way, and digital video is taking on TV. Consumers want access to on-demand video wherever they are and on their terms. User experience was a big draw, too. There is no question that lines have been drawn, with rumblings of opening up the set-top box and unbundling the TV. Apple TV and Roku started to look like yesteryear’s technology compared with the OTT interfaces and mobile native app interfaces being demoed. Brightcove released OTT Flow, a very exciting interface for a video library, and we got a first-hand view of a super slick mobile interface for digital video consumption. Kaltura also showed off what they did for Vodafone. The video platforms seem well positioned to service a TV Everywhere strategy and feed into Apple TV and Roku devices.
Another part of the demonstrations on each platform that we experienced was 360 video support. Each player had mouse controls whereas Ooyala demonstrated split screen view supporting Google Cardboard. There is an exciting future in VR content and all are waiting to see what’s going to come out from a content perspective. Beyond linear video, immersive storytelling has a great future and we hope that technology doesn’t encumber the adoption and create friction for the experience. The speed of video player loading, streaming efficiency and low buffer rates have always been major competitive advantages when video publishers evaluate platforms.
A big topic was Apple’s HLS streaming standard and the relatively new hls.js player library; MPEG-DASH was also discussed at various booths. All players support HTML5, with a focus on migrating customers away from the old Adobe Flash technology. Every platform demonstrated the use of hls.js/HTML5. Kaltura showed a real-time side-by-side comparison with an impressive 50% improvement in HTML5 player load speed. Improving load time and streaming will continue to benefit the mobile web and the autoplay world. Video is everywhere and customers are demanding more of it. All video publishing platforms had very well organized video management and publishing capabilities. The big takeaways are that the platforms are focused on simplifying publishing and handling a large volume of video with greater intelligence built in. Obviously, this is important when serving video and creating a better video viewing experience. Here are the top 4 most mentioned metrics across all the platforms.
Availability - percentage of times video playback starts successfully
Start Up Time - time between the play button click and playback start
Rebuffers - number of times and the duration of interruptions due to re-buffering
Bitrate - average bits per second of video playback. The higher the bitrate, the better the experience
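The four metrics above are straightforward to aggregate from playback session logs. A minimal sketch, assuming a hypothetical per-session record (field names are illustrative, not any platform's actual API):

```python
from dataclasses import dataclass

@dataclass
class PlaybackSession:
    attempted: bool        # play was requested
    started: bool          # playback actually began
    startup_ms: int        # time from play click to first frame
    rebuffer_events: int   # number of stalls during playback
    avg_bitrate_kbps: int  # mean delivered bitrate

def qoe_summary(sessions):
    """Aggregate availability, startup time, rebuffers, and bitrate."""
    started = [s for s in sessions if s.started]
    attempted = sum(1 for s in sessions if s.attempted)
    return {
        "availability_pct": 100.0 * len(started) / attempted,
        "avg_startup_ms": sum(s.startup_ms for s in started) / len(started),
        "avg_rebuffers": sum(s.rebuffer_events for s in started) / len(started),
        "avg_bitrate_kbps": sum(s.avg_bitrate_kbps for s in started) / len(started),
    }

sessions = [
    PlaybackSession(True, True, 800, 0, 3500),
    PlaybackSession(True, True, 1200, 2, 2800),
    PlaybackSession(True, False, 0, 0, 0),  # failed start
]
summary = qoe_summary(sessions)
print(summary)  # availability ≈ 66.7%, avg startup 1000 ms, 1.0 rebuffers, 3150 kbps
```

Note that startup time, rebuffers, and bitrate are averaged only over sessions that actually started; failed starts count against availability alone.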
All of our conversations centered around using intelligence in thumbnail selection and the process of integration. KRAKEN video machine learning has a bright future with the onslaught of OTT platforms offering more video carousels and indexes as part of the central interface for video discovery. Next up is video prediction (recommendation) and using data to make smarter decisions about what to watch next. There are some very positive results coming from companies like Iris.tv and JW Player. Look for our next post coming from Streaming Media East. Catch more on our last podcast here: “Thumbnails are part of a Video Marketing Strategy”
VIDEO – Better User Experience, Time on Site and Converting Readers into Viewers.
Video Optimization With Machine Learning is now a reality and publishers are intelligently making the most out of their O&O digital assets. The digital video industry is undergoing a transformation and machine learning is advancing the video user experience. Mobile, combined with video, is truly the definitive on-demand platform making it the fastest growing sector in digital content distribution.
Video machine learning is a new field. The ability to crowdsource massive human interaction with video content has created a new dataset. We’re tapping into a small part of the human collective consciousness for the first time. Publishers and media broadcasters are now going beyond video views, clicks, and completions to actually gaining insight into the video objects, orientations, and types of movement that induce a positive cognitive response. This human cognitive response is the ultimate measurement of relevance, where humans interact with video in a much more profound way. In this article, we will dive deep into the four drivers of video machine learning.
Video by its nature is linear; however, there are several companies working to personalize the video experience as well as make it live. We’re now in an age where the peak of hype around Virtual Reality / Augmented Reality promises the most immersive experience. All of these forms of video have two things in common: moving sights and sound. Humans by nature prefer video because this is how we see the world around us. The bulk of video consumed globally is designed around a linear body of work that tells a story. The fact that video is just a series of images connected together is not something people think much about. In the days of film, seeing a real film strip from a movie reel made it obvious that each frame was in fact a still image. Fast forward: digital video still has frames, but those frames are made up of 1s and 0s. “Digital” opens the door to advanced mathematics and image / object recognition technologies that process these images into more meaning than just a static picture.
It’s hard to believe how important images really are. For videos placed “above the fold,” you have to wonder why so many videos have such a low play rate to begin with (video start CTR). Consumers process objects in images within 13 milliseconds (0.013 seconds). That’s FAST! Capturing cognitive attention has to happen extremely fast for a human to commit to watching a video, and the first image is important, but not everything. More than one image is sometimes required to assure a positive cognitive response. The reality is that people are flat-out dismissive, and some decide not to play the video. This is evident when you have a 10% CTR, which means 90% of your audience OPTED OUT OF PLAYING THE VIDEO. What happened? The first image may have been great, but it didn’t create a full mental picture of what was possible in the linear body of work. You’re not going to get 100% play rates; however, providing greater cognitive stimulation that builds relevance will give viewers greater reason to commit time to watching a linear form of video.
Machine Learning and Algorithms
In the last 4 years, machine learning / artificial intelligence has exploded with new algorithms, and advances in computing power have greatly reduced the cost of complex computations. Machine learning is transforming the way information is interpreted and used to gain actionable insights. With the recent open sourcing of TensorFlow by Google and advances in Torch from Facebook, these machine learning platforms have truly disrupted the entire artificial intelligence industry.
Feature extraction and classification are key to learning what’s in the images that achieve a positive response.
Major hardware providers, such as NVIDIA, have ushered massive advancements in the machine learning and AI fields that would have otherwise been out of reach. The democratization of machine learning is now opening the doors to many small teams to propel the product development around meaningful algorithmic approaches.
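The extract-then-classify pipeline is easier to see with a toy example. Below, per-channel color histograms stand in for the embedding a deep convolutional network would actually produce, and a nearest-centroid rule stands in for a trained classifier; this is a simplified illustration of the pipeline shape, not KRAKEN's actual model:

```python
import numpy as np

def features(img, bins=8):
    """Toy feature extractor: normalized per-channel intensity histograms.
    A stand-in for the embedding a deep convolutional network would produce."""
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(img.shape[-1])]
    v = np.concatenate(feats).astype(float)
    return v / v.sum()

def nearest_centroid(x, centroids):
    """Classify a feature vector by its closest class centroid."""
    dists = {label: np.linalg.norm(x - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)

rng = np.random.default_rng(1)
dark = rng.integers(0, 64, size=(32, 32, 3))       # a mostly dark frame
bright = rng.integers(192, 256, size=(32, 32, 3))  # a mostly bright frame

centroids = {"dark": features(dark), "bright": features(bright)}
query = rng.integers(0, 64, size=(32, 32, 3))      # another dark frame
print(nearest_centroid(features(query), centroids))  # → dark
```

A real system swaps the histogram for CNN activations and the centroid rule for a learned classifier, but the contract is the same: image in, feature vector out, label out.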
The unique properties of digital video, specifically in a consumer’s mobile feed delivered from a video publishing site, create a perfect window into how consumers snack on content. If you want to see hyper-snacking, ride a train into a city or watch kids on their smartphones. Digital content consumption has never been more interactive than it is now. All digital publishers and broadcasters have to ask themselves this question: “How is my content going to get traction with this type of behavior?” If your audience is Snapchatters, YouTubers, or Instagramers, you’re going to have to provide more value in your content V-I-S-U-A-L-L-Y or you will lose them in a split second.
Graphs – Video views (mobile: KMView / desktop: KDView) vs. minutes in a day (1440 min = 24 hrs). Mobile dominates the weekend, whereas during the work week usage skyrockets during the commute and after work. Is your video content adapting to this behavior?
Video Publishing Conundrum
A big conundrum is why people are not playing videos. This required further investigation. We found that the lead image (i.e., the old-school “thumbnail,” or “poster image”) had a huge impact on inducing a cognitive response. In the mobile world, video is still a consumer-driven response, and we hope it will stay a click-to-play world. We believe consumer choice and control will always win the day. Consumers will quickly tire of the tricks of video publishers under the revenue gun: native ad content, in-stream (auto-play) video, and the bludgeoning force-feeding of video on the desktop. No wonder ad blocking is at an all-time high! A whole industry has cropped up around blocking ads and it’s an all-out war. The sad part is that the consumer is stuck in the middle.
Many publishers are using desktop video auto-play to reduce friction; however, the FRONT of the page, the video carousel, or the gallery is a click-to-launch environment, making the images on the published page even more important. Those fronts are the main traffic driver, more so than any social share amplification. As for mobile video, it’s still a click-to-play world for the majority of broadcasters and publishers. Video is the highest-engaging vehicle at their disposal, and that is why so many publishers are pushing themselves to create more video content. Publishing more video-oriented content is great; however, the lack of knowledge of what consumers emotionally respond to has been a major gap. A post-and-pray, or post-and-measure-later, system is currently prevalent throughout the publishing industry.
Video Quality matters
Creating a better consumer experience is everything if you want your content to be consumed in days when auto-play is rampant and force-fed content passes for engagement. Brands increasingly demand measured engagement. Video engagement quality is measured by starts, length of time on video, and physical actions taken. Capturing human attention is very hard due to many distractions, especially on a mobile device. We’re in a phase where the majority of connected humans are digital natives amid a digital deluge. ADD is at an all-time high (link). With less than 0.25 seconds to get consumers to engage before they have formulated the video’s story line in their mind, this is a hard task. A quick peek at the video thumbnail, a fast read of a headline, and a glance at some keywords could be standing between you and a revenue-generating video play. People are pressed for time and unwilling to commit to a video play unless it induces a real cognitive response. Translating readers into video viewers is important, and keeping them is even more important.
Mobile Video and Machine Learning
Mobile is becoming the prevalent method of on-demand video access. The combination of video and mobile is an explosive pair and most likely the most powerful marketing conduit ever created. Here we have investigated how machine learning algorithms applied to images can provide real-time insight and decision support to catch the consumer’s attention and achieve higher video yield that would otherwise be lost. The big challenge with video is that it is created in a linear format, loaded into a CMS, put up for publishing, and then you pray it gets traction. Promotion helps and placement matters; however, there is really nothing a publisher can do to adjust the video content once it’s out. Enter video intelligence. The ability to measure video engagement in real time is a game changer. Enabling intelligence within video seems intuitive; however, the complexity of encoding and decoding video has created a sufficient barrier to entry that this area of video intelligence has been otherwise untapped.
How and Why KRAKEN Works
Here we dive deep into how consumers interact with certain visual objects to create a positive response before a video is played. InfiniGraph invented a technology called KRAKEN that shows a series of images, but the series of images itself, which we call “image rotation,” is not really new. What’s new is the selection and choice of those images using machine learning algorithms, allowing us to adjust those images to achieve the highest human response possible.
GRAPH – Lift by KRAKEN on mobile (KMLIFT) vs. desktop (KDLIFT) on the same day. NOTE: the groupings before and after lunch had an overall higher boost from KRAKEN. We attribute this behavior to less distraction.
As more images are processed by KRAKEN, the system becomes smarter, selecting better lead images and driving higher video efficiency. Choosing the order in which to sequence the best images is another part of the learning mechanism. An image sequence is derived from a collection of 1 to 4 images, selected according to KRAKEN’s ranking, which is linked to human actions. The visuals that achieved the highest degree of engagement receive a higher KRAKEN rank. The actual sequence also creates a visual story, maximizing the limited time available to capture a consumer’s attention.
KRAKEN in Action
KRAKEN determines the best possible thumbnails for any video using machine learning and audience testing. Once it finds the top 1-4 images, it rotates through them to further increase click-to-play rates. It also A/B tests against the original thumbnail to continually show its benefits. Here are 2 real examples:
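The test-and-rotate loop described above is, in spirit, a multi-armed bandit problem: keep showing the thumbnails that earn plays, while still occasionally testing the alternatives. KRAKEN's actual ranking and sequencing algorithms are patented and not shown here; this is a generic epsilon-greedy sketch of the same idea, with made-up thumbnail names and play rates:

```python
import random

class ThumbnailBandit:
    """Epsilon-greedy selection over candidate thumbnails: mostly exploit
    the best performer so far, sometimes explore the other candidates."""

    def __init__(self, thumbnails, epsilon=0.1, seed=42):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.shown = {t: 0 for t in thumbnails}
        self.played = {t: 0 for t in thumbnails}

    def ctr(self, t):
        return self.played[t] / self.shown[t] if self.shown[t] else 0.0

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.shown))   # explore
        return max(self.shown, key=self.ctr)           # exploit

    def record(self, thumbnail, played):
        self.shown[thumbnail] += 1
        self.played[thumbnail] += int(played)

# Simulate an audience that truly prefers thumbnail "B" (20% play rate
# vs. 5% and 2% for the others).
true_ctr = {"A": 0.05, "B": 0.20, "C": 0.02}
bandit = ThumbnailBandit(list(true_ctr))
sim = random.Random(7)
for _ in range(5000):
    t = bandit.choose()
    bandit.record(t, sim.random() < true_ctr[t])

print(max(bandit.shown, key=bandit.ctr))  # expected to settle on "B"
```

The exploration term is what keeps the A/B comparison against the original thumbnail alive even after a winner emerges, which is how the lift numbers stay continuously measurable.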
KRAKEN thumbnails with 273% lift below. What makes a good video lead image unique? We’re asked this question all the time. Why would someone click on one image versus another? These questions are extremely context- and content-dependent. The number of visual objects in the frame has a great deal to do with how humans determine relevance and whether an image induces intrigue or desire. The human brain sees shapes first, in black and white. Color is a later response; red, however, has its own visual alerting system. The human brain can process vast sums of visual information fast. Digital real estate on mobile and desktop can be vastly different. A great example is what we call information packaging: a smaller image on a mobile phone may only support 2 or 3 visual objects that a human would quickly recognize and respond to positively, whereas the desktop could support up to 5. Remember, one size doesn’t fit all, especially in mobile video. KRAKEN thumbnails with 217% lift to the left. Trick your brain: black and white photo turns to colour! – Colour: The Spectrum of Science – BBC
4 drivers of video machine learning
Who benefits from video machine learning? The consumer benefits the most: they get a better experience because a more visually accurate compilation of the video content’s best moments is presented. It’s critical that people get a sense of the video so they commit to playing it and sticking around. Obviously the publisher or broadcaster benefits financially, as more video consumption yields higher social shares.
Color depth: remember, bright colors don’t always yield the best results. Visuals that depict action or motion elicit a higher response. The background can greatly alter color perception; hence images with a complementary background let the human eye pick up the colors that best represent what they are looking at, creating greater intrigue.
Image sequencing: sequencing the wrong or bad images together doesn’t help; it turns viewers off. The right collection is everything, and it could be anywhere from 1 to 4 images. Knowing when to alter or shift is key to obtaining the highest degree of engagement. The goal is to create a visual story that improves the consumer experience.
Visual processing: the human brain can process vast amounts of visual information fast. Digital real estate on mobile and desktop differs. A great example is what we call “information packaging”: a smaller image on a mobile phone screen may only support 2 or 3 visual objects in view that humans can quickly recognize and respond to positively, whereas the desktop could support up to 5. One size doesn’t fit all, especially in mobile video.
Object classification: understanding what’s in an image and classifying those images provides a library of top-performing images. Images with the right classification create a unique dataset for use in recommendation and prediction. Knowing what’s in the image is just as important as knowing it was acted on.
The first impression is everything, or maybe the second or third if you are showing a sequence of images. For publishers and digital broadcasters, adapting to their customers’ content consumption preferences and being on the platforms that will yield the most will be an ongoing saga. Nurturing your audience and perpetuating their viewing experience will be key as more and more consumers move to mobile. KRAKEN is just the start of using machine learning to create a better user experience in mobile video. We see video intelligence expanding into prediction and into VR / AR in the not too distant future. As this unique dataset expands, we look forward to getting your feedback on other exciting use cases and finding ways to increase the overall yield on your existing video assets.
Tell us what you think and where you see mobile video going in your business.
A Series of Forbes Insights Profiles of Thought Leaders Changing the Business Landscape: Chase McMichael, Co-Founder and CEO, InfiniGraph
Optimizing web content to drive higher conversion rates, for a long time, meant only focusing on boosting the number of click-throughs, or figuring out what kinds of static content got shared most often on social media sites.
But what about videos? This key component of many sites went largely overlooked, because there simply wasn’t a good way to determine what actually made viewers want to click on and watch a given video.
In an effort to remedy this problem, says entrepreneur Chase McMichael, brand managers may have, at most, tried to simply improve the video’s image quality. Or, in a move like a Hail Mary pass, they might have splashed up even more content, in the hopes that something, anything, would score higher click-to-play rates. Yet even after all that, McMichael says, brands often found that some 90% of viewers still did not watch the videos posted on their sites.
As it turns out, the “thumbnail” image (a static visual frame from the video footage) has everything to do with online video performance. And while several ad tech companies were already out there using so-called A/B testing to determine how to optimize the user experience, no one had focused on optimizing video thumbnail images. Given video’s speed, with thousands of images flashed up for milliseconds at a time, measuring the popularity of thumbnails was simply too complex.
Sensing a challenge, McMichael, a mathematician and physicist with an ever-so-slight east Texas drawl, set out to tackle this issue. He’d already started InfiniGraph, an ad tech firm aimed at tracking and measuring people’s engagement on brand content. But as his company grew, he found that customers began asking more and more about how they might best optimize web videos in order to boost viewership.
Viewership, of course, is key: higher video viewership translates into more shares; more shares mean increased engagement. And that all translates into more revenue for the website. Premium publishers are limited in their ability to create more inventory because the price of entry is so high. Their new in-house studios are producing quality content, but getting to scale is a huge challenge.
When he started looking into it, McMichael says, he often found that the thumbnails posted to attract viewers usually fell flat, and that the process for choosing thumbnails hadn’t changed in 15 years. The realization that the images gained little to no traction among viewers came as something of a surprise: most of the time, the publishers and brand managers themselves had selected specific images for posting, with no thought at all given to optimizing the image.
According to McMichael, the company’s technology (called “Kraken”) solves for two critical areas for publishers: it creates inventory and the corresponding revenue while also increasing engagement and time spent on site.
Timing, it turns out, was everything for McMichael and InfiniGraph. Image- and object-recognition software had been improving to the point where those milliseconds-at-a-time thumbnails could be slowed down and evaluated more cheaply than in the past. Using that technology along with special algorithms, McMichael created Kraken, a program that breaks down videos into “best possible” thumbnails. Using an API, Kraken monitors which part of the video, or which thumbnail, viewers click on the most. Using machine learning, Kraken then rotates through and posts the best thumbnails to increase the chances that new users will also click on the published thumbnail in order to watch an entire video.
This process is essentially crowd-sourced, says McMichael—the images that users click on the most are those that Kraken pushes back to the sites for more clicks. “What’s fascinating is we’ve had news content, hard news, shocking, all the way up to entertainment, music, sports and it’s pretty much universal,” he says, “that no one [person] picks the right answer”—only the program will provide the best image or images that draw in the most clicks. On its first few experimental runs, InfiniGraph engineers discovered something huge: By repeatedly testing and re-posting certain images, InfiniGraph saw rates of click-to-play increase by, in some cases, 200%. Says McMichael: “It was like found money.”
InfiniGraph is a young and small company, even for a start-up: the Silicon Valley firm has eight employees, plus a network of technicians and specialty consultants it scales on an as-needed basis, and has bootstrapped itself to where it is today. McMichael says he’s built a “very revenue-efficient company” because “everything is running in two data centers and images distributed across a global CDN.” His goal is to be cash-flow positive by this summer. Right now InfiniGraph works exclusively with publishers, but the market is ripe for growth, especially on mobile devices, McMichael says.
Recently, Tom Morrissy, a media veteran with extensive experience in both publishing (Entertainment Weekly, SpinMedia Group) and video ad tech (Synaptic Digital, Selectable Media), joined InfiniGraph as a Board Advisor.
“So many companies claim to bring a ‘revenue-generating solution that is seamlessly integrated.’ This product creates inventory for premium publishers and is the lightest tech integration I’ve seen. I was completely impressed with Chase’s vision because he truly thought through the technology from the mindset of a publisher. Improve the consumer experience and the ad dollars always follow,” says Tom Morrissy.
The son of a military officer father and a registered nurse mother, McMichael grew up in the small town of New Boston, Texas, just outside the Red River Army Depot. A self-described “brainiac kid,” McMichael says he was always busying himself with science experiments, with a special interest in superconductors, materials that conduct electricity with zero resistance. Though he’d been accepted to North Texas, McMichael still took a tour of the University of Houston, mainly because the work of one physics professor, who had discovered high-temperature superconductivity, had grabbed his attention. “So I went to Paul Chu’s office and said, ‘Hey, I want to work for you.’ It was the craziest thing, but growing up I was always told, ‘If you don’t ask for it, you won’t know.’”
That was the beginning of a seven-year partnership with Chu, during which the university built a ground-breaking science center. McMichael spent those years in DARPA-funded applied science but decided to leave for the business world. A friend of McMichael’s worked at Sun Microsystems and encouraged him to leverage his programming knowledge. His first job out of college was creating the ad banner management system for Hearst. “So I got sucked into the whole internet wave and left the hard-core science field,” he says. He also worked at Chase Manhattan Bank in the ’90s, building out its online banking business.
As for the future for InfiniGraph?
McMichael says his mission is “to improve the consumer experience on every video across the globe,” and it’s an ambitious plan. “But we know that there are billions of people holding a phone right now looking at an image. And their thumb is about to click ‘play,’ and we want to help that experience.”
Bruce H. Rogers is the co-author of the recently published book Profitable Brilliance: How Professional Service Firms Become Thought Leaders - Originally posted on Forbes
Want to learn what type of video content KRAKEN’s Video Machine Learning technology works with? Read on!
So, you already have video on your site and you’re asking yourself, “Can KRAKEN help me get more video plays? I mean, my content is pretty special and unique!”
Let’s get right down to answering that with these 3 easy steps:
Step 1: Come to our blog. Wait… you’re already here. Good job! You’ve done Step 1 successfully, so check it off your list.
Optimized thumbnails with 198% lift
Step 2: Ask yourself—what kind of content do I have and how is it monetized? I mean, you could just have video and not make money on it. You could run pre-roll ads, or you could get paid for every video play by the producer/creator (this last one is called “premium video”).
Step 3: Scratch Step 2, because KRAKEN can help increase play rates with all three types! Check out the following examples!
Quick recap of what’s happening with each video below:
Video is broken down into a bunch of “best possible” thumbnails (using crazy smart algorithms)
Your audience selects their favorite thumbnails via A/B testing & machine learning (favorite=thumbnail they WANT to click on)
We intelligently rotate through the best (aka favorite) thumbnails to ensure a high click-to-play rate
Click play… notice that the video begins playing? This is premium video where the content is the advertising medium. Notice the video is about a product, so there’s no need to run a pre-roll ahead of time.
What do I do next?
As you can see, KRAKEN increases play rates on just about any type of video content. You just need to release the KRAKEN!
Want to see some awesome examples where optimized thumbnails performed up to 425% better than the original? Check out this article.
Publishers are under financial assault, and video performance is a white-hot topic, with brands doubling down on mobile video spend. It’s all about revenue, winning the consumer, and getting the most out of your video assets.
Boosting video performance on existing content is not simple; video machine learning, however, provides a unique and scalable way to accelerate video engagement like never before.
Here we will dive into how one of the top 20 news sites uses video machine learning to boost play rates on the lion’s share (70%) of its published videos.
The David Bowie video release achieved an average 92% boost and in the first 3 days hit 104% boost (We’ll miss you David)
We kicked in the hyperdrive on The Force Awakens movie trailer, delivering a 41% boost, and described what’s behind visual learning. News-oriented video content has shown tremendous lift rates of up to 425%, with many videos achieving 100%+ lift in play rates. Breaking away from old-school thinking, in this post you will learn what drives higher video revenue: going beyond image recommendation (selection) technology to creating more engaged video watchers via intelligence.
How it’s done
KRAKEN’s video machine learning API integrates directly with the publisher’s video player. The video thumbnail images are optimized using real-time A/B testing and image recognition algorithms. There is ample evidence of the immense power of visuals, and first impressions are everything for viewers.
The key component of KRAKEN is learning algorithms that understand placement and visual elements within the video that resonate with a particular audience. Consumers are guzzling content at a hyper rate and in a world full of distractions, content “images” that can quickly capture attention will achieve a higher share of time.
In the case of video: more plays = more share = more overall engagement = MORE REVENUE.
Another key attribute of KRAKEN is helping videos move away from an unresponsive static image that retains no intelligence. KRAKEN’s image selection is not random; it incorporates the sequencing of images (“Image Rotation”). This translates into showing more visual depth and further stimulates visual cognition. Hey, we patented KRAKEN, and here are some solid numbers to prove it works!
The ability to tell a visual story based on behavioral engagement assures the maximum possible engagement levels, which would otherwise be lost. That’s right—you’re losing money!
That lost video play is lost revenue, and in most cases it’s a great deal of money being left on the table. For fans of Game of Thrones, this video hit an astonishing 169% lift, proving the right visuals drive higher revenue (lift = performance of dynamic visuals over the original thumbnail).
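Given that definition, lift reduces to a one-line comparison of the two play rates. A quick sketch (the example rates below are invented for illustration):

```python
def lift_percent(original_rate, optimized_rate):
    """Relative improvement of the optimized thumbnails over the
    original thumbnail, expressed as a percentage."""
    return (optimized_rate - original_rate) / original_rate * 100

# e.g. a 2% click-to-play rate lifted to 5.38% is a 169% lift
```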
Video play decay over time. Note that after two days, the video gets a bump. KRAKEN created a 92% avg LIFT on the David Bowie video.
All published videos experience a time to live; even viral videos will decay over time. Video play engagement decay on high-CTR videos is displayed in the graphs above and below.
Time to live is a function of:
Video placement on main sections
Placement above or below the fold on published page
Mobile feed depth (how many times to scroll to see the video)
Mobile in view (how long is the video in view)
How long it’s displayed in the editorial pick or trending section
Social share magnitude
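As a toy model (not InfiniGraph’s actual analytics), the decay curves above can be approximated by an exponential whose half-life the placement factors listed here stretch or shrink:

```python
def expected_plays(day, initial_plays, half_life_days):
    """Toy decay model: daily plays halve every half_life_days."""
    return initial_plays * 0.5 ** (day / half_life_days)

# a video opening at 10,000 plays/day with a 2-day half-life
# falls to 2,500 plays/day by day 4
```

Better placement (above the fold, shallow in the mobile feed, strong social share) corresponds to a longer half-life in this sketch.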
This video achieved 141% lift, demonstrating that human faces don’t necessarily generate greater action than visual scenes that depict the video content.
Time-to-live variables have an impact on how much engagement content can achieve and for how long. Obviously, video performance is a function of site traffic; the wrong image, however, causes massive consumer engagement loss because of the speed at which humans process visuals and determine relevance. This speed is on the order of a blink of an eye. Are you adjusting your visuals at the blink of an eye? Couple that with Forrester Research’s estimate that one minute of video is worth 1.8 million words, and you have a perfect reason to make sure every consumer engagement counts.
I’ve never met anyone who intentionally picked a bad video thumbnail—but they’re everywhere.
To be clear, bad ≠ ugly. Bad thumbnails are sometimes beautiful. Bad means that people don’t WANT to click on them. After all, the point of a thumbnail is to get people to click “play” or “stop scrolling” long enough for the video to start playing.
Editors and content creators with years of experience spend a lot of time picking “best” thumbnails. And publishers posting hundreds of videos daily rely on content management systems (CMS) that suggest or auto-pick thumbnails.
Guess what? They’re usually wrong.
Almost always, there is a better thumbnail for any given video or set of thumbnails.
Because “best” is defined by your audience, not by you. You bring your experience and baggage with you every time you pick a thumbnail—and you are different from your audience. Why not take the guesswork out of the equation and use data, not opinion, to choose the right thumbnails every time?
Let’s say you’re an editor in LA and pick a thumbnail for a video about the latest breaking news topic. You might choose this image to the right:
Now what if your viewer is from Texas? What if that image doesn’t speak to them at all? That doesn’t mean they’re not interested in the topic or wouldn’t want to see the video content, it means that the thumbnail doesn’t make them WANT to click “play.”
If you had asked your viewers, they would have told you that they preferred seeing the images on the left—all taken from the very same video.
Our recent post “The Force Awakens” shows another great example and the science behind data-chosen thumbnails.
Your audience isn’t one-size-fits-all. Your thumbnails shouldn’t be either.
Here are 52 videos from last month that prove intelligent selection of images can greatly improve video play rates. Each has an optimized set of thumbnails that performed 101%–425% BETTER than the original thumbnail.
Quickly though—what is an optimized thumbnail?
Optimized thumbnails are dynamic and rely on machine learning and audience feedback. Our product, KRAKEN, does all of this in real time.
So, what the heck does that mean in English?
It means that our computers examine a video and pick a bunch of ‘best possible’ thumbnails, then A/B test them to determine which ones people actually click on. KRAKEN will serve different images to different people depending on a variety of factors, including device and placement. Hey, it’s a patented process!
Said another way, we crowdsource what thumbnails people actually engage with, then show them to future visitors.
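One way to sketch that per-audience serving is to keep separate click counters per segment, e.g. keyed by device and placement (the segment keys and fallback frame below are hypothetical, not KRAKEN’s actual data model):

```python
def best_thumbnail(stats, device, placement, default="frame_0"):
    """Return the highest click-to-play thumbnail for one
    (device, placement) segment, or a default for unseen segments."""
    segment = stats.get((device, placement))
    if not segment:
        return default
    return max(segment,
               key=lambda t: segment[t]["clicks"]
                             / max(segment[t]["impressions"], 1))
```

Each segment converges on its own winner, which is how the same video can show one frame to a mobile-feed visitor and another to a desktop-article reader.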
Results – Before & After
Think sports fans will click on any video related to their team? Think again. Optimized thumbnails performed 198% better than the original (original thumbnail vs. KRAKEN-optimized visuals).
Optimized thumbnails work for ‘hard’ news videos, too. This video about Enrique Marquez’s ties to the San Bernardino gunmen had a 205% lift (original thumbnail vs. KRAKEN-optimized visuals).
Kardashians—love them or hate them, right? It turns out that optimized thumbnails can produce a 128% lift in video play rates (original thumbnail vs. KRAKEN-optimized visuals).
From earlier in the article: the Rikers Island Guard video saw a 157% lift, while the video of a Teacher under fire for her lesson on Islam saw a 127% lift.
Our top performing video of December saw a 425% lift. Here’s an overview of all 52:
What could you do with double the video plays (or 3X or 4X)?
Would it double your video revenue? Satisfy your audience because more of them are seeing your awesome video content (after all, that’s why they’re on your site in the first place)?
The good news is your “best” thumbnails already exist and are buried in your existing videos. You just need to release the KRAKEN and get them to the surface.
Leave a comment below and tell us your thoughts. If you are interested in links to all 52 top performing videos, send me an email at email@example.com—I like talking with new people.
Star Wars: The Force Awakens trailer achieves a massive boost (41% gain) using visual sequence storytelling powered by video machine learning. Optimizing video is now a must for publishers looking to maximize their video assets and engage customers with content relevant to them. Embrace the “FORCE.” Above is a live example of KRAKEN’s “Image Rotation” in action, as seen on NYDailyNews. The image sequencing is created by KRAKEN and is integrated directly inside the video player via the KRAKEN API.
The impression a video makes on a consumer is everything, especially on mobile. The typical video player shows a still image with a large play-button overlay. This thumbnail image has been stuck in a static world for over 15 years. The old-school static thumbnail on video is dead, and autoplay is frankly annoying.
Image quality is important, but our findings show that consumers prefer not the technically best image but the ones that intrigue the human mind.
However, static thumbnail selection still depends on the person who uploads a video. This process does not scale to thousands of videos over a short period of time. That is why the majority of commercial video platforms auto-select a fixed time slice from the video and hope for the best.
Static thumbnail selection with customized thumbnail upload. All video platforms provide this manual feature, with an auto-selected default.
Humans cannot optimize or adjust creative on the fly to increase video performance. Many attempts at A/B testing have proven helpful; however, they produce limited results due to their manual nature.
Video machine learning has come of age because it is cost-effective and enables publishers to use the FORCE. Image sequencing is not a new idea; it has been used for centuries to tell visual stories.
Video machine learning makes it possible to scale image sequencing over thousands of video placements and millions of plays. Video has gone from a static world to a dynamic and intelligent world. Star Wars: The Force Awakens Trailer benefited tremendously from video machine learning with a lift of 41%.
Another major bonus of video machine learning is the ability to scale and combat image fatigue (decreasing engagement over time).
Capturing a consumer’s attention has never been harder than it is now. Consumers are glued to their smartphones, and every millisecond counts. Publishers are reverting to the annoying autoplay tactic; consumers, however, are pushing back and complaining. Fox has responded to consumer feedback by offering a feature to turn autoplay off. Mobile video will continue to grow massively for publishers who optimize video, and machine learning will continue to help them maximize their valuable video assets.
Do you want to learn more about KRAKEN and hear what others are saying about video machine learning? Check out our testimonials and intro below. Thanks for your input and thoughts on our journey in video machine learning.
Ryan Shane VP of Sales
Want to increase your video play rates and revenue? Contact us for a 1:1 demo to access customer use cases and see live examples of both mobile and desktop implementations.
Introducing Baglan Rhymes, Chief Digital Officer at AnchorFree with Chase McMichael, CEO of InfiniGraph, discussing the recent success of video machine learning KRAKEN on AnchorFree video ads page. Video Machine Learning Customer Testimonial – Case Studies discussed in this video are Fifty Shades of Grey, American Sniper and Birdman.
Chase: Hi, I’m Chase McMichael, CEO and Co-Founder of InfiniGraph, and I’m here today with Baglan Rhymes, the Chief Digital Officer of AnchorFree. Hi Baglan.
Baglan: Hi Chase.
Chase: So tell us a little about AnchorFree.
Baglan: Of course. AnchorFree is the world’s largest internet freedom platform, and our mission is to provide secure and uncensored access to the world’s information for every single person on the planet. To date, we’ve been installed 300 million times. We have 30 million monthly active users, and we secure approximately 5 billion page views.
Chase: That’s excellent. Obviously, we got connected through the video machine learning technology, a technology called Kraken.
Baglan: Yes.
Chase: And one of the things was that you are using a monetization page with video on the free sites.
Baglan: Correct.
Chase: Tell us a little more about that.
Baglan: Yes. We have a free service and a subscription-based service, and the revenue stream for the free service is our content sponsors, be it movie studios, be it news organizations. We have our own content discovery platform with tiles of video content and static content that we present to users upon connect. And we don’t make any revenue off the videos unless users click on them. So how do we get users to click on a video when we have maybe 5 or 10 seconds of their attention right upon connection? That’s when we connected. We partnered with you on click-to-play videos to increase the click-to-play rate, because unless those videos are played we don’t get paid, and through your machine learning algorithms we were able to increase the click rate.
Click-to-view rate grew 20 to 30 times on videos overall. We ran a test on Fifty Shades of Grey and American Sniper, and afterwards we did Birdman, where we got that ridiculous 3,000% number [increase in click-to-play rate]. A fight scene in tighty-whities. I actually remember I asked you to remove that; we can’t show it there. And you kept it, that tighty-whities fight scene.
Chase: That was the best one!
Baglan: Exactly. A 3,000% increase [in click-to-play rates], and I’m so happy we kept it.
Chase: That’s the one that boosted the most revenue. So where do you see things going, especially around the consumer and mobile?
Baglan: Yeah, video is the way users consume content now. Whenever we see a video associated with a brand, we see a 96% increase in purchase intent and a 139% increase in brand recall. Even our conversations with friends are now in the form of video. The whole communication is changing from voice to audio, visuals and emotions, which is video.
Chase: Thank you so much, Baglan. Please be sure to click on the (i) above to get more information. Thank you.