The digital video industry faces major creation challenges as editing and production costs rise. A lack of video inventory is universal; however, there is hope in leveraging video artificial intelligence (VAI) to address these problems. Massive amounts of live and pre-recorded video never reach the audience. Publishers are being asked to do more with less, hamstrung by small editorial budgets and shifting priorities while massive video revenue goes untapped. The ability to intelligently create video inventory couldn't come fast enough to feed the insatiable mobile video appetite. Here we dig into the what, where, and why of VAI and rethink how content is created, edited, and delivered, all toward building greater video lifetime value.
Automation is not a cure-all; however, combining live streams and artificial intelligence solves several pain points while augmenting the production staff and delivering relevant content. Your editorial team is the most valuable resource in your video workflow, which is exactly why these highly valued editors should be focused on higher-value video creation. VAI isn't about replacing your editorial team; it's about scaling lower-priority content at a fraction of the cost while increasing revenue generation.
There are many promising video AI examples showing off real-time facial tracking, object recognition, image labeling, and more. In our previous post, Top Video Artificial Intelligence and Machine Learning at NAB 2018, we highlighted several early AI products in the market. Outside of the video labeling services, several extend existing video editing systems as cloud SaaS. So what does this all mean for your existing video assets? First, there have been monumental advancements in algorithms that obtain a deeper understanding of the actions people take within a video, such as YOLO and deep structured models that label individual actions. This in-depth dissection changes the game: you can measure which parts of a video resonate with consumers, whereas old linear video measurement woefully lacks meaningful insights.
Promising Use Cases
Using VAI to create and edit video isn't new. Companies like Adobe and IBM are using advanced video and image analysis to enable smarter editing platforms. However, publishers need more than editing assistance; they need scalable video creation. One of the more profitable use cases is short video highlights and previews, along with extended in-context video compilations. These combinations have proven highly lucrative and well suited to VAI in both creation and scale.
The example above demonstrates using video artificial intelligence to create and control audio and video transitions between clips. The video length and the segments used are determined by video scoring generated via VAI.
Another exciting use case is making video libraries searchable. Finding specific people, actions, and scenes within video opens the door to advanced video discovery. A whole new world of possibilities opens up for cross-referencing and extracting information otherwise trapped in old-school linear video metadata.
Solving the problem of creating net-new video from existing video assets opens up a monster revenue source while extending your overall video lifetime value. Taking long-form or live content and compressing it into meaningful, short-form, mobile-ready video is a game-changer. VAI creation does not require expanding editorial budgets or delaying critical production windows.
Organizing content to be contextually relevant will always be a key factor in ensuring meaningful content flow. Which clips and scenes logically flow together is subjective unless you are directly measuring audience response, which is why utilizing an active feedback mechanism to improve relevance is important. Here we address some top challenges:
Shared viewing experience
There is no lack of challenges facing video creation at scale. Content specifically created for social platforms that harnesses human connection and social dynamics will certainly produce a higher-quality product. However, how do you scale this type of creation? Combining AI with video provides the opportunity to develop a video learning system that assists in delivering quality output and builds on itself over time. This is why a learning system is ideal for video creation based on reproducible tasks.
Here we describe the strategy of video "fracking" your entire library. The ability to organize video libraries, extract cross-referenced videos, annotate video content, and create relevant net-new video makes MicroClips possible. Manipulating video at scale using artificial intelligence creates an enormous revenue opportunity. Learning what works and adjusting video assets requires thinking differently about how content is produced. The possibilities are endless across many content types; moreover, digital enables measuring levels of audience engagement like never before.
Above is a sports MicroClips compilation created using AI and derived from many videos. MicroClips are created from in-context, high-value action sequences, which are pieced together to form the compilation. What's exciting is the variety of compilations that can be created and watching what the audience finds most entertaining. Highlights and compilations have garnered the highest play and share rates; some have even gone viral!
The ultimate goal is to provide a cost-effective video creation process while improving viewability, quality, and consumer play rates. How consumers connect, and the time they have available, are additional driving factors toward a smarter creation process.
There are many big bets being made, with major networks investing in dedicated Snapchat teams to capture a slice of this multi-billion-dollar pie and prove the model out.
Quality content and great storytelling require a human touch for now, but for large-scale video organizing and creation, the future is bright for video artificial intelligence. The digital video industry is going through a major transformation, and consumers are the winners. The networks have been put on notice: the incumbents are not only producing top shows, receiving awards, and drawing compelling audience sizes, but also leveraging advanced technology faster to improve the consumer experience. The publishers that harness the competitive advantage of VAI creation will deliver quality video to market faster and with higher relevance, resulting in greater consumer retention and revenue.
Video artificial intelligence was a massive theme at NAB 2018, with a majority of video publishing technology companies showing off some form of AI integration. As noted in my previous post, How Artificial Intelligence Gives Back Time, time is money in the video publishing business. AI is set to be a very important tool, which is why all the big guns like AWS (Elemental), Google (Video Intelligence), IBM (Watson), and Microsoft (Azure) had digital AI eye candy to share. There was a me-too feeling, with all of them competing to weave their video annotation/labeling and speech-to-text APIs into a variety of video workflows.
Top Video AI use cases:
Labeling – The ability to label the elements within a video: specific scenes, people, places, and things.
Editing – Segmenting by relevance, slicing the video into logical parts, and producing output.
Discovery – Using both annotation and speech-to-text to expand metadata for finding specific scenes within video libraries.
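As a rough illustration of the discovery use case, here is a minimal sketch in Python. It assumes you already have per-scene labels from an annotation or speech-to-text service; the scene data, file names, and timestamps below are hypothetical stand-ins, not output from any specific API:

```python
from collections import defaultdict

# Hypothetical output of a labeling service: per-scene labels with timestamps.
scenes = [
    {"video": "game1.mp4", "start": 0.0, "labels": ["crowd", "stadium"]},
    {"video": "game1.mp4", "start": 42.5, "labels": ["home run", "swing"]},
    {"video": "game2.mp4", "start": 12.0, "labels": ["swing", "pitcher"]},
]

def build_index(scenes):
    """Map each label to the (video, start-time) scenes it appears in."""
    index = defaultdict(list)
    for scene in scenes:
        for label in scene["labels"]:
            index[label.lower()].append((scene["video"], scene["start"]))
    return index

def search(index, query):
    """Return scenes whose labels match the query term (case-insensitive)."""
    return index.get(query.lower(), [])

index = build_index(scenes)
print(search(index, "swing"))  # matching scenes across both videos
```

The point is that once labels exist per scene rather than per video, "find the swing" becomes a lookup instead of a manual scrub through footage.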
One of several challenges is the all-or-nothing situation. Video publishers' assets can be spread across many hard drives or encoded without much metadata. There are companies, like Axel, that provide services to index those videos and make them searchable using a mixed model of on-prem tech and cloud services. Dealing with live feeds requires hardware and bigger commitments. Most publishers are not willing to forklift their video encoding and library over to another provider without a clear ROI justification. The other big ROI challenge is that video publishers don't have a lot of patience, and the pressure to increase profits on video is higher now with more competition in the digital space across all channels. Selling workflow efficiency won't be a big enough draw compared to AI generating substantial revenue by solving a specific problem; the pain isn't high enough to justify a big AI investment. There are lots of POCs in the market right now, but not one product creates a seamless flow within a video publisher's existing workflow. Avid and Adobe are well positioned for the edit team since their products are so widely used. Other cloud providers are enabling AI technology, not delivering a specific solution.
Search and discovery was the biggest theme, using AI for image analysis and speech-to-text. Closed-caption compliance to make video accessible in digital will be mandated, driving faster adoption. Editing video via AI is in its early phase, but the technology is emerging fast. There are some exciting examples of AI-created video, but doing it at scale is another matter. Of the many talks at NAB, some exciting directions for AI in video were discussed around video asset management. Here are a few examples demoed at NAB 2018 that show promise in the video intelligence field.
Adobe made a big splash with their new editing technologies and their use of AI to enhance the video editing process. Todd Burke presented Adobe Sensei, their AI entry into video intelligence. The video labeling demo and scene slicing were designed to help editors create videos faster and simplify the process. The segmenting was just a prototype, and the video labeling demonstrated the API extension integrated within Sensei.
IBM's demo was slick and pointed to the direction of using machine learning to process large amounts of video and pull out the interesting parts. Analyzing announcer and crowd response added another layer of segmentation. You can see a live demo of their AI highlights for the Masters. They did the same for Wimbledon, slicing up the live feed they were powering for the event and creating "cognitive highlights." It wasn't clear if these highlights were used by the edit team or if this was a POC. Regardless, both image and text analysis of the streams was occurring, demonstrating the power of AI in video.
The Avid demo was just that. They created a discovery UI on top of APIs like Google Vision to assist in video analysis for search and to support edit teams. Speech-to-text and annotation in one UI has its advantages. It wasn't clear how soon this would be made available beyond a development tool.
The team over at Zora had by far the slickest video hub approach. I believe the play for Google is more around their cloud strategy: attracting storage of the videos and leveraging their Video Intelligence to enable search over all your video assets. Google's video intelligence is just getting started, and their open-sourcing of the AI foundation TensorFlow makes them one of the top companies committed to video AI. I like what Zora is doing and can see editing teams benefiting from this approach. There was a collaborative element, too.
GrayMeta's UI was slick, and their voice-to-text interface was amazing, all powered by Azure. Azure Video Indexer is the real deal, and its face identification has broad use cases. Indexing voice isn't new, but a fast, slick UI helps enable adoption of the technology. They can pinpoint parts of the video from the text alone. There is a team collaboration element to the product with a Slack feel. The approach was making all media assets searchable.
There were several cool examples of the possibilities with Amazon Rekognition: video analysis, facial recognition, and video segmenting. Elemental's (purchased by Amazon) core technology is video ad stitching, whereby video ads are inserted directly into the video. They created a UI extension demonstrating some possibilities with Rekognition, though it wasn't clear what was in production beyond the demo. The facial recognition of celebrities looked solid, and Elemental showed cool real-time object detection with bounding boxes on sports content. This has many use cases; however, it creates more data for video publishers to access, and the amount of data they can manage needs to be addressed before adding another data firehose.
Video artificial intelligence is just getting started and will only improve with greater computing advancements and new algorithms. The guts of what's needed to achieve scale already exist. The major use cases around video discovery and search are set to improve dramatically as industry players open up more APIs. Video machine learning has great momentum, utilizing these APIs to crack open the treasure trove of data locked away inside video. The combination of video AI and text analysis creates massive metadata for the multitude of use cases where voice computing can play a role. Outside of all the AI eye candy, there needs to be more focus on clear business problems versus me-too offerings. What's the end product, and how will it make the video publisher more revenue?
How do artificial intelligence and machine learning give back time? It's no secret that AI is here, and it's coming much faster than many other technology booms; some are saying we're in the third wave of computing. In our previous post, 3 Ways Video Recommendation Drives Video Lifetime Value, we talk about how machine learning is transforming finding and recommending videos to enhance the consumer experience. For me, I'm just excited to be part of the machine learning business and to create powerful products focused on improving the digital video experience. I was recently commissioned to put together a short video and deck on how artificial intelligence is transforming the device-based experience for consumers. As part of this project, the brands were looking to understand the different ways AI could potentially impact their business. Here were the main topic areas.
How do you stay ahead of where AI is headed?
How should AI be leveraged to enhance brand trust, improve engagement and help consumers get jobs to be done in a way that is valued by consumers?
How can AI be employed to create better personal performance for individuals?
The presentation was to top brands like Scripps Network, Hertz, Bacardi, Planet Fitness, Arizona State University and DX Marketing.
Hi, I'm Chase McMichael, CEO and co-founder of InfiniGraph. InfiniGraph focuses on increasing video lifetime value for video publishers and broadcasters. We do that by processing vast amounts of their video data and understanding which visuals in the video engage consumers. By measuring which images or video clips are most engaging within certain scenes, we're able to increase video consumption by showing the right images or clips to excite the consumer. Here's a great example of a video clip that we extracted out of a video for CBS. By putting this specific clip in front of the right person at the right time, we're able to dramatically increase video take rates and produce a better consumer experience.
I was asked to talk about machine learning and artificial intelligence and how they would affect brands and improve the consumer experience around devices. One of the most exciting things about artificial intelligence, from my point of view, is that AI will give time back to individuals. AI is about making smart decisions for people or providing insights proactively.
So how should AI be leveraged to increase brand trust, improve engagement, and help consumers? First, brand trust. Brand trust is about anticipating your consumers and being very proactive when they interface with you. It's important to actually recognize them and provide them with incredible value and service. This is something all brands have struggled with: whether someone comes into a retail establishment or arrives online, a lack of responsiveness is a lost opportunity. A big opportunity is personalization. The ability to personalize one's experience is a big deal right now. Companies are utilizing their data in some smart ways, especially in the retail segment; we're seeing this with Amazon, and movie companies like Netflix are trying to customize the experience for their audiences.
The other thing we're seeing around brand trust is the ability to be not only intuitive and responsive but proactive. Being proactive requires a much higher level of intelligence around your data; taking that data to the next level of insight, where the system is really thinking, is AI. The key is anticipation: what is that consumer going to buy, and how are they going to respond? Being more proactive when they purchase a product creates an incredible experience. Again, brand trust is easy to lose. Brands spend decades, even a century, building trust, and then all of a sudden something happens, the internet revolts, and they become completely eviscerated.
It's critical that brands are responsive to what's happening across the social web and monitor it intelligently. How you interface with your consumers across mobile, social, or cloud applications all requires intelligence.
Don Peppers, back in the late '90s, was doing what's called one-to-one marketing, and this was really the onset of the personalized experience. If you're going to enhance consumer engagement, you have to put in front of the consumer something that visually and cognitively gets them excited. Are you creating an emotional response? Without emotion, people do not recall information, so if you don't make an emotional connection with someone, the likelihood that they will engage, especially in this distracted economy, approaches zero.
Getting consumers to recall is very difficult. Good service is expected; the reality is people want to be wowed, and that really comes down to how responsive and intuitive your consumer touchpoints are. Do you know about the individual interacting with you on a day-to-day, month-to-month, or year-to-year basis? A core competency of any company is understanding its consumers and their individual behavior. When a customer interacts with your brand, do you have the ability to recommend or provide insights that help and, again, give them back time? If not, you've really done them a disservice. Your interface has to be fluid, or you create drudgery.
Improving engagement is the big win with artificial intelligence. Systems designed to predict and be intuitive by giving back time will lead their industries. The core question any enterprise must think about is how it will re-engineer around consumer data while capturing data to drive an active feedback loop. This feedback mechanism between consumers and the touchpoint will be the foundation of an AI system.
Helping consumers and creating a frictionless environment, driving proactive actions that actually work, will win your consumers over. The other question you have to ask yourself is how you are utilizing that information to create a robust profile so that you're actually having a conversation with your customer.
You actually know a lot about their history, and you know a lot about what was successful. Start engaging with your consumers by understanding their product usage, and leverage that information to improve the experience. From an AI perspective, you now have a system out there mining data to surface functional clusters of information visually, perhaps vocally, as well as across standard data sets.
Think big here: we're now in such a connected community that with the push of a button I can share a picture with thousands of people, and a whole conversation springs up around your product, in both a positive and a negative sense. How you insert yourself into that conversation proactively is very important.
How you physically help a consumer really comes down to whether you can be intuitive about their needs. Think about personal data assistants, or personal AI assistants. Personal assistants are going to be very intuitive, very smart, and crawl lots of different data sources. These AI assistants will proactively tell you things that simplify your life; for example, you can let them go out and find information for you. These digital assistants are going to be extremely important in people's lives, because time that used to go to mundane check-off tasks can instead be spent on higher cognitive thinking.
If a brand is able to give time back to its consumers, creating an intuitive and frictionless experience, it will become the go-to experience. Your brand will then dominate with that customer, because you're creating not only loyalty but engagement through helping that consumer.
Another area creating lots of buzz is artificial intelligence taking over jobs. Clearly, as a big industry or brand, you don't want to become the job killer in your industry, and that's a big issue for executives thinking about implementing intelligent automation. Everyone is reading everywhere that the robots are going to take over the world; the reality is a question of how you are augmenting your staff. What can you do to enable your staff to be more intelligent, utilizing internal resources to be more efficient and effective when they interface with consumers?
The real crux of this whole equation is how to enhance the experience for consumers while empowering employees to be smarter and faster, creating a symbiotic relationship.
Another thing about artificial intelligence is that you really need to be thinking not in quarters but in decades. The companies that have focused on digital transformation and utilize data intelligence to transform their business will be the disruptors. Are you going to be the leader? Those that have executed AI in their business will have the speed and ability to adapt, enabling them to trump anything that comes to the table. Think AI first.
A picture is worth a thousand words, so in my business, videos are worth tens of thousands of words. We want to find the unique image or video clip within a video segment, or even a long-form video, that really gets consumers excited and engaged. This visual intelligence is critical in my business and very important for many brands. Using visual intelligence, especially in video, for marketing is an incredible opportunity. What you put in front of your consumers, and what you can learn from their engagement with the visual properties within the images themselves, is insight. The ability to adjust video is a competitive advantage, creating a higher order of thinking where you're giving a machine the ability to transform content in real time. That's the artificial intelligence we talked about previously: thinking about how to take your brand and your industry and start making use of all the data that's coming in.
It's all about data in, and quality and consumer engagement out.
Video recommendation and discovery are very hot topics for video publishers looking to drive higher returns on their video lifetime value. Attracting a consumer to watch more videos isn't simple in the attention-deficit society we live in. However, major video publishers are creating better experiences using video intelligence to delight, enhance discovery, and keep you coming back for more. In this post, we'll explore the intelligence behind visual recommendation and how to enhance consumer video discovery.
Google Video Intelligence API demo of video search finding baseball clips within video segments.
Last year we posted on Search Engine Journal, How Deep Learning Powers Video SEO, describing the advantages of video image labeling and how publishers can leverage valuable data that was otherwise trapped in images. Since then, Google announced Video Intelligence at Next '17. (InfiniGraph was honored to be selected as a Google Video Intelligence beta tester.) The MAJOR challenges with Google's cloud offering are pushing all your video over to Google Cloud, the cost of labeling video at volume, and losing control of your data. So how do you do all this on a budget?
Not all data is created equal
Trending content is based on popularity, versus content context and the consumer's content consumption.
And not all video recommendation platforms are created equal. The biggest video publishers are advancing their offerings with intelligence. InfiniGraph is addressing this gap by pairing video intelligence with affordable technology otherwise out of reach.
Outside of do-not-track, creating a truly personalized experience is ideal. VOD/OTT apps create the best path to robust personalization; for the web, a more generalized grouping of consumers is required.
Image-based video recommendation "MANTIS": going beyond simple metadata and trending content to full intelligent context. Powered by KRAKEN.
All video recommendation platforms rely on data entered (called metadata) when the video was uploaded to a content management system: title, description, etc. The other main data points captured are plays, time on video, and completion, indicating watchability. But there is so much more to a video than raw insights. Whether someone watched a video is important, but understanding the why, in the context of other videos with similar content, is intelligence. Many sites have trending videos; however, promoting videos that get lots of plays creates a self-fulfilling prophecy, because trending is artificially amplified and doesn't indicate relevance.
An Intelligent Visual Approach
Going beyond metadata is key to a better consumer experience. Trending only goes so far. Visual recommendation looks at all the content based on consumer actions.
Surfacing the right video at the right time can make all the difference between people staying or going. Leaders like YouTube have already begun to leverage artificial intelligence in their video recommendations, producing 70% greater watch time. Recently they added animated video previews to their thumbnails, pushing take rates even higher. This is more proof that consumers desire intelligent recommendation and slicker visual presentation.
InfiniGraph provides a definitive differentiation by using actions on images and in-depth knowledge of what's in the video segments to build relevance. Consumers know what they like when they see it, and understanding this visual ignition process is key to unlocking the potential of visual recommendation. How do you really know what people would like to play if you don't know much about the video content? Understanding video content and context is the next stage in intelligent video recommendation and personalized discovery.
3 Ways Visual Video Recommendation Drives Video Lifetime Value
1. Visual recommendation – Visual information within video creates higher visual affinity to amplify discovery. Content likeness beyond mere metadata opens up more video content to select from. Mapping what people watch is based on past observation; predicting what people will watch requires understanding video context.
2. Video scoring – A much deeper approach to video had to be invented, where the video is scored based on the visual attributes inside the video and human behavior on those visuals. This scoring lets the content SPEAK FOR ITSELF and enables ordering playlists relative to what was watched.
3. Personalized selection – Enhancing discovery requires greater intelligence and context about what content is being consumed. Environments like OTT or a mobile app enable high levels of personalization. For consumers on the web, a more general approach, clustering consumers into content preferences, powers better results while honoring privacy.
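To make the video scoring idea concrete, here is a minimal sketch, not InfiniGraph's actual algorithm; the segment labels, engagement numbers, and the 70/30 completion-vs-plays weighting are all hypothetical choices for illustration:

```python
# Hypothetical engagement per visual segment of one video:
# each segment carries a play count and a completion rate (0.0-1.0).
segments = [
    {"label": "goal celebration", "plays": 900, "completion": 0.85},
    {"label": "half-time interview", "plays": 200, "completion": 0.40},
    {"label": "penalty kick", "plays": 700, "completion": 0.90},
]

def score_video(segments, completion_weight=0.7):
    """Blend normalized plays and completion rate into a single 0-1 score."""
    if not segments:
        return 0.0
    max_plays = max(s["plays"] for s in segments)
    per_segment = [
        (1 - completion_weight) * (s["plays"] / max_plays)
        + completion_weight * s["completion"]
        for s in segments
    ]
    return sum(per_segment) / len(per_segment)

print(round(score_video(segments), 3))  # 0.702
```

A score like this lets the segments, rather than the upload-time metadata, determine where a video lands in a playlist ordering.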
The Future is Amazing for Video Discovery
We have learned a great deal from innovative companies like Netflix, Hulu, YouTube, and Amazon, which have all come a long way in their approach to advanced video discovery. Large-scale video publishers have a grand opportunity to embrace a new technology wave and stay relevant while creating a visually conducive consumer experience. A major challenge going forward is the speed at which video publishers must adapt if they wish to stay competitive. With InfiniGraph's advanced technologies designed for video publishers, there is hope. Take advantage of this movement and increase your video lifetime value.
Video discovery is one of the best ways to increase video lifetime value. Learning what video content is relevant drives greater time on site.
All video publishers are looking to increase their videos' lifetime value. Creating video can be expensive, and the shelf life of most video is short. Maximizing those video assets and their lifetime value is a top priority. With the advent of new technologies such as video machine learning, publishers can now increase their videos' lifetime value by intelligently generating more time on site. Identifying the best image to lead with (the thumbnail) and recommending relevant videos drive higher lifetime value through user experience and discovery.
This combination of visual identification and recommendation is like the Reese’s of video. By linking technologies like artificial intelligence and real-time video analytics, we’re changing the video game through automated actionable intelligence.
Ryan Shane, our VP of Sales, describes the advantages of knowing which visual (video thumbnail) and context produce the most engagement, and which video business models benefit the most from video machine learning.
Hear from our CEO, Chase McMichael, who talks about the advanced use of machine learning and deep learning to improve video take rates by finding and recommending the right images consumers engage with the most.
Here are two examples of how video machine learning increases revenue on your existing video assets.
Yield Example #1: Pre-roll
If you run pre-roll on your video content, you likely fill it with a combination of direct sales and an RTB network. For this example, assume 10 million video impressions and a 10% CTR, which translates to 1 million video plays each day. That means you are showing 1,000,000 pre-roll ads each day. Now assume that you run KRAKEN on your videos and engagement jumps by 30%, to a 13% CTR. That means you will be showing 1,300,000 pre-roll ads each day. KRAKEN has effectively added 300,000 pre-roll spots for you to fill! This is an example of increasing the video value of your existing consumers.
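The arithmetic above can be sketched as follows; the 10 million impressions and the 30% lift are the example's assumptions, not measured figures:

```python
def preroll_lift(impressions, base_ctr, ctr_lift):
    """Extra daily pre-roll spots gained from a relative CTR lift."""
    base_plays = impressions * base_ctr          # plays (and ads shown) today
    lifted_plays = base_plays * (1 + ctr_lift)   # plays after the CTR lift
    return base_plays, lifted_plays, lifted_plays - base_plays

base, lifted, extra = preroll_lift(10_000_000, 0.10, 0.30)
print(base, lifted, extra)  # 1000000.0 1300000.0 300000.0
```

Because pre-roll runs once per play, every additional play is an additional ad spot, which is why a CTR lift translates directly into inventory.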
Yield Example #2: Premium Content
For our second example, assume you monetize with premium content. You have an advertising client who has given you a budget of $100,000 and expects their video to be shown 5 million times. With your current play rates, you determine it will take four days to achieve that KPI. Instead, you run KRAKEN on their premium content, and engagement jumps 2X. You will hit your client's KPI in only two days. You have now freed up two days of premium content inventory that you can sell to another client! Maximizing your existing video consumers and increasing CTR reduces the need to sell off-network.
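The same back-of-the-envelope logic, using the example's assumed numbers (a 5-million-view KPI at a four-day pace), can be expressed as:

```python
import math

def days_to_kpi(target_views, daily_views):
    """Whole days needed to reach a view target at a given daily play rate."""
    return math.ceil(target_views / daily_views)

daily = 5_000_000 / 4  # current pace: 1.25M plays/day hits the KPI in 4 days
print(days_to_kpi(5_000_000, daily))      # 4 days at the current rate
print(days_to_kpi(5_000_000, daily * 2))  # 2 days after a 2X engagement jump
```

Halving the days-to-KPI is what frees the remaining premium inventory for the next client.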
Below is a side-by-side example of the Guardians of the Galaxy default thumbnail versus KRAKEN rotation powered by deep learning. Boosting click rates generates more primary views, while inserting known response-inducing images into a video recommendation is the logical complement (the Reese's). The two together drive primary and secondary video views.
As you can see from both examples, using KRAKEN increases lifetime value as well as advertising yield from your video assets. Displaying like-based content, sorted by deep learning and video analytics by category, delivers greater relevance. Organizing video into context is key to increasing discovery. Harnessing artificial intelligence for both image selection and recommendation brings together the best of both digital video intelligence worlds.
Bite into a Reese’s and see how you can increase your video lifetime value. Request a demo and we’ll show you.
Being a publisher is a tough gig these days. It’s become a complex world for even the most sophisticated companies. And the curve balls keep coming. Consider just a few of the challenges that face your average publisher today:
Decreasing display rates, compounded by audience migration to mobile, where CPMs are even lower.
Maturing traffic growth on O&O sites.
Pressure to build an audience on social platforms including adding headcount to do so (Snapchat) without any certainty that it will be sufficiently monetizable.
The sad realization that native ads (last year’s savior!) are inefficient to produce, difficult to scale and not easily renewable with advertising partners.
The list goes on…
Of course, the biggest opportunity, and challenge, for publishers is video. Nothing shows more promise for publishers from both a user engagement and business perspective than (mobile) video. It’s a simple formula. When users watch more video on a publisher’s site, they are, by definition, more engaged. More video engagement drives better “time spent” numbers and, of course, higher CPMs.
But the barrier to entry is high, particularly for legacy print publishers. They struggle to convert readers to viewers because creating a consistently high volume of quality video content is expensive and not necessarily a part of their core DNA. Don’t get me wrong. They are certainly creating compelling video, but they have not yet been able to produce it at enough scale to satisfy their audiences. At the other end of the spectrum, video-centric publishers like TV networks that live and breathe video run out of inventory on a continuous basis.
The net result of publishers’ struggle to keep up with consumer demand for quality video is a collective dearth of quality video supply in the market. To put it in culinary terms, premium publishers would sell more donuts if they could, but they just can’t bake enough to satisfy the demand.
So how can you make more donuts? Trust and empower the user!
Rise of Artificial Intelligence
The majority of the buzz at CES this year was about Artificial Intelligence and Machine Learning. The potential for Amazon’s Alexa to enhance the home experience was the shining example of this. In speaking with several seasoned media executives about the AI/machine learning phenomenon, however, I heard a common refrain: “The stuff is cool, but I’m not seeing any real applications for my business yet.” Everyone is pining to figure out a way to unlock user preferences through machine learning in practical ways that they can scale and monetize for their businesses. It is truly the new Holy Grail.
That’s why we at InfiniGraph are so excited about our product KRAKEN. KRAKEN has an immediate and profound impact on video publishing. KRAKEN lets users curate the thumbnails publishers serve and optimizes toward user preference through machine learning in real time. The result? KRAKEN increases click-to-play rates by 30% on average, generating corresponding additional inventory and revenue.
It is a revolutionary application of machine learning that, in execution, makes a one-way, dictatorial publishing style an instant relic. With KRAKEN, the users literally collaborate with the publisher on what images they find most engaging. KRAKEN actually helps you, the publisher, become more responsive to your audience. It’s a better experience and outcome for everyone.
In a world of cool gadgets and futuristic musings, KRAKEN works today in tangible and measurable ways to improve your engagement with your audience. Most importantly, KRAKEN accomplishes this with your current video assets. No disruptive change to your publishing flow. No need to add resources to create more video. Just a machine learning tool that maximizes your video footprint.
In essence, you don’t need to make more donuts. You simply get to serve more of them to your audience. And, KRAKEN does that for you!
Video viewability is a top priority for video publishers who are under pressure to verify that their audience is actually watching advertisers’ content. In a previous post, How Deep Learning Video Sequence Drives Profits, we demonstrated why image sequences draw consumer attention. Advanced technologies such as deep learning are increasing video viewability by identifying and learning which images make people stick with content. This content intelligence is the foundation for advancing video machine learning and improving overall video performance. In this post, we will explore some challenges in viewability and how deep learning is boosting video watch rates.
Side by Side Default Thumbnail vs. KRAKEN Rotation powered by Deep Learning
In the two examples above, which one do you think would increase viewability? The video on the right has images selected by deep learning and automatically adjusted image rotation. It delivered a whopping 120% more plays than the static image on the left, which was chosen by an editor. Higher viewability is validated by the fact that the same video, in the same placement at the same time, achieved a greater audience take rate with images chosen by machine learning.
This boost in video performance was powered by KRAKEN, a video machine learning technology. KRAKEN is designed to understand what visuals (contained in the video) consumers are more likely to engage with based on learning. More views equals more revenue.
A/B testing is required when looking to verify optimization. For decades, video players have been devoid of any intelligence: a ‘dumb’ interface for displaying a video stream to consumers. Without intelligence, the video player was just a bit-pipe. Very basic measurements were taken, such as video starts, completes and views, along with some advanced metrics such as how long a user watched. New thinking was required to be more responsive to the audience and take advantage of the images people react to. Increasing reaction increases viewability.
So how does KRAKEN do its A/B testing? The goal was to create the most accurate measurement foundation possible: test which visuals consumers are more likely to engage with, and measure the crowd’s response to one image vs. another. KRAKEN implements a 90/10 traffic split, whereby 10% of traffic is shown the default thumbnail image (the control) and 90% is shown the KRAKEN-selected images. Now that HTML5 is the standard and Adobe Flash has been deprecated, running A/B tests within video players has been further simplified.
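A minimal sketch of a 90/10 split like this, assuming a deterministic hash of a viewer ID so each viewer keeps a stable bucket across sessions. The function names and numbers are illustrative, not KRAKEN’s actual API:

```python
import hashlib

def assign_bucket(viewer_id: str, control_share: float = 0.10) -> str:
    """Deterministically bucket a viewer: ~10% see the default thumbnail
    (control), ~90% see the machine-selected images (treatment)."""
    digest = hashlib.md5(viewer_id.encode()).hexdigest()
    # Map the 128-bit hash to a stable value in [0, 1).
    fraction = int(digest, 16) / 16**32
    return "control" if fraction < control_share else "treatment"

def ctr(plays: int, impressions: int) -> float:
    return plays / impressions if impressions else 0.0

# Compare the two arms once traffic has accumulated (made-up counts).
arms = {"control":   {"plays": 95,    "impressions": 1_000},
        "treatment": {"plays": 1_170, "impressions": 9_000}}
lift = ctr(**arms["treatment"]) / ctr(**arms["control"]) - 1  # relative CTR lift
```

Hashing the viewer ID, rather than rolling a random number per page view, is what makes the measurement clean: a viewer never flips between control and treatment mid-experiment.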
Making sure a video is “in view” is one thing, but the experience has a great deal to do with legitimate viewability. A bigger question is: Will a person engage and really want to watch? People have a choice to watch content. It’s not that complex. If the content is bad, why would anyone want to watch it? If the site is known for identifying or creating great content then that box can be checked off.
Understanding what visuals make people tick and get engaged is a key factor in increasing viewability. Consumers have affinities to visuals, and those affinities are core to them taking action. Tap into the right images and you will enhance the first impression and the consumer experience.
What is Visual Cognitive Loading?
How the brain recognizes objects – MIT neuroscientists find evidence that the brain’s inferotemporal cortex can identify objects. The right visuals induce a human response, increasing attraction and attention. Photo: MIT
A single image is hard pressed to convey a video story. Yes, an image is worth a thousand words, but some people need more information to get excited. Video is a linear body of work that tells a story. Humans are motivated by emotion, intrigue and action. Sight and motion together create a visual story that can be a turn-on or a turn-off. Finding the right turn-on images that tell the story is golden. Identifying what will draw viewers into a video is priceless.
The human visual cortex, connected to your eyes via the optic nerve, is like a supercomputer. Your ability to detect faces and objects at lightning speed is also how fast someone can be turned off by your video. Digital expectations are high in the age of digital natives. For this very reason, the right visual impression is required to get a video to stick, i.e. “sticky videos”. If your video isn’t sticky, you will lose massive numbers of viewers and be effectively ignored, just like “banner blindness”. The more visual information shown to a person, the higher the probability of inducing an emotional response. Cognitive loading thereby gives viewers more information about what’s in the video. If you’re going to increase viewability, you have to increase cognitive loading. It’s all about whether the content is worthy of their time.
Why Deep Learning
Deep learning layers for object recognition. Understanding what’s in the images is as valuable as the metadata and title. Photo: VICOS
The ability to identify which images work, and why, is a big deal compared with the previous method of “plug and pray”. Systems can now recognize what’s in an image, and linking that information back in real time with consumer behavior creates a very powerful learning environment for video. It’s now possible to create a hierarchical shape vocabulary for multi-class object representation, further expanding a meaningful data layer.
Quality video and accurate measurement are paramount when optimizing video. Many ask: why are KRAKEN images better? They are because using deep learning to select the right starting images increases the probability of nailing the images consumers will want to engage with. Over time, the system gets smarter and optimizes faster. A real-time active feedback mechanism continuously adjusts and sends information back into the algorithm to improve over time.
Because KRAKEN is driven by consumer-curated actions, proactive video image selection is made possible. We assert that optimized thumbnails result in more engaged video watchers, as proven by the increase in video plays. KRAKEN drives viewability and enables publishers to command premium O&O rates as a result.
Viewability or go home
After the Facebook blunder of miscalculating video plays, and other measurement stumbles, major brands have taken notice … if you want to believe this was just a “mistake.” A 3-second auto-play in a feed environment with the audio off isn’t a play, according to Rob Norman of GroupM. The big challenge is that there really isn’t a clear standard, just advice on handling viewability from the IAB. However, big media buyers like GroupM are demanding more, requiring that half of video plays be click-to-play to meet their viewability standard. This is a wake-up call for video publishers to get very serious about viewability, and for advertisers to create better content. All agree viewability is a top KPI when judging a campaign’s effectiveness. 2017 is going to be an exciting year to watch how advertisers and publishers work together to increase video viewability. See The State of Video Ad Viewability in 5 Charts as the conversation heats up.
VIDEO – Better User Experience, Time on Site and Converting Readers into Viewers.
Video Optimization With Machine Learning is now a reality and publishers are intelligently making the most out of their O&O digital assets. The digital video industry is undergoing a transformation and machine learning is advancing the video user experience. Mobile, combined with video, is truly the definitive on-demand platform making it the fastest growing sector in digital content distribution.
Video machine learning is a new field. The ability to crowdsource massive human interactions on video content has created a new data set. We’re tapping into a small part of the human collective consciousness for the first time. Publishers and media broadcasters are now going beyond video views, clicks, and completions to gain insight into the video objects, orientations and types of movement that induce a positive cognitive response. This human cognitive response is the ultimate measurement of relevance, where humans interact with video in a much more profound way. In this article, we will dive deep into the four drivers of video machine learning.
Video by its nature is linear; however, there are several companies working to personalize the video experience as well as make it live. We’re now in an age where the peak of hype around Virtual Reality / Augmented Reality promises the most immersive experience. All of these forms of video have two things in common: moving sights and sound. Humans by nature prefer video because this is how we see the world around us. The bulk of video consumed globally is designed around a linear body of work that tells a story. The fact that a video is just a series of images connected together is not something people think much about. In the days of film, seeing a real film strip from a movie reel made it obvious that each frame was in fact a still image. Fast forward: digital video still has frames, but those frames are made up of 1s and 0s. “Digital” opens the door to advanced mathematics and image/object recognition technologies that process these images into more meaning than just a static picture.
It’s hard to believe how important images really are. For videos placed “above the fold,” you have to wonder why so many videos have such a low play rate to begin with (video start CTR). Consumers process objects in images within 13 milliseconds (0.013 seconds). That’s FAST! Capturing cognitive attention has to happen extremely fast for a human to commit to watching a video, and the first image is important, but not everything. More than one image is sometimes required to assure a positive cognitive response. The reality is that people are just flat-out dismissive, and some decide not to play the video. This is evident when you have a 10% CTR, which means 90% of your audience OPTED OUT OF PLAYING THE VIDEO. What happened? The first image may have been great, but it didn’t create a full mental picture of what was possible in the linear body of work. You’re not going to get 100% play rates; however, providing greater cognitive stimulation that builds relevance will give viewers greater reason to commit time to watching a linear form of video.
Machine Learning and Algorithms
In the last 4 years, machine learning / artificial intelligence has exploded with new algorithms and advanced computing power has greatly reduced the cost of complex computations. Machine learning is transforming the way information is being interpreted and used to gain actionable insights. With the recent open sourcing of TensorFlow from Google and advances in Torch from Facebook, these machine learning platforms have truly disrupted the entire artificial intelligence industry.
Feature extraction and classification are key to learning what in the image is achieving a positive response.
Major hardware providers, such as NVIDIA, have ushered in massive advancements in the machine learning and AI fields that would otherwise have been out of reach. The democratization of machine learning is now opening the door for many small teams to propel product development around meaningful algorithmic approaches.
The unique properties of digital video, specifically in a consumer’s mobile feed delivered from a video publishing site, create a perfect window into how consumers snack on content. If you want to see hyper-snacking, ride a train into a city or watch kids on their smartphones. Digital content consumption has never been as interactive as it is now. All digital publishers and broadcasters have to ask themselves this question: “How is my content going to get traction with this type of behavior?” If your audience is Snapchatters, YouTubers, or Instagramers, you’re going to have to provide more value in your content V I S U A L L Y or you will lose them in a split second.
Graphs – Video views (mobile: KMView / desktop: KDView) vs. minutes in a day (1440 min = 24 hrs). Mobile dominates the weekend, whereas during the work week usage skyrockets during the commute and after work. Is your video content adapting to this behavior?
Video Publishing Conundrum
A big conundrum is why people are not playing videos. This required further investigation. We found that the lead image (i.e. the old-school “thumbnail”, or “poster image”) had a huge impact on introducing a cognitive response. In the mobile world, video is still a consumer-driven response, and we hope this will stay a click-to-play world. We believe consumer choice and control will always win the day. For video publishers under the revenue gun, the temptation is real, but consumers will quickly tire of native ad content tricks, in-stream (auto-play) video, and the bludgeoning force-feeding of video on the desktop. No wonder ad blocking is at an all-time high! A whole industry has cropped up around blocking ads, and it’s an all-out war. The sad part is that the consumer is stuck in the middle.
Many publishers are using desktop video auto-play to reduce friction; however, the FRONT of the page, video carousel, or gallery is a click-to-launch environment, making the images on the published page even more important. Those fronts are the main traffic driver over possible social-share amplification. As for mobile video, it’s still a click-to-play world for the majority of broadcasters and publishers. Video is the highest-engaging vehicle at their disposal, which is why so many publishers are pushing themselves to create more video content. Publishing more video-oriented content is great; however, the lack of knowledge of what consumers emotionally respond to has been a major gap. A post-and-pray, or post-and-measure-later, approach is currently prevalent throughout the publishing industry.
Video Quality matters
Creating a better consumer experience is everything if you want your content to be consumed in the days when auto-play is rampant and force-fed content passes for engagement. More brands demand measured engagement. Video engagement quality is measured by starts, time spent on the video, and physical actions taken. Capturing human attention is very hard amid so many distractions, especially on a mobile device. We’re in a phase where the majority of connected humans are digital natives facing a digital deluge, and ADD is at an all-time high. With less than 0.25 seconds to engage consumers before they have formulated the video’s story line in their minds, the task is hard. A quick peek at the video thumbnail, a fast read of a headline and a glance at some keywords could be standing between you and a revenue-generating video play. People are pressed for time and unwilling to commit to a video play unless it induces a real cognitive response. Translating readers into video viewers is important; keeping them is even more important.
Mobile Video and Machine Learning
Mobile is becoming the prevalent method of on-demand video access. The combination of video and mobile is an explosive pair, and most likely the most powerful marketing conduit ever created. Here we have investigated how machine learning algorithms applied to images can provide real-time insight and decision support to catch the consumer’s attention and recover video yield otherwise lost. The big challenge with video is that it is created in a linear format, loaded into a CMS, put up for publishing, and then the publisher prays it gets traction. Promotion helps and placement matters; however, there is really nothing a publisher can do to adjust the video content once it’s out. Enter video intelligence. The ability to measure video engagement in real time is a game changer. Enabling intelligence within video seems intuitive; however, the complexity of encoding and decoding video has created a sufficient barrier to entry that this area of video intelligence has been otherwise untapped.
How and Why KRAKEN Works
Here we dive deep into how consumers respond to certain visual objects before a video is ever played. InfiniGraph invented a technology called KRAKEN that shows a series of images; the series itself, which we call “image rotation”, is not really new. What’s new is the selection of those images using machine learning algorithms, allowing us to adjust them to achieve the highest human response possible.
GRAPH – Lift by KRAKEN on mobile (KMLIFT) vs. desktop (KDLIFT) on the same day. NOTE the groupings prior to and after lunch had an overall higher boost from KRAKEN. We attribute this behavior to less distraction.
As more images are processed by KRAKEN, the system becomes smarter, selecting better lead images and driving higher video efficiency. Choosing the order in which to sequence the best images is another part of the learning mechanism. Image sequencing is derived from a collection of 1 to 4 images, selected based on KRAKEN’s ranking linked with human actions. The visuals that achieved the highest degree of engagement receive a higher KRAKEN rank. The sequence itself also creates a visual story, maximizing the limited time available to capture a consumer’s attention.
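One way to think about this select-and-rank loop is as a multi-armed bandit over candidate thumbnails: show each candidate, record click-throughs, and shift traffic toward the winners. The sketch below uses a simple epsilon-greedy policy; KRAKEN’s actual algorithm is not public, so the class and its structure are purely illustrative:

```python
import random

class ThumbnailBandit:
    """Epsilon-greedy selection over candidate thumbnails for one video."""

    def __init__(self, candidates, epsilon=0.1):
        self.epsilon = epsilon
        self.stats = {c: {"shows": 0, "clicks": 0} for c in candidates}

    def choose(self):
        # Explore occasionally; otherwise exploit the best performer so far.
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats, key=self._rate)

    def record(self, candidate, clicked):
        # Human action feeds straight back into the ranking.
        self.stats[candidate]["shows"] += 1
        self.stats[candidate]["clicks"] += int(clicked)

    def _rate(self, c):
        s = self.stats[c]
        return s["clicks"] / s["shows"] if s["shows"] else 1.0  # optimistic start

    def top_rotation(self, n=4):
        """The 1-4 best images, in rank order, for the rotation sequence."""
        return sorted(self.stats, key=self._rate, reverse=True)[:n]
```

The `top_rotation` output corresponds to the 1-4 image sequence described above: the highest-engagement visuals, ordered by rank, with the exploration share playing the role of the ongoing learning mechanism.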
KRAKEN in Action
KRAKEN determines the best possible thumbnails for any video using machine learning and audience testing. Once it finds the top 1-4 images, it rotates through them to further increase click-to-play rates. It also A/B tests against the original thumbnail to continually show its benefits. Here are 2 real examples:
KRAKEN thumbnails with 273% lift below. What makes a good video lead image unique? We’re asked this question all the time. Why would someone click on one image versus another? These questions are extremely context- and content-dependent. The number of visual objects in the frame has a great deal to do with how humans determine relevance and whether intrigue or desire is induced. The human brain sees shapes first, in black and white; color is a later response, though red has its own visual alerting system. The human brain can process vast sums of visual information fast. Digital real estate on mobile and desktop can be vastly different. A great example is what we call information packaging: a smaller image on a mobile phone may only support 2 or 3 visual objects that a human can quickly recognize and respond to positively, whereas the desktop could support up to 5. Remember, one size doesn’t fit all, especially in mobile video. KRAKEN thumbnails with 217% lift to the left. Trick your brain: black and white photo turns to colour! – Colour: The Spectrum of Science – BBC
4 drivers of video machine learning
Who benefits from video machine learning? The consumer benefits the most, thanks to an improved experience built from a more visually accurate compilation of the video content’s best moments. It’s critical that people get a sense of the video so they commit to playing it and stick around. Obviously the publisher or broadcaster benefits financially from more video consumption, which also yields more social shares.
Color depth: remember, bright colors don’t always yield the best results. Visuals that depict action or motion elicit a higher response. The background can greatly alter color perception, so images with a complementary background let the human eye pick up the colors that best represent what viewers are looking at, creating greater intrigue.
Image sequencing: sequencing the wrong or bad images together doesn’t help; it turns viewers off. The right collection is everything and could be 1 to 4 images. Knowing when to alter or shift the sequence is key to obtaining the highest degree of engagement. The goal is to create a visual story that improves the consumer experience.
Visual processing: the human brain can process vast amounts of visual information fast. Digital real estate on mobile and desktop differs. A great example is what we call “information packaging”: a smaller image on a mobile phone screen may only support 2 or 3 visual objects in view that humans can quickly recognize and respond to positively, whereas the desktop could support up to 5. One size doesn’t fit all, especially in mobile video.
Object classification: understanding what’s in an image and classifying those images provides a library of top-performing images. With the right classification, these images create a unique data set for use in everything from recommendation to prediction. Knowing what’s in the image is just as important as knowing it was acted on.
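The object-classification driver can be sketched as a small library that indexes top-performing images by the labels a classifier assigned them, so future videos can look up which visual objects have historically earned engagement. The labels and frame IDs here are placeholders standing in for the output of a real image classifier:

```python
from collections import defaultdict

class ImageLibrary:
    """Index high-performing thumbnails by the objects a classifier found in them."""

    def __init__(self):
        self._by_label = defaultdict(list)

    def add(self, image_id, labels, engagement_rate):
        # File each image under every object label it contains.
        for label in labels:
            self._by_label[label].append((engagement_rate, image_id))

    def best_for(self, label, n=3):
        """Top-performing images containing a given object, for recommendation."""
        return [img for _, img in sorted(self._by_label[label], reverse=True)[:n]]

lib = ImageLibrary()
lib.add("frame_012", ["face", "car"], 0.14)
lib.add("frame_207", ["face"], 0.09)
lib.add("frame_330", ["car"], 0.11)
```

Pairing each label with its observed engagement rate is what turns a plain tag index into the “unique data set” described above: the library answers not just “which images contain a face?” but “which face images have actually earned clicks?”.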
The first impression is everything (or maybe the second or third, if you are showing a sequence of images). For publishers and digital broadcasters, adapting to their customers’ content consumption preferences and being on the platforms that yield the most will be an ongoing saga. Nurturing your audience and perpetuating their viewing experience will be key as more and more consumers move to mobile. KRAKEN is just the start of using machine learning to create a better user experience in mobile video. We see video intelligence expanding into prediction and VR/AR in the not-too-distant future. As this unique dataset expands, we look forward to your feedback on other exciting use cases and to finding ways to increase the overall yield on your existing video assets.
Tell us what you think and where you see mobile video going in your business.