About Chase McMichael

Here are my most recent posts

3 Ways Video Recommendation Drives Video Lifetime Value

Video recommendation and discovery are hot topics for video publishers looking to drive higher returns on video lifetime value. Attracting a consumer to watch more videos isn’t simple in the attention-deficit society we live in. Gone in 90 seconds, according to Netflix: your audience is one swipe away from another experience. Fluid media shifting is just life. However, video publishers are finding ways to keep consumers engaged using higher video intelligence. Want to make an impact on your consumer experience? Then make it simple to discover and surface relevant video content they find interesting. In this post we’ll explore the intelligence behind visual recommendation and what’s being leveraged to increase video lifetime value. VIDEO: Chase McMichael gives a talk at Intel on how to process massive amounts of video on a budget and why visual computing attracts more attention to video.

Don’t be fooled

Google Video Intelligence Demo At Google Next 17

Google Video Intelligence API demo of video search finding baseball clips within video segments.

Enough with the buzzwords around artificial intelligence, machine learning, and deep learning. What problem are you solving? Is there a learning system and automated method to create a better solution? Last year we posted on Search Engine Journal, How Deep Learning Powers Video SEO, describing the advantages of video image labeling. Since then, Google announced at Next 17 a full video annotation platform called Video Intelligence. (InfiniGraph was honored to be selected as a Google Video Intelligence beta tester.) Beyond Google having huge cloud systems running on chips designed for deep learning (TPUs) to pull this off, this massive video processing capability comes at a cost. We’re still in the very early days of video analysis. The MAJOR challenge with Google’s cloud offering is pushing all your video over to Google Cloud; the second is letting your DATA be used as part of their training set. This is problematic on many levels due to content rights and Google becoming smarter about your video than you are. How do you achieve similar results without all this overhead?
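One low-overhead alternative is to label frames locally with a pretrained ImageNet model instead of shipping footage to a third-party cloud. The snippet below is a minimal sketch using Keras’s pretrained InceptionV3 (the same model family referenced later for YouTube-8M); the frame file name and top-5 output are illustrative assumptions, not any vendor’s pipeline.

    import numpy as np
    from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input, decode_predictions
    from tensorflow.keras.preprocessing import image

    model = InceptionV3(weights="imagenet")   # downloads ImageNet weights on first run

    def label_frame(frame_path, top=5):
        # Load one extracted frame at InceptionV3's expected 299x299 input size.
        img = image.load_img(frame_path, target_size=(299, 299))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        preds = model.predict(x)
        # Returns a list of (class_id, label, score) tuples for the frame.
        return decode_predictions(preds, top=top)[0]

    print(label_frame("frame_0001.jpg"))   # hypothetical frame file

Run against frames sampled from your own library, this gives you a label layer you control, without the content-rights trade-off described above.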

Not all data is created equal

Trending Content - Lacks Image Based Video Machine Learning

Trending content is based on popularity rather than content context and what the consumer actually consumes.

All video publishers have standard metadata attached to their videos when loading them into their CMS. Behavior tracking is very powerful if you have the consumer’s consent, but many consumers don’t want to be tracked if they are not logged into your property. Complicating matters, many homes have communal devices. As for the mobile device (iPhone, etc.), it is VERY PERSONAL and tracking is possible, BUT Apple and Google have taken steps to block third-party tracking. First-party tracking will remain in place; however, a standard has yet to be fully adopted. Gone are the good old days of “dropping a cookie.” Creating a truly personalized experience is ideal, but it depends on the consumer authorizing it and receiving value for giving up their privacy. OTT apps provide the best path to robust personalization. We have learned a great deal from innovative companies like Netflix, Hulu, YouTube, and Amazon, which have all come a long way in their approach to advanced video discovery. So how do you leverage these innovations on a budget?

See how “Netflix knows it has just 90 seconds to convince the user it has something for them to watch before they abandon the service and move on to something else”.

Video recommendation platforms


Image-based video recommendation, “MANTIS”: going beyond simple metadata and trending content to full intelligent context. Powered by KRAKEN.

Not all video recommendation platforms are created equal. The main challenge is that every mousetrap uses virtually the same metadata, and behavior tracking alone does not create meaningful discovery of new content. The heavy reliance on what people have already played assumes popular video must be what everyone wants to watch. Right? Popularity is not a barometer of relevance, and the vast majority of your video content isn’t seen by the majority of your audience. Good video content that lacks engagement will not be surfaced at all. This is your most expensive content. What’s the most expensive table in a restaurant? The empty table.


Going beyond meta data is key to a better consumer experience. Trending only goes so far. Visual recommendation looks at all the content based on consumer actions.

To exacerbate the problem, trending videos are a self-fulfilling prophecy: trending is artificially amplified and doesn’t indicate relevance. Surfacing the right video at the right time can make all the difference in people staying or going. Which videos got played, time on video, and completion indicate watchability and captured interest. There is so much more to a video than raw insights. Whether someone watched a video is important, but understanding the why, in the context of other videos with similar content, is intelligence. YouTube has been recommending videos for a long time but only recently started leveraging AI to build intelligent personalized video playlists, as have Netflix, Hulu, and Amazon to some extent. There are a few third-party platforms in the video recommendation space; however, very few have tapped into visual insights to achieve higher intelligence. Companies like Iris.tv, an early entrant in video recommendation, and later Prizma and BABATOR, all have unique metadata-tracking algorithms designed to entice more people to stay longer, mostly on desktop autoplay video. Now, with the increased demand for viewability and the requirement to verify it, more advanced methods of assuring people are actually watching the content are required. Hence, new thinking on video recommendation was mandated.

An Intelligent Visual Approach

A definitive differentiation is using the images and video segments within the video to build relevance. Consumers know what they like when they see it. Understanding this visual ignition process was key to unlocking the potential of visual recommendation. A visual psychographic map can now be created based on video consumption. How do you really know what people would like to play if you don’t know much about the video content? Understanding the video content and context is the next stage in intelligent video recommendation and personalized discovery. Dissecting the video content and context opens up a new DATA set that was otherwise trapped behind a play button.

3 Ways Visual Video Recommendation Drives Video Lifetime Value

1. Visual recommendation – Visual information within the video creates higher visual affinity to amplify discovery. Content likeness beyond just metadata opens up more video content to select from. Mapping what people watch is based on past observation; predicting what people will watch requires understanding video context.

2. Video scoring – A much deeper approach to video had to be invented, where the video is scored based on visual attribution inside the video and human behavior on those visuals. This scoring lets the content SPEAK FOR ITSELF and enables ordering playlists relative to what was watched (a minimal sketch follows this list).

3. Personalized selection – Enhancing discovery requires greater intelligence and context about what content is being consumed. Depending on the video publisher’s environment, OTT or a mobile app can enable high levels of personalization. For consumers on the web, a more general approach that clusters consumers into content preferences powers better results while honoring privacy.
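To make points 1 and 2 concrete, here is a minimal sketch of how a visual scoring and recommendation pass could look, assuming each video already has a visual embedding (for example, pooled CNN features from its key frames) and basic play/completion counts. The field names and the 70/30 blend weights are illustrative assumptions, not KRAKEN’s actual algorithm.

    import numpy as np

    def cosine(a, b):
        # Cosine similarity between two visual embeddings.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def video_score(candidate, watched):
        # Visual affinity: best similarity to anything the viewer recently watched.
        visual = max(cosine(candidate["embedding"], w["embedding"]) for w in watched)
        # Watchability: how often plays run to completion (guards divide-by-zero).
        watchability = candidate["completes"] / max(candidate["plays"], 1)
        return 0.7 * visual + 0.3 * watchability   # blend weights are assumptions

    def recommend(candidates, watched, k=5):
        # Rank the catalog by blended score and surface the top k videos.
        return sorted(candidates, key=lambda c: video_score(c, watched), reverse=True)[:k]

The key design point is that the ranking uses what is inside the video (the embedding) rather than only metadata or raw popularity, so unseen but visually relevant content can still surface.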

The Future is Amazing for Video Discovery

Google, Amazon, Facebook, and Apple are going head to head with deep video analysis in the cloud. Large-scale video publishers have a grand opportunity to embrace this new technology wave and stay relevant while creating a visually engaging consumer experience. Video annotation has a very bright future using a technology called deep learning. We have come a very long way from single-image labeling via ImageNet. A major challenge going forward is the speed at which video publishers must adapt if they wish to stay competitive. With advanced technologies designed for video publishers, there is hope. Take advantage of this movement and increase your video lifetime value.

Top image from post melding real life with recommendations.

Video Publishers Ready for Video Autoplay Shutdown.

Video publishers have been caught off guard by the recent announcement of Apple blocking video autoplay. Even Google is pushing back on bad web ads. The backlash against video autoplay has been festering for some time. If losing video ad revenue and turning consumers off with declining traffic isn’t a wake-up call, then what will be? Headlines like this from CNN, “Apple’s plan to kill autoplay feature could leave publishers in the dust,” should get video publishers’ attention. This clampdown isn’t a joke, and Google and Apple are taking a hard line on cleaning up the web experience when it comes to video. Here we dive deep into how to get ahead of these changes by Apple and Google and increase your video lifetime value.

Facebook started the conversation

Since Facebook started force-feeding video autoplay on us, other publishers followed suit knowing their video volume would go up. However, some major agencies flat out said they would only pay half the CPMs due to the viewability issues with autoplay. A major advertiser (Heineken) is publicly having challenges getting a 6-second clip to stick. Publishers say the video relationship with Facebook is “complicated.” This is a topic of constant discussion, and other players are outright opting out of video autoplay altogether in favor of a better consumer experience. The major catch-22 here is that publishers driving their O&O strategy can’t think of autoplay as a video strategy; it’s a tactic that, in most cases, turns consumers off. If you want to see some of the consumer backlash, just search Google for “how to turn off autoplay” and you will see that this is most definitely a real consumer pain point. With Apple’s latest release of iOS 11 specifically blocking video autoplay, a more thoughtful and intelligent approach is required.

Video Strategy?


Publishers are responding to consumer demand by giving the option to turn OFF autoplay video.

A video strategy involves deciding to dominate a content category vertically and be the go-to source for the highest-value content in that space. Yes, video is content marketing. People watch video for information, enlightenment, entertainment, etc. Video is a very effective communication tool. Video is mobile and on demand. And being a tool, the publisher has a responsibility to harness and wield that tool surgically rather than as a blunt object that pushes video views without consumer consent or value added for paid advertisers. Some publishers understand this, such as LittleThings Inc. They are disabling video autoplay completely and focusing on consumer experience. This has resulted in higher play rates (CTR) and higher CPMs that can be verified and justified to their customers. The other major benefit was that consumers engaged more.

“We wanted video views to be on the consumer’s terms. By running autoplay, you might [reach your desired] fill rate, but the user is not engaged with the brand the way they would be if they raised their hands to watch the video,” said Justin Festa, chief digital officer for LittleThings, at JW Player’s JW Insights event in New York.

Higher Intelligence

The digital publisher today is going to have to use higher intelligence with consumers. A surgical approach to utilizing data and then presenting it is now a must-have. So what is the benefit of artificial intelligence in video? It is better to start with the question: what is digital video? If we break it down, digital video is just a series of images and sequences spliced together. Humans are visual and have emotional responses to images and context. The story is a major draw, creating a greater emotional response than the affinity one may have to the people alone. A computer that translates all of the above and puts it into context would have to be truly intelligent. This is not something new; Netflix proved you get higher take rates by having the right images, which results in higher consumer engagement.

In the Making

Three years ago, a technology was introduced called KRAKEN. It utilizes video machine learning to select images, replacing the static, non-intelligent thumbnail with interactive dynamic thumbnails: the set of images that drives the highest play rates possible. The rotation of images provides more visual information compared to a single image. Video clipping (GIF) came next; however, it is most effective for action shots. A new way of looking at video thumbnails was required. The solution was creating real-time, responsive, dynamic intelligence and scoring images based on relevance. Finding the best images is one thing; powering video recommendation was a natural fit for finding great images. Learning what collective visuals work together to extend time on site is a major deal for all publishers. We’re living in exciting times, with advances in machine learning and computer chip design having achieved amazing levels of image processing capability. We have experienced a big leap forward in the code foundation (like deep learning) now powering platforms to segment out objects, images, places, and facial recognition. We’re in an artificial intelligence renaissance.

Show me the money


Video recommendation powered by KRAKEN video machine learning: going beyond metadata and plays to the visuals within the video.

It’s no secret that ads still drive the bulk of digital video revenue. For that very reason, each video play and each increase in time on site translates into cold, hard cash. Making the site sticky and getting more repeat visits requires video intelligence. Google and Apple are very serious about protecting the mobile web. It is clear that Google AMP (Accelerated Mobile Pages) has won out with publishers, while Facebook Instant Articles has fallen short and most have abandoned it because it makes less money than AMP. The perfect trifecta of real-time video analytics, intelligent image selection, and video recommendation is now a reality. We have the data and processing power to predict what images get you excited and what video is most relevant to watch. Video discovery is key to increasing video lifetime value.

Conclusion

Are you ready for the do-not-track and non-autoplay world? Like it or not, Google and Apple are disabling video autoplay and intrusive ads. The digital broadcasting publisher has a grand opportunity to leverage machine learning in video. Tapping into visually relevant actions and drawing out behavior is a competitive advantage. Machine learning linked with digital video that maximizes your video assets is a strategic advantage and increases video lifetime value. The video recommendation example above was not possible before machine learning-based video processing made it a reality. What possibilities can you imagine?

How To Increase Video Lifetime Value via Machine Learning


Video discovery is one of the best ways to increase video lifetime value. Learning what video content is relevant increases time on site.

All video publishers are looking to increase their videos’ lifetime value. Creating video can be expensive, and the shelf life of most video is short. Maximizing those video assets and their lifetime value is a top priority. With the advent of new technologies such as video machine learning, publishers can now increase their videos’ lifetime value by intelligently generating more time on site. Identifying the best image to lead with (the thumbnail) and recommending relevant videos drives higher lifetime value through user experience and discovery.

This combination of visual identification and recommendation is like the Reese’s of video. By linking technologies like artificial intelligence and real-time video analytics, we’re changing the video game through automated actionable intelligence.

Ryan Shane, our VP of Sales, describes the advantages of knowing which visual (video thumbnail) and context produce the most engagement, and which video business models benefit the most from video machine learning.

Hear from our CEO, Chase McMichael, who talks about the advanced use of machine learning and deep learning to improve video take rates by finding and recommending the right images consumers engage with the most.

Here are two examples of how video machine learning increases revenue on your existing video assets.

Yield Example #1: Pre-roll

If you run pre-roll on your video content, you likely fill it with a combination of direct sales and an RTB network. For this example, assume you have a 10% CTR, which translates to 1 million video plays each day. That means you are showing 1,000,000 pre-roll ads each day. Now assume that you run KRAKEN on your videos, and engagement jumps by 30% to a 13% CTR. That means you will be showing 1,300,000 pre-roll ads each day. KRAKEN has effectively added 300,000 more pre-roll spots for you to fill! This is an example of increasing video value from your existing consumers.

Yield Example #2: Premium Content

For our second example, assume you monetize with premium content. You have an advertising client who has given you a budget of $100,000 and expects their video to be shown 5 million times. With your current play rates, you determine it will take four days to achieve that KPI. Instead, you run KRAKEN on their premium content, and engagement jumps 2X. You will hit your client’s KPI in only two days. You now have freed up two days of premium content inventory that you can sell to another client! Maximizing your existing video consumers and increasing CTR reduces the need to sell off-network.
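Both yield examples reduce to simple arithmetic; the short sketch below just replays the stated assumptions (a 30% engagement lift in Example 1, a 2X lift in Example 2) so you can plug in your own numbers.

    # Example 1: pre-roll. A 30% engagement lift on 1,000,000 daily plays.
    plays_per_day = 1_000_000
    lifted_plays = int(plays_per_day * 1.30)
    print(lifted_plays - plays_per_day)        # 300000 extra pre-roll spots per day

    # Example 2: premium content. A 2X lift halves the days needed to hit the KPI.
    days_at_current_rate = 4
    print(days_at_current_rate / 2)            # 2.0 days of inventory freed up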

Below is a side-by-side example of the Guardians of the Galaxy default thumbnail vs. KRAKEN rotation powered by deep learning. Boosting click rates generates more primary views, while leveraging known images that induce response is a logical fit for video recommendation (the Reese’s combination). The two together drive primary and secondary video views.

As you can see from both examples, using KRAKEN increases lifetime value as well as advertising yield from your video assets. Displaying like-based content, sorted by deep learning and video analytics by category, delivers greater relevance. Organizing video into context is key to increasing discovery. Harnessing artificial intelligence with image selection and recommendation brings together the best of both digital video intelligence worlds.

Bite into a Reese’s and see how you can increase your video lifetime value.  Request a demo and we’ll show you.

 

How Deep Learning Increases Video Viewability

Video viewability is a top priority for video publishers, who are under pressure to verify that their audience is actually watching advertisers’ content. In a previous post, How Deep Learning Video Sequence Drives Profits, we demonstrated why image sequences draw consumer attention. Advanced technologies such as deep learning are increasing video viewability by identifying and learning which images make people stick with content. This content intelligence is the foundation for advancing video machine learning and improving overall video performance. In this post, we will explore some challenges in viewability and how deep learning is boosting video watch rates.

Side by Side Default Thumbnail vs. KRAKEN Rotation powered by Deep Learning

 

In the two examples above, which one do you think would increase viewability? The video on the right has images selected by deep learning with automatically adjusted image rotation. It delivered a whopping 120% more plays than the static image on the left, which was chosen by an editor. Higher viewability is validated by the fact that the same video, with the same placement at the same time, achieved a greater audience take rate with images chosen by machine learning.

This boost in video performance was powered by KRAKEN, a video machine learning technology. KRAKEN is designed to understand what visuals (contained in the video) consumers are more likely to engage with based on learning. More views equals more revenue.

Measurement

A/B testing is required when looking to verify optimization. For decades, video players have been devoid of any intelligence. They have been a ‘dumb’ interface for displaying a video stream to consumers. Without intelligence, the video player was just a bit-pipe. Very basic measurements were taken, such as video starts, completes, and views, as well as some advanced metrics such as how long a user watched. New thinking was required to be more responsive to the audience and take advantage of which images people reacted to. Increasing reaction increases viewability.

So how does KRAKEN do its A/B testing? The goal was to create the most accurate measurement foundation possible: test which visuals consumers are more likely to engage with and measure the crowd’s response to one image versus another. KRAKEN implements a 90/10 split of traffic, whereby 10% of traffic sees the default thumbnail image (the control) and 90% of traffic sees the KRAKEN-selected images. It is easy to see why testing video performance through A/B testing is possible. Now that HTML5 is the standard and Adobe Flash has been deprecated, running A/B tests within video players has been further simplified.
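Here is a minimal sketch of that 90/10 split, assuming each session is bucketed at render time and a play/no-play event is logged per impression. The function names and in-memory counters are illustrative, not the production implementation.

    import random
    from collections import defaultdict

    impressions = defaultdict(int)
    plays = defaultdict(int)

    def assign_arm():
        # 10% of sessions see the editor-chosen default thumbnail (control),
        # 90% see the machine-selected images.
        return "control" if random.random() < 0.10 else "kraken"

    def record(arm, played):
        impressions[arm] += 1
        plays[arm] += int(played)

    def play_rate(arm):
        return plays[arm] / max(impressions[arm], 1)

    # After enough traffic, compare play_rate("kraken") against play_rate("control").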

User experience

Making sure a video is “in view” is one thing, but the experience has a great deal to do with legitimate viewability. A bigger question is: will a person engage and really want to watch? People have a choice about what content to watch. It’s not that complex. If the content is bad, why would anyone want to watch it? If the site is known for identifying or creating great content, then that box can be checked off.

Understanding which visuals make people tick and get engaged is a key factor in increasing viewability. Consumers have affinities to visuals, and those affinities are core to them taking action. Tap into the right images and you will enhance the first impression and the consumer experience.

What is Visual Cognitive Loading?


How the brain recognizes objects – MIT neuroscientists find evidence that the brain’s inferotemporal cortex can identify objects. Visuals induce human response; using the right visuals increases attraction and attention. Photo: MIT

It is very hard to convey a video’s story with a single image. Yes, an image is worth a thousand words, but some people need more information to get excited. Video is a linear body of work that tells a story. Humans are motivated by emotion, intrigue, and action. The senses of sight and motion create a visual story that can be a turn-on or a turn-off. Finding the right turn-on images that tell a story is golden. Identifying what will draw people into a video is priceless.

The human visual cortex is connected to your eyes via the optic nerve; it’s like a supercomputer. Your ability to detect faces and objects at lightning speed is also how fast someone can get turned off by your video. Digital expectations are high in the age of digital natives. For this very reason, the right visual impression is required to get a video to stick, i.e. “sticky videos.” If your video isn’t sticky, you will lose massive numbers of viewers and be effectively ignored, just like “banner blindness.” The more visual information shown to a person, the higher the probability of inducing an emotional response. Cognitive loading thereby gives them more information about what’s in the video. If you’re going to increase viewability, you have to increase cognitive loading. It’s all about whether the content is worthy of their time.

Why Deep Learning


Deep learning layers of object recognition. Understanding what’s in the images is as valuable as the metadata and title. Photo: VICOS

The ability to identify which images work, and why, is a big deal compared with the previous method of “plug and pray.” Systems can now recognize what’s in an image, and linking that information back in real time with consumer behavior creates a very powerful learning environment for video. It’s now possible to create a hierarchical shape vocabulary for multi-class object representation, further expanding a meaningful data layer.

In our previous post, How Deep Learning Powers Video SEO, we describe the elements behind deep learning in video and the power of object recognition. This same power can be applied to video selection and managing visuals in real time. Both image rotation and full animation (clips) provide maximum visual cognitive loading.

The KRAKEN Hypothesis

Quality video and accurate measurement are paramount when optimizing video. Many ask: why are KRAKEN images better? They are because using deep learning to select the right starting images increases the probability of nailing the images consumers will want to engage with. Over time, the system gets smarter and optimizes faster. A real-time active feedback mechanism continuously adjusts and sends information back into the algorithm to improve it over time.

Because KRAKEN consists of consumer-curated actions, proactive video image selection is made possible. We assert that optimized thumbnails result in more engaged video watchers, as proven by the increase in video plays. KRAKEN drives viewability and enables publishers to command premium O&O rates as a result.

Viewability or go home

After the Facebook blunder of “miscalculating video plays” and other measurement stumbles, major brands have taken notice... if you want to believe this was just a “mistake.” A 3-second autoplay in a feed environment with the audio off isn’t a play, according to Rob Norman of GroupM. The big challenge is that there really isn’t a clear standard, just advice on handling viewability from the IAB. However, big media buyers like GroupM are demanding more and requiring half the video plays to be click-to-play to meet their viewability standard. This is a wake-up call for video publishers to get very serious about viewability, and for advertisers to create better content. All agree viewability is a top KPI when judging a campaign’s effectiveness. 2017 is going to be an exciting year to watch how advertisers and publishers work together to increase video viewability. See The State of Video Ad Viewability in 5 Charts as the conversation heats up.

How Deep Learning Video Sequence Drives Profits

Beyond the deep learning hype, digital video sequencing (clipping) powered by machine learning is driving higher profits. Video publishers use various images (thumbnails, or poster images) to attract readers to watch more video. These thumbnail images are critical, and their visual information has a great impact on video performance. The lead visual in many cases is more important than the headline. More views equals more revenue; it’s that simple. Deep learning is having a significant impact on everything from video visual search to video optimization. Here we explore video sequencing and the power of deep learning.

Having great content is required, but if your audience isn’t watching the video then you’re losing money. Understanding what images resonate with your audience and produce higher watch rates is exactly what KRAKEN does. That’s right: show the right image, sequence or clip to your consumers and you’ll increase the number of videos played. This is proven and measurable behavior as outlined in our case studies. An image is really worth a thousand words.

Below are live examples of KRAKEN in action. Each form is powered by a machine learning selection process. Below we describe the use cases for apex image, image rotation and animation clip.

Animation Clip:

KRAKEN “clips” the video at the point of APEX. Sequences are put together creating a full animation of a scene(s). Boost rates are equal to those from image rotation and can be much higher depending on the content type.

  • PROS:
    • Consumer created clipping points within video
    • Creates more visual information vs. a static image
    • Highlights action scenes
    • Great for mobile and OTT preview
  • CONS:
    • More than one on page can cause distraction
    • Overuse can turn off consumers
    • Too many on page can slow page loading performance (due to size)
    • Mobile LTE is slow and can lead to choppy images instead of a smooth video

Image Rotation:

Image rotation allows for a more complete visual story to be told when compared to a static image. This results in consumers having a better idea of the content in the video. KRAKEN determines the top four most engaging images and then cycles through them. We are seeing mobile video boost rates above 50%.

  • PROS:
    • Smooth visual transition
    • Consumer selected top images
    • Creates a visual story vs. one image to engage more consumers
    • Ideal for mobile and OTT
    • Less bandwidth intensive (Mobile LTE)
  • CONS:
    • Similar to animated clips, publishers should limit multiple placements on a single page

Apex Image:

KRAKEN always finds the best lead image for any placement. This apex image alone creates high levels of play rates, especially in a click-to-launch placement. Average boost rates are between 20% to 30%.

  • PROS:
    • Audience-chosen top image for each placement
    • Can be placed everywhere (including social media)
    • Ideal for desktop
    • Good with mobile and OTT
  • CONS:
    • Static thumbnails have limited visual information
    • Once the apex is found, the image will never be substituted

Below are live KRAKEN animation clip examples. All three animations start with the audience-chosen apex image. Then KRAKEN identifies clipping points (via deep learning) and uses machine learning to adjust to the optimal clipping sequence.


HitFix video deep learning: video clipping to action, with machine learning adjusting in real time.
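The clipping points themselves are described above as deep-learning driven; as a rough stand-in for the idea, the sketch below flags likely cut or action points by measuring how much consecutive frames change, using OpenCV. The downscale resolution and difference threshold are arbitrary assumptions, and this frame-difference heuristic is a simpler substitute, not the actual KRAKEN method.

    import cv2
    import numpy as np

    def candidate_cuts(video_path, threshold=30.0):
        # Flag timestamps where consecutive frames differ sharply (cuts / action).
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        cuts, prev, frame_idx = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Downscale and grayscale so the comparison is cheap and stable.
            gray = cv2.cvtColor(cv2.resize(frame, (160, 90)), cv2.COLOR_BGR2GRAY)
            if prev is not None:
                diff = float(np.mean(cv2.absdiff(gray, prev)))
                if diff > threshold:
                    cuts.append(frame_idx / fps)   # candidate clip point, in seconds
            prev, frame_idx = gray, frame_idx + 1
        cap.release()
        return cuts

The returned timestamps become candidate start points for short clips, which can then be A/B tested the same way thumbnails are.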

Video players have transitioned to HTML5, and mobile consumption of video is the fastest-growing medium. Broadcasters that embrace advanced technologies that adapt to consumer preference will achieve higher returns and, at the same time, create a better consumer experience. The value proposition is simple: if you boost your video performance by 30% (for a video publisher doing 30 million video plays per month), KRAKEN will drive an additional $2.2 million in revenue (see the KRAKEN revenue calculator). This happens with existing video inventory and without additional head count. KRAKEN creates a win-win scenario and will improve its performance as more insights are used to bring prediction and recommendation to consumers, thereby improving the video process.
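The $2.2 million figure implies an effective revenue rate that the post doesn’t state; the arithmetic below shows one set of assumptions (an effective CPM of roughly $20 on the incremental plays, annualized) under which the numbers line up.

    monthly_plays = 30_000_000
    boost = 0.30
    assumed_cpm = 20.0                                 # dollars per 1,000 plays -- an assumption
    extra_plays_per_year = monthly_plays * boost * 12
    print(extra_plays_per_year / 1000 * assumed_cpm)   # ~2,160,000 dollars per year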

How Deep Learning Powers Visual Search

The elusive video search whereby you can search video image context is now possible with advanced technologies like deep learning. It’s very exciting to see video SEO becoming a reality thanks to amazing algorithms and massive computing power. We truly can say a picture is worth 1,000 words!

Content creators have fantasized about video search. For many years, major engineering challenges were a roadblock to comprehending video images directly.

Originally posted on SEJ

Video visual search opens up a whole new field where video is the new HTML. And, the new visual SEO is what’s in the image. We’re in exciting times with new companies dedicated to video visual search. In a previous post, Video Machine Learning: A Content Marketing Revolution, we demonstrated image analysis within video to improve video performance. After one year, we’re now embarking on video visual search via deep learning.

Behind the Deep Curtain


Video clipping powered by KRAKEN video deep learning: identifying relevance within video images to drive higher plays.

Many research groups have collaborated to push the field of deep learning forward. Using an advanced image labeling repository like ImageNet has elevated the deep learning field. The ability to take video, identify what’s in the video frames, and apply descriptions opens up a huge set of visual keywords.

What is deep learning? It is probably the biggest buzzword around, along with AI (artificial intelligence). Deep learning came from advanced math for processing large data sets in a way loosely similar to how the human brain works. The human brain is made up of tons of neurons, and we have long attempted to mimic how these neurons work. Previously, only humans and a few other animals had the ability to do what machines can now do. This is a game changer.

The evolution of what’s called a convolutional neural network, or CNN (aka deep learning), was driven by thought leaders like Yann LeCun (Facebook), Geoffrey Hinton (Google), Andrew Ng (Baidu), and Fei-Fei Li (Director of the Stanford AI Lab and creator of ImageNet). Now the field has exploded, and all major companies have open-sourced their deep learning platforms for running convolutional neural networks in various forms. In an interview with the New York Times, Fei-Fei said, “I consider the pixel data in images and video to be the dark matter of the Internet. We are now starting to illuminate it.” That was back in 2014. For more on the history of machine learning, see the post by Roger Parloff at Fortune.

Big Numbers


Image reduction is key to video deep learning. Image analysis is achieved through crunching big numbers. Photo: image created by Chase McMichael

Think about this: video is a collection of images linked together and played back at 30 frames per second. Analyzing a massive number of frames is a major challenge.

As humans, we see video all the time and our brains are processing those images in real-time. Getting a machine to do this very task at scale is not trivial. Machines processing images is an amazing feat and doing this task in real-time video is even harder. You must decipher shapes, symbols, objects, and meaning. For robotics and self-driving cars this is the holy grail.

Creating a video image classification system required a slightly different approach. You must first handle the enormous number of single frames in a video file to understand what’s in the images.

Visual Search

On September 28th, 2016, the seven-member Google research team announced YouTube-8M, leveraging state-of-the-art deep learning models. YouTube-8M consists of 8 million YouTube videos, equivalent to 500K hours of video, all labeled with 4,800 Knowledge Graph entities. This is a big deal for the video deep learning space. YouTube-8M’s scale required some pre-processing on images to pull frame-level features first. The team used the Inception-V3 image annotation model trained on ImageNet. What makes this such a great thing is that we now have access to a very large video labeling system, and Google did the massive heavy lifting to create 8M.

Top level numbers of YouTube 8M. Photo created by Chase McMichael.

The secret to handling all this big data was reducing the number of frames to be processed. The key is extracting frame-level features from one frame per second, creating a manageable data set. This resulted in 1.9 billion video frames, enabling reasonable handling of the data. At this size you can train a TensorFlow model on a single graphics processing unit (GPU) in one day! In comparison, the full 8M would have required a petabyte of video storage and 24 CPUs of computing power for a year. It’s easy to see why pre-processing was required for video image analysis, and why frame segmenting created a manageable data set.
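The same one-frame-per-second sampling idea is easy to reproduce for your own footage. Here is a minimal OpenCV sketch that keeps roughly one frame per second of video for downstream feature extraction; paths and naming are illustrative assumptions.

    import cv2

    def sample_one_frame_per_second(video_path, out_prefix="frame"):
        cap = cv2.VideoCapture(video_path)
        fps = int(round(cap.get(cv2.CAP_PROP_FPS))) or 30
        saved, idx = 0, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % fps == 0:                 # keep roughly one frame per second
                cv2.imwrite(f"{out_prefix}_{saved:05d}.jpg", frame)
                saved += 1
            idx += 1
        cap.release()
        return saved                           # number of frames written to disk

The sampled frames can then be fed to a labeling model such as the InceptionV3 sketch shown earlier, which is the same reduction step that made YouTube-8M tractable.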

Big Deep Learning Opportunity

 


Chase McMichael gives a talk on video hacking to the ACM, Aug 29th. Photo: Sophia Viklund, used with permission

Google has beautifully created two big parts of the video deep learning trifecta. First, they opened up a video-based labeling system (YouTube-8M). This gives everyone in the industry a leg up in analyzing video; without a labeling system like ImageNet, you would have to do the insane visual analysis on your own. Second, Google opened TensorFlow, their deep learning platform, creating a perfect storm for video deep learning to take off. This is why some call it an artificial intelligence renaissance. Third, we have access to a big data pipeline. For Google this is easy, as they have YouTube. Companies that are creating large amounts of video or user-generated videos will greatly benefit.

The deep learning code and hardware are becoming democratized, and it’s all about the visual pipeline. Having access to a robust data pipeline is the differentiation. Companies that have the data pipeline will create a competitive advantage from this trifecta.

Big Start

Following Google’s lead with TensorFlow, Facebook launched its own open AI platform, FAIR, followed by Baidu. What does this all mean? The visual information disruption is in full motion. We’re in a unique time when machines can see and think. This is the next wave of computing. Video SEO powered by deep learning is on track to be what keywords are to HTML.

Visual search is driving opportunity and lowering technology costs to propel innovation. Video discovery is no longer bound by what’s in a video description (the meta layer). The use cases around deep learning range from medical image processing to self-flying drones, and that is just a start.

Deep learning will have a profound impact on our daily lives in ways we never imagined.

Both Instagram and Snapchat are using sticker overlays based on facial recognition, and Google Photos sorts your photos better than any app out there. Now we’re seeing purchases linked with object recognition at Houzz, leveraging product identification powered by deep learning. The future is bright for deep learning and content creation. Very soon we’ll see artificial intelligence producing and editing video.

How do you see video visual search benefiting you, and what exciting use cases can you imagine?

The feature image is a YouTube-8M web interface screenshot taken by Chase McMichael on September 30th.

Hacking Digital Video Via Deep Learning, A Video Machine Learning Solution


Chase McMichael spoke at the ACM Bay Area Chapter Event on September 29th.

Intro to the Video Deep Learning Talk

Deep learning, image and object recognition are core elements of intelligent video visual analysis. Understanding context within video, and classification, creates a strong use case for video deep learning. Digital video is exploding; however, few are leveraging the wealth of data or know how to harness visual analysis. A true reinforced deep learning system, using collective human intelligence linked with neural networks, provides the foundation for a new level of video insights. We’re just at the beginning of intelligent video and of using this knowledge to improve video performance.



Chase McMichael’s talk at the ACM on Hacking Video Via Deep Learning. Photo: Sophia Viklund

Deep Learning Methods Within Video An End Game Application

We’ll explore the use cases of using deep learning to drive higher video views. The coming Valhalla of video deep learning is being realized in visual object recognition and image classification within video. Mobile video has transformed, and continues to transform, the way video is distributed and consumed.


Big moves

We’re witnessing the largest digital land grab in video history. Mobile video advertising is the fastest-growing segment, projected to account for $25 billion worth of ad spend by 2021. Deep learning and artificial intelligence are also growing within the very same companies that are jockeying for your cognitive attention. This confluence of video and deep learning has created a new standard in higher-performing video content, driving greater engagement, views, and revenue. In this post we’ll dive deep into how video intelligence is changing the mobile video game. Many studies show tablet and smartphone viewing accounted for nearly 40 minutes of daily viewing in 2015, with mobile video continuing to dominate in 2016. Moreover, digital video is set to outpace TV for the first time, and social (Instagram/Snapchat) video is experiencing explosive growth.

 

The Interstellar trailer is a real example of KRAKEN in action; it achieved a 16X improvement in video starts. Real-time A/B testing between the poster image (thumbnail) and selected images pulled from the visual training set provides simultaneous measurement of which images induce engagement. All data and actions are linked with a video machine learning (KRAKEN) algorithm, enabling real-time optimization and sequencing of the right images to achieve the maximum human engagement possible.
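One common way to implement this kind of always-on optimization (as distinct from the fixed 90/10 test described elsewhere) is a simple epsilon-greedy bandit over candidate thumbnails: mostly show the image with the best observed play rate, and occasionally explore the others. This is a generic sketch under those assumptions, not KRAKEN’s actual algorithm.

    import random

    class ThumbnailBandit:
        """Epsilon-greedy selection over candidate thumbnail images."""

        def __init__(self, candidates, epsilon=0.1):
            self.epsilon = epsilon
            self.stats = {c: {"shown": 0, "played": 0} for c in candidates}

        def choose(self):
            if random.random() < self.epsilon:
                return random.choice(list(self.stats))      # explore a random candidate
            return max(self.stats, key=self._play_rate)     # exploit the best so far

        def record(self, candidate, played):
            self.stats[candidate]["shown"] += 1
            self.stats[candidate]["played"] += int(played)

        def _play_rate(self, candidate):
            s = self.stats[candidate]
            return s["played"] / max(s["shown"], 1)

Each impression calls choose(), each play or abandonment calls record(), so the selection keeps adapting as audience behavior shifts.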

How it works

Processing video at large scale and learning from it requires advanced algorithms designed to ingest real-time data. We have now entered the next phase of data insights, going beyond the click and the video play. Video opens the door to video consumption habits, and using machine learning on them enables a competitive advantage.

Consumer experience and time on site are paramount when video is the primary revenue source, as it is for most broadcasting and over-the-top (OTT) sites today, including Netflix, Hulu, Comcast X1, and Amazon. Netflix has already put into production its own version of updating poster images to improve play starts, discovery, and completions.

It’s All Math

Images with higher object density have proven to drive higher engagement. The graph demonstrates that images with high entropy (explained in this video) generated the most attraction. Knowing which images produce a cognitive response is fundamental for video publishers looking to maximize their video assets.

Top 3 video priorities we’re hearing from customers.

1) Revenue is very important, and showing more video increases revenue (especially during peak hours when inventory is already sold out)

2) More video starts means more user time on site

3) Mobile is becoming very important. Increasing mobile video plays is a top priority.

While this is good news overall, it does present a number of new challenges facing video publishers in 2016. One challenge is managing consumer access to content on their terms and across many points. Video consumption is increasingly accessed through multiple entry points throughout the day. These entry points, by their very nature, have context.

Deep Learning

Broadcasters and publishers must consider consumer visual consumption a key insight. These eyeballs (neurons firing) are worth billions of dollars, but it’s no longer a game of looking at web logs. More advanced image analysis to determine which images work with customers requires insight into consumers’ video consumption habits. For digital broadcasters, enabling intelligence where the consumer engages isn’t new. Deep convolutional neural networks power the image identification and other priority algorithms. More details are in the main video.

Motivation

Visual consumer engagement tracking is not something random. Tracking engagement on video has been done for many years, but when it comes to “what” within the video drives engagement, there was a major void. InfiniGraph created KRAKEN to enable video deep learning and fill that void, using machine learning within the video to optimize which images are shown to achieve the best response rates. Interstellar’s 16X boost is a great example of using KRAKEN to drive higher click-to-launch for autoplay on desktop and click-to-play on mobile, resulting in higher revenue and greater video efficiency. Think of KRAKEN as the Optimizely for video.

One question that comes up often is: “Is the image rotation the only thing causing people to click play?” The short answer is no. Rotating arbitrary images is annoying and distracting. KRAKEN finds what the customer likes first and then sequences the images based on measurable events. The right set of images is everything. Once you have the right images you can find the right sequence, and this combination makes all the difference in maximizing play rates. Not using the best visuals will cause higher abandonment rates.

Conclusion

Further advances in deep learning are opening the door to continuous learning and self-improving systems. One area we’re very excited about is visual prediction and recommendation of video. We see a great future in mapping human collective cognitive response to visuals that stimulate and create excitement. Melding the human mind with video intelligence is the next phase for publishers to deliver a better consumer experience.

Top Video Platforms and Video Machine Learning at NAB 2016

Chase McMichael, NAB VIDEO Intro – Top Video Platforms and Video Machine Learning made a big splash at NAB 2016.

The event was all about digital video, video production, VR, drones, and every other technology you could imagine. Think of NAB as the CES of digital and video broadcasting. Everywhere you looked there was drone technology, robotics, and even a full area dedicated to VR. The future of video publishing is bright for sure, as new technology simplifies quality capture and distribution. We took the time to connect with some of our video platform partners at NAB. Our one-on-one interviews were with Ooyala, Brightcove, and Kaltura. Each video platform provided a comprehensive walkthrough of their latest developments and demos. What stood out the most was the big push in over-the-top (OTT) support for broadcasters. OTT was a big theme for many video platforms, and all showed amazing on-demand video technology. Everyone has seen the Netflix and Hulu interfaces, and broadcasters are now becoming serious about OTT. Visuals are everything in OTT interfaces, and using the power of intelligence is a key differentiation. Netflix identifies this fact in “Selecting the best artwork for videos through A/B testing.”

The consumer has gone mobile in a big way, and digital video is taking on TV. Consumers want access to on-demand video wherever they are and on their terms. User experience was a big draw, too. There is no question that lines have been drawn, with rumblings of opening up the set-top box and unbundling the TV. Apple TV and Roku started to look like yesteryear’s technology compared with the OTT interfaces and native mobile app interfaces being demoed. Brightcove released OTT Flow, a very exciting interface for a video library, and we got a first-hand view of a super slick mobile interface for digital video consumption. Kaltura also showed off what they did for Vodafone. The video platforms seem well positioned to serve a TV Everywhere strategy and feed into Apple TV and Roku devices.

Another part of the demonstrations we experienced on each platform was 360 video support. Each player had mouse controls, and Ooyala demonstrated a split-screen view supporting Google Cardboard. There is an exciting future in VR content, and everyone is waiting to see what’s going to come out from a content perspective. Beyond linear video, immersive storytelling has a great future, and we hope that technology doesn’t encumber adoption or create friction for the experience. The speed of video player loading, streaming efficiency, and low buffer rates have always been major competitive advantages when video publishers evaluate platforms.

A big topic was HLS.js, the playback library for Apple’s HLS streaming standard. DASH was also discussed at various booths. All players support HTML5, with a focus on migrating customers away from the old Adobe Flash technology. Every platform demonstrated the use of HLS.js/HTML5. Kaltura showed a real-time side-by-side comparison with an impressive 50% improvement in HTML5 player load speed. Improving load time and streaming will continue to benefit the mobile web and the autoplay world. Video is everywhere and customers are demanding more of it. All video publishing platforms had very well organized video management and publishing capabilities. The big takeaways are that the platforms are focused on simplifying publishing and handling a large volume of video with greater intelligence built in. Obviously, this is important when serving video and creating a better video viewing experience. Here are the top 4 most-mentioned attributes across all the platforms; a small sketch of computing them from playback logs follows the list.

  1. Availability - percentage of times video playback starts successfully
  2. Start Up Time - time between the play button click and playback start
  3. Rebuffers - number of times and the duration of interruptions due to re-buffering
  4. Bitrate - average bits per second of video playback. The higher the bitrate, the better the experience
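As a rough illustration, these four attributes can be computed from per-session playback event logs. The sketch below assumes a simple event shape (started, startup_ms, rebuffer_count, bitrate_kbps), which is an invented structure for this example, not any platform’s API.

    def qoe_summary(sessions):
        # `sessions` is a list of dicts with keys: started, startup_ms,
        # rebuffer_count, bitrate_kbps -- an assumed event shape.
        started = [s for s in sessions if s["started"]]
        n, m = max(len(sessions), 1), max(len(started), 1)
        return {
            "availability": len(started) / n,                                 # 1. successful starts
            "avg_startup_ms": sum(s["startup_ms"] for s in started) / m,      # 2. start-up time
            "avg_rebuffers": sum(s["rebuffer_count"] for s in started) / m,   # 3. rebuffers
            "avg_bitrate_kbps": sum(s["bitrate_kbps"] for s in started) / m,  # 4. bitrate
        }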

All of our conversations centered around using intelligence in thumbnail selection and the process of integration. KRAKEN video machine learning has a bright future with the onslaught of OTT platforms offering more video carousels and indexes as part of the central interface for video discovery. Next up is video prediction (recommendation) and using data to make smarter decisions about what to watch next. There are some very positive results coming from companies like Iris.tv and JW Player. Look for our next post coming from Streaming Media East. Catch more on our last podcast here: “Thumbnails are part of a Video Marketing Strategy.”

Tom Morrissy on KRAKEN – Publisher Perspective on Video Machine Learning

We had the unique opportunity to talk with Tom Morrissy, our Board Advisor, in Times Square, NY. Below is the transcript of Tom and Chase talking about his perspective on publishers and KRAKEN, our video machine learning technology.

Chase: Hi I’m Chase McMichael CEO and Co-founder of InfiniGraph and I’m here today with Tom Morrissy our Board Advisor. Hi Tom.

Tom: Hi Chase.

Chase: So Tom, tell me a little about your experience working with us. Obviously you’re an ex-media guy from SpinMedia, and you know more about the media space than we do, so we’d really like to have your viewpoint on the space and some of the issues. What do we do to really break in? Obviously cracking mobile video is a big deal. So tell me what you think.

Tom: One thing that we’re finding as we talk to different publishers (and this is one of the reasons I joined the company) is that they all have very similar pain points. There’s a certain barrier to entry with video. There’s only so much video they can create, but getting it seen and consumed by consumers is a huge challenge. So, how can we get the consumer excited enough about the content they are viewing to actually click to play, actually watch it, and engage more deeply on these websites? And as a result of that, we create more inventory for the publishers to monetize their video plays.

Really what it comes down to is combining a better user experience, starting with that, and then being able to monetize that better user experience in a much more accelerated way. That’s the magic of what this company has done, in my eyes, because it took the publisher’s view through the user experience. Most ad tech starts with how to make more money off the user; what our technology does is empower the user, and that’s a vital difference relative to all the other technologies I’ve seen.

Having been on the publisher’s side, then on the ad tech side, and then back to the publisher’s side, I was getting email after email, LinkedIn request after LinkedIn request saying, “I’ve got a seamless integration opportunity for you, revenue generating,” and the truth of the matter is there is no seamless integration. Anybody who promises that is not telling the full truth, because we all know that’s not the case.

So the question becomes, as a publisher, what do you want to do, and where do you want to spend your time and resources to integrate? What kind of impact is it going to have on your audience, and then what kind of impact will it ultimately have on your business? That’s what InfiniGraph is taking into account. We created the most frictionless integration opportunity that I’ve seen, by looking at it from the viewpoint of a publisher and figuring out a way to excite the reader and therefore grow the business.

Chase: Yeah, we are just embedding it on the video player; it’s about as low-touch as you can get. We’re not touching the CMS, we don’t change the workflow; that’s a death knell. One of the things you say a lot is “it’s about the content, especially premium content.” I like that we’re really coming at this from a content angle, not the ad angle of how we’re going to juice the user and get the most money out of them. There’s some importance there for some publishers, but the reality is that if they don’t play an incredibly strong content game they are going to lose the audience. So talk a little more about that.

Tom: The truth of the matter is, you don’t have that much time to engage the audience. You can have killer content, but the audience may not choose to view it for whatever reason. You’ve got to figure out ways to engage them and get them into what we call cognitive thinking, making the commitment to watch the content. You can have the best movie, you can have the best video clip, but if nobody watches it, it’s like a tree falling in the forest, right?

Chase: Yeah, that’s a great point, because I know on mobile we’ve been measuring it: you’ve got 0.25 seconds, man, and if you’re not “thumbstopping” the viewer, who is just scrolling and scrolling, you’re going to miss them. The video is a linear body of work and there is so much content there, but you’re starting out with just one image. What do you do?

Tom: Right, you have to figure out what’s going to motivate that person, because think about it: they have chosen to go to that space to watch a video, and ninety percent of them on average are not watching. So why?

Chase: Yes they are bailing.

Tom: How can we make a better experience and make them more motivated to watch the video they came there to watch in the first place? That’s the weirdest thing about this market. That’s what keeps me up at night, and what keeps you up at night, trying to solve for that. I think the promise of what our company is trying to do is really help lead the consumer to the choice they always told themselves they want to make.

Chase: Exactly.

Tom: And if you can do that, then you have a much more robust relationship with your consumer. They spend more time on your site, they watch more video, which is why they’re there, and then you as a publisher make more money. It’s as simple as that.

Chase: Right. Thanks, Tom. It’s great to have you on board, and awesome to be here in the Big Apple. And hey viewers, click on the (i) up on the right here to see some more information, and we’ll be back at you soon. Thank you.

Want to see more? Request a live demo.