Machine Learning, Video Deep Learning and Innovations in Big Data


Paul Burns, Chief Data Scientist at InfiniGraph, talks on video deep learning and machine learning at Idea to IPO's Innovations in Big Data event.

Paul Burns, Chief Data Scientist at InfiniGraph, shares what he has learned from massive video processing and video data analysis aimed at finding which images and clips work best with audiences. He spoke at the Idea to IPO event on Machine Learning, Video Deep Learning and Innovations in Big Data. Below is a quick preview of Paul's insights and approach to machine learning and big data.

Paul Burns, Chief Data Scientist at InfiniGraph, a startup involved in mobile video intelligence: I've had a bit of a varied career, although a purely technical one. I started off in auto-sensing, spending 15 years doing research on RF sensor signal and data processing algorithms. I took a bit of a diverted turn a number of years ago, got a PhD in bioinformatics, and worked in the life sciences, in the genomics and sequencing industry, for about three years. Now I have turned again, into video, so I have a range of experience working with large datasets and learning algorithms, and hopefully I can bring some insights that others here will find useful.

My own personal experience is one in which I've inhabited a space very close to the data source. So when I think about big data, I think about opportunities to find and discover patterns that are not necessarily apparent to an expert, or that can be automatically found and used for prediction, analysis, or monitoring the health and status of sensors at useful levels of effectiveness. There are a lot of differences in how people perceive what big data really is, beyond the common thread that it seems to be a way of thinking about data. And I hate the word data. Data is so non-descriptive, so generic, that it has almost no meaning at all.

I think of data as just information that's stockpiled, and it could be useful if you knew how to go in and sort through the stockpile to find patterns: patterns that persist and can be used for predictive purposes. I think progress was generally slow over many decades, and the explosion in recent years is primarily because of breakthroughs in computer vision and advancements in multi-layer deep neural networks, particularly for processing image and video data.

This is something that's taken place over the last ten years: first with the seminal paper authored by Geoffrey Hinton in 2006, which demonstrated breakthroughs in deep multi-layer neural networks, and then with the work published for the ImageNet competition in 2012 that made a significant advancement in performance over more conventional methods.

I think the major reason why there's all this excitement is because visual perception is so incredibly powerful. That's been an area where we've really struggled to make computers relate to the world and understand and process things that are happening around them. There's this sense that we're on the cusp of a major revolution in autonomy. You can look at all the autonomous vehicles and all the human power and capital being put into those efforts.

Paul answers a question on privacy: Honestly, I think privacy has been dead for some time. The way it should be structured is the way Facebook works: I can choose to opt into Facebook and have a lot of the gory details of my life exposed to the world and to Facebook. What I get out of that is that I'm more closely connected to friends and family, so I choose to opt in because I want that reward. Privacy issues where I don't have the opt-out choice are the most problematic. There was a government program I'm aware of in the Netherlands some years ago: a pilot program where people could opt out of having their hospital care data published in a government database, the purpose of which was to learn and find patterns in health outcomes. That's a little controversial, because the public health benefits of having such a database could be enormous and transformational, so it's a very complicated issue. I'm certainly probably not qualified to speak on this topic. I would say privacy has long since been dead and we kind of have to do the postmortem.

We're very fortunate that so much very high quality research has been published, and that so many excellent data sets and model parameters are available for free download. Starting out, we were working on fairly generic replication of open systems. Object recognition can be done with fairly high quality, free, open source code in a week. That was our starting point: to be able to advertise mobile video by selecting thumbnails that are somehow more enticing for people to click on than the default ones the content owners provide.
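Paul's point that object recognition is now a week's work with free components is easy to demonstrate. Below is a minimal sketch using a pretrained torchvision model; it is purely illustrative, since the talk does not name the stack InfiniGraph actually uses.

```python
# Minimal object recognition with free, pretrained components.
# Illustrative only: the talk does not name InfiniGraph's actual stack.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(pretrained=True)  # ImageNet-trained weights
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("thumbnail.jpg")).unsqueeze(0)
with torch.no_grad():
    probs = model(img).softmax(dim=1)
top5 = torch.topk(probs, k=5)
print(top5.indices, top5.values)  # ImageNet class ids and confidences
```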

As it turns out, this is the idea our co-founders came up with a couple of years ago (KRAKEN video machine learning: how to increase video lifetime value). It's amazing how bad humans are at predicting what other people want to click on. As far as we know, we are the only startup solely focused on this core idea, which sounds like a small business until you consider all the mobile video volume and advertising revenue that's out there and growing.

What I do when I have a hard problem is stockpile as much data as I can to create the most thorough training set possible, and I think the most successful businesses will be the ones that are able to do that. It turns out there are actually companies whose whole business is helping you create training sets for your machine learning applications. We use a variety of methods to do that; crowdsourcing is one common way, but it's really expensive, far more expensive than I thought was even possible. Startups that find a way to harvest rich training sets that are valuable for inference have the potential to be huge winners. It just turns out to be very hard to do.

Another big area is wearable technology for personal health monitoring. I think that's an area with tremendous potential, just because your physician is starving for data. You have to make a point to see your doctor, schedule it, and so on. So what do they do? They weigh you, take your blood pressure, and ask how old you are. That's about it. That's nothing; they do not know what's going on with you. Maybe it's personality dependent, but I would be very much in favor of disclosing all kinds of biometric information about myself if it were continuously recorded, stockpiled in a database, and repeatedly scanned by intelligent agents for anomalies, with doctor's appointments automatically scheduled for me. The same goes for any complicated piece of machinery: it could be a car, it could be parts of your business. This kind of invasive monitoring will come with resistance, but it could be unleashed as people see the value in disclosing.

See the full panel here: Idea to IPO

Making More Donuts

Being a publisher is a tough gig these days. It's become a complex world for even the most sophisticated companies. And the curve balls keep coming. Consider just a few of the challenges that face your average publisher today:

  • Ad blocking.
  • Viewability and measurement.
  • Decreasing display rates, married with audience migration to mobile and its even lower CPMs.
  • Maturing traffic growth on O&O sites.
  • Pressure to build an audience on social platforms, including adding headcount to do so (Snapchat), without any certainty that it will be sufficiently monetizable.
  • The sad realization that native ads (last year's savior!) are inefficient to produce, difficult to scale, and not easily renewable with advertising partners.

The list goes on…

The Challenge

Of course, the biggest opportunity, and challenge, for publishers is video. Nothing shows more promise for publishers from both a user engagement and business perspective than (mobile) video. It's a simple formula: when users watch more video on a publisher's site, they are, by definition, more engaged. More video engagement drives better "time spent" numbers and, of course, higher CPMs.

But the barrier to entry is high, particularly for legacy print publishers. They struggle to convert readers to viewers because creating a consistently high volume of quality video content is expensive and not necessarily a part of their core DNA. Don't get me wrong: they are certainly creating compelling video, but they have not yet been able to produce it at enough scale to satisfy their audiences. At the other end of the spectrum, video-centric publishers like TV networks that live and breathe video run out of inventory on a continuous basis.

The combined result of publishers' struggle to keep up with consumer demand for quality video is a collective dearth of quality video supply in the market. To put it in culinary terms, premium publishers would sell more donuts if they could, but they just can't bake enough to satisfy the demand.

So how can you make more donuts?
Trust and empower the user! 


Rise of Artificial Intelligence

The majority of the buzz at CES this year was about Artificial Intelligence and Machine Learning.  The potential for Amazon’s Alexa to enhance the home experience was the shining example of this.  In speaking with several seasoned media executives about the AI/machine learning phenomenon, however, I heard a common refrain:  “The stuff is cool, but I’m not seeing any real applications for my business yet.”  Everyone is pining to figure out a way to unlock user preferences through machine learning in practical ways that they can scale and monetize for their businesses.  It is truly the new Holy Grail.

The Solution

That's why we at InfiniGraph are so excited about our product KRAKEN. KRAKEN has an immediate and profound impact on video publishing. KRAKEN lets users curate the thumbnails publishers serve and optimizes toward user preference through machine learning in real time. The result? KRAKEN increases click-to-play rates by 30% on average, with corresponding additional inventory and revenue.

It is a revolutionary application of machine learning that, in execution, makes a one-way, dictatorial publishing style an instant relic. With KRAKEN, users literally collaborate with the publisher on what images they find most engaging. KRAKEN actually helps you, the publisher, become more responsive to your audience. It's a better experience and outcome for everyone.

The Future…Now!

In a world of cool gadgets and futuristic musings, KRAKEN works today in tangible and measurable ways to improve your engagement with your audience.  Most importantly, KRAKEN accomplishes this with your current video assets. No disruptive change to your publishing flow. No need to add resources to create more video. Just a machine learning tool that maximizes your video footprint.  

In essence, you don’t need to make more donuts.  You simply get to serve more of them to your audience.  And, KRAKEN does that for you!


For more information about InfiniGraph, you can contact me at tom.morrissy@infinigraph.com or read my last blog post, AdTech? Think "User Tech" For a Better Video Experience.


How Deep Learning Increases Video Viewability

Video viewability is a top priority for video publishers, who are under pressure to verify that their audience is actually watching advertisers' content. In a previous post, How Deep Learning Video Sequence Drives Profits, we demonstrated why image sequences draw consumer attention. Advanced technologies such as deep learning are increasing video viewability by identifying and learning which images make people stick to content. This content intelligence is the foundation for advancing video machine learning and improving overall video performance. In this post, we will explore some challenges in viewability and how deep learning is boosting video watch rates.

Side by side: default thumbnail vs. KRAKEN rotation powered by deep learning


In the two examples above, which one do you think would increase viewability? The video on the right has images selected by deep learning, with automatically adjusted image rotation. It delivered a whopping 120% more plays than the static image on the left, which was chosen by an editor. Higher viewability is validated by the fact that the same video, in the same placement at the same time, achieved a greater audience take rate with images chosen by machine learning.

This boost in video performance was powered by KRAKEN, a video machine learning technology. KRAKEN is designed to learn which visuals (contained in the video) consumers are more likely to engage with. More views equal more revenue.

Measurement

A/B testing is required when looking to verify optimization. For decades, video players have been devoid of any intelligence: a "dumb" interface for displaying a video stream to consumers. Without intelligence, the video player was just a bit-pipe. Very basic measurements were taken, such as video starts, completes, and views, along with some advanced metrics such as how long a user watched. New thinking was required to be more responsive to the audience and take advantage of which images people react to. Increasing reaction increases viewability.

So how does KRAKEN do its A/B testing? The goal was to create the most accurate measurement foundation possible: test which visuals consumers are more likely to engage with and measure the crowd's response to one image versus another. KRAKEN implements a 90/10 split of traffic, whereby 10% of traffic is shown the default thumbnail image (the control) and 90% is shown the KRAKEN-selected images. Now that HTML5 is the standard and Adobe Flash has been deprecated, running A/B tests within video players has been further simplified.
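As an illustration of how such a split might be wired up, here is a minimal sketch; the function and storage names are hypothetical, since the post does not publish KRAKEN's internals. Hashing the viewer id keeps each visitor in a stable bucket.

```python
# Hypothetical sketch of a 90/10 thumbnail split test.
import hashlib

def bucket(viewer_id: str) -> str:
    """Deterministically assign a viewer to the 10% control or 90% treatment."""
    h = int(hashlib.sha256(viewer_id.encode()).hexdigest(), 16)
    return "control" if h % 100 < 10 else "kraken"

# Tally impressions and plays per arm, then compare click-to-play rates.
stats = {"control": {"impressions": 0, "plays": 0},
         "kraken": {"impressions": 0, "plays": 0}}

def record_impression(viewer_id: str) -> None:
    stats[bucket(viewer_id)]["impressions"] += 1

def record_play(viewer_id: str) -> None:
    stats[bucket(viewer_id)]["plays"] += 1

def click_to_play(arm: str) -> float:
    s = stats[arm]
    return s["plays"] / s["impressions"] if s["impressions"] else 0.0
```

With both rates in hand, the lift is simply click_to_play("kraken") relative to click_to_play("control").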

User experience

Making sure a video is "in view" is one thing, but the experience has a great deal to do with legitimate viewability. A bigger question is: will a person engage and really want to watch? People have a choice to watch content. It's not that complex: if the content is bad, why would anyone want to watch it? If the site is known for identifying or creating great content, then that box can be checked off.

Understanding which visuals make people tick and engage is a key factor in increasing viewability. Consumers have affinities for visuals, and those affinities are core to their taking action. Tap into the right images and you will enhance the first impression and the consumer experience.

What is Visual Cognitive Loading?


How the brain recognizes objects: MIT neuroscientists find evidence that the brain's inferotemporal cortex can identify objects. Visuals induce a human response; using the right visuals increases attraction and attention. Photo: MIT

It is very hard to convey a video story with a single image. Yes, an image is worth a thousand words, but some people need more information to get excited. Video is a linear body of work that tells a story. Humans are motivated by emotion, intrigue, and action. The senses of sight and motion create a visual story that can be a turn-on or a turn-off. Finding the right turn-on images that tell a story is golden. Identifying what will draw viewers into a video is priceless.

The human visual cortex is connected to your eyes via the optic nerve; it's like a supercomputer. Your ability to detect faces and objects at lightning speed is also how fast someone can get turned off by your video. Digital expectations are high in the age of digital natives. For this very reason, the right visual impression is required to get a video to stick, i.e. "sticky videos". If your video isn't sticky, you will lose massive numbers of viewers and be effectively ignored, just like "banner blindness". The more visual information shown to a person, the higher the probability of inducing an emotional response. Cognitive loading thereby gives viewers more information about what's in the video. If you're going to increase viewability, you have to increase cognitive loading. It's all about whether the content is worthy of their time.

Why Deep Learning

Deep learning layers of object recognition. Understanding what's in the images is as valuable as the metadata and title. Photo: VICOS

The ability to identify which images work, and why, is a big deal compared with the previous method of "plug and pray". Systems can now recognize what's in an image, and linking that information back in real time with consumer behavior creates a very powerful learning environment for video. It's now possible to create a hierarchical shape vocabulary for multi-class object representation, further expanding a meaningful data layer.

In our previous post, How Deep Learning Powers Video SEO, we describe the elements behind deep learning in video and the power of object recognition. This same power can be applied to video selection and managing visuals in real time. Both image rotation and full animation (clips) provide maximum visual cognitive loading.

The KRAKEN Hypothesis

Quality video and accurate measurement are paramount when optimizing video. Many ask: why are KRAKEN images better? They are because using deep learning to select the right starting images increases the probability of nailing the images consumers will want to engage with. Over time, the system gets smarter and optimizes faster. A real-time active feedback mechanism continuously adjusts and sends information back into the algorithm, improving it over time.
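The post does not disclose KRAKEN's actual algorithm, but the "active feedback mechanism" described above reads like a multi-armed bandit. Here is a minimal epsilon-greedy sketch of that idea, with hypothetical names, to make the feedback loop concrete:

```python
# Assumed epsilon-greedy bandit over candidate thumbnails.
# A plausible stand-in; KRAKEN's real algorithm is not published.
import random

class ThumbnailBandit:
    def __init__(self, candidates, epsilon=0.1):
        self.epsilon = epsilon
        self.shown = {c: 0 for c in candidates}   # impressions per thumbnail
        self.played = {c: 0 for c in candidates}  # plays per thumbnail

    def rate(self, thumb):
        # Untried thumbnails get priority so every candidate is explored.
        if self.shown[thumb] == 0:
            return float("inf")
        return self.played[thumb] / self.shown[thumb]

    def choose(self):
        if random.random() < self.epsilon:      # explore occasionally
            return random.choice(list(self.shown))
        return max(self.shown, key=self.rate)   # exploit the best rate so far

    def feedback(self, thumb, played):
        self.shown[thumb] += 1
        if played:
            self.played[thumb] += 1
```

Each impression calls choose(), each observed play (or non-play) calls feedback(), and over time the highest click-to-play thumbnail wins most of the traffic.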

Because KRAKEN consists of consumer-curated actions, proactive video image selection is made possible. We assert that optimized thumbnails result in more engaged video watchers, as proven by the increase in video plays. KRAKEN drives viewability and, as a result, enables publishers to command premium O&O rates.

Viewability or go home

After the Facebook blunder of "miscalculating video plays" and other measurement stumbles, major brands have taken notice... if you want to believe this was just a "mistake." A 3-second play on autoplay isn't a play in a feed environment when audio is off, according to Rob Norman of GroupM. The big challenge is that there really isn't a clear standard, just advice on handling viewability from the IAB. However, big media buyers like GroupM are demanding more, requiring half of video plays to be click-to-play to meet their viewability standard. This is a wake-up call for video publishers to get very serious about viewability, and for advertisers to create better content. All agree viewability is a top KPI when judging a campaign's effectiveness. 2017 is going to be an exciting year to watch how advertisers and publishers work together to increase video viewability. See The State of Video Ad Viewability in 5 Charts as the conversation heats up.

How Deep Learning Video Sequence Drives Profits

Beyond the deep learning hype, digital video sequencing (clipping) powered by machine learning is driving higher profits. Video publishers use various images (thumbnails, or poster images) to attract readers to watch more video. These thumbnail images are critical, and their visual information has a great impact on video performance. The lead visual in many cases is more important than the headline. More views equal more revenue; it's that simple. Deep learning is having a significant impact on everything from video visual search to video optimization. Here we explore video sequencing and the power of deep learning.

Having great content is required, but if your audience isn't watching the video, then you're losing money. Understanding which images resonate with your audience and produce higher watch rates is exactly what KRAKEN does. That's right: show the right image, sequence, or clip to your consumers and you'll increase the number of videos played. This is proven and measurable behavior, as outlined in our case studies. An image really is worth a thousand words.

Below are live examples of KRAKEN in action. Each form is powered by a machine learning selection process. We describe the use cases for the apex image, image rotation, and animation clips.

Animation Clip:

KRAKEN "clips" the video at the point of apex. Sequences are put together, creating a full animation of a scene (or scenes). Boost rates are equal to those from image rotation and can be much higher depending on the content type.

  • PROS:
    • Consumer created clipping points within video
    • Creates more visual information vs. a static image
    • Highlights action scenes
    • Great for mobile and OTT preview
  • CONS:
    • More than one on page can cause distraction
    • Overuse can turn off consumers
    • Too many on page can slow page loading performance (due to size)
    • Mobile LTE is slow and can lead to choppy images instead of a smooth video

Image Rotation:

Image rotation allows a more complete visual story to be told than a static image can, so consumers get a better idea of the content in the video. KRAKEN determines the four most engaging images and then cycles through them. We are seeing mobile video boost rates above 50%.

  • PROS:
    • Smooth visual transition
    • Consumer selected top images
    • Creates a visual story vs. one image to engage more consumers
    • Ideal for mobile and OTT
    • Less bandwidth intensive (Mobile LTE)
  • CONS:
    • Similar to animated clips, publishers should limit multiple placements on a single page

Apex Image:

KRAKEN always finds the best lead image for any placement. The apex image alone creates high play rates, especially in a click-to-launch placement. Average boost rates are between 20% and 30%.

  • PROS:
    • Audience-chosen top image for each placement
    • Can be placed everywhere (including social media)
    • Ideal for desktop
    • Good with mobile and OTT
  • CONS:
    • Static thumbnails have limited visual information
    • Once the apex is found, the image will never be substituted

Below are live KRAKEN animation clip examples. All three animations start with the audience choosing the apex image. Then KRAKEN identifies clipping points (via deep learning) and uses machine learning to adjust to the optimal clipping sequence.

HitFix video deep learning: video clipping to action, with machine learning adjusting in real time
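To make the clipping step concrete, here is a minimal sketch of cutting a silent preview clip centered on a detected apex timestamp with stock ffmpeg. The file names, apex time, and clip length are placeholders; KRAKEN's actual pipeline is not published.

```python
# Cut a short, silent preview clip around an apex timestamp with ffmpeg.
# apex_s and clip_len_s are placeholders; real clipping points would come
# from the learning system.
import subprocess

def cut_clip(src: str, dst: str, apex_s: float, clip_len_s: float = 3.0) -> None:
    start = max(apex_s - clip_len_s / 2, 0.0)  # center the clip on the apex
    subprocess.run([
        "ffmpeg", "-y",
        "-ss", f"{start:.2f}",      # seek to clip start
        "-t", f"{clip_len_s:.2f}",  # clip duration in seconds
        "-i", src,
        "-an",                      # previews play silently
        dst,
    ], check=True)

cut_clip("episode.mp4", "preview.mp4", apex_s=42.5)
```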

Video players have transitioned to HTML5, and mobile consumption of video is the fastest growing medium. Broadcasters that embrace advanced technologies that adapt to consumer preference will achieve higher returns and, at the same time, create a better consumer experience. The value proposition is simple: if you boost your video performance by 30% (for a video publisher doing 30 million video plays per month), KRAKEN will drive an additional $2.2 million in revenue (see the KRAKEN revenue calculator). This happens with existing video inventory and without additional head count. KRAKEN creates a win-win scenario and will improve its performance as more insights are used to bring prediction and recommendation to consumers, thereby increasing video plays.
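As a rough back-of-the-envelope check on that figure (the calculator's inputs are not published, so assume roughly a $20 effective pre-roll CPM and an annual horizon): a 30% boost on 30 million monthly plays is 9 million extra plays per month, or 108 million per year; 108,000 thousand-play units at $20 each comes to about $2.16 million, in line with the $2.2 million claim.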

How Deep Learning Powers Visual Search

Elusive video search, whereby you can search video image context, is now possible with advanced technologies like deep learning. It's very exciting to see video SEO becoming a reality thanks to amazing algorithms and massive computing power. We truly can say a picture is worth a thousand words!

Content creators have fantasized about true video search. For many years, major engineering challenges were a roadblock to comprehending video images directly.

Originally posted on SEJ

Video visual search opens up a whole new field where video is the new HTML, and the new visual SEO is what's in the image. We're in exciting times, with new companies dedicated to video visual search. In a previous post, Video Machine Learning: A Content Marketing Revolution, we demonstrated image analysis within video to improve video performance. One year later, we're embarking on video visual search via deep learning.

Behind the Deep Curtain


Video clipping powered by KRAKEN video deep learning: identifying relevance within video images to drive more plays

Many research groups have collaborated to push the field of deep learning forward. Advanced image labeling repositories like ImageNet have elevated the field. The ability to take video, identify what's in the video frames, and apply descriptions opens up a huge space of visual keywords.

What is deep learning? It is probably the biggest buzzword around, along with AI (artificial intelligence). Deep learning came from advanced math for processing large data sets in a way loosely similar to how the human brain works. The human brain is made up of billions of neurons, and we have long attempted to mimic how those neurons work. Previously, only humans and a few other animals could do what machines can now do. This is a game changer.
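As a concrete picture of those stacked layers of artificial neurons, here is a toy convolutional network in PyTorch. It is purely illustrative; any production model is far larger and trained on millions of images.

```python
# Toy convolutional neural network: stacked layers of simple units,
# loosely inspired by neurons feeding into one another. Illustrative only.
import torch.nn as nn

tiny_cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn low-level edges and colors
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combine into textures and parts
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 10),                  # classify; assumes 224x224 input
)
```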

The evolution of what's called a Convolutional Neural Network (CNN), aka deep learning, was driven by thought leaders like Yann LeCun (Facebook), Geoffrey Hinton (Google), Andrew Ng (Baidu), and Fei-Fei Li (Director of the Stanford AI Lab and creator of ImageNet). Now the field has exploded, and all the major companies have open sourced their deep learning platforms for running convolutional neural networks in various forms. In an interview with the New York Times, Fei-Fei said, "I consider the pixel data in images and video to be the dark matter of the Internet. We are now starting to illuminate it." That was back in 2014. For more on the history of machine learning, see the post by Roger Parloff at Fortune.

Big Numbers


Image reduction is key to video deep learning. Image analysis is achieved through big number crunching. Photo: image created by Chase McMichael

Think about this: video is a collection of images linked together and played back at 30 frames per second. Analyzing that massive number of frames is a major challenge.

As humans, we see video all the time, and our brains process those images in real time. Getting a machine to do the same task at scale is not trivial. Machines processing images is an amazing feat, and doing it on real-time video is even harder: you must decipher shapes, symbols, objects, and meaning. For robotics and self-driving cars, this is the holy grail.

Creating a video image classification system required a slightly different approach. You must first handle the enormous number of single frames in a video file to understand what's in the images.

Visual Search

On September 28th, 2016, a seven-member Google research team announced YouTube-8M, leveraging state-of-the-art deep learning models. YouTube-8M consists of 8 million YouTube videos, equivalent to 500K hours of video, all labeled with 4,800 Knowledge Graph entities. This is a big deal for the video deep learning space. YouTube-8M's scale required pre-processing to pull frame-level features first; the team used the Inception-V3 image annotation model trained on ImageNet. What makes this such a great thing is that we now have access to a very large video labeling system, and Google did the massive heavy lifting to create it.
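For a sense of what that frame-level feature step looks like, here is a sketch that embeds a single frame with torchvision's pretrained Inception v3. This is an approximation: Google's published YouTube-8M features came from its own Inception-V3 setup, not this exact code.

```python
# Approximate the frame-level feature step: embed one frame with a
# pretrained Inception v3 (a stand-in for Google's internal pipeline).
import torch
from torchvision import models, transforms
from PIL import Image

inception = models.inception_v3(pretrained=True)
inception.fc = torch.nn.Identity()   # keep the 2048-d pooled features
inception.eval()

prep = transforms.Compose([
    transforms.Resize((299, 299)),   # Inception v3 expects 299x299 input
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

frame = prep(Image.open("frame_0001.jpg")).unsqueeze(0)
with torch.no_grad():
    features = inception(frame)      # shape: (1, 2048)
```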

Top-level numbers of YouTube-8M. Photo created by Chase McMichael.

The secret to handling all this big data was reducing the number of frames to be processed. The key is extracting frame-level features at 1 frame per second, creating a manageable data set. This resulted in 1.9 billion video frames, a reasonable amount of data to handle. At this size you can train a TensorFlow model on a single Graphics Processing Unit (GPU) in 1 day! By comparison, the raw videos would have required a petabyte of video storage and 24 CPUs of computing power running for a year. It's easy to see why pre-processing was required for video image analysis, and why frame segmenting created a manageable data set.
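The 1 frame-per-second reduction is easy to reproduce. Here is a minimal OpenCV sketch (file names are placeholders) that keeps roughly one frame per second of video:

```python
# Sample ~1 frame per second from a video, the same reduction YouTube-8M
# used to keep the data manageable. File names are placeholders.
import cv2

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
frame_idx, saved = 0, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % int(round(fps)) == 0:   # keep ~1 frame per second
        cv2.imwrite(f"frame_{saved:04d}.jpg", frame)
        saved += 1
    frame_idx += 1

cap.release()
print(f"kept {saved} frames")
```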

Big Deep Learning Opportunity



Chase McMichael gives a talk on video hacking to the ACM, Aug 29th. Photo: Sophia Viklund, used with permission

Google has beautifully created two big parts of the video deep learning trifecta. First, they opened up a video-based labeling system (YouTube-8M). This gives everyone in the industry a leg up in analyzing video; without a labeling system like ImageNet, you would have to do the insane visual analysis on your own. Second, Google open sourced TensorFlow, their deep learning platform, creating a perfect storm for video deep learning to take off. This is why some call it an artificial intelligence renaissance. The third part of the trifecta is access to a big data pipeline. For Google this is easy, as they have YouTube. Companies that create large amounts of video, or host user-generated video, will benefit greatly.

The deep learning code and hardware are becoming democratized, so it's all about the visual pipeline. Having access to a robust data pipeline is the differentiation. Companies that have the data pipeline will create a competitive advantage from this trifecta.

Big Start

Following Google's lead with TensorFlow, Facebook launched its own open AI platform, FAIR, followed by Baidu. What does this all mean? The visual information disruption is in full motion. We're in a unique time when machines can see and think. This is the next wave of computing. Video SEO powered by deep learning is on track to be what keywords are to HTML.

Visual search is driving opportunity and lowering technology costs to propel innovation. Video discovery is no longer bound by what's in a video description (the meta layer). Use cases for deep learning range from medical image processing to self-flying drones, and that is just a start.

Deep learning will have a profound impact on our daily lives in ways we never imagined.

Both Instagram and Snapchat are using sticker overlays based on facial recognition, and Google Photos sorts your photos better than any other app out there. Now we're seeing purchases linked with object recognition at Houzz, which leverages product identification powered by deep learning. The future is bright for deep learning and content creation. Very soon we'll see artificial intelligence producing and editing video.

How do you see video visual search benefiting you, and what exciting use cases can you imagine?

Feature image: a YouTube-8M web interface screenshot taken by Chase McMichael on September 30th.

Hacking Digital Video Via Deep Learning, A Video Machine Learning Solution


Chase McMichael spoke at the ACM Bay Area Chapter Event on September 29th.

Intro to the Video Deep Learning Talk

Deep learning, image recognition, and object recognition are core elements of intelligent video visual analysis. Understanding context and classification within video creates a strong use case for video deep learning. Digital video is exploding; however, few are leveraging its wealth of data or know how to harness visual analysis. A true reinforced deep learning system, using collective human intelligence linked with neural networks, provides the foundation for a new level of video insights. We're just at the beginning of intelligent video and of using this knowledge to improve video performance.


Chase McMichael's talk at the ACM on Hacking Video Via Deep Learning. Photo: Sophia Viklund

AdTech? Think “User Tech” For a Better Video Experience

How and why did ad tech become a bad word? Ad tech has become associated with, and blamed for, everything from damaging the user experience (slow load rates) to creating a series of tolls that the advertiser pays for, ultimately at the expense of publisher margins. Global warming has a better reputation. Even the VCs are investing more in marketing tech than in ad tech.

The Lumascape is denser than ever and, even with consolidation, it will take years before there is clarity. And the newest threats to the ad ecosystem, like viewability, bots, and ad blocking, will continue to motivate scores of new "innovative" companies to help solve these issues, in spite of the anemic valuations ad tech companies are currently seeing from Wall Street and venture firms. The problem is that the genesis of almost all of these technologies is the race for the marketing dollar, while the user experience remains an afterthought. A wise man once said, "Improve the user experience and the ad dollars will follow." Few new companies are born out of this philosophy. The ones that are (Facebook, Google, and Netflix; see How Netflix does A/B testing) are massively successful.

One of the initial promises for publishers engaging their readers on the web was to provide an "interactive" experience: a two-way conversation. The user would choose what they wanted to consume, and editors would serve up more of what they wanted, resulting in a happier, more highly engaged user. Service and respect the user and you, the publisher, will be rewarded.

This is what my company does. We have been trying to understand why the vast majority of users don't click on a video when, in fact, they are there to watch one! How can publishers make the experience better? Editors often take great care to select a thumbnail image that they believe their users will click on to start a video, and then... nothing. On average, 85% of videos on publishers' sites never get started.

We believe that giving the user control and choice is the answer to this dilemma. So we developed a patented machine learning platform that responds to the wisdom of the crowd by serving up the thumbnail images from publisher videos that the user, not the editor, determines are best. By respecting the user experience with our technology, users are 30% more likely to click on videos when the thumbnails are user-curated.

What does this mean for publishers? Their users have a better experience because they are actually consuming the most compelling content on the site. Nothing beats the sight, sound, and motion of the video experience. Their users spend more time on the site and are more likely to return in the future to consume video. Importantly, from a monetization standpoint, InfiniGraph's technology KRAKEN creates 30% more pre-roll revenue for the publisher.

We started our company with the goal of improving the user experience, and as a result, monetization has followed. This, by the way, enables publishers to create even more video for their users. There are no tricks. No additional load times. No videos that follow you down the page to satisfy the viewability requirements of proposals from the big holding companies. Just an incredibly sophisticated machine learning algorithm that helps consumers have a more enjoyable experience on their favorite sites. Our advice? Forget about "ad tech" solutions. Think about "user tech". The "ad" part will come.

The live example above demonstrates KRAKEN in action on the movie trailer "Interstellar", achieving a 16.8X improvement over the traditional static thumbnail image.

Deep Learning Methods Within Video An End Game Application

We'll explore the use cases of using deep learning to drive higher video views. The coming Valhalla of video deep learning is being realized in visual object recognition and image classification within video. Mobile video has transformed, and continues to transform, the way video is distributed and consumed.


Big moves

We're witnessing the largest digital land grab in video history. Mobile video advertising is the fastest growing segment, projected to account for $25 billion worth of ad spend by 2021. Deep learning and artificial intelligence are also growing within the very same companies who are jockeying for your cognitive attention. This confluence of video and deep learning has created a new standard in higher-performing video content, driving greater engagement, views, and revenue. In this post we'll dive deep into how video intelligence is changing the mobile video game. Many studies show tablet and smartphone viewing accounted for nearly 40 minutes of daily viewing in 2015, with mobile video continuing to dominate in 2016. Moreover, digital video is set to outpace TV for the first time, and social video (Instagram, Snapchat) is experiencing explosive growth.


The Interstellar trailer is a real example of KRAKEN in action, achieving a 16X improvement in video starts. Real-time A/B testing between the poster image (thumbnail) and selected images pulled from the visual training set provides simultaneous measurement of which images induce engagement. All data and actions are linked with the video machine learning (KRAKEN) algorithm, enabling real-time optimization and sequencing of the right images to achieve the maximum human engagement possible.

How it works

Processing video at large scale and learning from it requires advanced algorithms designed to ingest real-time data. We have now entered the next phase of data insights, going beyond the click and the video play. Video opens the door to video consumption habits, and applying machine learning to them yields a competitive advantage.

Consumer experience and time on site are paramount when video is the primary revenue source, as it is for most broadcasting and over-the-top (OTT) sites today, including Netflix, Hulu, Comcast X1, and Amazon. Netflix has already put into production its own version of updating poster images to improve play starts, discovery, and completions.

It’s All Math

Images with higher object density have proven to drive higher engagement. The graph demonstrates that images with high entropy (explained in this video) generated the most attraction. Knowing which images produce a cognitive response is fundamental for video publishers looking to maximize their video assets.
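The post does not give KRAKEN's exact measure, but one common reading of "high entropy" is the Shannon entropy of an image's intensity histogram: busy, object-dense frames score higher than flat ones. A minimal sketch under that assumption:

```python
# Shannon entropy of a grayscale intensity histogram -- one common reading
# of "high entropy" visuals; the exact measure KRAKEN uses is not published.
import numpy as np
from PIL import Image

def image_entropy(path: str) -> float:
    gray = np.asarray(Image.open(path).convert("L"))
    hist, _ = np.histogram(gray, bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

print(image_entropy("thumbnail.jpg"))  # higher = busier, denser frame
```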

Top 3 video priorities we're hearing from customers:

1) Revenue is very important, and showing more video increases revenue (especially during peak hours when inventory is already sold out).

2) More video starts mean more user time on site.

3) Mobile is becoming very important; increasing mobile video plays is a top priority.

While this is good news overall, it presents a number of new challenges facing video publishers in 2016. One challenge is managing consumer access to content on their terms and across many entry points. Video consumption is increasingly accessed through multiple entry points throughout the day, and these entry points, by their very nature, have context.

Deep Learning

Broadcasters and publishers must consider consumers' visual consumption a key insight. These eyeballs (neurons firing) are worth billions of dollars, but it's no longer a game of looking at web logs. Determining which images work with customers requires more advanced image analysis and insight into consumers' video consumption habits. For digital broadcasters, enabling intelligence where the consumer engages isn't new. Deep convolutional neural networks power the image identification and other priority algorithms. More details are in the main video.

Motivation

Visual consumer engagement tracking is not something random. Tracking engagement on video has been done for many years, but when it came to "what" within the video, there was a major void. InfiniGraph created KRAKEN to enable video deep learning and fill that void, using machine learning within the video to optimize which images are shown to achieve the best response rates. Interstellar's 16X boost is a great example of using KRAKEN to drive higher click-to-launch for autoplay on desktop and click-to-play on mobile, resulting in higher revenue and greater video efficiency. Think of KRAKEN as the Optimizely for video.

One question that comes up often is: "Is the image rotation the only thing causing people to click play?" The short answer is no. Rotating arbitrary images is annoying and distracting. KRAKEN finds what the customer likes first and then sequences the images based on measurable events. The right set of images is everything. Once you have the right images, you can find the right sequence, and this combination makes all the difference in maximizing play rates. Not using the best visuals will cause higher abandonment rates.

Conclusion

Further advances in deep learning are opening the doors to continuous learning and self-improving systems. One area we're very excited about is visual prediction and recommendation of video. We see a great future in mapping the collective human cognitive response to visuals that stimulate and create excitement. Melding the human mind with video intelligence is the next phase for publishers to deliver a better consumer experience.