Machine Learning, Video Deep Learning and Innovations in Big Data


Paul Burns, Chief Data Scientist at InfiniGraph, talks about Video Deep Learning and Machine Learning at Idea to IPO's Innovations in Big Data event.

Paul Burns, Chief Data Scientist at InfiniGraph, shares what he has learned from massive video processing and video data analysis to find which images and clips work best with audiences. He spoke at the Idea to IPO event on Machine Learning, Video Deep Learning and Innovations in Big Data. Below is a quick preview of Paul's insights and approach to machine learning and big data.

I'm Paul Burns, Chief Data Scientist at InfiniGraph, a startup working in mobile video intelligence. I've had a bit of a varied career, although a purely technical one. I started off in auto-sensing, spending about 15 years doing research on RF sensor signal and data processing algorithms. A number of years ago my career took a bit of a turn: I got a PhD in bioinformatics and worked in the life sciences, in the genomics and sequencing industry, for about three years. Now I've turned to video, so I have a range of experience working with large datasets and learning algorithms, and hopefully I can bring some insights that others here will find useful.

My own experience is one in which I've inhabited a space very close to the data source, so when I think about big data I think about opportunities to discover patterns that aren't necessarily apparent to an expert, patterns that can be found automatically and used for prediction, analysis, or monitoring the health and status of sensors. There are a lot of differences in how people perceive what big data really is; the common thread seems to be a way of thinking about data, and I hate the word "data". It's so non-descriptive, so generic, that it has almost no meaning at all.

I think of data as information that's been stockpiled, which could be useful if you knew how to sort through that stockpile to find patterns, patterns that persist and can be used for prediction. Progress was generally slow over many decades; the explosion in recent years is primarily because of breakthroughs in computer vision and advances in multi-layer deep neural networks, particularly for processing image and video data.

This has taken place over the last ten years, first with the seminal paper authored by Geoffrey Hinton in 2006, which demonstrated breakthroughs in deep multi-layer neural networks, and then with the work published for the ImageNet competition in 2012, which made a significant advance in performance over more conventional methods.

I think the major reason for all this excitement is that visual perception is so incredibly powerful. That's been an area where we've really struggled to get computers to relate to the world and to understand and process things happening around them. There's a sense that we're on the cusp of a major revolution in autonomy; just look at all the autonomous vehicles, and all the human power and capital being put into those efforts.

Paul answers a question on privacy: Honestly, I think privacy has been dead for some time. The way it should be structured is the way Facebook works: I can choose to opt into Facebook and have the gory details of my life exposed to the world and to Facebook, but what I get out of that is being more closely connected to friends and family, so I choose to opt in because I want that reward. Privacy issues where I don't have the opt-out choice are the most problematic. There was a government program I'm aware of in the Netherlands some years ago, a pilot program where people could opt out of having their hospital care data published in a government database, the purpose of which was to learn patterns in health outcomes. That's a little controversial, because the public health benefits of having such a database could be enormous and transformational, so it's a very complicated issue. I'm probably not qualified to speak on this topic. I would just say privacy has long since been dead, and we kind of have to do the postmortem.

We're very fortunate that so much high-quality research has been published and so many excellent datasets and model parameters are available for free download. Starting out, we were working on fairly generic replication of open systems: object recognition can be done with fairly high-quality, free, open-source code in a week. That was our starting point for being able to advertise mobile video by selecting thumbnails that are somehow more enticing for people to click on than the defaults the content owners provide.
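
For readers who want a concrete starting point, here is a minimal sketch of the kind of "week one" open-source pipeline Paul describes: sampling frames from a video with OpenCV and labeling them with a pretrained ImageNet classifier from torchvision. This is an illustrative example only, not InfiniGraph's production system; the sampling interval and model choice are arbitrary assumptions.

```python
# Minimal sketch: label sampled video frames with an off-the-shelf classifier.
# Assumes opencv-python, torch and torchvision are installed; not a production pipeline.
import cv2
import torch
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT          # pretrained ImageNet weights
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                  # resize / crop / normalize for this model
labels = weights.meta["categories"]                # ImageNet class names

def label_frames(video_path, every_n_frames=30):
    """Yield (frame_index, top_label, confidence) for sampled frames."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            tensor = preprocess(torch.from_numpy(rgb).permute(2, 0, 1))
            with torch.no_grad():
                probs = model(tensor.unsqueeze(0)).softmax(dim=1)[0]
            conf, cls = probs.max(dim=0)
            yield idx, labels[int(cls)], float(conf)
        idx += 1
    cap.release()

# Usage: for i, label, conf in label_frames("my_video.mp4"): print(i, label, conf)
```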

Our co-founders came up with this idea (KRAKEN video machine learning: how to increase video lifetime value) a couple of years ago. It's amazing how bad humans are at predicting what other people want to click on. As far as we know, we're the only startup solely focused on this core idea, which sounds like a small business until you consider all the mobile video volume and advertising revenue that's out there and growing.

What I do when I have a hard problem is stockpile as much data as I can to create the most thorough training set possible, and I think the most successful businesses will be the ones able to do that. It turns out there are companies whose entire business is helping you create training sets for your machine learning applications. We use a variety of methods; crowdsourcing is one common way, but it's really expensive, far more expensive than I thought possible. Startups that find a way to harvest rich training sets that are valuable for inference have the potential to be huge winners. It just turns out to be very hard to do.

Another big area is wearable technology for personal health monitoring. I think that's an area with tremendous potential, simply because your physician is starving for data. You have to make a point to see your doctor, schedule it, and so on. So what do they do? They weigh you, take your blood pressure, ask how old you are, and that's about it. That's almost nothing; they don't really know what's going on with you. Maybe it's personality-dependent, but I would be very much in favor of disclosing all kinds of biometric information about myself if it were continuously recorded, stockpiled in a database, repeatedly scanned by intelligent agents for anomalies, and doctors' appointments automatically scheduled for me. The same goes for any complicated piece of machinery: it could be a car, it could be parts of your business. This kind of invasive monitoring will meet resistance, but it could be unleashed as people see the value in disclosing.

See the full panel here: Idea to IPO

3 Ways Video Recommendation Drives Video Lifetime Value

Video recommendation and discovery are very hot topics among video publishers looking to drive higher returns on their video lifetime value. Attracting a consumer to watch more videos isn't simple in the attention-deficit society we live in. However, major video publishers are creating better experiences, using video intelligence to delight viewers, enhance discovery, and keep them coming back for more. In this post we'll explore the intelligence behind visual recommendation and how to enhance consumer video discovery.

Industry Challenge

Google Video Intelligence demo at Google Next '17

Google Video Intelligence API demo of video search finding baseball clips within video segments.

Last year we posted on Search Engine Journal, How Deep Learning Powers Video SEO, describing the advantages of video image labeling and how publishers can leverage valuable data that was otherwise trapped in images. Since then, Google announced Video Intelligence at Next '17. (InfiniGraph was honored to be selected as a Google Video Intelligence beta tester.) The MAJOR challenges with Google's cloud offering are pushing all your video over to Google Cloud, the cost of labeling video at volume, and losing control of your data. So how do you do all this on a budget?

Not all data is created equal

Trending content lacks image-based video machine learning.

Trending content is based on popularity rather than content context and actual consumer content consumption.

And not all video recommendation platforms are created equal. The biggest video publishers are advancing their offerings with intelligence. InfiniGraph is addressing this gap by making video intelligence affordable technology that would otherwise be out of reach.

Outside of do-not-track constraints, creating a truly personalized experience is ideal. VOD / OTT apps create the best path to robust personalization; for the web, a more generalized grouping of consumers is required.

See how “Netflix knows it has just 90 seconds to convince the user it has something for them to watch before they abandon the service and move on to something else”.

Video recommendation platforms


Image-based video recommendation "MANTIS": going beyond simple metadata and trending content to fully intelligent context. Powered by KRAKEN.

Most video recommendation platforms rely on the data entered when a video was uploaded to a content management system (its metadata): title, description, and so on. The other main data points captured are plays, time on video, and completion, which indicate watchability. But there is so much more to a video than these raw signals. Whether someone watched a video is important, but understanding why, in the context of other videos with similar content, is intelligence. Many sites have trending videos; however, promoting videos that already get lots of plays creates a self-fulfilling prophecy, because trending is artificially amplified and doesn't indicate relevance.
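
To make "understanding the why in the context of other like videos" concrete, here is a small hypothetical sketch (not MANTIS itself): if each video has already been reduced to an embedding vector by a vision model, similar content can be ranked by visual similarity rather than by raw play counts. The embedding source and helper names are illustrative assumptions.

```python
# Hypothetical sketch: rank "videos like this one" by visual similarity
# instead of raw popularity. Assumes each video has already been reduced
# to a fixed-length embedding (e.g. averaged frame features from a CNN).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def recommend_similar(target_id: str, embeddings: dict, k: int = 5):
    """Return the k video ids whose embeddings are closest to the target's."""
    target = embeddings[target_id]
    scored = [
        (vid, cosine_similarity(target, vec))
        for vid, vec in embeddings.items()
        if vid != target_id
    ]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

# Usage with made-up embeddings:
# catalog = {"video_a": np.random.rand(512), "video_b": np.random.rand(512)}
# print(recommend_similar("video_a", catalog, k=1))
```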

An Intelligent Visual Approach


Going beyond metadata is key to a better consumer experience. Trending only goes so far. Visual recommendation looks at all the content, based on consumer actions.

Surfacing the right video at the right time can make all the difference between people staying or going. Leaders like YouTube have already begun to leverage artificial intelligence in their video recommendations, producing 70% greater watch time. Recently they added animated video previews to their thumbnails, pushing take rates even higher. This is more proof that consumers want intelligent recommendation and slicker visual presentation.

InfiniGraph provides definitive differentiation, using actions on images and in-depth knowledge of what's in the video segments to build relevance. Consumers know what they like when they see it. Understanding this visual ignition process is key to unlocking the potential of visual recommendation. How can you really know what people would like to play if you don't know much about the video content? Understanding the video content and context is the next stage in intelligent video recommendation and personalized discovery.

3 Ways Visual Video Recommendation Drives Video Lifetime Value

1. Visual recommendation – Visual information within video creates higher visual affinity, amplifying discovery. Content likeness beyond just metadata opens up more video content to select from. Mapping what people watch is based on past observation; predicting what people will watch requires understanding video context.

2. Video scoring – A much deeper approach to video had to be invented, where each video is scored based on visual attributes inside the video and human behavior on those visuals. This scoring lets the content SPEAK FOR ITSELF and enables ordering a playlist relative to what was watched (see the sketch after this list).

3. Personalized selection – Enhancing discovery requires better intelligence and context about what content is being consumed. Depending on the video publisher's environment, an OTT or mobile app can enable high levels of personalization. For consumers on the web, a more general approach of clustering consumers into content preferences powers better results while honoring privacy.
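
To illustrate the video-scoring idea in point 2 above, here is a hypothetical sketch, not KRAKEN's or MANTIS's actual formula: a visual score derived from the imagery inside the video is blended with behavioral signals such as completion rate and click-to-play, and the playlist is ordered by the blended score. The weights and example numbers are illustrative assumptions only.

```python
# Hypothetical video scoring sketch: blend a visual score with behavioral
# signals. The 0.5 / 0.3 / 0.2 weights are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class VideoStats:
    video_id: str
    visual_score: float       # 0..1, e.g. from frame-level engagement models
    completion_rate: float    # fraction of starts that reach the end
    click_to_play: float      # clicks / impressions on the thumbnail

def score(v: VideoStats) -> float:
    return 0.5 * v.visual_score + 0.3 * v.completion_rate + 0.2 * v.click_to_play

def order_playlist(videos) -> list:
    """Return video ids ordered by blended score, highest first."""
    return [v.video_id for v in sorted(videos, key=score, reverse=True)]

playlist = order_playlist([
    VideoStats("trailer", 0.82, 0.40, 0.12),
    VideoStats("recap",   0.55, 0.70, 0.09),
    VideoStats("intro",   0.30, 0.20, 0.05),
])
print(playlist)  # ['trailer', 'recap', 'intro']
```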

The Future is Amazing for Video Discovery

We have learned a great deal from innovative companies like Netflix, Hulu, YouTube and Amazon, who have all come a long way in their approach to advanced video discovery. Large-scale video publishers have a grand opportunity to embrace a new technology wave and stay relevant while creating a visually compelling consumer experience. A major challenge going forward is the speed of change video publishers must adapt to if they wish to stay competitive. With InfiniGraph's advanced technologies designed for video publishers, there is hope. Take advantage of this movement and increase your video lifetime value.

Top image from the post, melding real life with recommendations.

How To Increase Video Lifetime Value via Machine Learning


Video discovery is one of the best ways to increase video lifetime value. Learning which video content is relevant drives greater time on site.

All video publishers are looking to increase their videos' lifetime value. Creating video can be expensive, and the shelf life of most video is short, so maximizing those video assets and their lifetime value is a top priority. With the advent of new technologies such as video machine learning, publishers can now increase their videos' lifetime value by intelligently generating more time on site. Identifying the best image to lead with (the thumbnail) and recommending relevant videos drive higher lifetime value through user experience and discovery.

This combination of visual identification and recommendation is like the Reese's of video. By linking technologies like artificial intelligence and real-time video analytics, we're changing the video game through automated actionable intelligence.

Ryan Shane, our VP of Sales, describes the advantages of knowing which visual (video thumbnail) and context produce the most engagement, and which video business models benefit the most from video machine learning.

Hear from our CEO, Chase McMichael, who talks about the advanced use of machine learning and deep learning to improve video take rates by finding and recommending the right images consumers engage with the most.

Here are two examples of how video machine learning increases revenue on your existing video assets.

Yield Example #1: Pre-roll

If you run pre-roll on your video content, you likely fill it with a combination of direct sales and an RTB network. For this example, assume you have a 10% CTR, which translates to 1 million video plays each day. That means you are showing 1,000,000 pre-roll ads each day. Now assume that you run KRAKEN on your videos and engagement jumps by 30%, to a 13% CTR. You will now be showing 1,300,000 pre-roll ads each day. KRAKEN has effectively added 300,000 additional pre-roll spots for you to fill! This is an example of increasing video value from your existing consumers.
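
The arithmetic behind this example, written out (the impression count is an assumption implied by a 10% CTR yielding 1 million plays; actual results will vary):

```python
# Worked arithmetic for the pre-roll example above (illustrative numbers only).
impressions_per_day = 10_000_000   # assumed impressions implied by 10% CTR -> 1M plays
baseline_ctr = 0.10
lift = 0.30                        # KRAKEN engagement lift from the example

baseline_plays = impressions_per_day * baseline_ctr               # 1,000,000 ads/day
kraken_plays = impressions_per_day * baseline_ctr * (1 + lift)    # 1,300,000 ads/day
extra_preroll_spots = kraken_plays - baseline_plays               # 300,000 extra spots

print(int(baseline_plays), int(kraken_plays), int(extra_preroll_spots))
# 1000000 1300000 300000
```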

Yield Example #2: Premium Content

For our second example, assume you monetize with premium content. You have an advertising client who has given you a budget of $100,000 and expects their video to be shown 5 million times. With your current play rates, you determine it will take four days to achieve that KPI. Instead, you run KRAKEN on their premium content and engagement jumps 2X. You will hit your client's KPI in only two days. You have now freed up two days of premium content inventory that you can sell to another client! Maximizing your existing video consumers and increasing CTR reduce the need to sell off-network.
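
The same math for the premium-content example, with the daily play volume inferred from the four-day baseline (illustrative only):

```python
# Worked arithmetic for the premium-content example above (illustrative only).
target_views = 5_000_000             # client KPI
baseline_views_per_day = 1_250_000   # assumption: 5M over four days at current play rates
engagement_multiplier = 2.0          # "engagement jumps 2X"

days_without_kraken = target_views / baseline_views_per_day                         # 4 days
days_with_kraken = target_views / (baseline_views_per_day * engagement_multiplier)  # 2 days
freed_inventory_days = days_without_kraken - days_with_kraken                       # 2 days to resell

print(days_without_kraken, days_with_kraken, freed_inventory_days)  # 4.0 2.0 2.0
```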

Below is a side-by-side example of the Guardians of the Galaxy default thumbnail vs. KRAKEN rotation powered by deep learning. Boosting click rates generates more primary views, and inserting known images that induce response into a video recommendation (the Reese's) is the logical next step. The two together drive primary and secondary video views.

As you can see from both examples, using KRAKEN increases lifetime value as well as advertising yield from your video assets. Displaying like content, sorted by deep learning and video analytics by category, delivers greater relevance. Organizing video into context is key to increasing discovery. Harnessing artificial intelligence for both image selection and recommendation brings together the best of both digital video intelligence worlds.

Bite into a Reese’s and see how you can increase your video lifetime value.  Request a demo and we’ll show you.

 

Making More Donuts

Being a publisher is a tough gig these days.   It’s become a complex world for even the most sophisticated companies.  And the curve balls keep coming.  Consider just a few of the challenges that face your average publisher today:

  • Ad blocking.
  • Viewability and measurement.
  • Decreasing display rates married with audience migration to mobile with even lower CPMs.
  • Maturing traffic growth on O&O sites.
  • Pressure to build an audience on social platforms including adding headcount to do so (Snapchat) without any certainty that it will be sufficiently monetizable.
  • The sad realization that native ads (last year's savior!) are inefficient to produce, difficult to scale, and not easily renewable with advertising partners.

The list goes on…

The Challenge

Of course, the biggest opportunity (and challenge) for publishers is video. Nothing shows more promise for publishers, from both a user engagement and a business perspective, than (mobile) video. It's a simple formula: when users watch more video on a publisher's site, they are, by definition, more engaged. More video engagement drives better "time spent" numbers and, of course, higher CPMs.

But the barrier to entry is high, particularly for legacy print publishers. They struggle to convert readers to viewers because creating a consistently high volume of quality video content is expensive and not necessarily part of their core DNA. Don't get me wrong: they are certainly creating compelling video, but they have not yet been able to produce it at enough scale to satisfy their audiences. At the other end of the spectrum, video-centric publishers like TV networks that live and breathe video run out of inventory on a continuous basis.

The combined result of publishers' struggle to keep up with consumer demand for quality video is a collective dearth of quality video supply in the market. To put it in culinary terms, premium publishers would sell more donuts if they could, but they just can't bake enough to satisfy the demand.

So how can you make more donuts?
Trust and empower the user! 


Rise of  Artificial Intelligence

The majority of the buzz at CES this year was about Artificial Intelligence and Machine Learning.  The potential for Amazon’s Alexa to enhance the home experience was the shining example of this.  In speaking with several seasoned media executives about the AI/machine learning phenomenon, however, I heard a common refrain:  “The stuff is cool, but I’m not seeing any real applications for my business yet.”  Everyone is pining to figure out a way to unlock user preferences through machine learning in practical ways that they can scale and monetize for their businesses.  It is truly the new Holy Grail.

The Solution

That's why we at InfiniGraph are so excited about our product KRAKEN. KRAKEN has an immediate and profound impact on video publishing. KRAKEN lets users curate the thumbnails publishers serve and optimizes toward user preference through machine learning in real time. The result? KRAKEN increases click-to-play rates by 30% on average, resulting in corresponding additional inventory and revenue.
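
For the technically curious, real-time thumbnail optimization of this kind is often implemented as a multi-armed bandit. Here is a generic epsilon-greedy sketch, purely illustrative and not a description of KRAKEN's actual algorithm: each candidate thumbnail is an arm, a click-to-play is the reward, and traffic gradually shifts toward the best performer while still exploring.

```python
# Generic epsilon-greedy bandit sketch for thumbnail rotation
# (illustrative only; not KRAKEN's actual algorithm).
import random

class ThumbnailBandit:
    def __init__(self, thumbnail_ids, epsilon=0.1):
        self.epsilon = epsilon
        self.shows = {t: 0 for t in thumbnail_ids}
        self.clicks = {t: 0 for t in thumbnail_ids}

    def choose(self) -> str:
        """Mostly serve the best-performing thumbnail, sometimes explore."""
        if random.random() < self.epsilon:
            return random.choice(list(self.shows))
        # Click-to-play rate with a tiny prior so unseen thumbnails get tried.
        return max(self.shows, key=lambda t: (self.clicks[t] + 1) / (self.shows[t] + 2))

    def record(self, thumbnail_id: str, clicked: bool) -> None:
        self.shows[thumbnail_id] += 1
        self.clicks[thumbnail_id] += int(clicked)

# Usage: pick a thumbnail per impression, then record whether it led to a play.
bandit = ThumbnailBandit(["frame_12", "frame_87", "frame_240"])
thumb = bandit.choose()
bandit.record(thumb, clicked=True)
```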

It is a revolutionary application of machine learning that, in execution, makes a one-way, dictatorial publishing style an instant relic. With KRAKEN, users literally collaborate with the publisher on which images they find most engaging. KRAKEN helps you, the publisher, become more responsive to your audience. It's a better experience and outcome for everyone.

The Future…Now!

In a world of cool gadgets and futuristic musings, KRAKEN works today in tangible and measurable ways to improve your engagement with your audience.  Most importantly, KRAKEN accomplishes this with your current video assets. No disruptive change to your publishing flow. No need to add resources to create more video. Just a machine learning tool that maximizes your video footprint.  

In essence, you don’t need to make more donuts.  You simply get to serve more of them to your audience.  And, KRAKEN does that for you!

 

For more information about InfiniGraph, you can contact me at tom.morrissy@infinigraph.com or read my last blog post  AdTech? Think “User Tech” For a Better Video Experience

 

Tom Morrissy on KRAKEN – Publisher Perspective on Video Machine Learning

We had the unique opportunity to talk with Tom Morrissy, our board advisor, in Times Square, NY. Below is the transcript of Tom and Chase discussing his perspective on publishers and KRAKEN, our video machine learning technology.

Chase: Hi, I'm Chase McMichael, CEO and co-founder of InfiniGraph, and I'm here today with Tom Morrissy, our board advisor. Hi Tom.

Tom: Hi Chase.

Chase: So Tom, tell me a little about your experience working with us. You're an ex-media guy from SpinMedia, so obviously you know more about the media space than we do, and we'd really like your viewpoint on the space and some of the issues. What do we do to really break in? Obviously cracking mobile video is a big deal. So tell me what you think.

Tom: One thing we're finding as we talk to different publishers, and it's one of the reasons I joined the company, is that they all have very similar pain points. There's a certain barrier to entry with video. There's only so much video they can create, but getting it seen and consumed by consumers is a huge challenge. So how can we get the consumer excited enough about the content they're viewing to actually click to play, actually watch it, and engage more deeply on these websites? And as a result of that, we create more inventory for the publishers to monetize their video plays.

Really, what it comes down to is starting with a better user experience and then being able to monetize that better experience in a much more accelerated way. That's the magic of what this company has done, in my eyes, because it took the publisher's view through the user experience. Most ad tech starts with "how can we make more money off this user"; what our technology does is empower the user, and that's a vital difference relative to all the other technologies I've seen.

I've been on the publisher's side, then on the ad tech side, and then back to the publisher's side. I was getting email after email, LinkedIn request after LinkedIn request, saying "I've got a seamless, revenue-generating integration opportunity for you," and the truth of the matter is there is no seamless integration. Anybody who promises that is not telling the full truth, because we all know that's not the case.

So the question becomes: as a publisher, what do you want to do, where do you want to spend your time and resources to integrate, what kind of impact is it going to have on your audience, and what kind of impact will it ultimately have on your business? That's what InfiniGraph is taking into account. We created the most frictionless integration I've seen, by taking the viewpoint of a publisher and figuring out a way to excite the reader and therefore grow the business.

Chase: Yeah, we just embed it on the video player; you can't get much lower touch than that. We're not touching the CMS, we don't change the workflow, because that's a death knell. One of the things you say a lot is "it's about the content, especially premium content." I like that we're really going at this from a content angle, not the ad angle of how we're going to juice the user and get as much money out of them as possible. There's some importance there for some publishers, but the reality is that unless they play an incredibly strong content game, they are going to lose the audience. So talk a little more about that.

Tom: The truth of the matter is, you don't have that much time to engage the audience. You can have killer content, but the audience may not choose to view it for whatever reason. You've got to figure out ways to engage them, get them into what we call cognitive thinking, and get them to make the commitment to watching the content. You can have the best movie, you can have the best video clip, but if nobody watches it, it's like a tree falling in the forest, right?

Chase: Yeah, that's a great point, because on mobile we've been measuring that you've got 0.25 seconds, man, and if you're not "thumbstopping" the viewer, who is just scrolling and scrolling, you're going to miss them. The video is a linear body of work with so much content in it, but you're starting out with just one image. What do you do?

Tom: Right, you have to figure out what's going to motivate that person, because think about it: they have chosen to go to that space to watch a video, and ninety percent of them, on average, are not watching. So why?

Chase: Yes, they're bailing.

Tom: How can we make a better experience and make them more motivated to watch the video they came there to watch in the first place? That's the weirdest thing about this market. That's what keeps me up at night, and what keeps you up at night, trying to solve for that. I think the promise of what our company is trying to do is really help lead the consumer to the choice they always told themselves they wanted to make.

Chase: Exactly.

Tom: And if you can do that, then you have a much more robust relationship with your consumers: they spend more time on your site, they watch more video, which is why they're there, and then you as a publisher make more money. It's as simple as that.

Chase: Right. Thanks, Tom. It's great to have you on board, and awesome to be here in the Big Apple. And hey viewers, click on the (i) up on the right to see some more information, and we'll be back at you soon. Thank you.

Want to see more? Request a live demo.