Video Artificial Intelligence Powers Creation

The digital video industry faces major challenges in creation as editing and production costs rise, and the lack of video inventory is universal. There is hope, however, in leveraging video artificial intelligence (VAI) to address these problems. Massive amounts of live and pre-recorded video never reach the audience. Publishers are being asked to do more with less, hamstrung by small editorial budgets and shifting priorities while massive video revenue goes untapped. The need to intelligently create video inventory couldn’t come fast enough to feed the insatiable mobile video appetite. Here we dig in on the What, Where and Why of VAI and rethink how content is created, edited and delivered, all towards building greater video lifetime value.

The video above was created using video artificial intelligence; the animated preview thumbnail was selected using machine learning.

Rise of Video Artificial Intelligence

Automation is not a cure-all; however, combining live streams and artificial intelligence solves several pain points while augmenting the production staff and delivering relevant content. Your editorial team is the most valuable resource in your video workflow, which is exactly why these highly valued editors should be focused on higher-value video creation. VAI isn’t about replacing your editorial team; it’s about scaling lower-priority content at a fraction of the cost while increasing revenue generation.
Image: YOLO object detection labeling basketball actions (InfiniGraph MicroClips).
There are many promising video AI examples showing off real-time facial tracking, object recognition, image labeling and more. In our previous post Top Video Artificial Intelligence and Machine Learning at NAB 2018, we highlighted several early AI products in the market. Beyond the video labeling services, several vendors are extending existing video editing systems as cloud SaaS offerings. So what does this all mean for your existing video assets? First off, there have been monumental advancements in algorithms, such as YOLO and deep structured models that label individual actions, enabling a deeper understanding of the actions people take within a video. This in-depth dissection changes the game: you can measure which parts of a video resonate with consumers, whereas old linear video measurement woefully lacks meaningful insights.
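
As a concrete illustration, below is a minimal sketch of frame-level object labeling with a pretrained YOLO model, using the open-source ultralytics package as one modern option; the model file, sampling stride and the `label_video` helper name are illustrative assumptions, not any specific product’s API.

```python
# Minimal sketch: label objects on sampled frames of a video with a
# pretrained YOLO model (ultralytics package). Model choice, stride and
# the helper name are illustrative assumptions.
from ultralytics import YOLO

def label_video(path, stride=30):
    """Run detection on every `stride`-th frame and collect object labels."""
    model = YOLO("yolov8n.pt")  # small COCO-pretrained model
    labels = []
    # stream=True yields one result per frame without buffering the video
    for i, result in enumerate(model(path, stream=True)):
        if i % stride == 0:
            names = [model.names[int(c)] for c in result.boxes.cls]
            labels.append({"frame": i, "objects": names})
    return labels

if __name__ == "__main__":
    for row in label_video("basketball_clip.mp4")[:5]:
        print(row)  # e.g. {'frame': 0, 'objects': ['person', 'sports ball']}
```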

Promising Use Cases

Using VAI to create and edit video isn’t new. Companies like Adobe and IBM are already using advanced video/image analysis to enable smarter editing platforms. However, publishers need more than just editing assistance; they need scalable video creation. One of the more profitable use cases is short video highlights and previews, along with extending in-context video compilations. These combinations have proven to be highly lucrative and well suited for VAI in both creation and scale.



The example above demonstrates using video artificial intelligence to create and control audio and video transitions between clips. The video length and the segments used are determined by video scoring generated via VAI.
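
As a rough illustration of that kind of transition control, here is a hedged sketch using the open-source moviepy library to stitch scored segments with crossfades; the segment list and scoring input are assumptions for the example, not InfiniGraph’s actual output format.

```python
# Illustrative sketch: stitch top-scored segments together with audio/video
# crossfade transitions using moviepy (1.x). The scored-segment input is an
# assumed format for this example.
from moviepy.editor import VideoFileClip, concatenate_videoclips

# (source file, start sec, end sec, score) -- scores assumed to come from VAI
segments = [("game.mp4", 12.0, 18.5, 0.91), ("game.mp4", 40.2, 44.0, 0.87)]
FADE = 0.5  # transition length in seconds

clips = []
for path, start, end, score in sorted(segments, key=lambda s: -s[3]):
    clip = VideoFileClip(path).subclip(start, end)
    clips.append(clip.crossfadein(FADE).audio_fadein(FADE).audio_fadeout(FADE))

# Overlapping consecutive clips by FADE seconds renders the crossfades.
final = concatenate_videoclips(clips, method="compose", padding=-FADE)
final.write_videofile("compilation.mp4")
```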

Another exciting use case is making video libraries searchable. Finding specific people, actions and scenes within video opens the door to advanced video discovery. A whole new world opens up, with amazing possibilities for cross-referencing and extracting information otherwise trapped in old-school linear video metadata.
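
A minimal sketch of what “searchable” can mean in practice: an inverted index mapping labels to (video, frame) locations. The label format is assumed to match the labeling sketch above.

```python
# Hedged sketch: build an inverted index over frame labels so a library can
# be queried for people, objects or scenes. Input format is assumed.
from collections import defaultdict

def build_index(library):
    """library: {video_id: [{"frame": i, "objects": [...]}, ...]}"""
    index = defaultdict(list)
    for video_id, rows in library.items():
        for row in rows:
            for obj in set(row["objects"]):
                index[obj].append((video_id, row["frame"]))
    return index

index = build_index({"ep1.mp4": [{"frame": 0, "objects": ["person", "ball"]}]})
print(index["ball"])  # -> [('ep1.mp4', 0)]: each hit with its frame offset
```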

The Opportunity

Solving the problem of creating net new video from existing video assets opens up a monster revenue source while extending your overall video lifetime value. Taking long-form video or live content and compressing it into meaningful, mobile-ready short-form is a game-changer. VAI creation does not require expanding editorial budgets or delaying critical production windows.
Image: artificial intelligence video scoring (InfiniGraph MicroClips).
Organizing content to be contextually relevant will always be a key factor in ensuring meaningful content flow. Which clips and scenes logically flow together is subjective unless you are directly measuring audience response. This is why utilizing an active feedback mechanism to improve relevance is important. Here we address some of the top challenges:

Top challenges

  • Context
  • Time availability
  • Quality
  • Shared viewing experience
  • Interactive storyline


There isn’t a lack of challenges facing video creation at scale. Content created specifically for social platforms, harnessing the human connection and social dynamics, will certainly produce a higher-quality product. But how do you scale this type of creation? Combining AI with video provides the opportunity to develop a video learning system that assists in delivering quality output and builds on itself over time; a learning system is ideal for video creation based on reproducible tasks. A minimal sketch of such a feedback loop follows.
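
Under the assumption that audience response arrives as a per-window engagement rate, the exponential moving average below is one simple way to close the loop; it is an illustration, not a description of any particular product.

```python
# Sketch of an active feedback loop: nudge a clip's relevance score toward
# the engagement the audience actually shows, window after window. The
# engagement signal and learning rate are assumptions for illustration.
def update_score(current_score, observed_engagement, learning_rate=0.1):
    """Blend the prior score with fresh audience feedback (EMA)."""
    return (1 - learning_rate) * current_score + learning_rate * observed_engagement

score = 0.50
for engagement in [0.72, 0.66, 0.80]:  # successive measurement windows
    score = update_score(score, engagement)
print(round(score, 3))  # drifts toward what the audience responds to
```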

    Introducing MicroClips

Here we describe the strategy of video “fracking” your entire library. The ability to organize video libraries, extract cross-referenced videos, annotate video content and create relevant net new video is what makes MicroClips possible. Manipulating video at scale using artificial intelligence creates an enormous revenue opportunity, and the ability to learn what works and adjust video assets requires thinking differently about how content is produced. The possibilities span many content types; moreover, digital delivery enables measuring levels of audience engagement like never before.
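
To make the “fracking” idea concrete, here is an illustrative pass that scans per-second relevance scores and keeps contiguous runs above a threshold as MicroClip candidates; the scoring input and thresholds are assumptions, since the actual pipeline is not public.

```python
# Illustrative "fracking" pass: keep contiguous runs of high per-second
# scores as MicroClip candidates. Threshold and minimum length are assumed.
def extract_segments(scores, threshold=0.8, min_len=3):
    """scores: list of per-second relevance scores for one video."""
    segments, start = [], None
    for t, s in enumerate(scores + [0.0]):  # sentinel closes a trailing run
        if s >= threshold and start is None:
            start = t
        elif s < threshold and start is not None:
            if t - start >= min_len:
                segments.append((start, t))
            start = None
    return segments

print(extract_segments([0.2, 0.9, 0.95, 0.85, 0.3, 0.9, 0.92, 0.91, 0.88]))
# -> [(1, 4), (5, 9)]: candidate MicroClip time ranges in seconds
```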



Above is a sports MicroClips compilation created using AI and derived from many videos. MicroClips are created using in-context, high-value action sequences; these clips are pieced together to create the MicroClips compilation. What’s exciting is the variety of compilations that can be created and watching what the audience finds most entertaining. Highlights and compilations have garnered the highest play and share rates; some have even “gone viral”!
Image: InfiniGraph MicroClips compilation, video created with artificial intelligence.
    Want to see more exciting Sports MicroClips in action?

    Endgame

    The ultimate goal is to provide a cost-effective video creation process while improving viewability, quality and consumer play rates. Leveraging how consumers connect and the time they have available are additional driving factors to a smarter creation process.
There are many big bets being made, with major networks investing in dedicated Snapchat teams to capture a slice of this multi-billion-dollar pie and prove the model out.

    Conclusion

Quality content and great storytelling require a human touch for now, but for large-scale video organizing and creation, the future is bright for video artificial intelligence. The digital video industry is going through a major transformation, and consumers are the winners. The networks have been put on notice, with the incumbents not only producing top shows, receiving awards and drawing compelling audience sizes, but also leveraging advanced technology faster to improve the consumer experience. The publishers that harness the competitive advantage of VAI creation will deliver quality video to market faster and with higher relevance, all resulting in greater consumer retention and revenue.

    Top Video Artificial Intelligence and Machine Learning at NAB 2018

Video artificial intelligence was a massive theme at NAB 2018, with a majority of the video publishing technology companies showing off some form of AI integration. As covered in my previous post How Artificial Intelligence Gives Back Time, time is money in the video publishing business. AI is set to be a very important tool, which is why all the big guns like AWS (Elemental), Google (Video Intelligence), IBM (Watson) and Microsoft (Azure) had digital AI eye candy to share. There was a me-too feeling, with all of them competing to weave their video annotation/labeling and speech-to-text APIs into a variety of video workflows.

    Top Video AI use cases:

1. Labeling – The ability to label the elements within a video: specific scenes, people, places, and things.
2. Editing – Segmenting by relevance, slicing the video into logical parts and producing output.
3. Discovery – Using both annotation and speech-to-text to expand metadata for finding specific scenes within video libraries.

    Challenges

One of several challenges is the all-or-nothing situation. Video publishers’ assets can be spread across many hard drives or encoded without much metadata. There are companies that provide services, like Axel, to index those videos and make them searchable with a mixed model of on-prem tech and cloud services. Dealing with live feeds requires hardware and bigger commitments. Most publishers are not willing to forklift their video encoding and library over to another provider without a clear ROI justification. The other big ROI challenge is that video publishers don’t have a lot of patience, and the pressure to increase profits on video is higher now with more competition in the digital space across all channels. Selling workflow efficiency alone won’t be a big enough draw; AI has to generate substantial revenue by solving a specific problem, and for many the pain isn’t high enough yet to justify a big AI investment. There are lots of POCs in the market right now; however, not one product creates a seamless flow within a video publisher’s existing workflow. Avid and Adobe are well positioned for the edit team since their products are so widely used. Other cloud providers are enabling AI technology, not a specific solution.

    AI Opportunity

Search and discovery was the biggest theme, using AI for image analysis and speech-to-text. Closed-caption compliance to make video accessible in digital will be mandated, driving faster adoption. Editing video via AI is in its early phase; however, the technology is emerging fast. There are some exciting examples of AI-created video, but doing it at scale is another matter. Of the many talks at NAB, some exciting directions for AI in video were discussed around video asset management. Here are a few examples demoed at NAB 2018 showing promise in the video intelligence field.

Adobe Sensei

Adobe had a big splash with their new editing technologies, using AI to enhance the video editing process. Todd Burke presented Adobe Sensei, their AI entry into video intelligence. The video labeling demo and scene slicing were designed to help editors create videos faster and simplify the process. The segmenting was just a prototype, and the video labeling demonstrated the API extension integrated within Sensei (see the Adobe labeling demo).

    IBM Watson

IBM’s demo was slick and pointed to the direction of using machine learning to process large amounts of video and pull out the interesting parts. Doing announcer and crowd response analysis added another layer of segmentation. You can see a live demo of their AI highlights for the Masters. They did the same for Wimbledon, slicing up the live feed they were powering for the event and creating “cognitive highlights”. It wasn’t clear if these highlights were used by the edit team or if this was a POC. Regardless, there was both image and text analysis of the streams occurring, demonstrating the power of AI in video.

    Avid

The Avid demo was just that: a demo. They created a discovery UI on top of APIs like Google Vision to assist in video analysis for search and to support edit teams. Speech-to-text and annotation in one UI has its advantages. It wasn’t clear how soon this would be made available beyond a development tool (see the Avid labeling demo).

    Google Vision

The team over at Zora had by far the slickest video hub approach. I believe the play for Google is more around their cloud strategy, attracting storage of the videos and leveraging their Video Intelligence to enable search over all your video assets. Google’s video intelligence is just getting started, and their open-sourcing of the AI foundation TensorFlow makes them one of the top companies committed to video AI. I like what Zora is doing and can see editing teams benefiting from this approach. There was a collaborative element too.

    Microsoft Azure

The GrayMeta UI was slick and their voice-to-text interface was amazing, all powered by Azure. Azure Video Indexer is the real deal, and its face identification capability has broad use cases. Indexing voice isn’t new, but having a fast and slick UI helps enable adoption of the technology. They can pinpoint parts of the video from the text alone. There is a team collaboration element around the product with a Slack feel. The approach was making all media assets searchable.

    AWS Elemental

There were several cool examples of possibilities with Amazon Rekognition: video analysis, facial recognition and video segments. Elemental’s (purchased by Amazon) core technology is video ad stitching, whereby video ads are inserted directly into the video. They created a UI extension demonstrating some possibilities with Rekognition, though it wasn’t clear what was in production beyond the demo. The facial recognition around celebrities looked solid, and Elemental had cool real-time object-detection bounding boxes showing up on sports content too. This has many use cases; however, how much data video publishers can actually manage needs to be addressed before handing them yet another data firehose.
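
For readers who want to try the Rekognition side of this, here is a hedged sketch of asynchronous video label detection with boto3; the bucket and file names are placeholders, and a production system would use an SNS completion notification instead of polling.

```python
# Hedged sketch: asynchronous video label detection with Amazon Rekognition
# via boto3. Bucket/file names are placeholders; polling stands in for the
# SNS-based completion notification used in production.
import time
import boto3

rek = boto3.client("rekognition")
job = rek.start_label_detection(
    Video={"S3Object": {"Bucket": "my-video-bucket", "Name": "game.mp4"}},
    MinConfidence=70,
)

while True:
    result = rek.get_label_detection(JobId=job["JobId"], SortBy="TIMESTAMP")
    if result["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(5)

# Each label arrives with a millisecond timestamp into the video.
for item in result.get("Labels", [])[:10]:
    print(item["Timestamp"], item["Label"]["Name"])
```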

    Conclusion

Video artificial intelligence is just getting started and will only improve with greater computing advancements and new algorithms. The guts of what’s needed to achieve scale already exist. The major use cases around video discovery and search are set to improve dramatically as industry players open up more APIs. Video machine learning has great momentum, utilizing these APIs to crack open the treasure trove of data locked away inside video. The combination of video AI and text analysis creates massive metadata for the multitude of use cases where voice computing can play a role. Outside of all the AI eye candy, there needs to be more focus on clear business problems vs. me-too offerings: what’s the end product, and how will it make the video publisher more revenue?

    How Artificial Intelligence Gives Back Time

How do AI and machine learning give back time? It’s not a secret that AI is here, arriving much faster than many other technology booms; some are saying we’re in the third wave of computing. In our previous post 3 Ways Video Recommendation Drives Video Lifetime Value we talk about how machine learning is transforming finding and recommending videos to enhance the consumer experience. For me, I’m just excited to be part of the machine learning business and to create powerful products focused on improving the digital video experience. I was recently commissioned to put together a short video and deck on how artificial intelligence is transforming the device-based experience for consumers. As part of this project, the brands were looking to understand the different ways AI is going to potentially impact their business. Here were the main topic areas.

    • How do you stay ahead of where AI is headed?
    • How should AI be leveraged to enhance brand trust, improve engagement and help consumers get jobs to be done in a way that is valued by consumers?
    • How can AI be employed to create better personal performance for individuals?

    The presentation was to top brands like Scripps Network, Hertz, Bacardi, Planet Fitness, Arizona State University and DX Marketing.

    Video Transcript ….

Hi, I’m Chase McMichael, CEO and co-founder of InfiniGraph. InfiniGraph focuses on increasing video lifetime value for video publishers and broadcasters, and we do that by processing vast amounts of their video data and understanding what visuals in the video engage consumers. By measuring which images or video clips are most engaging within certain scenes, we’re able to increase video consumption, showing the right images or clips to excite the consumer. Here’s a great example of a video clip that we extracted out of a video for CBS. By putting this specific clip in front of the right person at the right time, we’re able to dramatically increase video take rates and produce a better consumer experience.

I was asked to talk about machine learning and artificial intelligence and how they would affect brands and improve the consumer experience around devices. One of the most exciting things about artificial intelligence, from my point of view, is that AI will give back time to individuals. AI is about making smart decisions for them or providing insights proactively.

So how should AI be leveraged to increase brand trust and engagement and help consumers? First off, brand trust. Brand trust is about anticipating your consumers and being very proactive when they interface with you. It’s important to actually recognize them and provide them with incredible value and service. This is something all brands have struggled with: whether someone comes into a retail establishment or comes online, lacking responsiveness is lost opportunity. A big opportunity is personalization. The ability to personalize one’s experience is a big deal right now. Companies are utilizing their data in some smart ways, especially in the retail segment; we’re seeing this with Amazon and with movie companies like Netflix trying to customize the experience for their audience.

The other thing we’re seeing around brand trust is the ability to be not only intuitive and responsive but proactive. Being proactive requires a much higher level of intelligence around your data. Taking that data to the next level of insights, where the system is really thinking, is AI. The key is anticipation: what is that consumer going to buy, and how are they going to respond? Being more proactive when they purchase a product creates an incredible experience. Again, brand trust is easy to lose. Brands spend many decades, even a hundred years, creating trust, and all of a sudden something happens, the internet revolts, and they become completely eviscerated.

It’s critical that brands are responsive to what’s happening across the social web and monitor it intelligently. How you interface with your consumers, across mobile, social or cloud applications, all requires intelligence.

Don Peppers, back in the late ’90s, was doing what’s called one-to-one marketing, and this was really the onset of the personalized experience. If you’re going to enhance consumer engagement, you have to put in front of that consumer something that visually and cognitively gets them excited. Are you creating an emotional response? Without emotion, people do not recall information, so if you don’t create an emotional response with someone, the ability for them to engage, especially in this distracted economy, approaches zero.

Getting consumers to recall is very difficult. Good service is expected; the reality is people want to be wowed, and that really comes down to how responsive and intuitive your consumer touch points are. Do you know about the individual interacting with you on a day-to-day, month-to-month or year-to-year basis? A core component of any company is understanding your consumers and their individual behavior. When a customer interacts with your brand, do you have the ability to recommend or provide insights that help, and again, give them back time? If not, you’ve really done them a disservice. Your interface has to be fluid or you create drudgery.

Improving engagement is the big win with artificial intelligence. Systems designed to predict and be intuitive, giving back time, will lead their industries. The core question any enterprise must think about is how it is going to re-engineer around consumer data, as well as capture data to drive an active feedback loop. This feedback mechanism between the consumers and each touch point will be the foundation of an AI system.

Helping consumers and creating a frictionless environment will win your consumers over, driving proactive actions that actually work. The other question you have to ask yourself is how you are utilizing that information to create a very robust profile, so that you’re actually having a conversation with your customer.

You actually know a lot about their history, and you know a lot about what was successful. Start engaging with your consumers by understanding their product usage, and leverage that information to improve the experience. From an AI perspective, you now have a system out there mining data to surface functional clusters of information, visually, perhaps vocally, as well as across standard data sets.

Think big here, because we’re now in such a connected community and society that with the push of a button I can share a picture with thousands of people, and a whole conversation springs up around your product, in both a positive and a negative sense. How you insert yourself into that conversation proactively is very important.

How you physically help a consumer really comes down to whether you can be intuitive about their needs. Think about personal data assistants or personal AI assistants. Personal assistants are going to be very intuitive, very smart, and crawl lots of different data sources. These AI assistants will proactively tell you things that simplify your life; an example is letting them go out and find information for you. These types of digital assistants are going to be extremely important in people’s lives, because the things people had to spend their mental time on will give way to higher cognitive thinking rather than the mundane check-off tasks we do today.

If a brand is able to give time back to its consumers, creating an intuitive and frictionless experience, it will be the go-to experience. Your brand will dominate with that customer, because you’re creating not only loyalty but engagement through helping that consumer.

Another area that is creating lots of buzz is artificial intelligence taking over jobs. Clearly, as a big industry or brand, you don’t want to become the job killer in your industry, and that’s a big issue for executives thinking about implementing intelligent automation. Everyone is reading everywhere that the robots are going to take over the world; the reality is about how you are augmenting your staff. What can you do to enable your staff to be more intelligent, utilizing internal resources to be more efficient and effective when they interface with consumers?

The real crux of this whole equation is how to enhance the experience with consumers while empowering employees to be smarter and faster, creating a symbiotic relationship.

Another thing about artificial intelligence is that you really need to be thinking not in quarters but in decades. The companies that have focused on digital transformation and utilizing data intelligence to transform their business will be the disruptors. Are you going to be the leader? Those that have executed AI in their business will have the speed and ability to adapt, enabling them to trump anything that comes to the table. Think AI first.

A picture is worth a thousand words, so in my business videos are worth tens of thousands of words, and for us it’s about finding that unique image or video clip, within a video segment or even a long video format, that really gets consumers excited and engaged. This visual intelligence is critical in my business and very important for many brands. Using visual intelligence, especially in video, for marketing is an incredible opportunity. What you put in front of your consumers, and what you can learn from engagement with the visual properties within those images, is insight. The ability to adjust video is a competitive advantage, creating a higher order of thinking where you’re giving a machine the ability to transform content in real time. That’s the artificial intelligence we talked about previously: thinking about how to take your brand and your industry and start making sense of all the data that’s coming in.

It’s all about data in, and quality and consumer engagement out.

    Machine Learning, Video Deep Learning and Innovations in Big Data


Paul Burns, CDS at InfiniGraph, talks on Video Deep Learning and Machine Learning at Idea to IPO on Innovations in Big Data

Paul Burns, Chief Data Scientist at InfiniGraph, provides his point of view on what he has learned from doing massive video processing and video data analysis to find what images and clips work best with audiences. He spoke at the Idea to IPO event on Machine Learning, Video Deep Learning and Innovations in Big Data. Below is a quick preview of Paul’s insights and approach to machine learning and big data.

I’m Paul Burns, Chief Data Scientist at InfiniGraph, working with a startup involved in mobile video intelligence. I’ve had a bit of a varied career, although a purely technical one: I started off in auto-sensing, spending 15 years doing research on RF sensor signal and data processing algorithms. I took a bit of a diverted turn a number of years ago, got a PhD in bioinformatics, and worked in the life sciences, in the genomics and sequencing industry, for about three years. Now I have turned to video, so I have a range of experience working with large datasets and learning algorithms, and hopefully I can bring some insights that others here would like.

My own personal experience is one in which I’ve inhabited a space very close to the data source. So when I think about big data, I think about opportunities to find and discover patterns that are not necessarily apparent to an expert, or that can be automatically found and used for prediction, analysis, or monitoring the health and status of sensors at useful levels of effectiveness. There’s a lot of difference in perceptions of what big data really is, other than a common thread that seems to be a way of thinking about data. And I hate the word data; it’s so non-descriptive, so generic, that it has almost no meaning at all.

I think of data as just information that’s stockpiled, and it could be useful if you knew how to go in and sort through the stockpile to find patterns: patterns that persist and can be used for predictive purposes. I think there’s been generally slow progress over many decades, and the explosion in recent years is primarily because of breakthroughs in computer vision and advancements in multi-layer deep neural networks, particularly for processing image and video data.

This is something that’s taken place over the last ten years, first with the seminal paper authored by Geoffrey Hinton in 2006, which demonstrated breakthroughs in deep multi-layer neural networks, and then with the work published for the ImageNet competition in 2012, which made a significant advancement in performance over more conventional methods.

I think the major reason for all this excitement is that visual perception is so incredibly powerful. That’s been an area where we’ve really struggled to make computers relate to the world and to understand and process the things happening around them. There’s this sense that we’re on the cusp of a major revolution in autonomy; you can look at all the autonomous vehicles and all the human power and capital being put into those efforts.

Paul answers a question on privacy: Honestly, I think privacy has been dead for some time. The way it should be structured is the way Facebook works: I can choose to opt into Facebook and have a lot of the gory details of my life exposed to the world and to Facebook, but what I get out of that is being more closely connected to friends and family, so I choose to opt in because I want that reward. Privacy issues where I don’t have the opt-out choice are the most problematic. There was a government program I’m aware of in the Netherlands some years ago: a pilot program where people could opt out of having their hospital care data published in a government database, the purpose of which was to learn and find patterns in health outcomes. That’s a little controversial, because the public health benefits of having such a database could be enormous and transformational, so it’s a very complicated issue. I’m certainly probably not qualified to speak on this topic; I would say privacy has long since been dead and we kind of have to do a postmortem.

We’re very fortunate that so much very high-quality research has been published, and so many excellent data sets and model parameters are available for free download. When starting out, we were working on just very generic replication of open systems; object recognition can be done with fairly high-quality, free, open-source code in a week. That was our starting point: to be able to advertise mobile video by selecting thumbnails that are somehow more enticing for people to click on than the default ones the content owners provide.
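
As an example of that week-one starting point, here is a sketch of off-the-shelf image recognition with a pretrained torchvision model; nothing in it is InfiniGraph-specific, and the input file name is a placeholder.

```python
# Sketch: off-the-shelf image recognition with an ImageNet-pretrained
# torchvision model -- the kind of free, open-source baseline described above.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT           # pretrained parameters
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()            # matching resize/crop/normalize

image = Image.open("thumbnail.jpg").convert("RGB")
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))

top = logits.softmax(dim=1).topk(3)
for score, idx in zip(top.values[0], top.indices[0]):
    print(weights.meta["categories"][int(idx)], float(score))
```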

As it turned out, this idea our co-founders came up with about a couple of years ago (KRAKEN video machine learning: how to increase video lifetime value) works. It’s amazing how bad humans are at predicting what other people want to click on. As far as we know, we are the only startup solely focused on this core idea, which sounds like a small business, but the mobile video volume and advertising revenue out there is huge and growing.

What I do when I have a hard problem is try to stockpile as much data as possible, to create the most thorough training set I can, and I think the most successful businesses will be the ones able to do that. It turns out there are actually companies whose entire business is helping you create training sets for your machine learning applications. We use a variety of methods to do that; crowdsourcing is one common way, but it’s really expensive, far more expensive than I thought possible. Startups that find a way to harvest rich training sets that are valuable for inference have the potential to be huge winners. It just turns out to be very hard to do.

Another area that is big is wearable technology for personal health monitoring. I think that’s an area with tremendous potential, because your physician is starving for data. You have to make a point to see your doctor, schedule it, etc. So what do they do? They weigh you, take your blood pressure, ask how old you are, and that’s about it. That’s nothing; they do not know what’s going on with you. Maybe it’s personality dependent, but I would be very much in favor of disclosing all kinds of biometric information about myself, having it continuously recorded and stockpiled in a database, repeatedly scanned by intelligent agents for anomalies, and doctor’s appointments automatically scheduled for me. The same goes for any complicated piece of machinery: it could be a car, it could be parts of your business. This kind of invasive monitoring will come with resistance, but it could be unleashed as people see the value in disclosing.

    See full panel here Idea to IPO

    3 Ways Video Recommendation Drives Video Lifetime Value

Video recommendation and discovery are very hot topics across video publishers looking to drive higher returns on their video lifetime value. Attracting a consumer to watch more videos isn’t simple in the attention-deficit society we live in. However, major video publishers are creating better experiences, using video intelligence to delight, enhance discovery and keep you coming back for more. In this post we’ll explore the intelligence behind visual recommendation and how to enhance consumer video discovery.

    Industry Challenge


    Google Video Intelligence API demo of video search finding baseball clips within video segments.

Last year we posted on Search Engine Journal How Deep Learning Powers Video SEO, describing the advantages of video image labeling and how publishers can leverage valuable data that was otherwise trapped in images. Since then, Google announced Video Intelligence at Next17. (InfiniGraph was honored to be selected as a Google Video Intelligence beta tester.) The MAJOR challenges with Google’s cloud offering are pushing all your video over to Google Cloud, the cost of labeling video at volume, and losing control of your data. So how do you do all this on a budget?

    Not all data is created equal

Image: trending content lacks image-based video machine learning.

Trending content is based on popularity rather than content context and actual consumer content consumption.

And not all video recommendation platforms are created equal. The biggest video publishers are advancing their offerings with intelligence. InfiniGraph is addressing this gap, using video intelligence to create affordable technology that would otherwise be out of reach.

Outside of do-not-track constraints, creating a truly personalized experience is ideal. VOD/OTT apps create the best path to robust personalization; for the web, a more generalized grouping of consumers is required.

    See how “Netflix knows it has just 90 seconds to convince the user it has something for them to watch before they abandon the service and move on to something else”.

    Video recommendation platforms


Image-based video recommendation “MANTIS”: going beyond simple metadata and trending content to full intelligent context. Powered by KRAKEN.

All video recommendation platforms rely on data entered (called metadata) when a video was uploaded to a video content management system: title, description, etc. The other main points of data capture are plays, time on video and completion, indicating watchability. But there is so much more to a video than raw insights. Whether someone watched a video is important, but understanding the why, in the context of other videos with similar content, is intelligence. Many sites have trending videos; however, promoting videos that already get lots of plays creates a self-fulfilling prophecy, because trending is artificially amplified and doesn’t indicate relevance.

    An Intelligent Visual Approach


Going beyond metadata is key to a better consumer experience. Trending only goes so far. Visual recommendation looks at all the content based on consumer actions.

Surfacing the right video at the right time can make all the difference between people staying or going. Leaders like YouTube have already begun to leverage artificial intelligence in their video recommendations, producing 70% greater watch time. Recently they added animated video previews to their thumbnails, pushing take rates even higher. This is more proof that consumers desire intelligent recommendations and slicker visual presentation.

InfiniGraph provides a definitive differentiation, using actions on images and in-depth knowledge of what’s in the video segments to build relevance. Consumers know what they like when they see it, and understanding this visual ignition process is key to unlocking the potential of visual recommendation. How do you really know what people would like to play if you don’t know much about the video content? Understanding the video content and context is the next stage in intelligent video recommendation and personalized discovery.

    3 Ways Visual Video Recommendation Drives Video Lifetime Value

1. Visual recommendation – Visual information within video creates higher visual affinity to amplify discovery. Content likeness beyond just metadata opens up more video content to select from. Mapping what people watch is based on past observation; predicting what people will watch requires understanding video context.

2. Video scoring – A much deeper approach to video had to be invented, where the video is scored based on visual attribution inside the video and human behavior on those visuals. This scoring lets the content SPEAK FOR ITSELF and enables ordering playlists relative to what was watched (see the sketch after this list).

3. Personalized selection – Enhancing discovery requires greater intelligence and context about what content is being consumed. Environments like OTT or a mobile app can enable high levels of personalization; for consumers on the web, a more general approach that clusters consumers into content preferences powers better results while honoring privacy.
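
As referenced in point 2, here is a minimal sketch of ordering a playlist by a blended video score; the two score inputs and their weights are assumptions for illustration.

```python
# Sketch: order a candidate playlist by a blended video score. The split
# between a visual-affinity score and a watch-behavior score is assumed.
def rank_playlist(videos, w_visual=0.6, w_watch=0.4):
    """videos: [{"id": ..., "visual_score": 0-1, "watch_score": 0-1}, ...]"""
    def blended(v):
        return w_visual * v["visual_score"] + w_watch * v["watch_score"]
    return sorted(videos, key=blended, reverse=True)

playlist = rank_playlist([
    {"id": "a", "visual_score": 0.9, "watch_score": 0.4},
    {"id": "b", "visual_score": 0.5, "watch_score": 0.9},
])
print([v["id"] for v in playlist])  # -> ['a', 'b'] with these weights
```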

    The Future is Amazing for Video Discovery

We have learned a great deal from innovative companies like Netflix, Hulu, YouTube and Amazon, who have all come a long way in their approach to advanced video discovery. Large-scale video publishers have a grand opportunity to embrace a new technology wave and stay relevant while creating a visually conducive consumer experience. A major challenge going forward is the speed of change; video publishers must adapt if they wish to stay competitive. With InfiniGraph’s advanced technologies designed for video publishers, there is hope. Take advantage of this movement and increase your video lifetime value.

Top image from the post, melding real life with recommendations.

Video Publishers: Ready for the Video Autoplay Shutdown?

Video publishers have been caught off guard by the recent announcement of Apple blocking video autoplay, and even Google is pushing back on bad web ads. The backlash against video autoplay has been festering for some time. If losing video ad revenue and turning consumers off with declining traffic isn’t a wake-up call, then what will be? Headlines like this from CNN, “Apple’s plan to kill autoplay feature could leave publishers in the dust”, should get video publishers’ attention. This clampdown isn’t a joke, and Google and Apple are taking a hard line to clean up the web experience when it comes to video. Here we dive deep into how to get ahead of these changes by Apple and Google and increase your video lifetime value.

    Facebook started the conversation

Since Facebook started force-feeding video autoplay on us, other publishers followed suit, knowing their video volume would go up. However, some major agencies flat out said they would only pay half the CPMs due to the viewability issues with autoplay, and a major advertiser (Heineken) is publicly having challenges getting a 6-second clip to stick. Publishers say the video relationship with Facebook is “complicated”. This is a topic of constant discussion, and other players are outright opting out of video autoplay altogether in favor of a better consumer experience. The major catch-22 here is that publishers driving their O&O strategy can’t think of autoplay as a video strategy; it’s a tactic that, in most cases, turns consumers off. If you want to see some of the consumer backlash, just search Google for “how to turn off autoplay” and you will see that this is most definitely a real consumer pain point. With Apple’s latest release of iOS 11 specifically blocking video autoplay, a more thoughtful and intelligent approach is required.

    Video Strategy?


Publishers are responding to consumer demand by giving the option to turn OFF autoplay video.

A video strategy involves deciding to dominate a content category vertically and be the go-to source for the highest-value content in that space. Yes, video is content marketing. People watch video for information, enlightenment, entertainment, etc. Video is a very effective communication tool; it is mobile and on demand. And being a tool, the publisher has a responsibility to wield it surgically rather than as a blunt object that pushes video views without consumer consent or value added for paid advertisers. Some publishers understand this, such as LittleThings Inc. They are disabling video autoplay completely and focusing on consumer experience. This has resulted in higher play rates (CTR) and higher CPMs that can be verified and justified to their customers. The other major benefit was that consumers engaged more.

“We wanted video views to be on the consumer’s terms. By running autoplay, you might [reach your desired] fill rate, but the user is not engaged with the brand the way they would be if they raised their hands to watch the video,” said Justin Festa, chief digital officer of LittleThings, at JW Player’s JW Insights event in New York.

    Higher Intelligence

The digital publisher today is going to have to use higher intelligence with consumers. A surgical approach to utilizing data and then presenting it is now a must-have. So what is the benefit of artificial intelligence in video? It is better to start with the question: what is digital video? If we break it down, digital video is just a series of images and sequences spliced together. Humans are visual and have emotional responses to images and context. The story is a major draw, creating a greater emotional response than simply the affinity one may have for the people in it. A computer that translates all of the above and puts it into context would have to be truly intelligent. This is not something new; Netflix proved you get higher take rates by showing the right images, which results in higher consumer engagement.

    In the Making

Three years ago, a technology called KRAKEN was introduced. It utilizes video machine learning to replace the static, non-intelligent thumbnail with interactive dynamic thumbnails: the set of images best able to drive the highest play rates possible. The rotation of images provides more visual information than a single image. Video clipping (GIF) came next; however, it is most effective for action shots. A new way of looking at video thumbnails was required, and the solution was real-time, responsive, dynamic intelligence that scores images based on relevance. Finding the best images is one thing, but powering video recommendation was a natural fit for finding great images. Learning what collective visuals work together to extend time on site is a major deal for all publishers. We’re living in exciting times, with advances in machine learning and computer chip design achieving amazing levels of image processing capability. We have experienced a big leap forward in the code foundation (like deep learning) now powering platforms to segment out objects, images, places and faces. We’re in an artificial intelligence renaissance.

    Show me the money


Video recommendation powered by KRAKEN video machine learning: going beyond metadata and plays to the visuals within the video.

It’s no secret that ads still drive the bulk of digital video revenue. For that very reason, each video play and each increase in time on site translates into cold hard cash. Making the site sticky and getting more repeat visits requires video intelligence. Google and Apple are very serious about protecting the mobile web. It is clear that Google AMP (Accelerated Mobile Pages) has won out with publishers, while Facebook Instant Articles has fallen short; most have abandoned it for making less money than AMP. The perfect trifecta of real-time video analytics, intelligent image selection and video recommendation is now a reality. We have the data and processing power to predict what images get you excited and what video is most relevant to watch. Video discovery is key to increasing video lifetime value.

    Conclusion

Are you ready for the do-not-track and non-autoplay world? Like it or not, Google and Apple are disabling video autoplay and intrusive ads. The digital broadcasting publisher has a grand opportunity to leverage machine learning in video. Tapping into visually relevant actions and drawing out behavior is a competitive advantage. Machine learning linked with digital video that maximizes your video assets is a strategic advantage and increases video lifetime value. The video recommendation example above was not possible before machine learning-based video processing made it a reality. What possibilities can you imagine?

    How To Increase Video Lifetime Value via Machine Learning


Video discovery is one of the best ways to increase video lifetime value. Learning what video content is relevant drives greater time on site.

All video publishers are looking to increase their videos’ lifetime value. Creating video can be expensive, and the shelf life of most video is short, so maximizing those video assets and their lifetime value is a top priority. With the advent of new technologies such as video machine learning, publishers can now increase their videos’ lifetime value by intelligently generating more time on site. Identifying the best image to lead with (the thumbnail) and recommending relevant videos drive higher lifetime value through user experience and discovery.

This combination of visual identification and recommendation is like the Reese’s of video: two great tastes put together. By linking technologies like artificial intelligence and real-time video analytics, we’re changing the video game through automated actionable intelligence.

Ryan Shane, our VP of Sales, describes the advantages of knowing which visual (the video thumbnail) and context produce the most engagement, and which video business models benefit the most from video machine learning.

    Hear from our CEO, Chase McMichael, who talks about the advanced use of machine learning and deep learning to improve video take rates by finding and recommending the right images consumers engage with the most.

    Here are two examples of how video machine learning increases revenue on your existing video assets.

    Yield Example #1: Pre-roll

If you run pre-roll on your video content, you likely fill it with a combination of direct sales and an RTB network. For this example, assume you have a 10% CTR, which translates to 1 million video plays each day; that means you are showing 1,000,000 pre-roll ads each day. Now assume that you run KRAKEN on your videos and engagement jumps by 30%, to a 13% CTR. That means you will be showing 1,300,000 pre-roll ads each day. KRAKEN has effectively added 300,000 pre-roll spots for you to fill! This is an example of increasing the video value of your existing consumers.
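
Here is that arithmetic spelled out; every figure is the article’s hypothetical.

```python
# The pre-roll math from the example above; every number is hypothetical.
impressions_per_day = 10_000_000   # implied by a 10% CTR yielding 1M plays
baseline_ctr = 0.10
lift = 0.30                        # assumed KRAKEN engagement jump

plays_before = impressions_per_day * baseline_ctr
plays_after = plays_before * (1 + lift)
print(int(plays_before), int(plays_after), int(plays_after - plays_before))
# -> 1000000 1300000 300000 additional pre-roll spots per day
```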

    Yield Example #2: Premium Content

For our second example, assume you monetize with premium content. You have an advertising client who has given you a budget of $100,000 and expects their video to be shown 5 million times. At your current play rates, you determine it will take four days to achieve that KPI. Instead, you run KRAKEN on their premium content and engagement jumps 2X, so you hit your client’s KPI in only two days. You have now freed up two days of premium content inventory that you can sell to another client! Maximizing your existing video consumers and increasing CTR reduces the need to sell off-network.
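
And the same treatment for this example: a 2X engagement jump halves the days needed to deliver the contracted views.

```python
# The premium-content math: 2x engagement halves the time to the 5M-view KPI.
target_views = 5_000_000
views_per_day = target_views / 4        # current pace: four days to the KPI

days_with_kraken = target_views / (views_per_day * 2)   # 2x engagement
print(days_with_kraken, "days; freed inventory:", 4 - days_with_kraken, "days")
# -> 2.0 days; freed inventory: 2.0 days
```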

Below is a side-by-side example of the Guardians of the Galaxy default thumbnail vs. the KRAKEN rotation powered by deep learning. Boosting click rates generates more primary views, while inserting known response-inducing images into video recommendations (the Reese’s combination) is the logical next step. The two together drive primary and secondary video views.

As you can see from both examples, using KRAKEN increases lifetime value as well as advertising yield from your video assets. Displaying like content, sorted by deep learning and video analytics by category, delivers greater relevance. Organizing video into context is key to increasing discovery. Harnessing artificial intelligence for image selection and recommendation brings together the best of both digital video intelligence worlds.

    Bite into a Reese’s and see how you can increase your video lifetime value.  Request a demo and we’ll show you.

     

    For OTT, Machine Learning Image is Worth More Than a Thousand Words

So, you’ve developed an OTT app and you’ve marketed it to your viewers. Now your focus is on keeping your viewers watching. How can machine learning drive more engagement? Let’s face it: they may have a favorite show or two, but to keep them engaged for the long term, they need to be able to discover new shows. Because OTT is watched on TVs, you have a lot of real estate to engage with your viewers. A video’s thumbnail has more of an impact on OTT than on any other platform, so choose your thumbnails carefully!

    Discovery is different on different platforms

On desktop, most videos start with either a search (e.g. Google) or a social share (e.g. Facebook). Headlines and articles provide additional info to get a viewer to cognitively commit to watching a video. Autoplay runs rampant, removing the decision to press “play” from the user.


    TVs have a lot more real estate than smartphones

On a smartphone, small screen size is an issue. InfiniGraph’s machine learning data shows that more than three objects in a thumbnail cause a reduction in play rates. Again, social plays a huge role in the discovery of new content, with some publishers reporting that almost half of their mobile traffic originates from Facebook.
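
One way to act on that finding, sketched under the assumption that thumbnail candidates arrive with detected object labels (from any off-the-shelf detector, such as the YOLO sketch earlier in this series):

```python
# Sketch: drop thumbnail candidates with more than three detected objects,
# per the finding above. The candidate format is an assumption.
def filter_thumbnails(candidates, max_objects=3):
    """candidates: [{"image": path, "objects": [labels...]}, ...]"""
    return [c for c in candidates if len(c["objects"]) <= max_objects]

kept = filter_thumbnails([
    {"image": "t1.jpg", "objects": ["person", "ball"]},
    {"image": "t2.jpg", "objects": ["person", "person", "car", "sign"]},
])
print([c["image"] for c in kept])  # -> ['t1.jpg']
```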

    OTT Discovery is Unique

The discovery process on OTT is unique because the OTT experience is unique. Most viewers already have something in mind when they turn on their OTT device. In fact, Hulu claims it can predict with 70% accuracy the top three shows each of its users is tuning in to see. But what about the other 30%? What about the discovery of new shows?


    Netflix AB Test Example

Netflix has said that if a user can’t find something to watch in 30 seconds, they’ll leave the platform. They decided to start A/B testing their thumbnails to see what impact it would have, and discovered that different audiences engage with different images. They were able to increase view rates by 20-30% for some videos just by using better images! In the on-demand world of OTT, the right image is the difference between a satisfied viewer and a user who abandons your platform. If you’re interested in increasing engagement on your OTT app, reach out to us at InfiniGraph to learn more about our machine learning technology, KRAKEN, which chooses the best images for the right audience every single time. Also, check out our post about increasing your video ad inventory!

    More on machine learning powered image selection and driving more video views.

    Making More Donuts

Being a publisher is a tough gig these days. It’s become a complex world for even the most sophisticated companies, and the curve balls keep coming. Consider just a few of the challenges that face your average publisher today:

    • Ad blocking.
    • Viewability and measurement.
    • Decreasing display rates married with audience migration to mobile with even lower CPMs.
    • Maturing traffic growth on O&O sites.
    • Pressure to build an audience on social platforms including adding headcount to do so (Snapchat) without any certainty that it will be sufficiently monetizable.
  • The sad realization that native ads (last year’s savior!) are inefficient to produce, difficult to scale and not easily renewable with advertising partners.

    The list goes on…

    The Challenge

Of course, the biggest opportunity, and challenge, for publishers is video. Nothing shows more promise for publishers from both a user engagement and a business perspective than (mobile) video. It’s a simple formula: when users watch more video on a publisher’s site, they are, by definition, more engaged. More video engagement drives better “time spent” numbers and, of course, higher CPMs.

But the barrier to entry is high, particularly for legacy print publishers. They struggle to convert readers to viewers because creating a consistently high volume of quality video content is expensive and not necessarily a part of their core DNA. Don’t get me wrong: they are certainly creating compelling video, but they have not yet been able to produce it at enough scale to satisfy their audiences. At the other end of the spectrum, video-centric publishers like TV networks that live and breathe video run out of inventory on a continuous basis.

The combined result of publishers’ struggle to keep up with consumer demand for quality video is a collective dearth of quality video supply in the market. To put it in culinary terms, premium publishers would sell more donuts if they could, but they just can’t bake enough to satisfy the demand.

    So how can you make more donuts?
    Trust and empower the user! 


Rise of Artificial Intelligence

The majority of the buzz at CES this year was about artificial intelligence and machine learning. The potential for Amazon’s Alexa to enhance the home experience was the shining example of this. In speaking with several seasoned media executives about the AI/machine learning phenomenon, however, I heard a common refrain: “The stuff is cool, but I’m not seeing any real applications for my business yet.” Everyone is pining to figure out a way to unlock user preferences through machine learning in practical ways they can scale and monetize for their businesses. It is truly the new Holy Grail.

    The Solution

That’s why we at InfiniGraph are so excited about our product KRAKEN. KRAKEN has an immediate and profound impact on video publishing: it lets users curate the thumbnails publishers serve and optimizes toward user preference through machine learning in real time. The result? KRAKEN increases click-to-play rates by 30% on average, with the corresponding additional inventory and revenues.

It is a revolutionary application of machine learning that, in execution, makes a one-way, dictatorial publishing style an instant relic. With KRAKEN, users literally collaborate with the publisher on which images they find most engaging. KRAKEN actually helps you, the publisher, become more responsive to your audience. It’s a better experience and outcome for everyone.

    The Future…Now!

In a world of cool gadgets and futuristic musings, KRAKEN works today in tangible and measurable ways to improve your engagement with your audience. Most importantly, KRAKEN accomplishes this with your current video assets: no disruptive change to your publishing flow, no need to add resources to create more video, just a machine learning tool that maximizes your video footprint.

    In essence, you don’t need to make more donuts.  You simply get to serve more of them to your audience.  And, KRAKEN does that for you!

     

    For more information about InfiniGraph, you can contact me at tom.morrissy@infinigraph.com or read my last blog post  AdTech? Think “User Tech” For a Better Video Experience

     

    How Deep Learning Increases Video Viewability

Video viewability is a top priority for video publishers, who are under pressure to verify that their audience is actually watching advertisers’ content. In a previous post, How Deep Learning Video Sequence Drives Profits, we demonstrated why image sequences draw consumer attention. Advanced technologies such as deep learning are increasing video viewability by identifying and learning which images make people stick to content. This content intelligence is the foundation for advancing video machine learning and improving overall video performance. In this post, we will explore some challenges in viewability and how deep learning is boosting video watch rates.

    Side by side: default thumbnail vs. KRAKEN rotation powered by deep learning


    In the two examples above, which one do you think would increase viewability? The video on the right uses images selected by deep learning, with automatically adjusted image rotation. It delivered a whopping 120% more plays than the static image on the left, which was chosen by an editor. The higher viewability is validated by the fact that the same video, in the same placement at the same time, achieved a greater audience take rate with machine-chosen images.
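
    To make that comparison concrete, here is a minimal sketch of how such a lift can be computed and sanity-checked. The impression and play counts below are hypothetical, chosen only to reproduce a 120% lift; the post does not publish the underlying numbers.

```python
# Hypothetical counts for illustration only; the underlying data
# behind the reported 120% lift is not published in this post.
from math import sqrt

def lift_and_z(control_plays, control_imps, variant_plays, variant_imps):
    """Play-rate lift of the variant over the control, plus a
    two-proportion z-score to check the difference is not noise."""
    p_c = control_plays / control_imps
    p_v = variant_plays / variant_imps
    lift = (p_v - p_c) / p_c                      # 1.2 == +120%
    p_pool = (control_plays + variant_plays) / (control_imps + variant_imps)
    se = sqrt(p_pool * (1 - p_pool) * (1 / control_imps + 1 / variant_imps))
    return lift, (p_v - p_c) / se

lift, z = lift_and_z(control_plays=500, control_imps=10_000,
                     variant_plays=1_100, variant_imps=10_000)
print(f"lift: {lift:+.0%}, z-score: {z:.1f}")    # +120%, far above 1.96
```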

    This boost in video performance was powered by KRAKEN, a video machine-learning technology. KRAKEN is designed to learn which visuals contained in the video consumers are most likely to engage with. More views equals more revenue.

    Measurement

    A/B testing is required when looking to verify optimization. For decades, video players have been devoid of any intelligence: a ‘dumb’ interface for displaying a video stream to consumers. Without intelligence, the video player was just a bit-pipe. Only basic measurements were taken, such as video starts, completes, and views, along with some advanced metrics such as how long a user watched. New thinking was required to be more responsive to the audience and take advantage of which images people react to. Increasing reaction increases viewability.

    So how does KRAKEN do its A/B testing? The goal was to create the most accurate measurement foundation possible: test which visuals consumers are more likely to engage with and measure the crowd’s response to one image versus another. KRAKEN implements a 90/10 traffic split, whereby 10% of traffic sees the default thumbnail image (the control) and 90% sees the KRAKEN-selected images. Now that HTML5 is the standard and Adobe Flash has been deprecated, running A/B tests within video players has become even simpler.
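
    For illustration, here is a minimal sketch of that 90/10 split in Python. It is not KRAKEN’s actual implementation; the `assign_arm` and `record` helpers are hypothetical, but the pattern, stable per-viewer bucketing plus per-arm click-to-play tallies, is the standard way such a test is run.

```python
# A minimal 90/10 split sketch -- not KRAKEN's production code.
# Hashing the viewer ID gives a stable bucket, so the same viewer
# always lands in the same arm across sessions.
import hashlib

def assign_arm(viewer_id: str, control_share: float = 0.10) -> str:
    """Return 'control' (default thumbnail) or 'kraken' (ML-selected)."""
    digest = hashlib.sha256(viewer_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF    # uniform in [0, 1]
    return "control" if bucket < control_share else "kraken"

# Tally impressions and plays per arm as events stream in.
stats = {"control": {"imps": 0, "plays": 0},
         "kraken":  {"imps": 0, "plays": 0}}

def record(viewer_id: str, played: bool) -> None:
    arm = assign_arm(viewer_id)
    stats[arm]["imps"] += 1
    stats[arm]["plays"] += played
```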

    User experience

    Making sure a video is “in view” is one thing, but the experience has a great deal to do with legitimate viewability. The bigger question is: will a person engage and really want to watch? People have a choice in what they watch; it’s not that complex. If the content is bad, why would anyone watch it? If the site is known for identifying or creating great content, that box can be checked off.

    Understanding which visuals make people tick and get engaged is a key factor in increasing viewability. Consumers have affinities to visuals, and those affinities are core to them taking action. Tap into the right images and you will enhance the first impression and the consumer experience.

    What is Visual Cognitive Loading?


    How the brain recognizes objects: MIT neuroscientists find evidence that the brain’s inferotemporal cortex can identify objects. The right visuals induce a human response, increasing attraction and attention. Photo: MIT

    It is very hard to convey a video’s story with a single image. Yes, an image is worth a thousand words, but some people need more information to get excited. Video is a linear body of work that tells a story. Humans are motivated by emotion, intrigue, and action; sight and motion create a visual story that can be a turn-on or a turn-off. Finding the right turn-on images that tell a story is golden. Identifying what will draw viewers into a video is priceless.

    The human visual cortex, connected to your eyes via the optic nerve, is like a supercomputer. The speed at which you detect faces and objects is also how fast someone can be turned off by your video. Digital expectations are high in the age of digital natives. For this very reason, the right visual impression is required for a video to stick, i.e., a “sticky video.” If your video isn’t sticky, you will lose massive numbers of viewers and be effectively ignored, just like “banner blindness.” The more visual information shown to a person, the higher the probability of inducing an emotional response. Cognitive loading thereby gives viewers more information about what’s in the video. If you’re going to increase viewability, you have to increase cognitive loading. It’s all about whether the content is worthy of their time.

    Why Deep Learning


    Deep learning layers of object recognition. Understanding what’s in the images is as valuable as the metadata and title. Photo: VICOS

    The ability to identify which images work, and why, is a big step beyond the previous method of “plug and pray.” Systems can now recognize what’s in an image, and linking that information back to consumer behavior in real time creates a very powerful learning environment for video. It is now possible to create a hierarchical shape vocabulary for multi-class object representation, further expanding a meaningful data layer.

    In our previous post, How Deep Learning Powers Video SEO, we describe the elements behind deep learning in video and the power of object recognition. This same power can be applied to video selection and managing visuals in real time. Both image rotation and full animation (clips) provide maximum visual cognitive loading.
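
    As a rough illustration of object recognition applied to thumbnail candidates, the sketch below samples frames from a video with OpenCV and tags each with the top label from an off-the-shelf ResNet-50. It assumes the opencv-python, torch, and torchvision packages; this stands in for, and is not, InfiniGraph’s production models.

```python
# Sketch: sample candidate frames and label their contents with a
# pretrained classifier -- raw material for smarter thumbnail selection.
import cv2
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

def label_frames(video_path: str, n_frames: int = 8):
    """Sample n_frames evenly across the video and return
    (frame_index, top_label) pairs for each sampled frame."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    results = []
    for i in range(n_frames):
        idx = i * total // n_frames
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if not ok:
            continue
        img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        with torch.no_grad():
            logits = model(preprocess(img).unsqueeze(0))
        results.append((idx, labels[logits.argmax().item()]))
    cap.release()
    return results
```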

    The KRAKEN Hypothesis

    Quality video and accurate measurement are paramount when optimizing video. Many ask: why are KRAKEN images better? They are better because using deep learning to select the starting images increases the probability of surfacing the images consumers will want to engage with. Over time, the system gets smarter and optimizes faster. A real-time, active feedback mechanism continuously adjusts and sends information back into the algorithm so it improves over time.
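
    The feedback mechanism described above behaves like a multi-armed bandit. Below is a toy epsilon-greedy sketch of the general pattern: serve the best-performing thumbnail most of the time, keep exploring the others, and fold every observed play back into the estimates. KRAKEN’s actual algorithm is unpublished; the `ThumbnailOptimizer` class here is purely illustrative.

```python
# Toy epsilon-greedy feedback loop, illustrating the general pattern only.
import random

class ThumbnailOptimizer:
    def __init__(self, candidates, epsilon=0.1):
        self.epsilon = epsilon
        self.stats = {c: {"imps": 0, "plays": 0} for c in candidates}

    def rate(self, c):
        s = self.stats[c]
        return s["plays"] / s["imps"] if s["imps"] else 0.0

    def choose(self):
        if random.random() < self.epsilon:           # explore occasionally
            return random.choice(list(self.stats))
        return max(self.stats, key=self.rate)        # otherwise exploit

    def feedback(self, c, played):
        """Fold each observed impression/play back into the estimates."""
        self.stats[c]["imps"] += 1
        self.stats[c]["plays"] += played

# Hypothetical usage with three candidate thumbnails:
opt = ThumbnailOptimizer(["thumb_a.jpg", "thumb_b.jpg", "thumb_c.jpg"])
choice = opt.choose()
opt.feedback(choice, played=True)
```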

    Because KRAKEN captures consumer-curated actions, proactive video image selection becomes possible. We make the assertion that optimized thumbnails result in more engaged video watchers, as proven by the increase in video plays. KRAKEN drives viewability and, as a result, enables publishers to command premium O&O rates.

    Viewability or go home

    After the Facebook blunder of “miscalculating video plays” and other measurement stumbles, major brands have taken notice ... if you want to believe this was just a “mistake.” A three-second autoplay isn’t a play in a feed environment when audio is off, according to Rob Norman of GroupM. The big challenge is that there really isn’t a clear standard, just advice on handling viewability from the IAB. However, big media buyers like GroupM are demanding more, requiring that half of video plays be click-to-play to meet their viewability standard. This is a wake-up call for video publishers to get very serious about viewability, and for advertisers to create better content. All agree viewability is a top KPI when judging a campaign’s effectiveness. 2017 is going to be an exciting year to watch advertisers and publishers work together to increase video viewability. See The State of Video Ad Viewability in 5 Charts as the conversation heats up.