Cable and Satellite International IBC 2017 review – AI brewing
By Goran Nastic | 9/29/17
Artificial intelligence bubbled beneath the surface at IBC 2017, with the first tantalising glimpses of how it is being used in media and how it is set to transform workflows.
Andrew Ng, former chief scientist at Baidu, where he led the company’s Artificial Intelligence Group, recently talked about AI becoming like electricity. It will become just another tool of life and business, but these are early days and every industry still needs to figure out how to apply it.
Looking at the media & entertainment space, recommendations and automatic metadata generation are the most obvious use cases, but many more AI applications are appearing, some of which are highlighted below.
A handful of AI-related start-ups could be found on the floors of the RAI, but apart from IBM Watson, which showed how it is working with the US Open (among a few other tennis tournaments), most companies weren’t quite ready to make AI the centrepiece of their demos despite the buzz slowly building. Expect this to change in 2018.
Blurred lines
AI is part of a bigger bubble that also takes in machine learning and deep learning, but Ericsson’s Mark Russel pointed out that there is a “real fuzzy border” between real AI and simply a good use of analytics.
Ericsson is looking at how AI can be used in its business, including supporting human decision-making. “Nothing disruptive to start with, just for better insight. Once you have more confidence in a machine to make decisions better and more quickly, then comes AI augmentation through intervention. We will get to a point where those interventions are automated,” said Russel.
“Every modern product has an analytics engine at its core and we are doing that ourselves. We are more conservative around our use of AI than others. It’s an ambiguous and overused term, and will continue to be for a while,” he said, adding that R&D in this area tends to be expensive.
IBM echoed Russel’s sentiment.
“AI can be a loaded word for us too. We define it as a system learning and improving without human intervention,” said David Kulczar, who works on the analytics side of IBM Cloud Video.
Kulczar describes IBM Watson as a ‘video enrichment solution’ that helps companies monetise through better content and business decisions.
Watson processes video on a scene-by-scene basis, analysing context and thematic cues. The assets are then categorised and fed into a natural language understanding pool.
According to Kulczar, these types of solutions have a number of use cases, including content search and discovery (index, archive and access); recommendations and uploads; compliance (eg tagging scenes for regionalisation of content, nudity detection, etc); and closed captioning.
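For illustration, the sketch below shows the general shape of such a scene-by-scene enrichment pipeline in plain Python. The classifier is a stand-in stub rather than IBM’s API, and the scene objects and tags are invented for the example.

```python
# Illustrative only: a minimal scene-by-scene enrichment pipeline of the kind
# described above. The classifier is a stand-in stub, not IBM's API.
from dataclasses import dataclass, field

@dataclass
class Scene:
    start: float                      # seconds from the start of the asset
    end: float
    tags: list = field(default_factory=list)

def classify_scene(scene: Scene) -> list:
    # Placeholder for a real visual/audio classifier (e.g. a cloud video
    # enrichment service). Here scenes are simply tagged by duration.
    return ["long-form"] if scene.end - scene.start > 60 else ["short-form"]

def enrich(scenes):
    """Tag each scene, then build a searchable index and a compliance report."""
    index, flagged = {}, []
    for scene in scenes:
        scene.tags = classify_scene(scene)
        for tag in scene.tags:
            index.setdefault(tag, []).append((scene.start, scene.end))
        if "nudity" in scene.tags:    # example compliance rule for regionalisation
            flagged.append((scene.start, scene.end))
    return index, flagged

index, flagged = enrich([Scene(0, 45), Scene(45, 180)])
print(index)    # {'short-form': [(0, 45)], 'long-form': [(45, 180)]}
```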
Watson is currently restricted to specific target partners but IBM plans to open the API by the end of the year.
According to nScreenMedia analyst Colin Dixon, while IBM bids for market leader status, one of the drawbacks of Watson in the cloud is that once it is integrated into an application, a user must pay each time they use it.
One start-up to watch out for is Seattle-based DimensionalMechanics, formed in 2015 by Rajeev Dutt. (Incidentally, Dutt mentions that in Seattle a typical machine learning engineer earns 18% more than the average software engineer.)
Dutt explained that machine learning has been around since the 1940s, a concept that advanced with the development of neural networks in the 1980s and then deep learning this century.
“Fast forward to today, and the new commodity is not data but what you do with it. One issue is that you need vast amounts of time and data to be able to train a neural network,” he said. “And what do you do with it once you’ve built it? How do you do bug fixes and know it’s doing a good job?”
According to Dutt, his company’s NeoPulse Framework solves these problems and makes deep learning accessible to most businesses. The product, which uses Nvidia GPUs and is available in the AWS cloud, greatly reduces the amount of software code that needs to be written to create deep learning solutions, and it removes the need for specialist skills. Using just a few lines of code, a programmer can produce a portable inference model (PIM) for sophisticated video analysis.
One example is a model that detects sentiment. Dutt claims that using NeoPulse gives 85% accuracy without the user knowing anything about machine learning, thanks to NML, an underlying modelling language Dutt created to automate the process of building AI models.
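To put that claim in context, the sketch below shows roughly the kind of hand-written boilerplate such frameworks aim to remove: a toy text-sentiment classifier built with scikit-learn. None of this is NeoPulse or NML code, and the training snippets and labels are invented for illustration.

```python
# A hedged illustration of the boilerplate an automated framework abstracts away:
# a tiny text-sentiment classifier written by hand with scikit-learn.
# This is not NeoPulse or NML code; the data below is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["great match, loved it", "brilliant rally",
               "terrible stream", "awful buffering"]
train_labels = [1, 1, 0, 0]          # 1 = positive sentiment, 0 = negative

# Vectorise the text and fit a linear classifier in one pipeline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["what a great final"]))   # expected: [1]
```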
NeoPulse use cases include video object recognition, event analysis, programme compliance (eg nudity in the background) and of course speech-to-text. A partnership with GrayMeta also enables content providers to search through vast content archives using deep learning. The company is also targeting the defence and medical sectors.
Machine learning through the chain
Piksel CTO Mark Christie, who chaired a session on this topic at IBC, mentioned how broadcasters like NHK used the Rio Olympics to trial how big data and machine learning can improve live sports production. Brazil’s Globo, he said, has seen a big uptick in viewers watching one video after another as a result of more accurate recommendations.
Piksel, for its part, uses machine learning during ingest to match metadata for the same content across different catalogue windows, creating a single view across all windows for better search and discovery. The company’s Palette software stack, he said, is being opened up to third-party developers to take advantage of its APIs.
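As a rough illustration of that kind of cross-window matching, the Python sketch below pairs records for the same title across two hypothetical catalogue windows using simple fuzzy string matching. It is not Piksel’s Palette code, and the catalogue entries, IDs and threshold are invented.

```python
# Illustrative sketch: fuzzy-match metadata records for the same title across
# two catalogue windows so they can be merged into a single view.
from difflib import SequenceMatcher

svod    = [{"id": "sv-101", "title": "Planet Earth II", "year": 2016}]
catchup = [{"id": "cu-887", "title": "Planet Earth 2",  "year": 2016}]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(window_a, window_b, threshold=0.85):
    """Pair records whose titles are near-identical and whose years agree."""
    return [(a["id"], b["id"])
            for a in window_a for b in window_b
            if a["year"] == b["year"] and similarity(a["title"], b["title"]) >= threshold]

print(match(svod, catchup))   # [('sv-101', 'cu-887')]
```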
In the compression space, encoding vendors believe applying machine learning can result in a step-up in efficiency. Harmonic’s EyeQ compression engine, for example, features an early form of machine learning, which the company argues does not require new codecs.
“We believe in machine learning but, as opposed to some companies, think this can be applied to existing codecs. You don’t need new codecs to apply new techniques like machine learning. We have high confidence this can have an impact on bit rate,” said Harmonic’s director of multiscreen applications, Elie Sader.
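One generic way to picture machine learning on top of an existing codec is content-aware bitrate selection: learn from past encodes how many bits a scene of a given complexity needs to hit a quality target, then apply that prediction per scene. The sketch below is a minimal illustration of that idea only, not Harmonic’s EyeQ, and every number in it is invented.

```python
# Hedged sketch of content-aware encoding with an existing codec: a regression
# model maps content-complexity features to a target bitrate. Not EyeQ;
# the features, bitrates and quality target are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# Offline training data: [spatial_complexity, temporal_complexity] -> kbps
# that met the chosen quality target on previous 1080p encodes (made up here).
X = np.array([[0.2, 0.1], [0.5, 0.4], [0.8, 0.7], [0.9, 0.9]])
y = np.array([1800, 3200, 5200, 6500])

model = LinearRegression().fit(X, y)

def target_bitrate(spatial: float, temporal: float, floor=1000, ceiling=8000) -> int:
    """Predict a per-scene bitrate and clamp it to the encoding ladder's limits."""
    kbps = float(model.predict([[spatial, temporal]])[0])
    return int(min(max(kbps, floor), ceiling))

print(target_bitrate(0.3, 0.2))    # a low-motion scene gets fewer bits
print(target_bitrate(0.85, 0.8))   # a fast sports scene gets more
```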
On the network delivery side, CDN provider Vimmi uses deep learning to identify patterns of consumption. Its system automatically decides where to place video content before demand occurs, achieving a cache hit ratio of 95%, according to co-founder and CEO Eitan Koter.
“Most of the video feeds in our system are being served from the edge without waiting for the first click for that clip. We put it in advance using deep learning,” said Koter. The intelligence is combined in the CDN and in the CMS on the back-end for improved user experience and scalability.
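The sketch below shows the general shape of that kind of predictive edge caching. A simple popularity count stands in for the deep-learning demand model so the example stays self-contained; the clip names, request history and cache size are invented.

```python
# Illustrative only: pre-populate an edge cache from predicted demand, then
# measure the hit ratio. A popularity count stands in for a learned model.
from collections import Counter

def predict_hot_content(history, cache_size):
    """Stand-in demand model: the most-requested clips in recent history."""
    return {clip for clip, _ in Counter(history).most_common(cache_size)}

def hit_ratio(requests, cache):
    hits = sum(1 for clip in requests if clip in cache)
    return hits / len(requests)

yesterday  = ["goal_a", "goal_a", "goal_b", "trailer_x", "goal_a", "goal_b"]
edge_cache = predict_hot_content(yesterday, cache_size=2)   # pushed before any request

print(edge_cache)                         # {'goal_a', 'goal_b'} (order may vary)
print(hit_ratio(["goal_a", "goal_b", "goal_a", "news_y"], edge_cache))  # 0.75
```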
Business intelligence and data resale
Companies like Irdeto and Nagra are also highlighting AI’s use in tackling content piracy, namely the swift and efficient detection and identification of illegally redistributed streams.
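The underlying idea, broadly speaking, is fingerprint matching: derive a compact signature from frames of a suspect stream and compare it with the signature of the legitimate broadcast. The toy average-hash below is only a sketch of that principle and shows neither Irdeto’s nor Nagra’s actual technology; the frames are random arrays standing in for video.

```python
# Toy fingerprint matching: a coarse "average hash" per frame, compared by
# Hamming distance. Purely illustrative; real systems are far more robust.
import numpy as np

def fingerprint(frame, grid=8):
    """Pool the frame into grid x grid blocks and threshold against the mean."""
    h, w = frame.shape
    cropped = frame[: h - h % grid, : w - w % grid]
    blocks = cropped.reshape(grid, cropped.shape[0] // grid,
                             grid, cropped.shape[1] // grid)
    pooled = blocks.mean(axis=(1, 3))
    return (pooled > pooled.mean()).astype(np.uint8).ravel()

def is_match(fp_a, fp_b, max_hamming=8):
    return int(np.count_nonzero(fp_a != fp_b)) <= max_hamming

rng = np.random.default_rng(0)
original  = rng.random((64, 64))                                          # stand-in frame
pirated   = np.clip(original + rng.normal(0, 0.02, original.shape), 0, 1) # noisy re-encode
unrelated = rng.random((64, 64))

print(is_match(fingerprint(original), fingerprint(pirated)))     # True (very likely)
print(is_match(fingerprint(original), fingerprint(unrelated)))   # False (almost certainly)
```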
Nagra has created a SaaS platform and a new division that combine advanced analytics, machine learning capabilities and big data algorithms, with the aim of using that technology across product groups such as anti-piracy. It also aims to work more directly with service providers to address business challenges and reduce the cost of content and of TV operations in general, according to Nagra’s Simon Trudelle. Some customers have millions of unconnected boxes and need to use data to get smarter, he pointed out.
“Customers know they are competing with data natives like Netflix and Amazon, and TV needs to get the same level of understanding into consumer behaviour, how to use data with other sources of data, data synergies, and the resale of data. It’s about selling the intelligence you have. What you capture for video may be interesting to other industries,” said Trudelle.
He gave Singapore’s StarHub as an example of an operator tapping into these opportunities. StarHub has created a data platform in its new headend that captures TV usage data, merges it with other sources of network data and then provides the results to start-ups and third-party companies in the advertising space, a case of turning that data into new revenue-generating capabilities.
As things mature, expect AI to make its mark at shows like IBC next year once companies crystallise their thinking and propositions. It is surely set to bring about some dramatic changes in the longer term. Gartner has identified AI as one of the three key technology mega-trends of the next decade. “Artificial intelligence technologies will be the most disruptive class of technologies over the next 10 years,” it said.
Source: CSI Magazine