How does artificial intelligence streamline the production process?
There’s a mismatch between the cost of production and the revenue associated with any individual program. The only solution to that is to improve production efficiency. All the work we are doing is aimed at making the production process much more efficient, and market demand for a more efficient production process is accelerating.
MediaMind is a platform that integrates with different production systems and storage devices. The end result is that during the production process, people no longer have to go digging for content or call each other to ask where it is. The content relevant to their production needs automatically appears.
MediaMind uses artificial intelligence to generate metadata. The whole goal is to make the production process more efficient.
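To make the idea of metadata-driven discovery concrete, here is a minimal sketch of how AI-generated tags can let clips surface automatically instead of being skimmed manually. It assumes the tags have already been produced by recognition models; the clip ids, tags, and field names are hypothetical, not MediaMind's actual schema.

```python
from collections import defaultdict

# Hypothetical AI-generated metadata for two clips; in practice these tags
# would come from speech, face, and scene recognition models.
clips = [
    {"id": "clip_001", "tags": ["city hall", "mayor", "press conference"]},
    {"id": "clip_002", "tags": ["stadium", "celebration", "mayor"]},
]

# Build an inverted index (tag -> clip ids). This is what lets relevant
# content "appear" for an editor instead of being searched for by hand.
index = defaultdict(set)
for clip in clips:
    for tag in clip["tags"]:
        index[tag].add(clip["id"])

def search(tag: str) -> set[str]:
    """Return the ids of clips whose AI-generated tags match the query."""
    return index.get(tag.lower(), set())

print(search("mayor"))  # both clips match
```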
[Broadcasters] need to create content targeted to the audience on digital platforms. The engineers we talked to say their organizations want them to do more on digital, but they don’t have the budget for the people they need to do a full production. That’s where MediaMind can really help.
An average television news clip is about two minutes, but a social media clip is about 10 seconds. If you’ve got two minutes of content on social media, no one will watch it. It’s a different viewing behavior.
To recreate this content for different digital platforms, they need a faster, lower-cost production platform. Before, they would skim through the content, identify what was needed and see what fit the video they wanted to post. The whole process can take a half hour, an hour. MediaMind can reduce production time from an hour per clip to about five minutes, roughly one-tenth the original cost of production.
Why are so many news organizations interested in sharing content, and what have been some of the hurdles in the past to doing so efficiently?
The expensive part of news programs is getting the raw footage. That’s the reason a lot of news organizations began to share content. The story is not determined by the footage, but by the editorial. The footage supports the story they want to write.
In the past, they may have had the footage, but not many people knew about it. The only way to discover it was to skim through the whole thing. Technology now makes cost-effective content sharing possible.
A number of news agencies are using TVU Grid. It distributes content to affiliates around the country. What makes this different is the connectivity. Our system can combine with artificial intelligence and cloud-based storage. In the past, there may have been 1,300 stations recording something, just in case they wanted to use it, but now, there’s just a single recording in the cloud and everyone can access it. It’s much more efficient for the user to find the content they want.
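To illustrate the "single recording in the cloud" point, here is a minimal sketch assuming the shared recording lives in an S3-style object store; the bucket name, object key, and workflow are hypothetical and are not a description of TVU Grid's actual implementation.

```python
import boto3

# Hypothetical bucket and key for the single shared recording; every station
# references the same object instead of keeping 1,300 local copies.
BUCKET = "shared-news-footage"       # hypothetical bucket name
KEY = "2024/press-conference.mxf"    # hypothetical object key

s3 = boto3.client("s3")

# Each affiliate gets a short-lived URL to the one canonical recording,
# rather than recording and storing its own copy.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": BUCKET, "Key": KEY},
    ExpiresIn=3600,  # one hour
)
print(url)
```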
Moving operations to the cloud results in quite a lot of flexibility. What will it take for more organizations to take their operations to the cloud?
It just takes time. A lot of media organizations have existing infrastructure. They’re not going to shift over to the cloud just for the sake of the cloud. But for the content, they are open to moving new production processes and new workflows into the cloud.
Traditionally, if you have a sports event to cover, you send all the people to the event, and it’s quite expensive. Cameramen, executive producers, video production people, post-production people, engineers: everyone has to be on site. What more and more people realize is that they need to bring the camera feed into the existing production facility. The benefit is people don’t have to go out.
Customers are using the TVU RPS [remote production system] not just to bring the signal into the studio, but into the cloud, and they are using TVU Producer, which is a 100% cloud-based producer.
All you need is a computer. Production people can be at any location. You can have a whole production team spread across the globe. As long as the talent has the capability, they can do the production for you. It’s a fundamental change in how content is produced, and it reduces costs significantly. AI is driving that.
What are some of the ways AI and deep learning can be used in news organizations, and how are these technologies growing?
AI automates functions like transcription to make them more cost-effective. Humans are very costly. AI can be used to make content discoverable through metadata and indexing.
More capabilities are coming, such as quality control. In the past, quality control was done by humans; monitoring output for FCC rules could only be done manually. AI is a very effective way to deal with all of that. The applications are almost unlimited. A lot of internet reports are already written by robots. With video, we believe that will happen, too, in the long term.
We have continued development on the AI side. We are going beyond facial and speech recognition. Our system automatically detects whether it’s a protest or a celebration or a conference. It can give more context beyond just who is talking.
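As an illustration of that kind of scene-level context, here is a minimal sketch that runs an off-the-shelf image classifier from Hugging Face on a single extracted frame. The model, labels, and file path are stand-ins and assumptions; TVU's own scene and event models are not public.

```python
from transformers import pipeline

# Generic pretrained image classifier as a stand-in for a purpose-built
# scene/event model (protest vs. celebration vs. conference).
classifier = pipeline("image-classification")

# Classify a single frame extracted from the clip (path is hypothetical).
predictions = classifier("frame_0001.jpg")
for p in predictions:
    print(f"{p['label']}: {p['score']:.2f}")
```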
With audio and speech recognition, we are able to reach 96% accuracy for automatic closed captioning with TVU Transcriber. More and more context flows through this platform, which relies on deep learning. We are deploying transcription to our customers, helping them comply with FCC rules. With this approach, they can be assured that all of their output signal is closed captioned.
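For the transcription side, here is a minimal sketch using the open-source Whisper model as a stand-in; TVU Transcriber is proprietary, and the clip file name is hypothetical. The timestamped segments are what a closed-captioning workflow needs to align text with video.

```python
import whisper

# Open-source speech-recognition model as a stand-in for a captioning service.
model = whisper.load_model("base")

# Transcribe a news clip; each segment carries start/end timestamps that a
# captioning workflow can use to align text with the video.
result = model.transcribe("news_clip.mp4")
for segment in result["segments"]:
    print(f"[{segment['start']:.1f}s - {segment['end']:.1f}s] {segment['text']}")
```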
When it comes to interoperability, what are the major challenges?
There are a lot of existing legacy systems built 20 and 30 years ago, with inflexible infrastructure behind them. They can only take certain file formats and cannot accept newer ones. Even when the video is compressed efficiently at a high bit rate, the extra rounds of encoding and decoding degrade the picture quality. That inflexibility increases operating costs and also degrades efficiency and picture quality. These old systems are causing some problems.
Some media organizations recognize the problem. Some are saying: Don’t worry about those systems, we are ready to move forward. They recognize that more advanced systems benefit their operations. For example, a media organization may want its users to navigate transparently between on-premises and off-premises storage. That gives it more flexibility to store content where it wants, and gives users transparent access to that content regardless of where it is stored.
Sometimes legacy systems are difficult to integrate. The industry is beginning to realize that. New systems are easier to integrate. Everything in MediaMind is 100% open API. We have integrated with a lot of partners. We have integrated with Sony and Panasonic on the camera side. We are also working with a number of production systems, like Adobe. Everything is focused on being 100% open.
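The MediaMind API itself is not documented here, so the endpoint, parameters, response shape, and token below are purely hypothetical; the sketch only shows what an open, JSON-over-HTTP integration of this kind typically looks like, with asset search by AI-generated tag.

```python
import requests

# Hypothetical endpoint and credentials; not MediaMind's actual API.
BASE_URL = "https://api.example.com/v1"
TOKEN = "YOUR_API_TOKEN"

# A typical open-API integration: search assets by an AI-generated tag and
# hand the matching clip references to an editing system.
resp = requests.get(
    f"{BASE_URL}/assets",
    params={"tag": "press conference"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
for asset in resp.json().get("assets", []):  # hypothetical response shape
    print(asset["id"], asset.get("title"))
```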