A revolution in media production will help unearth the hidden value of media assets
Technology that automatically identifies the elements contained in every frame of video (speech, text and objects) will make it possible to build a media database rich with metadata
Imagine farmers equipped with modern tractors, planters and combines planting and harvesting a bumper corn crop, but, by some inexplicable fluke, there are no modern food processing plants to remove the husks, inspect the ears, strip the kernels for freezing or package the ears for distribution.
Or picture a scenario in which modern oil drilling equipment, rigs and pumps unlock massive reserves and pipelines send the find on its way, but the destination is akin in size and technology to America's first oil refinery, an 1850s-era plant in Pittsburgh.
Both hypotheticals, while patently absurd, are apt metaphors for today's media supply chain.
In the farming scenario, how much corn would be left to rot in the fields? In the oil scenario, how much crude would go unrefined? While such an imbalance between acquiring corn or crude oil and processing it is unimaginable, isn't it fair to say that a similar imbalance is the current state of affairs in the media supply chain?
Look at how much video footage is left unused on the proverbial “cutting room floor,” never to be a part of a news story, entertainment program or sports show. How many people were assigned to gather that footage? What did it cost?
These factors are key in determining the cost of a minute of finished video. A widely recognized rule of thumb pegs today’s cost at $1,000 per minute for high-quality video. (However, it’s not uncommon to see a range between $700 and $10,000 per minute quoted, depending on many variables.)
But what if two minutes of finished video were derived from the same raw footage? Three minutes? What would be the impact if every minute of footage acquired were usable? Could the industry conceivably get to a cost of $10 per minute of finished video?
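To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The $1,000-per-minute rule of thumb comes from above; the split between acquisition cost and per-minute post-production cost is an illustrative assumption, not industry data.

```python
# Back-of-the-envelope: if a large share of the cost of a finished minute is
# really the cost of acquiring raw footage, then extracting more finished
# minutes from the same footage amortizes that cost and drives the per-minute
# figure down. The 80/20 split below is assumed for illustration only.

RULE_OF_THUMB_COST = 1_000          # dollars per finished minute today (cited above)
ACQUISITION_SHARE = 0.8             # assumed share of cost tied to shooting raw footage
POST_SHARE = 1 - ACQUISITION_SHARE  # assumed share tied to per-minute post-production

def cost_per_finished_minute(finished_minutes_per_shoot: float) -> float:
    """Amortize the (assumed fixed) acquisition cost over more finished minutes."""
    acquisition_cost = RULE_OF_THUMB_COST * ACQUISITION_SHARE   # spent once per shoot
    post_cost = RULE_OF_THUMB_COST * POST_SHARE                 # spent per finished minute
    return acquisition_cost / finished_minutes_per_shoot + post_cost

for minutes in (1, 2, 3, 10, 80):
    print(f"{minutes:>3} finished minutes -> ${cost_per_finished_minute(minutes):,.0f}/minute")
```

Under these assumed numbers, doubling or tripling the finished minutes drawn from the same footage cuts the cost sharply, but getting anywhere near $10 per minute would also require automating away most of the per-minute post-production work, which is precisely where new media processing technologies come in.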
Perhaps, but not without a new set of media processing technologies that will allow the Media & Entertainment industry to extract value from media assets that go unused and ultimately are deleted.
To date, raw media has been a passive ingredient in the finished product. In other words, it does not participate in the creative process: someone must look at it and determine what content the footage contains.
Metadata, while helpful, is typically captured at a coarser level than would be necessary to transform media from a passive element into an active agent in the creative process. Metadata frequently catalogs GPS location, time of shooting and camera settings: all important pieces of data, but of limited use in a next-generation process where raw media informs and drives the creative process.
However, if technology were introduced into the process that automatically identifies the characteristics of each individual frame of video (who is speaking, whose voice is heard, what they are saying, and what objects and people appear in the foreground and background of each frame), it would become possible to build a centralized database of rich metadata for all media, one that is continuously updated as new tools come along.
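As a sketch of what that frame-level metadata could look like, consider the toy Python structures below. The FrameMetadata fields and the MediaIndex class are hypothetical, chosen only to illustrate the idea of a per-frame record feeding a searchable, centralized store; they do not describe any specific product or schema.

```python
# A minimal, assumed model of frame-level "rich metadata" once speech,
# on-screen text and objects are identified automatically for every frame.

from dataclasses import dataclass, field

@dataclass
class FrameMetadata:
    asset_id: str                 # identifier of the raw footage file
    frame_number: int             # position of the frame within the asset
    timestamp_s: float            # offset from the start of the asset, in seconds
    speaker: str | None = None    # who is speaking (from voice identification)
    transcript: str = ""          # what is being said (from speech-to-text)
    on_screen_text: list[str] = field(default_factory=list)      # captions, signs, graphics
    foreground_objects: list[str] = field(default_factory=list)  # detected foreground items
    background_objects: list[str] = field(default_factory=list)  # detected background items

class MediaIndex:
    """A toy in-memory stand-in for the centralized metadata database."""

    def __init__(self) -> None:
        self.frames: list[FrameMetadata] = []

    def add(self, frame: FrameMetadata) -> None:
        self.frames.append(frame)

    def find_speaker(self, name: str) -> list[FrameMetadata]:
        # e.g. pull every frame in the archive where a given person is speaking
        return [f for f in self.frames if f.speaker == name]
```

With every frame indexed this way, a query such as index.find_speaker("Jane Doe") could retrieve all archival footage of a given speaker in seconds, which is the kind of retrieval that turns raw media into an active participant in the creative process.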
Such a database is the first step on the path to the advanced media processing tools needed to automatically create content and thereby unearth value from raw footage that would otherwise be inaccessible to media organizations.
When it becomes possible to identify what is happening in each frame of video, these same organizations can also realize a wide variety of other benefits, from monetizing archival footage to personalizing program content to match the interests of viewers: two of the many advantages of a transformed media processing environment, which we will explore in detail in our next blog.