Updated March 2020
Avid has a long heritage in the broadcast sector, with a product portfolio that spans asset management (Interplay PAM and MAM), audio, graphics, live sound, media suites, newsrooms, notation & scoring, video and audio mixing, AQC, storage, NLEs, and video servers. Among this wide range of products is its MediaCentral platform, a modern and highly capable media asset management platform.
Primarily targeted at the production and post-production stages of the media supply chain, MediaCentral has functionality that can be applied at other stages too, including import & QC, promos & marketing, archive, distribution & syndication, and delivery to linear TV, OTT, VOD and social media platforms.
Available either as a service in an Avid-managed cloud environment or for deployment on-premises, MediaCentral pricing is based primarily on user count, with resource integrations, software modules, workflows and specific use cases taken into account. Licensing is typically on an opex/subscription model, although a capex/perpetual licensing model is possible.
MediaCentral is installed at between 20 and 30 sites in each of North America and Western Europe, and has a strong install base worldwide, most notably in Eastern Europe, the Middle East, Asia Pacific and Japan.
Avid MediaCentral has a modern architecture: service-oriented, microservices-based and containerized (Docker). It can be hosted by Avid in a managed cloud environment as a single- or multi-tenanted service, or deployed on customer premises with optional remote environment management and multi-site support. It is not offered for deployment in a customer's own cloud environment. The platform is not aware of machine resources, human resources or service location, which may prevent optimal use of resources in multi-instance deployments. The MediaCentral database is indexed using Elasticsearch, but unlike some other platforms, Avid relies on a relational database alone rather than combining the capabilities of multiple database types (non-relational, graph). Arguably this may make it less performant for some types of search. Avid reports that it uses a modern DevOps technology stack for both building and managing its software, but it does not appear to be adopting the latest approaches to monitoring or testing of network and infrastructure, error and performance, or service monitoring.
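To illustrate the performance point: graph-style queries such as tracing full asset genealogy over purely relational storage require one join or lookup per generation, whereas a graph database answers them in a single traversal. A minimal Python sketch, using a hypothetical simplified "derived_from" table (the names are ours, not Avid's schema):

```python
# Illustrative only: a flattened (child_id, parent_id) table, as a
# relational system might store asset derivation history.
DERIVED_FROM = [
    ("promo_v2", "promo_v1"),
    ("promo_v1", "master"),
    ("subclip_a", "master"),
]

def ancestors(asset_id, rows):
    """Walk the genealogy chain; each hop corresponds to another
    self-join (or query round trip) in a purely relational store."""
    lookup = {child: parent for child, parent in rows}
    chain = []
    while asset_id in lookup:
        asset_id = lookup[asset_id]
        chain.append(asset_id)
    return chain

print(ancestors("promo_v2", DERIVED_FROM))  # ['promo_v1', 'master']
```

The cost of the loop grows with genealogy depth, which is the kind of workload where a graph database can outperform a relational index.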
Avid MediaCentral has a flexible integration framework with connectors for a wide portfolio of third-party products and systems. Avid also supports an SDK for connector development and the inclusion of custom integration scripts in workflows. The platform supports all standard AV containers and formats, MXF and IMF. It has a RESTful API and supports the MOS protocol and BXF. For on-premise resources, MediaCentral has a very strong portfolio of proven integrations, both to modern IP-based resources and to traditional SDI legacy infrastructure. In particular, Avid emphasises integration with deep archive storage (disk-based) and proven connectors for deep archive providers such as BlackPearl, Oracle DIVArchive, Quantum StorNext and SGL/Masstech. Other key integrations cited include Baton for AQC and Minnetonka for audio processing; visual QC is available via Media Composer. For cloud services, the portfolio includes proven connectors for cloud-based AQC, the Microsoft Video Indexer AI engine, cloud storage, social media platforms and cloud linear playout, but MediaCentral does not have proven connectors for cloud-based transcoders or other third-party cloud-based solutions. With regard to integration with other intelligent systems, MediaCentral has proven integrations with rights management systems, scheduling and traffic systems, hierarchical storage systems, third-party media asset management systems, VOD platforms and newsroom computer systems. It does not have proven integrations with CRM, eCommerce or facility management systems.
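To show the kind of RESTful integration this enables, here is a minimal sketch of building an asset search request; the endpoint path and parameter names are hypothetical illustrations, not Avid's published API:

```python
from urllib.parse import urlencode

def build_asset_search_request(base_url, text, media_type=None, limit=25):
    """Construct a GET URL for a hypothetical /api/assets/search
    endpoint; a real connector would send this with an auth token."""
    params = {"q": text, "limit": limit}
    if media_type:
        params["type"] = media_type
    return f"{base_url}/api/assets/search?{urlencode(params)}"

url = build_asset_search_request("https://mam.example.com", "election night", "video")
print(url)
```

A connector SDK typically wraps exactly this kind of request construction, plus authentication and response parsing, behind a stable interface.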
Avid MediaCentral offers a modern and capable automation capability, including a graphical workflow design tool that supports custom scripted nodes, single- and multi-path workflows, and the launching of workflows from within workflows. Workflows can include human tasks with decision-making and can respond to metadata in real time. Monitoring and reporting of automated activity are supported, including monitoring of automated action queues and of human tasks taken, presented as text, components and workflow graphics in the MediaCentral UI. Notifications can be sent to appropriate users, groups and external systems. Human tasks can be automatically prioritised and modified in real time. MediaCentral can automate the spin-up of additional machine resources in the cloud and automatically synchronise with a disaster recovery system. Unfortunately, with no awareness of location, cost, or action time and duration, MediaCentral is not able to use its automation capabilities to dynamically optimise the efficiency of a media supply chain. The lack of resource scheduling capability is also apparent: there is no awareness of machine, human or facility/equipment resource availability, no ability to set timed actions or tasks, no ability to schedule these against internal or external calendars, and no support for real-time resource reallocation.
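The pattern of graphical workflows with scripted nodes and human decision points can be sketched as a node graph in which each node names its successor; this is a generic illustration of the technique, not Avid's workflow model:

```python
# Generic workflow sketch: each node returns the name of the next node,
# so a "human task" node can branch the path at run time.
def transcode(asset):
    asset["proxy"] = True
    return "review"          # single path into the human task

def review(asset):
    # Stand-in for a human decision; a real system would park the job
    # in a task pool and notify the assigned user group.
    return "publish" if asset.get("approved") else "rework"

def publish(asset):
    asset["published"] = True
    return None              # terminal node

def rework(asset):
    asset["published"] = False
    return None              # terminal node

NODES = {"transcode": transcode, "review": review,
         "publish": publish, "rework": rework}

def run(start, asset):
    node = start
    while node is not None:
        node = NODES[node](asset)
    return asset

print(run("transcode", {"approved": True}))
```

Launching a workflow from within a workflow is then just a node whose body calls `run` on another start node.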
MediaCentral has a modern metadata management capability, including a metadata design tool with drag-and-drop functionality. Support is provided for media and business object metadata definitions, temporal metadata definitions (applied to a single point or to in/out durations), and spatial metadata definitions (on image and video). Object hierarchies, asset genealogy and asset collections functionality are provided. Spatial metadata can be reviewed and annotated using the OverCast HQ app within MediaCentral. Metadata access control and metadata entry control both appear to be strong, including field-level access control by user role or group, customisable metadata entry validation, keywords and tagging, taxonomies, and a thesaurus. Metadata change management seems less well supported: no history is maintained for asset, object or file metadata, nor for machine resource (configuration management), rights or communications metadata. There is no support for version control and notifications of version changes, nor any asset-contract linking. MediaCentral supports logging of MAM assets: segmenting strata and adding information to them, including text and other properties that can be referenced during story creation and media editing. MediaCentral Asset Management provides three types of strata that can be displayed and edited in MediaCentral: simple strata, which have a single property of any data type (such as "text", "timecode", "date" or "legal list"); structured strata, which have multiple properties; and strata groups, which combine several strata with segmentation that is always synchronised across and within the strata.
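The three strata types can be modelled as a simple data structure; the field names below are our illustration, not Avid's internal schema:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    start_tc: str                 # in-point timecode
    end_tc: str                   # out-point timecode
    values: dict = field(default_factory=dict)

@dataclass
class Stratum:
    name: str
    properties: list              # one property => simple, several => structured
    segments: list = field(default_factory=list)

    @property
    def kind(self):
        return "simple" if len(self.properties) == 1 else "structured"

@dataclass
class StrataGroup:
    """Several strata sharing one synchronised segmentation."""
    name: str
    members: list

keywords = Stratum("keywords", ["text"])
rights = Stratum("rights", ["territory", "window", "legal_list"])
keywords.segments.append(Segment("00:00:10:00", "00:00:25:00", {"text": "goal"}))

print(keywords.kind, rights.kind)  # simple structured
```

In a strata group, editing a segment boundary in one member stratum would propagate to all members, which is what keeps the segmentation synchronised.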
MediaCentral user interfaces adopt a modern style, with consistent themes, layouts, components and taxonomies, and modern typography. They can seem a little cluttered, but there is some ability to remove functionality by role. Scrubbing of keyframes within the browse UI and the actions pull-down are particularly user-friendly. There is no support for customisable widget-based components. For collaborative working, MediaCentral supports automated task allocation and group task pools. Timeline comments can be added and reviewed by others. Android and iOS apps are available, but a built-in chat capability is not provided. A strong asset search capability includes search auto-complete when using taxonomies and the ability to customise table headers and filters. MediaCentral's integration with the Media Composer and Adobe Premiere Pro NLEs provides the same consistent and powerful search interface and capabilities. There is no support for filtering by business object attributes, though, and no AI-driven recommendations. The MediaCentral UI can display simple to complex strata (see the metadata section above) marked on the player timeline for an asset, with thumbnails and the associated metadata displayed below. MediaCentral offers a more capable built-in non-linear editing experience than the cut-to-cut editing of most platforms we have reviewed, and integrates with the above-mentioned NLEs for craft editing. A dual-player UI is available for comparing versions. Voice-over recording and editing, audio track selection and swapping, and caption generation, entry and viewing are supported, but no built-in image tools are provided, such as object-aware cropping or background removal.
From the information provided by Avid, analytics appears to be the weakest area of MediaCentral's capability. While machine resource usage by time, capacity and location can all be reported on, there appears to be very limited capability to present performance metrics for machine resources (no metrics on jobs failed, completed or in progress) or to indicate bottlenecks (queue times for human or machine resources). While the time for human resources to take tasks is measured, task durations by role are not. MediaCentral supports no financial analytics: no cost awareness or tracking, no cost versus budget, no forecast cost to complete. It does integrate with third-party AQC to provide some quality metrics. There appears to be no reporting on asset usage.
MediaCentral has proven integration with the Microsoft Azure Video Indexer service for AI/ML-powered detection and identification of people (right down to their age), speech, sounds, emotions, colours, OCR, objects and dialogue. MediaCentral uses Nexidia for its phonetic search. No other AI/ML integration is supported, either directly or via AI aggregators. For automated technical compliance checking, MediaCentral utilises Interra Baton. Automatic verification of captioning, video description and languages is achieved using Avid Illuminate. Automatic analysis of audio media to create a searchable index, allowing searches using text-based phonemes, is achieved using the MediaCentral phonetic index. MediaCentral does not offer AI-driven colour detection, brand accuracy checking or colour correction. It does not support AI-driven placement of assets in linear schedules, VOD publishing grouping or intelligent ad placement. While it can automatically detect logical scene boundaries, it does not support the AI-driven automated storytelling clip compilation or highlights package creation that some other platforms offer. MediaCentral has not implemented an AI/ML-driven operations management capability: no AI/ML-driven resource or application optimisation or automated application spin-up based on demand prediction, no AI/ML-driven human resourcing insights or recommendations, and no AI/ML-driven fault prediction.