There are many ways to harness and activate the data that Uru generates.

Brand Safety API

Ensure that brands are paired with brand-safe, brand-aligned content.

Uru’s Brand Safety API analyzes an input video, both its audio and the images it contains, and returns a JSON response containing an overall brand safety score for that video along with granular data about how that score was calculated. Use this API to screen or rank content before it enters the advertising pipeline, or to retrospectively monitor the brand safety of video campaigns.
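As a rough sketch of how a client might use the returned score to screen content, the snippet below parses a sample response and applies a safety threshold. The JSON field names here are illustrative assumptions, not Uru's documented schema.

```python
import json

# Hypothetical example of a Brand Safety API response. The field names
# ("brand_safety_score", "signals", etc.) are assumptions for illustration.
SAMPLE_RESPONSE = """
{
  "video_id": "abc123",
  "brand_safety_score": 0.87,
  "signals": {
    "audio": {"profanity": 0.02, "violence": 0.01},
    "visual": {"violence": 0.05, "adult_content": 0.00}
  }
}
"""

def passes_screen(response_json: str, threshold: float = 0.8) -> bool:
    """Decide whether a video may enter the ad pipeline, given its safety score."""
    data = json.loads(response_json)
    return data["brand_safety_score"] >= threshold

print(passes_screen(SAMPLE_RESPONSE))        # True at the default threshold
print(passes_screen(SAMPLE_RESPONSE, 0.9))   # False under a stricter threshold
```

The granular `signals` breakdown could equally drive category-specific rules (for example, a stricter threshold on violence than on profanity).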

Content Recognition API

Get structured data describing a video’s content category as well as the objects and themes inside it.

Uru examines the audio, images, and text inside an input video to predict its overall IAB Content Taxonomy categories and to generate metadata and keywords describing the objects and themes inside the video and each of its scenes. Our predictive models are trained to sift for the high-impact themes and objects that make video content more searchable, power SEO or recommendation systems, or enable advertisers to target video inventory based on what is actually inside it. No more parsing a long list of tags that are irrelevant to you and your customers or audience.
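To show how per-scene keywords might feed a search or recommendation index, here is a minimal sketch that flattens scene-level keywords into a deduplicated list. The response shape is an assumption for illustration, not the API's documented schema.

```python
# Assumed shape of a Content Recognition API response: overall IAB
# categories plus per-scene keywords. Field names are illustrative.
SAMPLE = {
    "iab_categories": ["Sports", "Automotive"],
    "scenes": [
        {"start": 0.0, "end": 12.5, "keywords": ["race car", "pit stop"]},
        {"start": 12.5, "end": 30.0, "keywords": ["podium", "race car"]},
    ],
}

def video_keywords(resp: dict) -> list[str]:
    """Deduplicated keywords for search/SEO, preserving first-seen order."""
    seen = []
    for scene in resp["scenes"]:
        for kw in scene["keywords"]:
            if kw not in seen:
                seen.append(kw)
    return seen

print(video_keywords(SAMPLE))  # ['race car', 'pit stop', 'podium']
```

The scene timestamps also make it possible to target ads at the moment a relevant object actually appears, rather than at the video level.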

Brand Integration API & Application

Find and fill branding opportunities inside visual content.

Our Brand Integration API turbocharges the process of putting brands inside videos. It ingests a video or photo and returns a list of all the times and 3D surfaces inside it where brand graphics can be harmoniously inserted or immersed. Select or customize any of those times or surfaces (or simply tell Uru to pick the best ones), tell us which brand graphics to place inside them, and we’ll return a version of the video or photo with the graphics seamlessly integrated.
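The selection step above might look something like the sketch below: rank the returned surfaces and choose the best candidates to send back with the brand graphics. The surface records and their field names are hypothetical, for illustration only.

```python
# Hypothetical list of candidate surfaces as the Brand Integration API
# might return it; "fit_score" and other fields are assumed names.
surfaces = [
    {"surface_id": "s1", "start": 3.2,  "end": 8.0,  "fit_score": 0.91, "type": "wall"},
    {"surface_id": "s2", "start": 14.0, "end": 20.5, "fit_score": 0.78, "type": "tabletop"},
    {"surface_id": "s3", "start": 25.1, "end": 31.0, "fit_score": 0.95, "type": "billboard"},
]

def pick_best(surfaces: list[dict], n: int = 2) -> list[str]:
    """Choose the n surfaces where a brand graphic fits most harmoniously."""
    ranked = sorted(surfaces, key=lambda s: s["fit_score"], reverse=True)
    return [s["surface_id"] for s in ranked[:n]]

print(pick_best(surfaces))  # ['s3', 's1']
```

A client could equally filter by surface type (say, billboards only) before ranking, or skip this step entirely and let Uru pick.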

Storybreak API

Find the most seamless, unobtrusive ad breaks.

Our API instantly identifies the breaks inside the story that a video tells, and therefore where you can insert mid-roll ads or ad pods in a seamless, less disruptive way. This process is powered by AI that studies all of the sights, sounds, and text inside the video. If you feed us data about how viewers interact with the Storybreaks we have found, our AI will learn from it and adjust the Storybreaks inside that video (and future ones) accordingly, optimizing engagement based on viewer location, time, device, and profile.
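To make the idea concrete, here is a minimal sketch of turning a list of candidate Storybreaks into a mid-roll schedule: keep only confident breaks, spaced a minimum distance apart. The break records and their fields are assumptions for illustration, not the API's documented output.

```python
# Assumed shape of Storybreak API output: candidate break points in
# seconds, each with a model confidence that the story pauses there.
breaks = [
    {"time": 45.0,  "confidence": 0.92},
    {"time": 51.0,  "confidence": 0.61},
    {"time": 180.5, "confidence": 0.88},
    {"time": 320.0, "confidence": 0.95},
]

def schedule_ad_pods(breaks: list[dict], min_gap: float = 60.0,
                     min_conf: float = 0.8) -> list[float]:
    """Pick mid-roll insertion points: confident breaks at least min_gap apart."""
    chosen = []
    for b in sorted(breaks, key=lambda b: b["time"]):
        if b["confidence"] >= min_conf and (not chosen or b["time"] - chosen[-1] >= min_gap):
            chosen.append(b["time"])
    return chosen

print(schedule_ad_pods(breaks))  # [45.0, 180.5, 320.0]
```

The engagement feedback loop described above would then adjust the confidences (or the break points themselves) over time, so repeated calls return an improving schedule.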