Uru’s Brand Safety API analyzes an input video, both its audio and the images it contains, and returns a JSON response containing an overall brand safety score for that video along with granular data about how the score was calculated. Use this API to screen or rank content before it enters the advertising pipeline, or to retrospectively monitor the brand safety of your video campaigns.
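As a rough sketch, a response like the one described above might be consumed as follows. The field names, score range, and scene structure here are illustrative assumptions, not Uru's documented schema:

```python
import json

# Hypothetical Brand Safety API response: an overall score plus
# per-scene granular data (all field names are assumptions).
sample_response = json.loads("""
{
  "overall_score": 0.82,
  "scenes": [
    {"start": 0.0,  "end": 14.5, "score": 0.95, "flags": []},
    {"start": 14.5, "end": 31.0, "score": 0.41, "flags": ["alcohol"]},
    {"start": 31.0, "end": 60.0, "score": 0.88, "flags": []}
  ]
}
""")

def unsafe_scenes(response, threshold=0.5):
    """Return the scenes whose granular score falls below the threshold."""
    return [s for s in response["scenes"] if s["score"] < threshold]

flagged = unsafe_scenes(sample_response)
print(flagged)  # the 14.5s-31.0s scene, flagged for "alcohol"
```

A pre-flight screen like this could gate content out of the ad pipeline, while the same per-scene data supports retrospective campaign monitoring.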
Uru examines the audio, the images, and any text inside an input video to predict its overall IAB Content Taxonomy category, as well as the objects and themes inside the video and each of its scenes. Importantly, when identifying those objects and themes, our predictive models are trained to focus on the high-impact ones that help you make video content searchable, power SEO or recommendation systems, or better pair that content with brands through ads and e-commerce. No more parsing a long list of tags that are useless to you, your customers, and your community.
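To make this concrete, here is a hedged sketch of what filtering such a response down to search-ready tags might look like. The field names ("iab_category", "tags", "confidence") are assumptions for illustration only:

```python
# Hypothetical Content Recognition response: an IAB category plus
# high-impact object/theme tags with confidence scores.
sample = {
    "iab_category": "Sports",
    "tags": [
        {"label": "basketball", "confidence": 0.97},
        {"label": "sneakers",   "confidence": 0.91},
        {"label": "arena",      "confidence": 0.64},
    ],
}

def search_keywords(response, min_confidence=0.9):
    """Keep only high-confidence tags for search, SEO, or recommendations."""
    return [t["label"] for t in response["tags"]
            if t["confidence"] >= min_confidence]

print(search_keywords(sample))  # ['basketball', 'sneakers']
```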
Our Brand Integration API turbocharges the process of putting brands inside videos. It ingests a video or photo and returns a list of all the times and 3D surfaces inside it where brand graphics can be harmoniously inserted or immersed. Select or customize any of those times or surfaces (or simply tell Uru to pick the best ones), tell us which brand graphics to put inside them, and we’ll return a version of the video or photo with the graphics seamlessly integrated into it.
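The two-step flow described above (receive candidate surfaces, then submit a choice plus graphics) might be assembled roughly like this. Every identifier and field name here is a hypothetical stand-in, not Uru's actual request schema:

```python
# Step 1 (assumed shape): candidate times and 3D surfaces returned
# after the API ingests a video or photo.
candidate_surfaces = [
    {"id": "s1", "start": 3.0,  "end": 8.0,  "type": "wall"},
    {"id": "s2", "start": 12.0, "end": 20.0, "type": "tabletop"},
]

def build_integration_request(surfaces, graphic_url, auto_pick=False):
    """Step 2 (assumed shape): choose surfaces and the graphic to insert,
    or let the service pick the best placements itself."""
    return {
        "graphic": graphic_url,
        "auto_pick": auto_pick,
        "surface_ids": [] if auto_pick else [s["id"] for s in surfaces],
    }

payload = build_integration_request(candidate_surfaces,
                                    "https://example.com/logo.png")
print(payload["surface_ids"])  # ['s1', 's2']
```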
Our Product Listings API and plug-in tap into our Content Recognition API, using that data to power product listings that respond to what is happening on-screen. Turn your static product listings into a dynamic story that engages and delights viewers.
Our API instantly identifies the breaks inside the story that a video tells (and, therefore, where you can insert mid-roll ads or ad pods in a seamless-feeling, less disruptive way). This process is powered by AI that studies all of the sights, sounds, and text inside the video. If you feed us data about how viewers interact with the Storybreaks we have found, our AI will learn from it and adjust the Storybreaks inside that video (and future ones) accordingly, optimizing engagement based on viewer location, time, device, and profile.
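A downstream scheduler might use the returned break points like this. The list of timestamps and the minimum-spacing rule are illustrative assumptions, not part of the documented Storybreaks response:

```python
# Hypothetical Storybreaks output: candidate break points in seconds.
storybreaks = [12.0, 95.0, 110.0, 240.0, 250.0]

def pick_midrolls(breaks, min_gap=60.0):
    """Greedily choose breaks at least min_gap seconds apart, so
    consecutive mid-roll pods don't crowd the viewer."""
    chosen = []
    for t in sorted(breaks):
        if not chosen or t - chosen[-1] >= min_gap:
            chosen.append(t)
    return chosen

print(pick_midrolls(storybreaks))  # [12.0, 95.0, 240.0]
```

In practice, the engagement feedback the text describes could tune parameters like `min_gap` per viewer location, time, device, and profile.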