Moments Lab, the AI company redefining how organizations work with video, has raised $24 million in new funding, led by Oxx with participation from Orange Ventures, Kadmos, Supernova Invest, and Elaia Partners. The funding will supercharge the company's U.S. expansion and support continued development of its agentic AI platform, a system designed to turn massive video archives into instantly searchable and monetizable assets.
The heart of Moments Lab is MXT-2, a multimodal video-understanding AI that watches, hears, and interprets video with context-aware precision. It doesn't just label content; it narrates it, identifying people, places, logos, and even cinematographic elements like shot types and pacing. This natural-language metadata turns hours of footage into structured, searchable intelligence, usable across creative, editorial, marketing, and monetization workflows.
But the real leap forward is the introduction of agentic AI: an autonomous system that can plan, reason, and adapt to a user's intent. Instead of merely executing instructions, it understands prompts like "generate a highlight reel for social" and takes action: pulling scenes, suggesting titles, selecting formats, and aligning outputs with a brand's voice or platform requirements.
"With MXT, we already index video faster than any human ever could," said Philippe Petitpont, CEO and co-founder of Moments Lab. "But with agentic AI, we're building the next layer: AI that acts as a teammate, doing everything from crafting rough cuts to uncovering storylines hidden deep within the archive."
From Search to Storytelling: A Platform Built for Speed and Scale
Moments Lab is more than an indexing engine. It's a full-stack platform that empowers media professionals to move at the speed of story. That starts with search, arguably the most painful part of working with video today.
Most production teams still rely on filenames, folders, and tribal knowledge to locate content. Moments Lab changes that with plain-text search that behaves like Google for your video library. Users can simply type what they're looking for, such as "CEO talking about sustainability" or "crowd cheering at sunset," and retrieve exact clips within seconds.
Key features include:
- AI video intelligence: MXT-2 doesn't just tag content; it describes it using time-coded natural language, capturing what's seen, heard, and implied.
- Search anyone can use: Designed for accessibility, the platform lets non-technical users search across thousands of hours of footage using everyday language.
- Instant clipping and export: Once a moment is found, it can be clipped, trimmed, and exported or shared in seconds, with no need for timecode handoffs or third-party tools.
- Metadata-rich discovery: Filter by people, events, dates, locations, rights status, or any custom facet your workflow requires.
- Quote and soundbite detection: Automatically transcribes audio and highlights the most impactful segments, ideal for interview footage and press conferences.
- Content classification: Train the system to sort footage by theme, tone, or use case, from trailers to corporate reels to social clips.
- Translation and multilingual support: Transcribes and translates speech, even in multilingual settings, making content globally usable.
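To make the search workflow above concrete, here is a minimal, self-contained sketch of querying time-coded natural-language metadata with everyday phrases. Everything in it (the `Segment` type, the toy archive, the word-overlap ranking) is a hypothetical stand-in, not Moments Lab's actual API; a production system would use semantic matching rather than keyword overlap.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: str          # timecode, "HH:MM:SS"
    end: str
    description: str    # time-coded natural-language metadata

# Toy archive of the kind of descriptions an indexing model produces.
ARCHIVE = [
    Segment("00:01:10", "00:01:42", "CEO talking about sustainability on stage"),
    Segment("00:14:05", "00:14:30", "crowd cheering at sunset outside the stadium"),
    Segment("00:22:00", "00:22:45", "logo close-up during product reveal"),
]

def search(query: str, archive: list[Segment]) -> list[Segment]:
    """Rank segments by word overlap with the query; drop non-matches."""
    words = set(query.lower().split())
    scored = [(len(words & set(s.description.lower().split())), s) for s in archive]
    return [s for score, s in sorted(scored, key=lambda p: -p[0]) if score > 0]

hits = search("CEO sustainability", ARCHIVE)
# hits[0] points straight at the matching clip's timecodes, ready to trim and export.
```

Because each result carries its own start and end timecodes, the "instant clipping and export" step falls out of search for free.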
This end-to-end functionality has made Moments Lab an indispensable partner for TV networks, sports rights holders, ad agencies, and global brands. Recent clients include Thomson Reuters, Amazon Ads, Sinclair, Hearst, and Banijay, all grappling with increasingly complex content libraries and growing demands for speed, personalization, and monetization.
Built for Integration, Trained for Precision
MXT-2 is trained on more than 1.5 billion data points, reducing hallucinations and delivering high-confidence outputs that teams can rely on. Unlike proprietary AI stacks that lock metadata in unreadable formats, Moments Lab keeps everything in open text, ensuring full compatibility with downstream tools like Adobe Premiere, Final Cut Pro, Brightcove, YouTube, and enterprise MAM/CMS platforms via API or no-code integrations.
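The "open text" point is easy to illustrate: metadata kept as plain, human-readable JSON round-trips losslessly into any NLE, MAM, or CMS that can parse text. The record below is a sketch under assumed field names (`asset_id`, `segments`, and so on), not Moments Lab's actual schema.

```python
import json

# Hypothetical time-coded metadata record in an open, tool-agnostic format.
clip_metadata = {
    "asset_id": "keynote_2024",
    "segments": [
        {
            "start": "00:01:10:00",
            "end": "00:01:42:00",
            "description": "CEO talking about sustainability on stage",
            "people": ["CEO"],
            "tags": ["sustainability", "keynote"],
        }
    ],
}

payload = json.dumps(clip_metadata, indent=2)   # plain text any downstream tool can read
restored = json.loads(payload)                  # parses back without loss
```

Because the payload is ordinary text, the same record can feed an edit-decision list in Premiere, a MAM ingest endpoint, or a YouTube upload script without format conversion.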
"The real power of our system isn't just speed, but adaptability," said Fred Petitpont, co-founder and CTO. "Whether you're a broadcaster clipping sports highlights or a brand licensing footage to partners, our AI works the way your team already does, just 100x faster."
The platform is already being used to power everything from archive migration to live event clipping, editorial research, and content licensing. Users can share secure links with collaborators, sell footage to external buyers, and even train the system to align with niche editorial styles or compliance guidelines.
From Startup to Standard-Setter
Founded in 2016 by twin brothers Frederic and Philippe Petitpont, Moments Lab began with a simple question: What if you could Google your video library? Today, it's answering that question, and more, with a platform that redefines how creative and editorial teams work with media. It has become the most awarded indexing AI in the video industry since 2023 and shows no signs of slowing down.
"When we first saw MXT in action, it felt like magic," said Gökçe Ceylan, Principal at Oxx. "This is exactly the kind of product and team we look for: technically brilliant, customer-obsessed, and solving a real, growing need."
With this new round of funding, Moments Lab is poised to lead a category that didn't exist five years ago, agentic AI for video, and define the future of content discovery.