We're excited to announce the general availability of multimodal retrieval for Amazon Bedrock Knowledge Bases. This new capability provides native support for video and audio content, in addition to text and images. With it, you can build Retrieval Augmented Generation (RAG) applications that search and retrieve information across text, images, audio, and video, all within a fully managed service.
Modern enterprises store valuable information in multiple formats. Product documentation includes diagrams and screenshots, training materials contain instructional videos, and customer insights are captured in recorded meetings. Until now, building artificial intelligence (AI) applications that could effectively search across these content types required complex custom infrastructure and significant engineering effort.
Previously, Amazon Bedrock Knowledge Bases used text-based embedding models for retrieval. While it supported text documents and images, images had to be processed with foundation models (FMs) or Amazon Bedrock Data Automation to generate text descriptions, a text-first approach that lost visual context and prevented visual search. Video and audio required custom preprocessing pipelines outside the service. Now, with multimodal embeddings, the retriever natively supports text, images, audio, and video within a single embedding model.
With multimodal retrieval in Amazon Bedrock Knowledge Bases, you can now ingest, index, and retrieve information from text, images, video, and audio using a single, unified workflow. Content is encoded with multimodal embeddings that preserve visual and audio context, so your applications can find relevant information across media types. You can even search with an image to find visually similar content or locate specific scenes in videos.
In this post, we guide you through building multimodal RAG applications. You'll learn how multimodal knowledge bases work, how to choose the right processing strategy based on your content type, and how to configure and implement multimodal retrieval using both the console and code examples.
Understanding multimodal knowledge bases
Amazon Bedrock Knowledge Bases automates the entire RAG workflow: ingesting content from your data sources, parsing and chunking it into searchable segments, converting chunks to vector embeddings, and storing them in a vector database. During retrieval, user queries are embedded and matched against stored vectors to find semantically similar content, which augments the prompt sent to your foundation model.
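For reference, here is a minimal sketch of that flow using the AWS SDK for Python (Boto3). It assumes a knowledge base already exists; the knowledge base ID and generation model ARN are placeholders you would replace with your own values.

```python
import boto3

# Placeholders: replace with your own knowledge base ID and generation model ARN
KB_ID = "YOUR_KB_ID"
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-pro-v1:0"

agent_runtime = boto3.client("bedrock-agent-runtime")

# RetrieveAndGenerate embeds the query, retrieves matching chunks from the
# knowledge base, and augments the prompt sent to the foundation model
response = agent_runtime.retrieve_and_generate(
    input={"text": "How do I pair the wireless earbuds with my phone?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": KB_ID,
            "modelArn": MODEL_ARN,
        },
    },
)
print(response["output"]["text"])
```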
With multimodal retrieval, this workflow now handles images, video, and audio alongside text through two processing approaches. Amazon Nova Multimodal Embeddings encodes content natively into a unified vector space, enabling cross-modal retrieval where you can query with text and retrieve videos, or search with images to find visual content.
Alternatively, Amazon Bedrock Data Automation converts multimedia into rich text descriptions and transcripts before embedding, providing high-accuracy retrieval over spoken content. Your choice depends on whether visual context or speech precision matters most for your use case.
We explore each of these approaches in this post.
Amazon Nova Multimodal Embeddings
Amazon Nova Multimodal Embeddings is the first unified embedding model that encodes text, documents, images, video, and audio into a single shared vector space. Content is processed natively, without conversion to text. The model supports up to 8,192 tokens of text and 30-second video and audio segments, handles over 200 languages, and offers four embedding dimensions (3,072 by default, plus 1,024, 384, and 256) to balance accuracy and efficiency. Amazon Bedrock Knowledge Bases automatically segments video and audio into configurable chunks (5 to 30 seconds), with each segment embedded independently.

For video content, Nova embeddings capture visual elements (scenes, objects, motion, and actions) as well as audio characteristics like music, sounds, and ambient noise. For videos where spoken dialogue is critical to your use case, you can use Amazon Bedrock Data Automation to extract transcripts alongside visual descriptions. For standalone audio files, Nova processes acoustic features such as music, environmental sounds, and audio patterns. The cross-modal capability enables use cases such as describing a visual scene in text to retrieve matching videos, uploading a reference image to find similar products, or locating specific actions in footage, all without pre-existing text descriptions.
Best for: Product catalogs, visual search, manufacturing videos, sports footage, security camera footage, and scenarios where visual content drives the use case.
Amazon Bedrock Data Automation
Amazon Bedrock Data Automation takes a different approach, converting multimedia content into rich textual representations before embedding. For images, it generates detailed descriptions covering objects, scenes, text within images, and spatial relationships. For video, it produces scene-by-scene summaries, identifies key visual elements, and extracts on-screen text. For audio and video with speech, Bedrock Data Automation provides accurate transcriptions with timestamps and speaker identification, along with segment summaries that capture the key points discussed.

Once converted to text, this content is chunked and embedded using text embedding models such as Amazon Titan Text Embeddings or Amazon Nova Multimodal Embeddings. This text-first approach enables highly accurate question answering over spoken content: when users ask about specific statements made in a meeting or topics discussed in a podcast, the system searches precise transcripts rather than audio embeddings. This makes it particularly useful for compliance scenarios where you need exact quotes and verbatim records for audit trails, meeting analysis, customer support call mining, and use cases where you need to retrieve and verify specific spoken information.
Best for: Meetings, webinars, interviews, podcasts, training videos, support calls, and scenarios requiring precise retrieval of specific statements or discussions.
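You choose Bedrock Data Automation parsing when you configure the data source for your knowledge base. As a rough sketch with Boto3 (the knowledge base ID and bucket ARN below are placeholders), that selection looks like the following:

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent")

# Placeholders: replace with your knowledge base ID and media bucket
response = bedrock_agent.create_data_source(
    knowledgeBaseId="YOUR_KB_ID",
    name="meeting-recordings",
    dataSourceConfiguration={
        "type": "S3",
        "s3Configuration": {"bucketArn": "arn:aws:s3:::your-media-bucket"},
    },
    vectorIngestionConfiguration={
        # Parse audio and video with Bedrock Data Automation so transcripts and
        # scene descriptions are generated before chunking and embedding
        "parsingConfiguration": {
            "parsingStrategy": "BEDROCK_DATA_AUTOMATION",
            "bedrockDataAutomationConfiguration": {"parsingModality": "MULTIMODAL"},
        }
    },
)
print(response["dataSource"]["dataSourceId"])
```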
Use case scenario: Visual product search for e-commerce
Multimodal knowledge bases can power applications ranging from enhanced customer experiences and employee training to maintenance operations and legal review. Traditional e-commerce search relies on text queries, requiring customers to articulate what they're looking for with the right keywords. This breaks down when they've seen a product elsewhere, have a photo of something they like, or want to find items similar to what appears in a video. Now, customers can search your product catalog using text descriptions, upload a picture of an item they've photographed, or reference a scene from a video to find matching products. The system retrieves visually similar items by comparing the embedded representation of their query (text, image, or video) against the multimodal embeddings of your product inventory.
For this scenario, Amazon Nova Multimodal Embeddings is the right choice. Product discovery is fundamentally visual: customers care about colors, styles, shapes, and visual details. By encoding your product images and videos into the Nova unified vector space, the system matches on visual similarity without relying on text descriptions that might miss subtle visual characteristics. While a complete recommendation system would incorporate customer preferences, purchase history, and inventory availability, retrieval from a multimodal knowledge base provides the foundational capability: finding visually relevant products regardless of how customers choose to search.
Console walkthrough
In the following sections, we walk through the high-level steps to set up and test a multimodal knowledge base for our e-commerce product search example. We create a knowledge base containing smartphone product images and videos, then demonstrate how customers can search using text descriptions, uploaded images, or video references. The GitHub repository provides a guided notebook that you can follow to deploy this example in your account.
Prerequisites
Before you get started, make sure that you have the following prerequisites:
- An AWS account with access to Amazon Bedrock
- Model access enabled for Amazon Nova Multimodal Embeddings in your Region
- An Amazon S3 bucket containing the product images and videos you want to index
Provide the knowledge base details and data source type
Start by opening the Amazon Bedrock console and creating a new knowledge base. Provide a descriptive name for your knowledge base and select your data source type; in this case, Amazon S3, where your product images and videos are stored.

Configure data source
Connect your S3 bucket containing the product images and videos. For the parsing strategy, select the Amazon Bedrock default parser. Because we're using Nova Multimodal Embeddings, the images and videos are processed natively and embedded directly into the unified vector space, preserving their visual characteristics without conversion to text.

Configure data storage and processing
Select Amazon Nova Multimodal Embeddings as your embedding model. This unified embedding model encodes both your product images and customer queries into the same vector space, enabling cross-modal retrieval where text queries can retrieve images and image queries can find visually similar products. For this example, we use Amazon S3 Vectors as the vector store (you can optionally use other available vector stores), which provides cost-effective and durable storage optimized for large-scale vector datasets while maintaining sub-second query performance. You also need to configure the multimodal storage destination by specifying an S3 location. Knowledge Bases uses this location to store images and other media extracted from your data source. When users query the knowledge base, relevant media is retrieved from this storage.
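Programmatically, these choices come together in a single CreateKnowledgeBase call. The following is a minimal sketch with Boto3; the role ARN, S3 URIs, vector index ARN, and the Nova Multimodal Embeddings model identifier are placeholders you would replace with the values for your account and Region.

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent")

# All ARNs and URIs below are placeholders; substitute your own resources
response = bedrock_agent.create_knowledge_base(
    name="product-catalog-kb",
    roleArn="arn:aws:iam::111122223333:role/YourKnowledgeBaseRole",
    knowledgeBaseConfiguration={
        "type": "VECTOR",
        "vectorKnowledgeBaseConfiguration": {
            # Placeholder: use the Amazon Nova Multimodal Embeddings model ARN
            # listed in the Bedrock console for your Region
            "embeddingModelArn": "arn:aws:bedrock:us-east-1::foundation-model/<nova-multimodal-embeddings-model-id>",
            "embeddingModelConfiguration": {
                "bedrockEmbeddingModelConfiguration": {
                    "dimensions": 3072,  # or 1024 / 384 / 256
                    "embeddingDataType": "FLOAT32",
                }
            },
            # Multimodal storage destination for extracted images and media
            "supplementalDataStorageConfiguration": {
                "storageLocations": [
                    {
                        "type": "S3",
                        "s3Location": {"uri": "s3://your-bucket/multimodal-storage/"},
                    }
                ]
            },
        },
    },
    storageConfiguration={
        "type": "S3_VECTORS",
        "s3VectorsConfiguration": {
            "indexArn": "arn:aws:s3vectors:us-east-1:111122223333:bucket/your-vector-bucket/index/your-index"
        },
    },
)
print(response["knowledgeBase"]["knowledgeBaseId"])
```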

Review and create
Review your configuration settings, including the knowledge base details, data source configuration, embedding model selection, and vector store setup (Amazon S3 Vectors). In this example, we use Amazon Nova Multimodal Embeddings v1 with 3,072 vector dimensions; higher dimensions provide richer representations, while lower dimensions such as 1,024, 384, or 256 can optimize for storage and cost. Once everything looks correct, create your knowledge base.
Create an ingestion job
Once the knowledge base is created, initiate the sync process to ingest your product catalog. The knowledge base processes each image and video, generates embeddings, and stores them in the managed vector database. Monitor the sync status to confirm that the documents are successfully indexed.
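The console sync corresponds to an ingestion job in the API. A minimal polling sketch with Boto3 (the knowledge base and data source IDs are placeholders) might look like the following:

```python
import time
import boto3

bedrock_agent = boto3.client("bedrock-agent")

# Placeholders: use the IDs returned when you created the resources
KB_ID = "YOUR_KB_ID"
DS_ID = "YOUR_DATA_SOURCE_ID"

# Start the sync: the knowledge base parses, embeds, and indexes each file
job = bedrock_agent.start_ingestion_job(knowledgeBaseId=KB_ID, dataSourceId=DS_ID)
job_id = job["ingestionJob"]["ingestionJobId"]

# Poll until the job finishes
while True:
    status = bedrock_agent.get_ingestion_job(
        knowledgeBaseId=KB_ID, dataSourceId=DS_ID, ingestionJobId=job_id
    )["ingestionJob"]["status"]
    print("Ingestion status:", status)
    if status in ("COMPLETE", "FAILED", "STOPPED"):
        break
    time.sleep(30)
```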

Test the knowledge base using text as input for your prompt
With your knowledge base ready, test it using a text query in the console. Search with a product description like "A metallic phone cover" (or a similar query relevant to your own product media) to verify that text-based retrieval works correctly across your catalog.
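You can run the same test programmatically with the Retrieve API. The following sketch assumes a placeholder knowledge base ID:

```python
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")

# Placeholder: replace with your knowledge base ID
response = agent_runtime.retrieve(
    knowledgeBaseId="YOUR_KB_ID",
    retrievalQuery={"text": "A metallic phone cover"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 5}},
)

# Each result carries a relevance score, the source location, and chunk metadata
for result in response["retrievalResults"]:
    uri = result["location"].get("s3Location", {}).get("uri")
    print(f"{result['score']:.3f}  {uri}")
```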

Test the knowledge base using a reference image and retrieve different modalities
Now for the powerful part: visual search. Upload a reference image of a product you want to find. For example, imagine you saw a cell phone cover on another website and want to find similar items in your catalog. Simply upload the image without any additional text prompt.

The multimodal knowledge base extracts visual features from your uploaded image and retrieves visually similar products from your catalog. As you can see in the results, the system returns phone covers with similar design patterns, colors, or visual characteristics. Notice the metadata associated with each chunk in the Source details panel. The x-amz-bedrock-kb-chunk-start-time-in-millis and x-amz-bedrock-kb-chunk-end-time-in-millis fields indicate the exact temporal location of the segment within the source video. When building applications programmatically, you can use these timestamps to extract and display the exact video segment that matched the query, enabling features like "jump to relevant moment" or clip generation directly from your source videos. This cross-modal capability transforms the shopping experience: customers no longer need to describe what they're looking for in words; they can show you.
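As a rough illustration of working with those timestamps, the sketch below issues a Retrieve call (a text query is used here for brevity, and the knowledge base ID is a placeholder) and reads the segment boundaries from each result's metadata:

```python
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")

# Placeholder: replace with your knowledge base ID
response = agent_runtime.retrieve(
    knowledgeBaseId="YOUR_KB_ID",
    retrievalQuery={"text": "metallic rose gold phone cover"},
)

for result in response["retrievalResults"]:
    metadata = result.get("metadata", {})
    start_ms = metadata.get("x-amz-bedrock-kb-chunk-start-time-in-millis")
    end_ms = metadata.get("x-amz-bedrock-kb-chunk-end-time-in-millis")
    uri = result["location"].get("s3Location", {}).get("uri")
    if start_ms is not None and end_ms is not None:
        # Use the segment boundaries to jump to, or clip, the matching video moment
        print(f"{uri}: {float(start_ms) / 1000:.1f}s to {float(end_ms) / 1000:.1f}s")
```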
Test the knowledge base using a reference image and retrieve different modalities using Bedrock Data Automation
Now let's look at what the results would look like if you had configured Bedrock Data Automation parsing during the data source setup. In the following screenshot, notice the transcript section in the Source details panel.

For each retrieved video chunk, Bedrock Data Automation automatically generates a detailed text description; in this example, it describes the smartphone's metallic rose gold finish, studio lighting, and visual characteristics. This transcript appears directly in the test window alongside the video, providing rich textual context. You get both visual similarity matching from the multimodal embeddings and detailed product descriptions that can answer specific questions about features, colors, materials, and other attributes visible in the video.
Clean up
To clean up your resources, complete the following steps, starting with deleting the knowledge base:
- On the Amazon Bedrock console, choose Knowledge Bases
- Select your knowledge base and note both the IAM service role name and the S3 vector index ARN
- Choose Delete and confirm
To delete the S3 vector index and vector bucket used as the vector store, use the AWS Command Line Interface (AWS CLI) or the Amazon S3 console; refer to the Amazon S3 Vectors documentation for the specific commands.
To delete the IAM service role:
- On the IAM console, find the role noted earlier
- Select and delete the role
To delete the sample dataset:
- On the Amazon S3 console, find your S3 bucket
- Select and delete the files you uploaded for this tutorial
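If you prefer to clean up programmatically, a minimal sketch with Boto3 could handle the knowledge base and the sample files (the IDs and bucket name are placeholders; the vector index and IAM role still need to be removed as described above):

```python
import boto3

# Placeholders: replace with your own resource identifiers
KB_ID = "YOUR_KB_ID"
DS_ID = "YOUR_DATA_SOURCE_ID"
BUCKET = "your-media-bucket"

bedrock_agent = boto3.client("bedrock-agent")

# Delete the data source first, then the knowledge base itself
bedrock_agent.delete_data_source(knowledgeBaseId=KB_ID, dataSourceId=DS_ID)
bedrock_agent.delete_knowledge_base(knowledgeBaseId=KB_ID)

# Remove the sample files; only do this if the bucket holds just this tutorial's data
s3 = boto3.resource("s3")
s3.Bucket(BUCKET).objects.all().delete()
```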
Conclusion
Multimodal retrieval for Amazon Bedrock Knowledge Bases removes the complexity of building RAG applications that span text, images, video, and audio. With native support for video and audio content, you can now build comprehensive knowledge bases that unlock insights from your enterprise data, not just your text documents.
The choice between Amazon Nova Multimodal Embeddings and Bedrock Data Automation gives you the flexibility to optimize for your specific content. The Nova unified vector space enables cross-modal retrieval for visually driven use cases, while the Bedrock Data Automation text-first approach delivers precise transcription-based retrieval for speech-heavy content. Both approaches integrate seamlessly into the same fully managed workflow, removing the need for custom preprocessing pipelines.
Availability
Regional availability depends on the features selected for multimodal support; refer to the documentation for details.
Next steps
Get started with multimodal retrieval today:
- Explore the documentation: Review the Amazon Bedrock Knowledge Bases documentation and the Amazon Nova User Guide for more technical details.
- Experiment with code examples: Check out the Amazon Bedrock samples repository for hands-on notebooks demonstrating multimodal retrieval.
- Learn more about Nova: Read the Amazon Nova Multimodal Embeddings announcement for deeper technical insights.
About the authors
Dani Mitchell is a Generative AI Specialist Solutions Architect at Amazon Web Services (AWS). He is focused on helping accelerate enterprises around the world on their generative AI journeys with Amazon Bedrock and Amazon Bedrock AgentCore.
Pallavi Nargund is a Principal Solutions Architect at AWS. She is a generative AI lead for US Greenfield and leads the AWS for Legal Tech team. She is passionate about women in technology and is a core member of Women in AI/ML at Amazon. She speaks at internal and external conferences such as AWS re:Invent, AWS Summits, and webinars. Pallavi holds a Bachelor of Engineering from the University of Pune, India. She lives in Edison, New Jersey, with her husband, two girls, and her two pups.
Jean-Pierre Dodel is a Principal Product Manager for Amazon Bedrock, Amazon Kendra, and Amazon Quick Index. He brings 15 years of enterprise search and AI/ML experience to the team, with prior work at Autonomy, HP, and search startups before joining Amazon 8 years ago. JP is currently focused on innovations in multimodal RAG, agentic retrieval, and structured RAG.

