Build unified intelligence with Amazon Bedrock AgentCore

By Oliver Chambers | February 19, 2026


Building cohesive, unified customer intelligence across your organization begins with reducing the friction your sales representatives face when toggling between Salesforce, support tickets, and Amazon Redshift. A sales representative preparing for a customer meeting might spend hours clicking through several different dashboards (product recommendations, engagement metrics, revenue analytics, and so on) before assembling a complete picture of the customer's situation. At AWS, our sales organization experienced this firsthand as we scaled globally. We needed a way to unify siloed customer data across metrics databases, document repositories, and external industry sources without building complex custom orchestration infrastructure.

We built the Customer Agent & Knowledge Engine (CAKE), a customer-centric chat agent, using Amazon Bedrock AgentCore to solve this challenge. CAKE coordinates specialized retriever tools (querying knowledge graphs in Amazon Neptune, metrics in Amazon DynamoDB, documents in Amazon OpenSearch Service, and external market data through a web search API) together with security enforcement through a row-level security (RLS) tool, delivering customer insights from natural language queries in under 10 seconds (as observed in agent load tests).

In this post, we demonstrate how to build unified intelligence systems using Amazon Bedrock AgentCore through our real-world implementation of CAKE. You can build custom agents that unlock the following features and benefits:

• Coordination of specialized tools through dynamic intent analysis and parallel execution
• Integration of purpose-built data stores (Neptune, DynamoDB, OpenSearch Service) with parallel orchestration
• Implementation of row-level security and governance within workflows
• Production engineering practices for reliability, including template-based reporting that adheres to enterprise semantics and style
• Performance optimization through model flexibility

These architectural patterns can help you accelerate development for a range of use cases, including customer intelligence systems, enterprise AI assistants, and multi-agent systems that coordinate across different data sources.

Why customer intelligence systems need unification

As sales organizations scale globally, they typically face three critical challenges: fragmented data across specialized tools (product recommendations, engagement dashboards, revenue analytics, and so on) that requires hours to assemble into a comprehensive customer view; a lack of business semantics in traditional databases, which can't capture the relationships that explain why metrics matter; and manual consolidation processes that can't scale with growing data volumes. You need a unified system that can aggregate customer data, understand semantic relationships, and reason about customer needs in business context. CAKE was built to fill exactly that gap.

Solution overview

CAKE is a customer-centric chat agent that transforms fragmented data into unified, actionable intelligence. By consolidating internal and external data sources and tables into a single conversational endpoint, CAKE delivers personalized customer insights powered by context-rich knowledge graphs, all in under 10 seconds. Unlike traditional tools that merely report numbers, CAKE's semantic foundation captures the meaning of, and relationships between, business metrics, customer behaviors, industry dynamics, and strategic contexts. This allows CAKE to explain not just what is happening with a customer, but why it's happening and how to act.

Amazon Bedrock AgentCore provides, as a managed service, the runtime infrastructure that multi-agent AI systems require, including inter-agent communication, parallel execution, conversation state tracking, and tool routing. This helps teams focus on defining agent behaviors and business logic rather than implementing distributed systems infrastructure.

For CAKE, we built a custom agent on Amazon Bedrock AgentCore that coordinates five specialized tools, each optimized for a different data access pattern:

• Neptune retriever tool for graph relationship queries
• DynamoDB agent for instant metric lookups
• OpenSearch retriever tool for semantic document search
• Web search tool for external industry intelligence
• Row-level security (RLS) tool for security enforcement

The following diagram shows how Amazon Bedrock AgentCore supports the orchestration of these components.

The solution moves through several key stages in response to a question (for example, "What are the top expansion opportunities for this customer?"):

• Analyzes intent and routes the query – The supervisor agent, running on Amazon Bedrock AgentCore, analyzes the natural language query to determine its intent. The example question requires customer understanding, relationship data, usage metrics, and strategic insights. The agent's tool-calling logic, using Amazon Bedrock AgentCore Runtime, identifies which specialized tools to activate.
• Dispatches tools in parallel – Rather than executing tool calls sequentially, the orchestration layer dispatches multiple retriever tools in parallel, using the scalable execution environment of Amazon Bedrock AgentCore Runtime. The agent manages the execution lifecycle, handling timeouts, retries, and error conditions automatically (a minimal sketch of this fan-out pattern follows the list).
• Synthesizes multiple results – As the specialized tools return results, Amazon Bedrock AgentCore streams these partial responses to the supervisor agent, which synthesizes them into a coherent answer. The agent reasons about how different data sources relate to one another, identifies patterns, and generates insights that span multiple knowledge domains.
• Enforces security boundaries – Before data retrieval begins, the agent invokes the RLS tool to deterministically enforce user permissions. The custom agent then verifies that subsequent tool calls respect these security boundaries, automatically filtering results and helping prevent unauthorized data access. This security layer operates at the infrastructure level, reducing the risk of implementation errors.

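The following sketch illustrates the fan-out step in plain Python. It is a minimal illustration, not the CAKE implementation: the retriever functions are hypothetical placeholders, and in production Amazon Bedrock AgentCore Runtime manages the execution lifecycle rather than a local thread pool.

```python
# Minimal sketch of parallel tool dispatch, assuming each retriever tool is a
# plain Python callable. The tool bodies are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor, as_completed


def neptune_retriever(question: str) -> dict:
    return {"source": "neptune", "data": f"graph relationships for: {question}"}


def dynamodb_retriever(question: str) -> dict:
    return {"source": "dynamodb", "data": f"precomputed metrics for: {question}"}


def opensearch_retriever(question: str) -> dict:
    return {"source": "opensearch", "data": f"document passages for: {question}"}


TOOLS = {
    "neptune": neptune_retriever,
    "dynamodb": dynamodb_retriever,
    "opensearch": opensearch_retriever,
}


def dispatch_in_parallel(sub_questions: dict) -> list:
    """Fan decomposed sub-questions out to the selected tools concurrently."""
    results = []
    with ThreadPoolExecutor(max_workers=len(sub_questions)) as pool:
        futures = {pool.submit(TOOLS[name], q): name
                   for name, q in sub_questions.items()}
        for future in as_completed(futures):
            try:
                results.append(future.result())
            except Exception as exc:  # one failed tool should not sink the answer
                results.append({"source": futures[future], "error": str(exc)})
    return results


if __name__ == "__main__":
    print(dispatch_in_parallel({
        "neptune": "Which stakeholders are connected to this account?",
        "dynamodb": "What are the latest engagement metrics?",
        "opensearch": "What do recent field notes say about expansion plans?",
    }))
```
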
This architecture operates on two parallel tracks: Amazon Bedrock AgentCore provides the runtime for the real-time serving layer that responds to user queries with minimal latency, while an offline data pipeline periodically refreshes the underlying data stores from the analytical data warehouse. In the following sections, we discuss the agent framework design and the core solution components, including the knowledge graph, data stores, and data pipeline.

Agent framework design

Our multi-agent system is built on the AWS Strands Agents framework, which provides a model-driven foundation for building agents from many different models and delivers structured reasoning capabilities while maintaining the enterprise controls required for regulatory compliance and predictable performance. The supervisor agent analyzes incoming questions to intelligently select which specialized agents and tools to invoke and how to decompose user queries. The framework exposes agent states and outputs, which lets us implement decentralized evaluation at both the agent and supervisor levels. Building on this model-driven approach, we implement agentic reasoning through GraphRAG reasoning chains that construct deterministic inference paths by traversing knowledge relationships. Our agents perform autonomous reasoning within their specialized domains, grounded in predefined ontologies, while maintaining the predictable, auditable behavior patterns required for enterprise applications. A minimal sketch of this agent-plus-tools shape follows.

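The sketch below shows the basic agent-plus-tools shape, assuming the open source Strands Agents SDK's Agent and tool interfaces and default Amazon Bedrock model access; the tool body is a hypothetical stand-in for CAKE's Neptune retriever, not the production code.

```python
# Rough sketch of a supervisor-style agent with one specialized tool, assuming
# the strands-agents SDK. The tool body is a hypothetical stand-in.
from strands import Agent, tool


@tool
def customer_relationships(customer_id: str) -> str:
    """Return relationship context for a customer from the knowledge graph."""
    # Placeholder: the production tool queries Amazon Neptune.
    return f"Stakeholder and product relationships for customer {customer_id}"


supervisor = Agent(
    system_prompt=(
        "You are a customer intelligence supervisor. Decompose the user's "
        "question, call the available tools, and synthesize a concise answer."
    ),
    tools=[customer_relationships],
)

if __name__ == "__main__":
    supervisor("What are the top expansion opportunities for customer C-1234?")
```
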
The supervisor agent employs a multi-phase decision protocol (a sketch of the source-selection phase follows the list):

• Question analysis – Parse and understand user intent
• Source selection – Intelligent routing determines which combination of tools is needed
• Query decomposition – The original question is broken down into specialized sub-questions optimized for each selected tool
• Parallel execution – Selected tools execute concurrently through serverless AWS Lambda action groups

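To make source selection concrete, the following sketch lets a Bedrock model choose tools through the Converse API's tool-calling support. The tool names, input schemas, and model ID are illustrative assumptions, not CAKE's actual configuration.

```python
# Minimal sketch of source selection: the model picks tools via the Amazon
# Bedrock Converse API. Tool names, schemas, and the model ID are assumptions.
import boto3

bedrock = boto3.client("bedrock-runtime")

tool_config = {
    "tools": [
        {"toolSpec": {
            "name": "graph_reasoning",
            "description": "Traverse customer relationships in the knowledge graph.",
            "inputSchema": {"json": {
                "type": "object",
                "properties": {"sub_question": {"type": "string"}},
                "required": ["sub_question"],
            }},
        }},
        {"toolSpec": {
            "name": "metrics_lookup",
            "description": "Fetch precomputed customer metrics by customer ID.",
            "inputSchema": {"json": {
                "type": "object",
                "properties": {"customer_id": {"type": "string"}},
                "required": ["customer_id"],
            }},
        }},
    ]
}

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative model ID
    messages=[{"role": "user", "content": [
        {"text": "What are the top expansion opportunities for customer C-1234?"}
    ]}],
    toolConfig=tool_config,
)

# Each toolUse block in the reply is a sub-question routed to one selected tool.
for block in response["output"]["message"]["content"]:
    if "toolUse" in block:
        print(block["toolUse"]["name"], block["toolUse"]["input"])
```
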
Tools are exposed through a hierarchical composition pattern (accounting for data modality, structured vs. unstructured) in which high-level agents and tools coordinate multiple specialized sub-tools:

• Graph reasoning tool – Manages entity traversal, relationship analysis, and knowledge extraction
• Customer insights agent – Coordinates multiple fine-tuned models in parallel to generate customer summaries from tables
• Semantic search tool – Orchestrates unstructured text analysis (such as field notes)
• Web research tool – Coordinates web and news retrieval

We extend the core AWS Strands Agents framework with enterprise-grade capabilities, including customer access validation, token optimization, multi-hop LLM selection for resilience against model throttling, and structured GraphRAG reasoning chains. These extensions deliver the autonomous decision-making of modern agentic systems while supporting predictable performance and alignment with regulatory compliance.

Building the knowledge graph foundation

CAKE's knowledge graph in Neptune represents customer relationships, product usage patterns, and industry dynamics in a structured format that lets AI agents reason efficiently. Unlike traditional databases that store information in isolation, CAKE's knowledge graph captures the semantic meaning of business entities and their relationships.

Graph construction and entity modeling

We designed the knowledge graph around the AWS sales ontology, the core entities and relationships that sales teams work with every day:

• Customer entities – With properties extracted from data sources, including industry classifications, revenue metrics, cloud adoption phase, and engagement scores
• Product entities – Representing AWS services, with connections to use cases, industry applications, and customer adoption patterns
• Solution entities – Linking products to business outcomes and strategic initiatives
• Opportunity entities – Tracking sales pipeline, deal stages, and associated stakeholders
• Contact entities – Mapping relationship networks within customer organizations

Amazon Neptune excels at answering questions that require understanding connections: finding how two entities are related, identifying paths between accounts, or discovering indirect relationships that span multiple hops. The offline data construction process runs scheduled queries against Redshift clusters to prepare the data that is loaded into the graph. A sketch of such a multi-hop query follows.

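As an example of the kind of multi-hop question Neptune answers well, the following sketch submits an openCypher query through the Neptune Data API. The endpoint, node labels, relationship types, and properties are illustrative assumptions rather than CAKE's actual graph schema.

```python
# Sketch of a multi-hop relationship query against Neptune's openCypher endpoint.
# The endpoint, labels, relationship types, and properties are assumptions.
import boto3

neptune = boto3.client(
    "neptunedata",
    endpoint_url="https://your-neptune-endpoint:8182",  # replace with your cluster endpoint
)

query = """
MATCH (c:Customer {customer_id: $customer_id})-[u:INCREASED_USAGE]->(p:Product)
MATCH (peer:Customer)-[:ADOPTED]->(p)
WHERE peer.industry = c.industry AND peer <> c
RETURN p.name AS product, u.rate AS usage_growth, count(peer) AS industry_adopters
ORDER BY industry_adopters DESC
LIMIT 5
"""

result = neptune.execute_open_cypher_query(
    openCypherQuery=query,
    parameters='{"customer_id": "C-1234"}',
)
print(result["results"])
```
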
Capturing relationship context

CAKE's knowledge graph captures how relationships connect entities. When the graph connects a customer to a product through an increased-usage relationship, it also stores contextual attributes: the rate of increase, the business driver (from account plans), and related product adoption patterns. This contextual richness helps the LLM understand business context and provide explanations grounded in actual relationships rather than statistical correlation alone.

Purpose-built data stores

Rather than storing data in a single database, CAKE uses specialized data stores, each designed for how it is queried. Our custom agent, running on Amazon Bedrock AgentCore, manages the coordination across these stores (sending queries to the right database, running them at the same time, and combining results), so both users and developers work with what looks like a single data source:

• Neptune for graph relationships – Neptune stores the web of connections between customers, accounts, stakeholders, and organizational entities. It excels at the multi-hop traversal queries that would require expensive joins in relational databases, such as finding relationship paths between disconnected accounts or discovering customers in an industry who have adopted specific AWS services. When Amazon Bedrock AgentCore identifies a query requiring relationship reasoning, it automatically routes it to the Neptune retriever tool.
• DynamoDB for instant metrics – DynamoDB operates as a key-value store for precomputed aggregations. Rather than computing customer health scores or engagement metrics on demand, the offline pipeline precomputes these values and stores them indexed by customer ID. DynamoDB then delivers sub-10 ms lookups, enabling instant report generation. Tool chaining in Amazon Bedrock AgentCore lets the agent retrieve metrics from DynamoDB, pass them to the magnifAI agent (our custom table-to-text agent) for formatting, and return polished reports, all without custom integration code.
• OpenSearch Service for semantic document search – OpenSearch Service stores unstructured content such as account plans and field notes. Using embedding models, OpenSearch Service converts text into vector representations that support semantic matching. When Amazon Bedrock AgentCore receives a query about "digital transformation," for example, it recognizes the need for semantic search and automatically routes to the OpenSearch Service retriever tool, which finds relevant passages even when documents use different terminology.
• S3 for document storage – Amazon Simple Storage Service (Amazon S3) provides the foundation for OpenSearch Service. Account plans are stored as Parquet files in Amazon S3 before being indexed, because the source warehouse (Amazon Redshift) has truncation limits that would cut off large documents. This multi-step process (Amazon S3 storage, embedding generation, OpenSearch Service indexing) preserves full content while maintaining the low latency required for real-time queries.

Building on Amazon Bedrock AgentCore makes these multi-database queries feel like a single, unified data source. When a query requires customer relationships from Neptune, metrics from DynamoDB, and document context from OpenSearch Service, our agent automatically dispatches requests to all three in parallel, manages their execution, and synthesizes their results into a single coherent response. A sketch of the individual store lookups behind those tools follows.

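For illustration, the per-store lookups behind the DynamoDB and OpenSearch Service tools might look like the sketch below. The table name, index name, field names, and embedding dimension are assumptions, not CAKE's actual schemas, and the query embedding would come from an embedding model in practice.

```python
# Sketch of the per-store lookups behind two retriever tools. The table, index,
# and field names are illustrative assumptions, not CAKE's actual schemas.
import boto3
from opensearchpy import OpenSearch

# DynamoDB: sub-10 ms lookup of precomputed metrics keyed by customer ID.
dynamodb = boto3.resource("dynamodb")
metrics_table = dynamodb.Table("customer_metrics")  # hypothetical table name
metrics = metrics_table.get_item(Key={"customer_id": "C-1234"}).get("Item", {})

# OpenSearch Service: k-NN semantic search over embedded account-plan passages.
opensearch = OpenSearch(
    hosts=[{"host": "your-domain-endpoint", "port": 443}],  # placeholder endpoint
    use_ssl=True,
)
query_embedding = [0.01] * 1024  # produced by an embedding model in practice
docs = opensearch.search(
    index="account-plans",  # hypothetical index name
    body={
        "size": 3,
        "query": {"knn": {"plan_embedding": {"vector": query_embedding, "k": 3}}},
    },
)

print(metrics)
print([hit["_source"].get("passage") for hit in docs["hits"]["hits"]])
```
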
Data pipeline and continuous refresh

The CAKE offline data pipeline is a batch process that runs on a scheduled cadence to keep the serving layer synchronized with the latest business data. The pipeline architecture separates data construction from data serving, so the real-time query layer can maintain low latency while the batch pipeline handles computationally intensive aggregations and graph construction.

The Data Processing Orchestration layer coordinates transformations across multiple target databases. For each database, the pipeline performs the following steps:

• Extracts relevant data from Amazon Redshift using optimized queries
• Applies business logic transformations specific to each data store's requirements
• Loads processed data into the target database with appropriate indexes and partitioning

For Neptune, this involves extracting entity data, constructing graph nodes and edges with property attributes, and loading the graph structure with semantic relationship types. For DynamoDB, the pipeline computes aggregations and metrics, structures the data as key-value pairs optimized for customer ID lookups, and applies atomic updates to maintain consistency. For OpenSearch Service, the pipeline follows a specialized path: large documents are first exported from Amazon Redshift to Amazon S3 as Parquet files, then processed by embedding models to generate vector representations, which are finally loaded into the OpenSearch Service index with appropriate metadata for filtering and retrieval. A sketch of the export step follows.

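The export step for large documents might look like the following sketch, which uses the Redshift Data API to UNLOAD account plans to Amazon S3 as Parquet. The cluster, database, user, table, bucket, and IAM role names are placeholders, not CAKE's actual resources.

```python
# Sketch of the export step: UNLOAD large documents from Amazon Redshift to
# Amazon S3 as Parquet via the Redshift Data API. All identifiers are placeholders.
import boto3

redshift_data = boto3.client("redshift-data")

unload_sql = """
UNLOAD ('SELECT customer_id, account_plan_text FROM sales.account_plans')
TO 's3://cake-offline-pipeline/account-plans/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
FORMAT AS PARQUET;
"""

response = redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",  # placeholder cluster
    Database="analytics",                   # placeholder database
    DbUser="pipeline_user",                 # placeholder database user
    Sql=unload_sql,
)
print("Statement submitted:", response["Id"])
```
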
Engineering for production: Reliability and accuracy

When transitioning CAKE from prototype to production, we implemented several critical engineering practices to support reliability, accuracy, and trust in AI-generated insights.

Model flexibility

The Amazon Bedrock AgentCore architecture decouples the orchestration layer from the underlying LLM, allowing flexible model selection. We implemented model hopping to provide automatic fallback to alternative models when throttling occurs. This resilience happens transparently within AgentCore Runtime: detecting throttling conditions, routing requests to available models, and maintaining response quality without user-visible degradation. A simplified sketch of the fallback pattern follows.

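A simplified version of this fallback behavior is sketched below using the Bedrock Converse API. The ordered list of model IDs is illustrative, and in production the logic lives inside our agent on AgentCore Runtime rather than in application code.

```python
# Simplified sketch of model hopping: fall back to the next model ID when the
# current one is throttled. The model IDs are illustrative.
import boto3
from botocore.exceptions import ClientError

bedrock = boto3.client("bedrock-runtime")

FALLBACK_MODELS = [
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "anthropic.claude-3-haiku-20240307-v1:0",
]


def converse_with_fallback(prompt: str) -> str:
    for model_id in FALLBACK_MODELS:
        try:
            response = bedrock.converse(
                modelId=model_id,
                messages=[{"role": "user", "content": [{"text": prompt}]}],
            )
            return response["output"]["message"]["content"][0]["text"]
        except ClientError as err:
            if err.response["Error"]["Code"] == "ThrottlingException":
                continue  # hop to the next candidate model
            raise
    raise RuntimeError("All candidate models are currently throttled")


if __name__ == "__main__":
    print(converse_with_fallback("Summarize this customer's engagement trend."))
```
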
Row-level security (RLS) and data governance

Before data retrieval occurs, the RLS tool enforces row-level security based on user identity and organizational hierarchy. This security layer operates transparently to users while maintaining strict data governance:

• Sales representatives access only the customers assigned to their territories
• Regional managers view aggregated data across their regions
• Executives have broader visibility aligned with their responsibilities

The RLS tool routes queries to the appropriate data partitions and applies filters at the database query level, so security is enforced in the data layer rather than relying on application-level filtering. A minimal sketch of this allow-list pattern follows.

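The idea reduces to resolving the caller's identity to an allow-list of customer IDs and injecting that constraint into every downstream query. The sketch below is a hypothetical simplification of the production RLS tool; the territory mapping and query shapes are made up for illustration.

```python
# Minimal sketch of row-level security enforcement: resolve the caller to an
# allow-list of customer IDs and constrain every downstream query with it.
# The territory mapping and query shapes are hypothetical simplifications.

TERRITORY_ASSIGNMENTS = {
    "sales-rep-42": {"C-1234", "C-2345"},
    "regional-mgr-7": {"C-1234", "C-2345", "C-3456", "C-4567"},
}


def allowed_customers(user_id: str) -> set:
    """Return the customer IDs this user may see (empty set if none)."""
    return TERRITORY_ASSIGNMENTS.get(user_id, set())


def apply_rls(user_id: str, requested_customers: set) -> set:
    """Intersect the request with the user's allow-list before any retrieval."""
    return requested_customers & allowed_customers(user_id)


def build_metrics_query(user_id: str, requested_customers: set) -> dict:
    """Build a DynamoDB-style batch request restricted to permitted rows."""
    permitted = apply_rls(user_id, requested_customers)
    return {"Keys": [{"customer_id": cid} for cid in sorted(permitted)]}


if __name__ == "__main__":
    # C-9999 is outside the rep's territory, so only C-1234 survives the filter.
    print(build_metrics_query("sales-rep-42", {"C-1234", "C-9999"}))
```
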
Results and impact

CAKE has transformed how AWS sales teams access and act on customer intelligence. By providing instant access to unified insights through natural language queries, CAKE reduces the time spent searching for information from hours to seconds (per user surveys and feedback), helping sales representatives focus on strategic customer engagement rather than data gathering.

The multi-agent architecture delivers query responses in seconds for most queries, with the parallel execution model supporting simultaneous data retrieval from multiple sources. The knowledge graph enables sophisticated reasoning that goes beyond simple data aggregation: CAKE explains why trends occur, identifies patterns across seemingly unrelated data points, and generates recommendations grounded in business relationships. Perhaps most importantly, CAKE democratizes access to customer intelligence across the organization. Sales representatives, account managers, solutions architects, and executives interact with the same unified system, receiving consistent customer insights while appropriate security and access controls are maintained.

Conclusion

In this post, we showed how Amazon Bedrock AgentCore supports CAKE's multi-agent architecture. Building multi-agent AI systems traditionally requires significant infrastructure investment, including implementing custom agent coordination protocols, managing parallel execution frameworks, tracking conversation state, handling failure modes, and building security enforcement layers. Amazon Bedrock AgentCore reduces this undifferentiated heavy lifting by providing these capabilities as managed services within Amazon Bedrock.

Amazon Bedrock AgentCore provides the runtime infrastructure for orchestration, and the specialized data stores excel at their specific access patterns. Neptune handles relationship traversal, DynamoDB provides instant metric lookups, and OpenSearch Service supports semantic document search, while our custom agent, built on Amazon Bedrock AgentCore, coordinates these components, automatically routing queries to the right tools, executing them in parallel, synthesizing their results, and maintaining security boundaries throughout the workflow. The CAKE experience demonstrates how Amazon Bedrock AgentCore can help teams build multi-agent AI systems, shortening the process from months of infrastructure development to weeks of business logic implementation. By providing orchestration infrastructure as a managed service, Amazon Bedrock AgentCore helps teams focus on domain expertise and customer value rather than building distributed systems infrastructure from scratch.

To learn more about Amazon Bedrock AgentCore and building multi-agent AI systems, refer to the Amazon Bedrock User Guide, the Amazon Bedrock Workshop, and Amazon Bedrock Agents. For the latest news on AWS, see What's New with AWS.

Acknowledgments

We extend our sincere gratitude to our executive sponsors and mentors whose vision and guidance made this initiative possible: Aizaz Manzar, Director of AWS Global Sales; Ali Imam, Head of Startup Segment; and Akhand Singh, Head of Data Engineering.

We also thank the dedicated team members whose technical expertise and contributions were instrumental in bringing this product to life: Aswin Palliyali Venugopalan, Software Development Manager; Alok Singh, Senior Software Development Engineer; Muruga Manoj Gnanakrishnan, Principal Data Engineer; Sai Meka, Machine Learning Engineer; Bill Tran, Data Engineer; and Rui Li, Applied Scientist.


About the authors

Monica Jain is a Senior Technical Product Manager at AWS Global Sales and an analytics leader driving AI-powered sales intelligence at scale. She leads the development of generative AI and ML-powered data products, including knowledge graphs, AI-augmented analytics, natural language query systems, and recommendation engines, that improve seller productivity and decision-making. Her work enables AWS executives and sellers worldwide to access real-time insights and accelerate data-driven customer engagement and revenue growth.

M. Umar Javed is a Senior Applied Scientist at AWS, with over 8 years of experience across academia and industry and a PhD in ML theory. At AWS, he builds production-grade generative AI and machine learning solutions, with work spanning multi-agent LLM architectures, research on small language models, knowledge graphs, recommendation systems, reinforcement learning, and multi-modal deep learning. Prior to AWS, Umar contributed to ML research at NREL, CISCO, Oxford, and UCSD. He is a recipient of the ECEE Excellence Award (2021) and contributed to two Donald P. Eckman Awards (2021, 2023).

Damien Forthomme is a Senior Applied Scientist at AWS, leading a Data Science team in AWS Sales, Marketing, and Global Services (SMGS). With more than 10 years of experience and a PhD in Physics, he focuses on using and building advanced machine learning and generative AI tools to surface the right data to the right people at the right time. His work spans initiatives such as forecasting, recommendation systems, creation of core foundational datasets, and building generative AI products that enhance sales productivity for the organization.

Mihir Gadgil is a Senior Data Engineer in AWS Sales, Marketing, and Global Services (SMGS), specializing in enterprise-scale data solutions and generative AI applications. With over 9 years of experience and a Master's in Information Technology & Management, he focuses on building robust data pipelines, complex data modeling, and ETL/ELT processes. His expertise drives business transformation through innovative data engineering solutions and advanced analytics capabilities.

Sujit Narapareddy, Head of Data & Analytics at AWS Global Sales, is a technology leader driving global business transformation. He leads data product and platform teams that power the AWS go-to-market through AI-augmented analytics and intelligent automation. With a proven track record in enterprise solutions, he has transformed sales productivity, data governance, and operational excellence. Previously at JPMorgan Chase Business Banking, he shaped next-generation FinTech capabilities through data innovation.

Norman Braddock, Senior Manager of AI Product Management at AWS, is a product leader driving the transformation of business intelligence through agentic AI. He leads the Analytics & Insights Product Management team within Sales, Marketing, and Global Services (SMGS), delivering products that bridge AI model performance with measurable business impact. With a background spanning procurement, manufacturing, and sales operations, he combines deep operational expertise with product innovation to shape the future of autonomous business management.
