    Democratizing enterprise intelligence: BGL’s journey with Claude Agent SDK and Amazon Bedrock AgentCore

By Oliver Chambers | February 4, 2026


This post is co-written with James Luo from BGL.

Data analysis is emerging as a high-impact use case for AI agents. According to Anthropic's 2026 State of AI Agents Report, 60% of organizations rank data analysis and report generation as their most impactful agentic AI applications, and 65% of enterprises cite it as a top priority. In practice, businesses face two common challenges:

• Business users without technical knowledge rely on data teams for queries, which is time-consuming and creates a bottleneck.
• Traditional text-to-SQL solutions don't deliver consistent and accurate results.

Like many other businesses, BGL faced similar challenges with its data analysis and reporting use cases. BGL is a leading provider of self-managed superannuation fund (SMSF) administration solutions that help individuals manage the complex compliance and reporting of their own or a client's retirement savings, serving over 12,700 businesses across 15 countries. BGL's solution processes complex compliance and financial data through over 400 analytics tables, each representing a specific business domain, such as aggregated customer feedback, investment performance, compliance monitoring, and financial reporting. BGL's customers and employees need to find insights in that data, for example, Which products had the most negative feedback last quarter? or Show me investment trends for high-net-worth accounts. Working with Amazon Web Services (AWS), BGL built an AI agent using Claude Agent SDK hosted on Amazon Bedrock AgentCore. With the AI agent, business users can retrieve analytic insights through natural language while aligning with the security and compliance requirements of financial services, including session isolation and identity-based access controls.

In this blog post, we explore how BGL built its production-ready AI agent using Claude Agent SDK and Amazon Bedrock AgentCore. We cover three key aspects of BGL's implementation:

• Why building a robust data foundation is critical for reliable AI agent-based text-to-SQL solutions
• How BGL designed its AI agent using Claude Agent SDK for code execution, context management, and domain-specific expertise
• How BGL used AgentCore to provide the right stateful execution sessions in production for a more secure, scalable AI agent

Establishing strong data foundations for an AI agent-based text-to-SQL solution

When engineering teams implement an AI agent for analytics use cases, a common anti-pattern is to have the agent handle everything, including understanding database schemas, transforming complex datasets, figuring out business logic for analyses, and interpreting results. The AI agent is then likely to produce inconsistent results and fail by joining tables incorrectly, missing edge cases, or producing incorrect aggregations.

BGL used its existing, mature big data solution, powered by Amazon Athena and dbt Labs, to process and transform terabytes of raw data across numerous business data sources. The extract, transform, and load (ETL) process builds analytic tables, and each table answers a specific class of business questions. These tables are aggregated, denormalized datasets (with metrics and summaries) that serve as a business-ready single source of truth for business intelligence (BI) tools, AI agents, and applications. For details on how to build a serverless data transformation architecture with Athena and dbt, see How BMW Group built a serverless terabyte-scale data transformation architecture with dbt and Amazon Athena.

The AI agent's role is not to replicate the complex data transformation that happens in the data system; instead, it focuses on interpreting the user's natural language questions, translating them, and generating SQL SELECT queries against the well-structured analytic tables. When needed, the AI agent writes Python scripts to further process results and generate visualizations. This separation of concerns significantly reduces the risk of hallucination and provides several key benefits (a short illustrative sketch follows the list):

• Consistency: The data system handles complex business logic in a more deterministic way: joins, aggregations, and business rules are validated by the data team ahead of time. The AI agent's task becomes straightforward: interpret questions and generate basic SELECT queries against these tables.
• Performance: Analytic tables are pre-aggregated and optimized with proper indexes. The agent performs basic queries rather than complex joins across raw tables, resulting in faster response times even for large datasets.
• Maintainability and governance: Business logic resides in the data system, not in the AI's context window. This helps ensure that the AI agent relies on the same single source of truth as other consumers, such as BI tools. If a business rule changes, the data team updates the data transformation logic in dbt, and the AI agent automatically consumes the updated analytic tables that reflect those changes.
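To make the contrast concrete, here is a small hypothetical illustration of the pattern. The table and column names are invented for this sketch and are not BGL's actual schema; the point is the difference between reconstructing business logic at query time and selecting from a pre-built analytic table.

```python
# Hypothetical illustration: the table names and columns below are assumptions
# for this sketch, not BGL's actual schema.

question = "Which products had the most negative feedback last quarter?"

# Without a data foundation, the agent would have to reconstruct business logic
# itself: joins across raw tables, sentiment rules, date arithmetic, and so on.
raw_table_query = """
SELECT p.product_name, COUNT(*) AS negative_feedback
FROM raw_feedback f
JOIN raw_products p ON p.product_id = f.product_id
WHERE f.sentiment_score < 0
  AND f.created_at >= DATE_ADD('month', -3, CURRENT_DATE)
GROUP BY p.product_name
ORDER BY negative_feedback DESC
"""

# With pre-built analytic tables, the agent only needs a basic SELECT; sentiment
# classification and aggregation were already validated upstream in dbt.
analytic_table_query = """
SELECT product_name, negative_feedback_count
FROM customer_feedback_summary
WHERE feedback_quarter = '2025-Q4'
ORDER BY negative_feedback_count DESC
LIMIT 10
"""
```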

“Many people think the AI agent is so powerful that they can skip building the data platform; they want the agent to do everything. But you can't achieve consistent and accurate results that way. Each layer should resolve complexity at the appropriate level.”

– James Luo, BGL Head of Data and AI

How BGL builds AI agents using Claude Agent SDK with Amazon Bedrock

BGL's development team has been using Claude Code powered by Amazon Bedrock as its AI coding assistant. This integration uses temporary, session-based access to mitigate credential exposure, and integrates with existing identity providers to align with financial services compliance requirements. For integration details, see Guidance for Claude Code with Amazon Bedrock.

Through its daily use of Claude Code, BGL recognized that its core capabilities extend beyond coding. BGL used its ability to reason through complex problems, write and execute code, and interact with files and systems autonomously. Claude Agent SDK packages the same agentic capabilities into a Python and TypeScript SDK, so developers can build custom AI agents on top of Claude Code. For BGL, this meant they could build an analytics AI agent with the following capabilities (a minimal setup sketch follows the list):

• Code execution: The agent writes and runs Python code to process datasets returned from analytic tables and generate visualizations
• Automatic context management: Long-running sessions don't overwhelm token limits
• Sandboxed execution: Production-grade isolation and permission controls
• Modular memory and knowledge: A CLAUDE.md file for project context and Agent Skills for product-line domain-specific expertise
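The following minimal sketch shows what standing up such an agent can look like with the Python SDK. It assumes the claude_agent_sdk package's query and ClaudeAgentOptions interface (including the setting_sources option for loading CLAUDE.md) and the CLAUDE_CODE_USE_BEDROCK environment variable for routing model calls through Amazon Bedrock; the working directory, region, and tool list are illustrative assumptions, not BGL's configuration.

```python
# Minimal sketch (not BGL's code): run one analytics request through the Claude
# Agent SDK, routed via Amazon Bedrock. Assumes `pip install claude-agent-sdk`
# and AWS credentials with Bedrock access in the environment.
import asyncio
import os

from claude_agent_sdk import ClaudeAgentOptions, query

# Route the agent's model calls through Amazon Bedrock instead of the Anthropic API.
os.environ["CLAUDE_CODE_USE_BEDROCK"] = "1"
os.environ.setdefault("AWS_REGION", "ap-southeast-2")  # illustrative region

options = ClaudeAgentOptions(
    cwd="/workspace/analytics-agent",         # project root containing CLAUDE.md and skills/
    setting_sources=["project"],              # load the project's CLAUDE.md context
    allowed_tools=["Read", "Write", "Bash"],  # read metadata, write outputs, run scripts
    permission_mode="acceptEdits",            # auto-approve file edits inside the workspace
    max_turns=20,
)

async def main() -> None:
    prompt = "Which products had the most negative feedback last quarter?"
    async for message in query(prompt=prompt, options=options):
        print(message)  # stream assistant, tool, and result messages as they arrive

asyncio.run(main())
```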

Why code execution matters for data analytics

Analytics queries often return thousands of rows and sometimes span megabytes of data. Standard tool-use, function-calling, and Model Context Protocol (MCP) patterns often pass retrieved data straight into the context window, which quickly reaches model context window limits. BGL implemented a different approach: the agent writes SQL to query Athena, then writes Python code to process the CSV file results directly in its file system. This enables the agent to handle large result sets, perform complex aggregations, and generate charts without reaching context window limits. You can learn more about these code execution patterns in Code execution with MCP: Building more efficient agents.
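The following is a minimal sketch of the kind of helper script the agent can write and execute for this pattern: it submits a query to Athena via boto3, downloads the CSV result to the local file system, and aggregates it with pandas, never placing the raw rows in the model's context. The database, S3 results bucket, table, and column names are placeholder assumptions.

```python
# Sketch of the code-execution pattern (placeholder names throughout): run an
# Athena query, pull the CSV result to local disk, and analyze it with pandas.
import time

import boto3
import pandas as pd

athena = boto3.client("athena")
s3 = boto3.client("s3")

SQL = "SELECT product_name, negative_feedback_count FROM customer_feedback_summary"

# 1. Submit the query; Athena writes the result as a CSV file to S3.
execution_id = athena.start_query_execution(
    QueryString=SQL,
    QueryExecutionContext={"Database": "analytics"},                          # assumed database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},   # assumed bucket
)["QueryExecutionId"]

# 2. Poll until the query finishes.
while True:
    state = athena.get_query_execution(QueryExecutionId=execution_id)
    status = state["QueryExecution"]["Status"]["State"]
    if status in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)
if status != "SUCCEEDED":
    raise RuntimeError(f"Athena query ended in state {status}")

# 3. Download the CSV result file to the agent's local file system.
output_uri = state["QueryExecution"]["ResultConfiguration"]["OutputLocation"]
bucket, key = output_uri.removeprefix("s3://").split("/", 1)
local_path = "/tmp/query_result.csv"
s3.download_file(bucket, key, local_path)

# 4. Process locally with pandas instead of pushing rows into the context window.
df = pd.read_csv(local_path)
top10 = df.sort_values("negative_feedback_count", ascending=False).head(10)
top10.to_csv("/tmp/top10_negative_feedback.csv", index=False)
print(top10.to_string(index=False))
```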

Modular knowledge architecture

To handle BGL's diverse product lines and complex domain knowledge, the implementation uses a modular approach with two key configuration types that work together seamlessly.

CLAUDE.md (project context)

The CLAUDE.md file provides the agent with global context: the project structure, environment configuration (test, production, and so on), and, critically, how to execute SQL queries. It defines which folders store intermediate results and final outputs, making sure files land in a defined path that users can access.
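As an illustration, a CLAUDE.md for an analytics agent like this might look as follows. Every folder name and convention in the sketch is a hypothetical assumption, not BGL's actual file.

```python
from pathlib import Path

# Hypothetical CLAUDE.md for an analytics agent; folder names and conventions
# below are illustrative assumptions, not BGL's actual project context.
CLAUDE_MD = """\
# Analytics agent: project context

## Project structure
- skills/     one folder per product line (SKILL.md plus references/)
- data/       metadata for the analytic tables (schemas, column descriptions)
- workspace/  intermediate files such as downloaded CSV query results
- outputs/    final charts and refined datasets that users can access

## Environment
- ENVIRONMENT is `test` or `production`; it selects the Athena database and the
  S3 location for query results.

## How to execute SQL
- Generate SELECT-only queries against the analytic tables described in data/.
- Run them with scripts/run_athena_query.py and save each CSV result in workspace/.
- Never generate DELETE, UPDATE, or DROP statements.

## Outputs
- Save every chart or refined dataset in outputs/ so users can retrieve it.
"""

Path("CLAUDE.md").write_text(CLAUDE_MD)
```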

SKILL.md (product domain expertise)

BGL organizes its agent domain knowledge by product line using SKILL.md configuration files. Each skill acts as a specialized data analyst for a specific product. For example, the BGL CAS 360 product has a skill called CAS360 Data Analyst agent, which handles company and trust administration with ASIC compliance alignment, while BGL's Simple Fund 360 product has a skill called Simple Fund 360 Data Analyst agent, which is equipped with SMSF administration and compliance-related domain skills. A SKILL.md file defines three things:

• When to trigger: What kinds of questions should activate this skill
• Which tables to use or map: References to the relevant analytic tables in the data folder (defined in the project's CLAUDE.md)
• How to handle complex scenarios: Step-by-step guidance for multi-table queries or specific business questions if required

By using SKILL.md files, the agent can dynamically discover and load the right skill to gain domain-specific expertise for the corresponding tasks.

• Unified context: When a skill is triggered, Claude Agent SDK dynamically merges its specialized instructions with the global CLAUDE.md file into a single prompt. This lets the agent simultaneously apply project-wide standards (for example, always save to disk) while using domain-specific knowledge (such as mapping user inquiries to a group of tables).
• Progressive discovery: Not all skills need to be loaded into the context window at once. The agent first reads the query to determine which skill should be triggered. It loads the skill body and references to understand which analytic table's metadata is required. It then further explores the corresponding data folders. This keeps context usage efficient while providing comprehensive coverage.
• Iterative refinement: If the AI agent is unable to handle some business questions because of missing domain knowledge, the team gathers feedback from users, identifies the gaps, and adds new knowledge to existing skills through a human-in-the-loop process, so skills are updated and refined iteratively.

Figure: Agent virtual machine and skills layout. The agent runs Bash and Python scripting environments over a file system in which skills are organized under skills/[skill name]/ (for example, skills/sf360/, skills/cas360/, and skills/smartdocs360/), each containing a SKILL.md file and a references subdirectory.

As shown in the preceding figure, agent skills are organized per product line. Each product folder contains a SKILL.md definition file and a references directory with additional domain knowledge and supporting materials that the agent loads on demand. A minimal scaffold of this layout is sketched below.
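The following sketch scaffolds one hypothetical skill folder in that layout. The skill name, trigger description, table references, and file names are assumptions for this example, not BGL's actual skills; the SKILL.md front matter (name and description) follows the Agent Skills convention.

```python
# Sketch: scaffold one product-line skill in the layout shown above.
# The skill name, trigger description, and table references are hypothetical.
from pathlib import Path

skill_dir = Path("skills/sf360")
(skill_dir / "references").mkdir(parents=True, exist_ok=True)

(skill_dir / "SKILL.md").write_text("""\
---
name: simple-fund-360-data-analyst
description: Use this skill for SMSF administration and compliance questions
  about Simple Fund 360 data (fund balances, contributions, compliance status).
---

## Tables to use
- data/fund_balance_summary.md
- data/contribution_caps_summary.md

## Complex scenarios
- For questions that span funds and members, join on fund_id as documented in
  references/multi_table_queries.md.
""")

# Supporting material the agent loads on demand.
(skill_dir / "references" / "multi_table_queries.md").write_text(
    "Step-by-step guidance for multi-table queries goes here.\n"
)
```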

For details about Anthropic Agent Skills, see the Anthropic blog post, Equipping agents for the real world with Agent Skills.

High-level solution architecture

To deliver a more secure and scalable text-to-SQL experience, BGL uses Amazon Bedrock AgentCore to host the Claude Agent SDK while keeping data transformation in the existing big data solution.

Figure: High-level solution architecture. dbt and Amazon Athena transform data stored in Amazon S3 into analytic tables. A Claude agent hosted on the AgentCore runtime receives user requests from Slack, calls Amazon Bedrock for model inference, queries Athena, reads the query results from Amazon S3, and returns the final response to the user in Slack.

The preceding figure illustrates the high-level architecture and workflow. The analytic tables are pre-built daily using Athena and dbt, and serve as the single source of truth. A typical user interaction flows through the following stages:

1. User request: A user asks a business question in Slack (for example, Which products had the most negative feedback last quarter?).
2. Schema discovery and SQL generation: The agent identifies relevant tables using skills and writes SQL queries.
3. SQL safety validation: To help prevent unintended data modification, a safety layer allows only SELECT queries and blocks DELETE, UPDATE, and DROP operations (see the validation sketch after this list).
4. Query execution: Athena executes the query and stores results in Amazon Simple Storage Service (Amazon S3).
5. Result download: The agent downloads the resulting CSV file to the file system on AgentCore, completely bypassing the context window to avoid token limits.
6. Analysis and visualization: The agent writes Python code to analyze the CSV file and generate visualizations or refined datasets depending on the business question.
7. Response delivery: Final insights and visualizations are formatted and returned to the user in Slack.
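A minimal sketch of what such a safety layer can look like is shown below. This is an assumption about the shape of the check, not BGL's implementation; a production validator would typically rely on a proper SQL parser rather than keyword matching.

```python
import re

# Statement types that must never reach Athena from the agent (illustrative list).
BLOCKED_KEYWORDS = ("INSERT", "UPDATE", "DELETE", "DROP", "ALTER", "CREATE", "MERGE", "TRUNCATE")

def validate_sql(sql: str) -> str:
    """Allow a single SELECT (or WITH ... SELECT) statement; reject anything else."""
    statements = [s.strip() for s in sql.split(";") if s.strip()]
    if len(statements) != 1:
        raise ValueError("Exactly one SQL statement is allowed")
    statement = statements[0]
    if not re.match(r"^\s*(SELECT|WITH)\b", statement, re.IGNORECASE):
        raise ValueError("Only SELECT queries are allowed")
    for keyword in BLOCKED_KEYWORDS:
        if re.search(rf"\b{keyword}\b", statement, re.IGNORECASE):
            raise ValueError(f"Blocked keyword detected: {keyword}")
    return statement

validate_sql("SELECT product_name FROM customer_feedback_summary LIMIT 10")  # passes
# validate_sql("DROP TABLE customer_feedback_summary")  # raises ValueError
```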

    Why use Amazon Bedrock AgentCore to host Claude Agent SDK

Deploying an AI agent that executes arbitrary Python code requires significant infrastructure considerations. For instance, you need isolation to help ensure that there's no cross-session access to data or credentials. Amazon Bedrock AgentCore provides fully managed, stateful execution sessions; each session has its own isolated microVM with separate CPU, memory, and file system. When a session ends, the microVM terminates fully and sanitizes memory, helping to ensure no remnants persist for future sessions. BGL found this service especially valuable for the following reasons (a minimal hosting sketch appears at the end of this section):

• Stateful execution sessions: AgentCore maintains session state for up to 8 hours. Users can have ongoing conversations with the agent, referring back to earlier queries without losing context.
• Framework flexibility: It's framework-agnostic. It supports deployment of AI agents built with frameworks such as Strands Agents SDK, Claude Agent SDK, LangGraph, and CrewAI with a few lines of code.
• Aligned with security best practices: It provides session isolation, VPC support, and AWS Identity and Access Management (IAM) or OAuth-based identity to facilitate governed, compliance-aligned agent operations at scale.
• System integration: This is a forward-looking consideration.

“There's Gateway, Memory, Browser tools, a whole ecosystem built around it. I know AWS is investing in this direction, so everything we build now can integrate with these services in the future.”

– James Luo, BGL Head of Data and AI

BGL is already planning to integrate AgentCore Memory for storing user preferences and query patterns.
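As a rough illustration of the hosting model, the following sketch wraps a Claude Agent SDK call behind an AgentCore Runtime entrypoint. It assumes the AgentCore Python SDK's BedrockAgentCoreApp entrypoint pattern (including async handlers), the claude_agent_sdk query/ClaudeAgentOptions interface, and the CLAUDE_CODE_USE_BEDROCK environment variable; the payload shape and working directory are placeholders, not BGL's implementation.

```python
# Hosting sketch (placeholder paths and payload shape): serve the Claude Agent SDK
# behind an Amazon Bedrock AgentCore Runtime entrypoint.
# Assumes `pip install bedrock-agentcore claude-agent-sdk`.
import os

from bedrock_agentcore.runtime import BedrockAgentCoreApp
from claude_agent_sdk import ClaudeAgentOptions, query

os.environ["CLAUDE_CODE_USE_BEDROCK"] = "1"  # route model calls through Amazon Bedrock

app = BedrockAgentCoreApp()

options = ClaudeAgentOptions(
    cwd="/app/analytics-agent",               # project root containing CLAUDE.md and skills/
    allowed_tools=["Read", "Write", "Bash"],   # read metadata, write outputs, run scripts
    permission_mode="acceptEdits",
)

@app.entrypoint
async def invoke(payload: dict) -> dict:
    """Handle one user turn inside the session's isolated microVM."""
    prompt = payload.get("prompt", "")
    last_message = ""
    async for message in query(prompt=prompt, options=options):
        last_message = str(message)  # a real handler would extract text/result blocks
    return {"result": last_message}

if __name__ == "__main__":
    app.run()  # serves the AgentCore Runtime HTTP contract
```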

Results and impact

For BGL's more than 200 employees, this represents a significant shift in how they extract business intelligence. Product managers can now validate hypotheses immediately without waiting for the data team. Compliance teams can spot risk trends without learning SQL. Customer success managers can pull account-specific analytics in real time during client calls. This democratization of data access helps transform analytics from a bottleneck into a competitive advantage, enabling faster decision-making across the organization while freeing the data team to focus on strategic initiatives rather than one-off query requests.

    Conclusion and key takeaways

BGL's journey demonstrates how combining a strong data foundation with agentic AI can democratize business intelligence. By using Amazon Bedrock AgentCore and the Claude Agent SDK, BGL built a more secure and scalable AI agent that empowers employees to tap into their data to answer business questions. Here are some key takeaways:

• Invest in a strong data foundation: Accuracy begins with a strong data foundation. By using the data system and data pipeline to handle complex business logic (joins and aggregations), the agent can focus on basic, reliable logic.
• Organize knowledge by domain: Use Agent Skills to encapsulate domain-specific expertise (for example, Tax Law or Investment Performance). This keeps the context window clean and manageable. Additionally, establish a feedback loop: continuously monitor user queries to identify gaps and iteratively update these skills.
• Use code execution for data processing: Avoid having an agent process large datasets in the large language model (LLM) context. Instead, instruct the agent to write and execute code to filter, aggregate, and visualize data.
• Choose stateful, session-based infrastructure to host the agent: Conversational analytics requires persistent context. Amazon Bedrock AgentCore simplifies this by providing built-in state persistence (up to 8-hour sessions), alleviating the need to build custom state-handling layers on top of stateless compute (see the sketch after this list).
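From the caller's side, session continuity comes down to reusing one runtime session ID across invocations. The sketch below assumes boto3's bedrock-agentcore client and its invoke_agent_runtime operation; the runtime ARN, region, and payload shape are placeholders for illustration only.

```python
# Sketch (placeholder ARN and payload shape): two calls that share one
# runtimeSessionId land in the same stateful AgentCore session, so the second
# question can refer back to the first result.
import json
import uuid

import boto3

client = boto3.client("bedrock-agentcore")
session_id = str(uuid.uuid4())  # reuse this ID for the whole conversation

for prompt in (
    "Which products had the most negative feedback last quarter?",
    "Now chart the monthly trend for the top three of those products.",
):
    response = client.invoke_agent_runtime(
        agentRuntimeArn="arn:aws:bedrock-agentcore:ap-southeast-2:123456789012:runtime/analytics-agent",
        runtimeSessionId=session_id,
        payload=json.dumps({"prompt": prompt}),
    )
    # Response handling depends on what your entrypoint returns; omitted here.
```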

If you're ready to build similar capabilities for your organization, get started by exploring the Claude Agent SDK and a short demo of Deploying Claude Agent SDK on Amazon Bedrock AgentCore Runtime. If you have a similar use case or need help designing your architecture, reach out to your AWS account team.

About the authors

Dustin Liu is a Solutions Architect at AWS, focused on supporting financial services and insurance (FSI) startups and SaaS companies. He has a diverse background spanning data engineering, data science, and machine learning, and he's passionate about leveraging AI/ML to drive innovation and business transformation.

Melanie Li, PhD, is a Senior Generative AI Specialist Solutions Architect at AWS based in Sydney, Australia, where her focus is on working with customers to build solutions leveraging state-of-the-art AI and machine learning tools. She has been actively involved in multiple generative AI initiatives across APJ, harnessing the power of large language models (LLMs). Prior to joining AWS, Dr. Li held data science roles in the financial and retail industries.

Frank Tan is a Senior Solutions Architect at AWS with a particular interest in applied AI. Coming from a product development background, he's driven to bridge technology and business success.

James Luo is Head of Data & AI at BGL Corporate Solutions, a world-leading provider of compliance software for accountants and financial professionals. Since joining BGL in 2008, James has progressed from developer to architect to his current leadership role, spearheading the Data Platform and Roni AI Agent initiatives. In 2015, he formed BGL's BigData team, implementing the first deep learning model in the SMSF industry (2017), which now processes 200+ million transactions annually. He has spoken at Big Data & AI World and AWS Summit, and BGL's AI work has been featured in multiple AWS case studies.

Dr. James Bland is a technology leader with 30+ years driving AI transformation at scale. He holds a PhD in Computer Science with a machine learning focus and leads strategic AI initiatives at AWS, enabling enterprises to adopt AI-powered development lifecycles and agentic capabilities. Dr. Bland spearheaded the AI-SDLC initiative, authored comprehensive guides on generative AI in the SDLC, and helps enterprises architect production-scale AI solutions that fundamentally transform how organizations operate in an AI-first world.
