    Secure distributed logging in scalable multi-account deployments using Amazon Bedrock and LangChain

    By Oliver Chambers | May 21, 2025


    Data privacy is a critical challenge for software companies that provide services in the data management space. If they want customers to trust them with their data, software companies need to show and prove that their customers' data will remain confidential and within controlled environments. Some companies go to great lengths to maintain confidentiality, often adopting multi-account architectures, where each customer has their data in a separate AWS account. By isolating data at the account level, software companies can enforce strict security boundaries, help prevent cross-customer data leaks, and support adherence to industry regulations such as HIPAA or GDPR with minimal risk.

    Multi-account deployment represents the gold standard for cloud data privacy, allowing software companies to make sure customer data stays segregated even at massive scale, with AWS accounts providing security isolation boundaries as highlighted in the AWS Well-Architected Framework. Software companies increasingly adopt generative AI capabilities like Amazon Bedrock, which provides fully managed foundation models with comprehensive security features. However, managing a multi-account deployment powered by Amazon Bedrock introduces unique challenges around access control, quota management, and operational visibility that could complicate its implementation at scale. Constantly requesting and monitoring quota for invoking foundation models on Amazon Bedrock becomes a challenge when the number of AWS accounts reaches double digits. One way to simplify operations is to configure a dedicated operations account to centralize management while data from customers transits through managed services and is stored at rest only in their respective customer accounts. By centralizing operations in a single account while keeping data in separate accounts, software companies can simplify the management of model access and quotas while maintaining strict data boundaries and security isolation.

    In this post, we present a solution for securing distributed logging in multi-account deployments using Amazon Bedrock and LangChain.

    Challenges in logging with Amazon Bedrock

    Observability is crucial for effective AI implementations: organizations can't optimize what they don't measure. Observability can help with performance optimization, cost management, and model quality assurance. Amazon Bedrock offers built-in invocation logging to Amazon CloudWatch or Amazon Simple Storage Service (Amazon S3) through a configuration in the AWS Management Console, and individual logs can be routed to different CloudWatch accounts with cross-account sharing, as illustrated in the following diagram.

    Routing logs to each customer account presents two challenges: logs containing customer data would be stored in the operations account for the user-defined retention period (at least 1 day), which might not comply with strict privacy requirements, and CloudWatch has a limit of five monitoring accounts (customer accounts). With these limitations, how can organizations build a secure logging solution that scales across multiple tenants and customers?

    In this post, we present a solution for enabling distributed logging for Amazon Bedrock in multi-account deployments. The objective of this design is to provide robust AI observability while maintaining strict privacy boundaries for data at rest by keeping logs only within the customer accounts. This is achieved by moving logging to the customer accounts rather than invoking it from the operations account. By configuring the logging instructions in each customer's account, software companies can centralize AI operations while enforcing data privacy, keeping customer data and logs within strict data boundaries in each customer's account. This architecture uses AWS Security Token Service (AWS STS) to allow customer accounts to assume dedicated AWS Identity and Access Management (IAM) roles in the operations account while invoking Amazon Bedrock. For logging, this solution uses LangChain callbacks to capture invocation metadata directly in each customer's account, making the entire process in the operations account memoryless. Callbacks can be used to log token usage, performance metrics, and the overall quality of the model in response to customer queries. The proposed solution balances centralized AI service management with strong data privacy, making sure customer interactions remain within their dedicated environments.

    Solution overview

    The complete flow of model invocations on Amazon Bedrock is illustrated in the following figure. The operations account is the account where the Amazon Bedrock permissions will be managed using an identity-based policy, where the Amazon Bedrock client will be created, and where the IAM role with the correct permissions will exist. Every customer account will assume a different IAM role in the operations account. The customer accounts are where customers will access the software or application. Each such account will contain an IAM role that will assume the corresponding role in the operations account to allow Amazon Bedrock invocations. It is important to note that it is not mandatory for these two accounts to exist in the same AWS organization. In this solution, we use an AWS Lambda function to invoke models from Amazon Bedrock, and use LangChain callbacks to write invocation data to CloudWatch. Without loss of generality, the same principle can be applied to other forms of compute such as servers on Amazon Elastic Compute Cloud (Amazon EC2) instances or managed containers on Amazon Elastic Container Service (Amazon ECS).

    The sequence of steps in a model invocation is:

    1. The process begins when the IAM role in the customer account assumes the role in the operations account, allowing it to access the Amazon Bedrock service. This is achieved through the AWS STS AssumeRole API operation, which establishes the necessary cross-account relationship.
    2. The operations account verifies that the requesting principal (IAM role) from the customer account is allowed to assume the role it's targeting. This verification is based on the trust policy attached to the IAM role in the operations account. This step makes sure that only authorized customer accounts and roles can access the centralized Amazon Bedrock resources.
    3. After trust relationship verification, temporary credentials (access key ID, secret access key, and session token) with specified permissions are returned to the customer account's IAM execution role.
    4. The Lambda function in the customer account invokes the Amazon Bedrock client in the operations account. Using the temporary credentials, the customer account's IAM role sends prompts to Amazon Bedrock through the operations account, consuming the operations account's model quota.
    5. After the Amazon Bedrock client response returns to the customer account, LangChain callbacks log the response metrics directly into CloudWatch in the customer account.
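    Steps 1 through 4 above can be sketched with Boto3 as follows. This is a minimal sketch: the role ARN, session name, and external ID are placeholders for illustration, not values from the post.

```python
def get_bedrock_client(role_arn: str, external_id: str, region: str = "us-east-1"):
    """Assume the per-customer role in the operations account (steps 1-3)
    and return a Bedrock Runtime client backed by the temporary
    credentials (step 4)."""
    import boto3  # imported lazily so the sketch stays self-contained

    sts = boto3.client("sts")
    response = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName="customer-bedrock-invocation",
        ExternalId=external_id,
        DurationSeconds=900,  # shortest allowed session duration
    )
    creds = response["Credentials"]
    # Invocations through this client consume the operations account's quota.
    return boto3.client(
        "bedrock-runtime",
        region_name=region,
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```

    Because the credentials expire, production code would refresh them before each batch of invocations rather than caching the client indefinitely.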

    Enabling cross-account access with IAM roles

    The key idea in this solution is that there will be one IAM role per customer in the operations account. The software company will manage this role and assign permissions to define aspects such as which models can be invoked, in which AWS Regions, and what quotas they are subject to. This centralized approach significantly simplifies the management of model access and permissions, especially when scaling to hundreds or thousands of customers. For enterprise customers with multiple AWS accounts, this pattern is particularly useful because it allows the software company to configure a single role that can be assumed by many of the customer's accounts, providing consistent access policies and simplifying both permission management and cost tracking. Through carefully crafted trust relationships, the operations account maintains control over who can access what, while still enabling the flexibility needed in complex multi-account environments.

    The IAM role can have multiple policies attached. For example, the following policy allows a certain customer to invoke some models:

    {
        "Version": "2012-10-17",
        "Statement": {
            "Sid": "AllowInference",
            "Effect": "Allow",
            "Action": [
                "bedrock:Converse",
                "bedrock:ConverseStream",
                "bedrock:GetAsyncInvoke",
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
                "bedrock:StartAsyncInvoke"
            ],
            "Resource": "arn:aws:bedrock:*::foundation-model/"
        }
    }
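    As a rough illustration of what this statement grants, the naive checker below renders the statement as a Python dict and tests action membership. This is illustration only, not a real IAM evaluator: actual IAM evaluation also matches resources, conditions, wildcards, and explicit denies.

```python
# The identity-based statement above, as a Python dict (Resource kept as the
# placeholder shown in the post).
POLICY = {
    "Version": "2012-10-17",
    "Statement": {
        "Sid": "AllowInference",
        "Effect": "Allow",
        "Action": [
            "bedrock:Converse",
            "bedrock:ConverseStream",
            "bedrock:GetAsyncInvoke",
            "bedrock:InvokeModel",
            "bedrock:InvokeModelWithResponseStream",
            "bedrock:StartAsyncInvoke",
        ],
        "Resource": "arn:aws:bedrock:*::foundation-model/",
    },
}

def statement_allows(statement: dict, action: str) -> bool:
    """Naive check: does this single Allow statement list the action?"""
    return statement["Effect"] == "Allow" and action in statement["Action"]
```

    So `statement_allows(POLICY["Statement"], "bedrock:InvokeModel")` holds, while a management action such as `bedrock:CreateModelCustomizationJob` is not granted by this role.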

    The control would be implemented at the trust relationship level, where we would only allow some accounts to assume that role. For example, in the following script, the trust relationship allows the role for customer 1 to be assumed only by the allowed AWS account when the ExternalId matches a specified value, with the goal of preventing the confused deputy problem:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AmazonBedrockModelInvocationCustomer1",
                "Effect": "Allow",
                "Principal": {
                    "Service": "bedrock.amazonaws.com"
                },
                "Action": "sts:AssumeRole",
                "Condition": {
                    "StringEquals": {
                        "aws:SourceAccount": "",
                        "sts:ExternalId": ""
                    },
                    "ArnLike": {
                        "aws:SourceArn": "arn:aws:bedrock:::*"
                    }
                }
            }
        ]
    }
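    When roles are provisioned per customer, the trust policy can be rendered programmatically. The sketch below is an assumption, not code from the post: it uses an AWS account principal (the common cross-account AssumeRole pattern) with an `sts:ExternalId` condition, and the account ID and external ID are placeholders supplied by the operations team.

```python
import json

def build_trust_policy(customer_account_id: str, external_id: str) -> dict:
    """Render a per-customer trust policy for a role in the operations
    account, restricting sts:AssumeRole to one customer account and one
    external ID (confused deputy protection)."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AmazonBedrockModelInvocation",
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{customer_account_id}:root"},
                "Action": "sts:AssumeRole",
                "Condition": {"StringEquals": {"sts:ExternalId": external_id}},
            }
        ],
    }

# Example: serialize for iam.create_role(AssumeRolePolicyDocument=...)
policy_json = json.dumps(build_trust_policy("123456789012", "customer-1-secret"), indent=2)
```

    Generating the document from a template keeps the per-customer roles consistent as the fleet grows, and makes the external IDs auditable in one place.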

    AWS STS AssumeRole operations constitute the cornerstone of secure cross-account access within multi-tenant AWS environments. By implementing this authentication mechanism, organizations establish a robust security framework that permits controlled interactions between the operations account and individual customer accounts. The operations team grants precisely scoped access to resources within the customer accounts, with permissions strictly governed by the assumed role's trust policy and attached IAM permissions. This granular control makes sure that the operations team and customers can perform only authorized actions on specific resources, maintaining strong security boundaries between tenants.

    As organizations scale their multi-tenant architectures to encompass thousands of accounts, the performance characteristics and reliability of these cross-account authentication operations become increasingly critical considerations. Engineering teams must carefully design their cross-account access patterns to optimize for both security and operational efficiency, making sure that authentication processes remain responsive and dependable even as the environment grows in complexity and scale.

    When considering the service quotas that govern these operations, it's important to note that AWS STS requests made using AWS credentials are subject to a default quota of 600 requests per second, per account, per Region, including AssumeRole operations. A key architectural advantage emerges in cross-account scenarios: only the account initiating the AssumeRole request (the customer account) counts against its AWS STS quota; the target account's (operations account) quota remains unaffected. This asymmetric quota consumption means that the operations account doesn't deplete its AWS STS service quotas when responding to API requests from customer accounts. For most multi-tenant implementations, the standard quota of 600 requests per second provides ample capacity, though AWS offers quota adjustment options for environments with unique requirements. This quota design enables scalable operational models where a single operations account can efficiently serve thousands of tenant accounts without encountering service limits.
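    If a burst of invocations does exceed the per-account STS quota, callers should retry with exponential backoff and jitter. The sketch below is a generic pattern under stated assumptions: the delay schedule is illustrative, and it treats any exception as retryable, whereas production code would check specifically for botocore `ClientError` with a throttling error code.

```python
import random
import time

def backoff_delays(max_attempts: int = 5, base: float = 0.1, cap: float = 5.0) -> list:
    """Exponential backoff schedule: base * 2**attempt, capped at `cap` seconds."""
    return [min(cap, base * (2 ** i)) for i in range(max_attempts)]

def call_with_backoff(fn, *args, **kwargs):
    """Retry fn on failure, sleeping with full jitter between attempts."""
    last_error = None
    for delay in backoff_delays():
        try:
            return fn(*args, **kwargs)
        except Exception as err:  # in practice: throttling errors only
            last_error = err
            time.sleep(random.uniform(0, delay))  # full jitter
    raise last_error
```

    Wrapping the AssumeRole call (rather than each Bedrock invocation) in such a helper keeps the authentication path resilient while the cached temporary credentials absorb most of the request volume.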

    Writing private logs using LangChain callbacks

    LangChain is a popular open source orchestration framework that lets developers build powerful applications by connecting various components through chains, which are sequential series of operations that process and transform data. At the core of LangChain's extensibility is the BaseCallbackHandler class, a fundamental abstraction that provides hooks into the execution lifecycle of chains, allowing developers to implement custom logic at different stages of processing. This class can be extended to precisely define behaviors that should occur upon completion of a chain's invocation, enabling sophisticated monitoring, logging, or triggering of downstream processes. By implementing custom callback handlers, developers can capture metrics, persist results to external systems, or dynamically alter the execution flow based on intermediate outputs, making LangChain both flexible and powerful for production-grade language model applications.

    Implementing a custom CloudWatch logging callback in LangChain provides a robust solution for maintaining data privacy in multi-account deployments. By extending the BaseCallbackHandler class, we can create a specialized handler that establishes a direct connection to the customer account's CloudWatch logs, making sure model interaction data stays within the account boundaries. The implementation begins by initializing a Boto3 CloudWatch Logs client using the customer account's credentials, rather than the operations account's credentials. This client is configured with the appropriate log group and stream names, which can be dynamically generated based on customer identifiers or application contexts. During model invocations, the callback captures essential metrics such as token usage, latency, prompt details, and response characteristics. The following Python script serves as an example of this implementation:

    from langchain_core.callbacks import BaseCallbackHandler

    class CustomCallbackHandler(BaseCallbackHandler):

        def log_to_cloudwatch(self, message: str):
            """Function to write extracted metrics to CloudWatch"""

        def on_llm_end(self, response, **kwargs):
            print("\nChat model finished processing.")
            # Extract model_id and token usage from the response
            input_token_count = response.llm_output.get("usage", {}).get("prompt_tokens", None)
            output_token_count = response.llm_output.get("usage", {}).get("completion_tokens", None)
            model_id = response.llm_output.get("model_id", None)

            # Here we invoke the callback
            self.log_to_cloudwatch(
                f"User ID: {self.user_id}\nApplication ID: {self.application_id}\n"
                f"Input tokens: {input_token_count}\n"
                f"Output tokens: {output_token_count}\n"
                f"Invoked model: {model_id}"
            )

        def on_llm_error(self, error: Exception, **kwargs):
            print(f"Chat model encountered an error: {error}")

    The on_llm_start, on_llm_end, and on_llm_error methods are overridden to intercept these lifecycle events and persist the relevant data. For example, the on_llm_end method can extract token counts, execution time, and model-specific metadata, formatting this information into structured log entries before writing them to CloudWatch. By implementing proper error handling and retry logic within the callback, we provide reliable logging even during intermittent connectivity issues. This approach creates a comprehensive audit trail of AI interactions while maintaining strict data isolation in the customer account, because the logs do not transit through or rest in the operations account.
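    The metrics extracted in on_llm_end can be serialized into a structured event before being written. The helper below is a sketch with a hypothetical field schema; CloudWatch Logs itself only requires a timestamp in epoch milliseconds and a message string per event.

```python
import json
import time

def build_log_event(user_id: str, application_id: str, model_id: str,
                    input_tokens: int, output_tokens: int) -> dict:
    """Format invocation metrics as a CloudWatch Logs event.
    The JSON field names are a hypothetical schema for illustration."""
    return {
        "timestamp": int(time.time() * 1000),  # CloudWatch expects epoch millis
        "message": json.dumps({
            "user_id": user_id,
            "application_id": application_id,
            "model_id": model_id,
            "input_tokens": input_tokens,
            "output_tokens": output_tokens,
        }),
    }
```

    Inside log_to_cloudwatch, a Boto3 CloudWatch Logs client created with the customer account's credentials would then pass a list of such events to put_log_events for the customer's log group and stream.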

    The AWS Shared Responsibility Model in multi-account logging

    When implementing distributed logging for Amazon Bedrock in multi-account architectures, understanding the AWS Shared Responsibility Model becomes paramount. Although AWS secures the underlying infrastructure and services like Amazon Bedrock and CloudWatch, customers remain responsible for securing their data, configuring access controls, and implementing appropriate logging strategies. As demonstrated in our IAM role configurations, customers must carefully craft trust relationships and permission boundaries to help prevent unauthorized cross-account access. The LangChain callback implementation outlined places the responsibility on customers to enforce proper encryption of logs at rest, define appropriate retention periods that align with compliance requirements, and implement access controls for who can view sensitive AI interaction data. This aligns with the multi-account design principle where customer data remains isolated within their respective accounts. By respecting these security boundaries while maintaining operational efficiency, software companies can uphold their responsibilities within the shared security model while delivering scalable AI capabilities across their customer base.

    Conclusion

    Implementing a secure, scalable multi-tenant architecture with Amazon Bedrock requires careful planning around account structure, access patterns, and operational management. The distributed logging approach we've outlined demonstrates how organizations can maintain strict data isolation while still benefiting from centralized AI operations. By using IAM roles with precise trust relationships, AWS STS for secure cross-account authentication, and LangChain callbacks for private logging, companies can create a robust foundation that scales to thousands of customers without compromising on security or operational efficiency.

    This architecture addresses the critical challenge of maintaining data privacy in multi-account deployments while still enabling comprehensive observability. Organizations should prioritize automation, monitoring, and governance from the beginning to avoid technical debt as their system scales. Implementing infrastructure as code for role management, automated monitoring of cross-account access patterns, and regular security reviews will make sure the architecture remains resilient and can help maintain adherence to compliance standards as business requirements evolve. As generative AI becomes increasingly central to software provider offerings, these architectural patterns provide a blueprint for maintaining the highest standards of data privacy while delivering innovative AI capabilities to customers across diverse regulatory environments and security requirements.

    To learn more, explore the comprehensive Generative AI Security Scoping Matrix through Securing generative AI: An introduction to the Generative AI Security Scoping Matrix, which provides essential frameworks for securing AI implementations. Building on these security foundations, strengthen Amazon Bedrock deployments by getting familiar with IAM authentication and authorization mechanisms that establish proper access controls. As organizations grow to require multi-account structures, these IAM practices connect seamlessly with AWS STS, which delivers temporary security credentials enabling secure cross-account access patterns. To complete this integrated security approach, delve into LangChain and LangChain on AWS capabilities, offering powerful tools that build upon these foundational security services to create secure, context-aware AI applications, while maintaining appropriate security boundaries across the entire generative AI workflow.


    About the Authors

    Mohammad Tahsin is an AI/ML Specialist Solutions Architect at AWS. He lives for staying up to date with the latest technologies in AI/ML and helping customers deploy bespoke solutions on AWS. Outside of work, he loves all things gaming, digital art, and cooking.

    Felipe Lopez is a Senior AI/ML Specialist Solutions Architect at AWS. Prior to joining AWS, Felipe worked with GE Digital and SLB, where he focused on modeling and optimization products for industrial applications.

    Aswin Vasudevan is a Senior Solutions Architect for Security, ISV at AWS. He is a big fan of generative AI and serverless architecture and enjoys collaborating and working with customers to build solutions that drive business value.
