This post was co-written with Nick Frichette and Vijay George from Datadog.
As organizations increasingly adopt Amazon Bedrock for generative AI applications, protecting against misconfigurations that could lead to data leaks or unauthorized model access becomes critical. The AWS Generative AI Adoption Index, which surveyed 3,739 senior IT decision-makers across nine countries, revealed that 45% of organizations selected generative AI tools as their top budget priority in 2025. As more AWS and Datadog customers accelerate their adoption of AI, building AI security into existing processes will become essential, especially as more stringent regulations emerge. But viewing AI risks in a silo isn’t enough; AI risks must be contextualized alongside other risks such as identity exposures and misconfigurations. The combination of Amazon Bedrock and Datadog’s comprehensive security monitoring helps organizations innovate faster while maintaining robust security controls.
Amazon Bedrock delivers enterprise-grade security by incorporating built-in protections across data privacy, access controls, network security, compliance, and responsible AI safeguards. Customer data is encrypted both in transit using TLS 1.2 or above and at rest with AWS Key Management Service (AWS KMS), and organizations have full control over encryption keys. Data privacy is central: your inputs, prompts, and outputs are not shared with model providers or used to train or improve foundation models (FMs). Fine-tuning and customizations occur on private copies of models, providing data confidentiality. Access is tightly governed through AWS Identity and Access Management (IAM) and resource-based policies, supporting granular authorization for users and roles. Amazon Bedrock integrates with AWS PrivateLink and supports virtual private cloud (VPC) endpoints for private, internal communication, so traffic doesn’t leave the Amazon network. The service complies with key industry standards such as ISO, SOC, CSA STAR, HIPAA eligibility, GDPR, and FedRAMP High, making it suitable for regulated industries. Additionally, Amazon Bedrock includes configurable guardrails to filter sensitive or harmful content and promote responsible AI use. Security is structured under the AWS Shared Responsibility Model, where AWS manages infrastructure security and customers are responsible for secure configurations and access controls within their Amazon Bedrock environment.
Building on these robust AWS security features, Datadog and AWS have partnered to provide a holistic view of AI infrastructure risks, vulnerabilities, sensitive data exposure, and other misconfigurations. Datadog Cloud Security employs both agentless and agent-based scanning to help organizations identify, prioritize, and remediate risks across cloud resources. This integration helps AWS users prioritize risks based on business criticality, with security findings enriched by observability data, thereby improving their overall security posture in AI implementations.
We’re excited to announce new security capabilities in Datadog Cloud Security that can help you detect and remediate Amazon Bedrock misconfigurations before they become security incidents. This integration helps organizations embed robust security controls and secure their use of the powerful capabilities of Amazon Bedrock by offering three critical advantages: holistic AI security by integrating AI security into your broader cloud security strategy, real-time risk detection by identifying potential AI-related security issues as they emerge, and simplified compliance to help meet evolving AI regulations with pre-built detections.
AWS and Datadog: Empowering customers to adopt AI securely
The partnership between AWS and Datadog is focused on helping customers operate their cloud infrastructure securely and efficiently. As organizations rapidly adopt AI technologies, extending this partnership to include Amazon Bedrock is a natural evolution. Amazon Bedrock is a fully managed service that makes high-performing FMs from leading AI companies and Amazon available through a unified API, making it an ideal starting point for Datadog’s AI security capabilities.
The decision to prioritize the Amazon Bedrock integration is driven by several factors, including strong customer demand, comprehensive security needs, and the existing integration foundation. With over 900 integrations and a partner-built Marketplace, Datadog’s long-standing partnership with AWS and deep integration capabilities have helped Datadog quickly develop comprehensive security monitoring for Amazon Bedrock while drawing on their existing cloud security expertise.
Throughout Q4 2024, Datadog Security Research observed growing threat actor interest in cloud AI environments, making this integration particularly timely. By combining the powerful AI capabilities of AWS with Datadog’s security expertise, you can safely accelerate your AI adoption while maintaining robust security controls.
How Datadog Cloud Security helps secure Amazon Bedrock resources
After you add the AWS integration to your Datadog account and enable Datadog Cloud Security, it continuously monitors your AWS environment, identifying misconfigurations, identity risks, vulnerabilities, and compliance violations. These detections use the Datadog Severity Scoring system to prioritize findings based on infrastructure context. The scoring considers a variety of variables, including whether the resource is in production, is publicly accessible, or has access to sensitive data. This multi-layer analysis can help you reduce noise and focus your attention on the most critical misconfigurations by taking runtime behavior into account.
Partnering with AWS, Datadog is excited to offer detections for Datadog Cloud Security customers, such as:
- Amazon Bedrock custom models should not output model data to publicly accessible S3 buckets
- Amazon Bedrock custom models should not train from publicly writable S3 buckets
- Amazon Bedrock guardrails should have a prompt attack filter enabled and block prompt attacks at high sensitivity
- Amazon Bedrock agent guardrails should have the sensitive information filter enabled and block highly sensitive PII entities
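The last two detections concern guardrail configuration. As a rough illustration of what a compliant guardrail looks like, the sketch below builds the kind of request payload you might pass to the Amazon Bedrock `CreateGuardrail` API (for example, via boto3's `bedrock` client as `create_guardrail(**guardrail_request)`). The guardrail name, blocked-content messages, and choice of PII entity types are illustrative assumptions, not Datadog's rule definitions.

```python
# Hedged sketch: a CreateGuardrail request payload with a high-sensitivity
# prompt attack filter and a sensitive information (PII) filter that blocks.
# Name and messaging values are placeholders for this example.
guardrail_request = {
    "name": "example-guardrail",  # hypothetical name
    "contentPolicyConfig": {
        "filtersConfig": [
            # Prompt attack filtering applies to model inputs, so the
            # output strength for this filter type is NONE.
            {
                "type": "PROMPT_ATTACK",
                "inputStrength": "HIGH",
                "outputStrength": "NONE",
            },
        ]
    },
    "sensitiveInformationPolicyConfig": {
        "piiEntitiesConfig": [
            # Block (rather than anonymize) highly sensitive PII entities.
            {"type": "US_SOCIAL_SECURITY_NUMBER", "action": "BLOCK"},
            {"type": "CREDIT_DEBIT_CARD_NUMBER", "action": "BLOCK"},
        ]
    },
    "blockedInputMessaging": "This input was blocked by policy.",
    "blockedOutputsMessaging": "This response was blocked by policy.",
}
```

With a configuration along these lines, both detections above would find the guardrail compliant: prompt attacks are blocked at high sensitivity and highly sensitive PII entities are blocked outright.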
Detect AI misconfigurations with Datadog Cloud Security
To understand how these detections can help secure your Amazon Bedrock infrastructure, let’s look at a specific use case: Amazon Bedrock custom models should not train from publicly writable Amazon Simple Storage Service (Amazon S3) buckets.
With Amazon Bedrock, you can customize AI models by fine-tuning them on domain-specific data, which is stored in an S3 bucket. Threat actors constantly probe the configuration of S3 buckets, looking for the ability to access sensitive data or even to write to the buckets themselves.

If a threat actor finds an S3 bucket that is misconfigured to allow public write access, and that same bucket contains data used to train an AI model, the attacker could poison that dataset and introduce malicious behavior or output into the model. This is known as a data poisoning attack.
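To make the misconfiguration concrete, here is a minimal, stdlib-only sketch of one signal such a check could look for: a bucket policy statement that grants `s3:PutObject` to everyone. This is a simplification for illustration (a real assessment would also consider ACLs, public access block settings, and condition keys), and the bucket name is hypothetical.

```python
import json


def allows_public_write(bucket_policy_json: str) -> bool:
    """Rough check for policy statements that let anyone write to the bucket.

    Simplified: ignores ACLs, public access block settings, and Condition
    elements, which a complete evaluation would also have to consider.
    """
    policy = json.loads(bucket_policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if is_public and any(a in ("s3:PutObject", "s3:*", "*") for a in actions):
            return True
    return False


# Example: a policy that makes a (hypothetical) training-data bucket
# world-writable -- the precondition for the data poisoning attack above.
risky_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-training-data/*",
    }],
})
print(allows_public_write(risky_policy))  # → True
```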
Typically, detecting this kind of misconfiguration requires multiple steps: one to identify the S3 bucket misconfigured with public write access, and another to identify that the bucket is being used by Amazon Bedrock. With Datadog Cloud Security, this detection is one of hundreds that are activated out of the box.
In Datadog Cloud Security, you can view this issue alongside the surrounding infrastructure using Cloud Map, which provides live diagrams of your cloud architecture, as shown in the following screenshot. AI risks are then contextualized alongside sensitive data exposure, identity risks, vulnerabilities, and other misconfigurations to give you a 360-degree view of risk.
For example, you might see that your application is using Anthropic’s Claude 3.7 on Amazon Bedrock and accessing training or prompt data stored in an S3 bucket that also has public write access. This could inadvertently affect model integrity by introducing unapproved data to the large language model (LLM), so you will want to update this configuration. Though basic, the first step for most security initiatives is identifying the issue. With agentless scanning, Datadog scans your AWS environment at intervals between 15 minutes and 2 hours, so users can identify misconfigurations shortly after they’re introduced.

The next step is to remediate the risk. Datadog Cloud Security offers automatically generated remediation guidance specific to each risk (see the following screenshot), with a step-by-step explanation of how to fix each finding. In this scenario, we can remediate the issue by modifying the S3 bucket’s policy to prevent public write access. You can do this directly in AWS, create a Jira ticket, or use the built-in workflow automation tools. From there, you can apply remediation steps directly within Datadog and confirm that the misconfiguration has been resolved.
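One common way to remediate a publicly writable bucket in AWS is to enable all four S3 Block Public Access settings. The sketch below shows the configuration object that would be passed to the S3 `PutPublicAccessBlock` API (for example, via boto3 as `s3.put_public_access_block(Bucket=..., PublicAccessBlockConfiguration=config)`); whether this fits your situation depends on whether the bucket legitimately needs any public access.

```python
# Hedged remediation sketch: the S3 Block Public Access configuration that
# shuts off public ACLs and public bucket policies for the training-data bucket.
config = {
    "BlockPublicAcls": True,        # reject new public ACLs on the bucket/objects
    "IgnorePublicAcls": True,       # ignore any existing public ACLs
    "BlockPublicPolicy": True,      # reject bucket policies that grant public access
    "RestrictPublicBuckets": True,  # restrict access if a public policy already exists
}
# All four settings enabled means no public read or write access remains.
assert all(config.values())
```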

Resolving this issue will positively affect your compliance posture, as illustrated by the posture score in Datadog Cloud Security, helping teams meet internal benchmarks and regulatory standards. Teams can also create custom frameworks or iterate on existing ones for tailored compliance controls.

As generative AI is embraced across industries, the regulatory environment will evolve. Datadog will continue partnering with AWS to expand their detection library and support secure AI adoption and compliance.
How Datadog Cloud Security detects misconfigurations in your cloud environment
You can deploy Datadog Cloud Security with the Datadog Agent, agentlessly, or both to maximize security coverage in your cloud environment. Datadog customers can start monitoring their AWS accounts for misconfigurations by first adding the AWS integration to Datadog, which allows Datadog to crawl the cloud resources in their AWS accounts.
As the Datadog platform discovers resources, it runs a catalog of hundreds of out-of-the-box detection rules against them, looking for misconfigurations and attack paths that adversaries could exploit.
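Conceptually, this is a rules engine applied to an inventory of discovered resources. The toy sketch below illustrates the idea only; the rule names, resource fields, and schema are invented for this example and do not reflect Datadog's actual rule format.

```python
# Illustrative sketch (not Datadog's implementation): apply a catalog of
# detection rules, each a (name, predicate) pair, to discovered resources.
from typing import Callable

Rule = tuple[str, Callable[[dict], bool]]  # predicate returns True on a finding

rules: list[Rule] = [
    ("S3 bucket is publicly writable",
     lambda r: r["type"] == "s3_bucket" and r.get("public_write", False)),
    ("Bedrock custom model trains from public bucket",
     lambda r: r["type"] == "bedrock_custom_model"
     and r.get("training_bucket_public", False)),
]

# Hypothetical inventory of crawled resources.
resources = [
    {"id": "bucket-1", "type": "s3_bucket", "public_write": True},
    {"id": "model-1", "type": "bedrock_custom_model", "training_bucket_public": True},
    {"id": "bucket-2", "type": "s3_bucket", "public_write": False},
]

findings = [(res["id"], name)
            for res in resources
            for name, check in rules
            if check(res)]
for resource_id, rule_name in findings:
    print(resource_id, "->", rule_name)
```

Note how the two findings together describe the attack path from the earlier use case: a publicly writable bucket feeding a Bedrock custom model.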
Secure your AI infrastructure with Datadog
Misconfigurations in AI systems can be risky, but with the right tools you can gain the visibility and context needed to address them. With Datadog Cloud Security, teams gain visibility into these risks, detect threats early, and remediate issues with confidence. In addition, Datadog has released a number of agentic AI security features designed to help teams gain visibility into the health and security of critical AI workloads, including recent announcements for Datadog’s LLM observability offerings.
Finally, Datadog announced Bits AI Security Analyst alongside other Bits AI agents at DASH. Included as part of Cloud SIEM, Bits is an agentic AI security analyst that automates triage for AWS CloudTrail alerts. Bits investigates each alert like a seasoned analyst: pulling in relevant context from across your Datadog environment, annotating key findings, and offering a clear recommendation on whether the signal is likely benign or malicious. By accelerating triage and surfacing real threats faster, Bits helps reduce mean time to remediation (MTTR) and frees analysts to focus on critical threat hunting and response initiatives. This applies across different threats, including AI-related threats.
To learn more about how Datadog helps secure your AI infrastructure, see Monitor Amazon Bedrock with Datadog or check out our security documentation. If you’re not already using Datadog, you can get started with a 14-day free trial of Datadog Cloud Security.
About the Authors
Nina Chen is a Customer Solutions Manager at AWS who specializes in helping leading software companies use the power of the AWS Cloud to accelerate their product innovation and growth. With over 4 years of experience working in the strategic independent software vendor (ISV) vertical, Nina enjoys guiding ISV partners through their cloud transformation journeys, helping them optimize their cloud infrastructure, drive product innovation, and deliver exceptional customer experiences.
Sujatha Kuppuraju is a Principal Solutions Architect at AWS, specializing in cloud and generative AI security. She collaborates with software companies’ leadership teams to architect secure, scalable solutions on AWS and guide strategic product development. Using her expertise in cloud architecture and emerging technologies, Sujatha helps organizations optimize their offerings, maintain robust security, and bring innovative products to market in an evolving tech landscape.
Nick Frichette is a Staff Security Researcher for Cloud Security Research at Datadog.
Vijay George is a Product Manager for AI Security at Datadog.

