    Use custom metrics to evaluate your generative AI application with Amazon Bedrock

    By Oliver Chambers | May 13, 2025


    With Amazon Bedrock Evaluations, you can evaluate foundation models (FMs) and Retrieval Augmented Generation (RAG) systems, whether hosted on Amazon Bedrock or on another model or RAG system hosted elsewhere, including Amazon Bedrock Knowledge Bases or multi-cloud and on-premises deployments. We recently announced the general availability of the large language model (LLM)-as-a-judge technique in model evaluation and the new RAG evaluation tool, also powered by an LLM-as-a-judge behind the scenes. These tools are already empowering organizations to systematically evaluate FMs and RAG systems with enterprise-grade tools. We also mentioned that these evaluation tools don't have to be limited to models or RAG systems hosted on Amazon Bedrock; with the bring your own inference (BYOI) responses feature, you can evaluate models or applications as long as you follow the input formatting requirements for either offering.

    The LLM-as-a-judge technique powering these evaluations enables automated, human-like evaluation quality at scale, using FMs to assess quality and responsible AI dimensions without manual intervention. With built-in metrics like correctness (factual accuracy), completeness (response thoroughness), faithfulness (hallucination detection), and responsible AI metrics such as harmfulness and answer refusal, you and your team can evaluate models hosted on Amazon Bedrock and knowledge bases natively, or use BYOI responses from your custom-built systems.

    Amazon Bedrock Evaluations offers an extensive list of built-in metrics for both evaluation tools, but there are times when you might want to define these evaluation metrics differently, or create completely new metrics that are relevant to your use case. For example, you might want to define a metric that evaluates an application response's adherence to your specific brand voice, or classify responses according to a custom categorical rubric. You might want to use numerical scoring or categorical scoring for various purposes. For these reasons, you need a way to use custom metrics in your evaluations.

    Now with Amazon Bedrock, you can develop custom evaluation metrics for both model and RAG evaluations. This capability extends the LLM-as-a-judge framework that drives Amazon Bedrock Evaluations.

    In this post, we demonstrate how to use custom metrics in Amazon Bedrock Evaluations to measure and improve the performance of your generative AI applications according to your specific business requirements and evaluation criteria.

    Overview

    Custom metrics in Amazon Bedrock Evaluations offer the following features:

    • Simplified getting started experience – Pre-built starter templates are available on the AWS Management Console based on our industry-tested built-in metrics, with options to create metrics from scratch for specific evaluation criteria.
    • Flexible scoring systems – Support is available for both quantitative (numerical) and qualitative (categorical) scoring, so you can create ordinal metrics, nominal metrics, or even use the evaluation tools for classification tasks.
    • Streamlined workflow management – You can save custom metrics for reuse across multiple evaluation jobs or import previously defined metrics from JSON files.
    • Dynamic content integration – With built-in template variables (for example, {{prompt}}, {{prediction}}, and {{context}}), you can seamlessly inject dataset content and model outputs into evaluation prompts.
    • Customizable output control – You can use our recommended output schema for consistent results, with advanced options to define custom output formats for specialized use cases.

    Custom metrics give you unprecedented control over how you measure AI system performance, so you can align evaluations with your specific business requirements and use cases. Whether assessing factuality, coherence, helpfulness, or domain-specific criteria, custom metrics in Amazon Bedrock enable more meaningful and actionable evaluation insights.

    In the following sections, we walk through the steps to create a job with model evaluation and custom metrics using both the Amazon Bedrock console and the Python SDK and APIs.

    Supported data formats

    In this section, we review some important data formats.

    Judge prompt uploading

    To upload your previously saved custom metrics into an evaluation job, follow the JSON format in the following examples.

    The following code illustrates a definition with a numerical scale:

    {
        "customMetricDefinition": {
            "metricName": "my_custom_metric",
            "directions": "Your full {custom} metric immediate together with no less than one {{enter variable}}",
            "ratingScale": [
                {
                    "definition": "first rating definition",
                    "value": {
                        "floatValue": 3
                    }
                },
                {
                    "definition": "second rating definition",
                    "value": {
                        "floatValue": 2
                    }
                },
                {
                    "definition": "third rating definition",
                    "value": {
                        "floatValue": 1
                    }
                }
            ]
        }
    }

    The following code illustrates a definition with a string scale:

    {
        "customMetricDefinition": {
            "metricName": "my_custom_metric",
            "directions": "Your full {custom} metric immediate together with no less than one {{enter variable}}",
            "ratingScale": [
                {
                    "definition": "first rating definition",
                    "value": {
                        "stringValue": "first value"
                    }
                },
                {
                    "definition": "second rating definition",
                    "value": {
                        "stringValue": "second value"
                    }
                },
                {
                    "definition": "third rating definition",
                    "value": {
                        "stringValue": "third value"
                    }
                }
            ]
        }
    }

    The following code illustrates a definition with no scale:

    {
        "customMetricDefinition": {
            "metricName": "my_custom_metric",
            "directions": "Your full {custom} metric immediate together with no less than one {{enter variable}}"
        }
    }

    For more information on defining a judge prompt with no scale, see the best practices section later in this post.

    Model evaluation dataset format

    When using LLM-as-a-judge, only one model can be evaluated per evaluation job. Consequently, you must provide a single entry in the modelResponses list for each evaluation, though you can run multiple evaluation jobs to compare different models. The modelResponses field is required for BYOI jobs, but not needed for non-BYOI jobs. The following is the input JSONL format for LLM-as-a-judge in model evaluation. Fields marked with ? are optional.

    {
        "immediate": string
        "referenceResponse"?: string
        "class"?: string
         "modelResponses"?: [
            {
                "response": string
                "modelIdentifier": string
            }
        ]
    }
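
    For illustration, a single record in this format might look like the following for a BYOI job; the prompt, response text, and model identifier here are hypothetical, and each record would occupy a single line in the JSONL file (it is pretty-printed here for readability):

    {
        "prompt": "What is the capital of France?",
        "referenceResponse": "The capital of France is Paris.",
        "category": "Geography",
        "modelResponses": [
            {
                "response": "Paris is the capital of France.",
                "modelIdentifier": "my-external-model"
            }
        ]
    }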

    RAG evaluation dataset format

    We updated the evaluation job input dataset format to be even more flexible for RAG evaluation. Now, you can bring referenceContexts, which are expected retrieved passages, so you can compare your actual retrieved contexts to your expected retrieved contexts. You can find the new referenceContexts field in the updated JSONL schema for RAG evaluation:

    {
        "conversationTurns": [{
            "prompt": {
                "content": [{
                    "text": string
                }]
            },
            "referenceResponses"?: [{
                "content": [{
                    "text": string
                }]
            }],
            "referenceContexts"?: [{
                "content": [{
                    "text": string
                }]
            }],
            "output"?: {
                "text": string
                "modelIdentifier"?: string
                "knowledgeBaseIdentifier": string
                "retrievedPassages": {
                    "retrievalResults": [{
                        "name"?: string
                        "content": {
                            "text": string
                        },
                        "metadata"?: {
                            [key: string]: string
                        }
                    }]
                }
            }
        }]
    }
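
    For illustration, a minimal retrieve-and-generate record in this schema might look like the following for a BYOI job; all values are hypothetical, and each record would occupy a single line in the JSONL file (pretty-printed here for readability):

    {
        "conversationTurns": [{
            "prompt": {"content": [{"text": "What is the refund window?"}]},
            "referenceResponses": [{"content": [{"text": "Refunds are available within 30 days of purchase."}]}],
            "referenceContexts": [{"content": [{"text": "Our policy allows refunds within 30 days of purchase."}]}],
            "output": {
                "text": "You can request a refund within 30 days of purchase.",
                "knowledgeBaseIdentifier": "my-rag-system",
                "retrievedPassages": {
                    "retrievalResults": [{
                        "content": {"text": "Our policy allows refunds within 30 days of purchase."}
                    }]
                }
            }
        }]
    }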

    Variables for data injection into judge prompts

    To make sure that your data is injected into the judge prompts in the right place, use the variables from the following tables. We have also included a guide to show you where the evaluation tool will pull data from your input file, if applicable. If you bring your own inference responses to the evaluation job, we will use that data from your input file; if you don't bring your own inference responses, we will call the Amazon Bedrock model or knowledge base and prepare the responses for you.

    The following list summarizes the variables for model evaluation.

    • Prompt – variable {{prompt}}, input dataset JSONL key: prompt (optional)
    • Response – variable {{prediction}}, input dataset JSONL key for a BYOI job: modelResponses.response (required). If you don't bring your own inference responses, the evaluation job will call the model and prepare this data for you.
    • Ground truth response – variable {{ground_truth}}, input dataset JSONL key: referenceResponse (optional)

    The following list summarizes the variables for RAG evaluation (retrieve only).

    • Prompt – variable {{prompt}}, input dataset JSONL key: prompt (optional)
    • Ground truth response – variable {{ground_truth}}, input dataset JSONL key for a BYOI job: output.retrievedResults.retrievalResults (optional). If you don't bring your own inference responses, the evaluation job will call the Amazon Bedrock knowledge base and prepare this data for you.
    • Retrieved passage – variable {{context}}, input dataset JSONL key for a BYOI job: output.retrievedResults.retrievalResults (required). If you don't bring your own inference responses, the evaluation job will call the Amazon Bedrock knowledge base and prepare this data for you.
    • Ground truth retrieved passage – variable {{reference_contexts}}, input dataset JSONL key: referenceContexts (optional)

    The following list summarizes the variables for RAG evaluation (retrieve and generate).

    • Prompt – variable {{prompt}}, input dataset JSONL key: prompt (optional)
    • Response – variable {{prediction}}, input dataset JSONL key for a BYOI job: output.text (required). If you don't bring your own inference responses, the evaluation job will call the Amazon Bedrock knowledge base and prepare this data for you.
    • Ground truth response – variable {{ground_truth}}, input dataset JSONL key: referenceResponses (optional)
    • Retrieved passage – variable {{context}}, input dataset JSONL key for a BYOI job: output.retrievedResults.retrievalResults (optional). If you don't bring your own inference responses, the evaluation job will call the Amazon Bedrock knowledge base and prepare this data for you.
    • Ground truth retrieved passage – variable {{reference_contexts}}, input dataset JSONL key: referenceContexts (optional)

    Prerequisites

    To use the LLM-as-a-judge model evaluation and RAG evaluation features with BYOI, make sure you have the required prerequisites in place.

    Create a model evaluation job with custom metrics using Amazon Bedrock Evaluations

    Complete the following steps to create a job with model evaluation and custom metrics using Amazon Bedrock Evaluations:

    1. On the Amazon Bedrock console, choose Evaluations in the navigation pane and choose the Models tab.
    2. In the Model evaluation section, on the Create dropdown menu, choose Automatic: model as a judge.
    3. For the Model evaluation details, enter an evaluation name and optional description.
    4. For Evaluator model, choose the model you want to use for automatic evaluation.
    5. For Inference source, select the source and choose the model you want to evaluate.

    For this example, we chose Claude 3.5 Sonnet as the evaluator model, Bedrock models as our inference source, and Claude 3.5 Haiku as our model to evaluate.

    1. The console will display the default metrics for the evaluator model you chose. You can select other metrics as needed.
    2. In the Custom Metrics section, we create a new metric called "Comprehensiveness." Use the template provided and modify it based on your metric. You can use the following variables to define the metric, where only {{prediction}} is mandatory:
      1. prompt
      2. prediction
      3. ground_truth

    The following is the metric we defined in full:

    Your role is to judge the comprehensiveness of an answer based on the question and 
    the prediction. Assess the quality, accuracy, and helpfulness of the language model 
    response, and use these to judge how comprehensive the response is. Award higher 
    scores to responses that are detailed and thoughtful.
    
    Carefully evaluate the comprehensiveness of the LLM response for the given query (prompt)
    against all specified criteria. Assign a single overall score that best represents the 
    comprehensiveness, and provide a brief explanation justifying your rating, referencing 
    specific strengths and weaknesses observed.
    
    When evaluating the response quality, consider the following rubrics:
    - Accuracy: Factual correctness of the information provided
    - Completeness: Coverage of important aspects of the query
    - Clarity: Clear organization and presentation of the information
    - Helpfulness: Practical utility of the response to the user
    
    Evaluate the following:
    
    Query:
    {{prompt}}
    
    Response to evaluate:
    {{prediction}}

    1. Create the output schema and any additional metrics. Here, we define a scale that awards the maximum score (10) if the response is very comprehensive, and 1 if the response is not comprehensive at all; see the sketch after this list for one way to express such a scale.
    2. For Datasets, enter your input and output locations in Amazon S3.
    3. For Amazon Bedrock IAM role – Permissions, select Use an existing service role and choose a role.
    4. Choose Create and wait for the job to complete.
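
    As a reference for step 1, a rating scale along these lines could be expressed in the same ratingScale format used for saved custom metrics. The labels and the intermediate level below are only illustrative assumptions; 10 and 1 anchor the ends of the scale as described above:

    "ratingScale": [
        {"definition": "Very comprehensive", "value": {"floatValue": 10}},
        {"definition": "Moderately comprehensive", "value": {"floatValue": 5}},
        {"definition": "Not at all comprehensive", "value": {"floatValue": 1}}
    ]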

    Considerations and best practices

    When using the output schema of the custom metrics, note the following:

    • If you use the built-in output schema (recommended), do not add your grading scale into the main judge prompt. The evaluation service automatically concatenates your judge prompt instructions with your defined output schema rating scale and some structured output instructions (unique to each judge model) behind the scenes. This is so the evaluation service can parse the judge model's results, display them on the console in graphs, and calculate average values of numerical scores.
    • The fully concatenated judge prompts are visible in the Preview window when you use the Amazon Bedrock console to construct your custom metrics. Because judge LLMs are inherently stochastic, there might be some responses we can't parse and display on the console or use in your average score calculations. However, the raw judge responses are always loaded into your S3 output file, even if the evaluation service cannot parse the response score from the judge model.
    • If you don't use the built-in output schema feature (we recommend using it rather than ignoring it), then you are responsible for providing your rating scale in the body of the judge prompt instructions. However, the evaluation service will not add structured output instructions and will not parse the results to show graphs; you will see the full plaintext judge output on the console without graphs, and the raw data will still be in your S3 bucket.

    Create a model evaluation job with custom metrics using the Python SDK and APIs

    To use the Python SDK to create a model evaluation job with custom metrics, follow these steps (or refer to our example notebook):

    1. Set up the required configurations, which should include your model identifier for the default metrics and custom metrics evaluator, an IAM role with appropriate permissions, Amazon S3 paths for the input data containing your inference responses, and the output location for results:
      import boto3
      import time
      from datetime import datetime
      
      # Configure model settings
      evaluator_model = "anthropic.claude-3-5-sonnet-20240620-v1:0"
      generator_model = "amazon.nova-lite-v1:0"
      custom_metrics_evaluator_model = "anthropic.claude-3-5-sonnet-20240620-v1:0"
      role_arn = "arn:aws:iam:::role/"
      BUCKET_NAME = ""
      
      # Specify S3 locations
      input_data = f"s3://{BUCKET_NAME}/evaluation_data/input.jsonl"
      output_path = f"s3://{BUCKET_NAME}/evaluation_output/"
      
      # Create Bedrock client
      # NOTE: You can change the region name to the region of your choosing.
      bedrock_client = boto3.client('bedrock', region_name="us-east-1") 

    2. To define a custom metric for model evaluation, create a JSON structure with a customMetricDefinition object. Include your metric's name, write detailed evaluation instructions incorporating template variables (such as {{prompt}} and {{prediction}}), and define your ratingScale array with assessment values using either numerical scores (floatValue) or categorical labels (stringValue). This properly formatted JSON schema enables Amazon Bedrock to evaluate model outputs consistently according to your specific criteria.
      comprehensiveness_metric = {
          "customMetricDefinition": {
              "name": "comprehensiveness",
              "instructions": """Your role is to judge the comprehensiveness of an 
      answer based on the question and the prediction. Assess the quality, accuracy, 
      and helpfulness of the language model response, and use these to judge how comprehensive
       the response is. Award higher scores to responses that are detailed and thoughtful.
      
      Carefully evaluate the comprehensiveness of the LLM response for the given query (prompt)
       against all specified criteria. Assign a single overall score that best represents the 
      comprehensiveness, and provide a brief explanation justifying your rating, referencing 
      specific strengths and weaknesses observed.
      
      When evaluating the response quality, consider the following rubrics:
      - Accuracy: Factual correctness of the information provided
      - Completeness: Coverage of important aspects of the query
      - Clarity: Clear organization and presentation of the information
      - Helpfulness: Practical utility of the response to the user
      
      Evaluate the following:
      
      Query:
      {{prompt}}
      
      Response to evaluate:
      {{prediction}}""",
              "ratingScale": [
                  {
                      "definition": "Very comprehensive",
                      "value": {
                          "floatValue": 10
                      }
                  },
                  {
                      "definition": "Mildly comprehensive",
                      "value": {
                          "floatValue": 3
                      }
                  },
                  {
                      "definition": "Not at all comprehensive",
                      "value": {
                          "floatValue": 1
                      }
                  }
              ]
          }
      }

    3. To create a model evaluation job with custom metrics, use the create_evaluation_job API and include your custom metric in the customMetricConfig section, specifying both built-in metrics (such as Builtin.Correctness) and your custom metric in the metricNames array. Configure the job with your generator model, evaluator model, and proper Amazon S3 paths for the input dataset and output results.
      # Create the model evaluation job
      model_eval_job_name = f"model-evaluation-custom-metrics{datetime.now().strftime('%Y-%m-%d-%H-%M-%S')}"
      
      model_eval_job = bedrock_client.create_evaluation_job(
          jobName=model_eval_job_name,
          jobDescription="Consider mannequin efficiency with {custom} comprehensiveness metric",
          roleArn=role_arn,
          applicationType="ModelEvaluation",
          inferenceConfig={
              "fashions": [{
                  "bedrockModel": {
                      "modelIdentifier": generator_model
                  }
              }]
          },
          outputDataConfig={
              "s3Uri": output_path
          },
          evaluationConfig={
              "automated": {
                  "datasetMetricConfigs": [{
                      "taskType": "General",
                      "dataset": {
                          "name": "ModelEvalDataset",
                          "datasetLocation": {
                              "s3Uri": input_data
                          }
                      },
                      "metricNames": [
                          "Builtin.Correctness",
                          "Builtin.Completeness",
                          "Builtin.Coherence",
                          "Builtin.Relevance",
                          "Builtin.FollowingInstructions",
                          "comprehensiveness"
                      ]
                  }],
                  "customMetricConfig": {
                      "customMetrics": [
                          comprehensiveness_metric
                      ],
                      "evaluatorModelConfig": {
                          "bedrockEvaluatorModels": [{
                              "modelIdentifier": custom_metrics_evaluator_model
                          }]
                      }
                  },
                  "evaluatorModelConfig": {
                      "bedrockEvaluatorModels": [{
                          "modelIdentifier": evaluator_model
                      }]
                  }
              }
          }
      )
      
      print(f"Created mannequin analysis job: {model_eval_job_name}")
      print(f"Job ID: {model_eval_job['jobArn']}")

    4. After submitting the evaluation job, monitor its status with get_evaluation_job and access the results at your specified Amazon S3 location when it is complete, including the standard and custom metric performance data.
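
    The following is a minimal sketch of such polling, assuming the bedrock_client and model_eval_job objects created above; the terminal status values checked here are typical for Amazon Bedrock evaluation jobs:

      # Poll the evaluation job until it reaches a terminal state
      job_arn = model_eval_job['jobArn']
      while True:
          status = bedrock_client.get_evaluation_job(jobIdentifier=job_arn)['status']
          print(f"Job status: {status}")
          if status in ('Completed', 'Failed', 'Stopped'):
              break
          time.sleep(60)  # wait a minute between checks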

    Create a RAG system evaluation with custom metrics using Amazon Bedrock Evaluations

    In this example, we walk through a RAG system evaluation with a combination of built-in metrics and custom evaluation metrics on the Amazon Bedrock console. Complete the following steps:

    1. On the Amazon Bedrock console, choose Evaluations in the navigation pane.
    2. On the RAG tab, choose Create.
    3. For the RAG evaluation details, enter an evaluation name and optional description.
    4. For Evaluator model, choose the model you want to use for automatic evaluation. The evaluator model chosen here will be used to calculate default metrics if selected. For this example, we chose Claude 3.5 Sonnet as the evaluator model.
    5. Include any optional tags.
    6. For Inference source, select the source. Here, you have the option to choose between Bedrock Knowledge Bases and Bring your own inference responses. If you're using Amazon Bedrock Knowledge Bases, you will need to choose a previously created knowledge base or create a new one. For BYOI responses, you can bring the prompt dataset, context, and output from a RAG system. For this example, we chose Bedrock Knowledge Base as our inference source.
    7. Specify the evaluation type, response generator model, and built-in metrics. You can choose between a combined retrieval and response evaluation or a retrieval only evaluation, with options to use default metrics, custom metrics, or both for your RAG evaluation. The response generator model is only required when using an Amazon Bedrock knowledge base as the inference source. For the BYOI configuration, you can proceed without a response generator. For this example, we selected Retrieval and response generation as our evaluation type and chose Nova Lite 1.0 as our response generator model.
    8. In the Custom Metrics section, choose your evaluator model. We selected Claude 3.5 Sonnet v1 as our evaluator model for custom metrics.
    9. Choose Add custom metrics.
    10. Create your new metric. For this example, we create a new custom metric for our RAG evaluation called information_comprehensiveness. This metric evaluates how thoroughly and completely the response addresses the query using the retrieved information. It measures the extent to which the response extracts and incorporates relevant information from the retrieved passages to provide a comprehensive answer.
    11. You can choose between importing a JSON file, using a preconfigured template, or creating a custom metric with full configuration control. For example, you can select the preconfigured templates for the default metrics and change the scoring system or rubric. For our information_comprehensiveness metric, we select the custom option, which allows us to enter our evaluator prompt directly.
    12. For Instructions, enter your prompt. For example:
      Your role is to evaluate how comprehensively the response addresses the query 
      using the retrieved information. Assess whether the response provides a thorough 
      treatment of the subject by effectively utilizing the available retrieved passages.
      
      Carefully evaluate the comprehensiveness of the RAG response for the given query
       against all specified criteria. Assign a single overall score that best represents
       the comprehensiveness, and provide a brief explanation justifying your rating, 
      referencing specific strengths and weaknesses observed.
      
      When evaluating response comprehensiveness, consider the following rubrics:
      - Coverage: Does the response utilize the key relevant information from the retrieved
       passages?
      - Depth: Does the response provide sufficient detail on important aspects from the
       retrieved information?
      - Context utilization: How effectively does the response leverage the available
       retrieved passages?
      - Information synthesis: Does the response combine retrieved information to create
       a thorough treatment?
      
      Evaluate the following:
      
      Query: {{prompt}}
      
      Retrieved passages: {{context}}
      
      Response to evaluate: {{prediction}}

    13. Enter your output schema to define how the custom metric results will be structured, visualized, normalized (if applicable), and explained by the model.

    If you use the built-in output schema (recommended), do not add your rating scale into the main judge prompt. The evaluation service automatically concatenates your judge prompt instructions with your defined output schema rating scale and some structured output instructions (unique to each judge model) behind the scenes so that your judge model results can be parsed. The fully concatenated judge prompts are visible in the Preview window when you use the Amazon Bedrock console to construct your custom metrics.

    1. For Dataset and evaluation results S3 location, enter your input and output locations in Amazon S3.
    2. For Amazon Bedrock IAM role – Permissions, select Use an existing service role and choose your role.
    3. Choose Create and wait for the job to complete.

    Start a RAG evaluation job with custom metrics using the Python SDK and APIs

    To use the Python SDK to create a RAG evaluation job with custom metrics, follow these steps (or refer to our example notebook):

    1. Set up the required configurations, which should include your model identifier for the default metrics and custom metrics evaluator, an IAM role with appropriate permissions, your knowledge base ID, Amazon S3 paths for the input data containing your inference responses, and the output location for results:
      import boto3
      import time
      from datetime import datetime
      
      # Configure knowledge base and model settings
      knowledge_base_id = ""
      evaluator_model = "anthropic.claude-3-5-sonnet-20240620-v1:0"
      generator_model = "amazon.nova-lite-v1:0"
      custom_metrics_evaluator_model = "anthropic.claude-3-5-sonnet-20240620-v1:0"
      role_arn = "arn:aws:iam:::role/"
      BUCKET_NAME = ""
      
      # Specify S3 locations
      input_data = f"s3://{BUCKET_NAME}/evaluation_data/input.jsonl"
      output_path = f"s3://{BUCKET_NAME}/evaluation_output/"
      
      # Configure retrieval settings
      num_results = 10
      search_type = "HYBRID"
      
      # Create Bedrock client
      # NOTE: You can change the region name to the region of your choosing
      bedrock_client = boto3.client('bedrock', region_name="us-east-1") 

    2. To define a custom metric for RAG evaluation, create a JSON structure with a customMetricDefinition object. Include your metric's name, write detailed evaluation instructions incorporating template variables (such as {{prompt}}, {{context}}, and {{prediction}}), and define your ratingScale array with assessment values using either numerical scores (floatValue) or categorical labels (stringValue). This properly formatted JSON schema enables Amazon Bedrock to evaluate responses consistently according to your specific criteria.
      # Define our custom information_comprehensiveness metric
      information_comprehensiveness_metric = {
          "customMetricDefinition": {
              "name": "information_comprehensiveness",
              "instructions": """
              Your role is to evaluate how comprehensively the response addresses the 
      query using the retrieved information. 
              Assess whether the response provides a thorough treatment of the subject
      by effectively utilizing the available retrieved passages.
      
      Carefully evaluate the comprehensiveness of the RAG response for the given query
      against all specified criteria. 
      Assign a single overall score that best represents the comprehensiveness, and 
      provide a brief explanation justifying your rating, referencing specific strengths
      and weaknesses observed.
      
      When evaluating response comprehensiveness, consider the following rubrics:
      - Coverage: Does the response utilize the key relevant information from the 
      retrieved passages?
      - Depth: Does the response provide sufficient detail on important aspects from 
      the retrieved information?
      - Context utilization: How effectively does the response leverage the available 
      retrieved passages?
      - Information synthesis: Does the response combine retrieved information to 
      create a thorough treatment?
      
      Evaluate using the following:
      
      Query: {{prompt}}
      
      Retrieved passages: {{context}}
      
      Response to evaluate: {{prediction}}
      """,
              "ratingScale": [
                  {
                      "definition": "Very comprehensive",
                      "value": {
                          "floatValue": 3
                      }
                  },
                  {
                      "definition": "Moderately comprehensive",
                      "value": {
                          "floatValue": 2
                      }
                  },
                  {
                      "definition": "Minimally comprehensive",
                      "value": {
                          "floatValue": 1
                      }
                  },
                  {
                      "definition": "Not at all comprehensive",
                      "value": {
                          "floatValue": 0
                      }
                  }
              ]
          }
      }

    3. To create a RAG evaluation job with custom metrics, use the create_evaluation_job API and include your custom metric in the customMetricConfig section, specifying both built-in metrics (such as Builtin.Correctness) and your custom metric in the metricNames array. Configure the job with your knowledge base ID, generator model, evaluator model, and proper Amazon S3 paths for the input dataset and output results.
      # Create the evaluation job
      retrieve_generate_job_name = f"rag-evaluation-generate-{datetime.now().strftime('%Y-%m-%d-%H-%M-%S')}"
      
      retrieve_generate_job = bedrock_client.create_evaluation_job(
          jobName=retrieve_generate_job_name,
          jobDescription="Consider retrieval and technology with {custom} metric",
          roleArn=role_arn,
          applicationType="RagEvaluation",
          inferenceConfig={
              "ragConfigs": [{
                  "knowledgeBaseConfig": {
                      "retrieveAndGenerateConfig": {
                          "type": "KNOWLEDGE_BASE",
                          "knowledgeBaseConfiguration": {
                              "knowledgeBaseId": knowledge_base_id,
                              "modelArn": generator_model,
                              "retrievalConfiguration": {
                                  "vectorSearchConfiguration": {
                                      "numberOfResults": num_results
                                  }
                              }
                          }
                      }
                  }
              }]
          },
          outputDataConfig={
              "s3Uri": output_path
          },
          evaluationConfig={
              "automated": {
                  "datasetMetricConfigs": [{
                      "taskType": "General",
                      "dataset": {
                          "name": "RagDataset",
                          "datasetLocation": {
                              "s3Uri": input_data
                          }
                      },
                      "metricNames": [
                          "Builtin.Correctness",
                          "Builtin.Completeness",
                          "Builtin.Helpfulness",
                          "information_comprehensiveness"
                      ]
                  }],
                  "evaluatorModelConfig": {
                      "bedrockEvaluatorModels": [{
                          "modelIdentifier": evaluator_model
                      }]
                  },
                  "customMetricConfig": {
                      "customMetrics": [
                          information_comprehensiveness_metric
                      ],
                      "evaluatorModelConfig": {
                          "bedrockEvaluatorModels": [{
                              "modelIdentifier": custom_metrics_evaluator_model
                          }]
                      }
                  }
              }
          }
      )
      
      print(f"Created analysis job: {retrieve_generate_job_name}")
      print(f"Job ID: {retrieve_generate_job['jobArn']}")

    4. After submitting the evaluation job, you can check its status using the get_evaluation_job method and retrieve the results when the job is complete. The output will be stored at the Amazon S3 location specified in the output_path parameter, containing detailed metrics on how your RAG system performed across the evaluation dimensions, including custom metrics.
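
    As a minimal sketch, assuming the BUCKET_NAME and output prefix configured earlier, you could list the result files the job wrote to Amazon S3 once it completes:

      # List the evaluation output objects written under the configured prefix
      s3_client = boto3.client('s3')
      response = s3_client.list_objects_v2(Bucket=BUCKET_NAME, Prefix="evaluation_output/")
      for obj in response.get('Contents', []):
          print(obj['Key'])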

    Custom metrics are only available for LLM-as-a-judge evaluations. At the time of writing, we don't accept custom AWS Lambda functions or endpoints for code-based custom metric evaluators. Human-based model evaluation has supported custom metric definition since its launch in November 2023.

    Clean up

    To avoid incurring future charges, delete the S3 bucket, notebook instances, and other resources that were deployed as part of this post.

    Conclusion

    The addition of custom metrics to Amazon Bedrock Evaluations empowers organizations to define their own evaluation criteria for generative AI systems. By extending the LLM-as-a-judge framework with custom metrics, businesses can now measure what matters for their specific use cases alongside built-in metrics. With support for both numerical and categorical scoring systems, these custom metrics enable consistent assessment aligned with organizational standards and goals.

    As generative AI becomes increasingly integrated into business processes, the ability to evaluate outputs against custom-defined criteria is essential for maintaining quality and driving continuous improvement. We encourage you to explore these new capabilities through the Amazon Bedrock console and the API examples provided, and discover how personalized evaluation frameworks can enhance your AI systems' performance and business impact.


    About the Authors

    Shreyas Subramanian is a Principal Data Scientist and helps customers use generative AI and deep learning to solve their business challenges with AWS services. Shreyas has a background in large-scale optimization and ML and in the use of ML and reinforcement learning to accelerate optimization tasks.

    Adewale Akinfaderin is a Sr. Data Scientist–Generative AI, Amazon Bedrock, where he contributes to cutting-edge innovations in foundation models and generative AI applications at AWS. His expertise is in reproducible and end-to-end AI/ML methods, practical implementations, and helping global customers formulate and develop scalable solutions to interdisciplinary problems. He has two graduate degrees in physics and a doctorate in engineering.

    Jesse Manders is a Senior Product Manager on Amazon Bedrock, the AWS generative AI developer service. He works at the intersection of AI and human interaction with the goal of creating and improving generative AI products and services to meet our needs. Previously, Jesse held engineering team leadership roles at Apple and Lumileds, and was a senior scientist in a Silicon Valley startup. He has an M.S. and Ph.D. from the University of Florida, and an MBA from the University of California, Berkeley, Haas School of Business.

    Ishan Singh is a Sr. Generative AI Data Scientist at Amazon Web Services, where he helps customers build innovative and responsible generative AI solutions and products. With a strong background in AI/ML, Ishan specializes in building generative AI solutions that drive business value. Outside of work, he enjoys playing volleyball, exploring local bike trails, and spending time with his wife and dog, Beau.
