Iterate faster with Amazon Bedrock AgentCore Runtime direct code deployment

By Oliver Chambers, November 5, 2025


Amazon Bedrock AgentCore is an agentic platform for building, deploying, and operating effective agents securely at scale. Amazon Bedrock AgentCore Runtime is a fully managed service within Bedrock AgentCore that provides low-latency serverless environments for deploying agents and tools. It provides session isolation, supports multiple agent frameworks including popular open-source frameworks, and handles multimodal workloads and long-running agents.

Previously, deploying an agent to AgentCore Runtime meant providing a container definition: the agent is packaged and shipped as a container image. With the new direct code deployment option, developers can choose not to worry about Docker expertise and container infrastructure when deploying agents.

In this post, we'll show how you can use direct code deployment (for Python).

    Introducing AgentCore Runtime direct code deployment

With the container deployment method, developers create a Dockerfile, build ARM-compatible containers, manage ECR repositories, and upload containers for code changes. This works well where container DevOps pipelines have already been established to automate deployments.

AgentCore Runtime now also supports direct code deployment, which can significantly improve developer time and productivity.

We'll discuss the strengths of each deployment option to help you choose the right approach for your use case.

With direct code deployment, developers create a zip archive of code and dependencies, upload it to Amazon S3, and configure the bucket in the agent configuration. When using the AgentCore starter toolkit, the toolkit handles dependency detection, packaging, and upload, which provides a much-simplified developer experience. Direct code deployment is also supported using the API.

Let's compare the deployment steps at a high level between the two methods:

    Container-based deployment

The container-based deployment method involves the following steps:

    • Create a Dockerfile
• Build an ARM-compatible container
    • Create ECR repository
• Upload to ECR
    • Deploy to AgentCore Runtime

    Direct code deployment

The direct code deployment method involves the following steps (a minimal CLI sketch follows the list):

• Package your code and dependencies into a zip archive
• Upload it to S3
    • Configure the bucket in agent configuration
    • Deploy to AgentCore Runtime
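
As a rough sketch, the packaging and upload steps can be done by hand with standard tooling (the bucket and file names below are illustrative; the starter toolkit shown later automates all of this):

zip -r agent_package.zip agent.py requirements.txt   # package code and (vendored) dependencies
aws s3 cp agent_package.zip s3://my-agent-artifacts/agent_package.zip   # upload to S3
# Reference the S3 object in the agent configuration, then deploy to AgentCore Runtime
# (via the starter toolkit or the API).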

How to use direct code deployment

Let's illustrate how direct code deployment works with an agent created with the Strands Agents SDK, using the AgentCore starter toolkit to deploy the agent.

Prerequisites

Before you begin, make sure you have the following:

• Any version of Python from 3.10 to 3.13
• Your preferred package manager installed. In this example, we use the uv package manager.
• An AWS account for creating and deploying agents
• Amazon Bedrock model access to Anthropic Claude Sonnet 4.0

Step 1: Initialize your project

Set up a new Python project using the uv package manager, then navigate into the project directory:
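
For example (the project name agent-demo is illustrative):

uv init agent-demo
cd agent-demo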

Step 2: Add the dependencies for the project

Install the required Bedrock AgentCore libraries and development tools for your project. In this example, dependencies are added through the pyproject.toml file; alternatively, they can be specified in a requirements.txt file:

    uv add bedrock-agentcore strands-agents strands-agents-tools
    uv add --dev bedrock-agentcore-starter-toolkit
source .venv/bin/activate
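
If you go the requirements.txt route instead, the file would list roughly the same packages (a sketch; version pins omitted, add them as needed):

bedrock-agentcore
strands-agents
strands-agents-tools
bedrock-agentcore-starter-toolkit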

    Step 3: Create an agent.py file

Create the main agent implementation file that defines your AI agent's behavior:

from bedrock_agentcore import BedrockAgentCoreApp
from strands import Agent, tool
from strands_tools import calculator
from strands.models import BedrockModel
import logging

app = BedrockAgentCoreApp(debug=True)

# Logging setup
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Create a custom tool
@tool
def weather():
    """Get weather"""
    return "sunny"

model_id = "us.anthropic.claude-sonnet-4-20250514-v1:0"
model = BedrockModel(
    model_id=model_id,
)

# Agent with the built-in calculator tool and the custom weather tool
agent = Agent(
    model=model,
    tools=[calculator, weather],
    system_prompt="You're a helpful assistant. You can do simple math calculations and tell the weather."
)

# Entrypoint invoked by AgentCore Runtime with the request payload
@app.entrypoint
def invoke(payload):
    """Your AI agent function"""
    user_input = payload.get("prompt", "Hello! How can I help you today?")
    logger.info("\n User input: %s", user_input)
    response = agent(user_input)
    logger.info("\n Agent result: %s", response.message)
    return response.message['content'][0]['text']

if __name__ == "__main__":
    app.run()
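
Before deploying, you can optionally sanity-check the agent locally. The sketch below assumes the BedrockAgentCoreApp server listens on port 8080 and exposes a POST /invocations route (the AgentCore Runtime service contract), and that your local AWS credentials grant Bedrock model access:

# Terminal 1: start the agent locally
python agent.py

# Terminal 2: send a test request to the local endpoint
curl -X POST http://localhost:8080/invocations \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is 2 + 2?"}'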

    Step 4: Deploy to AgentCore Runtime

Configure and deploy your agent to the AgentCore Runtime environment:

    agentcore configure --entrypoint agent.py --name 

This will launch an interactive session where you configure the S3 bucket to upload the zip deployment package to and choose a deployment configuration type (as shown in the following configuration). To opt for direct code deployment, choose option 1 – Code Zip.

Deployment Configuration

Select deployment type:

1. Code Zip (recommended) – Simple, serverless, no Docker required
2. Container – For custom runtimes or complex dependencies
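
Once the configuration is complete, deploy the agent with the starter toolkit's launch command (shown here without optional flags):

agentcore launch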

This command creates a zip deployment package, uploads it to the specified S3 bucket, and launches the agent in the AgentCore Runtime environment, making it ready to receive and process requests.

To test the solution, let's prompt the agent to see how the weather is:

agentcore invoke '{"prompt":"How is the weather today?"}'

The first deployment takes roughly 30 seconds to complete, but subsequent updates to the agent benefit from the streamlined direct code deployment process and may take less than half that time, supporting faster iteration cycles during development.

When to choose direct code instead of container-based deployment

Let's look at some of the dimensions along which the direct code and container-based deployment options differ. This will help you choose the option that's right for you:

• Deployment process: Direct code deploys agents as zip files with no Docker, ECR, or CodeBuild required. Container-based deployment uses Docker and ECR with full Dockerfile control.
• Deployment time: Although there isn't much difference for the first deployment of an agent, subsequent updates to the agent are significantly faster with direct code deployment (from an average of 30 seconds for containers to about 10 seconds for direct code deployment).
• Artifact storage: Direct code deployment stores the zip package in Amazon S3, while container-based deployment stores container images in Amazon ECR.
• Customization: Direct code deployment supports custom dependencies through zip-based packaging, while container-based deployment relies on a Dockerfile.
• Package size: Direct code deployment limits the package size to 250 MB, while container-based packages can be up to 2 GB in size.
• Language support: Direct code currently supports Python 3.10, 3.11, 3.12, and 3.13. Container-based deployment supports many languages and runtimes.

Our general guidance is:

Container-based deployment is the right choice when your package exceeds 250 MB, you have existing container CI/CD pipelines, or you need highly specialized dependencies and custom packaging requirements. Choose containers if you require multi-language support, custom system dependencies, or direct control over artifact storage and versioning in your account.

Direct code deployment is the right choice when your package is under 250 MB, you use Python 3.10–3.13 with popular frameworks like LangGraph, Strands, or CrewAI, and you need rapid prototyping with fast iteration cycles. Choose direct code if your build process is simple without complex dependencies, and you want to avoid the Docker/ECR/CodeBuild setup.

A hybrid approach works well for many teams: use direct code for rapid prototyping and experimentation, where fast iteration and simple setup accelerate development, then graduate to containers for production when package size, multi-language requirements, or specialized build processes demand it.

    Conclusion

Amazon Bedrock AgentCore direct code deployment makes iterative agent development cycles even faster, while still benefiting from the enterprise security and scale of the platform. Developers can now rapidly prototype and iterate by deploying their code directly, without having to create a container. To get started with Amazon Bedrock AgentCore direct code deployment, visit the AWS documentation.


About the authors

Chaitra Mathur is a GenAI Specialist Solutions Architect at AWS. She works with customers across industries on building scalable generative AI platforms and operationalizing them. Throughout her career, she has shared her expertise at numerous conferences and has authored several blogs in the Machine Learning and Generative AI domains.

Qingwei Li is a Machine Learning Specialist at Amazon Web Services. He received his Ph.D. in Operations Research after he broke his advisor's research grant account and failed to deliver the Nobel Prize he promised. Currently he helps customers in the financial services and insurance industry build machine learning solutions on AWS. In his spare time, he likes reading and teaching.

Kosti Vasilakakis is a Principal PM at AWS on the Agentic AI team, where he has led the design and development of several Bedrock AgentCore services from the ground up, including Runtime, Browser, Code Interpreter, and Identity. He previously worked on Amazon SageMaker since its early days, launching AI/ML capabilities now used by thousands of companies worldwide. Earlier in his career, Kosti was a data scientist. Outside of work, he builds personal productivity automations, plays tennis, and enjoys life with his wife and kids.
