    Machine Learning & Research

    Build character consistent storyboards using Amazon Nova in Amazon Bedrock – Part 2

    By Oliver Chambers | September 5, 2025 | 13 Mins Read


    Although careful prompt crafting can yield good results, achieving professional-grade visual consistency often requires adapting the underlying model itself. Building on the prompt engineering and character development approach covered in Part 1 of this two-part series, we now push the consistency level for specific characters by fine-tuning an Amazon Nova Canvas foundation model (FM). Through fine-tuning techniques, creators can instruct the model to maintain precise control over character appearances, expressions, and stylistic elements across multiple scenes.

    In this post, we take an animated short film, Picchu, produced by FuzzyPixel from Amazon Web Services (AWS), prepare training data by extracting key character frames, and fine-tune a character-consistent model for the main character Mayu and her mother, so we can quickly generate storyboard concepts for new sequels like the following images.

    Solution overview

    To implement an automated workflow, we propose the following comprehensive solution architecture that uses AWS services for an end-to-end implementation.

    The workflow consists of the following steps:

    1. The user uploads a video asset to an Amazon Simple Storage Service (Amazon S3) bucket.
    2. Amazon Elastic Container Service (Amazon ECS) is triggered to process the video asset.
    3. Amazon ECS downsamples the frames, selects those containing the character, and then center-crops them to produce the final character images.
    4. Amazon ECS invokes an Amazon Nova model (Amazon Nova Pro) from Amazon Bedrock to create captions from the images.
    5. Amazon ECS writes the image captions and metadata to the S3 bucket.
    6. The user uses a notebook environment in Amazon SageMaker AI to invoke the model training job.
    7. The user fine-tunes a custom Amazon Nova Canvas model by invoking the Amazon Bedrock create_model_customization_job and create_provisioned_model_throughput API calls to create a custom model available for inference.

    This workflow is structured in two distinct phases. The initial phase, Steps 1–5, focuses on preparing the training data. In this post, we walk through an automated pipeline to extract images from an input video and then generate labeled training data. The second phase, Steps 6–7, focuses on fine-tuning the Amazon Nova Canvas model and performing test inference using the custom-trained model. For these latter steps, we provide the preprocessed image data and comprehensive example code in the following GitHub repository to guide you through the process.

    Prepare the training data

    Let's begin with the first phase of our workflow. In our example, we build an automated video object/character extraction pipeline to extract high-resolution images with accurate caption labels using the following steps.

    Creative character extraction

    We recommend first sampling video frames at fixed intervals (for example, 1 frame per second). Then, apply Amazon Rekognition label detection and face collection search to identify frames and characters of interest. Label detection can identify over 2,000 unique labels and locate their positions within frames, making it ideal for initial detection of general character categories or non-human characters. To distinguish between different characters, we then use the Amazon Rekognition feature to search faces in a collection. This feature identifies and tracks characters by matching their faces against a pre-populated face collection. If these two approaches aren't precise enough, we can use Amazon Rekognition Custom Labels to train a custom model for detecting specific characters. The following diagram illustrates this workflow.
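    As a minimal sketch of the fixed-interval sampling step, you can compute which frame indices to extract before sending each frame to Amazon Rekognition. `sample_indices` is a hypothetical helper for illustration, not part of any AWS SDK:

```python
def sample_indices(total_frames, video_fps, samples_per_second=1.0):
    """Frame indices to extract when sampling a video at a fixed
    wall-clock interval (for example, 1 frame per second)."""
    # Convert the desired sampling rate into a stride in frames
    step = max(1, round(video_fps / samples_per_second))
    return list(range(0, total_frames, step))
```

    For a 30 fps video, sampling at 1 frame per second keeps every 30th frame; each kept frame would then be passed to label detection and face search.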

    After detection, we center-crop each character with appropriate pixel padding and then run a deduplication algorithm using the Amazon Titan Multimodal Embeddings model to remove semantically similar images above a threshold value. Doing so helps us build a diverse dataset, because redundant or nearly identical frames could lead to model overfitting (when a model learns the training data too precisely, including its noise and fluctuations, making it perform poorly on new, unseen data). We can calibrate the similarity threshold to fine-tune what we consider to be identical images, so we can better control the balance between dataset diversity and redundancy elimination.
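    The deduplication step can be sketched as a greedy cosine-similarity filter. We assume each cropped frame has already been embedded with the Amazon Titan Multimodal Embeddings model (via the Bedrock InvokeModel API); `dedupe` itself is a hypothetical helper shown only to illustrate the thresholding logic:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def dedupe(embeddings, threshold=0.95):
    """Greedy filter: keep a frame only if it is less similar than
    `threshold` to every frame already kept; return kept indices."""
    kept = []
    for i, emb in enumerate(embeddings):
        if all(cosine_similarity(emb, embeddings[j]) < threshold for j in kept):
            kept.append(i)
    return kept
```

    Raising the threshold keeps more near-duplicates (more redundancy); lowering it prunes more aggressively (more diversity).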

    Data labeling

    We generate captions for each image using Amazon Nova Pro in Amazon Bedrock and then upload the image and label manifest file to an Amazon S3 location. This process focuses on two critical aspects of prompt engineering: character description to help the FM identify and name the characters based on their unique attributes, and varied description generation that avoids repetitive patterns in the caption (for example, "an animated character"). The following is an example prompt template used during our data labeling process:

    system_prompt = """
        You are an expert image description specialist who creates concise, natural alt
        text that makes visual content accessible while maintaining clarity and focus.
        Your task is to analyze the provided image and provide a creative description
        (20-30 words) that emphasizes the three main characters, capturing the essential
        elements of their interaction while avoiding unnecessary details.
    """

    prompt = """
        1. Identify the main characters in the image: Character 1, Character 2, and
            Character 3; at least one will be in the picture, so provide at a minimum a
            description with at least one character name.
          - "Character 1" describe the first character, key traits, background, attributes.
          - "Character 2" describe the second character, key traits, background, attributes.
          - "Character 3" describe the third character, key traits, background, attributes.
        2. Just state their name WITHOUT adding any standard traits.
        3. Only capture visual elements outside the standard traits.
        4. Capture the core interaction between them.
        5. Include only contextual details that are essential for understanding the scene.
        6. Create a natural, flowing description using everyday language.

        Here are some examples:

           ...

        [Identify the main characters]
        [Assessment of their primary interaction]
        [Selection of crucial contextual elements]
        [Crafting of concise, natural description]

        {
            "alt_text": "[Concise, natural description focusing on the main characters]"
        }

        Note: Provide only the JSON object as the final response.
    """

    The data labeling output is formatted as a JSONL file, where each line pairs an image reference Amazon S3 path with a caption generated by Amazon Nova Pro. This JSONL file is then uploaded to Amazon S3 for training. The following is an example of the file:

    {"image_ref": "s3://media-ip-dataset/characters/blue_character_01.jpg", "alt_text": "This
        animated character features a round face with large expressive eyes. The character
        has a distinctive blue color scheme with a small tuft of hair on top. The design is
        stylized with clean lines and a minimalist approach typical of modern animation."}
    {"image_ref": "s3://media-ip-dataset/props/iconic_prop_series1.jpg", "alt_text": "This
        object appears to be an iconic prop from the franchise. It has a metallic appearance
        with distinctive engravings and a unique shape that fans would immediately recognize.
        The lighting highlights its dimensional qualities and fine details that make it
        instantly identifiable."}
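    Serializing the caption pairs into this manifest format is straightforward; the field names below follow the example records, and `manifest_line`/`write_manifest` are hypothetical helpers rather than part of any AWS SDK:

```python
import json

def manifest_line(image_ref, alt_text):
    """One JSONL record pairing an S3 image path with its caption."""
    return json.dumps({"image_ref": image_ref, "alt_text": alt_text})

def write_manifest(path, records):
    """Write (image_ref, alt_text) pairs as a JSONL manifest file,
    one JSON object per line."""
    with open(path, "w") as f:
        for image_ref, alt_text in records:
            f.write(manifest_line(image_ref, alt_text) + "\n")
```

    The resulting file would then be uploaded to the S3 training location referenced by the fine-tuning job.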

    Human verification

    For enterprise use cases, we recommend incorporating a human-in-the-loop process to verify labeled data before proceeding with model training. This verification can be carried out using Amazon Augmented AI (Amazon A2I), a service that helps annotators verify both image and caption quality. For more details, refer to Get Started with Amazon Augmented AI.

    Fine-tune Amazon Nova Canvas

    Now that we have the training data, we can fine-tune the Amazon Nova Canvas model in Amazon Bedrock. Amazon Bedrock requires an AWS Identity and Access Management (IAM) service role to access the S3 bucket where you stored your model customization training data. For more details, see Model customization access and security. You can perform the fine-tuning job directly on the Amazon Bedrock console or use the Boto3 API. We explain both approaches in this post, and you can find the end-to-end code sample in picchu-finetuning.ipynb.

    Create a fine-tuning job on the Amazon Bedrock console

    Let's start by creating an Amazon Nova Canvas fine-tuning job on the Amazon Bedrock console:

    1. On the Amazon Bedrock console, in the navigation pane, choose Custom models under Foundation models.
    2. Choose Customize model and then Create Fine-tuning job.
    3. On the Create Fine-tuning job details page, choose the model you want to customize and enter a name for the fine-tuned model.
    4. In the Job configuration section, enter a name for the job and optionally add tags to associate with it.
    5. In the Input data section, enter the Amazon S3 location of the training dataset file.
    6. In the Hyperparameters section, enter values for the hyperparameters, as shown in the following screenshot.
    7. In the Output data section, enter the Amazon S3 location where Amazon Bedrock should save the output of the job.
    8. Choose Fine-tune model job to begin the fine-tuning process.

    This hyperparameter combination yielded good results during our experimentation. In general, increasing the learning rate makes the model train more aggressively, which often presents an interesting trade-off: we might achieve character consistency more quickly, but it might impact overall image quality. We recommend a systematic approach to adjusting hyperparameters. Start with the suggested batch size and learning rate, and try increasing or decreasing the number of training steps first. If the model struggles to learn your dataset even after 20,000 steps (the maximum allowed in Amazon Bedrock), then we suggest either increasing the batch size or adjusting the learning rate upward. These adjustments, though subtle, can make a significant difference in the model's performance. For more details about the hyperparameters, refer to Hyperparameters for Creative Content Generation models.
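    A small guardrail can catch out-of-range settings before submitting a job. Only the 20,000-step ceiling comes from the discussion above; the learning-rate sanity bound is an assumption for illustration, and `check_hyperparameters` is a hypothetical helper:

```python
def check_hyperparameters(hp):
    """Return a list of warnings for questionable Nova Canvas
    fine-tuning hyperparameter values (stringly-typed, as in the
    create_model_customization_job request)."""
    warnings = []
    # 20,000 steps is the maximum allowed in Amazon Bedrock
    if int(hp["stepCount"]) > 20000:
        warnings.append("stepCount exceeds the 20,000-step maximum")
    # Assumed sanity bound, not a documented limit
    if not 0 < float(hp["learningRate"]) < 1:
        warnings.append("learningRate outside (0, 1)")
    return warnings
```

    Running this over a candidate configuration before each tuning iteration keeps the systematic sweep described above inside valid ranges.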

    Create a fine-tuning job using the Python SDK

    The following Python code snippet creates the same fine-tuning job using the create_model_customization_job API:

    import boto3

    bedrock = boto3.client('bedrock')
    jobName = "picchu-canvas-v0"
    # Set parameters
    hyperParameters = {
            "stepCount": "14000",
            "batchSize": "64",
            "learningRate": "0.000001",
        }

    # Create job (roleArn, training_path, bucket, and prefix are defined
    # earlier in the notebook)
    response_ft = bedrock.create_model_customization_job(
        jobName=jobName,
        customModelName=jobName,
        roleArn=roleArn,
        baseModelIdentifier="amazon.nova-canvas-v1:0",
        hyperParameters=hyperParameters,
        trainingDataConfig={"s3Uri": training_path},
        outputDataConfig={"s3Uri": f"s3://{bucket}/{prefix}"}
    )

    jobArn = response_ft.get('jobArn')
    print(jobArn)

    When the job is complete, you can retrieve the new customModelArn using the following code:

    custom_model_arn = bedrock.list_model_customization_jobs(
        nameContains=jobName
    )["modelCustomizationJobSummaries"][0]["customModelArn"]
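    Because the job can run for hours, a polling loop is a convenient way to wait for it. In this sketch the status fetcher is injected so the loop stays testable; in practice it might wrap bedrock.get_model_customization_job(jobIdentifier=jobArn)["status"]. `wait_for_status` is a hypothetical helper:

```python
import time

def wait_for_status(fetch_status,
                    done=("Completed", "Failed", "Stopped"),
                    poll_seconds=60, max_polls=1440):
    """Poll `fetch_status()` until it returns a terminal status or the
    polling window (default 24 hours at 1-minute intervals) expires."""
    for _ in range(max_polls):
        status = fetch_status()
        if status in done:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("job did not finish within the polling window")
```

    The same loop works for the provisioned throughput status later, with a fetcher that returns "InService" when the endpoint is ready.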

    Deploy the fine-tuned model

    With the preceding hyperparameter configuration, this fine-tuning job might take up to 12 hours to complete. When it's complete, you should see a new model in the custom models list. You can then create provisioned throughput to host the model. For more details on provisioned throughput and different commitment plans, see Increase model invocation capacity with Provisioned Throughput in Amazon Bedrock.

    Deploy the model on the Amazon Bedrock console

    To deploy the model from the Amazon Bedrock console, complete the following steps:

    1. On the Amazon Bedrock console, choose Custom models under Foundation models in the navigation pane.
    2. Select the new custom model and choose Purchase provisioned throughput.
    3. In the Provisioned Throughput details section, enter a name for the provisioned throughput.
    4. Under Select model, choose the custom model you just created.
    5. Specify the commitment term and model units.

    After you purchase provisioned throughput, a new model Amazon Resource Name (ARN) is created. You can invoke this ARN when the provisioned throughput is in service.

    Deploy the model using the Python SDK

    The following Python code snippet creates provisioned throughput using the create_provisioned_model_throughput API:

    custom_model_name = "picchu-canvas-v0"

    # Create the provisioned throughput and retrieve the provisioned model ARN
    provisioned_model_id = bedrock.create_provisioned_model_throughput(
        modelUnits=1,
        # create a name for your provisioned throughput model
        provisionedModelName=custom_model_name,
        modelId=custom_model_arn
    )['provisionedModelArn']

    Test the fine-tuned model

    When the provisioned throughput is live, we can use the following code snippet to test the custom model and experiment with generating some new images for a sequel to Picchu:

    import json
    import io
    import base64
    import random
    import boto3
    from PIL import Image

    bedrock_runtime = boto3.client('bedrock-runtime')

    def decode_base64_image(img_b64):
        return Image.open(io.BytesIO(base64.b64decode(img_b64)))

    def generate_image(prompt,
                       negative_prompt="text, ugly, blurry, distorted, low "
                                       "quality, pixelated, watermark, deformed",
                       num_of_images=3,
                       seed=1):
        """
        Generate images using the fine-tuned Amazon Nova Canvas model.
        """

        image_gen_config = {
                "numberOfImages": num_of_images,
                "quality": "premium",
                "width": 1024,  # Maximum resolution 2048 x 2048
                "height": 1024,  # 1:1 ratio
                "cfgScale": 8.0,
                "seed": seed,
            }

        # Prepare the request body
        request_body = {
            "taskType": "TEXT_IMAGE",
            "textToImageParams": {
                "text": prompt,
                "negativeText": negative_prompt,  # List things to avoid
            },
            "imageGenerationConfig": image_gen_config
        }

        response = bedrock_runtime.invoke_model(
            modelId=provisioned_model_id,
            body=json.dumps(request_body)
        )

        # Parse the response
        response_body = json.loads(response['body'].read())

        if "images" in response_body:
            # Decode the base64-encoded images
            return [decode_base64_image(img) for img in response_body['images']]
        else:
            return None

    seed = random.randint(1, 858993459)
    print(f"seed: {seed}")

    images = generate_image(prompt=prompt, seed=seed)

    We used prompts like the following to generate new storyboard scenes:

    • Mayu's face shows a mixture of nervousness and determination. Mommy kneels beside her, gently holding her. A landscape is visible in the background.
    • A steep cliff face with a long wooden ladder extending downwards. Halfway down the ladder is Mayu, with a determined expression on her face. Mayu's small hands grip the sides of the ladder tightly as she carefully places her feet on each rung. The surrounding scenery shows a rugged, mountainous landscape.
    • Mayu standing proudly at the entrance of a simple school building. Her face beams with a wide smile, expressing pride and accomplishment.

    Clean up

    To avoid incurring AWS charges after you're done testing, complete the cleanup steps in picchu-finetuning.ipynb and delete the following resources:

    • Amazon SageMaker Studio domain
    • Fine-tuned Amazon Nova model and provisioned throughput endpoint

    Conclusion

    In this post, we demonstrated how to elevate the character and style consistency achieved in Part 1 by fine-tuning Amazon Nova Canvas in Amazon Bedrock. Our comprehensive workflow combines automated video processing, intelligent character extraction using Amazon Rekognition, and precise model customization using Amazon Bedrock to create a solution that maintains visual fidelity and dramatically accelerates the storyboarding process. By fine-tuning the Amazon Nova Canvas model on specific characters and styles, we achieved a level of consistency that surpasses standard prompt engineering, so creative teams can produce high-quality storyboards in hours rather than weeks. Start experimenting with Nova Canvas fine-tuning today, so you too can elevate your storytelling with better character and style consistency.


    About the authors

    Dr. Achin Jain is a Senior Applied Scientist at Amazon AGI, where he works on building multi-modal foundation models. He brings over 10 years of combined industry and academic research experience. He has led the development of several modules for Amazon Nova Canvas and Amazon Titan Image Generator, including supervised fine-tuning (SFT), model customization, instant customization, and guidance with color palette.

    James Wu is a Senior AI/ML Specialist Solution Architect at AWS, helping customers design and build AI/ML solutions. James's work covers a wide range of ML use cases, with a primary interest in computer vision, deep learning, and scaling ML across the enterprise. Prior to joining AWS, James was an architect, developer, and technology leader for over 10 years, including 6 years in engineering and 4 years in the marketing and advertising industries.

    Randy Ridgley is a Principal Solutions Architect focused on real-time analytics and AI. With expertise in designing data lakes and pipelines, Randy helps organizations transform diverse data streams into actionable insights. He specializes in IoT solutions, analytics, and infrastructure-as-code implementations. As an open-source contributor and technical leader, Randy provides deep technical knowledge to deliver scalable data solutions across enterprise environments.
