Modernize and migrate on-premises fraud detection machine learning workflows to Amazon SageMaker

By Oliver Chambers | June 5, 2025


This post is co-written with Qing Chen and Mark Sinclair from Radial.

Radial is the largest 3PL fulfillment provider, also offering integrated payment, fraud detection, and omnichannel solutions to mid-market and enterprise brands. With over 30 years of industry expertise, Radial tailors its services and solutions to align strategically with each brand's unique needs.

Radial helps brands tackle common ecommerce challenges, from scalable, flexible fulfillment that enables delivery consistency to providing secure transactions. With a commitment to fulfilling promises from click to delivery, Radial empowers brands to navigate the dynamic digital landscape with the confidence and capability to deliver a seamless, secure, and superior ecommerce experience.

In this post, we share how Radial optimized the cost and performance of their fraud detection machine learning (ML) applications by modernizing their ML workflow using Amazon SageMaker.

The business need for fraud detection models

ML has proven to be an effective approach to fraud detection compared with traditional methods. ML models can analyze vast amounts of transactional data, learn from historical fraud patterns, and detect anomalies that signal potential fraud in real time. By continuously learning and adapting to new fraud patterns, ML helps fraud detection systems stay resilient and robust against evolving threats, improving detection accuracy and reducing false positives over time. This post showcases how companies like Radial can modernize and migrate their on-premises fraud detection ML workflows to SageMaker. By using the AWS Experience-Based Acceleration (EBA) program, they can improve efficiency, scalability, and maintainability through close collaboration.

Challenges of on-premises ML models

Although ML models are highly effective at combating evolving fraud trends, managing these models on premises presents significant scalability and maintenance challenges.

Scalability

On-premises systems are inherently limited by the physical hardware available. During peak shopping seasons, when transaction volumes surge, the infrastructure might struggle to keep up without substantial upfront investment. This can result in slower processing times or a reduced capacity to run multiple ML applications concurrently, potentially leading to missed fraud detections. Scaling an on-premises infrastructure is typically a slow and resource-intensive process, hindering a business's ability to adapt quickly to increased demand. On the model training side, data scientists often face bottlenecks due to limited resources, forcing them to wait for infrastructure availability or reduce the scope of their experiments. This delays innovation and can lead to suboptimal model performance, putting businesses at a disadvantage in a rapidly changing fraud landscape.

Maintenance

Maintaining an on-premises infrastructure for fraud detection requires a dedicated IT team to manage servers, storage, networking, and backups. Sustaining uptime often involves implementing and maintaining redundant systems, because a failure could result in critical downtime and an increased risk of undetected fraud. Moreover, fraud detection models naturally degrade over time and require regular retraining, deployment, and monitoring. On-premises systems often lack the built-in automation tools needed to manage the full ML lifecycle. As a result, IT teams must manually handle tasks such as updating models, monitoring for drift, and deploying new versions. This adds operational complexity, increases the likelihood of errors, and diverts valuable resources from other business-critical activities.

Common modernization challenges in ML cloud migration

Organizations face several significant challenges when modernizing their ML workloads through cloud migration. One major hurdle is the skills gap, where developers and data scientists might lack expertise in microservices architecture, advanced ML tools, and DevOps practices for cloud environments. This can lead to development delays, complex and costly architectures, and increased security vulnerabilities. Cross-functional barriers, characterized by limited communication and collaboration between teams, can also impede modernization efforts by hindering information sharing. Slow decision-making is another critical challenge. Many organizations take too long to make decisions about their cloud move, spending too much time weighing options instead of taking action. This delay can cause them to miss chances to accelerate their modernization, and it keeps them from using the cloud's ability to quickly try new things and make changes. In the fast-moving world of ML and cloud technology, being slow to decide can put companies behind their competitors. Another significant obstacle is complex project management, because modernization initiatives often require coordinating work across multiple teams with conflicting priorities. This challenge is compounded by difficulties in aligning stakeholders on business outcomes, quantifying and tracking benefits to demonstrate value, and balancing long-term benefits with short-term goals.

To address these challenges and streamline modernization efforts, AWS offers the EBA program. The program is designed to assist customers in aligning executives' vision, resolving roadblocks, accelerating their cloud journey, and achieving a successful migration and modernization of their ML workloads to the cloud.

EBA: AWS team collaboration

EBA is a 3-day interactive workshop that uses SageMaker to accelerate business outcomes. It guides participants through a prescriptive ML lifecycle, starting with identifying business goals and ML problem framing, and progressing through data processing, model development, production deployment, and monitoring.

We recognize that customers have different starting points. For those beginning from scratch, it's often simpler to start with low-code or no-code solutions like Amazon SageMaker Canvas and Amazon SageMaker JumpStart, gradually transitioning to developing custom models on Amazon SageMaker Studio. However, because Radial had an existing on-premises ML infrastructure, the team could begin directly by using SageMaker to address the challenges in its existing solution.

During the EBA, experienced AWS ML subject matter experts and the AWS account team worked closely with Radial's cross-functional team. The AWS team provided tailored advice, tackled obstacles, and enhanced the team's capacity for ongoing ML integration. Instead of concentrating solely on data and ML technology, the emphasis was on addressing critical business challenges. This approach helps organizations extract significant value from previously underutilized resources.

Modernizing ML workflows: From a legacy on-premises data center to SageMaker

Before modernization, Radial hosted its ML applications on premises within its data center. The legacy ML workflow presented several challenges, particularly in its time-intensive model development and deployment processes.

Legacy workflow: On-premises ML development and deployment

When the data science team needed to build a new fraud detection model, the development process typically took 2–4 weeks. During this phase, data scientists performed tasks such as the following:

• Data cleaning and exploratory data analysis (EDA)
• Feature engineering
• Model prototyping and training experiments
• Model evaluation to finalize the fraud detection model

These steps were performed on on-premises servers, which limited the number of experiments that could run concurrently because of hardware constraints. After the model was finalized, the data science team handed over the model artifacts and implementation code, along with detailed instructions, to the software developers and DevOps teams. This handover initiated the model deployment process, which involved:

• Provisioning infrastructure – The software team set up the infrastructure required to host the ML API in a test environment.
• API implementation and testing – Extensive testing and communication between the data science and software teams were required to verify that the model inference API behaved as expected. This phase typically added 2–3 weeks to the timeline.
• Production deployment – The DevOps and systems engineering teams provisioned and scaled on-premises hardware to deploy the ML API into production, a process that could take up to several weeks depending on resource availability.

Overall, the legacy workflow was prone to delays and inefficiencies, with significant communication overhead and a reliance on manual provisioning.

Modern workflow: SageMaker and MLOps

With the migration to SageMaker and the adoption of a machine learning operations (MLOps) architecture, Radial streamlined its entire ML lifecycle, from development to deployment. The new workflow consists of the following phases:

• Model development – The data science team continues to perform tasks such as data cleaning, EDA, feature engineering, and model training within 2–4 weeks. However, with the scalable and on-demand compute resources of SageMaker, they can conduct more training experiments in the same timeframe, leading to improved model performance and faster iterations.
• Seamless model deployment – When a model is ready, the data science team approves it in SageMaker and triggers the MLOps pipeline to deploy the model to the test (pre-production) environment. This eliminates the need for back-and-forth communication with the software team at this stage. Key improvements include:
  • The ML API inference code is preconfigured and wrapped by the data scientists during development, providing consistent behavior between development and deployment.
  • Deployment to test environments takes minutes, because the MLOps pipeline automates infrastructure provisioning and deployment.
• Final integration and testing – The software team quickly integrates the API and performs the necessary tests, such as integration and load testing. After the tests pass, the team triggers the pipeline to deploy the ML models into production, which takes only minutes.

The MLOps pipeline not only automates the provisioning of cloud resources, but also provides consistency between pre-production and production environments, minimizing deployment risks.
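
The approval that gates this promotion can be scripted against the SageMaker Model Registry. The following is a minimal sketch, assuming boto3 credentials with registry access and a hypothetical model package ARN; in Radial's setup the subsequent deployment is performed by the GitLab pipeline that reacts to the status change:

```python
import boto3

sm = boto3.client("sagemaker")

# Hypothetical ARN; in practice it is read from the model package
# group that the training pipeline registered the candidate into.
model_package_arn = (
    "arn:aws:sagemaker:us-east-1:111122223333:"
    "model-package/fraud-detection-models/3"
)

# Flipping the approval status emits a "SageMaker Model Package State
# Change" event, which an EventBridge rule or a CI/CD webhook can use
# to trigger the deploy stage.
sm.update_model_package(
    ModelPackageArn=model_package_arn,
    ModelApprovalStatus="Approved",
    ApprovalDescription="Passed offline evaluation; promote to pre-production.",
)
```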

Legacy vs. modern workflow comparison

The new workflow significantly reduces time and complexity:

• Manual provisioning and communication overheads are reduced
• Deployment times drop from weeks to minutes
• Consistency between environments provides smoother transitions from development to production

This transformation enables Radial to respond more quickly to evolving fraud trends while maintaining high standards of efficiency and reliability. The following figure provides a visual comparison of the legacy and modern ML workflows.

Solution overview

When Radial migrated their fraud detection systems to the cloud, they collaborated with AWS machine learning specialists and solutions architects to redesign how Radial manages the lifecycle of ML models. By using AWS and integrating continuous integration and delivery (CI/CD) pipelines with GitLab, Terraform, and AWS CloudFormation, Radial developed a scalable, efficient, and secure MLOps architecture. This new design accelerates model development and deployment, so Radial can respond faster to evolving fraud detection challenges.

The architecture incorporates best practices in MLOps, ensuring that the different phases of the ML lifecycle, from data preparation to production deployment, are optimized for performance and reliability. Key components of the solution include:

• SageMaker – Central to the architecture, SageMaker facilitates model training, evaluation, and deployment with built-in tools for monitoring and version control
• GitLab CI/CD pipelines – These pipelines automate the workflows for testing, building, and deploying ML models, reducing manual overhead and providing consistent processes across environments
• Terraform and AWS CloudFormation – These services enable infrastructure as code (IaC) to provision and manage AWS resources, providing a repeatable and scalable setup for ML applications

The overall solution architecture is illustrated in the following figure, showcasing how each component integrates seamlessly to support Radial's fraud detection initiatives.

Account isolation for secure and scalable MLOps

To streamline operations and enforce security, the MLOps architecture is built on a multi-account strategy that isolates environments based on their function. This design enforces strict security boundaries, reduces risk, and promotes efficient collaboration across teams. The accounts are as follows:

• Development account (model development workspace) – The development account is a dedicated workspace for data scientists to experiment and develop models. Secure data management is enforced by isolating datasets within Amazon Simple Storage Service (Amazon S3) buckets. Data scientists use SageMaker Studio for data exploration, feature engineering, and scalable model training. When the model build CI/CD pipeline in GitLab is triggered, Terraform and CloudFormation scripts automate the provisioning of infrastructure and AWS resources needed for SageMaker training pipelines. Trained models that meet predefined evaluation metrics are versioned and registered in the Amazon SageMaker Model Registry (see the registration sketch after this list). With this setup, data scientists and ML engineers can perform multiple rounds of training experiments, review results, and finalize the best model for deployment testing.
• Pre-production account (staging environment) – After a model is validated and approved in the development account, it's moved to the pre-production account for staging. At this stage, the data science team triggers the model deploy CI/CD pipeline in GitLab to configure the endpoint in the pre-production environment. Model artifacts and inference images are synced from the development account to the pre-production environment. The latest approved model is deployed as an API on a SageMaker endpoint, where it undergoes thorough integration and load testing to validate performance and reliability.
• Production account (live environment) – After passing the pre-production tests, the model is promoted to the production account for live deployment. This account mirrors the configurations of the pre-production environment to maintain consistency and reliability. The MLOps production team triggers the model deploy CI/CD pipeline to launch the production ML API. Once live, the model is continuously monitored using Amazon SageMaker Model Monitor and Amazon CloudWatch to verify that it performs as expected. In the event of deployment issues, automated rollback mechanisms revert to a stable model version, minimizing disruptions and maintaining business continuity.
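
As a concrete illustration of the registration step in the development account, the sketch below registers a trained candidate into the Model Registry with boto3. The group name, image URI, and artifact path are hypothetical placeholders, and the content types assume a CSV-in/JSON-out inference contract:

```python
import boto3

sm = boto3.client("sagemaker")

# Hypothetical names and URIs for illustration; real values come from
# the training pipeline's outputs in the development account.
group = "fraud-detection-models"
image_uri = "111122223333.dkr.ecr.us-east-1.amazonaws.com/fraud-xgb:1.0"
model_data = "s3://radial-dev-ml-artifacts/fraud/model.tar.gz"

# One-time setup: a model package group holds every version of a model.
sm.create_model_package_group(
    ModelPackageGroupName=group,
    ModelPackageGroupDescription="Fraud detection model versions",
)

# Each training run that clears the evaluation bar registers a new
# version, left pending until a data scientist approves it.
sm.create_model_package(
    ModelPackageGroupName=group,
    ModelPackageDescription="Candidate from the latest training run",
    InferenceSpecification={
        "Containers": [{"Image": image_uri, "ModelDataUrl": model_data}],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["application/json"],
    },
    ModelApprovalStatus="PendingManualApproval",
)
```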

With this multi-account architecture, data scientists can work independently while benefiting from seamless transitions between development and production. The automation of CI/CD pipelines shortens deployment cycles, enhances scalability, and provides the security and performance necessary to maintain effective fraud detection systems.

Data privacy and compliance requirements

Radial prioritizes the protection and security of its customers' data. As a leader in ecommerce solutions, Radial is committed to meeting high standards of data privacy and regulatory compliance such as CCPA and PCI. Radial's fraud detection ML APIs process sensitive information such as transaction details and behavioral analytics. To meet strict compliance requirements, they use AWS Direct Connect, Amazon Virtual Private Cloud (Amazon VPC), and Amazon S3 with AWS Key Management Service (AWS KMS) encryption to build a secure and compliant architecture.

Protecting data in transit with Direct Connect

Data isn't exposed to the public internet at any stage. To maintain the secure transfer of sensitive data between on-premises systems and AWS environments, Radial uses Direct Connect, which offers the following capabilities:

• Dedicated network connection – Direct Connect establishes a private, high-speed connection between the data center and AWS, alleviating the risks associated with public internet traffic, such as interception or unauthorized access
• Consistent and reliable performance – Direct Connect provides consistent bandwidth and low latency, ensuring fraud detection APIs operate without delays, even during peak transaction volumes

Isolating workloads with Amazon VPC

When data reaches AWS, it's processed within a VPC for maximum security. This offers the following benefits (a VPC-scoped deployment sketch follows the list):

• Private subnets for sensitive data – The components of the fraud detection ML API, including SageMaker endpoints and AWS Lambda functions, reside in private subnets, which aren't accessible from the public internet
• Controlled access with security groups – Strict access control is enforced through security groups and network access control lists (ACLs), allowing only authorized systems and users to interact with VPC resources
• Data segregation by account – As described in the multi-account strategy, workloads are isolated across development, staging, and production accounts, each with its own VPC, to limit cross-environment access and maintain compliance
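
A minimal sketch of what this isolation looks like at the SageMaker API level, assuming hypothetical subnet, security group, image, and role identifiers (the real values come from the Terraform-provisioned resources in each account):

```python
import boto3

sm = boto3.client("sagemaker")

# Hypothetical identifiers for illustration only.
sm.create_model(
    ModelName="fraud-detection-model",
    PrimaryContainer={
        "Image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/fraud-xgb:1.0",
        "ModelDataUrl": "s3://radial-prod-ml-artifacts/fraud/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    # With VpcConfig set, the endpoint's containers get network
    # interfaces in these private subnets and have no route to the
    # public internet.
    VpcConfig={
        "SecurityGroupIds": ["sg-0abc123def456"],
        "Subnets": ["subnet-0aaa111", "subnet-0bbb222"],
    },
)
```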

Securing data at rest with Amazon S3 and AWS KMS encryption

Data involved in the fraud detection workflows (for both model development and real-time inference) is securely stored in Amazon S3, with encryption powered by AWS KMS. This offers the following benefits (a short sketch of these controls follows the list):

• AWS KMS encryption for sensitive data – Transaction logs, model artifacts, and prediction results are encrypted at rest using managed KMS keys
• Encryption in transit – Interactions with Amazon S3, including uploads and downloads, are encrypted to keep data secure during transfer
• Data retention policies – Lifecycle policies enforce data retention limits, ensuring sensitive data is stored only as long as necessary for compliance and business purposes before scheduled deletion
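
Both controls can be expressed in a few S3 API calls. The bucket, key, prefix, and KMS alias below are hypothetical, and the one-year retention window is illustrative rather than Radial's actual policy:

```python
import boto3

s3 = boto3.client("s3")
bucket = "radial-fraud-ml-data"  # hypothetical bucket name

# Encrypt at rest with a customer-managed KMS key on every write.
s3.put_object(
    Bucket=bucket,
    Key="inference-logs/2025/06/txn-batch.jsonl",
    Body=b'{"txn_id": "123", "score": 0.97}\n',
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/radial-fraud-ml",
)

# Lifecycle rule: expire sensitive logs after a fixed retention window.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-inference-logs",
                "Filter": {"Prefix": "inference-logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```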

Data privacy by design

Data privacy is built into every step of the ML API workflow:

• Secure inference – Incoming transaction data is processed within VPC-secured SageMaker endpoints, ensuring predictions are made in a private environment
• Minimal data retention – Real-time transaction data is anonymized where possible, and only aggregated results are stored for future analysis
• Access control and governance – Resources are governed by AWS Identity and Access Management (IAM) policies, ensuring only authorized personnel and services can access data and infrastructure

Benefits of the new ML workflow on AWS

To summarize, the implementation of the new ML workflow on AWS offers several key benefits:

• Dynamic scalability – AWS enables Radial to scale its infrastructure dynamically to handle spikes in both model training and real-time inference traffic, providing optimal performance during peak periods.
• Faster infrastructure provisioning – The new workflow accelerates the model deployment cycle, cutting the time needed to provision infrastructure and deploy new models by up to several weeks.
• Consistency in model training and deployment – By streamlining the process, Radial achieves consistent model training and deployment across environments. This reduces communication overhead between the data science team and the engineering/DevOps teams, simplifying the implementation of model deployment.
• Infrastructure as code – With IaC, the team benefits from version control and reusability, reducing manual configuration and minimizing the risk of errors during deployment.
• Built-in model monitoring – The built-in capabilities of SageMaker, such as experiment tracking and data drift detection, help maintain model performance and deliver timely updates (a data capture sketch follows this list).
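
Drift detection depends on capturing a sample of live endpoint traffic. The sketch below, using the SageMaker Python SDK with hypothetical image, artifact, role, and bucket names, shows how data capture is typically enabled at deployment so Model Monitor can compare captured requests against a training-time baseline:

```python
from sagemaker.model import Model
from sagemaker.model_monitor import DataCaptureConfig

# Hypothetical URIs and names; capture output feeds SageMaker Model
# Monitor's scheduled data-drift checks.
model = Model(
    image_uri="111122223333.dkr.ecr.us-east-1.amazonaws.com/fraud-xgb:1.0",
    model_data="s3://radial-prod-ml-artifacts/fraud/model.tar.gz",
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
)

model.deploy(
    initial_instance_count=2,
    instance_type="ml.m5.xlarge",
    endpoint_name="fraud-detection-endpoint",
    # Sample a slice of live requests and responses to S3 so Model
    # Monitor can run scheduled drift checks against them.
    data_capture_config=DataCaptureConfig(
        enable_capture=True,
        sampling_percentage=20,
        destination_s3_uri="s3://radial-prod-ml-monitoring/fraud/capture",
    ),
)
```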

Key takeaways and lessons learned from Radial's ML model migration

To help modernize your MLOps workflow on AWS, the following are a few key takeaways and lessons learned from Radial's experience:

• Collaborate with AWS for customized solutions – Engage with AWS to discuss your specific use cases and identify templates that closely match your requirements. Although AWS offers a range of templates for common MLOps scenarios, they might need to be customized to fit your unique needs. Explore ways to adapt these templates for migrating or revamping your ML workflows.
• Iterative customization and support – As you customize your solution, work closely with both your internal team and AWS Support to address any issues. Plan for execution-based assessments and schedule workshops with AWS to resolve challenges at each stage. This can be an iterative process, but it ensures your modules are optimized for your environment.
• Use account isolation for security and collaboration – Use account isolation to separate model development, pre-production, and production environments. This setup promotes seamless collaboration between your data science team and DevOps/MLOps team, while also enforcing strong security boundaries between environments.
• Maintain scalability with proper configuration – Radial's fraud detection models successfully handled transaction spikes during peak seasons. To maintain scalability, configure instance quota limits appropriately within AWS, and conduct thorough load testing before peak traffic periods to avoid performance issues during high-demand times.
• Secure model metadata sharing – Consider opting out of sharing model metadata when building your SageMaker pipeline to keep your aggregate-level model information secure.
• Prevent image conflicts with proper configuration – When using an AWS managed image for model inference, specify a hash digest within your SageMaker pipeline. Because the latest hash digest might change dynamically for the same image version, this step helps avoid conflicts when retrieving inference images during model deployment (see the digest-pinning sketch after this list).
• Fine-tune scaling metrics through load testing – Fine-tune scaling metrics, such as instance type and automatic scaling thresholds, based on proper load testing. Simulate your business's traffic patterns during both normal and peak periods to confirm your infrastructure scales effectively.
• Applicability beyond fraud detection – Although the implementation described here is tailored to fraud detection, the MLOps architecture is adaptable to a wide range of ML use cases. Companies looking to modernize their MLOps workflows can apply the same principles to various ML projects.
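
To illustrate the digest-pinning tip above: the sketch resolves an AWS managed image by tag with the SageMaker Python SDK and then rewrites the reference to use an immutable digest. The digest itself is a placeholder; one way to obtain the real value is to pull the tag once and read RepoDigests from `docker inspect`:

```python
from sagemaker import image_uris

# Resolve the AWS managed XGBoost inference image by version tag.
tagged_uri = image_uris.retrieve(
    framework="xgboost", region="us-east-1", version="1.7-1"
)

# Tags can be re-pointed to new builds; digests cannot. Pinning the
# digest keeps the pipeline retrieving the exact image it was tested
# with. The digest below is a placeholder, not a real value.
repository = tagged_uri.split(":")[0]
pinned_uri = f"{repository}@sha256:<digest-from-docker-inspect>"
```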

Conclusion

This post demonstrated the high-level approach taken by Radial's fraud team to successfully modernize their ML workflow by implementing an MLOps pipeline and migrating from on premises to the AWS Cloud. This was achieved through close collaboration with AWS during the EBA process. The EBA process begins with 4–6 weeks of preparation, culminating in a 3-day intensive workshop where a minimum viable MLOps pipeline is created using SageMaker, Amazon S3, GitLab, Terraform, and AWS CloudFormation. Following the EBA, teams typically spend an additional 2–6 weeks refining the pipeline and fine-tuning the models through feature engineering and hyperparameter optimization before production deployment. This approach enabled Radial to effectively select relevant AWS services and features, accelerating the training, deployment, and testing of ML models in a pre-production SageMaker environment. As a result, Radial successfully deployed several new ML models on AWS in their production environment around Q3 2024, achieving a more than 75% reduction in the ML model deployment cycle and a 9% improvement in overall model performance.

“In the ecommerce retail space, mitigating fraudulent transactions and enhancing consumer experiences are top priorities for retailers. High-performing machine learning models have become invaluable tools in achieving these goals. By leveraging AWS services, we have successfully built a modernized machine learning workflow that enables rapid iterations in a safe and secure environment.”

– Lan Zhang, Head of Data Science and Advanced Analytics

To learn more about EBAs and how this approach can benefit your organization, reach out to your AWS Account Manager or Customer Solutions Manager. For additional information, refer to Using experience-based acceleration to achieve your transformation and Get to Know EBA.


About the Authors

Jake Wen is a Solutions Architect at AWS, driven by a passion for machine learning, natural language processing, and deep learning. He assists enterprise customers in achieving modernization and scalable deployment in the cloud. Beyond the tech world, Jake enjoys skateboarding, hiking, and piloting drones.

Qing Chen is a senior data scientist at Radial, a full-stack solution provider for ecommerce retailers. In his role, he modernizes and manages the machine learning framework in the payment & fraud organization, driving a robust data-driven fraud decisioning flow to balance risk & customer friction for retailers.

Mark Sinclair is a senior cloud architect at Radial, a full-stack solution provider for ecommerce retailers. In his role, he designs, implements, and manages the cloud infrastructure and DevOps for Radial engineering systems, driving a robust engineering architecture and workflow to provide highly scalable transactional services for Radial clients.
