    AI Ethics & Regulation

New NIST Concept Paper Outlines AI-Specific Cybersecurity Framework

By Declan Murphy | August 15, 2025


NIST has released a concept paper proposing new control overlays to secure AI systems, built on the SP 800-53 framework. Learn what the new framework covers and why experts are calling for more detailed descriptions.

In a significant step toward managing the security risks of artificial intelligence (AI), the National Institute of Standards and Technology (NIST) has released a new concept paper that proposes a framework of control overlays for securing AI systems.

The framework builds on the well-known NIST Special Publication (SP) 800-53, which many organizations already use for managing cybersecurity risks. The overlays are essentially a set of cybersecurity guidelines to assist those organizations.

The concept paper (PDF) lays out several scenarios for how these guidelines could be used to protect different types of AI. The paper defines a control overlay as a way to customize security controls for a specific technology, making the guidelines flexible for different AI applications. It also includes security controls specifically for AI developers, drawing from existing standards like NIST 800-53.
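To make the tailoring idea concrete, here is a minimal sketch of how an overlay could add, remove, and annotate controls in a baseline set. The overlay structure, control IDs, and guidance strings below are illustrative assumptions for demonstration only; they are not taken from the NIST concept paper.

```python
# Illustrative sketch: how a control overlay might tailor a baseline
# control set for one AI use case. All control IDs are hypothetical
# examples, not drawn from the NIST paper itself.
from dataclasses import dataclass, field


@dataclass
class Overlay:
    """A tailoring of a baseline control set for a specific technology."""
    name: str
    added: set = field(default_factory=set)       # controls the overlay adds
    removed: set = field(default_factory=set)     # controls deemed not applicable
    guidance: dict = field(default_factory=dict)  # control -> supplemental guidance


def apply_overlay(baseline: set, overlay: Overlay) -> set:
    """Produce the tailored control set: baseline minus removals, plus additions."""
    return (baseline - overlay.removed) | overlay.added


# Hypothetical SP 800-53-style baseline and a generative-AI overlay.
baseline = {"AC-2", "SI-4", "RA-5"}
genai = Overlay(
    name="generative-ai",
    added={"SI-4(25)"},   # hypothetical AI-specific monitoring enhancement
    removed={"RA-5"},     # hypothetical: assumed not applicable to this use case
    guidance={"AC-2": "Restrict accounts with access to model weights."},
)

print(sorted(apply_overlay(baseline, genai)))
```

The point of the sketch is simply that an overlay does not replace the baseline; it layers use-case-specific adjustments on top of it, which is why the same SP 800-53 foundation can serve generative, predictive, and agentic AI scenarios.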

[Image: NIST-identified use cases for organizations using AI, including generative AI, predictive AI, and agentic AI systems. Source: NIST]

While the move is seen as a positive start, it is not without its critics. Melissa Ruzzi, Director of AI at AppOmni, shared her thoughts on the paper with Hackread.com, suggesting that the guidelines need to be more specific to be truly useful. Ruzzi believes the use cases are a good starting point but lack detailed descriptions.

“The use cases seem to capture the most popular AI implementations,” she said, “but they need to be more explicitly described and defined…” She points out that different types of AI, such as those that are “supervised” versus “unsupervised,” have different needs.

She also emphasizes the importance of data sensitivity. According to Ruzzi, the guidelines should include more specific controls and monitoring based on the type of data being used, such as personal or medical information. That is crucial, since the paper's goal is to protect the confidentiality, integrity, and availability of data for each use case.

Ruzzi's comments highlight a key challenge in creating a one-size-fits-all security framework for a technology that is evolving so quickly. The NIST paper is an initial step, and the organization is now asking for public feedback to help shape its final version.

NIST has even launched a Slack channel where experts and community members can join the conversation and contribute to the development of these new security guidelines. This collaborative approach shows that NIST is serious about creating a framework that is both comprehensive and practical for the real world.


