In July 2025, McDonald’s found a surprising problem on the menu, one involving McHire, the AI-powered platform it uses to recruit and screen job candidates. The system, developed by Paradox.ai, contained a rookie-level security flaw: the backend for restaurant operators accepted “123456” as both the username and the password, and lacked multi-factor authentication. As a result, the personal data of roughly 64 million applicants was at risk. Fortunately, the flaw was uncovered by security researchers Ian Carroll and Sam Curry, who notified the company.
With organizations rushing to deploy AI tools without fully auditing them, incidents like this are not uncommon. AI adoption is moving faster than AI security and governance, according to an IBM report. Last year, 13% of organizations reported breaches involving AI models or applications, while another 8% said they don’t even know whether those systems have been compromised.
And insurers have taken notice. Many have tightened policy language, raised premiums, and carved out explicit exclusions for certain AI-related incidents, an effort aimed at limiting exposure to risks that remain poorly understood. A survey by Delinea found that 42% of respondents said their cyber insurance policies now include exclusions tied to AI misuse and liability.

