The following article originally appeared on Q McCallum's blog and is republished here with the author's permission.
Generative AI agents and rogue traders pose similar insider threats to their employers.
Specifically, we can expect companies to deploy agentic AI with broad reach and insufficient oversight. That creates the conditions for a particular flavor of long-running problem, which in turn creates a novel risk exposure both for the companies in question and for anyone doing business with them. The bot and the rogue trader are each able to inflict sizable, sometimes existential, damage on the companies that employ them.
The key difference is scope: rogue traders operate in investment banks, while agentic AI will be deployed across a wider array of companies and industry verticals. Agentic AI may therefore create a greater variety of problems than rogue traders have, and put a greater amount of capital at risk.
I'm naming this risk exposure ROT (Rogue Operator Threat), and this document is a brief explainer on what it is and how to address it.
(I almost called it RAT, with the A standing for "agentic," but then realized that it could apply to any kind of automated system. So I broadened the scope to "operator.")
To set the stage, let's take a trip to the trading floor:
Understanding the rogue trader
Rogue trader scandals follow the same storyline:
- A trader accrues losses due to bad trades.
- They conceal those losses while placing new trades in an attempt to recover.
- The new trades also lose money, digging a deeper hole.
- Repeat.
This cycle continues until they're caught, at which point the bank is sitting on a massive loss (sometimes running into the billions of dollars) and the trader faces legal repercussions.
The story of Barings Bank offers a concrete example. Trader Nick Leeson had been logging fraudulent trades over a stretch of three years in an attempt to cover his mounting losses. This only came to light when the Kobe earthquake moved markets against his most recent positions and the losses were no longer possible to hide. Leeson's £800M ($1.3B) hole drove Barings into bankruptcy just three days later.
This is where you might ask: How could a professional trading operation let so many bad trades slip through undetected? How could a trader falsify records? Aren't trading floors high-tech operations, full of digital audit trails?
And the answer is: it's complicated.
Trading operations do keep records, yes. But no system is perfect. Every time a rogue trading scandal comes to light, it turns out that there were loopholes in the risk controls. A sufficiently motivated trader, especially one desperate to hide their mistakes, found and exploited those loopholes, continuing their losing streak in plain sight until they could bring in real money to backfill the fake records.
That "until" never arrived, though. Which is why their employers then faced financial, reputational, and sometimes legal troubles.
The AI agent's ROT threat
Similar to a trader, an AI agent operates on behalf of its parent company and is given room to operate independently so it can accomplish its tasks.
The risk is that, in the rush to deploy agentic AI, these companies will likely grant the bots more leeway than is necessary. We've already seen cases in which bots were able to delete emails and wipe a production database. And there are no doubt other stories that haven't made the news.
Those issues were at least caught in real time. Companies facing ROT are exposed to longer-running problems, in which the bot is able to accrue losses or inflict greater damage over an extended period. In those cases the problems will only be exposed by accident and/or when it's too late.
Consider, for example, an agent that creates false data records to reflect (nonexistent) sales orders. It's possible for this to run until some external event, such as investor due diligence or a budget review, forces someone to double-check those records against reality.
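One way to surface that kind of problem before an outside event forces the issue is a routine reconciliation job that compares the agent's records against an independent source of truth. Here is a minimal sketch; the record format, the `bank_payments` feed, and the function name are all hypothetical, not anything from a real system:

```python
def reconcile(agent_orders, bank_payments):
    """Flag recorded sales orders with no matching payment.

    agent_orders: dict of order_id -> amount, as logged by the agent.
    bank_payments: dict of order_id -> amount, from an independent
        source (e.g., the bank's settlement feed).
    Returns the order IDs that look fabricated or unsettled.
    """
    suspect = []
    for order_id, amount in agent_orders.items():
        # A missing or mismatched payment is grounds for human review.
        if bank_payments.get(order_id) != amount:
            suspect.append(order_id)
    return suspect
```

The point isn't this particular check; it's that the comparison runs on a schedule, against data the agent cannot write to, rather than waiting for due diligence to stumble across the discrepancy.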
Avoiding ROT: Mitigating the threat
How can you narrow your downside risk exposure to ROT? Preventative measures are key. Strong risk controls, a narrow scope of authority, and monitoring can catch rogue operator problems long before they've metastasized into an existential threat.
In light of rogue trader scandals, trading shops have been known to tighten risk controls and also separate duties to create a system of checks and balances. (This keeps traders from logging their own fake trades.) Companies also require traders to take time off, since fraudulent activity may surface when the perpetrator isn't around every day to keep the scheme running.
Adapting these ideas to agentic AI, a company could monitor and limit the scope of the bot's activity (say, requiring human approval to place more than 10 orders an hour). It could also periodically purge the agent's memory so it doesn't accumulate too many evolved behaviors, or swap in entirely new bots to pick up where the previous one left off. And per my usual refrain of "never let the bots run unattended," the company could employ people to cross-check everything the bot does. Trust, but verify.
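The rate-limit-plus-approval idea above can be sketched in a few lines. This is an illustrative toy, not a production design: the class name, the 10-per-hour threshold, and the idea of parking excess orders in a review queue are all assumptions for the example.

```python
import time
from collections import deque


class OrderGuardrail:
    """Hypothetical guardrail: the agent may place orders on its own,
    but anything beyond max_per_hour is held for human approval."""

    def __init__(self, max_per_hour=10):
        self.max_per_hour = max_per_hour
        self.timestamps = deque()   # times of auto-approved orders
        self.pending_review = []    # orders awaiting a human

    def submit(self, order, now=None):
        now = time.time() if now is None else now
        # Slide the one-hour window forward, dropping stale timestamps.
        while self.timestamps and now - self.timestamps[0] > 3600:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_per_hour:
            # Over the limit: park the order for human sign-off.
            self.pending_review.append(order)
            return "needs_human_approval"
        self.timestamps.append(now)
        return "auto_approved"
```

A real deployment would enforce this outside the agent's own process, for the same reason banks separate duties: a control the operator can edit is no control at all.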
This won't prevent the AI agent from making mistakes. But guardrails and sufficiently frequent checks should limit the scope of the bot's damage. As with the rogue trader, the ROT problem isn't about a single error; it's about letting the errors grow unchecked, undetected.

