Some sensible people believe we're witnessing another ChatGPT moment. This time, though, people aren't freaking out over an iPhone app that can write pretty good poems. They're watching thousands of AI agents build software, solve problems, and even talk to one another.
Unlike ChatGPT's ChatGPT moment, this one is a series of moments spanning several platforms. It started last December with the explosive success of Claude Code, a powerful agentic AI tool for developers, followed by Claude Cowork, a streamlined version of that tool for knowledge workers who want to be more productive. Then came OpenClaw, formerly known as Moltbot, formerly known as Clawdbot, an open source platform for AI agents. From OpenClaw, we got Moltbook, a social media site where AI agents can post and reply to one another. And somewhere in the middle of this confusing computer soup, OpenAI launched a desktop app for its agentic AI platform, Codex.
This new set of tools is giving AI superpowers. And there's good reason to be excited. Claude Code, for instance, stands to supercharge what programmers can do by letting them deploy entire armies of coding agents that can build software quickly and effortlessly. The agents take over the human's machine, access their accounts, and do whatever is necessary to accomplish the task. It's like vibe coding but on an institutional level.
"This is an incredibly exciting time to use computers," says Chris Callison-Burch, a professor of computer and information science at the University of Pennsylvania, where he teaches a popular class on AI. "That sounds so dumb, but the excitement is there. The fact that you can interact with your computer in this completely new way and the fact that you can build anything, almost anything imaginable. It's incredible."
He added, "Be careful, be careful, be careful."
That's because there's a dark side to this. Letting AI agents take over your computer can have unintended consequences. What if they log into your bank account, or share your passwords, or just delete all your family photos? And that's before we get to the idea of AI agents talking to one another and using their internet access to plot some kind of uprising. It almost looks like that could happen on Moltbook, the Reddit clone I mentioned above, although there haven't yet been any reports of a disaster. But it's not the AI agents I'm worried about. It's the humans behind them, pulling the levers.
Agentic AI, briefly explained
Before we get into the doomsday scenarios, let me explain more about what agentic AI even is. AI tools like ChatGPT can generate text or images based on prompts. AI agents, however, can take control of your computer, log into your accounts, and actually do things for you.
We started hearing a lot about agentic AI a year or so ago, when the technology was being propped up in the enterprise world as an imminent breakthrough that would allow one person to do the job of 10. Thanks to AI, the thinking went, software developers wouldn't need to write code anymore; they could manage a team of AI agents that would do it for them. The concept then jumped into the consumer world in the form of AI browsers that could supposedly book your travel, do your shopping, and generally save you a lot of time. By the time the holiday season rolled around last year, none of these scenarios had really panned out the way AI enthusiasts promised.
But a lot has happened in the past six or so weeks. The agentic AI era is finally and suddenly here. It's increasingly user-friendly, too. Tools like Claude Cowork and OpenAI's Codex can reorganize your desktop or redesign your personal website. If you're more adventurous, you can figure out how to install OpenClaw and test out its capabilities (pro tip: don't do this). But as people experiment with giving artificially intelligent software the ability to control their data, they're opening themselves up to all kinds of threats to their privacy and security.
Moltbook is a good example. We got Moltbook because a guy named Matt Schlicht vibe coded it in order to "give AI a place to hang out." This mind-bending experiment lets AI assistants talk to one another on a forum that looks a lot like Reddit; it turns out that when you do this, the agents do weird things like create religions and conspire to invent languages humans can't understand, presumably in order to overthrow us. Having been built by AI, Moltbook itself came with some quirks, namely an exposed database that gave full read and write access to its data. In other words, hackers could see thousands of email addresses and messages on Moltbook's backend, and they could also simply seize control of the site.
Gal Nagli, a security researcher at Wiz, discovered the exposed database just a couple of days after Moltbook's launch. It wasn't hard, either, he told me. Nagli actually used Claude Code to find the vulnerability. When he showed me how he did it, I suddenly realized that the same AI agents that make vibe coding so powerful also make vibe hacking easy.
"It's so easy to deploy a website out there, and we see that so many of them are misconfigured," Nagli said. "You could hack a website just by telling your own Claude Code, 'Hey, this is a vibe-coded website. Look for security vulnerabilities.'"
In this case, the security holes got patched, and the AI agents continued to do weird things on Moltbook. But even that's not what it seems. Nagli found that humans can pose as AI agents and post content on Moltbook, and there's no way to tell the difference. Wired reporter Reece Rogers even did this and found that the other agents on the site, human or bot, were mostly just "mimicking sci-fi tropes, not scheming for world domination." And of course, the actual bots were built by humans, who gave them certain sets of instructions. Even further up the chain than that, the large language models (LLMs) that power these bots were trained on data from sites like Reddit, as well as sci-fi books and stories. It makes sense that the bots would roleplay these scenarios when given the chance.
So there is no agentic AI uprising. There are only people using AI to use computers in new, sometimes fascinating, sometimes confusing, and, at times, dangerous ways.
"It's really mind-blowing"
Moltbook is not the story here. It's really just a single moment in a larger narrative about AI agents that's being written in real time, as these tools find their way into the hands of more humans, who come up with new ways to use them. You could use an agentic AI platform to create something like Moltbook, which, to me, amounts to an art project where bots battle for online clout. You could use these tools to vibe hack your way around the web, stealing data wherever some vibe-coded website made it easy to get. Or you could use AI agents to help you tame your email inbox.
I'm guessing most people want to do something like the latter. That's why I'm more excited than scared about these agentic AI tools. OpenClaw, the thing you need a second computer to safely use, I won't try. It's for AI enthusiasts and serious hobbyists who don't mind taking some risks. But I can see consumer-facing tools like Claude Cowork or OpenAI's Codex changing the way I use my laptop. For now, Claude Cowork is an early research preview available only to subscribers paying at least $17 a month. OpenAI has made Codex, which is normally just for paying subscribers, free for a limited time. If you want to see what all the agentic fuss is about, that's a good starting point right now.
If you're considering enlisting AI agents of your own, remember to be careful. To get the most out of these tools, you have to grant access to your accounts and potentially your entire computer so that the agents can move about freely, shuffling emails around or writing code or doing whatever you've ordered them to do. There's always a chance that something gets misplaced or deleted, although companies like Anthropic say they're doing what they can to mitigate those risks.
Cat Wu, product lead for Claude Code, told me that Cowork makes copies of all its users' files so that anything an AI agent deletes can be recovered. "We take users' data incredibly seriously," she said. "We know that it's really important that we don't lose people's data."
I've just started using Claude Cowork myself. It's an experiment to see what's possible with tools powerful enough to build apps out of ideas but also smart enough to organize my daily work life. If I'm lucky, I might just capture a feeling that Callison-Burch, the UPenn professor, said he got from using agentic AI tools.
"To just type into my command line what I want to happen makes it feel like the Star Trek computer," he said. "That's how computers work in science fiction, and now that's how computers work in reality, and it's really mind-blowing."
A version of this story was also published in the User Friendly newsletter. Sign up here so you don't miss the next one!

