ChatGPT's new 'Agent' tool can be tricked by bad actors, OpenAI CEO Sam Altman cautions: 'cutting-edge' but 'experimental'

In the fast-unfolding world of artificial intelligence, OpenAI’s latest innovation, the ChatGPT Agent, promises to redefine how humans collaborate with machines. But as CEO Sam Altman put it in a candid post on X (formerly Twitter), this powerful assistant is as much a glimpse of the future as it is a reminder to tread carefully.

Described as a leap forward in AI utility, the ChatGPT Agent is more than your average chatbot. It can manage complex, multi-step tasks using its own virtual computer, functioning almost like a digital executive assistant. Want to book travel, buy a wedding outfit, and select a gift for a friend—all without switching tabs? Agent can handle that. Want a report prepared based on your data and transformed into a presentation? It can do that too.

“It can think for a long time, use some tools, think some more, take some actions, think some more,” Altman explained, emphasizing the tool’s advanced reasoning abilities and continuous decision-making.

The Agent blends OpenAI’s Deep Research and Operator models, with the capabilities of both dialed up to full strength.

Altman’s Clear Warning: "Treat It as Experimental"
But despite the allure, Altman is openly cautious about how users should approach the Agent. In his words: “I would explain this to my own family as cutting edge and experimental… not something I’d yet use for high-stakes uses or with a lot of personal information.”

His tone is both enthusiastic and sober: he encourages users to try the tool, but pairs that encouragement with clear warnings.

Altman’s honesty isn’t new. He’s previously called out ChatGPT’s own shortcomings, from hallucinations to sycophantic responses. With Agent, he takes that transparency a step further. While OpenAI has built more robust safeguards than ever—ranging from enhanced training to user-level controls—he admits that they “can’t anticipate everything.”

What Could Go Wrong?
Agent’s ability to carry out tasks autonomously means it can also make decisions that come with real-world consequences—especially if given too much access. For instance, Altman suggests that giving Agent access to your email and instructing it to “take care of things” without follow-up questions could end poorly. It might click on phishing links or fall for scams a human would recognize instantly.

He recommends granting Agent only the minimum access needed. Want it to book a group dinner? Give it access to your calendar. Want it to order clothes? No access is needed. The key is intentional use.

The risk isn’t just technical—it’s societal. “Society, the technology, and the risk mitigation strategy will need to co-evolve,” Altman noted in his post. It’s a rare moment of foresight in a space too often dominated by hype.
