OpenClaw, an open-source artificial intelligence (AI) agent formerly known as Moltbot and Clawdbot, has gone viral in China in recent weeks, fueled in part by promotional campaigns from Tencent and Alibaba. More than a chatbot, it can handle emails, schedules and payments on behalf of a user.
The move reflects a broader shift first seen in the United States earlier this year, in which developers moved beyond conversational models to agents capable of performing real-world actions. That wave has now REACHED China, sparking debate within industry and government over governance, safeguards and the risks of delegating sensitive tasks to software that can operate with limited transparency.
The Chinese government has warned that OpenClaw, with access to email and bank accounts, could expose sensitive personal and financial data. In China, OpenClaw’s deployment is called “raising lobsters,” a nod to the project’s lobster mascot.
“OpenClaw technology is rapidly spreading throughout society, from enterprises to individual users, bringing efficiency gains alongside increased security risks,” Ministry of State Security. said in offering “how to raise lobsters” on social media on Tuesday. “Agent systems operate with broad permissions and can interact across multiple platforms, creating new vulnerabilities if not properly controlled.”
“Kavaids” lack professional maintenance and remediation mechanisms, and attackers can use malicious plug-ins to bypass their controls and actively exploit sensitive core user data, often with stealth that exceeds traditional Trojans,” he said. “Users must remain vigilant and avoid exposing critical resources to uncontrolled agent access.”
The Ministry suggested to users:
- control public exposure, permissions, credentials and plugin trust;
- apply least privilege, limit scope, encrypt data, keep audit logs, run in a sandboxed virtual machine, and limit basic access;
- treat it like a digital employee, implement governance and keep usage compliant, secure and controlled.
Prior to this, the China National Computer Network Emergency Coordination Center/Technical Team (CNCERT/CC) had warned On March 10, OpenClaw can control computers through natural language, but weak default security leaves users exposed to “rapid injection,” caused by hidden instructions that trick the AI agent into malicious actions.
“Malicious hidden instructions can be inserted into web pages to trick OpenClaw into running them, potentially exposing system keys. Some plugins have also been identified as malicious or high-risk and can steal credentials or perform malicious actions once installed,” he said.
He warned that excessive permissions could allow attackers to take control of systems and expose sensitive personal and financial data.
‘lobster’
Large language models (LLMs) like ChatGPT and DeepSeek can answer questions, write articles, and suggest travel plans, but they only act when asked. In contrast, an AI agent can connect a messenger (WhatsApp, Telegram, or WeChat), an LLM, an email account, a storage device, and an e-wallet to operate on a schedule and execute end-to-end tasks, from brainstorming ideas to making payments, with minimal human input.
A year ago, Manus, developed by Beijing-based startup Butterfly Effect, appeared in China as an early example. Its AI platform can perform tasks in seconds, including planning trips, searching for overseas housing and analyzing financial statements.
Compared to Manus or other AI agent platforms, OpenClaw offers two additional advantages. It can be downloaded to a personal computer and deployed locally for free, and its “lobsters” can automatically generate and test their code to complete tasks through multiple approaches.
Some commentators SAY that using Manus is like renting a robot, while OpenClaw is similar to owning and running the system yourself, with greater flexibility, but also greater complexity and responsibility.
OpenClaw has gained rapid traction in China, with Tencent Cloud AND Alibaba Cloud actively driving adoption. On March 6, Tencent Cloud engineers PROVIDED on-site install and play services in Shenzhen, helping hundreds of users open accounts on Tencent Cloud servers, deploy OpenClaw, configure models and connect messaging tools.
Initially, OpenClaw creator Peter Steinberger criticized Tencent for copying content from the official ClawHub marketplace without coordination.
“They copy but do not support the project in any way,” he has written on X. Tencent later became a sponsor via GitHub Sponsors on March 15, after which Steinberger signaled satisfaction with the support.
“The ‘lobster’ growth plays to Tencent’s strengths in cloud and AI,” Tencent CEO Pony Ma said at the company’s annual results conference call on Wednesday. “By integrating agents with instant messaging apps, users no longer need to wait for responses. Tasks can run in the background, providing a more ‘human-like’ experience that learns and adapts to individual preferences over time.”
He added that agent AI represents a new deployment model, opening up new opportunities across Tencent’s ecosystem. He said the approach is also shaping the company’s plans for WeChat AI, where mini software programs can undergo “lobsterization” and become increasingly intelligent, extending automation to a wide range of services.
AI governance
Scientists broadly describe the development of AI in stages, from static LLMs to generative AI (creating songs and videos) and now to early agent systems that can plan and act with tools. More advanced systems are expected to add memory and enable agent-to-agent collaboration, while artificial general intelligence (AGI) remains a long-term goal towards which AI systems can work, aiming to function like humans.
Today’s systems are still in the early stage of agency. Users must decide how much access to grant, such as emails, documents and wallets, while recognizing that greater autonomy also brings higher cybersecurity risks.
In Europe, AI governance has taken shape through THEY HAVE an actadopted in May 2024, which sets out responsibilities and penalties for providers and users of AI. China has not yet introduced comparable rules, and the authorities have said government bodies, state firms and schools to avoid installing “lobsters”.
Summer Yue, director of outreach at Meta Superintelligence Labs, said in a post on X last month that OpenClaw failed to follow through on her request to review emails for deletion and instead began deleting messages from her inbox. She said she was unable to stop the process and eventually had to shut down her computer to stop the agent.
“Many users lack basic security awareness when deploying OpenClaw,” said Wang Liejun, a security expert at QAX Technology Group, a cyber security firm in China. “They expose their application programming interfaces (APIs, or security keys to access email and data) to the public Internet, keep default credentials unchanged, and leave unnecessary ports open. This allows hackers to scan and take over these agents, then use them to break into networks or steal sensitive data.”
He added that a user should deploy OpenClaw on a virtual machine or separate device to reduce data risks, noting that cloud-based environments provide isolation so that any breach or system failure can be contained without affecting local data or home networks.
Such caution may be in order for now, but innovation is accelerating as US tech giants scramble to make their LLMs better at executing real-world tasks.
On January 12, Apple and Google said that future Apple Foundation models will be built on Google’s Gemini and cloud infrastructure, powering a more personalized Siri. On February 14, Steinberger said that he was joining OpenAI to help improve ChatGPT.
Read: Nvidia chip restrictions turn Singapore into AI hub for China
Follow Jeff Pao on Twitter at @jeffpao3





