
OpenClaw Shows AI Agents Don’t Need to Be Vertically Integrated


Google Gemini can now use your phone apps to order you food. Microsoft Copilot can build you a PowerPoint presentation. AI agents, it seems, are the next extension of existing monopolies. And yet, another project suggests a different path forward. OpenClaw, an open-source AI agent, shows that vertical integration is not a necessity in this market.

OpenClaw, made by Austrian developer Peter Steinberger, is not tied to any single foundation model. Users can swap in a model of their choosing, be that Claude, ChatGPT, or an open-source offering like DeepSeek. Switching between these options requires only a one-line command, typed into the interface.

This switching is possible because of how OpenClaw is designed. At its center is what it calls the ‘Gateway,’ a piece of software that runs on your device. The Gateway manages the AI agent’s connections to external services such as email and calendar. It also handles your memory and preferences, all of which are stored locally on your machine. When you give it a task, the Gateway feeds the relevant information to whichever foundation model you’re using. To book a restaurant, for instance, the Gateway might pull up your calendar availability and dietary preferences. Because your data lives locally and your connections are managed by the Gateway, switching foundation models doesn’t mean starting over.
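To make the design concrete, here is a minimal sketch in Python of how a gateway-style agent can keep user context local while treating the foundation model as a swappable backend. It is purely illustrative: the class name, file locations, and the call_model stub are assumptions, not OpenClaw's actual code.

```python
import json
from pathlib import Path


class Gateway:
    """Illustrative gateway: keeps user context on disk, talks to a swappable model."""

    def __init__(self, model: str, data_dir: str = "~/.agent"):
        self.model = model                              # e.g. "claude", "gpt", "deepseek"
        self.data_dir = Path(data_dir).expanduser()
        self.memory_file = self.data_dir / "memory.json"

    def load_memory(self) -> dict:
        # Preferences and memories live in a local, human-readable file.
        if self.memory_file.exists():
            return json.loads(self.memory_file.read_text())
        return {}

    def run(self, task: str) -> str:
        # The gateway assembles the prompt from local data, then hands it
        # to whichever foundation model is currently configured.
        context = self.load_memory()
        prompt = f"User preferences: {context}\nTask: {task}"
        return call_model(self.model, prompt)           # stand-in for a real model client


def call_model(model: str, prompt: str) -> str:
    # Placeholder for an API call or local inference; each backend needs its own
    # client, but the gateway logic above does not change.
    return f"[{model}] would handle: {prompt!r}"


# Switching providers is a one-line change; local data and integrations stay put.
agent = Gateway(model="deepseek")
print(agent.run("Book a table for two on Friday evening"))
```

The point of the sketch is the separation of concerns: everything the agent knows about you stays under the gateway's own directory, so changing the model argument is the only thing that changes when you switch providers.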

OpenClaw has prompted nearly every major Chinese tech company to launch an equivalent product. Xiaomi launched Miclaw. Moonshot AI launched Kimi Claw. Zhipu AI launched AutoClaw. Now even Nvidia is entering the fray with NemoClaw, an open-source product built directly on top of OpenClaw’s framework, with added security and privacy tools. Nvidia’s product defaults to its own Nemotron models but also lets users swap in a foundation model of their choosing.

These products reveal something important: AI agents are not a monolith. The foundation model and the software layer that surrounds it are in fact separable. Vertical integration is not a requirement. Instead, more modular futures are possible, with meaningful choice at each layer.

The leading AI firms are not pursuing a modular future. Instead they are expanding their agents’ capabilities, seeking to mediate more user actions within their own ecosystems. Google’s Gemini can now search through your photos and read your group chats. ChatGPT can now shop for you or connect to your health apps and medical records.

Each new agentic feature may offer real value, but it also extends platforms’ reach into users’ lives. This is all on top of the intimate information that chatbot conversations already provide. Assembling this kind of profile has traditionally required data brokers and tracking cookies. With these agents, the information is already in the hands of one centralized actor. Such detailed and personal data can make targeted advertising far more potent, and more lucrative. OpenAI has been the first to act, introducing ads to ChatGPT.

Google and Microsoft have also embedded their agents into their dominant software products. Google’s Gemini now appears in Android, in Search, and across Google’s Workspace suite. Microsoft has bundled Agent Mode into Copilot, which itself was bundled into Office 365. These integrations expand already sizable walled gardens, and make their agents all but unavoidable.

These vertically integrated agents also open the door to self-preferencing. Any company developing and distributing an agent has obvious incentives to favor its own products, or those of its commercial partners. This already happened in the search engine context, where Google was fined €2.4 billion for burying rival shopping services beneath its own in search results. An agent can steer users in the same way, but less visibly. Where a search engine at least shows a page of alternatives, an agent often presents a single recommendation, with little insight into how that recommendation was reached.

To compound matters, the data that enables user surveillance also becomes a source of lock-in. Over time, an AI agent learns your habits and preferences. This includes both facts you have told it explicitly and facts it has inferred. Starting over means reconstructing this knowledge, and reconnecting apps, files, and services to the new agent. With each passing day, switching becomes a little bit harder.

OpenClaw, and the wave of similar projects it has inspired, show that lock-in is a design choice, not a technical reality. With OpenClaw, memories and activity logs are automatically generated and stored directly on your device. These files are human-readable, meaning users can edit or delete them directly. Integrations like email and calendar also persist across model switches.
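Because those files are ordinary text, no special tooling is needed to audit or prune them. A brief illustrative sketch, again with an assumed file name and layout rather than OpenClaw's actual format:

```python
from pathlib import Path

memory_file = Path("~/.agent/memory.md").expanduser()   # hypothetical location

# Inspect what the agent has remembered so far.
print(memory_file.read_text())

# Drop any entry the user no longer wants the agent to keep.
kept = [line for line in memory_file.read_text().splitlines()
        if "dietary preferences" not in line.lower()]
memory_file.write_text("\n".join(kept) + "\n")
```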

The model switching enabled by more modular designs also dampens the prospects for platform surveillance. Users can rotate between different foundation model providers, ensuring that no single actor accumulates a complete record of their online activity. They also have the option of using smaller open-weight models for more sensitive tasks. These models can be downloaded and run locally so that no data leaves the device.
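One way to picture this is a simple routing rule sitting in front of the gateway: requests flagged as sensitive go to a locally run open-weight model, everything else to a hosted provider. The sketch below is an assumption about how such a rule could look, not a feature of OpenClaw; the keyword heuristic and backend names are invented for illustration.

```python
SENSITIVE_KEYWORDS = {"medical", "passport", "salary", "therapy"}


def pick_backend(task: str) -> str:
    """Route sensitive tasks to a local open-weight model, the rest to a hosted one."""
    if any(word in task.lower() for word in SENSITIVE_KEYWORDS):
        return "local-open-weight"   # runs on-device; nothing leaves the machine
    return "hosted-provider"         # e.g. a commercial API


for task in ["Summarise my medical records", "Find a pizza place nearby"]:
    print(task, "->", pick_backend(task))
```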

Of course, OpenClaw introduces pitfalls of its own. Security researchers have flagged a number of risks, stemming from its access to sensitive personal data and exposure to untrusted content. Attackers have exploited this through malicious add-ons designed to steal user data, and through prompt injection attacks, where a hidden instruction in an email or webpage can hijack the agent’s behavior.

More generally, modular agent designs make security issues more difficult to manage. With vertically integrated agents, one centralized provider handles the security of the entire system. With more modular designs, users must bear more of that responsibility. Managing these risks will require new infrastructure and safeguards, which are beginning to emerge.

A modular agent market does not serve the interests of today’s platforms, which seek to extend their existing positions. Meta demonstrated as much when it blocked rival AI assistants from WhatsApp in favor of its own. In this case, the Italian Competition Authority and European Commission were swift to intervene, imposing interim measures. Meta’s blocking of rivals was unusually visible, but platforms that never open up in the first place achieve the same outcome without triggering any scrutiny. Ultimately, unless regulators require gatekeepers to provide meaningful access to their platforms, projects like OpenClaw cannot offer a viable alternative.

Steinberger himself has already been hired by OpenAI. Nonetheless, his project provides a blueprint for agent portability, with integrations and data persisting across model switches. The market-leading agents should now be held to a similar standard, so that users can move freely between providers. At a minimum, users should have the right to export their data, including memories, chat histories, and uploaded files, in standardized, human-readable formats.

OpenClaw and its offspring point towards a future where users are in control, without a single firm mediating and monitoring their every move. Platform control can feel inevitable, but the market for AI agents is yet to settle.


