We’re in a transition period. In the old internet, there was a clear boundary: GUIs were for humans, and APIs were for machines. Anything else was considered misuse. If a machine tried to interact with the GUI, we called it a bot - and built bot protection to stop it. ‘Bots are bad’ was the zeitgeist, and machines weren’t supposed to touch the interface meant for humans. But that line is now being blurred by AI agents. Agents can use both GUIs and APIs, and that opens up a new kind of interaction model (we wrote about this trend six months ago in “The Great GUI Overhaul”).
But before we dive into the implications of this trend, let’s talk about why it’s happening. Why were bots once bad, but AI agents are now good?
The answer is: inevitability.
ChatGPT already has over 800 million users. This is the fastest consumer adoption wave in history. Unlike the traditional internet experience - where users start with Google and then jump off to different websites or apps - ChatGPT encapsulates the entire user journey. You start with ChatGPT, and you expect to finish there. If you’re shopping, you want to discover and complete the purchase within the same interface. If you’re researching a trip, you expect all the recommendations and bookings to happen inside ChatGPT. Users don’t want to switch contexts, and the importance of the traditional GUI drops significantly.
Because of this rapid dominance of chat-based and agentic interfaces, if you operate a website, whether you are a merchant or a news outlet, you no longer have a choice. You need to open it up to agents - what we used to call bots - or you risk becoming invisible. The core of this shift is the changing entry point of the internet: from Google to ChatGPT. And with that, the expectations of how the internet should feel and function are changing too.
APIs vs. GUI
Now that we understand why this is happening - agents beginning to use the entire internet - let’s look at how it’s happening. There are two primary routes machines can take to interact with the web: the machine route (APIs) and the human route (browsers and GUIs).
The API infrastructure was designed with machines in mind. It’s structured, documented, and predictable - so it naturally suits agents. The new agentic tool protocol, MCP, is picking up fast, and it is often built on top of existing APIs as an abstraction layer (sometimes that works well, sometimes it doesn’t; we’ll probably need to rethink APIs in many areas, but that’s a separate discussion).
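To make the “MCP as an abstraction layer over an existing API” point concrete, here is a minimal sketch using the FastMCP helper from the official Python MCP SDK. The store endpoint, parameters, and response fields are hypothetical; the point is only that the MCP tool is a thin wrapper over a REST call the agent can invoke directly.

```python
# Minimal sketch of an MCP server wrapping an existing REST API.
# Assumes the official Python MCP SDK (`pip install mcp`) and httpx;
# the store endpoint and response fields are hypothetical.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-store")

@mcp.tool()
def search_products(query: str, max_results: int = 5) -> list[dict]:
    """Search the store's (hypothetical) product API and return matching items."""
    resp = httpx.get(
        "https://api.example-store.com/v1/products",  # hypothetical endpoint
        params={"q": query, "limit": max_results},
        timeout=10.0,
    )
    resp.raise_for_status()
    # Pass through only the fields an agent needs to reason about.
    return [
        {"id": p["id"], "name": p["name"], "price": p["price"]}
        for p in resp.json()["items"]
    ]

if __name__ == "__main__":
    mcp.run()  # serve over stdio so an agent host can connect to it
```

An agent host connected to this server sees `search_products` as a callable tool, without ever touching the website’s GUI.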
The GUI, on the other hand, has long been treated as off-limits for machines - associated with scraping, automation, and other “bad behavior.” Currently, agent access to GUIs is managed through individual licensing agreements. OpenAI has entered into such agreements with several major media organizations, including the Financial Times, The Guardian, Reddit, and more. These deals allow OpenAI to use their content for training AI models and for integrating information into ChatGPT responses (aka “RAG”). A market seems to be developing for incorporating real-time information into AI responses - whether through individual deals or systematized solutions like Tolbit - but these solutions only cover information handling. If you don’t have such arrangements in place, you’re stuck with bot protection solutions that gate access (which is why many browser agents are getting blocked).
The current solutions are mostly limited to extracting information from websites and feeding it into ChatGPT responses. But that’s just a narrow slice of what agents will actually do - especially in areas like commerce, where the interaction goes beyond information and into transactions. What happens when you’re a merchant and an agent wants to buy directly from you? How do you know whether it’s a legitimate agent or a bot scraping prices? How do you know whom to let in? Currently, there are no real solutions for this.
So the challenges with respect to commerce and bot/agent access are:
Many platforms lack APIs or are not designed for agent integration, making GUIs the more accessible option. However -
Existing bot protection systems often hinder agents attempting to interact with GUIs. As a result, while new browser-based agents are emerging regularly, their operations are constrained by traditional bot protection paradigms.
Blocking Good Agents Costs Merchants Real Money
Imagine the following situation: a browser agent tries to book a $240 Rome-to-Paris flight on some-travel-site.com. The agent fills in the form and payment credentials (it can use nekuda.ai for that), but the site’s bot manager spots headless-browser fingerprints and blocks the checkout page. Even sites that do let the agent through often fail at the payment step: 3-D Secure or Stripe Radar flags the card because the user’s IP, device, and browser telemetry don’t line up. Result – the seat stays unsold, the airline never sees the intent, and the user is bounced back to “sorry, couldn’t complete purchase.”
What we need is an “agent passport” that travels with every request across what used to be the bot-detection pipeline – now the agent-approval gate. The passport proves that this is an agent with a positive contribution (on average). Gateways read that data alongside identity signals and decide in milliseconds: good agent, let it through; unknown scraper, block. One check, and real commerce flows while fraud stays out.
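No such passport standard exists today, so the following is purely a hypothetical sketch of what an agent-approval gate could do with one: read a passport identifier from a request header, look it up in a shared registry, and make a fast allow/block decision. The header name, passport fields, and thresholds are all made up for illustration.

```python
# Hypothetical sketch of an "agent passport" check at the approval gate.
# Nothing here is a real standard: the header name, passport fields,
# and registry lookup are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AgentPassport:
    agent_id: str          # stable identity of the agent
    operator: str          # who runs the agent (e.g. an agent platform)
    reputation: float      # 0-10 score based on past behavior / spend
    payment_capable: bool  # can it actually complete a purchase?

# Stand-in for a shared registry the gateway would query.
KNOWN_AGENTS = {
    "agent-123": AgentPassport("agent-123", "example-platform", 8.5, True),
}

def approve_request(headers: dict) -> bool:
    """Decide in one check whether to let the request through."""
    agent_id = headers.get("X-Agent-Passport-Id")  # hypothetical header
    if agent_id is None:
        return False  # no passport: fall back to classic bot handling
    passport = KNOWN_AGENTS.get(agent_id)
    if passport is None:
        return False  # unknown scraper: block
    # Good agent: known identity, decent reputation, able to pay.
    return passport.reputation >= 7.0 and passport.payment_capable

print(approve_request({"X-Agent-Passport-Id": "agent-123"}))  # True
print(approve_request({}))                                    # False
```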
This brings us to a pivotal player in the agent economy: Cloudflare.
Cloudflare: Perfectly Positioned
Founded in 2009, Cloudflare has grown into a leading web infrastructure and security company, offering services such as content delivery networks (CDNs), DDoS mitigation, and internet security solutions. Cloudflare dominates the global market for bot protection software, with a market share of over 80%.
Cloudflare has been exploring the value of enabling bot traffic for several years. In 2022, it introduced the concept of “Friendly Bots” - a framework that allows developers to mark their own bots as safe so they aren’t blocked by default, eliminating the need for manual approval or complex configuration. You can see the list of verified bots in the Cloudflare app.
There are currently only ~232 verified bots across 15 categories, and only a handful of AI agents are whitelisted. Where are the commerce-enabled bots? And when will we start calling them agents instead of bots? It definitely feels like early days. This list reflects the mindset of “all bots are bad, but we can exempt a trusted few” - not a shift toward “agents are good by default.”
Another key initiative Cloudflare is pushing forward is the use of cryptography to verify bot and agent traffic. The goal is to assign each agent or bot a unique, verifiable identity - creating an internet-wide identity layer based on cryptographic keys, which are used to produce authenticated signatures on requests made by agents. This approach is far more reliable than today’s methods, which rely on signals like IP addresses that are non-deterministic and easily spoofed. A cryptographic identity system would make it much easier to distinguish between different types of internet participants - whether malicious bots or helpful agents. Widespread adoption of this standard could mark the starting point for a more trusted and agent-aware web.
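To show the general idea (not Cloudflare’s actual specification), here is a sketch using Ed25519 keys from Python’s cryptography package: the agent signs the request with its private key, and the site verifies the signature against the agent’s published public key - a deterministic check, unlike IP-based heuristics.

```python
# Sketch of the general idea behind cryptographic agent identity:
# the agent signs each request with its private key, and the site
# verifies the signature against the agent's published public key.
# This illustrates the concept only; it is not Cloudflare's actual spec.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The agent operator generates a key pair once and publishes the public key
# (e.g. in some well-known registry -- an assumption for this sketch).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Agent side: sign the parts of the request that identify it.
request_line = b"GET /flights?from=FCO&to=CDG HTTP/1.1"
signature = private_key.sign(request_line)

# Site / gateway side: verify the signature before deciding to let it in.
try:
    public_key.verify(signature, request_line)
    print("signature valid: request really came from this agent identity")
except InvalidSignature:
    print("signature invalid: treat as unverified traffic")
```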
Given its extensive reach and influence, Cloudflare is uniquely positioned to shape the future of agent interactions on the web. By adapting its bot protection mechanisms to distinguish between malicious bots and beneficial agents, Cloudflare could facilitate a more agent-friendly internet environment. We like to think of them as the Keymaker from The Matrix - controlling the doors to the web for agents (last Matrix analogy, we promise!).
Even if Cloudflare’s identity standard for agents takes hold, a key problem remains: how do we know if an agent is “good”? Maybe it’s not a yes-or-no question. Maybe we need a spectrum - from great agents (10), to decent ones (7), to harmful ones (3).
This isn’t just a technical challenge - it’s an economic one. We need new ways to underwrite agents: looking at their payment flows, estimating their expected value, and identifying malicious behavior (like constantly pinging a pricing endpoint to scrape data). Is this agent a buyer - or just a browser?
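As a toy illustration of what that underwriting could look like, here is a sketch that scores an agent on made-up behavioral signals - purchase rate versus pricing-endpoint pings. The weights are arbitrary; the point is that the score is economic, not purely technical.

```python
# Toy sketch of "underwriting" an agent on observed behavior.
# The signals and weights are made up; the idea is that the score reflects
# economic value (does this agent buy?) rather than technical fingerprints.
def agent_score(purchases: int, total_requests: int, pricing_pings: int) -> float:
    """Return a 0-10 score: buyers score high, pure scrapers score low."""
    if total_requests == 0:
        return 5.0  # no history yet: neutral
    buy_rate = purchases / total_requests
    ping_rate = pricing_pings / total_requests
    raw = 5.0 + 50.0 * buy_rate - 5.0 * ping_rate  # arbitrary illustrative weights
    return max(0.0, min(10.0, raw))

print(agent_score(purchases=3, total_requests=40, pricing_pings=5))    # buyer-ish, ~8.1
print(agent_score(purchases=0, total_requests=500, pricing_pings=480)) # scraper-ish, ~0.2
```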
To Summarize:
The GUI/API Boundary Is Breaking Down: Historically, GUIs were for humans and APIs for machines - but AI agents are now using both, forcing a rethink of interaction models and access policies.
Chat Interfaces Are the New Entry Point: With platforms like ChatGPT handling end-to-end user journeys, websites must open up to agents or risk becoming irrelevant.
Browser Agents Are Growing, but Blocked: Most platforms lack agent-friendly APIs, and existing bot protection systems are blocking legitimate agent use of GUIs, stalling meaningful commerce.
Cloudflare Is the Gatekeeper: With >80% market share in bot protection, Cloudflare is positioned to shape the future of agent access - through frameworks like Friendly Bots and cryptographic identity standards.
We Need Economic Insight, Not Just Identity: Even with verifiable identities, websites need ways to measure an agent’s value - are they buyers or browsers?
If you’re building agents and want to give them payment capabilities, we can help. Feel free to reach out here or through Substack here: