
How to keep your AI agents unblocked: Complete tutorial with unlocker and browser API

Learn to architect resilient AI agents that can navigate complex web defenses by combining lightweight APIs with heavy-duty browser automation.
Author Jake Nulty
Using the Browser API

Using the MCP server

To unify these tools, we configured the MCP server in Claude Desktop, which spared us from writing the routing logic by hand.

Here are the steps:

  1. Download and install Claude Desktop
  2. Get your API token:
    1. Go to Bright Data user settings.
    2. Copy your API token
  3. Configure your MCP server
    1. Open Claude Desktop
    2. Go to: Settings → Developer → Edit Config
    3. Add this to your claude_desktop_config.json:

{
    "mcpServers": {
        "Bright Data": {
            "command": "npx",
            "args": ["@brightdata/mcp"],
            "env": {
                "API_TOKEN": "<replace_with_your_api_token>",
                "WEB_UNLOCKER_ZONE": "<replace_with_your_web_unlocker_zone>",
                "BROWSER_ZONE": "<replace_with_your_browser_zone>"
            }
        }
    }
}

  4. Save and restart the Claude Desktop application.

Once configured, we could prompt the agent with natural language. Ask Claude: 

Get the product title, price, and availability from this Amazon page:

Claude will first attempt the request on its own and fail; it will then ask for your permission to use the tools you configured. Once you grant permission, the agent retrieves the requested information from Amazon, automatically deciding when to use the browser for searching and when to use the API for extraction.

You can also try complex workflows.

Advanced Patterns: Optimizing performance

Through our testing, we identified several architectural patterns that improve reliability and reduce costs.

Pattern 1: The session handoff

Complex sites often require setting specific session state, such as selecting a “delivery location” or dismissing a “first-time user” modal, before the correct data is displayed.

A cost-effective strategy is to use the Browser API only for the setup phase.

  1. Browser API: Navigates to the site, handles the modal/location selection and solves any initial CAPTCHAs.
  2. Extraction: The agent extracts the session cookies (session_id, preferences).
  3. Handoff: The agent passes these cookies to the cheaper Unlocker API to scrape the actual data pages.
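The handoff step can be sketched as a small helper. The function below is real, runnable logic; the surrounding workflow (a browser session yielding cookies, an Unlocker-style request that accepts them) is assumed, and the variable names in the usage comment are placeholders rather than documented API:

```python
def cookies_to_header(cookies: dict) -> str:
    """Serialize cookies captured during the browser setup phase into a
    Cookie header that the cheaper, lightweight API request can reuse."""
    return "; ".join(f"{name}={value}" for name, value in cookies.items())

# Hypothetical usage once the browser phase has run:
#   cookies = {c["name"]: c["value"] for c in browser_context.cookies()}
#   headers = {"Cookie": cookies_to_header(cookies)}
#   resp = session.get(product_url, headers=headers)  # cheaper Unlocker path
```

Because the session state lives entirely in the cookie values, the expensive browser only runs once per session rather than once per page.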

Pattern 2: Soft-block detection

We found that a “200 OK” status code can be misleading. Sophisticated sites often return a successful HTTP status while serving a “Please verify you are human” page.

To prevent data poisoning, we implemented a validation layer:

def validate_response(html_content):
    """Return False if the page looks like a soft block rather than real data."""
    suspicious_phrases = ["verify you are human", "access denied", "please wait"]
    if any(phrase in html_content.lower() for phrase in suspicious_phrases):
        return False
    return True

If validation fails, our logic triggers a retry with a new IP or escalates to the Browser API.
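That retry-then-escalate logic can be sketched as a small wrapper. The two fetch callables are placeholders for whatever Unlocker and Browser API clients you actually use:

```python
def fetch_with_escalation(url, fetch_unlocker, fetch_browser, validate, max_retries=2):
    """Try the cheap unlocker path first; if validation keeps failing
    (a soft block), escalate to the full browser path.

    fetch_unlocker / fetch_browser are placeholders for your real
    clients; each takes a URL and returns HTML.
    """
    for attempt in range(max_retries):
        html = fetch_unlocker(url)  # each retry can rotate to a fresh IP
        if validate(html):
            return html
    # Still soft-blocked after retries: fall back to the heavier browser.
    return fetch_browser(url)
```

Keeping the validator injectable means the same wrapper works whether the block page says “verify you are human” or something site-specific.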

Pattern 3: The “Diet” Browser

When using the Browser API, we reduced bandwidth costs by intercepting and aborting requests for non-essential resources like high-res images, video ads and custom fonts. This reduced data usage by about 60% in our tests.
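As a sketch of this interception pattern: the resource-type names below match Playwright's `request.resource_type` values, and the wiring function assumes a Playwright sync-API page object (only the pure filter is exercised without a live browser):

```python
# Resource types we treat as non-essential for data extraction.
BLOCKED_RESOURCE_TYPES = {"image", "media", "font"}

def should_block(resource_type: str) -> bool:
    """Decide whether a request should be aborted to save bandwidth."""
    return resource_type in BLOCKED_RESOURCE_TYPES

def install_diet_filter(page):
    """Attach the filter to a Playwright sync-API page object.
    (Assumes Playwright; route/abort/continue_ follow its sync API.)"""
    def handle(route):
        if should_block(route.request.resource_type):
            route.abort()      # drop images, video, fonts
        else:
            route.continue_()  # let documents, scripts, XHR through
    page.route("**/*", handle)
```

Documents, scripts and XHR still load normally, so dynamic pages keep working while the heavy assets that dominate bandwidth are dropped.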

Real-world use case: Complex travel aggregation

This architecture can be tested in a real-world scenario, such as a travel analytics tool, where you would need to aggregate flight prices from a travel aggregator. Those travel aggregator sites require complex interactions: selecting “Multi-city” from a dropdown, choosing dates via a calendar widget and waiting for a dynamic list of results that loads via infinite scroll.

The Solution:

  • Infrastructure: Deploy the Browser API to handle the UI interactions (calendar/dropdowns).
  • Workflow: The agent connects to the remote browser, executes the search parameters and scrolls to load results.
  • Result: The browser session mimics a human user’s journey, generating the cookies and fingerprints needed to keep the session valid, which simple HTTP requests fail to do.
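The infinite-scroll step can be factored into a small loop that keeps scrolling until the result count stops growing. `get_count` and `scroll` are placeholders for real browser calls (e.g. counting rendered result cards and sending one scroll-and-wait gesture):

```python
def scroll_until_stable(get_count, scroll, max_rounds=10):
    """Scroll the results list until no new items load, or until
    max_rounds is exhausted. get_count() returns the number of results
    currently rendered; scroll() triggers one scroll-and-wait step.
    Both are placeholders for real browser calls."""
    last = get_count()
    for _ in range(max_rounds):
        scroll()
        current = get_count()
        if current == last:   # nothing new loaded: we've hit the bottom
            return current
        last = current
    return last
```

The `max_rounds` cap matters in practice: some aggregators load results indefinitely, and an uncapped loop would burn browser time without improving the dataset.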

Alternative solutions and tool selection

While we chose Bright Data for this tutorial due to its unified MCP support, it is worth weighing the alternatives when selecting your stack.

  1. Open source frameworks (Selenium/Playwright): They are free, highly customizable, and have a large community. However, they require you to build and manage your own proxy infrastructure and unblocking logic. Scaling this locally often leads to immediate IP bans.
  2. Managed browser clouds (e.g., Browserbase): They are excellent for “browser-first” workflows with strong debugging tools. However, they often lack a dedicated, lightweight “Unlocker” equivalent for simple requests, which can make high-volume data extraction more expensive.
  3. Unblocking APIs (e.g., ZenRows): They offer strong capabilities for retrieving static HTML without blocks. However, they may offer less granular control over complex browser orchestration compared to dedicated browser grids.

Conclusion

Building an unblocked AI agent is more about adopting the right architecture than about the AI itself. Through our implementation, we learned that distinguishing between simple extraction and complex interaction is important for building scalable AI agents.

By using a tiered approach, starting with lightweight APIs and escalating to full browsers only when necessary, developers can build agents that are resilient to modern web defenses without blowing their budget.

Written by

Jake Nulty

Software Developer & Writer at Independent

Jacob is a software developer and technical writer with a focus on web data infrastructure, systems design and ethical computing.
