Building an AI Ecommerce Agent with MCP in 2026

This guide shows how to build an AI ecommerce agent that uses MCP to access web tools and speed up product research. It explains the role of MCP, why Python fits the implementation, and how Claude Desktop supports the workflow.
AI ecommerce agent diagram

In this guide, we’ll build an AI ecommerce agent. We’ll use Model Context Protocol (MCP) to give our model external web access so it can perform valuable research and save time. By the time you’re finished with this tutorial, you’ll be able to answer the following questions.

  • What does an AI ecommerce agent do?
  • How does MCP streamline this process?
  • Why implement using Python?
  • Why implement using Claude Desktop?

Why build an AI ecommerce agent?

Shopping online is tedious. Even if you’re just using a single site, your experience is shaped by dynamic pricing, ads and nearly endless inventory. If you’re performing due diligence, it can take hours or even days to find exactly what you’re looking for — that one perfect item.

If you’re shopping for laptops, your typical flow probably looks like this.

  1. Go to a site
  2. Enter a query
  3. Scroll through results
  4. Click on interesting listings

When you’re examining listings in detail, it’s not uncommon to go back to the results page and see that prices have changed. Sometimes this happens within minutes. If you’re checking Amazon, you’re probably checking Walmart afterward. If you want to be really thorough, you might check four or five sites in total. By the time you’ve actually finished, many of the deals are stale or sold out. On top of that, you’ve probably spent hours that you’ll never get back.

This is where an ecommerce agent can solve real problems. An AI agent doesn’t need to make purchases for you — it needs to perform research fast. AI agents can cut our shopping time from days to minutes, so when the best listings come back to us, they’re still active.

How MCP changes ecommerce automation

Before we continue, we need to ask one question: how does an AI model actually use tools? Models can call external tools in a variety of ways. Most of these options rely on either MCP or an orchestration framework.

  • MCP servers: The model uses JSON-RPC to talk to the tool. A server runs and listens for calls. When the LLM calls the server, the tool is executed and its output is forwarded to the model.
  • Orchestration layer: Tools like LangChain and LlamaIndex allow developers to call tools and external storage.
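To make the MCP option concrete, here’s a sketch of what one of those JSON-RPC messages looks like on the wire. The `tools/call` method name and params shape follow the MCP specification, but the tool name and arguments below are purely illustrative.

```python
import json

# A JSON-RPC 2.0 request as an MCP client might send it. The "tools/call"
# method comes from the MCP spec; the tool name and arguments are made up
# for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_engine",  # hypothetical tool name
        "arguments": {"query": "laptops under $300"},
    },
}

# Serialize to the JSON text that actually travels over the connection.
wire_message = json.dumps(request)
print(wire_message)
```

The server parses this text, runs the named tool, and answers with a JSON-RPC response carrying the tool’s output.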

Of the methods listed above, we’ll use an MCP server. Our actual workflow is simple. In the diagram below, the user prompt is shown in blue lines and the response is shown in green.

MCP workflow

As you can see, the prompt flows into the AI model. The model then interprets the prompt and forwards the core of the problem to the web tools (in this case, fetching web data). The tools give output to the model. The model reads the output and generates a readable response on the user end.

There’s one more really important feature of MCP: agnosticism. More specifically, these tools are platform agnostic. An MCP server speaks one language: JSON-RPC — that’s it. The server does not require Python, JavaScript, LangChain or anything else. Almost all modern programming languages support JSON and remote procedure calls. The same MCP server can support all of the following programming environments and more.

  • Python
  • JavaScript
  • C/C++
  • Rust
  • C#
  • Java
  • Perl
  • No-code platforms like n8n

By using a server, the model can communicate using basic tools that come included in most standard programming environments. Our only real dependencies are JSON and HTTP — the same tools that have powered the web for decades.
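To underline how small that dependency surface is, here’s a sketch that prepares such a call using nothing but the Python standard library. The endpoint URL is a placeholder, and we only construct the request here rather than send it.

```python
import json
import urllib.request

# Build (but don't send) an HTTP POST carrying a JSON-RPC payload.
# Everything here is standard library: json for encoding, urllib for HTTP.
payload = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}).encode()

req = urllib.request.Request(
    "https://example.com/mcp",  # placeholder endpoint, not a real server
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

print(req.get_method(), req.get_header("Content-type"))
```

Sending it would be one more line (`urllib.request.urlopen(req)`), still with zero third-party dependencies.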

Building with Python and OpenAI

Now, we’ll build the Python implementation. We’ll break it down into two steps. First comes the runtime loop. Then we’ll define the system prompt. In the final code, it looks like one piece, but it’s very important to separate the runtime from the system prompt because they handle two different concerns.

  • Runtime: This is the shell of the program. It provides a chat interface between the user and the model. The interface then runs in a loop to keep our chat running until the user closes the application.
  • System prompt: This is where our logic lives. In traditional programming, we’d use if/else and deterministic chaining. Instead, we use natural language to tell the model what we want it to do.

Runtime is the scaffolding that holds everything together. Our system prompt tells the model how to work within that scaffolding.

Setting up the development environment

To start, you need to make a new project folder.

mkdir ecommerce-agent
cd ecommerce-agent

Next, initialize a virtual environment.

python -m venv .venv

Now, we need to activate our environment. You can activate on Linux/macOS with the command below.

source .venv/bin/activate

On Windows, we activate the environment slightly differently.

.\.venv\Scripts\Activate.ps1

Finally, we need to install the openai module.

pip install openai

The basic code

Time to build the runtime scaffolding. chat_interface() holds all the connections to the AI model. Pay close attention to the arguments that get passed to the model.

  • instructions: These are instructions given to the model using natural language. The system prompt lives here.
  • previous_response_id: When we pass in the ID of the previous response, we are provided with a continuous chat. Our AI model can now keep track of the conversation instead of starting from zero with each prompt.
  • model: We chose to use gpt-5-mini after several rounds of testing. gpt-5-nano does work but performance is inconsistent due to the amount of reasoning involved.
  • tools: This takes a list of tools. Here, we just pass in the MCP server. For larger projects, you can pass in multiple MCP servers and even tools built using frameworks like LangChain.
from openai import OpenAI

client = OpenAI(api_key="your-openai-api-key")

API_TOKEN = "your-bright-data-api-key"

SYSTEM_PROMPT = """

"""


previous_response_id = None

def chat_interface(prompt: str):
    global previous_response_id

    resp = client.responses.create(
        instructions=SYSTEM_PROMPT,
        previous_response_id=previous_response_id,
        model="gpt-5-mini",
        tools=[
            {
                "type": "mcp",
                "server_label": "BrightData",
                "server_url": f"https://mcp.brightdata.com/sse?token={API_TOKEN}",
                "require_approval": "never",
            },
        ],
        input=prompt,
    )

    previous_response_id = resp.id
    return resp.output_text

RUNNING = True

while RUNNING:
    prompt = input("Input a prompt: ")
    if prompt == "exit":
        RUNNING = False
    else:
        output = chat_interface(prompt)
        print(output)

Our runtime loop is very small. We set RUNNING to True and keep looping. If the user inputs exit, we set RUNNING to False, the loop ends, and the program closes.

The system prompt

Take a look at the system prompt below. This snippet imparts our programming logic to the agent. Instead of using if/else chains, we set clear rules. Then we outline a workflow for the agent using five basic steps. In step five, we also define a schema — this schema enforces consistent output. Without the schema, our end result is inconsistent and often unusable.

You are an e-commerce deal-finding agent.

Core rules:
- Do not invent prices, discounts, shipping costs, availability, or URLs.
- If you need current data, use the BrightData MCP tool and cite the exact page you found it on.
- Prefer tool use over speculation.
- Output must be structured and actionable.

Your job:
1) Clarify the user's deal target ONLY if truly necessary; otherwise proceed.
2) Search relevant sources using the web tool.
3) Extract candidates with price + shipping + seller + condition + link.
4) Evaluate whether each is a "real deal" using:
   - historical/typical price if available on-page
   - competing listings
   - signs of fake markdowns (inflated MSRP, constant “sale”)
   - model age / version traps
5) Return results in this JSON format ONLY:

{
  "target": {"item": "...", "constraints": {...}},
  "queries_run": [{"site": "...", "query": "..."}],
  "candidates": [
    {
      "title": "...",
      "url": "...",
      "price": "...",
      "shipping": "...",
      "seller": "...",
      "condition": "...",
      "notes": "...",
      "deal_score_0_to_100": 0,
      "confidence_0_to_1": 0.0
    }
  ],
  "recommendations": [
    {"action": "buy_now|watch|ignore", "reason": "...", "url": "..."}
  ],
  "next_questions": ["..."]
}

Now, let’s take a look at the objects defined within candidates. Each of these represents a product the user should consider buying. A candidate object holds the following data fields.

  • title: The name of the item.
  • url: The URL of the item listing. Click this link to buy the item.
  • price: The price of the item.
  • shipping: Any relevant shipping information. This field is flexible by design.
  • seller: Any seller information the model sees as relevant.
  • condition: New, used, refurbished, etc.
  • notes: Any additional information about the item for sale.
  • deal_score_0_to_100: A score showing how good the deal actually is. A free laptop should score 100.
  • confidence_0_to_1: A floating point number representing the model’s confidence that the deal is real. A score of 0.9 means the model is confident in the deal; a score of 0.1 suggests a suspicious listing.
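Because the schema is fixed, it’s straightforward to load the agent’s report into typed objects on the Python side. Here’s a minimal sketch — the field names come from the schema above, but the `Candidate` class and `parse_candidates` helper are our own additions, not part of any library.

```python
import json
from dataclasses import dataclass

@dataclass
class Candidate:
    """One listing from the agent's "candidates" array (fields match the schema)."""
    title: str
    url: str
    price: str
    shipping: str
    seller: str
    condition: str
    notes: str
    deal_score_0_to_100: int
    confidence_0_to_1: float

def parse_candidates(report_json: str) -> list[Candidate]:
    """Turn the agent's JSON report into a list of Candidate objects."""
    report = json.loads(report_json)
    return [Candidate(**c) for c in report.get("candidates", [])]

# Example with a stub report in the same shape the agent returns:
sample = json.dumps({"candidates": [{
    "title": "Demo laptop", "url": "https://example.com", "price": "$199",
    "shipping": "Free", "seller": "Demo", "condition": "New", "notes": "",
    "deal_score_0_to_100": 70, "confidence_0_to_1": 0.8,
}]})
candidates = parse_candidates(sample)
print(candidates[0].title, candidates[0].deal_score_0_to_100)
```

A failed `Candidate(**c)` call also doubles as a cheap check that the model actually honored the schema.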

The full code

Below is the full code so you can see how everything fits together. Our API keys are used to connect to both OpenAI and Bright Data. The system prompt programs the agent. Our runtime loop keeps the chat interface running continually.

#ai-ecommerce-agent.py
from openai import OpenAI

client = OpenAI(api_key="your-openai-api-key")

API_TOKEN = "your-bright-data-api-key"

SYSTEM_PROMPT = """
You are an e-commerce deal-finding agent.

Core rules:
- Do not invent prices, discounts, shipping costs, availability, or URLs.
- If you need current data, use the BrightData MCP tool and cite the exact page you found it on.
- Prefer tool use over speculation.
- Output must be structured and actionable.

Your job:
1) Clarify the user's deal target ONLY if truly necessary; otherwise proceed.
2) Search relevant sources using the web tool.
3) Extract candidates with price + shipping + seller + condition + link.
4) Evaluate whether each is a "real deal" using:
   - historical/typical price if available on-page
   - competing listings
   - signs of fake markdowns (inflated MSRP, constant “sale”)
   - model age / version traps
5) Return results in this JSON format ONLY:

{
  "target": {"item": "...", "constraints": {...}},
  "queries_run": [{"site": "...", "query": "..."}],
  "candidates": [
    {
      "title": "...",
      "url": "...",
      "price": "...",
      "shipping": "...",
      "seller": "...",
      "condition": "...",
      "notes": "...",
      "deal_score_0_to_100": 0,
      "confidence_0_to_1": 0.0
    }
  ],
  "recommendations": [
    {"action": "buy_now|watch|ignore", "reason": "...", "url": "..."}
  ],
  "next_questions": ["..."]
}
"""


previous_response_id = None

def chat_interface(prompt: str):
    global previous_response_id

    resp = client.responses.create(
        instructions=SYSTEM_PROMPT,
        previous_response_id=previous_response_id,
        model="gpt-5-mini",
        tools=[
            {
                "type": "mcp",
                "server_label": "BrightData",
                "server_url": f"https://mcp.brightdata.com/sse?token={API_TOKEN}",
                "require_approval": "never",
            },
        ],
        input=prompt,
    )

    previous_response_id = resp.id
    return resp.output_text

RUNNING = True

while RUNNING:
    prompt = input("Input a prompt: ")
    if prompt == "exit":
        RUNNING = False
    else:
        output = chat_interface(prompt)
        print(output)

You can run the agent using the following command.

python ai-ecommerce-agent.py

If you inspect the candidates, you’ll see highly detailed information about each product. The deal and confidence scores are especially useful here: an item that scores high on both is worth looking into further.

"candidates": [
        {
            "title": "Acer Chromebook 315 CB315-4H (15.6" FHD, Intel Celeron N4500, 4GB RAM, 64GB eMMC) - Walmart",
            "url": "https://www.walmart.com/ip/Acer-Chromebook-315-15-6-inch-Laptop-Intel-Processor-N4500-4GB-RAM-64GB-eMMC-Pure-Silver-ChromeOS/3193557250",
            "price": "$159.00 (New) u2014 page also shows "Restored: Premium $149.99" seller options",
            "shipping": "Free shipping (Walmart.com) u2014 shown on product page",
            "seller": "Sold and shipped by Walmart.com (also multiple thirdu2011party sellers listed on page)",
            "condition": "New (standard listing) u2014 alternate restored options available from $149.99",
            "notes": "Walmart product page shows 'Now $159.00, was $229.00' and 'Free shipping' and 'Free 30-day returns'. Large-screen 15.6" FHD Chromebook with N4500/Celeron class CPU and 4GB RAM u2014 reasonable for web/cloud tasks. On-page 'was' price suggests a visible markdown from $229. Multiple seller offers starting near the same price (page shows 'More seller options (32)'). Good budget Chromebook pick if you need a new device. Source: Walmart product page.",
            "deal_score_0_to_100": 80,
            "confidence_0_to_1": 0.9
        },
        {
            "title": "HP - 14" Laptop (Intel Celeron N4500, 4GB, 64GB eMMC) - Best Buy",
            "url": "https://www.bestbuy.com/product/hp-14-laptop-intel-celeron-n4500-2021-4gb-memory-64gb-emmc-pale-rose-gold/JJGTWP63SH",
            "price": "$159.99 (page shows $159.99; 'More Buying Options' lists New & Openu2011Box $149.99u2013$221.32)",
            "shipping": "Not explicitly shown on scraped area (shipping/fulfillment options listed; verify on product page/checkout for exact cost or free pickup options)",
            "seller": "Sold by Best Buy",
            "condition": "New (page also exposes open-box/refurbished options in 'More Buying Options')",
            "notes": "Best Buy page shows the laptop priced at $159.99 with a listed 'Comp. Value: $199.99' and 'Save $40'. This is a lowu2011end new Windows laptop (Celeron N4500 / 4GB / 64GB eMMC) u2014 acceptable for light Windows tasks but limited by RAM/storage. 'More Buying Options' indicates open-box/refurbished inventory at lower prices. Shipping specifics vary by address; Best Buy often offers free inu2011store pickup. Source: Best Buy product page.",
            "deal_score_0_to_100": 75,
            "confidence_0_to_1": 0.8
        },
...
]
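Since the scores are numeric, you can also post-process a report like the one above instead of eyeballing it. Here’s a small sketch — the `shortlist` helper and its threshold defaults are our own additions, not part of the schema.

```python
def shortlist(candidates, min_score=75, min_confidence=0.8):
    """Keep only strong, trustworthy deals, best first.

    Each candidate is a dict shaped like the report schema; the threshold
    values are arbitrary defaults, not part of the schema.
    """
    strong = [
        c for c in candidates
        if c["deal_score_0_to_100"] >= min_score
        and c["confidence_0_to_1"] >= min_confidence
    ]
    return sorted(strong, key=lambda c: c["deal_score_0_to_100"], reverse=True)

# Stub data in the same shape as the report's candidates:
deals = [
    {"title": "A", "deal_score_0_to_100": 80, "confidence_0_to_1": 0.9},
    {"title": "B", "deal_score_0_to_100": 90, "confidence_0_to_1": 0.5},
    {"title": "C", "deal_score_0_to_100": 76, "confidence_0_to_1": 0.85},
]
print([c["title"] for c in shortlist(deals)])  # B is dropped for low confidence
```

Requiring both scores to clear a bar filters out listings that look cheap but smell fake — exactly the trap the system prompt asks the model to flag.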

Building with Claude Desktop

The Claude implementation is very similar, with only a few key structural differences. First, the Claude Desktop application already provides our runtime loop, so there’s no need to build one. Second, we’ll speak with Claude naturally rather than using a rigid system prompt. As a result, our AI ecommerce agent will be more flexible and conversational: our instructions flow through the chat rather than through the startup process.

Before we talk to Claude, we need to plug the MCP server into Claude Desktop. Open File -> Settings -> Developer. Then click the “Edit Config” button to open your configuration file.

Claude Developer settings

Paste the following JSON snippet into the config file. Remember to replace the Bright Data API key with your own.

{
  "mcpServers": {
    "Bright Data": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.brightdata.com/mcp?token=<your-bright-data-api-key>"
      ]
    }
  },
  "preferences": {
    "coworkScheduledTasksEnabled": false,
    "sidebarMode": "chat"
  }
}
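If Claude doesn’t pick up the server, the most common culprit is a JSON typo in the config. A quick way to rule that out is to parse the config yourself — here’s a sketch with a hypothetical `check_config` helper (the config file’s location varies by OS, so this works on the raw text).

```python
import json

def check_config(config_text: str) -> list[str]:
    """Return the MCP server labels found in a Claude Desktop config string.

    json.loads raises a JSONDecodeError if the text isn't valid JSON --
    the usual reason a server silently fails to appear in Claude.
    """
    config = json.loads(config_text)
    return list(config.get("mcpServers", {}))

# Stub config in the same shape as the snippet above:
sample = json.dumps({"mcpServers": {"Bright Data": {
    "command": "npx",
    "args": ["mcp-remote", "https://mcp.brightdata.com/mcp?token=<your-bright-data-api-key>"],
}}})
print(check_config(sample))  # ['Bright Data']
```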

After changing your configuration, open the chat. It’s a good idea to confirm the connection to the MCP server. The prompt below should do just fine.

Are you connected to the Bright Data MCP?

If everything’s connected, Claude will confirm the connection and usually list the tools available to it.

Claude confirms the connection to the MCP server

Using Claude as an AI ecommerce agent

Using Claude as our ecommerce agent is intuitive and ergonomic. We asked Claude to find the best laptops under $300. It then proceeded to do so.

Prompting Claude to find laptops under $300

Next, we tweak our prompt. Rather than a specific price, we simply ask Claude to generate a report of all the best laptop deals on the web.

Adjusting the prompt to generate our reports

Notice the schema we use here. It’s the same schema we used in the Python example. Claude needs to receive this schema; without it, our report data will be much looser and inconsistent.

{
  "target": {"item": "...", "constraints": {...}},
  "queries_run": [{"site": "...", "query": "..."}],
  "candidates": [
    {
      "title": "...",
      "url": "...",
      "price": "...",
      "shipping": "...",
      "seller": "...",
      "condition": "...",
      "notes": "...",
      "deal_score_0_to_100": 0,
      "confidence_0_to_1": 0.0
    }
  ],
  "recommendations": [
    {"action": "buy_now|watch|ignore", "reason": "...", "url": "..."}
  ],
  "next_questions": ["..."]
}

Claude proceeded to research laptops and found us deals for multiple use cases such as “Best Gaming Value,” “Best Overall Laptop,” “Best Chromebook” and “Best Budget.” It also generated a JSON file containing a report of the best deals.

Claude has finished generating our report

Take a look at the candidates from this example. Claude followed the same behavior as GPT-5 mini. Our only big difference came in the shipping and notes sections — this is based on model behavior, not the schema. Claude tends to be more concise while GPT-5 mini tends to elaborate.

"candidates": [
    {
      "title": "ASUS Vivobook 15.6" FHD - AMD Ryzen 5 7520U, 16GB RAM, 512GB SSD",
      "url": "https://www.walmart.com/",
      "price": "$349.99",
      "shipping": "Free",
      "seller": "Walmart",
      "condition": "New",
      "notes": "Excellent value - 16GB RAM at this price is rare. IPS display. Good for everyday computing, students, light work. Was $499.99.",
      "deal_score_0_to_100": 92,
      "confidence_0_to_1": 0.95
    },
    {
      "title": "ASUS Vivobook 15.6" FHD - Intel Core 5 120U, 16GB RAM, 512GB SSD",
      "url": "https://www.walmart.com/",
      "price": "$349",
      "shipping": "Free",
      "seller": "Walmart",
      "condition": "New",
      "notes": "IPS display, newer Intel processor. Multiple reviews confirm deal validity. Strong specs for price point.",
      "deal_score_0_to_100": 90,
      "confidence_0_to_1": 0.9
    },
...
]

Conclusion

As the world of AI agents continues to evolve, automation is flowing more and more toward natural language. In this tutorial, we automated the tedious research of online shopping using two different implementations, but our structure was more or less the same.

Without prompts, both of these AI agents are just AI models that can call tools. Using prompts, we tell the AI agents what to do with those tools. Even in the Python example, most of our programming logic lives within natural language, not in code.

Rather than worrying about coding or syntax, we can now use agentic workflows to focus on architecture and concept. We don’t need to worry about hand-coding HTTP requests. Think of yourself as a coach and your AI agent as a star athlete. As a coach, it’s your job to come up with a game plan. Once you’ve got one, you need to put your players — or AI agents — in a position to execute.