
How to use the Firecrawl MCP in 2026

Step-by-step guide to Firecrawl MCP: connect OpenAI or Claude, configure MCP, and scrape structured data with natural-language prompts
Author Jake Nulty

Today, we’re going to learn how to use Firecrawl’s MCP server. This tutorial assumes you’ve already got access to either the OpenAI API or Claude Desktop.

By the time you’ve finished this tutorial, you’ll be able to answer the following questions.

  • What is Firecrawl?
  • What is Model Context Protocol (MCP)?
  • How can you connect the OpenAI API to the Firecrawl MCP?
  • How can you connect Claude Desktop to the Firecrawl MCP?

What is Firecrawl?

Firecrawl home page

If you don’t already have one, you’ll need to create an account with Firecrawl. They offer a variety of pricing plans based on usage, and their free plan is more than sufficient for today’s tutorial: you get 500 API credits for free, no credit card required. You can learn about their other pricing plans here.

Once you’ve got an account, head over to their dashboard. Scroll down the page and you should see two boxes on the right-hand side: API Key and MCP Integration. Keep both in a safe place.

Getting your API keys and MCP settings

For the rest of the tutorial, we’ll use the MCP Integration configuration shown below.

{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "your-firecrawl-api-key"
      }
    }
  }
}

What is MCP?

Before we dive in, we need a better understanding of the Model Context Protocol (MCP). MCP lets AI models communicate with external tools through an MCP server. Imagine you ask your AI assistant to solve a simple math problem: 1+1. Most humans know innately that the answer is 2. However, we don’t want the model to answer from its training data alone. We want it to actually work through and solve the problem.

Our prompt is likely something simple like the one below.

what is 1+1?

This small prompt initiates a complex chain of events. Our model needs to do all of the following, in order.

  1. Infer that the user wants to solve a math problem.
  2. Extract the problem (1+1) from the prompt.
  3. Decide to use the calculator tool.
  4. Enter the problem into the calculator.
  5. Read the output from the calculator.
  6. Forward the answer back to the user as a chat response.
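
The chain above can be sketched offline in plain Python. The regex and the `calculator()` function here are stand-ins for the model’s reasoning and a real tool, not part of any MCP library:

```python
import re
from typing import Optional


# A toy calculator tool standing in for a real MCP tool.
def calculator(a: int, b: int) -> int:
    return a + b


def solve_math_prompt(prompt: str) -> Optional[int]:
    # Steps 1-2: infer that there's a math problem and extract it.
    match = re.search(r"(\d+)\s*\+\s*(\d+)", prompt)
    if match is None:
        return None  # no math problem found
    # Steps 3-5: decide to use the calculator, call it, read its output.
    a, b = int(match.group(1)), int(match.group(2))
    return calculator(a, b)
```

With a real MCP server, the model performs the "infer, extract, decide" steps itself; the server only supplies the tool.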

This tool-calling process isn’t limited to calculators. As reasoning improves, the chain of events can scale to far more complex tasks. With reasoning and memory, our model can fetch and extract web data using MCP.

Here’s what our model needs to do using the Firecrawl MCP.

  1. Interpret a user prompt asking to scrape a specific website.
  2. Extract the URL from the prompt.
  3. Use the Firecrawl MCP to fetch the URL.
  4. Extract structured data from the fetched page.
  5. Return the structured data to the user.
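
Step 2 can be pictured in miniature as pattern matching. This is purely illustrative; the model extracts the URL via reasoning, not a regex:

```python
import re

# Match either a full http(s) URL or a bare domain like books.toscrape.com.
URL_PATTERN = re.compile(r"https?://\S+|(?:[\w-]+\.)+[a-z]{2,}\S*")


def extract_url(prompt: str):
    """Pull the first URL-like token out of a user prompt."""
    match = URL_PATTERN.search(prompt)
    # Strip trailing sentence punctuation from the match.
    return match.group(0).rstrip("?.,!") if match else None
```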

Using the Firecrawl MCP from Python

To start, you’ll need to install the OpenAI Python library. You can install it using pip.

pip install openai

Interfacing with the AI model

In the first portion of our code, we import the package and create a chat_interface() function. This function allows us to talk to any OpenAI model. In this example, we use GPT-5 mini. However, feel free to choose any model you want. You can view their list of available models here.

from openai import OpenAI

client = OpenAI(api_key="your-openai-api-key")

API_TOKEN = "your-firecrawl-api-key"


def chat_interface(prompt: str):
    resp = client.responses.create(
        model="gpt-5-mini",
        tools=[
            {
                "type": "mcp",
                "server_label": "Firecrawl",
                "server_url": f"https://mcp.firecrawl.dev/{API_TOKEN}/v2/mcp",
                "require_approval": "never",
            },
        ],
        input=prompt,
    )
    return resp.output_text

Also take note of the tools list we pass into client.responses.create(). Because it takes a list, developers can connect multiple tools to the same AI agent; here we only need one. This configuration differs from the MCP config in the Firecrawl dashboard: here we point the OpenAI client at Firecrawl’s hosted MCP server over HTTP, whereas the dashboard config tells a desktop client like Claude how to launch a local MCP server with npx.
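
Since tools takes a list, you could register additional MCP servers alongside Firecrawl. The second entry below is hypothetical (placeholder label and URL) and shown only to illustrate the shape:

```python
API_TOKEN = "your-firecrawl-api-key"

# A list of MCP tool configs; the second server is a hypothetical placeholder.
tools = [
    {
        "type": "mcp",
        "server_label": "Firecrawl",
        "server_url": f"https://mcp.firecrawl.dev/{API_TOKEN}/v2/mcp",
        "require_approval": "never",
    },
    {
        "type": "mcp",
        "server_label": "other-mcp",               # hypothetical
        "server_url": "https://example.com/mcp",   # placeholder URL
        "require_approval": "never",
    },
]
```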

Creating a runtime

Now that we’ve got a function for calling our AI model, we need to create a runtime so we can use the chat program. This is relatively straightforward. We use a simple while loop to set up repeated calls to chat_interface(). If the user types exit, we exit the program.

RUNNING = True

while RUNNING:
    prompt = input("Input a prompt: ")
    if prompt == "exit":
        RUNNING = False
    else:
        print(chat_interface(prompt))

Full Python code

You can view the full code below. Remember to replace the OpenAI and Firecrawl API keys with your own. We’ll name this file firecrawl-mcp-agent.py. Feel free to name it whatever you like. Just make sure to use the .py extension.

from openai import OpenAI

client = OpenAI(api_key="your-openai-api-key")

API_TOKEN = "your-firecrawl-api-key"


def chat_interface(prompt: str):
    resp = client.responses.create(
        model="gpt-5-mini",
        tools=[
            {
                "type": "mcp",
                "server_label": "Firecrawl",
                "server_url": f"https://mcp.firecrawl.dev/{API_TOKEN}/v2/mcp",
                "require_approval": "never",
            },
        ],
        input=prompt,
    )
    return resp.output_text


RUNNING = True

while RUNNING:
    prompt = input("Input a prompt: ")
    if prompt == "exit":
        RUNNING = False
    else:
        print(chat_interface(prompt))

When we run the file, the chat loop begins. In the image below, we ask the model if it’s connected to the Firecrawl MCP. The model responds, confirming that it has access to the MCP.

Checking the connection

Now, we’ll ask it to extract books from the first page of Books to Scrape. The image below only contains our first result but the schema is clear.

The OpenAI model extracted books from the page

The AI model extracts the following data for each book.

  • title: The title of the book.
  • url: The URL of the listing.
  • price: The price of the book.
  • availability: Whether or not the book is in stock.
  • image: The URL of the image used for the book.
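
The schema above maps cleanly onto a small record type. This is a sketch of the output shape, not Firecrawl code; the sample values mirror the first listing on Books to Scrape, and the image URL is a placeholder:

```python
from dataclasses import dataclass


# One extracted book record, mirroring the fields listed above.
@dataclass
class Book:
    title: str
    url: str
    price: str
    availability: str
    image: str


sample = Book(
    title="A Light in the Attic",
    url="https://books.toscrape.com/catalogue/a-light-in-the-attic_1000/index.html",
    price="£51.77",
    availability="In stock",
    image="https://books.toscrape.com/media/example.jpg",  # placeholder path
)
```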

Using the Firecrawl MCP with Claude

Next, we’ll plug into the Firecrawl MCP using the Claude Desktop app. If you haven’t already, start the app. Open your Claude settings.

Opening the Claude settings
Opening the Claude settings

Click Developer on the sidebar and then click the Edit Config button.

Opening your config file
Opening your config file

This will open up a JSON file that Claude uses to read your settings. Paste the following block into the file. Make sure to replace the Firecrawl API key with your own.

{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "your-firecrawl-api-key"
      }
    }
  }
}

Save the file, then go back to Developer settings and click the Reload MCP Configuration option. This reloads Claude Desktop with our new MCP settings.

Reload MCP Configuration
Reload MCP Configuration

Now, if you look at the Developer settings, you should see firecrawl-mcp listed on the page.

Firecrawl MCP shows up in our Developer settings
Firecrawl MCP shows up in our Developer settings

Now, we’ll test our connection with a basic prompt.

Are you connected to the firecrawl mcp?

As you can see below, the model confirms our connection and lists the different operations it can perform using the Firecrawl MCP.

The AI model confirms the connection
The AI model confirms the connection

Now, we’ll extract our books using the prompt below. Depending on your model, including the full URL in the prompt can save reasoning steps and reduce errors.

Can you extract books from the first page of books.toscrape.com?

Shortly after our prompt, the model gives us a list of the books it extracted from the site. Our list here only holds titles and prices.

Claude's list of extracted books
Claude’s list of extracted books

To change the schema, we can simply tell Claude our desired schema. In the image below, we change our prompt slightly and Claude responds immediately.

Claude extracts the data based on our defined schema
Claude extracts the data based on our defined schema

When Claude has finished, we get a file ready to open in our text editor.

Claude presents the finished JSON file to us
Claude presents the finished JSON file to us

Conclusion

MCP is quickly changing the development landscape. In the past, we would’ve written manual extraction logic for this job. Now, whether we’re extracting data from a development environment or from an AI desktop app, the real logic lives in our prompt. The rest is just scaffolding.

When we want to change the program logic, we simply change the prompt. Rather than hardcoding a URL, we just tell the model which URL we want to scrape. Modern AI models can infer data structure with strong accuracy. However, we did need to define the full schema when using Claude. Once again, this wasn’t a hardcoded development task, just an adjustment to the prompt.

Using MCP, you can build almost anything with minimal code. Most of your actual program logic is written using natural language and the model handles it from there.

Written by

Jake Nulty

Independent Software Developer & Writer

Jacob is a software developer and technical writer with a focus on web data infrastructure, systems design and ethical computing.
