
How to use Firecrawl with n8n?

Learn how to use Firecrawl’s MCP with n8n to build real AI agents that search, scrape, and extract live web data for RAG and automation workflows

Firecrawl allows teams to scrape websites by simply inputting a URL. Its Extract tool lets teams pull structured data using user-defined schemas. Firecrawl also offers a Model Context Protocol (MCP) server.
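As a quick illustration of what a user-defined schema looks like, here is the kind of request body an Extract call takes. This is a sketch based on Firecrawl's public API; the field names and the example schema are assumptions to verify against the current docs, and the request is only constructed here, not sent.

```python
# Sketch of a Firecrawl Extract request body with a user-defined schema.
# The endpoint shape follows Firecrawl's public API docs at the time of
# writing; treat the field names as assumptions and check the current docs.
payload = {
    "urls": ["https://example.com/pricing"],   # pages to extract from
    "schema": {                                # JSON Schema for the output
        "type": "object",
        "properties": {
            "plan_name": {"type": "string"},
            "monthly_price": {"type": "number"},
        },
        "required": ["plan_name"],
    },
}
```

The schema is ordinary JSON Schema: Firecrawl returns structured data shaped to match it instead of raw page text.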

MCP gives us a standardized protocol that AI agents use to communicate with external tools, in this case Firecrawl. With a standard protocol, it’s much easier to build pipelines for AI training and Retrieval-Augmented Generation (RAG).
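Under the hood, MCP messages are JSON-RPC 2.0, so a tool invocation is just a `tools/call` request. The sketch below shows the wire shape; the tool name and arguments are illustrative placeholders rather than a live Firecrawl call.

```python
import json

# Minimal sketch of an MCP tool invocation on the wire (JSON-RPC 2.0).
# The tool name and arguments are illustrative placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "firecrawl_scrape",                  # hypothetical tool name
        "arguments": {"url": "https://example.com"},
    },
}

print(json.dumps(request, indent=2))
```

Because every MCP server speaks this same shape, an agent framework like n8n can plug in Firecrawl's tools without any Firecrawl-specific glue code.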

By the time you’ve completed this tutorial, you’ll know how to do the following.

  • Install n8n locally
  • Create an n8n workflow
  • Build an AI agent using n8n
  • Configure MCP tools using n8n

Getting started

Before we get started, you’ll need a Firecrawl account and you’ll need access to n8n. We’ll go through the process of installing n8n locally.

Setting up n8n

To run n8n locally, you’ll need Node.js or Docker installed. We’ll show you how to install it using Node.js. If you don’t already have Node.js, you can download it from the official Node.js website.

Once you’ve got Node.js, n8n can be installed with npm. We use the -g flag to install the package globally.

npm install n8n -g

Once the installation is finished, you can run n8n with the command below.

n8n start

After you’ve launched the server, you can press o to open n8n up within your browser.

n8n is running locally

You’ll be prompted to create an account when the page opens up.

n8n setup page

Building a workflow

After you’ve created your account, simply create a new workflow to get started. Now, we’ll walk through the steps involved in building that workflow.

Creating a trigger

To begin, we need a trigger. Since we’re building an AI agent, it makes sense to trigger the agent using a chat message — if you can talk to your AI agent as it runs, you can instruct it using natural language.

Adding a chat trigger

Adding an AI agent

After you’ve added the chat trigger, click the + button and search for AI agent. From this agent node, we can configure our model, attach memory and plug in any external tools that we want the model to use.

Adding an AI agent

Configuring your AI agent

Now, it’s time to configure the agent. Open up the agent node and click Add Chat Model near the bottom. You can use whichever model or provider you prefer.

Configuration options

Choosing a model

In the search bar, simply type openai, or the name of whichever model provider you’d like to use. n8n supports many providers; some of them are listed below.

  • Anthropic
  • Azure OpenAI
  • AWS Bedrock
  • Cohere
  • DeepSeek
  • Google Gemini
  • Google Vertex
  • Groq
  • Mistral
  • Ollama
  • xAI Grok

Adding an OpenAI chat model

Now, open the model and click Create a new credential. Next, you’ll be prompted to enter your OpenAI API key.

Adding your OpenAI credentials

We can use the From list option to select a model. OpenAI offers a wide variety of models. In this case, we’re using GPT-5 mini, which gives us a 400,000-token context window without the added cost of full GPT-5.

Selecting a model to use

Adding memory

Now, it’s time to add memory support. Even with a 400,000-token window, it’s important to create a memory instance. This prevents our model from crashing or getting lost mid-task. When interacting with tools and reading web pages, our model receives large amounts of input, so we can easily max out the context window.

We’ll go with simple memory here. There are larger memory tools available but in this case, local memory will suffice. We just don’t want our agent to forget what it’s doing.

When our context limit is reached, memory allows the AI agent to pick up where it left off.
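To see why a sliding window helps, here is a toy sketch of the idea behind simple memory: keep only the most recent turns so the prompt stays inside the context budget. The function and numbers are illustrative, not n8n's actual implementation.

```python
# Toy sketch of sliding-window memory: only the last N turns are kept,
# so the prompt never grows without bound. Illustrative only; n8n's
# Simple Memory node works on a similar windowing idea.
def trim_history(messages, max_turns=5):
    """Return only the most recent max_turns messages."""
    return messages[-max_turns:] if max_turns > 0 else []

history = [{"role": "user", "content": f"message {i}"} for i in range(12)]
recent = trim_history(history)
print(len(recent))  # 5
```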

Adding Simple Memory

Adding the Firecrawl MCP Server

Finally, it’s time to add MCP support. This is where the magic of our AI agent actually comes from. Click the + button to add a tool. Enter mcp into the search bar and choose MCP Client Tool.

Adding an MCP integration

Now, head on over to the Firecrawl Dashboard and get your API key. This key gives the AI agent access to your Firecrawl account.

Getting the Firecrawl MCP configuration

It’s time to configure the MCP client. Paste the URL below into the endpoint field, making sure to replace the placeholder with your own API key.

https://mcp.firecrawl.dev/{YOUR_FIRECRAWL_API_KEY}/v2/mcp

Adding the Firecrawl MCP server URL
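Since the API key is embedded directly in the URL, it's safer when scripting around the MCP server to build the endpoint from an environment variable than to hard-code the key. A minimal sketch, assuming the key is stored in a FIRECRAWL_API_KEY variable:

```python
import os

# Build the Firecrawl MCP endpoint from an environment variable instead
# of hard-coding the key. "fc-YOUR-KEY" is a placeholder fallback.
api_key = os.environ.get("FIRECRAWL_API_KEY", "fc-YOUR-KEY")
endpoint = f"https://mcp.firecrawl.dev/{api_key}/v2/mcp"
print(endpoint)
```

The same caution applies inside n8n: treat the pasted URL as a secret, since anyone who can read it can spend your Firecrawl credits.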

Now, feel free to use the chat interface to test the connection. A simple prompt like the one below should do the job.

Are you connected to Firecrawl?

You should receive a response similar to the one below.

Short answer: yes — in this chat I have access to the Firecrawl tools (search, scrape, crawl, map, extract) and can invoke them to fetch public web content on demand.

Clarifications:

- I do not have a persistent background connection or any personal Firecrawl account — the tools are available in-session and I call them only when you ask.
- Results come from live web access through those tools (subject to the tool's limits and site restrictions).
- If you want, tell me what site or query to run and which tool/format you prefer (search, scrape a page, map a site, crawl a section, or extract structured data) and I’ll run it.

In the figure below, the agent tells us which tools it can access. The AI agent is ready to use.

Testing the AI agent tool connection

Using your Firecrawl AI agent

There’s one thing left to do — give our agent a prompt. We’ll ask the agent to do a search for AI news and give citations. Feel free to adjust the prompt however you’d like.

Can you get me the latest news on AI with citations?

Here’s the output from the AI agent.

Here are the latest AI headlines (Dec 10–11, 2025) with short summaries and source links so you can open each story.

1. TIME speculation — “The Architects of AI” tipped for 2025 Person of the Year

- A viral image and betting odds have raised expectations that “The Architects of AI” (AI builders/companies) may be TIME’s 2025 Person of the Year. This is still reporting on speculation until TIME’s official announcement.
- Source: Hindustan Times — https://www.hindustantimes.com/world-news/us-news/the-architects-of-ai-to-be-named-2025-time-person-of-the-year-viral-image-drops-hint-101765416580482.html

2. Linux Foundation launches Agentic AI Foundation (AAIF)

- The Linux Foundation announced the Agentic AI Foundation to create open standards, governance and tooling for agentic (autonomous agent) systems; the initiative aggregates/aligns open projects (e.g., MCP, goose, AGENTS.md) and has industry backers.
- Source: eeNews Europe — https://www.eenewseurope.com/en/linux-foundation-launches-agentic-ai-foundation-to-drive-open-agent-standards/

3. Samsung + KT validate AI‑RAN on commercial networks (6G development)

- Samsung and South Korea’s KT reported field validation of AI‑based RAN optimization (AI‑RAN) on KT’s commercial network, showing per‑user adaptation and improved connection quality — a step toward 6G capabilities.
- Source / press release: Samsung Newsroom — https://news.samsung.com/global/samsung-electronics-and-kt-corporation-successfully-validate-ai-ran-on-commercial-networks-accelerating-6g-development

4. U.S. DoD launches GenAi.mil and encourages use of generative AI (Google Gemini)

- Defense Secretary Pete Hegseth announced GenAi.mil (powered by Google Gemini) to promote generative‑AI tools for DOD personnel for research, formatting and imagery analysis; the site is intended for service members, contractors and civilian staff.
- Source: Association of Defense Communities — https://defensecommunities.org/2025/12/hegseth-encourages-dod-personnel-to-use-ai/

5. Microsoft Ignite: agentic AI, Foundry, Copilot announcements

- Microsoft’s Ignite recap highlights agentic‑AI initiatives, Claude models in Foundry, Foundry IQ for knowledge retrieval, and next‑gen Copilot agents / cloud developer tooling previews to help companies build agentic systems.
- Source: Microsoft Azure Blog — https://azure.microsoft.com/en-us/blog/actioning-agentic-ai-5-ways-to-build-with-news-from-microsoft-ignite-2025/

6. Market / investment signals — some investors letting AI manage large portions of portfolios (crypto)

- A survey reported growing trust in AI trading bots: about 1 in 3 investors letting AI manage >50% of crypto portfolios, many reporting better short‑term results and broader adoption expectations. (Take surveys with caution.)
- Source: NFT Plazas — https://nftplazas.com/crypto-ai-trading-investing/

If you’d like, I can:

- Pull the full text or a longer summary of any single article (or multiple) and extract key quotes.
- Find additional authoritative coverage (Reuters, AP, NYT, Reuters technology feed) for any of these items. Which story should I expand on?

Our AI agent performed the search and output the results

Conclusion

Firecrawl’s MCP server gives n8n users the power to build real AI agents with access to the web. When we create an AI agent, we need to connect it to a model provider, a memory instance, and a Firecrawl MCP tool. The agent then has the power to search the web, visit sites, and extract structured results.

This basic workflow can be expanded however you’d like. You can chain additional nodes for output, email alerts, social media posts, and even additional AI agents. The MCP server handles the difficult parts of tool calling and maintaining a web connection, so you only need to focus on your workflow logic.