
TL;DR: Best OpenClaw Search Providers
| Provider | What it does |
|---|---|
| Firecrawl | Search, scrape, structured extraction, and autonomous web research |
| Brave Search | Independent web index, privacy-first, officially recommended |
| Exa | Neural and semantic search with built-in content extraction |
| Gemini | AI-synthesized answers with Google Search grounding and citations |
| Grok | AI-synthesized answers via xAI grounding, plus X post search |
| Kimi | AI-synthesized answers via Moonshot web search, dual-region support |
| MiniMax | Structured search results via Coding Plan API, dual-region support |
| Tavily | AI-optimized search with structured answers and answer extraction |
| Perplexity | Structured results or AI-synthesized answers with domain filtering |
| DuckDuckGo | Key-free experimental search, no account required |
| SearXNG | Self-hosted metasearch, no API key, no query limits |
OpenClaw does not come with one built-in search engine. It comes with a choice. The web search provider you pick determines whether your agent gets titles and snippets or full page content, whether it can reach JavaScript-heavy sites, and what you pay per query. That choice matters more than most setup guides suggest.
These are the best OpenClaw search providers available today, covering every combination of cost, capability, and privacy from fully managed APIs to zero-dependency local deployments.
What are OpenClaw search providers?
OpenClaw's web_search tool accepts a query and returns results from whichever provider you have configured. At the configuration level, a provider is just a provider key under tools.web.search plus an API key. At the agent level, it is the difference between your agent finding pages and your agent actually reading them.
All eleven providers on this list are native integrations: they are configured directly in openclaw.json under tools.web.search.provider and plug into the web_search tool that OpenClaw exposes to your agent automatically. Two of them (Firecrawl and Tavily) additionally ship dedicated plugin tools alongside the generic web_search integration.
Where they differ is in what they return: some give structured result lists (titles, URLs, snippets), others return AI-synthesized answers with citations, and a few support built-in content extraction alongside search results.
1. Firecrawl
Firecrawl is the only OpenClaw search provider that can search, interact, extract structured data, and run autonomous web research from a single integration.
Firecrawl is now a first-class web_search provider in OpenClaw. Set FIRECRAWL_API_KEY and it plugs directly into the web_search tool alongside Brave, Gemini, and the rest — no CLI skill required to use it as a search provider.
> Firecrawl is now a default search provider in @openclaw
>
> Your AI assistant can search and return full results from any page on the web 🔥
>
> — Firecrawl (@firecrawl) March 31, 2026
Every other provider on this list does one thing: return search results. Firecrawl does four. OpenClaw supports it as the web_search provider, as dedicated plugin tools (firecrawl_search and firecrawl_scrape), as a fallback extractor for web_fetch, and as the firecrawl_agent for autonomous multi-step web research. The full integration is documented at docs.openclaw.ai/tools/firecrawl. When your agent needs to find information, extract specific fields from a page, convert a site to structured JSON, or autonomously gather data across multiple sources without being told exactly where to look, Firecrawl handles all of it.
The extraction engine supports multiple output formats: clean markdown, structured JSON with a custom schema, raw HTML, and onlyMainContent mode that strips navigation and boilerplate. That format flexibility means your agent can pull an entire article as markdown for summarization, extract a pricing table as JSON for comparison, or feed raw HTML to a downstream parser. The smart caching layer (configurable via maxAgeMs, defaulting to 2 days) means repeat fetches of the same page cost nothing.
- firecrawl_search: Web search with `sources`, `categories`, and optional `scrapeResults` to return full page content alongside results in one call
- firecrawl_scrape: Direct URL extraction with format control: markdown, JSON with schema, raw HTML, or main-content-only; proxy modes `basic`, `stealth`, and `auto`
- firecrawl_agent: Autonomous web extraction using natural language: give it a goal, it plans and executes multi-step research across the web without hand-holding
- web_fetch fallback: When Readability fails on a URL, Firecrawl kicks in automatically if an API key is configured
- /interact: After scraping a page, stay in the browser session and issue follow-up actions (click buttons, fill forms, navigate) in plain English or Playwright code. Useful for content behind logins, pagination, or filter dropdowns that static scraping can't reach
- firecrawl browser: Runs browser automation in a remote Firecrawl Browser Sandbox, with no local Chromium install or driver setup; `agent-browser` and Playwright come pre-installed, and sessions run in isolated disposable containers
- extractMode + JSON schema: Extract only the specific fields you need from any page, returned as structured data your agent can act on directly
- maxAgeMs: Cache control for both scrape and fetch; defaults to 2 days, configurable per request
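To make the format and cache controls concrete, here is an illustrative scrape parameter payload that asks for main-content markdown plus schema-driven JSON extraction in one request. The field names (formats, jsonOptions, onlyMainContent, maxAgeMs) follow Firecrawl's public scrape API at the time of writing, and the URL and schema are invented for illustration; check the current Firecrawl docs before relying on exact names:

```json
{
  "url": "https://example.com/pricing",
  "formats": ["markdown", "json"],
  "onlyMainContent": true,
  "maxAgeMs": 3600000,
  "jsonOptions": {
    "schema": {
      "type": "object",
      "properties": {
        "plan_name": { "type": "string" },
        "monthly_price_usd": { "type": "number" }
      }
    }
  }
}
```

The one-hour maxAgeMs here deliberately overrides the 2-day cache default, which you would want for a pricing page that changes often.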
Configure:
```json
{
  "plugins": {
    "entries": {
      "firecrawl": { "enabled": true }
    }
  },
  "tools": {
    "web": {
      "search": {
        "provider": "firecrawl",
        "firecrawl": {
          "apiKey": "FIRECRAWL_API_KEY_HERE",
          "baseUrl": "https://api.firecrawl.dev"
        }
      }
    }
  }
}
```

Or set FIRECRAWL_API_KEY as an environment variable and run openclaw configure --section web to choose Firecrawl as your provider. See the Firecrawl CLI docs for the full list of available commands.
Configure (web_fetch fallback):
Add your API key under tools.web.fetch.firecrawl to give web_fetch a real-browser fallback for JS-heavy or bot-protected pages — this is separate from the search provider config and doesn't change how web_search works:
```json
{
  "tools": {
    "web": {
      "fetch": {
        "firecrawl": {
          "apiKey": "fc-YOUR-API-KEY",
          "onlyMainContent": true,
          "maxAgeMs": 172800000
        }
      }
    }
  }
}
```

maxAgeMs controls cache freshness in milliseconds (default 2 days). Lower it for time-sensitive pages like pricing or release notes.
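That 172800000 default is just two days expressed in milliseconds, which you can sanity-check in any shell before tuning it down:

```shell
# 2 days -> milliseconds, matching the maxAgeMs default
echo $((2 * 24 * 60 * 60 * 1000))
```

For a six-hour cache, for example, the same arithmetic gives 21600000.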
Get a free API key at firecrawl.dev/app/api-keys.
Honest take: The gap between Firecrawl and every other provider on this list is the firecrawl_agent. Ask it "find the pricing plans for the top five CRM tools and return them as a comparison table" and it will plan the search, scrape the relevant pages, extract the data, and return structured output — without your agent manually coordinating each step. No other search provider in the OpenClaw ecosystem comes close to that level of autonomous capability. The scrapeResults option on firecrawl_search is also underrated: one call returns search results and scraped page content together, so your agent never has to do a separate fetch round-trip. For agents that do real research, not just keyword lookups, Firecrawl is the clear choice.
Cons: Credit-based pricing means heavy scraping adds up. The free tier (500 credits) covers experimentation but production research workflows will need a paid plan. The proxy: "auto" default uses more credits than basic-only mode. Switch to proxy: "basic" if your targets are reliably accessible and you are managing credit usage closely.
Browser Sandbox and /interact:
Two Firecrawl capabilities go beyond what any search provider can do on its own. The /interact endpoint lets your agent act on a page after scraping it: click buttons, fill forms, navigate through paginated results, and extract content that only appears after an interaction — in plain English or Playwright code. Sessions stay live for up to 10 minutes and can be chained.
firecrawl browser goes further: it runs the entire browser session in a remote Firecrawl Browser Sandbox — a secure, disposable container with agent-browser and Playwright pre-installed. Your OpenClaw agent can run on minimal hardware while the actual browsing happens elsewhere, with no shared local state and no RAM pressure from parallel sessions.
```shell
firecrawl browser "open https://example.com"
firecrawl browser "snapshot"
firecrawl browser "scrape"
firecrawl browser close
```

Full reference at docs.openclaw.ai/tools/firecrawl and Firecrawl's OpenClaw integration guide. For a deeper look at how web_search and web_fetch interact in practice, read OpenClaw Web Search: How to Make Your Agent Actually Read the Web.
2. Brave Search
Brave Search is the officially recommended OpenClaw search provider for general-purpose web queries.
The OpenClaw configuration wizard defaults to Brave if you run openclaw configure --section web with a Brave API key ready. Brave runs its own independent search index rather than proxying Google or Bing results, which makes it more privacy-friendly and less susceptible to SEO manipulation. Each Brave Search plan includes $5/month in free credit (renewing), covering roughly 1,000 queries per month at the Search plan rate of $5 per 1,000 requests. The Search plan also includes the LLM Context endpoint and AI inference rights.
- freshness: Filter results by recency with `day`, `week`, `month`, or `year`
- date_after / date_before: Pin results to a specific date range (YYYY-MM-DD format)
- country + language: Locale-specific results using ISO country and language codes
- search_lang: Brave-specific search language code (e.g. `en`, `en-gb`, `zh-hans`); pin this explicitly if you hit 422 errors on non-English locales
- ui_lang: ISO language code for UI elements; must include a region subtag (e.g. `en-US`)
- webSearch.mode: `web` (default) returns titles, URLs, and snippets; `llm-context` switches to the Brave LLM Context API, returning pre-extracted text chunks and sources for grounding instead of standard snippets
- cacheTtlMinutes: Results cached for 15 minutes by default, configurable
- Independent index: Does not depend on Google or Bing data
Configure:
```json
{
  "plugins": {
    "entries": {
      "brave": {
        "config": {
          "webSearch": {
            "apiKey": "BRAVE_API_KEY_HERE",
            "mode": "web"
          }
        }
      }
    }
  },
  "tools": {
    "web": {
      "search": {
        "provider": "brave",
        "maxResults": 5,
        "timeoutSeconds": 30
      }
    }
  }
}
```

Provider-specific settings now live under plugins.entries.brave.config.webSearch. The old tools.web.search.apiKey path still loads via a compatibility shim but is no longer canonical. Note: llm-context mode does not support ui_lang, freshness, date_after, or date_before.
Or set BRAVE_API_KEY as an environment variable.
Get an API key at brave.com/search/api/.
Honest take: Brave is the sensible default for most OpenClaw setups. The independent index produces clean results and the freshness and date filtering are genuinely useful for news-heavy or time-sensitive research tasks. The 1,000 queries per month on the free credit covers a typical personal agent. Where it falls short: results are titles and snippets only, no full page content. If your agent needs to actually read a page, pair Brave with Firecrawl as the web_fetch fallback.
Cons: Snippet-only results with no built-in content extraction. Rate limits on the free credit tier can become a constraint for agents that run multiple searches per conversation. Legacy Brave plans (the original free plan with 2,000 queries per month) remain valid but do not include the LLM Context endpoint or higher rate limits.
The latency issue is a real friction point that comes up in the community. Here is what one user shared on Reddit recently:
> I'm running OpenClaw and finally got my Brave Search API integrated, but I'm hitting a wall. The web_search tool is noticeably slow — by the time it hits the API, gets the results, and the LLM processes them, I could have Googled it myself twice. I have access to the Brave Answers API, which is way faster for direct info, but OpenClaw doesn't seem to have a field for it. The config only has tools.web.search.apiKey for the standard Search API. Has anyone figured out a workaround to use the Answers API (or the new LLM Context endpoint) to speed this up? Or is there a way to tweak the search tool so it isn't such a bottleneck? Right now, having a 'pro' search key feels kind of useless if the integration is this sluggish.
Worth keeping in mind if your agent does frequent, latency-sensitive searches. Pairing Brave with Firecrawl as the web_fetch fallback helps on the content extraction side, but does not address the underlying API round-trip speed.
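One lever for the post-search half of that latency (the model digesting raw snippets) is the mode switch the Brave provider already supports: llm-context returns pre-extracted text chunks for grounding instead of standard snippets, which can shorten the LLM processing step even though it does not speed up the API round-trip itself. A sketch, assuming a Search-plan key with LLM Context access:

```json
{
  "plugins": {
    "entries": {
      "brave": {
        "config": {
          "webSearch": {
            "apiKey": "BRAVE_API_KEY_HERE",
            "mode": "llm-context"
          }
        }
      }
    }
  },
  "tools": {
    "web": {
      "search": { "provider": "brave" }
    }
  }
}
```

Keep in mind that llm-context mode drops ui_lang, freshness, and the date-range filters, so it trades filtering control for pre-digested content.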
Full reference at docs.openclaw.ai/brave-search.
3. Tavily
Tavily is an AI-optimized search API designed specifically for LLMs and agent research workflows.
Unlike Brave or Perplexity, which return standard search result fields, Tavily is built from the ground up for agents: clean JSON responses, automatic answer extraction, and full article content instead of raw snippets. Tavily works both as a native web_search provider and through explicit plugin tools (tavily_search and tavily_extract) for when you need Tavily-specific controls. Tavily's free tier includes 1,000 searches per month.
- Structured JSON responses with direct answer extraction surfaced at the top of results
- search_depth: `basic` (fast, balanced) or `advanced` (highest relevance, slower; best for precision research)
- topic: `general` (default), `news` (real-time updates), or `finance`
- time_range: filter results by recency: `day`, `week`, `month`, or `year`
- Include or exclude specific domains to restrict or block result sources
- tavily_extract: extract clean content from 1–20 URLs in a single call; handles JavaScript-rendered pages; supports `basic` or `advanced` extract depth and query-focused chunking via `chunks_per_source`
- Returns full article content, not just snippets
Configure:
```json
{
  "plugins": {
    "entries": {
      "tavily": {
        "enabled": true,
        "config": {
          "webSearch": {
            "apiKey": "tvly-your-api-key-here"
          }
        }
      }
    }
  },
  "tools": {
    "web": {
      "search": {
        "provider": "tavily"
      }
    }
  }
}
```

Choosing Tavily in openclaw configure --section web enables the bundled Tavily plugin automatically. For Tavily-specific controls like search_depth, topic, include_answer, or domain filters, use tavily_search directly instead of the generic web_search tool.
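For orientation, those Tavily-specific controls map onto the underlying search API roughly like this illustrative request body. Field names (search_depth, topic, time_range, include_answer, include_domains, max_results) follow Tavily's public API docs at the time of writing; treat this as a sketch and verify against the current docs:

```json
{
  "query": "openclaw web search providers",
  "search_depth": "advanced",
  "topic": "general",
  "time_range": "month",
  "include_answer": true,
  "include_domains": ["docs.openclaw.ai"],
  "max_results": 5
}
```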
Get a free API key at tavily.com. The key starts with tvly-.
Honest take: Tavily produces noticeably better-structured output than raw web search results. The answer extraction means your agent does not have to infer the key point from a wall of text. For any agent doing multi-step research or fact-checking workflows, Tavily is worth the extra setup. The plugin tools are slightly more configuration than a plain provider switch, but the quality difference on research queries is real.
See Firecrawl vs Tavily for a full comparison of search and extraction capabilities.
Cons: The 1,000 searches per month free limit matches Brave. Advanced search mode is slower and counts against the same quota. When both tavily_search and the generic web_search tool are available, your agent may not always pick the right one for the task — explicit prompting ("use Tavily to search for...") helps if you see inconsistent behavior.
Full reference at docs.openclaw.ai/tools/tavily. If Tavily does not fit your needs, see our Tavily alternatives roundup.
4. Perplexity
Perplexity gives OpenClaw two modes: structured web search results and AI-synthesized answers with citations.
Perplexity is a native web_search provider in OpenClaw (docs.openclaw.ai/perplexity) with a feature no other provider on this list offers: a dual-mode configuration. The native Perplexity Search API path returns structured results (title, url, snippet) like Brave. But point it at OpenRouter or set a baseUrl and model, and it switches to the Sonar chat-completions path and returns AI-synthesized answers with inline citations instead. The Search API path also has the richest filtering options of any native provider: domain_filter lets you allowlist or denylist up to 20 domains per query, and max_tokens can scale up to 1,000,000 for content-heavy extraction tasks.
- domain_filter: Allowlist (e.g., `["nature.com", ".edu"]`) or denylist (prefix with `-`) up to 20 domains per query
- max_tokens: Total content budget per search (default 25,000, max 1,000,000)
- max_tokens_per_page: Per-page token limit (default 2,048, adjustable)
- freshness + date_after / date_before: Time filtering on the Search API path
- OpenRouter compatibility: Set `baseUrl` and `model` to switch to Sonar for AI-synthesized answers
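To make the allowlist/denylist syntax concrete, here is an illustrative query payload combining both forms, per the parameter descriptions above (the query and domains are examples, not recommendations):

```json
{
  "query": "CRISPR off-target effects",
  "domain_filter": ["nature.com", ".edu", "-pinterest.com"],
  "max_tokens_per_page": 4096
}
```

The `-pinterest.com` entry denylists that domain while the other two entries restrict results to the allowlisted sources.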
Configure (native Perplexity Search API):
```json
{
  "plugins": {
    "entries": {
      "perplexity": {
        "config": {
          "webSearch": {
            "apiKey": "pplx-..."
          }
        }
      }
    }
  },
  "tools": {
    "web": {
      "search": {
        "provider": "perplexity"
      }
    }
  }
}
```

Configure (OpenRouter/Sonar for AI-synthesized answers):
```json
{
  "plugins": {
    "entries": {
      "perplexity": {
        "config": {
          "webSearch": {
            "apiKey": "<openrouter-api-key>",
            "baseUrl": "https://openrouter.ai/api/v1",
            "model": "perplexity/sonar-pro"
          }
        }
      }
    }
  },
  "tools": {
    "web": {
      "search": {
        "provider": "perplexity"
      }
    }
  }
}
```

Or set PERPLEXITY_API_KEY as an environment variable.
Get an API key at perplexity.ai/settings/api.
Honest take: The domain_filter parameter is the feature that makes Perplexity worth evaluating over Brave for technical or academic research. Restricting results to .gov, .edu, and nature.com meaningfully improves signal quality on topics where SEO noise is a problem. The Sonar mode is useful when you want synthesized summaries over raw results, though it is a different interaction pattern from every other provider.
Cons: The dual-mode setup is a source of confusion: switching from the Search API to Sonar changes the response format and disables most filter parameters (query, count, and freshness are accepted on the Sonar path; country, language, date_after, date_before, domain_filter, max_tokens, and max_tokens_per_page return explicit errors). If provider: "perplexity" is configured but the key is missing, OpenClaw fails fast at startup rather than silently degrading. No free tier is advertised in the OpenClaw documentation.
Full reference at docs.openclaw.ai/perplexity. For other search options with similar capabilities, see our Perplexity alternatives guide.
5. SearXNG
SearXNG is the zero-cost option: a self-hosted metasearch engine that requires no API key and has no query limits.
SearXNG is an open-source metasearch engine that queries multiple search backends simultaneously and aggregates the results. It runs entirely on your own machine or server, which means no API key, no monthly quota, and no third-party data logging. SearXNG is a native built-in web_search provider in OpenClaw — no third-party skill required. Run a SearXNG instance, point OpenClaw at it, and it plugs straight into web_search via SearXNG's native JSON API.
- No API key and no query limits: bounded only by your hardware
- Meta-search: aggregates results from Google, Bing, DuckDuckGo, and other configured backends simultaneously
- Privacy-first: queries never leave your network; no query data logged by a third party
- Self-hosted: runs entirely on your machine or VPS
- categories: scope searches to `general`, `news`, `science`, or any other SearXNG category
- language: ISO language code for results (e.g. `en`, `de`, `fr`)
- Auto-detected last: if `SEARXNG_BASE_URL` is set and no API-backed provider is configured, OpenClaw picks it up automatically
Setup:
```shell
# Run SearXNG locally with Docker
docker run -d -p 8888:8080 searxng/searxng
```

Make sure your SearXNG instance has the json format enabled in settings.yml under search.formats; OpenClaw uses the native JSON API endpoint, not HTML scraping.
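A stock SearXNG install usually ships with only the HTML format enabled, so a minimal settings.yml excerpt that turns on JSON output looks like this (standard SearXNG settings structure; merge it into your existing file rather than replacing it):

```yaml
# settings.yml (excerpt): enable the JSON output that OpenClaw's provider consumes
search:
  formats:
    - html
    - json
```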
```json
{
  "plugins": {
    "entries": {
      "searxng": {
        "config": {
          "webSearch": {
            "baseUrl": "http://localhost:8888",
            "categories": "general,news",
            "language": "en"
          }
        }
      }
    }
  },
  "tools": {
    "web": {
      "search": {
        "provider": "searxng"
      }
    }
  }
}
```

Or set SEARXNG_BASE_URL as an environment variable and run openclaw configure --section web to select SearXNG as your provider. Public SearXNG hosts must use https://; http:// is only accepted for trusted private-network or loopback hosts.
Honest take: SearXNG is the right choice if you want unlimited searches with zero ongoing cost and have a machine or VPS to host it on. The privacy benefit is real: you control which search backends are active and nothing is logged by a third party. The setup is more involved than any other option here, but once running it is stable. Worth it for home server setups, privacy-sensitive deployments, or situations where you are running a high-volume agent and API costs are a concern.
Cons: Self-hosting carries maintenance overhead: container updates, ensuring the json format is enabled in SearXNG's config, and firewall rules for private-only access. Result quality depends on which SearXNG backends are active and can be inconsistent compared to dedicated AI-optimized APIs.
Full reference at docs.openclaw.ai/tools/searxng-search.
6. Exa
Exa is a neural search API built for LLMs, with built-in content extraction that returns highlights, full text, and AI summaries alongside results.
Unlike keyword-based search engines, Exa indexes the web semantically and supports multiple search modes: auto (Exa picks the best mode), neural (meaning-based), fast (quick keyword), deep (thorough), deep-reasoning, and instant. The key differentiator is the contents parameter: a single web_search call can return extracted page content alongside results without a separate fetch step. Full reference at docs.openclaw.ai/tools/exa-search.
- type: `auto`, `neural`, `fast`, `deep`, `deep-reasoning`, or `instant`: controls how Exa searches
- contents.text: Extract full page text (set `true` or `{ maxCharacters }` for a character limit)
- contents.highlights: Key sentences extracted from results (`numSentences`, `highlightsPerUrl`, optional `query` focus)
- contents.summary: AI-generated summary of each result (set `true` or `{ query }` for a focused summary)
- freshness / date_after / date_before: Time filters; cannot combine `freshness` with date-range filters
- Up to 100 results per query; results cached 15 minutes by default
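Putting those pieces together, an illustrative search payload that returns capped text, highlights, and a focused summary for each result in one call might look like this (parameter names as described above; a sketch, not the authoritative schema):

```json
{
  "query": "vector database benchmark methodology",
  "type": "neural",
  "contents": {
    "text": { "maxCharacters": 4000 },
    "highlights": { "numSentences": 3, "highlightsPerUrl": 2 },
    "summary": { "query": "benchmark methodology" }
  }
}
```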
Configure:
```json
{
  "plugins": {
    "entries": {
      "exa": {
        "config": {
          "webSearch": {
            "apiKey": "exa-..."
          }
        }
      }
    }
  },
  "tools": {
    "web": {
      "search": {
        "provider": "exa"
      }
    }
  }
}
```

Or set EXA_API_KEY as an environment variable and run openclaw configure --section web.
Get an API key at exa.ai.
Honest take: The contents parameter is the feature that sets Exa apart. Passing contents: { highlights: { numSentences: 3 }, summary: true } means your agent gets a structured summary of each result in the same call, no follow-up scrape required. For research tasks where the question is "what do these pages say about X", that is a meaningful latency saving. Neural mode also performs better than keyword search on conceptual queries where the exact search terms don't appear on the target pages.
Cons: Neural search costs more per query than fast mode. The 100-result cap and 15-minute cache are fine for most agents but can be limiting for high-volume monitoring tasks. freshness and date-range filters are mutually exclusive — you have to pick one time-filter approach per query.
7. Gemini
Gemini brings Google Search grounding to OpenClaw, returning AI-synthesized answers with inline citations backed by live Google results.
Unlike traditional search providers that return a list of links and snippets, Gemini Search uses Google's grounding API to produce a single synthesized answer with citations — closer to how Perplexity Sonar works than how Brave or Tavily work. Citation URLs from Gemini grounding are automatically resolved from Google redirect URLs to direct URLs before being returned. Full reference at docs.openclaw.ai/tools/gemini-search.
- Default model: `gemini-2.5-flash` (fast and cost-effective); any Gemini model supporting grounding can be set via `plugins.entries.google.config.webSearch.model`
- Returns one synthesized answer with citations rather than an N-result list
- `count` is accepted for `web_search` API compatibility but does not change the output format
- Provider-specific filters (`country`, `language`, `freshness`, `domain_filter`) are not supported
Configure:
{
"plugins": {
"entries": {
"google": {
"config": {
"webSearch": {
"apiKey": "AIza...",
"model": "gemini-2.5-flash"
}
}
}
}
},
"tools": {
"web": {
"search": {
"provider": "gemini"
}
}
}
}Or set GEMINI_API_KEY as an environment variable. Get an API key at Google AI Studio.
Honest take: Gemini's grounding is a practical choice for agents that already run on Google infrastructure or where the Google Search index quality matters for the use case. The single synthesized answer format is well-suited to conversational agents that need a clean, citable response rather than a result list. The lack of filtering parameters is the main limitation: you cannot restrict to domains or time ranges.
Cons: Returns one synthesized answer with citations — not a traditional result list. No country, language, freshness, or domain_filter support. If your agent needs to see multiple independent sources rather than a synthesized view, a list-based provider like Brave or Exa is a better fit.
8. Grok
Grok uses xAI web-grounded responses to produce AI-synthesized answers with citations — and the same API key unlocks X post search via x_search.
Grok Search works like Gemini Search at the interface level: one synthesized answer with citations per query. The notable bonus is that the XAI_API_KEY that powers web_search also enables the bundled x_search tool for first-class X (formerly Twitter) post search. If you store the key under plugins.entries.xai.config.webSearch.apiKey, OpenClaw reuses it as a fallback for the bundled xAI model provider. Full reference at docs.openclaw.ai/tools/grok-search.
- Web-grounded responses synthesized by Grok with inline citations
- x_search bonus: the same `XAI_API_KEY` enables X post search; configure it as a follow-up during `openclaw onboard` or `openclaw configure --section web`
- For X post metrics (reposts, replies, bookmarks, views), use `x_search` with the exact post URL or status ID rather than a broad query
- `count` is accepted for `web_search` API compatibility, but Grok still returns one synthesized answer
- No provider-specific filters currently supported
Configure:
{
"plugins": {
"entries": {
"xai": {
"config": {
"webSearch": {
"apiKey": "xai-..."
}
}
}
}
},
"tools": {
"web": {
"search": {
"provider": "grok"
}
}
}
}Or set XAI_API_KEY as an environment variable. Get an API key at console.x.ai.
Honest take: For agents that need to track social signals alongside web results, Grok's dual key use is the most efficient way to get both from a single API subscription. The web-grounded answers are on par with Gemini for general queries. Where it differs: Grok has access to X content that is not indexed by Google, which can matter for real-time topics.
Cons: Same limitation as Gemini: returns one synthesized answer rather than a result list, and no provider-specific filter parameters. Not the right choice if your agent needs to enumerate multiple independent sources or filter by domain or date range.
9. Kimi
Kimi uses Moonshot web search to produce AI-synthesized answers with citations, with support for both global and China API regions.
Kimi is Moonshot AI's web-search-enabled model. Like Gemini and Grok, it returns synthesized answers with citations rather than a result list. The practical distinction is the dual-region support: Chinese users can set the CN API host (https://api.moonshot.cn/v1) to avoid the 401 errors that occur when CN-issued keys hit the international endpoint. OpenClaw infers the correct endpoint from your Moonshot model config if baseUrl is omitted. Full reference at docs.openclaw.ai/tools/kimi-search.
- Synthesized answers with inline citations via Moonshot web search
- Default model: `kimi-k2.6`; configurable via `plugins.entries.moonshot.config.webSearch.model`
- Two API regions: `https://api.moonshot.ai/v1` (global) and `https://api.moonshot.cn/v1` (CN)
- OpenClaw inherits the CN host from `models.providers.moonshot.baseUrl` if `baseUrl` is omitted in the web search config, so CN keys do not hit the international endpoint by mistake
- `count` is accepted for API compatibility, but Kimi returns one synthesized answer per query
- No provider-specific filter parameters currently supported
Configure:
{
"plugins": {
"entries": {
"moonshot": {
"config": {
"webSearch": {
"apiKey": "sk-...",
"baseUrl": "https://api.moonshot.ai/v1",
"model": "kimi-k2.6"
}
}
}
}
},
"tools": {
"web": {
"search": {
"provider": "kimi"
}
}
}
}Or set KIMI_API_KEY or MOONSHOT_API_KEY as an environment variable. Get an API key at platform.moonshot.cn.
Honest take: Kimi is the obvious pick if you are already running Moonshot models in OpenClaw and want web search from the same provider without adding a second API subscription. The automatic CN/global host resolution is a genuine quality-of-life improvement that avoids a common 401 footgun.
Cons: Returns one synthesized answer rather than a result list. No filter parameters. Like Gemini and Grok, not the right fit when you need enumerable source lists or domain-specific filtering.
10. MiniMax
MiniMax provides structured search results via its Coding Plan API, returning titles, URLs, snippets, and related queries in a traditional result-list format.
Unlike Gemini, Grok, and Kimi which return synthesized answers, MiniMax Search returns structured results — making it more similar to Brave or Exa at the output level. It requires a MiniMax Coding Plan key (prefix sk-cp-...), which is separate from a standard MiniMax API key. Like Kimi, it supports both global and CN endpoints. Full reference at docs.openclaw.ai/tools/minimax-search.
- Returns structured results: titles, URLs, snippets, and related queries
- Requires a Coding Plan key (`sk-cp-...` prefix), not a standard MiniMax API key
- Two regions: `global` (`https://api.minimax.io/v1/coding_plan/search`) and `cn` (`https://api.minimaxi.com/v1/coding_plan/search`)
- Region auto-inherited from `MINIMAX_API_HOST`, `models.providers.minimax.baseUrl`, or `models.providers.minimax-portal.baseUrl` when not set explicitly
- `count` supported: OpenClaw trims the result list to the requested count
- No provider-specific filter parameters currently supported
Configure:
{
"plugins": {
"entries": {
"minimax": {
"config": {
"webSearch": {
"apiKey": "sk-cp-...",
"region": "global"
}
}
}
}
},
"tools": {
"web": {
"search": {
"provider": "minimax"
}
}
}
}Or set MINIMAX_CODE_PLAN_KEY as an environment variable. Get a Coding Plan key at platform.minimax.io.
Honest take: MiniMax is the natural pick if you are already in the MiniMax ecosystem and want to consolidate to fewer API providers. The structured result format means your agent gets a standard result list rather than a synthesized answer, which works better for tasks where you want to enumerate and evaluate sources.
Cons: Requires a specific Coding Plan key that is separate from a standard MiniMax API key — easy to misconfigure with the wrong key type. No filter parameters. Result quality depends on MiniMax's search index, which is less established than Brave's independent index or Exa's neural search.
11. DuckDuckGo
DuckDuckGo is the only key-free option: no API key, no account, no quota — but it is experimental and not backed by an official API.
DuckDuckGo works in OpenClaw by scraping DuckDuckGo's non-JavaScript HTML search pages. No API key or account is required. That makes it the easiest provider to set up, but it comes with a caveat: this is an unofficial, experimental integration that can break when DuckDuckGo changes its HTML structure or serves a CAPTCHA challenge under automated load. For production agents, a proper API-backed provider is recommended. Full reference at docs.openclaw.ai/tools/duckduckgo-search.
- No API key or account required
- region: DuckDuckGo region code (e.g. `us-en`, `uk-en`, `de-de`)
- safeSearch: `strict`, `moderate` (default), or `off`
- count: 1–10 results (default 5)
- Auto-detection order 100: first key-free fallback; API-backed providers with configured keys run first
- Experimental: results depend on HTML page structure, which can change without notice
Configure:
{
"plugins": {
"entries": {
"duckduckgo": {
"config": {
"webSearch": {
"region": "us-en",
"safeSearch": "moderate"
}
}
}
}
},
"tools": {
"web": {
"search": {
"provider": "duckduckgo"
}
}
}
}
Or just run openclaw configure --section web and select duckduckgo; no key needed.
Honest take: DuckDuckGo is genuinely useful for quick local testing and prototyping where you do not want to create an API account before trying something out. It also works as a zero-cost fallback in personal agents where occasional failures are tolerable. The auto-detection priority (order 100, key-free fallback) means it kicks in automatically if you have no other provider configured.
Cons: Relies on unofficial HTML scraping, so it can fail at any time when the page structure changes or a CAPTCHA challenge appears. Not appropriate for production agents or any setup where reliability matters. Maximum 10 results per query. For a free but stable alternative, Brave Search's $5/month free credit covers roughly 1,000 queries with a proper API integration.
Building the top OpenClaw search providers into your workflow
No single provider wins on every dimension. The combination that works depends on what your agent actually does.
For most setups, start with Brave as the native web_search provider and add Firecrawl as the web_fetch fallback via tools.web.fetch.firecrawl.apiKey. Brave handles the search, Firecrawl handles the pages that Brave's snippets do not fully cover. That two-provider stack gives you roughly 1,000 free Brave queries per month and spends Firecrawl credits only when a page needs actual extraction.
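A minimal openclaw.json sketch of that two-provider stack, assuming the config paths named above (tools.web.search.provider and tools.web.fetch.firecrawl.apiKey); the key value is a placeholder for your own Firecrawl key:

```json
{
  "tools": {
    "web": {
      "search": {
        "provider": "brave"
      },
      "fetch": {
        "firecrawl": {
          "apiKey": "<your-firecrawl-key>"
        }
      }
    }
  }
}
```

With this layout, web_search stays cheap keyword search while web_fetch only reaches for Firecrawl when a page actually needs scraping.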
For research-heavy agents (fact-checking, competitive monitoring, multi-step information gathering), Firecrawl is the right upgrade. Use firecrawl_agent to hand the entire research goal to Firecrawl and get back structured output, or use firecrawl_search with scrapeResults to get full page content alongside results in a single call. No other provider on this list handles that in one step. If your agent also needs to interact with pages rather than just read them, our browser automation tools comparison covers the options.
For technical or academic research where source quality matters, Perplexity's domain_filter is the differentiator. Being able to say "only return results from these 10 trusted domains" produces a fundamentally different quality of result from keyword search on the open web.
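A sketch of a domain-restricted Perplexity setup. Only the domain_filter name comes from this guide; the exact nesting is an assumption modeled on the other provider blocks shown above, and the domains are illustrative:

```json
{
  "plugins": {
    "entries": {
      "perplexity": {
        "config": {
          "webSearch": {
            "apiKey": "<your-perplexity-key>",
            "domain_filter": ["arxiv.org", "acm.org", "ieee.org"]
          }
        }
      }
    }
  },
  "tools": {
    "web": {
      "search": {
        "provider": "perplexity"
      }
    }
  }
}
```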
For privacy-first or high-volume deployments where API costs become a concern, SearXNG removes the per-query cost entirely. It takes more setup but runs indefinitely on your own infrastructure.
The full OpenClaw documentation at docs.openclaw.ai is the authoritative reference for current provider configuration options. For a deeper look at how web_search and web_fetch interact, and how to configure Firecrawl for the full OpenClaw web stack, read OpenClaw Web Search: How to Make Your Agent Actually Read the Web. If you are building a Firecrawl-powered agent from scratch, the OpenClaw Firecrawl guide covers the full integration from API key to browser automation. For a broader look at evaluating web data tools for your agent stack, our guide on choosing web scraping tools covers how the options compare.
Frequently Asked Questions
What are OpenClaw search providers?
OpenClaw search providers are the services that power the web_search tool inside your OpenClaw agent. When your agent needs to look something up, it calls web_search with a query and the configured provider fetches results. Options include Firecrawl, Brave Search, Perplexity, Tavily, and SearXNG, each with different strengths for content quality, pricing, and privacy.
Which OpenClaw search provider is the default?
OpenClaw auto-detects the provider based on available API keys in this order: Brave, MiniMax, Gemini, Grok, Kimi, Perplexity, Firecrawl, Exa, Tavily, DuckDuckGo, Ollama, SearXNG. If no key is found, web_search returns an error prompting you to configure one. You can also skip the native providers entirely and use the Firecrawl CLI skill, which adds firecrawl search without requiring a web_search provider configuration.
How do I configure a search provider in OpenClaw?
Run openclaw configure --section web and follow the interactive wizard. It will ask which provider you want and prompt for your API key. The key is saved to ~/.openclaw/openclaw.json under tools.web.search. You can also set the key as an environment variable (BRAVE_API_KEY, FIRECRAWL_API_KEY, PERPLEXITY_API_KEY) and OpenClaw will pick it up automatically.
Is there a free OpenClaw search provider?
Yes. Brave Search includes $5/month in free credit (renewing), which covers roughly 1,000 queries per month at $5 per 1,000 requests. Tavily offers a free tier of 1,000 searches per month. SearXNG is completely free with no query limits since it runs on your own infrastructure. Firecrawl has a free tier of 500 credits for experimentation.
What is the difference between Firecrawl and Brave Search in OpenClaw?
Brave Search returns titles and snippets from its independent web index. Firecrawl returns those plus full scraped page content. Firecrawl also handles JavaScript-rendered pages and bot-protected sites that plain HTTP requests cannot reach. If your agent needs to read pages, not just find them, Firecrawl is the right tool. If you need fast, general-purpose keyword search, Brave is simpler and cheaper.
Can I use multiple search providers in OpenClaw at once?
Only one provider can be set as the active web_search provider at a time. However, you can layer providers: configure Brave as the native web_search provider and add Firecrawl as the web_fetch fallback (via tools.web.fetch.firecrawl.apiKey). The Firecrawl CLI skill also adds independent firecrawl search and firecrawl scrape commands that operate outside the web_search system.
What is the best OpenClaw search provider for AI agent research?
For research tasks that require reading full page content, Firecrawl is the strongest option because it scrapes content alongside search results. For structured AI-optimized results with answer extraction, Tavily is the most popular choice in the OpenClaw community. For academic or domain-specific research where you need to restrict results to trusted sources, Perplexity with domain_filter gives the most control.
Does SearXNG work with OpenClaw?
Yes. SearXNG is a native built-in web_search provider in OpenClaw — no third-party skill required. Run a SearXNG instance with Docker (docker run -d -p 8888:8080 searxng/searxng), set SEARXNG_BASE_URL or configure it via openclaw configure --section web, and it plugs straight into web_search. The trade-off is setup complexity: you need Docker, a running SearXNG container with the json format enabled in settings.yml, and firewall rules to keep the instance private.
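Putting the steps above together, a sketch of a SearXNG setup pointing at the Docker container's mapped port (8888, per the docker run command above). The baseUrl field name is an assumption; the documented alternative is setting the SEARXNG_BASE_URL environment variable:

```json
{
  "plugins": {
    "entries": {
      "searxng": {
        "config": {
          "webSearch": {
            "baseUrl": "http://localhost:8888"
          }
        }
      }
    }
  },
  "tools": {
    "web": {
      "search": {
        "provider": "searxng"
      }
    }
  }
}
```

Remember that the SearXNG instance itself must have the json output format enabled in its settings.yml, or OpenClaw's requests will be rejected.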