meta-webindexer
meta-webindexer indexes web pages for an AI-powered search product operated by Meta. Unlike a pure training crawler, an AI search crawler is designed to drive users back to the original source via citations and links.
The crawl pattern looks similar to a traditional search engine's: regular, broad, and bounded by your robots.txt directives. The difference is that ranking is done by an LLM rather than a classic ranking algorithm.
Allowing meta-webindexer is generally how your site stays discoverable inside AI answer engines. The traffic it sends back is small but high-intent: users who clicked a citation usually wanted exactly what you wrote.
See meta-webindexer on your own site
Match the User-Agent header on incoming requests against the pattern below. The match should be case-insensitive, and the token is the bot's own name; confirm the exact string against Meta's documentation.

```regex
meta-webindexer
```
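A minimal sketch of that check in Python, assuming `user_agent` is whatever your framework exposes as the request's User-Agent header:

```python
import re

# Assumed pattern: the bot's token appearing anywhere in the UA string.
# Confirm the exact token against Meta's published documentation.
META_WEBINDEXER_RE = re.compile(r"meta-webindexer", re.IGNORECASE)

def is_meta_webindexer_ua(user_agent: str) -> bool:
    """Cheap first-pass check: does the UA header claim to be meta-webindexer?"""
    return bool(META_WEBINDEXER_RE.search(user_agent or ""))
```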
Verify by IP
For higher confidence, also verify the source IP against the operator's published ranges. UA strings can be spoofed; IP ownership is harder to fake.
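As a sketch, assuming you have already fetched the operator's published CIDR list (the ranges below are documentation placeholders, not Meta's real blocks), the check is a containment test with Python's standard ipaddress module:

```python
import ipaddress

# Placeholder CIDRs for illustration only; load the real list from the
# operator's published source and refresh it periodically.
PUBLISHED_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),   # TEST-NET-1, stand-in
    ipaddress.ip_network("2001:db8::/32"),  # documentation prefix, stand-in
]

def ip_in_published_ranges(remote_addr: str) -> bool:
    """True if the request's source IP falls inside a published range."""
    addr = ipaddress.ip_address(remote_addr)
    return any(addr in net for net in PUBLISHED_RANGES)
```

Running this at the CDN or edge means spoofed requests never reach your origin.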
Renders JavaScript: No
IP verification: Published IP ranges
Crawl frequency: Continuous
Honors robots.txt: Yes
Honors Crawl-delay: Yes
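Because it honors both robots.txt and Crawl-delay, a robots.txt stanza is the lightest-touch control. A minimal sketch; the delay value and the disallowed path are placeholders to adapt to your site:

```
User-agent: meta-webindexer
Crawl-delay: 10
Disallow: /private/
```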
Meta runs 16 bots in total. Each one is a separate user-agent, so you can allow or block them independently. Counts per family:
Link Unfurler (9)
Training Crawler (3)
Live-Fetch AI (2)
AI Search Index (1)
- meta-webindexer (you are here)
SEO Crawler (1)

Should I let meta-webindexer through?
In most cases, yes. AI search crawlers cite and link back, and allowing them is how your content becomes discoverable inside AI answers. If volume gets noisy, rate-limit it before you block it outright, as in the sketch below.
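One way to do that, as a sketch: a token bucket per verified bot identity, with rate and burst numbers that are placeholders to tune against your own traffic:

```python
import time

class TokenBucket:
    """Allow a short burst, then cap the sustained request rate for one bot."""

    def __init__(self, rate_per_sec: float = 1.0, burst: int = 10):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller answers 429 instead of blocking outright

# One bucket per verified bot, e.g. buckets["meta-webindexer"].allow()
```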
Does blocking meta-webindexer affect my Google rankings?
No. meta-webindexer feeds Meta's AI answer engine, which is a separate distribution channel from classical search. Blocking it removes you from citations inside Meta's product, but Google and Bing keep ranking you the same.
How do I confirm a request is really from meta-webindexer?
Two checks. The User-Agent header should match a known meta-webindexer string, and the request's source IP should fall inside Meta's published ranges. The User-Agent alone is trivially spoofable, so the IP check is what gives you confidence. Meta publishes the ranges so you can validate at the CDN or edge.
How is meta-webindexer different from Googlebot?
Both crawl the web, but they feed completely different surfaces. Googlebot powers Google Search, where you compete for ten blue links. meta-webindexer powers Meta's AI answer engine, where you compete for one of a handful of citations in a written-out paragraph. The crawl mechanics are similar, the consumption pattern is not.
How is meta-webindexer different from Meta's other bots?
Meta splits work across multiple user-agents so site owners can decide on each one independently. Training crawlers, live-fetch agents, search indexers, and agentic browsers each get their own name. Worth scanning the rest of the Meta family above to see which ones actually matter for your site.
What's the cleanest way to control meta-webindexer?
Two layers. Robots.txt for the polite crawlers that read it, and rules at your CDN or edge for the ones that don't. Rankly's Agent Experience handles both from a single config, so you can allow, block, rate-limit, or serve a stripped-down version per bot. Agent Analytics handles the observation half so you know which bots are actually worth a rule.
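Rankly's actual config format isn't shown here, but the "single config" idea reduces to a per-bot policy table. A generic sketch, with every bot name and value hypothetical:

```python
# Hypothetical per-bot policy table, enforced both when generating
# robots.txt and when filtering requests at the edge.
BOT_POLICIES = {
    "meta-webindexer":  {"action": "allow", "max_rps": 1.0},
    "meta-trainingbot": {"action": "block"},                       # hypothetical name
    "meta-livefetch":   {"action": "rate_limit", "max_rps": 0.2},  # hypothetical name
}

def policy_for(bot_name: str) -> dict:
    # Unknown bots fall through to a conservative default.
    return BOT_POLICIES.get(bot_name, {"action": "rate_limit", "max_rps": 0.1})
```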
Verify everything above against the operator's own documentation.