Kagi-Fetcher
Kagi-Fetcher indexes web pages for an AI-powered search product operated by Kagi. Unlike a pure training crawler, an AI search crawler is designed to drive users back to the original source via citations and links.
The crawl pattern looks similar to a traditional search engine's: regular, broad, and bounded by your robots.txt directives. The difference is that ranking is done by an LLM, not a classic ranking algorithm.
Allowing Kagi-Fetcher is generally how your site stays discoverable inside AI answer engines. The traffic it sends back is small but high-intent: users who clicked a citation usually wanted exactly what you wrote.
See Kagi-Fetcher on your own site
Match the User-Agent header on incoming requests against the pattern below.
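The exact regex Kagi publishes isn't reproduced here. As a minimal sketch, assuming the User-Agent simply contains the bot name "Kagi-Fetcher" (the real string may carry a version and a contact URL), a quick log check could look like this:

```python
import re

# Assumption: the User-Agent contains the literal bot name "Kagi-Fetcher".
# Confirm the exact string against Kagi's own bot documentation before relying on it.
KAGI_FETCHER = re.compile(r"Kagi-Fetcher", re.IGNORECASE)

def claims_kagi_fetcher(user_agent: str) -> bool:
    """True if the User-Agent header claims to be Kagi-Fetcher."""
    return bool(KAGI_FETCHER.search(user_agent or ""))

# Example: the UA is the last quoted field of a combined-format access log line.
# The UA string below is illustrative, not necessarily what Kagi actually sends.
line = '203.0.113.7 - - [01/Jan/2025:00:00:00 +0000] "GET /post HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Kagi-Fetcher)"'
user_agent = line.rsplit('"', 2)[-2]
print(claims_kagi_fetcher(user_agent))  # True
```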
For higher confidence, also verify the source IP against the operator's published ranges where they exist. UA strings can be spoofed; IP ownership is harder to fake. Kagi doesn't currently publish IP ranges for Kagi-Fetcher, though, so the User-Agent check is the practical option for now.
Renders JavaScript: No
IP verification: User-Agent only
Crawl frequency: Continuous
Honors robots.txt: Yes
Honors Crawl-delay: Yes
Kagi runs three bots in total. Each one uses its own user-agent string, so you can allow or block them independently.
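In robots.txt terms, that means one group per user-agent. A minimal sketch, where "SomeOtherKagiBot" is a hypothetical stand-in rather than a real Kagi token and the Crawl-delay value is arbitrary:

```
User-agent: Kagi-Fetcher
Allow: /
Crawl-delay: 10

User-agent: SomeOtherKagiBot
Disallow: /
```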
Should I let Kagi-Fetcher through?
In most cases, yes. AI search crawlers cite and link back, and allowing them is how your content becomes discoverable inside AI answers. If the volume gets noisy, rate-limit it before you block it outright.
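What "rate-limit before you block" can look like in practice: a minimal sliding-window sketch keyed on the claimed bot name. The 30-requests-per-minute budget is an arbitrary illustration, not a Kagi recommendation.

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 60   # sliding window length (illustrative)
MAX_REQUESTS = 30     # per-bot budget inside the window (illustrative)

_hits: dict[str, list[float]] = defaultdict(list)

def allow_request(bot_name: str) -> bool:
    """Return True if this bot is still under its sliding-window budget."""
    now = time.time()
    recent = [t for t in _hits[bot_name] if now - t < WINDOW_SECONDS]
    recent.append(now)
    _hits[bot_name] = recent
    return len(recent) <= MAX_REQUESTS

# In a request handler: if the UA claims Kagi-Fetcher and allow_request("Kagi-Fetcher")
# returns False, answer 429 Too Many Requests instead of blocking the bot entirely.
```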
Does blocking Kagi-Fetcher affect my Google rankings?
No. Kagi-Fetcher feeds Kagi's AI answer engine, which is a separate distribution channel from classical search. Blocking it removes you from citations inside Kagi's product, but Google and Bing keep ranking you the same.
How do I confirm a request is really from Kagi-Fetcher?
Look at the User-Agent header in your access logs and match it against the strings listed above. Worth knowing that the User-Agent is easy to fake, so this check tells you "the traffic claims to be Kagi-Fetcher", not "the traffic is genuinely Kagi-Fetcher". If you need stronger guarantees, add a reverse-DNS check or wait for Kagi to publish IP ranges.
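A forward-confirmed reverse-DNS check is the usual pattern for this. The sketch below assumes Kagi's crawlers reverse-resolve to a hostname under kagi.com, which is a guess rather than anything Kagi has documented, so treat the suffix as a placeholder:

```python
import socket

def verified_by_reverse_dns(ip: str, expected_suffix: str = "kagi.com") -> bool:
    """Forward-confirmed reverse DNS: PTR lookup, suffix check, then the
    hostname must resolve back to the original IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)             # PTR (reverse) lookup
    except (socket.herror, socket.gaierror):
        return False
    if not hostname.rstrip(".").lower().endswith(expected_suffix):
        return False
    try:
        _, _, addresses = socket.gethostbyname_ex(hostname)   # forward lookup
    except socket.gaierror:
        return False
    return ip in addresses

# Placeholder suffix; confirm against Kagi's documentation before trusting it.
print(verified_by_reverse_dns("203.0.113.7"))
```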
How is Kagi-Fetcher different from Googlebot?
Both crawl the web, but they feed completely different surfaces. Googlebot powers Google Search, where you compete for ten blue links. Kagi-Fetcher powers Kagi's AI answer engine, where you compete for one of a handful of citations in a written-out paragraph. The crawl mechanics are similar; the consumption pattern is not.
How is Kagi-Fetcher different from Kagi's other bots?
Kagi splits work across multiple user-agents so site owners can decide on each one independently. Training crawlers, live-fetch agents, search indexers, and agentic browsers each get their own name. Worth scanning the rest of the Kagi family above to see which ones actually matter for your site.
What's the cleanest way to control Kagi-Fetcher?
Two layers. Robots.txt for the polite crawlers that read it, and rules at your CDN or edge for the ones that don't. Rankly's Agent Experience handles both from a single config, so you can allow, block, rate-limit, or serve a stripped-down version per bot. Agent Analytics handles the observation half so you know which bots are actually worth a rule.
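As a rough sketch of the edge-side layer (not Rankly's actual configuration format, which this page doesn't document), the heart of such a rule set is a per-bot policy lookup; every bot name here except Kagi-Fetcher is hypothetical:

```python
POLICIES = {
    "Kagi-Fetcher": "allow",        # cited AI search traffic: let it through
    "SomeTrainingBot": "block",     # hypothetical training crawler
    "SomeChattyBot": "rate-limit",  # hypothetical noisy bot
}

def policy_for(user_agent: str) -> str:
    """Pick the first policy whose bot name appears in the User-Agent."""
    ua = (user_agent or "").lower()
    for bot_name, policy in POLICIES.items():
        if bot_name.lower() in ua:
            return policy
    return "allow"  # unknown UAs are treated as regular traffic
```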