Crawlspace
Crawlspace is a training crawler operated by the company of the same name. Its job is to read public web pages and feed that content into a machine-learning pipeline that trains future versions of the model.
Unlike a search-engine crawler, a training crawler does not send users back to your site. The content is consumed once, baked into the model, and shows up later in the model's responses. There is usually no citation and no referral traffic.
When Crawlspace ships a new model version, expect its traffic to spike for a few weeks while it gathers fresh data, then quiet down again.
See Crawlspace on your own site
Match the User-Agent header on incoming requests against Crawlspace's User-Agent string.
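The exact published pattern isn't reproduced here, but the check itself is small. A minimal sketch, assuming the token `Crawlspace` appears somewhere in the UA string (treat the regex as a placeholder, not the operator's official pattern):

```ts
// Placeholder pattern: most crawlers embed their name as a UA token,
// so we assume Crawlspace does too. Swap in the operator's published
// pattern once you have it.
const CRAWLSPACE_UA = /crawlspace/i;

function claimsToBeCrawlspace(req: Request): boolean {
  // Fetch-API Request, as used in Node 18+ and edge runtimes.
  const ua = req.headers.get("user-agent") ?? "";
  return CRAWLSPACE_UA.test(ua);
}
```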
UA strings can be spoofed, and for many crawlers you can raise confidence by also checking the source IP against the operator's published ranges, since IP ownership is harder to fake. Crawlspace doesn't currently publish any ranges, though, so User-Agent matching is the only first-party signal available (see the table below).
Renders JavaScript: No
IP verification: User-Agent only
Crawl frequency: Periodic, broad
Honors robots.txt: Often ignored
Honors Crawl-delay: No
Should I let Crawlspace through?
There's a real trade-off here. Training crawlers consume content without sending users back, so the question is whether being represented in the model is worth your bandwidth. If answers derived from your content ever drive traffic or citations your way, letting Crawlspace through usually pays for itself; if it only burns bandwidth, block it.
Does blocking Crawlspace affect my Google rankings?
No. Crawlspace collects training data; it doesn't feed a search index. Your traditional search rankings stay intact. The actual trade is whether you want your content folded into the next model release.
How do I confirm a request is really from Crawlspace?
Look at the User-Agent header in your access logs and match it against the pattern above. Keep in mind that the User-Agent is easy to fake, so this check tells you "the traffic claims to be Crawlspace," not "the traffic is genuinely Crawlspace." If you need stronger guarantees, run a reverse-DNS check on the source IP or wait for Crawlspace to publish IP ranges.
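Crawlspace has no documented verification hostname, so the reverse-DNS route below is the generic technique rather than anything operator-specific; the `.crawlspace.dev` suffix is an explicit placeholder. A sketch in Node:

```ts
import { promises as dns } from "node:dns";

// Generic forward-confirmed reverse DNS. The expected suffix is a
// placeholder (e.g. ".crawlspace.dev"); substitute the real hostname
// if the operator ever documents one.
async function verifyByReverseDns(ip: string, expectedSuffix: string): Promise<boolean> {
  try {
    // Step 1: reverse lookup, IP -> claimed hostnames.
    const hostnames = await dns.reverse(ip);
    for (const host of hostnames) {
      if (!host.endsWith(expectedSuffix)) continue;
      // Step 2: forward-confirm, hostname -> A records must include
      // the original IP (use dns.resolve6 for IPv6 sources).
      const addrs = await dns.resolve(host);
      if (addrs.includes(ip)) return true;
    }
  } catch {
    // NXDOMAIN or lookup failure: treat as unverified.
  }
  return false;
}
```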
What happens to my content if I let Crawlspace fetch it?
It gets pulled into Crawlspace's training pipeline and stored. Whether and how it influences a future model release is rarely disclosed. The only real lever you have on the outcome is what you allow at fetch time.
What's the cleanest way to control Crawlspace?
Two layers. Robots.txt for the polite crawlers that read it, and rules at your CDN or edge for the ones that don't. Rankly's Agent Experience handles both from a single config, so you can allow, block, rate-limit, or serve a stripped-down version per bot. Agent Analytics handles the observation half so you know which bots are actually worth a rule.