Amazon Bedrock AgentCore Browser (US East 1)
Amazon Bedrock AgentCore Browser (US East 1) is a live-fetch agent operated by Amazon. It does not crawl the web on a schedule. It hits your site only when an end-user asks the underlying AI a question that requires fresh information from a specific page.
Traffic is bursty and unpredictable. A single trending topic can send hundreds of Amazon Bedrock AgentCore Browser (US East 1) requests in an hour, then nothing for days. Each request typically reads one or two pages, not your whole site.
Allowing Amazon Bedrock AgentCore Browser (US East 1) is how your content becomes part of Amazon's answers. Blocking it means users asking that AI about your topic will be answered using someone else's content instead.
See Amazon Bedrock AgentCore Browser (US East 1) on your own site
Match the User-Agent header on incoming requests against the pattern below.
regex
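As a minimal sketch of the User-Agent match, the snippet below tests incoming headers against an assumed pattern. The token `AgentCore` is an assumption for illustration; substitute the exact User-Agent string from Amazon's documentation.

```python
import re

# Assumed token for illustration; replace with the exact User-Agent
# string Amazon publishes for this agent.
BOT_UA_PATTERN = re.compile(r"Amazon[- ]?Bedrock[- ]?AgentCore", re.IGNORECASE)

def is_agentcore_browser(user_agent: str) -> bool:
    """Return True if the User-Agent header looks like AgentCore Browser."""
    return bool(BOT_UA_PATTERN.search(user_agent or ""))
```

Run this check at your edge or application layer before deciding to allow, block, or rate-limit the request.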
Verify by IP
For higher confidence, also verify the source IP against the operator's published ranges. UA strings can be spoofed; IP ownership is harder to fake.
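A sketch of the IP check using Python's standard `ipaddress` module. The CIDR blocks below are documentation placeholders, not Amazon's real ranges; load the actual published ranges before using this.

```python
import ipaddress

# Placeholder CIDRs (RFC 5737 test networks). Substitute Amazon's
# actual published ranges before relying on this check.
PUBLISHED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def ip_in_published_ranges(source_ip: str) -> bool:
    """Return True if the request's source IP falls inside a published range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in PUBLISHED_RANGES)
```

Combine this with the User-Agent match: a request that passes both checks is very likely genuine; a matching UA from an unlisted IP is likely spoofed.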
The polite way is a robots.txt rule. Compliant agents respect it; the others ignore it.
robots.txt
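A sketch of what the rule might look like. The user-agent token below is a hypothetical name; use the exact token Amazon documents for this agent.

```
# Hypothetical token; use the exact name Amazon publishes for this agent.
User-agent: Amazon-Bedrock-AgentCore-Browser
Disallow: /private/

User-agent: *
Allow: /
```

This allows the agent everywhere except `/private/` while leaving other crawlers unrestricted; flip `Disallow: /private/` to `Disallow: /` to block it entirely.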
Test a URL
Paste any URL on your site and we'll fetch its robots.txt to check whether Amazon Bedrock AgentCore Browser (US East 1) is allowed.
Renders JavaScript: Sometimes
IP verification: Published IP ranges
Crawl frequency: Bursty, user-driven
Honors robots.txt: Yes
Honors Crawl-delay: Varies
Amazon runs 38 bots in total. Each one is a separate user-agent so you can allow or block them independently.
Live-Fetch AI (11)
- Amzn-User
- Amazon Bedrock AgentCore Browser (AP Northeast)
- Amazon Bedrock AgentCore Browser (AP South)
- Amazon Bedrock AgentCore Browser (AP Southeast 1)
- Amazon Bedrock AgentCore Browser (AP Southeast)
- Amazon Bedrock AgentCore Browser (EU Central 1)
- Amazon Bedrock AgentCore Browser (EU West 1)
- Amazon Bedrock AgentCore Browser (US East 1) (you are here)
- Amazon Bedrock AgentCore Browser (US East 2)
- Amazon Bedrock AgentCore Browser (US West 2)
- Amazon Buy For Me
Training Crawler (7)
Brand Intelligence (3)
Agentic Browser (3)
AI Search Index (2)
Generic Crawler (2)
DevOps & Monitoring (2)
Shopping Bot (2)
Search Engine (1)
Task Automation (1)
Ad Verification (1)
Ads Network Bot (1)
Link Unfurler (1)
AI Coding Tool (1)
Should I let Amazon Bedrock AgentCore Browser (US East 1) through?
In most cases, yes. Live-fetch agents drive citations inside AI answers. Allowing it keeps your content in the conversation. If the volume gets noisy, rate-limit it before you block it outright.
Does blocking Amazon Bedrock AgentCore Browser (US East 1) affect my Google rankings?
No. Amazon Bedrock AgentCore Browser (US East 1) fetches a page only when a user is actively asking Amazon a question. It has nothing to do with how Google or Bing rank you. The cost of blocking is that Amazon can't quote your content in its answer.
How do I confirm a request is really from Amazon Bedrock AgentCore Browser (US East 1)?
Two checks. The User-Agent header should match a known Amazon Bedrock AgentCore Browser (US East 1) string, and the request's source IP should fall inside Amazon's published ranges. The User-Agent alone is trivially spoofable, so the IP check is what gives you confidence. Amazon publishes the ranges so you can validate at the CDN or edge.
Does an Amazon Bedrock AgentCore Browser (US East 1) visit count as a real user visit?
Sort of. There is a human asking Amazon a question on the other end, but they never load your page in their own browser. They see whatever Amazon quotes back, usually a snippet plus a citation link. Count it as upstream attention rather than as a session.
How is Amazon Bedrock AgentCore Browser (US East 1) different from Amazon's other bots?
Amazon splits work across multiple user-agents so site owners can decide on each one independently. Training crawlers, live-fetch agents, search indexers, and agentic browsers each get their own name. It's worth scanning the rest of the Amazon family above to see which ones actually matter for your site.
What's the cleanest way to control Amazon Bedrock AgentCore Browser (US East 1)?
Two layers. Robots.txt for the polite crawlers that read it, and rules at your CDN or edge for the ones that don't. Rankly's Agent Experience handles both from a single config, so you can allow, block, rate-limit, or serve a stripped-down version per bot. Agent Analytics handles the observation half so you know which bots are actually worth a rule.
Verify everything above against the operator's own documentation.