Robots.txt Tester
Test any site's /robots.txt. See which bots are blocked — including AI crawlers (GPTBot, ClaudeBot, PerplexityBot, Google-Extended). Test specific URLs against specific bots.
Bot access matrix — who can crawl your site root
Each row tests whether that bot is allowed to crawl /. Click any row to see the exact rule that applied.
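You can reproduce this check yourself with Python's standard-library robotparser, which runs the same allowed/blocked test. A minimal sketch, assuming https://example.com is a placeholder for the site under test (note: the stdlib parser's rule precedence is simpler than Google's longest-match spec, so edge cases can differ from what this tool reports):

```python
from urllib.robotparser import RobotFileParser

# The same user agents the matrix above tests; example.com is a placeholder.
BOTS = ["Googlebot", "Bingbot", "GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]
SITE = "https://example.com"

parser = RobotFileParser()
parser.set_url(SITE + "/robots.txt")
parser.read()  # fetch and parse the live robots.txt

for bot in BOTS:
    status = "allowed" if parser.can_fetch(bot, SITE + "/") else "blocked"
    print(f"{bot:18} {status}")
```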
Test a specific URL against a specific bot
Sitemaps declared
For your developer — full robots.txt content + parsed rule blocks
Raw robots.txt
Parsed rule blocks
Why this matters
For SEO
If Googlebot is blocked from a URL, Google can't crawl it, so its content will never be indexed. If it's blocked from your CSS and JS, Google can't render the page properly. Many sites accidentally block too much by inheriting Disallow rules from old site versions.
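As an illustration, a stale pattern from old WordPress SEO advice (the paths are the stock WordPress ones) keeps Googlebot from fetching scripts and styles, so rendering breaks even though the pages themselves are crawlable:

```
User-agent: *
Disallow: /wp-includes/        # old advice; blocks core JS Google needs to render
Disallow: /wp-content/themes/  # blocks theme CSS and JS
```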
For AEO (the bigger story)
If GPTBot, ClaudeBot, PerplexityBot, or Google-Extended can't crawl your site, you're invisible to ChatGPT, Claude, Perplexity, and Google's Gemini (note: AI Overviews crawl with regular Googlebot, not Google-Extended). Many sites block AI bots without realizing it kills their AI-engine traffic.
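This is what such a block typically looks like, often pasted in from an anti-scraping list and forgotten. The user-agent tokens are the real ones these crawlers send; the file itself is illustrative:

```
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

With this file in place, each of those engines sees your entire site as off-limits.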
How robots.txt rules work
Each User-agent: block applies to the bots it names, and the most specific match wins: a bot with its own block ignores the wildcard * block entirely. Within a block, the longest matching rule wins, and Allow beats Disallow when the matches are equal length.
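A small made-up file that exercises all three rules:

```
User-agent: *
Disallow: /drafts/

User-agent: GPTBot
Disallow: /drafts/
Allow: /drafts/public/
```

Bingbot has no dedicated block, so it falls back to * and is blocked from all of /drafts/. GPTBot matches its own block and ignores * entirely; for /drafts/public/post.html both of its rules match, and Allow: /drafts/public/ wins because it's the longer path (an equal-length Allow would also win the tie).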
Common mistakes
- Using Disallow: / in the wildcard * block, which blocks everything for everyone
- Using Disallow: with no path, which does nothing (people think it means "allow all")
- Blocking /wp-admin/ while accidentally allowing /wp-admin/admin-ajax.php
- Forgetting to update robots.txt after migrating
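The third mistake usually traces back to WordPress's default file, which many sites copy without reading the last line (shown here as WordPress generates it):

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
```

The Allow rule is the longest match for admin-ajax.php, so that endpoint stays crawlable even though the rest of /wp-admin/ is blocked.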