Robots.txt Tester
Paste your robots.txt, test URL paths against it, and validate syntax. Catch misconfigurations before search engines do.
Quick Answer
The robots.txt file lives at the root of your domain (example.com/robots.txt) and tells search engine crawlers which URLs they can and cannot access. It uses User-agent directives to target specific bots (e.g., Googlebot), Allow and Disallow rules to control access, and Sitemap directives to point crawlers to your XML sitemap. A misconfigured robots.txt can accidentally block your entire site from being indexed — always test changes before deploying.
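A minimal robots.txt illustrating these directives might look like this (the domain, paths, and sitemap URL are hypothetical):

```text
# Applies to all crawlers
User-agent: *
Disallow: /admin/
Disallow: /tmp/

# Overrides for Google's crawler only
User-agent: Googlebot
Allow: /admin/public/
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml
```

Crawlers pick the most specific User-agent group that matches their name, so Googlebot follows the second group here and ignores the first.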
About This Tool
The Robots.txt Tester is a free tool that lets you validate your robots.txt file and test whether specific URL paths are allowed or blocked for any search engine crawler. Paste your robots.txt content, enter a URL path and user-agent, and instantly see whether the path is allowed or blocked — along with the exact matching rule.
A misconfigured robots.txt file is one of the most common and costly SEO mistakes. A single misplaced rule can block search engines from crawling your entire site, your most important pages, or your sitemap. This tool helps you catch these issues before deploying by parsing every directive, validating syntax, and flagging common mistakes like missing User-agent declarations, conflicting rules, and invalid patterns.
The parser supports all standard robots.txt directives: User-agent, Allow, Disallow, Sitemap, and Crawl-delay. It handles wildcard patterns (*) for matching any sequence of characters and the end-of-string anchor ($) for exact URL endings. When multiple rules match a URL, the tool applies the most-specific-match-wins algorithm used by Google and most major search engines.
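The most-specific-match-wins logic can be sketched in a few lines of Python. This is an illustrative approximation of Google's documented behavior, not the tool's actual implementation: each pattern is converted to a regex (with `*` as a wildcard and a trailing `$` as an end anchor), every matching rule is collected, and the longest pattern wins, with Allow beating Disallow on a tie.

```python
import re

def _to_regex(pattern: str) -> re.Pattern:
    # Escape regex metacharacters, then restore robots.txt semantics:
    # '*' matches any sequence of characters; a trailing '$' anchors
    # the end of the URL path.
    regex = re.escape(pattern).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"
    return re.compile(regex)

def is_allowed(path: str, rules: list[tuple[str, str]]) -> bool:
    """rules is a list of (directive, pattern) pairs, where directive
    is 'allow' or 'disallow'. Returns True if the path may be crawled."""
    # Collect (pattern_length, is_allow) for every rule that matches.
    matches = [(len(p), d == "allow") for d, p in rules
               if p and _to_regex(p).match(path)]
    if not matches:
        return True  # no matching rule: crawling is allowed by default
    # Longest pattern wins; on equal length, Allow (True) sorts higher.
    _, allow = max(matches)
    return allow
```

For example, with `[("disallow", "/admin/"), ("allow", "/admin/public/")]`, the path `/admin/secret` is blocked while `/admin/public/page` is allowed, because the Allow pattern is the longer match for the latter.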
All parsing and testing happens entirely in your browser. Your robots.txt content is never sent to any server, stored, or logged. The tool is free, requires no signup, and works offline once loaded.