Robots.txt Generator
Generate robots.txt file for search engines
- Sitemap URL: enter the full URL to your sitemap file
- Crawl-delay: time between crawler requests in seconds (optional; not all bots respect this)
Generated robots.txt
```
# robots.txt generated by Mercy Tools
# https://mercy.tools/seo-tools/robots-txt-generator

User-agent: *
Allow: /
```
How to Use
- Download or copy the generated robots.txt file
- Upload it to the root directory of your website
- The file should be accessible at: yourdomain.com/robots.txt
- Test using Google Search Console or Bing Webmaster Tools
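Before testing in Search Console, you can sanity-check the file locally with Python's standard-library `urllib.robotparser`. This is a minimal sketch that parses the generated "allow everything" file from a string; the domain is a placeholder:

```python
from urllib import robotparser

# Hypothetical generated file content: allow all crawlers everywhere.
robots_txt = """\
User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Under "Allow: /", any URL should be reported as crawlable.
print(rp.can_fetch("Googlebot", "https://yourdomain.com/any-page"))
```

In production you would point `rp.set_url(...)` at the live `yourdomain.com/robots.txt` and call `rp.read()` instead of parsing a string.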
How It Works
Create robots.txt files to control search engine crawling
1. Select user agents: choose which bots to configure (Googlebot, Bingbot, all, etc.)
2. Add rules: specify Allow and Disallow rules for directories and files
3. Add a sitemap: include your sitemap URL for search engine discovery
4. Generate and download: create the robots.txt file and upload it to your site root
Frequently Asked Questions
What is robots.txt?
robots.txt is a plain-text file at your site root that tells search engine crawlers which pages they may access. It uses User-agent, Allow, and Disallow directives. It is a request, not enforcement: malicious bots may ignore it.
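A small annotated example of the three directives (the paths are illustrative):

```
User-agent: *           # this group applies to all crawlers
Disallow: /private/     # ask crawlers to skip this directory
Allow: /private/help/   # carve out an exception inside it

User-agent: Googlebot   # a group for one specific crawler
Disallow:               # an empty value means nothing is blocked
```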
How do I block a directory?
Use Disallow: /folder-name/ to block a directory and all its contents. For example, Disallow: /admin/ blocks /admin/ and /admin/page.html. Remember that robots.txt is public; do not rely on it for security.
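You can verify that behavior with Python's standard-library `urllib.robotparser`; the domain here is a placeholder:

```python
from urllib import robotparser

# The Disallow example from the answer above: /admin/ and everything
# under it should be blocked, while other paths remain crawlable.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /admin/",
])

print(rp.can_fetch("*", "https://yourdomain.com/admin/page.html"))  # blocked
print(rp.can_fetch("*", "https://yourdomain.com/blog/post.html"))   # allowed
```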
What is the difference between Disallow and noindex?
Disallow in robots.txt prevents crawling, but pages may still be indexed if they are linked from elsewhere. A noindex directive prevents indexing but allows crawling. To keep a page out of search results, use noindex and do not Disallow it: the crawler must be able to fetch the page in order to see the noindex directive.
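The noindex directive has two standard forms, a meta tag in the page's head and an HTTP response header; both are shown below:

```
<!-- HTML meta tag in the page head -->
<meta name="robots" content="noindex">
```

The same directive can also be sent by the server as an X-Robots-Tag: noindex response header, which is useful for non-HTML files such as PDFs.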
Should I block CSS and JavaScript files?
No. Google recommends allowing CSS and JavaScript files so it can render pages properly for indexing; blocking them may hurt rankings. Only block paths that truly should not be crawled, such as sensitive configuration files or admin areas.
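Googlebot supports * and $ wildcards in robots.txt (an extension beyond the basic standard), which can make the intent explicit; the blocked path here is illustrative:

```
User-agent: *
# Block only truly sensitive paths
Disallow: /config/
# Explicitly allow stylesheets and scripts (Google wildcard syntax)
Allow: /*.css$
Allow: /*.js$
```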