A robots.txt file is a plain-text document placed at the root of your website that instructs search engine crawlers which pages or directories they may access. Properly configuring robots.txt helps you manage crawl budget, keep crawlers out of private areas, and point bots to your XML sitemap. Use the builder below to create valid rules for any user-agent, then copy or download the finished file.
CMS Presets
User-Agent Blocks
Global Settings
Validation Warnings
robots.txt Preview
How to Use This Robots.txt Generator
A robots.txt file is one of the most important files for SEO and website management. It sits at the root of your domain and tells search engine crawlers like Googlebot and Bingbot which parts of your site they can and cannot access. Without a properly configured robots.txt, crawlers may waste time on pages you do not want indexed, or miss your sitemap entirely.
Step 1: Choose a CMS Preset or Start From Scratch
If you are using a common platform like WordPress, Shopify, Next.js, or Laravel, click the corresponding preset button to automatically load recommended rules for that CMS. Each preset blocks platform-specific directories like /wp-admin/ for WordPress or /admin/ for Shopify. You can customize the rules after loading a preset.
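As an illustration, the WordPress preset loads rules along these lines; the admin-ajax.php exception keeps front-end AJAX requests working, and the sitemap URL is a placeholder for your own domain:

```txt
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: https://yourdomain.com/sitemap.xml
```

Other presets follow the same shape with their platform's own paths substituted in.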
Step 2: Configure User-Agent Blocks
Each block in the robots.txt targets a specific crawler. The default * user-agent applies to all bots. You can add separate blocks for Googlebot, Bingbot, or any custom bot by clicking "Add Block" and selecting the user-agent. Within each block, add Allow and Disallow rules to control which paths the bot can access.
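For example, a file with a default block plus a Googlebot-specific block might look like this (paths are placeholders). Note that major crawlers follow only the most specific matching User-agent block, so the Googlebot rules replace, rather than extend, the * rules for Googlebot:

```txt
# Default rules for all crawlers
User-agent: *
Disallow: /private/
Disallow: /tmp/

# Googlebot follows only this block, not the * block above
User-agent: Googlebot
Disallow: /private/
```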
Step 3: Add Sitemap and Crawl-delay
Enter your XML sitemap URL in the Sitemap field so search engines can discover all your pages efficiently. The Crawl-delay field is optional and tells bots how many seconds to wait between requests. Note that Google ignores Crawl-delay but Bing respects it. Most sites do not need a crawl-delay unless the server has limited resources.
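Both directives end up in the generated file; Crawl-delay belongs inside a user-agent block, while Sitemap stands on its own outside the blocks (all values here are placeholders):

```txt
User-agent: *
Disallow: /search/
Crawl-delay: 5

Sitemap: https://yourdomain.com/sitemap.xml
```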
Step 4: Review Warnings and Preview
The generator validates your rules in real time. If you accidentally block CSS or JavaScript files, or disallow your entire site, a warning will appear. Review the live preview to make sure the output looks correct. The preview shows exactly what your robots.txt file will contain, formatted with proper line breaks and comments.
Step 5: Copy or Download
When you are satisfied with the output, click "Copy" to place the content on your clipboard, or "Download" to save it as a robots.txt file. Upload the file to the root directory of your website so it is accessible at https://yourdomain.com/robots.txt. You can verify it using the robots.txt report in Google Search Console.
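You can also sanity-check the generated rules programmatically before or after uploading. Here is a minimal sketch using Python's standard-library urllib.robotparser; the rules and URLs are placeholders, and keep in mind that Python matches rules in file order rather than using Google's longest-match logic:

```python
from urllib.robotparser import RobotFileParser

# Generated robots.txt content (placeholder rules for illustration)
robots_txt = """\
User-agent: *
Disallow: /admin/
Disallow: /tmp/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# can_fetch(user_agent, url) reports whether the rules permit crawling
print(parser.can_fetch("MyBot", "https://example.com/blog/post"))   # True
print(parser.can_fetch("MyBot", "https://example.com/admin/users")) # False
```

This is handy for quickly confirming that an important page was not accidentally blocked.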
Frequently Asked Questions
Is this robots.txt generator completely free?
Yes, this robots.txt generator is 100% free with no limits. You can create as many robots.txt files as you need without signing up or paying anything. Everything runs in your browser, so there are no usage caps or restrictions.
Is my data safe when using this tool?
Absolutely. All processing happens entirely in your browser using client-side JavaScript. Your URLs, paths, and configuration are never sent to any server, never stored, and never logged. You can safely use this tool for confidential projects.
What is a robots.txt file and why do I need one?
A robots.txt file is a plain text file placed at the root of your website that tells search engine crawlers which pages or directories they may and may not access. It helps you control crawl budget, keep crawlers out of private areas, and guide bots to your sitemap. Every public website benefits from having one as part of basic SEO hygiene.
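Even a site that wants everything crawled benefits from a minimal file; an empty Disallow permits all paths, and the Sitemap line (placeholder URL) points crawlers to your page list:

```txt
User-agent: *
Disallow:

Sitemap: https://yourdomain.com/sitemap.xml
```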
Where do I place the robots.txt file on my website?
The robots.txt file must be placed at the root of your domain, accessible at https://yourdomain.com/robots.txt. It will not work if placed in a subdirectory. Most web servers and CMS platforms have a specific location or setting for this file. For WordPress, plugins like Yoast SEO can manage it automatically.
Can robots.txt block pages from appearing in Google?
Robots.txt can prevent crawling, but it does not guarantee a page will not appear in search results. Google may still index a URL if other pages link to it, showing it with a 'No information is available for this page' message. To fully block indexing, use a noindex meta tag or X-Robots-Tag HTTP header in addition to robots.txt.
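For reference, the two noindex mechanisms look like this; the meta tag goes in the page's HTML head, while the header can be set in your server configuration (the Apache .htaccess line is one illustration):

```txt
<meta name="robots" content="noindex">

# Equivalent HTTP header, e.g. in an Apache .htaccess:
Header set X-Robots-Tag "noindex"
```

Note that the page must remain crawlable for either directive to be seen; if robots.txt blocks the URL, Google never fetches it and never encounters the noindex.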
What is the Crawl-delay directive in robots.txt?
The Crawl-delay directive tells bots how many seconds to wait between successive requests to your server, which helps reduce load from aggressive crawlers. Note that Google ignores this directive; Googlebot's crawl rate is managed through Google Search Console instead. Bing and some other bots do respect Crawl-delay. Common values range from 1 to 10 seconds.
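Crawl-delay is set per user-agent block, so you can throttle one bot without slowing the rest (the 5-second value is illustrative):

```txt
# Throttle only Bingbot; other bots keep full speed
User-agent: Bingbot
Crawl-delay: 5

User-agent: *
Disallow:
```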
Should I block CSS and JavaScript files in robots.txt?
No, you should not block CSS and JavaScript files in robots.txt. Google needs access to these files to properly render and understand your pages. Blocking them can hurt your SEO because Google cannot see your page layout and content the way users do. This generator warns you if your rules would block common CSS or JS paths.
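If a broad Disallow would catch asset directories, a more specific Allow can carve them back out; Google resolves conflicts using the most specific (longest) matching rule (paths are illustrative):

```txt
User-agent: *
Disallow: /assets/
# Longest match wins for Google, so these stay crawlable:
Allow: /assets/css/
Allow: /assets/js/
```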
What CMS presets are available in this generator?
This generator includes presets for WordPress, Shopify, Next.js, and Laravel. Each preset pre-fills common Disallow rules specific to that platform, such as blocking wp-admin in WordPress or admin paths in Shopify. You can load a preset and customize it further before copying or downloading the final file.