Create robots.txt files for search engine crawlers
robots.txt is the first file search engines and other crawler bots check when visiting your site. It declares which paths they may crawl and where to find your sitemap. This generator builds the file interactively — multiple user-agent blocks, allow/disallow rules, crawl delays — and outputs a valid, spec-compliant robots.txt you can drop straight into the root of your domain.
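A minimal file of the kind the generator produces might look like this (the paths and sitemap URL are placeholders; note that Crawl-delay is honored by some bots such as Bing but ignored by Googlebot):

```
# Applies to any crawler not matched by a more specific group
User-agent: *
Disallow: /admin/
Allow: /
Crawl-delay: 10

# Sitemap is a standalone directive, outside any group
Sitemap: https://example.com/sitemap.xml
```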
Where does the file live? At the root of your domain: https://example.com/robots.txt. It must be served with a Content-Type of text/plain.
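Before deploying, you can sanity-check a file with Python's built-in robots.txt parser (the file contents below are illustrative):

```python
from urllib.robotparser import RobotFileParser

robots = """\
User-agent: *
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(robots.splitlines())

print(rp.can_fetch("*", "https://example.com/admin/login"))  # blocked
print(rp.can_fetch("*", "https://example.com/blog/post"))    # allowed
```

Note that urllib.robotparser follows the original spec's first-match semantics, not Google's longest-match rule, so keep test cases unambiguous.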
Does robots.txt actually block bots? No. It's a request, not an enforcement mechanism; bad actors ignore it. Never use Disallow to "hide" sensitive URLs; use authentication instead.
How do I block AI training crawlers? Add User-agent: GPTBot (and CCBot, ClaudeBot, Google-Extended, PerplexityBot, etc.) with Disallow: /. Note: this only stops compliant bots.
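Multiple User-agent lines can share one group, so a single block covers them all (this agent list is a snapshot; check each vendor's documentation for current bot names):

```
User-agent: GPTBot
User-agent: CCBot
User-agent: ClaudeBot
User-agent: Google-Extended
User-agent: PerplexityBot
Disallow: /
```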
Can I give different bots different rules? Yes. Each block applies only to the listed agent, and a bot that matches a specific block ignores the others. Use User-agent: * as the catch-all for bots not named elsewhere; by convention it goes last, though group order doesn't affect matching.
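For example (paths are placeholders):

```
# Googlebot uses only this group and skips the catch-all below
User-agent: Googlebot
Disallow: /drafts/

# Every other bot falls through to this group
User-agent: *
Disallow: /
```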
What if rules conflict? Within a single User-agent group, Google applies the most specific (longest-path) rule; on a tie, the less restrictive Allow wins. Across groups, a crawler obeys the single group whose user-agent most specifically matches it; order in the file is irrelevant.
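A sketch of longest-match precedence (paths are placeholders):

```
User-agent: Googlebot
Disallow: /shop/
Allow: /shop/sale/
```

Under Google's rules, /shop/sale/summer is crawlable because Allow: /shop/sale/ is the longer, more specific match, even though /shop/ is disallowed.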