Robots.txt Generator

    Create robots.txt files for search engine crawlers

    Start from a preset, build up rules, and use the built-in path tester to check whether a given URL would be crawlable. For example, the path /admin matches the rule Disallow: /admin, so Googlebot and other compliant crawlers would skip it.

    About the Robots.txt Generator

    robots.txt is the first file search engine crawlers and other bots read when they visit your site. It declares which paths they may crawl and where to find your sitemap. This generator builds the file interactively — multiple user-agent blocks, Allow/Disallow rules, crawl delays, sitemap URLs — and outputs a valid, spec-compliant robots.txt you can drop straight into the root of your domain.

    How it works

    1. Add one or more User-agent rules (use * to apply to all bots).
    2. Under each, list paths to Allow or Disallow.
    3. Optionally add Crawl-delay and one or more Sitemap URLs.
    4. Copy or download the file and upload it to the root of your domain.
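
Following the steps above, a complete file for a hypothetical site might look like this (all paths and URLs are illustrative):

```text
# Default rules for all bots
User-agent: *
Disallow: /admin
Allow: /admin/public

# Throttle one specific bot
# (Crawl-delay is non-standard; Google ignores it)
User-agent: Bingbot
Crawl-delay: 10

Sitemap: https://example.com/sitemap.xml
```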

    Frequently asked questions

    Where does robots.txt live?

    At the root of your domain — https://example.com/robots.txt. It must be served with a Content-Type of text/plain.

    Is robots.txt a security control?

    No. It's a request, not an enforcement mechanism: bad actors simply ignore it, and since the file is public, listing sensitive paths actually advertises them. Never use Disallow to 'hide' sensitive URLs — use authentication instead.

    How do I block ChatGPT or other AI crawlers?

    Add User-agent: GPTBot (and CCBot, ClaudeBot, Google-Extended, PerplexityBot, etc.) with Disallow: /. Note: this only blocks compliant bots.
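
Such an opt-out block could look like the following. RFC 9309 allows several User-agent lines to share one group of rules; the list of bot names here is illustrative, not exhaustive:

```text
User-agent: GPTBot
User-agent: CCBot
User-agent: ClaudeBot
User-agent: Google-Extended
User-agent: PerplexityBot
Disallow: /
```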

    Can I have multiple User-agent blocks?

    Yes. Each block applies to the agents it names, and a crawler obeys only the single block that best matches it. Include a User-agent: * block as a catch-all for bots not listed elsewhere.

    Does order matter?

    For rules inside a single User-agent group, Google applies the most specific (longest) matching rule, with Allow winning ties. Across groups, a crawler follows only the group with the most specific matching User-agent line; the order of groups in the file does not matter.
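
You can check matching behavior yourself with Python's standard-library parser. A small sketch (the file contents and URLs are made up for illustration; note that urllib.robotparser applies rules in file order, which can differ from Google's most-specific-rule behavior when Allow and Disallow overlap):

```python
# Test paths against a robots.txt using Python's built-in parser.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /admin
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Googlebot falls through to the * group: /admin is blocked, /blog is not.
print(rp.can_fetch("Googlebot", "https://example.com/admin"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/blog"))   # True

# GPTBot matches its own group and is blocked everywhere.
print(rp.can_fetch("GPTBot", "https://example.com/blog"))      # False
```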