What’s the Robot Exclusion Standard?

Because crawlers have the potential to wreak havoc on a web site, there needs to be a set of
guidelines to keep them in line. Those guidelines are called the Robot Exclusion Standard, the Robots
Exclusion Protocol, or simply robots.txt.

The file robots.txt is the actual element that you’ll work with. It’s a plain-text file placed in the
root of your domain, and it contains instructions that tell any crawler visiting your site which parts
of the site it is and is not allowed to index.
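For example, if your domain is www.example.com (a placeholder name used here for illustration), crawlers
will look for the file at:
http://www.example.com/robots.txt
If the file isn’t there, the crawler assumes it is free to index the entire site.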
To communicate with the crawler, you need a specific syntax that it can understand. In its most
basic form, the text might look something like this:
User-agent: *
Disallow: /
These two parts of the text are essential. The first part, User-agent:, tells a crawler which user
agent, or crawler, the instructions are addressed to. The asterisk (*) indicates that all crawlers are
covered, but you can specify a single crawler or even multiple crawlers.
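For example, Google’s crawler reports itself as Googlebot, and Microsoft’s as Bingbot (other crawlers
have their own names, which you can find in their documentation or in your server logs). To command
only Google’s crawler, you would write something like this:
User-agent: Googlebot
Disallow: /
A single record can also stack more than one User-agent line, so the same rule can be aimed at several
crawlers at once:
User-agent: Googlebot
User-agent: Bingbot
Disallow: /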
The second part, Disallow:, tells the crawler what it is not allowed to access. The slash (/) indicates
“all directories.” So the preceding robots.txt example is essentially saying, “All crawlers are to
ignore all directories.”
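The reverse is also part of the syntax: a Disallow line with nothing after the colon means that nothing
is blocked, so the following text tells all crawlers that they may index the entire site:
User-agent: *
Disallow: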
When you’re writing robots.txt, remember to include the colon (:) after the User-agent indicator
and after the Disallow indicator. The colon indicates that the information the crawler needs to pay
attention to follows.
You won’t usually want to tell all crawlers to ignore all directories. Instead, you can tell all crawlers
to ignore just your temporary directory by writing the text like this:
User-agent: *
Disallow: /tmp/
Or you can take it one step further and tell all crawlers to ignore multiple directories, and even a specific page:
User-agent: *
Disallow: /tmp/
Disallow: /private/
Disallow: /links/listing.html
That piece of text tells the crawlers to ignore the temporary directory, the private directory, and one specific page, listing.html, in the links directory.