Re: Robots and Cookie problems
by Thomas James Allen <tjallen(at)pipeline.com>
Date: Fri, 24 Sep 1999 02:35:01 -0400
To: "Rajnish Bhaskar" <9705228b(at)student.gla.ac.uk>
Cc: hwg-basics(at)hwg.org
At 07:29 PM 9/23/99 +0100, you wrote:
>Hi all,
<snip>
>And finally, according to my HTML book, a robots.txt can be used to
>disallow individual pages from being spidered, but I seem to recall
>someone on this list saying that robots.txt can only be used to disallow
>whole directories. Am I imagining things, or is this actually the case?
>
>Thanks a lot,
>Raj.
>------------------------------------------------------------------
For good info on robots.txt, see:
http://info.webcrawler.com/mak/projects/robots/robots.html
It includes an extensive FAQ, plus info on keeping pages from being indexed,
both with a meta tag and with robots.txt.
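As a quick sketch (the file and directory names below are just placeholders),
a robots.txt entry can disallow a single page as well as a whole directory:

    User-agent: *
    Disallow: /private/            # blocks everything under this directory
    Disallow: /docs/secret.html    # blocks just this one page

And on any individual page you'd rather not rely on robots.txt for, you can
add this meta tag in the <head>:

    <meta name="robots" content="noindex, nofollow">

The FAQ at the URL above covers the details, so check there for anything
beyond this.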
Hope this helps,
jimmy
---
Jimmy Allen (Thomas James Allen)
Business - Get Your Website Here
http://www.getyourwebsitehere.com
Personal - Eclecticity
http://www.getyourwebsitehere.com/personal