diff --git a/html/about/guide.html b/html/about/guide.html
index 9b0daf3565c49eb42958a4df9961c01f70dcb966..159a3e983063cdd63b3808e47c85e8f8a820400b 100755
--- a/html/about/guide.html
+++ b/html/about/guide.html
@@ -273,7 +273,7 @@ You may want to run this on startup, easiest way to set that is with a cron job
Start the Crawler
-It is best to run the crawler in a screen session so that you can monitor its output. You can have more than one crawler running as long as you keep them in separate directories, include a symlink to the same robots folder, and also set the correct parameters on each.
+It is best to run the crawler in a screen session so that you can monitor its output. You can run more than one crawler as long as you keep each in its own directory, symlink each to the same robots folder and 'shards' file, and set the correct parameters on each.
To view the parameters, type './cr -h'. Without any parameters set, you can only run one crawler (which might be all you need anyway). If necessary, you can change the database connection from 'localhost' to a different IP from inside cr.c, then rebuild.
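The multi-crawler setup described above can be sketched as follows. The directory names (`crawler1`, `crawler2`) and the screen session command are illustrative assumptions; only the robots folder, the 'shards' file, and the `cr` binary come from the guide.

```shell
#!/bin/sh
# Sketch: a second crawler instance in its own directory, sharing the
# robots folder and 'shards' file of the first via symlinks.
# Directory names here are hypothetical; adjust to your install.
set -e

# Assumed existing first-crawler layout:
mkdir -p crawler1/robots
touch crawler1/shards

# Second instance: separate directory, symlinks back to shared state.
mkdir -p crawler2
ln -sf ../crawler1/robots crawler2/robots
ln -sf ../crawler1/shards crawler2/shards

# Each instance would then run in its own screen session, e.g.:
#   screen -S crawl2 -dm sh -c 'cd crawler2 && ./cr <parameters>'
```

Run `./cr -h` inside each directory to see which parameters to pass so the instances do not overlap.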