The World Wide Web started to grow in 1990. In the three years since it began, the volume of information available has increased enormously. In addition to providing access to preexisting network resources such as WAIS, FTP, and Gopher, the mass of material available via HTTP has become a vast resource in its own right. Simply wandering the Web with one of the many available browsers makes it clear that the Web is huge, but it is very difficult to estimate its size in this fashion. Because of the structure of the Web, many documents are obscure, reachable only through a thin trail of documents with few other links to them. This leaves the question: what is a realistic, quantitative measure of the size of the World Wide Web?

In an attempt to answer this question, I wrote an automaton that performs a limited parsing of HTML and attempts a depth-first search of the Web. The automaton became known as the World Wide Web Wanderer, or W4. The Wanderer, a script written in Perl, wandered the Web for many hours, accumulating URLs and new site names. It did not follow any links that were not HTTP links, and it included special cases to exclude certain vast gatewayed structures such as Techinfo and AFS. Though these are certainly part of the Web, it is not desirable to include them in a scale estimate of the Web, for two reasons. First, I am not including FTP, WAIS, or any other 'large' structures available over the Web, so including AFS or Techinfo would be a similar misrepresentation. Second, there is the duration of the wandering: traversing the entire Techinfo tree would be a project in itself.

Further complicating the search, a number of WWW servers provide complete access to local directory trees, which are large and not representative of the number of documents available on the Web. To avoid these medium-scale structures, I incorporated a 'boredom' factor into the Wanderer that attempts to recognize directory-tree-like structures and skip parts of them, both to expedite the search and to avoid misrepresenting the number of documents on the Web.

In the end, the Wanderer was complete enough and intelligent enough to wander the Web and produce a reasonable estimate of its size. In total, the Wanderer found more than 100 HTTP sites and well over two hundred thousand documents. This figure already reflects all of the reductions mentioned above, so the actual size of the Web is far greater. Moreover, new sites are being added to the Web every week, with many thousands of new documents becoming available every month, and the Wanderer could not accurately measure the volume of information reachable through the many searchable indices on the Web. When one then considers all the services that have been gatewayed to the Web, the ability to access the massive information resources in Gopher, FTP, and WAIS, and the fact that the Web is only three years old, the incredible size and growth rate of the Web becomes apparent.
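
As a rough illustration of the approach described above (limited HTML parsing, depth-first traversal, HTTP links only, accumulating URLs and site names), the following is a minimal Perl sketch. It is not the original W4 code: it assumes the modern LWP::Simple and URI modules, uses a crude regular expression in place of a real HTML parser, and its starting URL and depth limit are purely illustrative.

    #!/usr/bin/perl
    # Minimal sketch of a depth-first, HTTP-only crawler in the spirit of
    # the Wanderer. Not the original W4 code; module choices, the depth
    # limit, and the starting URL are assumptions for illustration.
    use strict;
    use warnings;
    use LWP::Simple qw(get);
    use URI;

    my %seen_urls;      # every HTTP document visited
    my %seen_sites;     # every distinct HTTP host discovered
    my $max_depth = 5;  # assumed limit to keep the example bounded

    sub wander {
        my ($url, $depth) = @_;
        return if $depth > $max_depth;

        my $uri = URI->new($url);
        return unless $uri->scheme && $uri->scheme eq 'http';  # HTTP links only
        return if $seen_urls{$url}++;                          # skip documents already visited
        $seen_sites{ $uri->host } = 1;

        my $html = get($url) or return;

        # Crude <a href="..."> extraction; a real crawler would use an HTML parser.
        while ($html =~ /<a\s[^>]*href\s*=\s*["']([^"']+)["']/gi) {
            my $link = URI->new_abs($1, $uri)->canonical;
            wander("$link", $depth + 1);    # depth-first: descend immediately
        }
    }

    wander('http://example.org/', 0);       # hypothetical starting point
    printf "%d sites, %d documents seen\n",
        scalar(keys %seen_sites), scalar(keys %seen_urls);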
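
The 'boredom' factor is described above only in general terms. One plausible way to realize it, offered here as an assumption rather than the documented W4 logic, is to count how many documents have been fetched under each directory prefix on a host and, past a threshold, visit only a sample of further documents under that prefix. A test such as "return if bored_with($uri);" near the top of the wander routine above would apply it.

    # Sketch of one possible "boredom" heuristic (an assumption, not the
    # original W4 logic): once many documents have been fetched under the
    # same directory prefix on one host, skip most of their siblings so a
    # mirrored directory tree does not dominate the count.
    my %prefix_count;
    my $boredom_threshold = 25;   # illustrative value
    my $sample_rate       = 10;   # once bored, visit roughly 1 in 10

    sub bored_with {
        my ($uri) = @_;
        (my $dir = $uri->path) =~ s{/[^/]*$}{/};   # strip the document name, keep the directory
        my $n = ++$prefix_count{ $uri->host . $dir };
        return $n > $boredom_threshold && ($n % $sample_rate) != 0;
    }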