User Web Crawler

The user web crawler is a website indexer built from the webpages that a browser actually navigates to.

How it works

The user web crawler works through volunteer users who install a browser extension. When a user visits a webpage, the URL is anonymously added to an upstream database that holds all the unique webpages. Note: there is currently no centralized database that the data is pushed to; to start logging data, you will need to set up your own backend service.
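
The repository leaves the upstream service up to each user, so the exact API is not fixed here. As a minimal sketch, assuming the browser extension POSTs each visited URL as JSON to a /log endpoint, a Go backend could deduplicate and record URLs roughly like this (the endpoint path, payload field, port, and the urls.txt file name are all assumptions for illustration, not the actual server.go):

package main

import (
	"encoding/json"
	"log"
	"net/http"
	"os"
	"sync"
)

// visit is the assumed payload the browser extension would send.
type visit struct {
	URL string `json:"url"`
}

var (
	mu   sync.Mutex
	seen = map[string]bool{} // unique URLs recorded so far
)

func logHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		http.Error(w, "POST only", http.StatusMethodNotAllowed)
		return
	}
	var v visit
	if err := json.NewDecoder(r.Body).Decode(&v); err != nil || v.URL == "" {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	mu.Lock()
	defer mu.Unlock()
	if seen[v.URL] {
		w.WriteHeader(http.StatusOK) // already indexed; nothing to do
		return
	}
	seen[v.URL] = true
	// Append the new unique URL to a plain-text index file (assumed name).
	f, err := os.OpenFile("urls.txt", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
	if err != nil {
		http.Error(w, "storage error", http.StatusInternalServerError)
		return
	}
	defer f.Close()
	if _, err := f.WriteString(v.URL + "\n"); err != nil {
		http.Error(w, "storage error", http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusCreated)
}

func main() {
	http.HandleFunc("/log", logHandler)
	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}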

Usage

Start the backend service:

go run server.go

Install the Tampermonkey browser extension

Run the following Python 3 script when you want to push the collected data to the upstream database:

python3 ./commit.py
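
Before wiring up the browser side, the backend can be exercised by hand. The small Go program below posts one test URL, mimicking what the userscript would send on navigation; it assumes the /log endpoint, port, and JSON payload from the sketch above, which are illustrative only:

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Post one test URL to the local backend, mimicking what the
	// Tampermonkey userscript would send when a page is visited.
	body := bytes.NewBufferString(`{"url":"https://example.com/"}`)
	resp, err := http.Post("http://localhost:8080/log", "application/json", body)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("server responded:", resp.Status)
}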
