Parameter Spider is a tool for discovering HTTP GET parameters in web applications. It leverages Web Archive (Wayback Machine) data to collect the unique GET parameters a site has exposed over time, so historical parameters that no longer appear on the live site are still captured for analysis and testing.
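One way that Wayback Machine lookup could work is sketched below. This is illustrative only and not the tool's actual source: it assumes the public CDX API at `web.archive.org/cdx/search/cdx`, and the `collectParams` helper name is hypothetical.

```typescript
// Illustrative sketch: query the Wayback Machine CDX API for archived URLs
// of a domain and collect the unique GET parameter names they contain.
// The helper name `collectParams` is hypothetical, not part of paramspider.
async function collectParams(domain: string): Promise<Set<string>> {
  const cdx =
    `https://web.archive.org/cdx/search/cdx?url=*.${domain}/*` +
    `&output=text&fl=original&collapse=urlkey`;
  const body = await (await fetch(cdx)).text();

  const params = new Set<string>();
  for (const line of body.split("\n")) {
    try {
      // Each line is an archived URL; record its query-string keys.
      for (const key of new URL(line.trim()).searchParams.keys()) {
        params.add(key);
      }
    } catch {
      // Ignore blank or malformed lines.
    }
  }
  return params;
}

// Example: const params = await collectParams("example.com");
```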
- Multi-domain support
- Output results to a specified directory
- Lightweight and easy to use
- Flexible execution: via `bun link` or manual usage with `paramspider.ts`
- Ensure you have Bun installed on your system.
- Clone the repository:
$ git clone https://github.com/binsarjr/paramspider
$ cd paramspider
- Install dependencies:
$ bun install
- Link the tool globally (optional):
$ bun link
Once linked globally, you can run the tool as:
paramspider -d <domain1> <domain2> ... -l <domain_list_file> -o <output_directory>
Alternatively, you can run the tool directly with Bun:
bun run paramspider.ts -d <domain1> <domain2> ... -l <domain_list_file> -o <output_directory>
Usage: paramspider [options]
Options:
- `-d`, `--domain <string...>`: Domain to crawl (e.g., `example.com domainku.com`).
- `-l`, `--list <list...>`: (Optional) Path to a file containing a list of domains.
- `-p`, `--placeholder <string>`: Placeholder to replace query parameters (default: `"FUZZ"`); illustrated in the sketch further below.
- `-o`, `--output-dir <string>`: Output directory (default: `"./results"`).
- `-h`, `--help`: Display help for command.
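As a rough sketch, the options above could be wired up in `paramspider.ts` with a commander-style CLI; this is an assumption about the implementation, not the tool's actual code.

```typescript
import { Command } from "commander";

// Hypothetical wiring of the options listed above; the real paramspider.ts
// may be structured differently. `-h, --help` is generated automatically.
const program = new Command()
  .name("paramspider")
  .option("-d, --domain <string...>", "Domain to crawl")
  .option("-l, --list <list...>", "Path to a file containing a list of domains")
  .option("-p, --placeholder <string>", "Placeholder to replace query parameters", "FUZZ")
  .option("-o, --output-dir <string>", "Output directory", "./results");

program.parse(process.argv);

// Note: --output-dir is exposed as `outputDir` on the parsed options object.
const { domain = [], list = [], placeholder, outputDir } = program.opts();
console.log({ domain, list, placeholder, outputDir });
```

A full invocation looks like this: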
paramspider -d example.com domainku.com -l domainlist.txt -o output_dir/result
bun run paramspider.ts -d example.com domainku.com -l domainlist.txt -o output_dir/result
This command crawls `example.com` and `domainku.com`, reads additional domains from `domainlist.txt`, and saves the results in the `output_dir/result` directory.
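Before results are written, query parameters are rewritten with the `--placeholder` value (`FUZZ` by default). A minimal sketch of that replacement, assuming it operates on the query-string values of each collected URL; the `applyPlaceholder` helper is hypothetical and not taken from `paramspider.ts`:

```typescript
// Hypothetical illustration: rewrite every query parameter value with the
// placeholder so the resulting URLs are ready for fuzzing tools.
function applyPlaceholder(rawUrl: string, placeholder = "FUZZ"): string {
  const url = new URL(rawUrl);
  // Snapshot the keys first so we do not mutate while iterating.
  const keys = [...new Set(url.searchParams.keys())];
  for (const key of keys) {
    url.searchParams.set(key, placeholder);
  }
  return url.toString();
}

// applyPlaceholder("https://example.com/search?q=shoes&page=2")
//   -> "https://example.com/search?q=FUZZ&page=FUZZ"
```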
The tool generates:
- A list of unique HTTP GET parameters discovered during the crawl.
- Separate files for each domain containing their respective parameters.
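As a rough sketch of how such per-domain output could be written under Bun; the file layout and the `writeResults` helper are assumptions for illustration, not the tool's actual code.

```typescript
import { mkdir } from "node:fs/promises";

// Hypothetical helper: write one file per domain, one parameter per line.
async function writeResults(
  outputDir: string,
  resultsByDomain: Map<string, Set<string>>,
): Promise<void> {
  await mkdir(outputDir, { recursive: true });
  for (const [domain, params] of resultsByDomain) {
    // e.g. ./results/example.com.txt
    await Bun.write(`${outputDir}/${domain}.txt`, [...params].join("\n"));
  }
}
```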
Feel free to contribute to the project by submitting pull requests or reporting issues. Make sure to follow the standard coding guidelines and include detailed descriptions in your contributions.