Amalgam is a collection of (mainly) Python scripts with a web interface (based on Flask) that you can use for SEO.
More info for developers: Dev page
- How to run it (Docker)
- How to run it (Windows 10)
- How to run it (XUbuntu 20)
- Scripts (Scripts)
- Run crawler against a site (Crawler)
Go to Docker's download page and install Docker.
docker run -it -p 5000:5000 scriptoid/amalgam:0.1
Note 1: Port 5000 on your PC should be available.
Note 2: If you want to run it on a different port, use the following command:
docker run -it -p <Your Port>:5000 scriptoid/amalgam:0.1
replacing <Your Port> with a free port of your choice.
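Here the -p option maps a port on your machine (the left value) to port 5000 inside the container (the right value). For example, assuming port 8080 is free on your machine, you could run:
docker run -it -p 8080:5000 scriptoid/amalgam:0.1
and then open http://localhost:8080 in your browser.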
Just open a browser and access http://localhost:5000
Simply go to the console window launched by Docker and press Ctrl-C.
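If you prefer to run the container in the background instead, Docker's detached mode can be used; in that case the container is stopped with docker stop rather than Ctrl-C:
docker run -d -p 5000:5000 scriptoid/amalgam:0.1
docker ps
docker stop <container id>
Here docker ps lists the running containers so you can copy the id to pass to docker stop.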
Download the amalgam zip file and unzip it into a folder.
Check if you have Python installed. Open a Command Prompt (Start > Command Prompt) and type:
python --version
You should get something like:
Python 3.x.x
If not, download and install it from: https://www.python.org/
Note: Remember to check "Add Python 3.x to PATH" during installation so you can access Python from the Command Prompt.
Step 3: Have pip3 installed. Usually Python 3 comes with pip3 by default, but you can check (in a Command Prompt) with:
pip3 --version
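If the pip3 command is not found, it can usually be bootstrapped with Python's bundled ensurepip module:
python -m ensurepip --upgrade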
First go inside the project folder, open a Command Prompt, and type:
pip install -r requirements.txt
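Optionally, you can install the dependencies into a virtual environment instead of globally; a minimal sketch using the standard venv module (the folder name venv is just an example) would be:
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt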
From a Command Prompt inside the project folder, type: python app.py
Access it at: http://127.0.0.1:5000/ with any browser.
Check if you have Python 3 installed:
python3 --version
If not, run:
sudo apt-get install python3
pip3 --version
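If pip3 itself is missing, it can be installed with:
sudo apt-get install python3-pip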
sudo pip3 install -r requirements.txt
python3 app.py
Access it at: http://127.0.0.1:5000/
Scripts are small programs that are part of Amalgam but do not require running the whole Amalgam application.
They are isolated enough from the main application to be run independently.
The Crawler script is the part of Amalgam that crawls the pages of a site and collects information.
Note: In order to run it you should have completed the "Install Python libraries" step for your operating system in the tutorial above.
To run it, go inside [lab]/crawler/requests
cd ./lab/crawler/requests
and run
python crawler.py --domain=<name_of_domain> --max-links=<maximum_number_of_links>
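For example, to crawl example.com with a limit of 100 links (both values are just an illustration):
python crawler.py --domain=example.com --max-links=100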
The collected data will be available inside the file:
crawl-requests-report.log