A web application that scrapes trending topics from the "What's happening" section of X (formerly Twitter), stores them in MongoDB, and keeps a local backup.
- Scrapes the top 4 trending topics from X
- Stores trending data in MongoDB
- Creates a local backup in `twitter_trends.txt`
- Web interface built with React + TypeScript + Vite
- Python 3.x
- Node.js and npm
- MongoDB instance
- Chrome browser
- "Get cookies.txt LOCALLY" browser extension
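The "Get cookies.txt LOCALLY" extension exports cookies in the Netscape `cookies.txt` format. As a rough sketch of how such a file could be turned into Selenium-style cookie dicts (the helper name and field handling are illustrative, not the app's actual code):

```python
from pathlib import Path

def load_netscape_cookies(path):
    """Parse a Netscape-format cookies.txt (as exported by the
    'Get cookies.txt LOCALLY' extension) into Selenium-style dicts."""
    cookies = []
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        fields = line.split("\t")
        if len(fields) != 7:
            continue  # skip malformed lines
        domain, _flag, cookie_path, secure, expiry, name, value = fields
        cookies.append({
            "domain": domain,
            "path": cookie_path,
            "secure": secure.upper() == "TRUE",
            "expiry": int(expiry),
            "name": name,
            "value": value,
        })
    return cookies
```

Each dict can then be passed to Selenium's `driver.add_cookie()` after navigating to the matching domain.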
```
|-- app.py              # Main Flask app
|-- twitter_scrape.py   # Scraping logic
|-- twitter_trends.txt  # Local backup file
|-- requirements.txt    # Python dependencies
|-- .env                # Environment variables
|-- drivers/            # Chrome driver directory
|-- cookies/            # Cookies storage
|-- img/                # README images
|-- frontend/           # React + TypeScript frontend
    |-- src/            # Source code
    |-- dist/           # Built files
    |-- public/         # Static assets
```
- Clone the repository
- Create a `.env` file based on `.env.example`
- Add your MongoDB URI to the `.env` file:

```
MONGODB_URI=your_mongodb_connection_string
```
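A minimal sketch of how the app could read `MONGODB_URI` from the `.env` file at startup (the real code may use a library such as python-dotenv; this hand-rolled parser is only illustrative):

```python
import os

def load_env(path=".env"):
    """Minimal .env reader: copy KEY=VALUE lines into os.environ.
    (Illustrative sketch -- the app may use python-dotenv instead.)"""
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue  # skip blanks, comments, malformed lines
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # fall back to variables already set in the environment

load_env()
mongodb_uri = os.environ.get("MONGODB_URI")
```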
```bash
# Navigate to frontend directory
cd frontend

# Install dependencies
npm install

# Build the frontend
npm run build
```
Note: Cookies are used only for authentication and are not stored permanently.
```bash
# Return to root directory (if in frontend/)
cd ../

# Install Python dependencies
pip install -r requirements.txt

# Run the application
python app.py
```
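The local backup in `twitter_trends.txt` could be maintained with a small append-only writer like the following (a hypothetical sketch of the backup step in `twitter_scrape.py`; the function name and file layout are assumptions):

```python
from datetime import datetime, timezone

def backup_trends(trends, path="twitter_trends.txt"):
    """Append the scraped trends to the local backup file under a
    UTC timestamp (illustrative sketch, not the app's actual code)."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(f"--- {stamp} ---\n")
        for rank, trend in enumerate(trends, start=1):
            fh.write(f"{rank}. {trend}\n")
```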
- The application requires valid X (Twitter) cookies for authentication
- Ensure the Chrome browser and chromedriver major versions match
- MongoDB connection must be established before running the application
- Frontend must be built before running the main application
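To check the Chrome/chromedriver version requirement above, one could compare the major versions reported by `chrome --version` and `chromedriver --version`. A small sketch (the output formats shown are typical examples, not guaranteed):

```python
import re

def major_version(version_output):
    """Extract the major version from output like
    'Google Chrome 126.0.6478.127' or 'ChromeDriver 126.0.6478.126'."""
    match = re.search(r"(\d+)\.\d+\.\d+", version_output)
    if match is None:
        raise ValueError(f"no version found in: {version_output!r}")
    return int(match.group(1))

def versions_match(chrome_out, driver_out):
    """chromedriver only needs to match Chrome's major version."""
    return major_version(chrome_out) == major_version(driver_out)
```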
- Backend: Python, Flask
- Frontend: React, TypeScript, Vite
- Database: MongoDB
- Styling: Tailwind CSS
Feel free to open issues and submit pull requests.
MIT License