# mudd.one: Multimodal Ultrasound Data Distiller

mudd.one is the first phase of the mudd suite, a comprehensive toolkit for processing, analyzing, measuring, and providing diagnostic suggestions for veterinary ultrasound imaging data. This phase focuses on video data analysis and the preparation of ML training datasets.

mudd.one streamlines ultrasound video analysis and dataset preparation for machine learning applications. Our goal is to give veterinary professionals and ML researchers powerful, user-friendly tools for processing ultrasound imaging data.
## Features

- Multiple Format Support
  - Video: mp4, mov, avi, wmv (FFmpeg-compatible)
  - Medical imaging: DICOM (single- and multiframe)
  - Raw ultrasound data streams
- Automatic Cropping
  - Ultrasound area detection via simple binary detection or an AI segmentation model of your choice (see the sketch after this list)
  - Depth scale recognition
  - Configurable confidence threshold (≥0.9 by default)
- Manual Adjustments
  - Interactive canvas interface
  - Precision cropping tools
  - Real-time preview
- Video Enhancement
  - Resolution normalization
  - Grayscale optimization
  - Noise reduction
- Filtering Suite
  - Histogram equalization
  - Contrast adjustment
  - Adaptive thresholding
  - Edge detection (Canny)
  - Custom filter integration
- Flexible Masking Tools
  - Model-assisted segmentation (integration start and end points provided for a segmentation model of the user's choice)
  - Manual annotation capabilities
  - Multiple label support
- Export Options
  - Custom dataset formats
  - Batch processing
  - Metadata preservation
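As a rough illustration of the simple binary-detection path for automatic cropping, the sketch below finds the largest bright region in a frame with OpenCV and returns its bounding box. This is a minimal sketch, not the project's actual detector; the function name and threshold value are illustrative.

```python
import cv2
import numpy as np

def detect_ultrasound_area(frame: np.ndarray, threshold: int = 10):
    """Return the bounding box (x, y, w, h) of the largest bright region,
    which on most scanner exports is the fan-shaped ultrasound area."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pixels brighter than `threshold` are treated as imaging content;
    # the dark UI chrome around the fan falls away.
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # fall back to manual cropping in the UI
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)
```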
## Tech Stack

### Backend
- Core: Python 3.8+
- API Framework: FastAPI
- Image Processing: OpenCV, scikit-image
- Medical Imaging: pydicom
- ML Integration: Custom model endpoints

### Frontend
- Framework: React 18 with Next.js
- State Management: React Hooks
- UI Components: Custom component library
- Canvas Handling: Custom WebGL renderer

### Development Tools
- Version Control: Git
- Documentation: OpenAPI/Swagger
- Testing: pytest, React Testing Library
- Linting: pylint, ESLint
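Since DICOM handling goes through pydicom, a minimal sketch of loading both single- and multiframe files might look like the following (the helper name and frame-axis normalization are illustrative, not the project's actual code):

```python
import numpy as np
import pydicom

def load_dicom_frames(path: str) -> np.ndarray:
    """Load a DICOM file and always return a frame-first array."""
    ds = pydicom.dcmread(path)
    pixels = ds.pixel_array           # multiframe files come back frame-first
    if pixels.ndim == 2:              # single-frame: add a leading frame axis
        pixels = pixels[np.newaxis, ...]
    return pixels
```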
## System Requirements

### Minimum
- CPU: 4 cores
- RAM: 8 GB
- Storage: 20 GB
- GPU: optional

### Recommended
- CPU: 8+ cores
- RAM: 16 GB+
- Storage: 50 GB+ SSD
- GPU: CUDA/Metal compatible
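The GPU is optional. If the segmentation backend runs on PyTorch (an assumption on our part; the stack list above only says "custom model endpoints"), a runtime check for CUDA/Metal acceleration could look like this:

```python
import torch

# Assumes a PyTorch-based model backend; adjust for your framework of choice.
if torch.cuda.is_available():
    device = "cuda"                      # NVIDIA GPUs
elif torch.backends.mps.is_available():
    device = "mps"                       # Apple Metal
else:
    device = "cpu"                       # works everywhere, just slower
print(f"Running inference on: {device}")
```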
## Installation

### Prerequisites
- Python 3.8 or higher
- Node.js 16 or higher
- FFmpeg
- Git

### Quick Start

```bash
# Clone repository
git clone https://github.com/Szowesgad/mudd.one.git
cd mudd.one

# Install dependencies
chmod +x install.sh
./install.sh
```

### Manual Setup

```bash
# Backend setup
cd server
python -m venv venv
source venv/bin/activate  # or `venv\Scripts\activate` on Windows
pip install -r requirements.txt

# Frontend setup
cd ../client
npm install
```
### Configuration

- Copy the environment template:

```bash
cp .env.example .env
```

- Configure the environment variables:

```env
NODE_ENV=development
API_URL=http://localhost:8000
MODEL_PATH=/path/to/model
```
### Running the Application

```bash
# Start backend server
python run.py

# In another terminal, start frontend
cd client
npm run dev
```

Then open http://localhost:3000 in your browser.
## Usage

Typical processing workflow (a scripted equivalent follows this list):

1. Upload an ultrasound video or DICOM file
2. Review the automatic crop and adjust manually if necessary
3. Review the cropped video with the cineloop scrolling tool
4. Apply processing filters as needed
5. Create segmentation masks
6. Export the processed data
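The same workflow can be driven from a script against the local API. This is a sketch only: the request field names and response shapes below are guesses, so consult the generated OpenAPI docs for the real schemas.

```python
import requests

BASE = "http://localhost:8000"  # API_URL from .env

# Field names ("file", "video_id", "filter", "frame_index") are hypothetical.
with open("exam.dcm", "rb") as f:
    uploaded = requests.post(f"{BASE}/api/upload", files={"file": f}).json()

video_id = uploaded["video_id"]
requests.post(f"{BASE}/api/process", json={"video_id": video_id})
requests.post(f"{BASE}/api/apply-filter",
              json={"video_id": video_id, "filter": "histogram_equalization"})
requests.post(f"{BASE}/api/create-mask",
              json={"video_id": video_id, "frame_index": 0})
```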
## Project Structure

```
mudd.one/
├── client/                    # Frontend Application
│   ├── src/
│   │   ├── components/        # React Components
│   │   │   ├── ui/            # Base UI Components
│   │   │   │   ├── Button/    # Custom Button Components
│   │   │   │   ├── Input/     # Form Input Components
│   │   │   │   └── Layout/    # Layout Components
│   │   │   ├── upload/        # Upload Interface
│   │   │   ├── processing/    # Processing Tools
│   │   │   └── visualization/ # Results Display
│   │   ├── pages/             # Next.js Pages
│   │   └── styles/            # Global Styles
│   └── public/                # Static Assets
├── server/                    # Backend Application
│   ├── core/
│   │   ├── sm_integration/    # Segmentation Model Integration
│   │   ├── memory_bank/       # Memory Management
│   │   └── processors/        # Image Processing
│   ├── api/
│   │   ├── routes/            # API Endpoints
│   │   └── services/          # Business Logic
│   └── ml/
│       ├── models/            # ML Models
│       └── training/          # Training Utils
├── docs/                      # Documentation
└── installation/              # Install Scripts
```
## API Endpoints

- `POST /api/upload`: accepts video files for processing
- `POST /api/process`: initiates the video processing pipeline
- `POST /api/apply-filter`: applies the specified filter to frames
- `POST /api/create-mask`: generates a segmentation mask
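For orientation, a minimal sketch of how the upload route could be declared in FastAPI follows; the storage step and response shape are placeholders, not the project's actual implementation.

```python
from fastapi import FastAPI, UploadFile

app = FastAPI()

@app.post("/api/upload")
async def upload(file: UploadFile):
    """Accept a video or DICOM upload and stage it for processing."""
    data = await file.read()
    # ... persist `data` and enqueue the processing pipeline here ...
    return {"filename": file.filename, "size": len(data)}
```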
## Development

### Backend

```bash
cd server
pip install -r requirements.txt
uvicorn app:app --reload
```

### Frontend

```bash
cd client
npm install
npm run dev
```

### Tests

```bash
# Backend tests
pytest

# Frontend tests
npm test
```
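Backend tests run under pytest; a small example using FastAPI's TestClient is sketched below. The expected status code is a hypothetical validation check, not an existing test in the repository.

```python
from fastapi.testclient import TestClient
from app import app  # `uvicorn app:app` implies the instance lives in app.py

client = TestClient(app)

def test_upload_requires_a_file():
    # A POST without multipart file data should fail FastAPI's validation.
    response = client.post("/api/upload")
    assert response.status_code == 422
```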
## Contributing

We welcome contributions! See our Contributing Guide for details.

- Fork the repository
- Create a feature branch (`git checkout -b feature/NewFeature`)
- Commit your changes (`git commit -m 'Add NewFeature'`)
- Push to the branch (`git push origin feature/NewFeature`)
- Submit a Pull Request

Guidelines:

- Follow the Python (PEP 8) and JavaScript style guides
- Add tests for new features
- Update documentation as needed
- Maintain compatibility with existing features
## Troubleshooting

- Segmentation model loading
  - Error: `Could not load segmentation model`
  - Solution: verify the model path, confirm CUDA/MPS (GPU) compatibility, and make sure the model checkpoint exists
- DICOM processing
  - Error: `Cannot read DICOM file`
  - Solution: check that pydicom is properly installed and the file is not corrupted
- Memory issues
  - Error: `Out of memory`
  - Solution: adjust the batch size or reduce the video resolution (see the sketch after this list)
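For the out-of-memory case, streaming frames one at a time instead of loading the whole clip is usually enough. The sketch below uses OpenCV and is an illustrative helper, not part of the codebase.

```python
import cv2

def iter_frames(path: str, max_width: int = 1024):
    """Yield frames one at a time, downscaling wide frames to bound memory."""
    cap = cv2.VideoCapture(path)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            h, w = frame.shape[:2]
            if w > max_width:  # keep per-frame memory bounded
                frame = cv2.resize(frame, (max_width, int(h * max_width / w)))
            yield frame
    finally:
        cap.release()
```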
## License

This project is licensed under the MIT License; see LICENSE for details.

Developed and maintained by hiai.vision® (the AMLT.ai brand).

## Support

- GitHub Issues: Issue Tracker
- Documentation: Wiki
- Email: mudd.project@hiai.vision
- Website: hiai.vision
## Roadmap

Current release:

- Core video processing functionality
- Basic segmentation tools
- Dataset export capabilities

Planned:

- Advanced ML model integration
- Real-time processing
- Cloud deployment options
- Enhanced reporting features

## Version History

- v0.1.0 (Initial Release)
  - Basic video processing
  - Manual cropping
  - Simple filters