This repository contains the Python code for multimodal learning that aims to improve early Mild Cognitive Impairment (MCI) detection by combining statistically significant Virtual Reality (VR)-derived and Magnetic Resonance Imaging (MRI) biomarkers. Classification is performed with a Support Vector Machine (SVM), chosen for its effectiveness on comparable tasks.
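As a rough illustration of the approach (not the repository's exact training script), the sketch below concatenates the VR-derived and MRI-derived biomarkers into one feature vector per subject and evaluates an SVM with cross-validation; the feature shapes, placeholder data, and RBF kernel are assumptions.

```python
# Minimal sketch, assuming 4 VR biomarkers and 22 MRI biomarkers per subject.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder data: X_vr holds the 4 VR biomarkers, X_mri the 22 MRI biomarkers.
X_vr = np.random.rand(60, 4)
X_mri = np.random.rand(60, 22)
y = np.random.randint(0, 2, 60)  # 0 = healthy control, 1 = early MCI (placeholder labels)

# Concatenate modalities into one multimodal feature vector per subject.
X = np.hstack([X_vr, X_mri])

# Standardize features before the SVM, then estimate accuracy with 5-fold CV.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"Cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```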
Four VR-derived biomarkers are extracted from the behavioral data collected during the virtual kiosk test: hand movement speed is computed from the hand movement data, scanpath length is derived from the eye movement data, and the time to completion and the number of errors are calculated from the performance data.
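The following sketch shows one way the movement-based biomarkers could be computed from logged trajectories; the array layouts, sampling, and units are illustrative assumptions, not the repository's exact preprocessing.

```python
# Minimal sketch: mean hand movement speed and total scanpath length
# from hypothetical position logs of the virtual kiosk test.
import numpy as np

def mean_speed(positions: np.ndarray, timestamps: np.ndarray) -> float:
    """Mean hand movement speed from (n, 3) positions and (n,) timestamps."""
    displacements = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    durations = np.diff(timestamps)
    return float(np.sum(displacements) / np.sum(durations))

def scanpath_length(gaze_points: np.ndarray) -> float:
    """Total scanpath length: sum of distances between successive gaze points."""
    return float(np.sum(np.linalg.norm(np.diff(gaze_points, axis=0), axis=1)))

# Placeholder trajectories (e.g., meters and seconds).
hand_xyz = np.cumsum(np.random.randn(500, 3) * 0.01, axis=0)
t = np.linspace(0.0, 50.0, 500)
gaze_xy = np.cumsum(np.random.randn(1000, 2) * 0.5, axis=0)

print("Hand movement speed:", mean_speed(hand_xyz, t))
print("Scanpath length:", scanpath_length(gaze_xy))
```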
Twenty-two MRI biomarkers are extracted from both hemispheres using the Split-attention U-Net architecture. Following multi-label segmentation of the brain's regions of interest, the volume of each segmented region is quantified as an MRI biomarker. Each hemisphere contributes eleven biomarkers: the cerebral white matter, cerebral gray matter, ventricles, amygdala, hippocampus, entorhinal cortex, parahippocampal gyrus, fusiform gyrus, and the superior, middle, and inferior temporal gyri.
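As a sketch of the volumetry step, regional volumes can be obtained by counting voxels per label in the segmentation output and scaling by the voxel size; the label-to-region mapping, file names, and use of nibabel below are assumptions for illustration.

```python
# Minimal sketch: quantify regional volumes (mm^3) from a multi-label
# segmentation saved as NIfTI, assuming a hypothetical label map.
import numpy as np
import nibabel as nib

# Hypothetical mapping from segmentation label to region name (abbreviated).
LABELS = {1: "left_hippocampus", 2: "right_hippocampus", 3: "left_amygdala"}

def region_volumes(seg_path: str) -> dict:
    """Return the volume in mm^3 of each labeled region in a segmentation NIfTI."""
    img = nib.load(seg_path)
    seg = np.asarray(img.dataobj)
    voxel_volume = float(np.prod(img.header.get_zooms()[:3]))  # mm^3 per voxel
    return {name: int(np.sum(seg == label)) * voxel_volume
            for label, name in LABELS.items()}

# Example usage (path is a placeholder):
# volumes = region_volumes("subject01_seg.nii.gz")
# print(volumes["left_hippocampus"])
```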