Multimodal-learning-performance-using-both-VR-derived-and-MRI-biomarkers

This repository contains the Python code for a multimodal learning approach that improves early Mild Cognitive Impairment (MCI) detection by combining statistically significant Virtual Reality (VR)-derived biomarkers with Magnetic Resonance Imaging (MRI) biomarkers. The classifier is a Support Vector Machine (SVM), chosen for its effectiveness in similar classification tasks.
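The sketch below illustrates the general idea of this setup: concatenating the VR-derived and MRI biomarkers into one feature matrix and fitting an SVM. File names, column names, and hyperparameters are illustrative assumptions, not the repository's exact training script.

```python
import pandas as pd
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical input tables: one row per participant, aligned by participant ID.
vr = pd.read_csv("vr_biomarkers.csv", index_col="participant_id")      # 4 VR-derived features
mri = pd.read_csv("mri_biomarkers.csv", index_col="participant_id")    # 22 MRI volume features
labels = pd.read_csv("labels.csv", index_col="participant_id")["mci"]  # 0 = control, 1 = MCI

# Early fusion: concatenate the two modalities feature-wise.
X = vr.join(mri).loc[labels.index]
y = labels.values

# Standardize features, then fit an RBF-kernel SVM; evaluate with stratified cross-validation.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(model, X, y, cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
print(f"Mean CV accuracy: {scores.mean():.3f}")
```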

VR-derived biomarkers

Figure 3. Extraction of four VR-derived biomarkers from behavioral data in the virtual kiosk test. Hand movement speed is calculated from the hand movement data collected during the virtual kiosk test. Scanpath length is derived from the eye movement data. Time to completion and the number of errors are calculated from the performance data.
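A hedged sketch of how these four biomarkers could be computed from raw virtual kiosk logs is shown below. The array layouts, units, and the `"error"` event label are illustrative assumptions rather than the repository's exact preprocessing.

```python
import numpy as np

def hand_movement_speed(hand_xyz: np.ndarray, timestamps: np.ndarray) -> float:
    """Mean hand speed: total path length of the hand divided by elapsed time."""
    step_lengths = np.linalg.norm(np.diff(hand_xyz, axis=0), axis=1)
    return step_lengths.sum() / (timestamps[-1] - timestamps[0])

def scanpath_length(gaze_xy: np.ndarray) -> float:
    """Scanpath length: summed Euclidean distance between consecutive gaze points."""
    return np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1).sum()

def time_to_completion(timestamps: np.ndarray) -> float:
    """Elapsed time from the first to the last logged event."""
    return timestamps[-1] - timestamps[0]

def number_of_errors(event_log: list[str]) -> int:
    """Count error events flagged in the performance log (event name is an assumption)."""
    return sum(1 for event in event_log if event == "error")
```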

MRI biomarkers

Figure 4. Extraction of 22 MRI biomarkers from both hemispheres using the Split-attention U-Net architecture. Following multi-label segmentation of the brain regions of interest, each region's volume is quantified as an MRI biomarker. Each hemisphere contributes eleven biomarkers: cerebral white matter, cerebral gray matter, ventricles, amygdala, hippocampus, entorhinal cortex, parahippocampal gyrus, fusiform gyrus, and the superior, middle, and inferior temporal gyri.
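A minimal sketch of the volume quantification step follows: count the voxels assigned to each segmentation label and multiply by the voxel volume. The label-to-region mapping and file format are assumptions, not the repository's exact pipeline.

```python
import numpy as np
import nibabel as nib

# Hypothetical label map: left/right pairs for the eleven regions per hemisphere.
REGION_LABELS = {
    1: "left_cerebral_white_matter", 2: "right_cerebral_white_matter",
    3: "left_hippocampus",           4: "right_hippocampus",
    # ... remaining labels for the other regions would follow the same pattern
}

def regional_volumes(segmentation_path: str) -> dict[str, float]:
    """Return region volumes in mm^3 from a NIfTI multi-label segmentation."""
    img = nib.load(segmentation_path)
    labels = np.asarray(img.dataobj)
    voxel_volume = float(np.prod(img.header.get_zooms()[:3]))  # mm^3 per voxel
    return {
        name: float(np.count_nonzero(labels == label)) * voxel_volume
        for label, name in REGION_LABELS.items()
    }
```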
