
Brain Computer Interface with Virtual Reality Neurofeedback


This project was developed as part of a Project Management in Software Development class at the Technical University of Munich in collaboration with Mentalab. The goal was to develop a virtual reality (VR) neurofeedback rehabilitation system for Parkinson's patients in the form of a game. Neurofeedback is a technique used to reinforce healthy brain conditioning in a way similar to reinforcement learning, where the feedback is based on a reward and punishment system.

The components of the system are represented in the figure below.

The project was divided into 4 main tasks:

  1. EEG device to data processing module connection pipeline
  2. Data processing module to Unity connection pipeline
  3. Classification of the EEG data into 3 states: left, right, and rest
  4. Development of the VR gameplay

Hardware Components

The hardware for the project was provided by Mentalab. Mentalab's EEG device was used for brain signal acquisition. Unity was used as the game engine and the Oculus Quest 2 as the VR headset.

EEG - Data Processing Unit (Python) Connection

The solution to the first task had already been implemented by Mentalab in the form of a Python library called explorepy, which was used for this step of the project.
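A minimal sketch of this acquisition step, following explorepy's documented Explore API; the device name, recording duration, and file format below are placeholders, not values from the project:

```python
import explorepy

# Connect to the Mentalab Explore device over Bluetooth
explorer = explorepy.Explore()
explorer.connect(device_name="Explore_XXXX")  # placeholder device name

# Record raw EEG to disk for a fixed duration (parameters are assumptions)
explorer.record_data(file_name="session_01", duration=120, file_type="csv")
```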

Data Processing Unit (Python) - Game Engine (Unity) Connection

The connection pipeline implemented by Mentalab only works between their EEG device and Python, so it had to be extended to connect to Unity. The main challenge of this task was the difference between the programming languages the data had to be sent between. The data from the EEG device is read in Python as a NumPy array, but Unity uses C#, which does not support NumPy. To overcome this, the acquired EEG data was converted into JSON, which can be exchanged between Python and C#. Since the acquired data is multidimensional, with shape (number of channels) x (number of samples, which depends on how long the data has been read), it was transformed into a list of lists on the C# side, as sketched below.
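A minimal sketch of the conversion, with the channel count and chunk length chosen arbitrarily for illustration:

```python
import json
import numpy as np

# Hypothetical EEG chunk: 8 channels x 250 samples
chunk = np.random.randn(8, 250)

# tolist() turns the 2-D NumPy array into a plain list of lists,
# which JSON (and therefore C#) can represent directly
payload = json.dumps({"eeg": chunk.tolist()})
```

On the Unity side, the JSON string can then be deserialized into a list of lists of floats with any C# JSON library.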

ZeroMQ, an open-source universal messaging library, was used to implement the connection between Python (pyzmq) and C#. The data read by the Python script can be sent to Unity in its original or processed form.
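A sketch of the Python sender using pyzmq; the PUB/SUB socket pattern and the port number are assumptions, as the project may have used a different configuration:

```python
import json
import numpy as np
import zmq

context = zmq.Context()
socket = context.socket(zmq.PUB)   # publisher: Unity subscribes on the other end
socket.bind("tcp://*:5556")        # assumed port

chunk = np.random.randn(8, 250)    # placeholder for a real EEG chunk
socket.send_string(json.dumps(chunk.tolist()))
```

On the Unity side, a C# ZeroMQ binding such as NetMQ can open a matching SUB socket and parse the incoming JSON.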

Classification

Data Acquisition Protocol

  • 6 seconds of imagined left-hand grasping
  • 6 seconds of imagined right-hand grasping
  • 5 seconds of resting state

The data is acquired with eyes open, under conditions as similar to actually playing the game as possible.
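A sketch of one acquisition run built from these timings; the console prompts and the send_marker helper are hypothetical stand-ins for the actual cue presentation and EEG synchronization code:

```python
import time

# Cue order and durations taken from the protocol above
TRIAL = [("left", 6), ("right", 6), ("rest", 5)]

def run_session(n_trials=10):
    for _ in range(n_trials):
        for cue, seconds in TRIAL:
            print(f"Cue: {cue}")   # in practice, shown on screen or in VR
            # send_marker(cue)     # hypothetical: tag the EEG stream with the cue
            time.sleep(seconds)

run_session()
```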

Classification Algorithm

To present the neurofeedback to the patient, it is necessary to classify the acquired EEG data. For this project, the neurofeedback is displayed in 3 states:

  • left
  • right
  • rest

The left state corresponds to the user imagining a grasping motion with the left hand, and the right state with the right hand. The resting state is when no motion is being imagined.

We follow the classification algorithm presented in [1]. The acquired data is preprocessed and then fed into the Common Spatial Patterns (CSP) algorithm for dimensionality reduction. The output of CSP is fed into Linear Discriminant Analysis (LDA) for the final prediction of the state. Due to the time limitations of the project, we were not able to determine whether transforming the data from the amplitude (time) domain to the frequency domain could improve the results. It is also unclear which filters could improve the accuracy of the classifier in our particular case.
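A minimal sketch of this CSP + LDA pipeline using MNE and scikit-learn; the band-pass range, sampling rate, channel count, and epoch shapes are illustrative assumptions, not values from the project:

```python
import numpy as np
from mne.decoding import CSP
from mne.filter import filter_data
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline

sfreq = 250.0  # assumed sampling rate (Hz)

# X: epochs of shape (n_epochs, n_channels, n_samples); y: 0=left, 1=right, 2=rest
X = np.random.randn(60, 8, 1500)  # placeholder data
y = np.repeat([0, 1, 2], 20)

# Band-pass to the mu/beta band commonly used for motor imagery (assumption)
X = filter_data(X, sfreq, l_freq=8.0, h_freq=30.0, verbose=False)

clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),  # spatial filtering / dimensionality reduction
    ("lda", LinearDiscriminantAnalysis()),   # final prediction of the state
])
clf.fit(X, y)
print(clf.predict(X[:5]))
```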

Virtual Reality Gameplay

The goal of this project was to present the neurofeedback in the form of virtual reality (VR) gameplay. Neurofeedback rehabilitation requires multiple repeated sessions for the patient to develop mental strategies to cope with the disease symptoms. The rehabilitation system should therefore keep the patient entertained over multiple sessions to maximize the rehabilitation outcome.

Gameplay Design

The gameplay had to be designed to maximize the user's immersion, so that the motor imagery is performed properly, and to relate the gameplay to the states predicted by the classification algorithm. For this system, the imagined motion is left- and right-hand grasping, so the gameplay was separated into two sections: left and right. The user interacts with objects in their left and right hands, and each hand has an opponent assigned to it, so that scoring points acts as a reinforcement reaction.

Final Gameplay

To maximize immersion in the grasping motion, two Pepsi guns were placed into the gameplay, and monsters were added as opponents. The goal of the game is to remove the monsters from the game area through the imagined motion.

The video below shows a demo of the game. Due to issues with data acquisition and processing, the bullets are triggered by random left or right signals generated by a Python script. Ideally, the gameplay would be fully controlled by the brain signals.
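A sketch of what such a random-command script could look like, reusing the ZeroMQ setup from above (the port and message format are again assumptions):

```python
import json
import random
import time
import zmq

context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5556")  # assumed port, matching the sender sketch above

while True:
    # Random stand-in for the classifier's left/right prediction
    state = random.choice(["left", "right"])
    socket.send_string(json.dumps({"state": state}))
    time.sleep(1.0)
```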


References

[1] J. W. Choi, B. H. Kim, S. Huh, and S. Jo, “Observing actions through immersive virtual reality enhances motor imagery training,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 28, no. 7, pp. 1614–1622, 2020.
