From 0ffb89ed97be5605b1bbd5fe79712057bc839bf6 Mon Sep 17 00:00:00 2001
From: laurabpaulsen <202005791@post.au.dk>
Date: Fri, 13 Dec 2024 11:44:46 +0100
Subject: [PATCH] add more information

---
 docs/co-registration/co-reg_alpha1_helmet.md | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/docs/co-registration/co-reg_alpha1_helmet.md b/docs/co-registration/co-reg_alpha1_helmet.md
index 0d1dc2d..087b477 100644
--- a/docs/co-registration/co-reg_alpha1_helmet.md
+++ b/docs/co-registration/co-reg_alpha1_helmet.md
@@ -47,13 +47,20 @@ mne.rename_channels(
 )
 ```
 
+After loading the data, the digitised head points (and EEG electrode positions, if any) are added to the Raw data object.
+```python
+add_dig_montage(raw, points)
+```
+
 Define a variable with the depth measurements. It is important that the depth measurements are in the same order as the label input in the next code chunk!!
 ```
 depth_meas = [40/1000, 47/1000, 44/1000, 40/1000] # mm converted to meter (order = 3, 10, 16, 62)
 ```
 
 
-If you are using another type of sensor (e.g. not the default "FieldLine OPM sensor Gen1 size = 2.00 mm") remember to specify it here using the `coil_type` flag.
+The next step is to determine the positions and orientations of the OPM sensors relative to each other. This is done by initialising the `OPMSensorLayout` class, which takes a helmet template (in this case the FL_alpha1_helmet), the labels of the sensors used, and the corresponding depth measurements.
+
+*Important note:* If you are using another type of sensor (i.e. not the default "FieldLine OPM sensor Gen1 size = 2.00 mm"), remember to specify it here using the `coil_type` flag.
 ```python
 sensor_layout = OPMSensorLayout(
 	label=["FL3", "FL10", "FL16", "FL62"],
@@ -64,17 +71,20 @@ sensor_layout = OPMSensorLayout(
 )
 ```
 
-```python
-add_dig_montage(raw, points)
-```
+The information about this subject-specific sensor layout, which takes the depth measurements into account, is added to the data object using the code below. Here, an example is shown using a Raw object, but it works with other MNE classes such as Epochs and Evoked as well.
 
 ```python
 add_sensor_layout(raw, sensor_layout)
 ```
 
+To move the sensor array to head space, a rigid 3D transform algorithm (Arun et al., 1987) is used to determine the rotation matrix R and translation vector T that align the OPM positions in the sensor array with the digitised OPM marks in head space. This transformation is known as the device-to-head transformation in MNE, and it can be determined and added to the data object using the following code:
 ```python
 add_device_to_head(raw, points)
+```
+As this function relies on the sensor positions found in the data object, the order in which the functions are called is important.
+
+To verify that the MR, head, and sensor array coordinate systems are aligned correctly, we plot the alignment.
 ```python
 fig = plot_alignment(raw.info, meg=("sensors"), dig = True, coord_frame="head", verbose = True)
 Plotter().show()
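
For readers who want to see what the rigid 3D transform step described in the patch above actually computes, the following is a minimal NumPy sketch of the Arun et al. (1987) least-squares alignment of two matched point sets. It is purely illustrative and is not the implementation behind `add_device_to_head`; the arrays `device_pts` and `head_pts` are hypothetical stand-ins for the sensor positions and the digitised OPM marks.

```python
import numpy as np


def fit_rigid_transform(device_pts, head_pts):
    """Least-squares rigid transform between matched 3D point sets
    (Arun et al., 1987): find R and t so that R @ p + t maps each
    device-space point onto its head-space counterpart.

    Both inputs are arrays of shape (n_points, 3) with matching order.
    """
    # Centre both point clouds on their centroids
    centroid_dev = device_pts.mean(axis=0)
    centroid_head = head_pts.mean(axis=0)
    dev_c = device_pts - centroid_dev
    head_c = head_pts - centroid_head

    # Cross-covariance matrix and its SVD
    H = dev_c.T @ head_c
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T

    # If the fit returned a reflection (det = -1), flip the last singular vector
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T

    t = centroid_head - R @ centroid_dev
    return R, t


# Hypothetical usage: device_pts holds the OPM positions from the sensor
# layout, head_pts the digitised OPM marks in head space (both (n, 3)).
# R, t = fit_rigid_transform(device_pts, head_pts)
# aligned = device_pts @ R.T + t
```

The SVD-based solution minimises the sum of squared distances between the transformed sensor positions and the digitised marks, and the determinant check prevents the fit from returning a reflection instead of a proper rotation.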