MRI on BEAR is a collection of educational resources created by members of the Centre for Human Brain Health (CHBH), University of Birmingham, to provide a basic introduction to fundamentals in magnetic resonance imaging (MRI) data analysis, using the computational resources available to the University of Birmingham research community.
"},{"location":"#about-this-website","title":"About this website","text":"This website contains workshop materials created for the MSc module 'Magnetic Resonance Imaging in Cognitive Neuroscience' (MRICN) and its earlier version - Fundamentals in Brain Imaging taught by Dr Peter C. Hansen - at the School of Psychology, University of Birmingham. It is a ten-week course consisting of lectures and workshops introducing the main techniques of functional and structural brain mapping using MRI with a strong emphasis on - but not limited to - functional MRI (fMRI). Topics include the physics of MRI, experimental design for neuroimaging experiments and the analysis of fMRI, and other types of MRI data. This website includes only the workshop materials, which provide a basic training in analysis of brain imaging data and data visualization.
Learning objectives
At the end of the course you will be able to:
For externals not on the course
Whilst we have made these resources publicly available for anyone to use, please BEAR in mind that the course has been specifically designed to run on the computing resources at the University of Birmingham.
"},{"location":"#teaching-staff","title":"Teaching Staff","text":"Dr Magdalena ChechlaczRole: Course Lead
Magdalena Chechlacz is an Assistant Professor in Cognition and Ageing at the School of Psychology, University of Birmingham. She initially trained and carried out a doctorate in cellular and molecular biology (2002). After working as a biologist (Larry L. Hillblom Foundation Fellowship at the University of California, San Diego) she decided on a career change to a more human-oriented science and neuroimaging. In order to gain formal training in cognitive neuroscience and neuroimaging, she completed a second doctorate in psychology at the University of Birmingham under the supervision of Glyn Humphreys (2012). From 2013 to 2016, she held a British Academy Postdoctoral Fellowship and EPA Cephalosporin Junior Research Fellowship, Linacre College at the University of Oxford. In 2016, Dr Chechlacz returned to the School of Psychology, University of Birmingham as a Bridge Fellow.
m.chechlacz@bham.ac.uk | ORCID: 0000-0003-1811-3946

Aamir Sohail
Role: Teaching Assistant
Aamir Sohail is an MRC Advanced Interdisciplinary Methods (AIM) DTP PhD student based at the Centre for Human Brain Health (CHBH), University of Birmingham, where he is supervised by Lei Zhang and Patricia Lockwood. He completed a BSc in Biomedical Science at Imperial College London, followed by an MSc in Brain Imaging at the University of Nottingham. He then worked as a Junior Research Fellow at the Centre for Integrative Neuroscience and Neurodynamics (CINN), University of Reading. Outside of research, he is also passionate about facilitating inclusivity and diversity in academia, as well as promoting open and reproducible science.
axs2210@bham.ac.uk | sohaamir | AamirNSohail | ORCID: 0009-0000-6584-4579 | sohaamir.github.io

Accessing additional course materials
If you are a CHBH member and would like access to additional course materials (lecture recordings etc.), please contact one of the teaching staff members listed above.
"},{"location":"contributors/","title":"Contributors","text":"Many thanks to our contributors for creating and maintaining these resources!
Andrew Quinn, Aamir Sohail, James Carpenter, Magda Chechlacz

Acknowledgements
Thank you to Charnjit Sidhu for their assistance with running the course!
License

MRI on BEAR is hosted on GitHub. All content in this book (i.e., any files and content in the docs/ folder) is licensed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. Please see the LICENSE file in the GitHub repository for more details.
For those wanting to develop their learning beyond the scope of the module, here is a (non-exhaustive) list of links and pages for neuroscientists covering skills related to working with neuroimaging data, from underlying concepts to practical application.
Contributing to the list
Feel free to suggest additional resources to the list by opening a thread on the GitHub page!
FSL Wiki
Most relevant to the course is the FSL Wiki, the comprehensive guide for FSL by the Wellcome Centre for Integrative Neuroimaging at the University of Oxford.
"},{"location":"resources/#existing-lists-of-resources","title":"Existing lists of resources","text":"Here are some current 'meta-lists' which already cover a lot of resources themselves:
Struggling to grasp the fundamentals of MRI/fMRI? Want to quickly refresh your mind on the physiological basis of the BOLD signal? Well, these resources are for you!
An Introduction to Resting State fMRI Functional Connectivity (2017, Oxford University Press) by Janine Bijsterbosch, Stephen M. Smith, and Christian F. Beckmann
Handbook of Functional MRI Data Analysis (2011, Cambridge University Press) by Russell A. Poldrack, Jeanette A. Mumford, and Thomas E. Nichols
Introduction to Functional Magnetic Resonance Imaging (1998, Cambridge University Press) by Richard B. Buxton
Introduction to Neuroimaging Analysis (2018, Oxford University Press) by Mark Jenkinson and Michael Chappell
Short Introduction to Brain Anatomy for Neuroimaging (2018, Oxford University Press) by Mark Jenkinson and Michael Chappell
Short Introduction to the General Linear Model (2018, Oxford University Press) by Mark Jenkinson and Michael Chappell
Short Introduction to MRI Physics (2018, Oxford University Press) by Mark Jenkinson and Michael Chappell
Before you start with any workshop materials, you will need to familiarise yourself with the CHBH\u2019s primary computational resource, BlueBEAR. The following pages are aimed at helping you get started.
To put these workshop materials to practical use, you will be expected to understand what BlueBEAR is and what it is used for, and to make sure you have access.
Student Responsibility
If you are an MSc student taking the MRICN module, please note that while help will be available during all in-person workshops should you have any problems with using the BEAR Portal, it is your responsibility to make sure that you have access and that you are familiar with the information provided in the pre-workshop materials. Failing to gain an understanding of BlueBEAR and the BEAR Portal will prevent you from participating in the practical sessions and completing the module's main assessment (data analysis).
"},{"location":"setting-up/#what-are-bear-and-bluebear","title":"What are BEAR and BlueBEAR?Signing in to the BEAR Portal","text":"BEAR stands for Birmingham Environment for Academic Research and is a collection of services provided specifically for researchers at the University of Birmingham. BEAR services are used by researchers at the Centre for Human Brain Health (CHBH) for various types of neuroimaging data analysis.
BEAR services and basic resources, such as the ones we will be using for the purpose of the MRICN module, are freely available to the University of Birmingham research community. Extra resources which may be needed for some research projects, e.g. access to dedicated nodes and extra storage, can be purchased. This is something your PI or MSc/PhD project supervisor might be using and will give you access to.
BlueBEAR refers to the Linux High Performance Computing (HPC) environment which:
As computing resources on BlueBEAR rely on Linux, in Workshop 1 you will learn some basic commands, which you will need to be familiar with to participate in subsequent practical sessions and to complete the module's main assessment (data analysis assessment). More Linux commands and the basic principles of scripting will be introduced in subsequent workshops.
There are two steps to gaining access to BlueBEAR:
Gaining access to BEAR Projects
Only a member of academic staff, e.g. your project supervisor or module lead, can apply for a BEAR project; as a student you cannot apply for one. If you are registered as a student on the MRICN module, you should have already been added as a member of the project chechlmy-chbh-mricn. If not, please contact one of the teaching staff.
Even if you are already a member of a BEAR project giving you BlueBEAR access, you will still need to activate your BEAR Linux account via the self-service route or the service desk form. Step-by-step instructions on how to do this are available on the BEAR website; see the following link.
Please follow the steps above to make sure you have a BEAR Linux account before starting the Workshop 1 materials. To do this you will need to be on campus or using the University Remote Access Service (VPN).
Signing in to the BEAR Portal

After you have activated your BEAR Linux account, you can sign in to the BEAR Portal.
BEAR Portal access requirements
Remember that the BEAR Portal is only available on campus or using the VPN!
If your login is successful, you will be directed to the main BEAR Portal page as below. This means that you have successfully launched the BEAR Portal.
If you get to this page, you are ready for Workshop 1. For now, you can log out. If you have any problems logging on to BEAR Portal, please email chbh-help@contacts.bham.ac.uk for help and advice.
"},{"location":"setting-up/#bear-storage","title":"BEAR Storage","text":"The storage associated with each BEAR project is called the BEAR Research Data Store (RDS). Each BEAR project gets 3TB of storage space for free, but researchers (e.g., your MSc project supervisor) can pay for additional storage if needed. The RDS should be used for all data, job scripts and output on BlueBEAR.
If you are registered as a student on the MRICN module, all the data and resources you will need to participate in the MRICN workshops and to complete the module's main assessment have been added to the MRICN module RDS, and you have been given access to the folder /rds/projects/c/chechlmy-chbh-mricn. When working on your MSc project using BEAR services, your supervisor will direct you to the relevant RDS project.
External access to data
If you are not registered on the module and would like access to the data, please contact one of the teaching staff members.
"},{"location":"setting-up/#finding-additional-information","title":"Finding additional information","text":"There is extensive BEAR technical documentation provided by the University of Birmingham BEAR team (see links below). While for the purpose of this module, you are not expected to be familiar with all the provided there information, you might find it useful if you want to know more about computing resources available to researchers at CHBH via BEAR services, especially if you will be using BlueBEAR for other purposes (e.g., for your MSc project).
You can find out more about BEAR, BlueBEAR and RDS on the dedicated BEAR webpages:
University of Birmingham BEAR Homepage
More information on BlueBEAR
More information on Research Data Storage
At this point you should know how to log in and access the main BEAR Portal page.
Please navigate to https://portal.bear.bham.ac.uk, log in and launch the BEAR Portal; you should get to the page as below.
BlueBEAR Portal is a web-based interface enabling access to various BEAR services and BEAR apps including:
The BlueBEAR Portal is essentially a user-friendly alternative to using the command-line interface, i.e. your computer's terminal.
To view all files and data you have access to on BlueBEAR, click on 'Files' as illustrated above. You will see your home directory (your BEAR Linux home directory), and all RDS projects you are a member of.
You should be able to see /rds/projects/c/chechlmy-chbh-mricn (the MRICN module's RDS project). Selecting the 'Home Directory' or any 'RDS project' will open a second browser tab displaying its contents. In the example below, you see the contents of one of Magda's projects.
Inside the module's RDS project, you will find that you have a folder labelled xxx, where xxx is your University of Birmingham ADF username. If you navigate to that folder, /rds/projects/c/chechlmy-chbh-mricn/xxx, you will be able to perform various file operations from there. However, for now, please do not move, download, or delete any files.
Data confidentiality
Please also note that the MRI data you will be given to work with should be used on BlueBEAR only and not downloaded to your personal desktop or laptop!
"},{"location":"workshop1/intro-to-bluebear/#launching-the-bluebear-gui","title":"Launching the BlueBEAR GUI","text":"The BlueBEAR Portal options in the menu bar, 'Jobs', 'Clusters' and 'My Interactive Sessions' can be used to submit and edit jobs to run on the BlueBEAR cluster and to get information about your currently running jobs and interactive sessions. Some of these processes can be also executed using Code Server Editor (VS Code) accessible via Interactive Apps. We won\u2019t explore these options in detail now but some of these will be introduced later when needed.
For example, from the 'Clusters' option you can jump directly into a BlueBEAR terminal and, using this built-in terminal, submit data analysis jobs and/or use your own containerised version of neuroimaging software rather than the software already available on BlueBEAR. We will cover containers, scripting and submitting jobs in later workshops. For now, just click on this option and see what happens; you can subsequently exit/close the terminal page.
Finally, from the BlueBEAR Portal menu bar you can select 'Interactive Apps' and from there access various GUI applications you wish to use, including JupyterLab, RStudio, MATLAB and most importantly the BlueBEAR GUI, which we will be using to analyse MRI data in the subsequent workshops.
Please select 'BlueBEAR GUI'. This will bring up a page for you to specify options for the job that starts the BlueBEAR GUI. You can leave most of these options at their defaults, but please change 'Number of Hours' to 2 (our workshops will last 2 hours; for some other analysis tasks you might need more time) and make sure that the selected 'BEAR Project' is chechlmy-chbh-mricn. Next, click on Launch.
It will take a few minutes for the job to start. Once it's ready, you'll see an option to connect to the BlueBEAR GUI. Click on 'Launch BlueBEAR GUI'.
Once you have launched the BlueBEAR GUI, you are now in a Linux environment, on a Linux Desktop. The following section will guide you on navigating and using this environment effectively.
Re-launching the BlueBEAR GUI
In the main window of the BlueBEAR Portal you will be able to see that you have an interactive session running (the information above will remain there). This is important: if you close the Linux Desktop by mistake, you can click on 'Launch BlueBEAR GUI' again to reopen it.
"},{"location":"workshop1/intro-to-linux/","title":"Introduction to Linux","text":"Linux is a computer Operating System (OS) similar to Microsoft Windows or Mac OS. Linux is very widely used in the academic world especially in the sciences. It is derived from one of the oldest and most stable and used OS platforms around, Unix. We use Linux on BlueBEAR. Many versions of Linux are freely available to download and install, including CentOS (Community ENTerprise Operating System) and Debian, which you might be familiar with. You can also use these operating systems with Microsoft Windows in Dual Boot Environment on your laptop or desktop computer.
Linux and neuroimaging
Linux is particularly suited for clustering computers together and for efficient batch processing of data. All major neuroimaging software runs on Linux. This includes FSL, SPM, AFNI, and many others. Linux, or some version of Unix, is used in almost all leading neuroimaging centres. Both MATLAB and Python also run well in a Linux environment.
If you work in neuroimaging, it is to your advantage to become familiar with Linux. The more familiar you are, the more productive you will become. For some of you, this might be a challenge. The environment will present a new learning experience, one that will take time and effort to learn. But in the end, you should hopefully realize that the benefits of learning to work in this new computer environment are indeed worth the effort.
Linux is not like the Windows or macOS environments. It is best used by typing commands into a Terminal client and by writing small batch command programs. Frequently you may not even need to use the mouse. The Linux environment may take some getting used to, but it will become more familiar throughout the course as we use it to navigate our file system and to script our analyses. For now, we will simply explore the Linux terminal and some simple commands.
"},{"location":"workshop1/intro-to-linux/#using-the-linux-terminal","title":"Using the Linux Terminal","text":"BlueBEAR GUI enables to load various apps and applications by using the Linux environment and a built-in Terminal client. Once you have launched the BlueBEAR GUI, you will see a new window and from there you can open the Terminal client. There are different ways to open Terminal in BlueBEAR GUI window as illustrated below.
Either by selecting from the drop-down menu:
Or by selecting the folder at the bottom of the screen:
In either case you will load the terminal:
Once you have started the terminal, you will be able to load the required applications (e.g., to start the FSL GUI). FSL (FMRIB Software Library) is a neuroimaging software package we will be using in our workshops for MRI data analysis.
When using the BlueBEAR GUI Linux desktop, you can work simultaneously in four separate spaces/windows. For example, if you are planning on using multiple apps, rather than opening multiple terminals and apps in the same space, you can move to another space. You can do that by clicking on the 'workspace manager' in the Linux desktop window.
Linux is fundamentally a command line-based operating system, so although you can use the GUI interface with many applications, it is essential you get used to issuing commands through the Terminal interface to improve your work efficiency.
Make sure you have an open Terminal as per the instructions above. Note that a Terminal is a text-based interface, so generally the mouse is not much use. You need to get used to taking your hand off the mouse and letting go of it. Move it away, out of reach. You can then get used to using both hands to type into a Terminal client.
The line

[chechlmy@bear-pg0210u07a ~]$

shown above in the Terminal client is known as the system prompt. The prompt usually identifies the user and the system hostname. You can type commands at the system prompt (press the Enter key after each command to make it run). The system then returns output based on your command to the same Terminal.
Try typing ls in the Terminal. This command tells Linux to print a list of the current directory's contents. We will return to basic Linux commands later; you will need to learn them to use BlueBEAR for neuroimaging data analysis.
Why bother with Linux?
You may wonder why you should invest the time to learn the names of the various commands needed to copy files, change directories and to do general things such as run image analysis programs via the command line. This may seem rather clunky. However, the commands you learn to run on the command line in a terminal can alternatively be written in a text file. This text file can then be converted to a batch script that can be run on data sets using the BlueBEAR cluster, potentially looping over hundreds or thousands of different analyses, taking many days to run. This is vastly more efficient and far less error prone than using equivalent graphical tools to do the same thing, one at a time.
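As a concrete illustration, such a batch script might look like the sketch below. The subject names, folder layout and the "analysis" step are all placeholders for this example, not part of the module's actual pipeline:

```shell
#!/bin/bash
# Hypothetical batch loop: repeat the same step for many subjects.
# In a real pipeline, the echo line would be replaced by a call to an
# analysis tool (e.g. an FSL command).
mkdir -p results
for subj in sub-01 sub-02 sub-03; do
    mkdir -p "results/${subj}"
    echo "processed ${subj}" > "results/${subj}/log.txt"
done
ls results
```

Adding more subjects is then a one-line change to the list, whereas a GUI-based analysis would have to be repeated by hand for every subject.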
When you open a new terminal window it opens in a particular directory. By default, this will be your home directory, /rds/homes/x/xxx, or the Desktop folder in your home directory, /rds/homes/x/xxx/Desktop (where x is the first letter of your last name and xxx is your University of Birmingham ADF username).
On BlueBEAR files are stored in directories (folders) and arranged into a tree hierarchy.
Examples of directories on BlueBEAR include:

/rds/homes/x/xxx (your home directory)
/rds/projects/c/chechlmy-chbh-mricn (our module's RDS project directory)

Directory separators on Linux and Windows

/ (forward slash) is the Linux directory separator. Note that this is different from Windows (where the backward slash \ is the directory separator).
The current directory is always called . (i.e. a single dot). The directory above the current directory is always called .. (i.e. dot dot). Your home directory can always be accessed using the shortcut ~ (the tilde symbol); note that this is the same as /rds/homes/x/xxx. You need to remember this to use and understand basic Linux commands.
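A quick way to convince yourself of these shortcuts is to run them against generic system directories (any Linux machine will do; no BlueBEAR-specific paths are needed here):

```shell
cd /usr/bin   # go somewhere known
pwd           # prints /usr/bin
cd ..         # .. moves up one level
pwd           # prints /usr
cd ~          # ~ jumps back to your home directory
pwd           # prints the same path as echo $HOME
```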
"},{"location":"workshop1/intro-to-linux/#basic-linux-commands","title":"Basic Linux Commands","text":"pwd (Print Working Directory)
In a Terminal, type pwd followed by the return (enter) key to find out the name of the directory where you are. You are always in a directory and can (usually) move to directories above you or below to subdirectories. For example, if you type pwd in your terminal you will see /rds/homes/x/xxx (e.g., /rds/homes/c/chechlmy).
cd (Change Directory)
In a Terminal window, type cd followed by the name of a directory to gain access to it. Keep in mind that you are always in a directory and normally are allowed access to any directories hierarchically above or below.
Type the examples below in your terminal:

cd /rds/projects
cd /rds/homes/
cd .. (to change to the directory above, using the .. shortcut)

To find out where you are now, type pwd (answer: /rds).
If the directory is not located above or below the current directory, then it is often less confusing to write out the complete path instead. Try this in your terminal:

cd /rds/homes/x/xxx/Desktop

(where x is the first letter of your last name and xxx is your ADF username)
Changing directories with full paths
Note that it does not matter which directory you are in when you execute this command; the directory will always be changed based on the full path you specified.
Remember that the tilde symbol ~ is a shortcut for your home directory. Try this:

cd /rds/projects
cd ~
pwd

You should now be back in your home directory.
ls (List Files)
The ls command (lowercase L, S) allows you to see a summary list of the files and directories located in the current directory. Try this:

cd /rds/projects/c
ls

(you should now see a long list of various BEAR RDS projects)
Before moving to the next section, please close your terminal by clicking on \u201cx\u201d in the top right of the Terminal.
cp (Copy files/directories)
The cp command will copy files and/or directories FROM a source TO a destination. This command will create the destination file if it doesn't exist. In some cases, you might need to specify a complete path to the file's location.
Here are some examples (please do not type them, they are only examples):
Command – Function
cp myfile yourfile – Basic file copy (in current directory)
cp data data_copy – Copy data to data_copy (note: copying a directory requires -r)
cp -r ~fred/data . – Recursively copy fred's data dir to the current dir
cp ~fred/fredsfile myfile – Copy a file from fred's dir and rename it
cp ~fred/* . – Copy all files from fred's dir to the current dir
cp ~fred/test* . – Copy all files that begin with test, e.g. test, test1.txt

In the subsequent workshops we will practice using the cp command. For now, look at the examples above to understand its usage. There are also some exercises below to check your understanding.
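If you would like to try cp safely before then, the following self-contained practice run uses a throwaway directory and made-up file names, rather than any course data:

```shell
mkdir -p /tmp/cp_practice && cd /tmp/cp_practice
echo "hello" > myfile
cp myfile yourfile        # basic copy under a new name
mkdir data && echo "42" > data/test1.txt
cp -r data data_copy      # -r is needed to copy a directory's contents
ls data_copy              # prints test1.txt
```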
mv, rmdir and mkdir (Moving, removing and making files/directories)

The mv command will move files FROM a source TO a destination. It works like copy, except the file is actually moved. If applied to a single file, this effectively changes the name of the file. (Note there is no separate renaming command in Linux.) The command also works on directories.

Here are some examples (again, please do not type these in):

Command – Function
mv myfile yourfile – renames the file
mv ~/data/somefile somefile – moves the file to the current directory
mv ~/data/somefile yourfile – moves and renames
mv ~/data/* . – moves multiple files

There are also the mkdir and rmdir commands:

mkdir – to make a new directory, e.g. mkdir testdir
rmdir – to remove an empty directory, e.g. rmdir testdir
You can try these two commands. Open a new Terminal and type:

mkdir testdir
ls

In your home directory you will now see a new directory, testdir. Now type:

rmdir testdir
ls

You should notice that testdir has been removed from your home directory.
To remove a file you can use the rm command. Note that once files are deleted at the command line prompt in a terminal window, unlike in Microsoft Windows, you cannot get them back from the wastebin.

e.g. rm junk.txt (this is just an example, do not type it in your terminal)
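The commands above can be combined into one safe end-to-end sequence; everything below happens in a throwaway directory with made-up names, so nothing important is at risk:

```shell
mkdir -p /tmp/mv_practice && cd /tmp/mv_practice
echo "draft" > junk.txt
mv junk.txt notes.txt   # mv on a single file renames it
mkdir testdir
rmdir testdir           # rmdir only removes empty directories
rm notes.txt            # rm deletes permanently: there is no wastebin
ls                      # prints nothing: the directory is empty again
```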
Clearing your terminal

Often when running many commands, your terminal will become full and difficult to read. To clear the terminal screen, type clear. This is an especially helpful command when you have been typing lots of commands and need a clean terminal to help you focus.
Note that most commands in Linux have a similar syntax: command name [modifiers/options] input output. The syntax of the command is very important: there must be spaces between the different parts of the command, and you need to specify input and output. The modifiers (in brackets) are optional and may or may not be needed depending on what you want to achieve.
For example, take the following command:

cp -r /rds/projects/f/fred/data ~/tmp

(This is an example, do not type this.)

In the above example, -r is an option meaning 'recursive', often used with cp and other commands; in this case it is used to copy a directory, including all its contents, from one directory to another.
FSL (FMRIB Software Library) is a software library containing multiple tools for the processing, statistical analysis, and visualisation of magnetic resonance imaging (MRI) data. Subsequent workshops will cover the usage of some of the FSL tools for structural, functional and diffusion MRI data. This workshop only covers how to start the FSL app on the BlueBEAR GUI Linux desktop, and some practical aspects of using FSL, specifically running it in the terminal either in the foreground or in the background.
There are several different versions of the FSL software available on BlueBEAR. You can search which versions of FSL are available on BlueBEAR, as well as all other available software, using the following link: https://bear-apps.bham.ac.uk. From there you will also find information on how to load different software. Below you will find an example of loading one of the available versions of FSL.
To open FSL in the terminal, you first need to load the FSL module. To do this, you need to type a specific command in the Terminal.
First, either close the Terminal you have been previously using and open a new one, or simply clear it. Next, type:
module load FSL/6.0.5.1-foss-2021a
You will see various processes running in the terminal. Once these have stopped and you see a system prompt in the terminal, type:

fsl

This fsl command will initiate the FSL GUI, as shown below.
Now try typing ls in the same terminal window and pressing return.
Notice how nothing appears to happen (your keystrokes are shown as being typed in, but no actual event seems to be actioned). Indeed, nothing you type is being processed and the commands are being ignored. That is because the fsl command is running in the foreground in the terminal window and is therefore blocking other commands from being run in the same terminal.
Now close the FSL GUI. Notice that control has been returned to the Terminal and that commands you type are now being acted on. Try typing ls again; it should now work in the Terminal.
Go back to the terminal window again, but this time type fsl & at the system prompt and press return. Again, you should see the FSL GUI pop up.
Now try typing ls in the same Terminal.
Notice that your new commands are now being processed. The fsl command is now running in the background in the Terminal, allowing you to run other commands in parallel from the same Terminal. Typing & after any command makes it run in the background and keeps the Terminal free for you to use.
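You can see the same foreground/background distinction without FSL by using the harmless sleep command, which simply waits for the given number of seconds:

```shell
sleep 30 &      # & sends the command to the background immediately
jobs            # lists your background jobs; sleep 30 appears here
ls > /dev/null  # the terminal is free to run further commands
kill %1         # tidy up: stop background job number 1
```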
Sometimes you may forget to type & after a command. Try this:

Type fsl (without the &) so that it is running in the foreground, then suspend it by pressing CTRL-Z in the terminal. You should get a message like "[1]+ Stopped fsl". You will notice that the FSL GUI is now unresponsive (try clicking on some of the buttons). The fsl process has been suspended.

Now type bg in the terminal window (followed by pressing the return key). You should find the FSL GUI is responsive again and input to the terminal works once more. (If you clicked the 'Exit' button while the FSL GUI was unresponsive, FSL might close now.)
Running and killing commands in the terminal
If, for some reason, you want to make the command run in the foreground, then rather than typing bg (for background), type fg (for foreground). If you want to kill (rather than suspend) a command that is running in the foreground, press CTRL-C (the CTRL key and the c key).
Linux: some final useful tips
TIP 1: When typing a command, or the name of a directory or file, you never need to type everything out. The terminal will auto-complete the command or file name if you press the TAB key as you go along. Try using the TAB key when typing commands or the complete path to a specific directory.
TIP 2: If you need help understanding what the options are, or how to use a command, try adding --help to the end of your command. For example, for a better understanding of the du options, type:

du --help [enter]
TIP 3: There are many useful online lists of these various commands, for example: www.ss64.com/bash
Exercise: Basic Linux commands

Please complete the following exercises; you should hopefully know which Linux commands to use!

1. Clear your terminal
2. cd back to your home directory and check where you are
3. Make a directory called test
4. Rename test to test1 and make another directory called test2
5. Copy test1 to your folder on the module's RDS project (i.e., /rds/projects/c/chechlmy-chbh-mricn/xxx)
6. Remove the test1 and test2 directories and confirm it

If unsure, check your results with someone else or ask for help!
The correct commands are provided below (click to reveal).

Linux Commands Exercise (Answers)

clear
cd ~ (or cd /rds/homes/x/xxx)
pwd
mkdir test
mv test test1
mkdir test2
cp -r test1 /rds/projects/c/chechlmy-chbh-mricn/xxx/ (or mv test1 /rds/projects/c/chechlmy-chbh-mricn/xxx/)
rm -r test1 test2
ls
Workshop 1: Further Reading and Reference Material
Here are some additional resources that introduce users to Linux:
A copy of these workshop notes can be found on Canvas under '39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience', in the Week 01 workshop materials.
"},{"location":"workshop1/workshop1-intro/","title":"Workshop 1 - Introduction to BlueBEAR and Linux","text":"Welcome to the first workshop of the MRICN course!
Overview of Workshop 1
Topics for this workshop include:
Pre-requisites for the workshop
Please ensure that you have completed the 'Setting Up' section of this course, as you will require access to the BEAR Portal for this workshop.
A copy of these workshop notes can be found on Canvas under '39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience', in the Week 01 workshop materials.
"},{"location":"workshop2/mri-data-formats/","title":"Working with MRI Data - Files and Formats","text":"MRI Image FundamentalsWhen you acquire an MRI image of the brain, in most cases it is either a 3D image i.e., a volume acquired at one single timepoint (e.g., T1-weighted, FLAIR scans) or a 4D multi-volume image acquired as a timeseries (e.g., fMRI scans). Each 3D volume consists of multiple 2D slices, which are individual images.
Each volume consists of 3D voxels, with a typical size between 0.25 and 4 mm, though not necessarily the same in all three directions. For example, you can have a voxel size of [1mm x 1mm x 1mm] or [0.5mm x 0.5mm x 2mm]. The voxel size determines the image resolution.
The final important feature of an MRI image is the field of view (FOV): the extent of the voxel matrix, i.e., the voxel size multiplied by the number of voxels along each dimension. It tells you how much of the brain your MRI image covers. The FOV is sometimes provided for the entire 3D volume, sometimes for an individual 2D slice. Sometimes, the FOV is defined based on the slice thickness and the number of acquired slices.
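As a concrete illustration, the relationship between voxel size, matrix size and FOV can be sketched in a few lines of Python (the numbers below are invented examples, not values from any scan in this course):

```python
# Illustrative sketch: FOV along each axis = voxel size (mm) x number of voxels.
def field_of_view(voxel_size_mm, matrix_size):
    """Return the field of view along each axis, in mm."""
    return tuple(v * n for v, n in zip(voxel_size_mm, matrix_size))

# Isotropic voxels: a 1 x 1 x 1 mm scan with a 240 x 240 x 180 matrix.
print(field_of_view((1.0, 1.0, 1.0), (240, 240, 180)))  # (240.0, 240.0, 180.0)

# Anisotropic voxels: 0.5 x 0.5 x 2 mm, with the third dimension defined by
# slice thickness (2 mm) and number of acquired slices (60).
print(field_of_view((0.5, 0.5, 2.0), (448, 448, 60)))   # (224.0, 224.0, 120.0)
```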
Image and standard space
When you acquire MRI images of the brain, you will find that these images differ in terms of head position, image resolution and FOV, depending on the sequence and data type (e.g., T1 anatomical, diffusion MRI, fMRI). We often use the term "image space" to describe these differences, i.e., structural (T1), diffusion or functional space.
In addition, we also use the term "standard space" to refer to the standard dimensions and coordinates of a template brain, which are used when reporting results of group analyses. Our brains differ in size and shape, and thus for the purposes of our analyses (both single-subject and group-level) we need to use standard space. The most common brain template is the MNI152 brain (an average of 152 healthy brains).
The process of alignment between different image spaces is called registration or normalization, and its purpose is to make sure that voxel and anatomical locations correspond to the same parts of the brain for each image type and/or participant.
"},{"location":"workshop2/mri-data-formats/#mri-data-formats","title":"MRI Data Formats","text":"MRI scanners collect MRI data in an internal format that is unique to the scanner manufacturer, e.g., Philips, Siemens or GE. The manufacturer then allows you to export the data into a more usable intermediate format. We often refer to this intermediate format as raw data as it is not directly usable and needs to be converted before being accessible to most neuroimaging software packages.
The most common format used by the various scanner manufacturers is the DICOM format. DICOM images corresponding to a single scan (e.g., a T1-weighted scan) might be one large file or multiple files (one per volume, or one per acquired slice). This will depend on the scanner and the data server used to retrieve/export the data from the scanner. There are other data formats, e.g., PAR/REC, which is specific to Philips scanners. The raw data needs to be converted into a format that the analysis packages can use.
Retrieving MRI data at the CHBH
At CHBH we have a Siemens 3T PRISMA scanner. When you acquire MRI scans at CHBH, data is pushed directly to a data server in the DICOM format. This should be automatic for all research scans. In addition, for most scans, this data is also directly converted to NIfTI format. So, at the CHBH you will likely retrieve MRI data from the scanner in NIfTI format.
NIfTI (Neuroimaging Informatics Technology Initiative) is the most widely used format for MRI data, accessible by the majority of neuroimaging software packages, e.g., FSL or SPM. Another, older data format that is still sometimes used is Analyze (with each image consisting of two files, .img
and .hdr
).
NIfTI format files have either the extension .nii
or .nii.gz
(compressed .nii
file), where there is only one NIfTI image file per scan. DICOM files usually have a suffix of .dcm
, although these files might be additionally compressed with gzip
as .dcm.gz
files.
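A small sketch of these naming conventions (a hypothetical helper for illustration, not part of any neuroimaging package):

```python
# Map the file extensions described above to their MRI data formats.
def mri_format(filename):
    name = filename.lower()
    if name.endswith((".nii", ".nii.gz")):
        return "NIfTI"
    if name.endswith((".dcm", ".dcm.gz")):
        return "DICOM"
    if name.endswith((".img", ".hdr")):
        return "Analyze"
    return "unknown"

print(mri_format("T1_vol_v1_5.nii.gz"))  # NIfTI
print(mri_format("slice_001.dcm"))       # DICOM
print(mri_format("scan.hdr"))            # Analyze
```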
We will now convert some DICOM images to NIfTI ourselves, using data collected at the CHBH.
Servers do not always provide MRI data as NIfTIs
While at the CHBH you can download the MRI data in NIfTI format, this might not be the case at some other neuroimaging centres. Thus, you should learn how to do the conversion yourself.
The data is located in /rds/projects/c/chechlmy-chbh-mricn/module_data/CHBH
.
First, log in to the BlueBEAR Portal and start a BlueBEAR GUI session (2 hours). Open a new terminal window and navigate to your MRICN project folder:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx
[where XXX=your ADF username]
Next copy the data from CHBH scanning sessions:
cp -r /rds/projects/c/chechlmy-chbh-mricn/module_data/CHBH .\npwd\n
After typing pwd
, the terminal should show /rds/projects/c/chechlmy-chbh-mricn/xxx
(i.e., you should be inside your MRICN project folder).
Then type:
cd CHBH \nls\n
You should see data from 3 scanning sessions. Note that there are two files per scan session. One is labelled XXX_dicom.zip
. This contains the DICOM files of all data from the scan session. The other file is labelled XXX_nifti.zip
. This contains the NIFTI files of the same data, converted from DICOM.
In general, both DICOM and NIfTI data should always be copied from the server and saved by the researcher after each scan session. The DICOM file is needed in case there are problems with the automatic conversion to NIfTI. However, most of the time the only file you will need to work with is the XXX_nifti.zip
file containing NIfTI versions of the data.
We will now unpack some of the data to explore the data structure. In your terminal, type:
unzip 20191008#C4E7_nifti.zip\ncd 20191008#C4E7_nifti\nls\n
You should see six files listed as below, corresponding to 3 scans (two fMRI scans and one structural scan):
2.5mm_2000_fMRI_v1_6.json \n2.5mm_2000_fMRI_v1_6.nii.gz \n2.5mm_2000_fMRI_v1_7.json \n2.5mm_2000_fMRI_v1_7.nii.gz \nT1_vol_v1_5.json \nT1_vol_v1_5.nii.gz \n
JSON files
You may have noticed that for each scan file (NIfTI file, .nii.gz
), there is also an autogenerated .json file
. This is an information file (in an open standard format) that contains important information for our data analysis. For example, the 2.5mm_2000_fMRI_v1_6.json
file contains slice timing information about the exact point in time during the 2s TR (repetition time) when each slice is acquired, which can be used later in the fMRI pre-processing. We will come back to this later in the course.
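To get a feel for the sidecar contents, here is a minimal sketch that reads slice timing from a JSON string (the field names RepetitionTime and SliceTiming follow the convention used by dcm2niix-style sidecars; the values below are invented for illustration):

```python
import json

# Hypothetical sidecar excerpt: TR in seconds, one onset time per slice.
sidecar = json.loads("""
{
  "RepetitionTime": 2.0,
  "SliceTiming": [0.0, 1.0, 0.0625, 1.0625, 0.125, 1.125]
}
""")

tr = sidecar["RepetitionTime"]
timings = sidecar["SliceTiming"]
print(len(timings), "slices acquired within one TR of", tr, "s")
# Each slice onset falls somewhere within the 2 s TR:
print(all(0.0 <= t < tr for t in timings))  # True
```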
For now, let's look at another dataset. In your terminal type:
cd ..\nunzip 20221206#C547_nifti.zip\ncd 20221206#C547_nifti\nls\n
You should now see a list of 10 files, corresponding to 3 scans (two diffusion MRI scans and one structural scan). For each diffusion scan, in addition to the .nii.gz
and .json
files, there are two additional files, .bval
and .bvec
that contain important information about gradient strength and gradient directions (as mentioned in the MRI physics lecture). These two files are also needed for later analysis (of diffusion MRI data).
We will now look at a method for converting data from the DICOM format to NIfTI.
cd ..\nunzip 20191008#C4E7_dicom.zip\ncd 20191008#C4E7_dicom\nls\n
You should see a list of 7 sub-directories. Each top level DICOM directory contains sub-directories with each individual scan sequence. The structure of DICOM directories can vary depending on how it is stored/exported on different systems. The 7 sub-directories here contain data for four localizer scans/planning scans, two fMRI scans and one structural scan. Each sub-directory contains several .dcm
files.
There are several software packages which can be used to convert DICOM to NIfTI, but dcm2niix
is the most widely used. It is available as standalone software, or as part of MRIcroGL, a popular brain visualization tool similar to FSLeyes. dcm2niix
is available on BlueBEAR, but to use it you need to load it first using the terminal.
To do this, in the terminal type:
module load bear-apps/2022b
Wait for the apps to load and then type:
module load dcm2niix/1.0.20230411-GCCcore-12.2.0
To convert the .dcm
files in one of the sub-directories to NIfTI using dcm2niix
from terminal, type:
dcm2niix T1_vol_v1_5
If you now check the T1_vol_v1_5
sub-directory, you should find there a single .nii
file and a .json
file.
Converting more MRI data
Now try converting to NIfTI the .dcm files from the scanning session 20221206#C547, which has 3 DICOM sub-directories: the two diffusion scans diff_AP and diff_PA, and one structural scan MPRAGE.
To do this, you will first need to change current directory, unzip, change directory again and then run the dcm2niix
command as above.
If you have done it correctly you will find .nii
and .json
files generated in the structural sub-directories, and in the diffusion sub-directories you will also find .bval
and .bvec
files.
Now that we have our MRI data in the correct format, we will take a look at the brain images themselves using FSLeyes.
"},{"location":"workshop2/visualizing-mri-data/","title":"MRI data visualization with FSLeyes","text":"FSL (FMRIB Software Library) is a comprehensive neuroimaging software library for the analysis of structural and functional MRI data. FSL is widely used, freely available, runs on both Linux and Mac OS as well as on Windows via a Virtual Machine.
FSLeyes is the FSL viewer for 3D and 4D data. FSLeyes is available on BlueBEAR, but you need to load it first. You can load FSLeyes as standalone software, but as it is often used with other FSL tools, you will often want to load both FSL and FSLeyes.
In this session we will only be loading FSLeyes by itself, and not with FSL.
FSL Wiki
Remember that the FSL Wiki is an important source for all things FSL!
"},{"location":"workshop2/visualizing-mri-data/#getting-started-with-fsleyes","title":"Getting started with FSLeyes","text":"Assuming that you have started directly from the previous page, first close your previous terminal (to close dcm2niix
). Then open a new terminal and navigate to the correct folder by typing:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH
To open FSLeyes, type:
module load FSL/6.0.5.1-foss-2021a-fslpython
There are different versions of FSL on BlueBEAR; however, this is the one you need in order to use it together with FSLeyes.
Wait for FSL to load and then type:
module load FSLeyes/1.3.3-foss-2021a
Again, wait for FSLeyes to load (it may take a few minutes). After this, to open FSLeyes, type in your terminal:
fsleyes &
The importance of '&'
Why do we type fsleyes &
instead of fsleyes
? The & runs FSLeyes as a background job, so the terminal remains free for further commands while the GUI is open.
You should then see the setup below, which is the default FSLeyes viewer without an image loaded.
You can now load/open an image to view. Click 'File' \u2192 'Add from file' (and then select the file in your directory e.g., rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH/visualization/T1.nii
).
You can also type directly in the terminal fsleyes file.nii.gz
where you replace file.nii.gz
with the name of the actual file you want to open. However, you will need to include the full path to the file if you are not in the same directory when you open the terminal window, e.g., fsleyes /rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH/visualization/T1.nii
You should now see a T1 scan loaded in ortho view with three canvases corresponding to the sagittal, coronal, and axial planes.
Please now explore the various settings in the ortho view panel:
Also notice the abbreviations on the three canvases:
FSL comes with a collection of NIfTI standard templates, which are used for image registration and normalisation (part of MRI data analysis). You can also load these templates in FSLeyes.
To load a template, click 'File' → 'Add Standard' (for example, select the file named MNI152_T1_2mm.nii.gz
). If you still have the T1.nii
image open, first close this image (by selecting 'Overlay' \u2192 'Remove') and then load the template.
The image below depicts the various tools that you can use on FSLeyes, give them a go!
We will now look at fMRI data. First close the previous image ('Overlay' → 'Remove') and then load the fMRI image. To do this, click 'File' → 'Add from file' and then select the file /rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH/visualization/2.5mm_2000_fMRI.nii.gz
.
Your window should now look like this:
Remember this fMRI data file is a 4D image \u2013 a set of 90-odd volumes representing a timeseries. To cycle through volumes, use the up/down buttons or type in a volume in the 'Volume' box to step through several volumes.
Now try playing the 4D file in 'Movie' mode by clicking this button. You should see some slight head movement over time. Click the button again to stop the movie.
As the fMRI data is 4D, this means that every voxel in the 3D-brain has a timecourse associated with it. Let's now have a look at this.
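The idea can be sketched with numpy on synthetic data (random numbers standing in for the real scan; the voxel coordinates are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 4D "fMRI" data: x, y, z and time (10 x 10 x 10 voxels, 90 volumes).
data = rng.normal(loc=1000.0, scale=50.0, size=(10, 10, 10, 90))

# The timecourse of one voxel is the time axis at a fixed (x, y, z):
timecourse = data[4, 5, 3, :]
print(timecourse.shape)  # (90,)

# FSLeyes' 'Time series' panel plots exactly this kind of 1D signal; its
# mean intensity differs between brain, skull/scalp and air voxels.
print(timecourse.mean())
```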
Keeping the same dataset open (2.5mm_2000_fMRI.nii.gz
) and now in the FSLeyes menu, select 'View' \u2192 'Time series'.
FSLeyes should now look like the picture below.
What exactly are we looking at?
The functional image displayed here is the data straight from the scanner, i.e., raw data that has not yet been pre-processed or analyzed. In later workshops we will learn how to view analyzed data, e.g., display statistical maps etc.
You should see a timeseries shown at the bottom of the screen corresponding to the voxel that is selected in the main viewer. Move the mouse to select other voxels to investigate how variable the timecourse is.
Within the timeseries window, hit the '+' button to show the 'Plot List' characteristics for this timeseries.
Compare the timeseries in different parts of the brain, just outside the brain (skull and scalp), and in the airspace outside the skull. You should observe that these have very different mean intensities.
The timeseries of multiple different voxels can be compared using the '+' button. Hit '+' and then select a new voxel. Characteristics of the timeseries such as plotting colour can also be changed using the buttons on the lower left of the interface.
"},{"location":"workshop2/visualizing-mri-data/#atlas-tools","title":"Atlas tools","text":"FSL comes not only with a collection of NIfTI standard templates but also with several built-in atlases, both probabilistic and histological (anatomical), comprising cortical, sub-cortical, and white matter parcellations. You can explore the full list of included atlases here.
We will now have a look at some of these atlases.
Firstly, close all open files in FSLeyes (or close FSLeyes altogether and start it up again in your terminal by running fsleyes &
).
In the FSLeyes menu, select 'File' \u2192 'Add Standard' and then choose the file called MNI152_T1_2mm.nii.gz
(this is a template brain in MNI space).
The MNI152 atlas
Remember that the MNI152 atlas is a standard brain template created by averaging 152 MRI scans of healthy adults widely used as a reference space in neuroimaging research.
Now select from the menu 'Settings' \u2192 'Ortho View 1' and tick the box for 'Atlases' at the bottom.
You should now see the 'Atlases' panel open as shown below.
The 'Atlases' panel is organized into three sections:
The 'Atlas information' tab provides information about the current display location, relative to one or more atlases selected in this tab. We will soon see how to use this information.
The 'Atlas search' tab can be used to search for specific regions by browsing through the atlases. We will later look how to use this tab to create region-of-interest (ROI) masks.
The 'Atlas management' tab can be used to add or delete atlases. This is an advanced feature, and we will not be using it during our workshops.
We will now have a look at how to work with FSL atlases. First we need to choose some atlases to reference. In the 'Atlases' \u2192 'Atlas Information' window (bottom of screen in middle panel) make sure the following are ticked:
Now let's select a point in the standard brain. Move the cursor to the voxel position: [x=56, y=61, z=27] or enter the voxel location in the 'Location' window (2nd column).
MNI Co-ordinate Equivalent
Note that the equivalent MNI coordinates (shown in the 1st column/Location window) are [-22,-4,-18].
It may not be immediately obvious what part of the brain you are looking at. Look at the 'Atlases' window. The report should say something like:
Harvard-Oxford Cortical Structural Atlas \nHarvard-Oxford Subcortical Structural Atlas \n98% Left Amygdala\n
Checking the brain region with other atlases
What do the Juelich Histological Atlas & Talairach Daemon Labels report?
The Harvard-Oxford and Juelich are both probabilistic atlases. They report the percentage likelihood that the area named matches the point where the cursor is.
The Talairach Atlas is a simpler labelling atlas. It is based on a single brain (that of a 60-year-old French woman) and is an example of a deterministic atlas. It reports the name of the nearest label to the cursor coordinates.
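The distinction can be sketched with a toy lookup (the labels and percentages below are invented for illustration; real values come from the atlas images themselves):

```python
# At one cursor location, a probabilistic atlas stores a likelihood per label:
prob_voxel = {"Left Amygdala": 98, "Left Hippocampus": 2}
label, pct = max(prob_voxel.items(), key=lambda kv: kv[1])
print(f"{pct}% {label}")  # 98% Left Amygdala

# A deterministic atlas stores a single label per voxel and simply reports
# the nearest label at the cursor:
det_voxel = "Left Amygdala"
print(det_voxel)
```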
From the previous reports, particularly the Harvard-Oxford Subcortical Atlas and the Juelich Atlas, it should be obvious that we are most likely in the left amygdala.
Now click the '(Show/Hide)' link after the Left Amygdala result (as shown below):
This shows the (max) volume that the probabilistic Harvard-Oxford Subcortical Atlas has encoded for the Left Amygdala. The cursor is right in the middle of this volume.
In the 'Overlay list', click and select the top amygdala overlay. You will note that the min/max ranges are set to 0 and 100. If they are not, change them to 0 and 100. These values reflect the % likelihood of the labelling being correct.
If you increase the min value from 0% to 50%, then you will see the size of the probability volume for the left amygdala will decrease.
It now shows only the volume where there is a 50% or greater probability that this label is correct.
Click the (Show/Hide) link after the Left Amygdala; the amygdala overlay will disappear.
Exercise: Coordinate Localization
Have a go at localizing exactly what the appropriate label is for these coordinates:
If unsure check your results with someone else, or ask for help!
Make sure all overlays are closed (but keep the MNI152_T1_2mm.nii.gz
open) before moving to the next section.
It is often helpful to locate where a specific structure is in the brain and to visually assess its size and extent.
Let's suppose we want to visualize where Heschl's Gyrus is. In the bottom 'Atlases' window, click on the second tab ('Atlas search').
In the Search box, start typing the word 'Heschl\u2026'. You should find that the system quickly locates an entry for Heschl's Gyrus in the Harvard-Oxford Cortical Atlas. Click on it to select.
Now, if you tick the box immediately below, next to Heschl's Gyrus, an overlay will be added to the 'Overlay' list at the bottom (see below). Heschl's Gyrus should now be visible in the main image viewer.
Now click on the '+' button next to the tick box. This will centre the viewing coordinates to be in the middle of the atlas volume (see below).
Exercise: Atlas visualization
Now try this for yourself:
You can change the colour of the overlays by selecting the option below:
Other options also exist to help you navigate the brain and recognize the different brain structures and their relative positions.
Make sure you have first closed/removed all previous overlays. Now, select the 'Atlas search' tab in the 'Atlases' window again. This time, in the left panel listing the different atlases, tick the option for only one atlas, such as the Harvard-Oxford Cortical Structural Atlas, and make sure all the others are unticked.
Now you should see all of the areas covered by the Harvard-Oxford cortical atlas shown on the standard brain. You can click around with the cursor; the labels for the different areas appear in the bottom right panel.
In addition to atlases covering various grey matter structures, there are also two white matter atlases: the JHU ICBM-DTI-81 white-matter labels atlas & the JHU white-matter tractography atlas. If you tick (select) these atlases as per the previous instructions (hint: using the 'Atlas search' tab), you will see a list of all the included white matter tracts (pathways) as shown below:
"},{"location":"workshop2/visualizing-mri-data/#using-atlas-tools-to-create-a-region-of-interest-mask","title":"Using atlas tools to create a region-of-interest mask","text":"
You can also use the atlas tools in FSLeyes not only to locate specific brain structures but also to create masks for ROI (region-of-interest) analysis. We will now create two ROI masks (one grey matter and one white matter mask) using FSL tools and built-in atlases.
To start, please close 'FSLeyes' entirely, either by clicking 'x' in the right corner of the FSLeyes window or by selecting 'FSLeyes' \u2192 'Close'. Then close your current terminal and open a new terminal window.
Then do the following:
1. Make a new directory called ROImasks and navigate into this directory.
2. Load fsl and FSLeyes, and open FSLeyes in the background.
Here are the commands to do this:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/\nmkdir ROImasks\ncd ROImasks\nmodule load FSL/6.0.5.1-foss-2021a-fslpython \nmodule load FSLeyes/1.3.3-foss-2021a\nfsleyes & \n
Wait for FSLeyes to load, then use the 'Atlas search' tab to find the Middle Frontal Gyrus in the Harvard-Oxford Cortical Atlas and tick it to add its overlay. Then select the overlay (harvardoxford-cortical_prob_Middle_Frontal_Gyrus) from the 'Overlay' list and save it in your ROImasks directory as MFG (select 'Overlay' → 'Save' → Name: MFG).
You should now see the MFG overlay in the overlay list (as below) and have a MFG.nii.gz
file in the ROImasks
directory. You can check this by typing ls
in the terminal.
We will now create a white matter mask. Follow the same steps, this time selecting a tract from one of the JHU white-matter atlases in the 'Atlas search' tab, and save the overlay in your ROImasks directory as FM ('Overlay' → 'Save' → Name: FM). You should now see the FM overlay in the overlay list (as below) and also have a FM.nii.gz
file in the ROImasks
directory.
You now have two "probabilistic ROI masks". To use these masks in various analyses, you first need to binarize the images.
Why binarize?
Why do you think we need to binarize the mask first? There are several reasons, but primarily it creates clear boundaries between regions, which simplifies statistical analysis and reduces computation.
To do this, first close FSLeyes. Make sure that you are in the ROImasks
directory and check if you have the two masks. If you type pwd
in the terminal, you should get the output /rds/projects/c/chechlmy-chbh-mricn/xxx/ROImasks
(where XXX=your ADF username) and when you type ls
, you should see FM.nii.gz
and MFG.nii.gz
.
To binarize the masks, you can use one of the FSL tools for image manipulation, fslmaths
. The basic structure of an fslmaths
command is:
fslmaths input image [modifiers/options] output
Type in your terminal:
fslmaths FM.nii.gz -bin FM_binary\nfslmaths MFG.nii.gz -bin MFG_binary\n
This simply takes your ROI mask, binarizes it and saves the binarized mask with the _binary
name.
You should now have 4 files in the ROImasks directory.
Now open FSLeyes and examine one of the binary masks you just created. First load a template (Click 'File' \u2192 'Add Standard' \u2192 'MNI152_T1_2mm') and add the binary mask (e.g., Click 'File' \u2192 'Add from file' \u2192 'FM_binary.nii.gz').
You can see the difference between the probabilistic and binarized ROI masks below:
Probabilistic ROI mask
Binary ROI mask
To use ROI masks in your analysis, you might also need to threshold them, i.e., restrict the mask to voxels above a given probability. We previously did this manually for the amygdala (e.g., from 0-100% to 50%-100%). The choice of threshold might depend on the type of analysis and the type of ROI mask you need to use. The instructions below explain how to threshold and binarize your ROI image in one single step using fslmaths
.
Open your terminal and make sure that you are in the ROImasks
directory (pwd
). To both threshold and binarize the MFG mask, type:
fslmaths MFG.nii.gz -thr 25 -bin MFGthr_binary
(option -thr
is used to zero every voxel below a specific value, in this case 25, corresponding to 25% probability)
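The -bin and -thr logic can be mimicked with numpy on a toy array (an illustration of what fslmaths does to each voxel, not a replacement for fslmaths):

```python
import numpy as np

# Toy "probabilistic mask": voxel values are % likelihoods (0-100).
prob = np.array([0, 10, 24, 25, 60, 95])

# fslmaths MFG.nii.gz -bin MFG_binary: every non-zero voxel becomes 1.
binary = (prob > 0).astype(np.uint8)
print(binary.tolist())      # [0, 1, 1, 1, 1, 1]

# fslmaths MFG.nii.gz -thr 25 -bin MFGthr_binary:
# -thr 25 zeroes voxels below 25, then -bin sets the survivors to 1.
thresholded = np.where(prob < 25, 0, prob)
thr_binary = (thresholded > 0).astype(np.uint8)
print(thr_binary.tolist())  # [0, 0, 0, 1, 1, 1]
```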
Now let's compare the thresholded and unthresholded MFG binarized masks. Load both masks into FSLeyes (MFG_binary.nii.gz and MFGthr_binary.nii.gz), and to avoid confusion, change the colour of the second mask to blue. You can either toggle its visibility on and off (click the eye icon) to compare the masks or use the 'Opacity' button. You can see the difference in size between the two below:
Binarized MFG mask
Binarized and thresholded MFG mask
Exercise: Atlases and masks
Have a go at the following exercises:
If unsure, check your results with someone else or ask for help!
Workshop 2: Further Reading and Reference Material
FSLeyes is not the only MRI visualization tool available. Here are some others:
More details of what is available on BEAR at the CHBH can be found at the BEAR Technical Docs website.
"},{"location":"workshop2/workshop2-intro/","title":"Workshop 2 - MRI data formats, data visualization and atlas tools","text":"Welcome to the second workshop of the MRICN course! Prior lectures introduced you to the basics of the physics and technology behind MRI data acquisition. In this workshop we will explore MRI image fundamentals, MRI data formats, data visualization and atlas tools.
Overview of Workshop 2
Topics for this workshop include:
You will need this information before you can analyse data, regardless of whether you are using structural or functional MRI data.
For the purposes of this module we will be using BlueBEAR. You should remember from Workshop 1 how to access the BlueBEAR Portal and use the BlueBEAR GUI.
You have already been given access to the RDS project, rds/projects/c/chechlmy-chbh-mricn
. Inside the module\u2019s RDS project, you will find that you have a folder labelled xxx
(xxx
= University of Birmingham ADF username).
If you navigate to that folder (rds/projects/c/chechlmy-chbh-mricn/xxx)
, you will be able to perform the various file operations from there during workshops.
A copy of these workshop notes can be found on Canvas (39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience) in the Week 02 workshop materials.
"},{"location":"workshop3/diffusion-intro/","title":"Diffusion MRI basics - visualization and preprocessing","text":"In this workshop and the workshop next week, we will follow some basic steps of the diffusion MRI analysis pipeline shown below. The instructions here are specific to the tools available in FSL; however, other neuroimaging software packages can be used to perform similar analyses. You might also recall from lectures that models other than the diffusion tensor and methods other than probabilistic tractography are also often used.
FSL diffusion MRI analysis pipeline
First, if you have not already, log in to the BlueBEAR Portal and start a BlueBEAR GUI session (2 hours). You should know how to do this from the previous workshops. Open a new terminal window and navigate to your MRICN project folder:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx
[where XXX=your ADF username]
Please check your directory by typing pwd
. This should return: /rds/projects/c/chechlmy-chbh-mricn/xxx
.
Where has all my data gone?
Before this workshop, any old directories and files from previous workshops were removed (you will not need them for subsequent workshops, and storing unnecessary data would exceed the allocated quota). Your XXX directory should therefore be empty.
Next you need to copy over the data for this workshop.
cp -r /rds/projects/c/chechlmy-chbh-mricn/module_data/diffusionMRI/ .
(make sure you do not omit the spaces or the final .)
This might take a while, but once it has completed, change into that downloaded directory:
cd diffusionMRI
(in your XXX
subdirectory you should now have the folder diffusionMRI
)
Type ls
. You should now see three subdirectories/folders (DTIfit
, TBSS
and tractography
). Change into the DTIfit
folder:
cd DTIfit
We will first look at what diffusion images look like and explore text files which contain information about gradient strength and gradient directions.
In your terminal type ls
. This should return:
p01/\np02/\n
So, the folder DTIfit
contains data from two participants, stored in the p01
and p02
folders.
Inside each folder (p01
and p02
) you will find a T1 scan, uncorrected diffusion data (blip_up.nii.gz
, blip_down.nii.gz
) acquired with two opposing PE-directions (AP/blip_up
and PA/blip_down
) and corresponding bvals
(e.g., blip_up.bval
) and bvecs
(e.g., blip_up.bvec
) files.
bvals
files contain b-values (scalar values for each applied gradient). bvecs
files contain a list of gradient directions (diffusion encoding directions), including a [3x1] vector for each gradient. The number of entries in bvals
and bvecs
files equals the number of volumes in the diffusion data files.
Finally, inside p01
and p02
there is also a subdirectory, data, containing distortion-corrected diffusion images.
We will start with viewing the uncorrected data. Please navigate inside the p01
folder, open FSLeyes and then load one of the uncorrected diffusion images:
cd p01\nmodule load FSL/6.0.5.1-foss-2021a-fslpython\nmodule load FSLeyes/1.3.3-foss-2021a\nfsleyes &\n
The image you have loaded is 4D and consists of 64 volumes acquired with different diffusion encoding directions. Some of the volumes are non-diffusion images (b-value = 0), while most are diffusion weighted images. The first volume, which you can see after loading the file, is a non-diffusion weighted image as demonstrated below.
Viewing separate volumes
You can view the separate volumes by changing the number in the Volume box or playing movie mode. Note that the volume count starts from 0. You should also note that there are significant differences in the image intensity between different volumes.
Now go back to volume 0 and - if needed - stop movie mode. In the non-diffusion weighted image, the ventricles containing CSF are bright and the rest of the image is relatively dark. Now change the volume number to 2, which is a diffusion weighted image (with a b-value of approximately 1500).
The intensity of this volume is different. To see anything, please change max. intensity to 400. Now the ventricles are dark and you can see some contrast between different voxels.
Let's view the content of the bvals
and bvecs
files by using the cat
command. In your terminal type:
cat blip_down.bval
The first number is 0, confirming that the first volume (volume 0) is indeed a non-diffusion weighted image, while the third number shows that the third volume (volume 2) is a diffusion-weighted volume with b=1500. Based on the content of this bval
file, you should be able to tell how many diffusion-weighted volumes were acquired and how many without any diffusion weighting (b0 volumes).
Comparing diffusion-weighted volumes
Please compare this with the file you loaded into FSLeyes.
Now type:
cat blip_down.bvec
You should now see 3 separate rows of numbers representing the diffusion encoding directions (3x1 vector for each acquired volume; x,y,z directions) and that for volume 2 the diffusion encoding is represented by the vector [0.578, 0.671, 0.464].
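The structure of these two text files can be made concrete by parsing a small made-up excerpt (8 volumes here rather than 64; only the volume-2 vector is taken from the file described above):

```python
# Hypothetical one-line bval excerpt: one b-value per acquired volume.
bval_line = "0 1500 1500 0 1500 1500 1500 0"
bvals = [float(v) for v in bval_line.split()]
n_b0 = sum(1 for b in bvals if b == 0)
print(len(bvals), "volumes,", n_b0, "of them b0")  # 8 volumes, 3 of them b0

# Hypothetical bvec excerpt: three rows (x, y, z), one column per volume.
bvec_lines = [
    "0 0.123 0.578 0 -0.321 0.700 0.100 0",
    "0 0.456 0.671 0  0.654 0.100 0.200 0",
    "0 0.789 0.464 0  0.222 0.300 0.900 0",
]
rows = [[float(v) for v in line.split()] for line in bvec_lines]
# The gradient direction of volume 2 is the third column:
direction = [row[2] for row in rows]
print(direction)  # [0.578, 0.671, 0.464]
```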
Distortion correction
As explained in the lectures, diffusion imaging suffers from various distortions (susceptibility-, eddy-current- and movement-induced distortions). These need to be corrected before further analysis. The most noticeable geometric distortions are susceptibility-induced distortions caused by field inhomogeneities, so we will have a closer look at these.
All types of distortion need correction during the pre-processing steps of diffusion imaging analysis. FSL includes two tools for distortion correction, topup and eddy. Processing with these two tools is time- and compute-intensive, so we will not run the distortion correction steps in the workshop but will instead explore some of the principles behind them.
Instead, you are given distortion-corrected data with which to conduct the further analysis steps: diffusion tensor fitting and probabilistic tractography.
First close the current image in FSLeyes ('Overlay' \u2192 'Remove') and load both uncorrected images (blip_up.nii.gz
, blip_down.nii.gz
) acquired with two opposing PE-directions (PE=phase encoding).
Compare the first volumes in each file. To do that you can either toggle the visibility on and off (click the eye icon) or use the 'Opacity' button (you should remember from the previous workshop how to do this).
The circled area indicates the differences in susceptibility-induced distortions between the two images acquired with two opposing PE-directions.
Now change the max. intensity to 400 and compare the third volumes in each file. Again, the circled area indicates the differences in distortions between the two images acquired with the two opposing PE-directions.
Finally, we will look at distortion corrected data. First close the current image ('Overlay' \u2192 'Remove').
Now in FSLeyes load data.nii.gz
(the distortion-corrected diffusion image located inside the data subdirectory) and have a look at one of the non-diffusion weighted and diffusion-weighted volumes.
Comparing corrected to uncorrected diffusion-weighted volumes
Can you tell the difference in the corrected compared to the uncorrected diffusion images?
Further examining the difference between uncorrected and corrected diffusion data
In your own time (outside of this workshop as part of independent study), load both the corrected and uncorrected data for p01
and compare using the 'Volume' box or 'Movie' mode. Also explore the data in the p02
folder using the instructions above.
In the next part of the workshop, we will look at FSL's Brain Extraction Tool (BET).
Brain extraction is a necessary pre-processing step, which removes non-brain tissue from the image. It is applied to structural images prior to tissue segmentation and is needed to prepare anatomical scans for registration of functional MRI or diffusion scans to MNI space. BET can be also used to create binary brain masks (e.g., brain masks needed to run diffusion tensor fitting, DTIfit).
In this workshop we will look only at creating a binary brain mask as required for DTIfit. In subsequent workshops we will look at using BET for removing non-brain tissue from diffusion and T1 scans (\u201cskull-stripping\u201d) in preparation for registration.
First close FSLeyes and to make sure you do not have any processes running in the background, close your current terminal.
Open a new terminal window, navigate to the p02
subdirectory, and load FSL and FSLeyes again:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit/p02\nmodule load FSL/6.0.5.1-foss-2021a-fslpython\nmodule load FSLeyes/1.3.3-foss-2021a \n
Now check the content of the p02
subdirectory by typing ls
. You should get the response bvals
, bvecs
and data.nii.gz
.
From the data.nii.gz
(distortion corrected diffusion 4D image) we will extract a single volume without diffusion weighting (e.g. the first volume). You can extract it using one of FSL's utility commands, fslroi
.
What is fslroi
used for?
fslroi
is used to extract a region of interest (ROI) or subset of data from a larger 3D or 4D image file.
In the terminal, type:
fslroi data.nii.gz nodif 0 1
where:
data.nii.gz
is your input image, nodif
is your output image (a 3D non-diffusion-weighted volume), 0 is the index of the first volume to extract, and 1 is the number of volumes to extract. You should have a new file nodif.nii.gz
(type ls
to confirm) and can now create a binary brain mask using BET.
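If you want to double-check the extraction, fslval reports image dimensions, and dim4 is the number of volumes. The sketch below assumes the FSL module is loaded; the fallback branch just reports if it is not.

```shell
# Check nodif is a single 3D volume (dim4 should be 1 after 'fslroi ... 0 1').
if command -v fslval >/dev/null 2>&1; then
  nvol=$(fslval nodif dim4)
else
  nvol="unknown (fslval not found; load the FSL module first)"
fi
echo "nodif volumes: $nvol"
```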
To do this, first open BET. You can open the BET GUI directly in a terminal window by typing:
Bet &
Or by running FSL in a terminal window and accessing BET from the FSL GUI. To do it this way, type:
fsl &
and then open the 'BET brain extraction tool' by clicking on it in the GUI.
In either case, once BET is opened, click on advanced options and make sure the first two outputs are selected ('brain extracted image' and 'binary brain mask') as below. Select as the 'Input' image the previously created nodif.nii.gz
and change 'Fractional Intensity Threshold' to 0.4. Then click the 'Go' button.
Completing BET in the terminal
After running BET you may need to hit return to get a visible prompt back after seeing 'Finished' in the terminal!
Once you see 'Finished' in the terminal, you are ready to inspect the results. Close BET, open FSLeyes and load three files (nodif.nii.gz
, nodif_brain.nii.gz
and nodif_brain_mask
). Compare the files. To do that, you can either toggle the visibility on and off (click the eye icon) or use the 'Opacity' button (you should remember from the previous workshop how to do this).
The nodif_brain_mask
is a single binarized image with ones inside the brain and zeroes outside the brain. You need this image both for DTIfit and tractography.
Comparing between BET and normal images
Can you tell the difference between nodif.nii.gz
and nodif_brain.nii.gz
? It might be easier to compare these images if you change max intensity to 1500 and nodif_brain
colour to green.
The next thing we will do is to look at how to run and examine the results of diffusion tensor fitting.
First close FSLeyes, and to make sure you do not have any processes running in the background, close the current terminal.
Open a new terminal window, navigate to the p01
subdirectory, load FSL and FSLeyes again, and finally open FSL (with & to background it):
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit/p01\nmodule load FSL/6.0.5.1-foss-2021a-fslpython\nmodule load FSLeyes/1.3.3-foss-2021a\nfsl & \n
To run the diffusion tensor fit, you need 4 files as specified below:
data.nii.gz
nodif_brain_mask.nii.gz
bvecs
(text file with gradient directions)
bvals
(text file with list of b-values)
All these files are included inside the data subdirectory p01/data
. You will later learn how to create a binary brain mask but first we will run DTIfit.
In the FSL GUI, first click on 'FDT diffusion', and in the FDT window, select 'DTIFIT Reconstruct diffusion tensors'. Now choose as 'Input directory' the data
subdirectory located inside p01
and click 'Go'.
You should see something happening in the terminal and once you see 'Done!' you are ready to view the results.
Click 'OK' when the message appears.
Different ways of running DTIfit
Instead of running DTIfit by choosing the 'Input' directory, you can also run it by specifying the input files manually. If you click it now, the files would be auto-filled, but otherwise you would need to provide the inputs as below.
Running DTIfit in your own time
Please do NOT run it now, but instead try it in your own time with data in the p02
folder.
Finally, you can also run DTIfit directly from the terminal. To do this, navigate inside the subdirectory/folder with all the data and type the full dtifit
command, specifying its compulsory arguments as below:
dtifit --data=data --mask=nodif_brain_mask --bvecs=bvecs --bvals=bvals --out=dti
This command only works when run from inside the folder where all the data is located; otherwise you will need to specify the full path to the data. This is useful if you want to write a script; we will look at this in later workshops.
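As a taste of scripting, the hedged sketch below just prints the full dtifit command for each participant folder (a dry run); removing the echo would execute the commands once FSL is loaded. The paths follow the DTIfit layout used in this workshop.

```shell
# Dry run: print the dtifit command for each subject rather than executing it.
base=/rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit
n=0
for subj in p01 p02; do
  d="$base/$subj/data"
  echo "dtifit --data=$d/data --mask=$d/nodif_brain_mask --bvecs=$d/bvecs --bvals=$d/bvals --out=$d/dti"
  n=$((n+1))
done
```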
Running DTIfit from the terminal in your own time
Again, please do NOT run it now but try it in your own time with data in the p02
folder.
The results of running DTIfit are several output files as specified below. We will look closer at the highlighted files in bold. All of these files should be located in the data
subdirectory, i.e. within /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit/p01/data/
.
To do this, firstly close the FSL GUI, open FSLeyes and load the FA map ('File' \u2192 'Add from file' \u2192 dti_FA
)
Next add the principal eigenvector map (dti_V1
) to your display ('File' \u2192 'Add from file' \u2192 dti_V1
).
FSLeyes will open the image dti_V1
as a 3-direction vector image (RGB) with diffusion direction coded by colour. To display the standard PDD colour coded orientation map (as below), you need to modulate the colour intensity with the FA map so that the anisotropic voxels appear bright.
In the display panel, click on 'Settings' (the cog icon) and change 'Modulate' by setting it to dti_FA
.
Finally, compare the FA and MD maps (dti_FA
and dti_MD
). To do this, load the FA map and add the MD map. In contrast to the FA map, the MD map appears fairly uniform across grey and white matter, while the high intensities in the CSF-filled ventricles indicate higher diffusivity; in the FA map the ventricles appear dark.
Differences between the FA and MD maps
Why are there such differences?
"},{"location":"workshop3/diffusion-mri-analysis/#tract-based-spatial-statistics-tbss","title":"Tract-Based Spatial Statistics (TBSS)Tract-Based Spatial Statistics analysis pipeline","text":"In the next part of the workshop, we will look at running TBSS, Tract-Based Spatial Statistics.
TBSS is used for a whole brain \u201cvoxelwise\u201d cross-subject analysis of diffusion-derived measures, usually FA (fractional anisotropy).
We will look at an example TBSS analysis of a small dataset consisting of FA maps from ten younger (y1-y10) and five older (o1-o5) participants. Specifically, you will learn how to run the second stage of TBSS analysis, \u201cvoxelwise\u201d statistics, and learn how to display results using FSLeyes. The statistical analysis that you will run aims to examine where on the tract skeleton younger versus older (two groups) participants have significantly different FA values.
Before that, let's briefly recap TBSS as it was covered in the lecture.
The steps for Tract-Based Spatial Statistics are:
To save time, some of the pre-processing stages including generating FA maps (tensor fitting), preparing data for analysis, registration of FA maps and skeletonization have been run for you and all outputs are included in the data
folder you have copied at the start of this workshop.
You will only run the TBSS statistical analysis to explore group differences in FA values based upon age (younger versus older participants).
First close FSLeyes (if you still have it open) and make sure that you do not have any processes running in the background by closing your current terminal.
Then open a new terminal window, navigate to the subdirectory where pre-processed data are located and load both FSL and FSLeyes:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/TBSS/TBSS_analysis_p2/\nmodule load FSL/6.0.5.1-foss-2021a-fslpython\nmodule load FSLeyes/1.3.3-foss-2021a \n
Once you have loaded all the required software, we will start with exploring the pre-processed data. If you correctly followed the previous steps, you should be inside the subdirectory TBSS_analysis_p2
. Confirm that, and then check the content of that subdirectory by typing:
pwd
(answer /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/TBSS/TBSS_analysis_p2/
)
ls
(you should see 3 data folders listed: FA
, origdata
, stats
)
We need to firstly check if all the pre-processing steps have been run correctly and that we have all the required files.
Navigate inside the stats
folder and check the files inside by typing in your terminal:
cd stats\nls\n
You should find inside the files listed below.
all_FA
(4D image file with all participants' FA maps registered into standard space)
mean_FA
(3D image file, the mean of all participants' FA maps)
all_FA_skeletonised
(4D image file with all participants' skeletonised FA data)
mean_FA_skeleton
(3D image file, the mean FA skeleton)
Exploring the data
If this is the case, open FSLeyes and explore these files one by one to make sure you understand what each represents. You might need to change the colour to visualise some image files.
Remember to ask for help!
If you are unsure about something, or need help, please ask!
Once you have finished taking a look, close FSLeyes.
Before using the General Linear Model (GLM) GUI to set up the statistical model, you need to determine the order in which participants\u2019 files have been entered into the single 4D skeletonized file (i.e., the data order in the all_FA_skeletonised
file). The easiest way to determine the alphabetical order of participants in the final 4D file (all_FA_skeletonised
), is to check in which order FSL lists the pre-processed FA maps inside the FA folder. You can do this in the terminal with the commands below:
cd .. \ncd FA \nimglob *_FA.*\n
You should see data from the 5 older participants (o1-o5) followed by data from the 10 younger participants (y1-y10).
Next navigate back to the stats
folder and open FSL:
cd ..\ncd stats\nfsl &\n
Click on 'Miscellaneous tools' and select 'GLM Setup' to open the GLM GUI.
In the workshop we will set up a simple group analysis (a two sample unpaired t-test).
How to set up more complex models
For information on how to set up more complex models using the GUI, see: https://fsl.fmrib.ox.ac.uk/fsl/docs/#/statistics/glm
In the 'GLM Setup' window, change 'Timeseries design' to 'Higher-level/non-timeseries design' and '# inputs' to 15.
Then click on 'Wizard' and select 'two groups, unpaired' and set 'Number of subjects in first group' to 5. Then click 'Process'.
In the 'EVs' tab, name 'EV1' and 'EV2' as per your groups (old, young).
In the contrast window set number of contrasts to 2 and re-name them accordingly to the image below:
(C1: old > young, [1 -1]) (C2: young > old, [-1 1])
Click 'View Design', close the image and then go back to the GLM Setup window and save your design with the filename design
. Click 'Exit' and close FSL.
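For reference, the design files saved by the GUI are plain text in FSL's VEST format. For this 5 + 10 design they should look roughly like the sketch below (an illustration of the design described above, not a file to copy verbatim; the GUI also writes extra header lines, such as peak-height information, omitted here). The '#' lines are annotations, not part of the files.

```
# design.mat - 15 rows: first 5 model the old group, last 10 the young group
/NumWaves 2
/NumPoints 15
/Matrix
1 0
1 0
1 0
1 0
1 0
0 1
0 1
0 1
0 1
0 1
0 1
0 1
0 1
0 1
0 1

# design.con - two contrasts: C1 old > young, C2 young > old
/NumWaves 2
/NumContrasts 2
/Matrix
1 -1
-1 1
```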
To run the TBSS statistical analysis, FSL's randomise
tool is used.
FSL's randomise
Randomise is FSL's tool for nonparametric permutation inference on various types of neuroimaging data (statistical analysis tool). For more information click on this link: https://fsl.fmrib.ox.ac.uk/fsl/docs/#/statistics/randomise
The basic command line to use this tool is:
randomise -i <input> -o <output> -d <design.mat> -t <design.con> [options]
You can explore the available options and usage by typing randomise
in your terminal.
The basic command line to use randomise for TBSS is below:
randomise -i all_FA_skeletonised -o tbss -m mean_FA_skeleton_mask -d design.mat -t design.con -n 500 --T2
Check if you are inside the stats
folder, then run the command above in the terminal to start your TBSS group analysis.
The elements of this command are explained below:
Argument / Description:
-i input image
-o output basename
-m mask
-d design matrix
-t design contrasts
-n number of permutations
--T2 TFCE (threshold-free cluster enhancement)
Why so few permutations?
To save time we only run 500 permutations; this number will vary depending on the type of analysis, but usually it is between 5,000 and 10,000 or higher.
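A rough rule of thumb (an illustration, not an FSL guarantee): with n permutations the smallest p-value you can resolve is about 1/n, which is one reason more permutations are used for publication-quality analyses.

```shell
# Smallest resolvable p-value with 500 permutations, via the ~1/n rule.
min_p=$(awk -v n=500 'BEGIN { printf "%.4f", 1/n }')
echo "500 permutations -> smallest resolvable p ~ $min_p"
```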
The output from randomise
will include two raw (unthresholded) tstat images, tbss_tstat1
and tbss_tstat2
.
The TFCE p-value images (fully corrected for multiple comparisons across space) will be tbss_tfce_corrp_tstat1
and tbss_tfce_corrp_tstat2
.
Based on the setup of your design, contrast 1 gives the older > younger test and contrast 2 gives the younger > older test; the contrast likely to give significant results is the second, i.e., we expect higher FA in younger participants (due to the age-related decline in FA).
To check this, use FSLeyes to view the results of your TBSS analysis. Open FSLeyes, load mean_FA
plus the mean_FA_skeleton
template and add the TFCE-corrected tstat2 image to your display:
mean_FA.nii.gz
mean_FA_skeleton.nii.gz
(change greyscale to green)tbss_tfce_corrp_tstat2.nii.gz
(change greyscale to red-yellow and set Max to 1 and Min to 0.95 or 0.99)
Please note that TFCE-corrected images actually store 1-p for convenience of display, so thresholding at 0.95 gives significant clusters at corrected p < 0.05, and 0.99 gives significant clusters at corrected p < 0.01.
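Since the corrp images store 1-p, you can sanity-check what a display threshold means as a corrected p-value:

```shell
# Convert 1-p display thresholds back to corrected p-values.
p95=$(awk 'BEGIN { printf "%.2f", 1 - 0.95 }')
p99=$(awk 'BEGIN { printf "%.2f", 1 - 0.99 }')
echo "Min 0.95 -> p < $p95 ; Min 0.99 -> p < $p99"
```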
You should see the same results as below:
Interpreting the results
Are the results as expected? Why/why not?
Reviewing the tstat1 image
Next review the tbss_tfce_corrp_tstat1.nii.gz image in the same way.
Further information on TBSS
More information on TBSS can be found in the 'TBSS' section of the FSL Wiki: https://fsl.fmrib.ox.ac.uk/fsl/docs/#/diffusion/tbss
"},{"location":"workshop3/workshop3-intro/","title":"Workshop 3 - Basic diffusion MRI analysis","text":"Welcome to the third workshop of the MRICN course! Prior lectures in the module introduced you to basics of the diffusion MRI and its applications, including data acquisition, the theory behind diffusion tensor imaging and using tractography to study structural connectivity. The aim of the next two workshops is to introduce you to some of the core FSL tools used for diffusion MRI analysis.
Specifically, we will explore different elements of FMRIB's Diffusion Toolbox (FDT) to walk you through the basic steps of diffusion MRI analysis. We will also cover the use of the Brain Extraction Tool (BET).
By the end of the two workshops, you should understand the principles of correcting for distortions in diffusion MRI data, how to run and explore the results of a diffusion tensor fit, and how to run a whole-brain group analysis and probabilistic tractography.
Overview of Workshop 3
Topics for this workshop include:
We will be working with various previously acquired datasets (similar to the data acquired during the CHBH MRI Demonstration/Site visit). We will not go into details as to why and how specific sequence parameters and specific values of the default settings have been chosen. Some values should be clear to you from the lectures or the assigned readings on Canvas; please check there, or if you are still unclear, feel free to ask.
Note that for your own projects, you are very likely to want to change some of these settings/parameters depending on your study aims and design.
A copy of these workshop notes can be found on Canvas (39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience) in the Week 03 workshop materials.
"},{"location":"workshop4/probabilistic-tractography/","title":"Probabilistic Tractography","text":"In the first part of the workshop, we will look again at BET, FSL's Brain Extraction Tool.
Brain extraction is a necessary pre-processing step which allows us to remove non-brain tissue from the image. It is applied to structural images prior to tissue segmentation and is needed to prepare anatomical scans for the registration of functional MRI or diffusion scans to MNI space. BET can be also used to create binary brain masks (e.g., brain masks needed to run diffusion tensor fitting, DTIfit).
"},{"location":"workshop4/probabilistic-tractography/#skull-stripping-our-data-using-bet","title":"Skull-stripping our data using BET","text":"In this workshop we will first look at a very simple example of removing non-brain tissues from diffusion and T1 scans (\u201cskull-stripping\u201d) in preparation for the registration of diffusion data to MNI space.
Log into the BlueBEAR Portal and start a BlueBEAR GUI session (2 hours).
In your session, open a new terminal window and navigate to the diffusionMRI
data in your MRICN
folder:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI
[where xxx = your ADF username]
In case you missed the previous workshop
You were instructed to copy the diffusionMRI
data in the previous workshop. If you have not completed last week's workshop, you either need to find details on how to copy the data in the 'Workshop 3: Basic diffusion MRI analysis' materials or work with someone who has completed the previous workshop.
Then load FSL and FSLeyes:
module load FSL/6.0.5.1-foss-2021a-fslpython\nmodule load FSLeyes/1.3.3-foss-2021a\n
We will now look at how to \u201cskull-strip\u201d the T1 image (remove skull and non-brain areas); this step is needed for the registration step in both fMRI and diffusion MRI analysis pipelines.
We will do this using BET on the command line. The basic command-line version of BET is:
bet <input> <output> [options]
In this workshop we will look at a simple brain extraction i.e., performed without changing any default options.
To do this, navigate inside the p01
folder:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit/p01
Then in your terminal type:
bet T1.nii.gz T1_brain
Once BET has completed (should only take a few seconds at most), open FSLeyes (with & to background it). Then in FSLeyes:
T1.nii.gz
and add the T1_brain
imageT1_brain
to 'Red' or 'Green'
This will likely show that in this case the default brain extraction was good. The reason it works so well with the default options here is the small FOV and data from a young, healthy adult. This is not always the case, e.g., when we have a large FOV or data from older participants.
More brain extraction to come? You BET!
In the next workshop (Workshop 5) we will explore different BET [options] and how to troubleshoot brain extraction.
"},{"location":"workshop4/probabilistic-tractography/#preparing-our-data-with-bedpostx","title":"Preparing our data with BEDPOSTX","text":"BEDPOSTX is an FSL tool used for a step in the diffusion MRI analysis pipeline, which prepares the data for probabilistic tractography. BEDPOSTX (Bayesian Estimation of Diffusion Parameters Obtained using Sampling Techniques, X = modelling Crossing Fibres) estimates fibre orientation in each voxel within the brain. BEDPOSTX employs Markov Chain Monte Carlo (MCMC) sampling to reconstruct distributions of diffusion parameters at each voxel.
We will not run it during this workshop as it takes a long time. The data has been processed for you, and you copied it at the start of the previous workshop.
To run it, you would need to open the FSL GUI, click on 'FDT diffusion' and from the drop-down menu select 'BEDPOSTX'. The input directory must contain the distortion-corrected diffusion file (data.nii.gz
), binary brain mask (nodif_brain_mask.nii.gz
) and two text files with the b-values (bvals
) and gradient orientations (bvecs
).
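Before launching BEDPOSTX, it is worth confirming the input directory is complete. A hedged shell sketch (the directory argument is a placeholder; on BlueBEAR you would point it at e.g. DTIfit/p01/data):

```shell
# Report any of the four required BEDPOSTX inputs missing from a directory;
# returns the number of missing files (0 = ready to run).
check_bedpostx_inputs() {
  dir=$1
  missing=0
  for f in data.nii.gz nodif_brain_mask.nii.gz bvals bvecs; do
    [ -e "$dir/$f" ] || { echo "missing: $dir/$f"; missing=$((missing+1)); }
  done
  return "$missing"
}

# Example call with a placeholder path; replace with your input directory.
check_bedpostx_inputs ./data || echo "input directory incomplete"
```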
Because the data used for this workshop has a single b-value, we need to specify the single-shell model.
After the workshop, in your own time, you could run it using the provided data (see Tractography Exercises section at the end of workshop notes).
BEDPOSTX outputs a directory at the same level as the input directory called [inputdir].bedpostX
(e.g. data.bedpostX
). It contains various files (including mean fibre orientation and diffusion parameter distributions) needed to run probabilistic tractography.
As we will look at tractography in different spaces, we also need the output from registration. The concept of different image spaces has been introduced in Workshop 2. The registration step can be run from the FDT diffusion toolbox after BEDPOSTX has been run. Typically, registration will be run between three spaces:
nodif_brain
image)
This step has again been run for you. To run it, you would need to open the FSL GUI, click on 'FDT diffusion' and from the drop-down menu select 'Registration'. The main structural image would be your \u201cskull-stripped\u201d T1 (T1_brain
) and non-betted structural image would be T1. Plus you need to select data.bedpostX
as the 'BEDPOSTX directory'.
After the workshop, you can try running it in your own time (see Tractography Exercises section at the end of workshop notes).
Registration output directory
The outputs from registration needed for probabilistic tractography are stored in the xfms
subdirectory.
PROBTRACKX (probabilistic tracking with crossing fibres) is an FSL tool used for probabilistic tractography. To run it, you need to open the FSL GUI, click on 'FDT diffusion' and from the drop-down menu select 'PROBTRACKX' (it should default to it).
PROBTRACKX can be used to run tractography either in diffusion or non-diffusion space (e.g., standard or structural). If running it in non-diffusion space you will need to provide a reference image. You can also run tractography from a single seed (voxel), single mask (ROI) or from multiple masks which can be specified in either diffusion or non-diffusion space.
We will look at some examples of different ways of running tractography.
First close any processes still running and open a new terminal. Next navigate inside where all the files to run tractography have been prepared for you:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/tractography/p01
As you may recall, there are different versions of FSL available on BlueBEAR. These correspond to different FSL software releases and have been compiled in different ways. The different versions of FSL are suitable for different purposes, i.e., used for different MRI data analyses.
To run BEDPOSTX and PROBTRACKX, you need to use a specific version of FSL (FSL 6.0.7.6), which you can load by typing in your terminal:
module load bear-apps/2022b\nmodule load FSL/6.0.7.6\nsource $FSLDIR/etc/fslconf/fsl.sh\n
Once you have loaded FSL using these commands, open the FDT toolbox from either the FSL GUI or directly typing in your terminal:
Fdt &
We will start with tractography from a single voxel in diffusion space. Specifically, we will run it from a voxel with coordinates [47, 37, 29] located within the forceps major of the corpus callosum, a white matter fibre bundle which connects the occipital lobes.
Running tractography on another voxel
Later, you can use the FA map (dti_FA
inside the p01/data
folder) loaded to FSLeyes to check the location of the selected voxel, choose another voxel within a different white matter pathway, and run the tractography again.
You should have the FDT Toolbox window open as below:
From here do the following:
data.bedpostX
as the 'BEDPOSTX directory'After the tractography has finished, check the contents of subdirectory /corpus
with the tractography output files. It should contain:
probtrackx.log
with the probtrackx
command that was run
fdt_coordinates.text
with the coordinates used
corpus_47_37_29.nii.gz
(general structure outputname_X_Y_Z.nii.gz
; where outputname = the name of the subdirectory and X, Y and Z = the seed voxel coordinates). This file contains, for each voxel, a count of how many streamlines intersected that voxel. We will explore the results later. First, you will learn how to run tractography in standard (MNI) space.
Close the FDT toolbox and then open it again from the terminal to make sure you do not have anything running in the background.
We will now run tractography using a combination of masks (ROIs) in standard space to reconstruct tracts connecting the right motor thalamus (portion of the thalamus involved in motor function) with the right primary motor cortex. The ROI masks have been prepared for you and put inside the mask subdirectory ~/diffusionMRI/tractography/masks
. The ROIs have been created using FSL's atlas tools (you learnt how to do this in a previous workshop) and are in standard/MNI space; thus we will run tractography in MNI (standard) space and not in diffusion space.
This is the most typical design of tractography studies.
In the FDT Toolbox window - before you select your input in the 'Data' tab - go to the 'Options' tab (as below) and reduce the number of samples to 500 under 'Options'. You would normally run 5000 (default) but reducing this number will speed up processing and is useful for exploratory analyses.
Now going back to the 'Data' tab (as below) do the following:
data.bedpostX
as 'BEDPOSTX directory'Thalamus_motor_RH.nii.gz
from the masks
subdirectorydata.bedpost/xfms
directory. Select standard2diff_warp
as 'Seed to diff transform' and diff2standard_warp
as 'diff to Seed transform'. These files are generated during registration.cortex_M1_right.nii.gz
from the masks
subdirectory to isolate only those tracts from the motor thalamus that reach the primary motor cortex. Use this mask also as a termination mask to avoid tracking to other parts of the brain.
Specifying masks
Without selecting the waypoint and termination masks, you would also get other tracts passing via motor thalamus, including random offshoots with low probability (noise). This is expected for probabilistic tractography, as the random sampling without specifying direction can result in spurious offshoots into nearby tracts and give low probability noise.
It will take significantly longer this time to run the tractography in standard space. However, once it has finished, you will see the window 'Done!/OK'. Before proceeding, click 'OK'.
A new subdirectory will be created with the chosen output name MotorThalamusM1
. Check the contents of this subdirectory. It contains slightly different files compared to the previous tractography output. The main output, the streamline density map is called fdt_paths.nii.gz
. There is also a file called waytotal
that contains the total number of valid streamlines.
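The waytotal count is plain text, so you can read it directly in the terminal. A common follow-up (a common practice rather than a required step here) is to divide fdt_paths by waytotal so runs with different streamline counts are comparable. The sketch below writes a stand-in waytotal file with a made-up count, since the real one lives in your output directory.

```shell
# Stand-in file for illustration; on BlueBEAR just 'cat waytotal' inside
# the MotorThalamusM1 output directory instead.
echo 1234 > waytotal_example
waytotal=$(cat waytotal_example)
echo "valid streamlines: $waytotal"
# With FSL loaded, you could then normalise the density map, e.g.:
#   fslmaths fdt_paths -div "$waytotal" fdt_paths_norm
```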
We will now explore the results from both tractography runs. First close FDT and your terminal as we need FSLeyes, which cannot be loaded together with the current version of FSL.
Next navigate inside where all the tractography results have been generated and load/open FSLeyes:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/tractography/p01\nmodule load FSLeyes/1.3.3-foss-2021a\nfsleyes &\n
We will start with our results from tractography in seed space. In FSLeyes, do the following:
~/diffusionMRI/tractography/p01/data/dti_FA.nii.gz
) and tractography output file (~/corpus/corpus_47_37_29.nii.gz
)Your window should look like this:
Once you have finished reviewing the results of tractography in seed space, close the results ('Overlay' \u2192 'Remove all').
We will now explore the results from our tractography run in MNI space, but to do so we need a standard template. Assuming you have closed all previous images:
~/diffusionMRI/tractography/MNI152T1_brain.nii.gz
) and tractography output file (/MotorThalamusM1/fdt_paths.nii.gz
)
Tractography exercises
In your own time, you should try the exercises below to consolidate your tractography skills. If you have any problems completing or any further questions, you can ask for help during one of the upcoming workshops.
Thalamus_motor_RH.nii.gz
mask as seed image). Compare the results to the output from our tractography we ran during the workshop.cortex_M1_right.nii.gz
mask as the seed image and without Thalamus_motor_RH.nii.gz
as waypoint and termination masks. Compare these results to previous outputs (from the tractography we ran during the workshop). Are the results the same? Why not? In the mask
subdirectory, you will find two other masks: LGN_left.nii.gz
and V1_left.nii.gz
. You can use a combination of these two masks to reconstruct a portion of the left hemispheric optic radiation connecting the left lateral geniculate nucleus (LGN) and left primary visual cortex (V1). Hint: use LGN as the seed image and the V1 mask as waypoint and termination masks. p02
(~/diffusionMRI/tractography/p02/
). It might take ~60-90 minutes to run.
p02
(~/diffusionMRI/tractography/p02/
). To run it, you first need to complete Exercise 5. It will take ~15 min to complete the registration.
Help and further information
As always, more information on diffusion analyses in FSL can be found in the 'diffusion' section of the FSL Wiki and in this practical course run by FMRIB (the creators of FSL).
"},{"location":"workshop4/workshop4-intro/","title":"Workshop 4 - Advanced diffusion MRI analysis","text":"Welcome to the fourth workshop of the MRICN course!
In the previous workshop we started exploring different elements of the FMRIB's Diffusion Toolbox (FDT). This week we will continue with the different applications of the FDT toolbox and the use of Brain Extraction Tool (BET).
Overview of Workshop 4
Topics for this workshop include:
We will be working with various previously acquired datasets (similar to the data acquired during the CHBH MRI Demonstration/Site visit). We will not go into details as to why and how specific sequence parameters and specific values of the default settings have been chosen. Some values should be clear to you from the lectures or the assigned readings on Canvas; please check there, or if you are still unclear, feel free to ask.
Note that for your own projects, you are very likely to want to change some of these settings/parameters depending on your study aims and design.
In this workshop we will follow basic steps in the diffusion MRI analysis pipeline, specifically with running tractography. The instructions here are specific to tools available in FSL. Other neuroimaging software packages can be used to perform similar analyses.
Example of Diffusion MRI analysis pipeline
A copy of these workshop notes can be found on Canvas (39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience) in the Week 04 workshop materials.
"},{"location":"workshop8/functional-connectivity/","title":"Functional connectivity analysis of resting-state fMRI data using FSL","text":"This workshop is based upon the excellent FSL fMRI Resting State Seed-based Connectivity tutorial by Dianne Paterson at the University of Arizona, which has been adapted to run on the BEAR systems at the University of Birmingham, with some additional content covering Neurosynth.
We will run a group-level functional connectivity analysis on resting-state fMRI data of three participants, specifically examining the functional connectivity of the posterior cingulate cortex (PCC), a region of the default mode network (DMN) that is commonly found to be active in resting-state data.
Overview of Workshop 8
To do this, we will:
Navigate to your shared directory within the MRICN folder and copy the data over:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx\ncp -r /rds/projects/c/chechlmy-chbh-mricn/aamir_test/SBC .\ncd SBC\nls\n
You should now see the following:
sub1 sub2 sub3\n
Each of the folders has a single resting-state scan, called sub1.nii.gz
,sub2.nii.gz
and sub3.nii.gz
respectively.
We will now create our seed region for the PCC. To do this, first load FSL and fsleyes
in the terminal by running:
module load FSL/6.0.5.1-foss-2021a\nmodule load FSLeyes/1.3.3-foss-2021a\n
Check that we are in the correct directory (blah/your_username/SBC
):
pwd\n
and create a new directory called seed
:
mkdir seed\n
Now when you run ls
you should see:
seed sub1 sub2 sub3\n
Let's open FSLeyes:
fsleyes &\n
Creating the PCC mask in FSLeyes We need to open the standard MNI template brain, select the PCC and make a mask.
Here are the following steps:
File \u279c Add standard
and select MNI152_T1_2mm_brain.nii.gz
.Settings \u279c Ortho View 1 \u279c Atlases
. An atlas panel then opens on the bottom section.Atlas information
(if it hasn't already loaded). Type cing
in the search box. Check the Cingulate Gyrus, posterior division (lower right) so that it is overlaid on the standard brain. (The full name may be obscured, but you can always check which region you have loaded by looking at the panel on the bottom right).
At this point, your window should look something like this:
To save the seed, click the save symbol which is the first of three icons on the bottom left of the window.
The window that opens up should be your project SBC directory. Open into the seed
folder and save your seed as PCC
.
We now need to binarise the seed and extract the mean timeseries. To do this, leaving FSLeyes open, go into your terminal (you may have to press Enter if some text about dc.DrawText
is there) and type:
cd seed\nfslmaths PCC -thr 0.1 -bin PCC_bin\n
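Conceptually, `-thr 0.1` zeroes every voxel with a value below 0.1, and `-bin` then sets every remaining non-zero voxel to 1. A toy illustration of the same logic on plain numbers (awk standing in for fslmaths here, purely for intuition):

```shell
# Toy illustration of "-thr 0.1 -bin": threshold, then binarise, a list of values.
printf '0.05\n0.30\n0.00\n0.72\n' |
awk '{ v = ($1 < 0.1 ? 0 : $1); print (v != 0 ? 1 : 0) }'
# prints: 0 1 0 1 (one value per line)
```

The same rule applied voxel-wise across the whole image is what fslmaths does to produce the mask.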
In FSLeyes now click File \u279c Add from file, and select PCC_bin
to compare PCC.nii.gz
(before binarisation) and PCC_bin.nii.gz
(after binarisation). You should note that the signal values are all 1.0 for the binarised PCC.
You can now close FSLeyes.
For each subject, you want to extract the average time series from the region defined by the PCC mask. To calculate this value for sub1
, do the following:
cd ../sub1\nfslmeants -i sub1 -o sub1_PCC.txt -m ../seed/PCC_bin\n
This will generate a file within the sub1
folder called sub1_PCC.txt
.
We can have a look at the contents by running cat sub1_PCC.txt
. The terminal will print out a list of numbers with the last five being:
20014.25528\n20014.919\n20010.17317\n20030.02886\n20066.05141\n
This is the mean level of 'activity' for the PCC at each time-point.
Now let's repeat this for the other two subjects.
cd ../sub2\nfslmeants -i sub2 -o sub2_PCC.txt -m ../seed/PCC_bin\ncd ../sub3\nfslmeants -i sub3 -o sub3_PCC.txt -m ../seed/PCC_bin\n
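Typing near-identical commands per subject quickly becomes tedious; the same three calls can be generated with a loop. This is a hedged sketch, run from the SBC directory (the echo makes it a dry run; remove it, with FSL loaded, to actually execute the commands):

```shell
# Dry-run loop over subjects: prints the fslmeants command for each subject.
# Remove "echo" (and make sure FSL is loaded) to actually run them.
for sub in sub1 sub2 sub3; do
  echo fslmeants -i "$sub/$sub" -o "$sub/${sub}_PCC.txt" -m seed/PCC_bin
done
```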
Now if you go back to the SBC directory and list all of the files within the subject folders:
cd ..\nls -R\n
You should see the following:
This is all we need to run the subject and group-level analyses using FEAT.
"},{"location":"workshop8/functional-connectivity/#running-the-feat-analyses","title":"Running the FEAT analyses","text":""},{"location":"workshop8/functional-connectivity/#single-subject-analysis","title":"Single-subject analysisExamining the FEAT outputScripting the other two subjects","text":"Close your terminal, open another one, move to your SBC
folder, load FSL and open FEAT:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/SBC\nmodule load bear-apps/2022b\nmodule load FSL/6.0.7.6\nsource $FSLDIR/etc/fslconf/fsl.sh\nFeat &\n
We will run the first-level analysis for sub1
. Set-up the following settings in the respective tabs:
Data
Number of inputs:
sub1
folder and choose sub1.nii.gz
. Click OK. You will see a box saying that the 'Input file has a TR of 1...'; this is fine, just click OK again.Output directory:
sub1
folder and click OK. Nothing will be in the right hand column, but that is because there are no folders within sub1
. We will create our .feat
folder within sub1
. This is what your data tab should look like (with the input data opened for show).
Pre-stats
The data has already been pre-processed, so just set 'Motion correction' to 'None' and uncheck BET. Your pre-stats should look like this:
Registration
Nothing needs to be changed here.
Stats
Click on 'Full Model Setup' and do the following:
sub1
folder and select sub1_PCC.txt
. This is the mean time series of the PCC for sub1 and is the statistical regressor in our GLM. This is different from analyses of task-based data, which will usually have an events.tsv
file with the onset times for each regressor of interest.What are we doing specifically?
The first-level analysis will subsequently identify brain voxels that show a significant correlation with the seed (PCC) time series data.
Your window should look like this:
In the same General Linear Model window, click the 'Contrast & F-tests' tab, type PCC in the title, and click 'Done'.
A blue and red design matrix will then be displayed. You can close it.
Post-stats
Nothing needs to be changed here.
You are ready to run the first-level analysis. Click 'Go' to run. On BEAR, this should only take a few minutes.
To actually examine the output, go to the BEAR Portal and at the menu bar select Files \u279c /rds/projects/c/chechlmy-chbh-mricn/
Then go into SBC/sub1.feat
, select report.html
and click 'View' (top left of the window). Navigate to the 'Post-stats' tab and examine the outputs. It should look like this:
We can now run the second and third subjects. As we only have three subjects, we could manually run the other two by just changing three things:
sub_PCC.txt
path. Whilst it would probably be quicker to do it manually in this case, it is not practical in other instances (e.g., more subjects, or subjects with different numbers of scans). So instead we will script the first-level FEAT analyses for the other two subjects.
The importance of scripting
Scripting analyses may seem challenging at first, but it is an essential skill of modern neuroimaging research. It enables you to automate repetitive processing steps, dramatically reduces the chance of human error, and ensures your research is reproducible.
To do this, go back into your terminal; you don't need to open a new terminal or close FEAT.
The setup for each analysis is saved as a specific file, the design.fsf
file within the FEAT output directory. We can see this by opening the design.fsf
file for sub1
:
pwd # make sure you are in your SBC directory e.g., blah/xxx/SBC\ncd sub1.feat\ncat design.fsf\n
FEAT acts as a large 'function' with its many variables corresponding to the options that we choose when setting up in the GUI. We just need to change three of these (the three mentioned above). In the design.fsf
file this corresponds to:
set fmri(outputdir) \"/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1\"\nset feat_files(1) \"/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1/sub1/\"\nset fmri(custom1) \"/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1/sub1_PCC.txt\"\n
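To see the idea behind such a script, here is a hypothetical sketch (not the actual run_feat.sh, whose contents may differ). It writes a fake minimal design.fsf into a scratch directory (so as not to touch your real FEAT output) and swaps sub1 for each remaining subject with sed:

```shell
# Hypothetical sketch only: fake a minimal design.fsf in a scratch directory,
# then generate a per-subject copy by substituting the subject name.
mkdir -p demo/sub1.feat
cat > demo/sub1.feat/design.fsf <<'EOF'
set fmri(outputdir) "/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1"
set feat_files(1) "/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1/sub1/"
set fmri(custom1) "/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1/sub1_PCC.txt"
EOF
for sub in sub2 sub3; do
  sed "s/sub1/${sub}/g" demo/sub1.feat/design.fsf > "demo/design_${sub}.fsf"
  echo "wrote demo/design_${sub}.fsf"   # a real script would now run: feat design_${sub}.fsf
done
```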
To run the script, please copy the run_feat.sh
script into your own SBC
directory:
cd ..\npwd # make sure you are in your SBC directory\ncp /rds/projects/c/chechlmy-chbh-mricn/axs2210/SBC/run_feat.sh .\n
Viewing the script
If you would like, you can have a look at the script yourself by typing cat run_feat.sh
The first line #!/bin/bash
is always needed to run bash
scripts. The rest of the code just replaces the 3 things we wanted to change for the defined subjects, sub2
and sub3
.
Run the code (from your SBC directory) by typing bash run_feat.sh
. (It will ask you for your University account name; this is your ADF username (axs2210 for me)).
The script should take about 5-10 minutes to run on BEAR.
After it has finished running, have a look at the report.html
file for both directories, they should look like this:
sub2
sub3
"},{"location":"workshop8/functional-connectivity/#group-level-analysis","title":"Group-level analysisExamining the output","text":"Ok, so now that we have our FEAT directories for all three subjects, we can run the group level analysis. Close FEAT and open a new FEAT by running Feat &
in your SBC
directory.
Here are instructions on how to setup the group-level FEAT:
Data
Your window should look like this (before closing the 'Input' window):
\u00a0\u00a0\u00a0\u00a05. Keep 'Use lower-level COPEs' ticked.
\u00a0\u00a0\u00a0\u00a06. In 'Output directory' stay in your current directory (SBC), and in the bottom bar, type in PCC_group
at the end of the file path.
Don't worry about it being empty, FSL will fill out the file path for us.
If you click the folder again, it should look similar to this (with your ADF username instead of axs2210
):
Stats
The interface should look like this:
After that, click 'Done' and close the GLM design matrix that pops up (you don't need to change anything in the 'Contrasts and F-tests' tab).
Post-stats
Lowering our statistical threshold
Why do you think we are lowering this to 2.3 in our analysis instead of keeping it at 3.1? Because we only have three subjects, we want to be relatively lenient with our threshold value; otherwise we might not see any activation at all! For group-level analyses with more subjects, we would be stricter.
Click 'Go' to run!
This should only take about 2-3 minutes.
While this is running, you can load the report.html
through the file browser as you did for the individual subjects.
Click on the 'Results' tab, and then on 'Lower-level contrast 1 (PCC)'. When the analysis has finished, your results should look like this:
These are voxels demonstrating significant functional connectivity with the PCC at a group-level (Z > 2.3).
So, we have just run our group-level analysis. Let's have a closer look at the output data.
Close FEAT and your terminal, open a new terminal, go to your SBC
directory and open FSLeyes:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/SBC\nmodule load FSL/6.0.5.1-foss-2021a\nmodule load FSLeyes/1.3.3-foss-2021a\nfsleyes &\n
In FSLeyes, open up the standard brain (Navigate to the top menu and click on 'File \u279c Add standard' and select MNI152_T1_2mm_brain.nii.gz
).
Then add in our contrast image (File \u279c Add from file, and then go into the PCC_group.gfeat
and then into cope1.feat
and open the file thresh_zstat1.nii.gz
).
When opened, change the colour to 'Red-Yellow' and the 'Minimum' up to 2.3 (The max should be around 3.12). If you set the voxel location to [42, 39, 52] your screen should look like this:
This is the map that we saw in the report.html
file. In fact we can double check this by changing the voxel co-ordinates to [45, 38, 46].
Our thresholded image in fsleyes
The FEAT output. Our image matches the one on the far right below:
"},{"location":"workshop8/functional-connectivity/#bonus-identifying-regions-of-interest-with-atlases-and-neurosynth","title":"Bonus: Identifying regions of interest with atlases and Neurosynth","text":"
So we know which voxels demonstrate significant correlation with the PCC, but what region(s) of the brain are they located in?
Let's go through two ways in which we can work this out.
Firstly, as you have already done in the course, we can simply just overlap an atlas on the image and see which regions the activated voxels fall under.
To do this:
By having a look at the 'Location' window (bottom left) we can now see that significant voxels of activity are mainly found in the:
Right superior lateral occipital cortex
Posterior cingulate cortex (PCC) / precuneus
Alternatively, we can also use Neurosynth, a website where you can get the resting-state functional connectivity of any voxel location or brain region. It does this by extracting data from studies and performing a meta-analysis on brain imaging studies that have results associated with your voxel/region of interest.
About Neurosynth
While Neurosynth has been superseded by Neurosynth Compose we will use the original Neurosynth in this tutorial.
If you click the following link, you will see regions demonstrating significant connectivity with the posterior cingulate.
If you enter [46, -70, 32] as the co-ordinates in Neurosynth, and then into the MNI co-ordinates section in FSLeyes (not into the voxel location, because Neurosynth works in MNI space), you can see that in both cases the right superior lateral occipital cortex is activated.
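For the standard 2mm MNI152 template, the voxel-to-millimetre conversion can be done by hand using the template's affine (x = 90 - 2i, y = 2j - 126, z = 2k - 72; this is the FSL convention for that template, and you can check any image's own transform with fslhd):

```shell
# Convert a voxel index [i, j, k] to MNI mm co-ordinates on the 2mm template.
i=22; j=28; k=52
x=$(( 90 - 2*i )); y=$(( 2*j - 126 )); z=$(( 2*k - 72 ))
echo "voxel [$i, $j, $k] -> MNI [$x, $y, $z] mm"
# prints: voxel [22, 28, 52] -> MNI [46, -70, 32] mm
```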
Image orientation
Note that the orientations of left and right are different between Neurosynth and FSLeyes!
Neurosynth
FSLeyes
This is a great result given that we only have three subjects!
Learning outcomes of this workshop
In this workshop, you have:
"},{"location":"#teaching-staff","title":"Teaching Staff","text":"Dr Magdalena ChechlaczRole: Course Lead
Magdalena Chechlacz is an Assistant Professor in Cognition and Ageing at the School of Psychology, University of Birmingham. She initially trained and carried out a doctorate in cellular and molecular biology (2002). After working as a biologist (Larry L. Hillblom Foundation Fellowship at the University of California, San Diego) she decided on a career change to a more human-oriented science and neuroimaging. In order to gain formal training in cognitive neuroscience and neuroimaging, she completed a second doctorate in psychology at the University of Birmingham under the supervision of Glyn Humphreys (2012). From 2013 to 2016, she held a British Academy Postdoctoral Fellowship and EPA Cephalosporin Junior Research Fellowship, Linacre College at the University of Oxford. In 2016, Dr Chechlacz returned to the School of Psychology, University of Birmingham as a Bridge Fellow.
m.chechlacz@bham.ac.uk 0000-0003-1811-3946Aamir Sohail
Role: Teaching Assistant
Aamir Sohail is an MRC Advanced Interdisciplinary Methods (AIM) DTP PhD student based at the Centre for Human Brain Health (CHBH), University of Birmingham, where he is supervised by Lei Zhang and Patricia Lockwood. He completed a BSc in Biomedical Science at Imperial College London, followed by an MSc in Brain Imaging at the University of Nottingham. He then worked as a Junior Research Fellow at the Centre for Integrative Neuroscience and Neurodynamics (CINN), University of Reading. Outside of research, he is also passionate about facilitating inclusivity and diversity in academia, as well as promoting open and reproducible science.
axs2210@bham.ac.uk sohaamir AamirNSohail 0009-0000-6584-4579 sohaamir.github.ioAccessing additional course materials
If you are a CHBH member and would like access to additional course materials (lecture recordings etc.), please contact one of the teaching staff members listed above.
"},{"location":"contributors/","title":"Contributors","text":"Many thanks to our contributors for creating and maintaining these resources!
Andrew Quinn\ud83d\udea7 \ud83d\udd8b Aamir Sohail\ud83d\udea7 \ud83d\udd8b James Carpenter\ud83d\udd8b Magda Chechlacz\ud83d\udd8bAcknowledgements
Thank you to Charnjit Sidhu for their assistance with running the course!
LicenseMRI on BEAR is hosted on GitHub. All content in this book (i.e., any files and content in the docs/
folder) is licensed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. Please see the LICENSE
file in the GitHub repository for more details.
For those wanting to develop their learning beyond the scope of the module, here is a (non-exhaustive) list of links and pages for neuroscientists covering skills related to working with neuroimaging data, both with the concepts and practical application.
Contributing to the list
Feel free to suggest additional resources to the list by opening a thread on the GitHub page!
FSL Wiki
Most relevant to the course is the FSL Wiki, the comprehensive guide for FSL by the Wellcome Centre for Integrative Neuroimaging at the University of Oxford.
"},{"location":"resources/#existing-lists-of-resources","title":"Existing lists of resources","text":"Here are some current 'meta-lists' which already cover a lot of resources themselves:
Struggling to grasp the fundamentals of MRI/fMRI? Want to quickly refresh your mind on the physiological basis of the BOLD signal? Well, these resources are for you!
An Introduction to Resting State fMRI Functional Connectivity (2017, Oxford University Press) by Janine Bijsterbosch, Stephen M. Smith, and Christian F. Beckmann
Handbook of Functional MRI Data Analysis (2011, Cambridge University Press) by Russell A. Poldrack, Jeanette A. Mumford, and Thomas E. Nichols
Introduction to Functional Magnetic Resonance Imaging (1998, Cambridge University Press) by Richard B. Buxton
Introduction to Neuroimaging Analysis (2018, Oxford University Press) by Mark Jenkinson and Michael Chappell
Short Introduction to Brain Anatomy for Neuroimaging (2018, Oxford University Press) by Mark Jenkinson and Michael Chappell
Short Introduction to the General Linear Model (2018, Oxford University Press) by Mark Jenkinson and Michael Chappell
Short Introduction to MRI Physics (2018, Oxford University Press) by Mark Jenkinson and Michael Chappell
Before you start with any workshop materials, you will need to familiarise yourself with the CHBH\u2019s primary computational resource, BlueBEAR. The following pages are aimed at helping you get started.
To put these workshop materials into practical use you will be expected to understand what BlueBEAR is, what it is used for and to make sure you have access.
Student Responsibility
If you are an MSc student taking the MRICN module, please note that while there will be help available during all in-person workshops, in case you have any problems with using the BEAR Portal, it is your responsibility to make sure that you have access and that you are familiar with the information provided in the pre-workshop materials. Failing to gain an understanding of BlueBEAR and the BEAR Portal will prevent you from participating in the practical sessions and completing the module's main assessment (data analysis).
"},{"location":"setting-up/#what-are-bear-and-bluebear","title":"What are BEAR and BlueBEAR?Signing in to the BEAR Portal","text":"BEAR stands for Birmingham Environment for Academic Research and is a collection of services provided specifically for researchers at the University of Birmingham. BEAR services are used by researchers at the Centre for Human Brain Health (CHBH) for various types of neuroimaging data analysis.
BEAR services and basic resources - such as the ones we will be using for the purpose of the MRICN module - are freely available to the University of Birmingham research community. Extra resources which may be needed for some research projects can be purchased e.g., access to dedicated nodes and extra storage. This is something your PI/MSc/PhD project supervisor might be using and will give you access to.
BlueBEAR refers to the Linux High Performance Computing (HPC) environment which:
As computing resources on BlueBEAR rely on Linux, in Workshop 1 you will learn some basic commands, which you will need to be familiar with to participate in subsequent practical sessions and to complete the module's main assessment (data analysis assessment). More Linux commands and the basic principles of scripting will be introduced in subsequent workshops.
There are two steps to gaining access to BlueBEAR:
Gaining access to BEAR Projects
Only a member of academic staff e.g., your project supervisor or module lead, can apply for a BEAR project. As a student you cannot apply for a BEAR project. If you are registered as a student on the MRICN module, you should have already been added as member to the project chechlmy-chbh-mricn
. If not please contact one of the teaching staff.
Even if you are already a member of a BEAR project giving you BlueBEAR access, you will still need to activate your BEAR Linux account via the self-service route or the service desk form. Information on how to do this, with step-by-step instructions, is available on the BEAR website; see the following link.
Please follow these steps as above to make sure you have a BEAR Linux account before starting with workshop 1 materials. To do this you will need to be on campus or using the University Remote Access Service (VPN).
After you have activated your BEAR Linux account, you can now sign-in to the BEAR Portal.
BEAR Portal access requirements
Remember that the BEAR Portal is only available on campus or using the VPN!
If your log in is successful, you will be directed to the main BEAR Portal page as below. This means that you have successfully launched the BEAR Portal.
If you get to this page, you are ready for Workshop 1. For now, you can log out. If you have any problems logging on to BEAR Portal, please email chbh-help@contacts.bham.ac.uk for help and advice.
"},{"location":"setting-up/#bear-storage","title":"BEAR Storage","text":"The storage associated with each BEAR project is called the BEAR Research Data Store (RDS). Each BEAR project gets 3TB of storage space for free, but researchers (e.g., your MSc project supervisor) can pay for additional storage if needed. The RDS should be used for all data, job scripts and output on BlueBEAR.
If you are registered as a student on the MRICN module, all the data and resources you will need to participate in the MRICN workshops and to complete the module\u2019s main assessment have been added to the MRICN module RDS, and you have been given access to the folder /rds/projects/c/chechlmy-chbh-mricn
. When working on your MSc project using BEAR services, your supervisor will direct you to the relevant RDS project.
External access to data
If you are not registered on the module and would like access to the data, please contact one of the teaching staff members.
"},{"location":"setting-up/#finding-additional-information","title":"Finding additional information","text":"There is extensive BEAR technical documentation provided by the University of Birmingham BEAR team (see links below). While for the purpose of this module, you are not expected to be familiar with all the provided there information, you might find it useful if you want to know more about computing resources available to researchers at CHBH via BEAR services, especially if you will be using BlueBEAR for other purposes (e.g., for your MSc project).
You can find out more about BEAR, BlueBEAR and RDS on the dedicated BEAR webpages:
University of Birmingham BEAR Homepage
More information on BlueBEAR
More information on Research Data Storage
At this point you should know how to log in and access the main BEAR Portal page.
Please navigate to https://portal.bear.bham.ac.uk, log in and launch the BEAR Portal; you should get to the page as below.
BlueBEAR Portal is a web-based interface enabling access to various BEAR services and BEAR apps including:
The BlueBEAR Portal is essentially a user-friendly alternative to using the command-line interface (your computer terminal).
To view all files and data you have access to on BlueBEAR, click on 'Files' as illustrated above. You will see your home directory (your BEAR Linux home directory), and all RDS projects you are a member of.
You should be able to see /rds/projects/c/chechlmy-chbh-mricn
(MRICN module\u2019s RDS project). By selecting the 'Home Directory' or any 'RDS project' you will open a second browser tab, displaying the content. In the example below, you see the content of one of Magda's projects.
Inside the module\u2019s RDS project, you will find that you have a folder labelled xxx, where xxx is your University of Birmingham ADF username. If you navigate to that folder rds/projects/c/chechlmy-chbh-mricn/xxx
, you will be able to perform various file operations from there. However, for now, please do not move, download, or delete any files.
Data confidentiality
Please also note that the MRI data you will be given to work with should be used on BlueBEAR only and not downloaded on your personal desktop or laptop!
"},{"location":"workshop1/intro-to-bluebear/#launching-the-bluebear-gui","title":"Launching the BlueBEAR GUI","text":"The BlueBEAR Portal options in the menu bar, 'Jobs', 'Clusters' and 'My Interactive Sessions' can be used to submit and edit jobs to run on the BlueBEAR cluster and to get information about your currently running jobs and interactive sessions. Some of these processes can be also executed using Code Server Editor (VS Code) accessible via Interactive Apps. We won\u2019t explore these options in detail now but some of these will be introduced later when needed.
For example, from the 'Clusters' option you can jump directly to a BlueBEAR terminal and, using this built-in terminal, submit data analysis jobs and/or employ your own containerised version of neuroimaging software rather than the software already available on BlueBEAR. We will cover containers, scripting and submitting jobs in later workshops. For now, just click on this option and see what happens; you can subsequently exit/close the terminal page.
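BlueBEAR jobs are managed by the Slurm scheduler. As a taste of what 'submitting a job' means, here is a hypothetical minimal job script; the qos, account and module names are assumptions to adapt, so check the BEAR documentation for the exact options for your project:

```shell
# Write a hypothetical minimal Slurm job script (options shown are examples).
cat > myjob.sh <<'EOF'
#!/bin/bash
#SBATCH --time=1:0:0
#SBATCH --qos=bbdefault
#SBATCH --account=chechlmy-chbh-mricn
module purge; module load bluebear
module load FSL/6.0.5.1-foss-2021a
echo "analysis commands go here"
EOF
echo "submit with: sbatch myjob.sh"
```

You would submit it from a terminal with sbatch myjob.sh and check its status with squeue.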
Finally, from the BlueBEAR Portal menu bar you can select 'Interactive Apps' and from there access various GUI applications you wish to use, including JupyterLab, RStudio, MATLAB and most importantly the BlueBEAR GUI, which we will be using to analyse MRI data in the subsequent workshops.
Please select 'BlueBEAR GUI'. This will bring up a page for you to specify options for your job to start the BlueBEAR GUI. You can leave some of these options as default. But please change \u201cNumber of Hours\u201d to 2 (our workshops will last 2 hours; for some other analysis tasks you might need more time) and make sure that the selected 'BEAR Project' is chechlmy-chbh-mricn
. Next click on Launch.
It will take a few minutes for the job to start. Once it's ready, you'll see an option to connect to the BlueBEAR GUI. Click on 'Launch BlueBEAR GUI'.
Once you have launched the BlueBEAR GUI, you are now in a Linux environment, on a Linux Desktop. The following section will guide you on navigating and using this environment effectively.
Re-launching the BlueBEAR GUI
In the main window of the BlueBEAR portal you will be able to see that you have an Interactive session running (the information above will remain there). This is important as if you close the Linux Desktop by mistake, you can click on Launch BlueBEAR GUI again to open it.
"},{"location":"workshop1/intro-to-linux/","title":"Introduction to Linux","text":"Linux is a computer Operating System (OS) similar to Microsoft Windows or Mac OS. Linux is very widely used in the academic world especially in the sciences. It is derived from one of the oldest and most stable and used OS platforms around, Unix. We use Linux on BlueBEAR. Many versions of Linux are freely available to download and install, including CentOS (Community ENTerprise Operating System) and Debian, which you might be familiar with. You can also use these operating systems with Microsoft Windows in Dual Boot Environment on your laptop or desktop computer.
Linux and neuroimaging
Linux is particularly suited for clustering computers together and for efficient batch processing of data. All major neuroimaging software runs on Linux. This includes FSL, SPM, AFNI, and many others. Linux, or some version of Unix, is used in almost all leading neuroimaging centres. Both MATLAB and Python also run well in a Linux environment.
If you work in neuroimaging, it is to your advantage to become familiar with Linux. The more familiar you are, the more productive you will become. For some of you, this might be a challenge. The environment will present a new learning experience, one that will take time and effort to learn. But in the end, you should hopefully realize that the benefits of learning to work in this new computer environment are indeed worth the effort.
Linux is not like the Windows or macOS environments. It is best used by typing commands into a Terminal client and by writing small batch command programs. Frequently you may not even need to use the mouse. Using the Linux environment may take some getting used to, but it will become more familiar throughout the course, as we use it to navigate our file system and to script our analyses. For now, we will simply explore the Linux terminal and some simple commands.
"},{"location":"workshop1/intro-to-linux/#using-the-linux-terminal","title":"Using the Linux Terminal","text":"BlueBEAR GUI enables to load various apps and applications by using the Linux environment and a built-in Terminal client. Once you have launched the BlueBEAR GUI, you will see a new window and from there you can open the Terminal client. There are different ways to open Terminal in BlueBEAR GUI window as illustrated below.
Either by selecting from the drop-down menu:
Or by selecting the folder at the bottom of the screen:
In either case you will load the terminal:
Once you have started the terminal, you will be able to load the required applications (e.g., to start the FSL GUI). FSL (FMRIB Software Library) is a neuroimaging software package we will be using in our workshops for MRI data analysis.
When using the BlueBEAR GUI Linux desktop, you can simultaneously work in four separate spaces/windows. For example, if you are planning on using multiple apps, rather than opening multiple terminals and apps in the same space, you can move to another space. You can do that by clicking on \u201cworkspace manager\u201d in Linux desktop window.
Linux is fundamentally a command line-based operating system, so although you can use the GUI interface with many applications, it is essential you get used to issuing commands through the Terminal interface to improve your work efficiency.
Make sure you have an open Terminal as per the instructions above. Note that a Terminal is a text-based interface, so generally the mouse is not much use. You need to get used to taking your hand off the mouse and letting go of it. Move it away, out of reach. You can then get used to using both hands to type into a Terminal client.
[chechlmy@bear-pg0210u07a ~]$
as shown above in the Terminal Client is known as the system prompt. The prompt usually identifies the user and the system hostname. You can type commands at the system prompt (press the Enter key after each command to make it run). The system then returns output based on your command to the same Terminal.
Try typing ls
in the Terminal.
This command tells Linux to print a list of the current directory's contents. We will return to basic Linux commands later; you should learn them in order to use BlueBEAR for neuroimaging data analysis.
Why bother with Linux?
You may wonder why you should invest the time to learn the names of the various commands needed to copy files, change directories and to do general things such as run image analysis programs via the command line. This may seem rather clunky. However, the commands you learn to run on the command line in a terminal can alternatively be written in a text file. This text file can then be converted to a batch script that can be run on data sets using the BlueBEAR cluster, potentially looping over hundreds or thousands of different analyses, taking many days to run. This is vastly more efficient and far less error prone than using equivalent graphical tools to do the same thing, one at a time.
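To make the batch idea concrete, here is a minimal sketch that loops over a set of made-up subject directories and runs the same command on each; plain ls stands in for a real analysis tool, and all names are invented for the demo:

```shell
#!/bin/bash
# Minimal sketch of batch processing: run the same command on every
# subject directory in turn. Subject names are made up for the demo,
# and ls stands in for a real analysis command.
mkdir -p demo/sub-01 demo/sub-02
touch demo/sub-01/scan.nii.gz demo/sub-02/scan.nii.gz

for subj in demo/sub-*; do
    echo "processing $subj: $(ls "$subj")"
done

rm -r demo   # tidy up the demo directories
```

The same loop pattern scales to hundreds of subjects; on BlueBEAR such a script would typically be submitted to the cluster rather than run by hand.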
When you open a new terminal window it opens in a particular directory. By default, this will be your home directory:
/rds/homes/x/xxx
or the Desktop folder in your home directory:
/rds/homes/x/xxx/Desktop
(where x is the first letter of your last name and xxx is your University of Birmingham ADF username).
On BlueBEAR files are stored in directories (folders) and arranged into a tree hierarchy.
Examples of directories on BlueBEAR include:
/rds/homes/x/xxx (your home directory)
/rds/projects/c/chechlmy-chbh-mricn (our module RDS project directory)
Directory separators on Linux and Windows
/ (forward slash) is the Linux directory separator. Note that this is different from Windows (where the backward slash \ is the directory separator).
The current directory is always called . (i.e. a single dot).
The directory above the current directory is always called .. (i.e. dot dot).
Your home directory can always be accessed using the shortcut ~ (the tilde symbol). Note that this is the same as /rds/homes/x/xxx.
You need to remember this to use and understand basic Linux Commands.
"},{"location":"workshop1/intro-to-linux/#basic-linux-commands","title":"Basic Linux Commands","text":"pwd (Print Working Directory)
In a Terminal type pwd
followed by the return (enter) key to find out the name of the directory where you are. You are always in a directory and can (usually) move to directories above you or below to subdirectories.
For example, if you type pwd in your terminal you will see /rds/homes/x/xxx (e.g., /rds/homes/c/chechlmy).
cd (Change Directory)
In a Terminal window, type cd
followed by the name of a directory to gain access to it. Keep in mind that you are always in a directory and normally are allowed access to any directories hierarchically above or below.
Type in your terminal the examples below:
cd /rds/projects
cd /rds/homes/
cd ..
(to change to the directory above using the .. shortcut)
To find out where you are now, type pwd (answer: /rds).
If the directory is not located above or below the current directory, then it is often less confusing to write out the complete path instead. Try this in your terminal:
cd /rds/homes/x/xxx/Desktop
(where x is the first letter of your last name and xxx is your ADF username)
Changing directories with full paths
Note that it does not matter what directory you are in when you execute this command, the directory will always be changed based on the full pathway you specified.
Remember that the tilde symbol ~
is a shortcut for your home directory. Try this:
cd /rds/projects
cd ~
pwd
You should be now back in your home directory.
ls (List Files)
The ls
command (lowercase L, S) allows you to see a summary list of the files and directories located in the current directory. Try this:
cd /rds/projects/c
ls
(you should now see a long list of various BEAR RDS projects)
Before moving to the next section, please close your terminal by clicking on "x" in the top right of the Terminal.
cp (Copy files/directories)
The cp
command will copy files and/or directories FROM a source TO a destination in the current working directory. This command will create the destination file if it doesn't exist. In some cases, to do that you might need to specify a complete path to a file location.
Here are some examples (please do not type them, they are only examples):
Command | Function
cp myfile yourfile | Basic file copy (in current directory)
cp data data_copy | Copy a file called data to data_copy (copying a directory requires -r)
cp -r ~fred/data . | Recursively copy fred's data dir (and its sub-directories) to the current dir
cp ~fred/fredsfile myfile | Copy remote file and rename it
cp ~fred/* . | Copy all files from fred's dir to the current dir
cp ~fred/test* . | Copy all files that begin with test, e.g. test, test1.txt
In the subsequent workshops we will practice using the cp command. For now, look at the examples above to understand its usage. There are also some exercises below to check your understanding.
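If you want to try cp safely before the exercises, here is a small self-contained practice session; all file and directory names are invented, and everything happens inside a scratch directory that is removed at the end:

```shell
# Safe cp practice: everything happens inside a scratch directory
# (all names are invented), removed again at the end.
mkdir cp_practice && cd cp_practice
echo "hello" > myfile
cp myfile yourfile          # basic copy with a new name
mkdir data && cp myfile data/
cp -r data data_copy        # -r copies the directory and its contents
ls data_copy                # shows: myfile
cd .. && rm -r cp_practice  # tidy up
```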
mv, rmdir and mkdir (Moving, removing and making files/directories)
The mv
command will move files FROM a source TO a destination. It works like copy, except the file is actually moved. If applied to a single file, this effectively changes the name of the file. (Note there is no separate renaming command in Linux). The command also works on directories.
Here are some examples (again please do not type these in):
Command | Function
mv myfile yourfile | renames file
mv ~/data/somefile somefile | moves file
mv ~/data/somefile yourfile | moves and renames
mv ~/data/* . | moves multiple files

There are also the mkdir and rmdir commands:
mkdir – to make a new directory, e.g. mkdir testdir
rmdir – to remove an empty directory, e.g. rmdir testdir
You can try these two commands. Open a new Terminal and type:
mkdir testdir
ls
In your home directory you will see now a new directory testdir
. Now type:
rmdir testdir
ls
You should notice that the testdir
has been removed from your home directory.
To remove a file you can use the rm
command. Note that once files are deleted at the command line prompt in a terminal window, unlike in Microsoft Windows, you cannot get files back from the wastebin.
e.g. rm junk.txt
(this is just an example, do not type it in your terminal)
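Because deletion is permanent, a safe way to practise rm is inside a scratch directory where nothing important can be lost (all names below are invented):

```shell
# rm practice inside a scratch directory (names are invented):
mkdir rm_demo && cd rm_demo
touch junk.txt
ls                 # junk.txt is there
rm junk.txt        # gone for good: no wastebin on the command line
ls                 # nothing listed
cd .. && rmdir rm_demo
```

If you want a safety net, rm -i asks for confirmation before each file is removed.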
Clearing your terminal
Often when running many commands, your terminal will be full and difficult to understand. To clear the terminal screen type clear
. This is an especially helpful command when you have been typing lots of commands and need a clean terminal to help you focus.
Note that most commands in Linux have a similar syntax: command name [modifiers/options] input output
The syntax of the command is very important. There must be spaces in between the different parts of the command. You need to specify input and output. The modifiers (in brackets) are optional and may or may not be needed depending on what you want to achieve.
For example, take the following command:
cp -r /rds/projects/f/fred/data ~/tmp
(This is an example, do not type this)
In the above example -r
is an option meaning 'recursive' often used with cp
and other commands, used in this case to copy a directory including all its content from one directory to another directory.
FSL (FMRIB Software Library) is a software library containing multiple tools for processing, statistical analyses, and visualisation of magnetic resonance imaging (MRI) data. Subsequent workshops will cover usage of some of the FSL tools for structural, functional and diffusion MRI data. This workshop only covers how to start the FSL app on the BlueBEAR GUI Linux desktop, and some practical aspects of using FSL, specifically running it in the terminal either in the foreground or in the background.
There are several different versions of FSL available on BlueBEAR. You can search for the available versions of FSL, as well as all other available software, using the following link: https://bear-apps.bham.ac.uk
From there you will also find information on how to load different software. Below you will find an example of loading one of the available versions of FSL.
To open FSL in the terminal, you first need to load the FSL module by typing a specific command in the Terminal.
First, either close the Terminal you have been previously using and open a new one, or simply clear it. Next, type:
module load FSL/6.0.5.1-foss-2021a
You will see various processes running in the terminal. Once these have stopped and you see a system prompt in the terminal, type:
fsl
This fsl
command will initiate the FSL GUI as shown below.
Now try typing ls
in the same terminal window and pressing return.
Notice how nothing appears to happen (your keystrokes are shown as being typed in but no actual event seems to be actioned). Indeed, nothing you type is being processed and the commands are being ignored. That is because the fsl
command is running in the foreground in the terminal window. Because of that it is blocking other commands from being run in the same terminal.
Now close the FSL GUI (click the 'Exit' button). Notice that control has been returned to the Terminal and that commands you type are now being acted on. Try typing ls again; it should now work in the Terminal.
Go back to the terminal window again, but this time type fsl &
at the system prompt and press return. Again, you should see the FSL GUI pop up.
Now try typing ls
in the same Terminal.
Notice that your new commands are now being processed. The fsl
command is now running in the background in the Terminal allowing you to run other commands in parallel from the same Terminal. Typing the &
after any command makes it run in the background and keeps the Terminal free for you to use.
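You can see the same foreground/background behaviour with any long-running command; in the sketch below, sleep is used as a harmless stand-in for fsl:

```shell
sleep 60 &    # '&' sends the command to the background; the prompt returns at once
jobs          # lists background jobs started from this terminal
ls            # the terminal stays free for other commands
kill %1       # stop background job number 1 when you are done with it
```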
Sometimes you may forget to type & after a command.
Try this now: type fsl (without the &) so that it is running in the foreground. Then press CTRL-Z in the terminal window to suspend it. You should get a message like "[1]+ Stopped fsl". You will notice that the FSL GUI is now unresponsive (try clicking on some of the buttons). The fsl process has been suspended.
Now type bg in the terminal window (followed by pressing the return key). This resumes the suspended process in the background. You should find the FSL GUI is now responsive again and input to the terminal works once more. If you clicked the 'Exit' button when the FSL GUI was unresponsive, FSL might close now.
Running and killing commands in the terminal
If, for some reason, you want to make the command run in the foreground then rather than typing bg
(for background) instead type fg
(for foreground). If you want to kill (rather than suspend) a command that was running in the foreground, press CTRL-c (CTRL key and c key).
Linux: some final useful tips
TIP 1:
When typing a command - or the name of a directory or file - you never need to type everything out. The terminal will auto-complete the command or file name if you press the TAB key as you go along. Try using the TAB key when typing commands or the complete path to a specific directory.
TIP 2:
If you need help understanding what the options are, or how to use a command, try adding --help
to the end of your command. For example, for better understanding of the du
options, type:
du --help [enter]
TIP 3:
There are many useful online lists of these various commands, for example: www.ss64.com/bash
Exercise: Basic Linux commands
Please complete the following exercises; you should hopefully know which Linux commands to use!
1. Clear your terminal.
2. cd back to your home directory.
3. Confirm that you are in your home directory.
4. Make a new directory called test.
5. Rename test to test1 and make another directory called test2.
6. Copy test1 to your folder on the module's RDS project (i.e., /rds/projects/c/chechlmy-chbh-mricn/xxx).
7. Remove the test1 and test2 directories and confirm it.
If unsure, check your results with someone else or ask for help!
The correct commands are provided below. (click to reveal)
Linux Commands Exercise (Answers)
clear
cd ~
or cd /rds/homes/x/xxx
pwd
mkdir test
mv test test1
mkdir test2
cp -r test1 /rds/projects/c/chechlmy-chbh-mricn/xxx/
or mv test1 /rds/projects/c/chechlmy-chbh-mricn/xxx/
rm -r test1 test2
ls
Workshop 1: Further Reading and Reference Material
Here are some additional resources that introduce users to Linux:
A copy of these workshop notes can be found on Canvas 39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience in the Week 01 workshop materials.
"},{"location":"workshop1/workshop1-intro/","title":"Workshop 1 - Introduction to BlueBEAR and Linux","text":"Welcome to the first workshop of the MRICN course!
Overview of Workshop 1
Topics for this workshop include:
Pre-requisites for the workshop
Please ensure that you have completed the 'Setting Up' section of this course, as you will require access to the BEAR Portal for this workshop.
A copy of these workshop notes can be found on Canvas 39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience in the Week 01 workshop materials.
"},{"location":"workshop2/mri-data-formats/","title":"Working with MRI Data - Files and Formats","text":"MRI Image Fundamentals
When you acquire an MRI image of the brain, in most cases it is either a 3D image, i.e., a volume acquired at a single timepoint (e.g., T1-weighted, FLAIR scans), or a 4D multi-volume image acquired as a timeseries (e.g., fMRI scans). Each 3D volume consists of multiple 2D slices, which are individual images.
The volume consists of 3D voxels, with a typical size between 0.25 and 4 mm, but not necessarily the same in all three directions. For example, you can have a voxel size of [1mm x 1mm x 1mm] or [0.5mm x 0.5mm x 2mm]. The voxel size represents the image resolution.
The final important feature of an MRI image is the field of view (FOV): the extent of the acquired voxel matrix, i.e., the voxel size multiplied by the number of voxels in each direction. It provides information about the coverage of the brain in your MRI image. The FOV is sometimes provided for the entire 3D volume or for an individual 2D slice. Sometimes, the FOV is defined based on slice thickness and the number of acquired slices.
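As a quick worked example with made-up acquisition values, the FOV can be computed directly from voxel size and matrix size:

```shell
# In-plane FOV: a 96 x 96 slice matrix at 2.5 mm voxel size
awk 'BEGIN { print 96 * 2.5 " mm" }'    # prints: 240 mm
# Through-plane coverage: 40 slices of 3 mm thickness
awk 'BEGIN { print 40 * 3 " mm" }'      # prints: 120 mm
```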
Image and standard space
When you acquire MRI images of the brain, you will find that these images differ in terms of head position, image resolution and FOV, depending on the sequence and data type (e.g., T1 anatomical, diffusion MRI, fMRI). We often use the term "image space" to describe these differences, i.e., structural (T1), diffusion or functional space.
In addition, we also use the term "standard space" to represent the standard dimensions and coordinates of a template brain, which are used when reporting results of group analyses. Our brains differ in size and shape, and thus for the purpose of our analyses (both single-subject and group-level) we need to use standard space. The most common brain template is the MNI152 brain (an average of 152 healthy brains).
The process of alignment between different image spaces is called registration or normalization, and its purpose is to make sure that voxel and anatomical locations correspond to the same parts of the brain for each image type and/or participant.
"},{"location":"workshop2/mri-data-formats/#mri-data-formats","title":"MRI Data Formats","text":"MRI scanners collect MRI data in an internal format that is unique to the scanner manufacturer, e.g., Philips, Siemens or GE. The manufacturer then allows you to export the data into a more usable intermediate format. We often refer to this intermediate format as raw data as it is not directly usable and needs to be converted before being accessible to most neuroimaging software packages.
The most common format used by various scanner manufacturers is the DICOM format. DICOM images corresponding to a single scan (e.g., a T1-weighted scan) might be one large file or multiple files (one per volume or one per slice acquired). This will depend on the scanner and the data server used to retrieve/export data from the scanner. There are other data formats, e.g., PAR/REC, that are specific to Philips scanners. The raw data needs to be converted into a format that the analysis packages can use.
Retrieving MRI data at the CHBH
At CHBH we have a Siemens 3T PRISMA scanner. When you acquire MRI scans at CHBH, data is pushed directly to a data server in the DICOM format. This should be automatic for all research scans. In addition, for most scans, this data is also directly converted to NIfTI format. So, at the CHBH you will likely retrieve MRI data from the scanner in NIfTI format.
NIfTI (Neuroimaging Informatics Technology Initiative) is the most widely used format for MRI data, accessible by the majority of neuroimaging software packages, e.g., FSL or SPM. Another older data format which is still sometimes used is Analyze (with each image consisting of two files .img
and .hdr
).
NIfTI format files have either the extension .nii
or .nii.gz
(compressed .nii
file), where there is only one NIfTI image file per scan. DICOM files usually have a suffix of .dcm
, although these files might be additionally compressed with gzip
as .dcm.gz
files.
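Since a .nii.gz file is simply a gzip-compressed .nii file, the standard gzip tools apply. Here is a quick sketch with a stand-in file (not real MRI data; the filename is invented):

```shell
printf 'not a real scan' > T1.nii   # stand-in file, not real MRI data
gzip T1.nii         # compresses to T1.nii.gz (the original is replaced)
gzip -l T1.nii.gz   # list compressed and uncompressed sizes
gunzip T1.nii.gz    # decompress back to T1.nii
rm T1.nii           # tidy up
```

In practice most neuroimaging tools (including FSL) read .nii.gz directly, so you rarely need to decompress by hand.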
We will now convert some DICOM images to NIfTI ourselves, using data collected at the CHBH.
Servers do not always provide MRI data as NIfTIs
While at the CHBH you can download the MRI data in NIfTI format, this might not be the case at some other neuroimaging centres. Thus, you should learn how to do the conversion yourself.
The data is located in /rds/projects/c/chechlmy-chbh-mricn/module_data/CHBH
.
First, log into the BlueBEAR Portal and start a BlueBEAR GUI session (2 hours). Open a new terminal window and navigate to your MRICN project folder:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx
[where XXX=your ADF username]
Next copy the data from CHBH scanning sessions:
cp -r /rds/projects/c/chechlmy-chbh-mricn/module_data/CHBH .
pwd
After typing pwd
, the terminal should show /rds/projects/c/chechlmy-chbh-mricn/xxx
(i.e., you should be inside your MRICN project folder).
Then type:
cd CHBH
ls
You should see data from 3 scanning sessions. Note that there are two files per scan session. One is labelled XXX_dicom.zip
. This contains the DICOM files of all data from the scan session. The other file is labelled XXX_nifti.zip
. This contains the NIfTI files of the same data, converted from DICOM.
In general, both DICOM and NIfTI data should always be copied from the server and saved by the researcher after each scan session. The DICOM file is needed in case there are problems with the automatic conversion to NIfTI. However, most of the time the only file you will need to work with is the XXX_nifti.zip
file containing NIfTI versions of the data.
We will now unpack some of the data to explore the data structure. In your terminal, type:
unzip 20191008#C4E7_nifti.zip
cd 20191008#C4E7_nifti
ls
You should see six files listed as below, corresponding to 3 scans (two fMRI scans and one structural scan):
2.5mm_2000_fMRI_v1_6.json
2.5mm_2000_fMRI_v1_6.nii.gz
2.5mm_2000_fMRI_v1_7.json
2.5mm_2000_fMRI_v1_7.nii.gz
T1_vol_v1_5.json
T1_vol_v1_5.nii.gz
JSON files
You may have noticed that for each scan file (NIfTI file, .nii.gz), there is also an autogenerated .json file. This is an information file (in an open standard format) that contains important information for our data analysis. For example, the 2.5mm_2000_fMRI_v1_6.json file contains slice timing information about the exact point in time during the 2s TR (repetition time) when each slice is acquired, which can be used later in fMRI pre-processing. We will come back to this later in the course.
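If you are curious what such a sidecar looks like, you can inspect it with ordinary text tools. The sketch below creates a toy JSON file for illustration; the field names follow the convention written by dcm2niix, but the values here are invented:

```shell
# Create a toy sidecar file for illustration (values are invented):
cat > example.json <<'EOF'
{
  "RepetitionTime": 2,
  "SliceTiming": [0, 1.0, 0.05, 1.05]
}
EOF
grep '"RepetitionTime"' example.json   # quick look at a single field
rm example.json
```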
For now, let's look at another dataset. In your terminal type:
cd ..
unzip 20221206#C547_nifti.zip
cd 20221206#C547_nifti
ls
You should now see a list of 10 files, corresponding to 3 scans (two diffusion MRI scans and one structural scan). For each diffusion scan, in addition to the .nii.gz
and .json
files, there are two additional files, .bval
and .bvec
that contain important information about gradient strength and gradient directions (as mentioned in the MRI physics lecture). These two files are also needed for later analysis (of diffusion MRI data).
We will now look at a method for converting data from the DICOM format to NIfTI.
cd ..
unzip 20191008#C4E7_dicom.zip
cd 20191008#C4E7_dicom
ls
You should see a list of 7 sub-directories. Each top level DICOM directory contains sub-directories with each individual scan sequence. The structure of DICOM directories can vary depending on how it is stored/exported on different systems. The 7 sub-directories here contain data for four localizer scans/planning scans, two fMRI scans and one structural scan. Each sub-directory contains several .dcm
files.
There are several software packages which can be used to convert DICOM to NIfTI, but dcm2niix
is the most widely used. It is available as standalone software, or as part of MRIcroGL, a popular brain visualization tool similar to FSLeyes. dcm2niix
is available on BlueBEAR, but to use it you need to load it first using the terminal.
To do this, in the terminal type:
module load bear-apps/2022b
Wait for the apps to load and then type:
module load dcm2niix/1.0.20230411-GCCcore-12.2.0
To convert the .dcm
files in one of the sub-directories to NIfTI using dcm2niix
from terminal, type:
dcm2niix T1_vol_v1_5
If you now check the T1_vol_v1_5
sub-directory, you should find there a single .nii
file and a .json
file.
Converting more MRI data
Now try converting to NIfTI the .dcm files from the scanning session 20221206#C547, which has 3 DICOM sub-directories: the two diffusion scans diff_AP and diff_PA, and one structural scan, MPRAGE.
To do this, you will first need to change current directory, unzip, change directory again and then run the dcm2niix
command as above.
If you have done it correctly you will find .nii
and .json
files generated in the structural sub-directories, and in the diffusion sub-directories you will also find .bval
and .bvec
files.
Now that we have our MRI data in the correct format, we will take a look at the brain images themselves using FSLeyes.
"},{"location":"workshop2/visualizing-mri-data/","title":"MRI data visualization with FSLeyes","text":"FSL (FMRIB Software Library) is a comprehensive neuroimaging software library for the analysis of structural and functional MRI data. FSL is widely used and freely available; it runs on both Linux and Mac OS, as well as on Windows via a virtual machine.
FSLeyes is the FSL viewer for 3D and 4D data. FSLeyes is available on BlueBEAR, but you need to load it first. You can load FSLeyes as a standalone application, but as it is often used with other FSL tools, you will often want to load both (FSL and FSLeyes).
In this session we will only be loading FSLeyes by itself, and not with FSL.
FSL Wiki
Remember that the FSL Wiki is an important source for all things FSL!
"},{"location":"workshop2/visualizing-mri-data/#getting-started-with-fsleyes","title":"Getting started with FSLeyes","text":"Assuming that you have started directly from the previous page, first close your previous terminal (to close dcm2niix). Then open a new terminal and, to navigate to the correct folder, type in your terminal:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH
To open FSLeyes, type:
module load FSL/6.0.5.1-foss-2021a-fslpython
There are different versions of FSL on BlueBEAR; however, this is the one you need in order to use FSL together with FSLeyes.
Wait for FSL to load and then type:
module load FSLeyes/1.3.3-foss-2021a
Again, wait for FSLeyes to load (it may take a few minutes). After this, to open FSLeyes, type in your terminal:
fsleyes &
The importance of '&'
Why do we type fsleyes &
instead of fsleyes
?
You should then see the setup below, which is the default FSLeyes viewer without an image loaded.
You can now load/open an image to view. Click 'File' → 'Add from file' (and then select the file in your directory, e.g., /rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH/visualization/T1.nii
).
You can also type directly in the terminal fsleyes file.nii.gz
where you replace file.nii.gz
with the name of the actual file you want to open. However, you will need to include the full path to the file if you are not in the same directory when you open the terminal window, e.g. fsleyes /rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH/visualization/T1.nii
You should now see a T1 scan loaded in ortho view with three canvases corresponding to the sagittal, coronal, and axial planes.
Please now explore the various settings in the ortho view panel:
Also notice the abbreviations on the three canvases:
FSL comes with a collection of NIfTI standard templates, which are used for image registration and normalisation (part of MRI data analysis). You can also load these templates in FSLeyes.
To load a template, click 'File' → 'Add Standard' (for example select the file named MNI152_T1_2mm.nii.gz
. If you still have the T1.nii
image open, first close this image (by selecting 'Overlay' → 'Remove') and then load the template.
The image below depicts the various tools that you can use on FSLeyes, give them a go!
We will now look at fMRI data. First close the previous image ('Overlay' → 'Remove') and then load the fMRI image. To do this, click 'File' → 'Add from file' and then select the file /rds/projects/c/chechlmy-chbh-mricn/xxx/CHBH/visualization/2.5mm_2000_fMRI.nii.gz
.
Your window should now look like this:
Remember this fMRI data file is a 4D image \u2013 a set of 90-odd volumes representing a timeseries. To cycle through volumes, use the up/down buttons or type in a volume in the 'Volume' box to step through several volumes.
Now try playing the 4D file in 'Movie' mode by clicking this button. You should see some slight head movement over time. Click the button again to stop the movie.
As the fMRI data is 4D, this means that every voxel in the 3D-brain has a timecourse associated with it. Let's now have a look at this.
Keeping the same dataset open (2.5mm_2000_fMRI.nii.gz
) and now in the FSLeyes menu, select 'View' → 'Time series'.
FSLeyes should now look like the picture below.
What exactly are we looking at?
The functional image displayed here is the data straight from the scanner, i.e., raw, un-preprocessed data that has not been analyzed. In later workshops we will learn how to view analyzed data e.g., display statistical maps etc.
You should see a timeseries shown at the bottom of the screen corresponding to the voxel that is selected in the main viewer. Move the mouse to select other voxels to investigate how variable the timecourse is.
Within the timeseries window, hit the '+' button to show the 'Plot List' characteristics for this timeseries.
Compare the timeseries in different parts of the brain, just outside the brain (skull and scalp), and in the airspace outside the skull. You should observe that these have very different mean intensities.
The timeseries of multiple different voxels can be compared using the '+' button. Hit '+' and then select a new voxel. Characteristics of the timeseries such as plotting colour can also be changed using the buttons on the lower left of the interface.
"},{"location":"workshop2/visualizing-mri-data/#atlas-tools","title":"Atlas tools","text":"FSL comes not only with a collection of NIfTI standard templates but also with several built-in atlases, both probabilistic and histological (anatomical), comprising cortical, sub-cortical, and white matter parcellations. You can explore the full list of included atlases here.
We will now have a look at some of these atlases.
Firstly, close all open files in FSLeyes (or close FSLeyes altogether and start it up again in your terminal by running fsleyes &
).
In the FSLeyes menu, select 'File' → 'Add Standard' and then choose the file called MNI152_T1_2mm.nii.gz (this is a template brain in MNI space).
(this is a template brain in MNI space).
The MNI152 atlas
Remember that the MNI152 atlas is a standard brain template created by averaging 152 MRI scans of healthy adults, widely used as a reference space in neuroimaging research.
Now select from the menu 'Settings' → 'Ortho View 1' and tick the box for 'Atlases' at the bottom.
You should now see the 'Atlases' panel open as shown below.
The 'Atlases' panel is organized into three sections:
The 'Atlas information' tab provides information about the current display location, relative to one or more atlases selected in this tab. We will soon see how to use this information.
The 'Atlas search' tab can be used to search for specific regions by browsing through the atlases. We will later look how to use this tab to create region-of-interest (ROI) masks.
The 'Atlas management' tab can be used to add or delete atlases. This is an advanced feature, and we will not be using it during our workshops.
We will now have a look at how to work with FSL atlases. First we need to choose some atlases to reference. In the 'Atlases' → 'Atlas Information' window (bottom of screen in middle panel) make sure the following are ticked:
Now let's select a point in the standard brain. Move the cursor to the voxel position: [x=56, y=61, z=27] or enter the voxel location in the 'Location' window (2nd column).
MNI Co-ordinate Equivalent
Note that the equivalent MNI coordinates (shown in the 1st column/Location window) are [-22,-4,-18].
It may not be immediately obvious what part of the brain you are looking at. Look at the 'Atlases' window. The report should say something like:
Harvard-Oxford Cortical Structural Atlas
Harvard-Oxford Subcortical Structural Atlas
98% Left Amygdala
Checking the brain region with other atlases
What do the Juelich Histological Atlas & Talairach Daemon Labels report?
The Harvard-Oxford and Juelich are both probabilistic atlases. They report the percentage likelihood that the area named matches the point where the cursor is.
The Talairach Atlas is a simpler labelling atlas. It is based on a single brain (of a 60-year-old French woman) and is an example of a deterministic atlas. It reports the name of the nearest label to the cursor coordinates.
From the previous reports, particularly the Harvard-Oxford Subcortical Atlas and the Juelich Atlas, it should be obvious that we are most likely in the left amygdala.
Now click the '(Show/Hide)' link after the Left Amygdala result (as shown below):
This shows the (max) volume that the probabilistic Harvard-Oxford Subcortical Atlas has encoded for the Left Amygdala. The cursor is right in the middle of this volume.
In the 'Overlay list' click and select the top amygdala overlay. You will note that the min/max display ranges are set to 0 and 100; if they are not, change them to 0 and 100. These reflect the % likelihood of the labelling being correct.
If you increase the min value from 0% to 50%, then you will see the size of the probability volume for the left amygdala will decrease.
It now shows only the volume where there is a 50% or greater probability that this label is correct.
Click the (Show/Hide) link after the Left Amygdala; the amygdala overlay will disappear.
Exercise: Coordinate Localization
Have a go at localizing exactly what the appropriate label is for these coordinates:
If unsure check your results with someone else, or ask for help!
Make sure all overlays are closed (but keep the MNI152_T1_2mm.nii.gz
open) before moving to the next section.
It is often helpful to locate where a specific structure is in the brain and to visually assess its size and extent.
Let's suppose we want to visualize where Heschl's Gyrus is. In the bottom 'Atlases' window, click on the second tab ('Atlas search').
In the Search box, start typing the word 'Heschl...'. You should find that the system quickly locates an entry for Heschl's Gyrus in the Harvard-Oxford Cortical Atlas. Click on it to select it.
Now if you tick the box immediately below, next to Heschl's Gyrus, an overlay will be added to the 'Overlay' list at the bottom (see below). Heschl's Gyrus should now be visible in the main image viewer.
Now click on the '+' button next to the tick box. This will centre the viewing coordinates to be in the middle of the atlas volume (see below).
Exercise: Atlas visualization
Now try this for yourself:
You can change the colour of the overlays by selecting the option below:
Other options also exist to help you navigate the brain and recognize the different brain structures and their relative positions.
Make sure you have first closed/removed all previous overlays. Now, select the 'Atlas Search' tab in the 'Atlases' window again. This time, in the left panel listing the different atlases, tick only one of the atlases, such as the Harvard-Oxford Cortical Structural Atlas, and make sure all the others are unticked.
Now you should see all of the areas covered by the Harvard-Oxford cortical atlas shown on the standard brain. As you click around with the cursor, the labels for the different areas can be seen in the bottom right panel.
In addition to atlases covering various grey matter structures, there are also two white matter atlases: the JHU ICBM-DTI-81 white-matter labels atlas and the JHU white-matter tractography atlas. If you tick (select) these atlases as per the previous instructions (hint: use the 'Atlas search' tab), you will see a list of all the included white matter tracts (pathways), as shown below:
"},{"location":"workshop2/visualizing-mri-data/#using-atlas-tools-to-create-a-region-of-interest-mask","title":"Using atlas tools to create a region-of-interest mask","text":"
You can also use atlas tools in FSLeyes not only to locate specific brain structures but also to create masks for ROI (region-of-interest) analysis. We will now create ROI masks (one grey matter mask and one white matter mask) using FSL tools and built-in atlases.
To start, please close 'FSLeyes' entirely, either by clicking 'x' in the right corner of the FSLeyes window or by selecting 'FSLeyes' \u2192 'Close'. Then close your current terminal and open a new terminal window.
Then do the following:
Make a new directory called ROImasks. Navigate into this directory. Load fsl and open FSLeyes in the background. Here are the commands to do this:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/
mkdir ROImasks
cd ROImasks
module load FSL/6.0.5.1-foss-2021a-fslpython
module load FSLeyes/1.3.3-foss-2021a
fsleyes &
Wait for FSLeyes to load, then add the Middle Frontal Gyrus overlay using the 'Atlas search' tab (as you did for Heschl's Gyrus above). Select this overlay (harvardoxford-cortical_prob_Middle_Frontal_Gyrus) from the 'Overlay' list and save it in your ROImasks directory as MFG (select 'Overlay' → 'Save' → Name: MFG). You should now see the MFG overlay in the overlay list (as below) and have a MFG.nii.gz file in the ROImasks directory. You can check this by typing ls in the terminal.
We will now create a white matter mask. The steps are the same: add the chosen white matter tract overlay using the 'Atlas search' tab and save it in your ROImasks directory as FM ('Overlay' → 'Save' → Name: FM). You should now see the FM overlay in the overlay list (as below) and also have a FM.nii.gz file in the ROImasks directory.
You now have two "probabilistic ROI masks". To use these masks for various analyses, you need to first binarize these images.
Why binarize?
Why do you think we need to binarize the mask first? There are several reasons, but primarily it creates clear boundaries between regions which simplifies our statistical analysis and reduces computation.
To do this, first close FSLeyes. Make sure that you are in the ROImasks directory and check that you have the two masks. If you type pwd in the terminal, you should get the output /rds/projects/c/chechlmy-chbh-mricn/xxx/ROImasks (where xxx = your ADF username), and when you type ls, you should see FM.nii.gz and MFG.nii.gz.
To binarize the masks, you can use one of the FSL tools for image manipulation, fslmaths. The basic structure of an fslmaths command is:

fslmaths <input image> [modifiers/options] <output>
Type in your terminal:
fslmaths FM.nii.gz -bin FM_binary
fslmaths MFG.nii.gz -bin MFG_binary
This simply takes your ROI mask, binarizes it and saves the binarized mask with the _binary suffix.
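The effect of -bin can be sketched in a few lines of Python (a toy illustration with made-up voxel values standing in for an image, not FSL code): every nonzero voxel becomes 1.

```python
# Toy illustration of fslmaths -bin: every nonzero voxel becomes 1.
# The "image" here is just a hypothetical list of probability values (0-100).
voxels = [0, 12, 48, 95, 0, 3]

binary_mask = [1 if v != 0 else 0 for v in voxels]
print(binary_mask)  # [0, 1, 1, 1, 0, 1]
```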
You should now have 4 files in the ROImasks directory.
Now open FSLeyes and examine one of the binary masks you just created. First load a template (Click 'File' \u2192 'Add Standard' \u2192 'MNI152_T1_2mm') and add the binary mask (e.g., Click 'File' \u2192 'Add from file' \u2192 'FM_binary.nii.gz').
You can see the difference between the probabilistic and binarized ROI masks below:
Probabilistic ROI mask
Binary ROI mask
To use ROI masks in your analysis, you might also need to threshold them, i.e., to change/restrict the probability of the volume. We previously did this manually for the amygdala (e.g., from 0-100% to 50-100%). The choice of threshold might depend on the type of analysis and the type of ROI mask you need to use. The instructions below explain how to threshold and binarize your ROI image in a single step using fslmaths.
Open your terminal and make sure that you are in the ROImasks directory (pwd). To both threshold and binarize the MFG mask, type:
fslmaths MFG.nii.gz -thr 25 -bin MFGthr_binary
(the -thr option zeroes every voxel below a specific number, in this case 25, corresponding to 25% probability)
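In the same toy Python terms (made-up values, purely illustrative), -thr 25 zeroes every voxel below 25 and -bin then maps the survivors to 1:

```python
# Toy illustration of fslmaths <in> -thr 25 -bin <out>:
# voxels below the threshold are zeroed, then nonzero voxels become 1.
voxels = [0, 12, 48, 95, 0, 3]

thresholded = [v if v >= 25 else 0 for v in voxels]  # -thr 25
mask = [1 if v != 0 else 0 for v in thresholded]     # -bin
print(mask)  # [0, 0, 1, 1, 0, 0]
```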
Now let's compare the thresholded and unthresholded MFG binarized masks.
Load both binarized MFG masks (MFG_binary.nii.gz and MFGthr_binary.nii.gz) and, to avoid confusion, change the colour of the second mask to blue. You can either toggle its visibility on and off (click the eye icon) to compare the masks or use the 'Opacity' button. You can see the difference in size between the two below:
Binarized MFG mask
Binarized and thresholded MFG mask
Exercise: Atlases and masks
Have a go at the following exercises:
If unsure, check your results with someone else or ask for help!
Workshop 2: Further Reading and Reference Material
FSLeyes is not the only MRI visualization tool available. Here are some others:
More details of what is available on BEAR at the CHBH can be found at the BEAR Technical Docs website.
"},{"location":"workshop2/workshop2-intro/","title":"Workshop 2 - MRI data formats, data visualization and atlas tools","text":"Welcome to the second workshop of the MRICN course! Prior lectures introduced you to the basics of the physics and technology behind MRI data acquisition. In this workshop we will explore MRI image fundamentals, MRI data formats, data visualization and atlas tools.
Overview of Workshop 2
Topics for this workshop include:
You will need this information before you can analyse data, regardless of whether you are using structural or functional MRI data.
For the purpose of the module we will be using BlueBEAR. You should remember from Workshop 1 how to access the BlueBEAR Portal and use the BlueBEAR GUI.
You have already been given access to the RDS project, rds/projects/c/chechlmy-chbh-mricn. Inside the module's RDS project, you will find that you have a folder labelled xxx (xxx = University of Birmingham ADF username). If you navigate to that folder (rds/projects/c/chechlmy-chbh-mricn/xxx), you will be able to perform the various file operations from there during workshops.
A copy of these workshop notes can be found on Canvas (39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience) in the Week 02 workshop materials.
"},{"location":"workshop3/diffusion-intro/","title":"Diffusion MRI basics - visualization and preprocessing","text":"In this workshop and next week's workshop, we will follow some basic steps in the diffusion MRI analysis pipeline below. The instructions here are specific to tools available in FSL; however, other neuroimaging software packages can be used to perform similar analyses. You might also recall from lectures that models other than the diffusion tensor and methods other than probabilistic tractography are also often used.
FSL diffusion MRI analysis pipeline
First, if you have not already, log into the BlueBEAR Portal and start a BlueBEAR GUI session (2 hours). You should know how to do this from the previous workshops. Open a new terminal window and navigate to your MRICN project folder:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx
[where XXX=your ADF username]
Please check your directory by typing pwd. This should return: /rds/projects/c/chechlmy-chbh-mricn/xxx.
Where has all my data gone?
Before this workshop, any old directories and files from previous workshops were removed (you will not need them for subsequent workshops, and storing unnecessary data would result in exceeding the allocated quota). Your xxx directory should therefore be empty.
Next you need to copy over the data for this workshop.
cp -r /rds/projects/c/chechlmy-chbh-mricn/module_data/diffusionMRI/ .
(make sure you do not omit the spaces or the final .)
This might take a while, but once it has completed, change into that downloaded directory:
cd diffusionMRI
(inside your xxx subdirectory you should now have the folder diffusionMRI)
Type ls. You should now see three subdirectories/folders (DTIfit, TBSS and tractography). Change into the DTIfit folder:
cd DTIfit
We will first look at what diffusion images look like and explore text files which contain information about gradient strength and gradient directions.
In your terminal type ls. This should return:
p01/
p02/
So, the folder DTIfit contains data from two participants, contained within the p01 and p02 folders.
Inside each folder (p01 and p02) you will find a T1 scan, uncorrected diffusion data (blip_up.nii.gz, blip_down.nii.gz) acquired with two opposing PE-directions (AP/blip_up and PA/blip_down), and corresponding bvals (e.g., blip_up.bval) and bvecs (e.g., blip_up.bvec) files.
bvals files contain b-values (scalar values for each applied gradient). bvecs files contain a list of gradient directions (diffusion encoding directions), including a [3x1] vector for each gradient. The number of entries in the bvals and bvecs files equals the number of volumes in the diffusion data files.
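As a rough sketch of that layout, the following Python snippet parses a made-up pair of bvals/bvecs text snippets (4 volumes; the numbers are invented for illustration) to count the b0 volumes and read off one gradient direction:

```python
# Toy bvals/bvecs in the plain-text layout FSL uses: bvals is one row of
# b-values, bvecs is three rows (x, y, z) with one column per volume.
# The numbers here are made up for illustration (4 volumes).
bvals_text = "0 1500 1500 0"
bvecs_text = """0 0.578 -0.231 0
0 0.671  0.944 0
0 0.464 -0.236 0"""

bvals = [float(b) for b in bvals_text.split()]
bvecs = [[float(x) for x in row.split()] for row in bvecs_text.splitlines()]

n_b0 = sum(1 for b in bvals if b == 0)      # non-diffusion (b=0) volumes
direction = [row[1] for row in bvecs]       # gradient direction of volume 1
print(n_b0, direction)  # 2 [0.578, 0.671, 0.464]
```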
Finally, inside p01 and p02 there is also a subdirectory, data, with distortion-corrected diffusion images.
We will start by viewing the uncorrected data. Please navigate inside the p01 folder, open FSLeyes and then load one of the uncorrected diffusion images:
cd p01
module load FSL/6.0.5.1-foss-2021a-fslpython
module load FSLeyes/1.3.3-foss-2021a
fsleyes &
The image you have loaded is 4D and consists of 64 volumes acquired with different diffusion encoding directions. Some of the volumes are non-diffusion images (b-value = 0), while most are diffusion weighted images. The first volume, which you can see after loading the file, is a non-diffusion weighted image as demonstrated below.
Viewing separate volumes
You can view the separate volumes by changing the number in the Volume box or by playing movie mode. Note that the volume count starts from 0. You should also note that there are significant differences in image intensity between different volumes.
Now go back to volume 0 and - if needed - stop movie mode. In the non-diffusion weighted image, the ventricles containing CSF are bright and the rest of the image is relatively dark. Now change the volume number to 2, which is a diffusion weighted image (with a b-value of approximately 1500).
The intensity of this volume is different. To see anything, please change max. intensity to 400. Now the ventricles are dark and you can see some contrast between different voxels.
Let's view the content of the bvals and bvecs files using the cat command. In your terminal type:

cat blip_down.bval
The first number is 0. This confirms that the first volume (volume 0) is indeed a non-diffusion weighted image, and that the third volume (volume 2) is a diffusion weighted volume with b=1500. Based on the content of this bval file, you should be able to tell how many diffusion-weighted volumes were acquired and how many were acquired without any diffusion weighting (b0 volumes).
Comparing diffusion-weighted volumes
Please compare this with the file you loaded into FSLeyes.
Now type:
cat blip_down.bvec
You should now see 3 separate rows of numbers representing the diffusion encoding directions (3x1 vector for each acquired volume; x,y,z directions) and that for volume 2 the diffusion encoding is represented by the vector [0.578, 0.671, 0.464].
Distortion correction
As explained in the lectures, diffusion imaging suffers from various distortions (susceptibility, eddy-current and movement induced distortions). These need to be corrected before further analysis. The most noticeable geometric distortions are susceptibility-induced distortions caused by field inhomogeneities, and so we will have a closer look at these.
All types of distortions need correcting during the pre-processing steps of diffusion imaging analysis. FSL includes two tools used for distortion correction, topup and eddy. Processing with these two tools is time- and compute-intensive, so we will not run the distortion correction steps in the workshop but instead explore some of the principles behind them.
For this reason, you are given distortion-corrected data to conduct the further analysis: diffusion tensor fitting and probabilistic tractography.
First close the current image in FSLeyes ('Overlay' → 'Remove') and load both uncorrected images (blip_up.nii.gz, blip_down.nii.gz) acquired with two opposing PE-directions (PE = phase encoding).
Compare the first volumes in each file. To do that you can either toggle the visibility on and off (click the eye icon) or use the 'Opacity' button (you should remember from the previous workshop how to do this).
The circled area indicates the differences in susceptibility-induced distortions between the two images acquired with two opposing PE-directions.
Now change the max. intensity to 400 and compare the third volumes in each file. Again, the circled area indicates the differences in distortions between the two images acquired with the two opposing PE-directions.
Finally, we will look at distortion corrected data. First close the current image ('Overlay' \u2192 'Remove').
Now in FSLeyes load data.nii.gz (the distortion-corrected diffusion image located inside the data subdirectory) and have a look at one of the non-diffusion weighted and one of the diffusion-weighted volumes.
Comparing corrected to uncorrected diffusion-weighted volumes
Can you tell the difference between the corrected and the uncorrected diffusion images?
Further examining the difference between uncorrected and corrected diffusion data
In your own time (outside of this workshop, as part of independent study), load both the corrected and uncorrected data for p01 and compare them using the 'Volume' box or 'Movie' mode. Also explore the data in the p02 folder using the instructions above.
In the next part of the workshop, we will look at FSL's Brain Extraction Tool (BET).
Brain extraction is a necessary pre-processing step, which removes non-brain tissue from the image. It is applied to structural images prior to tissue segmentation and is needed to prepare anatomical scans for registration of functional MRI or diffusion scans to MNI space. BET can be also used to create binary brain masks (e.g., brain masks needed to run diffusion tensor fitting, DTIfit).
In this workshop we will look only at creating a binary brain mask, as required for DTIfit. In subsequent workshops we will look at using BET for removing non-brain tissue from diffusion and T1 scans ("skull-stripping") in preparation for registration.
First close FSLeyes and to make sure you do not have any processes running in the background, close your current terminal.
Open a new terminal window, navigate to the p02
subdirectory, and load FSL and FSLeyes again:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit/p02
module load FSL/6.0.5.1-foss-2021a-fslpython
module load FSLeyes/1.3.3-foss-2021a
Now check the content of the p02 subdirectory by typing ls. You should get the response bvals, bvecs and data.nii.gz.
From data.nii.gz (the distortion-corrected diffusion 4D image) we will extract a single volume without diffusion weighting (e.g., the first volume). You can extract it using one of FSL's utility commands, fslroi.
What is fslroi used for?
fslroi is used to extract a region of interest (ROI) or a subset of data from a larger 3D or 4D image file.
In the terminal, type:
fslroi data.nii.gz nodif 0 1

where data.nii.gz is your input image, nodif is your output image (a 3D non-diffusion weighted volume), 0 is the index of the first volume to extract and 1 is the number of volumes to extract. You should now have a new file, nodif.nii.gz (type ls to confirm), and can create a binary brain mask using BET.
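The logic of the two trailing numbers can be mimicked on a toy Python list standing in for the 4D data (not FSL code): fslroi keeps tsize volumes starting at index tmin.

```python
# Toy stand-in for fslroi data.nii.gz nodif 0 1: a 4D image is a sequence of
# 3D volumes; fslroi keeps tsize volumes starting at index tmin.
volumes = ["vol0 (b=0)", "vol1 (b=1500)", "vol2 (b=1500)", "vol3 (b=0)"]

tmin, tsize = 0, 1
nodif = volumes[tmin:tmin + tsize]
print(nodif)  # ['vol0 (b=0)']
```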
To do this, first open BET in terminal. You can open the BET GUI directly in a terminal window by typing:
Bet &
Or by running FSL in a terminal window and accessing BET from the FSL GUI. To do it this way, type:
fsl &
and then open the 'BET brain extraction tool' by clicking on it in the GUI.
In either case, once BET is opened, click on advanced options and make sure the first two outputs are selected ('brain extracted image' and 'binary brain mask') as below. Select as the 'Input' image the previously created nodif.nii.gz
and change 'Fractional Intensity Threshold' to 0.4. Then click the 'Go' button.
Completing BET in the terminal
After running BET you may need to hit return to get a visible prompt back after seeing "Finished" in the terminal!
You will see 'Finished' in the terminal when you are ready to inspect the results. Close BET, open FSLeyes and load three files (nodif.nii.gz, nodif_brain.nii.gz and nodif_brain_mask). Compare the files. To do that you can either toggle the visibility on and off (click the eye icon) or use the 'Opacity' button (you should remember from the previous workshop how to do this).
The nodif_brain_mask is a single binarized image with ones inside the brain and zeroes outside the brain. You need this image for both DTIfit and tractography.
Comparing BET-extracted and original images
Can you tell the difference between nodif.nii.gz and nodif_brain.nii.gz? It might be easier to compare these images if you change the max intensity to 1500 and the nodif_brain colour to green.
The next thing we will do is to look at how to run and examine the results of diffusion tensor fitting.
First close FSLeyes, and to make sure you do not have any processes running in the background, close the current terminal.
Open a new terminal window, navigate to the p01
subdirectory, load FSL and FSLeyes again, and finally open FSL (with & to background it):
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit/p01
module load FSL/6.0.5.1-foss-2021a-fslpython
module load FSLeyes/1.3.3-foss-2021a
fsl &
To run the diffusion tensor fit, you need the 4 files specified below:

data.nii.gz
nodif_brain_mask.nii.gz
bvecs (text file with gradient directions)
bvals (text file with the list of b-values)

All these files are included inside the data subdirectory, p01/data. You will later learn how to create a binary brain mask, but first we will run DTIfit.
In the FSL GUI, first click on 'FDT diffusion', and in the FDT window select 'DTIFIT Reconstruct diffusion tensors'. Now choose as the 'Input directory' the data subdirectory located inside p01 and click 'Go'.
You should see something happening in the terminal and once you see 'Done!' you are ready to view the results.
Click 'OK' when the message appears.
Different ways of running DTIfit
Instead of running DTIfit by choosing the 'Input' directory, you can also run it by specifying the input files manually. If you click it now, the files will be auto-filled, but otherwise you would need to provide the inputs as below.
Running DTIfit in your own time
Please do NOT run it now, but instead try it in your own time with the data in the p02 folder.
Finally, you can also run DTIfit directly from the terminal. To do this, you would need to navigate inside the subdirectory/folder with all the data and type the full dtifit command, specifying its compulsory arguments as below:

dtifit --data=data --mask=nodif_brain_mask --bvecs=bvecs --bvals=bvals --out=dti
This command only works when run from inside the folder where all the data are located; otherwise you will need to specify the full path to the data. This is useful if you want to write a script; we will look at this in later workshops.
Running DTIfit from the terminal in your own time
Again, please do NOT run it now, but try it in your own time with the data in the p02 folder.
Running DTIfit produces several output files, as specified below. We will look closer at the files highlighted in bold. All of these files should be located in the data subdirectory, i.e. within /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit/p01/data/.
To do this, first close the FSL GUI, then open FSLeyes and load the FA map ('File' → 'Add from file' → dti_FA). Next add the principal eigenvector map (dti_V1) to your display ('File' → 'Add from file' → dti_V1).
FSLeyes will open the image dti_V1 as a 3-direction vector image (RGB), with diffusion direction coded by colour. To display the standard PDD colour-coded orientation map (as below), you need to modulate the colour intensity with the FA map so that anisotropic voxels appear bright.
In the display panel, click on 'Settings' (the cog icon) and change 'Modulate', setting it to dti_FA.
Finally, compare the FA and MD maps (dti_FA and dti_MD). To do this, load the FA map and add the MD map. In contrast to the FA map, the MD map appears uniform in both grey and white matter, while the higher intensities in the CSF-filled ventricles indicate higher diffusivity there, as opposed to the dark ventricles in the FA map.
Differences between the FA and MD maps
Why are there such differences?
"},{"location":"workshop3/diffusion-mri-analysis/#tract-based-spatial-statistics-tbss","title":"Tract-Based Spatial Statistics (TBSS)","text":"In the next part of the workshop, we will look at running TBSS, Tract-Based Spatial Statistics.
TBSS is used for whole-brain "voxelwise" cross-subject analysis of diffusion-derived measures, usually FA (fractional anisotropy).
We will look at an example TBSS analysis of a small dataset consisting of FA maps from ten younger (y1-y10) and five older (o1-o5) participants. Specifically, you will learn how to run the second stage of TBSS analysis, the "voxelwise" statistics, and how to display the results using FSLeyes. The statistical analysis that you will run aims to examine where on the tract skeleton younger versus older participants (two groups) have significantly different FA values.
Before that, let's briefly recap TBSS as it was covered in the lecture.
The steps for Tract-Based Spatial Statistics are:
To save time, some of the pre-processing stages, including generating FA maps (tensor fitting), preparing data for analysis, registration of FA maps and skeletonization, have been run for you, and all outputs are included in the data folder you copied at the start of this workshop.
You will only run the TBSS statistical analysis to explore group differences in FA values based upon age (younger versus older participants).
First close FSLeyes (if you still have it open) and make sure that you do not have any processes running in the background by closing your current terminal.
Then open a new terminal window, navigate to the subdirectory where pre-processed data are located and load both FSL and FSLeyes:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/TBSS/TBSS_analysis_p2/
module load FSL/6.0.5.1-foss-2021a-fslpython
module load FSLeyes/1.3.3-foss-2021a
Once you have loaded all the required software, we will start by exploring the pre-processed data. If you correctly followed the previous steps, you should be inside the subdirectory TBSS_analysis_p2. Confirm that, and then check the content of that subdirectory, by typing:

pwd (answer: /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/TBSS/TBSS_analysis_p2/)
ls (you should see 3 data folders listed: FA, origdata, stats)
We first need to check that all the pre-processing steps have been run correctly and that we have all the required files.
Navigate inside the stats folder and check the files inside by typing in your terminal:

cd stats
ls
You should find the files listed below inside:

all_FA (4D image file with all participants' FA maps registered into standard space)
mean_FA (3D image file: the mean of all participants' FA maps)
all_FA_skeletonised (4D image file with all participants' skeletonised FA data)
mean_FA_skeleton (3D image file: the mean FA skeleton)

Exploring the data
If this is the case, open FSLeyes and explore these files one by one to make sure you understand what each represents. You might need to change the colour to visualise some image files.
Remember to ask for help!
If you are unsure about something, or need help, please ask!
Once you have finished taking a look, close FSLeyes.
Before using the General Linear Model (GLM) GUI to set up the statistical model, you need to determine the order in which participants' files have been entered into the single 4D skeletonised file (i.e., the data order in the all_FA_skeletonised file). The easiest way to determine the (alphabetical) order of participants in the final 4D file (all_FA_skeletonised) is to check in which order FSL lists the pre-processed FA maps inside the FA folder. You can do this in the terminal with the commands below:
cd ..
cd FA
imglob *_FA.*
You should see data from the 5 older participants (o1-o5) followed by data from the 10 younger participants (y1-y10).
Next navigate back to the stats folder and open FSL:

cd ..
cd stats
fsl &
Click on 'Miscellaneous tools' and select 'GLM Setup' to open the GLM GUI.
In the workshop we will set up a simple group analysis (a two sample unpaired t-test).
How to set up more complex models
For information on how to set up more complex models using the GUI, see: https://fsl.fmrib.ox.ac.uk/fsl/docs/#/statistics/glm
In the 'GLM Setup' window, change 'Timeseries design' to 'Higher-level/non-timeseries design' and '# inputs' to 15.
Then click on 'Wizard' and select 'two groups, unpaired' and set 'Number of subjects in first group' to 5. Then click 'Process'.
In the 'EVs' tab, name 'EV1' and 'EV2' as per your groups (old, young).
In the contrast window, set the number of contrasts to 2 and rename them according to the image below:
(C1: old > young, [1 -1]) (C2: young > old, [-1 1])
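To see what these contrasts compute, here is a toy Python sketch (with made-up FA values) of the two-group unpaired design: the parameter estimates reduce to the two group means, and each contrast takes their difference in one direction.

```python
# Two-group unpaired design for 15 subjects (5 old, then 10 young).
# Each row of the design matrix is [EV1 (old), EV2 (young)].
design = [[1, 0]] * 5 + [[0, 1]] * 10

# With this design the parameter estimates are simply the group means
# (made-up skeleton-FA values, for illustration only):
fa = [0.42, 0.40, 0.41, 0.39, 0.43,                                 # old
      0.47, 0.48, 0.46, 0.49, 0.47, 0.48, 0.46, 0.47, 0.49, 0.48]  # young
mean_old = sum(fa[:5]) / 5
mean_young = sum(fa[5:]) / 10

c1 = [1, -1]   # C1: old > young
c2 = [-1, 1]   # C2: young > old
effect_c1 = c1[0] * mean_old + c1[1] * mean_young
effect_c2 = c2[0] * mean_old + c2[1] * mean_young
print(round(effect_c1, 3), round(effect_c2, 3))  # -0.065 0.065
```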
Click 'View Design', close the image, then go back to the GLM setup window and save your design with the filename design. Click 'Exit' and close FSL.
To run the TBSS statistical analysis, FSL's randomise tool is used.
FSL's randomise
Randomise is FSL's tool for nonparametric permutation inference on various types of neuroimaging data (a statistical analysis tool). For more information see: https://fsl.fmrib.ox.ac.uk/fsl/docs/#/statistics/randomise
The basic command line to use this tool is:
randomise -i <input> -o <output basename> -d <design.mat> -t <design.con> [options]
You can explore the options and the setup by typing randomise in your terminal.
The basic command line to use randomise for TBSS is below:
randomise -i all_FA_skeletonised -o tbss -m mean_FA_skeleton_mask -d design.mat -t design.con -n 500 --T2
Check that you are inside the stats folder and run the command above in the terminal to run your TBSS group analysis. The elements of this command are explained below:
Argument Description
-i input image
-o output basename
-m mask
-d design matrix
-t design contrasts
-n number of permutations
--T2 TFCE (threshold-free cluster enhancement, optimised for the 2D skeleton)

Why so few permutations?
To save time we only run 500 permutations; this number will vary depending on the type of analysis, but usually it is between 5,000 and 10,000 or higher.
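The core idea behind randomise can be sketched as a bare-bones permutation test in Python (toy FA values, invented for illustration; randomise itself does far more, e.g. voxelwise testing and TFCE):

```python
import random
import statistics

# Bare-bones permutation test (toy data, not randomise itself):
# shuffle the group labels many times and count how often the shuffled
# group difference is at least as large as the observed one.
random.seed(0)
old = [0.42, 0.40, 0.41, 0.39, 0.43]
young = [0.47, 0.48, 0.46, 0.49, 0.47, 0.48, 0.46, 0.47, 0.49, 0.48]

observed = statistics.mean(young) - statistics.mean(old)
pooled = old + young

n_perm, count = 500, 0
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[5:]) - statistics.mean(pooled[:5])
    if diff >= observed:
        count += 1
p_value = count / n_perm
print(p_value)  # a small p: almost no shuffle beats the real difference
```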
The output from randomise will include two raw (unthresholded) tstat images, tbss_tstat1 and tbss_tstat2. The TFCE p-value images (fully corrected for multiple comparisons across space) will be tbss_tfce_corrp_tstat1 and tbss_tfce_corrp_tstat2.
Based on the setup of your design, contrast 1 gives the older > younger test and contrast 2 gives the younger > older test. The contrast most likely to give significant results is the 2nd contrast, i.e., we expect higher FA in younger participants (due to the age-related decline in FA).
To check this, use FSLeyes to view the results of your TBSS analysis. Open FSLeyes, load mean_FA plus the mean_FA_skeleton template and add your TFCE-corrected stats-2 image to the display:

mean_FA.nii.gz
mean_FA_skeleton.nii.gz (change the greyscale to green)
tbss_tfce_corrp_tstat2.nii.gz (change the greyscale to red-yellow, and set Max to 1 and Min to 0.95 or 0.99)

Please note that the TFCE-corrected images actually store 1-p for convenience of display, so thresholding at 0.95 gives significant clusters at corrected p < 0.05, and 0.99 gives significant clusters at corrected p < 0.01.
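As a quick Python sketch of this convention (toy voxel values, invented for illustration), thresholding the displayed 1-p image at 0.95 keeps exactly the voxels with corrected p < 0.05:

```python
# TFCE "corrp" images store 1 - p, so thresholding the display at 0.95
# keeps exactly the voxels with corrected p < 0.05 (toy values below).
corrp = [0.99, 0.96, 0.80, 0.951, 0.40]

p = [1 - v for v in corrp]
significant = [v >= 0.95 for v in corrp]
print(significant)  # [True, True, False, True, False]
```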
You should see the same results as below:
Interpreting the results
Are the results as expected? Why/why not?
Reviewing the tstat1 image
Next, review the tbss_tfce_corrp_tstat1.nii.gz image.
Further information on TBSS
More information on TBSS can be found in the 'TBSS' section of the FSL documentation: https://fsl.fmrib.ox.ac.uk/fsl/docs/#/diffusion/tbss
"},{"location":"workshop3/workshop3-intro/","title":"Workshop 3 - Basic diffusion MRI analysis","text":"Welcome to the third workshop of the MRICN course! Prior lectures in the module introduced you to the basics of diffusion MRI and its applications, including data acquisition, the theory behind diffusion tensor imaging and the use of tractography to study structural connectivity. The aim of the next two workshops is to introduce you to some of the core FSL tools used for diffusion MRI analysis.
Specifically, we will explore different elements of the FMRIB's Diffusion Toolbox (FDT) to walk you through basic steps in diffusion MRI analysis. We will also cover the use of Brain Extraction Tool (BET).
By the end of the two workshops, you should be able to understand the principles of correcting for distortions in diffusion MRI data, how to run and explore results of a diffusion tensor fit, and how to run a whole brain group analysis and probabilistic tractography.
Overview of Workshop 3
Topics for this workshop include:
We will be working with various previously acquired datasets (similar to the data acquired during the CHBH MRI Demonstration/Site visit). We will not go into details as to why and how specific sequence parameters and specific values of the default settings have been chosen. Some values should be clear to you from the lectures or the assigned readings on Canvas; please check there, or if you are still unclear, feel free to ask.
Note that for your own projects, you are very likely to want to change some of these settings/parameters depending on your study aims and design.
A copy of these workshop notes can be found on Canvas (39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience) in the Week 03 workshop materials.
"},{"location":"workshop4/probabilistic-tractography/","title":"Probabilistic Tractography","text":"In the first part of the workshop, we will look again at BET, FSL's Brain Extraction Tool.
Brain extraction is a necessary pre-processing step which allows us to remove non-brain tissue from the image. It is applied to structural images prior to tissue segmentation and is needed to prepare anatomical scans for the registration of functional MRI or diffusion scans to MNI space. BET can be also used to create binary brain masks (e.g., brain masks needed to run diffusion tensor fitting, DTIfit).
"},{"location":"workshop4/probabilistic-tractography/#skull-stripping-our-data-using-bet","title":"Skull-stripping our data using BET","text":"In this workshop we will first look at a very simple example of removing non-brain tissue from diffusion and T1 scans ("skull-stripping") in preparation for the registration of diffusion data to MNI space.
Log into the BlueBEAR Portal and start a BlueBEAR GUI session (2 hours).
In your session, open a new terminal window and navigate to the diffusionMRI data in your MRICN folder:

cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI
[where XXX=your ADF username]
In case you missed the previous workshop
You were instructed to copy the diffusionMRI data in the previous workshop. If you have not completed last week's workshop, you either need to find details on how to copy the data in the 'Workshop 3: Basic diffusion MRI analysis' materials or work with someone who has completed the previous workshop.
Then load FSL and FSLeyes:
module load FSL/6.0.5.1-foss-2021a-fslpython
module load FSLeyes/1.3.3-foss-2021a
We will now look at how to “skull-strip” the T1 image (remove the skull and non-brain areas); this step is needed for the registration step in both fMRI and diffusion MRI analysis pipelines.
We will do this using BET on the command line. The basic command-line version of BET is:
bet <input> <output> [options]
In this workshop we will look at a simple brain extraction i.e., performed without changing any default options.
To do this, navigate inside the p01 folder:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/DTIfit/p01
Then in your terminal type:
bet T1.nii.gz T1_brain
Once BET has completed (should only take a few seconds at most), open FSLeyes (with & to background it). Then in FSLeyes:
- Open T1.nii.gz
- Add the T1_brain image
- Change the colour of T1_brain to 'Red' or 'Green'

This will likely show that in this case the default brain extraction was good. The reason behind such a good brain extraction with default options is a small FOV and data from a young, healthy adult. This is not always the case, e.g., when we have a large FOV or data from older participants.
More brain extraction to come? You BET!
In the next workshop (Workshop 5) we will explore different BET [options] and how to troubleshoot brain extraction.
"},{"location":"workshop4/probabilistic-tractography/#preparing-our-data-with-bedpostx","title":"Preparing our data with BEDPOSTX","text":"BEDPOSTX is an FSL tool used for a step in the diffusion MRI analysis pipeline, which prepares the data for probabilistic tractography. BEDPOSTX (Bayesian Estimation of Diffusion Parameters Obtained using Sampling Techniques, X = modelling Crossing Fibres) estimates fibre orientation in each voxel within the brain. BEDPOSTX employs Markov Chain Monte Carlo (MCMC) sampling to reconstruct distributions of diffusion parameters at each voxel.
We will not run it during this workshop as it takes a long time. The data has been processed for you, and you copied it at the start of the previous workshop.
To run it, you would need to open the FSL GUI, click on 'FDT diffusion' and from the drop-down menu select 'BEDPOSTX'. The input directory must contain the distortion-corrected diffusion file (data.nii.gz), a binary brain mask (nodif_brain_mask.nii.gz) and two text files with the b-values (bvals) and gradient orientations (bvecs).
Because the data used in this workshop has a single b-value, we need to specify the single-shell model.
After the workshop, in your own time, you could run it using the provided data (see Tractography Exercises section at the end of workshop notes).
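Although the workshop uses the GUI, `bedpostx` is also a standard FSL command-line tool. A minimal sketch of launching it that way (dry run; the input directory name `data` follows the text, and the final call is left commented out because it takes a long time):

```shell
# Sketch: running BEDPOSTX from the command line instead of the GUI (dry run).
# Assumes an input directory named "data", as described in the text.
DTIDIR=data

# BEDPOSTX expects these four files inside the input directory:
for f in data.nii.gz nodif_brain_mask.nii.gz bvals bvecs; do
    [ -e "$DTIDIR/$f" ] || echo "missing: $DTIDIR/$f"
done

# The real call (output goes to data.bedpostX; this takes a long time):
CMD="bedpostx $DTIDIR"
echo "would run: $CMD"
```

Checking the inputs first is worthwhile, since a missing bvals/bvecs file is the most common reason for BEDPOSTX failing at startup.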
BEDPOSTX outputs a directory at the same level as the input directory called [inputdir].bedpostX (e.g. data.bedpostX). It contains various files (including mean fibre orientations and diffusion parameter distributions) needed to run probabilistic tractography.
As we will look at tractography in different spaces, we also need the output from registration. The concept of different image spaces has been introduced in Workshop 2. The registration step can be run from the FDT diffusion toolbox after BEDPOSTX has been run. Typically, registration will be run between three spaces:
- diffusion space (using the nodif_brain image)
- structural space (the T1)
- standard (MNI) space

This step has again been run for you. To run it, you would need to open the FSL GUI, click on 'FDT diffusion' and from the drop-down menu select 'Registration'. The main structural image would be your “skull-stripped” T1 (T1_brain) and the non-betted structural image would be T1. You also need to select data.bedpostX as the 'BEDPOSTX directory'.
After the workshop, you can try running it in your own time (see Tractography Exercises section at the end of workshop notes).
Registration output directory

The outputs from registration needed for probabilistic tractography are stored in the xfms subdirectory.
PROBTRACKX (probabilistic tracking with crossing fibres) is an FSL tool used for probabilistic tractography. To run it, you need to open the FSL GUI, click on 'FDT diffusion' and from the drop-down menu select 'PROBTRACKX' (it should default to it).
PROBTRACKX can be used to run tractography either in diffusion or non-diffusion space (e.g., standard or structural). If running it in non-diffusion space you will need to provide a reference image. You can also run tractography from a single seed (voxel), single mask (ROI) or from multiple masks which can be specified in either diffusion or non-diffusion space.
We will look at some examples of different ways of running tractography.
First close any processes still running and open a new terminal. Next navigate inside where all the files to run tractography have been prepared for you:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/tractography/p01
As you may recall, on BlueBEAR there are different versions of FSL available. These correspond to different FSL software releases and have been compiled in different ways. The different versions of FSL are suitable for different purposes, i.e., used for different MRI data analyses.
To run BEDPOSTX and PROBTRACKX, you need to use a specific version of FSL (FSL 6.0.7.6), which you can load by typing in your terminal:
module load bear-apps/2022b
module load FSL/6.0.7.6
source $FSLDIR/etc/fslconf/fsl.sh
Once you have loaded FSL using these commands, open the FDT toolbox either from the FSL GUI or by typing directly in your terminal:
Fdt &
We will start with tractography from a single voxel in diffusion space. Specifically, we will run it from a voxel with coordinates [47, 37, 29] located within the forceps major of the corpus callosum, a white matter fibre bundle which connects the occipital lobes.
Running tractography on another voxel

Later, you can use the FA map (dti_FA inside the p01/data folder) loaded into FSLeyes to check the location of the selected voxel, choose another voxel within a different white matter pathway, and run the tractography again.
You should have the FDT Toolbox window open as below:
From here do the following:

- Select data.bedpostX as the 'BEDPOSTX directory'
- Enter the seed coordinates [47, 37, 29] (seed space: single voxel)
- Set the output name to corpus

After the tractography has finished, check the contents of the subdirectory /corpus with the tractography output files. It should contain:

- probtrackx.log, with the probtrackx command that was run
- fdt_coordinates.txt, with the coordinates used
- corpus_47_37_29.nii.gz (general structure outputname_X_Y_Z.nii.gz, where outputname = name of the subdirectory and X, Y, and Z = the seed voxel coordinates). This file contains, for each voxel, a count of how many of the streamlines intersected that voxel.

We will explore the results later. First, you will learn how to run tractography in the standard (MNI) space.
Close the FDT toolbox and then open it again from the terminal to make sure you don't have anything running in the background.
We will now run tractography using a combination of masks (ROIs) in standard space to reconstruct tracts connecting the right motor thalamus (the portion of the thalamus involved in motor function) with the right primary motor cortex. The ROI masks have been prepared for you and put inside the masks subdirectory (~/diffusionMRI/tractography/masks). The ROIs have been created using FSL's atlas tools (you've learnt in a previous workshop how to do this) and are in standard/MNI space, thus we will run tractography in MNI (standard) space and not in diffusion space.
This is the most typical design of tractography studies.
In the FDT Toolbox window - before you select your input in the 'Data' tab - go to the 'Options' tab (as below) and reduce the number of samples to 500 under 'Options'. You would normally run 5000 (default) but reducing this number will speed up processing and is useful for exploratory analyses.
Now going back to the 'Data' tab (as below) do the following:

- Select data.bedpostX as the 'BEDPOSTX directory'
- Select Thalamus_motor_RH.nii.gz from the masks subdirectory as the seed image
- Tick 'Seed space is not diffusion' and select the transforms from the data.bedpostX/xfms directory: standard2diff_warp as 'Seed to diff transform' and diff2standard_warp as 'diff to Seed transform'. These files are generated during registration.
- Add cortex_M1_right.nii.gz from the masks subdirectory as a waypoint mask, to isolate only those tracts that reach M1 from the motor thalamus. Use this mask also as a termination mask to avoid tracking to other parts of the brain.
- Set the output name to MotorThalamusM1
Specifying masks
Without selecting the waypoint and termination masks, you would also get other tracts passing via the motor thalamus, including random offshoots with low probability (noise). This is expected for probabilistic tractography: random sampling without constraints can produce spurious offshoots into nearby tracts, giving low-probability noise.
It will take significantly longer this time to run the tractography in standard space. However, once it has finished, you will see the window 'Done!/OK'. Before proceeding, click 'OK'.
A new subdirectory will be created with the chosen output name MotorThalamusM1. Check the contents of this subdirectory. It contains slightly different files compared to the previous tractography output. The main output, the streamline density map, is called fdt_paths.nii.gz. There is also a file called waytotal that contains the total number of valid streamlines.
We will now explore the results from both tractography runs. First close FDT and your terminal as we need FSLeyes, which cannot be loaded together with the current version of FSL.
Next navigate inside where all the tractography results have been generated and load/open FSLeyes:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/diffusionMRI/tractography/p01
module load FSLeyes/1.3.3-foss-2021a
fsleyes &
We will start with our results from tractography in seed space. In FSLeyes, load the FA map (~/diffusionMRI/tractography/p01/data/dti_FA.nii.gz) and the tractography output file (~/corpus/corpus_47_37_29.nii.gz).

Your window should look like this:
Once you have finished reviewing the results of tractography in seed space, close the results ('Overlay → Remove all').
We will now explore the results from our tractography run in MNI space, but to do so we need a standard template. Assuming you have closed all previous images, load the standard template (~/diffusionMRI/tractography/MNI152T1_brain.nii.gz) and the tractography output file (/MotorThalamusM1/fdt_paths.nii.gz).
Tractography exercises

In your own time, you should try the exercises below to consolidate your tractography skills. If you have any problems completing them, or any further questions, you can ask for help during one of the upcoming workshops.

1. Run the tractography again using only the Thalamus_motor_RH.nii.gz mask as the seed image (without waypoint and termination masks). Compare the results to the output from the tractography we ran during the workshop.
2. Run the tractography using the cortex_M1_right.nii.gz mask as the seed image and without Thalamus_motor_RH.nii.gz as waypoint and termination masks. Compare these results to previous outputs (from the tractography we ran during the workshop). Are the results the same? Why not?
3. In the masks subdirectory, you will find two other masks: LGN_left.nii.gz and V1_left.nii.gz. You can use a combination of these two masks to reconstruct the portion of the left hemispheric optic radiation connecting the left lateral geniculate nucleus (LGN) and the left primary visual cortex (V1). Hint: use LGN as the seed image and the V1 mask as waypoint and termination masks.
4. Run BEDPOSTX on the provided data for participant p02 (~/diffusionMRI/tractography/p02/). It might take ~60-90 minutes to run.
5. Run the registration step for participant p02 (~/diffusionMRI/tractography/p02/). To run it you first need to complete the previous exercise (BEDPOSTX). It will take ~15 min to complete registration.

Help and further information
As always, more information on diffusion analyses in FSL can be found in the 'diffusion' section of the FSL Wiki and in this practical course run by FMRIB (the creators of FSL).
"},{"location":"workshop4/workshop4-intro/","title":"Workshop 4 - Advanced diffusion MRI analysis","text":"Welcome to the fourth workshop of the MRICN course!
In the previous workshop we started exploring different elements of the FMRIB's Diffusion Toolbox (FDT). This week we will continue with the different applications of the FDT toolbox and the use of Brain Extraction Tool (BET).
Overview of Workshop 4
Topics for this workshop include:
We will be working with various previously acquired datasets (similar to the data acquired during the CHBH MRI Demonstration/Site visit). We will not go into details as to why and how specific sequence parameters and specific values of the default settings have been chosen. Some values should be clear to you from the lectures or the assigned Canvas readings; please check there, or if anything is still unclear, feel free to ask.
Note that for your own projects, you are very likely to want to change some of these settings/parameters depending on your study aims and design.
In this workshop we will follow basic steps in the diffusion MRI analysis pipeline, specifically with running tractography. The instructions here are specific to tools available in FSL. Other neuroimaging software packages can be used to perform similar analyses.
Example of Diffusion MRI analysis pipeline
A copy of these workshop notes can be found on Canvas 39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience in Week 04 workshop materials.
"},{"location":"workshop5/first-level-analysis/","title":"Running the first-level fMRI analysis","text":"We are now ready to proceed with running our data analysis. We will start with the first dataset (first participant /p01
) and our first step will be to skull-strip the data using BET. You should now be able by now to not only run BET but also to troubleshoot poor BET i.e., use different methods to run BET.
The p01
T1 scan was acquired with a large FOV (you can check this using FSLeyes; it is generally a good practice to explore the data before the start of any analysis, especially if you were not the person who acquired the data). Therefore, we will apply an appropriate method using BET as per the example we explored earlier. This will be likely the right method to be applied to all datasets in the /recon
folder but please check.
Open a terminal and use the commands below to skull-strip the T1:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon/p01
module load FSL/6.0.5.1-foss-2021a
module load FSLeyes/1.3.3-foss-2021a
immv T1 T1neck
robustfov -i T1neck -r T1
bet T1.nii.gz T1_brain -R
Remember that:

- the immv command renames the T1 image, and automatically takes care of the filename extensions
- the robustfov command crops the image and names it back to T1.nii.gz
- bet with the -R option runs BET iteratively, with robust brain centre estimation

It is very important that after running BET you examine, using FSLeyes, the quality of the brain extraction performed on each and every T1.
A poor brain extraction will affect the registration of the functional data into MNI space giving a poorer quality of registered image. This in turn will mean that the higher-level analyses (where functional data are combined in MNI space) will be less than optimal. It will then be harder to detect small BOLD signal changes in the group.
Re-doing inaccurate BETs
Whenever the BET process is unsatisfactory you will need to go back and redo the individual BET extraction by hand, by tweaking the “Fractional intensity threshold” and/or the Advanced option parameters for the centre coordinates and/or the “Threshold gradient”.
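Since the same crop-and-extract sequence will eventually be needed for every participant folder, it lends itself to a bash loop. A hedged dry-run sketch (it only collects and prints the commands; remove the echo step and extend the list to p15 to run it for real):

```shell
# Dry-run sketch: the crop + robust-BET sequence for several participants.
# Paths and folder names follow the text; xxx = your ADF username.
RECON=/rds/projects/c/chechlmy-chbh-mricn/xxx/recon

cmds=()
for p in p01 p02 p03; do    # extend to p15 in practice
    cmds+=("cd $RECON/$p && immv T1 T1neck && robustfov -i T1neck -r T1 && bet T1.nii.gz T1_brain -R")
done
printf '%s\n' "${cmds[@]}"  # inspect the commands before running anything
```

Printing the commands first is deliberate: you still need to check each extraction in FSLeyes, so batching the runs does not remove the per-participant quality check.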
You should still be inside the p01 folder; please rename the fMRI scan by typing:
immv fs005a001 fmri1
We are now ready to proceed with our fMRI data analysis. To do that we will need a different version of FSL installed on BlueBEAR. Close your terminal and again navigate inside the p01
folder:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon/p01
Now load FSL using the commands below:
module load bear-apps/2022b
module load FSL/6.0.7.6
source $FSLDIR/etc/fslconf/fsl.sh
Finally, open FEAT (from the FSL GUI or by typing Feat &
in a terminal window).
On the menus, make sure 'First-level analysis' and 'Full analysis' are selected. Now work through the tabs, setting and changing the values for each parameter as described below. Try to understand how these settings relate to the description of the experiment as provided at the start.
Misc Tab
Accept all the defaults.
Data Tab

Input file

The input file is the 4D fMRI data (the functional data for participant 1 should be called something like fmri1.nii.gz if you have renamed it as above). Select this using the 'Select 4D data' button. Note that when you have selected the input, 'Total volumes' should jump from 0 to the number of acquired volumes.
Total volumes troubleshooting
If 'Total volumes' is still set to 0, or jumps to 1, something has gone wrong. If this happens, stop and fix the error at this point. DO NOT CARRY ON. If 'Total volumes' is still 0, you have not yet selected any data. Try again. If 'Total volumes' is set to 1, you have most likely selected the T1 image, not the fMRI data. Try again, selecting the correct file.
Check carefully at this point that the total number of volumes is correct (93 volumes were collected on participants 1-2, 94 volumes on participants 3-15).
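With FSL loaded, `fslnvols <image>` prints the number of volumes in a 4D file, which gives a quick command-line cross-check of the same thing. A sketch with the expected counts from the text encoded as a small helper (the helper name `expected_volumes` is our own):

```shell
# Sketch: expected volume counts per participant (93 for p01-p02, 94 for
# p03-p15, as stated in the text). expected_volumes is a hypothetical helper.
expected_volumes() {
    case "$1" in
        p01|p02) echo 93 ;;
        *)       echo 94 ;;
    esac
}

# With FSL loaded, compare against the actual file, e.g.:
#   n=$(fslnvols fmri1.nii.gz)
#   [ "$n" -eq "$(expected_volumes p01)" ] || echo "volume count mismatch!"
expected_volumes p01   # prints 93
expected_volumes p03   # prints 94
```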
Output directory
Enter a directory name in the output directory. This needs to be something systematic that you can use for all the participants and which is comprehensible. It needs to make sense to you when you look at it again in a year or more in the future. It is important here to use full path names. It is also very important that you do not use shortened or partial path names and that you do not put any spaces in the filenames you use. If you do, these may cause some programs to crash with errors that may not seem to make much sense.
For example, use an output directory name like:
/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p01_s1
where:
- /rds/projects/c/chechlmy-chbh-mricn/xxx/feat is the top-level directory where you intend to put all of your upcoming FEAT analyses for the experiment
- /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1 is the sub-directory where you intend to put specifically only the 1st-level (per session) FEAT analyses (and not the 2nd or higher-level analyses)
- p01 refers to participant 1 and s1 refers to session/scan 1

Note that when FEAT is eventually run, this will automatically create a new directory called /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p01_s1.feat
for you containing the output of this particular analysis. If the directory structure does not exist, FEAT will try and make it. You do not need to make it yourself in advance.
Repetition Time (TR)
For this experiment make sure that the TR is set to 2.0s. If FEAT can read the TR from the header information, it will set it automatically. If not, you will need to set it manually.
High pass filter cutoff
Set 'High pass filter cutoff' to 60sec (i.e. 50% greater than OFF+ON length of time).
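The cutoff rule as shell arithmetic, for this design (20 s off + 20 s on per cycle):

```shell
# High-pass cutoff = 50% greater than one OFF+ON cycle of the block design.
OFF=20; ON=20
CYCLE=$(( OFF + ON ))        # 40 s per off+on cycle
CUTOFF=$(( CYCLE * 3 / 2 ))  # 1.5 x cycle length
echo "high-pass filter cutoff: ${CUTOFF}s"   # prints: high-pass filter cutoff: 60s
```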
Pre-stats
Set the following:
Stats

Set the following:

Select the “Full model setup” option; and then on the 'EVs' tab:
On the Contrasts Tab:
Check the plot of the design that will be generated and then click on the image to dismiss it.
Post-stats

Change the 'Thresholding' pull-down option to be of type 'Uncorrected' and leave the P threshold value at p<0.05.
Thresholding and processing time
Note this is not the correct thresholding that you will want at the final (third) stage of processing (where you will probably want 'Cluster thresholding'), but for the convenience of the workshop, at this stage it will speed up the processing per run.
Registration

Set the following:

- Select the skull-stripped T1 (T1_brain.nii.gz) as the main structural image with 'Linear Options: Normal search, BBR'

The model should now be set up with all the correct details and be ready to be analysed.
Hit the GO button!
Running FSL on BlueBEAR
FSL jobs are now submitted in an automated way to a back-end high performance computing cluster on BlueBEAR for execution. Processing time for this analysis will vary but will probably be about 5 mins per run.
"},{"location":"workshop5/first-level-analysis/#monitoring-and-viewing-the-data","title":"Monitoring and viewing the dataSeeing the effect of other parametersAnalysing other participants' data","text":"FEAT has a built-in progress watcher, the 'FEAT Report', which you can open in a web browser.
To do that, you need to navigate inside the p01_s1.feat folder from the BlueBEAR Portal, as below, and from there select the report.html file, and either open it in a new tab or in a new window.
Watch the webpage for progress. Refresh the page to update and click the links (Tabs near the top of the page) to see the results when available (the 'STILL RUNNING' message will disappear).
Example FEAT Reports for processes that are still running, and which have completed.
After it has completed, first look at the webpage, click on the various links and try to understand what each part means.
Now let's use FSLeyes to look at the output in more detail. To do that you will need to open a separate terminal and load FSLeyes:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon/p01
module load FSL/6.0.5.1-foss-2021a-fslpython
module load FSLeyes/1.3.3-foss-2021a
fsleyes &
Open the p01_s1.feat folder and select the filtered_func_data image (this is the fMRI data after it has been preprocessed by motion correction etc.).
Put FSLeyes into movie mode and see if you can identify areas that change in activity.
Now, add the thresh_zstat1
image and try to identify the time course of the stimulation in some of the most highly activated voxels. You should remember how to complete these tasks from previous workshops. You can also use the “camera” icon to take a snapshot of the results.
Let's have a look and see the effects that other parameters have on the data. To do this, do the following steps:

- Open FEAT again (Feat &)
- Load the design.fsf file in the p01_s1.feat directory for the first participant
- Change a parameter of interest (e.g., switch motion correction off) and re-run the analysis

Note that each time you rerun FEAT, it creates a new folder with a '+' sign in the name. So you will have folders rather messily named 'p01_s1.feat', 'p01_s1+.feat', 'p01_s1++.feat', and so on. This is rather wasteful of your precious quota space, so you should delete unnecessary ones after looking at them.
For example, if you wanted to remove all files and directories that end with '+' for participant 1:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/
rm -rf *+
You might also want to change the previous output directory name to something more meaningful, to make it obvious which parameter has changed, e.g. p01_s1_motion_off.feat.
For participant 2, you will need to repeat the main steps above (skull-stripping and renaming, as you did for p01).

To rerun a FEAT analysis, rather than re-entering all the model details, load the saved design.fsf from the p01 analysis. Now change the input 4D file, the output directory name, and the registration details (the BET'ed reoriented T1 for participant 2), and hit 'Go'.
Design files
You can also save the design files (design.fsf) using the 'Save' button on the FEAT GUI. You can then edit this in a text editor, which is useful when running large group studies. You can also run FEAT from the command line, by giving it a design file to use, e.g., feat my_saved_design.fsf. We will take a look at modifying the design.fsf files directly in the Functional Connectivity workshop.
Running a first-level analysis on the remaining participants
In your own time, you should analyse the remaining participants as above.
Remember: each participant has two fMRI runs (fmri1 and fmri2), and you have already analysed the first run for participant 1. There are therefore 29 separate analyses that need to be done.
Scripting your analysis
It will seem laborious to re-write and re-run 29 separate FEAT analyses; a much quicker way is to script the analyses using bash. If you would like, try scripting your analyses! Contact one of the course TAs or convenors if you are stuck!
As always, help and further information is also available on the relevant section of the FSL Wiki.
"},{"location":"workshop5/preprocessing/","title":"Pre-processing the functional MRI data","text":"In the first part of the workshop,
Background and set-upThe data that we will be using are data collected from 15 participants scanned on the same experimental protocol on the Phillips 3T scanner (our old scanner).
The stimulus protocol was a visual checkerboard reversing at 2 Hz (i.e., 500 ms between each reversal), presented alternately (20 s active “on” checkerboard, 20 s grey screen “off”), starting and finishing with “off” and including 4 blocks of “on” (i.e., off, on, off, on, off, on, off, on, off) = 180 s.

A few extra seconds of “off” (6-8 s) were later added at the very end of the run to match the number of volumes acquired by the scan protocol.
Normally in any experiment it is very important to keep all the protocol parameters fixed when acquiring the neuroimaging data. However, in this case we can see different parameters being used, which reflect slightly different “best choices” made by different operators over the yearly demonstration sessions:
Sequence order
Note that sometimes the T1 was the first scan acquired after the planning scan, sometimes it was the very last scan acquired.
Now that we know what the data is, let's start our analyses. Log into the BlueBEAR Portal and start a BlueBEAR GUI session (2 hours). You should know how to do this from previous workshops.
Open a new terminal window and navigate to your MRICN project folder:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx
[where XXX=your ADF username]
Please check that you are in the correct directory by typing pwd. This should return: /rds/projects/c/chechlmy-chbh-mricn/xxx (where xxx = your ADF username).
You now need to create a copy of the reconstructed fMRI data to be analysed during the workshop but in your own MRICN folder. To do this, in your terminal type:
cp -r /rds/projects/c/chechlmy-chbh-mricn/module_data/recon/ .
Be patient as this might take a few minutes to copy over. In the meantime, we will revisit BET and learn how to troubleshoot the often problematic process of “skull-stripping”.
"},{"location":"workshop5/preprocessing/#skull-stripping-t1-scans-using-bet-on-the-command-line","title":"Skull-stripping T1 scans using BET on the command-line","text":"We will now look at how to \u201dskull-strip\u201d the T1 image (remove the skull and non-brain areas), as this step is needed as part of the registration step in the fMRI analysis pipeline. We will do this using FSL's BET on the command line. As you should know from previous workshops the basic command-line version of BET is:
(do not type this command, this is just a reminder)
bet <input> <output> [options]
where:

- <input> is the input T1 image (e.g., T1_scan)
- <output> is the name of the skull-stripped output image (e.g., T1_brain)

We will first explore the different options and how to troubleshoot brain extraction.
If the fMRI data has finished copying over, you can use the same terminal which you previously opened. If not, keep that terminal open and instead open a new terminal, navigating inside your MRICN project folder (i.e., /rds/projects/c/chechlmy-chbh-mricn/xxx).
Next you need to copy the data for this part of the workshop. As there is only 1 file, it will not take a long time. Type:
cp -r /rds/projects/c/chechlmy-chbh-mricn/module_data/BET/ .
And then load FSL and FSLeyes by typing:
module load FSL/6.0.5.1-foss-2021a
module load FSLeyes/1.3.3-foss-2021a
After this, navigate inside the copied BET folder and type:
bet T1.nii.gz T1_brain1
Open FSLeyes (fsleyes &), and when this is open, load up the T1 image and add the T1_brain1 image. Change the colour of T1_brain1 to Red.
This will likely show that the default brain extraction was not very good and included non-brain matter. It may also have cut into the brain, so that some part of the cortex is missing. The reason behind the poor brain extraction is a large FOV (resulting in the head plus a large amount of neck being present).

There are different ways to fix a poor BET output, i.e., problematic “skull-stripping”.
First of all, you can use the -R option. This option runs BET in an iterative fashion, which allows it to better determine the centre of the brain.
In your terminal type:
bet T1.nii.gz T1_brain2 -R
Instead of using the bet
command from the terminal, you can also use the BET GUI. To run it this way, you would need to select the processing option \u201cRobust brain centre estimation (iterates bet2 several times)\u201d from the pull down menu.
You will find that running BET with the -R option takes longer than before because of the extra iterations. Reload the newly extracted brain (T1_brain2) into FSLeyes and check that the extraction now looks improved.
In the case of T1 images with a large FOV, you can first crop the image (to remove a portion of the neck) and then run BET again. To do that you need to run the robustfov command before applying BET. But first, rename the original image.
Type in your terminal:
immv T1 T1neck
robustfov -i T1neck -r T1
bet T1.nii.gz T1_brain3 -R
Remember that:

- robustfov crops the image and writes the cropped version back out as T1.nii.gz
- bet is then run on the cropped image with the -R option

Reload the newly extracted brain (T1_brain3) into FSLeyes and compare it to T1_brain1 to check that the extraction looks improved. Also compare the cropped T1 image to the original one with a large FOV (T1neck).
Another option is to leave the large FOV and manually set the initial centre by hand via the -c option on the command line. To do that you need to first examine the T1 scan in FSLeyes to get a rough estimate (in voxels) of where the centre of the brain is.
There is another BET option which can improve “skull-stripping”: the fractional intensity threshold, which by default is set to 0.5. You can change it to any value between 0 and 1. Smaller values give larger brain outline estimates (and vice versa). Thus, you can make it smaller if you think that too much brain tissue has been removed. To use it, pass the -f option (e.g., bet T1.nii.gz T1_brain -f 0.3).
Changing the fractional intensity
In your own time (after the workshop) you can check the effect of changing the fractional intensity threshold to 0.1 and 0.9 (however make sure you name the outputs accordingly, so you know which one is which).
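A quick way to compare thresholds, and to keep the outputs named accordingly, is to loop over several -f values with the value encoded in the output name. A dry-run sketch (it only collects and prints the commands; with FSL loaded you could execute them instead):

```shell
# Dry-run sketch: one bet call per fractional intensity threshold, with the
# threshold in the output name so the results can be compared in FSLeyes.
cmds=()
for f in 0.1 0.3 0.5 0.7 0.9; do
    cmds+=("bet T1.nii.gz T1_brain_f${f} -f ${f}")
done
printf '%s\n' "${cmds[@]}"
```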
It is very important that after running BET you always examine (using FSLeyes) the quality of the brain extraction performed on each and every T1.

The strategy you need may differ between participants in the same study, and you might need to try different options. The general recommendation is to combine cropping (if needed) with the -R option. However, this may not work for all T1 scans; some types of T1 scans work better with one strategy than another. Therefore, it is good practice to always try a range of options.

Now you should be able to “skull-strip” T1 scans as needed for fMRI analyses!
"},{"location":"workshop5/preprocessing/#exploring-the-data-and-renaming-the-mri-scan-files","title":"Exploring the data and renaming the MRI scan files","text":"By now you should have a copy of the reconstructed fMRI data in your own folder. As described above, the /recon
version of the directory contains the MRI data from 15 participants acquired over several years from various site visits.
The datasets have been reconstructed into the NIFTI format. The T1 images in each directory are named T1.nii.gz
. The first (planning) scan sequences (localisers) have been removed in each directory as these will not be needed for any subsequent analysis we are doing.
Navigate inside the recon folder and list the contents of these directories (using the ls command) to make sure they actually contain imaging files. Note that all the imaging data here should be in NIFTI format.

You should see the set of participant directories labelled p01, p02, etc., all the way up to the final directory, p15.
The directory structure should look like this:
~/chechlmy-chbh-mricn/xxx/recon/
 ├── p01/
 ├── p02/
 ├── p03/
 ├── p04/
 ├── p05/
 ├── ...
 ├── p13/
 ├── p14/
 └── p15/
Verifying the data structure
Please verify that you have this directory structure before proceeding!
Explore what\u2019s inside each participant folder. Please note that each participant folder only contains reconstructed data. It\u2019s a good idea to store raw and reconstructed data separately. At this point you should have access to reconstructed participants p01
to p15
. The reconstructed data should be in folders named ~/chechlmy-chbh-mricn/xxx/recon/p01
etc.
However, apart from the T1 images that have been already renamed for you, the other reconstructed files in this directory will have unusual names, created automatically by the dcm2nii
conversion program.
You can see this by typing into your terminal:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon/p03 \nls\n
Which should list:
fs004a001.nii.gz\nfs005a001.nii.gz\nT1.nii.gz\n
It is poor practice to keep these names as they do not reflect the actual experiment and will likely be a source of confusion later on. We should therefore rename the files to something meaningful. For this participant (p03) the first fMRI scan is file 1 (fs004a001.nii.gz) and the second fMRI scan is file 2 (fs005a001.nii.gz). Rename the files as follows (to do that you need to be inside folder p03):
immv fs004a001 fmri1 \nimmv fs005a001 fmri2\n
Renaming files
Notes:
- The immv command works like the standard mv command, except that it automatically takes care of the filename extensions. It saves you having to write out mv fs004a001.nii.gz fmri1.nii.gz, which would be the standard Linux command to rename a file.
- You could equally call the files run1, fmri_run1, epi1 or whatever. The important thing is that you are extremely consistent in the naming of files across the different participants.

For this workshop we will use the naming convention above and call the files fmri1.nii.gz and fmri2.nii.gz.
As the experimenter you would normally be familiar with the order of acquisition of the different sequences and therefore the order of the resulting files created, including which one is the T1 image. You would write these down in your research log book whilst acquiring the MRI data. But sometimes, as here, if data is given to you later it may not be clear which particular file is the T1 image.
There are several ways to figure this out:
- By listing the file sizes (ls -al) you should be able to see that the T1 image is smaller than most typical EPI fMRI images. Also, if there is more than one fMRI sequence (as here with p03 onwards) you will see that several files have the same file size, and the odd one out is the T1.
- If you have access to the NIFTI format files (.nii.gz, as we have here) then you can use one of the FSL command line tools, fslinfo, in a terminal window to examine the protocol information in the file. This will show you the number of volumes in the acquisition (remember this is 1 volume for a T1 image) as well as other information about the number of voxels and the voxel size.
Together this information is sufficient to work out which file is the T1 and which are the fMRI sequence(s).
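The file-size check can be sketched as a one-liner run inside a participant folder; the smallest file (and the odd one out among the same-sized 4D series) is the likely T1. This assumes GNU stat, as found on Linux systems such as BlueBEAR:

```shell
# Sketch: print each image's size in bytes, smallest first.
# The T1 is typically the smallest file and the "odd one out"
# among the equally sized 4D fMRI series.
for f in *.nii.gz; do
    printf '%12d  %s\n' "$(stat -c %s "$f")" "$f"
done | sort -n
```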
For example if you type the following in your terminal:
cd ..\ncd p08 \nfslinfo fs005a001.nii.gz\n
You should see something like the image below:
Before proceeding to the next section, close your terminal.
"},{"location":"workshop5/workshop5-intro/","title":"Workshop 5 - First level fMRI analysis","text":"Welcome to the fifth workshop of the MRICN course!
The module lectures provide a basic introduction to fMRI concepts and the theory behind fMRI analysis, including the physiological basis of the BOLD response, fMRI paradigm design, pre-processing and single subject model-based analysis.
In this workshop you will learn how to analyse fMRI data for individual subjects (i.e., at the first level). This includes running all pre-processing stages and the first level fMRI analysis itself. The aim of this workshop is to introduce you to some of the core FSL tools used in the analysis of fMRI data and to gain practical experience with analyzing real fMRI data.
Specifically, we will explore FEAT (FMRI Expert Analysis Tool, part of FSL) to walk you through the basic steps of first-level fMRI analysis. We will also revisit the use of the Brain Extraction Tool (BET), and learn how to troubleshoot problematic “skull-stripping” for certain cases.
Overview of Workshop 5
Topics for this workshop include:
Cropping the field of view (robustfov)

We will not go into detail as to why and how the specific default values have been chosen. Some values should be clear to you from the lectures or resource-list readings; please check there, or if you are still unclear, feel free to ask. We will explore some general examples. Note that for your own projects you are very likely to want to change some of these settings/parameters depending on your study aims and design.
A copy of these workshop notes can be found on Canvas (39058 - LM Magnetic Resonance Imaging in Cognitive Neuroscience) in the Week 05 workshop materials.
"},{"location":"workshop8/functional-connectivity/","title":"Functional connectivity analysis of resting-state fMRI data using FSL","text":"This workshop is based upon the excellent FSL fMRI Resting State Seed-based Connectivity tutorial by Dianne Paterson at the University of Arizona, which has been adapted to run on the BEAR systems at the University of Birmingham, with some additional content covering Neurosynth.
We will run a group-level functional connectivity analysis on resting-state fMRI data of three participants, specifically examining the functional connectivity of the posterior cingulate cortex (PCC), a region of the default mode network (DMN) that is commonly found to be active in resting-state data.
Overview of Workshop 8
To do this, we will:
Navigate to your shared directory within the MRICN folder and copy the data over:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx\ncp -r /rds/projects/c/chechlmy-chbh-mricn/aamir_test/SBC .\ncd SBC\nls\n
You should now see the following:
sub1 sub2 sub3\n
Each of the folders has a single resting-state scan, called sub1.nii.gz, sub2.nii.gz and sub3.nii.gz respectively.
We will now create our seed region for the PCC. To do this, firstly load FSL and fsleyes
in the terminal by running:
module load FSL/6.0.5.1-foss-2021a\nmodule load FSLeyes/1.3.3-foss-2021a\n
Check that we are in the correct directory (blah/your_username/SBC
):
pwd\n
and create a new directory called seed
:
mkdir seed\n
Now when you run ls
you should see:
seed sub1 sub2 sub3\n
Let's open FSLeyes:
fsleyes &\n
Creating the PCC mask in FSLeyes

We need to open the standard MNI template brain, select the PCC and make a mask.
Here are the steps:
1. Click File ➜ Add standard and select MNI152_T1_2mm_brain.nii.gz.
2. Click Settings ➜ Ortho View 1 ➜ Atlases. An atlas panel then opens on the bottom section.
3. Select Atlas information (if it hasn't already loaded).
4. Type cing in the search box. Check the Cingulate Gyrus, posterior division (lower right) so that it is overlaid on the standard brain. (The full name may be obscured, but you can always check which region you have loaded by looking at the panel on the bottom right.)
At this point, your window should look something like this:
To save the seed, click the save symbol which is the first of three icons on the bottom left of the window.
The window that opens up should be your project SBC directory. Navigate into the seed folder and save your seed as PCC.
We now need to binarise the seed and extract the mean timeseries. To do this, leaving FSLeyes open, go into your terminal (you may have to press Enter if some text about dc.DrawText appears) and type:
cd seed\nfslmaths PCC -thr 0.1 -bin PCC_bin\n
In FSLeyes, now click File ➜ Add from file, and select PCC_bin to compare PCC.nii.gz (before binarisation) and PCC_bin.nii.gz (after binarisation). You should note that the signal values are all 1.0 for the binarised PCC.
You can now close FSLeyes.
For each subject, you want to extract the average time series from the region defined by the PCC mask. To calculate this value for sub1
, do the following:
cd ../sub1\nfslmeants -i sub1 -o sub1_PCC.txt -m ../seed/PCC_bin\n
This will generate a file within the sub1
folder called sub1_PCC.txt
.
We can have a look at the contents by running cat sub1_PCC.txt
. The terminal will print out a list of numbers with the last five being:
20014.25528\n20014.919\n20010.17317\n20030.02886\n20066.05141\n
This is the mean level of 'activity' for the PCC at each time-point.
Now let's repeat this for the other two subjects.
cd ../sub2\nfslmeants -i sub2 -o sub2_PCC.txt -m ../seed/PCC_bin\ncd ../sub3\nfslmeants -i sub3 -o sub3_PCC.txt -m ../seed/PCC_bin\n
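The three per-subject calls can equally be written as a loop run from the SBC directory. This is a dry-run sketch that just prints each command (remove the echo to execute); paths are given relative to SBC rather than changing into each folder:

```shell
# Dry-run sketch: the three fslmeants calls as a loop, run from SBC.
# Remove "echo" to actually execute the commands.
for s in sub1 sub2 sub3; do
    echo fslmeants -i "${s}/${s}" -o "${s}/${s}_PCC.txt" -m seed/PCC_bin
done
```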
Now if you go back to the SBC directory and list all of the files within the subject folders:
cd ..\nls -R\n
You should see the following:
This is all we need to run the subject and group-level analyses using FEAT.
"},{"location":"workshop8/functional-connectivity/#running-the-feat-analyses","title":"Running the FEAT analyses","text":""},{"location":"workshop8/functional-connectivity/#single-subject-analysis","title":"Single-subject analysisExamining the FEAT outputScripting the other two subjects","text":"Close your terminal, open another one, move to your SBC
folder, load FSL and open FEAT:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/SBC\nmodule load bear-apps/2022b\nmodule load FSL/6.0.7.6\nsource $FSLDIR/etc/fslconf/fsl.sh\nFeat &\n
We will run the first-level analysis for sub1. Set up the following settings in the respective tabs:
Data
- Number of inputs: select your 4D data, navigate into the sub1 folder and choose sub1.nii.gz. Click OK. You will see a box saying that the 'Input file has a TR of 1...'; this is fine, just click OK again.
- Output directory: navigate into the sub1 folder and click OK. Nothing will be in the right-hand column, but that is because there are no folders within sub1; we will create our .feat folder within sub1.

This is what your data tab should look like (with the input data opened for show).
Pre-stats
The data has already been pre-processed, so just set 'Motion correction' to 'None' and uncheck BET. Your pre-stats should look like this:
Registration
Nothing needs to be changed here.
Stats
Click on 'Full Model Setup' and do the following:
- Navigate to the sub1 folder and select sub1_PCC.txt. This is the mean time series of the PCC for sub1 and is the statistical regressor in our GLM model. This is different from analyses of task-based data, which will usually have an events.tsv file with the onset times for each regressor of interest.

What are we doing specifically?
The first-level analysis will subsequently identify brain voxels that show a significant correlation with the seed (PCC) time series data.
Your window should look like this:
In the same General Linear Model window, click the 'Contrast & F-tests' tab, type PCC in the title, and click 'Done'.
A blue and red design matrix will then be displayed. You can close it.
Post-stats
Nothing needs to be changed here.
You are ready to run the first-level analysis. Click 'Go' to run. On BEAR, this should only take a few minutes.
To actually examine the output, go to the BEAR Portal and at the menu bar select Files ➜ /rds/projects/c/chechlmy-chbh-mricn/. Then go into SBC/sub1.feat, select report.html and click 'View' (top left of the window). Navigate to the 'Post-stats' tab and examine the outputs. It should look like this:
We can now run the second and third subjects. As we only have three subjects, we could manually run the other two by just changing three things: the input data, the output directory, and the sub_PCC.txt path.

Whilst it would probably be quicker to do it manually in this case, it is not practical in other instances (e.g., more subjects, or subjects with different numbers of scans). So instead we will script the first-level FEAT analyses for the other two subjects.
The importance of scripting
Scripting analyses may seem challenging at first, but it is an essential skill of modern neuroimaging research. It enables you to automate repetitive processing steps, dramatically reduces the chance of human error, and ensures your research is reproducible.
To do this, go back into your terminal; you don't need to open a new terminal or close FEAT.
The setup for each analysis is saved as a specific file, the design.fsf
file within the FEAT output directory. We can see this by opening the design.fsf
file for sub1
:
pwd # make sure you are in your SBC directory e.g., blah/xxx/SBC\ncd sub1.feat\ncat design.fsf\n
FEAT acts as a large 'function' with its many variables corresponding to the options that we choose when setting up in the GUI. We just need to change three of these (the three mentioned above). In the design.fsf
file this corresponds to:
set fmri(outputdir) \"/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1\"\nset feat_files(1) \"/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1/sub1/\"\nset fmri(custom1) \"/rds/projects/c/chechlmy-chbh-mricn/xxx/SBC/sub1/sub1_PCC.txt\"\n
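Rather than paging through the whole design file, you can pull out just these three lines with grep. A small sketch, run from your SBC directory (the pattern simply matches the three variable names shown above):

```shell
# Print only the three design.fsf variables we will need to change.
grep -E 'set (fmri\(outputdir\)|feat_files\(1\)|fmri\(custom1\))' sub1.feat/design.fsf
```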
To run the script, please copy the run_feat.sh
script into your own SBC
directory:
cd ..\npwd # make sure you are in your SBC directory\ncp /rds/projects/c/chechlmy-chbh-mricn/axs2210/SBC/run_feat.sh .\n
Viewing the script
If you would like, you can have a look at the script yourself by typing cat run_feat.sh
The first line #!/bin/bash
is always needed to run bash
scripts. The rest of the code just replaces the 3 things we wanted to change for the defined subjects, sub2
and sub3
.
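We have not reproduced the script here, but the core idea can be sketched as follows. This is a guess at its structure, not the actual contents of run_feat.sh: clone sub1's saved design, substitute the subject name throughout, and hand each new design file to feat (printed here as a dry run; the design_sub2.fsf names are our own):

```shell
#!/bin/bash
# Sketch of the idea behind run_feat.sh (the real script may differ):
# clone sub1's saved design, swap every "sub1" for the new subject name,
# and pass each new design file to feat. Remove "echo" to execute.
for s in sub2 sub3; do
    sed "s/sub1/${s}/g" sub1.feat/design.fsf > "design_${s}.fsf"
    echo feat "design_${s}.fsf"
done
```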
Run the code (from your SBC directory) by typing bash run_feat.sh. It will ask you for your University account name; this is your ADF username (axs2210 for me).
The script should take about 5-10 minutes to run on BEAR.
After it has finished running, have a look at the report.html
file for both directories, they should look like this:
sub2
sub3
"},{"location":"workshop8/functional-connectivity/#group-level-analysis","title":"Group-level analysisExamining the output","text":"Ok, so now that we have our FEAT directories for all three subjects, we can run the group level analysis. Close FEAT and open a new FEAT by running Feat &
in your SBC
directory.
Here are instructions on how to setup the group-level FEAT:
Data
Your window should look like this (before closing the 'Input' window):
5. Keep 'Use lower-level COPEs' ticked.
6. In 'Output directory', stay in your current directory (SBC) and, in the bottom bar, type PCC_group at the end of the file path.
Don't worry about it being empty; FSL will fill out the file path for us.
If you click the folder again, it should look similar to this (with your ADF username instead of axs2210
):
Stats
The interface should look like this:
After that, click 'Done' and close the GLM design matrix that pops up (you don't need to change anything in the 'Contrasts and F-tests' tab).
Post-stats
Lowering our statistical threshold
Why do you think we are lowering this to 2.3 in our analysis instead of keeping it at 3.1? Because we only have three subjects, we want to be relatively lenient with our threshold value; otherwise we might not see any activation at all! For group-level analyses with more subjects, we would be stricter.
Click 'Go' to run!
This should only take about 2-3 minutes.
While this is running, you can load the report.html
through the file browser as you did for the individual subjects.
Click on the 'Results' tab, and then on 'Lower-level contrast 1 (PCC)'. When the analysis has finished, your results should look like this:
These are voxels demonstrating significant functional connectivity with the PCC at a group-level (Z > 2.3).
So, we have just run our group-level analysis. Let's have a closer look at the output data.
Close FEAT and your terminal, open a new terminal, go to your SBC
directory and open FSLeyes:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/SBC\nmodule load FSL/6.0.5.1-foss-2021a\nmodule load FSLeyes/1.3.3-foss-2021a\nfsleyes &\n
In FSLeyes, open up the standard brain (navigate to the top menu, click 'File ➜ Add standard' and select MNI152_T1_2mm_brain.nii.gz).
Then add in our contrast image (File ➜ Add from file, then go into PCC_group.gfeat and then into cope1.feat, and open the file thresh_zstat1.nii.gz).
When opened, change the colour to 'Red-Yellow' and the 'Minimum' up to 2.3 (The max should be around 3.12). If you set the voxel location to [42, 39, 52] your screen should look like this:
This is the map that we saw in the report.html
file. In fact we can double check this by changing the voxel co-ordinates to [45, 38, 46].
Our thresholded image in fsleyes and the FEAT output: our image matches the one on the far right below.
"},{"location":"workshop8/functional-connectivity/#bonus-identifying-regions-of-interest-with-atlases-and-neurosynth","title":"Bonus: Identifying regions of interest with atlases and Neurosynth","text":"
So we know which voxels demonstrate significant correlation with the PCC, but what region(s) of the brain are they located in?
Let's go through two ways in which we can work this out.
Firstly, as you have already done in the course, we can simply overlay an atlas on the image and see which regions the activated voxels fall under.
To do this:
By having a look at the 'Location' window (bottom left) we can now see that significant voxels of activity are mainly found in the:
Right superior lateral occipital cortex
Posterior cingulate cortex (PCC) / precuneus
Alternatively, we can use Neurosynth, a website where you can get the resting-state functional connectivity of any voxel location or brain region. It does this by performing a meta-analysis over brain imaging studies with results associated with your voxel/region of interest.
About Neurosynth
While Neurosynth has been superseded by Neurosynth Compose, we will use the original Neurosynth in this tutorial.
If you click the following link, you will see regions demonstrating significant connectivity with the posterior cingulate.
If you type [46, -70, 32] as the co-ordinates in Neurosynth, and then into the MNI co-ordinates section in FSLeyes (not into the voxel location, because Neurosynth works in MNI space), you can see that in both cases the right superior lateral occipital cortex is activated.
Image orientation
Note that the orientations of left and right are different between Neurosynth and FSLeyes!
Neurosynth
FSLeyes
This is a great result given that we only have three subjects!
Learning outcomes of this workshop
In this workshop, you have:
We are now ready to proceed with running our data analysis. We will start with the first dataset (first participant /p01
) and our first step will be to skull-strip the data using BET.
You should by now be able not only to run BET, but also to troubleshoot a poor BET, i.e., to use different methods to run it.
The p01 T1 scan was acquired with a large FOV (you can check this using FSLeyes; it is generally good practice to explore the data before the start of any analysis, especially if you were not the person who acquired it). Therefore, we will apply an appropriate method using BET, as per the example we explored earlier. This is likely to be the right method to apply to all datasets in the /recon folder, but please check.
Open a terminal and use the commands below to skull-strip the T1:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon/p01
module load FSL/6.0.5.1-foss-2021a
module load FSLeyes/1.3.3-foss-2021a
immv T1 T1neck
robustfov -i T1neck -r T1
bet T1.nii.gz T1_brain -R
Remember that:
- the immv command renames the T1 image, and automatically takes care of the filename extensions
- the robustfov command crops the image and names the output back to T1.nii.gz
- the bet command with the -R option runs BET recursively

It is very important that after running BET you examine, using FSLeyes, the quality of the brain extraction performed on each and every T1.
A poor brain extraction will affect the registration of the functional data into MNI space, giving a poorer-quality registered image. This in turn will mean that the higher-level analyses (where functional data are combined in MNI space) will be less than optimal, and it will be harder to detect small BOLD signal changes in the group.

Re-doing inaccurate BETs

Whenever the BET process is unsatisfactory you will need to go back and redo the individual BET extraction by hand, by tweaking the “Fractional intensity threshold” and/or the Advanced option parameters for the Centre coordinates and/or the “Threshold gradient”.
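The per-participant visual check can be queued up with a small loop. This is a dry-run sketch (remove the echo to execute) that prints one FSLeyes command per participant, overlaying each extraction on its T1 in red at 40% opacity; the p01..p15 layout matches the recon folder used in this workshop:

```shell
# Dry-run sketch: one FSLeyes QC command per participant folder.
# -cm red sets the overlay colour map; -a 40 sets 40% opacity.
# Remove "echo" to actually open each pair in FSLeyes.
for p in p01 p02 p03 p04 p05 p06 p07 p08 p09 p10 p11 p12 p13 p14 p15; do
    echo fsleyes "${p}/T1" "${p}/T1_brain" -cm red -a 40
done
```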
You should still be inside the p01 folder; please rename the fMRI scan by typing:
immv fs005a001 fmri1
We are now ready to proceed with our fMRI data analysis. To do that we will need a different version of FSL installed on BlueBEAR. Close your terminal and again navigate inside the p01
folder:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon/p01
Now load FSL using the commands below:
module load bear-apps/2022b
module load FSL/6.0.7.6
source $FSLDIR/etc/fslconf/fsl.sh
Finally, open FEAT (from the FSL GUI or by typing Feat &
in a terminal window).
On the menus, make sure 'First-level analysis' and 'Full analysis' are selected. Now work through the tabs, setting and changing the values for each parameter as described below. Try to understand how these settings relate to the description of the experiment as provided at the start.
Accept all the defaults.
Input file

The input file is the 4D fMRI data (the functional data for participant 1 should be called something like fmri1.nii.gz if you have renamed it as above). Select this using the 'Select 4D data' button. Note that when you have selected the input, 'Total volumes' should jump from 0.
Total volumes troubleshooting
If 'Total volumes' is still set to 0, or jumps to 1, you have done something wrong. If you get this, stop and fix the error at this point. DO NOT CARRY ON. If 'Total volumes' is still set to 0, you have not yet selected any data; try again. If 'Total volumes' is set to 1, you have most likely selected the T1 image, not the fMRI data; try again, selecting the correct file.

Check carefully at this point that the total number of volumes is correct (93 volumes were collected on participants 1-2, 94 volumes on participants 3-15).
Output directory

Enter a directory name in the output directory. This needs to be something systematic that you can use for all the participants and which is comprehensible: it needs to make sense to you when you look at it again in a year or more. It is important here to use full path names. It is also very important that you do not use shortened or partial path names, and that you do not put any spaces in the filenames you use; if you do, these may cause some programs to crash with errors that may not seem to make much sense.

For example, use an output directory name like:

/rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p01_s1
where:
- /rds/projects/c/chechlmy-chbh-mricn/xxx/feat is the top-level directory where you intend to put all of your upcoming FEAT analyses for the experiment
- /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1 is the sub-directory where you intend to put specifically only the 1st-level (per session) FEAT analyses (and not the 2nd- or higher-level analyses)
- p01 refers to participant 1 and s1 refers to session/scan 1

Note that when FEAT is eventually run, it will automatically create a new directory called /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/p01_s1.feat for you, containing the output of this particular analysis. If the directory structure does not exist, FEAT will try to make it; you do not need to make it yourself in advance.
Repetition Time (TR)

For this experiment make sure that the TR is set to 2.0s. If FEAT can read the TR from the header information it will try to set it automatically; if not, you will need to set it manually.

High pass filter cutoff

Set 'High pass filter cutoff' to 60 sec (i.e. 50% greater than the OFF+ON length of time).
Set the following:

Set the following:

Select the 'Full model setup' option, and then on the 'EVs' tab:

On the 'Contrasts' tab:
Check the plot of the design that will be generated, and then click on the image to dismiss it.

Change the 'Thresholding' pull-down option to 'Uncorrected' and leave the P threshold value at p<0.05.

Thresholding and processing time

Note this is not the correct thresholding that you will want at the final (third) stage of processing (where you will probably want 'Cluster thresholding'), but for the convenience of the workshop it will speed up the processing per run.

Set the following:
Select your skull-stripped T1 (T1_brain.nii.gz) as the main structural image, with 'Linear Options: Normal search, BBR'.

The model should now be set up with all the correct details and be ready to be analysed.
Hit the GO button!

Running FSL on BlueBEAR

FSL jobs are now submitted in an automated way to a back-end high-performance computing cluster on BlueBEAR for execution. Processing time for this analysis will vary, but will probably be about 5 mins per run.

FEAT has a built-in progress watcher, the 'FEAT Report', which you can open in a web browser.
To do that, you need to navigate inside the p01_s1.feat folder from the BlueBEAR Portal as below, and from there select the report.html file and either open it in a new tab or a new window.
Watch the webpage for progress. Refresh the page to update, and click the links (tabs near the top of the page) to see the results when available (the 'STILL RUNNING' message will disappear).

Example FEAT Reports for processes that are still running, and which have completed.

After it has completed, first look at the webpage, click on the various links and try to understand what each part means.

Now let's use FSLeyes to look at the output in more detail. To do that you will need to open a separate terminal and load FSLeyes:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/recon/p01
module load FSL/6.0.5.1-foss-2021a-fslpython
module load FSLeyes/1.3.3-foss-2021a
fsleyes &
Open the p01_s1.feat
folder and select the filtered_func_data
(this is the fMRI data after it has been preprocessed by motion correction etc).
Put FSLeyes into movie mode and see if you can identify areas that change in activity.
Now, add the thresh_zstat1 image and try to identify the time course of the stimulation in some of the most highly activated voxels. You should remember how to complete the above tasks from previous workshops. You can also use the “camera” icon to take a snapshot of the results.
Let's have a look and see the effects that other parameters have on the data. To do this:
- reopen FEAT (Feat &)
- load the design.fsf
file in the p01_s1.feat
directory for the first participant.

Note that each time you rerun FEAT, it creates a new folder with a '+' sign in the name, so you will have folders rather messily named 'p01_s1.feat', 'p01_s1+.feat', 'p01_s1++.feat', and so on. This is rather wasteful of your precious quota space, so you should delete unnecessary ones after looking at them.
For example, if you wanted to remove all the '+' FEAT directories for participant 1:
cd /rds/projects/c/chechlmy-chbh-mricn/xxx/feat/1/
rm -rf *+.feat
You might also want to change the previous output directory to a more meaningful name, to make it more obvious what parameter has changed, e.g. p01_s1_motion_off.feat.
For participant 2, you will need to repeat the main steps above, as you did for p01.

To rerun a FEAT analysis, rather than re-entering all the model details, reload the setup saved from the p01 analysis. Now change the input 4D file, the output directory name, and the registration details (the BET'ed reoriented T1 for participant 2), and hit 'Go'.
Design files

You can also save the design files (design.fsf) using the 'Save' button on the FEAT GUI. You can then edit this in a text editor, which is useful when running large group studies. You can also run FEAT from the command line by giving it a design file to use, e.g., feat my_saved_design.fsf. We will take a look at modifying the design.fsf files directly in the Functional Connectivity workshop.
Running a first-level analysis on the remaining participants

In your own time, you should analyse the remaining participants as above.

Remember:

There are therefore 29 separate analyses that need to be done.
Scripting your analysis

It will seem laborious to re-write and re-run 29 separate FEAT analyses; a much quicker way is to script our analyses using bash. If you would like, try scripting your analyses! Contact one of the course TAs or convenors if you are stuck!
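As a first step towards such a script, you can enumerate the remaining analyses and confirm the count of 29 (15 participants, two sessions each, minus p01 session 1, which was analysed above). The session-to-file pairing (s1 ➜ fmri1, s2 ➜ fmri2) is an assumption based on the naming convention used in this workshop:

```shell
# Sketch: list the remaining first-level analyses and count them.
# Assumes two sessions per participant, matching the 29 quoted above.
count=0
for p in p01 p02 p03 p04 p05 p06 p07 p08 p09 p10 p11 p12 p13 p14 p15; do
    for s in 1 2; do
        if [ "${p}_s${s}" != "p01_s1" ]; then   # p01 session 1 is already done
            echo "to do: ${p}_s${s} (input ${p}/fmri${s}.nii.gz)"
            count=$((count + 1))
        fi
    done
done
echo "total: ${count} analyses"   # prints: total: 29 analyses
```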
As always, help and further information is also available on the relevant section of the FSL Wiki.