{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Diffusion Analysis with MRtrix: Part 3 - Registration and Streamline Fitting\n",
"\n",
"\n",
"This notebook is the third in a series on Diffusion Analysis using MRtrix and builds upon the data generated in **Diffusion Analysis with MRtrix: Part 2 - Constrained Spherical Deconvolution and Tissue estimation**. The data necessary for this example will be downloaded from OSF. \n",
"\n",
"\n",
"####\n",
"Author: Monika Doerig\n",
"\n",
"Citation:\n",
"\n",
"__Andy's Brain Book:__\n",
"\n",
"- This MRtrix example is based on the [Diffusion Analysis with MRtrix](https://andysbrainbook.readthedocs.io/en/latest/MRtrix/MRtrix_Introduction.html#) chapter from Andy’s Brain Book (Jahn, 2022. [doi:10.5281/zenodo.5879293](https://zenodo.org/records/5879294))\n",
"\n",
"__Original Data from OpenNeuro:__\n",
"- Hannelore Aerts and Daniele Marinazzo (2018). BTC_preop. [OpenNeuro Dataset ds001226](https://openneuro.org/datasets/ds001226/versions/00001)\n",
"\n",
"__Diffusion Data from the Previous Notebook (Available on OSF):__\n",
"- Dörig, M. (2024, November 19). Diffusion MRI Analysis with MRtrix: An Interactive Three-Part Series on Neurodesk. Retrieved from [OSF](https://osf.io/y2dq4/)\n",
"\n",
"__MRtrix3:__ \n",
"- Tournier, J.-D.; Smith, R. E.; Raffelt, D.; Tabbara, R.; Dhollander, T.; Pietsch, M.; Christiaens, D.; Jeurissen, B.; Yeh, C.-H. & Connelly, A. MRtrix3: A fast, flexible and open software framework for medical image processing and visualisation. NeuroImage, 2019, 202, 116137. https://doi.org/10.1016/j.neuroimage.2019.116137\n",
"- For more details: https://www.mrtrix.org/\n",
"\n",
"__Fury - Free Unified Rendering in Python:__\n",
"- Eleftherios Garyfallidis, Serge Koudoro, Javier Guaje, Marc-Alexandre Côté, Soham Biswas, David Reagan, Nasim Anousheh, Filipi Silva, Geoffrey Fox, and Fury Contributors. \"FURY: advanced scientific visualization.\" Journal of Open Source Software 6, no. 64 (2021): 3384. https://doi.org/10.21105/joss.03384\n",
"- https://fury.gl/latest/index.html"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Neurodesk"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"%%capture\n",
"import os\n",
"import sys\n",
"IN_COLAB = 'google.colab' in sys.modules\n",
"\n",
"if IN_COLAB:\n",
" os.environ[\"LD_PRELOAD\"] = \"\";\n",
" os.environ[\"APPTAINER_BINDPATH\"] = \"/content,/tmp,/cvmfs\"\n",
" os.environ[\"MPLCONFIGDIR\"] = \"/content/matplotlib-mpldir\"\n",
" os.environ[\"LMOD_CMD\"] = \"/usr/share/lmod/lmod/libexec/lmod\"\n",
"\n",
" !curl -J -O https://raw.githubusercontent.com/NeuroDesk/neurocommand/main/googlecolab_setup.sh\n",
" !chmod +x googlecolab_setup.sh\n",
" !./googlecolab_setup.sh\n",
"\n",
" os.environ[\"MODULEPATH\"] = ':'.join(map(str, list(map(lambda x: os.path.join(os.path.abspath('/cvmfs/neurodesk.ardc.edu.au/neurodesk-modules/'), x),os.listdir('/cvmfs/neurodesk.ardc.edu.au/neurodesk-modules/')))))"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"vendor_id\t: GenuineIntel\n",
"model name\t: Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz\n"
]
}
],
"source": [
"# Output CPU information:\n",
"!cat /proc/cpuinfo | grep 'vendor' | uniq\n",
"!cat /proc/cpuinfo | grep 'model name' | uniq"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Import Python Modules"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"%%capture\n",
"! pip install nibabel matplotlib fury"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import subprocess\n",
"import nibabel as nib\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import tempfile\n",
"from ipyniivue import AnyNiivue, SliceType\n",
"from IPython.display import display, Markdown\n",
"import ipywidgets as widgets\n",
"from fury import colormap\n",
"import random"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction"
]
},
{
"cell_type": "markdown",
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"source": [
"In this third notebook of the series, we focus on coregistering the diffusion and anatomical images and generating streamlines, building on the concepts introduced earlier."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load packages"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['mrtrix3/3.0.4', 'fsl/6.0.7.4']"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import lmod\n",
"await lmod.load('mrtrix3/3.0.4')\n",
"await lmod.load('fsl/6.0.7.4')\n",
"await lmod.list()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Downloading the data from MRtrix Part 2"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"100%|███████████████████████████████████| 75.4M/75.4M [00:00<00:00, 105Mbytes/s]\n",
"100%|██████████████████████████████████| 2.22M/2.22M [00:00<00:00, 53.8Mbytes/s]\n",
"100%|██████████████████████████████████| 2.22M/2.22M [00:00<00:00, 44.5Mbytes/s]\n",
"100%|██████████████████████████████████| 99.5M/99.5M [00:01<00:00, 88.8Mbytes/s]\n",
"100%|█████████████████████████████████████| 226M/226M [00:01<00:00, 126Mbytes/s]\n"
]
}
],
"source": [
"! osf -p y2dq4 fetch fod/5tt_nocoreg.mif MRtrix_fod/5tt_nocoreg.mif\n",
"! osf -p y2dq4 fetch fod/csffod_norm.mif MRtrix_fod/csffod_norm.mif\n",
"! osf -p y2dq4 fetch fod/gmfod_norm.mif MRtrix_fod/gmfod_norm.mif\n",
"! osf -p y2dq4 fetch fod/wmfod_norm.mif MRtrix_fod/wmfod_norm.mif\n",
"! osf -p y2dq4 fetch fod/sub-02_den_preproc_unbiased.mif MRtrix_fod/sub-02_den_preproc_unbiased.mif"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'/home/jovyan/Git_repositories/MRtrix_fod'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Save the original directory\n",
"original_dir = os.getcwd()\n",
"# Navigate to the dowloaded data\n",
"os.chdir(\"MRtrix_fod\")\n",
"os.getcwd()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Try typing one of the commands from the library, such as ```mrconvert```. If MRtrix has been installed correctly, you should see the help page printed by default when no arguments are passed to the command:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"MRtrix 3.0.4 mrconvert Mar 20 2024\n",
"\n",
" mrconvert: part of the MRtrix3 package\n",
"\n",
"SYNOPSIS\n",
"\n",
" Perform conversion between different file types and optionally extract a\n",
" subset of the input image\n",
"\n",
"USAGE\n",
"\n",
" mrconvert [ options ] input output\n",
"\n",
" input the input image.\n",
"\n",
" output the output image.\n",
"\n",
"\n"
]
}
],
"source": [
"! mrconvert | head -n 18"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Coregistering the Diffusion and Anatomical Images\n",
"Our next step is to coregister the anatomical and diffusion-weighted images. This ensures that the boundaries of the tissue types are aligned with the boundaries of the diffusion-weighted images; even small differences in the location of the two scans can throw off the tractography results.\n",
"\n",
"We will first use the commands ```dwiextract``` and ```mrmath``` to average together the B0 images from the diffusion data. These are the images that look most like T2-weighted functional scans, since a diffusion gradient wasn’t applied during their acquisition - in other words, they were acquired with a b-value of zero. To see how this works, type the following command:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"dwiextract: [100%] extracting volumes\u001b[0K[0K\u001b[?7h\u001b[?7l\n",
"mrmath: [100%] preloading data for \"/tmp/mrtrix-tmp-hZcbyF.mif\"\u001b[0K[0K\u001b[?7h\u001b[?7l\n",
"mrmath: [100%] computing mean along axis 3...\u001b[0K[0K\u001b[?7h\u001b[?7l\n"
]
}
],
"source": [
"! dwiextract sub-02_den_preproc_unbiased.mif - -bzero | mrmath - mean mean_b0.mif -axis 3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### \n",
"There are two parts to this command, separated by a pipe (”|”). The left half of the command, ```dwiextract```, takes the preprocessed diffusion-weighted image as an input, and the -bzero option extracts the B0 images; the solitary - argument indicates that the output should be used as input for the second part of the command, to the right of the pipe. ```mrmath``` then takes these output B0 images and computes the mean along the 3rd axis, or the time dimension. In other words, if we start with an index of 0, then the number 3 indicates the 4th dimension, which simply means to average over all of the volumes.\n",
"\n",
"In order to carry out the coregistration between the diffusion and anatomical images, we will need to take a brief detour outside of MRtrix. The software package doesn’t have a coregistration command in its library, so we will need to use another software package’s commands instead. Although you can choose any one you want, we will focus here on FSL’s flirt command.\n",
"\n",
"The first step is to convert both the segmented anatomical image and the B0 images we just extracted:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"mrconvert: [100%] copying from \"mean_b0.mif\" to \"mean_b0.nii.gz\"\u001b[0K[0K\u001b[?7h\n",
"mrconvert: [100%] compressing image \"mean_b0.nii.gz\"\u001b[0K[0K\u001b[?7h\u001b[?7l\u001b[?7l\n",
"mrconvert: [100%] copying from \"5tt_nocoreg.mif\" to \"5tt_nocoreg.nii.gz\"\u001b[0K[0K\u001b[?7h\u001b[?7l\u001b[?7l\n",
"mrconvert: [100%] compressing image \"5tt_nocoreg.nii.gz\"\u001b[0K[0K\u001b[?7h\u001b[?7l\u001b[?7l\u001b[?7l\u001b[?7l\u001b[?7l\u001b[?7l\u001b[?7l\u001b[?7l\u001b[?7l\u001b[?7l\u001b[?7l\u001b[?7l\u001b[?7l\u001b[?7l\n"
]
}
],
"source": [
"! mrconvert mean_b0.mif mean_b0.nii.gz\n",
"! mrconvert 5tt_nocoreg.mif 5tt_nocoreg.nii.gz"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"####\n",
"Since ```flirt``` can only work with a single 3D image (not 4D datasets), we will use ```fslroi``` to extract the first volume of the segmented dataset, which corresponds to the Grey Matter segmentation"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"! fslroi 5tt_nocoreg.nii.gz 5tt_vol0.nii.gz 0 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We then use the ```flirt``` command to coregister the two datasets:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"! flirt -in mean_b0.nii.gz -ref 5tt_vol0.nii.gz -interp nearestneighbour -dof 6 -omat diff2struct_fsl.mat"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This command uses the grey matter segmentation (i.e., “5tt_vol0.nii.gz”) as the reference image, meaning that it stays stationary. The averaged B0 images are then moved to find the best fit with the grey matter segmentation. The output of this command, “diff2struct_fsl.mat”, contains the transformation matrix that was used to overlay the diffusion image on top of the grey matter segmentation.\n",
"\n",
"Now that we have generated our transformation matrix, we will need to convert it into a format that can be read by MRtrix. That is, we are now ready to travel back into MRtrix after briefly stepping outside of it. The command ```transformconvert``` does this:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"! transformconvert diff2struct_fsl.mat mean_b0.nii.gz 5tt_nocoreg.nii.gz flirt_import diff2struct_mrtrix.txt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that the above steps used the anatomical segmentation as the reference image. We did this because usually the coregistration is more accurate if the reference image has higher spatial resolution and sharper distinction between the tissue types. However, we also want to introduce as few edits and interpolations to the functional data as possible during preprocessing. Therefore, since we already have the steps to transform the diffusion image to the anatomical image, we can take the inverse of the transformation matrix to do the opposite - i.e., coregister the anatomical image to the diffusion image:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"mrtransform: [100%] copying from \"5tt_nocoreg.mif\" to \"5tt_coreg.mif\"...\u001b[0K[0K\u001b[?7h\u001b[?7l\u001b[?7l\u001b[?7l\u001b[?7l\u001b[?7l\u001b[?7l\n"
]
}
],
"source": [
"! mrtransform 5tt_nocoreg.mif -linear diff2struct_mrtrix.txt -inverse 5tt_coreg.mif"
]
},
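{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an aside, \"taking the inverse\" here is ordinary matrix inversion of the 4x4 affine: applying the inverse matrix to a coordinate undoes the original mapping. A minimal numpy sketch, using the \"diff2struct_fsl.mat\" file produced by flirt above:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"diff2struct = np.loadtxt('diff2struct_fsl.mat')   # diffusion -> structural\n",
"struct2diff = np.linalg.inv(diff2struct)          # structural -> diffusion\n",
"\n",
"# Round-tripping a homogeneous coordinate should return it (numerically) unchanged\n",
"point = np.array([10.0, -20.0, 30.0, 1.0])\n",
"print(struct2diff @ (diff2struct @ point))\n",
"```\n",
"\n",
"This is conceptually what the -inverse option of ```mrtransform``` does with the converted transformation, which lets us move the anatomical segmentation into diffusion space without resampling the diffusion data itself."
]
},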
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The resulting file, “5tt_coreg.mif”, can be loaded into ```mrview``` in order to examine the quality of the coregistration. This is the command to visualizie it with mrview:\n",
"```javascript\n",
"mrview sub-02_den_preproc_unbiased.mif -overlay.load 5tt_nocoreg.mif -overlay.colourmap 2 -overlay.load 5tt_coreg.mif -overlay.colourmap 1\n",
"```\n",
"\n",
"The “overlay.colourmap” options specify different color codes for each image that is loaded. In this case, the boundaries before coregistration will be depicted in blue, and the boundaries after coregistration will be shown in red. The change in the boundaries before and after coregistration may be very slight, but they will have a large effect on the rest of the steps that we do. Make sure to check the boundaries in all three views; you can also use the Tool -> Overlay menu to display or hide the different overlays.\n",
"\n",
"We will visualzie the coregistration result with AnyNiivue. The unregistered image is depicted in gray, the registered image in red:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "fb1222638d2341b996c5c680e59d37b0",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"VBox(children=(RadioButtons(description='Slice Type:', index=3, layout=Layout(width='auto'), options=(('Axial'…"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "534530d7dc6d4da08d43482f7d52a240",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"AnyNiivue()"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Define the volumes with fixed colormaps\n",
"volumes = [\n",
" {\"path\": \"5tt_nocoreg.mif\", \"colormap\": \"gray\", \"visible\": True, \"opacity\": 1.0},\n",
" {\"path\": \"5tt_coreg.mif\", \"colormap\": \"red\", \"visible\": True, \"opacity\": 1.0},\n",
"]\n",
"\n",
"# Create the AnyNiivue viewer\n",
"nv = AnyNiivue()\n",
"nv.load_volumes(volumes)\n",
"nv.crosshair_color = [1,1,0,1]\n",
"\n",
"# Slice type widget (Axial, Coronal, Sagittal, Multiplanar, Render)\n",
"widget_slice_type = widgets.RadioButtons(\n",
" options=[('Axial', 0), ('Coronal', 1), ('Sagittal', 2), ('Multiplanar', 3), ('Render', 4)],\n",
" value=3, # Default to Multiplanar\n",
" description='Slice Type:',\n",
" style={'description_width': 'initial'},\n",
" layout={'width': 'auto'}\n",
")\n",
"\n",
"# Function to update slice type\n",
"def update_slice_type(change):\n",
" nv.slice_type = change.new\n",
"\n",
"widget_slice_type.observe(update_slice_type, names='value')\n",
"\n",
"# Unregistered image opacity widget\n",
"widget_unreg_opacity = widgets.FloatSlider(\n",
" value=1.0, min=0.0, max=1.0, step=0.1,\n",
" description='Unregistered Image Opacity:',\n",
" orientation='horizontal',\n",
" style={'description_width': 'initial'},\n",
" layout={'width': '400px'}\n",
")\n",
"\n",
"# Function to update unregistered image opacity\n",
"def update_unreg_opacity(change):\n",
" nv.volumes[0].opacity = change.new\n",
"\n",
"widget_unreg_opacity.observe(update_unreg_opacity, names='value')\n",
"\n",
"# Registered image opacity widget\n",
"widget_reg_opacity = widgets.FloatSlider(\n",
" value=1.0, min=0.0, max=1.0, step=0.1,\n",
" description='Registered Image Opacity:',\n",
" orientation='horizontal',\n",
" style={'description_width': 'initial'},\n",
" layout={'width': '400px'}\n",
")\n",
"\n",
"# Function to update registered image opacity\n",
"def update_reg_opacity(change):\n",
" nv.volumes[1].opacity = change.new\n",
"\n",
"widget_reg_opacity.observe(update_reg_opacity, names='value')\n",
"\n",
"# Arrange widgets in a vertical box for better layout\n",
"widgets_box = widgets.VBox([widget_slice_type, widget_unreg_opacity, widget_reg_opacity])\n",
"\n",
"# Display the AnyNiivue viewer and widgets\n",
"display(widgets_box, nv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The last step to create the “seed” boundary - the boundary separating the grey from the white matter, which we will use to create the seeds for our streamlines - is created with the command ```5tt2gmwmi``` (which stands for “5 Tissue Type (segmentation) to Grey Matter / White Matter Interface)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"5tt2gmwmi: [100%] Generating GMWMI seed mask\u001b[0K[0K\u001b[?7h\u001b[?7l\u001b[?7l\u001b[?7l\n"
]
}
],
"source": [
"! 5tt2gmwmi 5tt_coreg.mif gmwmSeed_coreg.mif"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Again, we will check the result with mrview to make sure the interface is where we think it should be. This is the ```mrview```command:\n",
"```javascript\n",
"mrview sub-02_den_preproc_unbiased.mif -overlay.load gmwmSeed_coreg.mif\n",
"```\n",
"\n",
"We will visualize this boundary between the grey and white matter again with AnyNiivue: "
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "6fb97340edc8410e968996a04c975dbf",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"AnyNiivue()"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"nv2 = AnyNiivue() \n",
"nv2.load_volumes([{ \"path\": \"gmwmSeed_coreg.mif\", \"colormap\": \"green\"}])\n",
"nv2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have determined where the boundary is between the grey matter and the white matter, we are ready to begin generating streamlines in order to reconstruct the major white matter pathways of the brain."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Streamlines\n",
"Having created the interface between the white matter and the grey matter, we are ready to generate streamlines - threads that connect anatomically distinct regions of grey matter. These are estimates of the underlying white matter tracts, and MRtrix uses a probabilistic approach to do this; a large number of streamlines are generated for each voxel of the grey matter / white matter boundary, and then those streamlines are culled based on different criteria that we specify. We will discuss some of the most popular options below."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Anatomically Constrained Tractography\n",
"One of MRtrix’s features is Anatomically Constrained Tractography, or ACT. This method will only determine that a streamline is valid if it is biologically plausible. For example, a streamline that terminates in the cerebrospinal fluid will be discarded, since white matter tracts tend to both originate and terminate in grey matter. In other words, the streamlines will be constrained to the white matter. The effect of either including or omitting this step can be seen in the following figure:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"