"Create a directory in the users home directory to store the downloaded data:\n",
"\n",
"`expanduser` will expand the `~` to the be users home directory:"
"> **NOTE:** `expanduser` will expand the `~` to the be users home directory:"
]
},
{
...
...
@@ -91,7 +65,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Download the data (if not already downloaded). We use a method from [`nilearn`](https://nilearn.github.io/index.html) called `fetch_cobre` to download the fMRI data:"
"Download the data (if not already downloaded):\n",
"\n",
"> **Note:** We use a method from [`nilearn`](https://nilearn.github.io/index.html) called `fetch_cobre` to download the fMRI data"
"We use methods from [`nilearn`](https://nilearn.github.io/index.html) to regress confounds from the data (```clean_img```) and to spatially smooth the data with a gaussian filter of 10mm FWHM (```smooth_img```):"
"Regress confounds from the data and to spatially smooth the data with a gaussian filter of 10mm FWHM.\n",
"\n",
"> **Note:**\n",
"> 1. We use `clean_img` from the [`nilearn`](https://nilearn.github.io/index.html) package to regress confounds from the data\n",
"> 2. We use `smooth_img` from the [`nilearn`](https://nilearn.github.io/index.html) package to spatially smooth the data\n",
"> 3. `zip` takes iterables and aggregates them in a tuple. Here it is used to iterate through four lists simultaneously\n",
"> 4. We use list comprehension to loop through all the filenames and append suffixes\n"
]
},
{
...
...
@@ -135,7 +117,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"To run ```melodic``` we will need a brain mask in MNI152 space at the same resolution as the fMRI. Here we use [`nilearn`](https://nilearn.github.io/index.html) methods to load the MNI152 mask (```load_mni152_brain_mask```), resample to the resolution of the fMRI (```resample_to_img```), and binarize (```math_img```):"
"To run ```melodic``` we will need a brain mask in MNI152 space at the same resolution as the fMRI. \n",
"\n",
"> **Note:**\n",
"> 1. We use `load_mni152_brain_mask` from the [`nilearn`](https://nilearn.github.io/index.html) package to load the MNI152 mask\n",
"> 2. We use `resample_to_img` from the [`nilearn`](https://nilearn.github.io/index.html) package to resample the mask to the resolution of the fMRI \n",
"> 3. We use `math_img` from the [`nilearn`](https://nilearn.github.io/index.html) package to binarize the resample mask\n",
"> 4. The mask is plotted using `plot_anat` from the [`nilearn`](https://nilearn.github.io/index.html) package"
]
},
{
...
...
@@ -165,7 +153,12 @@
"<a class=\"anchor\" id=\"run-melodic\"></a>\n",
"### Run ```melodic```\n",
"\n",
"Generate a command line string and run group ```melodic``` on the smoothed fMRI with a dimension of 10 components:"
"Generate a command line string and run group ```melodic``` on the smoothed fMRI with a dimension of 10 components:\n",
"\n",
"> **Note**: \n",
"> 1. Here we use python [f-strings](https://www.python.org/dev/peps/pep-0498/), formally known as literal string interpolation, which allow for easy formatting\n",
"> 2. `op.join` will join path strings using the platform-specific directory separator\n",
"> 3. `','.join(smooth)` will create a comma seprated string of all the items in the list `smooth`"
]
},
{
...
...
@@ -179,6 +172,15 @@
"print(melodic_cmd)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Note:** \n",
"> 1. Here we use the `!` operator to execute the command in the shell\n",
"> 2. The `{}` will expand the contained python variable in the shell"
]
},
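{
"cell_type": "markdown",
"metadata": {},
"source": [
"A hypothetical illustration of this syntax (not part of the original analysis):\n",
"\n",
"```python\n",
"greeting = 'hello'   # an ordinary Python variable\n",
"!echo {greeting}     # expanded to `echo hello` and run in the shell\n",
"```"
]
},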
{
"cell_type": "code",
"execution_count": null,
...
...
@@ -198,7 +200,14 @@
"\n",
"Now we can load and plot the group ICs generated by ```melodic```.\n",
"\n",
"This function will be used to plot ICs:"
"This function will be used to plot ICs:\n",
"\n",
"> **NOTE:**\n",
"> 1. Here we use `plot_stat_map` from the `nilearn` package to plot the orthographic images\n",
"> 2. `subplots` from `matplotlib.pyplot` creates a figure and multiple subplots\n",
"> 3. `find_xyz_cut_coords` from the `nilearn` package will find the image coordinates of the center of the largest activation connected component\n",
"> 4. `zip` takes iterables and aggregates them in a tuple. Here it is used to iterate through two lists simultaneously\n",
"> 5. `iter_img` from the `nilearn` package creates an iterator from an image that steps through each volume/time-point of the image"
]
},
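{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of what such a function might look like (an illustration, not the original cell; the function name, threshold and figure sizing are assumptions):\n",
"\n",
"```python\n",
"import matplotlib.pyplot as plt\n",
"from nilearn import image, plotting\n",
"\n",
"def plot_ics(ics):\n",
"    # one row of orthographic views per IC\n",
"    n_ics = ics.shape[3]\n",
"    fig, axes = plt.subplots(n_ics, 1, figsize=(12, 2 * n_ics))\n",
"    # step through the ICs and their axes simultaneously\n",
"    for ax, ic in zip(axes, image.iter_img(ics)):\n",
"        # centre the cuts on the largest activation connected component\n",
"        coords = plotting.find_xyz_cut_coords(ic, activation_threshold=2.3)\n",
"        plotting.plot_stat_map(ic, cut_coords=coords, axes=ax, colorbar=False)\n",
"    return fig\n",
"```"
]
},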
{
...
...
%% Cell type:markdown id: tags:
# Fetch Data
This notebook will download an open fMRI dataset (~50MB) for use in the MIGP demo. It also regresses confounds from the data and performs spatial smoothing with 10mm FWHM.
This data is a derivative from the COBRE sample found in the International Neuroimaging Data-sharing Initiative (http://fcon_1000.projects.nitrc.org/indi/retro/cobre.html), originally released under Creative Commons - Attribution Non-Commercial.
It comprises 10 preprocessed resting-state fMRI datasets selected from 72 patients diagnosed with schizophrenia and 74 healthy controls (6mm isotropic, TR=2s, 150 volumes).
* [Download the data](#download-the-data)
* [Clean the data](#clean-the-data)
* [Run `melodic`](#run-melodic)
* [Plot group ICs](#plot-group-ics)
Firstly we will import the necessary packages for this notebook:
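The import cell itself is not shown in this excerpt; a typical set of imports for the steps below might look like this (a sketch, not the original cell):

``` python
import os
import os.path as op

from nilearn import datasets, image, plotting
```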
Create a directory in the user's home directory to store the downloaded data:
> **NOTE:** `expanduser` will expand the `~` to be the user's home directory:
%% Cell type:code id: tags:
``` python
data_dir = op.expanduser('~/nilearn_data')  # location for the downloaded data
if not op.exists(data_dir):
    os.makedirs(data_dir)
```
%% Cell type:markdown id: tags:
Download the data (if not already downloaded):
> **Note:** We use a method from [`nilearn`](https://nilearn.github.io/index.html) called `fetch_cobre` to download the fMRI data
%% Cell type:code id: tags:
``` python
d = datasets.fetch_cobre(data_dir=data_dir)
```
%% Cell type:markdown id: tags:
<aclass="anchor"id="clean-the-data"></a>
## Clean the data
Regress confounds from the data and spatially smooth it with a Gaussian filter of 10mm FWHM.
> **Note:**
> 1. We use `clean_img` from the [`nilearn`](https://nilearn.github.io/index.html) package to regress confounds from the data
> 2. We use `smooth_img` from the [`nilearn`](https://nilearn.github.io/index.html) package to spatially smooth the data
> 3. `zip` takes iterables and aggregates them in a tuple. Here it is used to iterate through four lists simultaneously
> 4. We use list comprehension to loop through all the filenames and append suffixes
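A minimal sketch of how these pieces might fit together (not the original cell; the `d.func`/`d.confounds` attributes and the `_clean`/`_smooth` filename suffixes are assumptions):

``` python
import pandas as pd
from nilearn import image

# assumed output filenames, built with list comprehensions
clean  = [f.replace('.nii.gz', '_clean.nii.gz')  for f in d.func]
smooth = [f.replace('.nii.gz', '_smooth.nii.gz') for f in d.func]

# zip lets us step through the four lists simultaneously
for func_file, conf_file, clean_file, smooth_file in zip(d.func, d.confounds, clean, smooth):
    # regress the confounds out of the fMRI data
    conf = pd.read_csv(conf_file, sep='\t').values
    cleaned = image.clean_img(func_file, confounds=conf)
    cleaned.to_filename(clean_file)
    # spatially smooth with a 10mm FWHM Gaussian filter
    image.smooth_img(cleaned, 10).to_filename(smooth_file)
```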
%% Cell type:code id: tags:
``` python
# Create a list of filenames for cleaned and smoothed data
To run ```melodic``` we will need a brain mask in MNI152 space at the same resolution as the fMRI.
> **Note:**
> 1. We use `load_mni152_brain_mask` from the [`nilearn`](https://nilearn.github.io/index.html) package to load the MNI152 mask
> 2. We use `resample_to_img` from the [`nilearn`](https://nilearn.github.io/index.html) package to resample the mask to the resolution of the fMRI
> 3. We use `math_img` from the [`nilearn`](https://nilearn.github.io/index.html) package to binarize the resampled mask
> 4. The mask is plotted using `plot_anat` from the [`nilearn`](https://nilearn.github.io/index.html) package
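A minimal sketch of these steps (not the original cell; the mask filename is an assumption, and `smooth` and `data_dir` come from the earlier steps):

``` python
import os.path as op
from nilearn import datasets, image, plotting

# load the MNI152 brain mask and resample it to the fMRI resolution
mask = datasets.load_mni152_brain_mask()
mask = image.resample_to_img(mask, smooth[0])
# re-binarize after the (continuous) resampling
mask = image.math_img('img > 0.5', img=mask)
# save the mask for melodic and check it visually
mask_file = op.join(data_dir, 'brain_mask.nii.gz')
mask.to_filename(mask_file)
plotting.plot_anat(mask)
```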
Generate a command line string and run group ```melodic``` on the smoothed fMRI with a dimension of 10 components:
> **Note**:
> 1. Here we use python [f-strings](https://www.python.org/dev/peps/pep-0498/), formally known as literal string interpolation, which allow for easy formatting
> 2. `op.join` will join path strings using the platform-specific directory separator
> 3. `','.join(smooth)` will create a comma-separated string of all the items in the list `smooth`
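A minimal sketch of such a command string (not the original cell; the output directory, mask file and melodic options shown are assumptions):

``` python
import os.path as op

melodic_dir = op.join(data_dir, 'melodic')   # output directory for melodic

melodic_cmd = (
    f"melodic -i {','.join(smooth)} "        # comma-separated list of smoothed inputs
    f"-m {mask_file} "                       # brain mask from the previous step
    f"-o {melodic_dir} "                     # output directory
    f"-d 10 --tr=2 --nobet -a concat"        # 10 components, group (concatenation) ICA
)
print(melodic_cmd)
```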