%% Cell type:markdown id: tags:
# MIGP
For group ICA, `melodic` uses multi-session temporal concatenation. This performs a single 2D ICA run on the concatenated data matrix, obtained by stacking the 2D data matrices of every single dataset on top of each other.
![temporal concatenation](concat_diag.png)
This results in a **high-dimensional** dataset!
Furthermore, with ICA we are typically only interested in a comparatively low-dimensional decomposition, so that we can capture spatially extended networks.
Therefore the first step is to reduce the dimensionality of the data. This can be achieved in a number of ways, but `melodic`, by default, uses `MIGP`.
> MIGP is an incremental approach that aims to provide a very close approximation to full temporal concatenation followed by PCA, but without the large memory requirements *(Smith et al., 2014)*.
Essentially, MIGP stacks the datasets incrementally in the temporal dimension, and whenever the temporal dimension exceeds a specified size, a PCA-based temporal reduction is performed.
> MIGP does not increase at all in memory requirement with increasing numbers of subjects, no large matrices are ever formed, and the computation time scales linearly with the number of subjects. It is easily parallelisable, simply by applying the approach in parallel to subsets of subjects, and then combining across these with the same “concatenate and reduce” approach described above *(Smith et al., 2014)*.
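To make this concrete, here is a minimal sketch of the "concatenate and reduce" step (an illustration only; the helper name `concatenate_and_reduce` is our own, and the full version used in this practical appears in the *Python MIGP* notebook below):
%% Cell type:code id: tags:
``` python
import numpy as np
from scipy.sparse.linalg import eigs

def concatenate_and_reduce(W, new_subject, dPCA_int=299):
    """Stack one subject's (time x voxel) matrix onto the running matrix W,
    then reduce back to dPCA_int temporal dimensions whenever W grows too large."""
    W = new_subject if W is None else np.concatenate((W, new_subject), axis=0)
    if W.shape[0] - 10 > dPCA_int:
        # eigenvectors of the small (time x time) covariance matrix give the
        # PCA-based temporal reduction
        u = np.real(eigs(W @ W.T, dPCA_int)[1])
        W = u.T @ W
    return W
```
%% Cell type:markdown id: tags: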
## This notebook
This notebook will download an open fMRI dataset (~50MB) for use in the MIGP demo, regress confounds from the data, perform spatial smoothing with a 10mm FWHM Gaussian kernel, and then run group `melodic` with `MIGP`.
* [Fetch the data](#download-the-data)
* [Clean the data](#clean-the-data)
* [Run `melodic`](#run-melodic)
* [Plot group ICs](#plot-group-ics)
Firstly we will import the necessary packages for this notebook:
%% Cell type:code id: tags:
``` python
from nilearn import datasets
from nilearn import image
from nilearn import plotting
import nibabel as nb
import numpy as np
import os.path as op
import os
import glob
import matplotlib.pyplot as plt
```
%% Cell type:markdown id: tags:
<a class="anchor" id="download-the-data"></a>
## Fetch the data
This data is derived from the [COBRE](http://fcon_1000.projects.nitrc.org/indi/retro/cobre.html) sample found in the International Neuroimaging Data-sharing Initiative, originally released under Creative Commons - Attribution Non-Commercial.
It comprises 10 preprocessed resting-state fMRI datasets, selected from 72 patients diagnosed with schizophrenia and 74 healthy controls (6mm isotropic, TR=2s, 150 volumes).
Create a directory in the user's home directory to store the downloaded data:
> **NOTE:** [`expanduser`](https://docs.python.org/3.7/library/os.path.html#os.path.expanduser) will expand the `~` to be the user's home directory:
%% Cell type:code id: tags:
``` python
data_dir = op.expanduser('~/nilearn_data')
if not op.exists(data_dir):
    os.makedirs(data_dir)
```
%% Cell type:markdown id: tags:
Download the data (if not already downloaded):
> **Note:** We use a method from [`nilearn`](https://nilearn.github.io/index.html) called [`fetch_cobre`](https://nilearn.github.io/modules/generated/nilearn.datasets.fetch_cobre.html) to download the fMRI data
%% Cell type:code id: tags:
``` python
d = datasets.fetch_cobre(data_dir=data_dir)
```
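%% Cell type:markdown id: tags:
If you want a quick look at what was fetched, the returned object exposes the downloaded files as lists of paths (a brief, hedged peek; only the `func` and `confounds` attributes are relied on below, and other attributes may vary between `nilearn` versions):
%% Cell type:code id: tags:
``` python
# the functional images and confound files are lists of file paths
print(len(d.func), 'functional images')
print(len(d.confounds), 'confound files')
print(d.func[0])  # path to the first subject's fMRI NIfTI file
```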
%% Cell type:markdown id: tags:
<a class="anchor" id="clean-the-data"></a>
## Clean the data
Regress confounds from the data and spatially smooth the data with a Gaussian filter of 10mm FWHM.
> **Note:**
> 1. We use [`clean_img`](https://nilearn.github.io/modules/generated/nilearn.image.clean_img.html) from the [`nilearn`](https://nilearn.github.io/index.html) package to regress confounds from the data
> 2. We use [`smooth_img`](https://nilearn.github.io/modules/generated/nilearn.image.smooth_img.html) from the [`nilearn`](https://nilearn.github.io/index.html) package to spatially smooth the data
> 3. [`zip`](https://docs.python.org/3.3/library/functions.html#zip) takes iterables and aggregates them in a tuple. Here it is used to iterate through four lists simultaneously
> 4. We use list comprehension to loop through all the filenames and append suffixes
%% Cell type:code id: tags:
``` python
# Create a list of filenames for cleaned and smoothed data
clean = [f.replace('.nii.gz', '_clean.nii.gz') for f in d.func]
smooth = [f.replace('.nii.gz', '_clean_smooth.nii.gz') for f in d.func]
# loop through each subject, regress confounds and smooth
for img, cleaned, smoothed, conf in zip(d.func, clean, smooth, d.confounds):
    print(f'{img}: regress confounds: ', end='')
    image.clean_img(img, confounds=conf).to_filename(cleaned)
    print('smooth.')
    # smooth the cleaned image so the '_clean_smooth' file is both cleaned and smoothed
    image.smooth_img(cleaned, 10).to_filename(smoothed)
```
%% Cell type:markdown id: tags:
To run ```melodic``` we will need a brain mask in MNI152 space at the same resolution as the fMRI.
> **Note:**
> 1. We use [`load_mni152_brain_mask`](https://nilearn.github.io/modules/generated/nilearn.datasets.load_mni152_brain_mask.html) from the [`nilearn`](https://nilearn.github.io/index.html) package to load the MNI152 mask
> 2. We use [`resample_to_img`](https://nilearn.github.io/modules/generated/nilearn.image.resample_to_img.html) from the [`nilearn`](https://nilearn.github.io/index.html) package to resample the mask to the resolution of the fMRI
> 3. We use [`math_img`](https://nilearn.github.io/modules/generated/nilearn.image.math_img.html) from the [`nilearn`](https://nilearn.github.io/index.html) package to binarize the resampled mask
> 4. The mask is plotted using [`plot_anat`](https://nilearn.github.io/modules/generated/nilearn.plotting.plot_anat.html) from the [`nilearn`](https://nilearn.github.io/index.html) package
%% Cell type:code id: tags:
``` python
# load a single fMRI dataset (func0)
func0 = nb.load(d.func[0].replace('.nii.gz', '_clean_smooth.nii.gz'))
# load MNI152 brainmask, resample to func0 resolution, binarize, and save to nifti
mask = datasets.load_mni152_brain_mask()
mask = image.resample_to_img(mask, func0)
mask = image.math_img('img > 0.5', img=mask)
mask.to_filename(op.join(data_dir, 'brain_mask.nii.gz'))
# plot brainmask to make sure it looks OK
disp = plotting.plot_anat(image.index_img(func0, 0))
disp.add_contours(mask, threshold=0.5)
```
%% Cell type:markdown id: tags:
<a class="anchor" id="run-melodic"></a>
### Run ```melodic```
Generate a command line string and run group ```melodic``` on the smoothed fMRI with a dimension of 10 components:
> **Note**:
> 1. Here we use python [f-strings](https://www.python.org/dev/peps/pep-0498/), formally known as literal string interpolation, which allow for easy formatting
> 2. [`op.join`](https://docs.python.org/3.7/library/os.path.html#os.path.join) will join path strings using the platform-specific directory separator
> 3. [`','.join(smooth)`](https://docs.python.org/3/library/stdtypes.html#str.join) will create a comma-separated string of all the items in the list `smooth`
%% Cell type:code id: tags:
``` python
# generate melodic command line string
melodic_cmd = f"melodic -i {','.join(smooth)} --mask={op.join(data_dir, 'brain_mask.nii.gz')} -d 10 -v -o cobre.gica "
print(melodic_cmd)
```
%% Cell type:markdown id: tags:
> **Note:**
> 1. Here we use the `!` operator to execute the command in the shell
> 2. The `{}` will expand the contained python variable in the shell
%% Cell type:code id: tags:
``` python
# run melodic
! {melodic_cmd}
```
%% Cell type:markdown id: tags:
<a class="anchor" id="plot-group-ics"></a>
### Plot group ICs
Now we can load and plot the group ICs generated by ```melodic```.
This function will be used to plot ICs:
> **NOTE:**
> 1. Here we use [`plot_stat_map`](https://nilearn.github.io/modules/generated/nilearn.plotting.plot_stat_map.html) from the [`nilearn`](https://nilearn.github.io/index.html) package to plot the orthographic images
> 2. [`subplots`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.subplots.html) from `matplotlib.pyplot` creates a figure and multiple subplots
> 3. [`find_xyz_cut_coords`](https://nilearn.github.io/modules/generated/nilearn.plotting.find_xyz_cut_coords.html) from the [`nilearn`](https://nilearn.github.io/index.html) package will find the image coordinates of the center of the largest activation connected component
> 4. [`zip`](https://docs.python.org/3.3/library/functions.html#zip) takes iterables and aggregates them in a tuple. Here it is used to iterate through two lists simultaneously
> 5. [`iter_img`](https://nilearn.github.io/modules/generated/nilearn.image.iter_img.html) from the [`nilearn`](https://nilearn.github.io/index.html) package creates an iterator from an image that steps through each volume/time-point of the image
%% Cell type:code id: tags:
``` python
def map_plot(d):
    N = d.shape[-1]
    fig, ax = plt.subplots(int(np.ceil(N / 2)), 2, figsize=(12, N))
    for img, ax0 in zip(image.iter_img(d), ax.ravel()):
        coord = plotting.find_xyz_cut_coords(img, activation_threshold=3.5)
        plotting.plot_stat_map(img, cut_coords=coord, vmax=10, axes=ax0)
    return fig
```
%% Cell type:markdown id: tags:
Hopefully you can see some familiar looking RSN spatial patterns:
%% Cell type:code id: tags:
``` python
# Load ICs
ics = nb.load('cobre.gica/melodic_IC.nii.gz')
# plot
fig = map_plot(ics)
```
%% Cell type:markdown id: tags:
## Matlab MIGP
This notebook will load the dimension reduced data from Matlab MIGP, run group ICA, and then plot the group ICs.
* [Run `melodic`](#run-matlab-melodic)
* [Plot group ICs](#plot-matlab-group-ics)
Firstly we will import the necessary packages for this notebook:
%% Cell type:code id: tags:
``` python
from nilearn import plotting
from nilearn import image
import nibabel as nb
import matplotlib.pyplot as plt
import numpy as np
import os.path as op
```
%% Cell type:markdown id: tags:
It will be necessary to know the location where the data was stored so that we can load the brainmask:
> **Note**: [`expanduser`](https://docs.python.org/3.7/library/os.path.html#os.path.expanduser) will expand the `~` to be the user's home directory
%% Cell type:code id: tags:
``` python
data_dir = op.expanduser('~/nilearn_data')
```
%% Cell type:markdown id: tags:
<a class="anchor" id="run-matlab-melodic"></a>
### Run ```melodic```
Generate a command line string and run group ```melodic``` on the Matlab MIGP dimension reduced data with a dimension of 10 components. We disable MIGP because it was already run separately in Matlab.
> **Note**:
> 1. Here we use python [f-strings](https://www.python.org/dev/peps/pep-0498/), formally known as literal string interpolation, which allow for easy formatting
> 2. [`op.join`](https://docs.python.org/3.7/library/os.path.html#os.path.join) will join path strings using the platform-specific directory separator
%% Cell type:code id: tags:
``` python
# generate melodic command line string
melodic_cmd = f"melodic -i matMIGP.nii.gz --mask={op.join(data_dir, 'brain_mask.nii.gz')} -d 10 -v --nobet --disableMigp -o matmigp.gica"
print(melodic_cmd)
```
%% Cell type:markdown id: tags:
> **Note:**
> 1. Here we use the `!` operator to execute the command in the shell
> 2. The `{}` will expand the contained python variable in the shell
%% Cell type:code id: tags:
``` python
# run melodic
! {melodic_cmd}
```
%% Cell type:markdown id: tags:
<a class="anchor" id="plot-matlab-group-ics"></a>
### Plot group ICs
Now we can load and plot the group ICs generated by ```melodic```.
This function will be used to plot ICs:
> **NOTE:**
> 1. Here we use [`plot_stat_map`](https://nilearn.github.io/modules/generated/nilearn.plotting.plot_stat_map.html) from the [`nilearn`](https://nilearn.github.io/index.html) package to plot the orthographic images
> 2. [`subplots`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.subplots.html) from `matplotlib.pyplot` creates a figure and multiple subplots
> 3. [`find_xyz_cut_coords`](https://nilearn.github.io/modules/generated/nilearn.plotting.find_xyz_cut_coords.html) from the [`nilearn`](https://nilearn.github.io/index.html) package will find the image coordinates of the center of the largest activation connected component
> 4. [`zip`](https://docs.python.org/3.3/library/functions.html#zip) takes iterables and aggregates them in a tuple. Here it is used to iterate through two lists simultaneously
> 5. [`iter_img`](https://nilearn.github.io/modules/generated/nilearn.image.iter_img.html) from the [`nilearn`](https://nilearn.github.io/index.html) package creates an iterator from an image that steps through each volume/time-point of the image
%% Cell type:code id: tags:
``` python
def map_plot(d):
    N = d.shape[-1]
    fig, ax = plt.subplots(int(np.ceil(N / 2)), 2, figsize=(12, N))
    for img, ax0 in zip(image.iter_img(d), ax.ravel()):
        coord = plotting.find_xyz_cut_coords(img, activation_threshold=3.5)
        plotting.plot_stat_map(img, cut_coords=coord, vmax=10, axes=ax0)
    return fig
```
%% Cell type:markdown id: tags:
Hopefully you can see some familiar looking RSN spatial patterns:
%% Cell type:code id: tags:
``` python
ics = nb.load('matmigp.gica/melodic_IC.nii.gz')
fig = map_plot(ics)
```
%% Cell type:markdown id: tags:
## Python MIGP
This notebook will perform *python* MIGP dimension reduction, run group ICA, and then plot the group ICs.
* [Run python `MIGP`](#run-python-migp)
* [Run `melodic`](#run-python-melodic)
* [Plot group ICs](#plot-python-group-ics)
Firstly we will import the necessary packages for this notebook:
%% Cell type:code id: tags:
``` python
import glob
import random
import nibabel as nb
import numpy as np
from scipy.sparse.linalg import svds, eigs
import matplotlib.pyplot as plt
from nilearn import plotting
from nilearn import image
import os.path as op
```
%% Cell type:markdown id: tags:
It will be necessary to know the location where the data was stored so that we can load the brainmask.
> **Note:** [`expanduser`](https://docs.python.org/3.7/library/os.path.html#os.path.expanduser) will expand the `~` to be the user's home directory:
%% Cell type:code id: tags:
``` python
data_dir = op.expanduser('~/nilearn_data')
```
%% Cell type:markdown id: tags:
<a class="anchor" id="run-python-migp"></a>
### Run python `MIGP`
Firstly we need to set the MIGP parameters:
> **Note:**
> 1. [`glob.glob`](https://docs.python.org/3/library/glob.html) will create a list of filenames that match the glob/wildcard pattern
> 2. [`nb.load`](https://nipy.org/nibabel/gettingstarted.html) from the [`nibabel`](https://nipy.org/nibabel/index.html) package will load the image into a [`nibabel.Nifti1Image`](https://nipy.org/nibabel/reference/nibabel.nifti1.html) object. Note that this will not load the actual image data into memory.
> 3. We use a list comprehension to loop through all the filenames and load them with [`nibabel`](https://nipy.org/nibabel/index.html)
%% Cell type:code id: tags:
``` python
# create lists of (nibabel) image objects
in_list = [nb.load(f) for f in glob.glob(f'{data_dir}/cobre/fmri_*_smooth.nii.gz')]
in_mask = nb.load(f'{data_dir}/brain_mask.nii.gz')
# set user parameters (equivalent to melodic defaults)
GO = 'pyMIGP.nii.gz' # output filename
dPCA_int = 299 # internal number of components - typically 2-4 times the number of timepoints in each run (if you have enough RAM for that)
dPCA_out = 299 # number of eigenvectors to output - should be less than dPCA_int and more than the final ICA dimensionality
sep_vn = False # switch on separate variance normalisation for each input dataset
```
%% Cell type:markdown id: tags:
> **Note:**
> 1. [`random.shuffle`](https://docs.python.org/3.7/library/random.html#random.shuffle) will shuffle a list, in this instance it shuffles the list of [`nibabel.Nifti1Image`](https://nipy.org/nibabel/reference/nibabel.nifti1.html) objects
> 2. [`ravel`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html) will unfold an n-d array into a vector. Similar to the `:` operator in Matlab
> 3. [`reshape`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html) works similarly to reshape in Matlab, but be careful because the default order is different from Matlab (see the short example after this list)
> 4. [`.T`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.T.html) does a transpose in `numpy`
> 5. The final element of an array is indexed with `-1` in `numpy`, as opposed to `end` in Matlab
> 6. [`svds`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.svds.html) and [`eigs`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.eigs.html?highlight=eigs#scipy.sparse.linalg.eigs) come from the [`scipy.sparse.linalg`](https://docs.scipy.org/doc/scipy/reference/sparse.linalg.html) package
> 7. [`svds`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.svds.html) and [`eigs`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.eigs.html?highlight=eigs#scipy.sparse.linalg.eigs) are very similar to their Matlab counterparts, but be careful because Matlab `svds` returns $U$, $S$, and $V$, whereas python `svds` returns $U$, $S$, and $V^T$
> 8. We index into the output of `eigs(W@W.T, dPCA_int)[1]` to only return the 2nd output (index 1)
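As a brief aside on point 3 (an illustration only, not part of the MIGP pipeline): `numpy` reshapes in row-major (`order='C'`) order by default, whereas Matlab is column-major; pass `order='F'` if you need Matlab-like behaviour:
%% Cell type:code id: tags:
``` python
import numpy as np

a = np.arange(6)                    # [0 1 2 3 4 5]
print(a.reshape(2, 3))              # C (row-major) order:      [[0 1 2], [3 4 5]]
print(a.reshape(2, 3, order='F'))   # Fortran/Matlab-like order: [[0 2 4], [1 3 5]]
```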
%% Cell type:code id: tags:
``` python
# randomise the subject order
random.shuffle(in_list)

# load and unravel brainmask
mask = in_mask.get_fdata().ravel()

# function to demean the data
def demean(x):
    return x - np.mean(x, axis=0)

# loop through input files/subjects
for i, f in enumerate(in_list):

    # read data
    print(f'Reading data file {f.get_filename()}')
    grot = f.get_fdata()
    grot = np.reshape(grot, [-1, grot.shape[-1]])
    grot = grot[mask != 0, :].T

    # demean
    print('\tRemoving mean image')
    grot = demean(grot)

    # var-norm
    if sep_vn:
        print('\tNormalising by voxel-wise variance')
        [uu, ss, vt] = svds(grot, k=30)
        vt[np.abs(vt) < (2.3 * np.std(vt))] = 0
        stddevs = np.maximum(np.std(grot - (uu @ np.diag(ss) @ vt), axis=0), 0.001)
        grot = grot / stddevs

    # concatenate with the running data matrix W
    if i == 0:
        W = demean(grot)
    else:
        W = np.concatenate((W, demean(grot)), axis=0)

    # reduce W to dPCA_int eigenvectors
    if W.shape[0] - 10 > dPCA_int:
        print(f'\tReducing data matrix to a {dPCA_int} dimensional subspace')
        uu = eigs(W @ W.T, dPCA_int)[1]
        uu = np.real(uu)
        W = uu.T @ W

# reshape and save
grot = np.zeros([mask.shape[0], dPCA_out])
grot[mask != 0, :] = W[:dPCA_out, :].T
grot = np.reshape(grot, in_list[0].shape[:3] + (dPCA_out,))
print(f'Save to {GO}')
nb.Nifti1Image(grot, affine=in_list[0].affine).to_filename(GO)
```
%% Cell type:markdown id: tags:
<a class="anchor" id="run-python-melodic"></a>
### Run ```melodic```
Generate a command line string and run group ```melodic``` on the Python MIGP dimension reduced data with a dimension of 10 components. We disable MIGP within `melodic` because the dimension reduction was already performed by the Python code above.
> **Note**:
> 1. Here we use python [f-strings](https://www.python.org/dev/peps/pep-0498/), formally known as literal string interpolation, which allow for easy formatting
> 2. [`op.join`](https://docs.python.org/3.7/library/os.path.html#os.path.join) will join path strings using the platform-specific directory separator
%% Cell type:code id: tags:
``` python
# generate melodic command line string
melodic_cmd = f"melodic -i pyMIGP.nii.gz --mask={op.join(data_dir, 'brain_mask.nii.gz')} -d 10 -v --nobet --disableMigp -o pymigp.gica"
print(melodic_cmd)
```
%% Cell type:markdown id: tags:
> **Note:**
> 1. Here we use the `!` operator to execute the command in the shell
> 2. The `{}` will expand the contained python variable in the shell
%% Cell type:code id: tags:
``` python
# run melodic
! {melodic_cmd}
```
%% Cell type:markdown id: tags:
<a class="anchor" id="plot-python-group-ics"></a>
### Plot group ICs
Now we can load and plot the group ICs generated by ```melodic```.
This function will be used to plot ICs:
> **NOTE:**
> 1. Here we use [`plot_stat_map`](https://nilearn.github.io/modules/generated/nilearn.plotting.plot_stat_map.html) from the [`nilearn`](https://nilearn.github.io/index.html) package to plot the orthographic images
> 2. [`subplots`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.subplots.html) from `matplotlib.pyplot` creates a figure and multiple subplots
> 3. [`find_xyz_cut_coords`](https://nilearn.github.io/modules/generated/nilearn.plotting.find_xyz_cut_coords.html) from the [`nilearn`](https://nilearn.github.io/index.html) package will find the image coordinates of the center of the largest activation connected component
> 4. [`zip`](https://docs.python.org/3.3/library/functions.html#zip) takes iterables and aggregates them in a tuple. Here it is used to iterate through two lists simultaneously
> 5. [`iter_img`](https://nilearn.github.io/modules/generated/nilearn.image.iter_img.html) from the [`nilearn`](https://nilearn.github.io/index.html) package creates an iterator from an image that steps through each volume/time-point of the image
%% Cell type:code id: tags:
``` python
def map_plot(d):
    N = d.shape[-1]
    fig, ax = plt.subplots(int(np.ceil(N / 2)), 2, figsize=(12, N))
    for img, ax0 in zip(image.iter_img(d), ax.ravel()):
        coord = plotting.find_xyz_cut_coords(img, activation_threshold=3.5)
        plotting.plot_stat_map(img, cut_coords=coord, vmax=10, axes=ax0)
    return fig
```
%% Cell type:markdown id: tags:
Hopefully you can see some familiar looking RSN spatial patterns:
%% Cell type:code id: tags:
``` python
ics = nb.load('pymigp.gica/melodic_IC.nii.gz')
fig = map_plot(ics)
```