diff --git a/talks/matlab_vs_python/migp/fetch_data.ipynb b/talks/matlab_vs_python/migp/fetch_data.ipynb
index 67e9b76f1f0f3a53e560508a3e98ad6f83baff75..6778e404ee079e8b4f132865f8ff52a2b7307a33 100644
--- a/talks/matlab_vs_python/migp/fetch_data.ipynb
+++ b/talks/matlab_vs_python/migp/fetch_data.ipynb
@@ -37,32 +37,6 @@
     "import matplotlib.pyplot as plt"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "This function will be used to plot ICs later:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "def map_plot(d):\n",
-    "\n",
-    "    N = d.shape[-1]\n",
-    "\n",
-    "    fig, ax = plt.subplots(int(np.ceil((N/2))),2, figsize=(12, N))\n",
-    "\n",
-    "    for img, ax0 in zip(image.iter_img(d), ax.ravel()):\n",
-    "        coord = plotting.find_xyz_cut_coords(img, activation_threshold=2.3)\n",
-    "        plotting.plot_stat_map(img, cut_coords=coord, vmax=10, axes=ax0)\n",
-    "    \n",
-    "    return fig"
-   ]
-  },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
@@ -72,7 +46,7 @@
     "\n",
     "Create a directory in the users home directory to store the downloaded data:\n",
     "\n",
-    "`expanduser` will expand the `~` to the be users home directory:"
+    "> **NOTE:** `expanduser` will expand the `~` to the user's home directory:"
    ]
   },
   {
@@ -91,7 +65,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Download the data (if not already downloaded). We use a method from [`nilearn`](https://nilearn.github.io/index.html) called `fetch_cobre` to download the fMRI data:"
+    "Download the data (if not already downloaded):\n",
+    "\n",
+    "> **Note:** We use a method from [`nilearn`](https://nilearn.github.io/index.html) called `fetch_cobre` to download the fMRI data"
    ]
   },
   {
@@ -110,7 +86,13 @@
     "<a class=\"anchor\" id=\"clean-the-data\"></a>\n",
     "## Clean the data\n",
     "\n",
-    "We use methods from [`nilearn`](https://nilearn.github.io/index.html) to regress confounds from the data (```clean_img```) and to spatially smooth the data with a gaussian filter of 10mm FWHM (```smooth_img```):"
+    "Regress confounds from the data and spatially smooth the data with a Gaussian filter of 10mm FWHM.\n",
+    "\n",
+    "> **Note:**\n",
+    "> 1. We use `clean_img` from the [`nilearn`](https://nilearn.github.io/index.html) package to regress confounds from the data\n",
+    "> 2. We use `smooth_img` from the [`nilearn`](https://nilearn.github.io/index.html) package to spatially smooth the data\n",
+    "> 3. `zip` takes iterables and aggregates them in a tuple. Here it is used to iterate through four lists simultaneously\n",
+    "> 4. We use a list comprehension to loop through all the filenames and append suffixes\n"
    ]
   },
   {
@@ -135,7 +117,13 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "To run ```melodic``` we will need a brain mask in MNI152 space at the same resolution as the fMRI. Here we use [`nilearn`](https://nilearn.github.io/index.html) methods to load the MNI152 mask (```load_mni152_brain_mask```), resample to the resolution of the fMRI (```resample_to_img```), and binarize (```math_img```):"
+    "To run ```melodic``` we will need a brain mask in MNI152 space at the same resolution as the fMRI.\n",
+    "\n",
+    "> **Note:**\n",
+    "> 1. We use `load_mni152_brain_mask` from the [`nilearn`](https://nilearn.github.io/index.html) package to load the MNI152 mask\n",
+    "> 2. We use `resample_to_img` from the [`nilearn`](https://nilearn.github.io/index.html) package to resample the mask to the resolution of the fMRI\n",
+    "> 3. We use `math_img` from the [`nilearn`](https://nilearn.github.io/index.html) package to binarize the resampled mask\n",
+    "> 4. The mask is plotted using `plot_anat` from the [`nilearn`](https://nilearn.github.io/index.html) package"
    ]
   },
   {
@@ -165,7 +153,12 @@
     "<a class=\"anchor\" id=\"run-melodic\"></a>\n",
     "### Run ```melodic```\n",
     "\n",
-    "Generate a command line string and run group ```melodic``` on the smoothed fMRI with a dimension of 10 components:"
+    "Generate a command line string and run group ```melodic``` on the smoothed fMRI with a dimension of 10 components:\n",
+    "\n",
+    "> **Note**:\n",
+    "> 1. Here we use Python [f-strings](https://www.python.org/dev/peps/pep-0498/), formally known as literal string interpolation, which allow for easy formatting\n",
+    "> 2. `op.join` will join path strings using the platform-specific directory separator\n",
+    "> 3. `','.join(smooth)` will create a comma-separated string of all the items in the list `smooth`"
    ]
   },
   {
@@ -179,6 +172,15 @@
     "print(melodic_cmd)"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "> **Note:**\n",
+    "> 1. Here we use the `!` operator to execute the command in the shell\n",
+    "> 2. The `{}` will expand the contained Python variable in the shell"
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
@@ -198,7 +200,14 @@
     "\n",
     "Now we can load and plot the group ICs generated by ```melodic```.\n",
     "\n",
-    "This function will be used to plot ICs:"
+    "This function will be used to plot ICs:\n",
+    "\n",
+    "> **NOTE:**\n",
+    "> 1. Here we use `plot_stat_map` from the `nilearn` package to plot the orthographic images\n",
+    "> 2. `subplots` from `matplotlib.pyplot` creates a figure and multiple subplots\n",
+    "> 3. `find_xyz_cut_coords` from the `nilearn` package will find the image coordinates of the center of the largest activation connected component\n",
+    "> 4. `zip` takes iterables and aggregates them in a tuple. Here it is used to iterate through two lists simultaneously\n",
+    "> 5. `iter_img` from the `nilearn` package creates an iterator from an image that steps through each volume/time-point of the image"
    ]
   },
   {