diff --git a/talks/matlab_vs_python/migp/MIGP.ipynb b/talks/matlab_vs_python/migp/MIGP.ipynb
index 5dd73e87ea5807a15363c0e439f1eea4a5727f47..1a0baf9f839a6b10ddc443deb4d46048084d1e6d 100644
--- a/talks/matlab_vs_python/migp/MIGP.ipynb
+++ b/talks/matlab_vs_python/migp/MIGP.ipynb
@@ -58,13 +58,13 @@
     "<a class=\"anchor\" id=\"download-the-data\"></a>\n",
     "## Fetch the data\n",
     "\n",
-    "This data is a derivative from the COBRE sample found in the International Neuroimaging Data-sharing Initiative (http://fcon_1000.projects.nitrc.org/indi/retro/cobre.html), originally released under Creative Commons - Attribution Non-Commercial.\n",
+    "This data is a derivative of the [COBRE](http://fcon_1000.projects.nitrc.org/indi/retro/cobre.html) sample found in the International Neuroimaging Data-sharing Initiative, originally released under Creative Commons - Attribution Non-Commercial.\n",
     "\n",
     "It comprises 10 preprocessed resting-state fMRI selected from 72 patients diagnosed with schizophrenia and 74 healthy controls (6mm isotropic, TR=2s, 150 volumes).\n",
     "\n",
     "Create a directory in the users home directory to store the downloaded data:\n",
     "\n",
-    "> **NOTE:** `expanduser` will expand the `~` to the be users home directory:"
+    "> **NOTE:** [`expanduser`](https://docs.python.org/3.7/library/os.path.html#os.path.expanduser) will expand the `~` to be the user's home directory:"
    ]
   },
   {
@@ -85,7 +85,7 @@
    "source": [
     "Download the data (if not already downloaded):\n",
     "\n",
-    "> **Note:** We use a method from [`nilearn`](https://nilearn.github.io/index.html) called `fetch_cobre` to download the fMRI data"
+    "> **Note:** We use a method from [`nilearn`](https://nilearn.github.io/index.html) called [`fetch_cobre`](https://nilearn.github.io/modules/generated/nilearn.datasets.fetch_cobre.html) to download the fMRI data"
    ]
   },
   {
@@ -107,9 +107,9 @@
     "Regress confounds from the data and to spatially smooth the data with a gaussian filter of 10mm FWHM.\n",
     "\n",
     "> **Note:**\n",
-    "> 1. We use `clean_img` from the [`nilearn`](https://nilearn.github.io/index.html) package to regress confounds from the data\n",
-    "> 2. We use `smooth_img` from the [`nilearn`](https://nilearn.github.io/index.html) package to spatially smooth the data\n",
-    "> 3. `zip` takes iterables and aggregates them in a tuple.  Here it is used to iterate through four lists simultaneously\n",
+    "> 1. We use [`clean_img`](https://nilearn.github.io/modules/generated/nilearn.image.clean_img.html) from the [`nilearn`](https://nilearn.github.io/index.html) package to regress confounds from the data\n",
+    "> 2. We use [`smooth_img`](https://nilearn.github.io/modules/generated/nilearn.image.smooth_img.html) from the [`nilearn`](https://nilearn.github.io/index.html) package to spatially smooth the data\n",
+    "> 3. [`zip`](https://docs.python.org/3.3/library/functions.html#zip) takes iterables and aggregates them in a tuple.  Here it is used to iterate through four lists simultaneously\n",
     "> 4. We use list comprehension to loop through all the filenames and append suffixes\n"
    ]
   },
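The `zip` and list-comprehension pattern in notes 3-4 of this cell can be sketched in isolation (hypothetical filenames for illustration, not the notebook's actual data):

```python
# zip iterates through several lists in lockstep, yielding tuples
stems = ['sub-01', 'sub-02']
suffixes = ['_clean', '_clean']

# a list comprehension appends each suffix to the matching filename stem
smooth = [s + x + '_smooth.nii.gz' for s, x in zip(stems, suffixes)]
print(smooth)  # ['sub-01_clean_smooth.nii.gz', 'sub-02_clean_smooth.nii.gz']
```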
@@ -138,10 +138,10 @@
     "To run ```melodic``` we will need a brain mask in MNI152 space at the same resolution as the fMRI.  \n",
     "\n",
     "> **Note:**\n",
-    "> 1. We use `load_mni152_brain_mask` from the [`nilearn`](https://nilearn.github.io/index.html) package to load the MNI152 mask\n",
-    "> 2. We use `resample_to_img` from the [`nilearn`](https://nilearn.github.io/index.html) package to resample the mask to the resolution of the fMRI \n",
-    "> 3. We use `math_img` from the [`nilearn`](https://nilearn.github.io/index.html) package to binarize the resample mask\n",
-    "> 4. The mask is plotted using `plot_anat` from the [`nilearn`](https://nilearn.github.io/index.html) package"
+    "> 1. We use [`load_mni152_brain_mask`](https://nilearn.github.io/modules/generated/nilearn.datasets.load_mni152_brain_mask.html) from the [`nilearn`](https://nilearn.github.io/index.html) package to load the MNI152 mask\n",
+    "> 2. We use [`resample_to_img`](https://nilearn.github.io/modules/generated/nilearn.image.resample_to_img.html) from the [`nilearn`](https://nilearn.github.io/index.html) package to resample the mask to the resolution of the fMRI \n",
+    "> 3. We use [`math_img`](https://nilearn.github.io/modules/generated/nilearn.image.math_img.html) from the [`nilearn`](https://nilearn.github.io/index.html) package to binarize the resampled mask\n",
+    "> 4. The mask is plotted using [`plot_anat`](https://nilearn.github.io/modules/generated/nilearn.plotting.plot_anat.html) from the [`nilearn`](https://nilearn.github.io/index.html) package"
    ]
   },
   {
@@ -175,8 +175,8 @@
     "\n",
     "> **Note**: \n",
     "> 1. Here we use python [f-strings](https://www.python.org/dev/peps/pep-0498/), formally known as literal string interpolation, which allow for easy formatting\n",
-    "> 2. `op.join` will join path strings using the platform-specific directory separator\n",
-    "> 3. `','.join(smooth)` will create a comma seprated string of all the items in the list `smooth`"
+    "> 2. [`op.join`](https://docs.python.org/3.7/library/os.path.html#os.path.join) will join path strings using the platform-specific directory separator\n",
+    "> 3. [`','.join(smooth)`](https://docs.python.org/3/library/stdtypes.html#str.join) will create a comma-separated string of all the items in the list `smooth`"
    ]
   },
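A minimal sketch of the three pieces in this note (the paths and options below are placeholders, not the notebook's actual `melodic` command):

```python
import os.path as op

smooth = ['sub-01_smooth.nii.gz', 'sub-02_smooth.nii.gz']

outdir  = op.join('cobre', 'groupICA')   # platform-specific separator
infiles = ','.join(smooth)               # comma-separated input list
# f-string: the {} expressions are interpolated into the string
melodic_cmd = f'melodic -i {infiles} -o {outdir} -d 10'
print(melodic_cmd)
```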
   {
@@ -221,11 +221,11 @@
     "This function will be used to plot ICs:\n",
     "\n",
     "> **NOTE:**\n",
-    "> 1. Here we use `plot_stat_map` from the `nilearn` package to plot the orthographic images\n",
-    "> 2. `subplots` from `matplotlib.pyplot` creates a figure and multiple subplots\n",
-    "> 3. `find_xyz_cut_coords` from the `nilearn` package will find the image coordinates of the center of the largest activation connected component\n",
-    "> 4. `zip` takes iterables and aggregates them in a tuple.  Here it is used to iterate through two lists simultaneously\n",
-    "> 5. `iter_img` from the `nilearn` package creates an iterator from an image that steps through each volume/time-point of the image"
+    "> 1. Here we use [`plot_stat_map`](https://nilearn.github.io/modules/generated/nilearn.plotting.plot_stat_map.html) from the [`nilearn`](https://nilearn.github.io/index.html) package to plot the orthographic images\n",
+    "> 2. [`subplots`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.subplots.html) from `matplotlib.pyplot` creates a figure and multiple subplots\n",
+    "> 3. [`find_xyz_cut_coords`](https://nilearn.github.io/modules/generated/nilearn.plotting.find_xyz_cut_coords.html) from the [`nilearn`](https://nilearn.github.io/index.html) package will find the image coordinates of the center of the largest activation connected component\n",
+    "> 4. [`zip`](https://docs.python.org/3.3/library/functions.html#zip) takes iterables and aggregates them in a tuple.  Here it is used to iterate through two lists simultaneously\n",
+    "> 5. [`iter_img`](https://nilearn.github.io/modules/generated/nilearn.image.iter_img.html) from the [`nilearn`](https://nilearn.github.io/index.html) package creates an iterator from an image that steps through each volume/time-point of the image"
    ]
   },
   {
diff --git a/talks/matlab_vs_python/migp/matlab_MIGP.ipynb b/talks/matlab_vs_python/migp/matlab_MIGP.ipynb
index ca0089a2ca37a9db15c420cebb6b9196bce93963..b017170e77e4e04a6d8d2e69c930323f9c18566b 100644
--- a/talks/matlab_vs_python/migp/matlab_MIGP.ipynb
+++ b/talks/matlab_vs_python/migp/matlab_MIGP.ipynb
@@ -34,7 +34,7 @@
    "source": [
     "It will be necessary to know the location where the data was stored so that we can load the brainmask:\n",
     "\n",
-    "> **Note**: `expanduser` will expand the `~` to the be users home directory"
+    "> **Note**: [`expanduser`](https://docs.python.org/3.7/library/os.path.html#os.path.expanduser) will expand the `~` to be the user's home directory"
    ]
   },
   {
@@ -57,7 +57,7 @@
     "\n",
     "> **Note**: \n",
     "> 1. Here we use python [f-strings](https://www.python.org/dev/peps/pep-0498/), formally known as literal string interpolation, which allow for easy formatting\n",
-    "> 2. `op.join` will join path strings using the platform-specific directory separator"
+    "> 2. [`op.join`](https://docs.python.org/3.7/library/os.path.html#os.path.join) will join path strings using the platform-specific directory separator"
    ]
   },
   {
@@ -102,11 +102,11 @@
     "This function will be used to plot ICs:\n",
     "\n",
     "> **NOTE:**\n",
-    "> 1. Here we use `plot_stat_map` from the `nilearn` package to plot the orthographic images\n",
-    "> 2. `subplots` from `matplotlib.pyplot` creates a figure and multiple subplots\n",
-    "> 3. `find_xyz_cut_coords` from the `nilearn` package will find the image coordinates of the center of the largest activation connected component\n",
-    "> 4. `zip` takes iterables and aggregates them in a tuple.  Here it is used to iterate through two lists simultaneously\n",
-    "> 5. `iter_img` from the `nilearn` package creates an iterator from an image that steps through each volume/time-point of the image"
+    "> 1. Here we use [`plot_stat_map`](https://nilearn.github.io/modules/generated/nilearn.plotting.plot_stat_map.html) from the [`nilearn`](https://nilearn.github.io/index.html) package to plot the orthographic images\n",
+    "> 2. [`subplots`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.subplots.html) from `matplotlib.pyplot` creates a figure and multiple subplots\n",
+    "> 3. [`find_xyz_cut_coords`](https://nilearn.github.io/modules/generated/nilearn.plotting.find_xyz_cut_coords.html) from the [`nilearn`](https://nilearn.github.io/index.html) package will find the image coordinates of the center of the largest activation connected component\n",
+    "> 4. [`zip`](https://docs.python.org/3.3/library/functions.html#zip) takes iterables and aggregates them in a tuple.  Here it is used to iterate through two lists simultaneously\n",
+    "> 5. [`iter_img`](https://nilearn.github.io/modules/generated/nilearn.image.iter_img.html) from the [`nilearn`](https://nilearn.github.io/index.html) package creates an iterator from an image that steps through each volume/time-point of the image"
    ]
   },
   {
@@ -144,13 +144,6 @@
     "ics = nb.load('matmigp.gica/melodic_IC.nii.gz')\n",
     "fig = map_plot(ics)"
    ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": []
   }
  ],
  "metadata": {
@@ -169,7 +162,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.7.6"
   }
  },
  "nbformat": 4,
diff --git a/talks/matlab_vs_python/migp/python_MIGP.ipynb b/talks/matlab_vs_python/migp/python_MIGP.ipynb
index 7c3188f27adce0a5dd8dd182713242dbd5494e57..50993ca54b2b7cd9a1888a5869e4a4d63102b84b 100644
--- a/talks/matlab_vs_python/migp/python_MIGP.ipynb
+++ b/talks/matlab_vs_python/migp/python_MIGP.ipynb
@@ -38,7 +38,7 @@
    "source": [
     "It will be necessary to know the location where the data was stored so that we can load the brainmask. \n",
     "\n",
-    "> **Note:** `expanduser` will expand the `~` to the be users home directory:"
+    "> **Note**: [`expanduser`](https://docs.python.org/3.7/library/os.path.html#os.path.expanduser) will expand the `~` to be the user's home directory"
    ]
   },
   {
@@ -60,9 +60,9 @@
     "Firstly we need to set the MIGP parameters:\n",
     "\n",
     "> **Note:**\n",
-    "> 1. `glob.glob` will create a list of filenames that match the glob/wildcard pattern\n",
-    "> 2. `nb.load` from the `nibabel` package will load the image into `nibabel.Nifti1Image` object.  This will not load the actual data though.\n",
-    "> 3. We use a list comprehension to loop through all the filenames and load them with `nibabel`"
+    "> 1. [`glob.glob`](https://docs.python.org/3/library/glob.html) will create a list of filenames that match the glob/wildcard pattern\n",
+    "> 2. [`nb.load`](https://nipy.org/nibabel/gettingstarted.html) from the [`nibabel`](https://nipy.org/nibabel/index.html) package will load the image into a [`nibabel.Nifti1Image`](https://nipy.org/nibabel/reference/nibabel.nifti1.html) object.  This will not load the actual image data yet, though.\n",
+    "> 3. We use a list comprehension to loop through all the filenames and load them with [`nibabel`](https://nipy.org/nibabel/index.html)"
    ]
   },
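The glob-then-load pattern from notes 1-3 can be sketched with dummy files (a temporary directory stands in for the data directory; the `nb.load` step is shown as a comment since it needs real NIfTI files):

```python
import glob
import os.path as op
import tempfile

# create a throwaway directory holding two empty stand-in files
datadir = tempfile.mkdtemp()
for name in ('sub-01_smooth.nii.gz', 'sub-02_smooth.nii.gz'):
    open(op.join(datadir, name), 'w').close()

# glob.glob returns all paths matching the wildcard pattern
in_list = sorted(glob.glob(op.join(datadir, '*_smooth.nii.gz')))
print([op.basename(f) for f in in_list])

# with real data, the images would then be loaded lazily:
# imgs = [nb.load(f) for f in in_list]   # voxel data read only on access
```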
   {
@@ -87,13 +87,13 @@
    "metadata": {},
    "source": [
     "> **Note:**\n",
-    "> 1. `random.shuffle` will shuffle a list, in this instance it shuffles the list of `nibabel.Nifti1Image` objects\n",
-    "> 2. `ravel` will unfold a n-d array into vector.  Similar to the `:` operator in Matlab\n",
-    "> 3. `reshape` works similarly to reshape in Matlab, but be careful becase the default order is different from Matlab.\n",
-    "> 4. `.T` does a transpose in `numpy`\n",
+    "> 1. [`random.shuffle`](https://docs.python.org/3.7/library/random.html#random.shuffle) will shuffle a list, in this instance it shuffles the list of [`nibabel.Nifti1Image`](https://nipy.org/nibabel/reference/nibabel.nifti1.html) objects\n",
+    "> 2. [`ravel`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html) will unfold an n-d array into a vector.  Similar to the `:` operator in Matlab\n",
+    "> 3. [`reshape`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html) works similarly to reshape in Matlab, but be careful because the default order is different from Matlab.\n",
+    "> 4. [`.T`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.T.html) does a transpose in `numpy`\n",
     "> 5. The final element of an array is indexed with `-1` in `numpy`, as opposed to `end` in Matlab\n",
-    "> 6. `svds` and `eigs` come from the `scipy.sparse.linalg` package\n",
-    "> 7. `svds` and `eigs` are very similar to their Matlab counterparts, but be careful because Matlab `svds` returns $U$, $S$, and $V$, whereas python `svds` returns $U$, $S$, and $V^T$\n",
+    "> 6. [`svds`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.svds.html) and [`eigs`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.eigs.html?highlight=eigs#scipy.sparse.linalg.eigs) come from the [`scipy.sparse.linalg`](https://docs.scipy.org/doc/scipy/reference/sparse.linalg.html) package\n",
+    "> 7. [`svds`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.svds.html) and [`eigs`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.eigs.html?highlight=eigs#scipy.sparse.linalg.eigs) are very similar to their Matlab counterparts, but be careful because Matlab `svds` returns $U$, $S$, and $V$, whereas python `svds` returns $U$, $S$, and $V^T$\n",
     "> 8. We index into the output of `eigs(W@W.T, dPCA_int)[1]` to only return the 2nd output (index 1)"
    ]
   },
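The transpose gotcha in note 7 is easy to check on a toy matrix (random data for illustration, not the MIGP matrices):

```python
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(0)
W = rng.standard_normal((20, 10))

# python's svds returns U, S, Vt -- the right singular vectors come back
# already transposed, unlike Matlab's svds which returns V
U, S, Vt = svds(W, k=3)
print(U.shape, S.shape, Vt.shape)  # (20, 3) (3,) (3, 10)

# so a rank-3 reconstruction is U @ diag(S) @ Vt, with no extra .T
W3 = U @ np.diag(S) @ Vt
```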
@@ -163,7 +163,11 @@
     "<a class=\"anchor\" id=\"run-python-melodic\"></a>\n",
     "### Run ```melodic```\n",
     "\n",
-    "Generate a command line string and run group ```melodic``` on the Python MIGP dimension reduced data with a dimension of 10 components:"
+    "Generate a command line string and run group ```melodic``` on the Python MIGP dimension reduced data with a dimension of 10 components.  We disable MIGP because it was already run separately in Python.\n",
+    "\n",
+    "> **Note**: \n",
+    "> 1. Here we use python [f-strings](https://www.python.org/dev/peps/pep-0498/), formally known as literal string interpolation, which allow for easy formatting\n",
+    "> 2. [`op.join`](https://docs.python.org/3.7/library/os.path.html#os.path.join) will join path strings using the platform-specific directory separator"
    ]
   },
   {
@@ -177,6 +181,15 @@
     "print(melodic_cmd)"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "> **Note:** \n",
+    "> 1. Here we use the `!` operator to execute the command in the shell\n",
+    "> 2. The `{}` will expand the contained python variable in the shell"
+   ]
+  },
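Outside IPython the `!{melodic_cmd}` line has no meaning; a plain-Python equivalent (with a harmless `echo` standing in for `melodic`) would be:

```python
import subprocess

melodic_cmd = 'echo melodic -i data.nii.gz -o out.gica'  # stand-in command
# shell=True mirrors the notebook's `!` behaviour: the string is run by
# the shell, after python has already expanded the {} variables
result = subprocess.run(melodic_cmd, shell=True,
                        capture_output=True, text=True)
print(result.stdout.strip())  # melodic -i data.nii.gz -o out.gica
```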
   {
    "cell_type": "code",
    "execution_count": null,
@@ -196,7 +209,14 @@
     "\n",
     "Now we can load and plot the group ICs generated by ```melodic```.\n",
     "\n",
-    "This function will be used to plot ICs:"
+    "This function will be used to plot ICs:\n",
+    "\n",
+    "> **NOTE:**\n",
+    "> 1. Here we use [`plot_stat_map`](https://nilearn.github.io/modules/generated/nilearn.plotting.plot_stat_map.html) from the [`nilearn`](https://nilearn.github.io/index.html) package to plot the orthographic images\n",
+    "> 2. [`subplots`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.subplots.html) from `matplotlib.pyplot` creates a figure and multiple subplots\n",
+    "> 3. [`find_xyz_cut_coords`](https://nilearn.github.io/modules/generated/nilearn.plotting.find_xyz_cut_coords.html) from the [`nilearn`](https://nilearn.github.io/index.html) package will find the image coordinates of the center of the largest activation connected component\n",
+    "> 4. [`zip`](https://docs.python.org/3.3/library/functions.html#zip) takes iterables and aggregates them in a tuple.  Here it is used to iterate through two lists simultaneously\n",
+    "> 5. [`iter_img`](https://nilearn.github.io/modules/generated/nilearn.image.iter_img.html) from the [`nilearn`](https://nilearn.github.io/index.html) package creates an iterator from an image that steps through each volume/time-point of the image"
    ]
   },
   {
@@ -234,13 +254,6 @@
     "ics = nb.load('pymigp.gica/melodic_IC.nii.gz')\n",
     "fig = map_plot(ics)"
    ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": []
   }
  ],
  "metadata": {
@@ -259,7 +272,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.7.6"
   }
  },
  "nbformat": 4,