From a7a82e8af63f16efa97bec9b0ade3e444e21878b Mon Sep 17 00:00:00 2001
From: Paul McCarthy <pauldmccarthy@gmail.com>
Date: Mon, 9 Mar 2020 15:51:16 +0000
Subject: [PATCH] Working on fslpy prac

---
 advanced_topics/08_fslpy.ipynb | 764 +++++++++++++++++++++++++++++++++
 advanced_topics/08_fslpy.md    | 308 ++++++++++++-
 2 files changed, 1062 insertions(+), 10 deletions(-)
 create mode 100644 advanced_topics/08_fslpy.ipynb

diff --git a/advanced_topics/08_fslpy.ipynb b/advanced_topics/08_fslpy.ipynb
new file mode 100644
index 0000000..bd68815
--- /dev/null
+++ b/advanced_topics/08_fslpy.ipynb
@@ -0,0 +1,764 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# `fslpy`\n",
+    "\n",
+    "\n",
+    "[`fslpy`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/) is a\n",
+    "Python library which is built into FSL, and contains a range of functionality\n",
+    "for working with neuroimaging data from Python.\n",
+    "\n",
+    "\n",
+    "This practical highlights some of the most useful features provided by\n",
+    "`fslpy`. You may find `fslpy` useful if you are writing Python code to\n",
+    "perform analyses and image processing in conjunction with FSL.\n",
+    "\n",
+    "\n",
+    "> **Note**: `fslpy` is distinct from `fslpython` - `fslpython` is the Python\n",
+    "> environment that is baked into FSL. `fslpy` is a Python library which is\n",
+    "> installed into the `fslpython` environment.\n",
+    "\n",
+    "\n",
+    "* [The `Image` class, and other data types](#the-image-class-and-other-data-types)\n",
+    "  * [Creating images](#creating-images)\n",
+    "  * [Working with image data](#working-with-image-data)\n",
+    "  * [Loading other file types](#loading-other-file-types)\n",
+    "* [FSL atlases](#fsl-atlases)\n",
+    "* [The `filetree`](#the-filetree)\n",
+    "* [Image processing](#image-processing)\n",
+    "* [FSL wrapper functions](#fsl-wrapper-functions)\n",
+    "* [NIfTI coordinate systems](#nifti-coordinate-systems)\n",
+    "\n",
+    "\n",
+    "<a class=\"anchor\" id=\"the-image-class-and-other-data-types\"></a>\n",
+    "## The `Image` class, and other data types\n",
+    "\n",
+    "\n",
+    "The\n",
+    "[`fsl.data.image`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.image.html#fsl.data.image.Image)\n",
+    "module provides the `Image` class, which sits on top of `nibabel` and contains\n",
+    "some handy functionality if you need to work with coordinate transformations,\n",
+    "or do some FSL-specific processing. The `Image` class provides features such\n",
+    "as:\n",
+    "\n",
+    "- Support for NIFTI1, NIFTI2, and ANALYZE image files\n",
+    "- Access to affine transformations between the voxel, FSL and world coordinate\n",
+    "  systems\n",
+    "- Ability to load metadata from BIDS sidecar files\n",
+    "\n",
+    "\n",
+    "Some simple image processing routines are also provided - these are covered\n",
+    "[below](#image-processing).\n",
+    "\n",
+    "\n",
+    "<a class=\"anchor\" id=\"creating-images\"></a>\n",
+    "### Creating images\n",
+    "\n",
+    "\n",
+    "It's easy to create an `Image` - you can create one from a file name:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import os.path as op\n",
+    "from fsl.data.image import Image\n",
+    "\n",
+    "stddir = op.expandvars('${FSLDIR}/data/standard/')\n",
+    "\n",
+    "# load a FSL image - the file\n",
+    "# suffix is optional, just like\n",
+    "# in real FSL-land!\n",
+    "img = Image(op.join(stddir, 'MNI152_T1_1mm'))\n",
+    "print(img)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "You can create an `Image` from an existing `nibabel` image:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import nibabel as nib\n",
+    "\n",
+    "# load a nibabel image, and\n",
+    "# convert it into an FSL image\n",
+    "nibimg = nib.load(op.join(stddir, 'MNI152_T1_1mm.nii.gz'))\n",
+    "img    = Image(nibimg)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Or you can create an `Image` from a `numpy` array:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import numpy as np\n",
+    "\n",
+    "data = np.zeros((100, 100, 100))\n",
+    "img = Image(data, xform=np.eye(4))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "You can save an image to file via the `save` method:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "img.save('empty.nii.gz')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`Image` objects have all of the attributes you might expect:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "stddir = op.expandvars('${FSLDIR}/data/standard/')\n",
+    "std1mm = Image(op.join(stddir, 'MNI152_T1_1mm'))\n",
+    "\n",
+    "print('name:         ', std1mm.name)\n",
+    "print('file:         ', std1mm.dataSource)\n",
+    "print('NIfTI version:', std1mm.niftiVersion)\n",
+    "print('ndim:         ', std1mm.ndim)\n",
+    "print('shape:        ', std1mm.shape)\n",
+    "print('dtype:        ', std1mm.dtype)\n",
+    "print('nvals:        ', std1mm.nvals)\n",
+    "print('pixdim:       ', std1mm.pixdim)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "and a number of useful methods:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "std2mm  = Image(op.join(stddir, 'MNI152_T1_2mm'))\n",
+    "mask2mm = Image(op.join(stddir, 'MNI152_T1_2mm_mask'))\n",
+    "\n",
+    "print(std1mm.sameSpace(std2mm))\n",
+    "print(std2mm.sameSpace(mask2mm))\n",
+    "print(std2mm.getAffine('voxel', 'world'))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "An `Image` object is a high-level wrapper around a `nibabel` image object -\n",
+    "you can always work directly with the `nibabel` object via the `nibImage`\n",
+    "attribute:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "print(std2mm)\n",
+    "print(std2mm.nibImage)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "<a class=\"anchor\" id=\"working-with-image-data\"></a>\n",
+    "### Working with image data\n",
+    "\n",
+    "\n",
+    "You can get the image data as a `numpy` array via the `data` attribute:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "data = std2mm.data\n",
+    "print(data.min, data.max())"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "> Note that this will give you the data in its underlying type, unlike the\n",
+    "> `nibabel.get_fdata` method, which up-casts image data to floating-point.\n",
+    "\n",
+    "\n",
+    "You can also read and write data directly via the `Image` object:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "slc = std2mm[:, :, 45]\n",
+    "std2mm[0:10, :, :] = 0"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Doing so has some advantages that may or may not be useful, depending on your\n",
+    "use-case:\n",
+    " - The image data will be kept on disk - only the parts that you access will\n",
+    "   be loaded into RAM (you will also need to pass`loadData=False` when creating\n",
+    "   the `Image` to achieve this).\n",
+    " - The `Image` object will keep track of modifications to the data - this can\n",
+    "   be queried via the `saveState` attribute.\n",
+    "\n",
+    "\n",
+    "<a class=\"anchor\" id=\"loading-other-file-types\"></a>\n",
+    "### Loading other file types\n",
+    "\n",
+    "\n",
+    "The\n",
+    "[`fsl.data`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.html#module-fsl.data)\n",
+    "package has a number of other classes for working with different types of FSL\n",
+    "and neuroimaging data. Most of these are higher-level wrappers around the\n",
+    "corresponding `nibabel` types:\n",
+    "\n",
+    "* The\n",
+    "  [`fsl.data.bitmap.Bitmap`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.bitmap.html)\n",
+    "  class can be used to load a bitmap image (e.g. `jpg, `png`, etc) and\n",
+    "  convert it to a NIfTI image.\n",
+    "* The\n",
+    "  [`fsl.data.dicom.DicomImage`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.dicom.html)\n",
+    "  class uses `dcm2niix` to load NIfTI images contained within a DICOM\n",
+    "  directory<sup>*</sup>.\n",
+    "* The\n",
+    "  [`fsl.data.mghimahe.MGHImage`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.mghimage.html)\n",
+    "  class can be used too load `.mgh`/`.mgz` images (they are converted into\n",
+    "  NIfTI images).\n",
+    "* The\n",
+    "  [`fsl.data.dtifit`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.dtifit.html)\n",
+    "  module contains functions for loading and working with the output of the\n",
+    "  FSL `dtifit` tool.\n",
+    "* The\n",
+    "  [`fsl.data.featanalysis`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.featanalysis.html),\n",
+    "  [`fsl.data.featimage`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.featimage.html),\n",
+    "  and\n",
+    "  [`fsl.data.featdesign`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.featdesign.html)\n",
+    "  modules contain classes and functions for loading data from FEAT\n",
+    "  directories.\n",
+    "* Similarly, the\n",
+    "  [`fsl.data.melodicanalysis`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.melodicanalysis.html)\n",
+    "  and\n",
+    "  [`fsl.data.melodicimage`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.melodicimage.html)\n",
+    "  modules contain classes and functions for loading data from MELODIC\n",
+    "  directories.\n",
+    "* The\n",
+    "  [`fsl.data.gifti`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.gifti.html),\n",
+    "  [`fsl.data.freesurfer`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.freesurfer.html),\n",
+    "  and\n",
+    "  [`fsl.data.vtk`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.vtk.html)\n",
+    "  modules contain functionality form loading surface data from GIfTI,\n",
+    "  freesurfer, and VTK files respectively.\n",
+    "\n",
+    "\n",
+    "> <sup>*</sup>You must make sure that `dcm2niix` is installed on your system\n",
+    "> in order to use this class.\n",
+    "\n",
+    "\n",
+    "<a class=\"anchor\" id=\"fsl-atlases\"></a>\n",
+    "## FSL atlases\n",
+    "\n",
+    "\n",
+    "The\n",
+    "[`fsl.data.atlases`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.atlases.html)\n",
+    "module provides access to all of the atlas images that are stored in the\n",
+    "`$FSLDIR/data/atlases/` directory of a standard FSL installation. It can be\n",
+    "used to load and query probabilistic and label-based atlases.\n",
+    "\n",
+    "\n",
+    "The `atlases` module needs to be initialised using the `rescanAtlases` function:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import fsl.data.atlases as atlases\n",
+    "atlases.rescanAtlases()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "You can list all of the available atlases using `listAtlases`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "for desc in atlases.listAtlases():\n",
+    "    print(desc)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`listAtlases` returns a list of `AtlasDescription` objects, each of which\n",
+    "contains descriptive information about one atlas. You can retrieve the\n",
+    "`AtlasDescription` for a specific atlas via the `getAtlasDescription`\n",
+    "function:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "desc = atlases.getAtlasDescription('harvardoxford-cortical')\n",
+    "print(desc.name)\n",
+    "print(desc.atlasID)\n",
+    "print(desc.specPath)\n",
+    "print(desc.atlasType)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Each `AtlasDescription` maintains a list of `AtlasLabel` objects, each of\n",
+    "which represents one region that is defined in the atlas. You can access all\n",
+    "of the `AtlasLabel` objects via the `labels` attribute:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "for lbl in desc.labels[:5]:\n",
+    "    print(lbl)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Or you can retrieve a specific label using the `find` method:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# search by region name\n",
+    "print(desc.find(name='Occipital Pole'))\n",
+    "\n",
+    "# or by label value\n",
+    "print(desc.find(value=48))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The `loadAtlas` function can be used to load the atlas image:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# For probabilistic atlases, you\n",
+    "# can ask for the 3D ROI image\n",
+    "# by setting loadSummary=True.\n",
+    "# You can also request a\n",
+    "# resolution - by default the\n",
+    "# highest resolution version\n",
+    "# will be loaded.\n",
+    "lblatlas = atlases.loadAtlas('harvardoxford-cortical',\n",
+    "                             loadSummary=True,\n",
+    "                             resolution=2)\n",
+    "\n",
+    "# By default you will get the 4D\n",
+    "# probabilistic atlas image (for\n",
+    "# atlases for which this is\n",
+    "# available).\n",
+    "probatlas = atlases.loadAtlas('harvardoxford-cortical',\n",
+    "                              resolution=2)\n",
+    "\n",
+    "print(lblatlas)\n",
+    "print(probatlas)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`LabelAtlas` objects have a method called `label`, which can be used to\n",
+    "interrogate the atlas at specific locations:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# The label method accepts 3D\n",
+    "# voxel or world coordinates\n",
+    "val = lblatlas.label((25, 52, 43), voxel=True)\n",
+    "lbl = lblatlas.find(value=val)\n",
+    "print('Region at voxel [25, 52, 43]: {} [{}]'.format(val, lbl.name))\n",
+    "\n",
+    "\n",
+    "# or a 3D weighted or binary mask\n",
+    "mask = np.zeros(lblatlas.shape)\n",
+    "mask[30:60, 30:60, 30:60] = 1\n",
+    "mask = Image(mask, header=lblatlas.header)\n",
+    "\n",
+    "lbls, props = lblatlas.label(mask)\n",
+    "print('Labels in mask:')\n",
+    "for lbl, prop in zip(lbls, props):\n",
+    "    lblname = lblatlas.find(value=lbl).name\n",
+    "    print('  {} [{}]: {:0.2f}%'.format(lbl, lblname, prop))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`ProbabilisticAtlas` objects have an analogous method called `values`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "vals = probatlas.values((25, 52, 43), voxel=True)\n",
+    "print('Regions at voxel [25, 52, 43]:')\n",
+    "for idx, val in enumerate(vals):\n",
+    "    if val > 0:\n",
+    "        lbl = probatlas.find(index=idx)\n",
+    "        print('  {} [{}]: {:0.2f}%'.format(lbl.value, lbl.name, val))\n",
+    "\n",
+    "print('Average proportions of regions within mask:')\n",
+    "vals = probatlas.values(mask)\n",
+    "for idx, val in enumerate(vals):\n",
+    "    if val > 0:\n",
+    "        lbl = probatlas.find(index=idx)\n",
+    "        print('  {} [{}]: {:0.2f}%'.format(lbl.value, lbl.name, val))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "<a class=\"anchor\" id=\"the-filetree\"></a>\n",
+    "## The `filetree`\n",
+    "\n",
+    "<a class=\"anchor\" id=\"nifti-coordinate-systems\"></a>\n",
+    "## NIfTI coordinate systems\n",
+    "\n",
+    "<a class=\"anchor\" id=\"image-processing\"></a>\n",
+    "## Image processing\n",
+    "\n",
+    "<a class=\"anchor\" id=\"fsl-wrapper-functions\"></a>\n",
+    "## FSL wrapper functions\n",
+    "\n",
+    "\n",
+    "\n",
+    "<a class=\"anchor\" id=\"nifti-coordinate-systems\"></a>\n",
+    "## NIfTI coordinate systems\n",
+    "\n",
+    "\n",
+    "\n",
+    "The `getAffine` method gives you access to affine transformations which can be\n",
+    "used to convert coordinates between the different coordinate systems\n",
+    "associated with an image. Have some MNI coordinates you'd like to convert to\n",
+    "voxels? Easy!"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "mnicoords = np.array([[0,   0,  0],\n",
+    "                      [0, -18, 18]])\n",
+    "\n",
+    "world2vox = img.getAffine('world', 'voxel')\n",
+    "vox2world = img.getAffine('voxel', 'world')\n",
+    "\n",
+    "# Apply the world->voxel\n",
+    "# affine to the coordinates\n",
+    "voxcoords = (np.dot(world2vox[:3, :3], mnicoords.T)).T + world2vox[:3, 3]\n",
+    "\n",
+    "# The code above is a bit fiddly, so\n",
+    "# instead of figuring it out, you can\n",
+    "# just use the transform() function:\n",
+    "from fsl.transform.affine import transform\n",
+    "voxcoords = transform(mnicoords, world2vox)\n",
+    "\n",
+    "# just to double check, let's transform\n",
+    "# those voxel coordinates back into world\n",
+    "# coordinates\n",
+    "backtomni = transform(voxcoords, vox2world)\n",
+    "\n",
+    "for m, v, b in zip(mnicoords, voxcoords, backtomni):\n",
+    "    print(m, '->', v, '->', b)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "> The `Image.getAffine` method can give you transformation matrices\n",
+    "> between any of these coordinate systems:\n",
+    ">\n",
+    ">  - `'voxel'`: Image data voxel coordinates\n",
+    ">  - `'world'`: mm coordinates, defined by the sform/qform of an image\n",
+    ">  - `'fsl'`: The FSL coordinate system, used internally by many FSL tools\n",
+    ">    (e.g. FLIRT)\n",
+    "\n",
+    "\n",
+    "\n",
+    "Oh, that example was too easy I hear you say? Try this one on for size. Let's\n",
+    "say we have run FEAT on some task fMRI data, and want to get the MNI\n",
+    "coordinates of the voxel with peak activation.\n",
+    "\n",
+    "> This is what people used to use `Featquery` for, back in the un-enlightened\n",
+    "> days.\n",
+    "\n",
+    "\n",
+    "Let's start by identifying the voxel with the biggest t-statistic:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "featdir = op.join(op.join('05_nifti', 'fmri.feat'))\n",
+    "\n",
+    "# The Image.data attribute returns a\n",
+    "# numpy array containing, well, the\n",
+    "# image data.\n",
+    "tstat1 = Image(op.join(featdir, 'stats', 'tstat1')).data\n",
+    "\n",
+    "# Recall from the numpy practical that\n",
+    "# argmax gives us a 1D index into a\n",
+    "# flattened view of the array. We can\n",
+    "# use the unravel_index function to\n",
+    "# convert it into a 3D index.\n",
+    "peakvox = np.abs(tstat1).argmax()\n",
+    "peakvox = np.unravel_index(peakvox, tstat1.shape)\n",
+    "print('Peak voxel coordinates for tstat1:', peakvox, tstat1[peakvox])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now that we've got the voxel coordinates in functional space, we need to\n",
+    "transform them into MNI space. FEAT provides a transformation which goes\n",
+    "directly from functional to standard space, in the `reg` directory:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "func2std = np.loadtxt(op.join(featdir, 'reg', 'example_func2standard.mat'))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "But ... wait a minute ... this is a FLIRT matrix! We can't just plug voxel\n",
+    "coordinates into a FLIRT matrix and expect to get sensible results, because\n",
+    "FLIRT works in an internal FSL coordinate system, which is not quite\n",
+    "`'voxel'`, and not quite `'world'`. So we need to do a little more work.\n",
+    "Let's start by loading our functional image, and the MNI152 template (the\n",
+    "source and reference images of our FLIRT matrix):"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "func = Image(op.join(featdir, 'reg', 'example_func'))\n",
+    "std  = Image(op.expandvars(op.join('$FSLDIR', 'data', 'standard', 'MNI152_T1_2mm')))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now we can use them to get affines which convert between all of the different\n",
+    "coordinate systems - we're going to combine them into a single uber-affine,\n",
+    "which transforms our functional-space voxels into MNI world coordinates via:\n",
+    "\n",
+    "   1. functional voxels -> FLIRT source space\n",
+    "   2. FLIRT source space -> FLIRT reference space\n",
+    "   3. FLIRT referece space -> MNI world coordinates"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "vox2fsl = func.getAffine('voxel', 'fsl')\n",
+    "fsl2mni = std .getAffine('fsl',   'world')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Combining two affines into one is just a simple dot-product. There is a\n",
+    "`concat()` function which does this for us, for any number of affines:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from fsl.transform.affine import concat\n",
+    "\n",
+    "# To combine affines together, we\n",
+    "# have to list them in reverse -\n",
+    "# linear algebra is *weird*.\n",
+    "funcvox2mni = concat(fsl2mni, func2std, vox2fsl)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "So we've now got some voxel coordinates from our functional data, and an affine\n",
+    "to transform into MNI world coordinates. The rest is easy:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "mnicoords = transform(peakvox, funcvox2mni)\n",
+    "print('Peak activation (MNI coordinates):', mnicoords)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "> Note that in the above example we are only applying a linear transformation\n",
+    "> into MNI space - in reality you would also want to apply your non-linear\n",
+    "> structural-to-standard transformation too. But this is left as an exercise\n",
+    "> for the reader ;).\n",
+    "\n",
+    "\n",
+    "<a class=\"anchor\" id=\"image-processing\"></a>\n",
+    "## Image processing\n",
+    "\n",
+    "Now, it's all well and good to look at t-statistric values and voxel\n",
+    "coordinates and so on and so forth, but let's spice things up a bit and look\n",
+    "at some images. Let's display our peak activation location in MNI space. To\n",
+    "do this, we're going to resample our functional image into"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from IPython.display import Image as Screenshot\n",
+    "!fsleyes render -of screenshot.png -std"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### (Advanced) Transform coordinates with nonlinear warpfields\n",
+    "\n",
+    "\n",
+    "have to use your own dataset"
+   ]
+  }
+ ],
+ "metadata": {},
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/advanced_topics/08_fslpy.md b/advanced_topics/08_fslpy.md
index bf43f41..ef8b860 100644
--- a/advanced_topics/08_fslpy.md
+++ b/advanced_topics/08_fslpy.md
@@ -1,10 +1,10 @@
 # `fslpy`
 
-# THIS IS A WORK IN PROGRESS - DO NOT READ
 
+[`fslpy`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/) is a
+Python library which is built into FSL, and contains a range of functionality
+for working with neuroimaging data from Python.
 
-`fslpy` is a Python library which is built into FSL, and contains a range of
-functionality for working with neuroimaging data in an FSL context.
 
 This practical highlights some of the most useful features provided by
 `fslpy`. You may find `fslpy` useful if you are writing Python code to
@@ -17,68 +17,356 @@ perform analyses and image processing in conjunction with FSL.
 
 
 * [The `Image` class, and other data types](#the-image-class-and-other-data-types)
+  * [Creating images](#creating-images)
+  * [Working with image data](#working-with-image-data)
+  * [Loading other file types](#loading-other-file-types)
 * [FSL atlases](#fsl-atlases)
 * [The `filetree`](#the-filetree)
-* [NIfTI coordinate systems](#nifti-coordinate-systems)
 * [Image processing](#image-processing)
 * [FSL wrapper functions](#fsl-wrapper-functions)
+* [NIfTI coordinate systems](#nifti-coordinate-systems)
 
 
 <a class="anchor" id="the-image-class-and-other-data-types"></a>
 ## The `Image` class, and other data types
 
 
-The `fsl.data.image` module provides the `Image` class, which sits on top of
-`nibabel` and contains some handy functionality if you need to work with
-coordinate transformations, or do some FSL-specific processing. The `Image`
-class provides features such as:
+The
+[`fsl.data.image`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.image.html#fsl.data.image.Image)
+module provides the `Image` class, which sits on top of `nibabel` and contains
+some handy functionality if you need to work with coordinate transformations,
+or do some FSL-specific processing. The `Image` class provides features such
+as:
 
 - Support for NIFTI1, NIFTI2, and ANALYZE image files
 - Access to affine transformations between the voxel, FSL and world coordinate
   systems
 - Ability to load metadata from BIDS sidecar files
 
+
 Some simple image processing routines are also provided - these are covered
 [below](#image-processing).
 
 
+<a class="anchor" id="creating-images"></a>
 ### Creating images
 
+
 It's easy to create an `Image` - you can create one from a file name:
 
+
 ```
+import os.path as op
 from fsl.data.image import Image
+
 stddir = op.expandvars('${FSLDIR}/data/standard/')
 
 # load a FSL image - the file
 # suffix is optional, just like
 # in real FSL-land!
 img = Image(op.join(stddir, 'MNI152_T1_1mm'))
+print(img)
 ```
 
-You can crearte an `Image` from an existing `nibabel` image:
+
+You can create an `Image` from an existing `nibabel` image:
+
 
 ```
+import nibabel as nib
+
 # load a nibabel image, and
 # convert it into an FSL image
 nibimg = nib.load(op.join(stddir, 'MNI152_T1_1mm.nii.gz'))
 img    = Image(nibimg)
-``
+```
+
 
 Or you can create an `Image` from a `numpy` array:
 
+
 ```
+import numpy as np
+
 data = np.zeros((100, 100, 100))
 img = Image(data, xform=np.eye(4))
 ```
 
+You can save an image to file via the `save` method:
+
+
+```
+img.save('empty.nii.gz')
+```
+
+
+`Image` objects have all of the attributes you might expect:
+
+
+```
+stddir = op.expandvars('${FSLDIR}/data/standard/')
+std1mm = Image(op.join(stddir, 'MNI152_T1_1mm'))
+
+print('name:         ', std1mm.name)
+print('file:         ', std1mm.dataSource)
+print('NIfTI version:', std1mm.niftiVersion)
+print('ndim:         ', std1mm.ndim)
+print('shape:        ', std1mm.shape)
+print('dtype:        ', std1mm.dtype)
+print('nvals:        ', std1mm.nvals)
+print('pixdim:       ', std1mm.pixdim)
+```
+
+
+and a number of useful methods:
+
+
+```
+std2mm  = Image(op.join(stddir, 'MNI152_T1_2mm'))
+mask2mm = Image(op.join(stddir, 'MNI152_T1_2mm_mask'))
+
+print(std1mm.sameSpace(std2mm))
+print(std2mm.sameSpace(mask2mm))
+print(std2mm.getAffine('voxel', 'world'))
+```
+
+
+An `Image` object is a high-level wrapper around a `nibabel` image object -
+you can always work directly with the `nibabel` object via the `nibImage`
+attribute:
+
+
+```
+print(std2mm)
+print(std2mm.nibImage)
+```
+
+
+<a class="anchor" id="working-with-image-data"></a>
+### Working with image data
+
+
+You can get the image data as a `numpy` array via the `data` attribute:
+
+
+```
+data = std2mm.data
+print(data.min(), data.max())
+```
+
+> Note that this will give you the data in its underlying type, unlike the
+> `nibabel.get_fdata` method, which up-casts image data to floating-point.
+
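+
+For example, you can compare the data type returned by the `data` attribute
+with what `nibabel.get_fdata` would give you (a small sketch - the exact
+`dtype` depends on the image file):
+
+
+```
+# the data attribute preserves the on-disk data
+# type, whereas get_fdata up-casts to float
+print(std2mm.data.dtype)
+print(std2mm.nibImage.get_fdata().dtype)
+```
+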
+
+You can also read and write data directly via the `Image` object:
+
+
+```
+slc = std2mm[:, :, 45]
+std2mm[0:10, :, :] = 0
+```
 
 
+Doing so has some advantages that may or may not be useful, depending on your
+use-case:
+ - The image data will be kept on disk - only the parts that you access will
+   be loaded into RAM (you will also need to pass `loadData=False` when creating
+   the `Image` to achieve this - see the sketch below).
+ - The `Image` object will keep track of modifications to the data - this can
+   be queried via the `saveState` attribute.
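+
+
+For example (a minimal sketch, assuming the `empty.nii.gz` file created
+earlier is still in the current directory):
+
+
+```
+# create the Image without loading the data into
+# memory - it will be read from disk on demand
+empty = Image('empty.nii.gz', loadData=False)
+
+# modify part of the image, then check
+# whether there are unsaved changes
+empty[:10, :, :] = 1
+print('All changes saved to disk?', empty.saveState)
+```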
+
+
+<a class="anchor" id="loading-other-file-types"></a>
+### Loading other file types
+
+
+The
+[`fsl.data`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.html#module-fsl.data)
+package has a number of other classes for working with different types of FSL
+and neuroimaging data. Most of these are higher-level wrappers around the
+corresponding `nibabel` types:
+
+* The
+  [`fsl.data.bitmap.Bitmap`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.bitmap.html)
+  class can be used to load a bitmap image (e.g. `jpg`, `png`, etc.) and
+  convert it to a NIfTI image.
+* The
+  [`fsl.data.dicom.DicomImage`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.dicom.html)
+  class uses `dcm2niix` to load NIfTI images contained within a DICOM
+  directory<sup>*</sup>.
+* The
+  [`fsl.data.mghimage.MGHImage`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.mghimage.html)
+  class can be used to load `.mgh`/`.mgz` images (they are converted into
+  NIfTI images).
+* The
+  [`fsl.data.dtifit`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.dtifit.html)
+  module contains functions for loading and working with the output of the
+  FSL `dtifit` tool.
+* The
+  [`fsl.data.featanalysis`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.featanalysis.html),
+  [`fsl.data.featimage`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.featimage.html),
+  and
+  [`fsl.data.featdesign`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.featdesign.html)
+  modules contain classes and functions for loading data from FEAT
+  directories.
+* Similarly, the
+  [`fsl.data.melodicanalysis`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.melodicanalysis.html)
+  and
+  [`fsl.data.melodicimage`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.melodicimage.html)
+  modules contain classes and functions for loading data from MELODIC
+  directories.
+* The
+  [`fsl.data.gifti`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.gifti.html),
+  [`fsl.data.freesurfer`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.freesurfer.html),
+  and
+  [`fsl.data.vtk`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.vtk.html)
+  modules contain functionality for loading surface data from GIfTI,
+  FreeSurfer, and VTK files respectively.
+
+
+> <sup>*</sup>You must make sure that `dcm2niix` is installed on your system
+> in order to use this class.
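+
+
+As a hedged sketch of how one of these classes might be used (the `.mgz` file
+name here is hypothetical), you could load a FreeSurfer image and save it out
+as a NIfTI image like so:
+
+
+```
+from fsl.data.mghimage import MGHImage
+
+# load a (hypothetical) FreeSurfer .mgz image -
+# it is converted into a NIfTI image internally
+mgh = MGHImage('orig.mgz')
+
+# MGHImage behaves like the Image class described
+# above - e.g. it can be saved out as NIfTI
+mgh.save('orig.nii.gz')
+```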
 
 
 <a class="anchor" id="fsl-atlases"></a>
 ## FSL atlases
 
+
+The
+[`fsl.data.atlases`](https://users.fmrib.ox.ac.uk/~paulmc/fsleyes/fslpy/latest/fsl.data.atlases.html)
+module provides access to all of the atlas images that are stored in the
+`$FSLDIR/data/atlases/` directory of a standard FSL installation. It can be
+used to load and query probabilistic and label-based atlases.
+
+
+The `atlases` module needs to be initialised using the `rescanAtlases` function:
+
+
+```
+import fsl.data.atlases as atlases
+atlases.rescanAtlases()
+```
+
+
+You can list all of the available atlases using `listAtlases`:
+
+
+```
+for desc in atlases.listAtlases():
+    print(desc)
+```
+
+
+`listAtlases` returns a list of `AtlasDescription` objects, each of which
+contains descriptive information about one atlas. You can retrieve the
+`AtlasDescription` for a specific atlas via the `getAtlasDescription`
+function:
+
+
+```
+desc = atlases.getAtlasDescription('harvardoxford-cortical')
+print(desc.name)
+print(desc.atlasID)
+print(desc.specPath)
+print(desc.atlasType)
+```
+
+
+Each `AtlasDescription` maintains a list of `AtlasLabel` objects, each of
+which represents one region that is defined in the atlas. You can access all
+of the `AtlasLabel` objects via the `labels` attribute:
+
+
+```
+for lbl in desc.labels[:5]:
+    print(lbl)
+```
+
+
+Or you can retrieve a specific label using the `find` method:
+
+
+```
+# search by region name
+print(desc.find(name='Occipital Pole'))
+
+# or by label value
+print(desc.find(value=48))
+```
+
+
+The `loadAtlas` function can be used to load the atlas image:
+
+
+```
+# For probabilistic atlases, you
+# can ask for the 3D ROI image
+# by setting loadSummary=True.
+# You can also request a
+# resolution - by default the
+# highest resolution version
+# will be loaded.
+lblatlas = atlases.loadAtlas('harvardoxford-cortical',
+                             loadSummary=True,
+                             resolution=2)
+
+# By default you will get the 4D
+# probabilistic atlas image (for
+# atlases for which this is
+# available).
+probatlas = atlases.loadAtlas('harvardoxford-cortical',
+                              resolution=2)
+
+print(lblatlas)
+print(probatlas)
+```
+
+
+`LabelAtlas` objects have a method called `label`, which can be used to
+interrogate the atlas at specific locations:
+
+
+```
+# The label method accepts 3D
+# voxel or world coordinates
+val = lblatlas.label((25, 52, 43), voxel=True)
+lbl = lblatlas.find(value=val)
+print('Region at voxel [25, 52, 43]: {} [{}]'.format(val, lbl.name))
+
+
+# or a 3D weighted or binary mask
+mask = np.zeros(lblatlas.shape)
+mask[30:60, 30:60, 30:60] = 1
+mask = Image(mask, header=lblatlas.header)
+
+lbls, props = lblatlas.label(mask)
+print('Labels in mask:')
+for lbl, prop in zip(lbls, props):
+    lblname = lblatlas.find(value=lbl).name
+    print('  {} [{}]: {:0.2f}%'.format(lbl, lblname, prop))
+```
+
+
+`ProbabilisticAtlas` objects have an analogous method called `values`:
+
+
+```
+vals = probatlas.values((25, 52, 43), voxel=True)
+print('Regions at voxel [25, 52, 43]:')
+for idx, val in enumerate(vals):
+    if val > 0:
+        lbl = probatlas.find(index=idx)
+        print('  {} [{}]: {:0.2f}%'.format(lbl.value, lbl.name, val))
+
+print('Average proportions of regions within mask:')
+vals = probatlas.values(mask)
+for idx, val in enumerate(vals):
+    if val > 0:
+        lbl = probatlas.find(index=idx)
+        print('  {} [{}]: {:0.2f}%'.format(lbl.value, lbl.name, val))
+```
+
+
 <a class="anchor" id="the-filetree"></a>
 ## The `filetree`
 
-- 
GitLab