Commit 4d17b6e0 authored by inhuszar

New: tutorial pages

parent 81f2b261
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# TIRL tutorial 1\n",
"\n",
"16 August 2020 \\\n",
"Istvan N Huszar (University of Oxford) \\\n",
"TIRL version: 2.2\n",
"\n",
"## Contents\n",
"\n",
"1. Quick introduction to the TIRL TImage class\n",
"2. Example: registering an alternative pair of images with pre-computed transformations"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Quick introduction"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Registration terminology\n",
"\n",
"It is easy to get confused about directions in registration as soon as you need \n",
"to see the details, as the direction of the underlying coordinate \n",
"transformations is the opposite of the perceived direction of the registration.\n",
"\n",
"Given images A and B, the perceived direction of a registration between A and B \n",
"is such that A is said to be registered to B, if A looks like B and has the \n",
"same shape as B.\n",
"\n",
"On the level of the transformations, the operation that produces the registered \n",
"version of A is formulated in a reverse way. A transformation (T) that maps the \n",
"points of B to A is calculated, and T is used as a pull-back to sample A for \n",
"each point in B. The registered version of A is therefore given by A(T(x)), \n",
"which is approximately equal to the image values of B at all points of B: \n",
"\n",
"$$\n",
"A(T(x)) = B(x)\n",
"$$ \n",
"\n",
"Aligning with the intuitive perception of the registration direction, TIRL \n",
"uses the *source* or *moving* term for A and the *target* or *fixed* term for B, if A is \n",
"said to be registered to B. A further distinction is made by the use of the \n",
"verbs *map* and *register*. Mapping refers to the transformation of coordinates,\n",
"whereas registration refers to the operation on the images. Consequently, if A \n",
"is registered to B, that implies a mapping from B to A. \n",
"\n",
"TIRL represents T as a *chain* (compound) of elementary transformations attached \n",
"to the target image (B), which can be modified and reused for the registration of other images that were derived from A and B."
]
},
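{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the pull-back formulation concrete, the cell below is a minimal NumPy/SciPy sketch of the idea. It is independent of the TIRL API, and all names in it are illustrative only: a toy image A is sampled at the transformed coordinates T(x) of every grid point x of B."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Conceptual pull-back sampling with NumPy/SciPy (not the TIRL API)\n",
"import numpy as np\n",
"from scipy.ndimage import map_coordinates\n",
"\n",
"# Toy source image A and the pixel grid of a toy target image B\n",
"A = np.arange(100, dtype=float).reshape(10, 10)\n",
"yy, xx = np.mgrid[0:10, 0:10]              # points x of B\n",
"\n",
"# A toy transformation T: a small translation of the coordinates of B\n",
"T_y, T_x = yy + 1.5, xx - 0.5\n",
"\n",
"# Registered version of A: A(T(x)), evaluated at every point of B\n",
"A_on_B = map_coordinates(A, [T_y, T_x], order=1, mode=\"nearest\")"
]
},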
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Loading a TImage"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# Load prerequisites\n",
"import tirl"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"# Load the registered images\n",
"fixed = \"/mnt/nemo/proc/postmort/histology/reproducibility2/CD68/NP091_15/2/\" \\\n",
" \"fixed4_nonlinear.timg\"\n",
"moving = \"/mnt/nemo/proc/postmort/histology/reproducibility2/CD68/NP091_15/2/\" \\\n",
" \"moving.timg\"\n",
"fixed = tirl.load(fixed)\n",
"moving = tirl.load(moving)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At this point, all images are in the TIRL TImage format, which contains both\n",
"image data and transformations."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"TImage(domain=(2580, 2987), tensor=(1,), dtype=float32, mem)"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Inspect the fixed image\n",
"fixed"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The string representation of a TImage contains:\n",
"\n",
"- voxel shape\n",
"- tensor shape\n",
"- data type\n",
"- where the image data are stored (memory or disk)\n",
"\n",
"The tensor shape is the shape of the data at each voxel. E.g. (3,) for an RGB\n",
"image, (4,) for RGBA, and (3, 3) for a 3D diffusion tensor image."
]
},
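{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustration of the difference between the voxel shape and the tensor shape (plain NumPy, not TIRL): an RGB image stored as an (H, W, 3) array has a 2D voxel domain of shape (H, W) and a (3,) tensor at each pixel."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustration only (plain NumPy, not TIRL): voxel shape vs tensor shape\n",
"import numpy as np\n",
"\n",
"rgb = np.zeros((2580, 2987, 3), dtype=np.float32)\n",
"voxel_shape = rgb.shape[:2]   # (2580, 2987) -> the image domain\n",
"tensor_shape = rgb.shape[2:]  # (3,)         -> per-pixel data (R, G, B)\n",
"print(voxel_shape, tensor_shape)"
]
},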
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"# Visualise the fixed image\n",
"fixed.preview()"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[229.071, 228.071, 227.071, ..., 223.712, 222.712, 222.071],\n",
" [227.071, 227.071, 226.071, ..., 224.071, 223.071, 222.071],\n",
" [225.071, 225.071, 224.071, ..., 224.071, 223.071, 222.071],\n",
" ...,\n",
" [193. , 193. , 194. , ..., 239.391, 239.989, 240.288],\n",
" [189. , 188. , 188. , ..., 238.206, 237.989, 239.288],\n",
" [184.718, 192.647, 188.217, ..., 239.473, 242.277, 240.462]],\n",
" dtype=float32)"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# The Timage data can be manipulated through an ndarray:\n",
"fixed.data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### The TImage domain"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Domain(2580 x 2987, offset=0, tx=7, storage=mem)"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# The domain of an image is represented by the Domain object, which\n",
"# is responsible for all coordinate operations in TIRL.\n",
"fixed.domain"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The string representation of a Domain object contains:\n",
"\n",
"- domain shape (as a rectangular grid)\n",
"- number of internal domain transformations (*offset*)\n",
"- number of external domain transformations (*chain*)\n",
"- where the coordinates are stored for further calculations (memory or disk)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[ 0, 0],\n",
" [ 0, 1],\n",
" [ 0, 2],\n",
" ...,\n",
" [2579, 2984],\n",
" [2579, 2985],\n",
" [2579, 2986]], dtype=int16)"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Retrieve the pixel coordinates of the fixed image:\n",
"fixed.domain.get_voxel_coordinates()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Domain transformations"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The pixel coordinates are mapped into *physical space* (a term in TIRL) by a\n",
"transformation chain. The transformation chain consists of elementary\n",
"Transformation objects. Consecutive Transformations map the coordinates in\n",
"the order they appear in the chain, e.g. tx1 will map the pixel coordinates,\n",
"and tx2 will map the results of the first transformation. TIRL divides the\n",
"transformation chain into two parts: offset and chain. The \"offset\" part\n",
"consists of \"internal\" transformations that are added automatically by TIRL\n",
"on certain TImage operations (e.g. downsampling/upsampling) to keep the image\n",
"fixed in physical space. The \"chain\" stores \"external\" transformations that\n",
"are added by the user who writes the registration script."
]
},
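{
"cell_type": "markdown",
"metadata": {},
"source": [
"The order of the transformations in the chain matters. The sketch below uses plain Python functions (not the TIRL API) to show that applying a scaling followed by a translation is not the same as applying them in the opposite order, which is why the chain is an ordered sequence."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustration only (not TIRL): transformations in a chain are order-dependent\n",
"import numpy as np\n",
"\n",
"x = np.array([1.0, 2.0])    # a single 2D pixel coordinate\n",
"\n",
"def tx1(p):                 # a toy isotropic scaling\n",
"    return 2.0 * p\n",
"\n",
"def tx2(p):                 # a toy translation\n",
"    return p + 10.0\n",
"\n",
"print(tx2(tx1(x)))          # tx1 then tx2 -> [12. 14.]\n",
"print(tx1(tx2(x)))          # tx2 then tx1 -> [22. 24.]"
]
},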
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Internal transformations of the fixed image:\n",
"Chain[]\n"
]
}
],
"source": [
"print(\"Internal transformations of the fixed image:\")\n",
"print(fixed.domain.offset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The empty Chain object indicates that the fixed image has no internal transformations. This is because the registration script did not preform any image operations on the output, that would have changed the physical coordinates of the TImage, such as downscaling/upscaling or padding."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"External transformations of the fixed image:\n",
"Chain[\n",
"TxScale(parameters=2, locked=0, dim=2, shape=(2, 2), name=resolution)\n",
"TxTranslation(parameters=2, locked=0, dim=2, shape=(2, 3), name=centralise)\n",
"TxRotation2D(parameters=1, locked=0, dim=2, shape=(2, 2), name=rotation)\n",
"TxIsoScale(parameters=1, locked=0, dim=2, shape=(2, 2), name=scale)\n",
"TxTranslation(parameters=2, locked=0, dim=2, shape=(2, 3), name=translation)\n",
"TxAffine(parameters=6, locked=0, dim=2, shape=(2, 3), name=affine)\n",
"TxDisplacementField(parameters=3854520, locked=0, dim=2, name=tx_140252051042768)\n",
"]\n"
]
}
],
"source": [
"print(\"External transformations of the fixed image:\")\n",
"print(fixed.domain.chain)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The external transformations correspond to the following logical units:\n",
"\n",
"- initialisation (into metric space with millimetre coordinates, origin at the image centre)\n",
"- linear transformations (translation, rotation, scale, and a full affine)\n",
"- non-linear transformations (a deformation field)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The whole transformation chain (offset + chain):\n",
"Chain[\n",
"TxScale(parameters=2, locked=0, dim=2, shape=(2, 2), name=resolution)\n",
"TxTranslation(parameters=2, locked=0, dim=2, shape=(2, 3), name=centralise)\n",
"TxRotation2D(parameters=1, locked=0, dim=2, shape=(2, 2), name=rotation)\n",
"TxIsoScale(parameters=1, locked=0, dim=2, shape=(2, 2), name=scale)\n",
"TxTranslation(parameters=2, locked=0, dim=2, shape=(2, 3), name=translation)\n",
"TxAffine(parameters=6, locked=0, dim=2, shape=(2, 3), name=affine)\n",
"TxDisplacementField(parameters=3854520, locked=0, dim=2, name=tx_140252051042768)\n",
"]\n"
]
}
],
"source": [
"print(\"The whole transformation chain (offset + chain):\")\n",
"print(fixed.domain.all_tx())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that the distinction between internal and external transformations and the splitting of the chain is based on purely practical reasons, allowing certain parts of TIRL to find transformations more easily. The two parts of the chain can be merged and assigned entirely to the \"chain\" attribute (leaving the \"offset\" chain empty) without any change in the mapping. In many cases, the offset part will be empty."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Slicing chains"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The \\[:\\] slicer expression creates a new Chain object from an existing Chain object. The new chain will share all existing Transformations with the old chain, but removing or adding Transformations to one will leave the other unaffected. Changing the parameters of the shared tr\n",
"\n",
"This can be used instead of fixed.domain.chain.copy(), when it is unnecessary to duplicate the transformations in memory, which may include a large deformation field (>100 MB).\n",
"\n",
"The slicing syntax can be used more generally to extract certain parts from a transformation chain, and paste it elsewhere."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Chain[\n",
"TxScale(parameters=2, locked=0, dim=2, shape=(2, 2), name=resolution)\n",
"TxTranslation(parameters=2, locked=0, dim=2, shape=(2, 3), name=centralise)\n",
"TxRotation2D(parameters=1, locked=0, dim=2, shape=(2, 2), name=rotation)\n",
"TxIsoScale(parameters=1, locked=0, dim=2, shape=(2, 2), name=scale)\n",
"TxTranslation(parameters=2, locked=0, dim=2, shape=(2, 3), name=translation)\n",
"TxAffine(parameters=6, locked=0, dim=2, shape=(2, 3), name=affine)\n",
"]\n"
]
}
],
"source": [
"# For example, extracting the linear subchain from the fixed image\n",
"# (i.e. init + linear transformations without the deformation field):\n",
"linear_subchain = fixed.domain.chain[:-1]\n",
"print(linear_subchain)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Example"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Attaching pre-computed transformations to alternative images"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the reproducibility experiment, consecutive histological sections are \n",
"numbered. For a single subject, block and stain, the corresponding five \n",
"sections were registered in the following direction: $2\\rightarrow 1$, $3\\rightarrow 2$, $4\\rightarrow 3$, $5\\rightarrow 4$.\n",
"\n",
"Each registration consists of an affine and a non-linear transformation. E.g. \n",
"in the case of $2\\rightarrow 1$, the first part of the chain performs an affine mapping of \n",
"the coordinates of 1 to the domain of 2, which is followed by a non-linear \n",
"mapping of the affine-mapped coordinates of 1 further onto the domain of 2. So \n",
"the estimated chain looks like:\n",
" \n",
" 1 --(affine)--(non-linear)--> 2\n",
" \n",
"Non-consecutive images can be registered by combining consecutive chains. E.g. \n",
"the combined chain\n",
"\n",
" 1 --(chain 1)--> 2 --(chain 2) --> 3 ...\n",
" \n",
"maps the points of 1 to 3, i.e. registers the 3rd image on the 1st.\n",
"\n",
"The chains may be reused to register other images that were derived from the \n",
"registered images in native space. In the tutorial below we show this on two \n",
"pairs of images.\n",
"\n",
"The registered pair:\n",
"\n",
"- fixed\n",
"- moving\n",
"\n",
"As the whole-slide histology images were way too large for the purpose of \n",
"registration, the *fixed* and *moving* images correspond to the WSIs at 8 $\\mu m$/px \n",
"resolution (corresponding to level 2 in the SVS resolution pyramid). (The \n",
"OpenSlide backend was used in TIRL to import the images from the SVS file.)\n",
"\n",
"The alternative pair of images:\n",
"\n",
"- alt_fixed\n",
"- alt_moving\n",
"\n",
"It is assumed that the alternative images are scaled versions (including \n",
"identity) of the original images."
]
},
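{
"cell_type": "markdown",
"metadata": {},
"source": [
"Combining consecutive chains can be pictured as plain function composition. The sketch below is conceptual only (it does not use the TIRL API, and the affine coefficients are made up): if $T_{12}$ maps the points of section 1 onto section 2 and $T_{23}$ maps the points of section 2 onto section 3, then $T_{23} \circ T_{12}$ maps the points of 1 onto 3, and sampling image 3 at these coordinates registers it onto image 1."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Conceptual sketch only (not the TIRL API): chain combination as composition\n",
"import numpy as np\n",
"\n",
"def T_12(x):                            # toy stand-in for chain 1 (maps 1 -> 2)\n",
"    return 1.05 * x + np.array([2.0, -1.0])\n",
"\n",
"def T_23(x):                            # toy stand-in for chain 2 (maps 2 -> 3)\n",
"    return 0.98 * x + np.array([0.5, 3.0])\n",
"\n",
"def T_13(x):                            # combined chain (maps 1 -> 3)\n",
"    return T_23(T_12(x))\n",
"\n",
"# Sampling image 3 at T_13(x) for every point x of image 1\n",
"# registers the 3rd image onto the 1st.\n",
"print(T_13(np.array([100.0, 200.0])))"
]
},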
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"# Load preprequisites\n",
"import numpy as np\n",
"from tirl.timage import TImage\n",
"from tirl.transformations.linear.scale import TxScale"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"# To load the alternative images from non-TImage format, we need to use the TImage class directly\n",
"alt_fixed = \"/mnt/nemo/proc/postmort/histology/reproducibility2/CD68/NP091_15/2/fixed.png\"\n",
"alt_moving = \"/mnt/nemo/proc/postmort/histology/reproducibility2/CD68/NP091_15/2/moving.png\"\n",
"alt_fixed = TImage(alt_fixed)\n",
"alt_moving = TImage(alt_moving)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The \\[:\\] slicer expression creates a new Chain object from an existing Chain object. The new chain will share all existing Transformations with the old chain, but removing or adding Transformations to the new chain will leave the old one unaffected.\n",
"\n",
"We use this instead of fixed.domain.chain.copy(), because it is unnecessary to duplicate the transformations, including a large deformation field (>100 MB)."
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"# Skip this step if the alternative fixed image has the same shape as the original fixed image.\n",
"\n",
"# Otherwise, we need to prepend the transformation chain with a transformation that scales the\n",
"# pixels of the alternative fixed image to the pixels of the original fixed image.\n",
"fixed_scale_factors = np.divide(alt_fixed.domain.shape, fixed.domain.shape)\n",
"fixed_scale_adaptor = TxScale(*fixed_scale_factors, name=\"fixed_scale_adaptor\") # the name is optional\n",
"alt_fixed = fixed_scale_adaptor + fixed.domain.chain"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Internal transformations:\n",
"Chain[]\n",
"\n",
"\n",
"External transformations:\n",
"Chain[\n",
"TxScale(parameters=2, locked=0, dim=2, shape=(2, 2), name=resolution)\n",
"TxTranslation(parameters=2, locked=0, dim=2, shape=(2, 3), name=centralise)\n",
"]\n"
]
}
],
"source": [
"# Note that the moving image also has a transformation chain:\n",
"print(\"Internal transformations:\")\n",
"print(moving.domain.offset) # No internal transfomations\n",
"print(\"\\n\")\n",
"print(\"External transformations:\")\n",
"print(moving.domain.chain) # Only external transformations"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"# Skip this step if the alternative moving image has the same shape as the original moving image.\n",
"\n",
"# Otherwise, we need to prepend the transformation chain with a transformation that scales the\n",
"# pixels of the alternative moving image to the pixels of the registered moving image.\n",
"moving_scale_factors = np.divide(alt_moving.domain.shape, moving.domain.shape)\n",
"moving_scale_adaptor = TxScale(*moving_scale_factors, name=\"moving_scale_adaptor\") # the name is optional\n",
"alt_moving.domain.chain = moving_scale_adaptor + moving.domain.chain"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Applying transformations to obtain registered images"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As all transformations are in place, the alt_fixed and alt_moving TImages are technically registered, because their points map onto the same domain. To obtain actual images that look alike, one of the TImages need to be evaluated on the other's domain, and the decision is up to you."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Choose either this:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"# Register the moving image to the fixed image\n",
"# This uses the existing transformation chain for the fixed image, and the inverse of the moving image.\n",
"# Since the moving chain consists of only linear transformations, the inversion is quick, and resampling\n",
"# dominates the computational cost.\n",
"moving_on_fixed = alt_moving.evaluate(alt_fixed.domain)\n",
"\n",
"# unifying variables for the next section\n",
"result = moving_on_fixed\n",
"other = fixed"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### or this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Register the fixed image to the moving image\n",
"# This uses the moving chain, and needs to invert the fixed chain. As the fixed chain is non-linear,\n",
"# TIRL will invert the deformation field, which contributes significantly to the computation time here.\n",
"fixed_on_moving = alt_fixed.evaluate(alt_moving.domain)\n",
"\n",
"# Unifying variables for the next section\n",
"result = fixed_on_moving\n",
"other = moving"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Inspecting and exporting the result of the registration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Preview the result\n",
"result.preview()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Show the difference between the registered and the other image:\n",
"(result - other).preview()"
]