Commit 404c5d25 authored by Matthew Webster

This commit was manufactured by cvs2svn to create tag 'fsl-5_0_7'.

Sprout from master 2014-02-06 17:27:54 UTC Matthew Webster <mwebster@fmrib.ox.ac.uk> 'Changes for osx 109'
Cherrypick from master 2014-07-08 16:29:20 UTC Matthew Webster <mwebster@fmrib.ox.ac.uk> 'check for bvacs/bvals .txt files':
    CUDA/bedpostx_gpu
    bedpostx
    eddy_correct
Cherrypick from master 2012-09-04 15:58:14 UTC Stamatios Sotiropoulos <stam@fmrib.ox.ac.uk> 'Hide grad_nonlin options':
    dtifitOptions.h
Delete:
    CUDA/bedpostx_multigpu_LSF
    CUDA/bedpostx_multigpu_PBS.sh
    RLR_MAP_routines.cpp
    RLR_MAP_routines.h
    doc/fdt_bedpostx.html
    doc/fdt_bedpostx_parallel.html
    doc/fdt_biggest.html
    doc/fdt_display.html
    doc/fdt_dtifit.html
    doc/fdt_eddy.html
    doc/fdt_images/eddy.gif
    doc/fdt_images/eddy.tif
    doc/fdt_images/eddy2.gif
    doc/fdt_images/fdt_bedpost.gif
    doc/fdt_images/fdt_bedpost.tiff
    doc/fdt_images/fdt_bedpostx.gif
    doc/fdt_images/fdt_fa.gif
    doc/fdt_images/fdt_fa_big.png
    doc/fdt_images/fdt_gui.gif
    doc/fdt_images/fdt_gui.tiff
    doc/fdt_images/fdt_l1.gif
    doc/fdt_images/fdt_lines.gif
    doc/fdt_images/fdt_lines_subs.gif
    doc/fdt_images/fdt_probtrackx.gif
    doc/fdt_images/fdt_rgb.gif
    doc/fdt_images/fdt_rgb_subs.gif
    doc/fdt_images/fdt_seeds2targets_quant_eg.gif
    doc/fdt_images/fdt_seeds2targets_thal.gif
    doc/fdt_images/fdt_simple_tract3.gif
    doc/fdt_images/fdt_spherical_polars.gif
    doc/fdt_images/fdt_twomasks_tracts.gif
    doc/fdt_images/fdt_vecreg.gif
    doc/fdt_images/fdt_vectors.jpg
    doc/fdt_images/fdt_vectorsx.jpg
    doc/fdt_images/fdt_xfibres_axial.png
    doc/fdt_images/fdt_xfibres_coronal.png
    doc/fdt_images/fsl-bg.jpg
    doc/fdt_images/fsl-logo.jpg
    doc/fdt_images/fslview_dti.gif
    doc/fdt_images/fslview_x.gif
    doc/fdt_images/gui_intro.gif
    doc/fdt_images/gui_intro.tif
    doc/fdt_images/qboot.gif
    doc/fdt_pipeline.html
    doc/fdt_probtrackx.html
    doc/fdt_reg.html
    doc/fdt_surface.html
    doc/fdt_thresh.html
    doc/fdt_top.html
    doc/fdt_utils.html
    doc/fdt_vecreg.html
    doc/index.html
    nldtifit.cpp
    nldtifit.h
    rubix.cc
    rubix.h
    rubix_parallel.sh
    rubix_postproc.sh
    rubix_preproc.sh
    rubixoptions.cc
    rubixoptions.h
    rubixvox.cc
    rubixvox.h
parent d3b3343a
Showing with 9 additions and 2017 deletions
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><meta http-equiv="Content-Type"
content="text/html;charset=utf-8">
<link REL="stylesheet" TYPE="text/css"
href="../fsl.css"><TITLE>FSL</TITLE></HEAD>
<BODY><IFRAME width="100%" scrolling="no" frameborder="0" src="fdt_top.html">Broken</IFRAME>
<h3>Processing pipeline</h3>
<p>To call the FDT GUI, either run <b>Fdt</b> (<b>Fdt_gui</b> on a Mac), or run <b>fsl</b> and press the <b>FDT</b> button.
<p>A typical processing pipeline (and approximate time required for each
stage, based on an Intel 2.4GHz processor, and a 60-direction whole brain
dataset of dimensions 128x128x64, at 2.5 mm isotropic resolution)
would consist of:
<ol>
<li>Any study or scanner-specific pre-processing (e.g., conversion
from DICOM to NIFTI, removal of images affected by large artifacts). This would be
done manually by the user.</li>
<li><a href="fdt_eddy.html">Eddy current correction</a> using FDT
(around 3 minutes per volume). </li>
<li>Brain extraction using BET.</li>
<li><a href="fdt_dtifit.html">dtifit</a> - Fitting of diffusion
tensors on corrected data using dtifit within FDT to check data
quality (1 minute) </li>
<li><a href="fdt_bedpostx.html">bedpostx</a> - Fitting of the probabilistic diffusion model on corrected data
(20 hours, or less if parallelised)</li>
<li><a href="fdt_reg.html">registration</a> - (3-6 minutes) </li>
<li><a href="fdt_probtrackx.html">probtrackx</a> - Probabilistic tractography run on the output of bedpostx (open-ended - run time depends very much on what the
user wishes to do. Generating a connectivity distribution from a single voxel
of interest takes about 10 seconds)</li>
<li>Further post-processing of <b>probtrackx</b> outputs can be carried out if
required using the command-line <a href="fdt_utils.html">utilities</a> </li>
</ol>
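The numbered steps above can be sketched as command-line calls. This is a minimal sketch, not a prescription: FSL must be on your PATH, and the file names (data, bvecs, bvals), the bet <code>-f</code> value, and the subject directory name are assumptions for illustration.

```shell
# 2. Eddy current correction, using volume 0 as the reference:
eddy_correct data data_corrected 0
# 3. Brain extraction: pull out a b=0 volume, then run BET on it
#    (-m also writes the binary mask nodif_brain_mask):
fslroi data_corrected nodif 0 1
bet nodif nodif_brain -m -f 0.3
# 4. Tensor fitting with dtifit, as a data-quality check:
dtifit -k data_corrected -o dti -m nodif_brain_mask -r bvecs -b bvals
# 5. Probabilistic diffusion modelling; bedpostx expects a subject directory
#    containing data, nodif_brain_mask, bvecs and bvals:
bedpostx subject1
```

Step 1 (DICOM conversion, artifact rejection) is manual, and registration/probtrackx follow via the GUI or the commands described in the relevant sections.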
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><meta http-equiv="Content-Type"
content="text/html;charset=utf-8">
<link REL="stylesheet" TYPE="text/css"
href="../fsl.css"><TITLE>FSL</TITLE></HEAD>
<BODY><IFRAME width="100%" scrolling="no" frameborder="0" src="fdt_top.html">Broken</IFRAME>
<h3>PROBTRACKX - probabilistic tracking</h3><br>
For details about probabilistic tractography as implemented by FDT,
see <a
href="http://www.fmrib.ox.ac.uk/analysis/techrep/tr03tb1/tr03tb1/">here</a>, and for details about crossing fibre modelling in FDT, see Behrens et al, NeuroImage 2007, 34(1):144-55.<br><br>
Briefly, FDT repeatedly samples from the distributions on voxel-wise principal diffusion directions, each time computing a streamline through these local samples to generate a <em>probabilistic streamline</em> or a <em>sample</em> from the distribution on the location of the true streamline. By taking many such samples FDT is able to build up the posterior distribution on the streamline location or the <em>connectivity distribution</em>. The local diffusion directions are calculated using <b>bedpostx</b>, which allows modelling of multiple fibre orientations per voxel.
<p>After <b>bedpostx</b> has been applied it is possible to run tractography analyses using <b>probtrackx</b>.
<p>To call the FDT GUI, either run <b>Fdt</b>, or run <b>fsl</b> and press the <b>FDT</b> button. Use the top left drop down menu to select <b>PROBTRACKX</b>.
<p>
PROBTRACKX GUI has two main sections:
<ul>
<li><a href="#seeds">Seed Space</a></li>
<li><a href="#targets">Optional Targets</a></li>
</ul>
Also, <b>probtrackx</b> requires the specification of a bedpostX directory. This directory must contain the following images:<br><br>
<ul>
<li><b>merged_th&#60;i&#62;samples</b></li>
<li><b>merged_ph&#60;i&#62;samples</b></li>
<li><b>merged_f&#60;i&#62;samples</b></li>
<li><b>nodif_brain_mask</b></li>
</ul>
<br> (<i>Note that users of previous versions of FDT can still use <b>probtrackx</b> on the old bedpost directories. In this case, the above files are called <code>merged_thsamples</code>, etc.</i>)<br>
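A quick way to check a bedpostX directory before launching <b>probtrackx</b> is to test for these files in the shell. The function below is a sketch only; the <code>.nii.gz</code> extension, the <code>demo.bedpostX</code> name and checking only the i=1 samples are assumptions for illustration.

```shell
# Return the number of required bedpostX images missing from a directory.
check_bedpostx() {
  dir="$1"
  missing=0
  for f in merged_th1samples merged_ph1samples merged_f1samples nodif_brain_mask; do
    if [ ! -e "$dir/$f.nii.gz" ]; then
      echo "missing: $f"
      missing=$((missing + 1))
    fi
  done
  return $missing
}

# Demonstrate on a mock directory of empty placeholder files:
mkdir -p demo.bedpostX
touch demo.bedpostX/merged_th1samples.nii.gz demo.bedpostX/merged_ph1samples.nii.gz \
      demo.bedpostX/merged_f1samples.nii.gz demo.bedpostX/nodif_brain_mask.nii.gz
check_bedpostx demo.bedpostX && echo "bedpostX directory looks complete"
```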
<p>As explained below, results from <b>probtrackx</b> can be binned in any available space - e.g.,
diffusion space, structural space or standard space. Note, however,
that tractography itself ALWAYS takes place in diffusion space - it is
simply the <em>results</em> of <b>probtrackx</b> that are stored in the
required space. If <b>probtrackx</b> results are to be stored in a space
other than diffusion space then you will need transformations from
this space back into the space of the diffusion data.
<br><br>
You can use the <a href="fdt_reg.html">FDT registration tab</a> to create the required transformations, which will all be stored in the <code>xfms</code> subdirectory of the bedpostX directory.<br>
Probtrackx can take either linear (FLIRT) or nonlinear (FNIRT) transformations. In the latter case, both <i>forward</i> (diffusion to seed space) and <i>backward</i> (seed space to diffusion) transformations are needed. For example:
<p>For running analyses in structural space:
<ul>
<li>Linear: <b>xfms/str2diff.mat</b></li>
<li>Nonlinear: <b>xfms/str2diff_warp</b> and <b>xfms/diff2str_warp</b></li>
</ul>
<p>For running analyses in standard space:
<ul>
<li>Linear: <b>xfms/standard2diff.mat</b></li>
<li>Nonlinear: <b>xfms/standard2diff_warp</b> and <b>xfms/diff2standard_warp</b></li>
</ul>
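For the nonlinear case, the required warpfields can be derived with <code>convertwarp</code> and <code>invwarp</code>. This is a sketch only; it assumes you already have <code>xfms/diff2str.mat</code> from FLIRT and a structural-to-standard warp called <code>xfms/str2standard_warp</code> (these names follow the xfms naming above but are assumptions).

```shell
# Compose diffusion->structural (linear) with structural->standard (warp):
convertwarp -o xfms/diff2standard_warp -r $FSLDIR/data/standard/MNI152_T1_2mm \
            --premat=xfms/diff2str.mat -w xfms/str2standard_warp
# Invert it to obtain the backward (standard->diffusion) warp:
invwarp -w xfms/diff2standard_warp -o xfms/standard2diff_warp -r nodif_brain
```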
<hr>
<h3>Overview</h3>
PROBTRACKX involves generating connectivity distributions from user-specified seed voxel(s). The output will be a single image in the space of the specified seed like <a href="fdt_images/fdt_simple_tract3.gif">this</a>. All brain voxels will have a value (though many of these will be zero) representing the connectivity value between that voxel and the seed voxel (i.e., the number of samples that pass through that voxel). Note that when connectivity distributions are generated from multiple seed voxels within a region of interest then the time required for the analysis to run will be approximately the number of seed voxels multiplied by the time taken to generate a distribution from a single voxel. <b>probtrackx</b> also allows you to specify targets for the tractography. These can either be inclusion/exclusion masks, or targets for seed classification based on connectivity.
<a name="seeds"></a>
<h3>Seed specification - prologue</h3>
A common feature of all seed specification modes is the ability to provide the seed in a space other than diffusion space. If <b>seed space is not diffusion</b>, then check the corresponding button. Set the transformation matrix from seed space to diffusion space (e.g., subject1.bedpostX/xfms/str2diff.mat if seed space is structural space, subject1.bedpostX/xfms/standard2diff.mat if seed space is standard space, or the forward and backward warpfields if using nonlinear registration). Note that, in all cases, the smaller the voxel size in your seed space image, the lower the resulting connectivity values to those voxels will be (this is intuitive - the smaller a voxel is, the less chance that the true streamline will pass through it!). This highlights the problem with binning a continuous distribution into meaningless discrete bins. In order for the probability values to be truly meaningful, the discrete bins chosen should be anatomically meaningful, as is the case when using <a href="#targets">classification targets</a>.
<p><h4>Single voxel</h4>
<IMG ALIGN=RIGHT height=100 SRC="fdt_images/fdt_simple_tract3.gif" ALT="simple tract">
Generates a connectivity distribution from a single, user-specified voxel.
<p>GUI Options: <br>
<b>Seed reference image</b> Use the browse button to locate a reference image (e.g., subject1.bedpostX/struct.nii.gz if seed space is structural space or subject1.bedpostX/standard.nii.gz if seed space is standard space).<br>
<b>Seeds:</b> Enter the x,y,z co-ordinates of a single seed voxel. Use the buttons to the right to specify whether the co-ordinates are given in voxels or millimetres. Note that if "seed space is not diffusion" is set, and the seed space reference image is the MNI152 average brain, then mm co-ordinates will have their origin at the AC.
<p>The output will be a single image <b>in the space of the specified seed</b>. All brain voxels will have a value (though many of these will be zero) representing the connectivity value between that voxel and the seed voxel (i.e., the number of samples that pass through that voxel). The example on the right shows the connectivity distribution from a single seed in the internal capsule overlaid on an FA image. Note that when the seed space is a single voxel, the classification targets in the <a href="#targets">Optional Targets</a> tab are turned off.
<p><h4>Single mask</h4>
Generates a connectivity distribution from a user-specified region of interest.
<p>GUI Options: <br>
<b>Seed image:</b> Use the browse button to locate the seed image. Probabilistic tractography will be run from every voxel with a value different from 0 in this mask.
<p>The output directory will contain: <br>
<b>probtrackx.log</b> - a text record of the command that was run.<br>
<b>fdt.log</b> - a log of the setup of the FDT GUI when the analysis was run. To recover this GUI setup, type <code>Fdt fdt.log</code><br>
<b>fdt_paths</b> - a 3D image file containing the output connectivity distribution to the seed mask.<br>
<b>waytotal</b> - a text file containing a single number corresponding to the total number of generated tracts that have not been rejected by inclusion/exclusion mask criteria.<br><br>
The output connectivity distribution file will be a single image in the space of the specified seed mask. All brain voxels will have a value (though many of these may be zero) representing the number of samples that pass through that voxel from the seed mask. Connectivity distributions from multiple seed voxels are summed to produce this output. Therefore the connectivity values will depend on the number of voxels in the seed mask.<br>
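Because these raw counts scale with the number of samples and seed voxels, a common post-hoc step is to threshold <b>fdt_paths</b> before display. A minimal <code>fslmaths</code> sketch; the threshold of 100 samples is an arbitrary example, not a recommendation:

```shell
# Keep only voxels crossed by at least 100 samples:
fslmaths fdt_paths -thr 100 fdt_paths_thr100
```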
<p><h4>Multiple masks</h4>
Generates a connectivity distribution between a group of seed masks. This option repeatedly samples tracts from every seed mask in a list, and retains only those tracts that pass through at least <b>one of the other</b> seed masks. The output is the sum of the connectivity distributions from each of the seed masks.
<p>GUI Options: <br>
<b>Masks list</b>: Use the add button to locate each seed mask. Seed masks should all be in the same space (e.g., diffusion, structural or standard space). When all masks are loaded you can press the save list button to save the list of masks as a text file. If you already have a text file list of required seed masks (including their path) then you can load it with the load list button.
<p>The output directory will contain:
<br>
<b>probtrackx.log</b> - a text record of the command that was run.<br>
<b>fdt.log</b> - a log of the setup of the FDT GUI when the analysis was run. To recover this GUI setup, type <code>Fdt fdt.log</code><br>
<b>fdt_paths</b> - a 3D image file containing the output connectivity distribution.<br>
<b>waytotal</b> - a text file containing one number per seed mask (in the order in which they were listed). These numbers correspond to the total number of generated tracts from each seed mask that have reached at least one of the other masks and have not been rejected by inclusion/exclusion mask criteria.<br>
<br>The output file will be a single image in the space of the specified masks. All brain voxels will have a value (though many of these may be zero) representing the number of samples that pass through that voxel from any of the seed masks and which also pass through at least one of the other seed masks. Connectivity distributions from multiple seed voxels are summed to produce this output. Therefore the connectivity values will depend on the number of voxels in the seed masks.
<hr>
<a name="targets"></a>
<h3>Including targets for tractography - rationale</h3>
<b>probtrackx</b> allows you to include target masks for any tractography experiment. <br><br>
<b>Very Important:</b> Every target mask <b>must be in the same space as the seed masks</b> (or the <b>reference image</b> in the case of a single voxel mode). <br><br>
Targets can be waypoint masks (a.k.a. inclusion masks), for selecting only tracts passing through particular points in the brain; exclusion masks, for excluding tracts passing through parts of the brain; termination masks, for forcing tracts to stop whenever they reach a certain area; or classification target masks for connectivity-based seed classification. All these targets are optional.
<p><h4>Waypoint masks</h4>
<IMG ALIGN=RIGHT height=200 SRC="fdt_images/fdt_twomasks_tracts.gif" ALT="constraining tracts">
Use inclusion masks in the tractography. Tracts that do not pass through ALL these masks will be discarded from the calculation of the connectivity distribution.<br>
The example on the right shows the outputs from two different analyses which use the same seed mask (orange) but different waypoint masks (red).
<br><br>
Use the add and remove buttons to make a list of waypoint masks.
<br>
Note that the criterion to keep a streamline here is to pass through ALL the waypoint masks in the list. If you need an OR condition, i.e. you want to keep streamlines that pass through at least one of the waypoint masks, then first add all these masks with fslmaths and include the result as a single waypoint mask:<br><br>
<code>
fslmaths mywaypoint1 -add mywaypoint2 -add ... myORwaypoint
</code>
<p><h4>Exclusion mask</h4>
If an <b>exclusion mask</b> is to be used then check the box and use the browse button to locate the mask file. Pathways will be discarded if they enter the exclusion mask. For example, an exclusion mask of the midline will remove pathways that cross into the other hemisphere.
<p><h4>Termination mask</h4>
If a <b>termination mask</b> is to be used then check the box and use the browse button to locate the mask file. Pathways will be terminated as soon as they enter the termination mask. The difference between an exclusion and a termination mask is that in the latter case, the tract is stopped at the target mask, but included in the calculation of the connectivity distribution, while in the former case, the tract is completely discarded. (Note that paths are always terminated when they reach the brain surface as defined by nodif_brain_mask)
<p><h4>Classification targets</h4>
<IMG ALIGN=RIGHT height=150 SRC="fdt_images/fdt_seeds2targets_quant_eg.gif" ALT="connectivity-based classification of thalamus">
When using classification targets, <b>probtrackx</b> will quantify connectivity values between a seed mask and any number of user-specified target masks. This option is only active when the seed mask is a single mask. In the example on the right, seed voxels in the thalamus are classified according to the probability of connection to different cortical target masks.
<br><br>
Use the add button to locate each target mask. Targets must be in the same space as the seed mask. When all targets are loaded you can press the save list button to save the list of targets as a text file. If you already have a text file list of required targets (including their path) then you can load it with the load list button.<br>
The output directory will contain a single volume for each target mask, named <b>seeds_to_{target}</b> where {target} is replaced by the file name of the relevant target mask. In these output images, the value of each voxel within the seed mask is the number of samples seeded from that voxel reaching the relevant target mask. The value of all voxels outside the seed mask will be zero.
<p>
<IMG ALIGN=RIGHT height=150 SRC="fdt_images/fdt_seeds2targets_thal.gif" ALT="connectivity-based classification of thalamus">
There are command line utilities that can be run on these outputs:
<ul><li><a href="fdt_thresh.html">proj_thresh</a> - for thresholding some outputs of <b>probtrackx</b></li>
<li><A href="fdt_biggest.html">find_the_biggest</a> - for performing hard segmentation on the outputs of <b>probtrackx</b>; see example on the right</li></ul>
<hr>
<a name="options"></a>
<h3>Options Tab </h3>
Before running <b>probtrackx</b>, the user is able to change the setting of certain parameters by clicking the <b>options</b> tab.
<p><b>Number of samples</b> (default 5000): This determines the number of individual pathways (or samples) that are drawn through the probability distributions on principal fibre direction (see <a href="http://www.fmrib.ox.ac.uk/analysis/techrep/tr03tb1/tr03tb1/"> appendix </a>for more details on the modelling and tractography methods). By default this is set to 5000 as we are confident that convergence is reached with this number of samples. However, reducing this number will speed up processing and can be useful for preliminary or exploratory analyses.
<p><b>Curvature Threshold</b> (default 0.2): We limit how sharply pathways can turn in order to exclude implausible pathways. This number is the cosine of the maximum allowable angle between two successive steps. By default this is set to 0.2 (corresponding to a maximum turning angle of approximately &#177;78 degrees). Lowering this number enables pathways with sharper angles to be detected.
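The threshold and the angle are related by a cosine, so the angle implied by the default threshold can be checked in the shell (awk has no <code>acos</code>, so this sketch uses the identity acos(x) = atan2(sqrt(1 - x*x), x)):

```shell
# Angle (in degrees) whose cosine is the default curvature threshold 0.2:
angle=$(awk 'BEGIN { c = 0.2; printf "%.1f", atan2(sqrt(1 - c * c), c) * 180 / 3.14159265358979 }')
echo "$angle"   # prints 78.5
```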
<p><b>Verbose:</b> If this option is selected then FDT prints additional logging information to screen while it is running.
<p><b>Loopcheck:</b> By default, we terminate pathways that loop back on themselves - i.e., paths that travel to a point where they have already been.
<p><h4>Advanced options:</h4>
<p><b>Use modified Euler streamlining</b>: Use modified Euler integration as opposed to simple Euler for computing probabilistic
streamlines. More accurate but slower.
<p><b>Maximum number of steps</b> (default 2000): By default, samples are terminated when they have travelled 2000 steps. Using a step length of 0.5mm this corresponds to a distance of 1m. These values can be adjusted if required.
<p><b>Step length</b> (default 0.5mm): This determines the length of each step. This setting may be adjusted from the default, e.g., depending on the voxel size being used, or if tracking is being performed on different-sized brains
(e.g., infants or animals).
<p><b>Use anisotropy to constrain tracking</b>:
Use this option if you want the fractional anisotropy volumes (stored in merged_f&#60;i&#62;samples) to influence the tractography. At each step, tracking stops if the local anisotropy is lower than a random number drawn uniformly between 0 and 1, so streamlines are more likely to stop in regions of low anisotropy.
<p><b>Use distance correction</b>: This option corrects for the fact that connectivity distribution drops with distance from the seed mask. If this option is checked, the connectivity distribution is the expected length of the pathways that cross each voxel times the number of samples that cross it.
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><meta http-equiv="Content-Type"
content="text/html;charset=utf-8">
<link REL="stylesheet" TYPE="text/css"
href="../fsl.css"><TITLE>FSL</TITLE></HEAD>
<BODY><IFRAME width="100%" scrolling="no" frameborder="0" src="fdt_top.html">Broken</IFRAME>
<h3>Registration within FDT</h3>
<p>If tractography results are to be stored in any space other than diffusion
space then registration must be run.
<p>Registration within FDT uses <a href="http://www.fmrib.ox.ac.uk/fsl/flirt/index.html" target="_top">FLIRT</a>. When using the GUI, registration can only be applied after <b>bedpostx</b> has been run. Typically, registration will be run between three spaces:
<ul><li>Diffusion space (using the <code>nodif_brain</code> image in the bedpostX directory.)</li>
<li>Structural space (using the struct image in the bedpostX directory, e.g., the space of a high resolution T1-weighted image of the same subject)</li>
<li>Standard space (by default, the MNI152 brain stored within $FSLDIR/data/standard)</li></ul>
<p>Note that the structural (T1-weighted) image must have had <a href="http://www.fmrib.ox.ac.uk/fsl/bet/index.html" target="_top">BET</a> applied. The <code>nodif_brain</code> image should be the brain-extracted version of the nodif image that is stored in the bedpostX directory. Create this image using <code>fslroi</code> then <code>bet</code> on the data if it does not already exist. (It is important that the user checks the quality of the bet results on these images and adjusts the settings in bet where appropriate.)
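A minimal sketch of that fslroi/bet step; it assumes the first volume of data.nii.gz is a b=0 image, and the <code>-f 0.3</code> fractional threshold is an example value you should check and tune:

```shell
# Extract the first (b=0) volume, then brain-extract it:
fslroi data nodif 0 1
bet nodif nodif_brain -m -f 0.3   # -m also writes nodif_brain_mask
```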
<p>Transformation matrices, and their inverses, will be derived from diffusion to structural space and
from structural to standard space. Relevant matrices will be concatenated to
produce transformation matrices between diffusion and standard space. The resulting matrices are stored within
the <code>xfms</code> subdirectory of the bedpostX directory and named as follows:
<ul><li>diff2str.mat - from diffusion to structural space</li>
<li>str2diff.mat - from structural to diffusion space</li>
<li>diff2standard.mat - from diffusion to standard space</li>
<li>standard2diff.mat - from standard to diffusion space</li>
<li>str2standard.mat - from structural to standard space</li>
<li>standard2str.mat - from standard to structural space</li></ul>
<p>By default, transformation matrices between diffusion and structural space are
derived using 6 degrees of freedom, the correlation ratio cost
function and normal search; transformation matrices between structural and standard space
are derived using 12 degrees of freedom, the correlation ratio cost
function and normal search. These parameters may be adjusted if required
using the drop down menus in the registration panel.
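The equivalent command-line registration, using the default settings described above (6 DOF within-subject, 12 DOF to standard space, correlation ratio cost). This is a sketch rather than the exact GUI invocation; the struct file name and the MNI152 reference are assumptions:

```shell
# Diffusion -> structural (6 degrees of freedom, correlation ratio cost):
flirt -in nodif_brain -ref struct -omat xfms/diff2str.mat -dof 6 -cost corratio
convert_xfm -omat xfms/str2diff.mat -inverse xfms/diff2str.mat
# Structural -> standard (12 degrees of freedom):
flirt -in struct -ref $FSLDIR/data/standard/MNI152_T1_2mm_brain \
      -omat xfms/str2standard.mat -dof 12 -cost corratio
convert_xfm -omat xfms/standard2str.mat -inverse xfms/str2standard.mat
# Concatenate to obtain diffusion <-> standard:
convert_xfm -omat xfms/diff2standard.mat -concat xfms/str2standard.mat xfms/diff2str.mat
convert_xfm -omat xfms/standard2diff.mat -inverse xfms/diff2standard.mat
```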
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><meta http-equiv="Content-Type"
content="text/html;charset=utf-8">
<link REL="stylesheet" TYPE="text/css"
href="../fsl.css"><TITLE>FSL</TITLE></HEAD>
<BODY><IFRAME width="100%" scrolling="no" frameborder="0" src="fdt_top.html">Broken</IFRAME>
<h3>Running probtrackx from FreeSurfer cortical surfaces</h3>
<p>
In probtrackx, it is possible to use a seed that is described as a surface patch from a FreeSurfer cortical reconstruction. The advantage of seeding from a cortical surface is that probtrackx then "knows" where the brain is, and tracking is performed towards the brain. In practice, this often leads to fewer false-positive connections, such as those crossing a gyrus near the seed location.<br><br>
In order to seed from a FreeSurfer cortical surface, you simply need to (i) define your seed as a FreeSurfer <i>label file</i>, (ii) provide a transformation from FreeSurfer <i>conformed space</i> to diffusion space, and (iii) provide a description of the overall surface that the seed patch lives in. Below we describe each of these steps in more detail.
<p>
Before you proceed, make sure that FreeSurfer is installed correctly on your computer.<br>
We will assume that you have already run <code>recon_all</code> on a subject (that we will call john), and already have a surface model for that subject's cortex. We will also assume that you have provided <code>recon_all</code> with a structural image called struct.nii.gz. <br>
Finally, you will need to create a directory in $SUBJECTS_DIR/john/mri called nifti, and convert the conformed FreeSurfer-type structural image into a nifti file:<br><br>
<code>
mkdir $SUBJECTS_DIR/john/mri/nifti <br>
mri_convert $SUBJECTS_DIR/john/mri/brain.mgz $SUBJECTS_DIR/john/mri/nifti/brain.nii.gz
</code>
<br><br>
You also need to convert the grey/white boundary surface file into ASCII format (assuming you want to use the recommended option of tracking from this surface). For example, if you are tracking from the left hemisphere:<br><br>
<code>
mris_convert $SUBJECTS_DIR/john/surf/lh.white $SUBJECTS_DIR/john/surf/lh.white.asc
</code>
<h4>1. Registering FreeSurfer conformed structural space to diffusion space</h4>
Here we assume that you have run dtifit on your diffusion data with an FA map called dti_FA.nii.gz (we recommend using an FA map to register to T1 structural images), and also that you have a file called struct.nii.gz that you have used as an input to the FreeSurfer <code>recon_all</code> program.<br><br>
We will carry out a few steps that calculate the following transformations: fa<->struct<->freesurfer. Then we will concatenate these transformations to get fa<->freesurfer. <br><br>
Let us start with struct<->freesurfer:<br><br>
<code>
tkregister2 --mov $SUBJECTS_DIR/john/mri/orig.mgz --targ $SUBJECTS_DIR/john/mri/rawavg.mgz --regheader --reg junk --fslregout freesurfer2struct.mat --noedit <br>
convert_xfm -omat struct2freesurfer.mat -inverse freesurfer2struct.mat
</code>
<br><br>
Now for transforming FA to struct, we can either calculate a linear transformation (with FLIRT), or a nonlinear warpfield (with FNIRT). This second option is only recommended when the FA data is of good quality (e.g. at least 2mm isotropic resolution). The structural image needs to be brain extracted (e.g. with BET), and we assume that the brain-extracted structural is called <code>struct_brain.nii.gz</code><br><br>
Alignment using FLIRT:<br><br>
<code>
flirt -in dti_FA -ref struct_brain -omat fa2struct.mat <br>
convert_xfm -omat struct2fa.mat -inverse fa2struct.mat
</code>
<br><br>
And using FNIRT (still need to run FLIRT first!):<br><br>
<code>
flirt -in dti_FA -ref struct_brain -omat fa2struct.mat <br>
fnirt --in=dti_FA --ref=struct_brain --aff=fa2struct.mat --cout=fa2struct_warp <br>
invwarp -w fa2struct_warp -o struct2fa_warp -r dti_FA
</code>
<br><br>
The final stage is to concatenate these transformations:<br>
If you have used FLIRT for fa<-->struct:<br><br>
<code>
convert_xfm -omat fa2freesurfer.mat -concat struct2freesurfer.mat fa2struct.mat <br>
convert_xfm -omat freesurfer2fa.mat -inverse fa2freesurfer.mat
</code>
<br><br>
And if you have used FNIRT for fa<-->struct:<br><br>
<code>
convertwarp -o fa2freesurfer_warp -r $SUBJECTS_DIR/john/mri/nifti/brain -w fa2struct_warp --postmat=struct2freesurfer.mat <br>
convertwarp -o freesurfer2fa_warp -r dti_FA -m freesurfer2struct.mat -w struct2fa_warp
</code>
<h4>2. Creating a label file from FreeSurfer</h4>
<p>
You can create a label file (text file that contains labels of vertices on a surface) using FreeSurfer's <code>tksurfer</code>. <br>
Alternatively, you might have defined a cortical ROI on a T1 structural image, and want to project that onto a FreeSurfer cortical surface, and turn that into a label file. <br>
The first thing to do is to transform this ROI from T1 space to the conformed space using the transformation that you calculated in the previous step. For example: <br><br>
<code>
flirt -in myroi -ref $SUBJECTS_DIR/john/mri/nifti/brain -out myconformedroi -applyxfm -init struct2freesurfer.mat -interp nearestneighbour
</code>
<br>
<p>
The next thing to do is to project this ROI onto a FreeSurfer surface. We recommend using the grey/white interface to seed tractography from the cortex (assuming the ROI is in the left hemisphere):<br><br>
<code>
printf "john\n1\n1\n1\n1 0 0 0\n0 1 0 0\n0 0 1 0\n0 0 0 1\n" > reg.dat <br>
mri_vol2surf --src myconformedroi.nii.gz --srcreg reg.dat --projfrac 0.5 --hemi lh --out myroi2surf.mgh <br>
mri_binarize --i myroi2surf.mgh --min 0.5 --o myroi2surf.mgh <br>
mri_cor2label --i myroi2surf.mgh --surf john lh white --id 1 --l ./myroilabel <br>
</code>
<br>
This will create a file called <code>myroilabel.label</code> that you can use directly in probtrackx (see the following section). We recommend checking the label file by loading it onto a FreeSurfer surface using <code>tksurfer</code>. E.g.:
<br><br>
<code>
tksurfer john lh white
</code>
<br><br>Then File->Label->Load labels
<br><br>
<h4>3. Running probtrackx using surfaces</h4>
All you need to do now is to run probtrackx specifying at least four things: (1) the label file as a seed, (2) a description of the whole cortical surface for the corresponding hemisphere [e.g. lh.white.asc], (3) a transformation from conformed FreeSurfer space to diffusion space, and (4) a conformed FreeSurfer volume as a reference space:
<br><br>
<code>
probtrackx -x myroilabel.label --mesh=$SUBJECTS_DIR/john/surf/lh.white.asc --xfm=freesurfer2fa.mat --seedref=$SUBJECTS_DIR/john/mri/nifti/brain [+all the other options]
</code>
<br><br>
You can also run probtrackx using a nonlinear warpfield to get from freesurfer space to diffusion space (if you had used FNIRT in step 1 above):<br><br>
<code>
probtrackx -x myroilabel.label --mesh=$SUBJECTS_DIR/john/surf/lh.white.asc --xfm=freesurfer2fa_warp --invxfm=fa2freesurfer_warp --seedref=$SUBJECTS_DIR/john/mri/nifti/brain [+all the other options]
</code>
<br><br>
Note: in this last case, we need both forward and backward transforms fa<-->freesurfer.
<h4>4. Using some of the outputs</h4>
When using classification targets in probtrackx, together with a surface-based seed, you may use the probtrackx option <code>--s2tastext</code>, in which case an output is created in the form of a matrix called matrix_seeds_to_all_targets. You can use this file to run find_the_biggest and produce label files for each of the hard-classified clusters. You can also use it to produce overlay files containing connectivity scores to each target.<br><br>
Running find_the_biggest using matrix_seeds_to_all_targets:<br><br>
<code>
find_the_biggest matrix_seeds_to_all_targets myroilabel.label myclusters
</code>
<br><br>
The output of this command will be a set of files called myclusters&lt;i&gt;.label, i.e. one label file per cluster. You can combine these to produce a single annotation file:
<br><br>
<code>
mris_label2annot --s john --h lh --ctab $FSLDIR/etc/luts/fsrandlut.txt --a myannot `ls myclusters_*.label | awk '{print "--l " $1}'`
</code>
<br><br>
Once this is done, you can load the result myannot.annot onto <code>tksurfer</code>. Simply type:<br><br>
<code>
tksurfer john lh pial
</code>
<br><br>
And then load the annotation file using File->Load label->Import annotation.<br><br>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><meta http-equiv="Content-Type"
content="text/html;charset=utf-8">
<link REL="stylesheet" TYPE="text/css"
href="../fsl.css"><TITLE>FSL</TITLE></HEAD>
<BODY><IFRAME width="100%" scrolling="no" frameborder="0" src="fdt_top.html">Broken</IFRAME>
<h3>proj_thresh</h3>
<b>proj_thresh</b> is a command line utility that provides an alternative way of
expressing connection probability in connectivity-based segmentation. It is
run on the output of <a href="fdt_probtrackx.html"> <b>probtrackx</b></a> when <b>classification targets</b> are used.
<p>The output of <b>Connectivity-based seed classification</b> is a single volume for
each target mask, named <b>seeds_to_{target}</b> where {target} is replaced
by the file name of the relevant target mask. In these output images, the
value of each voxel within the seed mask is the number of samples seeded
from that voxel reaching the target mask.
<br><b>proj_thresh</b> is run as follows:
<p><b>proj_thresh list_of_volumes threshold</b></p>
Where the list of volumes is the outputs of <b>Connectivity-based seed
classification</b> (i.e., files named seeds_to_target1 etc.) and the threshold is expressed as a number of samples.
<br>For each voxel in the seeds mask that has a value above threshold for at
least one target mask, <b>proj_thresh</b> calculates the number of samples reaching each target mask
as a proportion of the total number of samples reaching any target mask. The
output of <b>proj_thresh</b> is a single volume for each target mask.
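The per-voxel arithmetic that proj_thresh performs can be sketched with plain arrays (an illustrative Python sketch, not the FSL implementation; the function and variable names are made up):

```python
import numpy as np

def proj_thresh(seeds_to_targets, threshold):
    """Sketch of the per-voxel arithmetic proj_thresh performs.

    seeds_to_targets: (n_targets, n_voxels) array of sample counts, one
    row per seeds_to_<target> volume (flattened for simplicity).
    For each voxel whose count exceeds the threshold for at least one
    target, return each target's count as a proportion of the total.
    """
    counts = np.asarray(seeds_to_targets, dtype=float)
    keep = (counts > threshold).any(axis=0)   # suprathreshold voxels
    total = counts.sum(axis=0)
    out = np.zeros_like(counts)
    sel = keep & (total > 0)
    out[:, sel] = counts[:, sel] / total[sel]
    return out

# Two targets, three seed voxels; the last voxel never passes the threshold
print(proj_thresh([[80, 10, 2],
                   [20, 90, 3]], threshold=5))
```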
<link REL="stylesheet" TYPE="text/css" href="../fsl.css">
<TABLE BORDER=0><TR>
<TD ALIGN=CENTER WIDTH="100%">
<TABLE BORDER=0>
<tr><td align=center><font size=+3><b>FMRIB's Diffusion Toolbox - FDT v2.0</b></font></tr>
<tr valign=bottom><td align=center>
<br>
<a href="index.html" target="_top">intro</a> &nbsp;-&nbsp;
<a href="fdt_pipeline.html" target="_top">processing pipeline</a> &nbsp;-&nbsp;
<a href="fdt_eddy.html" target="_top">eddy correction</a> &nbsp;-&nbsp;
<a href="fdt_dtifit.html" target="_top">dtifit</a> &nbsp;-&nbsp;
<a href="fdt_bedpostx.html" target="_top">bedpostx</a> &nbsp;-&nbsp;
<a href="fdt_reg.html" target="_top">registration</a> <br>
<a href="fdt_probtrackx.html" target="_top">probtrackx</a> &nbsp;-&nbsp;
<a href="fdt_surface.html" target="_top">surface tracking</a> &nbsp;-&nbsp;
<a href="fdt_utils.html" target="_top">utilities</a> &nbsp;-&nbsp;
<a href="fdt_display.html" target="_top">using fslview</a> &nbsp;-&nbsp;
<a href="http://www.fmrib.ox.ac.uk/analysis/techrep/tr03tb1/tr03tb1/" target="_top">FDT theory</a>
</tr></table>
<TD ALIGN=RIGHT>
<a href="../index.html" target="_top">
<IMG BORDER=0 SRC="../images/fsl-logo-big.jpg" WIDTH=165></a>
</TD>
</TR></TABLE><hr>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><meta http-equiv="Content-Type"
content="text/html;charset=utf-8">
<link REL="stylesheet" TYPE="text/css"
href="../fsl.css"><TITLE>FSL</TITLE></HEAD>
<BODY><IFRAME width="100%" scrolling="no" frameborder="0" src="fdt_top.html">Broken</IFRAME>
<a name="thresh"></a>
<h3>proj_thresh</h3>
<b>proj_thresh</b> is a command line utility that provides an alternative way of
expressing connection probability in connectivity-based segmentation. It is
run on the output of <a href="fdt_probtrackx.html"> <b>probtrackx</b></a> when <b>classification targets</b> are used.
<p>The output of <b>Connectivity-based seed classification</b> is a single volume for
each target mask, named <b>seeds_to_{target}</b> where {target} is replaced
by the file name of the relevant target mask. In these output images, the
value of each voxel within the seed mask is the number of samples seeded
from that voxel reaching the target mask.
<br><b>proj_thresh</b> is run as follows:
<p><b>proj_thresh list_of_volumes threshold</b></p>
Where the list of volumes is the outputs of <b>Connectivity-based seed
classification</b> (i.e., files named seeds_to_target1 etc.) and the threshold is expressed as a number of samples.
<br>For each voxel in the seeds mask that has a value above threshold for at
least one target mask, <b>proj_thresh</b> calculates the number of samples reaching each target mask
as a proportion of the total number of samples reaching any target mask. The
output of <b>proj_thresh</b> is a single volume for each target mask.
<hr>
<a name="biggest"></a>
<h3>find_the_biggest</h3>
<b>find_the_biggest</b> is a command line utility that performs hard
segmentation of a seed region on the basis of outputs of <a
href="fdt_probtrackx.html"><b>probtrackx</b></a> when <b>classification targets</b> are being used.
<p>The output of <b>Connectivity-based seed classification</b> is a single volume for
each target mask, named <b>seeds_to_{target}</b> where {target} is replaced
by the file name of the relevant target mask. In these output images, the
value of each voxel within the seed mask is the number of samples seeded
from that voxel reaching the target mask. <b>find_the_biggest</b> classifies
seed voxels according to the target mask with which they show the highest
probability of connection. It is run as follows:
<p><b>find_the_biggest list_of_volumes outputname</b>
Where the list of volumes is the outputs of <b>Connectivity-based seed
classification</b> (i.e., files named seeds_to_target1 etc.).
<p>The example below uses <a href="fdt_probtrackx.html"><b>probtrackx</b></a> and find_the_biggest to perform hard segmentation
of the thalamus on the basis of its connections to cortex.
<p><IMG height=300 SRC="fdt_images/fdt_seeds2targets_thal.gif"
ALT="connectivity-based classification of thalamus">
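The hard classification rule amounts to a per-voxel argmax over the seeds_to_&lt;target&gt; counts. A toy Python sketch (not the FSL implementation; names are made up):

```python
import numpy as np

def find_the_biggest(seeds_to_targets):
    """Sketch of the hard classification find_the_biggest performs.

    seeds_to_targets: (n_targets, n_voxels) array of sample counts.
    Each voxel is labelled 1 + index of the target with the highest
    count; voxels with no samples to any target stay 0.
    """
    counts = np.asarray(seeds_to_targets)
    labels = counts.argmax(axis=0) + 1        # targets numbered from 1
    labels[counts.sum(axis=0) == 0] = 0       # unconnected voxels
    return labels

print(find_the_biggest([[80, 10, 0],
                        [20, 90, 0]]))        # -> [1 2 0]
```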
<hr>
<a name="vecreg"></a>
<h3>vecreg - Registration of vector images</h3>
<p>
<IMG ALIGN=RIGHT height=200 SRC="fdt_images/fdt_vecreg.gif"
ALT="vecreg applied to V1">
After running dtifit or bedpostx, it is often useful to register vector data to another space. For example, one might want to represent V1 for different subjects in standard space. <b>vecreg</b> is a command line tool that allows you to perform such registration.<br><br>
Vector images cannot be registered by simply applying a transformation (as calculated by, say, <a href="http://www.fmrib.ox.ac.uk/fsl/flirt/index.html">FLIRT</a>) to every voxel's coordinates. The corresponding vectors have to be reoriented accordingly (see D. Alexander 2001, IEEE-TMI 20:1131-39). <b>vecreg</b> performs this operation for you.
The image on the right shows the effect of applying vecreg (right) to the V1 image on the left, compared to simply applying voxelwise transformation (e.g. using applyxfm4D) to the vectors (centre).
<br><br>
<b> Important: </b> vecreg does not calculate a transformation, but simply applies a given transformation to the input vector field. vecreg can apply a linear transformation calculated with FLIRT, or a non-linear transformation calculated by FNIRT.
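One common way to obtain the rotation used for reorientation, in the spirit of the finite-strain scheme discussed in the Alexander 2001 reference above, is a polar decomposition of the affine. A hedged Python sketch (not vecreg's actual code; the rotation extraction shown is one choice among several):

```python
import numpy as np

def reorient_vectors(vectors, affine3x3):
    """Rotate unit vectors by the rotation part of an affine transform.

    The polar decomposition F = R P gives the rotation as R = U @ Vt,
    where F = U S Vt is the SVD (valid as a pure rotation for
    det(F) > 0). Scale and shear are discarded, only orientation moves.
    """
    F = np.asarray(affine3x3, dtype=float)
    U, _, Vt = np.linalg.svd(F)
    R = U @ Vt
    v = np.asarray(vectors, dtype=float) @ R.T
    # Renormalise so outputs remain unit vectors
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# A 90-degree rotation about z combined with a uniform scaling of 2:
# the scaling is discarded and the x-axis maps onto the y-axis.
F = 2.0 * np.array([[0., -1., 0.],
                    [1., 0., 0.],
                    [0., 0., 1.]])
print(reorient_vectors([[1.0, 0.0, 0.0]], F))
```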
<p>
Types of input that may be used with vecreg:<br>
from <a href="fdt_dtifit.html"><b>dtifit</b></a>: V1,V2,V3,tensor<br>
from <a href="fdt_bedpostx.html"><b>bedpostx</b></a>: dyads1, dyads2, etc. <br>
<p>
<h3>Command-line utility</h3>
<b>using a flirt matrix</b><br>
vecreg -i input_vector -o output_vector -r reference_image -t flirt_transform.mat<br>
<b>using a warpfield</b><br>
vecreg -i input_vector -o output_vector -r reference_image -w warp_field<br>
<br><b>more options</b>
<pre>
vecreg -i &lt;input4D&gt; -o &lt;output4D&gt; -r &lt;refvol&gt; [-t &lt;transform&gt;]
Compulsory arguments (You MUST set one or more of):
-i,--input filename for input vector or tensor field
-o,--output filename for output registered vector or tensor field
-r,--ref filename for reference (target) volume
Optional arguments (You may optionally specify one or more of):
-v,--verbose switch on diagnostic messages
-h,--help display this message
-t,--affine filename for affine transformation matrix
-w,--warpfield filename for 4D warp field for nonlinear registration
--rotmat filename for secondary affine matrix
if set, this will be used for the rotation of the vector/tensor field
--rotwarp filename for secondary warp field
if set, this will be used for the rotation of the vector/tensor field
--interp interpolation method : nearestneighbour, trilinear (default), sinc or spline
-m,--mask brain mask in input space
--refmask brain mask in output space (useful for speed up of nonlinear reg)
</pre>
<hr>
<a name="qboot"></a>
<h3>qboot - Estimation of fibre orientations using q-ball ODFs and residual bootstrap</h3>
<p>
<IMG ALIGN=RIGHT height=300 SRC="fdt_images/qboot.gif"
ALT="Different types of ODFs that can be inferred">
<b>qboot</b> is a command line tool that allows estimation of diffusion ODFs and of fibre orientations from them. Its output can be used as an input for probtrackx in order to perform probabilistic tractography.<br><br>
ODF estimation is performed using a real spherical harmonics basis. Fibre orientations are estimated as the local maxima of the ODFs. Both deterministic and probabilistic estimation can be performed. For the latter, residual bootstrap is performed to infer on the ODF shape and obtain a distribution of fibre orientations. For more details on the implementation see (S.N. Sotiropoulos, I. Aganj, S. Jbabdi, G. Sapiro, C. Lenglet and T.E. Behrens, "Inference on Constant Solid Angle Orientation Distribution Functions from Diffusion-Weighted MRI", OHBM, 2011).<br><br>
<b>qboot</b> allows reconstruction of q-ball ODFs (Tuch, MRM, 2004), CSA ODFs (Aganj et al, MRM, 2010) and variants of them, obtained via Laplacian sharpening and Laplace-Beltrami regularization (Descoteaux et al, MRM, 2007). Both the spherical harmonic coefficients of the reconstructed ODFs and fibre orientation estimates may be returned as output.
<br><br>
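The residual-bootstrap idea can be illustrated on a generic least-squares model (a toy Python sketch standing in for qboot's spherical-harmonic fit; all names and numbers here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model y = X c + noise, standing in for the spherical-harmonic
# fit of the diffusion-weighted signal.
n, p = 60, 4
X = rng.normal(size=(n, p))
c_true = np.array([1.0, -0.5, 0.25, 0.0])
y = X @ c_true + 0.05 * rng.normal(size=n)

# 1) Fit once and keep the residuals.
c_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ c_hat

# 2) Resample residuals with replacement, add them back onto the fitted
#    signal and refit; the spread of the refitted coefficients is the
#    bootstrap distribution (qboot does the analogous thing per voxel).
samples = []
for _ in range(200):
    y_star = X @ c_hat + rng.choice(resid, size=n, replace=True)
    c_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
    samples.append(c_star)
samples = np.array(samples)
```

`samples.std(axis=0)` then gives an uncertainty estimate for each coefficient; in qboot the refitted ODFs yield a distribution of peak orientations instead.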
<p>
<b>Input files for qboot </b>: Similar to dtifit and bedpostx, qboot needs a 4D data file, a binary mask file, a bvecs file and a bvals file.
<p>
<b>Command-line utility</b>
<pre>
qboot -k data_file -m nodif_brain_mask -r bvecs -b bvals
</pre>
<br><b>more options</b>
<pre>
Compulsory arguments (You MUST set one or more of):
-k,--data Data file
-m,--mask Mask file
-r,--bvecs b vectors file
-b,--bvals b values file
Optional arguments (You may optionally specify one or more of):
--ld,--logdir Output directory (default is logdir)
--forcedir Use the actual directory name given - i.e. don't add + to make a new directory
--q File provided with multi-shell data. Indicates the number of directions for each shell
--model Which model to use. 1=Tuch's ODFs, 2=CSA ODFs (default), 3=multi-shell CSA ODFs
--lmax Maximum spherical harmonic order employed (must be even, default=4)
--npeaks Maximum number of ODF peaks to be detected (default 2)
--thr	Minimum threshold for a local maximum to be considered an ODF peak.
Expressed as a fraction of the maximum ODF value (default 0.4)
--pf Which peak finder to use. 1=Discrete, 2=Semi-continuous (can be only used with lmax=4) (default=1)
--ns,--nsamples Number of bootstrap samples (default is 50)
--lambda Laplace-Beltrami regularization parameter (default is 0)
--delta Signal attenuation regularization parameter for model=2 (default is 0.01)
--alpha Laplacian sharpening parameter for model=1 (default is 0, should be smaller than 1)
--seed Seed for pseudo-random number generator
--savecoeff Save the ODF coefficients instead of the peaks.
--savemeancoeff Save the mean ODF coefficients across all samples
-V,--verbose Switch on diagnostic messages
</pre>
<b>Possible Outputs of qboot</b>
<p><ul>
<li>qboot will by default return distributions of ODF peaks. Setting <i>--ns=1</i> will perform <b>deterministic</b> estimation of
the ODFs and their peaks.</li>
<li>If <i>--savecoeff </i> is set, then only samples of the ODF coefficients will be saved, without any peaks.</li>
<li>If <i>--savecoeff --savemeancoeff </i> are set, then only the mean ODF coefficients (across the samples) will be saved for each voxel and
again no peaks.</li>
<li>If <i> --savemeancoeff </i> is set, then the mean ODF coefficients (across the samples) will be saved for each voxel, along with the
samples of the ODF peaks.</li>
<li>If none of the above is set, samples of the ODF peaks are saved.</li>
</ul>
<p> <p>
<b>Multi-shell data assumptions</b>
<p>
The current implementation of qboot can estimate multi-shell ODFs, assuming the following for the data:
<ul><li>Three-shell data are assumed. The b-values should form an arithmetic progression, e.g. 1000, 2000, 3000 or 2000, 4000, 6000.</li>
<li>It is assumed that all data from each shell are grouped together and shells are one after the other in data, bvecs and bvals. So, if
for example 3 directions are available for 3 shells these should appear as: dir1_shell1, dir2_shell1, dir3_shell1, dir1_shell2,dir2_shell2, dir3_shell2, dir1_shell3,dir2_shell3, dir3_shell3.</li>
<li>There is no assumption on the number of directions in each shell; each shell can have its own set and number of directions. The minimum number of directions in a shell is dictated by the number of coefficients estimated: e.g. lmax=4 implies 15 coefficients, so all shells should then have at least 15 directions. This is because the data are interpolated when: i) different shells have different numbers of directions, or ii) the shells have the same number of directions but corresponding directions between shells (e.g. dir1_shell1 vs dir1_shell2) differ by more than 1 degree. In that case each shell is fitted with spherical harmonics of the same order, by default 10, unless the shell with the fewest directions cannot support that order (in which case the order is decreased).</li>
<li>By default, the same number of datapoints (including b=0's) is assumed for each shell acquisition. If this is not the case, a <i> --q=qshells.txt </i> file should be provided. The information in the qshells file is of the form <i> N1 N2 N3 </i>, each number indicating the number of datapoints (including b=0's) that correspond to each shell acquisition.</li>
</ul>
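The layout assumptions above can be sanity-checked before running qboot (an illustrative Python helper, not part of qboot; the function name is made up):

```python
import numpy as np

def check_multishell(bvals, n_shells=3):
    """Check the nonzero b-values form the expected number of shells
    in an arithmetic progression (e.g. 1000, 2000, 3000)."""
    b = np.asarray(bvals, dtype=float)
    shells = np.sort(np.unique(b[b > 0]))
    if len(shells) != n_shells:
        raise ValueError("expected %d shells, found %d" % (n_shells, len(shells)))
    diffs = np.diff(shells)
    if not np.allclose(diffs, diffs[0]):
        raise ValueError("shell b-values do not form an arithmetic progression")
    return [float(s) for s in shells]

# b=0s interleaved with three shells at 1000/2000/3000: accepted
print(check_multishell([0, 1000, 1000, 0, 2000, 2000, 0, 3000, 3000]))
# -> [1000.0, 2000.0, 3000.0]
```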
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><meta http-equiv="Content-Type"
content="text/html;charset=utf-8">
<link REL="stylesheet" TYPE="text/css"
href="../fsl.css"><TITLE>FSL</TITLE></HEAD>
<BODY><IFRAME width="100%" scrolling="no" frameborder="0" src="fdt_top.html">Broken</IFRAME>
<h3>vecreg - Registration of vector images</h3>
<p>
<IMG ALIGN=RIGHT height=200 SRC="fdt_images/fdt_vecreg.gif"
ALT="vecreg applied to V1">
After running dtifit or bedpostx, it is often useful to register vector data to another space. For example, one might want to represent V1 for different subjects in standard space. <b>vecreg</b> is a command line tool that allows you to perform such registration.<br><br>
Vector images cannot be registered by simply applying a transformation (as calculated by, say, <a href="http://www.fmrib.ox.ac.uk/fsl/flirt/index.html">FLIRT</a>) to every voxel's coordinates. The corresponding vectors have to be reoriented accordingly (see D. Alexander 2001, IEEE-TMI 20:1131-39). <b>vecreg</b> performs this operation for you.
The image on the right shows the effect of applying vecreg (right) to the V1 image on the left, compared to simply applying voxelwise transformation (e.g. using applyxfm4D) to the vectors (centre).
<br><br>
<b> Important: </b> vecreg does not calculate a transformation, but simply applies a given transformation to the input vector field. vecreg can apply a linear transformation calculated with FLIRT, or a non-linear transformation calculated by IRTK, using a warpfield calculated using dof2warp.
<p>
Types of vectors that may be used with vecreg:<br>
from <a href="fdt_dtifit.html"><b>dtifit</b></a>: V1,V2,V3<br>
from <a href="fdt_bedpostx.html"><b>bedpostx</b></a>: dyads1, dyads2, etc. <br>
<p>
<h3>Command-line utility</h3>
<b>using a flirt matrix</b><br>
vecreg -i input_vector -o output_vector -r reference_image -t flirt_transform.mat<br>
<b>using a warpfield</b><br>
vecreg -i input_vector -o output_vector -r reference_image -w warp_field<br>
<br><b>more options</b>
<pre>
vecreg -i &lt;input4Dvector&gt; -o &lt;output4D&gt; -t &lt;transformation&gt;
Compulsory arguments (You MUST set one or more of):
-i,--input filename of input vector
-o,--output filename of output registered vector
-r,--ref filename of reference (target) volume
Optional arguments (You may optionally specify one or more of):
-v,--verbose switch on diagnostic messages
-h,--help display this message
-t,--affine filename of affine transformation matrix
-w,--warpfield filename of 4D warp field for nonlinear registration
--interp interpolation method : nearestneighbour (default) or trilinear or sinc
-m,--mask brain mask in input space
</pre>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><meta http-equiv="Content-Type" content="text/html;charset=utf-8">
<link REL="stylesheet" TYPE="text/css" href="../fsl.css"><TITLE>FSL</TITLE></HEAD>
<BODY><IFRAME width="100%" scrolling="no" frameborder="0" src="fdt_top.html">Broken</IFRAME>
<IMG ALIGN=RIGHT hspace=20 vspace=20 SRC="fdt_images/fdt_gui.gif" ALT="Example GUI view">
<h3>Introduction</h3>
For other information on FDT and updated journal references, see the <a
href="http://www.fmrib.ox.ac.uk/analysis/research/fdt/" target="_top">FDT web
page</a>. If you use FDT in your research, please quote the journal
references listed there.
<p>FDT (FMRIB's Diffusion Toolbox) is a software tool for analysis of diffusion weighted images. FDT is part
of <a href="http://www.fmrib.ox.ac.uk/fsl" target="_top">FSL</a> (FMRIB's Software Library).
FDT has an easy-to-use graphical user interface (GUI) and its component
programmes can also be run from the command line. FDT includes tools for
data preprocessing, local diffusion modelling and tractography. Each stage in FDT is run
separately.
The main FDT programmes, which are accessible from the GUI, are:
<ul><li><a href="fdt_eddy.html">eddy_correct</a> - for correction of eddy current distortion</li>
<li><a href="fdt_bedpostx.html">bedpostx</a> - for local modelling of diffusion parameters.</li>
<li><a href="fdt_probtrackx.html">probtrackx</a> - for tractography and connectivity-based segmentation</li>
<li><a href="fdt_dtifit.html">dtifit</a> - for local fitting of diffusion tensors</li></ul>
<br>The FDT GUI also includes a <a
href="fdt_reg.html">registration</a> option that registers images
using <a href="http://www.fmrib.ox.ac.uk/fsl/flirt/index.html" target="_top">FLIRT</a>.
<p>Additional FDT <a href="fdt_utils.html">utilities</a>, that can be run only from the command line, are:
<ul><li><a href="fdt_utils.html#thresh">proj_thresh</a> - for thresholding some outputs
of probtrackx</li>
<li><A href="fdt_utils.html#biggest">find_the_biggest</a> - for performing hard
segmentation on the outputs of connectivity-based classification in probtrackx</li>
<li><A href="fdt_utils.html#vecreg">vecreg</a> - for registering vector data</li>
<li><A href="fdt_utils.html#qboot">qboot</a> - for estimating fibre
orientations and their uncertainty using ODFs and bootstrapping </li></ul>
<p>The probabilistic tractography tools within FDT are very flexible and allow
the user to generate connectivity distributions from single or multiple
voxels, or from a <a href="http://surfer.nmr.mgh.harvard.edu">FreeSurfer</a> cortical surface;
to limit these distributions based on anatomical criteria and to
perform segmentation based on the probability of connection to user-defined
target regions.
<p>To call the FDT GUI, either run <b>Fdt</b> (<b>Fdt_gui</b> on Mac
or Windows), or run <b>fsl</b> and press the <b>FDT</b> button.
<p>For an overview of the local diffusion modelling and tractography used
within FDT see the <a href="http://www.fmrib.ox.ac.uk/analysis/techrep/tr03tb1/tr03tb1/">appendix</a>.
...@@ -45,7 +45,7 @@ class dtifitOptions {
   Option<int> y_max;
   Option<int> x_min;
   Option<int> x_max;
-  Option<string> grad_file;
+  FmribOption<string> grad_file;
   FmribOption<bool> save_bvals;
   bool parse_command_line(int argc, char** argv);
...
...@@ -6,13 +6,19 @@
 Usage() {
 echo ""
-echo "Usage: eddy_correct <4dinput> <4doutput> <reference_no>"
+echo "Usage: eddy_correct <4dinput> <4doutput> <reference_no> [<interp>]"
+echo " Choose interp from {trilinear,spline} def - trilinear "
 echo ""
 exit
 }
 [ "$3" = "" ] && Usage
+interpm="trilinear"
+if [ $# -eq 4 ]; then
+interpm=${4}
+fi
 input=`${FSLDIR}/bin/remove_ext ${1}`
 output=`${FSLDIR}/bin/remove_ext ${2}`
 ref=${3}
...@@ -30,7 +36,7 @@ full_list=`${FSLDIR}/bin/imglob ${output}_tmp????.*`
 for i in $full_list ; do
 echo processing $i
 echo processing $i >> ${output}.ecclog
-${FSLDIR}/bin/flirt -in $i -ref ${output}_ref -nosearch -o $i -paddingsize 1 >> ${output}.ecclog
+${FSLDIR}/bin/flirt -in $i -ref ${output}_ref -nosearch -interp ${interpm} -o $i -paddingsize 1 >> ${output}.ecclog
 done
 fslmerge -t $output $full_list
...
// Nonlinear estimation of diffusion tensor
// using a Gaussian or Rician noise model
//
// nldtifit
//
// Jesper Andersson & Stam Rastapopoulos, FMRIB Image Analysis Group
//
// Copyright (C) 2011 University of Oxford
//
#include <cstdlib>
#include <iostream>
#include <string>
#include <ctime>
#include <cmath>
#include "newmat.h"
#include "newmatio.h"
#include "newimage/newimageall.h"
#include "RLR_MAP_routines.h"
#include "nldtifit.h"
using namespace std;
using namespace NEWMAT;
using namespace NEWIMAGE;
int main(int argc,
char *argv[])
{
// To be written by Stam
}
namespace NLDTIFIT {
NEWIMAGE::volume4D<float> nldtifit(// Data, b=0 and diffusion weighted
const NEWIMAGE::volume4D<float>& scans,
// Initial guesses of the parameters in the order s2, s0, R1, R2, R3, L1, L2, L3.
// N.B. that s2, s0 and L* must all be positive
const NEWIMAGE::volume4D<float>& inpar,
// Mask indicating where we want calculations to be performed
const NEWIMAGE::volume<char>& mask,
// nx3 matrix with gradient directions in same order as in scans
const NEWMAT::Matrix& bvec,
// nx1 vector with b-values in same order as in scans. N.B. that we can have many different b-values
const NEWMAT::ColumnVector& bval,
// Indicates what neighbourhood of voxels we want to base s2 estimate on.
NbHood nbhood,
// Gaussian or Rician
NoiseModel nm)
{
verify_input(scans,inpar,mask,bvec,bval,nbhood,nm);
NEWIMAGE::volume4D<float> ovol = inpar;
ovol = 0.0;
int n = scans.tsize();
double *y = new double[nbhood*n];
double *theta = new double[nbhood*7+1];
double *g = new double[3*n];
double *b = new double[n];
repack_g_and_b(bvec,bval,g,b);
int np = 3;
double pmu[3] = {-7.29, -7.30, -7.31};
double pvar[3] = {4.0, 4.0, 4.0};
int pi[3] = {5, 6, 7};
double *ei = new double[nbhood*n];
double hist[MAXITER];
for (int k=0; k<scans.zsize(); k++) {
for (int j=0; j<scans.ysize(); j++) {
for (int i=0; i<scans.xsize(); i++) {
if (mask(i,j,k)) {
int nvox = load_data_and_initial_guesses(scans,inpar,mask,i,j,k,nbhood,y,theta);
if (fit_RLR(g,b,y,n,nvox,pmu,pvar,pi,np,nm,theta,ei,hist) > 0) {
deal_results(theta,i,j,k,ovol);
}
}
}
}
}
delete[] y;
delete[] theta;
delete[] g;
delete[] b;
delete[] ei;
return(ovol);
}
void verify_input(const NEWIMAGE::volume4D<float>& scans,
const NEWIMAGE::volume4D<float>& inpar,
const NEWIMAGE::volume<char>& mask,
const NEWMAT::Matrix& bvec,
const NEWMAT::ColumnVector& bval,
NbHood nbhood,
NoiseModel nm)
{
int n = scans.tsize();
if (!samesize(scans[0],inpar[0])) throw nldtifitException("verify_input: Data and initial guesses must have same dimensions");
if (!samesize(scans[0],mask)) throw nldtifitException("verify_input: Data and mask must have same dimensions");
if (inpar.tsize() != 8) throw nldtifitException("verify_input: There must be initial guesses for eight parameters per voxel");
if (bvec.Nrows() != n) throw nldtifitException("verify_input: The number of gradient vectors must match the number of scans");
if (bvec.Ncols() != 3) throw nldtifitException("verify_input: Each gradient vector must have three elements");
if (bval.Nrows() != n) throw nldtifitException("verify_input: The number of b-values must match the number of scans");
if (nbhood > NONE && nm == GAUSSIAN) throw nldtifitException("verify_input: You cannot combine a neighbourhood for s2 with Gaussian noise model");
}
void deal_results(// Input
const double *theta,
int ii,
int jj,
int kk,
// Output
NEWIMAGE::volume4D<float>& ovol)
{
for (int i=0; i<8; i++) ovol(ii,jj,kk,i) = theta[i];
}
void repack_g_and_b(// Input
const NEWMAT::Matrix& bvec,
const NEWMAT::ColumnVector& bval,
// Output
double *g,
double *b)
{
for (int r=1; r<=bvec.Nrows(); r++) {
*b++ = bval(r);
for (int c=1; c<=bvec.Ncols(); c++) {
*g++ = bvec(r,c);
}
}
return;
}
int load_data_and_initial_guesses(// Input
const NEWIMAGE::volume4D<float>& scans,
const NEWIMAGE::volume4D<float>& inpar,
const NEWIMAGE::volume<char>& mask,
int ii,
int jj,
int kk,
NbHood nbhood,
// Output
double *y,
double *ig)
{
int nvox = 0;
if (mask(ii,jj,kk)) { // This SHOULD be true
y = load_data_single_voxel(scans,ii,jj,kk,y);
*ig++ = inpar(ii,jj,kk,0);
ig = load_guess_single_voxel(inpar,ii,jj,kk,ig);
nvox++;
if (nbhood > NONE) { // Add face voxels
if (ii && mask(ii-1,jj,kk)) {y = load_data_single_voxel(scans,ii-1,jj,kk,y); ig = load_guess_single_voxel(inpar,ii-1,jj,kk,ig); nvox++;}
if (jj && mask(ii,jj-1,kk)) {y = load_data_single_voxel(scans,ii,jj-1,kk,y); ig = load_guess_single_voxel(inpar,ii,jj-1,kk,ig); nvox++;}
if (kk && mask(ii,jj,kk-1)) {y = load_data_single_voxel(scans,ii,jj,kk-1,y); ig = load_guess_single_voxel(inpar,ii,jj,kk-1,ig); nvox++;}
if (ii<scans.xsize()-1 && mask(ii+1,jj,kk)) {y = load_data_single_voxel(scans,ii+1,jj,kk,y); ig = load_guess_single_voxel(inpar,ii+1,jj,kk,ig); nvox++;}
if (jj<scans.ysize()-1 && mask(ii,jj+1,kk)) {y = load_data_single_voxel(scans,ii,jj+1,kk,y); ig = load_guess_single_voxel(inpar,ii,jj+1,kk,ig); nvox++;}
if (kk<scans.zsize()-1 && mask(ii,jj,kk+1)) {y = load_data_single_voxel(scans,ii,jj,kk+1,y); ig = load_guess_single_voxel(inpar,ii,jj,kk+1,ig); nvox++;}
}
if (nbhood > FACE) { // Add edge voxels
if (ii && jj && mask(ii-1,jj-1,kk)) {y = load_data_single_voxel(scans,ii-1,jj-1,kk,y); ig = load_guess_single_voxel(inpar,ii-1,jj-1,kk,ig); nvox++;}
if (ii && kk && mask(ii-1,jj,kk-1)) {y = load_data_single_voxel(scans,ii-1,jj,kk-1,y); ig = load_guess_single_voxel(inpar,ii-1,jj,kk-1,ig); nvox++;}
if (ii && jj<scans.ysize()-1 && mask(ii-1,jj+1,kk)) {y = load_data_single_voxel(scans,ii-1,jj+1,kk,y); ig = load_guess_single_voxel(inpar,ii-1,jj+1,kk,ig); nvox++;}
if (ii && kk<scans.zsize()-1 && mask(ii-1,jj,kk+1)) {y = load_data_single_voxel(scans,ii-1,jj,kk+1,y); ig = load_guess_single_voxel(inpar,ii-1,jj,kk+1,ig); nvox++;}
if (ii<scans.xsize()-1 && jj && mask(ii+1,jj-1,kk)) {y = load_data_single_voxel(scans,ii+1,jj-1,kk,y); ig = load_guess_single_voxel(inpar,ii+1,jj-1,kk,ig); nvox++;}
if (ii<scans.xsize()-1 && kk && mask(ii+1,jj,kk-1)) {y = load_data_single_voxel(scans,ii+1,jj,kk-1,y); ig = load_guess_single_voxel(inpar,ii+1,jj,kk-1,ig); nvox++;}
if (ii<scans.xsize()-1 && jj<scans.ysize()-1 && mask(ii+1,jj+1,kk)) {y = load_data_single_voxel(scans,ii+1,jj+1,kk,y); ig = load_guess_single_voxel(inpar,ii+1,jj+1,kk,ig); nvox++;}
if (ii<scans.xsize()-1 && kk<scans.zsize()-1 && mask(ii+1,jj,kk+1)) {y = load_data_single_voxel(scans,ii+1,jj,kk+1,y); ig = load_guess_single_voxel(inpar,ii+1,jj,kk+1,ig); nvox++;}
if (jj && kk && mask(ii,jj-1,kk-1)) {y = load_data_single_voxel(scans,ii,jj-1,kk-1,y); ig = load_guess_single_voxel(inpar,ii,jj-1,kk-1,ig); nvox++;}
if (jj && kk<scans.zsize()-1 && mask(ii,jj-1,kk+1)) {y = load_data_single_voxel(scans,ii,jj-1,kk+1,y); ig = load_guess_single_voxel(inpar,ii,jj-1,kk+1,ig); nvox++;}
if (jj<scans.ysize()-1 && kk && mask(ii,jj+1,kk-1)) {y = load_data_single_voxel(scans,ii,jj+1,kk-1,y); ig = load_guess_single_voxel(inpar,ii,jj+1,kk-1,ig); nvox++;}
if (jj<scans.ysize()-1 && kk<scans.zsize()-1 && mask(ii,jj+1,kk+1)) {y = load_data_single_voxel(scans,ii,jj+1,kk+1,y); ig = load_guess_single_voxel(inpar,ii,jj+1,kk+1,ig); nvox++;}
}
if (nbhood > EDGE) {
if (ii && jj && kk && mask(ii-1,jj-1,kk-1)) {y = load_data_single_voxel(scans,ii-1,jj-1,kk-1,y); ig = load_guess_single_voxel(inpar,ii-1,jj-1,kk-1,ig); nvox++;}
if (ii<scans.xsize()-1 && jj && kk && mask(ii+1,jj-1,kk-1)) {y = load_data_single_voxel(scans,ii+1,jj-1,kk-1,y); ig = load_guess_single_voxel(inpar,ii+1,jj-1,kk-1,ig); nvox++;}
if (ii && jj<scans.ysize()-1 && kk && mask(ii-1,jj+1,kk-1)) {y = load_data_single_voxel(scans,ii-1,jj+1,kk-1,y); ig = load_guess_single_voxel(inpar,ii-1,jj+1,kk-1,ig); nvox++;}
if (ii && jj && kk<scans.zsize()-1 && mask(ii-1,jj-1,kk+1)) {y = load_data_single_voxel(scans,ii-1,jj-1,kk+1,y); ig = load_guess_single_voxel(inpar,ii-1,jj-1,kk+1,ig); nvox++;}
if (ii<scans.xsize()-1 && jj<scans.ysize()-1 && kk && mask(ii+1,jj+1,kk-1)) {y = load_data_single_voxel(scans,ii+1,jj+1,kk-1,y); ig = load_guess_single_voxel(inpar,ii+1,jj+1,kk-1,ig); nvox++;}
if (ii<scans.xsize()-1 && jj && kk<scans.zsize()-1 && mask(ii+1,jj-1,kk+1)) {y = load_data_single_voxel(scans,ii+1,jj-1,kk+1,y); ig = load_guess_single_voxel(inpar,ii+1,jj-1,kk+1,ig); nvox++;}
if (ii && jj<scans.ysize()-1 && kk<scans.zsize()-1 && mask(ii-1,jj+1,kk+1)) {y = load_data_single_voxel(scans,ii-1,jj+1,kk+1,y); ig = load_guess_single_voxel(inpar,ii-1,jj+1,kk+1,ig); nvox++;}
if (ii<scans.xsize()-1 && jj<scans.ysize()-1 && kk<scans.zsize()-1 && mask(ii+1,jj+1,kk+1)) {y = load_data_single_voxel(scans,ii+1,jj+1,kk+1,y); ig = load_guess_single_voxel(inpar,ii+1,jj+1,kk+1,ig); nvox++;}
}
}
return(nvox);
}
double *load_data_single_voxel(const NEWIMAGE::volume4D<float>& scans,
int ii,
int jj,
int kk,
// Output
double *y)
{
for (int i = 0; i<scans.tsize(); i++) *y++ = scans(ii,jj,kk,i);
return(y);
}
double *load_guess_single_voxel(const NEWIMAGE::volume4D<float>& inpar,
int ii,
int jj,
int kk,
// Output
double *ig)
{
for (int i = 1; i<inpar.tsize(); i++) *ig++ = inpar(ii,jj,kk,i); // N.B. starting at 1 (after s2)
return(ig);
}
} // end namespace NLDTIFIT
// Nonlinear estimation of diffusion tensor
// using a Gaussian or Rician noise model
//
// nldtifit
//
// Jesper Andersson, FMRIB Image Analysis Group
//
// Copyright (C) 2011 University of Oxford
//
namespace NLDTIFIT {
class nldtifitException: public std::exception
{
private:
std::string m_msg;
public:
nldtifitException(const std::string& msg) throw(): m_msg("nldtifit: msg=" + msg) {}
virtual const char * what() const throw() {
return m_msg.c_str(); // m_msg outlives the call; calling .c_str() on a temporary string would return a dangling pointer
}
~nldtifitException() throw() {}
};
enum NbHood {NONE = 1, FACE=7, EDGE=19, CORNER=27};
enum NoiseModel {GAUSSIAN=1, RICIAN=2};
NEWIMAGE::volume4D<float> nldtifit(// Data, b=0 and diffusion weighted
const NEWIMAGE::volume4D<float>& scans,
// Initial guesses of the parameters in the order s2, s0, R1, R2, R3, L1, L2, L3.
// N.B. that s2, s0 and L* must all be positive
const NEWIMAGE::volume4D<float>& inpar,
// Mask indicating where we want calculations to be performed
const NEWIMAGE::volume<char>& mask,
// nx3 matrix with gradient directions in same order as in scans
const NEWMAT::Matrix& bvec,
// nx1 vector with b-values in same order as in scans. N.B. that we can have many different b-values
const NEWMAT::ColumnVector& bval,
// Indicates what neighbourhood of voxels we want to base s2 estimate on.
NbHood nbhood,
// Gaussian or Rician
NoiseModel nm);
void verify_input(const NEWIMAGE::volume4D<float>& scans,
const NEWIMAGE::volume4D<float>& inpar,
const NEWIMAGE::volume<char>& mask,
const NEWMAT::Matrix& bvec,
const NEWMAT::ColumnVector& bval,
NbHood nbhood,
NoiseModel nm);
void deal_results(// Input
const double *theta,
int ii,
int jj,
int kk,
// Output
NEWIMAGE::volume4D<float>& ovol);
void repack_g_and_b(// Input
const NEWMAT::Matrix& bvec,
const NEWMAT::ColumnVector& bval,
// Output
double *g,
double *b);
int load_data_and_initial_guesses(// Input
const NEWIMAGE::volume4D<float>& scans,
const NEWIMAGE::volume4D<float>& inpar,
const NEWIMAGE::volume<char>& mask,
int ii,
int jj,
int kk,
NbHood nbhood,
// Output
double *y,
double *ig);
double *load_data_single_voxel(const NEWIMAGE::volume4D<float>& scans,
int ii,
int jj,
int kk,
// Output
double *y);
double *load_guess_single_voxel(const NEWIMAGE::volume4D<float>& inpar,
int ii,
int jj,
int kk,
// Output
double *ig);
int fit_RLR(/* Input */
double *g, /* Gradients, nx3 Matlab matrix. */
double *b, /* b-values. */
double *y, /* Data. */
int n, /* # of data points. */
int nvox, /* # of voxels. */
double *pmu, /* Means of priors. */
double *pvar, /* Variances of priors. */
int *pi, /* Indices (into theta) of priors. */
int np, /* # of parameters with priors. */
int nm, /* Noise model 1->Gauss, 2->Rician. */
/* Input/Output */
double *theta, /* Parameters. */
double *ei, /* Expected intensities. */
double *hist); /* History of log-likelihoods. */
} // end namespace NLDTIFIT
/* rubix.h: Classes utilized in RubiX MCMC storage and handling */
/* Stamatios Sotiropoulos, FMRIB Analysis Group */
/* Copyright (C) 2012 University of Oxford */
/* CCOPYRIGHT */
#if !defined(rubix_h)
#define rubix_h
#include <iostream>
#include <fstream>
#include <iomanip>
#define WANT_STREAM
#define WANT_MATH
#include <string>
#include "miscmaths/miscmaths.h"
#include "miscmaths/miscprob.h"
#include "newimage/newimageall.h"
#include "stdlib.h"
using namespace NEWMAT;
using namespace MISCMATHS;
using namespace NEWIMAGE;
namespace RUBIX{
////////////////////////////////////////////
// MCMC SAMPLE STORAGE
////////////////////////////////////////////
//Storing Samples for parameters of all HR voxels
class HRSamples{
Matrix m_dsamples; //Variables for storing MCMC samples of all voxels in the HRgrid
Matrix m_d_stdsamples;
Matrix m_S0samples;
Matrix m_tausamples;
vector<Matrix> m_thsamples;
vector<Matrix> m_phsamples;
vector<Matrix> m_fsamples;
//for storing means
RowVector m_mean_dsamples; //Storing mean_samples for all voxels in the HRgrid
RowVector m_mean_d_stdsamples;
RowVector m_mean_S0samples;
RowVector m_mean_tausamples;
vector<Matrix> m_dyadic_vectors;
vector<RowVector> m_mean_fsamples;
int m_nsamps;
const int m_njumps;
const int m_sample_every;
const int m_numfibres;
const bool m_rician;
const int m_modelnum;
//const string m_logdir;
public:
HRSamples(int nvoxels, const int njumps, const int sample_every, const int numfibres, const bool rician=false, const int modelnum=1):
m_njumps(njumps),m_sample_every(sample_every), m_numfibres(numfibres), m_rician(rician), m_modelnum(modelnum){
int count=0;
int nsamples=0;
for(int i=0;i<m_njumps; i++){
count++;
if(count==m_sample_every){
count=0;
nsamples++;
}
}
m_nsamps=nsamples;
m_dsamples.ReSize(nsamples,nvoxels); m_dsamples=0;
m_S0samples.ReSize(nsamples,nvoxels); m_S0samples=0;
m_mean_dsamples.ReSize(nvoxels); m_mean_dsamples=0;
m_mean_S0samples.ReSize(nvoxels); m_mean_S0samples=0;
if (m_rician){
m_tausamples.ReSize(nsamples,nvoxels); m_tausamples=0;
m_mean_tausamples.ReSize(nvoxels); m_mean_tausamples=0;
}
if (m_modelnum==2){
m_d_stdsamples.ReSize(nsamples,nvoxels); m_d_stdsamples=0;
m_mean_d_stdsamples.ReSize(nvoxels); m_mean_d_stdsamples=0;
}
Matrix tmpvecs(3,nvoxels); tmpvecs=0;
for(int f=0; f<m_numfibres; f++){
m_thsamples.push_back(m_S0samples);
m_phsamples.push_back(m_S0samples);
m_fsamples.push_back(m_S0samples);
m_dyadic_vectors.push_back(tmpvecs);
m_mean_fsamples.push_back(m_mean_S0samples);
}
}
~HRSamples(){}
void record(const HRvoxel& HRv, int vox, int samp); //Store parameters for a certain sample at a certain HR voxel
void finish_voxel(int vox); //Get the mean samples for a voxel once jumping has finished
void save(const volume<float>& mask); //Save samples for all voxels
};
////////////////////////////////////////////
// MCMC SAMPLE STORAGE
////////////////////////////////////////////
//Storing Samples for parameters at the Low-Res level (e.g. priors, Low-res parameters)
class LRSamples{
vector<Matrix> m_thsamples; //Variables for storing MCMC samples of all voxels in the LRgrid
vector<Matrix> m_phsamples;
vector<Matrix> m_ksamples;
Matrix m_S0samples;
Matrix m_tauLRsamples;
Matrix m_sumfsamples;
Matrix m_meandsamples;
Matrix m_lik_energy;
Matrix m_prior_energy;
//for storing means
vector<Matrix> m_dyadic_vectors;
vector<RowVector> m_mean_ksamples;
RowVector m_mean_S0samples;
RowVector m_mean_tausamples;
RowVector m_mean_sumfsamples;
RowVector m_mean_meandsamples;
int m_nsamps;
const int m_njumps;
const int m_sample_every;
const int m_Nmodes;
const bool m_rician;
const bool m_fsumPrior;
const bool m_dPrior;
//const string m_logdir;
public:
LRSamples(int nvoxels, const int njumps, const int sample_every, const int Nmodes, const bool rician=false, const bool fsumPrior=false, const bool dPrior=false):
m_njumps(njumps),m_sample_every(sample_every), m_Nmodes(Nmodes), m_rician(rician), m_fsumPrior(fsumPrior), m_dPrior(dPrior){
int count=0;
int nsamples=0;
for(int i=0;i<m_njumps; i++){
count++;
if(count==m_sample_every){
count=0;
nsamples++;
}
}
m_nsamps=nsamples;
m_S0samples.ReSize(nsamples,nvoxels); m_S0samples=0;
m_lik_energy.ReSize(nsamples,nvoxels); m_lik_energy=0;
m_prior_energy.ReSize(nsamples,nvoxels); m_prior_energy=0;
m_mean_S0samples.ReSize(nvoxels); m_mean_S0samples=0;
if (m_rician){
m_tauLRsamples.ReSize(nsamples,nvoxels); m_tauLRsamples=0;
m_mean_tausamples.ReSize(nvoxels); m_mean_tausamples=0;
}
if (m_fsumPrior){
m_sumfsamples.ReSize(nsamples,nvoxels); m_sumfsamples=0;
m_mean_sumfsamples.ReSize(nvoxels); m_mean_sumfsamples=0;
}
if (m_dPrior){
m_meandsamples.ReSize(nsamples,nvoxels); m_meandsamples=0;
m_mean_meandsamples.ReSize(nvoxels); m_mean_meandsamples=0;
}
Matrix tmpvecs(3,nvoxels); tmpvecs=0;
for(int f=0; f<m_Nmodes; f++){
m_thsamples.push_back(m_S0samples);
m_phsamples.push_back(m_S0samples);
m_ksamples.push_back(m_S0samples);
m_dyadic_vectors.push_back(tmpvecs);
m_mean_ksamples.push_back(m_mean_S0samples);
}
}
~LRSamples(){}
void record(const LRvoxel& LRv, int vox, int samp); //Store parameters for a certain sample at a certain LR voxel
void finish_voxel(int vox); //Get the mean samples for a voxel once jumping has finished
void save(const volume<float>& mask); //Save samples for all voxels
};
//////////////////////////////////////////////
// MCMC HANDLING for a single LR voxel
//////////////////////////////////////////////
class LRVoxelManager{
rubixOptions& opts;
HRSamples& m_HRsamples; //keep MCMC samples of the parameters of all voxels inferred at High-res grid
LRSamples& m_LRsamples; //keep MCMC samples of the parameters of all voxels inferred at Low-res grid
int m_LRvoxnumber;
ColumnVector m_HRvoxnumber;
LRvoxel m_LRv;
const ColumnVector& m_dataLR; //Low-Res Data for the specific LR voxel
const vector<ColumnVector>& m_dataHR; //High-Res Data for all HRvoxels within a LRvoxel
const Matrix& m_bvecsLR; //bvecs at Low-Res (3 x LR_NumPoints)
const Matrix& m_bvalsLR; //bvalues at Low-Res (1 x LR_NumPoints)
const vector<Matrix>& m_bvecsHR; //bvecs at High-Res (HRvoxels within a LRvoxel x 3 x HR_NumPoints)
const vector<Matrix>& m_bvalsHR; //bvalues at High-Res (HRvoxels within a LRvoxel x 1 x HR_NumPoints)
const ColumnVector& m_HRweights; //Holds the volume fraction each HR voxel occupies out of the LR one
public:
//Constructor
LRVoxelManager(HRSamples& Hsamples, LRSamples& Lsamples, int LRvoxnum, ColumnVector& HRvoxnum,
const ColumnVector& dataLR,const vector<ColumnVector>& dataHR,
const Matrix& bvecsLR, const Matrix& bvalsLR, const vector<Matrix>& bvecsHR, const vector<Matrix>& bvalsHR, const ColumnVector& HRweights):
opts(rubixOptions::getInstance()), m_HRsamples(Hsamples), m_LRsamples(Lsamples), m_LRvoxnumber(LRvoxnum),m_HRvoxnumber(HRvoxnum),
m_LRv(bvecsHR, bvalsHR, bvecsLR, bvalsLR, dataLR, dataHR, opts.nfibres.value(), opts.nmodes.value(), HRweights, opts.modelnum.value(), opts.fudge.value(),opts.all_ard.value(), opts.no_ard.value(),opts.kappa_ard.value(), opts.fsumPrior.value(), opts.dPrior.value(), opts.rician.value(), opts.noS0jump.value()),
m_dataLR(dataLR), m_dataHR(dataHR),m_bvecsLR(bvecsLR), m_bvalsLR(bvalsLR), m_bvecsHR(bvecsHR), m_bvalsHR(bvalsHR), m_HRweights(HRweights) { }
~LRVoxelManager() { }
void initialise(); //Initialise all parameters for an LR voxel
void runmcmc(); //Run MCMC for an LR voxel
};
}
#endif