Commit 4806dc1f authored by Matteo Bastiani

second data release processing pipeline

parent 2afbf887
@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2018 Matteo Bastiani
Copyright 2017 University of Oxford
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
-----------------------------------------------
dHCP neonatal dMRI data processing pipeline
March, 2018
-----------------------------------------------
dHCP neonatal dMRI data processing pipeline
April, 2018
V 0.0.2: Processing pipeline used for the 2nd public release
-----------------------------------------------
Automated pipeline to consistently analyse neonatal dMRI data from the developing Human Connectome Project (dHCP).
If you use the pipeline in your work, please cite the following article:
Bastiani, M., Andersson, J.L.R., Cordero-Grande, L., Murgasova, M.,
Hutter, J., Price, A.N., Makropoulos, A., Fitzgibbon, S.P., Hughes,
E., Rueckert, D., Victor, S., Rutherford, M., Edwards, A.D., Smith,
S., Tournier, J.-D., Hajnal, J.V., Jbabdi, S., Sotiropoulos,
S.N. (2019). Automated processing pipeline for neonatal diffusion MRI
in the developing Human Connectome Project. NeuroImage, 185, 750-763.
Installation
------------
The pipeline consists of several bash scripts that do not require any installation.
Once it has been downloaded and unpacked, the first thing to do is fill in the necessary paths in the file:
setup.sh
After that, source the script from the terminal in the following way:
. setup.sh
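The exact contents of setup.sh depend on the local installation; a minimal sketch, assuming the environment variables that the scripts below rely on (FSLDIR, scriptsFolder, reconFolder, structFolder) and placeholder paths, could look like:
#!/bin/bash
# setup.sh -- minimal sketch; all paths are placeholders for the local installation
export FSLDIR=/usr/local/fsl                 # FSL 6.0.1 installation
export scriptsFolder=/path/to/pipeline       # folder containing the pipeline scripts
export reconFolder=/path/to/reconstructions  # raw dMRI data (sub-*/ses-*/DWI)
export structFolder=/path/to/structural      # dHCP structural pipeline outputs
. ${FSLDIR}/etc/fslconf/fsl.sh               # set up the FSL environment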
V 0.0.1: Processing pipeline used for the first data release
-----------------------------------------------
Comprehensive and automated pipeline to consistently analyse neonatal dMRI data from the developing Human Connectome Project (dHCP).
If you use the pipeline in your work, please cite the following article:
Bastiani, M., Andersson, J., Cordero-Grande, L., Murgasova, M., Hutter, J., Price, A.N., Makropoulos, A., Fitzgibbon, S.P., Hughes, E., Rueckert, D., Victor, S., Rutherford, M., Edwards, A.D., Smith, S., Tournier, J.-D., Hajnal, J.V., Jbabdi, S., Sotiropoulos, S.N. (2018). Automated processing pipeline for neonatal diffusion MRI in the developing Human Connectome Project. NeuroImage.
The script needs to be run before launching the processing jobs.
Installation
Dependencies
------------
The pipeline consists of several bash scripts.
Once it has been downloaded, the first thing to do is fill the correct paths into:
dHCP_neo_dMRI_setup.sh
The script needs to be run before launching the processing jobs.
The neonatal dMRI pipeline mainly relies on:
- FSL 6.0.1 (https://fsl.fmrib.ox.ac.uk/fsl/fslwiki)
Additional dependencies are:
- The dHCP structural pipeline (https://github.com/BioMedIA/dhcp-structural-pipeline)
- ANTs (http://stnava.github.io/ANTs/): non-linear registration to template space
- Convert3D (http://www.itksnap.org/pmwiki/pmwiki.php?n=Convert3D.Documentation): convert ANTs to FNIRT warp fields
- IRTK (https://www.doc.ic.ac.uk/~dr/software/usage.html): super resolution
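To quickly sanity-check the main dependency, the installed FSL version can be read from the version file that FSL ships (a generic FSL check, not a pipeline script):
cat ${FSLDIR}/etc/fslversion    # should report 6.0.1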
-------------------------------------------
Examples
-------------------------------------------
To launch the pipeline for a single subject, use the following command:
${scriptsFolder}/dHCP_neo_dMRI_setJobs.sh ${rawDataFolder}/sub-${cid} ses-${no} ${rawDataFile} ${cid} ${scriptsFolder}/dhcp300_f.txt ${outFolder} ${age} ${birth} ${n_sessions}
-------------------------------------------
dHCP data
-------------------------------------------
To launch the pipeline, use:
${scriptsFolder}/dHCP_neo_dMRI_launchPipeline.sh participants.tsv ${outFolder}
This command will submit all the necessary scripts to process the raw dMRI data. The necessary inputs are:
${rawDataFolder}/sub-${cid}: Path to raw data
ses-${no}: Session number
${rawDataFile}: Raw data file name
${cid}: Connectome ID
${scriptsFolder}/dhcp300_f.txt: Protocol file
This command will submit all the necessary jobs to process the dHCP neonatal dMRI datasets.
The two inputs are:
participants.tsv: Subject list (see the example after this list)
${outFolder}: Output folder
${age}: Age at scan (in weeks, rounded)
${birth}: Age at birth (in weeks, rounded)
${n_sessions}: Total number of scanning sessions for the same subject
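The participants.tsv subject list is read as a tab-separated file whose header row is skipped; the scripts read column 1 as the participant ID, column 2 as the sex and column 3 as the age at birth. An illustrative file (the ID, values and column names are made up; columns are read by position):
participant_id	sex	birth_ga
CC00001XX01	female	38.4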
-------------------------------------------
Non-dHCP data
-------------------------------------------
To convert a non-dHCP dataset such that it can be used with the pipeline, use the command:
${scriptsFolder}/utils/getProtocol
Typing the command in a terminal without any arguments will show the necessary inputs.
The script will generate a raw data file and a protocol file that can be used with the pipeline.
-------------------------------------------
Non-dHCP data
-------------------------------------------
To convert a locally acquired dataset such that it can be used with the pipeline, use the command:
${scriptsFolder}/utils/getProtocol
Typing the command in a terminal without any arguments will show the necessary inputs.
The script will generate a raw data file and a protocol file that can be used with the pipeline.
Directory structure
-------------------
The pipeline expects the following structure for the input dMRI data:
/path/to/data
/subject1
/session-1
/DWI
/data.nii.gz
/session-2
/DWI
/data.nii.gz
/subject2
/session-1
/DWI
/data.nii.gz
.
.
.
/subjectN
/session-1
/DWI
/data.nii.gz
The getProtocol command can be used to obtain each individual's data.nii.gz 4D NIfTI volume. All of them will need to be placed in the correct subject/session/DWI folder. This directory structure accounts for the fact that data from a single subject can be acquired in multiple sessions.
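As an illustration only (the paths are hypothetical), the expected layout for one session can be created and populated with:
mkdir -p /path/to/data/subject1/session-1/DWI
cp /path/to/acquired_dwi.nii.gz /path/to/data/subject1/session-1/DWI/data.nii.gz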
Examples
--------
To launch the pipeline for a single subject, use the following command:
${scriptsFolder}/dHCP_neo_dMRI_setJobs.sh ${rawDataFolder}/sub-${cid} \
session-${no} \
${rawDataFile} \
${cid} \
${scriptsFolder}/protocol.txt \
${slspec} \
${outFolder} \
${age} \
${birth} \
${subjT2} \
${subjSeg} \
${srFlag} \
${gpuFlag}
This command will submit all the necessary scripts to process the raw dMRI data. Typing the command in the terminal without any input will show the user guide.
The necessary inputs are:
${rawDataFolder}/sub-${cid}: Path to raw data
session-${no}: Session folder
${rawDataFile}: Raw data file name
${cid}: Connectome ID
${scriptsFolder}/protocol.txt: Protocol file
${slspec}: eddy slspec file (0 if not available)
${outFolder}: Output folder
${age}: Age at scan (in weeks, rounded)
${birth}: Age at birth (in weeks, rounded)
${subjT2}: Subject's anatomical T2-weighted volume
${subjSeg}: Subject's tissue segmentation
${srFlag}: Super resolution flag (0=do not use super resolution, 1=use super resolution)
${gpuFlag}: GPU flag (1 if an NVIDIA GPU is available, 0 otherwise); using a GPU significantly speeds up processing and enables all the eddy features (i.e., slice-to-volume correction and correction for motion-by-susceptibility-induced distortions)
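For illustration, a filled-in call might look as follows (every value here is made up; no slspec file, no super resolution, no GPU):
${scriptsFolder}/dHCP_neo_dMRI_setJobs.sh /data/rawdata/sub-subj01 session-1 dwi.nii.gz subj01 \
                                          ${scriptsFolder}/protocol.txt 0 /data/processed 41 38 \
                                          /data/anat/subj01_T2w.nii.gz /data/anat/subj01_tissue_labels.nii.gz 0 0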
#!/bin/bash
set -e
echo -e "\n START: getMasks"
unset POSIXLY_CORRECT
if [ "$2" == "" ];then
echo ""
echo "usage: $0 <SubjFolder> <TissueLabels>"
echo " Tissue and brain masks extraction script"
echo " SubjFolder: Path to the subject processing folder"
echo " TissueLabels: Volume containing results of tissue segmentations (assumes 1=CSF, 2=GM, 3=WM, 5=CSF)"
echo ""
exit 1
fi
subjOutFolder=$1 # Path to the subject folder
segVolume=$2 # Segmented volume (assumes 1=CSF, 2=GM, 3=WM, 5=ventricles)
anatFolder=${subjOutFolder}/T2w
mkdir -p ${anatFolder}/segmentation
#============================================================================
# Get single tissue masks in T2w space
#============================================================================
${FSLDIR}/bin/imcp ${segVolume} ${anatFolder}/segmentation/tissue_labels.nii # Copy segmented volume to processing folder
${FSLDIR}/bin/fslmaths ${anatFolder}/segmentation/tissue_labels.nii -thr 1 -uthr 1 -bin ${anatFolder}/segmentation/csf_mask # CSF
${FSLDIR}/bin/fslmaths ${anatFolder}/segmentation/tissue_labels.nii -thr 5 -uthr 5 -bin -add ${anatFolder}/segmentation/csf_mask -bin ${anatFolder}/segmentation/csf_mask # Adding ventricles
${FSLDIR}/bin/fslmaths ${anatFolder}/segmentation/tissue_labels.nii -thr 2 -uthr 2 -bin ${anatFolder}/segmentation/gm_mask # GM
${FSLDIR}/bin/fslmaths ${anatFolder}/segmentation/tissue_labels.nii -thr 3 -uthr 3 -bin ${anatFolder}/segmentation/wm_mask # WM
${FSLDIR}/bin/fslmaths ${anatFolder}/segmentation/wm_mask -ero -sub ${anatFolder}/segmentation/wm_mask -abs ${anatFolder}/segmentation/wm_mask_edges.nii.gz # WM edges
#============================================================================
# Create brain mask
#============================================================================
${FSLDIR}/bin/fslmaths ${anatFolder}/segmentation/tissue_labels.nii -thr 0 -bin ${anatFolder}/segmentation/brain_mask
echo -e "\n END: getMasks"
#!/bin/bash
echo "\n START: dHCP neonatal dMRI data processing pipeline"
if [ "${2}" == "" ];then
echo "The script will read dHCP subject info and, if data is there, launch the processing steps"
echo ""
echo "usage: dHCP_neo_dMRI_launchPipeline.sh <subject list> <output folder>"
echo ""
echo " subject list: text file containing participant_id, sex and age at birth (w GA)"
echo " output folder: folder where results will be stored"
echo ""
echo ""
exit 1
fi
subjList=$1
outFolder=$2
mkdir -p ${outFolder}
# Read the connectome IDs
sids=(`cat ${subjList} | sed "1 d" | cut -f 1 | grep -v "^$"`)
# Main loop through subjects
for s in ${sids[@]}; do
sex=`cat ${subjList} | grep ${s} | cut -f 2`
birth=`cat ${subjList} | grep ${s} | cut -f 3` # Age at birth (weeks)
sessions=(`cat ${reconFolder}/sub-${s}/sub-${s}_sessions.tsv | sed "1 d" | cut -f 1 | grep -v "^$"`) # Some subjects are acquired over multiple sessions
n_sessions=`echo ${#sessions[@]}`
if [ ${n_sessions} -gt 0 ]; then
for ses in ${sessions[@]}; do
date=`cat ${reconFolder}/sub-${s}/sub-${s}_sessions.tsv | grep ${ses} | cut -f 2`
age=`cat ${reconFolder}/sub-${s}/sub-${s}_sessions.tsv | grep ${ses} | cut -f 3` # Age at scan (weeks)
# Round birth and scan ages
age=`awk -v v="${age}" 'BEGIN{printf "%.0f", v}'`
birth=`awk -v v="${birth}" 'BEGIN{printf "%.0f", v}'`
# Subject specific variables
data=${reconFolder}/sub-${s}/ses-${ses}/DWI/sub-${s}_ses-${ses}_DWI_MB0_AnZfZfAdGhAb.nii
t2=${structFolder}/sub-${s}/ses-${ses}/anat/sub-${s}_ses-${ses}_T2w_restore.nii.gz
seg=${structFolder}/sub-${s}/ses-${ses}/anat/sub-${s}_ses-${ses}_drawem_tissue_labels.nii.gz
if [ -e ${data} ]; then # Check that data has been acquired
#============================================================================
# Check for scan completeness
#============================================================================
dimt4=`${FSLDIR}/bin/fslval ${data} dim4`
complete_check=${dimt4}
usable_check=1
if [ ${dimt4} -lt 34 ]; then
echo "WARNING: The dataset is unusable as it does not contain enough b0 volumes"
echo "${s} ses-${ses}" >> ${outFolder}/unusable.txt
usable_check=0
elif [ ${dimt4} -lt 123 ]; then
echo "WARNING: The dataset is incomplete and does not contain enough b0 pairs for each PE direction"
echo "${s} ses-${ses}" >> ${outFolder}/incomplete.txt
noB0s=1
usable_check=1
fi
#============================================================================
# Store QC information
#============================================================================
subjOutFolder=${outFolder}/${s}/ses-${ses}
if [ -e ${subjOutFolder}/initQC.json ]; then
continue # initQC.json already exists for this session; skip it
fi
mkdir -p ${subjOutFolder}
echo "{" > ${subjOutFolder}/initQC.json
echo " \"Complete\": ${complete_check}," >> ${subjOutFolder}/initQC.json
echo " \"Usable\": ${usable_check}," >> ${subjOutFolder}/initQC.json
echo " \"nSessions\": ${n_sessions}," >> ${subjOutFolder}/initQC.json
echo " \"birthAge\": ${birth}," >> ${subjOutFolder}/initQC.json
echo " \"scanAge\": ${age}" >> ${subjOutFolder}/initQC.json
echo "}" >> ${subjOutFolder}/initQC.json
if [ -e ${t2} ]; then # Check that structural data has been acquired
#============================================================================
# Set processing jobs
#============================================================================
${scriptsFolder}/dHCP_neo_dMRI_setJobs.sh ${reconFolder}/sub-${s} ses-${ses} sub-${s}_ses-${ses}_DWI_MB0_AnZfZfAdGhAb.nii ${s} \
${scriptsFolder}/dHCP_protocol.txt ${scriptsFolder}/slorder.txt ${outFolder} \
${age} ${birth} ${t2} ${seg} 1 1
echo "${s} ${ses} ${birth} ${age}" >> ${outFolder}/complete.txt
else
echo "WARNING! Missing structural data for subject ${s}"
echo "${s} ses-${ses}" >> ${outFolder}/missingAnat.txt
fi
else
echo "WARNING! Missing dMRI data for subject ${s}"
echo "${s} ses-${ses}" >> ${outFolder}/missingDmri.txt
fi
done
else
echo "WARNING! Missing session IDs for subject ${s}"
echo "${s}" >> ${outFolder}/missingSessions.txt
fi
done
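# Note: besides complete.txt, the script keeps book-keeping lists under ${outFolder}
# (unusable.txt, incomplete.txt, missingAnat.txt, missingDmri.txt, missingSessions.txt),
# one subject/session per line.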
echo "\n END: dHCP neonatal dMRI data processing pipeline"
#!/bin/bash
echo "\n START: dHCP neonatal dMRI data processing pipeline progress monitor"
if [ "${2}" == "" ];then
echo "The script will read dHCP subject info and output csv file with completed processing steps and metadata"
echo ""
echo "usage: dHCP_neo_dMRI_monitor.sh <subject list> <output folder>"
echo ""
echo " subject list: text file containing participant_id, sex and age at birth (w GA)"
echo " output folder: folder where results are stored"
echo ""
echo ""
exit 1
fi
subjList=$1
outFolder=$2
echo "participant_id,gender,birth_ga,session_id,date,age_at_scan,dmri,T2,seg,pip_import,pip_topup,pip_eddy,pip_superres,pip_dki,pip_bpx,pip_reg" > ${outFolder}/monitor.csv
# Read the connectome IDs
sids=(`cat ${subjList} | sed "1 d" | cut -f 1 | grep -v "^$"`)
# Main loop through subjects
for s in ${sids[@]}; do
# Set progress variables to 0
sex=0
birth=0
date=0
age=0
dimt4=0
dimt2=0
dimseg=0
sex=`cat ${subjList} | grep ${s} | cut -f 2`
birth=`cat ${subjList} | grep ${s} | cut -f 3` # Age at birth (weeks)
sessions=(`cat ${reconFolder}/sub-${s}/sub-${s}_sessions.tsv | sed "1 d" | cut -f 1 | grep -v "^$" | sort -u`) # Some subjects are acquired over multiple sessions
n_sessions=`echo ${#sessions[@]}`
if [ ${n_sessions} -gt 0 ]; then
for ses in ${sessions[@]}; do
date=(`cat ${reconFolder}/sub-${s}/sub-${s}_sessions.tsv | grep ${ses} | cut -f 2`)
age=(`cat ${reconFolder}/sub-${s}/sub-${s}_sessions.tsv | grep ${ses} | cut -f 3`) # Age at scan (weeks)
n=('n/a')
if [ `echo ${#sessions[@]}` -gt 1 ]; then
date="${date[@]/$n}"
age=${age[0]}
fi
# Subject specific variables
data=${reconFolder}/sub-${s}/ses-${ses}/DWI/sub-${s}_ses-${ses}_DWI_MB0_AnZfZfAdGhAb.nii
t2=${structFolder}/sub-${s}/ses-${ses}/anat/sub-${s}_ses-${ses}_T2w_restore.nii.gz
seg=${structFolder}/sub-${s}/ses-${ses}/anat/sub-${s}_ses-${ses}_drawem_tissue_labels.nii.gz
if [ -e ${data} ]; then # Check that data has been acquired
#============================================================================
# Check for scan size
#============================================================================
dimt4=`${FSLDIR}/bin/fslval ${data} dim4 | tr -d '[:space:]'`
if [ -e ${t2} ]; then # Check that structural data has been acquired
dimt2=1
if [ -e ${seg} ]; then
dimseg=1
subjOutFolder=${outFolder}/${s}/ses-${ses}
# Check data import
if [ -e ${subjOutFolder}/raw/data.nii.gz ]; then
p_import=1
else
p_import=0
fi
# Check topup
if [ -e ${subjOutFolder}/PreProcessed/topup/nodif_brain.nii.gz ]; then
p_topup=1
else
p_topup=0
fi
# Check eddy
if [ -e ${subjOutFolder}/PreProcessed/eddy/nodif_brain.nii.gz ]; then
p_eddy=1
else
p_eddy=0
fi
# Check superres
if [ -e ${subjOutFolder}/Diffusion/data.nii.gz ]; then
p_superres=1
else
p_superres=0
fi
# Check dki
if [ -e ${subjOutFolder}/Diffusion/dkifit/dki_S0.nii.gz ]; then
p_dki=1
else
p_dki=0
fi
# Check bpx
if [ -e ${subjOutFolder}/Diffusion.bedpostX/dyads1.nii.gz ]; then
p_bpx=1
else
p_bpx=0
fi
# Check reg
if [ -e ${subjOutFolder}/Diffusion/xfms/std40w2diff_warp.nii.gz ]; then
p_reg=1
else
p_reg=0
fi
echo "${s},${sex},${birth},${ses},${date},${age},${dimt4},${dimt2},${dimseg},${p_import},${p_topup},${p_eddy},${p_superres},${p_dki},${p_bpx},${p_reg}" >> ${outFolder}/monitor.csv
fi
fi
fi
done
fi
done
echo "\n END: dHCP neonatal dMRI data processing pipeline progress monitor"
#!/bin/bash
set -e
echo -e "\n START: runEddy"
date
if [ "$1" == "" ];then
echo ""
echo "usage: $0 <Subject folder> <slspec>"
echo " Subject folder: Path to the main subject folder"
echo " slspec: eddy slspec file"
echo ""
exit 1
fi
prepFolder=$1
dataFile=$2
subFolder=$1
slspec=$2
gpuFlag=$3
rawFolder=${subFolder}/raw
prepFolder=${subFolder}/PreProcessed
topupFolder=${prepFolder}/topup
eddyFolder=${prepFolder}/eddy
ref_scan=`cat ${eddyFolder}/ref_scan.txt`
${FSLDIR}/bin/eddy_cuda --imain=${dataFile} --mask=${topupFolder}/nodif_brain_mask.nii.gz --index=${eddyFolder}/eddyIndex.txt --bvals=${prepFolder}/bvals --bvecs=${prepFolder}/bvecs --acqp=${eddyFolder}/acqparamsUnwarp.txt --topup=${topupFolder}/topup_results --out=${eddyFolder}/eddy_corrected --very_verbose --niter=5 --fwhm=10,5,0,0,0 --s2v_niter=10 --mporder=8 --nvoxhp=5000 --slspec=${scriptsFolder}/slorder.txt --repol --ol_type=both --s2v_interp=trilinear --s2v_lambda=1 --ref_scan_no=${ref_scan} --data_is_shelled --cnr_maps --residuals --dont_mask_output
#============================================================================
# Add the necessary options based on the actual protocol.
#============================================================================
cmd=""
if [ "${slspec}" != "0" ]; then
echo "slspec file provided."
cmd="${cmd} --slspec=${slspec}"
if [ "${gpuFlag}" -eq "1" ]; then
echo "GPU acceleration enabled; running s2v eddy"
cmd="${cmd} --s2v_niter=10 --mporder=8 --s2v_interp=trilinear --s2v_lambda=1"
fi
fi
if [ -e ${topupFolder}/topup_results_fieldcoef.nii.gz ]; then
echo "topup output detected. Adding the results to eddy."
cmd="${cmd} --topup=${topupFolder}/topup_results"
if [ "${gpuFlag}" -eq "1" ]; then
echo "Correcting for mot-by-susc interactions."
cmd="${cmd} --estimate_move_by_susceptibility --mbs_niter=20 --mbs_ksp=10 --mbs_lambda=10"
fi
else
echo "topup output not detected. Extracting brain mask from raw b0s."
${FSLDIR}/bin/bet ${rawFolder}/data ${topupFolder}/nodif_brain -m -f 0.25 -R
fi
#============================================================================
# Pick eddy executable based on GPU acceleration
#============================================================================
if [ "${gpuFlag}" -eq "1" ]; then
eddy_exec=${FSLDIR}/bin/eddy_cuda
else
eddy_exec=${FSLDIR}/bin/eddy_openmp
fi
# Run eddy
${eddy_exec} --imain=${rawFolder}/data --mask=${topupFolder}/nodif_brain_mask.nii.gz --index=${rawFolder}/eddyIndex.txt \
--bvals=${rawFolder}/bvals --bvecs=${rawFolder}/bvecs --acqp=${eddyFolder}/acqparamsUnwarp.txt \
--out=${eddyFolder}/eddy_corrected --very_verbose \
--niter=5 --fwhm=10,5,0,0,0 --nvoxhp=5000 \
--repol --ol_type=both --ol_nstd=3 \
${cmd} \
--data_is_shelled --cnr_maps --residuals --dont_mask_output
#============================================================================
# Run bet on average iout.
#============================================================================
echo "Running BET on the hifi b0"
${FSLDIR}/bin/select_dwi_vols ${eddyFolder}/eddy_corrected ${prepFolder}/bvals ${eddyFolder}/hifib0 0 -m
${FSLDIR}/bin/select_dwi_vols ${eddyFolder}/eddy_corrected ${rawFolder}/bvals ${eddyFolder}/hifib0 0 -m
${FSLDIR}/bin/bet ${eddyFolder}/hifib0 ${eddyFolder}/nodif_brain -m -f 0.25 -R
@@ -33,8 +90,8 @@ n_ol_LR=0
n_ol_RL=0
n_ol_AP=0
n_ol_PA=0
bvals=($(head -n 1 ${prepFolder}/bvals))
eddyIndex=($(head -n 1 ${eddyFolder}/eddyIndex.txt))
bvals=($(head -n 1 ${rawFolder}/bvals))
eddyIndex=($(head -n 1 ${rawFolder}/eddyIndex.txt))
dimt3=`${FSLDIR}/bin/fslval ${eddyFolder}/eddy_corrected.nii.gz dim3`
dimt4=`${FSLDIR}/bin/fslval ${eddyFolder}/eddy_corrected.nii.gz dim4`
@@ -76,27 +133,6 @@ do
done < ${eddyFolder}/eddy_corrected.eddy_outlier_map
tot_ol=$((${n_ol_b400}+${n_ol_b1000}+${n_ol_b2600}))
# Compute average subject motion
# m_abs: average absolute subject motion
# m_rel: average relative subject motion
m_abs=0
m_rel=0
while read line
do
# Read first column from EDDY output
val=`echo $line | awk '{print $1}'`
# To handle scientific notation, we need the following line
val=`echo ${val} | sed -e 's/[eE]+*/\\*10\\^/'`
m_abs=`echo "${m_abs} + ${val}" | bc -l`
# Read second column from EDDY output
val=`echo $line | awk '{print $2}'`
# To handle scientific notation, we need the following line
val=`echo ${val} | sed -e 's/[eE]+*/\\*10\\^/'`
m_rel=`echo "${m_rel} + ${val}" | bc -l`
done < ${eddyFolder}/eddy_corrected.eddy_movement_rms
m_abs=`echo "${m_abs} / ${dimt4}" | bc -l`
m_rel=`echo "${m_rel} / ${dimt4}" | bc -l`
# Write .json file
echo "{" > ${eddyFolder}/eddy_corrected.json
echo " \"Tot_ol\": $tot_ol," >> ${eddyFolder}/eddy_corrected.json
@@ -113,9 +149,9 @@ echo " \"No_ol_LR\": $n_ol_LR," >> ${eddyFolder}/eddy_corrected.json
echo " \"No_ol_RL\": $n_ol_RL," >> ${eddyFolder}/eddy_corrected.json
echo " \"No_ol_AP\": $n_ol_AP," >> ${eddyFolder}/eddy_corrected.json
echo " \"No_ol_PA\": $n_ol_PA," >> ${eddyFolder}/eddy_corrected.json
echo " \"Avg_motion\": $m_abs" >> ${eddyFolder}/eddy_corrected.json
echo "}" >> ${eddyFolder}/eddy_corrected.json
date
echo -e "\n END: runEddy"
@@ -3,60 +3,68 @@ set -e
echo -e "\n START: runPostProc"
prepFolder=$1
diffFolder=$2
subFolder=$1
srFlag=$2
qcFlag=$3
topupFolder=${prepFolder}/topup
rawFolder=${subFolder}/raw
prepFolder=${subFolder}/PreProcessed
eddyFolder=${prepFolder}/eddy
diffFolder=${subFolder}/Diffusion
if [ ${qcFlag} -eq 1 ]; then
if [ "${qcFlag}" -eq "1" ]; then
mkdir -p ${prepFolder}/QC
fi
mkdir -p ${diffFolder}
#============================================================================
# Remove negative intensity values (caused by spline interpolation) from
# pre-processed data. Copy bvals and (rotated) bvecs to the Diffusion folder.
#============================================================================
${FSLDIR}/bin/fslmaths ${eddyFolder}/data_sr -thr 0 ${diffFolder}/data
rm ${eddyFolder}/data_sr.*
cp ${prepFolder}/bvals ${diffFolder}/bvals
if [ "${srFlag}" -eq "1" ]; then
${FSLDIR}/bin/fslmaths ${eddyFolder}/data_sr -thr 0 ${diffFolder}/data
rm ${eddyFolder}/data_sr.*
else
${FSLDIR}/bin/fslmaths ${eddyFolder}/eddy_corrected -thr 0 ${diffFolder}/data
fi
cp ${rawFolder}/bvals ${diffFolder}/bvals
cp ${eddyFolder}/eddy_corrected.eddy_rotated_bvecs ${diffFolder}/bvecs
#============================================================================
# Get the brain mask using BET, average shell data and attenuation profiles.
# Fit diffusion tensor to each shell separately. If interested in qc, store
# volumes and variances.
#============================================================================
uniqueBvals=(0 400 1000 2600)
uniqueBvals=(`cat ${rawFolder}/shells`)
for b in "${uniqueBvals[@]}"; do
${FSLDIR}/bin/select_dwi_vols ${diffFolder}/data ${diffFolder}/bvals ${diffFolder}/mean_b${b} ${b} -m
if [ ${qcFlag} -eq 1 ]; then
${FSLDIR}/bin/select_dwi_vols ${diffFolder}/data ${diffFolder}/bvals ${prepFolder}/QC/vols_b${b} ${b}
${FSLDIR}/bin/select_dwi_vols ${diffFolder}/data ${diffFolder}/bvals ${prepFolder}/QC/var_b${b} ${b} -v
${FSLDIR}/bin/select_dwi_vols ${diffFolder}/data ${diffFolder}/bvals ${prepFolder}/QC/vols_b${b} ${b}
${FSLDIR}/bin/select_dwi_vols ${diffFolder}/data ${diffFolder}/bvals ${prepFolder}/QC/var_b${b} ${b} -v
fi
if [ ${b} -eq 0 ]; then
${FSLDIR}/bin/bet ${diffFolder}/mean_b${b} ${diffFolder}/nodif_brain -m -f 0.25 -R
${FSLDIR}/bin/bet ${diffFolder}/mean_b${b} ${diffFolder}/nodif_brain -m -f 0.25 -R
else
echo "Multi-shell data: fitting DT to b=${b} shell..."
mkdir -p ${diffFolder}/dtifit_b${b}
${FSLDIR}/bin/select_dwi_vols ${diffFolder}/data ${diffFolder}/bvals ${diffFolder}/dtifit_b${b}/b${b} 0 -b ${b} -obv ${diffFolder}/bvecs
${FSLDIR}/bin/dtifit -k ${diffFolder}/dtifit_b${b}/b${b} -o ${diffFolder}/dtifit_b${b}/dti -m ${diffFolder}/nodif_brain_mask -r ${diffFolder}/dtifit_b${b}/b${b}.bvec -b ${diffFolder}/dtifit_b${b}/b${b}.bval --sse --save_tensor
${FSLDIR}/bin/fslmaths ${diffFolder}/mean_b${b} -div ${diffFolder}/mean_b0 -mul ${diffFolder}/nodif_brain_mask ${diffFolder}/att_b${b}
echo "Fitting DT to b=${b} shell..."
mkdir -p ${diffFolder}/dtifit_b${b}
${FSLDIR}/bin/select_dwi_vols ${diffFolder}/data ${diffFolder}/bvals ${diffFolder}/dtifit_b${b}/b${b} 0 -b ${b} -obv ${diffFolder}/bvecs
${FSLDIR}/bin/dtifit -k ${diffFolder}/dtifit_b${b}/b${b} -o ${diffFolder}/dtifit_b${b}/dti -m ${diffFolder}/nodif_brain_mask -r ${diffFolder}/dtifit_b${b}/b${b}.bvec -b ${diffFolder}/dtifit_b${b}/b${b}.bval --sse --save_tensor
${FSLDIR}/bin/fslmaths ${diffFolder}/mean_b${b} -div ${diffFolder}/mean_b0 -mul ${diffFolder}/nodif_brain_mask ${diffFolder}/att_b${b}
fi
${FSLDIR}/bin/fslmaths ${diffFolder}/mean_b${b} -mul ${diffFolder}/nodif_brain_mask ${diffFolder}/mean_b${b}
done
#============================================================================
# Fit Kurtosis model.
#============================================================================
echo "Multi-shell data: fitting DK to all shells..."
mkdir -p ${diffFolder}/dkifit
${FSLDIR}/bin/dtifit -k ${diffFolder}/data -o ${diffFolder}/dkifit/dki -m ${diffFolder}/nodif_brain_mask -r ${diffFolder}/bvecs -b ${diffFolder}/bvals --sse --save_tensor --kurt --kurtdir
#rm -R ${preprocdir}/tmpData
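# Fit the kurtosis model only for multi-shell data: at least two non-zero shells
# (i.e., more than two unique b-values, counting b=0) are required.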
if [ `echo ${#uniqueBvals[@]}` -gt 2 ]; then
echo "Multi-shell data: fitting DK to all shells..."
mkdir -p ${diffFolder}/dkifit
${FSLDIR}/bin/dtifit -k ${diffFolder}/data -o ${diffFolder}/dkifit/dki -m ${diffFolder}/nodif_brain_mask -r ${diffFolder}/bvecs -b ${diffFolder}/bvals --sse --save_tensor --kurt --kurtdir
fi
echo -e "\n END: runPostProc"
#!/bin/bash
set -e
echo -e "\n Running topup..."
echo -e "\n START: runTopup"
if [ "$1" == "" ];then
echo ""
echo "usage: $0 <Subject folder>"
echo " Subject folder: Path to the main subject folder"
echo ""
exit 1
fi
subFolder=$1
topupDir=$1 # Folder where input files are and where output will be stored
rawFolder=${subFolder}/raw
prepFolder=${subFolder}/PreProcessed
topupFolder=${prepFolder}/topup
topupConfigFile=${FSLDIR}/etc/flirtsch/b02b0.cnf
topupConfigFile=${scriptsFolder}/utils/b02b0.cnf # Topup configuration file
#============================================================================
# Run topup on the selected b0 volumes.
#============================================================================
${FSLDIR}/bin/topup --imain=${topupDir}/phase --datain=${topupDir}/acqparams.txt --config=${topupConfigFile} --fout=${topupDir}/fieldmap --iout=${topupDir}/topup_b0s --out=${topupDir}/topup_results -v
unique_pedirs=(`cat ${rawFolder}/pedirs`)
if [ `echo ${#unique_pedirs[@]}` -gt 1 ]; then
echo "More than 1 phase encoding direction detected. Running topup"
#============================================================================
# Run topup on the selected b0 volumes.
#============================================================================
${FSLDIR}/bin/topup --imain=${topupFolder}/phase --datain=${topupFolder}/acqparams.txt --config=${topupConfigFile} \
--fout=${topupFolder}/fieldmap --iout=${topupFolder}/topup_b0s --out=${topupFolder}/topup_results -v
#============================================================================
# Run be