Commit 0eb16ec7 authored by William Clarke's avatar William Clarke

Merge branch 'enh/post_fsl_course_suggestions' into 'master'

Enh: Post FSL course suggestions

See merge request fsl/fsl_mrs!62
parents 18194556 270385b0
This document contains the FSL-MRS release history in reverse chronological order.
2.0.4 (Wednesday 28th September 2022)
-------------------------------------
- fsl_mrs results now create symlinks to original data objects
- Updated command line interface for fsl_mrs_summarise, a list of results directories can now be passed.
- mrs_tools split better identifies which file contains which indices.
- Added fit and plot utility methods to mrs and results objects in python API.
2.0.3 (Wednesday 21st September 2022)
-------------------------------------
......
%% Cell type:markdown id: tags:
 
# Dynamic fitting of simulated diffusion-weighted MRS.
 
This notebook demonstrates use of the FSL-MRS dynamic fitting tools.
 
The demonstration is performed on synthetic data made for the dwMRS workshop in Leiden in 2021. The data and basis set can be downloaded from [GitHub](https://github.com/dwmrshub/pregame-workshop-2021).
 
The notebook demonstrates:
1. How to generate the necessary configuration model file.
2. How to use the dynamic fitting tools in Python, e.g. in an interactive notebook.
3. How to run the identical analysis using the command line tools.
 
The same tools can be used for fMRS (and other applications). An extensive [fMRS demo](https://github.com/wtclarke/fsl_mrs_fmrs_demo) is hosted online on GitHub.
 
## 1. Data load and exploration.
We will load the data to examine it, and try fitting a single (b = 0) spectrum to assess the suitability of the fitting options and the basis set.
 
%% Cell type:code id: tags:
 
``` python
# Define some data files for each spectrum
data_location = 'example_data/example_dwmrs/metab.nii.gz'
basis_location = 'example_data/example_dwmrs/basis'
```
 
%% Cell type:markdown id: tags:
 
Load the data and plot it.
 
%% Cell type:code id: tags:
 
``` python
from fsl_mrs.utils import mrs_io
from fsl_mrs.utils import plotting as splot
 
data = mrs_io.read_FID(data_location)
bvals = data.hdr_ext['dim_5_header']['Bval']
mrslist = data.mrs()
 
splot.plotly_dynMRS(mrslist, time_var=bvals, ppmlim=(0, 5))
```
 
%% Output
 
 
%% Cell type:markdown id: tags:
 
The data look as expected; as the b-value increases we see a decrease in the amplitude of the metabolites. You might notice three things:
1. There is a residual water peak at 4.65 ppm. This needs to be modelled in the basis set, and its amplitude decreases with b-value much faster than that of the metabolites.
2. There are macromolecules visible in the spectrum, but even at high b-values they are still present (and not that much smaller than at b=0).
3. The baseline varies with b-value.
 
We now want to see whether we can fit the data with the basis set we have. There's no point in continuing with the vastly more complex dynamic fit if we don't have a good description of the underlying metabolite signal to work with.
 
We will pick some spectral fitting options: a ppm range including the water peak (up to 5 ppm), a low-order baseline (the baseline is predominantly flat), and a simple lineshape model (no Gaussian broadening). We separate the water and macromolecule signals (H2O & Mac) into their own metabolite groups, as they are likely to need their own broadening parameters.
 
%% Cell type:code id: tags:
 
``` python
from fsl_mrs.utils.misc import parse_metab_groups
from fsl_mrs.utils import fitting

# Select just the first (b value = 0) spectrum to test the fitting on.
# This time the basis set is loaded alongside it.
mrs0 = data.mrs(basis_file=basis_location)[0]
# Check that the basis has the right phase/frequency convention
mrs0.check_Basis(repair=True)

# Select our fitting options
Fitargs = {'ppmlim': (0.2, 5.0),
           'baseline_order': 1,
           'metab_groups': parse_metab_groups(mrs0, ['H2O', 'Mac']),
           'model': 'lorentzian'}

# Run the fitting
res = fitting.fit_FSLModel(mrs0, **Fitargs)

# Plot the result
_ = splot.plot_fit(mrs0, res)
```
 
%% Output
 
 
%% Cell type:markdown id: tags:
 
## 2. The dynamic model
 
Now we move on to the dynamic fitting part of this demo. We need to describe what our dynamic model is. We do this in a single Python-formatted configuration file. In this configuration file we define:
1. The correspondence between spectral fitting parameters (known as _mapped_ parameters) and any dynamic model parameters we define (known as _free_ parameters).
2. Any bounds on the defined free parameters.
3. Any functional forms, expressed in terms of the free parameters and the time variable, that constrain the mapped parameters across the multiple spectra.
 
We use the notebook 'magic' command `%load` to view what's in a pre-prepared `config.py` file.
 
%% Cell type:code id: tags:
 
``` python
# %load example_data/example_dwmrs/config.py
# ------------------------------------------------------------------------
# User file for defining a model

# Parameter behaviour
# 'variable' : one per time point
# 'fixed'    : same for all time points
# 'dynamic'  : model-based change across time points

# Each parameter of the spectral fitting gets a specific behaviour
# The default behaviour is 'fixed'
Parameters = {
    'Phi_0'    : 'variable',
    'Phi_1'    : 'fixed',
    'conc'     : {'dynamic': 'model_biexp', 'params': ['c_amp', 'c_adc_slow', 'c_adc_fast', 'c_frac_slow']},
    'eps'      : 'fixed',
    'gamma'    : 'fixed',
    'baseline' : {'dynamic': 'model_exp_offset', 'params': ['b_amp', 'b_adc', 'b_off']}
}

# Optionally define bounds on the parameters
Bounds = {
    'c_amp'       : (0, None),
    'c_adc_slow'  : (0, .1),
    'c_adc_fast'  : (.1, 4),
    'c_frac_slow' : (0, 1),
    'gamma'       : (0, None),
    'b_amp'       : (None, None),
    'b_adc'       : (1E-5, 3),
    'b_off'       : (None, None)
}

# Dynamic models here
# These define how the parameters of the dynamic models change as a function
# of the time variable (in dwMRS, that is the bvalue)
from numpy import exp
from numpy import asarray
from numpy import ones_like


# Mono-exponential model with offset
def model_exp_offset(p, t):
    # p = [amp, adc, off]
    return p[2] + p[0] * exp(-p[1] * t)


# Bi-exponential model
def model_biexp(p, t):
    # p = [amp, adc1, adc2, frac]
    return p[0] * (p[3] * exp(-p[1] * t) + (1 - p[3]) * exp(-p[2] * t))


# ------------------------------------------------------------------------
# Gradients
# For each of the models defined above, specify the gradient
# And call these functions using the same names as above with
# '_grad' appended in the end
def model_biexp_grad(p, t):
    e1 = exp(-p[1] * t)
    e2 = exp(-p[2] * t)
    g0 = p[3] * e1 + (1 - p[3]) * e2
    g1 = p[0] * (-p[3] * t * e1)
    g2 = p[0] * (-(1 - p[3]) * t * e2)
    g3 = p[0] * (e1 - e2)
    return asarray([g0, g1, g2, g3])


def model_exp_offset_grad(p, t):
    e1 = exp(-p[1] * t)
    g0 = e1
    g1 = -t * p[0] * e1
    g2 = ones_like(t)
    return asarray([g0, g1, g2], dtype=object)
```
 
%% Cell type:markdown id: tags:
 
The above file comes in three parts:
 
#### Parameters variable (dict):
For each spectral fitting parameter type (e.g. concentration, or the line-shift `eps`) this dict defines whether that parameter:
- Takes a different (unconstrained) value for each b value - __variable__
- Has a fixed value across all b values - __fixed__
- Or is described by a function of the b value - __dynamic__
 
For the last of these options (dynamic), a function and its associated free parameters are defined. E.g. metabolite concentrations (`conc`) follow a bi-exponential function (`model_biexp`) and therefore have four free parameters associated with them (`'c_amp'`, `'c_adc_slow'`, `'c_adc_fast'`, `'c_frac_slow'`). Metabolite linewidths (`gamma`), however, are fixed.
 
#### Bounds variable (dict):
This dictionary provides lower and upper bounds for free parameters. By default, parameters are unconstrained, equivalent to `(None, None)`. If you want to provide a lower bound, an upper bound, or both, that is done using this interface. For instance, the parameter `c_frac_slow` can only vary between 0 and 1.
 
#### Dynamic models and gradients (function definitions):
If a mapped parameter has been identified as `dynamic`, then a functional relationship between the mapped parameter, the time variable, and the free parameters must be given.
 
These relationships are described using Python functions. Each function listed in the `Parameters` dict must be defined. In addition, a function providing the gradient of that function (with respect to the free parameters) must be defined, using the same name with `_grad` appended. E.g. if `model_biexp` is provided then `model_biexp_grad` must be too.
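 
As an aside, the gradient functions can be sanity-checked numerically. The sketch below is not part of the FSL-MRS workflow: it re-declares the bi-exponential model from `config.py` locally (so the cell is self-contained) and compares `model_biexp_grad` against a central finite-difference approximation; the test parameter values are arbitrary.
 
%% Cell type:code id: tags:
 
``` python
# Illustrative aside (not part of the FSL-MRS workflow): numerically check the
# analytic gradient of the bi-exponential model from config.py against a
# central finite-difference approximation. Functions are re-declared here so
# this cell runs on its own; parameter values are arbitrary test values.
import numpy as np


def model_biexp(p, t):
    # p = [amp, adc_slow, adc_fast, frac_slow]
    return p[0] * (p[3] * np.exp(-p[1] * t) + (1 - p[3]) * np.exp(-p[2] * t))


def model_biexp_grad(p, t):
    e1 = np.exp(-p[1] * t)
    e2 = np.exp(-p[2] * t)
    return np.asarray([p[3] * e1 + (1 - p[3]) * e2,
                       p[0] * (-p[3] * t * e1),
                       p[0] * (-(1 - p[3]) * t * e2),
                       p[0] * (e1 - e2)])


p = np.array([1.0, 0.02, 0.3, 0.4])   # arbitrary test parameters
t = np.linspace(0.0, 10.0, 11)        # b-value-like axis

analytic = model_biexp_grad(p, t)
numeric = np.empty_like(analytic)
step = 1e-6
for i in range(p.size):
    dp = np.zeros_like(p)
    dp[i] = step
    # Central difference with respect to the i-th free parameter
    numeric[i] = (model_biexp(p + dp, t) - model_biexp(p - dp, t)) / (2 * step)

print('Max abs difference between analytic and numeric gradients:',
      np.abs(analytic - numeric).max())
```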
 
%% Cell type:markdown id: tags:
 
## 3. Run the dynamic fitting.
 
We are now ready to run our dynamic fitting. This will use the data and basis set we loaded, the b values, the configuration file we inspected, and the spectral fitting options we selected.
 
%% Cell type:code id: tags:
 
``` python
import fsl_mrs.dynamic as dyn
import numpy as np

# All the data this time
mrslist = data.mrs(basis_file=basis_location)
# Check that the basis has the right phase/frequency convention
for mrs in mrslist:
    mrs.check_Basis(repair=True)

dobj = dyn.dynMRS(
    mrslist,
    bvals,
    config_file='example_data/example_dwmrs/config.py',
    rescale=True,
    **Fitargs)

dres = dobj.fit()
```
 
%% Cell type:markdown id: tags:
 
## 4. Results
 
First let's inspect the fits to judge the quality.
 
%% Cell type:code id: tags:
 
``` python
splot.plotly_dynMRS(mrslist, dres.reslist, dobj.time_var)
```
 
%% Output
 
 
%% Cell type:markdown id: tags:
 
That looks reasonable. We can see how each mapped parameter looks across b values, compared to the results from the independent initialisation fits.
 
In the plot generated below, the blue dots are the results from fitting each spectrum individually. The orange line shows the mapped parameters (e.g. metabolite concentrations) extracted from the fitted free parameters. The orange line __is not__ a fit to the blue dots, but we would expect close alignment for high-SNR metabolites (e.g. NAA).
 
%% Cell type:code id: tags:
 
``` python
_ = dres.plot_mapped()
```
 
%% Output
 
 
%% Cell type:markdown id: tags:
 
We can then also inspect the fitted free parameters, which for the concentrations are the desired outputs.
 
We can use some Pandas code to improve the look of this dataframe.
 
%% Cell type:code id: tags:
 
``` python
cres = dres.collected_results()
cres['conc'].style\
    .format(
        formatter={
            'c_amp': "{:0.3f}",
            'c_adc_slow': "{:0.3f}",
            'c_adc_fast': "{:0.2f}",
            'c_frac_slow': "{:0.2f}"})
```
 
%% Output
 
<pandas.io.formats.style.Styler at 0x7fda716b9ee0>
 
%% Cell type:markdown id: tags:
 
## A more complex model
It might not be appropriate to fit each metabolite with the same model. For instance, a linear model for the macromolecules or a mono-exponential model for water might be more appropriate.
 
FSL-MRS can achieve this using the same framework, via a second configuration file (`config_multi.py`); an illustrative sketch of the kind of additional model functions involved follows below.
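 
The `config_multi.py` file shipped with the example data is not reproduced here. Purely as an illustration, and judging only by the free-parameter names in the results table further down (`c_slope` for Mac, `c_adc` for water), a per-metabolite configuration of this kind needs extra model functions roughly like the sketch below; the exact names and functional forms in the real file may differ.
 
%% Cell type:code id: tags:
 
``` python
# Illustrative sketch only: plausible extra dynamic models (and gradients) of
# the kind a per-metabolite configuration such as config_multi.py might use,
# alongside model_biexp. The exact names/forms in the real file may differ.
from numpy import exp, asarray, ones_like


# Linear model, e.g. for the macromolecules
def model_lin(p, t):
    # p = [amp, slope]
    return p[0] + p[1] * t


def model_lin_grad(p, t):
    g0 = ones_like(t)   # d/d(amp)
    g1 = t              # d/d(slope)
    return asarray([g0, g1])


# Mono-exponential model (no offset), e.g. for water
def model_exp(p, t):
    # p = [amp, adc]
    return p[0] * exp(-p[1] * t)


def model_exp_grad(p, t):
    e1 = exp(-p[1] * t)
    return asarray([e1, -t * p[0] * e1])
```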
 
%% Cell type:code id: tags:
 
``` python
mrslist = data.mrs(basis_file=basis_location)
# Check that the basis has the right phase/frequency convention
for mrs in mrslist:
    mrs.check_Basis(repair=True)

dobj2 = dyn.dynMRS(
    mrslist,
    bvals,
    config_file='example_data/example_dwmrs/config_multi.py',
    rescale=True,
    **Fitargs)

dres2 = dobj2.fit()
```
 
%% Cell type:code id: tags:
 
``` python
_ = dres2.plot_mapped()
```
 
%% Output
 
 
%% Cell type:code id: tags:
 
``` python
cres2 = dres2.collected_results()
cres2['conc']
```
 
%% Output
 
c_amp c_adc_slow c_adc_fast c_frac_slow c_adc c_slope
metabolite
Ala 0.051210 0.055734 3.701044 0.836475 NaN NaN
Asp 0.153325 0.014590 0.326327 0.212901 NaN NaN
Cr 0.566903 0.027343 0.249424 0.321872 NaN NaN
GABA 0.091813 0.000000 0.943949 0.296047 NaN NaN
GPC 0.068884 0.006307 0.153411 0.287418 NaN NaN
GSH 0.057658 0.020483 0.202492 0.484120 NaN NaN
Glc 0.040637 0.024023 0.100000 0.934087 NaN NaN
Gln 0.294912 0.036766 0.581525 0.465804 NaN NaN
Glu 0.941375 0.015845 0.173130 0.266299 NaN NaN
H2O 1.340613 NaN NaN NaN 1.335559 NaN
Ins 0.634510 0.016160 0.180822 0.244506 NaN NaN
Lac 0.050088 0.004352 0.157758 0.207368 NaN NaN
Mac 0.364423 NaN NaN NaN NaN 0.00194
NAA 1.296104 0.018252 0.207524 0.356160 NaN NaN
NAAG 0.105431 0.014112 0.114542 0.294642 NaN NaN
PCh 0.107530 0.038286 0.243739 0.397196 NaN NaN
PCr 0.358697 0.015419 0.152181 0.365878 NaN NaN
PE 0.176586 0.035548 0.353910 0.444925 NaN NaN
Scyllo 0.052996 0.000000 0.228900 0.687271 NaN NaN
Tau 0.124494 0.033834 0.348317 0.377911 NaN NaN
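 
%% Cell type:markdown id: tags:
 
The collected results are ordinary pandas DataFrames, so they can also be exported for analysis outside the notebook. A minimal sketch; the output filename here is arbitrary.
 
%% Cell type:code id: tags:
 
``` python
# Save the fitted concentration-related free parameters to CSV for downstream
# analysis. The filename is arbitrary.
cres2['conc'].to_csv('dwmrs_conc_free_parameters.csv')
```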
%% Cell type:markdown id: tags:
# Example of SVS processing - interactive notebook
%% Cell type:markdown id: tags:
This notebook demos the process of fitting a single voxel scan in an interactive notebook using the underlying python libraries in FSL-MRS.
To view the plots in this notebook in Jupyter please consult the [plotly getting-started guide](https://plotly.com/python/getting-started/#jupyterlab-support-python-35).
### Contents:
1. [File conversion using spec2nii](#1.-File-conversion-using-spec2nii)
2. [Interactive preprocessing](#2.-Interactive-preprocessing)
3. [Fitting of the resultant spectrum](#3.-Fitting)
4. [Display of fitting results in a notebook](#4.-Display)
Will Clarke
June 2020
University of Oxford
%% Cell type:markdown id: tags:
## 1. File conversion using spec2nii
__THIS IS DONE ON THE COMMAND LINE__
Run `spec2nii twix -v` to establish the file contents, then, using this information, run `spec2nii twix -e` with the appropriate flags to extract the required scans. The `-q` flag suppresses text output.
This dataset uses a modified version of the CMRR spectro package sequences on a Siemens scanner. It has three sets of water reference scans. The first is tagged as a phase correction, and is collected at the start of the main water-suppressed scan; it will be used for eddy current correction.
The second is collected in a separate scan with only the RF portion of the water suppression disabled. This is used for coil combination (and could be used for eddy current correction). The third is collected with the OVS and all aspects of the water suppression disabled. It therefore experiences eddy currents unlike all the other scans. It will be used for final concentration scaling.
%% Cell type:code id: tags:
``` python
%%bash
spec2nii twix -v -q example_data/meas_MID310_STEAM_metab_FID115673.dat
```
%% Cell type:markdown id: tags:
From the final lines of the output of the first cell we can see that this "twix" file contains two groups of data.
The first is tagged as "image" and contains 64 repetitions (sets) of 4096 point FIDs collected on 32 channels.
The second has a single FID on each channel.
We now call spec2nii again specifying the group we want to extract each time. Each call to spec2nii will generate a NIfTI MRS file with a size of 1x1x1x4096x32xNdyn, where Ndyn is the number of dynamics (repeats).
We repeat this for the water reference scans, extracting just the image data.
%% Cell type:code id: tags:
``` python
%%bash
spec2nii twix -e image -f steam_metab_raw -o data -j -q example_data/meas_MID310_STEAM_metab_FID115673.dat
spec2nii twix -e phasecor -f steam_ecc_raw -o data -j -q example_data/meas_MID310_STEAM_metab_FID115673.dat
spec2nii twix -e image -f steam_wref_comb_raw -o data -j -q example_data/meas_MID311_STEAM_wref1_FID115674.dat
spec2nii twix -e image -f steam_wref_quant_raw -o data -j -q example_data/meas_MID312_STEAM_wref3_FID115675.dat
```
%% Cell type:markdown id: tags:
## 2. Interactive preprocessing
In this section we will preprocess the data using functions in the preproc package in fsl_mrs. This example could be used as a template to construct your own preprocessing script in python.
#### Description of steps
0. Load the data.
1. Take averages of the water references used for combination (across files).
2. Coil combine the metab data, the ecc data and the quantification data using the "comb" data as the reference.
3. Phase and frequency align the data where there are multiple transients.
4. Combine data across those transients by taking the mean.
5. Run eddy current correction using the appropriate reference.
6. In this data an additional FID point is collected before the echo centre. Remove this.
7. Run HLSVD on the data to remove the residual water in the water suppressed data.
8. Shift the spectrum so that peaks appear at their correct reference frequencies.
9. Phase the data by a single peak as a crude zero-order phase correction.
%% Cell type:code id: tags:
``` python
import fsl_mrs.utils.mrs_io as mrs_io
```
%% Cell type:markdown id: tags:
#### 0. Load data
Load each dataset using the `mrs_io.read_FID` function.
%% Cell type:code id: tags:
``` python
# Load the raw metabolite data
supp_data = mrs_io.read_FID('data/steam_metab_raw.nii.gz')
print(f'Loaded water suppressed data with shape {supp_data.shape} and dimensions {supp_data.dim_tags}.')
# Load water ref with eddy currents (for coil combination)
ref_data = mrs_io.read_FID('data/steam_wref_comb_raw.nii.gz')
print(f'Loaded unsuppressed data with shape {ref_data.shape} and dimensions {ref_data.dim_tags}.')
# Load water ref without eddy currents (for quantification)
quant_data = mrs_io.read_FID('data/steam_wref_quant_raw.nii.gz')
print(f'Loaded unsuppressed data with shape {quant_data.shape} and dimensions {quant_data.dim_tags}.')
# Load phasecor scan (for eddy current correction)
ecc_data = mrs_io.read_FID('data/steam_ecc_raw.nii.gz')
print(f'Loaded unsuppressed data with shape {ecc_data.shape} and dimensions {ecc_data.dim_tags}.')
```
%% Cell type:markdown id: tags:
#### 1. Take averages of reference data for coil combination
Each water reference scan contained two averages. Calculate the average for use as a coil combination reference.
%% Cell type:code id: tags:
``` python
from fsl_mrs.utils.preproc import nifti_mrs_proc as proc
avg_ref_data = proc.average(ref_data, 'DIM_DYN', figure=True)
```
%% Cell type:markdown id: tags:
#### 2. Coil combination
Coil combine the metab data, the ecc data and the quantification data using the "comb" data as the reference.
%% Cell type:code id: tags:
``` python
supp_data = proc.coilcombine(supp_data, reference=avg_ref_data, figure=True)
quant_data = proc.coilcombine(quant_data, reference=avg_ref_data)
ecc_data = proc.coilcombine(ecc_data, reference=avg_ref_data)
```
%% Cell type:markdown id: tags:
#### Additional step to give reasonable display phase for metabolites
Phase using single peak (Cr at 3.03 ppm)
%% Cell type:code id: tags:
``` python
supp_data = proc.apply_fixed_phase(supp_data, 180.0, figure=True)
quant_data = proc.apply_fixed_phase(quant_data, 180.0)
ecc_data = proc.apply_fixed_phase(ecc_data, 180.0)
```
%% Cell type:markdown id: tags:
#### 3. Phase and freq alignment
Phase and frequency align the data where there are multiple transients.
%% Cell type:code id: tags:
``` python
supp_data = proc.align(supp_data, 'DIM_DYN', ppmlim=(0, 4.2), figure=True)
# Alignment for water scans
quant_data = proc.align(quant_data, 'DIM_DYN', ppmlim=(0, 8))
```
%% Cell type:markdown id: tags:
#### 4. Combine scans
Combine data across transients by taking the mean.
%% Cell type:code id: tags:
``` python
supp_data = proc.average(supp_data, 'DIM_DYN', figure=True)
quant_data = proc.average(quant_data, 'DIM_DYN')
```
%% Cell type:markdown id: tags:
#### 5. ECC
Run eddy current correction using the appropriate reference.
%% Cell type:code id: tags:
``` python
supp_data = proc.ecc(supp_data, ecc_data, figure=True)
quant_data = proc.ecc(quant_data, quant_data)
```
%% Cell type:markdown id: tags:
#### 6. Truncation
In this data an additional FID point is collected before the echo centre. Remove this point.
%% Cell type:code id: tags:
``` python
supp_data = proc.truncate_or_pad(supp_data, -1, 'first', figure=True)
quant_data = proc.truncate_or_pad(quant_data, -1, 'first')
```
%% Cell type:markdown id: tags:
#### 7. Remove residual water
Run HLSVD on the data to remove the residual water in the water suppressed data.
%% Cell type:code id: tags:
``` python
limits = [-0.15,0.15]
limunits = 'ppm'
supp_data = proc.remove_peaks(supp_data, limits, limit_units=limunits, figure=True)
```
%% Cell type:markdown id: tags:
#### 8. Shift to reference
Ensure peaks appear at correct frequencies after alignment and ecc.
%% Cell type:code id: tags:
``` python
supp_data = proc.shift_to_reference(supp_data, 3.027, (2.9, 3.1), figure=True)
```
%% Cell type:markdown id: tags:
#### 9. Phasing
Phase the data by a single peak as a basic zero-order phase correction.
%% Cell type:code id: tags:
``` python
final_data = proc.phase_correct(supp_data, (2.9, 3.1), figure=True)
final_wref = proc.phase_correct(quant_data, (4.55, 4.7), hlsvd=False, figure=True)
```
%% Cell type:markdown id: tags:
## 3. Fitting
### Load into MRS objects
- Read pre-baked basis file (this one was generated with `fsl_mrs_sim`).
- Create main MRS object.
- Prepare the data for fitting (this does additional checks such as whether the data needs to be conjugated, and scales the data to improve fitting robustness)
%% Cell type:code id: tags:
``` python
import matplotlib.pyplot as plt
# Create main MRS Object
mrs = final_data.mrs(basis_file='example_data/steam_11ms',
ref_data=final_wref)
mrs.processForFitting()
# Quick plots of the Metab and Water spectra
mrs.plot()
plt.show()
mrs.plot_ref()
plt.show()
plt.figure(figsize=(10,10))
mrs.plot_basis()
plt.show()
```
%% Cell type:markdown id: tags:
### Fitting
Here we show a typical model fitting and some of the parameters that can be user-set.
%% Cell type:code id: tags:
``` python
from fsl_mrs.utils import fitting, misc, plotting

# Separate macromolecule from the rest (it will have its own lineshape parameters)
metab_groups = misc.parse_metab_groups(mrs, 'Mac')

# Fit with Newton algorithm
Fitargs = {'ppmlim': [0.2, 4.2],
           'method': 'Newton',
           'baseline_order': 4,
           'metab_groups': metab_groups,
           'model': 'voigt'}

res = fitting.fit_FSLModel(mrs, **Fitargs)

# Quick sanity-plot of the fit (see further down for interactive plotting)
_ = plotting.plot_fit(mrs, res)
```
%% Cell type:markdown id: tags:
### Quantification
Internal and water referencing.
Output is a pandas series.
%% Cell type:code id: tags:
``` python
import pandas as pd
from fsl_mrs.utils import quantify

combinationList = [['NAA', 'NAAG'],
                   ['Glu', 'Gln'],
                   ['GPC', 'PCh'],
                   ['Cr', 'PCr'],
                   ['Glc', 'Tau']]
res.combine(combinationList)

te = final_data.hdr_ext['EchoTime']
tr = final_data.hdr_ext['RepetitionTime']
q_info = quantify.QuantificationInfo(te,
                                     tr,
                                     mrs.names,
                                     mrs.centralFrequency / 1E6)
q_info.set_fractions({'WM': 0.45, 'GM': 0.45, 'CSF': 0.1})

res.calculateConcScaling(mrs,
                         quant_info=q_info,
                         internal_reference=['Cr', 'PCr'])

internal = res.getConc(scaling='internal', function=None).mean().multiply(8)
molarity = res.getConc(scaling='molarity', function=None).mean()
print(pd.concat([internal.rename('/Cr+PCr', inplace=True),
                 molarity.rename('molarity (mM)', inplace=True)], axis=1))
```
%% Cell type:markdown id: tags:
## 4. Display
Results can be displayed with various plotting functions or rendered into an interactive HTML.
### In notebook
%% Cell type:code id: tags:
``` python
fig = plotting.plotly_fit(mrs,res)
fig.show()
```
%% Cell type:markdown id: tags:
### HTML report
%% Cell type:code id: tags:
``` python
from fsl_mrs.utils import report
import os
import datetime

output = '.'
report.create_report(mrs, res,
                     filename=os.path.join(output, 'report.html'),
                     fidfile='meas_MID310_STEAM_metab_FID115673.dat',
                     basisfile='example_data/steam_11ms',
                     h2ofile='meas_MID311_STEAM_wref1_FID115674.dat',
                     date=datetime.datetime.now().strftime("%Y-%m-%d %H:%M"))

import webbrowser
current_path = os.getcwd()
# generate a URL
url = os.path.join('file:///' + current_path, 'report.html')
webbrowser.open(url)
```
......
......@@ -663,6 +663,28 @@ class MRS(object):
        from fsl_mrs.utils.plotting import plot_mrs_basis
        plot_mrs_basis(self, plot_spec=add_spec, ppmlim=ppmlim)

    # Utility fitting function
    def fit(self, **kwargs):
        """Utility method for fitting this mrs object. Basis must be loaded.

        Calls fsl_mrs.utils.fitting.fit_FSLModel

        :Keyword Arguments:
            * *method (``str``) -- 'Newton' or 'MH', defaults to 'Newton'
            * *ppmlim (``tuple``) -- Ppm range over which to fit, defaults to (.2, 4.2)
            * *baseline_order (``int``) -- Polynomial baseline order, defaults to 2, -1 disables.
            * *metab_groups (``list``) -- List of metabolite groupings, defaults to None
            * *model (``str``) -- 'lorentzian' or 'voigt', defaults to 'voigt'
            * *x0 (``list``) -- Initialisation values, defaults to None
            * *MHSamples (``int``) -- Number of MH samples to run, defaults to 500
            * *disable_mh_priors (``bool``) -- If True all priors are disabled for MH fitting, defaults to False
            * *fit_baseline_mh (``bool``) -- If true baseline parameters are also fit using MH, defaults to False

        :return: Fit results object
        :rtype: fsl_mrs.utils.FitRes
        """
        from fsl_mrs.utils.fitting import fit_FSLModel
        return fit_FSLModel(self, **kwargs)

    # Unused functions
    # def add_expt_MM(self, lw=5):
    #     """
......
......@@ -295,12 +295,19 @@ def split(args):
split_1, split_2 = nmrs_tools.split(to_split, args.dim, args.indices)
# 3. Save the output file
if args.index is not None:
first_name = '_low'
second_name = '_high'
elif args.indices:
first_name = '_others'
second_name = '_selected'
if args.filename:
file_out_1 = args.output / (args.filename + '_1')
file_out_2 = args.output / (args.filename + '_2')
file_out_1 = args.output / (args.filename + first_name)
file_out_2 = args.output / (args.filename + second_name)
else:
file_out_1 = args.output / (split_name + '_1')
file_out_2 = args.output / (split_name + '_2')
file_out_1 = args.output / (split_name + first_name)
file_out_2 = args.output / (split_name + second_name)
split_1.save(file_out_1)
split_2.save(file_out_2)
......
......@@ -285,3 +285,16 @@ def test_parse_metab_groups():
    # List of integers
    assert mrs.parse_metab_groups([0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])\
        == [0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0]


def test_fit_method():
    mrs = mrs_from_files(svs_metab, svs_basis)

    fitargs = {
        'metab_groups': mrs.parse_metab_groups('Mac'),
        'baseline_order': 1}

    res = mrs.fit(**fitargs)

    import fsl_mrs.utils.results
    assert isinstance(res, fsl_mrs.utils.results.FitRes)
......@@ -123,10 +123,10 @@ def test_split(tmp_path):
'--filename', 'split_file',
'--file', str(test_data_split)])
assert (tmp_path / 'split_file_1.nii.gz').exists()
assert (tmp_path / 'split_file_2.nii.gz').exists()
f1 = nib.load(tmp_path / 'split_file_1.nii.gz')
f2 = nib.load(tmp_path / 'split_file_2.nii.gz')
assert (tmp_path / 'split_file_low.nii.gz').exists()
assert (tmp_path / 'split_file_high.nii.gz').exists()
f1 = nib.load(tmp_path / 'split_file_low.nii.gz')