Commit 9a7fdc99 authored by Paul McCarthy

Merge branch 'rf/newlines' into 'master'

Rf/newlines

See merge request !75
parents 6017a9c2 e6bc38d6
......@@ -147,7 +147,7 @@ def main():
if html:
print(f'Generating HTML version of executed notebook...')
-tempfile = 'temp.ipynb'
+tempfile = 'FUNPACK.ipynb'
truncate_long_output_cells(exfile, tempfile)
convert_to_html(tempfile, htmlfile)
else:
......
......@@ -6,12 +6,21 @@ FUNPACK changelog
--------------------------------
Added
^^^^^
* New ``--escape_newlines`` option, which causes escape characters
  (e.g. ``\n``, ``\t``) in non-numeric values to be output literally.
Fixed
^^^^^
* Fixed a bug where subject inclusion expressions were causing zero rows to be
imported if the variable was not present in the input file.
* Fixed some compatibility issues with Pandas 1.2.
2.5.0 (Wednesday 9th December 2020)
......
......@@ -6,7 +6,7 @@
#
-__version__ = '2.5.0'
+__version__ = '2.5.1'
"""The ``funpack`` versioning scheme roughly follows Semantic Versioning
conventions.
"""
......
......@@ -120,6 +120,7 @@ CLI_ARGUMENTS = collections.OrderedDict((
('TSV/CSV export options', [
(('ts', 'tsv_sep'), {}),
(('en', 'escape_newlines'), {'action' : 'store_true'}),
(('tm', 'tsv_missing_values'), {'default' : ''})]),
('HDF5 export options', [
......@@ -352,6 +353,10 @@ CLI_ARGUMENT_HELP = {
'Do not save non-numeric columns to the main output file.',
# TSV/CSV export options
'escape_newlines' :
'Escape any newline characters in all non-numeric columns, replacing them '
'with a literal "\\n" (with the same logic applied to any other escape '
'characters).',
'tsv_sep' :
'Column separator string to use in output file (default: "," for csv, '
'"\\t" for tsv).',
......
......@@ -164,3 +164,18 @@ def formatCompound(dtable, column, series, delim=','):
        return delim.join([str(v) for v in val])
    return series.apply(fmt)
@custom.formatter('escapeString')
def escapeString(dtable, column, series):
    """Encodes every value of ``series`` as a string, with any escape
    characters (``\\n``, ``\\t``, etc.) formatted literally. This has the
    effect that the series is coerced to a string type (if not already).
    """
    if len(series) == 0:
        return series
    def fmt(val):
        return str(val).encode('unicode_escape').decode('utf-8')
    return series.apply(fmt)
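As a reviewer's note, the `unicode_escape` round-trip used above turns embedded control characters into their literal escape sequences, e.g.:

```python
# What the escapeString formatter does to a single value:
# a real newline becomes the two characters backslash + n,
# so the value fits on one line of a TSV file.
val     = 'line one\nline two'
escaped = val.encode('unicode_escape').decode('utf-8')

print(escaped)  # line one\nline two  (one line, literal backslash-n)
```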
......@@ -14,7 +14,8 @@ import os.path as op
import os
import logging
import numpy as np
import pandas.api.types as pdtypes
from . import util
from . import custom
......@@ -50,6 +51,7 @@ def exportTSV(dtable,
outfile,
sep=None,
missingValues=None,
escapeNewlines=False,
numRows=None,
dropNaRows=False,
dateFormat=None,
......@@ -71,6 +73,10 @@ def exportTSV(dtable,
:arg missingValues: String to use for missing/NA values. Defaults to the
empty string.
:arg escapeNewlines: If ``True``, escape characters in all string/object
columns are escaped via the ``escapeString`` formatter.
:arg numRows: Number of rows to write at a time. Defaults to
:attr:`NUM_ROWS`.
......@@ -89,6 +95,7 @@ def exportTSV(dtable,
if sep is None: sep = '\t'
if missingValues is None: missingValues = ''
if numRows is None: numRows = NUM_ROWS
if formatters is None: formatters = {}
# We're going to output each chunk of
# subjects to a separate file (in
......@@ -101,6 +108,17 @@ def exportTSV(dtable,
subtables = [dtable.subtable(rows=c) for c in idxchunks]
outfiles = ['{}_{}'.format(outfile, i) for i in range(nchunks)]
# escapeNewlines is performed by the
# exporting.escapeString formatter
# function. We apply it to all columns
# which are not numeric or date
    if escapeNewlines:
        for col in dtable.dataColumns:
            series = dtable[:, col.name]
            if not (pdtypes.is_numeric_dtype( series) or
                    pdtypes.is_datetime64_any_dtype(series)):
                formatters[col.name] = 'escapeString'
# write each chunk in parallel
args = zip(subtables,
outfiles,
......
......@@ -387,6 +387,7 @@ def doExport(dtable, args):
formatters=args.var_format,
# TSV options
escapeNewlines=args.escape_newlines,
sep=args.tsv_sep,
missingValues=args.tsv_missing_values,
......
%% Cell type:markdown id: tags:
![win logo](attachment:win.png)
# `funpack` (https://git.fmrib.ox.ac.uk/fsl/funpack)
> Paul McCarthy <paul.mccarthy@ndcn.ox.ac.uk>
> ([WIN@FMRIB](https://www.win.ox.ac.uk/))
`funpack` is a command-line program which you can use to extract data from UK
BioBank (and other tabular) data.
You can give `funpack` one or more input files (e.g. `.csv`, `.tsv`), and it
will merge them together, perform some preprocessing, and produce a single
output file.
A large number of rules are built into `funpack` which are specific to the UK
BioBank data set. But you can control and customise everything that `funpack`
does to your data, including which rows and columns to extract, and which
cleaning/processing steps to perform on each column.
`funpack` comes installed with recent versions of
[FSL](https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/). You can also install `funpack`
via `conda`:
> ```
> conda install -c conda-forge fmrib-unpack
> ```
Or using `pip`:
> ```
> pip install fmrib-unpack
> ```
Get command-line help by typing:
> ```
> funpack -h
> ```
**Important** The examples in this notebook assume that you have installed `funpack`
2.5.1 or newer.
%% Cell type:code id: tags:
``` bash
funpack -V
```
%% Cell type:markdown id: tags:
> _Note:_ If the above command produces a `NameError`, you may need to change
> the Jupyter Notebook kernel type to **Bash** - you can do so via the
> **Kernel -> Change Kernel** menu option.
### Contents
1. [Overview](#Overview)
1. [Import](#1.-Import)
2. [Cleaning](#2.-Cleaning)
3. [Processing](#3.-Processing)
4. [Export](#4.-Export)
2. [Examples](#Examples)
3. [Import examples](#Import-examples)
1. [Selecting variables (columns)](#Selecting-variables-(columns))
1. [Selecting individual variables](#Selecting-individual-variables)
2. [Selecting variable ranges](#Selecting-variable-ranges)
3. [Selecting variables with a file](#Selecting-variables-with-a-file)
4. [Selecting variables from pre-defined categories](#Selecting-variables-from-pre-defined-categories)
2. [Selecting subjects (rows)](#Selecting-subjects-(rows))
1. [Selecting individual subjects](#Selecting-individual-subjects)
2. [Selecting subject ranges](#Selecting-subject-ranges)
3. [Selecting subjects from a file](#Selecting-subjects-from-a-file)
4. [Selecting subjects by variable value](#Selecting-subjects-by-variable-value)
5. [Excluding subjects](#Excluding-subjects)
3. [Selecting visits](#Selecting-visits)
1. [Evaluating expressions across visits](#Evaluating-expressions-across-visits)
4. [Merging multiple input files](#Merging-multiple-input-files)
1. [Merging by subject](#Merging-by-subject)
2. [Merging by column](#Merging-by-column)
    3. [Naive merging](#Naive-merging)
4. [Cleaning examples](#Cleaning-examples)
1. [NA insertion](#NA-insertion)
2. [Variable-specific cleaning functions](#Variable-specific-cleaning-functions)
3. [Categorical recoding](#Categorical-recoding)
4. [Child value replacement](#Child-value-replacement)
5. [Processing examples](#Processing-examples)
1. [Sparsity check](#Sparsity-check)
2. [Redundancy check](#Redundancy-check)
3. [Categorical binarisation](#Categorical-binarisation)
6. [Custom cleaning, processing and loading - funpack plugins](#Custom-cleaning,-processing-and-loading---funpack-plugins)
1. [Custom cleaning functions](#Custom-cleaning-functions)
2. [Custom processing functions](#Custom-processing-functions)
3. [Custom file loaders](#Custom-file-loaders)
7. [Miscellaneous topics](#Miscellaneous-topics)
1. [Non-numeric data](#Non-numeric-data)
2. [Dry run](#Dry-run)
3. [Built-in rules](#Built-in-rules)
4. [Using a configuration file](#Using-a-configuration-file)
5. [Working with unknown/uncategorised variables](#Working-with-unknown/uncategorised-variables)
# Overview
`funpack` performs the following steps:
## 1. Import
All data files are loaded in, unwanted columns and subjects are dropped, and
the data files are merged into a single table (a.k.a. data frame). Multiple
files can be merged according to an index column (e.g. subject ID). Or, if the
input files contain the same columns/subjects, they can be naively
concatenated along rows or columns.
> _Note:_ FUNPACK refers to UK Biobank **Data fields** as **variables**. The
> two terms can be considered equivalent.
## 2. Cleaning
The following cleaning steps are applied to each column:
1. **NA value replacement:** Specific values for some columns are replaced
with NA, for example, variables where a value of `-1` indicates *Do not
know*.
2. **Variable-specific cleaning functions:** Certain columns are
re-formatted; for example, the [ICD10](https://en.wikipedia.org/wiki/ICD-10)
disease codes can be converted to integer representations.
3. **Categorical recoding:** Certain categorical columns are re-coded.
4. **Child value replacement:** NA values within some columns which are
dependent upon other columns may have values inserted based on the values
of their parent columns.
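As a rough illustration of the NA-replacement step (a sketch in pandas, not FUNPACK's actual implementation), a sentinel value such as `-1` can be replaced with NA like so:

```python
import pandas as pd

# Hypothetical column where a value of -1 means "Do not know"
col = pd.Series([3, -1, 7, -1, 2], name='1-0.0')

# Replace the sentinel value with NA
cleaned = col.replace(-1, pd.NA)

print(int(cleaned.isna().sum()))  # 2
```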
## 3. Processing
During the processing stage, columns may be removed, merged, or expanded into
additional columns. For example, a categorical column may be expanded into a set
of binary columns, one for each category.
A column may also be removed on the basis of being too sparse, or being
redundant with respect to another column.
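The categorical-expansion step described above can be sketched with `pandas.get_dummies` (a stand-in for FUNPACK's own binarisation function):

```python
import pandas as pd

# Hypothetical categorical column with three categories
col = pd.Series(['A', 'B', 'A', 'C'], name='1-0.0')

# Expand into one binary column per category
binarised = pd.get_dummies(col, prefix='1-0.0')

print(list(binarised.columns))  # ['1-0.0_A', '1-0.0_B', '1-0.0_C']
```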
## 4. Export
The processed data can be saved as a `.csv`, `.tsv`, or `.hdf5` file.
# Examples
Throughout these examples, we are going to use a few command line
options, which you will probably **not** normally want to use:
- `-ow` (short for `--overwrite`): This tells `funpack` not to complain if
the output file already exists.
- `-q` (short for `--quiet`): This tells `funpack` to be quiet. Without the
`-q` option, `funpack` can be quite verbose, which can be annoying, but is
very useful when things go wrong. A good strategy is to tell `funpack` to
produce verbose output using the `--noisy` (`-n` for short) option, and to
send all of its output to a log file with the `--log_file` (or `-lf`)
option. For example:
> ```
> funpack -n -n -n -lf log.txt out.tsv in.tsv
> ```
%% Cell type:code id: tags:
``` bash
alias funpack="funpack -ow -q"
```
%% Cell type:markdown id: tags:
Here's the first example input data set, with UK BioBank-style column names:
%% Cell type:code id: tags:
``` bash
cat data_01.tsv
```
%% Cell type:markdown id: tags:
The numbers in each column name typically represent:
1. The variable ID
2. The visit, for variables which were collected at multiple points in time.
3. The "instance", for multi-valued variables.
Note that one **variable** is typically associated with several **columns**,
although we're keeping things simple for this first example - there is only
one visit for each variable, and there are no multi-valued variables.
> _Most but not all_ variables in the UK BioBank contain data collected at
> different visits - the times at which participants attended a UK BioBank
> assessment centre. However, there are some variables (e.g. [ICD10 diagnosis
> codes](https://biobank.ctsu.ox.ac.uk/crystal/field.cgi?id=41202)) for which
> this is not the case.
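For example, a column name such as `1-0.0` can be split into its three parts like this (a simple sketch, not FUNPACK's own parser):

```python
def parse_column(name):
    """Split a UK BioBank-style column name into
    (variable ID, visit, instance)."""
    variable, rest = name.split('-')
    visit, inst    = rest.split('.')
    return int(variable), int(visit), int(inst)

print(parse_column('1-0.0'))  # (1, 0, 0)
```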
# Import examples
## Selecting variables (columns)
You can specify which variables you want to load in the following ways, using
the `--variable` (`-v` for short), `--category` (`-c` for short) and
`--column` (`-co` for short) command line options:
* By variable ID
* By variable ranges
* By a text file which contains the IDs you want to keep.
* By pre-defined variable categories
* By column name
### Selecting individual variables
Simply provide the IDs of the variables you want to extract:
%% Cell type:code id: tags:
``` bash
funpack -v 1 -v 5 out.tsv data_01.tsv
cat out.tsv
```
%% Cell type:markdown id: tags:
### Selecting variable ranges
The `--variable`/`-v` option accepts MATLAB-style ranges of the form
`start:step:stop` (where the `stop` is inclusive):
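For instance, `1:3:10` expands to the variable IDs 1, 4, 7 and 10, as this sketch shows (note that Python's `range` stop is exclusive, hence the `+ 1`):

```python
def expand_range(spec):
    """Expand a MATLAB-style start:step:stop range
    (stop inclusive) into a list of variable IDs."""
    start, step, stop = map(int, spec.split(':'))
    return list(range(start, stop + 1, step))

print(expand_range('1:3:10'))  # [1, 4, 7, 10]
```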
%% Cell type:code id: tags:
``` bash
funpack -v 1:3:10 out.tsv data_01.tsv
cat out.tsv
```
%% Cell type:markdown id: tags:
### Selecting variables with a file
If your variables of interest are listed in a plain-text file, you can simply
pass that file:
%% Cell type:code id: tags:
``` bash
echo -e "1\n6\n9" > vars.txt
funpack -v vars.txt out.tsv data_01.tsv
cat out.tsv
```
%% Cell type:markdown id: tags:
### Selecting variables from pre-defined categories
Some UK BioBank-specific categories are [built into
`funpack`](#Built-in-rules), but you can also define your own categories - you
just need to create a `.tsv` file, and pass it to `funpack` via the
`--category_file` (`-cf` for short) option:
%% Cell type:code id: tags:
``` bash
echo -e "ID\tCategory\tVariables" > custom_categories.tsv
echo -e "1\tCool variables\t1:5,7" >> custom_categories.tsv
echo -e "2\tUncool variables\t6,8:10" >> custom_categories.tsv
cat custom_categories.tsv
```
%% Cell type:markdown id: tags:
Use the `--category` (`-c` for short) option to select categories to output. You can
refer to categories by their ID:
%% Cell type:code id: tags:
``` bash
funpack -cf custom_categories.tsv -c 1 out.tsv data_01.tsv
cat out.tsv
```
%% Cell type:markdown id: tags:
Or by name:
%% Cell type:code id: tags:
``` bash
funpack -cf custom_categories.tsv -c uncool out.tsv data_01.tsv
cat out.tsv
```
%% Cell type:markdown id: tags:
### Selecting column names
If you are working with data that has non-UK BioBank style column names, you
can use the `--column` (`-co` for short) option to select individual columns by their
name, rather than the variable with which they are associated. The `--column`
option accepts full column names, and also shell-style wildcard patterns:
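Shell-style patterns of this sort behave like Python's `fnmatch` module; e.g. the pattern `??-0.0` matches any column whose name starts with a two-character variable ID:

```python
from fnmatch import fnmatch

columns = ['4-0.0', '12-0.0', '30-1.0']

# '??' matches exactly two characters, so '4-0.0' does not match
matched = [c for c in columns if fnmatch(c, '??-0.0')]

print(matched)  # ['12-0.0']
```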
%% Cell type:code id: tags:
``` bash
funpack -co 4-0.0 -co "??-0.0" out.tsv data_01.tsv
cat out.tsv
```
%% Cell type:markdown id: tags:
## Selecting subjects (rows)
`funpack` assumes that the first column in every input file is a subject
ID. You can specify which subjects you want to load via the `--subject` (`-s`
for short) option. You can specify subjects in the same way that you specified
variables above, and also:
* By specifying a conditional expression on variable values - only subjects
for which the expression evaluates to true will be imported
* By specifying subjects to exclude
### Selecting individual subjects
%% Cell type:code id: tags:
``` bash
funpack -s 1 -s 3 -s 5 out.tsv data_01.tsv
cat out.tsv
```
%% Cell type:markdown id: tags:
### Selecting subject ranges
%% Cell type:code id: tags:
``` bash
funpack -s 2:2:10 out.tsv data_01.tsv
cat out.tsv
```
%% Cell type:markdown id: tags:
### Selecting subjects from a file
%% Cell type:code id: tags:
``` bash
echo -e "5\n6\n7\n8\n9\n10" > subjects.txt
funpack -s subjects.txt out.tsv data_01.tsv
cat out.tsv
```
%% Cell type:markdown id: tags:
### Selecting subjects by variable value
The `--subject` option accepts *variable expressions* - you can write an
expression performing numerical comparisons on variables (denoted with a
leading `v`) and combine these expressions using boolean algebra. Only
subjects for which the expression evaluates to true will be imported. For
example, to only import subjects where variable 1 is greater than 10, and
variable 2 is less than 70, you can type:
%% Cell type:code id: tags:
``` bash
funpack -s "v1 > 10 && v2 < 70" out.tsv data_01.tsv
cat out.tsv
```
%% Cell type:markdown id: tags:
The following symbols can be used in variable expressions:
| Symbol | Meaning |
|---------------------------|---------------------------------|
| `==` | equal to |
| `!=` | not equal to |
| `>` | greater than |
| `>=` | greater than or equal to |
| `<` | less than |
| `<=` | less than or equal to |
| `na` | N/A |
| `&&` | logical and |
| <code>&#x7c;&#x7c;</code> | logical or |
| `~` | logical not |
| `contains` | Contains sub-string |
| `all` | all columns must meet condition |
| `any` | any column must meet condition |
| `()` | to denote precedence |
> The `all` and `any` symbols allow you to control how an expression is
> evaluated across multiple columns which are associated with one variable
> (e.g. separate columns for each visit). We will give an example of this in
> the section on [selecting visits](#Selecting-visits), below.
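In pandas terms, the expression `v1 > 10 && v2 < 70` used earlier corresponds roughly to a boolean mask like the one below (a sketch, not how FUNPACK evaluates expressions internally):

```python
import pandas as pd

# Hypothetical data: one column per variable, one row per subject
df = pd.DataFrame({'v1': [5, 15, 20],
                   'v2': [80, 60, 90]})

# && maps to &, || to |, ~ to ~; parentheses are required in pandas
mask = (df['v1'] > 10) & (df['v2'] < 70)

print(list(df.index[mask]))  # [1]
```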
Non-numeric (i.e. string) variables can be used in these expressions in
conjunction with the `==`, `!=`, and `contains` operators, and date/time
variables can be compared using the `==`, `!=`, `>`, `>=`, `<`, and `<=`
operators. For example, imagine that we have the following data set:
%% Cell type:code id: tags:
``` bash
cat data_02.tsv
```
%% Cell type:markdown id: tags:
And we want to identify subjects who were born during or after 1965 (variable
33), and who have a value of `B` for variable 1:
%% Cell type:code id: tags:
``` bash
funpack -s "v33 >= 1965-01-01 && v1 == 'B'" out.tsv data_02.tsv
cat out.tsv
```
%% Cell type:markdown id: tags:
> When comparing dates and times, you must use the format `YYYY-MM-DD` and
> `YYYY-MM-DD HH:MM:SS`. When comparing strings, you must surround values
> with single or double quotes.
Evaluating a variable expression requires the data for every subject to be
loaded into memory before the expression can be evaluated.
A useful strategy, if you intend to work with the same subset of subjects more
than once, is to use FUNPACK once, to identify the subjects of interest, save
their IDs to a text file, and on subsequent calls to FUNPACK, use the text
file to select subjects. This means that subsequent FUNPACK runs will be