### How does smoothing (on individual resting-state data) change the results?

* After smoothing with a Gaussian kernel of 1.8 mm sigma, residual predictions were greatly improved at dimension 25 ([figure 1](figs/smooth25.png)) and dimension 100 ([figure 2](figs/smooth100.png)) using the ICA basis. (Here the prediction was made using the first 100 subjects, leave-one-out.) The results were subsequently updated on ~12,000 subjects (and best-matched subjects were searched over these ~12,000).

* After smoothing, however, using coefficients averaged from the 100 best-matched subjects only marginally improved prediction accuracy compared with using randomly selected subjects: [figure 3](figs/ukb_best100.png).
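The smoothing step can be sketched as follows. This is a minimal illustration using `scipy.ndimage.gaussian_filter`; the 2 mm voxel size is an assumption (the sigma must be converted from mm to voxel units), and the random array merely stands in for real rfMRI data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_volume(vol, sigma_mm=1.8, voxel_size_mm=2.0):
    """Smooth a 3D volume with a Gaussian kernel of the given sigma (in mm).

    voxel_size_mm is an assumed acquisition resolution, used only to
    convert sigma from mm to voxel units.
    """
    sigma_vox = sigma_mm / voxel_size_mm
    return gaussian_filter(vol, sigma=sigma_vox)

vol = np.random.default_rng(0).random((91, 109, 91))  # toy volume, not real data
smoothed = smooth_volume(vol)
```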

### 1) Does a weighted average add any advantage? How well does prediction work using bases averaged from other subjects?

* Results on UKB data are shown in [figure 3](figs/ukb_best100.png). The left panel shows prediction using the new subject's own bases; the right panel shows prediction using bases averaged from other subjects (100 best-matched, randomly selected, or least-matched). Green and orange boxplots show predictions based on weighted and unweighted averages of the 100 best-matched subjects, respectively. Including the weights has only a minor effect on prediction accuracy...
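The weighted vs. unweighted averaging compared above can be sketched as below. All arrays and the similarity scores are illustrative stand-ins, not quantities from the actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_matched, n_voxels, n_comps = 100, 5000, 25

# Toy stand-ins: ICA/DR bases of the 100 best-matched subjects and their
# similarity scores to the new subject (values are random placeholders).
bases = rng.standard_normal((n_matched, n_voxels, n_comps))
similarity = rng.uniform(0.5, 1.0, size=n_matched)

# Unweighted average of the matched subjects' bases
unweighted = bases.mean(axis=0)

# Similarity-weighted average (weights normalised to sum to 1)
w = similarity / similarity.sum()
weighted = np.tensordot(w, bases, axes=1)
```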

...

...

* On UKB data: fastica.m was run on the first 25 columns of MIGP (voxels × 25) with numIC=25 (apparently without demeaning), and the corresponding DR maps were created for each subject, which improved the prediction accuracy: [figure 6](figs/fastica25.png). (Here the prediction was made using the first 100 subjects, leave-one-out.)

* On HCP data: no improvements.
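The DR (dual regression) step mentioned above can be sketched roughly as follows: spatial regression of the group maps against each subject's data yields subject timecourses, and temporal regression of those timecourses yields subject-specific spatial maps. Dimensions are illustrative, and plain least squares stands in for the actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_time, n_ic = 5000, 490, 25

group_maps = rng.standard_normal((n_voxels, n_ic))   # e.g. fastica output on MIGP
subj_data = rng.standard_normal((n_voxels, n_time))  # one subject's rfMRI (voxels x time)

# Stage 1: spatial regression -> subject-specific timecourses (n_ic x n_time)
timecourses, *_ = np.linalg.lstsq(group_maps, subj_data, rcond=None)

# Stage 2: temporal regression -> subject-specific spatial maps (n_voxels x n_ic)
subj_maps_T, *_ = np.linalg.lstsq(timecourses.T, subj_data.T, rcond=None)
subj_maps = subj_maps_T.T
```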

### Correlation of predicted amplitude with nIDPs on UKB, across ~12,000 subjects

* The left panel shows the correlation matrix between the four selected IDPs (faces-shapes related) and the nIDPs (age, wearing glasses); the middle panel, the correlation matrix between the amplitudes of the three tasks (faces, shapes, faces-shapes) and the selected nIDPs; the right panel, the correlation matrix between the predicted amplitudes of the three tasks and the four nIDPs ([figure 7](figs/ukb_idps.png)). Neither the task amplitudes nor the predicted amplitudes are correlated with these nIDPs (although the IDPs and the amplitudes are correlated).
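The cross-correlation matrices in these panels can be computed along the following lines. The data here are random stand-ins; only the column counts (three amplitudes, four nIDPs) follow the text.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subjects = 12000

# Toy stand-ins for the quantities correlated in the right panel.
pred_amplitudes = rng.standard_normal((n_subjects, 3))
nidps = rng.standard_normal((n_subjects, 4))

def cross_corr(a, b):
    """Pearson correlation between each column of a and each column of b."""
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return a.T @ b / len(a)

r = cross_corr(pred_amplitudes, nidps)  # (3 amplitudes) x (4 nIDPs)
```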

### Visualisation of actual residual maps and predicted residual maps

* Actual residual maps ([figure 8](figs/sub1_true_1.png)) with predicted residual maps overlaid ([figure 9](figs/sub1_reconst_1.png)).

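The overlays in figures 8 and 9 can be reproduced in outline as below, with toy 2D slices standing in for the real residual maps; the ±1.5 display threshold is an arbitrary choice for illustration.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
actual = rng.standard_normal((91, 109))     # toy axial slice of an actual residual map
predicted = rng.standard_normal((91, 109))  # toy slice of the predicted residual map

fig, ax = plt.subplots()
ax.imshow(actual.T, cmap="gray", origin="lower")
# Overlay the predicted map, masking sub-threshold values so the underlay shows through
overlay = np.ma.masked_inside(predicted, -1.5, 1.5)
ax.imshow(overlay.T, cmap="hot", origin="lower", alpha=0.7)
fig.savefig("overlay.png")
```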
### Predicting differential contrast maps

* Slowly exploring...

* Learn a transformation to apply to the reconstruction coefficients...?

* Do ICA (and dual regression) at high dimension (e.g. 500 ICA components, so that subcortical processes can be sufficiently represented) and use lasso/elastic net to select the useful ones...?
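The lasso/elastic-net idea could look roughly like this. Everything here (synthetic reconstruction coefficients, the target, `ElasticNetCV` with these settings) is an assumption about how one might implement the selection step, not the actual pipeline.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(4)
n_subjects, n_comps = 200, 500  # e.g. 500 high-dimensional ICA components

# Toy stand-ins: per-subject reconstruction coefficients for each component
# and a per-subject target. Only the first 5 components carry signal, so the
# sparse penalty should zero out most of the 500 weights.
coeffs = rng.standard_normal((n_subjects, n_comps))
target = coeffs[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(n_subjects)

model = ElasticNetCV(l1_ratio=0.9, cv=5).fit(coeffs, target)
selected = np.flatnonzero(model.coef_)  # indices of components kept by the model
```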