When $`d \gg n`$, Lasso appears superior to the others.
### Panel B - structured ARD priors (in progress).
#### On the high quality data
Instead of $`\mathbf{w}\sim\mathcal{N}(0, \text{diag}(\alpha_1,\dots,\alpha_d))`$, we assume the hyperparameters have an underlying structure, e.g., $`\mathbf{w}\sim\mathcal{N}(0, \text{diag}(\exp(\mathbf{u})))`$, where $`\mathbf{u}`$ is a Gaussian process $`\mathbf{u}\sim\mathcal{N}(\mathbf{0}, \mathbf{C}_{\Theta})`$ such that neighbouring features (i.e., adjoining/co-activating voxels) share similar sparsity.
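A minimal sketch of sampling from this structured ARD prior, assuming 1-D feature positions and a squared-exponential kernel for $`\mathbf{C}_{\Theta}`$ (both are illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

d = 50                                 # number of features (e.g., voxels); illustrative
x = np.arange(d, dtype=float)          # assumed 1-D voxel positions

# GP covariance C_Theta over positions: squared-exponential kernel (assumed)
lengthscale, variance = 5.0, 1.0
C = variance * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lengthscale**2)
C += 1e-6 * np.eye(d)                  # jitter for numerical stability

# u ~ N(0, C_Theta): log-variances vary smoothly across neighbouring voxels
u = rng.multivariate_normal(np.zeros(d), C)

# w ~ N(0, diag(exp(u))): neighbouring features share similar sparsity levels
w = rng.normal(0.0, np.sqrt(np.exp(u)))
```

Because $`\mathbf{u}`$ is smooth under the GP, small $`\exp(u_i)`$ (strong shrinkage) tends to occur in contiguous runs of voxels rather than independently per feature, which is the point of structuring the ARD hyperparameters.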
#### On the low quality data
### Panel C - structured spike-and-slab priors (in progress).
#### On the high quality data
Instead of using ARD priors, we assume the coefficients have spike-and-slab priors with latent variables $`\gamma_i^{H}, i=1,2,\dots,d`$, where $`\gamma_i^{H}\sim\text{Bernoulli}(\sigma(\theta))`$. The hyperparameter $`\theta`$ can itself be given a Gaussian process prior, so that inclusion probabilities vary smoothly across neighbouring voxels.
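A minimal sketch of sampling from this prior when $`\theta`$ is a GP over voxel positions; the kernel, slab variance, and 1-D positions are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

d = 50                                 # number of features; illustrative
x = np.arange(d, dtype=float)          # assumed 1-D voxel positions

# GP prior on theta: squared-exponential kernel (assumed choice)
lengthscale = 5.0
C = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lengthscale**2)
theta = rng.multivariate_normal(np.zeros(d), C + 1e-6 * np.eye(d))

# gamma_i ~ Bernoulli(sigma(theta_i)): spatially correlated inclusion
p = 1.0 / (1.0 + np.exp(-theta))       # sigma(theta), elementwise sigmoid
gamma = rng.binomial(1, p)

# spike-and-slab: w_i = 0 when gamma_i = 0, else drawn from the slab
slab_sd = 1.0                          # assumed slab scale
w = gamma * rng.normal(0.0, slab_sd, size=d)
```

The smoothness of $`\theta`$ under the GP makes included coefficients cluster into contiguous groups of voxels, in contrast to the independent Bernoulli inclusions of a standard spike-and-slab prior.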