For a single voxel, suppose $`\mathbf{y}_{n} \sim \text{multinomial}(\pi)`$, and $`p(\mathbf{x}^{L}_{n}|y_{nk}=1) = \mathcal{N}(\mathbf{x}^{L}_{n}|\mu_{k}, \Sigma_{k}^{L})`$. To let the high-quality data inform the inference on the low-quality data, we assume $`p(\mathbf{x}^{H}_{n}|y_{nk}=1, \mathbf{U}) = \mathcal{N}(\mathbf{U}\mathbf{x}^{H}_{n}|\mu_{k}, \Sigma_{k}^{H})`$, where $`\mathbf{U}^{T}\mathbf{U} = \mathbf{I}`$. Assuming $`\mathbf{x}^{L}_{n}`$ and $`\mathbf{x}^{H}_{n}`$ are conditionally independent given $`\mathbf{y}_{n}`$, the complete log-likelihood can be written as

$`\ln p(\mathbf{X}^{L}, \mathbf{X}^{H}, \mathbf{Y} \mid \pi, \mu, \Sigma^{L}, \Sigma^{H}, \mathbf{U}) = \sum_{n=1}^{N}\sum_{k=1}^{K} y_{nk}\left[\ln\pi_{k} + \ln\mathcal{N}(\mathbf{x}^{L}_{n}|\mu_{k}, \Sigma_{k}^{L}) + \ln\mathcal{N}(\mathbf{U}\mathbf{x}^{H}_{n}|\mu_{k}, \Sigma_{k}^{H})\right]`$
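As a concrete illustration, here is a minimal NumPy/SciPy sketch of this per-voxel complete-data term. It is not the reference implementation; `x_L`, `x_H`, `y` (one-hot assignment) and the parameter arrays are hypothetical names.

```python
# A minimal sketch of the per-voxel complete-data term above, assuming
# NumPy/SciPy; x_L, x_H, y (one-hot) and the parameter arrays are hypothetical
# names, not the reference implementation.
import numpy as np
from scipy.stats import multivariate_normal


def complete_log_likelihood_voxel(x_L, x_H, y, pi, mu, Sigma_L, Sigma_H, U):
    """sum_k y_k [log pi_k + log N(x_L | mu_k, Sigma_L[k]) + log N(U x_H | mu_k, Sigma_H[k])]."""
    Ux_H = U @ x_H  # map the high-quality point into the shared space
    terms = np.array([
        np.log(pi[k])
        + multivariate_normal.logpdf(x_L, mean=mu[k], cov=Sigma_L[k])
        + multivariate_normal.logpdf(Ux_H, mean=mu[k], cov=Sigma_H[k])
        for k in range(len(pi))
    ])
    return float(y @ terms)  # y is one-hot, so this selects the assigned component's term
```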
In summary, in addition to estimating the parameters $`\pi_{k}, \mu_{k}, \Sigma_{k}^{H}, \Sigma_{k}^{L}`$, we want to estimate a transformation matrix $`\mathbf{U}`$ such that $`\mathbf{UX}^{H}`$ is as close to $`\mathbf{X}^{L}`$ as possible (or vice versa).
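For intuition, the following is a minimal sketch, assuming NumPy, equal feature dimensions for the two modalities, and hypothetical paired `(d, N)` arrays `X_H`, `X_L`, of the kind of orthogonal map meant here: under the constraint $`\mathbf{U}^{T}\mathbf{U} = \mathbf{I}`$, the least-squares solution comes from the SVD of the cross term, which is also how $`\mathbf{U}`$ is initialised and updated in Algorithm 1 below.

```python
# A minimal sketch, assuming NumPy and equal feature dimensions, of an
# orthogonal map U with U^T U = I that brings U X_H close to X_L in the
# least-squares sense; X_H and X_L are hypothetical paired (d, N) arrays.
import numpy as np


def orthogonal_map(X_H, X_L):
    """Return U = M N^T, where M D N^T is the SVD of the cross term X_L X_H^T."""
    C = X_L @ X_H.T              # (d, d) cross term between the two spaces
    M, _, Nt = np.linalg.svd(C)  # np.linalg.svd returns M, the singular values, and N^T
    return M @ Nt                # orthogonal U minimising ||U X_H - X_L||_F
```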
### Pseudo-code - Algorithm 1. EM for the Fusion of GMMs
1. Run K-means clustering on the high-quality data to generate the assignment of the voxels $`R^{(0)}`$.
2. Initialise the means $`\mu_{k}^{L}`$, $`\mu_{k}^{H}`$, covariances $`\Sigma_{k}^{L}`$, $`\Sigma_{k}^{H}`$, and mixing coefficients $`\pi_k`$ using the K-means assignment $`R^{(0)}`$, and evaluate the initial likelihood.
3. Initialise the transformation matrix $`\mathbf{U} = \mathbf{MN}^{T}`$, where $`\mathbf{MDN}^{T}`$ is the SVD of $`\sum_{k=1}^{K}\mu_{k}^{H}(\mu_{k}^{L})^{T}`$.
4. For iteration = $`1, 2, ...`$, do
    - **E-step.** Evaluate the responsibilities using the current parameter values: $`\gamma(y_{nk}) = \dfrac{\pi_{k}\,\mathcal{N}(\mathbf{x}^{L}_{n}|\mu_{k}, \Sigma_{k}^{L})\,\mathcal{N}(\mathbf{U}\mathbf{x}^{H}_{n}|\mu_{k}, \Sigma_{k}^{H})}{\sum_{j=1}^{K}\pi_{j}\,\mathcal{N}(\mathbf{x}^{L}_{n}|\mu_{j}, \Sigma_{j}^{L})\,\mathcal{N}(\mathbf{U}\mathbf{x}^{H}_{n}|\mu_{j}, \Sigma_{j}^{H})}`$.
    - **M-step.** Re-estimate $`\pi_{k}`$, $`\mu_{k}`$, $`\Sigma_{k}^{L}`$ and $`\Sigma_{k}^{H}`$ using the current responsibilities, and update the transformation matrix $`\mathbf{U}=\mathbf{MN}^{T}`$, where $`\mathbf{MDN}^{T}`$ is the SVD of $`\left(\sum_{n=1}^{N}\sum_{k=1}^{K}\gamma(y_{nk})\mu_{k}^{L}(\mathbf{x}_{n}^{H})^{T}\right)\left(\sum_{n=1}^{N}\mathbf{x}_{n}^{H}(\mathbf{x}_{n}^{H})^{T}\right)`$.
    - Evaluate the likelihood and check for convergence.
5. Use $`\mu_{k}^{L}, \Sigma_{k}^{L}, \pi_{k}`$ to assign cluster labels to unseen low-quality data points (a sketch of the full loop is given after this list).
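The following is a runnable NumPy/SciPy/scikit-learn sketch of Algorithm 1 under simplifying assumptions; it is not the reference implementation. It uses a single shared mean $`\mu_{k}`$ for both modalities, as in the model statement above, equal feature dimensions, and hypothetical inputs `X_L`, `X_H` given as paired `(N, d)` arrays.

```python
# A sketch of Algorithm 1 under simplifying assumptions (not the reference
# implementation): a single shared mean mu_k for both modalities, as in the
# model statement above; X_L, X_H are hypothetical paired (N, d) arrays.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.cluster import KMeans


def em_fusion_gmm(X_L, X_H, K, n_iter=100, tol=1e-6, seed=0):
    N, d = X_L.shape
    ridge = 1e-6 * np.eye(d)  # keeps the covariance estimates positive definite

    # Steps 1-2: K-means on the high-quality data gives the initial assignment R^(0).
    labels = KMeans(n_clusters=K, n_init=10, random_state=seed).fit_predict(X_H)
    pi = np.array([(labels == k).mean() for k in range(K)])
    mu = np.array([X_L[labels == k].mean(axis=0) for k in range(K)])
    mu_H = np.array([X_H[labels == k].mean(axis=0) for k in range(K)])
    Sigma_L = np.array([np.cov(X_L[labels == k].T) + ridge for k in range(K)])
    Sigma_H = np.array([np.cov(X_H[labels == k].T) + ridge for k in range(K)])

    # Step 3: initialise U = M N^T from the SVD of the cross term between cluster means.
    M, _, Nt = np.linalg.svd(mu.T @ mu_H)
    U = M @ Nt

    prev_ll = -np.inf
    for _ in range(n_iter):  # Step 4
        # E-step: responsibilities gamma(y_nk) from the current parameter values.
        UX_H = X_H @ U.T  # each row is U x_n^H, mapped into the shared space
        log_p = np.zeros((N, K))
        for k in range(K):
            log_p[:, k] = (np.log(pi[k])
                           + multivariate_normal.logpdf(X_L, mean=mu[k], cov=Sigma_L[k])
                           + multivariate_normal.logpdf(UX_H, mean=mu[k], cov=Sigma_H[k]))
        log_norm = np.logaddexp.reduce(log_p, axis=1, keepdims=True)
        gamma = np.exp(log_p - log_norm)
        ll = log_norm.sum()  # observed-data log-likelihood

        # M-step: re-estimate pi, mu and the covariances from the responsibilities.
        Nk = gamma.sum(axis=0)
        pi = Nk / N
        # Shared mean: simple average over the two modalities (an exact update
        # would weight the modalities by their covariances).
        mu = (gamma.T @ X_L + gamma.T @ UX_H) / (2.0 * Nk[:, None])
        for k in range(K):
            dL = X_L - mu[k]
            dH = UX_H - mu[k]
            Sigma_L[k] = (gamma[:, k, None] * dL).T @ dL / Nk[k] + ridge
            Sigma_H[k] = (gamma[:, k, None] * dH).T @ dH / Nk[k] + ridge

        # Update U from the SVD of the responsibility-weighted cross term
        # sum_n sum_k gamma(y_nk) mu_k (x_n^H)^T (a simplified form of the update above).
        C = (gamma @ mu).T @ X_H
        M, _, Nt = np.linalg.svd(C)
        U = M @ Nt

        # Check the likelihood for convergence.
        if np.abs(ll - prev_ll) < tol * (np.abs(ll) + 1e-12):
            break
        prev_ll = ll

    return pi, mu, Sigma_L, Sigma_H, U, gamma
```

Unseen low-quality voxels can then be assigned using only `pi`, `mu` and `Sigma_L`, as in step 5.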
### Simulation results
#### We considered three scenarios
##### I. Low-quality data noisier than the high-quality data