Commit 772cc0c3 authored by Ying-Qiu Zheng

Update 2021JUL21.md
Suppose $`\mathbf{X}^{H}, \mathbf{X}^{L}`$ are $`N \times V`$ feature matrices (high- and low-quality data, respectively).
For a single voxel, suppose $`\mathbf{y}_{n} \sim \text{multinomial}(\pi)`$ and $`p(\mathbf{x}^{L}_{n}|y_{nk}=1) = \mathcal{N}(\mathbf{x}^{L}_{n}|\mu_{k}, \Sigma_{k}^{L})`$. To use the high-quality data to inform inference on the low-quality data, we assume $`p(\mathbf{x}^{H}_{n}|y_{nk}=1, \mathbf{U}) = \mathcal{N}(\mathbf{U}\mathbf{x}^{H}_{n}|\mu_{k}, \Sigma_{k}^{H})`$, where $`\mathbf{U}^{T}\mathbf{U} = \mathbf{I}`$. The complete likelihood can be written as
```math
p(\mathbf{x}^{H}_{n}, \mathbf{x}^{L}_{n}, \mathbf{y}_{n}|\mathbf{U}, \pi, \{\mu_{k}\}, \{\Sigma^{H}_{k}\}, \{\Sigma^{L}_{k}\}) = \prod_{k=1}^{K}\left(\pi_{k}\,\mathcal{N}(\mathbf{x}^{L}_{n}|\mu_{k},\Sigma_{k}^{L})\,\mathcal{N}(\mathbf{Ux}^{H}_{n}|\mu_{k},\Sigma_{k}^{H})\right)^{y_{nk}}
```
The marginal distribution of $`\mathbf{x}_{n}^{L}, \mathbf{x}_{n}^{H}`$ is
```math
p(\mathbf{x}^{H}_{n}, \mathbf{x}^{L}_{n} | \mathbf{U}, \pi, \{\mu_{k}\}, \{\Sigma^{H}_{k}\}, \{\Sigma^{L}_{k}\}) = \sum_{k=1}^{K}\pi_{k}\,\mathcal{N}(\mathbf{x}^{L}_{n}|\mu_{k},\Sigma_{k}^{L})\,\mathcal{N}(\mathbf{Ux}^{H}_{n}|\mu_{k},\Sigma_{k}^{H})
```
In summary, in addition to finding the parameters $`\pi, \mu_{k}, \Sigma_{k}^{H}, \Sigma^{L}_{k}`$, we want to estimate a transformation matrix $`\mathbf{U}`$ such that $`\mathbf{UX}^{H}`$ is as close to $`\mathbf{X}^{L}`$ as possible (or vice versa).
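Since the fit will be monitored through the marginal likelihood above, a small helper for evaluating it is useful. A minimal numpy/scipy sketch (not from the notes; the array names and the rows-as-voxels layout are assumptions), returning $`\log p(\mathbf{x}^{H}_{n}, \mathbf{x}^{L}_{n})`$ for every voxel:

```python
# Minimal sketch, assuming feature vectors are stored as rows of the
# (N x D) arrays XH and XL; all variable names here are illustrative.
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def log_marginal(XH, XL, U, pi, mu, Sigma_H, Sigma_L):
    """log p(x^H_n, x^L_n | U, pi, {mu_k}, {Sigma_k}) for every voxel n."""
    XHU = XH @ U.T                                   # rows are U x^H_n
    log_terms = np.stack([
        np.log(pi[k])
        + multivariate_normal.logpdf(XL,  mu[k], Sigma_L[k])
        + multivariate_normal.logpdf(XHU, mu[k], Sigma_H[k])
        for k in range(len(pi))], axis=1)            # shape (N, K)
    # log-sum-exp over k is the log of the mixture in the equation above
    return logsumexp(log_terms, axis=1)
```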
4. For iteration = $`1, 2, ...`$, do
- **E-step.** Evaluate the responsibilities using the current parameter values
    - $`\gamma(y_{nk}) = \frac{\pi_{k}\mathcal{N}(\mathbf{x}^{L}_{n} | \mu_{k}^{L}, \Sigma_{k}^{L})\mathcal{N}(\mathbf{Ux}^{H}_{n} | \mu_{k}^{L}, \Sigma_{k}^{H})}{\sum_{j=1}^{K}\pi_{j}\mathcal{N}(\mathbf{x}^{L}_{n} | \mu_{j}^{L}, \Sigma_{j}^{L})\mathcal{N}(\mathbf{Ux}^{H}_{n} | \mu_{j}^{L}, \Sigma_{j}^{H})}`$
    - **M-step.** Re-estimate the parameters using the current responsibilities by setting the derivatives of the log-likelihood to zero (a sketch of one full iteration follows this list)
    - $`\mu_{k}^{L} = \frac{1}{N_{k}}((\Sigma^{H}_{k})^{-1} + (\Sigma^{L}_{k})^{-1} )^{-1}\sum_{n=1}^{N}\gamma(y_{nk})((\Sigma_{k}^{H})^{-1}\mathbf{Ux}^{H}_{n} + (\Sigma_{k}^{L})^{-1}\mathbf{x}_{n}^{L} )`$, where $`N_{k} = \sum_{n=1}^{N}\gamma(y_{nk})`$
- $`\Sigma_{k}^{L} = \frac{1}{N_{k}}\sum_{n=1}^{N}\gamma(y_{nk})(\mathbf{x}^{L}_{n} - \mathbf{\mu}_{k}^{L})(\mathbf{x}^{L}_{n} - \mathbf{\mu}_{k}^{L})^{T}`$
- $`\Sigma_{k}^{H} = \frac{1}{N_{k}}\sum_{n=1}^{N}\gamma(y_{nk})(\mathbf{Ux}^{H}_{n} - \mathbf{\mu}_{k}^{L})(\mathbf{Ux}^{H}_{n} - \mathbf{\mu}_{k}^{L})^{T}`$
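To make the iteration concrete, here is a sketch of one full EM step under the same conventions as the previous snippet. Two updates are not shown in this excerpt and are assumptions here: $`\pi_{k} = N_{k}/N`$ is the standard mixture-weight update, and the $`\mathbf{U}`$ step below is an orthogonal-Procrustes placeholder, used only because it keeps $`\mathbf{U}^{T}\mathbf{U} = \mathbf{I}`$ while pulling $`\mathbf{UX}^{H}`$ toward $`\mathbf{X}^{L}`$; it is not necessarily the notes' update.

```python
# Sketch of one EM iteration for the coupled mixture; pi_k and U updates
# are assumptions (standard mixture weights + orthogonal Procrustes).
import numpy as np
from scipy.stats import multivariate_normal

def em_step(XH, XL, U, pi, mu, Sigma_H, Sigma_L):
    N, K = XL.shape[0], len(pi)

    # E-step: gamma(y_nk) from the expression above, via a stabilised softmax.
    XHU = XH @ U.T
    log_g = np.stack([np.log(pi[k])
                      + multivariate_normal.logpdf(XL,  mu[k], Sigma_L[k])
                      + multivariate_normal.logpdf(XHU, mu[k], Sigma_H[k])
                      for k in range(K)], axis=1)
    log_g -= log_g.max(axis=1, keepdims=True)
    gamma = np.exp(log_g)
    gamma /= gamma.sum(axis=1, keepdims=True)

    # M-step: the updates listed above, with N_k = sum_n gamma(y_nk).
    Nk = gamma.sum(axis=0)
    pi = Nk / N
    for k in range(K):
        PH, PL = np.linalg.inv(Sigma_H[k]), np.linalg.inv(Sigma_L[k])
        # mu_k: precision-weighted combination of U x^H_n and x^L_n
        rhs = (gamma[:, [k]] * (XHU @ PH + XL @ PL)).sum(axis=0)
        mu[k] = np.linalg.solve(PH + PL, rhs) / Nk[k]
        dL, dH = XL - mu[k], XHU - mu[k]
        Sigma_L[k] = (gamma[:, [k]] * dL).T @ dL / Nk[k]
        Sigma_H[k] = (gamma[:, [k]] * dH).T @ dH / Nk[k]

    # Placeholder U-update (assumption): min_U ||XH U^T - XL||_F s.t. U^T U = I.
    W, _, Vt = np.linalg.svd(XH.T @ XL, full_matrices=False)
    U = (W @ Vt).T
    return U, pi, mu, Sigma_H, Sigma_L
```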