In summary, in addition to finding the hyper-parameters $`\pi, \mu, \Sigma_{k}^{H}, \Sigma^{L}_{k}`$, we want to estimate a transformation matrix $`\mathbf{U}`$ such that $`\mathbf{UX}^{H}`$ is as close to $`\mathbf{X}^{L}`$ as possible (or vice versa).

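As a minimal sketch of this idea (separate from the full model fit, whose estimation procedure is not detailed here), such a $`\mathbf{U}`$ can be obtained by ordinary least squares on paired samples, minimizing $`\lVert \mathbf{UX}^{H} - \mathbf{X}^{L} \rVert_F`$. All names and the synthetic data below are illustrative assumptions, not the source's actual setup.

```julia
using LinearAlgebra, Random

Random.seed!(1)

d, n = 3, 200
X_H = randn(d, n)                        # high-quality features (columns = samples)
U_true = [1.0 0.2 0.0;                   # hypothetical ground-truth transformation
          0.0 1.0 0.1;
          0.1 0.0 1.0]
X_L = U_true * X_H + 0.01 * randn(d, n)  # low-quality data ≈ linear map of high-quality

# Right division solves min_U ‖U * X_H - X_L‖_F in the least-squares sense
U_hat = X_L / X_H

println(norm(U_hat - U_true))            # small recovery error under low noise
```

Here `/` is Julia's right matrix division, which solves the least-squares problem directly; in the joint model, $`\mathbf{U}`$ would instead be updated together with the cluster parameters.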
### Simulation results

#### Methods

We considered three scenarios.

##### I. Low-quality data noisier than the high-quality data

We simulate the case where the features of the low-quality data are noisier than those of the high-quality data; the number of informative features remains the same, however.

```julia
noise_level = 10
d = 3

# the high- and low-quality data share the same cluster centroid