ResNet50 is a common classification model. It was pre-trained on the
source domain and then fine-tuned and tested on the target domain; hence, no
DA was involved. In Table 5, ResNet50 underperforms all other
methods on all sequences. A possible reason is the weak
cross-domain knowledge transferability of the fine-tuning strategy, which
highlights the advantage of domain adaptation methods over plain
fine-tuning. Another baseline, DANN, is GAN-based and has been widely
employed in lesion assessment; it extracts low-level features from
the entire image. Deep CORAL was also included, which
transfers domain knowledge by aligning the second-order
statistics (covariances) of the source and target features. Like DANN, it
adopts a shared encoder that extracts features from the whole image slice
(both alignment mechanisms are sketched after this paragraph). In contrast,
our model fuses lesion features with prostate features for effective DA,
rather than extracting prostate-level features alone. We "strengthen" the
point labels into coarse mask labels so that features, in particular
lesion features, can be robustly aligned for DA using these masks.
In Table 5, CMD²A-Net outperforms the two UDA models on all
sequences in terms of AUC, indicating the effectiveness of our model in
cross-domain feature harmonization and its advantage in prostate lesion
classification. It is worth noting that all four models achieve their
highest AUCs with the ensembled sequence. A consistent conclusion can
be found in Section Cross-domain Malignancy Classification and Lesion Detection,
again showing the benefit of the all-sequence-ensembled method.
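For reference, the alignment signals used by the two UDA baselines can be summarized compactly. The following is a minimal PyTorch sketch rather than the implementation used in our experiments: `coral_loss` penalizes the distance between source and target feature covariances (the second-order statistics aligned by Deep CORAL), and `GradReverse` is the gradient-reversal trick behind DANN's adversarial feature alignment. All names, shapes, and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

def coral_loss(source_feats, target_feats):
    """CORAL loss: squared Frobenius distance between the second-order
    statistics (feature covariances) of source and target batches.
    source_feats, target_feats: (batch, d) feature matrices."""
    d = source_feats.size(1)

    def covariance(x):
        # Center the features, then compute the d x d sample covariance.
        x = x - x.mean(dim=0, keepdim=True)
        return x.t() @ x / (x.size(0) - 1)

    cs, ct = covariance(source_feats), covariance(target_feats)
    return ((cs - ct) ** 2).sum() / (4 * d * d)


class GradReverse(torch.autograd.Function):
    """Gradient-reversal layer used by DANN: identity on the forward pass,
    negated (scaled) gradient on the backward pass, so the encoder is pushed
    toward domain-invariant features while the domain classifier trains normally."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


# Usage sketch: adversarial domain loss on encoder features (placeholder sizes).
domain_classifier = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))
feats = torch.randn(8, 256)                      # stand-in for encoder output
domain_logits = domain_classifier(GradReverse.apply(feats, 1.0))
```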
Visualization of Sample Distribution and Ablation Study
Apart from AUC, we also visualize the sample distributions of the source and target domains to intuitively support the improvement in handling domain shift. The P-x and LC-A datasets were used to examine the data distribution before and after DA. t-SNE [19] was employed to visualize the data distributions of all sequences, i.e., T2, ADC, and hDWI. Fifty mpMRI cases were randomly chosen from each dataset. As shown in Figure 3a-c, obvious clustering can be observed before DA in each sequence, indicating severe domain shift between the two domains. After CMD²A-Net training (i.e., DA), domain-invariant features were extracted by the trained model; for each sequence, samples from the two cohorts become evenly mixed, demonstrating that CMD²A-Net achieves feature alignment across the heterogeneous mpMRI sequences [20].
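As a rough guide to how such a plot is produced, the snippet below sketches the t-SNE step with scikit-learn, assuming per-case encoder features for one sequence have already been exported as NumPy arrays; `feats_src` and `feats_tgt` are placeholder names rather than variables from our codebase.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Placeholder feature matrices for one sequence (e.g., T2): fifty randomly
# chosen cases per cohort, each represented by an encoder feature vector.
feats_src = np.random.randn(50, 256)   # P-x (source) features
feats_tgt = np.random.randn(50, 256)   # LC-A (target) features

# Embed both cohorts jointly so the 2-D coordinates are directly comparable.
embedded = TSNE(n_components=2, perplexity=30, init="pca",
                random_state=0).fit_transform(np.vstack([feats_src, feats_tgt]))

plt.scatter(embedded[:50, 0], embedded[:50, 1], label="P-x (source)")
plt.scatter(embedded[50:, 0], embedded[50:, 1], label="LC-A (target)")
plt.legend()
plt.title("t-SNE of encoder features (one sequence)")
plt.show()
```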