Abstract:
Cross-modal image synthesis is an important task in medical image processing. The annotated medical images required by supervised models are difficult and expensive to obtain, which prevents existing image synthesis models from preserving the structural information of the input image well. To address this problem, a novel unsupervised cross-modal medical image synthesis method that fuses edge-perception information is proposed. The algorithm takes CycleGAN as its basic framework, adopts an improved U-Net as the generator network, and incorporates residual paths into the skip connections to alleviate the semantic gap between the encoder and decoder. Adjacent convolution blocks in the contracting path of the encoder and the expanding path of the decoder are fused in a densely connected manner to increase the reuse of feature information and improve the feature-expression ability of the network. An edge-perception module is then added so that the network learns the texture and edge information of medical images simultaneously, which better reflects abnormal areas and helps doctors distinguish normal from diseased tissue. Finally, experiments on a public brain dataset demonstrate the effectiveness of the proposed cross-modal medical image synthesis method, and its generalization performance is further validated by applying it to other scenarios.
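
As an illustration of the residual-path skip connection described above, the following is a minimal PyTorch sketch; the module name ResPath, the chain length, and the layer widths are assumptions made for exposition, not the paper's exact configuration.

import torch
import torch.nn as nn

class ResPath(nn.Module):
    # Residual path (sketch): a chain of 3x3 convolutions with 1x1
    # shortcuts, applied to encoder features before they are concatenated
    # with decoder features, so that shallow encoder activations are
    # refined toward the semantic level of the decoder.
    def __init__(self, channels, length=4):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=3, padding=1)
             for _ in range(length)])
        self.shortcuts = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=1)
             for _ in range(length)])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        for conv, shortcut in zip(self.convs, self.shortcuts):
            # Each stage adds a 1x1 shortcut to the 3x3 convolution output.
            x = self.act(conv(x) + shortcut(x))
        return x

# Usage: refine a 64-channel encoder feature map before the skip concatenation.
skip = ResPath(channels=64)(torch.randn(1, 64, 128, 128))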