
Significant Progress in the Effective Fusion of Multi-Annotator Segmentation Labels

Zhu Hanbin | Thu, May 2, 2024, 10:37 AM EST

On April 26, a team led by Professor Xu Yanwu at the Artificial Intelligence and Digital Economy Guangdong Laboratory (Guangzhou), also known as the Pazhou Laboratory, reported significant progress in research on the effective fusion of multi-annotator segmentation labels. The work, titled "Calibrating Inter-Annotator Segmentation Uncertainty via Diagnosis-First Principle," was published in IEEE Transactions on Medical Imaging.

This work focuses on calibrating the segmentation uncertainty among different annotators using a diagnosis-first principle. In medical image segmentation, tissues and lesions are often inherently ambiguous, so multiple clinical experts typically annotate the target region collaboratively to reduce the impact of individual bias on the annotations; this, in turn, introduces uncertainty among the annotators.
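To make this concrete, the disagreement among annotators can be visualized directly from their masks. The following is a minimal sketch with toy data (not from the paper): the per-pixel mean of the binary masks gives a soft label, and the per-pixel variance highlights exactly where the annotators disagree, typically along ambiguous boundaries.

```python
import numpy as np

# Three hypothetical 4x4 binary masks from different annotators (toy data).
masks = np.array([
    [[0, 1, 1, 0], [1, 1, 1, 1], [1, 1, 1, 1], [0, 1, 1, 0]],
    [[0, 0, 1, 0], [0, 1, 1, 1], [1, 1, 1, 0], [0, 1, 0, 0]],
    [[0, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 0]],
], dtype=float)

# Per-pixel mean across annotators: a "soft" fused label in [0, 1].
soft_label = masks.mean(axis=0)

# Per-pixel variance: zero where all annotators agree, largest
# where opinions are split -- a simple map of annotation uncertainty.
disagreement = masks.var(axis=0)

print(soft_label)
print(disagreement)
```

For three binary annotations the variance peaks at 2/9 (a 2-vs-1 split), so the map directly marks the contested pixels that any fusion method must arbitrate.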

The regions marked by different annotators reflect their experience and professional knowledge, which leads to variation in the annotation results. For instance, when the average of all annotators' results is taken as the standard, less experienced experts tend to be conservative and delineate larger regions, while some experts delineate much smaller regions than their peers.

To handle this variation, majority voting is commonly employed, but it ignores differences in the annotators' expertise. In their latest research, Professor Xu Yanwu's team proposed the Diagnosis-First segmentation Framework (DiFF), which calibrates inter-annotator uncertainty in medical image segmentation using disease diagnosis as the criterion.
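The majority-voting baseline mentioned above is straightforward to sketch: each annotator's mask counts equally, and a pixel is kept only if more than half of the annotators marked it, which is exactly why expertise differences are lost.

```python
import numpy as np

def majority_vote(masks):
    """Fuse binary masks by per-pixel majority vote.

    Every annotator's opinion carries equal weight -- the
    limitation DiFF is designed to address.
    """
    masks = np.asarray(masks)
    return (masks.mean(axis=0) > 0.5).astype(np.uint8)

# Toy 2x3 masks from three hypothetical annotators.
masks = [
    [[1, 1, 0], [1, 1, 1]],
    [[1, 0, 0], [1, 1, 0]],
    [[0, 1, 0], [1, 0, 1]],
]
print(majority_vote(masks))
```

Note that a pixel marked only by the most experienced expert is discarded here just as readily as one marked only by a novice.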

According to the team, DiFF first learns to fuse the segmentation labels from multiple annotators into a single Diagnosis-First Ground Truth (DF-GT) that maximizes disease-diagnosis performance. The researchers then introduced a Take-and-Give model (T&G model) to segment the DF-GT from the original images. Through the T&G model, DiFF learns to produce segmentation results with calibrated uncertainty that facilitate disease diagnosis.
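In the paper the DF-GT fusion is learned end to end; the following is only a loose, hand-written illustration of the diagnosis-first intuition, not the authors' DiFF implementation. The sketch weights each annotator's mask by a hypothetical per-annotator diagnostic utility score (e.g. how well a downstream classifier performs when trained on that annotator's labels) before thresholding, so diagnostically informative annotations dominate the fused label.

```python
import numpy as np

def diagnosis_weighted_fusion(masks, diag_scores):
    """Toy illustration of the diagnosis-first idea (not DiFF itself):
    weight each annotator's mask by a hypothetical diagnostic utility
    score, then threshold the weighted average at 0.5.
    """
    masks = np.asarray(masks, dtype=float)
    w = np.asarray(diag_scores, dtype=float)
    w = w / w.sum()                          # normalize the weights
    fused_soft = np.tensordot(w, masks, axes=1)  # weighted per-pixel average
    return (fused_soft >= 0.5).astype(np.uint8)

# Same toy masks as before; the third annotator's labels are assumed
# (hypothetically) to support diagnosis much better than the others'.
masks = [
    [[1, 1, 0], [1, 1, 1]],
    [[1, 0, 0], [1, 1, 0]],
    [[0, 1, 1], [0, 1, 1]],
]
scores = [0.2, 0.25, 0.8]  # hypothetical diagnostic utility scores
print(diagnosis_weighted_fusion(masks, scores))
```

With these scores the fusion follows the diagnostically strongest annotator even where the other two outvote them, which is the opposite of what plain majority voting would do.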

To validate the effectiveness of DiFF, the researchers applied it to three medical image segmentation tasks: optic disc/cup (OD/OC) segmentation in fundus photographs, thyroid nodule segmentation in ultrasound images, and skin lesion segmentation in dermoscopy images. The experimental results demonstrate that DiFF effectively calibrates segmentation uncertainty, significantly improves diagnosis of the corresponding diseases, and outperforms previous multi-annotator label-fusion methods.

In their future work, Professor Xu Yanwu's team will further explore the relationship between diagnosis-first segmentation features and clinical biomarkers to explain how neural networks utilize these features to make diagnostic decisions. The team will also investigate methods for visualizing and analyzing these features.

For more information on the related paper: Link to the paper