
Top Three Mistakes When Annotating Radiology Exams

Hari Trivedi, November 25, 2023

1. Ignoring Inherent Ambiguity in Radiology

Like most of medicine, radiology is not black and white (although our images may be). There are many ways to describe a single finding, and two identical-appearing findings may have different diagnoses. Translating this gray area into clearly demarcated classes for machine learning is difficult and, if done incorrectly, can result in noisy training and validation data. For example, inter-reader agreement for pulmonary nodules on chest CT is only 65%, as radiologists often disagree whether an opacity is a nodule, atelectasis, or a small lymph node. Similarly, organ or lesion boundaries can be difficult to define, particularly when margins are poorly demarcated (think of the pancreas). Left unaddressed, these factors lead to unexpected annotation variability with downstream impact on model performance and validation.

Lightbox AI Solution: Our experience across modalities, body regions, and disease processes enables us to anticipate and address these challenges proactively. Dr. Trivedi has built some of the world's largest medical imaging datasets, and that experience translates into optimized annotation protocols for your project, saving time and avoiding the costs of imprecise annotations.
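As a rough illustration of how this variability can be quantified before an annotation protocol is locked in, inter-reader agreement on a pilot batch can be checked with a chance-corrected statistic such as Cohen's kappa. The sketch below uses invented labels and scikit-learn; it is an example of the kind of check worth running, not a Lightbox deliverable.

```python
# Minimal sketch: measuring inter-reader agreement on a hypothetical nodule-labeling task.
# Requires scikit-learn; the label arrays here are invented for illustration only.
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-case labels from two radiologists:
# 0 = no finding, 1 = nodule, 2 = atelectasis, 3 = lymph node
reader_a = [1, 1, 0, 2, 1, 3, 0, 1, 2, 1]
reader_b = [1, 2, 0, 2, 1, 1, 0, 1, 3, 1]

# Cohen's kappa corrects raw percent agreement for agreement expected by chance.
raw_agreement = sum(a == b for a, b in zip(reader_a, reader_b)) / len(reader_a)
kappa = cohen_kappa_score(reader_a, reader_b)
print(f"Raw agreement: {raw_agreement:.2f}")
print(f"Cohen's kappa: {kappa:.2f}")
```

A low kappa on a pilot batch is usually a sign that class definitions need to be tightened or an adjudication step added before full-scale annotation begins.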

2. Poor Project-Annotator Fit

Required annotator qualifications depend on the use case, particularly when regulatory bodies are involved. Generally, model performance must be compared to that of potential end-users. Models intended for general radiologists (for example, many chest x-ray models) should therefore be compared against general radiologist performance, whereas highly subspecialty-specific models may need to be compared against subspecialty radiologists. On occasion, models are used by non-radiologists, such as in emergency department triage; in that case, regulatory bodies may request that ground truth be annotated by non-radiologist physicians or advanced practice providers. Ignoring this can lead to skewed ground truth, disadvantaging your model in validation studies or even requiring re-annotation of entire datasets.

Lightbox AI Solution: Every project begins with a thorough understanding of the model's use case, including exam types and technical parameters, deployment location, and end-users. We meticulously match each project with the right radiologists (or non-radiologists), ensuring that your ground truth is valid and reliable.
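As a hypothetical sketch of the comparison described above, model output and a candidate end-user's reads can be scored against the same adjudicated ground truth. The labels and reader below are invented for illustration only.

```python
# Minimal sketch: comparing model performance against the intended end-user's
# performance on the same ground truth. All labels below are invented.
from sklearn.metrics import confusion_matrix

ground_truth = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # expert-adjudicated labels
model_preds  = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0]   # hypothetical model output
reader_preds = [1, 0, 0, 1, 0, 0, 1, 0, 1, 1]   # hypothetical end-user reads

def sensitivity_specificity(y_true, y_pred):
    # Binary confusion matrix unpacks as tn, fp, fn, tp.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn), tn / (tn + fp)

for name, preds in [("model", model_preds), ("end-user reader", reader_preds)]:
    sens, spec = sensitivity_specificity(ground_truth, preds)
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```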

3. Poor Project-Platform Fit

Selecting the right annotation platform can be the difference between a successful and an unsuccessful annotation project. Although open-source platforms running on local machines may seem like the simplest and most cost-effective solution, in most cases the opposite is true. Most open-source products, while feature-rich and flexible, are not optimized for efficiency. Many companies also maintain in-house annotation engines, which may be functional but are likewise not built for speed. Lags of even a few seconds when loading cases, changing series, or scrolling add up to significant time losses.

Lightbox AI Solution: We have found that the correct annotation platform can improve efficiency by up to 70%, saving you both time and money. We offer multiple platform solutions that range from basic image-level annotations to complex segmentations, streamlining the annotation process. Once completed, annotations can be exported directly to your environment to evaluate model and reader performance.
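For instance, once segmentation annotations are exported, a simple overlap metric such as the Dice coefficient can compare model masks against reader masks. The sketch below uses synthetic arrays in place of real exported masks and is illustrative only.

```python
# Minimal sketch: comparing a model's exported segmentation mask against a
# radiologist's annotation with the Dice coefficient. Arrays are synthetic;
# real masks would be loaded from the exported annotation files.
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Synthetic 2D masks standing in for a model prediction and a reader annotation.
rng = np.random.default_rng(0)
model_mask = rng.random((128, 128)) > 0.7
reader_mask = rng.random((128, 128)) > 0.7

print(f"Dice: {dice_coefficient(model_mask, reader_mask):.2f}")
```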

At Lightbox.ai, we understand the intricacies of radiology AI annotation projects. Our expertise and tailored approach ensure that your project is set up for success, avoiding common pitfalls and delivering quality, efficiency, and accuracy every step of the way.
