Interactive ScribblePrompt Framework for Medical Scan Annotation

AI for medical scans

To the untrained eye, a medical image such as an MRI or X-ray looks like a blurry collection of black-and-white blobs, and determining where one structure (such as a tumor) ends and another begins is difficult. To address this problem, researchers designed the interactive ScribblePrompt framework for medical scan annotation.

ScribblePrompt

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts General Hospital (MGH), and Harvard Medical School created the interactive “ScribblePrompt” framework for medical scan annotation. It is a versatile tool that can quickly segment any medical image, including those it has never seen before.

Rather than having people manually mark up each image, the team simulated how users would annotate more than 50,000 scans, including MRIs, ultrasounds, and photographs, covering features in the eyes, cells, brains, bones, skin, and more.
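One way to simulate human annotations on labeled scans is to sample clicks from a ground-truth mask and draw scribbles as short strokes inside it. The sketch below is purely illustrative: the function names and the random-walk scribble are assumptions, not the authors' actual simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_click(mask):
    """Sample one positive click uniformly from inside a ground-truth mask
    (an assumed, simplified stand-in for simulated user clicks)."""
    rows, cols = np.nonzero(mask)
    i = rng.integers(len(rows))
    return int(rows[i]), int(cols[i])

def simulate_scribble(mask, length=20):
    """Draw a random walk constrained to the mask, as a toy stand-in
    for a human scribble over the target structure."""
    r, c = simulate_click(mask)
    stroke = np.zeros_like(mask)
    for _ in range(length):
        stroke[r, c] = 1
        dr, dc = rng.integers(-1, 2, size=2)  # step -1, 0, or +1 in each axis
        nr, nc = r + dr, c + dc
        # Only move if the new position stays inside the image and the mask
        if 0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1] and mask[nr, nc]:
            r, c = nr, nc
    return stroke
```

In practice, one would generate many such clicks and scribbles per training image so the model sees the variety of prompts real annotators produce.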

To categorize the scans, the scientists used algorithms that mimic how humans scribble and click on different sections of medical images. Beyond widely labeled regions, the team employed superpixel techniques, which find parts of an image with similar values, to uncover potential new areas of interest to medical researchers and to train ScribblePrompt to segment them. This synthetic data prepared ScribblePrompt to handle real-world segmentation requests from users.
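Superpixel methods group neighboring pixels with similar values into coherent regions. As a rough illustration of the idea only (not the specific algorithm the team used), here is a toy SLIC-style clustering in plain numpy; real pipelines would typically call a library routine such as `skimage.segmentation.slic` instead.

```python
import numpy as np

def simple_superpixels(image, n_segments=16, spatial_weight=0.5, n_iter=5):
    """Toy SLIC-style superpixels: k-means on (row, col, intensity) features.
    Illustrative only; n_segments is rounded down to a perfect square."""
    h, w = image.shape
    rows, cols = np.mgrid[0:h, 0:w]
    feats = np.stack([rows.ravel() * spatial_weight,
                      cols.ravel() * spatial_weight,
                      image.ravel().astype(float)], axis=1)
    # Initialize cluster centers on a regular grid over the image
    side = int(np.sqrt(n_segments))
    init_r = np.linspace(0, h - 1, side).astype(int)
    init_c = np.linspace(0, w - 1, side).astype(int)
    centers = feats[np.add.outer(init_r * w, init_c).ravel()]
    for _ in range(n_iter):
        # Assign every pixel to its nearest center, then recompute centers
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(len(centers)):
            mask = labels == k
            if mask.any():
                centers[k] = feats[mask].mean(0)
    return labels.reshape(h, w)

# Example: an image with two flat regions splits cleanly into superpixels
img = np.zeros((32, 32))
img[:, 16:] = 255.0
labels = simple_superpixels(img, n_segments=16)
```

Because the feature vector mixes position and intensity, each superpixel stays spatially compact while respecting intensity boundaries, which is what makes such regions useful as candidate structures for segmentation training.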

MIT Ph.D. student Hallee Wong SM ’22 said:

“AI has significant potential in analyzing images and other high-dimensional data to help humans do things more productively,”

“We want to augment, not replace, the efforts of medical workers through an interactive system. ScribblePrompt is a simple model with the efficiency to help doctors focus on the more interesting parts of their analysis. It’s faster and more accurate than comparable interactive segmentation methods, reducing annotation time by 28 percent compared to Meta’s Segment Anything Model (SAM) framework, for example.”

ScribblePrompt’s interface is simple. Users scribble over or click on the rough region they want to segment, and the tool highlights either the entire structure or the background, as requested. On a retinal (eye) scan, for example, users can click on individual veins. ScribblePrompt can also mark up a structure given a bounding box.
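Interactive segmentation models commonly receive user prompts as extra input channels stacked with the image: one channel per prompt type. The sketch below shows one plausible encoding; the function name and channel layout are assumptions for illustration, not ScribblePrompt's actual interface.

```python
import numpy as np

def encode_prompts(image, clicks_pos, clicks_neg, scribble_mask, bbox=None):
    """Hypothetical prompt encoding: stack the grayscale image with binary
    channels for positive clicks, negative clicks, a scribble mask, and an
    optional bounding box, the way many interactive models consume prompts."""
    h, w = image.shape
    pos = np.zeros((h, w), dtype=np.float32)
    neg = np.zeros((h, w), dtype=np.float32)
    for r, c in clicks_pos:      # clicks inside the target structure
        pos[r, c] = 1.0
    for r, c in clicks_neg:      # clicks on background to exclude
        neg[r, c] = 1.0
    box = np.zeros((h, w), dtype=np.float32)
    if bbox is not None:         # (row0, col0, row1, col1), half-open
        r0, c0, r1, c1 = bbox
        box[r0:r1, c0:c1] = 1.0
    return np.stack([image.astype(np.float32), pos, neg,
                     scribble_mask.astype(np.float32), box])
```

The resulting 5-channel tensor would then be fed to the segmentation network, and after each round of user corrections the prompt channels are updated and the prediction rerun.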

The tool can then make adjustments based on user feedback. In a user study, MGH neuroimaging researchers chose ScribblePrompt as the top tool because of its self-correcting, interactive design.

Harvard Medical School radiology professor and MGH neuroscientist Bruce Fischl says:

“The problem is dramatically worse in medical imaging in which our ‘images’ are typically 3D volumes, as human beings have no evolutionary or phenomenological reason to have any competency in annotating 3D images. ScribblePrompt enables manual annotation to be carried out much, much faster and more accurately, by training a network on precisely the types of interactions a human would typically have with an image while manually annotating. The result is an intuitive interface that allows annotators to naturally interact with imaging data with far greater productivity than was previously possible.”

ScribblePrompt was trained using simulated scribbles and clicks on 54,000 images from 65 datasets, including scans of the eyes, thorax, spine, cells, skin, abdominal muscles, neck, brain, bones, teeth, and lesions. The model became familiar with 16 types of medical images, including microscopy, CT scans, X-rays, MRIs, ultrasounds, and photographs.
