Image segmentation, multi-modal data fusion, and patient-specific 3D model generation
AI-based fast 2D-3D registration between CT/MRI and X-ray images:
When an operation involves intervention around sensitive tissues such as the vertebral column and nearby nerves, the surrounding tissues must be localized precisely for safe intervention. In the operating room, the tissues' locations can be confirmed mid-surgery by taking one or two X-ray images. One of the most representative methods for localizing tissues based on X-ray images is intensity-based 2D-3D registration. This method iterates three steps: generating synthetic X-ray images, measuring the intensity-based similarity between the synthetic images and the real ones, and updating the estimated tissue locations to increase that similarity. The method is generally slow (i.e., not real-time) and very sensitive to the initial location guess.
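The iterative loop described above can be sketched with a toy example. Everything here is illustrative, not the actual method: the "volume" is a synthetic Gaussian blob, the synthetic X-ray is a simple axis-sum projection (a crude stand-in for digital reconstructed radiograph ray casting), the similarity metric is normalized cross-correlation, and the optimizer is a greedy search over integer in-plane shifts.

```python
import numpy as np

# Toy "CT volume": a Gaussian blob, so the similarity landscape is smooth.
_, yy, xx = np.indices((8, 32, 32))
vol = np.exp(-((yy - 16.0) ** 2 + (xx - 16.0) ** 2) / 30.0)

def drr(volume, shift):
    """Crude synthetic X-ray: translate the volume in-plane, then integrate
    intensity along the projection axis (stand-in for DRR ray casting)."""
    return np.roll(volume, shift, axis=(1, 2)).sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation: similarity in [-1, 1], 1 = identical."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def register(volume, xray, init=(0, 0), iters=50):
    """Greedy neighborhood search over integer in-plane shifts: render a
    synthetic image, score it against the real one, move to a better pose."""
    pose = init
    best = ncc(drr(volume, pose), xray)
    for _ in range(iters):
        improved = False
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            cand = (pose[0] + dy, pose[1] + dx)
            s = ncc(drr(volume, cand), xray)
            if s > best:
                pose, best, improved = cand, s, True
        if not improved:
            break  # no neighbor improves the similarity: a (local) optimum
    return pose, best

xray = drr(vol, (3, -2))           # "real" X-ray taken at an unknown pose
pose, score = register(vol, xray)  # recovers the (3, -2) in-plane shift
```

A clinical implementation optimizes a full six-degree-of-freedom rigid pose with ray-cast DRRs and richer metrics (e.g., gradient correlation or mutual information); note that the greedy search above simply stops at the nearest optimum, which is exactly the initial-guess sensitivity noted in the text.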
We are developing an AI-based method that achieves fast registration without suffering from initial-guess sensitivity, for real-time use in the operating room. The method comprises two steps: (i) AI-based initial registration, and (ii) intensity-based micro-registration.
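Step (i) replaces the fragile initial guess with a learned pose prediction. The network itself is beyond a short sketch, but a one-shot geometric estimate (purely hypothetical, not the actual regressor) illustrates its role: produce a coarse pose in a single forward computation, which step (ii) then refines locally.

```python
import numpy as np

def centroid(image):
    """Intensity-weighted centroid of a 2-D image."""
    ys, xs = np.indices(image.shape)
    total = image.sum()
    return np.array([(ys * image).sum() / total, (xs * image).sum() / total])

def initial_pose(moving, fixed):
    """One-shot coarse estimate from the centroid offset (a stand-in for the
    AI regressor); its output seeds the intensity-based micro-registration."""
    return centroid(fixed) - centroid(moving)

# A bright blob shifted by (4, -3): the one-shot estimate recovers the offset.
y, x = np.mgrid[0:64, 0:64]
blob = lambda cy, cx: np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 40.0)
moving, fixed = blob(30, 30), blob(34, 27)
est = initial_pose(moving, fixed)
```

Because this estimate is computed in one pass rather than by iterative search, it is fast and independent of any starting pose; the subsequent micro-registration only needs to correct a small residual error.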
AI-based medical image segmentation:
Numerous medical imaging techniques, including CT, MRI, PET, ultrasound, and X-ray, are widely used in clinical practice because of their ability to visualize internal tissues such as organs, bones, soft tissue, and blood vessels. Some imaging modalities also enable radiologists to distinguish between different tissue types and regions to identify abnormalities. As each imaging modality has its own strengths in visualizing particular tissues, combining two or more modalities can enhance the capabilities of each individual one. However, identifying and labeling each tissue area is still a time-consuming task that can be accomplished only by professionally trained experts. Moreover, combining two different imaging modalities requires many conditions, such as accurate spatial alignment between the modalities, to be satisfied.
In CMR, we are focusing on two topics in this regard. First, we are developing a deep-learning-based automatic image segmentation method that is trained on pre-labeled data and then used to segment body parts automatically. The technique has been applied to segment lumbar spines from CT data and intervertebral discs from MRI data. Second, we are developing AI-based fusion techniques between CT and MRI data, including the generation of synthetic MRI data from CT data and vice versa. These techniques may enable us to visualize 3D lumbar spines together with 3D spinal nerves and intervertebral discs in the same space.
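The train-on-labeled-data workflow can be shown with a deliberately tiny stand-in: instead of a deep network and real CT data, a per-pixel intensity threshold is "trained" on pre-labeled synthetic slices and applied to an unseen slice, evaluated with the Dice coefficient. Only the workflow (fit on labeled examples, then segment new data automatically) carries over to the actual method.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_slice():
    """Synthetic labeled 'CT slice': a bright square stands in for a vertebra
    (about 1000 HU) on a soft-tissue background (about 40 HU), plus noise."""
    label = np.zeros((32, 32), dtype=int)
    label[10:22, 10:22] = 1
    image = np.where(label == 1, 1000.0, 40.0) + rng.normal(0.0, 50.0, label.shape)
    return image, label

def fit_threshold(images, labels):
    """'Train' on pre-labeled slices: midpoint between the mean intensities
    of labeled foreground and background pixels."""
    pix = np.concatenate([im.ravel() for im in images])
    lab = np.concatenate([lb.ravel() for lb in labels])
    return 0.5 * (pix[lab == 1].mean() + pix[lab == 0].mean())

def segment(image, threshold):
    """Apply the fitted classifier per pixel to an unseen image."""
    return (image > threshold).astype(int)

def dice(pred, truth):
    """Dice coefficient: overlap score in [0, 1], 1 = perfect segmentation."""
    return 2.0 * np.sum((pred == 1) & (truth == 1)) / (pred.sum() + truth.sum())

train_set = [make_slice() for _ in range(5)]
t = fit_threshold([im for im, _ in train_set], [lb for _, lb in train_set])
test_im, test_lb = make_slice()
score = dice(segment(test_im, t), test_lb)
```

A real segmentation model replaces the scalar threshold with millions of learned parameters and exploits spatial context rather than single-pixel intensity, which is what makes tasks like separating individual vertebrae or low-contrast intervertebral discs feasible.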
