Keywords: federated learning with a simple GUI; radiology workflow integration; standard annotation and storage of medical images; privacy-preserving deep learning; sharing a centralized deep learning model with all participating institutions.
Diagnostic imaging studies provide a non-invasive means for medical professionals to visualize the internal structures of the human body. This field, known as diagnostic radiology, utilizes various imaging modalities (e.g., Computed Radiography (CR), Digital Radiography (DR), Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Ultrasound (US)) and procedures to capture detailed internal images. Demand for medical imaging in the diagnosis of human diseases is currently high, which can delay radiological diagnoses by radiologists. Consequently, general medical practitioners or the patients' own physicians often assume responsibility for interpreting radiographic findings, even though they may lack specialized expertise in this area. Recognizing this challenge, our research team identified the need for a diagnostic support system to assist general practitioners in making accurate diagnoses.
Today, breakthroughs in deep learning and powerful computing hardware have greatly improved the capabilities of artificial intelligence (AI). AI systems can now match or exceed the diagnostic accuracy of skilled radiologists. Although many healthcare organizations have adopted diagnostic aids developed abroad, these tools are costly and may not suit the Thai population: physiological and morphological differences among Southeast Asians pose challenges, and training AI models requires large amounts of representative data. With these considerations in mind, our group is developing an "Automated Framework for AI Federated Learning in Medical Imaging." It trains AI models using data and diagnostic outcomes from multiple healthcare centers without sharing the actual images. Each center contributes only its local AI model updates to a shared global model via federated learning, with the aim of improving the global model's efficiency and accuracy by drawing on diverse populations, locations, and large-scale data. The global AI model's results will help establish a unified AI diagnostic system that also supports centers without their own AI capabilities.
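The federated scheme described above can be illustrated with a minimal sketch of federated averaging (FedAvg): each site trains on its private data and sends only model weights, never images, to a server that averages them into the global model. The toy linear model, site sizes, and hyperparameters below are illustrative assumptions, not the framework's actual configuration.

```python
# Minimal FedAvg sketch: local training at each site, then a size-weighted
# average of the resulting weights on the server. No raw data leaves a site.
import numpy as np

def local_update(global_weights, site_data, lr=0.1):
    """One local training round at a site (a single gradient step on a toy
    linear least-squares model; a real site would train a deep network)."""
    X, y = site_data
    w = global_weights.copy()
    grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
    return w - lr * grad

def fedavg(site_updates, site_sizes):
    """Server step: average site models, weighted by dataset size."""
    total = sum(site_sizes)
    return sum(n / total * w for w, n in zip(site_updates, site_sizes))

# Three hypothetical hospitals, each holding private data that is never shared.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
sites = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    sites.append((X, X @ true_w))

w_global = np.zeros(2)
for _ in range(200):  # communication rounds
    updates = [local_update(w_global, s) for s in sites]
    w_global = fedavg(updates, [len(s[1]) for s in sites])

print(np.round(w_global, 2))  # converges toward the underlying weights
```

Only the weight vectors cross the network here; the per-site `(X, y)` arrays stand in for the medical images that remain at each institution.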
The team has designed this "Automated Framework for AI Federated Learning in Medical Imaging" to meet real-world usage needs and to support medical image annotation guidelines at Ramathibodi Hospital, Mahidol University, and Srinagarind Hospital, Khon Kaen University, along with public data from the NIH Chest X-ray dataset. The system comprises several subsystems that help radiologists create labels on medical images. The annotations are then stored in the Annotation and Image Markup (AIM) format, following the DICOM PS3.21 standard, as AIM XML documents. These image annotations are subsequently used for deep learning model training within the federated learning framework, yielding continuously improving, high-accuracy results.
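To make the storage step concrete, the sketch below serializes a single image annotation to a small AIM-style XML document. The element names, the study UID, and the finding are simplified placeholders for illustration only; the actual AIM/DICOM PS3.21 schema defines many more required elements and attributes.

```python
# Illustrative sketch: writing one radiologist annotation as an AIM-style
# XML document. Element and attribute names are simplified placeholders,
# not the full AIM / DICOM PS3.21 schema.
import xml.etree.ElementTree as ET

def annotation_to_xml(study_uid, finding, coords):
    """Build a minimal annotation document: the study it refers to, the
    labeled finding, and the outline drawn on the image."""
    root = ET.Element("ImageAnnotation")
    ET.SubElement(root, "imageStudyUid").text = study_uid
    ET.SubElement(root, "finding").text = finding
    shape = ET.SubElement(root, "geometricShape", shapeType="POLYLINE")
    for x, y in coords:
        ET.SubElement(shape, "coordinate", x=str(x), y=str(y))
    return ET.tostring(root, encoding="unicode")

xml_doc = annotation_to_xml(
    "1.2.840.99999.1",                      # placeholder study UID
    "pulmonary nodule",                     # placeholder finding label
    [(120, 88), (131, 92), (125, 101)],     # outline in pixel coordinates
)
print(xml_doc)
```

Documents in this shape, one per annotation, are what the training subsystem would consume, so the image pixels themselves never need to leave the originating hospital.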