Littlefield, N., Plate, J. F., Weiss, K. R., Lohse, I., Chhabra, A., Siddiqui, I. A., Menezes, Z., Mastorakos, G., Mehul Thakar, S., Abedian, M., Gong, M. F., Carlson, L. A., Moradi, H., Amirian, S., & Tafti, A. P. (2023). Learning Unbiased Image Segmentation: A Case Study with Plain Knee Radiographs. 2023 IEEE EMBS International Conference on Biomedical and Health Informatics (BHI), 1–5.
@inproceedings{10313433,
author = {Littlefield, Nickolas and Plate, Johannes F. and Weiss, Kurt R. and Lohse, Ines and Chhabra, Avani and Siddiqui, Ismaeel A. and Menezes, Zoe and Mastorakos, George and Mehul Thakar, Sakshi and Abedian, Mehrnaz and Gong, Matthew F. and Carlson, Luke A. and Moradi, Hamidreza and Amirian, Soheyla and Tafti, Ahmad P.},
booktitle = {2023 IEEE EMBS International Conference on Biomedical and Health Informatics (BHI)},
title = {Learning Unbiased Image Segmentation: A Case Study with Plain Knee Radiographs},
year = {2023},
pages = {1--5},
doi = {10.1109/BHI58575.2023.10313433}
}
Automatic segmentation of knee bony anatomy is essential in orthopedics, and it has been used for several years in both pre-operative and post-operative settings. While deep learning algorithms have demonstrated exceptional performance in medical image analysis, the assessment of fairness and potential biases within these models remains limited. This study revisits deep learning-powered knee bony anatomy segmentation using plain radiographs to uncover visible gender and racial biases. The current contribution offers the potential to advance our understanding of such biases, and it provides practical insights for researchers and practitioners in medical imaging. The proposed strategies mitigate gender and racial biases, ensuring fair and unbiased segmentation results. Furthermore, this work promotes equal access to accurate diagnoses and treatment outcomes for diverse patient populations, fostering equitable and inclusive healthcare provision.
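One way to make the bias assessment described in this abstract concrete is to compare segmentation quality across demographic groups. The sketch below is an illustration, not the paper's implementation: the function names and the max-gap indicator are my own, and the paper's actual metrics and mitigation details may differ.

```python
def iou(pred, target):
    """Intersection-over-Union between two binary masks,
    given as flat sequences of 0/1 pixel labels."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0

def group_iou_gap(preds, targets, groups):
    """Mean IoU per demographic group (e.g., gender or race labels),
    plus the largest gap between any two groups as a simple bias signal."""
    per_group = {}
    for g in set(groups):
        scores = [iou(p, t)
                  for p, t, gg in zip(preds, targets, groups) if gg == g]
        per_group[g] = sum(scores) / len(scores)
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap
```

A gap near zero suggests comparable segmentation performance across groups; a large gap flags a disparity worth investigating and mitigating.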
Littlefield, N., Plate, J. F., Weiss, K. R., Lohse, I., Chhabra, A., Siddiqui, I. A., Menezes, Z., Mastorakos, G., Amirian, S., Moradi, H., & Tafti, A. P. (2023). AI Fairness in Hip Bony Anatomy Segmentation: Analyzing and Mitigating Gender and Racial Bias in Plain Radiography Analysis. 2023 IEEE 11th International Conference on Healthcare Informatics (ICHI), 714–716.
@inproceedings{10337174,
author = {Littlefield, Nickolas and Plate, Johannes F. and Weiss, Kurt R. and Lohse, Ines and Chhabra, Avani and Siddiqui, Ismaeel A. and Menezes, Zoe and Mastorakos, George and Amirian, Soheyla and Moradi, Hamidreza and Tafti, Ahmad P.},
booktitle = {2023 IEEE 11th International Conference on Healthcare Informatics (ICHI)},
title = {AI Fairness in Hip Bony Anatomy Segmentation: Analyzing and Mitigating Gender and Racial Bias in Plain Radiography Analysis},
year = {2023},
pages = {714--716},
doi = {10.1109/ICHI57859.2023.00130}
}
Automatic segmentation of hip bony anatomy is a critical component of orthopedics, enabling healthcare providers and clinicians to efficiently and objectively accomplish several medical image analysis tasks, including the diagnosis of hip fractures, arthritis, deformity, and dislocation. This autonomous process assists surgeons in preoperative planning by determining the location and size of surgical incisions, the placement of hip implants, and/or other surgical instruments. While deep learning computer vision algorithms for hip segmentation have demonstrated almost human-like performance in past literature, analysis of fairness and potential bias within such models has so far been very limited. Thus, the present work aims to provide a better understanding of any visible gender, ethnicity, and racial bias in hip bony anatomy segmentation using plain radiographs.
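One common, simple family of mitigation strategies for the kind of bias this abstract analyzes is group-balanced resampling of the training set. The sketch below is a hedged illustration of that general idea, not the paper's method: `balanced_oversample` is a hypothetical helper that oversamples under-represented demographic groups until every group contributes equally.

```python
import random

def balanced_oversample(cases, groups, seed=0):
    """Oversample minority demographic groups so each group contributes
    the same number of training cases (an illustrative mitigation sketch;
    the paper's actual strategy may differ)."""
    rng = random.Random(seed)
    by_group = {}
    for case, g in zip(cases, groups):
        by_group.setdefault(g, []).append(case)
    target = max(len(members) for members in by_group.values())
    resampled = []
    for members in by_group.values():
        resampled.extend(members)
        # Draw extra samples (with replacement) from the smaller groups.
        resampled.extend(rng.choices(members, k=target - len(members)))
    return resampled
```

After resampling, each demographic group appears equally often, so the segmentation model no longer sees the majority group disproportionately during training.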
Littlefield, N., Moradi, H., Amirian, S., Kremers, H. M., Plate, J. F., & Tafti, A. P. (2023). Enforcing Explainable Deep Few-Shot Learning to Analyze Plain Knee Radiographs: Data from the Osteoarthritis Initiative. 2023 IEEE 11th International Conference on Healthcare Informatics (ICHI), 252–260.
@inproceedings{10337166,
author = {Littlefield, Nickolas and Moradi, Hamidreza and Amirian, Soheyla and Kremers, Hilal Maradit and Plate, Johannes F. and Tafti, Ahmad P.},
booktitle = {2023 IEEE 11th International Conference on Healthcare Informatics (ICHI)},
title = {Enforcing Explainable Deep Few-Shot Learning to Analyze Plain Knee Radiographs: Data from the Osteoarthritis Initiative},
year = {2023},
pages = {252--260},
doi = {10.1109/ICHI57859.2023.00042}
}
Fast, accurate, and automatic knee radiography analysis is becoming increasingly important in orthopedics, where it supports patient-specific diagnosis, prognosis, and treatment. Precise characterization of plain knee radiographs can greatly impact patient care, as they are routinely used in preoperative and intraoperative planning. Deep learning medical image analysis has already shown success in a variety of knee image analysis tasks, ranging from knee joint area localization to joint space segmentation and measurement, with almost human-like performance. However, several fundamental challenges prevent deep learning methods from reaching their full potential in a clinical setting such as orthopedics, including the need for a large number of gold-standard, manually annotated training images and a lack of explainability and interpretability. To address these challenges, this study is the first to present an explainable deep few-shot learning model that can localize the knee joint area and segment the joint space in plain knee radiographs using only a small number of manually annotated radiographs. The accuracy of the proposed method was thoroughly and experimentally evaluated using various image localization and segmentation measures, and it was compared to baseline models trained on large-scale, fully annotated datasets. The proposed deep few-shot learning method achieved an average Intersection over Union (IoU) of 0.94 and a mean Average Precision @0.5 of 0.98 in knee joint area localization with 10-shot learning, and an average IoU of 0.91 in knee joint space segmentation, using only 10 manually annotated radiographs.
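The IoU and mAP@0.5 figures reported in this abstract can be illustrated with a minimal sketch. The code below is my own simplification (single class, one predicted box per image, no confidence ranking), so `precision_at_50` is only a rough proxy for the full mAP@0.5 computation, not the paper's evaluation code.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_at_50(pred_boxes, gt_boxes):
    """Fraction of predicted boxes matching their ground truth at IoU >= 0.5
    (a simplified stand-in for mAP@0.5 under the assumptions above)."""
    hits = sum(1 for p, g in zip(pred_boxes, gt_boxes)
               if box_iou(p, g) >= 0.5)
    return hits / len(pred_boxes)
```

Under this reading, the paper's 10-shot localization result means predicted knee-joint boxes overlapped ground truth by 94% IoU on average, with 98% of predictions counting as correct at the 0.5 IoU threshold.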