Radiology - Research Publications

Now showing 1 - 6 of 6
  • Item
    Charting the potential of brain computed tomography deep learning systems
    Buchlak, QD ; Milne, MR ; Seah, J ; Johnson, A ; Samarasinghe, G ; Hachey, B ; Esmaili, N ; Tran, A ; Leveque, J-C ; Farrokhi, F ; Goldschlager, T ; Edelstein, S ; Brotchie, P (ELSEVIER SCI LTD, 2022-05)
    Brain computed tomography (CTB) scans are widely used to evaluate intracranial pathology. The implementation and adoption of CTB has led to clinical improvements. However, interpretation errors occur and may have substantial morbidity and mortality implications for patients. Deep learning has shown promise for facilitating improved diagnostic accuracy and triage. This research charts the potential of deep learning applied to the analysis of CTB scans. It draws on the experience of practicing clinicians and technologists involved in development and implementation of deep learning-based clinical decision support systems. We consider the past, present and future of the CTB, along with limitations of existing systems as well as untapped beneficial use cases. Implementing deep learning CTB interpretation systems and effectively navigating development and implementation risks can deliver many benefits to clinicians and patients, ultimately improving efficiency and safety in healthcare.
  • Item
    Effect of a comprehensive deep-learning model on the accuracy of chest x-ray interpretation by radiologists: a retrospective, multireader multicase study
    Seah, JCY ; Tang, CHM ; Buchlak, QD ; Holt, XG ; Wardman, JB ; Aimoldin, A ; Esmaili, N ; Ahmad, H ; Hung, P ; Lambert, JF ; Hachey, B ; Hogg, SJF ; Johnston, BP ; Bennett, C ; Oakden-Rayner, L ; Brotchie, P ; Jones, CM (ELSEVIER, 2021-08)
    BACKGROUND: Chest x-rays are widely used in clinical practice; however, interpretation can be hindered by human error and a lack of experienced thoracic radiologists. Deep learning has the potential to improve the accuracy of chest x-ray interpretation. We therefore aimed to assess the accuracy of radiologists with and without the assistance of a deep-learning model. METHODS: In this retrospective study, a deep-learning model was trained on 821 681 images (284 649 patients) from five data sets from Australia, Europe, and the USA. 2568 enriched chest x-ray cases from adult patients (≥16 years) who had at least one frontal chest x-ray were included in the test dataset; cases were representative of inpatient, outpatient, and emergency settings. 20 radiologists reviewed cases with and without the assistance of the deep-learning model with a 3-month washout period. We assessed the change in accuracy of chest x-ray interpretation across 127 clinical findings when the deep-learning model was used as a decision support by calculating area under the receiver operating characteristic curve (AUC) for each radiologist with and without the deep-learning model. We also compared AUCs for the model alone with those of unassisted radiologists. If the lower bound of the adjusted 95% CI of the difference in AUC between the model and the unassisted radiologists was more than -0·05, the model was considered to be non-inferior for that finding. If the lower bound exceeded 0, the model was considered to be superior. FINDINGS: Unassisted radiologists had a macroaveraged AUC of 0·713 (95% CI 0·645-0·785) across the 127 clinical findings, compared with 0·808 (0·763-0·839) when assisted by the model. The deep-learning model statistically significantly improved the classification accuracy of radiologists for 102 (80%) of 127 clinical findings, was statistically non-inferior for 19 (15%) findings, and no findings showed a decrease in accuracy when radiologists used the deep-learning model. 
Unassisted radiologists had a macroaveraged mean AUC of 0·713 (0·645-0·785) across all findings, compared with 0·957 (0·954-0·959) for the model alone. Model classification alone was significantly more accurate than unassisted radiologists for 117 (94%) of 124 clinical findings predicted by the model and was non-inferior to unassisted radiologists for all other clinical findings. INTERPRETATION: This study shows the potential of a comprehensive deep-learning model to improve chest x-ray interpretation across a large breadth of clinical practice. FUNDING: Annalise.ai.
  • Item
    Do comprehensive deep learning algorithms suffer from hidden stratification? A retrospective study on pneumothorax detection in chest radiography
    Seah, J ; Tang, C ; Buchlak, QD ; Milne, MR ; Holt, X ; Ahmad, H ; Lambert, J ; Esmaili, N ; Oakden-Rayner, L ; Brotchie, P ; Jones, CM (BMJ PUBLISHING GROUP, 2021-12)
    OBJECTIVES: To evaluate the ability of a commercially available comprehensive chest radiography deep convolutional neural network (DCNN) to detect simple and tension pneumothorax, as stratified by the following subgroups: the presence of an intercostal drain; rib, clavicular, scapular or humeral fractures or rib resections; subcutaneous emphysema and erect versus non-erect positioning. The hypothesis was that performance would not differ significantly in each of these subgroups when compared with the overall test dataset. DESIGN: A retrospective case-control study was undertaken. SETTING: Community radiology clinics and hospitals in Australia and the USA. PARTICIPANTS: A test dataset of 2557 chest radiography studies was ground-truthed by three subspecialty thoracic radiologists for the presence of simple or tension pneumothorax as well as each subgroup other than positioning. Radiograph positioning was derived from radiographer annotations on the images. OUTCOME MEASURES: DCNN performance for detecting simple and tension pneumothorax was evaluated over the entire test set, as well as within each subgroup, using the area under the receiver operating characteristic curve (AUC). A difference in AUC of more than 0.05 was considered clinically significant. RESULTS: When compared with the overall test set, performance of the DCNN for detecting simple and tension pneumothorax was statistically non-inferior in all subgroups. The DCNN had an AUC of 0.981 (0.976-0.986) for detecting simple pneumothorax and 0.997 (0.995-0.999) for detecting tension pneumothorax. CONCLUSIONS: Hidden stratification has significant implications for potential failures of deep learning when applied in clinical practice. This study demonstrated that a comprehensively trained DCNN can be resilient to hidden stratification in several clinically meaningful subgroups in detecting pneumothorax.
  • Item
    Evaluation of deep learning-based artificial intelligence techniques for breast cancer detection on mammograms: Results from a retrospective study using a BreastScreen Victoria dataset
    Frazer, HML ; Qin, AK ; Pan, H ; Brotchie, P (WILEY, 2021-08)
    INTRODUCTION: This study aims to evaluate deep learning (DL)-based artificial intelligence (AI) techniques for detecting the presence of breast cancer on a digital mammogram image. METHODS: We evaluated several DL-based AI techniques that employ different approaches and backbone DL models and tested the effect on performance of using different data-processing strategies on a set of digital mammographic images with annotations of pathologically proven breast cancer. RESULTS: Our evaluation uses the area under curve (AUC) and accuracy (ACC) for performance measurement. The best evaluation result, based on 349 test cases (930 test images), was an AUC of 0.8979 [95% confidence interval (CI) 0.873, 0.923] and ACC of 0.8178 [95% CI 0.785, 0.850]. This was achieved by an AI technique that utilises a certain family of DL models, namely ResNet, as its backbone, combines the global features extracted from the whole mammogram and the local features extracted from the automatically detected cancer and non-cancer local regions in the whole image, and leverages background cropping and text removal, contrast adjustment and more training data. CONCLUSION: DL-based AI techniques have shown promising results in retrospective studies for many medical image analysis applications. Our study demonstrates a significant opportunity to boost the performance of such techniques applied to breast cancer detection by exploring different types of approaches, backbone DL models and data-processing strategies. The promising results we have obtained suggest further development of AI reading services could transform breast cancer screening in the future.
  • Item
    Incidental detection of prostate cancer with computed tomography scans
    Korevaar, S ; Tennakoon, R ; Page, M ; Brotchie, P ; Thangarajah, J ; Florescu, C ; Sutherland, T ; Kam, NM ; Bab-Hadiashar, A (NATURE PORTFOLIO, 2021-04-12)
    Prostate cancer (PCa) is the second most frequent type of cancer found in men worldwide, with around one in nine men being diagnosed with PCa within their lifetime. PCa often shows no symptoms in its early stages, and its diagnostic techniques are either invasive, resource intensive, or have low efficacy, making widespread early detection onerous. Inspired by the recent success of deep convolutional neural networks (CNN) in computer-aided detection (CADe), we propose a new CNN-based framework for incidental detection of clinically significant prostate cancer (csPCa) in patients who had a CT scan of the abdomen/pelvis for other reasons. While CT is generally considered insufficient to diagnose PCa due to its inferior soft tissue characterisation, our evaluations on a relatively large dataset consisting of 139 clinically significant PCa patients and 432 controls show that the proposed deep neural network pipeline can detect csPCa patients at a level that is suitable for incidental detection. The proposed pipeline achieved an area under the receiver operating characteristic curve (ROC-AUC) of 0.88 (95% Confidence Interval: 0.86-0.90) at patient-level csPCa detection on CT, significantly higher than the AUCs achieved by two radiologists (0.61 and 0.70) on the same task.
  • Item
    Magnetic resonance imaging of meningiomas: a pictorial review
    Watts, J ; Box, G ; Galvin, A ; Brotchie, P ; Trost, N ; Sutherland, T (SPRINGER HEIDELBERG, 2014-02)
    Meningiomas are the most common non-glial tumour of the central nervous system (CNS). There are a number of characteristic imaging features of meningiomas on magnetic resonance imaging (MRI) that allow an accurate diagnosis; however, there are also atypical features that may be diagnostically challenging. Furthermore, a number of other neoplastic and non-neoplastic conditions may mimic meningiomas. This pictorial review discusses the typical and atypical MRI features of meningiomas and their mimics. TEACHING POINTS:
    • There are several characteristic features of meningiomas on MRI that allow an accurate diagnosis
    • Some meningiomas may display atypical imaging characteristics that may be diagnostically challenging
    • Routine MRI sequences do not reliably distinguish between benign and malignant meningiomas
    • Spectroscopy and diffusion tensor imaging may be useful in the diagnosis of malignant meningiomas
    • A number of conditions may mimic meningiomas; however, they may have additional differentiating features.
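Several of the studies above evaluate models with the area under the ROC curve (AUC) and a non-inferiority margin: the chest x-ray study treats the model as non-inferior for a finding when the lower bound of the adjusted 95% CI of the AUC difference (model minus unassisted radiologists) exceeds -0.05, and as superior when it exceeds 0. The sketch below illustrates that decision rule together with a standard rank-based AUC estimate; the function names and the simple unadjusted rule are illustrative only and do not come from any of the studies' code.

```python
def auc(labels, scores):
    """Rank-based (Mann-Whitney) estimate of the area under the ROC
    curve for binary labels (0/1) and continuous model scores."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative cases")
    # Fraction of positive/negative pairs ranked correctly, ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def classify_finding(ci_lower, margin=-0.05):
    """Classify one clinical finding from the lower bound of the 95% CI
    of (model AUC - unassisted radiologist AUC)."""
    if ci_lower > 0:
        return "superior"
    if ci_lower > margin:
        return "non-inferior"
    return "inconclusive"
```

For example, a finding whose AUC-difference CI lower bound is -0.03 would be called non-inferior under this margin, while a lower bound of 0.02 would be called superior. In the published study the CIs were adjusted for multiplicity, which this sketch omits.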