Chest radiography, a fundamental diagnostic tool in medicine for over a century, has played a pivotal role in detecting and managing lung diseases globally. However, its relationship with cardiac function remains unclear. Even the commonly used cardiothoracic ratio has limited correlation with left ventricular ejection fraction.

Transthoracic echocardiography is the primary method for assessing cardiac function, providing vital data for diagnosing and monitoring cardiac disease. However, demand for echocardiography continues to grow, outpacing the supply of qualified sonographers.

In this context, chest radiography, which is quick and widely accessible, may complement echocardiography. A multi-institutional retrospective study has explored deep learning, a subset of artificial intelligence, to estimate echocardiographic parameters from chest radiographs. Unlike traditional machine learning, deep learning extracts features from data automatically, making it well suited to tasks involving complex or unknown features.

The Multi-institutional Retrospective Study

In the study, published in The Lancet Digital Health, the researchers aimed to develop and validate a deep-learning model for assessing cardiac function and valvular disease from chest radiographs.

They collected data from multiple institutions, including chest radiographs and associated echocardiograms, from April 1, 2013, to December 31, 2021. Data from three institutions were used for training, validation, and internal testing, while data from one institution were used for external testing.

A total of 22,551 radiographs were collected from 16,946 patients. An external dataset included 3,311 radiographs from 2,617 patients, with an average age of 72 and an almost even split between males and females. Results showed that the deep-learning model performed well in classifying cardiac parameters, including left ventricular ejection fraction, tricuspid regurgitant velocity, mitral regurgitation, aortic stenosis, aortic regurgitation, mitral stenosis, tricuspid regurgitation, pulmonary regurgitation, and inferior vena cava dilation.


A deep-learning model was developed and evaluated to estimate the classification of transthoracic echocardiograms from chest radiographs. The model achieved a mean AUC of 0.87 in the external test dataset.

Model Development: The study developed a multi-label deep-learning model using EfficientNet as the feature extractor. Multi-label learning was chosen because learning shared features across related labels can improve accuracy. The model included 17 labels as classifiers, encompassing two cutoffs (none-mild vs. moderate-severe, and none vs. mild-severe) for each of the six valvular heart diseases (mitral regurgitation, aortic stenosis, aortic regurgitation, mitral stenosis, tricuspid regurgitation, and pulmonary regurgitation), two cutoffs each for left ventricular ejection fraction (40% and 50%) and tricuspid regurgitant velocity (2.8 m/s and 3.4 m/s), and one cutoff for inferior vena cava dilation (21 mm).
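As a rough illustration (not the authors' code), a multi-label classifier attaches 17 independent sigmoid outputs to the image backbone, so each cutoff is scored separately rather than competing in a softmax. The label names and toy numbers below are hypothetical:

```python
import math

# Hypothetical label set mirroring the 17 classifiers described above.
LABELS = [
    "LVEF<40%", "LVEF<50%",      # left ventricular ejection fraction cutoffs
    "TRV>2.8m/s", "TRV>3.4m/s",  # tricuspid regurgitant velocity cutoffs
    "IVC>21mm",                  # inferior vena cava dilation
] + [f"{d}:{sev}" for d in ("MR", "AS", "AR", "MS", "TR", "PR")
     for sev in ("moderate-severe", "mild-severe")]  # two cutoffs per valvular disease
assert len(LABELS) == 17

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def multilabel_bce(logits, targets):
    """Binary cross-entropy averaged over all labels (one sigmoid per label)."""
    losses = []
    for z, y in zip(logits, targets):
        p = sigmoid(z)
        losses.append(-(y * math.log(p) + (1 - y) * math.log(1 - p)))
    return sum(losses) / len(losses)

# Toy example: 17 logits for one radiograph and its echo-derived targets.
logits = [0.2] * 17
targets = [0] * 17
loss = multilabel_bce(logits, targets)
```

Each output is thresholded independently at test time, which is what lets a single network report on ejection fraction and several valvular diseases at once.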

The deep-learning model was initialized with ImageNet pre-trained parameters and fine-tuned with five-fold cross-validation on the training dataset. TrivialAugment was used for data augmentation, and the model was developed in the PyTorch framework. The paper's appendix contains detailed procedures and further information, such as the machine environment, class-imbalance handling techniques, and a model overview. The source code is also available online.
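Five-fold cross-validation partitions the training data into five folds, training on four and validating on the fifth in rotation. The sketch below is a generic illustration, not the study's code; in a clinical setting the folds would normally be drawn at the patient level so radiographs from one patient never span training and validation:

```python
import random

def five_fold_splits(n_items, seed=0):
    """Yield (train_idx, val_idx) index pairs for 5-fold cross-validation."""
    idx = list(range(n_items))
    random.Random(seed).shuffle(idx)          # shuffle once, reproducibly
    folds = [idx[i::5] for i in range(5)]     # round-robin into 5 folds
    for k in range(5):
        val = folds[k]
        train = [i for j, f in enumerate(folds) if j != k for i in f]
        yield train, val

# Toy dataset of 100 items -> five (80 train, 20 val) splits.
splits = list(five_fold_splits(100))
```

Each item appears in exactly one validation fold, so every sample contributes once to validation and four times to training across the five runs.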

Model Test: The study assessed the diagnostic performance of the deep-learning model using the best-performing model and the same thresholds as those for the validation dataset on both the internal and external test datasets. Nine labels were considered primary classifiers, including cutoffs for valvular heart diseases, left ventricular ejection fraction, tricuspid regurgitant velocity, and inferior vena cava dilation. These primary classifiers were chosen based on their clinical significance.
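Fixing the decision threshold on the validation set and reusing it unchanged on the test sets prevents the test data from influencing the operating point. One common (though here hypothetical, since the paper is summarized rather than quoted) criterion is the Youden index, sensitivity + specificity − 1:

```python
def youden_threshold(scores, labels):
    """Pick the score threshold maximizing sensitivity + specificity - 1."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / pos + tn / neg - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t

# Toy validation scores: higher score should indicate disease (label 1).
val_scores = [0.1, 0.3, 0.35, 0.6, 0.8, 0.9]
val_labels = [0,   0,   1,    0,   1,   1]
t = youden_threshold(val_scores, val_labels)

# The frozen threshold t is then applied as-is to internal/external test scores.
test_preds = [1 if s >= t else 0 for s in [0.2, 0.7]]
```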

Saliency maps were generated for the primary classifiers to visualize the region of interest in the external test dataset. Grad-CAM++ was used to create these maps, highlighting the importance of each feature map from the last convolutional layer of the model.
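The core idea behind these maps is to weight each feature map of the last convolutional layer by how much it influences the class score, then sum and rectify. The sketch below shows the plain Grad-CAM weighting (Grad-CAM++ refines the channel weights with higher-order gradient terms); it is an illustration, not the study's implementation:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM-style saliency.

    feature_maps: last-conv-layer activations, shape (channels, H, W)
    gradients:    d(class score)/d(feature_maps), same shape
    """
    weights = gradients.mean(axis=(1, 2))              # pool grads per channel
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                           # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1] for overlay
    return cam

# Toy activations/gradients standing in for a real backbone's outputs.
rng = np.random.default_rng(0)
fmap = rng.random((8, 7, 7))
grad = rng.random((8, 7, 7))
heatmap = grad_cam(fmap, grad)
```

The resulting low-resolution heatmap is upsampled and overlaid on the radiograph to show which regions drove each classifier's prediction.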

Statistical Analysis: The performance of each classifier was assessed using sensitivity, specificity, accuracy, positive and negative predictive values, and the area under the receiver operating characteristic curve (AUC) on both the internal and external test datasets. To summarize overall model performance per dataset, the mean and standard deviation (SD) of the AUCs of the primary classifiers were calculated. On the external dataset, AUCs were also evaluated by sex and age, and confusion matrices were constructed for each primary classifier. The 95% confidence intervals (CIs) for the AUC were calculated by bootstrapping, and the Clopper-Pearson method was used for the CIs of the other performance metrics. Statistical analyses were conducted in R version 4.0.0 with a two-sided significance level of 5%.
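A percentile-bootstrap CI for the AUC resamples the test cases with replacement, recomputes the AUC on each resample, and reads off the 2.5th and 97.5th percentiles. The toy data and resampling count below are illustrative, not from the study:

```python
import random

def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: P(score_pos > score_neg), ties = 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(scores, labels, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) CI for the AUC."""
    rng = random.Random(seed)
    n = len(scores)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]      # resample with replacement
        s = [scores[i] for i in idx]
        y = [labels[i] for i in idx]
        if 0 < sum(y) < n:                              # need both classes present
            stats.append(auc(s, y))
    stats.sort()
    lo = stats[int(alpha / 2 * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi

scores = [0.1, 0.4, 0.35, 0.8, 0.7, 0.9, 0.2, 0.6]
labels = [0,   0,   1,    1,   1,   1,   0,   0]
point = auc(scores, labels)
ci = bootstrap_auc_ci(scores, labels)
```

With only eight toy cases the interval is wide; on thousands of test radiographs, as in the study, the bootstrap CI tightens considerably.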

The inclusion of data from several institutions improved the model's robustness and generalizability, distinguishing this work from previous research based on single-center datasets.

The saliency maps suggest that chest radiographs contain inherent features that can be used to classify cardiac function and valvular disease.


This method has notable benefits over echocardiography. It yields results quickly, which is advantageous for patients who cannot tolerate a lengthy echocardiographic examination.

Because of its modest hardware requirements, it is suitable for clinical use even in underserved regions or during emergencies. Moreover, because it draws on chest radiographs already present in medical records, clinicians can obtain insights into cardiac function at any time, without ordering additional tests.

Prospective validation in diverse patient groups, as well as comparison against MRI-derived indices of left ventricular ejection fraction, would strengthen its usefulness.
It is also important to recognize that the model performs classification rather than precise quantification of echocardiographic values.
Given the rising global prevalence of heart disease, this chest-radiograph-based deep-learning model could complement transthoracic echocardiography in future cardiac assessment.