Development and Evaluation of a Deep Learning Algorithm to Differentiate Between Membranes Attached to the Optic Disc on Ultrasonography
Authors Bhatt VD, Shah N, Bhatt DC, Dabir S, Sheth J, Berendschot TT, Erckens RJ, Webers CAB
Received 4 December 2024
Accepted for publication 10 March 2025
Published 18 March 2025 Volume 2025:19 Pages 939–947
DOI https://doi.org/10.2147/OPTH.S501316
Vaidehi D Bhatt,1 Nikhil Shah,2 Deepak C Bhatt,1 Supriya Dabir,3 Jay Sheth,4 Tos TJM Berendschot,5 Roel J Erckens,5 Carroll AB Webers5
1UBM Institute, Mumbai, India; 2Masters in Computer Science, Stevens Institute of Technology, Jersey City, NJ, USA; 3Department of Retina, Rajan Eye Care Pvt Ltd, Chennai, India; 4Shantilal Shanghvi Eye Institute, Mumbai, India; 5University Eye Clinic Maastricht, Maastricht, the Netherlands
Correspondence: Vaidehi D Bhatt, UBM Institute, A/1 Ganesh Baug, Dadar, Mumbai, 400019, India, Tel +91 9821525810, Email [email protected]
Purpose: The purpose of this study was to create and test a deep learning algorithm that could identify and distinguish between membranes attached to the optic disc [OD; retinal detachment (RD) or posterior vitreous detachment (PVD)] on ocular ultrasonography (USG).
Patients and Methods: We obtained a database of B-scan ultrasonography images from a high-volume imaging center. A Vision Transformer (ViT) model, pre-trained on ImageNet-21k, was employed to classify ultrasound B-scan images into healthy, RD, and PVD. Images were pre-processed using Hugging Face’s AutoImageProcessor for standardization. Labels were mapped to numerical values, and the dataset was split into training and validation (505 samples) and testing (212 samples) subsets to evaluate model performance. Alternate methods, such as ensemble strategies and object detection pipelines, were explored but showed limited improvement in classification accuracy.
Results: The AI model demonstrated high classification performance, achieving an accuracy of 98.21% for PVD, 97.22% for RD, and 95.83% for normal cases. Sensitivity was 98.21% for PVD, 96.55% for RD, and 92.86% for normal cases, while specificity reached 95.16%, 100%, and 95.42%, respectively. Despite the overall strong performance, some misclassification occurred, with seven instances of RD being incorrectly labeled as PVD.
Conclusion: We developed a transformer-based deep learning algorithm for ocular ultrasonography that accurately identifies membranes attached to the optic disc, distinguishing between RD (97.22% accuracy) and PVD (98.21% accuracy). Despite seven misclassifications, our model demonstrates robust performance and enhances diagnostic efficiency in high-volume imaging settings, thereby facilitating timely referrals and ultimately improving patient outcomes in urgent care scenarios. Overall, this promising innovation shows potential for clinical adoption.
Keywords: retinal detachment, posterior vitreous detachment, ultrasonography, artificial intelligence, deep learning algorithm
Introduction
With the broad recognition that vitreoretinal pathologies are among the principal causes of severe vision loss worldwide, posterior segment disorders are receiving significant attention. Ophthalmic ultrasound has become the most important and accurate diagnostic imaging technique for the direct evaluation of posterior segment lesions in eyes with opaque ocular media.1 Among posterior segment pathologies, retinal detachment (RD) and posterior vitreous detachment (PVD) are frequently encountered in clinical practice.2–5 The reported incidence of RD ranges from 6.3 to 17.9 per 100,000 population per year.2 In these patients, prompt diagnosis is crucial because a delayed diagnosis can result in irreparable loss of vision.3 These patients usually present with symptoms of flashes and floaters, although these symptoms are most commonly seen with acute PVD.3 The prevalence of PVD rises with age, from roughly 4% in the fifth decade of life to 87% in people over 80.4 Although PVD is frequently a benign condition, 14% of patients may develop a retinal tear.5 This poses a considerable concern, since untreated tears can lead to RD. It is therefore critical that ultrasound be used to quickly and accurately identify and differentiate these entities.
Ocular ultrasound is user-dependent in both performance and interpretation. In a prospective study of 115 patients with flashes and floaters, point-of-care ocular ultrasound (POCUS) was performed by 30 emergency physicians (EPs) and assessed by a retina specialist within 24 hours.6 POCUS had a sensitivity of 75% (95% CI 48–93%) and a specificity of 94% (95% CI 87–98%) for RD detection.6 In contrast, in another study comprising 92 examinations performed by 31 trained EPs, the sensitivity and specificity were 97% and 92%, respectively.7 Thus, the accuracy of diagnosis based on ocular ultrasound appears to vary between sonologists/ophthalmologists.
Typically, membranes in the vitreous cavity can be distinguished based on their echogenicity and movement. For instance, membranes associated with PVD that are not attached to the optic disc (OD) are usually easy to distinguish from RD membranes, which are always attached to the OD.1–5 Also, PVD membranes show low-to-moderate echogenicity, while RD membranes are usually highly echogenic.1–5 However, differentiating PVD membranes that are attached to the OD from RD membranes remains challenging, especially for less experienced users. Recognizing this diagnostic dilemma, our study focuses specifically on the detection and differentiation of membranes at the OD level.
The ability of a computer program to carry out functions associated with human intellect, such as reasoning, learning, adaptation, sensory understanding, and interaction, is known as artificial intelligence (AI).8 Conventional computational algorithms are computer programs that, like an electronic calculator, operate according to a fixed set of rules and always produce the same result.8,9 An AI algorithm, on the other hand, learns the functions (and thus the rules) from the training data (or input) assigned to it.8,9 The appeal of AI technologies in medicine arises from their potential to improve healthcare by drawing new and significant insights from the massive amount of digital data created during healthcare delivery. Image processing, a major application area of AI, is a very useful technology, and industry demand for it appears to grow every year.10 Historically, image processing using machine learning appeared in the 1960s as an attempt to simulate the human visual system and automate image analysis.10 Deep learning has become one of the most widely used AI techniques across institutions and industries pursuing automation,8–10 owing to considerable improvements in access to data and increases in computational power, which allow practitioners to achieve meaningful results across several areas.
In this context, we aim to contribute to improved ophthalmic diagnostics by developing an application that provides an initial assessment of eye health. Specifically, our system takes B-scan ultrasonography images as input, identifies any membranes attached to the OD, formulates a differential diagnosis between RD and PVD, and generates a preliminary report to assist the ophthalmologist. By focusing on the challenging distinction between RD and OD-attached PVD membranes, our work addresses a critical need for more accurate and user-independent diagnostic support.
Related Work
Existing systems rely on approaches that suffer from limited training data, a need for further research before independent use, and the requirement that a doctor confirm any diagnosis. Some models aimed at independent use could screen scans rapidly but detected abnormal studies with only 40–50% accuracy.11–13 Other researchers built an automated system that detected retinal detachment using textons, higher-order spectral cumulants, and locality-sensitive discriminant analysis.11,12 However, these models could only detect retinal detachment. Further approaches used deep convolutional neural networks, which have produced a series of breakthroughs in image classification by naturally integrating low-, mid-, and high-level features and classifiers in an end-to-end, multi-layer fashion; however, exploring different network architectures makes this computationally expensive.14,15
Materials and Methods
The B-scan ultrasound images for the raw data and the test dataset were obtained from the UBM Institute, Mumbai, India. The study adhered to the principles outlined in the Declaration of Helsinki and was approved by the Harmony Ethics Research Committee. Written informed consent was obtained from all participants.
The dataset consisted of 505 samples for training and validation, and 212 samples for testing. Images were included if they were of sufficient quality for interpretation and had a confirmed diagnosis of healthy, RD, or PVD by an experienced ultrasonologist. Exclusion criteria included poor-quality scans, images with significant artifacts, and cases with coexisting RD and PVD, as differentiation in such cases would require a separate AI model.
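For illustration, assuming the de-identified scans are organized into one folder per class, a split like the one described above could be produced with Hugging Face’s `datasets` library (the folder names, seed, and exact split call are hypothetical, not taken from our code base):

```python
from datasets import load_dataset

# Hypothetical layout: bscan_images/healthy, bscan_images/vitD, bscan_images/retD.
ds = load_dataset("imagefolder", data_dir="bscan_images", split="train")

# Hold out 212 images for testing, stratified by class label.
split = ds.train_test_split(test_size=212, seed=42, stratify_by_column="label")
train_val_ds, test_ds = split["train"], split["test"]
```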
The differentiation between RD and PVD was based on three key ultrasound parameters, assessed by a trained ultrasonologist:
- Echogenicity of the membrane: RD appears as a thicker, more echogenic membrane, while PVD is thinner and less reflective.
- Dynamic movement of the membrane on ultrasound: RD exhibits limited or no movement relative to eye motion, whereas PVD shows more mobility.
- Reflectivity on A-scan: RD typically demonstrates higher reflectivity compared to PVD.
The model’s classification performance was compared to that of two separate imaging experts with at least thirty years of combined expertise.
We adopted a transformer-based approach to image classification, utilizing a pre-trained Vision Transformer (ViT) model. The pre-trained model, specifically “google/vit-base-patch16-224-in21k”, is an advanced image recognition model that leverages self-attention across image patches, allowing it to capture spatial dependencies and patterns across the entire image without the inductive biases of traditional convolutional neural networks (CNNs). The decision to use a pre-trained transformer model aligns with modern trends in leveraging transfer learning for specialized tasks.
By utilizing the pre-trained Vision Transformer, the model benefits from a wealth of general visual knowledge encoded in the earlier layers, significantly reducing the amount of domain-specific training data required. The image processing pipeline uses Hugging Face’s AutoImageProcessor, a processor that standardizes images to the specific input requirements of the pre-trained ViT model (Figure 1). The use of a pretrained model necessitates that all input images conform to a specific format, including size and normalization. For this purpose, the code includes a sequence of transformations, including random cropping to a fixed size and normalization using the image processor’s mean and standard deviation. These transformations help ensure that the input images match the model’s expected format and distribution, minimizing potential mismatches in data quality that could impact performance. The selection of RandomResizedCrop further adds robustness by introducing random variability during the training phase, which can improve the model’s ability to generalize to unseen data.
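A minimal sketch of this preprocessing stage is shown below, assuming the images arrive as PIL objects in a Hugging Face dataset with `image` and `label` columns (the column names and the conversion of grayscale B-scans to three channels are assumptions, not details reported in our pipeline):

```python
from transformers import AutoImageProcessor
from torchvision.transforms import Compose, RandomResizedCrop, ToTensor, Normalize

checkpoint = "google/vit-base-patch16-224-in21k"
processor = AutoImageProcessor.from_pretrained(checkpoint)

# This checkpoint expects 224x224 inputs normalized with the processor's statistics.
crop_size = processor.size["height"]
train_transforms = Compose([
    RandomResizedCrop(crop_size),
    ToTensor(),
    Normalize(mean=processor.image_mean, std=processor.image_std),
])

def preprocess(batch):
    # Grayscale B-scans are converted to the 3-channel format ViT expects.
    batch["pixel_values"] = [train_transforms(img.convert("RGB")) for img in batch["image"]]
    return batch

# train_val_ds = train_val_ds.with_transform(preprocess)
```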
A key component in this approach is the mapping of categorical image labels to numerical values for a supervised learning framework. The label_mapping dictionary specifies three categories: healthy, vitD, and retD, which are mapped to the integers 0, 1, and 2, respectively (Figure 2).
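The mapping, and the way it would typically be attached to a three-way classification head on the pre-trained encoder, might look as follows (a sketch consistent with the description above; variable names are illustrative):

```python
from transformers import ViTForImageClassification

label_mapping = {"healthy": 0, "vitD": 1, "retD": 2}
id2label = {v: k for k, v in label_mapping.items()}

# A randomly initialized 3-class head is placed on top of the pre-trained encoder.
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=3,
    id2label=id2label,
    label2id=label_mapping,
)
```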
In summary, the approach capitalizes on transformers for image classification, a modern and powerful alternative to CNNs in vision tasks. Using a pre-trained ViT model, the method leverages existing knowledge to reduce the computational and data burden typically associated with training deep learning models from scratch. Future iterations could benefit from more explicit documentation around data loading and from additional techniques to improve model generalization and robustness, such as cross-validation, further augmentation, or fine-tuning strategies suited to smaller datasets.
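For completeness, fine-tuning such a classifier is typically orchestrated with Hugging Face’s Trainer. The sketch below builds on the hypothetical objects from the previous sketches (`train_val_ds`, `preprocess`, `model`); the hyperparameters are illustrative, not the exact values used in this study:

```python
import torch
from transformers import TrainingArguments, Trainer

# Hypothetical further split of train_val_ds into training and validation parts.
inner = train_val_ds.train_test_split(test_size=0.2, seed=42)
train_ds = inner["train"].with_transform(preprocess)
val_ds = inner["test"].with_transform(preprocess)

def collate_fn(batch):
    # Stack the pre-processed tensors and labels into the format the ViT head expects.
    return {
        "pixel_values": torch.stack([x["pixel_values"] for x in batch]),
        "labels": torch.tensor([x["label"] for x in batch]),
    }

args = TrainingArguments(
    output_dir="vit-od-membranes",   # hypothetical output directory
    per_device_train_batch_size=16,  # illustrative hyperparameters throughout
    learning_rate=2e-4,
    num_train_epochs=5,
    eval_strategy="epoch",           # called evaluation_strategy in older transformers
    remove_unused_columns=False,     # keep the raw image column for the transform
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=val_ds,
    data_collator=collate_fn,
)
trainer.train()
```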
We also attempted some alternate approaches with varying degrees of success:
1. Ensemble Approach: In this approach, the objective was to enhance performance by combining the predictions of multiple models in an ensemble strategy. The framework uses a custom dataset loader implemented with PyTorch, which handles loading image data and parsing XML annotations to extract bounding boxes and labels. The images undergo transformations, including resizing and normalization, before being fed into the models. The ensemble employs a Faster R-CNN architecture, specifically a `fasterrcnn_resnet50_fpn` model pre-trained on the COCO (Common Objects in Context) dataset and fine-tuned to classify three classes: healthy, RD, and PVD (the head replacement is sketched below). However, ensemble methods of this kind are most beneficial when the individual models provide diverse and complementary predictions. In this case, the individual models likely overlapped too much, both in architecture (ResNet-based detection) and in training data. As a result, the ensemble did not significantly improve performance over single-model predictions, and the added complexity led to longer training times without tangible benefits.
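Replacing the detection head of the COCO-pretrained model for the study’s classes follows the standard torchvision recipe; a minimal sketch (torchvision >= 0.13 is assumed for the `weights` argument):

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Load a Faster R-CNN (ResNet-50 backbone + FPN) pre-trained on COCO.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box predictor: background + healthy + PVD + RD.
num_classes = 4
in_features = detector.roi_heads.box_predictor.cls_score.in_features
detector.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
```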
2. Embedding-based Approach: In this approach, the goal was to use feature embeddings from a Vision Transformer model to classify retinal images. We employed a pre-trained model together with its AutoFeatureExtractor, which converts each image into an embedding that represents its learned features. After obtaining embeddings for each class (healthy, RD, and PVD), we applied K-Means clustering to classify images from the test set (sketched below), the idea being that the embeddings would cluster naturally according to the inherent features of each class. However, this unsupervised method faced challenges due to the complexity of the retinal images and the subtle differences between classes, and the high dimensionality of the embeddings may have reduced the effectiveness of the clustering algorithm. Consequently, this approach did not yield significant improvements in classification performance, as the clusters did not align well with the actual class labels.
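A minimal sketch of this pipeline is given below; pooling the [CLS] token as the image embedding and the final cluster-to-label matching step are assumptions about implementation details not spelled out above:

```python
import torch
from sklearn.cluster import KMeans
from transformers import AutoFeatureExtractor, ViTModel

checkpoint = "google/vit-base-patch16-224-in21k"
extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
backbone = ViTModel.from_pretrained(checkpoint)

@torch.no_grad()
def embed(pil_images):
    # Pool the [CLS] token of the final hidden state into one vector per image.
    inputs = extractor(images=pil_images, return_tensors="pt")
    return backbone(**inputs).last_hidden_state[:, 0].numpy()

# embeddings = embed(test_images)
# clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embeddings)
# Cluster indices are arbitrary and must still be matched to the class labels.
```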
3. Object Detection-based Classification: In this approach, a full object detection pipeline was set up using TensorFlow, leveraging an SSD MobileNet v2 architecture. The training involved creating TensorFlow records from a custom dataset of labeled images, and the pipeline aimed to train a detection model capable of identifying and localizing abnormalities in medical images. Although SSD MobileNet is highly optimized for real-time detection, it is not always the best choice for problems that require high classification accuracy, especially in medical domains. The primary task here was to classify images into different categories (healthy, retinal detachment, vitreous detachment), not detect objects within them. Consequently, object detection models like SSD added unnecessary complexity, focusing on localizing bounding boxes rather than improving classification accuracy. This complexity may have resulted in reduced accuracy for the final classification predictions, as the model’s focus was split between detection and classification. Additionally, the training setup with TensorFlow required generating `.tfrecord` files and setting up a custom pipeline, which introduced further overhead without providing clear benefits for this specific task.
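For reference, the `.tfrecord` generation mentioned above amounts to serializing `tf.train.Example` protos; a minimal sketch follows (bounding-box fields are omitted, and the field names simply follow the TensorFlow Object Detection API convention):

```python
import tensorflow as tf

def make_tf_example(encoded_jpeg: bytes, class_id: int, class_text: bytes) -> tf.train.Example:
    # Minimal subset of the fields the TF Object Detection API expects per image.
    feature = {
        "image/encoded": tf.train.Feature(bytes_list=tf.train.BytesList(value=[encoded_jpeg])),
        "image/format": tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"jpeg"])),
        "image/object/class/label": tf.train.Feature(int64_list=tf.train.Int64List(value=[class_id])),
        "image/object/class/text": tf.train.Feature(bytes_list=tf.train.BytesList(value=[class_text])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

# with tf.io.TFRecordWriter("train.tfrecord") as writer:
#     writer.write(make_tf_example(jpeg_bytes, 2, b"retD").SerializeToString())
```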
Results and Interpretation
The confusion matrix presented in Figure 3 shows that the model has been optimized to minimize misclassification, particularly for the health-critical categories of retinal detachment (Class 2) and posterior vitreous detachment (Class 1). This deliberate weighting is a practical design choice, given the medical context: prioritizing the correct identification of abnormal cases (Classes 1 and 2) ensures that no pathological cases are overlooked, thus reducing the risk of missing critical conditions.
The model’s performance was evaluated using key classification metrics, including sensitivity, specificity, and accuracy for each class (Table 1).
Table 1 Class-Wise Performance of the AI Model
Class-Wise Performance
- Class 1 (PVD): The model achieved a sensitivity of 98.21%, a specificity of 95.16%, and an accuracy of 98.21%.
- Class 2 (Retinal Detachment): The model exhibited 96.55% sensitivity, 100% specificity, and an accuracy of 97.22%.
- Class 0 (Healthy): The sensitivity was 92.86%, specificity 95.42%, and accuracy 95.83%.
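These per-class figures follow from the confusion matrix in a one-vs-rest fashion; a minimal sketch of the computation (assuming rows of the matrix are true classes and columns are predicted classes):

```python
import numpy as np

def per_class_metrics(cm: np.ndarray) -> dict:
    # cm[i, j] = number of cases with true class i predicted as class j.
    total = cm.sum()
    metrics = {}
    for k in range(cm.shape[0]):
        tp = cm[k, k]
        fn = cm[k, :].sum() - tp
        fp = cm[:, k].sum() - tp
        tn = total - tp - fn - fp
        metrics[k] = {
            "sensitivity": tp / (tp + fn),  # recall for class k
            "specificity": tn / (tn + fp),  # true-negative rate for class k
            "accuracy": (tp + tn) / total,  # one-vs-rest accuracy
        }
    return metrics
```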
As a result, the model avoids falsely classifying any cases of retinal detachment or posterior vitreous detachment as “healthy” (Class 0), which aligns well with its goal of automating the triage of normal cases. This approach is tailored for the specific use case of efficiently sorting through large volumes of ocular scans. By correctly identifying normal eyes (Class 0) and focusing attention on the scans that may require more detailed review (Classes 1 and 2), the model aids in reducing repetitive and resource-heavy tasks for clinicians. This is especially beneficial in environments with high patient volumes, where minimizing the workload for specialists can lead to faster diagnosis and more focused clinical attention on potentially serious conditions.
Despite the model’s strong overall performance, some confusion exists between Class 1 (PVD) and Class 2 (RD). As seen in the matrix, seven instances of Class 2 were misclassified as Class 1, indicating a potential overlap in the features used to distinguish between these two conditions. The model achieves an accuracy of 98.21% for Class 1 (PVD) and 97.22% for Class 2 (RD), showing that while the classification of PVD is robust, there is room for improvement in correctly identifying retinal detachment.
Class-wise performance further emphasizes the model’s strength in triaging healthy cases. The residual confusion between PVD and RD likely reflects the limited number of samples for these conditions, suggesting that improving data diversity and volume, particularly for less-represented categories, could significantly enhance the model’s performance. A more balanced dataset should reduce confusion between similar classes and, in turn, improve overall reliability and accuracy in a clinical setting.
In summary, the confusion matrix reinforces that the model is a reliable tool for streamlining eye scan classification, particularly by accurately ruling out healthy cases. As the dataset grows and becomes more balanced, we expect further improvements in the classification of the more subtle pathological cases, particularly the differentiation between RD and PVD at the optic disc.
Discussion
In the present study, we introduce a deep learning model designed to detect membranes at the disc and formulate a differential diagnosis between two common posterior segment pathologies at this location, namely PVD and RD. The model demonstrates robust performance across all classes. Specifically, it achieved a sensitivity of 98.21%, specificity of 95.16%, and accuracy of 98.21% for PVD; for RD, it showed a sensitivity of 96.55%, specificity of 100%, and accuracy of 97.22%; and for healthy eyes (Class 0), a sensitivity of 92.86%, specificity of 95.42%, and accuracy of 95.83%.
Around 2% to 3% of all emergency department (ED) visits are for the evaluation of ocular problems.1 If not quickly diagnosed and addressed, these conditions may cause irreversible visual loss. Among these ocular emergencies, PVD and RD are frequently encountered.2–5 Because these entities present with common symptoms such as flashes and floaters, they may be clinically mistaken for one another if not adequately evaluated. Clinically, it is crucial to distinguish between these two disorders because patients with PVD can usually be monitored in the outpatient department, whereas those with RD may require emergency surgical intervention. Patients with ocular symptoms undergo a detailed examination by the ophthalmologist, including visual acuity (VA), intraocular pressure (IOP), anterior segment, and dilated posterior segment evaluation. In the emergency room, however, a dilated examination, which is frequently necessary for proper visualization, is not routinely carried out. Utilizing ultrasound in this patient population has become crucial due to the known drawbacks of conventional examination. Furthermore, many EPs practice in settings where ophthalmic consultation is difficult to access. Ocular ultrasound can be used in conjunction with the history and the ocular examination to provide supplementary information that supports an accurate diagnosis and an optimal management strategy.
Ocular ultrasonography is widely utilized in the ED and in ophthalmic practice to identify and distinguish different pathologies.1 Because it is portable, involves no radiation exposure, and takes little time, POCUS is well suited to the ED.6 Additionally, since the eye is fluid-filled and superficial, employing POCUS to evaluate ocular conditions is gaining importance. The performance and interpretation of an ocular ultrasound are user-dependent processes. Distinguishing between various ocular disorders is a challenging task that clinicians frequently face when deciding which patients can be discharged safely and which need immediate management.16 Even so, there are times when the sonographic findings of these two disorders are difficult to interpret. A precise diagnosis can also be difficult because PVD and RD can coexist.16 Baker et al performed a cross-sectional study in which 13 emergency physicians evaluated 390 ultrasonographic scans and found an accuracy of 74.6% for the detection of RD.16 In our study, the model achieved a markedly better accuracy of 97.22%. Moreover, we evaluated only membranes that were attached to the disc; these scenarios are challenging, especially for beginners.
In our study, we developed an automatic ocular pathology detection system that can detect diseases such as RD and PVD specifically at the OD level. Based on our results, the model detects these conditions with up to 98.21% accuracy for PVD and 97.22% for RD. Chen et al achieved a substantially lower accuracy of 0.73 in classifying RD, vitreous detachment (VD), and vitreous hemorrhage (VH); however, their model was developed to screen for a broader set of abnormalities, including RD, VD, VH, other lesions, and normal patients.11 Also, the limited studies that have evaluated the role of AI in ocular ultrasonography were performed using B-scan images depicting membranous lesions at any location across the vitreous cavity. In practice, there is a wealth of non-AI-based literature that can help with this, particularly work focusing on echogenicity and membrane movement. The problem arises when the membrane is attached to the disc: at this level, there may be no discernible difference in movement between these membranes, and the echogenicity may be similar, particularly if the PVD is dense or stained with blood. In contrast, our model was focused specifically on detecting and differentiating the two commonly encountered and confused disease entities, RD and PVD, at the level of the OD. In future work, we plan to extend the model to detect more eye diseases (VH, intraocular tumors, etc) by collecting and processing appropriate data. Furthermore, the application can be deployed across clinics and hospitals with few doctors, aiding in the detection of these ocular pathologies.
The AI model has the potential to assist with distinguishing between an RD and a PVD, presenting a considerable opportunity to improve both the referral process and the management of this frequent diagnostic challenge. A more accurate and efficient diagnostic approach could improve patient outcomes and reduce the burden on the healthcare system. The use of AI for ultrasound diagnosis can also yield significant cost savings by reducing unnecessary referrals: with an accurate diagnosis of RD or PVD, healthcare providers can develop more targeted treatment plans, resulting in fewer hospitalizations and readmissions. This is particularly important for RD and PVD, as misdiagnosis or delayed diagnosis can lead to complications, prolonged treatment, and higher healthcare costs. Furthermore, implementing this technology can help address the shortage of ophthalmologists and increase access to care for patients in underserved areas, especially in lower-middle income countries (LMICs).
Conclusion
To conclude, our deep learning algorithm for ocular ultrasound not only demonstrates high sensitivity, specificity, and accuracy across all evaluated classes but also shows significant promise as a diagnostic aid in emergency settings. This advancement could lead to more efficient patient management, better resource allocation, and improved overall care in ocular emergencies.
Disclosure
The authors report no conflicts of interest in this work.
References
1. Lahham S, Shniter I, Thompson M, et al. Point-of-care ultrasonography in the diagnosis of retinal detachment, vitreous hemorrhage, and vitreous detachment in the emergency department. JAMA Netw Open. 2019;2(4):e192162. doi:10.1001/jamanetworkopen.2019.2162
2. Mitry D, Charteris DG, Fleck BW, Campbell H, Singh J. The epidemiology of rhegmatogenous retinal detachment: geographical variation and clinical associations. Br J Ophthalmol. 2010;94(6):678–684. doi:10.1136/bjo.2009.157727
3. Pastor JC, Fernández I, Rodríguez de la Rúa E, et al. Surgical outcomes for primary rhegmatogenous retinal detachments in phakic and pseudophakic patients: the Retina 1 Project, report 2. Br J Ophthalmol. 2008;92(3):378–382. doi:10.1136/bjo.2007.129437
4. Hikichi T, Hirokawa H, Kado M, et al. Comparison of the prevalence of posterior vitreous detachment in whites and Japanese. Ophthalmic Surg. 1995;26(1):39–43.
5. Hollands H, Johnson D, Brox AC, Almeida D, Simel DL, Sharma S. Acute-onset floaters, and flashes: is this patient at risk for retinal detachment? JAMA. 2009;302(20):2243–2249. doi:10.1001/jama.2009.1714
6. Kim DJ, Francispragasam M, Docherty G, et al. Test characteristics of point-of-care ultrasound for the diagnosis of retinal detachment in the emergency department. Acad Emerg Med. 2019;26(1):16–22. doi:10.1111/acem.13454
7. Shinar Z, Chan L, Orlinsky M. Use of ocular ultrasound for the evaluation of retinal detachment. J Emerg Med. 2011;40(1):53–57. doi:10.1016/j.jemermed.2009.06.001
8. Drukker L, Noble JA, Papageorghiou AT. Introduction to artificial intelligence in ultrasound imaging in obstetrics and gynecology. Ultrasound Obstet Gynecol. 2020;56(4):498–505. doi:10.1002/uog.22122
9. Schmidt-Erfurth U, Sadeghipour A, Gerendas BS, Waldstein SM, Bogunović H. Artificial intelligence in retina. Prog Retin Eye Res. 2018;67:1–29. doi:10.1016/j.preteyeres.2018.07.004
10. Ting DSW, Pasquale LR, Peng L, et al. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol. 2019;103(2):167–175. doi:10.1136/bjophthalmol-2018-313173
11. Chen D, Yu Y, Zhou Y, et al. A deep learning model for screening multiple abnormal findings in ophthalmic ultrasonography (with video). Transl Vis Sci Technol. 2021;10(4):22. doi:10.1167/tvst.10.4.22
12. Koh JEW, Raghavendra U, Gudigar A, et al. A novel hybrid approach for automated detection of retinal detachment using ultrasound images. Comput Biol Med. 2020;120:103704. doi:10.1016/j.compbiomed.2020.103704
13. Song J, Chai YJ, Masuoka H, et al. Ultrasound image analysis using deep learning algorithm for the diagnosis of thyroid nodules. Medicine. 2019;98(15):e15133. doi:10.1097/MD.0000000000015133
14. Chan HP, Samala RK, Hadjiiski LM, Zhou C. Deep learning in medical image analysis. Adv Exp Med Biol. 2020;1213:3–21.
15. Chowdhury AR, Chatterjee T, Banerjee S. A Random Forest classifier-based approach in the detection of abnormalities in the retina. Med Biol Eng Comput. 2019;57(1):193–203. doi:10.1007/s11517-018-1878-0
16. Baker N, Amini R, Situ-LaCasse EH, et al. Can emergency physicians accurately distinguish retinal detachment from posterior vitreous detachment with point-of-care ocular ultrasound? Am J Emerg Med. 2018;36(5):774–776. doi:10.1016/j.ajem.2017.10.010