The deepeye® Suite
New Therapy Insights for Ophthalmology
Treatment Planning Support
We develop 1-click predictive AI solutions for personalised treatment.
deepeye® TPS
AI-based medical device software (clinical decision support system | CDSS)
Treatment planning support (TPS) for ophthalmologists
Applicable for patients with neovascular age-related macular degeneration (nAMD)
Input: pseudonymized optical coherence tomography (OCT) scans
Output: AI report with additional information for treatment planning
retinal disease activity,
highlighted biomarkers for explainability (XAI),
and prediction of future treatment needs
Disclaimer: AI does not replace a doctor's decision. TPS is not yet available for sale and clinical use.
Retinal Therapy Research
We analyze retinal therapy datasets for new insights. Our approach builds on the proven methodology of one of the leading reading and study centers in the EU, which has been involved in all major phase III and IV trials of anti-VEGF agents in recent years, and takes it to the next level.
deepeye® Research
Flexible AI platform for research
AI boost for retinal therapy study teams (pharma, device, payors)
Applicable for study support (phase IV) and retrospective analysis (phase II-III, quality assurance samples)
Input: large, heterogeneous OCT + clinical datasets
Output: anonymized, standardized data + AI report
anonymized, standardized and enhanced imaging data
unique insights into therapy patterns and potential
AI report and publications (on request)
Disclaimer: deepeye® Research is not a disease screening or pure biomarker detection tool, but rather a retinal therapy study tool.
Our Partners
These partnerships enable us to bring our second-opinion approach to more ophthalmologists globally - faster!
How deepeye® works (AI FAQ)
AI: We define AI as "augmented intelligence" that enhances the capabilities of doctors, just like the AMA does.
Dataset, quality: This mostly refers to the "ground truth" (see below). It also refers to what is included in the dataset, e.g. an OCT volume scan with 10 B-scans vs. 49 B-scans, accompanying clinical data (was it a follow-up or IVI visit?), and what treatment regimen was used. For instance, it is very difficult to predict the optimal treatment interval using T&E data only, as there is often (initial) overtreatment while extending out.
Dataset, quantity: Size matters with AI. We already have access to close to 1 million OCT volume scans from over 70 ophthalmic practices and several RCTs, allowing us to select the right dataset for most retinal diseases and for most research and clinical questions.
Dataset, longitudinal: For screening for eye diseases or the initial diagnosis of a patient, this is less relevant. For predicting treatment response, though, we at deepeye usually only use data if we have at least 20 OCTs per subject, which usually translates to 4-5 years in therapy. In many busy university hospitals with large amounts of OCT data, such longitudinal data is scarce to non-existent due to technical and organisational barriers. We have partnered with those few ophthalmic centers of excellence who started as early as 2011(!) to track their patients' progress. If a patient has received more than 150(!) IVIs, an AI can understand much better how similar patients will respond to treatment.
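To make the selection rule above concrete, here is a minimal, hypothetical sketch (function name, data shape, and threshold handling are illustrative assumptions, not deepeye's actual pipeline) that keeps only subjects with at least 20 OCT scans:

```python
from collections import Counter

def longitudinal_subjects(scans, min_scans=20):
    """Return the subject IDs that have at least `min_scans` OCT scans.

    `scans` is a list of (subject_id, scan_date) pairs - a stand-in for
    a real scan index.
    """
    counts = Counter(subject for subject, _ in scans)
    return {subject for subject, n in counts.items() if n >= min_scans}

# Toy example: subject "A" has 25 scans, subject "B" only 3.
scans = [("A", i) for i in range(25)] + [("B", i) for i in range(3)]
print(longitudinal_subjects(scans))  # → {'A'}
```

Only subject "A" survives the filter; "B" lacks the longitudinal depth needed for treatment-response prediction.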
Device-agnostic: We claim to be device-agnostic: unlike almost all other ophthalmic AI models out there, ours can process data from different kinds of imaging machines, e.g. Heidelberg Engineering SPECTRALIS, ZEISS Cirrus, Topcon Triton / Maestro, Optopol Copernicus Revo, Canon Xephilio ... There are some OCT manufacturers and older OCT models that we still have trouble with, but we can read >95% of the OCTs out there.
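One common way to implement device-agnosticism is a dispatch layer that routes each file to a vendor-specific reader and normalizes the result into one internal format. The sketch below is purely illustrative - the reader functions, suffix mapping, and `load_oct` name are our assumptions, not a deepeye API:

```python
import os

def read_spectralis(path):
    """Placeholder reader for Heidelberg Engineering exports."""
    return {"vendor": "spectralis", "path": path}

def read_cirrus(path):
    """Placeholder reader for ZEISS Cirrus exports."""
    return {"vendor": "cirrus", "path": path}

# Map file suffixes to vendor-specific readers (illustrative only).
READERS = {".e2e": read_spectralis, ".img": read_cirrus}

def load_oct(path):
    """Dispatch to a vendor reader, returning one common structure."""
    suffix = os.path.splitext(path)[1].lower()
    if suffix not in READERS:
        raise ValueError(f"unsupported OCT format: {suffix!r}")
    return READERS[suffix](path)
```

Adding support for a new device then means writing one more reader and registering its suffix, without touching the downstream AI models.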
Generalizable AI: Generalizability is why it is very difficult for a clinic or company to develop its own AI and share it with others: such an AI often works only for one OCT machine (in most cases the Heidelberg Engineering SPECTRALIS), one therapy regimen (T&E), one population (white Caucasians, aged 70-85), one diagnosis (nAMD) ... We at deepeye have combined diverse datasets and used state-of-the-art techniques to make sure our AI can adapt to most "domains". Always make sure your AI can be used for the "domains" you intend to use it for!
Ground truth: An AI can only be as good as its input (training dataset). "Bullsh*** in, bullsh*** out", as AI researchers say. So the people who select the training dataset and check its validity are key to AI performance. In our case, several certified readers (retina specialists with expert knowledge and training) review the data and add annotations that serve as ground truth. This allows our AI to achieve better results than a single ophthalmologist - but the AI can never beat the ground truth. If you use an AI, always ask for the ground truth!
Validation: An AI needs to be validated on a representative dataset, i.e. one as diverse as the population the AI will be used on. If you use an AI, make sure it is validated - just like ours.
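"Representative" can be made testable by comparing subgroup shares in the validation set against the target population. The sketch below is a hypothetical criterion (the function, subgroup labels, and 5% tolerance are illustrative assumptions, not a regulatory standard):

```python
from collections import Counter

def is_representative(validation_labels, population_shares, tol=0.05):
    """Check that each subgroup's share in the validation set is within
    `tol` of its share in the target population.
    """
    n = len(validation_labels)
    counts = Counter(validation_labels)
    return all(abs(counts.get(group, 0) / n - share) <= tol
               for group, share in population_shares.items())

# Toy example: the target population is 60% "device_A", 40% "device_B".
val = ["device_A"] * 58 + ["device_B"] * 42
print(is_representative(val, {"device_A": 0.6, "device_B": 0.4}))  # True
```

The same idea extends to age bands, ethnicity, diagnosis, and therapy regimen - every "domain" the AI is supposed to cover should appear in the validation set in roughly its real-world proportion.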