The conversation about AI in healthcare has often stayed at the level of speculation: what might become possible, what researchers are working on, what a hypothetical future looks like. The reality in 2026 is that AI diagnostic tools are already deployed in thousands of clinical settings across the United States, cleared by the FDA and used on real patients in real appointments, without most of those patients knowing it. The FDA has cleared over 950 AI-enabled medical devices as of 2026, the vast majority of them in radiology and cardiology, and the pace of new clearances has accelerated significantly in the past two years. What is being built is not a future scenario. It is a current one, and it is outpacing public understanding.

In radiology, AI tools are being used to flag potential abnormalities in X-rays, CT scans, and MRIs before a radiologist reviews the image. Some tools prioritize the reading queue, pushing cases with suspected findings to the top so that critical results do not wait. Others assist the radiologist during reading by highlighting regions of the image where the model has detected something worth attention. The radiologist still makes the final call, but the workflow has changed. The model has already looked at the image before the human does. That is a meaningful shift in how diagnostic decisions are organized, and it affects both the efficiency and the error profile of the process in ways that are actively being studied.
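
To make the queue change concrete, here is a minimal sketch of how a triage tool might reorder a radiology worklist by model output. The field names, scores, and threshold-free sorting logic are hypothetical illustrations, not any vendor's interface; deployed products integrate with PACS worklists rather than exposing anything this simple.

```python
from dataclasses import dataclass

@dataclass
class Study:
    study_id: str
    ai_suspicion: float   # model's estimated probability of a critical finding
    arrival_order: int    # position in the original first-come-first-served queue

def prioritized_worklist(studies):
    """Order the reading queue by AI suspicion, highest first,
    falling back to arrival order to break ties."""
    return sorted(studies, key=lambda s: (-s.ai_suspicion, s.arrival_order))

# The suspected critical finding jumps ahead of routine studies.
queue = prioritized_worklist([
    Study("CT-1042", ai_suspicion=0.12, arrival_order=1),
    Study("CT-1043", ai_suspicion=0.91, arrival_order=2),
    Study("CT-1044", ai_suspicion=0.55, arrival_order=3),
])
for study in queue:
    print(study.study_id, study.ai_suspicion)
```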

Cardiology is another area where AI clearances are concentrated. Algorithms that analyze electrocardiograms can detect patterns associated with atrial fibrillation, heart failure, and other conditions, with accuracy that in several studies has matched or exceeded cardiologists' performance on specific tasks. Apple's ECG feature on the Apple Watch uses an FDA-cleared algorithm; what is available in hospital settings is considerably more sophisticated. AI tools are being used to analyze echocardiogram images, flag patients at elevated cardiac risk, and identify structural heart conditions from imaging that might have been missed or delayed under purely manual review protocols.
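
One intuition behind ECG-based atrial fibrillation detection is that AFib produces irregularly spaced heartbeats. The toy sketch below flags high variability in R-R intervals, the time between successive beats. The threshold and the beat timings are invented for illustration; cleared algorithms are trained deep models operating on the raw waveform, not a two-line statistic.

```python
import numpy as np

def rr_irregularity(r_peak_times_s, cv_threshold=0.10):
    """Flag irregular beat spacing, one classic signal of atrial
    fibrillation. cv_threshold is an illustrative value, not a
    clinically validated cutoff."""
    rr = np.diff(r_peak_times_s)       # seconds between successive beats
    cv = np.std(rr) / np.mean(rr)      # coefficient of variation of R-R intervals
    return cv, cv > cv_threshold

# Beat times built from interbeat intervals: steady vs. erratic rhythm.
regular = np.cumsum([0.80, 0.81, 0.79, 0.80, 0.82, 0.80])
irregular = np.cumsum([0.62, 1.05, 0.71, 0.95, 0.58, 1.12])

print(rr_irregularity(regular))    # low variability, not flagged
print(rr_irregularity(irregular))  # high variability, flagged
```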

Dermatology has seen rapid AI adoption in screening contexts. Tools trained on large image datasets can assess photographs of skin lesions for features associated with malignancy, with accuracy that has held up in clinical validation studies. Ophthalmology has FDA-cleared AI tools designed specifically to detect diabetic retinopathy from retinal photographs, enabling screening in primary care settings that do not have an ophthalmologist on staff. That last application is particularly significant for underserved communities where access to specialists is limited: an AI tool that can screen accurately in a primary care or community health setting extends the reach of detection to patients who would not otherwise get a timely diagnosis.
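
A detail worth understanding about screening deployments is the operating threshold. A screening tool in primary care is typically tuned for high sensitivity: it refers borderline cases to a specialist rather than risk missing disease, at the cost of more false referrals. The sketch below shows that tradeoff on invented scores; the numbers are illustrative, not drawn from any cleared product.

```python
import numpy as np

def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity at a given operating threshold."""
    flagged = scores >= threshold
    sensitivity = flagged[labels == 1].mean()     # true cases caught
    specificity = (~flagged)[labels == 0].mean()  # healthy correctly cleared
    return sensitivity, specificity

# Hypothetical model scores for eight patients (label 1 = has retinopathy).
scores = np.array([0.05, 0.15, 0.22, 0.35, 0.48, 0.60, 0.81, 0.90])
labels = np.array([0, 0, 0, 1, 0, 1, 1, 1])

# Lowering the threshold catches every case but refers more healthy patients.
for t in (0.5, 0.2):
    print(f"threshold {t}: sensitivity/specificity = {sens_spec(scores, labels, t)}")
```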

The questions that come with these tools are real and deserve direct engagement rather than being buried in footnotes. AI models trained on patient data reflect whatever biases were present in the training data. Early studies of some dermatology AI tools found worse performance on darker skin tones, which traced back to unrepresentative training sets. The FDA now requires bias analysis as part of the clearance submission for many categories of AI diagnostic tools, and post-market monitoring is part of the regulatory framework, but the implementation of that monitoring is uneven. Patients from groups that medicine has historically underserved are the ones most at risk from diagnostic AI that performs differently across populations.
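
What a bias analysis concretely involves is a stratified performance check: the same metric computed separately for each subgroup. Here is a minimal sketch, with fabricated labels and Fitzpatrick-style skin tone groupings used purely for illustration.

```python
import numpy as np

def sensitivity_by_subgroup(y_true, y_pred, subgroup):
    """Fraction of true positives caught, computed per subgroup.
    A large gap between groups is the failure mode bias audits look for."""
    results = {}
    for group in np.unique(subgroup):
        positives = (subgroup == group) & (y_true == 1)
        results[group] = y_pred[positives].mean() if positives.any() else float("nan")
    return results

# Fabricated example: 1 = malignant lesion, prediction 1 = tool flagged it.
y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1])
y_pred = np.array([1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1])
skin_tone = np.array(["I-II"] * 6 + ["V-VI"] * 6)

print(sensitivity_by_subgroup(y_true, y_pred, skin_tone))
# {'I-II': 0.833..., 'V-VI': 0.5} is the kind of disparity seen in early tools
```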

The liability question is also unresolved in ways that will matter as these tools become more central to clinical workflows. When an AI tool misses a finding and a patient is harmed, who is responsible? The hospital that deployed the tool? The manufacturer that built it? The radiologist who reviewed the image after the AI did? The answers are not yet settled in law, and cases making their way through the courts will establish precedents that shape how liability is distributed. That uncertainty affects how aggressively institutions are willing to rely on AI outputs versus treating them as advisory inputs.

For patients, the practical takeaway is that asking your provider what AI tools are used in interpreting your diagnostic imaging is a reasonable question. Understanding whether a tool has been validated in populations similar to yours is relevant to how you weigh its outputs. And the broader conversation about what these tools do well and poorly is worth your attention: the policy frameworks governing AI in medicine are still being written, and public understanding of what is at stake will shape whether those frameworks end up serving patients well.

---