Artificial Intelligence (AI) has demonstrated its ability to improve the front line of healthcare. Hospitals and other healthcare organizations are rapidly embracing its use in performing clinical diagnosis and suggesting treatments.
AI-driven image analysis can quickly and accurately flag specific anomalies for a radiologist's review. AI guides doctors in choosing the best cancer treatments based on mutations in patients' tumors. AI-assisted robotic orthopedic surgery can reduce surgical complications compared with surgeons operating alone. The potential of AI to improve outcomes is great.
But there are ethical implications to using machine-learning tools. Among the concerns are the biases of the companies and healthcare systems that create the algorithms, and the clinical recommendations those algorithms produce. Their motives may not align with those of physicians. What if an algorithm is designed to save money? Or what if different treatment decisions are made depending on insurance status or the patient's ability to pay?
Inserting an algorithm's predictions into the patient-physician relationship introduces a third party, turning a two-party relationship into one among the patient, the physician, and the healthcare system. It changes the dynamics of responsibility and the expectation of confidentiality.
Physicians and software engineers need to work together to design these tools, remaining aware of the credibility of data sources and taking care to prevent machines from misinterpreting the data.
If machine-learning tools are to be integrated into clinical care, all partners must be bound by core ethical principles of respect for the patient.