Walking the tightrope of artificial intelligence guidelines in clinical practice

Over the past few months, there has been a wave of digital health guidelines and whitepapers issued by regulators, institutes, and organisations worldwide. In the field of artificial intelligence (AI), EU guidelines, published in April, promote the development of trustworthy AI across all disciplines, while a US Food and Drug Administration (FDA) whitepaper proposes a regulatory framework for continually evolving software in health care. Guidelines from the National Institute for Health and Care Excellence (NICE) address the level of evidence required for a new digital health intervention, and NHSX and Public Health England have both announced their intention to produce their own AI guidelines.
AI approaches in medical practice need to be lawful, ethical, and robust. According to the EU guidelines for trustworthy AI, there are seven key requirements for ethical AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental wellbeing; and accountability. The guidelines include tiered, risk-based guidance on tool validation for the prevention of harm, recommendations to make models explainable as well as fair and unbiased, and measures to ensure that human autonomy is maintained. They highlight that AI approaches should augment the actions of humans through transparent decision pathways rather than black-box decision making.