Although AI can help tackle widespread disparities in health care, it can also worsen them. Racial biases in AI systems show how minority groups are underserved by technology, and one reason for these disparities may be as simple as how the data is collected.
Data collection can inadvertently deepen the health inequalities experienced by minority ethnic groups, particularly when training data is concentrated in a single ethnic group. Excluding or under-representing a specific ethnic group can lead to missed diagnoses. For example, studies have shown that many AI technologies prioritize white patients over patients from minority groups.
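One way this kind of skew can be surfaced before a model is ever trained is a simple representation audit of the dataset's demographic labels. The sketch below is purely illustrative: the record counts and the 15% threshold are hypothetical assumptions, not figures from the article.

```python
from collections import Counter

# Hypothetical patient records: each entry is the self-reported
# ethnicity attached to one training example. The counts below are
# invented to illustrate a skewed dataset.
records = ["White"] * 820 + ["Black"] * 90 + ["Asian"] * 60 + ["Other"] * 30

def representation_report(groups, min_share=0.15):
    """Return each group's share of the dataset and flag any group
    whose share falls below min_share (an illustrative threshold)."""
    counts = Counter(groups)
    total = len(groups)
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = (share, share < min_share)
    return report

for group, (share, flagged) in representation_report(records).items():
    status = "UNDER-REPRESENTED" if flagged else "ok"
    print(f"{group:6s} share={share:.1%} {status}")
```

In this made-up example every non-white group falls below the threshold, which is exactly the condition under which a model can learn patterns that fail those groups.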
For health care to benefit all ethnic groups, AI technology should be designed for equitable outcomes. Poor data collection in research and development (R&D) produces a lack of diversity that requires urgent attention if health disparities are to be reduced. In addition, patient and public feedback and input are essential at all critical stages of AI algorithm development. Further research and transparency are also needed regarding access to and use of AI technology, especially among minority groups, so that no one is left behind on its benefits.
In addition, governments need to develop a regulatory framework that ensures algorithms are tested on the relevant minority ethnic groups to reduce bias in data sets. Researchers have also proposed AI legislation and regulation that protect data and citizens' rights, to tackle health disparities and reduce potential bias.
SOURCE: OPEN ACCESS GOVERNMENT