70. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e.V.
How AI works and the ethical responsibility of consensus and self-referential circular reasoning: Some considerations for the development of medical devices
Text
Introduction: Hardly any development is likely to have changed healthcare as much as the introduction of artificial intelligence (AI). For instance, the FDA [1] currently lists 1016 medical devices with AI applications, most of which have only been approved in recent years. Accordingly, there is already extensive literature and, in particular, there are regulatory frameworks on the ethical implications of using AI in healthcare (e.g., the EU AI Act, WHO guidance, FG-AI4H DEL01). This also includes the technical issue of recursive data usage when training AI models, which is discussed under terms such as self-training bias, feedback loops, or data contamination [2], [3]. However, in the current climate of optimism, little attention is often paid to the ethical dimension and to reflection on the user side. The rapid and dynamic technical development carries the risk of information asymmetry for a large number of users and of a lack of understanding of the underlying technical processes and, especially, of the epistemological and normative foundations of AI development [4]. At least two ethical prerequisites must be reflected upon more strongly. Firstly, the development of AI in the traditional sense (e.g., as defined by the FDA [1]) requires a known outcome and a corresponding (training) data set. Hence, the content of the training data set is of existential importance for the validity of the results. Moreover, training data sets are restricted by the accessible data sources [5]. The data source therefore implies a bias in the sense of a selection bias toward the social behavior of the majority. As a result, the technical processes involved in the development of AI tend conceptually toward moral norms based on agreement or consensus, i.e., the concept of moral consensus. Without knowledge of these mechanisms, ethical reflection is often lacking.
Secondly, until now, users of medical devices have paid little attention to the fact that the training data sets on which the further development of AI is based are increasingly being generated by AI itself. In this respect, from a dynamic perspective, a self-referential circular argument arises in the development of new AI. This article therefore tries to provide a basic, user-centered understanding of the technical and practical prerequisites for the development of medical devices with AI.
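The dynamic described above can be illustrated with a deliberately simplified simulation (not taken from the article itself): a "model" is repeatedly retrained on data it generated, and each generation slightly over-represents the current majority class. The function name and the amplification factor are illustrative assumptions, not part of any real system.

```python
def next_generation(p, amplification=0.1):
    """One synthetic-data generation step in a self-referential loop:
    the model over-represents the current majority class by a small
    factor (a stand-in for consensus/self-training bias), and the
    resulting share is clamped to the valid range [0, 1]."""
    p = p + amplification * (p - 0.5)
    return min(max(p, 0.0), 1.0)

# Start with a modest 60/40 majority in the real-world data.
p = 0.6
history = [p]
for _ in range(50):
    p = next_generation(p)  # retrain on the model's own output
    history.append(p)

# The minority position disappears entirely over the generations.
print(round(history[0], 2), round(history[-1], 2))  # 0.6 1.0
```

The point of the sketch is not the specific numbers but the direction: once synthetic data feeds back into training, an initial consensus bias compounds rather than averages out, which is why the circularity deserves explicit ethical reflection.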
Methods: Methodologically, the first step of the article is to define AI and to use the Framingham Heart Study as an example of how an AI model is trained with frequently used procedures (i.e., supervised and unsupervised learning). In particular, the importance of the training data set is emphasized. In a second step, the concepts of moral consensus and self-referential circularity are explained and discussed normatively.
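To make the dependence on the training data concrete, a minimal supervised-learning sketch follows. It uses a 1-nearest-neighbour classifier on invented (age, systolic blood pressure) records with a binary cardiovascular-risk label; the records are hypothetical toy data only, loosely inspired by the kind of outcome data the Framingham Heart Study provides, not the actual cohort.

```python
# Hypothetical training records: ((age, systolic BP), risk label).
training_data = [
    ((45, 120), 0), ((50, 130), 0), ((61, 155), 1),
    ((66, 160), 1), ((39, 118), 0), ((70, 170), 1),
]

def predict(record):
    """Label a new record with the label of its closest training
    example (squared Euclidean distance). The prediction is entirely
    determined by what the training set contains -- the selection-bias
    point made above."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_data, key=lambda item: dist(item[0], record))
    return nearest[1]

print(predict((63, 158)))  # closest to (61, 155) -> 1
print(predict((42, 122)))  # closest to (45, 120) -> 0
```

Because the known outcome labels drive every prediction, any gap or skew in the accessible data source propagates directly into the model's decisions.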
Results: This understanding not only reduces information asymmetries on the user side with reference to AI development in healthcare, but also sensitizes users to the underlying learning mechanisms of AI and their epistemological and normative foundations.
Conclusion: Accordingly, a deeper understanding of and ethical awareness regarding data quality and the validity of the (ethical) decision-making systems of medical devices with AI applications are imparted to users.
The authors declare that they have no competing interests.
The authors declare that an ethics committee vote is not required.
References
[1] U.S. Food and Drug Administration (FDA), editor. Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices. 2025 [cited 2025 Mar 5]. Available from: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
[2] Norori N, Faraci F, Tzovara A, Aellen FM, Hu Q. Addressing bias in big data and AI for health care: A call for open science. Patterns. 2021;2.
[3] Chaudhry Z, Choudhury A. Large language models and user trust: Consequence of self-referential learning loop and the deskilling of health care professionals. Journal of Medical Internet Research. 2024;26.
[4] He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nature Medicine. 2019;25(1):30–6.
[5] Ashok M, Madan R, Joha A, Sivarajah U. Ethical framework for Artificial Intelligence and Digital technologies. Int J Inf Manag. 2022;62:102433.



