
70th Annual Meeting of the Deutsche Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e.V. (GMDS)

September 7–11, 2025
Jena, Germany

Meeting Abstract

Feedback Loops Instead of Endless Loops: Qualitative Analysis from Multi-Site Research Infrastructure Deployments

Anne Seim - Institute for Medical Informatics and Biometry, Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
Cigdem Klengel - Institute for Medical Informatics and Biometry, Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
Caroline Glathe - Institute for Medical Informatics and Biometry, Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
Martin Bartos - Department of Informatics, Klinikum Chemnitz gGmbH, Chemnitz, Germany
Katja Hoffmann - Center for Medical Informatics / Institute for Medical Informatics and Biometry (IMB), Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany


Introduction: As digital health solutions scale from university hospitals to non-university healthcare settings, the effective deployment of research infrastructures becomes a critical challenge. In Germany, the Medical Informatics Initiative (MII) and the National Network of University Medicine (NUM) have established foundational infrastructures [1]. Building on the MII/NUM structures, the digital research hub MiHUBx is developing services that non-university healthcare providers can use to make real-world data available for research purposes (grant number 01ZZ2101) [2]. When complex applications must be deployed without clearly structured, easy-to-understand instructions, users often experience frustration, which significantly hinders adoption and effective use [3]. This exploratory single-case study investigates how feedback loops during deployment can improve the usability, comprehensibility, and sustainability of installation processes, ultimately supporting broader adoption.

Methods: A qualitative, observational methodology was employed, using the think-aloud method to assess user experience during the deployment of two core MiHUBx software packages: FHIR-to-OMOP Install and Research-Data-to-FHIR Install (both v1.0.0). One participant, an IT-skilled individual simulating the role of a local administrator at a non-academic partner site, performed the installation in a controlled setting, while an observer recorded verbal feedback, facial expressions, and interactions. Audio recordings and field notes were analyzed using inductive qualitative content analysis [4] in MAXQDA. The analysis was guided by deployment-related usability heuristics (e.g., clarity, orientation, error prevention) [5].

Results: The analysis revealed a consistent set of challenges grouped into seven main themes: external requirements, allocation of roles, contact pathways, contractual aspects, onboarding, automation/sustainability, and documentation. Key issues included insufficient description of the required technical environments, unclear distribution of responsibilities, lack of contact information, and poor onboarding strategies. The participant encountered ambiguities regarding procedural steps and expectations; a notable quote from the observation, "I am not sure what to expect after this step in the description", illustrates the ambiguity of the instructional materials. The participant's feedback informed several targeted improvements, including clearer documentation, a deployment checklist, and revised role communication protocols.

Discussion: The findings confirm that deployment challenges are both technical and organizational. Variability in infrastructure, documentation quality, and stakeholder expectations impeded the reproducibility of the process. These insights support the need for flexible, feedback-oriented deployment strategies rather than rigid, standardized procedures. Feedback loops proved valuable for uncovering usability gaps and aligning deployment strategies. Spatial separation between partners and short project cycles, exacerbated by constraints such as the Wissenschaftszeitvertragsgesetz (WissZeitVG, the German act on fixed-term academic contracts), were also found to undermine continuity. While the study is limited to a single case and cannot claim generalizability, its exploratory nature enabled the identification of context-specific barriers and the generation of hypotheses for further testing. Building on these initial findings, a follow-up phase will expand testing to additional participants at multiple sites in order to validate and refine the improved deployment tools under varied real-world conditions. This study highlights the critical role of continuous feedback mechanisms in optimizing the deployment of research infrastructures across diverse healthcare settings. Feedback loops not only help address recurring technical errors but also align documentation and support systems with user needs. The findings argue against a one-size-fits-all model and underscore the importance of context-sensitive, sustainable deployment strategies.

The authors declare that they have no competing interests.

The authors declare that a positive ethics committee vote has been obtained.


References

[1] Semler SC, Wissing F, Heyder R. German Medical Informatics Initiative. Methods Inf Med. 2018;57:e50–6. DOI: 10.3414/ME18-03-0003
[2] Krefting D, Bavendiek U, Fischer J, Marx G, Molinnus D, Panholzer T, et al. Die digitalen Fortschrittshubs Gesundheit – Gemeinsame Datennutzung über die Universitätsmedizin hinaus. Bundesgesundheitsbl. 2024;67(6):701–9. DOI: 10.1007/s00103-024-03883-9
[3] Hadlington L, Scase MO. End-user frustrations and failures in digital technology: exploring the role of Fear of Missing Out, Internet addiction and personality. Heliyon. 2018:e00872. DOI: 10.1016/j.heliyon.2018.e00872
[4] Mayring P, Fenzl T. Qualitative Inhaltsanalyse. In: Baur N, Blasius J, editors. Handbuch Methoden der empirischen Sozialforschung. Wiesbaden: Springer VS; 2019. DOI: 10.1007/978-3-658-21308-4_42
[5] Nielsen J. Enhancing the explanatory power of usability heuristics. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '94). New York (NY): Association for Computing Machinery; 1994. p. 152–8. DOI: 10.1145/191666.191729