<?xml version="1.0" encoding="iso-8859-1" standalone="no"?>
<!DOCTYPE GmsArticle SYSTEM "http://www.egms.de/dtd/2.0.34/GmsArticle.dtd">
<GmsArticle xmlns:xlink="http://www.w3.org/1999/xlink">
  <MetaData>
    <Identifier>25gmds136</Identifier>
    <IdentifierDoi>10.3205/25gmds136</IdentifierDoi>
    <IdentifierUrn>urn:nbn:de:0183-25gmds1361</IdentifierUrn>
    <ArticleType>Meeting Abstract</ArticleType>
    <TitleGroup>
      <Title language="en">Evaluating synthetic eye images for ophthalmological assessments: Developing a systematic survey</Title>
    </TitleGroup>
    <CreatorList>
      <Creator>
        <PersonNames>
          <Lastname>Kondragunta</Lastname>
          <LastnameHeading>Kondragunta</LastnameHeading>
          <Firstname>Jyothsna</Firstname>
          <Initials>J</Initials>
        </PersonNames>
        <Address>
          <Affiliation>Universit&#228;tsklinikum Jena, Jena, Germany</Affiliation>
        </Address>
        <Creatorrole corresponding="no" presenting="no">author</Creatorrole>
      </Creator>
      <Creator>
        <PersonNames>
          <Lastname>Spreckelsen</Lastname>
          <LastnameHeading>Spreckelsen</LastnameHeading>
          <Firstname>Cord</Firstname>
          <Initials>C</Initials>
        </PersonNames>
        <Address>
          <Affiliation>Universit&#228;tsklinikum Jena, Jena, Germany</Affiliation>
        </Address>
        <Creatorrole corresponding="no" presenting="no">author</Creatorrole>
      </Creator>
    </CreatorList>
    <PublisherList>
      <Publisher>
        <Corporation>
          <Corporatename>German Medical Science GMS Publishing House</Corporatename>
        </Corporation>
        <Address>D&#252;sseldorf</Address>
      </Publisher>
    </PublisherList>
    <SubjectGroup>
      <SubjectheadingDDB>610</SubjectheadingDDB>
      <Keyword language="en">synthetic data</Keyword>
      <Keyword language="en">survey</Keyword>
      <Keyword language="en">medical informatics applications</Keyword>
    </SubjectGroup>
    <DatePublishedList>
      <DatePublished>20251103</DatePublished>
    </DatePublishedList>
    <Language>engl</Language>
    <License license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
      <AltText language="en">This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License.</AltText>
      <AltText language="de">Dieser Artikel ist ein Open-Access-Artikel und steht unter den Lizenzbedingungen der Creative Commons Attribution 4.0 License (Namensnennung).</AltText>
    </License>
    <SourceGroup>
      <Meeting>
        <MeetingId>M0631</MeetingId>
        <MeetingSequence>136</MeetingSequence>
        <MeetingCorporation>Deutsche Gesellschaft f&#252;r Medizinische Informatik, Biometrie und Epidemiologie</MeetingCorporation>
        <MeetingName>70. Jahrestagung der Deutschen Gesellschaft f&#252;r Medizinische Informatik, Biometrie und Epidemiologie e. V. (GMDS)</MeetingName>
        <MeetingTitle></MeetingTitle>
        <MeetingSession>PS 6: Synthetic data, privacy &#38; consent</MeetingSession>
        <MeetingCity>Jena</MeetingCity>
        <MeetingDate>
          <DateFrom>20250907</DateFrom>
          <DateTo>20250911</DateTo>
        </MeetingDate>
      </Meeting>
    </SourceGroup>
    <ArticleNo>Abstr. 264</ArticleNo>
  </MetaData>
  <OrigData>
    <TextBlock name="Text" linked="yes">
      <MainHeadline>Text</MainHeadline><Pgraph><Mark1>Introduction:</Mark1> The creation of digital avatars to anonymize personal health data is essential for enabling privacy-preserving data use <TextLink reference="1"></TextLink>. This survey focuses on assessing the clinical relevance of synthetically generated medical images through evaluations by multiple participants. By verifying the realism and utility of these AI-generated images, the study establishes their potential for meaningful integration into clinical practice.</Pgraph><Pgraph><Mark1>State of the art:</Mark1> Generative-AI (Gen-AI) technologies such as transformer-based architectures have revolutionized medical image synthesis by providing realistic representations. Prior studies demonstrated the use of these models for retinal image generation <TextLink reference="2"></TextLink>, while <TextLink reference="3"></TextLink> conducted AI robustness testing to verify the synthesis process. Although some studies exist, proper validation of generated synthetic images in ophthalmology applications has been overlooked. Additionally, AI-based realism scoring models have been utilized for general image evaluation, but their effectiveness in clinical assessments remains underexplored <TextLink reference="4"></TextLink>. This study addresses these limitations by gathering human evaluations of AI-generated synthetic images.</Pgraph><Pgraph><Mark1>Concept:</Mark1> A systematic, structured, and interactive online survey was created on an open-source platform, targeting ophthalmologists, medical imaging specialists, and AI researchers. Involving humans in the evaluation captures the realism of AI-generated synthetic images more precisely than relying on AI alone.
This survey was designed to facilitate comprehensive assessment by allowing participants to evaluate generated eye images through grading, differentiation, ranking, and identification of synthetic versus real images.</Pgraph><Pgraph>Our goal was to select evaluation methods based on the individuals&#8217; ability to provide insights into image realism and diagnostic potential. Validation mechanisms (e.g. sentinel questions, randomized question order) ensure survey quality, thereby enhancing the reliability of the findings.</Pgraph><Pgraph><Mark1>Implementation:</Mark1> We chose LimeSurvey, hosted at FSU Jena, for this purpose due to its support for complex branching logic, randomized question ordering, and integration of validation mechanisms. The designed survey allows quantitative analysis across participants while maintaining flexibility for future extension. The survey comprises three modules:</Pgraph><Pgraph><OrderedList><ListItem level="1" levelPosition="1" numString="1.">Bulbar injection rating &#8211; synthetic images are graded using a clinical reference scale (Jenvis Grading Score for bulbar injections)</ListItem><ListItem level="1" levelPosition="2" numString="2.">Real vs. synthetic identification &#8211; participants classify images as real or AI-generated</ListItem><ListItem level="1" levelPosition="3" numString="3.">Image similarity matching &#8211; participants select the real image closest to a synthetic image, using a guidance score</ListItem></OrderedList></Pgraph><Pgraph>AI-driven realism assessments (ViT, DINO) are planned for validation against the human evaluations. Different guidance scores are considered for evaluating the images across the different groups.</Pgraph><Pgraph><Mark1>Lessons learned:</Mark1> Developing a systematic and structured survey for the validation of synthetic images is complex. It requires appropriate questions that capture the participants&#8217; experience.
The survey also needs to account for cognitive load as well as bias and framing effects on participants. We consider the choice of image scoring scale an important factor in the validation of images, as it entails trade-offs between granularity and interpretability. We collected initial results for the survey within our working group (medical informatics background), and the feedback was integrated into the final version of the survey. The survey is now ready for validation, and the results will be published accordingly.</Pgraph><Pgraph>The authors declare that they have no competing interests.</Pgraph><Pgraph>The authors declare that an ethics committee vote is not required.</Pgraph></TextBlock>
    <References linked="yes">
      <Reference refNo="1">
        <RefAuthor>Susser D</RefAuthor>
        <RefAuthor>Schiff DS</RefAuthor>
        <RefAuthor>Gerke S</RefAuthor>
        <RefAuthor>Cabrera LY</RefAuthor>
        <RefAuthor>Cohen IG</RefAuthor>
        <RefAuthor>Doerr M</RefAuthor>
        <RefAuthor>Harrod J</RefAuthor>
        <RefAuthor>Kostick-Quenet K</RefAuthor>
        <RefAuthor>McNealy J</RefAuthor>
        <RefAuthor>Meyer MN</RefAuthor>
        <RefAuthor>Price WN 2nd</RefAuthor>
        <RefAuthor>Wagner JK</RefAuthor>
        <RefTitle>Synthetic Health Data: Real Ethical Promise and Peril</RefTitle>
        <RefYear>2024</RefYear>
        <RefJournal>Hastings Cent Rep</RefJournal>
        <RefPage>8-13</RefPage>
        <RefTotal>Susser D, Schiff DS, Gerke S, Cabrera LY, Cohen IG, Doerr M, Harrod J, Kostick-Quenet K, McNealy J, Meyer MN, Price WN 2nd, Wagner JK. Synthetic Health Data: Real Ethical Promise and Peril. Hastings Cent Rep. 2024 Sep;54(5):8-13. DOI: 10.1002&#47;hast.4911</RefTotal>
        <RefLink>http:&#47;&#47;dx.doi.org&#47;10.1002&#47;hast.4911</RefLink>
      </Reference>
      <Reference refNo="2">
        <RefAuthor>Wang Z</RefAuthor>
        <RefAuthor>Lim G</RefAuthor>
        <RefAuthor>Ng WY</RefAuthor>
        <RefAuthor>Tan TE</RefAuthor>
        <RefAuthor>Lim J</RefAuthor>
        <RefAuthor>Lim SH</RefAuthor>
        <RefAuthor>Foo V</RefAuthor>
        <RefAuthor>Lim J</RefAuthor>
        <RefAuthor>Sinisterra LG</RefAuthor>
        <RefAuthor>Zheng F</RefAuthor>
        <RefAuthor>Liu N</RefAuthor>
        <RefTitle>Synthetic artificial intelligence using generative adversarial network for retinal imaging in detection of age-related macular degeneration</RefTitle>
        <RefYear>2023</RefYear>
        <RefJournal>Frontiers in Medicine</RefJournal>
        <RefPage>1184892</RefPage>
        <RefTotal>Wang Z, Lim G, Ng WY, Tan TE, Lim J, Lim SH, Foo V, Lim J, Sinisterra LG, Zheng F, Liu N. Synthetic artificial intelligence using generative adversarial network for retinal imaging in detection of age-related macular degeneration. Frontiers in Medicine. 2023 Jun 22;10:1184892.</RefTotal>
      </Reference>
      <Reference refNo="3">
        <RefAuthor>Coyner AS</RefAuthor>
        <RefAuthor>Chen JS</RefAuthor>
        <RefAuthor>Chang K</RefAuthor>
        <RefAuthor>Singh P</RefAuthor>
        <RefAuthor>Ostmo S</RefAuthor>
        <RefAuthor>Chan RP</RefAuthor>
        <RefAuthor>Chiang MF</RefAuthor>
        <RefAuthor>Kalpathy-Cramer J</RefAuthor>
        <RefAuthor>Campbell JP</RefAuthor>
        <RefAuthor>Imaging and Informatics in Retinopathy of Prematurity Consortium</RefAuthor>
        <RefTitle>Synthetic medical images for robust, privacy-preserving training of artificial intelligence: application to retinopathy of prematurity diagnosis</RefTitle>
        <RefYear>2022</RefYear>
        <RefJournal>Ophthalmology Science</RefJournal>
        <RefPage>100126</RefPage>
        <RefTotal>Coyner AS, Chen JS, Chang K, Singh P, Ostmo S, Chan RP, Chiang MF, Kalpathy-Cramer J, Campbell JP, Imaging and Informatics in Retinopathy of Prematurity Consortium. Synthetic medical images for robust, privacy-preserving training of artificial intelligence: application to retinopathy of prematurity diagnosis. Ophthalmology Science. 2022 Jun 1;2(2):100126.</RefTotal>
      </Reference>
      <Reference refNo="4">
        <RefAuthor>Caron M</RefAuthor>
        <RefAuthor>Touvron H</RefAuthor>
        <RefAuthor>Misra I</RefAuthor>
        <RefAuthor>J&#233;gou H</RefAuthor>
        <RefAuthor>Mairal J</RefAuthor>
        <RefAuthor>Bojanowski P</RefAuthor>
        <RefAuthor>Joulin A</RefAuthor>
        <RefTitle>Emerging properties in self-supervised vision transformers</RefTitle>
        <RefYear>2021</RefYear>
        <RefTotal>Caron M, Touvron H, Misra I, J&#233;gou H, Mairal J, Bojanowski P, Joulin A. Emerging properties in self-supervised vision transformers. In: Proceedings of the IEEE&#47;CVF international conference on computer vision 2021. p. 9650-9660.</RefTotal>
      </Reference>
    </References>
    <Media>
      <Tables>
        <NoOfTables>0</NoOfTables>
      </Tables>
      <Figures>
        <NoOfPictures>0</NoOfPictures>
      </Figures>
      <InlineFigures>
        <NoOfPictures>0</NoOfPictures>
      </InlineFigures>
      <Attachments>
        <NoOfAttachments>0</NoOfAttachments>
      </Attachments>
    </Media>
  </OrigData>
</GmsArticle>