<?xml version="1.0" encoding="iso-8859-1" standalone="no"?>
<!DOCTYPE GmsArticle SYSTEM "http://www.egms.de/dtd/2.0.34/GmsArticle.dtd">
<GmsArticle xmlns:xlink="http://www.w3.org/1999/xlink">
  <MetaData>
    <Identifier>26dga218</Identifier>
    <IdentifierDoi>10.3205/26dga218</IdentifierDoi>
    <IdentifierUrn>urn:nbn:de:0183-26dga2185</IdentifierUrn>
    <ArticleType>Meeting Abstract</ArticleType>
    <TitleGroup>
      <Title language="en">Automatic speech performance measures for assessing audiogenic speech disorders using speech processing technologies</Title>
    </TitleGroup>
    <CreatorList>
      <Creator>
        <PersonNames>
          <Lastname>Tuschen</Lastname>
          <LastnameHeading>Tuschen</LastnameHeading>
          <Firstname>Laura</Firstname>
          <Initials>L</Initials>
        </PersonNames>
        <Address>
          <Affiliation>Fraunhofer IDMT, HSA, Oldenburg, Deutschland</Affiliation>
        </Address>
        <Creatorrole corresponding="no" presenting="yes">author</Creatorrole>
      </Creator>
      <Creator>
        <PersonNames>
          <Lastname>Rabe</Lastname>
          <LastnameHeading>Rabe</LastnameHeading>
          <Firstname>Marlene</Firstname>
          <Initials>M</Initials>
        </PersonNames>
        <Address>
          <Affiliation>Carl von Ossietzky Universit&#228;t Oldenburg, Communication Acoustics, Oldenburg, Deutschland</Affiliation>
        </Address>
        <Creatorrole corresponding="no" presenting="no">author</Creatorrole>
      </Creator>
      <Creator>
        <PersonNames>
          <Lastname>Meyer</Lastname>
          <LastnameHeading>Meyer</LastnameHeading>
          <Firstname>Bernd T.</Firstname>
          <Initials>BT</Initials>
        </PersonNames>
        <Address>
          <Affiliation>Carl von Ossietzky Universit&#228;t Oldenburg, Communication Acoustics, Oldenburg, Deutschland</Affiliation>
        </Address>
        <Creatorrole corresponding="no" presenting="no">author</Creatorrole>
      </Creator>
    </CreatorList>
    <PublisherList>
      <Publisher>
        <Corporation>
          <Corporatename>German Medical Science GMS Publishing House</Corporatename>
        </Corporation>
        <Address>D&#252;sseldorf</Address>
      </Publisher>
    </PublisherList>
    <SubjectGroup>
      <SubjectheadingDDB>610</SubjectheadingDDB>
    </SubjectGroup>
    <DatePublishedList>
      <DatePublished>20260302</DatePublished>
    </DatePublishedList>
    <Language>engl</Language>
    <License license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
      <AltText language="en">This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License.</AltText>
      <AltText language="de">Dieser Artikel ist ein Open-Access-Artikel und steht unter den Lizenzbedingungen der Creative Commons Attribution 4.0 License (Namensnennung).</AltText>
    </License>
    <SourceGroup>
      <Meeting>
        <MeetingId>M0642</MeetingId>
        <MeetingSequence>218</MeetingSequence>
        <MeetingCorporation>Deutsche Gesellschaft f&#252;r Audiologie e. V.</MeetingCorporation>
        <MeetingName>28. Jahrestagung der Deutschen Gesellschaft f&#252;r Audiologie</MeetingName>
        <MeetingTitle></MeetingTitle>
        <MeetingSession>Postersession</MeetingSession>
        <MeetingCity>Oldenburg</MeetingCity>
        <MeetingDate>
          <DateFrom>20260304</DateFrom>
          <DateTo>20260306</DateTo>
        </MeetingDate>
      </Meeting>
    </SourceGroup>
    <ArticleNo>218</ArticleNo>
  </MetaData>
  <OrigData>
    <TextBlock name="Text" linked="yes">
      <MainHeadline>Text</MainHeadline><Pgraph><Mark1>Question:</Mark1> Adults may experience speech disorders of functional or organic origin. Speech is then characterized by unclearly pronounced sounds, a hoarse or rough voice, uncontrolled speech intensity, and altered rhythm, melody, and speech rate <TextLink reference="1"></TextLink>, <TextLink reference="2"></TextLink>. Such disorders can limit conversations and even lead to social exclusion, as speech production ability is reduced on the communicative level <TextLink reference="1"></TextLink>. The aim of this project was to investigate whether existing algorithm-based approaches and models from speech and hearing processing can be used to analyse and evaluate disordered human speech production, particularly in adults with speech disorders caused by hearing loss, so-called audiogenic speech disorders.</Pgraph><Pgraph><Mark1>Method: </Mark1>Audio datasets of a test group and a reference group were compared. Each group consisted of n&#61;15 adults reading sentences with a simple grammatical structure from a matrix test. The adults in the test group had a pre- or peri-lingual hearing impairment and exhibited different degrees of audiogenic speech and voice disorders. The subjects in the reference group had no hearing impairment. For each group, a set of 446 sentences was analysed using different acoustic, algorithm-based approaches measuring speech intelligibility (ASR), listener effort (LEAP), and speech naturalness (NISQA) <TextLink reference="3"></TextLink>. In addition, a listening experiment was conducted in which participants with normal hearing rated a subset of the audio recordings: perceived listener effort and naturalness on rating scales, and speech intelligibility as the number of correctly recognized words.</Pgraph><Pgraph><Mark1>Results: </Mark1>The acoustic analyses generally show differences between the reference and test groups. Within the acoustic measurements, speech intelligibility and naturalness are lower on average for the test group, while listener effort is rated higher on average. All analysis results show greater variance for the test group than for the reference group. The listening experiment shows a similar pattern: on average, speech from the test group is rated as less natural than that of the reference group, and perceived listener effort is likewise rated higher for the test group. In addition, the variance of the ratings is higher in the test group than in the reference group. No group differences can be shown in the intelligibility ratings.</Pgraph><Pgraph><Mark1>Conclusion: </Mark1>The results suggest that, in a group comparison, changes in speech associated with audiogenic speech disorders can be measured automatically using algorithm-based approaches and models from speech and hearing processing. A more extensive listening experiment is planned to improve comparability with the speech processing models and to test reliability. In the future, the measurements are to be transferred to spontaneous speech and to other speech disorders, such as dysarthria in Parkinson&#39;s disease or childhood pronunciation disorders.</Pgraph></TextBlock>
    <References linked="yes">
      <Reference refNo="2">
        <RefAuthor>Baumg&#228;rtner A</RefAuthor>
        <RefAuthor>Staiger A</RefAuthor>
        <RefTitle>Neurogene St&#246;rungen der Sprache und des Sprechens</RefTitle>
        <RefYear>2020</RefYear>
        <RefJournal>Neurologie Up2date</RefJournal>
        <RefPage>155&#8211;173</RefPage>
        <RefTotal>Baumg&#228;rtner A, Staiger A. Neurogene St&#246;rungen der Sprache und des Sprechens. Neurologie Up2date. 2020;03:155&#8211;173.</RefTotal>
      </Reference>
      <Reference refNo="1">
        <RefAuthor>Lehner K</RefAuthor>
        <RefAuthor>Ziegler W</RefAuthor>
        <RefTitle>Indicators of Communication Limitation in Dysarthria and Their Relation to Auditory-Perceptual Speech Symptoms: Construct Validity of the KommPaS Web App</RefTitle>
        <RefYear>2022</RefYear>
        <RefJournal>Journal of Speech, Language, and Hearing Research</RefJournal>
        <RefPage>22&#8211;42</RefPage>
        <RefTotal>Lehner K, Ziegler W. Indicators of Communication Limitation in Dysarthria and Their Relation to Auditory-Perceptual Speech Symptoms: Construct Validity of the KommPaS Web App. Journal of Speech, Language, and Hearing Research. 2022;65:22&#8211;42.</RefTotal>
      </Reference>
      <Reference refNo="3">
        <RefAuthor>Mittag G</RefAuthor>
        <RefAuthor>M&#246;ller S</RefAuthor>
        <RefTitle>Deep Learning Based Assessment of Synthetic Speech Naturalness</RefTitle>
        <RefYear>2020</RefYear>
        <RefBookTitle>Proceedings Interspeech 2020.</RefBookTitle>
        <RefPage></RefPage>
        <RefTotal>Mittag G, M&#246;ller S. Deep Learning Based Assessment of Synthetic Speech Naturalness. In: Proceedings Interspeech 2020.</RefTotal>
      </Reference>
    </References>
    <Media>
      <Tables>
        <NoOfTables>0</NoOfTables>
      </Tables>
      <Figures>
        <NoOfPictures>0</NoOfPictures>
      </Figures>
      <InlineFigures>
        <NoOfPictures>0</NoOfPictures>
      </InlineFigures>
      <Attachments>
        <NoOfAttachments>0</NoOfAttachments>
      </Attachments>
    </Media>
  </OrigData>
</GmsArticle>