<?xml version="1.0" encoding="iso-8859-1" standalone="no"?>
<!DOCTYPE GmsArticle SYSTEM "http://www.egms.de/dtd/2.0.34/GmsArticle.dtd">
<GmsArticle xmlns:xlink="http://www.w3.org/1999/xlink">
  <MetaData>
    <Identifier>26dga001</Identifier>
    <IdentifierDoi>10.3205/26dga001</IdentifierDoi>
    <IdentifierUrn>urn:nbn:de:0183-26dga0016</IdentifierUrn>
    <ArticleType>Meeting Abstract</ArticleType>
    <TitleGroup>
      <Title language="en">Future hearables: biosignal-adaptive AI for speech communication</Title>
    </TitleGroup>
    <CreatorList>
      <Creator>
        <PersonNames>
          <Lastname>Schultz</Lastname>
          <LastnameHeading>Schultz</LastnameHeading>
          <Firstname>Tanja</Firstname>
          <Initials>T</Initials>
        </PersonNames>
        <Address>
          <Affiliation>Universit&#228;t Bremen, Cognitive Systems Lab, Bremen, Deutschland</Affiliation>
        </Address>
        <Creatorrole corresponding="no" presenting="yes">author</Creatorrole>
      </Creator>
    </CreatorList>
    <PublisherList>
      <Publisher>
        <Corporation>
          <Corporatename>German Medical Science GMS Publishing House</Corporatename>
        </Corporation>
        <Address>D&#252;sseldorf</Address>
      </Publisher>
    </PublisherList>
    <SubjectGroup>
      <SubjectheadingDDB>610</SubjectheadingDDB>
    </SubjectGroup>
    <DatePublishedList>
      <DatePublished>20260302</DatePublished>
    </DatePublishedList>
    <Language>engl</Language>
    <License license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
      <AltText language="en">This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License.</AltText>
      <AltText language="de">Dieser Artikel ist ein Open-Access-Artikel und steht unter den Lizenzbedingungen der Creative Commons Attribution 4.0 License (Namensnennung).</AltText>
    </License>
    <SourceGroup>
      <Meeting>
        <MeetingId>M0642</MeetingId>
        <MeetingSequence>001</MeetingSequence>
        <MeetingCorporation>Deutsche Gesellschaft f&#252;r Audiologie e. V.</MeetingCorporation>
        <MeetingName>28. Jahrestagung der Deutschen Gesellschaft f&#252;r Audiologie</MeetingName>
        <MeetingTitle></MeetingTitle>
        <MeetingSession>Plenarvortrag 1</MeetingSession>
        <MeetingCity>Oldenburg</MeetingCity>
        <MeetingDate>
          <DateFrom>20260304</DateFrom>
          <DateTo>20260306</DateTo>
        </MeetingDate>
      </Meeting>
    </SourceGroup>
    <ArticleNo>001</ArticleNo>
  </MetaData>
  <OrigData>
    <TextBlock name="Text" linked="yes">
      <MainHeadline>Text</MainHeadline><Pgraph>Hearables are a technology for young and old alike. Young people typically wear them as headphones connected to smartphones to play podcasts, filter noise, and enable hands-free calling. In later life, hearables more commonly take the form of hearing aids that improve hearing. These digital devices, centered on the ear, have great potential, especially if we can teach them to sense human needs and deliver contextual information. In my talk, I will show that biosignals are the body&#39;s data streams, providing real-time insights into human behavior and, when interpreted in context, into human needs as well. The processing and interpretation of biosignals offers an inside perspective on human physical and mental activities, complementing the traditional approach of merely observing humans. I will present illustrative cases from our work, ranging from silent and neural communication interfaces that convert muscle and brain signals into audible speech to the interpretation of human attention and cognitive decline. I argue that AI technology that uses biosignals to recognize human needs will turn hearables into powerful speech communication systems of the future &#8211; the next generation of the babelfish.</Pgraph></TextBlock>
    <Media>
      <Tables>
        <NoOfTables>0</NoOfTables>
      </Tables>
      <Figures>
        <NoOfPictures>0</NoOfPictures>
      </Figures>
      <InlineFigures>
        <NoOfPictures>0</NoOfPictures>
      </InlineFigures>
      <Attachments>
        <NoOfAttachments>0</NoOfAttachments>
      </Attachments>
    </Media>
  </OrigData>
</GmsArticle>