Alzheimer’s Drug Discovery Foundation (ADDF) Announces Launch of SpeechDx, the First Study to Generate a Voice Database Paired with Clinical Data for Early Detection of Alzheimer’s Disease

The Alzheimer’s Drug Discovery Foundation’s (ADDF) Diagnostic Accelerator (DxA) announced the launch of SpeechDx, the first longitudinal study aimed at creating the largest repository of speech and voice data to help accelerate the detection, diagnosis, and monitoring of Alzheimer’s disease. Recorded voice samples from the study will be paired with clinical and biomarker data that academic, biotech, and industry partners can leverage to develop algorithms for new speech-based biomarkers.

Scientific literature has shown that subtle changes in speech and language patterns can indicate and predict the cognitive decline associated with Alzheimer’s, but to date, tools have struggled to capture these alterations systematically. Recent technological advances, together with the boom in artificial intelligence and machine learning, have made it possible to record these difficult-to-detect changes, laying the groundwork for the use of speech and voice as a new class of digital biomarkers for Alzheimer’s.

“Speech is a complex cognitive process that contains important information about how your brain is functioning, and scientific evidence shows us speech may hold the key to early, accurate, and non-invasive detection of Alzheimer’s disease,” said Howard Fillit, MD, Co-Founder and Chief Science Officer of the ADDF. “This study will help develop and validate voice-based biomarkers, expanding our existing arsenal of neuroimaging, peripheral blood, and digital biomarkers—all of which are crucial to delivering the right drugs to the right patients at the right time.”

SpeechDx is poised to develop a paired voice and clinical database that will provide the research community with harmonized data needed to generate speech-based diagnostic algorithms. These efforts will not only expand the existing portfolio of peripheral and digital biomarkers, but also further the DxA’s vision to accelerate the development of non-invasive and accessible biomarkers and diagnostic tools for the early detection of Alzheimer’s.

“To our knowledge, SpeechDx will comprise the largest-in-size and longest-in-duration curated repository of voice and clinical ground truth in dementia research,” said Lampros Kourtis, PhD, DxA SpeechDx Program Manager at Gates Ventures. “Our hope is that scientists can use this dataset to train, validate, and benchmark algorithms that detect and monitor cognitive decline at early stages of disease development.”

The study will span a three-year period across five clinical sites in the U.S. and Spain: Boston University, Emory University, the Barcelona Brain Health Initiative, the Barcelonaβeta Brain Research Center (BBRC), and the Ace Alzheimer Center Barcelona. Data will be collected in three languages (English, Spanish, and Catalan) from a diverse pool of 2,650 participants spanning the full brain health spectrum, from cognitively healthy to early Alzheimer’s. Study participants will be given handheld tablets with the SpeechDx app pre-installed to capture their voice data.

The study sites and partners share a collective vision to create a gold-standard speech and language Alzheimer’s dataset that will be accessible to biotech, pharma, and research communities through the Alzheimer’s Disease Data Initiative’s (ADDI) data and analysis sharing platform. This warehouse of data will be fundamental to the development of new digital voice biomarkers.

“Machine learning algorithms are being integrated into every aspect of medical research, but the outputs are only as good as the data they are built on,” said Niranjan Bose, PhD, Managing Director of Health & Life Sciences at Gates Ventures. “Implementation and development of the SpeechDx program will streamline the collection of high-quality speech data and ultimately complement the existing array of available biomarkers, including expanding the portfolio of digital tools used to predict and prevent the onset of the disease early on.”

Participants will be provided with a custom speech-collection app, developed by the DxA, that uses entirely open-source speech tasks. Each participant’s voice recordings will be paired with clinical data and harmonized across all sites. This integration of clinical and digital data will serve as ground truth for machine learning, enabling higher accuracy. The collected data will be stored on ADDI’s platform, which will function as a digital repository containing approximately 2,584 hours of voice data for use in creating algorithms for Alzheimer’s detection and monitoring.