
Predicting MCI Status From Multimodal Language Data Using Cascaded Classifiers

Article in a scientific journal
Authors: Kathleen Fraser
Kristina Lundholm Fors
Marie Eckerström
Fredrik Öhman
Dimitrios Kokkinakis
Published in: Frontiers in Aging Neuroscience
Volume: 11
Issue: 205
ISSN: 1663-4365
Publication year: 2019
Published at: Institute of Neuroscience and Physiology
Department of Swedish
Centre for Ageing and Health (AgeCap)
Language: English
Links: dx.doi.org/10.3389/fnagi.2019.00205
Keywords: mild cognitive impairment, language, speech, eye-tracking, machine learning, multimodal, early, Alzheimer's disease, spontaneous speech, picture description, memory, integration, decline, identification, comprehension, recognition
Subject categories: Neurosciences, Language technology (computational linguistics), Linguistics

Abstract

Recent work has indicated the potential utility of automated language analysis for the detection of mild cognitive impairment (MCI). Most studies combining language processing and machine learning for the prediction of MCI focus on a single language task; here, we consider a cascaded approach to combine data from multiple language tasks. A cohort of 26 MCI participants and 29 healthy controls completed three language tasks: picture description, reading silently, and reading aloud. Information from each task is captured through different modes (audio, text, eye-tracking, and comprehension questions). Features are extracted from each mode, and used to train a series of cascaded classifiers which output predictions at the level of features, modes, tasks, and finally at the overall session level. The best classification result is achieved through combining the data at the task level (AUC = 0.88, accuracy = 0.83). This outperforms a classifier trained on neuropsychological test scores (AUC = 0.75, accuracy = 0.65) as well as the "early fusion" approach to multimodal classification (AUC = 0.79, accuracy = 0.70). By combining the predictions from the multimodal language classifier and the neuropsychological classifier, this result can be further improved to AUC = 0.90 and accuracy = 0.84. In a correlation analysis, language classifier predictions are found to be moderately correlated (rho = 0.42) with participant scores on the Rey Auditory Verbal Learning Test (RAVLT). The cascaded approach for multimodal classification improves both system performance and interpretability. This modular architecture can be easily generalized to incorporate different types of classifiers as well as other heterogeneous sources of data (imaging, metabolic, etc.).
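The paper does not reproduce its implementation here, but the cascade it describes can be sketched in a few lines. Below is a minimal illustration using scikit-learn logistic regressions and randomly generated stand-in features; all feature matrices, dimensions, and variable names are hypothetical, not the study's actual data. Out-of-fold predictions are used at each stage so that a later classifier never trains on probabilities produced by a model that saw the same participants.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 55  # 26 MCI participants + 29 healthy controls, as in the study

# Hypothetical feature matrices, one per (task, mode) pair.
# Real features would be extracted from audio, transcribed text,
# eye-tracking, and comprehension questions.
tasks = {
    "picture_description": {"audio": rng.normal(size=(n, 20)),
                            "text": rng.normal(size=(n, 15))},
    "reading_silently": {"eye_tracking": rng.normal(size=(n, 12)),
                         "comprehension": rng.normal(size=(n, 5))},
    "reading_aloud": {"audio": rng.normal(size=(n, 20)),
                      "eye_tracking": rng.normal(size=(n, 12))},
}
y = np.array([1] * 26 + [0] * 29)  # 1 = MCI, 0 = healthy control


def out_of_fold_proba(X, y):
    """Out-of-fold probability of class 1, so the next cascade
    stage only sees predictions made on held-out rows."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]


# Stage 1: one classifier per mode within each task.
# Stage 2: a task-level classifier over that task's mode predictions.
task_preds = {}
for task, modes in tasks.items():
    mode_probs = np.column_stack(
        [out_of_fold_proba(X, y) for X in modes.values()])
    task_preds[task] = out_of_fold_proba(mode_probs, y)

# Stage 3: a session-level classifier over the task-level predictions.
session_X = np.column_stack(list(task_preds.values()))
session_clf = LogisticRegression(max_iter=1000).fit(session_X, y)
print("Session-level input shape:", session_X.shape)  # (55, 3)
```

This "late fusion" design is what distinguishes the cascade from the early-fusion baseline the abstract compares against, where all features are concatenated into a single matrix for one classifier. It also suggests how the final improvement reported above could be obtained: the predictions of a separately trained neuropsychological classifier would simply be appended as one more column at the session stage.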
