Page Manager: Webmaster
Last update: 9/11/2012 3:13 PM

Explainable Machine Translation with Interlingual Trees as Certificates

Conference paper
Authors Aarne Ranta
Published in CLASP Papers in Computational Linguistics
ISSN 2002-9764
Publication year 2017
Published at Department of Computer Science and Engineering (GU)
Language English
Keywords Machine translation, Explainable AI, Grammatical Framework, Universal Dependencies
Subject categories Computational linguistics


Explainable Machine Translation (XMT) is an instance of Explainable Artificial Intelligence (XAI). An XAI program returns not only an output but also an explanation of how the output was obtained. This helps the user assess the reliability of the result, even if the AI program itself is a black box. As a promising candidate for explanations in MT, we consider interlingual meaning representations—abstract syntax trees in the sense of Grammatical Framework (GF). An abstract syntax tree encodes the translatable content in a way that enables accurate target language generation; the main problem is to find the right tree in parsing. This paper investigates a hybrid architecture where the tree is obtained by a black-box robust parser, such as a neural dependency parser. As long as the parser returns a tree from which the target language is generated in a trusted way, the tree serves as an explanation that enables the user to assess the reliability of the whole chain of translation.
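The architecture sketched in the abstract can be illustrated with a toy pipeline. The following is a minimal sketch, not GF itself: the grammar, lexicon, and function names (`Pred`, `parse_blackbox`, etc.) are hypothetical. The point is the division of labour: the parser is an untrusted black box, while linearization from the abstract syntax tree is a deterministic, trusted function, so the tree itself can serve as the certificate.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tree:
    """An interlingual abstract syntax tree (toy version)."""
    fun: str              # abstract function name, e.g. "Pred"
    args: tuple = ()      # subtrees

# Toy lexicon for a trusted linearization component (illustrative only).
LEXICON = {
    "eng": {"cat": "the cat", "fish": "the fish", "eat": "eats"},
    "swe": {"cat": "katten", "fish": "fisken", "eat": "äter"},
}

def linearize(tree: Tree, lang: str) -> str:
    """Trusted step: deterministic generation from the tree."""
    if tree.fun == "Pred":  # Pred subj verb obj -> SVO order in both toy languages
        subj, verb, obj = (linearize(t, lang) for t in tree.args)
        return f"{subj} {verb} {obj}"
    return LEXICON[lang][tree.fun]  # lexical leaf

def parse_blackbox(sentence: str) -> Tree:
    """Untrusted step: stands in for a robust black-box parser
    (e.g. a neural dependency parser followed by tree conversion)."""
    return Tree("Pred", (Tree("cat"), Tree("eat"), Tree("fish")))

def translate(sentence: str, target: str):
    tree = parse_blackbox(sentence)      # black box: may be wrong
    output = linearize(tree, target)     # trusted: faithful to the tree
    return output, tree                  # the tree doubles as the explanation

out, cert = translate("the cat eats the fish", "swe")
```

A user who doubts `out` can inspect `cert` (or its trusted English linearization) to check whether the parser captured the intended meaning, without having to trust the parser itself.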
