Title: Machine translation of English relative clauses to Chinese in commercial contracts -- a contrastive evaluation of the outputs by SYSTRAN Translate, Baidu Translate and Google Translate
Authors: Li, Yidi
Keywords: Humanities::Language::English
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Li, Y. (2022). Machine translation of English relative clauses to Chinese in commercial contracts -- a contrastive evaluation of the outputs by SYSTRAN Translate, Baidu Translate and Google Translate. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/157188

Abstract: Online machine translation (MT) systems are used by millions of people to communicate across language barriers, and their performance has been optimized with state-of-the-art neural network technology. Relative clauses pose difficulties for MT because of their distinctive syntactic features. This study investigates the quality of relative-clause translations produced by three leading MT systems (SYSTRAN Translate, Baidu Translate and Google Translate), using both human and automatic evaluation methods. For the human evaluation experiment, a bilingual corpus was first built by randomly collecting 50 English sentences containing relative clauses from commercial contracts as the source text (ST); each sentence has a published human translation (HT) and three MT outputs produced by the three online MT systems. Ten professional translators were then invited to rate the translation quality of the target text (TT) sentences (both human- and machine-generated) on a scale from 1 to 5. In parallel, the automatic evaluation metric BLEU (Bilingual Evaluation Understudy) was applied to score the MT outputs. Finally, the issues found in the machine-generated relative clauses were analysed.

The data from the human evaluation experiment showed that the quality of HT of relative clauses from English into Chinese differed significantly from that of SYSTRAN Translate and Google Translate, whereas there was no significant difference between HT and Baidu Translate. BLEU scores correlated with the human scores for the three MT outputs. When generating relative clauses, the MT tools tended to prepose attributes regardless of their length, misidentify headwords, omit relative clauses, and choose expressions with inaccurate meanings and overlong premodifiers. These findings shed light on the problems MT systems have with relative clauses, which can help developers improve their systems.

URI: https://hdl.handle.net/10356/157188
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: SoH Theses
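The abstract names BLEU as the automatic metric but does not spell out how it is computed. As a rough illustration only (the thesis's actual tooling is not stated), the following is a minimal sketch of sentence-level BLEU-4 in plain Python: modified n-gram precision for n = 1..4 combined by geometric mean, times a brevity penalty. The floor on zero n-gram counts is a crude stand-in for proper smoothing; real evaluations typically use an established implementation such as NLTK's or sacreBLEU.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams (as tuples) occurring in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, candidate, max_n=4):
    """Minimal sentence-level BLEU with uniform weights and a brevity
    penalty. Zero n-gram matches are floored at a tiny value instead of
    proper smoothing -- an illustrative simplification, not NLTK's method."""
    precisions = []
    for n in range(1, max_n + 1):
        ref_counts = ngrams(reference, n)
        cand_counts = ngrams(candidate, n)
        # Clipped (modified) precision: candidate n-grams credited at most
        # as many times as they appear in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    # Brevity penalty punishes candidates shorter than the reference.
    if len(candidate) > len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

# Hypothetical contract-style example (not from the thesis corpus):
ref = "the goods shall be delivered within thirty days".split()
print(sentence_bleu(ref, ref))  # identical sentences score 1.0
```

In the study's setup, each MT output would be scored this way against its published human translation, and the resulting BLEU scores compared with the translators' 1-to-5 ratings.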
Updated on Jun 23, 2022
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.