Title: Investigating the behavioral and neural correlates of auditory semantic processing in the Chinese language
Authors: Liu, Hengshuang
Keywords: DRNTU::Social sciences::Communication
Issue Date: 2017
Source: Liu, H. (2017). Investigating the behavioral and neural correlates of auditory semantic processing in the Chinese language. Doctoral thesis, Nanyang Technological University, Singapore.

Abstract:
Everyday social communication relies heavily on auditory semantic processing. To date, most existing models of auditory semantic processing are based on alphabetic languages such as English. Given the substantial linguistic differences between character-based languages such as Chinese and alphabetic languages, it is unclear whether and how these models apply to Chinese. This thesis proposed a preliminary model of Chinese auditory semantic processing based on a meta-analysis of past findings, and then tested the model in behavioral and neuroimaging studies. To establish a model of the regions consistently activated during Chinese auditory semantic processing, Study 1 conducted a meta-analysis synthesizing the findings of 170 past studies. The results suggest that the Chinese auditory semantic model may involve a network comprising the bilateral posterior superior temporal lobes, the left middle frontal gyrus, the left ventral inferior frontal gyrus, the left anterior superior temporal cortex, the left middle temporal gyrus, and the left occipito-temporal cortex. Interestingly, the occipito-temporal visual cortex, commonly associated with reading, emerged in the meta-analysis as part of the Chinese auditory semantic model. It is therefore worth investigating whether orthographic representation affects Chinese auditory semantic processing even when no visual characters are present. To address this question, a behavioral experiment was conducted in Study 2.

Forty-eight native speakers of Mandarin Chinese performed a synonym judgment task in both visual and auditory sessions; two of the four task conditions in each session were manipulated to differ only in the presence or absence of visual priming. After controlling for several possible confounds, reaction times in the auditory session were longer for the condition without visual priming (e.g., ‘目的’ [/mu4 di/, goal] and ‘木头’ [/mu4 tou/, wood]) than for the condition with visual priming (e.g., ‘目的’ [/mu4 di/, goal] and ‘目录’ [/mu4 lu4/, catalogue]), possibly reflecting the heavier visualization load in the former. In contrast, in the visual session, where visual information was physically available all at once and without ambiguity, the visual priming effect disappeared. This finding is likely among the first behavioral evidence that Chinese visual word form is involved in auditory semantic processing even when no character is visually presented. The model identified in the meta-analysis (Study 1) did not differentiate between word-level and sentence-level processes, each of which might recruit its own regional network. To evaluate possible differences between these two levels of processing, two separate neuroimaging data sets were used. Study 3a employed an auditory semantic-tone task to evaluate word-level processing, while Study 3b used a forward-backward passive listening task to examine sentence-level processing. Activations in the word-level network (Study 3a) were consistent with the regions proposed in the model, likely because the majority of the studies included in the meta-analysis examined word processing. Within the proposed network, the right posterior superior temporal gyrus, the left middle frontal gyrus, and the left ventral inferior frontal gyrus were functionally connected, probably to support auditory-to-lexicosemantic transformation.

In contrast to word processing, sentence-level processing engaged all regions proposed in the model except the left occipito-temporal visual cortex (Study 3b), plausibly because homophones in spoken Chinese sentences can be disambiguated by contextual and syntactic scaffolding, with little need to visualize the written form. Together, the three studies provide mutually corroborating meta-analytic, behavioral, and neurobiological evidence, and jointly propose a Chinese auditory semantic model for future research to reference and improve upon. The model applies to both word-level and sentence-level Chinese auditory semantic processing, with the exception of the left occipito-temporal cortex, which was implicated only in word-level processing. The model generally fits the classical models based on alphabetic writing systems, but with features specific to Chinese: while the left inferior parietal lobule, which underlies sublexical phonemic assembly, may contribute little to Chinese auditory semantic processing, the left middle frontal gyrus is more likely recruited for sound-to-print mapping at the whole-character level. With this model, we can advance our understanding of how linguistic, behavioral, and neurobiological representations interact during Chinese auditory semantic processing.

URI: http://hdl.handle.net/10356/72743
DOI: 10.32657/10356/72743
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: HSS Theses
Updated on May 14, 2021
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.