Your search conditions: 李量 (Li Liang)
  • The role of rhythm in auditory speech comprehension (节律在听觉言语理解中的作用)

    Subjects: Psychology >> Social Psychology | Submitted: 2023-03-28 | Cooperative journal: 《心理科学进展》 (Advances in Psychological Science)

    Abstract: For a long time, research on rhythm has focused mainly on sensory and perceptual processing, overlooking its role in speech comprehension. Oral speech, an important channel of information exchange in human society, has rich rhythmic characteristics, and its comprehension is the process by which listeners receive external speech input and derive meaning. In daily communication, auditory speech comprehension is influenced by rhythmic information at multiple scales. Common external rhythms include the following. Prosodic-structure rhythm can affect the intelligibility of auditory speech and help listeners parse sentence structure in ambiguous contexts. Contextual rhythm changes the listener's judgment of the number of words and affects the recognition of vowels and consonants within words. Body-language rhythm can alter the perceived position of stress and restore speech intelligibility. The influence of external rhythm on auditory speech comprehension extends across a wide range of auditory and non-auditory stimuli, helping listeners understand the speaker's content. The process by which the listener's brain uses external rhythms to promote or alter speech comprehension is thought to be related to internal rhythms. Internal rhythms are neural oscillations, which can represent the hierarchical characteristics of external speech input at different time scales and tend to be coupled with each other. The convergence of internal and external rhythms over time, as external rhythmic stimuli are received, is called neural entrainment. Entrainment of internal neural activity to external rhythmic stimuli can optimize the brain's processing of speech stimuli and extract discrete linguistic components from the continuous sound signal. Notably, neural entrainment is not merely a passive follower of external rhythmic information; it is also shaped by the listener's subjective regulation.
In the process of speech comprehension, top-down modulation may derive from cognitive processes such as the listener's selective attention, prior knowledge of grammar, and expectation, all of which can affect neural entrainment. When the listener attends selectively to one speech stream, the enhanced neural response to the unattended stream in the corresponding brain regions is weakened or eliminated. Neural entrainment to the components the listener expects enhances speech representation and processing. Neural entrainment also relies on the listener's prior knowledge to integrate word composition across levels and across brain areas. These active modulations make the key information in speech comprehension more likely to coincide with the optimal excitability level of neuronal-cluster activity, thereby receiving more processing resources. We believe that neural entrainment may be the key mechanism linking internal and external rhythms, which jointly affect speech comprehension. Finally, the discovery of internal and external rhythms and their related mechanisms can provide a research window for understanding speech, a complex sequence with structural rules at multiple time scales.
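The core idea of neural entrainment — an internal oscillation gradually phase-aligning to an external rhythm — can be illustrated with a toy phase-oscillator model. This is a generic Adler/Kuramoto-style sketch, not a model from the abstract; the intrinsic frequency, drive frequency, and coupling strength are all made-up illustrative values:

```python
import numpy as np

def entrain(f0=4.5, f_ext=4.0, coupling=4.0, dt=0.001, t_max=10.0):
    """Toy 'entrainment': one oscillator with intrinsic frequency f0 (Hz)
    pulled toward an external rhythm at f_ext (Hz). Returns the wrapped
    phase difference over time."""
    n = int(t_max / dt)
    phase = 0.0        # internal oscillation phase (rad)
    drive = 0.0        # external rhythm phase (rad)
    diff = np.empty(n)
    for i in range(n):
        # phase advances at its intrinsic rate plus a pull toward the drive
        phase += dt * (2 * np.pi * f0 + coupling * np.sin(drive - phase))
        drive += dt * 2 * np.pi * f_ext
        diff[i] = (phase - drive + np.pi) % (2 * np.pi) - np.pi
    return diff

diff = entrain()
# after a brief transient, the phase difference settles to a near-constant
# value: the internal rhythm is phase-locked to the external one
print(float(np.std(diff[-2000:])))
```

With coupling strong enough to overcome the 0.5 Hz frequency mismatch, the phase difference converges to a fixed offset (phase lock); with weaker coupling it would drift indefinitely, which is the intuition behind entrainment requiring sufficiently salient external rhythm.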

  • The role of masking stimuli in target recognition processing: Evidence from fNIRS (掩蔽刺激对目标识别加工的作用:来自fNIRS的证据)

    Subjects: Psychology >> Social Psychology | Submitted: 2023-03-27 | Cooperative journal: 《心理学报》 (Acta Psychologica Sinica)

    Abstract: When our visual system processes target signals, it usually also receives large amounts of information irrelevant to the target, reducing the target's visibility. A wealth of research has shown that visual search for target letters against a masking background is largely determined by the masker type. Informational maskers, such as randomly positioned and oriented letters or randomly distributed letter fragments, induce stronger masking effects on recognition of target letters than energetic maskers do, such as the random-phase masker (same spectral amplitude composition as the letter masker but with the phase spectrum randomized) or the random-pixel masker (the locations of the letter masker's pixel amplitudes randomized). However, the mechanisms underlying informational masking and energetic masking remain unknown. The current study examined both cortical activity and behavioral performance in a visual search task, in which participants judged whether one of four letters presented at four symmetrically located positions differed from the others, under three masking conditions (random pixels, letter fragments, and random letters). Oxygenated hemoglobin concentration (HbO) responses in the primary visual cortex (V1) and secondary visual cortex (V2) were recorded with functional near-infrared spectroscopy (fNIRS). Twenty healthy adults (4 males, 16 females; mean age: 22.5 ± 1.67 years) participated in the experiment. Each masking condition contained 5 blocks, and each block contained 8 trials, with a 20-second resting phase between blocks. Spatial registration methods were applied to localize the cortical regions underneath each channel and to define two regions of interest (ROIs): V1 and V2.
The behavioral results showed that the performance of recognizing target letters improved as the masker type shifted from random letters to letter fragments to random pixels, indicating that the letter masker interfered with performance more than the letter-fragment and random-pixel maskers did, and that the random-pixel masker caused the least masking. The fNIRS results showed that both the letter masker and the letter-fragment masker produced an increase in cortical oxygen level: many ROIs, particularly in the visual cortex (including V1 and V2), were more activated under the letter or letter-fragment masking condition than under the random-pixel masking condition. Moreover, the differences in cortical activation between the masking conditions further suggest that V1 and V2 are critical brain regions for visual letter search and for informational masking of letter recognition. To summarize, this study used fNIRS to explore the cortical activation patterns produced by different types of masking during target recognition. Informational masking interfered more with visual search and imposed greater processing loads on the primary and secondary visual cortices than energetic masking did under the same conditions. Furthermore, the differences between letter-fragment masking and letter masking are reflected in the activation patterns of the V1 and V2 regions.
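A block design like the one described above (5 blocks per condition, 8 trials per block, 20-second rests) is commonly analyzed by epoching the continuous HbO trace around each block onset, baseline-correcting, and averaging across blocks. A minimal sketch on simulated data — the sampling rate, onset times, and block duration here are illustrative assumptions, not values from the study:

```python
import numpy as np

fs = 10                      # assumed fNIRS sampling rate (Hz)
rng = np.random.default_rng(0)

# simulated continuous HbO trace: noise plus an idealized boxcar rise per block
t_total = 300                          # recording length (s), illustrative
sig = rng.normal(0.0, 0.1, t_total * fs)
onsets = [20, 80, 140, 200, 260]       # assumed block onsets (s)
block_len = 30                         # assumed task-block duration (s)
for on in onsets:
    sig[on * fs:(on + block_len) * fs] += 1.0

def block_average(sig, onsets, fs, pre=5, post=35):
    """Epoch the signal around each block onset, subtract the mean of the
    pre-onset window as baseline, and average across blocks."""
    epochs = []
    for on in onsets:
        seg = sig[(on - pre) * fs:(on + post) * fs].copy()
        seg -= seg[:pre * fs].mean()   # baseline correction
        epochs.append(seg)
    return np.mean(epochs, axis=0)

avg = block_average(sig, onsets, fs)
# mean response in a mid-block window (5-20 s after onset in this epoching)
task = float(avg[10 * fs:25 * fs].mean())
print(round(task, 2))
```

Averaging across blocks suppresses the uncorrelated noise while preserving the task-locked HbO rise, which is the logic behind comparing block-averaged activation across masking conditions.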

  • The roles of emotional prosody and emotional semantics of speech in release from auditory informational masking (言语的情绪韵律和情绪语义对听觉去信息掩蔽的作用)

    Subjects: Psychology >> Social Psychology | Submitted: 2023-03-27 | Cooperative journal: 《心理学报》 (Acta Psychologica Sinica)

    Abstract: In daily communication, a speaker's voice usually carries a particular emotion. Emotional information is transmitted in two ways: the prosody and the semantic content of speech. Previous studies have found that emotional prosody can release speech from auditory masking. The purposes of the present study were a) to test whether emotional semantic content also releases speech from informational masking and, if so, b) to explore how the roles of emotional prosody and emotional semantic content in releasing informational masking differ. This study consisted of two experiments, each divided into two sub-experiments. A perceived-spatial-separation paradigm was adopted in all experiments to separate the effects of informational masking from those of energetic masking. Experiment 1 explored the mechanism by which emotional prosody releases informational masking. A complete within-subject design of 2 (perceived spatial separation: absent, present) × 2 (emotional prosody: neutral, happy) × 4 (signal-to-noise ratio: -8 dB, -4 dB, 0 dB, 4 dB) was adopted in both sub-experiments. Experiment 1a employed time-reversed sentences with no semantic intelligibility as masking sounds (presumed to produce only perceptual informational masking). Experiment 1b used syntactically correct nonsense sentences as masking sounds (producing both perceptual and cognitive informational masking). Experiment 2, which also contained two sub-experiments, examined the role of the emotional semantics of speech in releasing informational masking. A complete within-subject design of 2 (perceived spatial separation: absent, present) × 2 (emotional semantics: neutral, positive) × 4 (signal-to-noise ratio: -8 dB, -4 dB, 0 dB, 4 dB) was adopted in both sub-experiments. Experiment 2a employed time-reversed sentences with no semantic intelligibility as masking sounds; Experiment 2b used syntactically correct nonsense sentences.
In both Experiment 1a and Experiment 1b, recognition accuracy for target sentences uttered with emotional prosody was significantly higher than for target sentences uttered with neutral prosody, with a marginally significant difference between the two sub-experiments. Experiment 2a showed no significant difference in recognition accuracy between target sentences with emotional semantics and those with neutral semantics, whereas Experiment 2b showed that recognition accuracy for target sentences with emotional semantics was significantly higher than for those with neutral semantics; no significant difference was found between Experiments 2a and 2b. In conclusion, the results of the present study suggest that emotional prosody and emotional semantics release speech from informational masking through different mechanisms. The emotional prosody of speech preferentially attracts listeners' attention and reduces perceptual informational masking, but it has only a minor effect on releasing cognitive informational masking. The emotional semantics of speech preferentially occupy more of listeners' cognitive processing resources; hence, they reduce cognitive informational masking but fail to release perceptual informational masking.
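Each 2 × 2 × 4 within-subject design above defines 16 condition cells per sub-experiment, which every participant completes. Enumerating the cells is straightforward (factor labels follow the abstract's Experiment 1; only the ordering is an implementation choice):

```python
from itertools import product

spatial_separation = ["absent", "present"]   # perceived spatial separation
prosody = ["neutral", "happy"]               # emotional prosody (Experiment 1)
snr_db = [-8, -4, 0, 4]                      # signal-to-noise ratios (dB)

# full factorial crossing: every combination of the three factors
conditions = list(product(spatial_separation, prosody, snr_db))
print(len(conditions))   # 2 * 2 * 4 = 16 cells
```

For Experiment 2 the second factor would be emotional semantics (neutral, positive) instead of prosody; the cell count is the same.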

  • The role of rhythm in auditory speech understanding

    Subjects: Psychology >> Cognitive Psychology | Submitted: 2021-12-29

    Abstract: Speech understanding is a mental process in which the listener receives external speech input and acquires meaning. In daily communication, speech comprehension is influenced by multi-scale rhythmic information, which usually includes the rhythm of prosodic structure, the rate of the context, and the speaker's body language. These alter listeners' phoneme discrimination, word perception, and speech intelligibility in auditory speech understanding. Internal rhythms are neural oscillations in the brain, which can represent the hierarchical characteristics of external speech input at different time scales. The neural entrainment of internal neural activity to external rhythmic stimuli can optimize the brain's processing of speech stimuli and further enhance the internal representation of the target speech through top-down modulation by the listener's cognitive processes. We propose that neural entrainment may be the key mechanism that builds the interrelationship between internal and external rhythms, which jointly affect speech understanding. The discovery of this mechanism can provide a window onto the study of speech, a complex sequence with structural rules on multi-level time scales.

  • Operating Unit: National Science Library, Chinese Academy of Sciences
  • Production Maintenance: National Science Library, Chinese Academy of Sciences
  • Mail: eprint@mail.las.ac.cn
  • Address: 33 Beisihuan Xilu, Zhongguancun, Beijing, P.R. China