  • The influence of emotional valence and motivation on socially shared retrieval-induced forgetting

    Subjects: Psychology >> Social Psychology · Submitted: 2024-04-26

    Abstract: Memories of individuals are typically encoded, stored, recalled, and reconstructed through direct or indirect interactions with others. Cuc et al. (2007) found that during interactive retrieval, a speaker's selective recall results in the forgetting of non-target information related to the retrieved information, a phenomenon known as retrieval-induced forgetting (RIF). Listeners in this interactive process are likewise influenced by the speaker's selective recall, leading to the forgetting of relevant but unretrieved information, a phenomenon termed socially shared retrieval-induced forgetting (SS-RIF). Building on the intertwined connections among emotion, motivation, and memory, this study investigates the impact of emotional valence and motivation on SS-RIF in the context of interactive retrieval.
    In Experiment 1, emotional valence and item type were manipulated to explore the influence of emotional valence on SS-RIF. The experiment employed a 3 (emotional valence: positive, neutral, negative) × 4 (item type: Rp+, Rp−, Nrp+, Nrp−) within-participants design, with recall accuracy for items under the three emotional conditions as the dependent variable. The results demonstrated that, in the interactive retrieval-practice paradigm, listeners exhibited SS-RIF effects under positive and neutral emotions but not under negative emotions. Additionally, the effect was more pronounced under positive than under neutral emotions, consistent with our Hypothesis 1.
    Experiment 2 manipulated the motivation of positive emotion and item type to investigate the impact of motivation on SS-RIF. The experiment employed a 2 (positive-emotion motivation: high vs. low) × 4 (item type: Rp+, Rp−, Nrp+, Nrp−) within-participants design. Results indicated that listeners exhibited SS-RIF effects under both the high- and the low-motivation positive-emotion conditions, consistent with the findings of Experiment 1. Moreover, the magnitude of SS-RIF was significantly greater under high-motivation positive emotion than under low-motivation positive emotion, supporting our Hypothesis 2.
    These findings provide empirical evidence for how emotional valence and motivation shape SS-RIF, underscoring the crucial role of emotion and motivation in memory outcomes during social interactive tasks. (A schematic sketch of how the SS-RIF contrast can be computed from these designs follows below.)
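
    A minimal illustrative sketch, in Python, of how an SS-RIF effect can be quantified in a design like the ones above. The data, variable names, and the specific baseline contrast (Nrp− recall minus Rp− recall) are assumptions for illustration only; the abstract does not report the authors' scoring or analysis code.

```python
import numpy as np

# Hypothetical listener recall-accuracy data (proportions correct):
# one (participants x item-type) array per emotional-valence condition.
rng = np.random.default_rng(0)
conditions = ["positive", "neutral", "negative"]
item_types = ["Rp+", "Rp-", "Nrp+", "Nrp-"]
n_participants = 30

recall = {
    cond: rng.uniform(0.3, 0.9, size=(n_participants, len(item_types)))
    for cond in conditions
}

def ss_rif_effect(acc):
    """Per-participant SS-RIF effect: baseline (Nrp-) recall minus
    unpracticed-competitor (Rp-) recall; positive values indicate
    retrieval-induced forgetting in the listener."""
    return acc[:, item_types.index("Nrp-")] - acc[:, item_types.index("Rp-")]

for cond in conditions:
    effect = ss_rif_effect(recall[cond])
    # In a real analysis, a one-sample t-test of `effect` against zero
    # per condition would test whether SS-RIF occurred.
    print(f"{cond:>8}: mean SS-RIF = {effect.mean():+.3f}")
```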

  • Do robots that abide by ethical principles promote human-robot trust? The reverse effect of decision types and the human-robot projection hypothesis

    Subjects: Psychology >> Social Psychology · Submitted: 2023-03-24

    Abstract: Asimov's Three Laws of Robotics are the basic ethical principles for artificially intelligent robots. Robot ethics is a significant factor influencing people's trust in human-robot interaction, yet how it affects that trust is poorly understood. In this article, we present a new hypothesis for interpreting the effect of robots' ethics on human-robot trust, which we call the human-robot projection hypothesis (HRP hypothesis). Under this hypothesis, people draw on their own intelligence (e.g., intelligence for cognition, emotion, and action) to understand robots' intelligence and to interact with them. We propose that, compared with robots that violate ethical principles, people project more mind energy (i.e., the level of human mental capacity) onto robots that abide by ethical principles, thus promoting human-robot trust. We conducted three experiments to explore how scenarios in which a robot abided by or violated Asimov's principles would affect people's trust in the robot. Each experiment corresponded to one of Asimov's principles and also examined the interaction effect of the type of the robot's decision. Specifically, all three experiments used 2 × 2 within-subjects designs. The first within-subjects factor was whether the robot abided by the Asimov principle whose core element is "no harm." The second within-subjects factor was the type of the robot's decision, which differed across experiments in line with Asimov's principles (Experiment 1: whether the robot takes action or not; Experiment 2: whether the robot obeys a human's order or not; Experiment 3: whether the robot protects itself or not). We assessed human-robot trust using the trust game paradigm. Experiments 1-3 consistently showed that people were more willing to trust robots that abided by ethical principles than robots that violated them. We also found that human-robot projection played a mediating role, supporting the HRP hypothesis. In addition, significant interaction effects between the type of the robot's decision and whether the robot abided by or violated Asimov's principle emerged in all three experiments. Experiment 1 showed that action robots received more trust than inaction robots when they abided by the first principle, whereas inaction robots received more trust than action robots when they violated it. Experiment 2 showed that disobedient robots received less trust than obedient robots, and this detrimental effect was greater when robots violated the second principle than when they abided by it. Experiment 3 showed that the trust-promoting effect of self-protection (versus self-destruction) was stronger among robots that abided by the third principle than among those that violated it. These results indicate that reverse effects of decision type existed in Experiments 1 and 3. Finally, a cross-experimental analysis showed that: (1) when robots abided by ethical principles, their inaction and disobedience still compromised human-robot trust; when robots violated ethical principles, their obedience incurred the least loss of human-robot trust, while their action and disobedience incurred relatively severe losses; (2) when the ethical requirements of different robotic laws conflicted, not harming humans and obeying human orders did not differ significantly in importance for human-robot trust, and both were more important than robots protecting themselves. This study helps clarify the impact of robots' ethical decision-making on human-robot trust and the important role of human-robot projection, with implications for future research in human-robot interaction. (A schematic sketch of the implied mediation path follows below.)
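
    A minimal sketch, in Python, of the product-of-coefficients mediation logic implied by the HRP hypothesis (ethics condition → projection → trust). All data and variable names are hypothetical, and the abstract does not specify the authors' actual mediation method; a real analysis would bootstrap the indirect effect a·b for a confidence interval.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# 1 = robot abided by the Asimov principle, 0 = violated it
# (within-subjects in the actual study; a simple grouping variable here).
abided = rng.integers(0, 2, size=n).astype(float)

# Simulated mediator and outcome, shaped as the HRP hypothesis predicts:
# abiding robots attract more projected "mind energy", which raises trust
# (e.g., the amount transferred in a trust game).
projection = 0.8 * abided + rng.normal(0.0, 1.0, n)
trust = 0.5 * projection + 0.2 * abided + rng.normal(0.0, 1.0, n)

def ols_slopes(y, X):
    """Least-squares slopes for y ~ X (an intercept column is added)."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

a = ols_slopes(projection, abided)[0]  # path a: ethics -> projection
b, c_prime = ols_slopes(trust, np.column_stack([projection, abided]))
print(f"a = {a:.2f}, b = {b:.2f}, c' = {c_prime:.2f}, indirect a*b = {a*b:.2f}")
```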
