Author: 许丽颖
  • The Information Framing Effect of “AI Unemployment”

    Subjects: Psychology >> Social Psychology submitted time 2024-05-03

    Abstract: The advancement of artificial intelligence (AI) technology contributes significantly to enhancing productivity; however, concerns regarding potential technological unemployment have garnered considerable attention. The uncertainty surrounding the occurrence, timing, and scale of AI-induced unemployment impedes definitive conclusions. This uncertainty may also leave the public susceptible to the information they encounter concerning AI-induced unemployment. Media coverage of AI-induced unemployment often presents extensive information regarding affected industries, occupations, and probability scales, establishing two numerical information frameworks: one emphasizing the factors that influence the distribution of unemployment across industries, and another emphasizing the probability of unemployment occurring. Compared with the factor framework, the probability framework leads individuals to judge AI-induced unemployment as less likely, thereby mitigating the perceived threat of AI, especially among individuals with high ambiguity tolerance. Building on the foundational assumption that the probability framework alleviates AI threat perception, this study, comprising seven progressive experiments, investigates the mediating role of judgments of AI-induced unemployment likelihood and the moderating role of individual ambiguity tolerance. Experiment 1 compares the AI threat perception elicited by general AI-induced unemployment descriptions, the factor framework, and the probability framework. Experiment 2 validates the mediating role of likelihood judgments. Experiments 3 and 4 rule out potential influences of probability values and unemployment scale, respectively. Experiment 5 explores the moderating effect of ambiguity tolerance. Experiments 6 and 7 examine downstream effects of AI threat, including support for AI development policies and willingness to recommend various occupations. The primary findings are as follows.
Firstly, introducing AI-induced unemployment through a probability framework effectively diminishes AI threat perception (Experiments 1-7). Secondly, this effect is mediated by perceived likelihood: the probability framework prompts individuals to judge AI-induced unemployment as less likely, thus reducing AI threat (Experiments 2-5). Thirdly, the information framework effect is moderated by ambiguity tolerance, manifesting primarily among individuals tolerant of ambiguous information (Experiment 5). Fourthly, individuals exposed to the probability framework demonstrate increased support for policies related to AI development, with AI threat playing a mediating role (Experiment 6). Lastly, individuals exposed to the probability framework exhibit a heightened willingness to recommend jobs involving frequent AI interaction (Experiment 7). This study extends prior research by elucidating how external factors such as information frames contribute to variations in AI threat perception. Unlike the extensively studied valence information frame, numerical information frames affect AI threat perception by altering individuals' likelihood judgments. Our findings shed light on the effects of the numerical information framework on AI-induced unemployment threat perception, policy support, and job recommendation willingness.

  • Monism and pluralism in morality: Origins, connotations and debates

    Subjects: Psychology >> Social Psychology submitted time 2024-03-19

    Abstract: In recent years, the debate between monism and pluralism has been one of the most active theoretical disagreements in the field of moral psychology. Moral monism claims that all surface moral phenomena and the moral structures behind them can be explained by a single factor; its representative theories include the stage theory of moral development and dyadic morality theory. Moral pluralism holds that morality cannot be explained by a single factor but comprises many heterogeneous, culturally sensitive moral dimensions; its representative theories include the triadic moral discourse theory, relational models theory, and moral foundations theory. Moreover, moral foundations theory and dyadic morality theory are the typical representatives of the debate between monism and pluralism. The two theories have engaged in a long and inconclusive dialogue on harm, purity, modularity claims, and the moral foundations of politics. Future studies should further explore moral monism and pluralism from three specific aspects so as to maintain the vitality of theory in the field of moral psychology.

  • Perceived opacity leads to algorithm aversion in the workplace

    Subjects: Psychology >> Social Psychology submitted time 2023-12-22

    Abstract: With algorithms standing out and influencing every aspect of human society, people’s attitudes toward this algorithmic invasion have become a vital topic for discussion. Recently, algorithms have become ubiquitous in the workplace as alternatives and enhancements to human decision-making. Although algorithms offer numerous advantages, such as vast data storage and resistance to interference, previous research has found that people tend to reject algorithmic agents across different applications. Especially in the realm of human resources, the increasing utilization of algorithms forces us to attend to users’ attitudes. Thus, the present study aimed to explore public attitudes toward algorithmic decision-making and to probe the underlying mechanism and potential boundary conditions behind the possible difference.
        To verify our research hypotheses, four experiments (N = 1211) were conducted, involving various kinds of human resource decisions in the daily workplace, including resume screening, recruitment and hiring, allocation of bonuses, and performance assessment. Experiment 1 used a single-factor, two-level, between-subjects design: 303 participants were randomly assigned to two conditions (agent of decision-making: human versus algorithm), and their permissibility, liking, and willingness to utilize the agent were measured. Experiment 2 was designed to be consistent with Experiment 1; the only difference was an additional measurement of perceived transparency to test its mediating role. Experiment 3 aimed to establish a causal chain between the mediator and the dependent variables by manipulating the perceived transparency of the algorithm. In Experiment 4, a single-factor, three-level, between-subjects design (non-anthropomorphized algorithm versus anthropomorphized algorithm versus human) was utilized to explore the boundary condition of this effect.
        As anticipated, the present research revealed a pervasive algorithm aversion across diverse organizational settings. Specifically, we conceptualized algorithm aversion as a tripartite framework encompassing cognitive, affective, and behavioral dimensions. We found that, compared with human managers, participants demonstrated significantly lower permissibility (Experiments 1, 2, and 4), liking (Experiments 1, 2, and 4), and willingness to utilize (Experiment 2) algorithmic management. The robustness of this result was demonstrated by the diversity of our scenarios and samples. Additionally, this research identified perceived transparency as a mechanism explaining participants’ psychological reactions to different decision-making agents. That is to say, participants opposed algorithmic management because they thought its decision processes were more incomprehensible and inaccessible than humans’ (Experiment 2). Addressing this “black box” phenomenon, Experiment 3 showed that providing more information about, and the principles behind, algorithmic management positively influenced participants’ attitudes. Crucially, the results also demonstrated the moderating effect of anthropomorphism: participants exhibited greater permissibility and liking for an algorithm with human-like characteristics, such as a human-like name and communication style, than for a mechanized form of the algorithm. This observation underlines the potential of anthropomorphism to ameliorate resistance to algorithmic management.
        These results bridge the gap between algorithm aversion and decision transparency from a social-psychological perspective. Firstly, the present research establishes a three-dimensional (cognitive, affective, and behavioral), dual-perspective (employee and employer) model to elucidate negative responses toward algorithmic management. Secondly, it reveals that perceived opacity acts as an obstacle to embracing algorithmic decision-making; this finding lays a theoretical foundation for Explainable Artificial Intelligence (XAI), which is conceptualized as a “glass box”. Ultimately, the study highlights the moderating effect of anthropomorphism on algorithm aversion, suggesting that anthropomorphizing algorithms could be a feasible approach to facilitating the integration of intelligent management systems.

  • How does system justification form? Three different explanatory perspectives

    Subjects: Psychology >> Developmental Psychology submitted time 2023-03-28 Cooperative journals: 《心理科学进展》

    Abstract: System justification theory proposes that people have a natural tendency to see current sociopolitical systems as fair and legitimate, a tendency called system justification. But what are the roots of system justification? Researchers have provided explanations from three distinct perspectives. The cognitive dissonance perspective posits that the tendency to justify the current system exists because people want to alleviate the bad feelings that often arise when they feel the system cannot meet their needs. The second perspective is compensatory control, which argues that system justification derives from a sense of lacking control: through a system-legitimating process, one can find a sense of order with which to cope with threats to personal control. The third perspective is the social cognitive process, which proposes that people exhibit a salient and inherent attributional tendency when explaining socioeconomic disparities; it is this attributional style that serves as a main source of system justification. Future research should include explanatory variables from different theoretical perspectives in a single study, draw on findings from similar fields to explore other possible mechanisms, seek sources of system justification peculiar to Chinese culture, and explore application issues while distinguishing positive from negative system justification.

  • Cogs in the era of intelligence: The influence of robot salience on workplace objectification

    Subjects: Psychology >> Social Psychology submitted time 2023-03-28 Cooperative journals: 《心理科学进展》

    Abstract: With buzzwords such as “tool man”, “laborer”, and “corporate slave” sweeping the workplace, workplace objectification has become an urgent topic for discussion. With the increasing use of artificial intelligence, especially robots, in the workplace, the workplace effects produced by robots are also worth attention. Therefore, the present research aims to explore whether the penetration of robots into the workplace will produce or aggravate the phenomenon of workplace objectification. Based on intergroup threat theory and previous related studies, the present research assumes that the salience of robot workers in the workplace poses both realistic threats and identity threats to people, and that the perception of these threats reduces people's sense of control. According to compensatory control theory, a decrease in perceived control gives people a strong motivation to restore control. Workplace objectification, an instance of the fourth strategy proposed by compensatory control theory (i.e., affirming nonspecific structure, or seeking out and preferring simple, clear, and consistent interpretations of the social and physical environments), can be used to restore the sense of control. Therefore, this paper hypothesizes that the salience of robot workers in the workplace will increase workplace objectification: robot salience increases people's perceived threats from robots, which leads to control compensation, which eventually leads to more severe workplace objectification. In addition, the other three strategies proposed by compensatory control theory, namely bolstering personal agency, affiliating with external systems perceived to be acting on the self's behalf, and affirming specific structure (i.e., clear contingencies between actions and outcomes within the context of reduced control), can moderate the effect of robot salience on workplace objectification.
Ways of affirming nonspecific structure other than workplace objectification can also moderate the effect of robot salience on workplace objectification. Based on social psychological theories and the realistic background of workplace objectification, this paper attempts to use diverse methods to test the above hypotheses. Specifically, experiments, big data, and questionnaire surveys will be adopted to explore the potential mechanism and boundary conditions of the impact of robot salience on workplace objectification. The present research consists of five studies. Study 1 verifies the existence of the effect of robot salience on workplace objectification. Study 2 explores the chain mediating effects of perceived robot threat and control compensation. Study 3 examines the moderating effects of personal factors, including bolstering personal agency, affiliating with external systems perceived to be acting on the self's behalf, affirming specific structure, and ways of affirming nonspecific structure other than workplace objectification. Study 4 examines the moderating effects of robot factors, including anthropomorphism and the two dimensions of mind perception. Study 5 examines the moderating effects of environmental factors, including different organizational cultures and ethical organizational cultures, and explores intervention strategies for workplace objectification. The present research helps to prospectively understand the possible negative effects of artificial intelligence in the workplace and to put forward effective solutions.

  • The process motivation model of algorithmic decision-making approach and avoidance

    Subjects: Psychology >> Social Psychology submitted time 2023-03-28 Cooperative journals: 《心理科学进展》

    Abstract: With the advantages of objectivity, accuracy, high speed, and low cost, algorithmic decision-making has been widely used in human daily life, such as in medical, judicial, recruitment, and transportation contexts. How will people react to the shift from traditional human decision-making to newly emerged algorithmic decision-making? If people perceived algorithms simply as social actors, there would be no difference when facing the same decision made by the two different agents. However, research shows that algorithmic decision-making elicits different responses from individuals than human decision-making with the same content. In other words, people will approach or avoid the same decision depending on whether it was made by an algorithm, which is defined as algorithmic decision-making approach and avoidance. Specifically, algorithmic decision-making approach means that algorithmic decision-making is considered fairer, less biased, less discriminatory, more trustworthy, and more acceptable than human decision-making; algorithmic decision-making avoidance is the other way around. By analogy with the distinct ideologies people hold when facing outgroup members, the process motivation model of algorithmic decision-making approach and avoidance describes human psychological motivations when facing the same decisions made by algorithms and by humans. Based on the premise that quasi-social interactions (relationships) and interpersonal interactions (relationships) develop in parallel, the theory summarizes three interaction stages between humans and algorithms, namely the initial behavioral interaction, the establishment of quasi-social relationships, and the formation of identity. Furthermore, it elaborates how cognitive, relational, and existential motivations trigger individual approach and avoidance responses in each specific stage.
More precisely, approach or avoidance occurs to meet the cognitive motivational needs of reducing uncertainty, complexity, and ambiguity in the initial behavioral interaction stage; to fulfill the relational motivational needs of establishing belonging and social identity in the quasi-social relationship stage; and to satisfy the existential motivational needs of coping with threats and seeking security in the identity formation stage. In accordance with the three psychological motivations of cognition, relationship, and existence, the process motivation theory introduces six influencing factors: cognitive load, decision transparency, moral status, interpersonal interaction, realistic threat, and identity threat, respectively. For future directions, we suggest that more research is needed to explore how mind perception and intergroup perception influence algorithmic decision-making approach and avoidance. Meanwhile, the reversal process of algorithmic decision-making approach and avoidance from a social perspective, and what other possible motivations are associated with it, are also worthy of consideration.

  • Monism and pluralism in morality: Origins, connotations and debates

    Subjects: Psychology >> Social Psychology submitted time 2023-03-28 Cooperative journals: 《心理科学进展》

    Abstract: The distinction between moral monism and moral pluralism was already reflected in early moral philosophy. Moral pluralism can be traced back to moral relativism, which holds that there is no universal moral principle and that any moral value applies only within certain cultural boundaries and individual value systems. In contrast, moral universalism, a monistic ethical position, holds that there are universal ethics that apply to all people. In recent years, these theoretical confrontations have entered the field of moral psychology, and the dispute between monism and pluralism has become one of its most active theoretical controversies. Moral monism holds that all external moral-related phenomena and internal moral structures can be explained by one factor; its representative theories include the stage theory of moral development and dyadic morality theory. On the other hand, moral pluralism holds that morality cannot be explained by a single factor, but that there are many heterogeneous moral dimensions, which are culturally sensitive; its representative theories include the triadic moral discourse theory, the relational models theory, and the moral foundations theory. Among them, the dyadic morality theory put forward by Kurt Gray et al. and the moral foundations theory put forward by Jonathan Haidt are the typical representatives of the dispute between monism and pluralism. Gray et al. argued that harm is the most powerful factor in explaining moral judgments and that moral judgments about harm are more intuitive; moreover, people with different political orientations share a consensus that harm is the core of moral judgment. On the contrary, Haidt et al. believed that people of different political orientations, cultures, and social classes manifest different moral foundations, and that the moral foundations scale has good construct validity, discriminant validity, and practical validity.
The disputes between the two theories mainly focus on the explanatory power of harm, the harmfulness of moral dumbfounding, modularity views, and the problem of purity. Specifically, Gray et al. argued that moral dumbfounding stems from biased sampling that confounds content with weirdness and severity, rather than from purity violation. They also believed that so-called "harmless wrongs" can be explained by perceived harm; importantly, purity cannot be regarded as an independent construct of morality. Moreover, there is little evidence to support the modularity claims. Nevertheless, Haidt et al. believed that moral monism oversimplifies the connotations of morality: the different moral foundations are not "Fodorian modules", but more flexible and overlapping "massive modularity". Furthermore, plenty of evidence supports purity as an independent moral foundation. Future research should proceed along the following lines. First, morality needs a clearer definition. To ensure the validity of moral research, future studies should try to define moral concepts more clearly and should ensure that only one construct is tested at a time; without ensuring that a situation clearly reflects a certain moral dimension, it is difficult for researchers to pinpoint which moral dimension influences people's moral judgments. Second, in addition to attending to the disputes between monism and pluralism, we also need to step back from the disputes, take an objective view of the different characteristics of the controversies, and let the two camps learn from and complement each other, so as to promote the development of moral psychology. Specifically, moral monism emphasizes the simplicity of moral constructs and the accuracy of measurement, while pluralism emphasizes understanding the nature of morality among people in different cultures; these are two different theoretical constructions and explanations of the nature of morality.
Future research should combine the advantages of moral monism and moral pluralism and try to adopt realistic situations with high ecological validity, so as to construct a more complete integrated theoretical model. Last but not least, most previous empirical studies have been dominated by "WEIRD" (Western, Educated, Industrialized, Rich, and Democratic) samples. Future research should urgently consider carrying out morality research in different cultures, especially exploring the nature of morality based on Chinese culture.

  • Algorithmic discrimination elicits less desire for moral punishment than human discrimination

    Subjects: Psychology >> Social Psychology submitted time 2023-03-27 Cooperative journals: 《心理学报》

    Abstract: The application of algorithms is believed to help reduce discrimination in human decision-making, but algorithmic discrimination still exists in real life. Is there, then, a difference between folk responses to human discrimination and to algorithmic discrimination? Previous research has found that people's moral outrage at algorithmic discrimination is less than that at human discrimination. Few studies, however, have investigated people's behavioral tendencies toward algorithmic versus human discrimination, especially whether there is a difference in their desire for moral punishment. Therefore, the present study aimed to compare people's desire to punish algorithmic and human discrimination, as well as to find the underlying mechanism and boundary conditions behind the possible difference. To achieve these objectives, six experiments were conducted, involving various kinds of discrimination in daily life, including gender discrimination, educational background discrimination, ethnic discrimination, and age discrimination. In Experiments 1 and 2, participants were randomly assigned to two conditions (discrimination: algorithm vs. human), and their desire for moral punishment was measured; additionally, the mediating role of free will belief was tested in Experiment 2. To demonstrate the robustness of our findings, the underlying mechanism (i.e., free will belief) was further examined in Experiments 3 and 4. Experiment 3 was a 2 (discrimination: algorithm vs. human) × 2 (belief in free will: high vs. low) between-subjects design, and Experiment 4 was a single-factor (discrimination: human vs. algorithm with free will vs. algorithm without free will) between-subjects design. Experiments 5 and 6 were conducted to test the moderating role of anthropomorphism: participants’ tendency to anthropomorphize was measured in Experiment 5, and the anthropomorphism of the algorithm was manipulated in Experiment 6.
As predicted, the present research found that, compared with human discrimination, people had less desire to punish algorithmic discrimination, and the robustness of this result was demonstrated by the diversity of our stimuli and samples. In addition, we found that free will belief played a mediating role in the effect of discrimination source (algorithm vs. human) on the desire to punish; that is, people had less desire to punish algorithmic discrimination because they thought algorithms had less free will than humans. Finally, the results also demonstrated the moderating effect of anthropomorphism. These results enrich the literature on algorithmic discrimination as well as moral punishment from the perspective of social psychology. First, this research explored people's behavioral tendencies toward algorithmic discrimination by focusing on the desire for moral punishment, which contributes to a better understanding of people's responses to algorithmic discrimination. Second, the results are consistent with previous studies on people's mind perception of artificial intelligence. Third, it adds evidence that free will beliefs have a significant impact on moral punishment.

  • The Influence of Perceived Robot Threat on Workplace Objectification

    Subjects: Psychology >> Management Psychology submitted time 2023-03-04

    Abstract: With buzzwords such as "tool man", "laborer", and “corporate slave” sweeping the workplace, workplace objectification has become an urgent topic for discussion. With the increasing use of artificial intelligence, especially robots, in the workplace, the workplace effects produced by robots are also worth attention. Therefore, the present paper aims to explore whether people’s perception of robots’ threat to them will produce or aggravate workplace objectification. On the basis of a review of related research on workplace objectification and the robot workforce, and combined with intergroup threat theory, this paper elaborates the realistic threat that the robot workforce poses to human employment and security, as well as the identity threat it poses to human identity and uniqueness. From the perspective of compensatory control theory, this paper proposes the underlying mechanism and boundary conditions: perceiving robot threat reduces people's sense of control, thereby stimulating the control compensation mechanism, which in turn leads to workplace objectification. This research is composed of eight studies. The first study includes two sub-studies, which investigate the relationship between perceived robot threat and workplace objectification through questionnaires and online experiments; it tries to establish both a positive correlation and a causal association between perceived robot threat and workplace objectification. The second study includes three sub-studies, which explore why perceived robot threat increases workplace objectification; it tries to verify the mediating effect of control compensation (sense of control), to explain the psychological mechanism behind the effect of perceived robot threat on workplace objectification, and to verify it repeatedly through different research methods. The third study includes three sub-studies.
Based on the three compensatory control strategies proposed by compensatory control theory besides affirming nonspecific structure, this study tries to further explore the moderating effects of personal agency, external agency, and specific structure. The main findings of this paper are as follows. First, perceived robot threat increases workplace objectification, and perceived robot identity threat has the stronger effect. Second, the sense of control plays a mediating role in the effect of perceived robot threat (mainly identity threat) on workplace objectification: the higher the perceived robot identity threat, the lower the sense of control, and the more serious the workplace objectification. Third, the other three strategies proposed by compensatory control theory, namely strengthening personal agency, supporting external agency, and affirming specific structure, can moderate the effect of perceived robot threat on workplace objectification. The main theoretical contributions of this paper are as follows. First, it reveals the negative influence of robots on interpersonal relationships and its psychological mechanism. Second, it extends the explanatory boundary of compensatory control theory to the field of artificial intelligence, proposing and verifying that perceived robot threat increases workplace objectification through compensatory control. Third, it discusses the relationships among different compensatory control strategies, and proposes and verifies a moderating model of perceived robot threat affecting workplace objectification. The main practical contributions are: first, it provides a reference for the anthropomorphic design of robots; second, it helps us better understand, warn against, and deal with the negative social impacts of robots.

  • The process motivation model of algorithmic decision-making approach and avoidance

    Subjects: Psychology >> Social Psychology submitted time 2022-07-22

    Abstract: Algorithms are often used for decision-making. However, algorithmic decision-making elicits different responses from individuals than human decision-making with the same content. This phenomenon is defined as algorithmic decision-making approach and avoidance. Approach means that algorithmic decision-making is considered fairer, less biased, less discriminatory, more trustworthy, and more acceptable than human decision-making; avoidance is the other way around. To better explain this phenomenon, the process motivation model of algorithmic decision-making approach and avoidance is employed in this review. It summarizes three stages of interaction between humans and algorithms, namely the initial behavioral interaction, the establishment of quasi-social relationships, and the formation of identity. Moreover, it elaborates how cognitive, relational, and existential motivations trigger individual approach and avoidance responses in each specific stage. For future directions, we suggest that more research is needed to explore how mind perception and intergroup perception influence algorithmic decision-making approach and avoidance. Meanwhile, the reversal process of algorithmic decision-making approach and avoidance from a more social perspective, and what other possible motivations are associated with it, are also worthy of consideration.

  • Algorithmic discrimination elicits less desire for moral punishment than human discrimination

    Subjects: Psychology >> Social Psychology submitted time 2022-02-07

    Abstract: Algorithmic discrimination is commonplace, and how people react to it deserves attention. Six progressive experiments compared people's desire for moral punishment of algorithmic versus human discrimination across different types of discrimination scenarios, and explored the underlying mechanism and boundary conditions. The results showed that, compared with human discrimination, people had less desire to morally punish algorithmic discrimination (Experiments 1-6); the underlying mechanism was that people believed algorithms (compared with humans) lack free will (Experiments 2-4); and the stronger an individual's anthropomorphism tendency, or the more anthropomorphized the algorithm, the stronger people's desire to morally punish the algorithm (Experiments 5-6). These findings help us better understand people's reactions to algorithmic discrimination and provide insights into moral punishment after algorithms err.

  • Cogs in the era of intelligence: The influence of robot salience on workplace objectification

    Subjects: Psychology >> Management Psychology Subjects: Psychology >> Social Psychology submitted time 2022-01-17

    Abstract:


  • A three-dimensional motivation model of algorithm aversion

    Subjects: Psychology >> Social Psychology submitted time 2021-11-16

    Abstract: Algorithm aversion refers to the phenomenon that people prefer human decisions and are reluctant to use superior algorithmic decisions. The three-dimensional motivation model of algorithm aversion summarizes three main reasons: doubt about algorithm agents, the lack of moral standing, and the annihilation of human uniqueness, corresponding to three psychological motivations, i.e., trust, responsibility, and control, respectively. Given these motivations, increasing human trust in algorithms, strengthening algorithm agents' responsibility, and exploring personalized algorithms to make human control over algorithms salient should be feasible options for weakening algorithm aversion. Future research could further explore the boundary conditions and other possible motivations of algorithm aversion from a more social perspective.

  • Cuteness: Perception and consequences

    Subjects: Psychology >> Social Psychology submitted time 2018-11-27

    Abstract:

  • The anthropomorphism of artificial intelligence

    Subjects: Psychology >> Social Psychology submitted time 2018-11-27

    Abstract:

  • How to make a moral artificial intelligence agent? A psychological perspective

    Subjects: Psychology >> Social Psychology Subjects: Computer Science >> Natural Language Understanding and Machine Translation submitted time 2018-11-27

    Abstract:

  • Does religious priming make people better?

    Subjects: Psychology >> Social Psychology submitted time 2018-11-15

    Abstract: In recent years, multiple religious priming paradigms have been used to explore the relationship between religion and morality. It has been found that religious priming, on the one hand, can promote people's prosocial behaviors and urge people to obey moral rules; on the other hand, it can also activate out-group prejudice and stimulate hostility toward groups with conflicting values. Existing research suggests that specific religious beliefs mediate the influence of religious priming on morality. Variables such as intrinsic religiosity, gender, the priming method and connotation of the stimuli, and some personality traits serve as moderators of the relationship between religion and morality because they may influence specific religious beliefs. Future research could further focus on methodological innovation as well as the exploration of other mediators, and integrate more cross-religious and cross-cultural perspectives with interdisciplinary collaboration.

  • Operating Unit: National Science Library, Chinese Academy of Sciences
  • Production Maintenance: National Science Library, Chinese Academy of Sciences
  • Mail: eprint@mail.las.ac.cn
  • Address: 33 Beisihuan Xilu, Zhongguancun, Beijing, P.R. China