• Cogs of the intelligent era: the effect of robot salience on workplace objectification

    Subjects: Psychology >> Social Psychology submitted time 2023-03-28 Cooperative journals: 《心理科学进展》

    Abstract: With buzzwords such as “tool man”, “laborer”, and “corporate slave” sweeping the workplace, workplace objectification has become an urgent topic for discussion. As artificial intelligence, and especially robots, is increasingly used in the workplace, the effects robots produce there also deserve attention. The present research therefore explores whether the penetration of robots into the workplace produces or aggravates workplace objectification. Based on intergroup threat theory and related prior studies, we hypothesize that the salience of robot workers poses both realistic threats and identity threats to people, and that perceiving these threats reduces people's sense of control. According to compensatory control theory, a decrease in perceived control gives people a strong motivation to restore control. Workplace objectification can serve the fourth strategy proposed by compensatory control theory (affirming nonspecific structure, i.e., seeking out and preferring simple, clear, and consistent interpretations of the social and physical environment) and can thus be used to restore the sense of control. We therefore hypothesize that the salience of robot workers increases workplace objectification: robot salience heightens people's perceived robot threats, which triggers control compensation, which in turn aggravates workplace objectification. In addition, the other three strategies proposed by compensatory control theory, namely bolstering personal agency, affiliating with external systems perceived to be acting on the self's behalf, and affirming specific structure (i.e., clear contingencies between actions and outcomes within the context of reduced control), can moderate the effect of robot salience on workplace objectification.
Ways of affirming nonspecific structure other than workplace objectification can also moderate this effect. Grounded in social-psychological theory and the realistic background of workplace objectification, this research uses diverse methods, including experiments, big data, and questionnaire surveys, to test the above hypotheses and to explore the underlying mechanism and boundary conditions of the impact of robot salience on workplace objectification. The research consists of five studies. Study 1 verifies that robot salience affects workplace objectification. Study 2 explores the chain mediating effects of perceived robot threat and control compensation. Study 3 examines the moderating effects of personal factors: bolstering personal agency, affiliating with external systems perceived to be acting on the self's behalf, affirming specific structure, and affirming nonspecific structure in ways other than workplace objectification. Study 4 examines the moderating effects of robot factors, including anthropomorphism and the two dimensions of mind perception. Study 5 examines the moderating effects of environmental factors, including different organizational cultures and ethical organizational cultures, and explores intervention strategies for workplace objectification. This research helps to prospectively understand possible negative effects of artificial intelligence in the workplace and to put forward effective solutions.

  • A process motivation theory of algorithmic decision-making approach and avoidance

    Subjects: Psychology >> Social Psychology submitted time 2023-03-28 Cooperative journals: 《心理科学进展》

    Abstract: With the advantages of objectivity, accuracy, high speed, and low cost, algorithmic decision-making has been widely applied in daily life, for example in medical, judicial, recruitment, and transportation settings. How will people react to the shift from traditional human decision-making to newly emerged algorithmic decision-making? If people perceived algorithms simply as social actors, there would be no difference in their reactions to the same decision made by the two different agents. However, research shows that people respond differently to an algorithmic decision than to the same decision made by a human. In other words, people approach or avoid the very same decision depending on whether an algorithm made it, which is defined as algorithmic decision-making approach and avoidance. Specifically, the approach response means that algorithmic decision-making is considered fairer, less biased, less discriminatory, more trustworthy, and more acceptable than human decision-making; the avoidance response is the other way around. By analogy with the distinct ideologies people hold when facing outgroup members, the process motivation model of algorithmic decision-making approach and avoidance describes human psychological motivation when facing the same decisions made by algorithms and by humans. Based on the premise that parasocial interaction (relationships) and interpersonal interaction (relationships) develop in parallel, the theory distinguishes three stages of interaction between humans and algorithms: initial behavioral interaction, the establishment of parasocial relationships, and the formation of identity. It then elaborates how cognitive, relational, and existential motivations trigger individual approach and avoidance responses at each stage.
More precisely, approach and avoidance occur to meet cognitive motivational needs to reduce uncertainty, complexity, and ambiguity at the initial behavioral interaction stage; to fulfill relational motivational needs for belonging and social identity at the parasocial-relationship establishment stage; and to satisfy existential motivational needs for coping with threat and seeking security at the identity formation stage. Corresponding to the three psychological motivations of cognition, relationship, and existence, the process motivation theory introduces six influencing factors: cognitive load and decision transparency; moral status and interpersonal interaction; and realistic threat and identity threat, respectively. For future directions, more research is needed to explore how mind perception and intergroup perception influence algorithmic decision-making approach and avoidance. Meanwhile, the reversal process of algorithmic decision-making approach and avoidance from a social perspective, and what other possible motivations are associated with it, are also worthy of consideration.

  • Moral monism and pluralism: origins, connotations, and controversies

    Subjects: Psychology >> Social Psychology submitted time 2023-03-28 Cooperative journals: 《心理科学进展》

    Abstract: The distinction between moral monism and moral pluralism was already reflected in early moral philosophy. Moral pluralism can be traced back to moral relativism, which holds that there is no universal moral principle: any moral value applies only within certain cultural boundaries and individual value systems. Moral universalism, a monistic ethical position, holds instead that there are universal ethics that apply to all people. In recent years, this theoretical confrontation has entered the field of moral psychology, where the dispute between monism and pluralism has become one of the most active theoretical controversies. Moral monism holds that all external moral phenomena and internal moral structures can be explained by a single factor; representative theories include the stage theory of moral development and the dyadic morality theory. Moral pluralism, by contrast, holds that morality cannot be explained by a single factor but comprises many heterogeneous, culturally sensitive moral dimensions; representative theories include the triadic moral discourse theory, the relational models theory, and the moral foundations theory. Among these, the dyadic morality theory put forward by Kurt Gray et al. and the moral foundations theory put forward by Jonathan Haidt are typical representatives of the dispute. Gray et al. argued that harm is the most powerful factor in explaining moral judgments, that moral judgments about harm are more intuitive, and that people of different political orientations agree that harm is the core of moral judgment. On the contrary, Haidt et al. believed that people of different political orientations, cultures, and social classes exhibit different moral foundations, and that the moral foundations scale has good construct validity, discriminant validity, and practical validity.
The disputes between the two theories mainly concern the explanatory power of harm, the harmfulness of moral dumbfounding, views on modularity, and the problem of purity. Specifically, Gray et al. argued that moral dumbfounding stems from biased sampling that confounds content with weirdness and severity, rather than from purity violations. They also held that so-called "harmless wrongs" can be explained by perceived harm and, importantly, that purity cannot be regarded as an independent moral construct; moreover, there is little evidence to support the modularity claims. Haidt et al., by contrast, believed that moral monism oversimplifies the connotations of morality: the different moral foundations are not instances of "Fodorian modularity" but of more flexible and overlapping "massive modularity", and plenty of evidence supports purity as an independent moral foundation. Future research should proceed along the following lines. First, morality needs a clearer definition. To ensure the validity of moral research, future studies should define moral concepts more clearly and test only one construct at a time; without ensuring that a scenario clearly reflects a particular moral dimension, researchers cannot pinpoint which dimension influences people's moral judgments. Second, beyond attending to the disputes between monism and pluralism, we should also step back from them, take an objective view of the distinct characteristics of the two positions, and let them learn from and complement each other, so as to promote the development of moral psychology. Specifically, moral monism emphasizes the simplicity of moral constructs and the accuracy of measurement, while pluralism emphasizes understanding the nature of morality among people in different cultures; these are two different theoretical constructions and explanations of the nature of morality.
Future research should combine the advantages of moral monism and moral pluralism and adopt realistic situations with high ecological validity, so as to construct a more complete integrated theoretical model. Last but not least, most previous empirical studies have been dominated by "WEIRD" (Western, Educated, Industrialized, Rich, and Democratic) samples. Future research should urgently consider carrying out morality research in different cultures, and especially explore the nature of morality based on Chinese culture.

  • Algorithmic discrimination elicits less desire for moral punishment than human discrimination

    Subjects: Psychology >> Social Psychology submitted time 2023-03-27 Cooperative journals: 《心理学报》

    Abstract: The application of algorithms is believed to help reduce discrimination in human decision-making, yet algorithmic discrimination still exists in real life. Is there, then, a difference between folk responses to human discrimination and to algorithmic discrimination? Previous research has found that people's moral outrage at algorithmic discrimination is less than that at human discrimination. Few studies, however, have investigated people's behavioral tendencies toward algorithmic versus human discrimination, especially whether the desire for moral punishment differs. The present study therefore compared people's desire to punish algorithmic and human discrimination and sought the underlying mechanism and boundary conditions of the possible difference. Six experiments were conducted, involving various kinds of discrimination in daily life, including gender, educational background, ethnic, and age discrimination. In experiments 1 and 2, participants were randomly assigned to two conditions (discrimination: algorithm vs. human), and their desire for moral punishment was measured; the mediating role of free will belief was additionally tested in experiment 2. To demonstrate the robustness of the findings, the underlying mechanism (i.e., free will belief) was further examined in experiments 3 and 4: experiment 3 used a 2 (discrimination: algorithm vs. human) × 2 (belief in free will: high vs. low) between-subjects design, and experiment 4 used a single-factor (discrimination: human vs. algorithm with free will vs. algorithm without free will) between-subjects design. Experiments 5 and 6 tested the moderating role of anthropomorphism: participants' tendency to anthropomorphize was measured in experiment 5, and the anthropomorphism of the algorithm was manipulated in experiment 6.
As predicted, the present research found that people had less desire to punish algorithmic discrimination than human discrimination, and the robustness of this result was demonstrated by the diversity of the stimuli and samples. In addition, free will belief mediated the effect of the discriminating agent (algorithm vs. human) on the desire to punish: people had less desire to punish algorithmic discrimination because they believed algorithms have less free will than humans. Finally, the results demonstrated the moderating effect of anthropomorphism. These findings enrich the literature on algorithmic discrimination and moral punishment from a social-psychological perspective. First, by focusing on the desire for moral punishment, the research contributes to a better understanding of people's behavioral responses to algorithmic discrimination. Second, the results are consistent with previous studies on people's mind perception of artificial intelligence. Third, they add evidence that free will beliefs significantly influence moral punishment.

  • Operating Unit: National Science Library, Chinese Academy of Sciences
  • Production Maintenance: National Science Library, Chinese Academy of Sciences
  • Mail: eprint@mail.las.ac.cn
  • Address: 33 Beisihuan Xilu, Zhongguancun, Beijing, P.R. China