  • The Impacts and Mechanisms of Artificial Intelligence on Knowledge Workers: An Instrumental and Humanistic Perspective

    Subjects: Psychology >> Management Psychology submitted time 2025-05-29

    Abstract: The rapid development of artificial intelligence (AI) has triggered psychological and behavioral shifts among knowledge workers, reshaping their perceptions of the modern work environment and their expectations for organizational growth. Research on how AI affects knowledge workers, and through which mechanisms, remains fragmented. This study explores the pathways through which AI empowers and activates knowledge workers in terms of “agency” and “self-actualization”, adopting both instrumental and humanistic perspectives. The focus falls on two areas: (1) from the instrumental perspective, the study examines the informational role of AI and its dual influence on employees’ cognitive processes, clarifying how knowledge workers exercise “agency” in the creative process in the age of AI; (2) from the humanistic perspective, it explores how AI changes employees’ psychological needs and perceptions of well-being, identifying its impact on turnover decisions tied to “self-actualization.” This research is expected to deepen the understanding of AI’s influence on knowledge workers, expand theoretical studies on human-AI collaboration, and provide practical implications.

  • The Impact of Perceived Usefulness on the Continued Use of Large Language Models: The Chain Mediating Effect of Expectation Confirmation and Satisfaction, and the Moderating Role of Communication Intention

    Subjects: Psychology >> Applied Psychology submitted time 2025-03-25

    Abstract: Based on the Expectation Confirmation Model (ECM), this study explores how perceived usefulness, expectation confirmation, and satisfaction affect college students’ continued use of large language models, and examines the moderating effect of willingness to communicate with AI. Questionnaire data from 189 college students were analyzed empirically using a chain mediation model and moderation tests. The results show that perceived usefulness has a significant positive effect on continued-use intention through the chain mediation of expectation confirmation and satisfaction; willingness to communicate with AI moderates the relationship between perceived usefulness and continued-use intention, with the effect being more pronounced in the low communication-willingness group.
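
    As a reader's aid, the serial (chain) mediation reported here is typically estimated as a sequence of OLS regressions whose path coefficients are multiplied and bootstrapped, in the style of PROCESS Model 6. The Python sketch below illustrates that logic only; the variable names and simulated data are hypothetical stand-ins, not the study's materials or results.

    ```python
    # Illustrative sketch of a serial (chain) mediation X -> M1 -> M2 -> Y,
    # estimated with OLS and a percentile-bootstrap CI for the indirect effect.
    # All names and data are hypothetical; this is not the authors' code.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 189  # sample size matching the abstract

    # Stand-ins: X = perceived usefulness, M1 = expectation confirmation,
    # M2 = satisfaction, Y = continued-use intention.
    X = rng.normal(size=n)
    M1 = 0.5 * X + rng.normal(size=n)
    M2 = 0.4 * X + 0.5 * M1 + rng.normal(size=n)
    Y = 0.2 * X + 0.3 * M1 + 0.4 * M2 + rng.normal(size=n)

    def serial_indirect(X, M1, M2, Y):
        """Product of paths a1 (X->M1), d21 (M1->M2), b2 (M2->Y)."""
        a1 = sm.OLS(M1, sm.add_constant(X)).fit().params[1]
        d21 = sm.OLS(M2, sm.add_constant(np.column_stack([X, M1]))).fit().params[2]
        b2 = sm.OLS(Y, sm.add_constant(np.column_stack([X, M1, M2]))).fit().params[3]
        return a1 * d21 * b2

    # Percentile bootstrap for the chained indirect effect.
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        boot.append(serial_indirect(X[idx], M1[idx], M2[idx], Y[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"serial indirect = {serial_indirect(X, M1, M2, Y):.3f}, "
          f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
    ```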

  • Employees adhere less to advice on moral behavior from artificial intelligence supervisors than from human supervisors

    Subjects: Psychology >> Social Psychology submitted time 2024-09-04

    Abstract: The use of artificial intelligence (AI) in organizations has evolved from being a tool to being a supervisor. Although previous research has examined people’s reactions to AI supervisors in general, few studies have investigated the effectiveness of AI supervisors, specifically whether individuals adhere to their advice on moral behavior. The present research compares employees’ adherence to moral behavioral advice given by AI and human supervisors, and identifies potential psychological mechanisms and boundary conditions behind the possible differences.
    To test our research hypotheses, we conducted six experiments and three pilot experiments (N = 1642, including 179 pilot participants) involving different types of moral behavior in organizations, such as helping people with disabilities, volunteering for environmental protection or child welfare, and making charitable donations after disasters or for colleagues in difficulty. Experiments 1a and 1b used a single-factor, two-level, between-subjects design: 180 participants were randomly assigned to receive advice on moral behavior from either a human or an AI supervisor, and their adherence to the supervisor’s advice was measured in different scenarios. Experiment 2 followed the same design, with additional measures of evaluation apprehension and perceived mind to test their mediating roles. To establish a causal chain between the mediator and the dependent variable and to demonstrate the robustness of our findings, Experiment 3 further examined the underlying mechanism using a 2 (supervisor: human vs. AI) × 2 (evaluation apprehension: high vs. low) between-subjects design. Experiments 4 and 5 tested the moderating role of anthropomorphism: Experiment 4 measured participants’ tendency to anthropomorphize, and Experiment 5 manipulated the anthropomorphism of the AI supervisor.
    As predicted, participants were less likely to follow the moral advice of an AI supervisor than that of a human supervisor (Experiments 1a–5), a finding whose robustness was supported by the diversity of our scenarios and samples. We also ruled out the potential effects of perceived rationality, negative emotions, exploitation, perceived autonomy, and several individual differences (pilot experiments and Experiments 1a–1b). In addition, the research identified evaluation apprehension as the mechanism underlying employees’ adherence to advice from different supervisors: participants believed they would receive less social judgment and evaluation from an AI supervisor than from a human supervisor, and were consequently less willing to adhere to the AI’s advice (Experiments 2–5). The research also demonstrated the moderating effect of anthropomorphism (Experiments 4–5). In Experiment 4, individuals with a high tendency to anthropomorphize showed no significant difference in adherence to advice on moral behavior from human versus AI supervisors, whereas participants with a low anthropomorphism tendency adhered more to a human supervisor than to an AI supervisor. In Experiment 5, participants adhered more to an AI supervisor with a human-like name and communication style than to a mechanized AI supervisor.
    The study contributes to the literature on AI leadership by highlighting the limitations of AI supervisors in providing advice on moral behavior. The results also confirm algorithm aversion in the moral domain: people are hesitant to accept AI involvement in moral decision-making, even in an advisory role. The study further identifies evaluation apprehension as a factor influencing adherence to AI advice; individuals may be less likely to follow AI advice because they are less concerned about social judgment when interacting with AI supervisors. Finally, anthropomorphism may be a useful approach to enhancing the effectiveness of AI supervisors.
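
    The moderation reported in Experiment 4 corresponds to a standard interaction test: regress adherence on supervisor condition, the measured anthropomorphism tendency, and their product, then probe simple slopes. A minimal Python sketch under those assumptions follows; the variable names and simulated data are hypothetical, not the study's materials.

    ```python
    # Illustrative moderation sketch: supervisor type x anthropomorphism.
    # Hypothetical variables and simulated data; not the authors' analysis.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 300

    df = pd.DataFrame({
        "ai_supervisor": rng.integers(0, 2, n),  # 0 = human, 1 = AI (between-subjects)
        "anthro": rng.normal(size=n),            # standardized anthropomorphism tendency
    })
    # Simulated pattern mirroring the abstract: the AI penalty on adherence
    # shrinks as anthropomorphism tendency rises.
    df["adherence"] = (5.0 - 0.8 * df.ai_supervisor
                       + 0.6 * df.ai_supervisor * df.anthro
                       + rng.normal(size=n))

    model = smf.ols("adherence ~ ai_supervisor * anthro", data=df).fit()
    print(model.summary().tables[1])

    # Simple slopes: AI-vs-human effect at +/-1 SD of anthropomorphism.
    b = model.params
    for z in (-1, 1):
        print(f"AI effect at anthro = {z:+d} SD: "
              f"{b['ai_supervisor'] + b['ai_supervisor:anthro'] * z:.3f}")
    ```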

  • Human-AI mutual trust in the era of artificial general intelligence

    Subjects: Psychology >> Applied Psychology submitted time 2024-08-02

    Abstract: With the development of technology, artificial general intelligence has begun to take shape, ushering in a new era for human-machine interaction and relationships. Trust between humans and artificial intelligence (AI) is on the brink of a transformative shift from unidirectional trust, where people trust AI, to a state of mutual trust between humans and AI. Based on a review of the interpersonal trust model in social psychology and the human-machine trust model in engineering psychology, this study proposes a dynamic mutual trust model for human-AI relationships from the perspective of interpersonal trust. The model regards humans and AI as equal contributors to trust-building, highlighting the “mutual trust” in the relational dimension and the “dynamics” in the temporal dimension of trust between humans and AI. It constructs a fundamental framework for dynamic mutual trust between humans and AI, incorporating influencing factors, result feedback, and behavior adjustment as essential components. This model pioneers the inclusion of AI’s trust towards humans and the dynamic interactive process of mutual trust, offering a new theoretical perspective for the study of trust between humans and AI. Future research should focus on understanding the establishment and maintenance of trust from AI towards humans, developing quantitative models for human-AI trust, and exploring mutual trust dynamics within multi-agent interactions.
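
    The proposed framework is qualitative, but its three components (influencing factors, result feedback, behavior adjustment) can be made concrete with a toy simulation in which a human and an AI each update their trust in the other from observed outcomes. The update rule and all parameters below are illustrative assumptions, not the authors' model.

    ```python
    # Toy sketch of dynamic mutual trust: each side's trust is nudged toward
    # the other's observed behavior (result feedback), and trust in turn
    # drives the next round's behavior (behavior adjustment).
    import random

    random.seed(42)

    def simulate(rounds=10, lr=0.3):
        trust_h, trust_a = 0.5, 0.5  # human's trust in AI, AI's trust in human
        for t in range(rounds):
            # Behavior adjustment: cooperate with probability equal to
            # current trust in the other party.
            h_coop = random.random() < trust_h
            a_coop = random.random() < trust_a
            # Result feedback: observed cooperation raises trust,
            # defection lowers it.
            trust_h += lr * ((1.0 if a_coop else 0.0) - trust_h)
            trust_a += lr * ((1.0 if h_coop else 0.0) - trust_a)
            print(f"round {t:2d}: human->AI trust {trust_h:.2f}, "
                  f"AI->human trust {trust_a:.2f}")

    simulate()
    ```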

  • Whose values are AI models aligning with? How culture shapes people’s normative expectations of AI values: An Integrative Review

    Subjects: Psychology >> Social Psychology submitted time 2024-07-31

    Abstract: With the rapid development and widespread application of artificial intelligence (AI) technology, the profound cultural influence on AI values has attracted widespread attention. Research to date, however, has not systematically examined both the human universals and the cultural differences in people’s normative expectations of AI values. To explore the potential impacts of culture on AI values through the lens of cultural psychology, and to highlight the importance of the role cultural diversity plays in AI development and application, this integrative review synthesizes what may be cross-cultural consensus and what may be cultural differences in shaping people’s attitudes, behaviors, and normative expectations regarding AI values. In addition, we discuss the vital role of cultural beliefs and cultural norms in the ethical supervision and application of AI in human society. To better understand the complex interaction between AI and culture, future work should focus on developing and iterating algorithms for diverse cultural scenarios, thereby both promoting the globalization of AI applications and meeting diverse cultural demands, ultimately improving the well-being of individuals and societies across the globe.

  • The Information Framing Effect of “AI Unemployment”

    Subjects: Psychology >> Social Psychology submitted time 2024-05-03

    Abstract: The advancement of artificial intelligence (AI) technology contributes significantly to enhancing productivity; however, concerns about potential technological unemployment have attracted considerable attention. The uncertainty surrounding the occurrence, timing, and scale of AI-induced unemployment precludes definitive conclusions, and this uncertainty may leave the public open to influence from the information they encounter about AI-induced unemployment. Media coverage of AI-induced unemployment often presents extensive information about affected industries, occupations, and probabilities, establishing two numerical information frames: one emphasizing the factors that determine how unemployment is distributed across industries (the factor frame) and one emphasizing the probability that unemployment occurs (the probability frame). Compared with the factor frame, the probability frame leads individuals to judge AI-induced unemployment as less likely, thereby mitigating the perceived threat of AI, especially among individuals with high ambiguity tolerance. Building on the foundational assumption that the probability frame alleviates AI threat perception, this study, comprising seven sequential experiments, investigates the mediating role of judged likelihood of AI-induced unemployment and the moderating role of individual ambiguity tolerance. Experiment 1 compares the AI threat perception elicited by general descriptions of AI-induced unemployment, the factor frame, and the probability frame. Experiment 2 validates the mediating role of likelihood judgments. Experiments 3 and 4 rule out potential influences of the specific probability values and the unemployment scale, respectively. Experiment 5 explores the moderating effect of ambiguity tolerance. Experiments 6 and 7 examine downstream effects of AI threat, including support for AI development policies and willingness to recommend various occupations. The primary findings are as follows. First, introducing AI-induced unemployment through the probability frame effectively diminishes AI threat perception (Experiments 1-7). Second, this effect is mediated by perceived likelihood: the probability frame prompts individuals to judge AI-induced unemployment as less likely, thus reducing AI threat (Experiments 2-5). Third, the framing effect is moderated by ambiguity tolerance, emerging primarily among individuals tolerant of ambiguous information (Experiment 5). Fourth, individuals exposed to the probability frame show greater support for policies promoting AI development, with AI threat playing a mediating role (Experiment 6). Finally, individuals exposed to the probability frame are more willing to recommend jobs involving frequent AI interaction (Experiment 7). This study extends prior research by elucidating how external factors such as information frames produce variation in AI threat perception. Unlike the extensively studied valence frame, numerical information frames affect AI threat perception by altering individuals’ likelihood judgments. The findings shed light on how numerical framing shapes perceptions of the AI-induced unemployment threat, policy support, and job recommendation willingness.
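
    Experiment 1's design implies a one-way comparison of AI threat perception across three conditions: a general description, the factor frame, and the probability frame. The sketch below illustrates that comparison on simulated data; the condition means and sample sizes are hypothetical, not the study's results.

    ```python
    # Illustrative one-way comparison of AI threat across framing conditions.
    # Simulated data only; not the study's results.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n = 60  # hypothetical per-condition sample size

    general = rng.normal(4.2, 1.0, n)      # generic unemployment description
    factor = rng.normal(4.3, 1.0, n)       # factor frame
    probability = rng.normal(3.6, 1.0, n)  # probability frame (assumed lowest)

    f, p = stats.f_oneway(general, factor, probability)
    print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

    # Focused contrast between the two numerical frames.
    t, p_t = stats.ttest_ind(probability, factor)
    print(f"probability vs factor frame: t = {t:.2f}, p = {p_t:.4f}")
    ```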

  • Symbiosis or opposition? The dialectical relationship between human and artificial intelligence

    Subjects: Psychology >> Social Psychology submitted time 2024-04-25

    Abstract: This review explores the complex attitudes towards Artificial Intelligence (AI) from the perspectives of benefit theory, threat theory, and dialectical relations. It first highlights how AI, as a form of technological advancement, fosters work efficiency, decision-making quality, and innovation across various domains, reflecting the optimistic evaluations and expectations placed on AI. The review then shifts focus to the potential threats posed by AI, including privacy infringement, job market disruption, and ethical dilemmas, showcasing the critical concerns surrounding AI development. Examining AI from a dialectical standpoint further underscores the importance of balancing technological innovation with ethical considerations, motivating a discussion of future research directions that emphasizes cross-cultural ethical exploration and the enhancement of human-AI collaboration. The review concludes that a comprehensive understanding and evaluation of AI requires transcending singular viewpoints and incorporating multidisciplinary insights to facilitate the sustainable development and social integration of AI technologies.

  • The Role of Culture in Human-Computer Interaction: Human Universals and Cultural Differences

    Subjects: Psychology >> Cognitive Psychology submitted time 2024-04-24

    Abstract: The rapid advancement and widespread application of Artificial Intelligence (AI) technologies have significantly altered human living and working practices, thereby attracting scholarly attention to their sociocultural implications. This literature review examines the influence of cultural factors on the interpretation, principles, and utilization of AI across diverse contexts, emphasizing the critical interaction between AI technologies and cultural psychological principles. AI encompasses a broad spectrum of capabilities, ranging from basic algorithms to sophisticated machine learning systems, designed for tasks such as sensory perception, linguistic understanding, and decision-making. However, the adoption and integration of these technologies vary considerably across cultural environments. The review highlights the importance of integrating cultural perspectives to achieve equitable, effective, and universally acceptable AI systems. Through the lens of cultural psychology, it provides insights into the development of culturally attuned AI systems and advocates for a comprehensive understanding of both cultural variances and shared values within the realm of AI. The paper proposes future research directions focusing on incorporating cultural diversity into AI research and applications, aiming to realize the full global potential of AI technologies.

  • Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review

    Subjects: Psychology >> Applied Psychology submitted time 2024-01-09

    Abstract: This paper explores the frontiers of large language models (LLMs) in psychological applications. Psychology has undergone several theoretical shifts, and the current use of Artificial Intelligence (AI) and Machine Learning, particularly LLMs, promises to open up new research directions. We provide a detailed exploration of how LLMs like ChatGPT are transforming psychological research, discussing their impact across various branches of psychology, including cognitive and behavioral, clinical and counseling, educational and developmental, and social and cultural psychology, and highlighting their potential to simulate aspects of human cognition and behavior. The paper delves into the capability of these models to emulate human-like text generation, offering innovative tools for literature review, hypothesis generation, experimental design, simulated experimental subjects, data analysis, academic writing, and peer review in psychology. While LLMs can advance research methodologies in psychology, the paper also cautions about their technical and ethical challenges, including data privacy, the ethical implications of using LLMs in psychological research, and the need for a deeper understanding of these models' limitations. Researchers should use LLMs in psychological studies responsibly, adhering to ethical standards and considering the potential consequences of deploying these technologies in sensitive areas. Overall, the article provides a comprehensive overview of the current state of LLMs in psychology, exploring potential benefits and challenges, and serves as a call to action for researchers to leverage LLMs' advantages responsibly while addressing the associated risks.
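
    One application the review discusses, using an LLM as a simulated experimental subject, amounts in practice to sending a standardized prompt to a chat model and recording the sampled response. A minimal sketch with the OpenAI Python client follows; the model name, prompt, and rating scale are illustrative assumptions, and such use carries the validity and ethics caveats the review raises.

    ```python
    # Illustrative LLM-as-simulated-subject sketch using the OpenAI client.
    # Model name, prompt, and scale are hypothetical choices.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    prompt = (
        "You are a participant in a psychology study. On a scale of 1 "
        "(not at all) to 7 (very much), how much would you trust advice "
        "from an AI supervisor at work? Answer with a number and one sentence."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sampling variability stands in for individual variation
    )
    print(response.choices[0].message.content)
    ```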

  • Dancing with AI: AI-employee collaboration in the systemic view

    Subjects: Psychology >> Management Psychology submitted time 2023-08-25

    Abstract: AI-employee collaboration is an interactive system composed of “AI-human-task environment” whose goal is to complete tasks efficiently. Improving AI-employee collaboration is crucial for promoting the integration of AI into the real economy, as well as for the mental health and career development of employees in the digital and intelligent era. However, because of the complexity of the interaction between AI and employees, current research is fragmented and lacks a comprehensive understanding of AI-employee collaboration. It is therefore necessary to clarify the relevant concepts and to review the AI-employee collaboration literature systematically and comprehensively. Adopting a systemic view, we clarify the concepts of AI and AI-employee collaboration, sort out the components of the AI-employee collaboration system, analyze the interactive effects among these components, and construct an integrated research framework. Finally, based on this framework, we propose directions for future research.
