Professor Yulin Fang and Xintao Qian
8 October 2025
Artificial intelligence (AI) technology has been advancing by leaps and bounds in recent years. A growing body of academic studies has shown that human-AI collaboration enhances efficiency and creativity across different work settings. Various sectors of the community have also come to recognize the productivity gains that AI brings to diverse fields of work.
However, AI empowerment is no panacea. Excessive reliance on human-AI partnership carries the risk of “skill atrophy”. Academics partly attribute this phenomenon to “automation bias”: the tendency of users to rely on AI “automatically” to complete tasks. In the process, users’ monitoring of AI systems gradually diminishes, and over-reliance on AI recommendations and outputs takes hold.
Moreover, we must address an underlying threat. Generative AI may offer unprecedented creativity and efficiency, but it can lead humans to gradually abandon active thinking and learning, ultimately trapping them in a vicious cycle of declining learning ability. Habitual reliance on AI to perform tasks could erode a user’s ability to explore and reflect on fundamental principles, thereby stalling knowledge acquisition and skill development. When AI’s output falters, or when unprecedented challenges arise, humans may be left unable to exercise independent judgment or respond creatively. In the long run, such a learning gap would not only compromise individual competitiveness but could also trigger crises marked by stifled innovation, unchecked risks, and an overall productivity downturn across institutions and society.
Would collaboration with AI compromise skill acquisition?
Early studies indicate that as the quality of AI outputs improves, users tend to invest progressively less effort in verifying those outputs.
In the education sector, researchers from the University of Pennsylvania conducted an experiment in math classes at an international high school in Turkey, giving students access to two types of GPT-4 interfaces: GPT Base, which provides full solution steps and answers, and GPT Tutor, which gives only hints at each key step without revealing the answers.
The results show that during the AI-assisted practice session, the GPT Base group performed 48% better than the control group, which had no technological assistance throughout, while the GPT Tutor group performed 127% better. Nevertheless, in a subsequent examination on the same material taken without AI assistance, the GPT Base group performed 17% worse than the control group, whereas the GPT Tutor group showed no such decline.
According to the analysis of the results, students in the GPT Base group, with answers readily available, tended to copy what they were given without thinking through the process, and thus failed to acquire problem-solving abilities. In contrast, the design of giving hints along the way without directly providing answers prevented the GPT Tutor group from relying excessively on AI and enabled them to develop problem-solving skills.
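To make the design contrast concrete, here is a minimal sketch of how two such interfaces might differ purely in their system prompts. The prompts and the llm() helper below are illustrative assumptions, not the study’s actual implementation.

```python
# Illustrative sketch contrasting an "answers" prompt (GPT Base design)
# with a "hints only" prompt (GPT Tutor design). llm() is a stand-in for
# any chat-model call; none of this is the study's actual implementation.

BASE_PROMPT = (
    "You are a math assistant. Show the full solution steps "
    "and state the final answer."
)

TUTOR_PROMPT = (
    "You are a math tutor. Give only a hint for the student's next step. "
    "Never reveal the final answer; ask the student to attempt it first."
)

def llm(system_prompt: str, user_message: str) -> str:
    """Stand-in for a chat-model call (wire up your own provider here)."""
    raise NotImplementedError

def base_session(question: str) -> str:
    # GPT Base style: hands over the worked solution in one shot.
    return llm(BASE_PROMPT, question)

def tutor_session(question: str, student_attempt: str) -> str:
    # GPT Tutor style: responds to the student's own attempt with a hint,
    # keeping the burden of actually solving the problem on the student.
    return llm(TUTOR_PROMPT, f"Problem: {question}\nMy attempt: {student_attempt}")
```

The pedagogical difference lies entirely in what the system withholds: the tutor-style prompt forces the student to produce each step themselves, which is what the experiment suggests protects skill acquisition.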
On the other hand, recent studies of the workplace (see Note 1) find that the productivity gains workers derive from generative AI are distributed unevenly. The lowest performers gain the most from technological assistance, while outstanding performers benefit little from human-AI collaboration. Initial results from a study being conducted at the Institute of Digital Economy and Innovation (IDEI), where I work, likewise show that even when generative AI systems have yet to achieve expert-level competence in a field, workers often follow AI-generated instructions uncritically, causing work quality to decline. This demonstrates that using technology as a shortcut to short-term productivity gains, without fostering a mindset of learning and reflection during human-AI collaboration, can erode workers’ skills over time.
AI should be the best tutor for humans
In the face of increasingly powerful generative AI, we must first recognize that blind trust in AI tools, without deep thinking and reflection, erodes the human motivation to learn. People may then fall into the trap of becoming passive users incapable of independent learning.
The game of chess long ago provided a classic case of humans learning from AI. Back in 1997, Garry Kasparov was beaten by IBM’s Deep Blue, and recent years have seen AlphaGo defeat the world’s top Go players Lee Sedol and Ke Jie. What may appear to be man’s defeat by machine has instead opened up new opportunities for human learning. Many professional players have not lost ground since machines surpassed humans. Instead, they have capitalized on the innovative approaches and strategies that emerged from matches with AI and, through trial and error coupled with independent thinking, have elevated their skills to new heights (see Note 2).
For positions requiring professional judgment and accountability, close collaboration between humans and AI is set to become the norm in the workplace rather than merely something nice to have. Making the best use of AI through continuous trial and error, reflection, and growth is essential for maintaining one’s competitiveness and leading innovation in a fast-changing labour market. By contrast, those who take the easy path and abandon self-directed learning will be left behind by the times.
How can we promote learning from collaboration with AI?
Although academia has yet to reach a consensus on this question, we would like to offer the following suggestions, informed by scholarly work to date.
First, encourage users to learn from the reasoning processes of AI systems. A study published by German scientists in 2023 (see Note 3) finds that explainable AI can, through its explanations, change how users think about the same type of problem during and after human-AI collaboration. Extending this to generative AI driven by chain-of-thought reasoning, system designers can prompt users to articulate their own reasoning and questions within an interactive interface before the AI’s chain of thought is revealed, so that users understand not only what the answer is but also why. This aligns with the self-explanation effect in educational psychology: having learners state their own hypotheses or questions strengthens long-term memory, deepens understanding, and reinforces newly acquired knowledge in subsequent review.
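As an illustration, the following sketch shows one way such an “explain first, then reveal” interaction could be structured. The prompt wording and the ask_model() helper are assumptions for illustration, not a published design.

```python
# Illustrative sketch of an "explain first, then reveal" interaction:
# the user must write down their own reasoning before the AI's reasoning
# is shown. ask_model() is a placeholder, not a published interface design.

def ask_model(prompt: str) -> str:
    """Placeholder for a chat-model call; wire up your own provider here."""
    raise NotImplementedError

def self_explanation_session(question: str) -> None:
    # Step 1: elicit the user's own hypothesis before any AI output appears.
    user_reasoning = input(f"{question}\nState your reasoning first: ")

    # Step 2: ask the model for an answer together with step-by-step reasoning.
    ai_response = ask_model(
        f"Question: {question}\n"
        "Give your answer, then lay out your reasoning step by step."
    )

    # Step 3: show both side by side, prompting the user to compare
    # the AI's chain of thought against their own.
    print("AI's answer and reasoning:\n" + ai_response)
    print("Your reasoning was:\n" + user_reasoning)
```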
Second, encourage users to build a habit of critical reflection during human-AI collaboration. Another paper, published by researchers at Goethe University Frankfurt in 2023 (see Note 4), reveals that, compared with imaging physicians making independent diagnoses, those receiving AI assistance tend to reflect, to varying degrees, on whether their diagnoses are consistent with the outputs of machine learning-based systems. Such a reflective mechanism helps reinforce doctors’ professional judgment and contributes to the further accumulation of medical experience. This points to the need to build similar moments of reflection into future human-AI collaboration applications: designs that prompt users to evaluate the AI’s performance, to consider whether their own conclusions align with the AI’s, and, where they differ, to analyse the reasons behind the discrepancy. Through such self-reflection, users can distil lasting lessons from their interactions with AI.
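One hypothetical way to operationalize such a moment of reflection is a checkpoint that asks the user to commit to a conclusion before seeing the AI’s, and to write down a rationale whenever the two disagree. The sketch below is our own illustration, not the system studied in Note 4.

```python
# Illustrative sketch of a "reflection checkpoint": the user records a
# conclusion before seeing the AI's, and any disagreement triggers a
# written rationale. A hypothetical design, not Note 4's actual system.

from dataclasses import dataclass

@dataclass
class ReflectionLog:
    case_id: str
    user_conclusion: str
    ai_conclusion: str
    agreed: bool
    rationale: str  # the user's own analysis of any disagreement

def reflection_checkpoint(case_id: str, ai_conclusion: str) -> ReflectionLog:
    # The user commits to a conclusion before the AI's is revealed.
    user_conclusion = input("Record your own conclusion first: ")
    agreed = user_conclusion.strip().lower() == ai_conclusion.strip().lower()
    rationale = ""
    if not agreed:
        # Disagreement is the teachable moment: require a written rationale.
        rationale = input(f"The AI concluded: {ai_conclusion}. Why do you differ? ")
    return ReflectionLog(case_id, user_conclusion, ai_conclusion, agreed, rationale)
```

Logging these checkpoints, rather than merely displaying the AI’s output, is what turns each collaboration episode into reviewable learning material.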
Conclusions
The momentum of the AI industrial revolution is unstoppable. To ensure that individuals can continue to enhance their personal skills in partnership with AI, it is crucial to achieve a strong public consensus and make adequate technological preparations. Only by doing so can the productivity gains from AI, amid the waves of digital transformation and AI innovation, be converted into long-term, sustainable technological dividends.
Notes:
1. Brynjolfsson, E., Li, D., & Raymond, L. R. (2025). Generative AI at Work. Quarterly Journal of Economics, 1–54. https://doi.org/10.1093/qje/qjae044
2. Kaufman, L. (2023, August 28). Accuracy, Ratings, and GOATs. Chess.com. https://www.chess.com/article/view/chess-accuracy-ratings-goat
3. Bauer, K., von Zahn, M., & Hinz, O. (2023). Expl(AI)ned: The Impact of Explainable Artificial Intelligence on Users’ Information Processing. Information Systems Research, 34(4), 1582–1602. https://doi.org/10.1287/isre.2023.1199
4. Abdel-Karim, B., Pfeuffer, N., Carl, K. V., & Hinz, O. (2023). How AI-Based Systems Can Induce Reflections: The Case of AI-Augmented Diagnostic Work. MIS Quarterly, 47(4), 1395–1424. https://doi.org/10.25300/MISQ/2022/16773




