Resisting Skill Erosion: Harnessing AI for Personal Growth

Professor Yulin Fang and Xintao Qian

8 October 2025

Artificial intelligence (AI) technology has been advancing by leaps and bounds in recent years. A large body of academic research has shown that human-AI collaboration enhances efficiency and creativity across different work settings. Across society, awareness is also growing of the productivity gains that AI brings to diverse fields of work.

However, AI empowerment is no panacea. Excessive reliance on human-AI partnership carries the risk of “skill atrophy”. Academia partly attributes this phenomenon to “automation bias”: a tendency for users to let AI complete tasks “automatically”. In the process, users gradually monitor AI systems less and come to rely excessively on their recommendations and outputs.

Moreover, we must confront an underlying threat. Generative AI may offer unprecedented creativity and efficiency, but it can lead humans to gradually abandon active thinking and learning, ultimately trapping them in a vicious cycle of declining ability to learn. Habitual reliance on AI to perform tasks can erode a user’s capacity to explore and reflect on fundamental principles, stalling knowledge acquisition and skill development. Once AI’s output falters, or when an unprecedented challenge arises, humans may find themselves unable to exercise independent judgment or respond creatively. In the long run, such a learning gap would not only compromise individual competitiveness but could also trigger crises of stalled innovation, unmanaged risks, and declining productivity across institutions and society.

Would collaboration with AI compromise skill acquisition?

Some initial studies indicate that as the quality of AI outputs improves, users collaborating with these systems invest less and less effort in verifying those outputs.

In the education sector, researchers from the University of Pennsylvania ran an experiment in a high-school math class at an international school in Turkey. Students were given one of two GPT-4 interfaces: GPT Base, which provided full solution steps and answers, and GPT Tutor, which gave only a hint at each key step without revealing the answer.

During the AI-assisted practice session, the GPT Base group performed 48% better, and the GPT Tutor group 127% better, than a control group working without technological assistance. Yet in a subsequent examination on the same material, taken without AI assistance, the GPT Base group scored 17% worse than the control group, whereas the GPT Tutor group suffered no such decline.

An analysis of the results suggests that, with the AI tool at hand, students in the GPT Base group tended to copy the answers they were given without thinking through the process, and so failed to acquire problem-solving skills. In contrast, the “hints along the way, no direct answers” design prevented the GPT Tutor group from over-relying on the AI and enabled them to develop those skills.
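To make the contrast concrete, here is a minimal sketch of how two such interfaces might be configured, assuming an OpenAI-style chat completion API (the openai Python package, v1+). The system prompts are illustrative assumptions, not the study’s actual prompts.

```python
# A minimal sketch of the two tutoring configurations described above.
# Assumes the openai package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

GPT_BASE = (
    "You are a math assistant. Provide the full solution steps "
    "and the final answer to the student's problem."
)

GPT_TUTOR = (
    "You are a math tutor. Never reveal the final answer or a complete "
    "solution. At each step, offer only a hint for the next key idea "
    "and ask the student to attempt that step themselves."
)

def ask(system_prompt: str, problem: str) -> str:
    """Send one problem to the model under the chosen tutoring policy."""
    response = client.chat.completions.create(
        model="gpt-4",  # the study used GPT-4; any chat model works here
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": problem},
        ],
    )
    return response.choices[0].message.content

# The same question yields a worked answer under GPT_BASE,
# but only a first hint under GPT_TUTOR.
print(ask(GPT_TUTOR, "Solve for x: 2x + 6 = 14"))
```

The entire pedagogical difference sits in the system prompt: one policy hands over the answer, the other withholds it step by step.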

On the other hand, recent research on the workplace (see Note 1) finds that productivity gains from generative AI are distributed unevenly among employees. The lowest-performing workers gain the most from technological assistance, while top performers benefit little, if at all, from human-AI collaboration. Initial results from a study under way at the Institute of Digital Economy and Innovation (IDEI) at HKU Business School, where we work, likewise show that even where generative AI systems have yet to reach expert-level competence in the relevant field, workers often follow AI-generated instructions uncritically, causing work quality to decline. This suggests that using the technology as a shortcut to short-term productivity gains, without cultivating a mindset of learning and reflection during human-AI collaboration, can erode workers’ skills over time.

AI should be the best tutor for humans

In the face of increasingly powerful generative AI, we must first recognize that blind trust in AI tools, unaccompanied by deep thinking and reflection, can sap the human motivation to learn. People may then fall into the trap of becoming passive users who can operate the tools but no longer learn from them.

Board games long ago provided a classic case of humans learning from AI. In 1997, Garry Kasparov was beaten by IBM’s Deep Blue; in recent years, AlphaGo has defeated the world’s top Go players Lee Sedol and Ke Jie. What looks like man’s defeat by machine has instead opened up new opportunities for human learning. Many professional players have not lost ground since machines surpassed them. Instead, by studying the novel openings and strategies that emerged from matches with AI, and through repeated trial and error combined with independent thinking, they have elevated their skills to new heights (see Note 2).

For positions that demand professional judgment and accountability, close collaboration between humans and AI is set to become the workplace norm rather than a nice-to-have. Making the best use of AI through continuous trial and error, reflection, and growth is essential for staying competitive and leading innovation in a fast-changing labour market. Conversely, those who opt for convenience and abandon self-directed learning will be left behind by the times.

How can we promote learning from the experience of collaborating with AI?

Although academia has yet to reach a consensus on this question, we offer the following suggestions, informed by existing scholarly work.

First, encourage users to learn from the reasoning processes of AI systems. A study published by German researchers in 2023 (see Note 3) finds that explainable AI can, through its explanations, change how users think about the same class of problems both during and after human-AI collaboration. Extending this to generative AI driven by chain-of-thought reasoning, system designers can have the interactive interface first prompt users to articulate their own reasoning and questions, and only then display the AI’s chain of thought, so that users understand not only what the answer is but why. This aligns with the self-explanation effect in educational psychology: having learners state their hypotheses or questions first strengthens long-term memory, deepens understanding, and reinforces newly acquired knowledge in the subsequent review.
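As a rough illustration of this “self-explanation first” interaction pattern, consider the following minimal sketch. The `query_model` function is a hypothetical stand-in for any model call that returns both an answer and its chain of thought; the names and canned output are assumptions for illustration, not a prescribed design.

```python
# A minimal sketch of a "self-explanation first" interface: the AI's
# reasoning is withheld until the user has committed to their own,
# in line with the self-explanation effect discussed above.

def query_model(question: str) -> tuple[str, str]:
    """Hypothetical stand-in for a chain-of-thought-capable model call.
    Returns (answer, chain_of_thought); replace with a real API call."""
    return ("x = 4", "Subtract 6 from both sides, then divide by 2.")

def self_explanation_session(question: str) -> dict:
    print(f"Question: {question}")
    # 1. The user must state a hypothesis before seeing any AI output.
    user_reasoning = input("Your reasoning first, before the AI's is shown: ")
    # 2. Only then is the AI's chain of thought revealed for comparison.
    answer, chain_of_thought = query_model(question)
    print(f"\nAI reasoning: {chain_of_thought}\nAI answer: {answer}")
    # 3. An explicit comparison prompt supports later review.
    print("Where does the AI's reasoning differ from yours, and why?")
    return {"question": question, "user_reasoning": user_reasoning, "ai_answer": answer}

self_explanation_session("Solve for x: 2x + 6 = 14")
```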

Second, encourage users to build a habit of critical reflection during human-AI collaboration. A research paper by researchers at Goethe University Frankfurt am Main (see Note 4) reveals that, compared with radiologists making independent diagnoses, those receiving AI assistance reflect, to varying degrees, on whether their diagnoses agree with those of the machine learning system. Such a reflective mechanism helps reinforce doctors’ professional judgment and contributes to the further accumulation of medical experience. This points to the need to build similar reflection checkpoints into future human-AI collaboration: designs that let users evaluate AI performance thoroughly, prompt them to consider whether their own judgments align with the AI’s, and, where they diverge, to analyse why. Through such self-reflection, users can distil lasting experience from their interactions with AI.
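A minimal sketch of such a reflection checkpoint might look as follows. It is framed generically rather than as the diagnostic system in the cited study; the function name and record format are illustrative assumptions.

```python
# A minimal, generic sketch of a reflection checkpoint; NOT the diagnostic
# system from the cited study. When the user's judgment and the AI's output
# diverge, the workflow pauses and records the user's analysis of why.

def reflection_checkpoint(user_judgment: str, ai_output: str) -> dict:
    record = {
        "user": user_judgment,
        "ai": ai_output,
        "agree": user_judgment == ai_output,
    }
    if record["agree"]:
        print("Your judgment matches the AI's. Note briefly why you are confident:")
    else:
        print(f"Your judgment ({user_judgment}) differs from the AI's ({ai_output}).")
        print("Before proceeding, note the likely reason for the discrepancy:")
    record["reflection"] = input("> ")  # stored for later review of learning
    return record

# Example: a diagnostic-style workflow would call this after each case.
log = reflection_checkpoint(user_judgment="benign", ai_output="malignant")
```

Accumulating these records over time gives users a reviewable trace of where they and the AI diverged, which is precisely the raw material for the reflective learning described above.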

Conclusions

The momentum of the AI industrial revolution is unstoppable. Ensuring that individuals can keep enhancing their skills while working in partnership with AI will require broad public consensus and adequate technological preparation. Only then can the productivity gains from AI, amid the waves of digital transformation and AI innovation, be converted into long-term, sustainable technological dividends.

Notes:

  1. Brynjolfsson, E., Li, D., & Raymond, L. R. (2025). Generative AI at Work. Quarterly Journal of Economics, 2025, 1–54. https://doi.org/10.1093/qje/qjae044
  2. Kaufman, L. (2023, August 28). Accuracy, Ratings, and GOATs. Chess.Com. https://www.chess.com/article/view/chess-accuracy-ratings-goat
  3. Bauer, K., Von Zahn, M., & Hinz, O. (2023). Expl(AI)ned: The Impact of Explainable Artificial Intelligence on Users’ Information Processing. Information Systems Research, 34(4), 1582–1602. https://doi.org/10.1287/isre.2023.1199
  4. Abdel-Karim, B., Pfeuffer, N., Carl, K. V., & Hinz, O. (2023). How AI-Based Systems Can Induce Reflections: The Case of AI-Augmented Diagnostic Work. MIS Quarterly, 47(4), 1395–1424. https://doi.org/10.25300/MISQ/2022/16773

Professor Yulin Fang
Director of the Institute of Digital Economy and Innovation (IDEI) and Professor of Innovation and Information Management, HKU Business School

Xintao Qian
PhD Candidate, HKU Business School

(This article was also published on 8 October 2025 in the “Lung Fu Shan” (龍虎山下) column of the Hong Kong Economic Journal (信報).)