Published Online: October 2025
Product Name: The IUP Journal of Corporate Governance
Product Type: Article
Product Code: IJCG051025
DOI: 10.71329/IUPJCG/2025.24.4.101-118
Author Name: Ahmad Alhindi and Arunkumar Sivakumar
Availability: YES
Subject/Domain: Management
Download Format: PDF
Pages: 101-118
The study extends the Theory of Planned Behavior (TPB) to explore ethical AI usage among business research scholars, focusing on the adoption of AI chatbots. A key contribution is the integration of moral disengagement as a novel negative predictor, highlighting its mediating role between moral climate and ethical AI use. The findings reveal that moral disengagement significantly shapes scholars’ ethical decision making, offering a deeper understanding of how individuals rationalize or resist unethical AI use in academic contexts. The study provides practical implications for universities and policymakers by emphasizing the need to strengthen institutional moral climates, reduce opportunities for students’ moral disengagement, and foster ethical intentions. Strategies such as ethics-focused training, clear governance policies, peer accountability, and reflective practices can help minimize disengagement mechanisms and promote responsible AI adoption. By linking TPB with moral disengagement, this study advances theoretical understanding and offers actionable guidance for enhancing research integrity in the age of AI.
The rapid integration of artificial intelligence (AI) tools into academic research represents a paradigm shift, empowering research scholars with capabilities such as generating ideas, conducting literature reviews, analyzing data, and checking grammar.