AI can support leadership development, but only with safeguards. This blog explores the real risks of AI-supported learning, including inaccurate outputs, hidden bias, over-automation, and the erosion of human judgment. It then moves from caution to action, offering a practical, experience-based checklist leaders and learning teams can use to ensure AI tools are transparent, inclusive, ethically governed, and aligned with organisational values. The goal is not to slow innovation, but to make sure AI strengthens leadership capability without compromising trust, equity, or accountability.
Leadership is fundamentally about responsibility, particularly when decisions affect others. As AI tools become embedded in leadership development, the question is no longer whether they are useful, but whether they are being used responsibly.
Media reporting suggests that many managers now consult AI chatbots before approaching their own supervisors, often because the interaction feels faster and less judgemental (McKinsey, 2023). While understandable, this behaviour highlights the risk of over-trust and under-supervision, both of which matter deeply in leadership education.
This research investigates the longstanding question of whether leaders are born or made, and examines how artificial intelligence can support leadership development and training. Using an ethnographic approach combined with grounded theory techniques, the study explores participant experiences of AI-supported learning through structured questionnaires and live interactions with a GenAI chatbot.
A small cohort of participants took part in reflective exercises, Likert-scale surveys, and an AI-facilitated leadership development session, generating both qualitative and quantitative data on leadership knowledge, skill development, and user experience. By combining participant self-reflection with measured perceptions of learning outcomes, the project evaluates the feasibility, effectiveness, and limitations of AI as a tool for leadership development. In doing so, it contributes practical and theoretical insight into the evolving role of technology in shaping future leaders.
One of the most significant risks in AI-supported learning is that chatbots can produce persuasive, confident responses that are partially or wholly incorrect. Research reported by the UK Parliament POST (2023) demonstrates that chatbots can influence opinions while also generating inaccurate information. In leadership development, this combination is particularly problematic.
The study found that participants were aware of this risk and expressed concerns about trust, accuracy, and bias when using AI for leadership learning (Harper, 2025). This reinforces the need to design learning activities that explicitly require verification, source-checking, and critical evaluation.
Leadership education already grapples with questions of representation, power, and inclusion. AI systems trained on historical data may reproduce dominant assumptions about who looks or sounds like a leader. Without careful design, AI-supported learning risks reinforcing existing inequalities.
Participants in the study raised concerns about whether AI might reflect hidden biases in leadership norms (Harper, 2025). Addressing this requires intentional design choices: diverse scenarios, critical questioning, and opportunities for learners to challenge assumptions rather than accept outputs at face value.
Responsible AI use is often discussed at policy level, but it also has practical implications for educators. The introduction of ISO/IEC 42001 (ISO, 2023) provides a useful reference point. This standard focuses on managing AI systems responsibly through risk assessment, transparency, and continuous improvement (DSIT, 2024; NIST, 2023).
Higher education institutions do not need to implement the standard wholesale, but its principles translate well into learning contexts: define purpose, assess risks, apply safeguards, and review use over time.
AI can enhance leadership development, but only when used responsibly. With clear safeguards, ethical governance, and strong human oversight, AI can support learning without reinforcing bias, undermining judgement, or eroding trust, making responsible use of AI a defining test of modern leadership.
I would love to hear your views. Connect with me on www.linkedin.com/in/dr-jennifer-harper-roberts2026

Jennifer is Strategic Implementation Lead in the Faculty of Business and Law at The Open University, with over 12 years’ experience across higher and further education. Her work spans teaching and learning enhancement, academic quality, workforce development, and digitally mediated student experience.
A Senior Fellow of Advance HE (SFHEA), she lectures and supervises at postgraduate level and researches leadership and strategy in technology-enabled education, with a focus on generative AI, organisational capability, and inclusive, sustainable change.
Department for Science, Innovation and Technology (DSIT) (2024) AI regulation: a pro-innovation approach. Available at: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach
Harper, J. (2025) Are leaders born or made? Can technological approaches assist the development and training of future leaders? PhD thesis, University of Chester. Available at: https://chesterrep.openrepository.com/
ISO (2023) ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system.
McKinsey (2023) The state of AI in 2023: Generative AI’s breakout year.
NIST (2023) AI Risk Management Framework (AI RMF 1.0). Available at: https://www.nist.gov/itl/ai-risk-management-framework
UK Parliament POST (2023) Generative artificial intelligence. Available at: https://post.parliament.uk/research-briefings/post-pn-0700/
University of Leeds (2025) Acknowledging use of Generative AI in academic work. Available at: https://generative-ai.leeds.ac.uk/ai-and-assessments/acknowledging-use-of-ai/
University of Manchester Library (2025) Referencing and acknowledging AI (artificial intelligence). Available at: https://subjects.library.manchester.ac.uk/referencing/AI
