AI has the potential to enhance leadership learning, but only when its use is driven by sound pedagogy rather than technological novelty.
This blog sets out three practical design principles for integrating chatbots into leadership education as tools for rehearsal and structured reflection.
Across higher education, generative artificial intelligence is being adopted at speed. Leadership education is no exception. Chatbots are now used for feedback, coaching-style dialogue, and reflective prompts. However, history reminds us that new educational technologies do not automatically improve learning. What matters is how they are designed and embedded.
Recent systematic reviews show that chatbots are widely used in education and are often evaluated positively by learners (Debets, 2025; Anjulo Lambebo, 2025; Molter, 2025). Yet these same reviews highlight a recurring issue: many implementations lack strong pedagogical grounding. This is a critical warning for leadership education, where learning is complex, contextual, and deeply human.
Leadership learning extends well beyond acquiring information about theories or models. It involves judgement, reflection, identity formation, ethical reasoning, and communication under uncertainty (Northouse, 2021). These dimensions are traditionally developed through coaching, mentoring, role-play, and experiential learning.
Research has explored whether conversational AI could support these learning processes by providing structured reflection and scenario-based dialogue (Harper, 2025). The study involved a small cohort of participants engaged in leadership development activities who interacted with a generative AI chatbot during a facilitated trial. Participants used the chatbot to work through leadership scenarios, articulate decision rationales, and reflect on their responses in real time, rather than to receive definitive answers or instruction.
Participants did not view AI as a replacement for human educators, coaches, or mentors. Instead, they positioned it as a supportive, low-risk space for rehearsal, sense-making, and confidence-building. This suggests that the perceived value of conversational AI lies in its capacity to enable iterative reflection and practice, reinforcing leadership learning as a process of judgement formation and self-authorship rather than content acquisition.
Research on AI-supported coaching increasingly frames chatbots as useful for routine, repeatable developmental activities, such as goal clarification, reflective questioning, and preparation for conversations, while leaving ethical judgement and emotional nuance to humans (Learnovate Centre, 2025; The Conference Board, 2025). This distinction is vital in leadership education.
Participants in the study reported that AI-supported dialogue helped them articulate ideas, reflect privately, and prepare for leadership situations. However, they were also aware of AI’s limitations, particularly around context sensitivity and emotional understanding (Harper, 2025). This reinforces the need to position AI as formative support, not a definitive source of truth.
AI is most effective when used as a rehearsal and reflection tool. Learners should be encouraged to test ideas, draft responses, and explore scenarios, while being explicitly reminded to verify information and engage critically. This is especially important given evidence that chatbots can produce confident but inaccurate responses (The Guardian, 2025).
Leadership learning requires dialogue, challenge, and ethical discussion. A blended model works well: AI-supported reflection followed by peer discussion, tutor feedback, and real-world application. This aligns with social constructivist approaches to leadership learning and prevents over-reliance on technology.
Future leaders will work alongside AI systems. Leadership education must therefore include explicit discussion of AI limitations, bias, accountability, and responsible use. Guidance from the European Commission (2024) and higher education scholars (Qu, 2023) emphasises the importance of transparency and ethical awareness when using generative AI in learning contexts.
If AI is introduced without careful design, learners may infer that speed matters more than judgement, or that outputs matter more than reasoning. Leadership education should do the opposite: reward reflection, ethical consideration, and critical engagement. AI can support this aim, but only if learning outcomes and assessment are aligned accordingly.
Online and blended environments offer strong opportunities for AI-supported leadership learning. They allow asynchronous reflection, repeated practice, and personalised support. However, effectiveness depends on clear expectations, scaffolded activities, and a shared understanding that responsibility for learning and leadership decisions remains human.
The challenge for leadership education is not whether to use AI, but how its use is designed. When added without pedagogical intent, chatbots risk reinforcing speed and surface learning. When used well, they can support what leadership education struggles to scale: rehearsal, reflection, and sense-making.
This work shows that AI’s value lies less in providing answers and more in creating space for judgement, ethical reasoning, and self-authorship. For higher education, the task is clear. If we want leaders who can navigate uncertainty responsibly, AI must be embedded to slow thinking down, not automate it. Leadership education does not need more technology; it needs more intentional pedagogy.
I would love to hear your views. Connect with me on www.linkedin.com/in/dr-jennifer-harper-roberts2026

Jennifer is Strategic Implementation Lead in the Faculty of Business and Law at The Open University, with over 12 years’ experience across higher and further education. Her work spans teaching and learning enhancement, academic quality, workforce development, and digitally mediated student experience.
A Senior Fellow of Advance HE (SFHEA), she lectures and supervises at postgraduate level and researches leadership and strategy in technology-enabled education, with a focus on generative AI, organisational capability, and inclusive, sustainable change.
References
Anjulo Lambebo, E. (2025) ‘Chatbots in higher education: a systematic review’, Interactive Learning Environments. Available at: https://www.tandfonline.com/doi/abs/10.1080/10494820.2024.2436931
Debets, T. (2025) ‘Chatbots in education: a systematic review of use cases and learning outcomes’, Computers & Education. Available at: https://www.sciencedirect.com/science/article/pii/S0360131525000910
European Commission (2024) Guidelines on the responsible use of generative AI in research. Available at: https://research-and-innovation.ec.europa.eu/news/all-research-and-innovation-news/guidelines-responsible-use-generative-ai-research-developed-european-research-area-forum-2024-03-20_en
Harper, J. (2025) Are leaders born or made? Can technological approaches assist the development and training of future leaders? PhD thesis, University of Chester. Available at: https://chesterrep.openrepository.com/
Learnovate Centre (2025) Using generative AI to provide personalised coaching and learning support. Available at: https://learnovatecentre.org/wp-content/uploads/2025/02/GenAI-Stream5_Personalised_Coaching_Report.pdf
Molter, M. (2025) ‘The impact of AI chatbots on higher education learning: a systematic literature review’, LearnTechLib. Available at: https://www.researchgate.net/publication/386526509_Chatbots_in_higher_education_a_systematic_review
Northouse, P.G. (2021) Leadership: theory and practice. 9th edn. Sage.
Qu, Y. (2023) ‘Generative artificial intelligence in higher education’, British Journal of Educational Technology. Available at: https://bera-journals.onlinelibrary.wiley.com/doi/full/10.1111/bjet.70029
The Conference Board (2025) How AI coaching is redefining leadership development (podcast). Available at: https://www.conference-board.org/podcasts/c-suite-perspectives/How-AI-Coaching-Is-Redefining-Leadership-Development
The Guardian (2025) ‘Chatbots can sway political opinions but are “substantially” inaccurate, study finds’. Available at: https://www.theguardian.com/technology/2025/dec/04/chatbots-sway-political-opinions-substantially-inaccurate-study
