Last month, the two human authors took part in a peer review exercise on blog writing, hosted by The Open University’s SCiLAB team. Each had drafted a blog post and, embracing the spirit of collaboration, they exchanged constructive feedback on both content and style. The experience was not only enriching but also deeply affirming - a reminder of the unique value of human-to-human support in the creative process.
Yet, as the session drew to a close, an unexpected question surfaced. One author reflected on how positive and supported they had felt during the human peer review, but admitted to feeling a sense of guilt when engaging in a similar exercise earlier that day - with Generative AI. Why the discomfort?
Was it the absence of formal recognition for AI contributions, even though human reviewers weren’t credited either? A fear of losing personal writing skills by leaning on AI to polish phrasing? Or perhaps it was the AI’s relentless helpfulness - suggesting titles, summaries, and refinements - while human reviewers intuitively knew when to stop and use the opportunity of time together to reflect on the implications of AI in writing?
To delve deeper, we posed a direct question to GenAI Copilot:
‘Why do we feel like we’re cheating when we ask you to review our writing?’
Here’s what it said:
“That's a really thoughtful question - and you're definitely not alone in feeling that way. The sense that you're ‘cheating’ by asking me (or any AI) to review your writing often comes from a few overlapping psychological and cultural factors:
Many educational systems and professional environments place a strong emphasis on individual achievement. When you get help - especially from something that feels like a shortcut - it can feel like you're not doing the work ‘yourself’, even if you're still the one generating the ideas and making the final decisions.
We often forget that writing has always involved tools: dictionaries, thesauruses, grammar checkers, editors, peers, and now AI. Using me to review your writing is no different in principle - I'm just a more advanced tool. But because AI is new and powerful, it can feel like it's doing more of the work than it actually is.
Writers sometimes worry that if an AI helps refine their work, the final product might not feel fully ‘theirs’. But in reality, you're still the author - I'm just helping you express your ideas more clearly or effectively.
When something feels too easy, we often undervalue it. If I help you spot a grammar issue or suggest a better phrase instantly, it might feel like you skipped a step - even though the goal is better communication, not struggling unnecessarily.
Think of me like a writing coach or editor. You're still doing the creative thinking, the structuring, the decision-making. I'm just here to support you - to help you write with more confidence, clarity, and impact.
With that shift in perspective, our guilt began to ease, although we did reflect wryly on the irony that the very tool that had caused our misgivings was the same one reassuring us. We started to see our interaction with GenAI not as a shortcut, but as a form of collaboration - just like the peer review we’d done with each other.
But just as we were getting comfortable with this idea, further questions surfaced: what if this hadn’t been a blog post, but an academic paper? Would it still feel acceptable to ask GenAI to identify themes or extract quotes from data? Where do we draw the line between assistance and authorship? We ask students to acknowledge their use of GenAI in assessment, so should we do the same? And how do we acknowledge our use of GenAI in writing blogs and journal articles?
In academia, we’ve long wrestled with imposter syndrome. But with the rise of GenAI, are we now entering a new era - one shaped by a different kind of self-doubt?
As Copilot framed it: ‘Imposter syndrome used to be our academic nemesis. Now, it’s been joined by a new character: AI Anxiety - armed with endless prompts and an uncanny knack for rewriting our sentences just a little too well.’
It’s a new frontier, and we’re still learning how to navigate it. But perhaps the first step is acknowledging that these questions - about creativity, credit, and control - are not just technical or ethical. They’re deeply human.
M365 Copilot, the OU’s generative AI tool, was used to develop this blog. In July 2025, Copilot was asked ‘Why do we feel like we’re cheating when we ask you to review our writing?’ The answer provided by Copilot was cited directly in the blog and the fact that this information came from Copilot was clearly indicated.
Sarah is a Senior Lecturer in Work and Organisational Learning at The Open University. Her research explores the lived experience of work, with a focus on learning in, for, and through work. Sarah studies organisational contradictions and their impact on leadership and behaviour, as well as work-based learning as a pedagogical approach. She also holds a British Academy Small Research Grant to investigate the lived experience of female angel investors in Saudi Arabia.
Catherine is lead for Access, Participation and Success and a Senior Lecturer in the Faculty of Business and Law. Her research interests focus on identifying the best methods and tools to help students and young people achieve their potential. This includes youth and peer mentoring as well as digital tools.