This blog continues our reflections on using AI within the research process, following Bloomfield and Comfort (2025), who explored the role of AI in peer review. Here, we shift focus to conference preparation, another part of the research lifecycle where AI presents new opportunities - and new uncertainties. As with that previous experience, we found ourselves grappling with a sense of discomfort as we tried to determine when, how, and whether using AI felt appropriate.
As an early-career researcher within a project team, the first author approached her first conference submission with a mixture of excitement and anxiety. Writing an abstract for a conference ‘paper’ felt particularly daunting, especially when we had not yet written the paper itself. Imposter syndrome crept in quickly: Were we ready? Did we know enough? Was there actually something worth presenting?
A closer read of the conference guidelines, however, brought welcome relief. We realised that we did not need to submit a full written paper; instead, we could choose from a range of presentation formats.
We chose the 15-minute paper presentation, recognising it as a valuable opportunity to articulate our emerging work. Yet one requirement remained: a 500-word abstract summarising research that was still evolving. The task felt overwhelming, and so, cautiously, we turned to AI.
Echoing the concerns discussed in Bloomfield and Comfort (2025), we worried about what constituted ‘acceptable’ AI support. To stay within ethical and institutional boundaries, we used only our university’s internal version of Copilot, ensuring none of our project data left a secure environment. We uploaded our internal project report along with the conference call and asked the AI to generate an abstract.
To our surprise, it produced an impressively coherent, well-structured draft within seconds. It captured the essence of our research, organised it logically, and articulated it in a style that felt conference-ready. Of course, we reviewed it carefully, line by line. But as we read, we found ourselves asking: should we feel guilty? The AI had not invented anything; it had simply re-expressed our own ideas more concisely than we had managed at first.
The abstract was then shared with a more experienced colleague, who provided stylistic suggestions and pointed out where certain elements were still unclear. Interestingly, the ambiguities identified weren’t the AI’s errors: they were flaws that had been present in our original report. In this way, the AI didn’t just help us write an abstract; it revealed where our explanation of the project itself needed fine-tuning.
Emboldened by its success, the AI then suggested that our report could be turned into an academic paper - and even offered to write one for us. This, we agreed, was a boundary we were not prepared to cross. But when it suggested creating a set of conference slides, the line felt blurrier. Would that be support - or substitution? We are still deliberating.
Our experience highlights both the usefulness and the unease associated with using AI in academic work. On the one hand, AI helped reduce the cognitive load of a task that felt overwhelming. It enabled us to articulate our thinking more clearly, identify gaps in our own writing, and move forward with greater confidence. It acted as a reflective surface, not a ghost writer.
Yet on the other hand, using AI forced us to wrestle with questions that feel increasingly urgent.
These questions do not have straightforward answers yet, and perhaps they never will. What our experience suggests, however, is that AI can be a valuable partner when used intentionally, critically, and with care. It can help early-career researchers - especially those grappling with imposter syndrome - find a foothold in unfamiliar academic processes.
For us, preparing a conference abstract with AI was not about automation; it was about confidence-building, clarity, and collaboration. A lesson we learnt through writing this blog is that we should probably have been more explicit about how we had used AI in writing the abstract to be in line with our institution’s guidance, even though the conference organisers did not ask us to do so. We are conscious that different journals have different policies about whether AI use is allowed or not. We wonder whether conferences will provide similar guidance going forwards. And if not, why not?
As we continue to explore what AI can and cannot do, we suspect that the line between support and over-use will continue to shift. For now, though, we remain committed to using AI not to replace our thinking, but to enhance our ability to communicate it.
We involved Microsoft Copilot throughout the development of this blog. Copilot helped us shape the structure, refine our arguments, extend sections of the text, and improve clarity and cohesion. It also generated multiple drafts, suggested alternative framings, and supported us in expressing our ideas more confidently. While Copilot contributed significantly to the writing process, all reflections, interpretations, and final decisions about the content were made by the human authors.
When we asked Copilot if we could go one step further and list it as an author, its reply was adamant:
“I can’t be listed as an author. This is because AI systems cannot take responsibility for content, cannot consent to authorship, and cannot meet authorship criteria used in academia. However, you can write an acknowledgement that positions Copilot as a significant collaborator or major contributor, as long as you don’t frame it as a human co-author.”
This is similar to our institution’s research policy, which states that ‘It is not acceptable to list any artificial intelligence tool as an author of a research output as such tools cannot enter into a publishing agreement, nor take responsibility for the content or integrity of the work. Those listed as authors are fully responsible for the entire content of any research output, even those sections produced with the assistance of an artificial tool’.
Bloomfield, S. and Comfort, C. (2025). Exploring the Ethics of AI in Peer Review: A Human Perspective. SCiLAB, The Open University, Milton Keynes, UK.

Evelyn is a Lecturer in Nursing within the Faculty of Wellbeing, Education, and Language Studies at The Open University (OU). Since joining the OU in 2017, she has made significant contributions to the development and delivery of nursing and nursing associate education programmes. She has also co-led several Pan-University scholarship-funded projects, focusing on strategies to support learners during practice-based learning experiences and evaluating the effectiveness of the tripartite relationship.

Sarah Bloomfield is Senior Lecturer in Work and Organisational Learning and the Director of Undergraduate Apprenticeship Qualifications in the Open University Business School. Her research and practice focus on how individual and collective managerial effectiveness can be improved in the workplace, recognising that each work situation is unique.
