Artificial intelligence and the simulationists.

By: Victoria Brazil (@SocraticEM)


Opinions on Artificial Intelligence (AI) lie on a spectrum from extreme hype to doomsday predictions. So, what might be AI’s optimal use within simulation-based education (SBE)?

The use of AI in medical education has been discussed widely since the release of ChatGPT last November. Some of the conversation concerns what health professionals need to learn in order to use AI in practice, while other commentary has focused on how AI can augment the educational process. Many of the latter examples apply to simulation-based education: creating scenario stems, drafting learning objectives, recommending equipment and resources, structuring debriefing points, and creating session summaries.

In AI and the simulationists, Rogers et al. tested the ability of ChatGPT to write simulation scenarios suitable for health professional learners. They prompted the platform with details of their learners and the required content areas – 1) an adult cardiac arrest case for a group of medical and nursing students, and 2) a paediatric asthma case for a practising healthcare team. The scenarios were then evaluated by experienced simulation educators, using their own judgement and a structured tool – the Simulation Scenario Evaluation Tool (SSET).

So how did ChatGPT go?

Not bad. The scenarios were generated quickly, taking about 20 minutes of educator time. They were workable, but still required some expert review. The evaluators found positives in the specific learning objectives, the realism of the scenarios, and their conformance with simulation standards. The deficiencies identified were in the quality of references (some inaccurate or confabulated) and the failure to reference current treatment guidelines. This is not surprising. AI is not actually ‘intelligent’. Large language models like ChatGPT are designed simply to predict the next best word in a string, and are limited by the data on which they are trained (which might not include those guidelines).

I agreed with most of the authors’ conclusions – with one additional reservation. Algorithms are biased and can perpetuate stereotypes. In the first generated scenario in this study, a 60-year-old white male with a wife gets chest pain… That’s not wrong, but we can see how algorithms can perpetuate a lack of diverse patient representation in education, compounding the potentially problematic signalling from our simulation scenario design choices. And this can impact how we care for patients who fall outside these demographics.

The authors conclude – “Provided the correct questions are asked, AI can augment the design process, making for a quicker solution and integrating information that may be missed in conventional scenario development…. [But] …. At present, the need for human intervention in simulation scenario design is required.” Hopefully this is an early entrant in a rich literature exploring the optimal use of AI in healthcare simulation.

Image generated by AI. Note incorrect hand placement.

The views and opinions expressed in this post are those of the author(s) and do not necessarily reflect the official policy or position of the University of Ottawa. For more details on our site disclaimers, please see our ‘About’ page.