By: Victoria Brazil (@SocraticEM)
We’ve all felt the awkward pause in the conversation: “So… how do you know your sim program improves patient outcomes?” It’s the uncomfortable moment when good intentions collide with the reality of measurement. The STORK team in Queensland has given us a better answer. Not by chasing the holy grail of patient-level impact, but by turning the focus squarely onto systems.
I recently recorded a podcast with Ben Symon and the STORK crew, who walked us through their approach – measuring what simulation actually changes in the clinical environments it touches. Their outreach program delivers paediatric resuscitation education across regional Queensland. But this isn’t just about teaching. It’s about diagnosis and intervention. Think of simulation as a Trojan horse: education on the surface, but underneath, systems probing, safety audits, and quality improvement seeds being planted.
Their 2025 publication in Advances in Simulation outlines a deliberately pragmatic strategy. Forty courses. 242 system issues identified, mostly around equipment, processes, and drug safety. Nearly half resolved, with many more still in progress. No randomised controlled trial, no breathless claims: just credible, local, trackable change.
Crucially, they’ve wrapped this in a systematic process. It’s not a dashboard or a KPI tracker; it’s a reporting mechanism that actually lands. Short, narrative summaries sent to the right people: ED directors, execs, educators. It closes the loop between what happens in sim and what happens in real clinical work. Simple, but rarely done.
This is translational simulation in its clearest form, not defined by location (in situ vs centre-based), but by function. As I wrote in 2017, it’s not “where?” but “why?”. The “why” here is unmistakable: use simulation to fix what’s broken, not just teach what’s ideal.
None of this works, though, without the right people running the show. Faculty development for translational sim needs to look very different. These aren’t just good debriefers. They’re QI-literate, systems-savvy, politically aware operators who can navigate the ward, the simulation space, and the realities of health systems.
A few reflections for those of us running programs:
- Let go of outcome obsession. No one intervention can “prove” patient benefit. What you can prove is that your program identifies hazards and gets stuff fixed.
- Design for follow-through. If your sim insights don’t leave the debrief room, they’re wasted. Build the process that carries them upstream.
- Develop different faculty. If your faculty dev only teaches PEARLS or advocacy inquiry, you’re undercooking your team. Think systems, not just scenarios.
- Stay humble, stay useful. The STORK team didn’t wait for perfect conditions. They embedded themselves, kept the reporting tight, and aligned their metrics with what matters locally.
The views and opinions expressed in this post are those of the author(s) and do not necessarily reflect the official policy or position of the University of Ottawa. For more details on our site disclaimers, please see our ‘About’ page.
