Understanding Real Workflows
Well-researched AI tools often struggle with adoption for reasons unrelated to their underlying quality. The challenge is frequently a disconnect between how tools expect users to behave and how work actually unfolds in real conditions. In classrooms, limited time, emotional intensity, and constant interruptions shape every decision. Adoption depends on behavior change, not just product quality.
I was brought in as a consultant to understand why an AI-powered teaching assistant for classroom behavior planning was not being used in practice, despite strong interest from educators.
How I build understanding
Understanding how teachers work rarely comes from a single input, especially when research has already been conducted. I start by building on existing work rather than duplicating it, then focus on where open questions remain.
To understand what was happening in practice, I looked across multiple signals, including:
Review of prior research to understand what was already known
Conversations with educators focused on what a typical day looked like and how behavior issues were handled in real time
Detailed walkthroughs of what happened between a classroom incident and the actions teachers took to address it
Review of the current product experience to understand its assumptions about timing, inputs, and engagement
Questions about what teachers did instead of using the tool and why
Direct feedback on what worked, what did not, and what prevented repeat use
This approach made it possible to focus on where the product’s assumptions broke down in practice, rather than revisiting questions that had already been answered. The goal was not to prove the tool’s value again, but to understand why that value was difficult to access in real classroom conditions.
A real example
Educators consistently expressed interest in having neutral, research-backed support for managing challenging student behavior. Many believed a tool like this could be especially helpful for reflection, emotional regulation, and planning next steps.
At the same time, use of the tool was infrequent and often delayed.
Teachers described a gap between when a classroom situation occurred and when they had the time or capacity to engage with a tool. By the time they could sit down, during planning periods, lunch, or later in the day, important details had faded or the moment had passed. In those cases, teachers defaulted to familiar behaviors such as talking with coworkers, relying on experience, or turning to faster, lower-friction alternatives.
The result was a situation where the tool’s value was understood, but access to that value was limited by timing, cognitive load, and classroom realities rather than by lack of interest.
Why this mattered
Without accounting for how teaching unfolds moment to moment, the team risked misdiagnosing the problem and overcorrecting.
This work clarified that the issue was not a lack of value, but a lack of fit. Iteration, rather than reinvention, was the most effective way forward.
What this work enabled
This work gave the team confidence to move forward with iteration rather than reconsidering the product’s core premise.
It narrowed focus to workflow alignment and behavior change, helping guide what to prioritize and what to leave untouched.
What this revealed
Teachers were managing behavior across fragmented moments rather than in a single, uninterrupted flow. Situations unfolded quickly, often during class, and decisions were made under emotional and time pressure.
This was not because teachers were unwilling to reflect or plan. It was because the tool assumed teachers could pause, recall details later, and provide structured inputs after the fact.
The issue was not resistance to AI. It was a mismatch between the workflow the tool required and how behavior support actually happened in real classroom conditions.