Informing Confident Product Iteration

Well-researched AI tools can still struggle with adoption. The issue is often not unclear value, but a mismatch between the product and how work unfolds in practice. In time-pressured, interruption-heavy environments, decisions are shaped by limited availability, emotional load, and shifting priorities, leaving little room for tools that require sustained attention or delayed reflection.

In work focused on AI-powered support tools, I partnered with teams to examine how these products needed to evolve to better align with real-world workflows. The goal was to clarify which product iterations would meaningfully improve adoption without undermining the core value users already recognized.


How I build understanding

Insight into how people work rarely comes from a single signal, especially when meaningful research already exists. Rather than duplicating prior efforts, I build on what’s already known and focus on the open questions that matter for product decisions.

In this work, I synthesize multiple signals to clarify where experiences break down in practice, including:

  • Reviewing prior research to ground the work in what is already established

  • Examining how a typical workday unfolds and how situations are handled in real time

  • Tracing what happens between an in-the-moment event and the actions taken to address it

  • Evaluating product experiences to surface assumptions about timing, inputs, and sustained engagement

  • Exploring what people do instead of using a tool, and why

  • Collecting direct feedback on what supports use, what creates friction, and what prevents repeat engagement

This approach makes it possible to identify where product assumptions conflict with real-world working conditions. The goal is not to determine whether a product should exist, but to clarify how it needs to evolve so people can access its value within the realities of their work.


What this looked like in practice

People who had used the AI-powered tool expressed interest in having neutral, research-backed support for navigating complex or challenging situations in their work. Many saw clear value in the tool for reflection, emotional regulation, and planning next steps.

At the same time, actual use was often infrequent or delayed.

In time-pressured, interruption-heavy work environments, there was a recurring gap between when a situation occurred and when someone had the time or capacity to engage with a tool. By the time space opened up later in the day, critical details had faded or the urgency of the moment had passed. In those cases, people defaulted to familiar behaviors such as relying on experience, talking with peers, or choosing faster, lower-friction alternatives.

The result was a product whose value was understood, but whose usefulness was constrained by timing, cognitive load, and real-world working conditions rather than by lack of interest. Recognizing this gap shifted the focus away from proving value and toward understanding how and when that value could realistically be accessed, pointing to where iteration could best support use in practice.


Why this mattered

Without accounting for how work unfolds moment to moment, teams risk misdiagnosing the problem and overcorrecting with unnecessary changes.

This work clarified that the challenge was not a lack of value, but a lack of fit. Grounding decisions in real-world working conditions made it possible to focus on iteration rather than reinvention, identifying where small, targeted changes could meaningfully improve adoption without undermining core strengths.


What this work enabled

This work provided the clarity to move forward with iteration rather than reconsider the product’s core premise.

It narrowed the focus to workflow alignment and behavior change, clarifying what to prioritize, what to adjust, and what to leave untouched.


What this revealed

Work was being managed across fragmented moments rather than within a single, uninterrupted flow. Situations unfolded quickly, and decisions were often made under emotional and time pressure.

This was not a lack of willingness to reflect or plan. Rather, the product experience assumed people could pause, recall details later, and provide structured input after the fact, conditions that rarely held in practice.

The core issue was not resistance to AI, but a mismatch between the workflow the product required and how support-seeking and decision-making actually unfolded in real-world conditions.