When the Product Works
but People Don't Show Up

A mixed-methods research engagement uncovering the psychological barriers blocking adoption of an AI-powered personal health platform, and the strategic framework built to address them.

Client: Early-stage AI health platform (NDA)
Duration: ~2.5 months
My Role: Solo researcher and strategist
Methods: Mixed methods

A product with strong fundamentals,
and a growth problem

The client was an early-stage AI health platform designed to act as a personal medical command center, helping patients consolidate their health history, prepare for appointments, and catch potential diagnostic errors before they happen. The founders had validated the core technology and had early clinical backing, but were struggling to convert awareness into active users at scale.

I was brought in to conduct usability research and help the team understand where and why users were dropping off. What I found was more fundamental than a UX friction problem.

A multi-method study designed to
follow the real story

I recruited and screened participants independently, targeting a range of ages, health engagement levels, and tech comfort. Sessions were moderated and conducted remotely over Zoom. Critically, I also documented and analyzed conversations with individuals who refused to engage with the product at all, a group most research protocols would simply exclude as non-completers.

01
Screener Survey
Recruited and qualified participants independently across varied health engagement profiles
02
Think-Aloud Usability
8 moderated sessions observing real-time navigation and verbal reasoning through the product
03
Semi-Structured Interviews
Post-session interviews exploring attitudes toward health data, AI, and personal health management
04
Non-Completer Analysis
Documented conversations with individuals who refused to engage, treating refusal itself as signal
05
Web Analytics Review
Analyzed behavioral data to identify drop-off patterns and corroborate qualitative findings

The product wasn't broken.
The entry point was.

Usability testing revealed a small number of interface-level friction points, but these were not the primary story. Near-universal concept acceptance among participants who completed the intake suggested the core product was sound. The more significant findings were psychological.

1
Near-universal concept acceptance

Every participant who completed the intake understood the value proposition clearly. There was no confusion about what the product does or why it matters. Usability, at the feature level, was solid.

2
The "Not for Me" Paradox

Users consistently recognized the product as genuinely valuable, but for other people. They described it as "something impactful" and "something I'd use if my doctor told me to," while not yet seeing themselves as the primary user. This is a clinical application of Optimism Bias: the tendency to believe health risks and health management needs apply more to others than to oneself. I named this the "Not for Me" Paradox, a personal relevance gap that creates adoption inertia even in the presence of genuine product understanding.

3
Trust is a feeling, not a certificate

A meaningful segment of potential users refused to engage with the product at all due to privacy concerns, even after being informed of HIPAA and SOC 2 compliance. Compliance badges did not move them. This finding, surfaced specifically through analysis of non-completers that a standard protocol would have excluded, revealed that trust in a health data context is an emotional state, not an information problem. Telling people the product is safe is not the same as making them feel safe.

Core Insight
"Between the 'Not for Me' Paradox and the Trust Gap, these hesitations create high friction for a purely direct-to-consumer launch, risking substantial acquisition costs spent fighting psychology one person at a time."

The implication was clear: the direct-to-consumer path wasn't broken, but it was expensive. A different entry point could change the psychological math entirely.

Don't change the product.
Change the context in which people meet it.

Based on the research findings, I developed a Trust-Transfer Model as the strategic response. The core logic: if individual users don't yet trust the platform on their own, route them through institutions they already trust. When a self-funded employer, union, or advocacy organization introduces the product as a benefit, they transfer their credibility to it. The user's internal question shifts from "Do I need to buy this?" to "This is something my organization already chose for me."

Primary
Recommendation
B2B2C Trust-Transfer Model

Partner with self-funded employers, stop-loss carriers, unions, and other high-trust aggregators to offer subsidized or benefit-packaged access. Target verticals where the value proposition is immediately legible to the employer: burnout reduction for healthcare workers, billable hour protection for professionals in the sandwich generation, absenteeism reduction in manufacturing. This is not a sales pivot. It is a psychological solve.

Complementary
Channel
Patient and Caretaker Advocacy Alignment

Endorsements from established advocacy organizations (e.g., Alzheimer's Association, caretaker alliances, AARP, NAACP health initiatives) function as trust validators at the top of the funnel, lowering the "Not for Me" barrier for organic direct-to-consumer users without requiring subsidized access.

Validation
Before Scale
30-Day Signal-First Learning Plan

Rather than recommending a full go-to-market shift, I proposed a lightweight 30-day pilot designed to test whether the trust-transfer entry point actually reduces the friction observed in research. The goal was signal, not scale, with a clear stop condition if the signal was not there.

Retention
Quick Win
Free Trial Re-Engagement

Web analytics identified a gap in follow-up with users who completed free trial signup but did not convert. I recommended a structured re-engagement email sequence targeting this segment, a low-cost, high-ROI intervention given the acquisition cost already spent to reach them.

Working directly with
the people who had to act on it

This engagement had a fundamentally different collaboration structure than large institutional projects. There was no PM layer, no approval chain, no separate product or design team sitting between research and decision-making. I worked directly with the founders, which meant findings landed immediately with the people who had both the authority and the motivation to act on them.

That directness is a double-edged thing. It accelerates impact but also means the researcher carries more responsibility for how findings are framed. There is no internal advocate to translate the work before it reaches leadership. I presented the research myself, fielded questions myself, and navigated the moment when findings challenged assumptions the founders had built their go-to-market strategy around. Doing that without losing the relationship, and in fact extending it into several months of ongoing strategic conversations, required as much communication skill as research skill.

I recruited and screened all research participants independently, which added coordination work but also gave me full control over sample composition and ensured the participant pool genuinely reflected the range of users the product needed to reach.

Three things I would
change in retrospect

1
Test the onboarding flow specifically, not just the product

The research surfaced that the entry point was the core problem, but the usability sessions were designed around the product as a whole rather than isolating the onboarding sequence as its own test object. A dedicated onboarding study, with tasks and scenarios built specifically around first-time activation, would have produced more granular and immediately actionable findings about exactly where trust broke down and why.

2
Push harder for a follow-up study after implementation

The founders implemented several recommendations, but the primary actions were strategic and channel-level rather than interface-level, which meant there was no redesigned onboarding flow to bring back into a usability lab. I would advocate earlier and more explicitly for prioritizing at least one interface change alongside the strategic ones, specifically to the onboarding sequence. That would have given a follow-up study something concrete to test, closing the research loop with measured evidence rather than leaving it open.

3
Include caregivers as a formal participant segment from the start

The research surfaced caregivers as a high-potential user segment during analysis, but they were not explicitly recruited as part of the original study design. Including them as a named segment upfront, with dedicated screener criteria and tailored interview questions, would have produced richer findings about that group and potentially accelerated the B2B2C channel recommendation that came out of the engagement.

The founders acted on it.

Following the research presentation, the founders continued the engagement for several months of strategic planning conversations, a signal that the findings landed with credibility and utility. Concrete organizational changes included:

Channel Expansion

Added B2B2C outreach to employers, unions, AARP, and NAACP alongside existing direct-to-consumer efforts

Retention Activation

Implemented free trial re-engagement email sequences for users who signed up but did not subscribe

Strategic Clarity

Founders gained a named framework (Trust-Transfer Model) to anchor go-to-market decisions going forward

Extended Engagement

Research engagement extended into ongoing strategic advisory conversations over several additional months

What this project
was really about

The most important methodological decision in this project was treating the people who refused to participate as data. A standard usability protocol would have screened them out or noted them as non-completers and moved on. Instead, I documented those conversations carefully, and they became the foundation of the Trust Gap finding, arguably the most strategically significant insight in the study.

The "Not for Me" Paradox framing came from recognizing that what looked like a product problem was actually a psychological positioning problem. Optimism Bias is well-documented in health behavior research; applying it to digital health platform adoption and naming it in a way founders could act on was the translation work that made the research useful rather than just interesting.

The broader lesson: research that stops at findings is only half the job. The value is in what happens next.