Case studies in research,
strategy, and product direction
Is Whether Airmen Can Do Their Jobs
Persona development, XLA framework design, and pre/post satisfaction research for the Air Force's enterprise IT transformation program. Research findings shaped contract requirements and provisioning strategy across nine pilot installations.
a Victim of Its Own Success
Enterprise-scale user research, persona validation, and product roadmap development for Advana, the Department of Defense's AI and analytics platform serving 100,000 users. Research informed a $15 billion recompete and a program-level development pause.
The Entry Point Was.
Usability research and strategic framework development for an AI-powered personal health platform struggling to convert awareness into active users. Research surfaced a trust gap at onboarding and produced the Trust-Transfer Model that reoriented the founders' go-to-market strategy.
Regulated Industries Will Actually Trust
Solo build of a 950+ participant proprietary research panel under HIPAA-adjacent and FDIC-regulated constraints, serving Fortune 500 pharma clients (Pfizer, Eli Lilly, Merck, J&J, and AbbVie) as well as M&T Bank. Includes compliance stack design, referral attribution logic, and the decision to refuse a shortcut that would have compromised the whole thing.
The Users Were Still Afraid.
UX audit and friction mapping for a consumer-facing tokenized asset platform navigating the intersection of TradFi compliance and DeFi mechanics. Research identified 17 distinct friction points concentrated in transaction state legibility, and produced an executive roadmap that reframed a UX problem and a regulatory communication failure as a single unified challenge.
The research background,
the strategy orientation
My path has not been linear. I started in marketing intelligence and child development research, spent years at the PhD level in quantitative methods and psychology, and moved through applied research and strategy roles. Most of that work had a common thread: figuring out how to measure things that resist easy measurement, and building systems that are honest about what they do not know.
I work primarily with AI-focused product teams, with particular depth in healthtech, govtech, and fintech. The questions I keep returning to are about decision quality: not whether an AI system produces impressive output, but whether it actually improves the decisions users are trying to make, for which users, under what conditions, and how you would know if it stopped.
I am skeptical of demos. I am interested in what happens six months after launch. I think trust is a measurement problem and that most AI products are solving the wrong version of it.
I hold a current Secret clearance and have led research and strategy engagements for Fortune 500 pharmaceutical companies, the Department of Defense, and early-stage startups. Earlier in my career I built a consumer and clinical trial recruitment research panel from zero to over 950 participants, an infrastructure project that shaped how I think about research operations and the conditions that make good research possible at scale. The scale of the work changes. The discipline does not.
Ground Truth on Substack:
rigorous AI product practice
I write about evaluation frameworks, failure mode analysis, governance design, and the organizational conditions that determine whether good product thinking actually ships. The focus is on what happens after the demo.