Research Case Study  |  DoD / CDAO  |  2023-2024

When a Platform Becomes
a Victim of Its Own Success

Enterprise-scale user research and platform strategy for Advana, the Department of Defense's AI and analytics platform serving 100,000+ users across every branch of the military and civilian defense workforce.

Platform Advana / DoD CDAO
Engagement April 2023 to April 2024
My Role Lead Researcher and Platform Strategist
Organization Isobar Public Sector (subcontractor)

The Pentagon's most important
data platform had a user problem

Advana is the Department of Defense's enterprise-wide AI and analytics platform, originally developed to help the comptroller's office manage financial data across thousands of disparate systems. By 2023, it had grown into something far more expansive: a platform relied on daily by decision-makers from the Secretary of Defense down, integrating data from over 400 business systems and serving a user base that had grown to roughly 100,000 people across every branch of the military and the civilian defense workforce.

That growth was both the platform's greatest achievement and its most pressing challenge. CDAO leadership described Advana publicly as "a victim of its own success." The platform had scaled faster than its architecture, its user experience, and its understanding of who was actually using it and why.

The Presenting Problem

Engagement was lower than expected given the platform's enterprise mandate. A major contract decision was approaching. And the CDAO needed a clear, research-grounded picture of its user base before committing to the next phase of investment and development.

Booz Allen Hamilton had already developed an initial set of personas and archetypes. Our team was brought in to validate them, pressure-test them against real users, and surface what the existing framework might be missing.

100k+
Active users
400+
Integrated systems
All branches
Scope of user base
$15B
Subsequent recompete

Research at the scale
the problem demanded

The heterogeneity of Advana's user base was itself a methodological challenge. A platform serving financial analysts, logistics officers, intelligence personnel, human resources staff, and senior command leadership across every branch of the military cannot be understood through a small sample. I designed the research to match the scale and complexity of the user population, building a multi-method approach that ran simultaneously across survey, interview, and information architecture channels.

1,382
Survey respondents
8+
Interview participants per user segment
20
Deliverables produced
36%
Users reporting difficulty finding content
01
Segmentation Survey
Early-stage survey to ensure reliable representation of key user groups before the main research. 1,382 respondents across military, civilian, and contractor designations. Outputs informed persona groupings and defined the sampling frame for interviews.
02
SUS Survey
System Usability Scale deployed to assess the current UI. The SUS is a validated 10-item scale producing a standardized usability score (0-100). Outputs informed the user satisfaction report and UI assessment report.
03
CX Evaluation Survey
Multi-purpose instrument collecting CSAT data, persona validation data, pain points, task data, desired features, and journey stage inputs in a single deployment. Outputs informed nine separate deliverables, from updated personas to the product roadmap to future state navigation design.
04
In-Depth Interviews
Eight participants per user archetype segment, using a structured interview guide that left room for exploration. Interviews covered persona data, task flows, pain points, unmet needs, and desired capabilities across commands, branches, and roles.
05
Usability Studies
Two rounds: directed usability study on the current website, and a second study on the future state UI. Task-oriented, evaluated against heuristic criteria. Outputs fed directly into UI assessment and future state design recommendations.
06
Card Sort and Tree Jack
Card sort to design the future state information architecture: participants organized topics into categories that made sense to them. Tree jack to test and validate the proposed navigation structure. Together these produced the future state sitemap, navigation, and content model.
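The SUS instrument used in method 02 follows a standard, published scoring formula. As a minimal sketch (responses assumed on the usual 1-5 Likert scale; this is an illustration of the standard scoring, not the engagement's analysis code):

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    Likert responses (1 = strongly disagree, 5 = strongly agree).

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions are multiplied by 2.5 to scale to 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError(f"response {i} out of 1-5 range: {r}")
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A respondent who strongly agrees with every positive item and
# strongly disagrees with every negative item scores 100:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0

# Neutral answers (all 3s) land at the scale midpoint:
print(sus_score([3] * 10))  # 50.0
```

Because the score is standardized, it can be benchmarked against published norms rather than interpreted in isolation, which is what makes it useful for a point-in-time UI assessment.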

Twenty deliverables across
six workstreams

The research did not end at findings. Each method fed a specific downstream deliverable, and the deliverables spanned the full pipeline from research synthesis through information architecture and future state design.

Persona Validation
Persona analysis report • Persona summary document • Workflow analysis document • User satisfaction report • Persona data mapping tables
User Scenarios & Journeys
User journey documents • User scenarios
User Requirements
User requirements document • Product roadmap
Information Architecture
Sitemap • Navigation • Site audit • UI assessment report • Key metrics • Content model
Future State Design
UI design recommendation • Clickable prototype • Design system recommendation
Analysis of Alternatives
CMS analysis of alternatives • Website rebuild analysis report • Revised Advana product roadmap

Research that fed directly
into product direction

Two of the twenty deliverables were product roadmaps, not research reports. The first was developed in collaboration with the engagement's experience strategist and informed by early research findings and initial CDAO stakeholder input. It included strategic objectives, high-level themes, prioritized features, phased infrastructure improvements, milestones, and timeframes.

The second, revised roadmap came later in the engagement, after the full research picture had emerged. It incorporated both what the research had surfaced and what CDAO leadership had validated or pushed back on in interim briefings. The revision was not cosmetic. The six simultaneous failure modes the research identified changed what sequencing made sense, which infrastructure work had to come before which experience improvements, and which user segments needed to be prioritized first to unlock broader adoption.

Contributing to a product roadmap at this scale, for a platform used by 100,000 people across the entire Department of Defense, required holding the research findings and the organizational constraints in the same frame at the same time. That is a different kind of work than producing a findings report and handing it over.

Six simultaneous failure modes
on a single platform

What emerged from the research was not a single root cause but a constellation of interconnected problems, each compounding the others. The platform had scaled its user base without scaling the infrastructure, the user experience, or the organizational understanding needed to support that growth.

Different branches had fundamentally different needs

The user base was not a single population with minor variation. Financial analysts, logistics personnel, intelligence staff, and command leadership used the platform in structurally different ways, worked with different data types, and had different definitions of what "useful" looked like. A one-size-fits-all platform architecture could not serve all of them well simultaneously.

Infrastructure could not support the use cases being built on top of it

A major influx of new users had strained the underlying architecture, creating performance and reliability issues that surfaced repeatedly in interviews. Users were building workflows and analytical processes on a foundation that could not support them at scale. This was not a perception problem. It was a structural one.

Users lacked the technical skills the platform assumed

The platform had been designed with a level of data literacy that was not uniformly present across its user base. Many users who needed the platform most were least equipped to use it independently, creating an adoption gap that training alone could not close without interface-level changes.

The platform was being used differently than intended

Workflow mapping revealed significant divergence between designed and actual use patterns. Users had developed workarounds, partial workflows, and informal practices that reflected the platform's gaps rather than its intended capabilities. These unintended patterns were often invisible to platform leadership.

Leadership and end users had misaligned expectations

What leadership believed users needed and what users reported actually needing diverged in meaningful ways. This misalignment was shaping roadmap and investment decisions without either side fully recognizing the gap. Surfacing it required getting both perspectives into the same analytical frame.

Adoption was low despite significant investment and enterprise mandate

Even with a formal DoD mandate designating Advana as the department's authoritative analytics platform, voluntary active engagement remained lower than expected. Survey data showed that a substantial share of users visited only rarely or a few times a month, and most had been granted access less than a year earlier despite the platform's far longer existence, reflecting how rapidly the user base had been onboarded without corresponding support infrastructure. Mandate alone cannot substitute for a platform users find genuinely useful and navigable.

Finding content was a measurable problem

Ease of finding data was directly measured in the CX survey. 36% of respondents reported difficulty finding what they needed (combining extremely difficult, very difficult, and fairly difficult responses). Only 20% found content very or extremely easy to locate. In open-ended responses, users described not knowing where to go, searching across multiple community spaces without success, and relying on colleagues to navigate rather than the platform itself. This is an information architecture problem, not a user education problem, and it demanded an IA solution: the sitemap, navigation redesign, and content model that became three of the twenty deliverables.

Core Diagnostic

The platform had not failed. It had succeeded faster than the conditions for sustainable success had been built. The research made visible what scale had obscured: a user base that had outgrown the architecture, the interface, and the organizational model designed to support it.

Navigating multiple teams
with competing relationships to the research

This project operated at the intersection of several stakeholders with different relationships to the work. Within Isobar, I worked alongside a project manager, UX designer, experience strategist, developer, and business analyst. My role was to ensure the research direction held through each phase and that findings translated into actionable design inputs rather than abstract insights.

Externally, I had ongoing communication with the Booz Allen Hamilton team whose existing persona framework we were charged with validating. That relationship required a particular kind of diplomatic rigor: being honest about where the prior framework needed revision while maintaining productive working relationships with the team that had built it. Inheriting another firm's work and stress-testing it without creating friction is a different skill than starting from scratch.

The most consistent challenge was translating research methodology and statistical reasoning to stakeholders without research backgrounds who nonetheless had authority over project decisions. Making the case for why sample size mattered, why certain findings were more reliable than others, and why some recommendations were more urgent than the timeline suggested was constant work. It is also what has made me a better communicator of research to non-research audiences than most researchers with purely academic backgrounds.

Primary research stakeholders were the CDAO's Chief Experience Architect and Chief Product Officer, to whom I presented findings and strategic recommendations directly at multiple points across the engagement.

Structural solutions to
structural problems

The findings pointed to three distinct but interconnected interventions. Each addressed a different layer of the adoption problem: the infrastructure underneath, the experience on top, and the organizational model in between.

Infrastructure First
Pause development and strengthen the underlying architecture

Before adding capabilities or expanding the user base further, the platform needed to address the infrastructure strain that was degrading the experience for existing users. Continued development on an unstable foundation would compound the problems already present. I delivered this recommendation directly to the CDAO's Chief Experience Architect and Chief Product Officer through both written deliverables and a formal presentation briefing. The DoD acted on it, pausing Advana development in June 2024 specifically to make infrastructure improvements.

Experience Layer
Redesign the user interface to meet users where they are

The interface needed to accommodate a far wider range of data literacy than it currently assumed. This meant reducing the cognitive load for less technical users without limiting the capabilities of more sophisticated ones, and making it substantially clearer where to go and what to do for users across every role and branch.

Platform Architecture
Individualize the platform through specialized components for distinct user segments

A single universal interface could not serve the full diversity of Advana's user base. The research supported a move toward specialized components tailored to the distinct data types, workflows, and analytical needs of different user groups across branches and functions. This recommendation aligned with what the CDAO subsequently pursued through the Open DAGIR framework: a modular, multi-vendor architecture designed to serve diverse needs rather than a single monolithic system.

Research that shaped
a billion-dollar decision

The engagement ran through April 2024, after which the research and strategic framework continued informing CDAO planning. In the months that followed, the DoD took several significant actions that tracked directly with the findings and recommendations.

Development Pause

DoD paused Advana development in June 2024 to make infrastructure improvements, addressing the structural finding at the core of the research

Modular Architecture

The subsequent $15B recompete plan centered on a modular, multi-vendor architecture, reflecting the segmentation and specialization strategy surfaced in the research

Strategic Reframe

CDAO leadership publicly reframed Advana's challenges in language consistent with the research findings: "a victim of its own success" that needed to evolve its architecture and acquisition model to reach the next level of scale

$15B Recompete

In September 2024, the CDAO announced a 10-year, multi-vendor contract valued at up to $15 billion, described as the largest data and AI government acquisition in DoD history

Note on attribution: The development pause and recompete involved many stakeholders and decision-makers across the DoD. The research contributed a user-grounded evidence base and strategic framework that informed those decisions. It is not claimed as the sole cause.

Three things I would
change in retrospect

Decision Rights
Push earlier for a clear decision rights framework

The research produced findings that pointed in directions that challenged some existing assumptions: about the persona framework, about the platform's readiness to scale, about the infrastructure beneath it. Getting explicit alignment upfront on who had authority to act on different categories of findings would have shortened the distance between insight and action. Without it, some recommendations spent longer in deliberation than the urgency warranted.

Continuous Listening
Advocate earlier for ongoing measurement rather than a point-in-time study

A platform with 100,000 users and a major contract decision approaching deserved ongoing measurement, not a single research engagement. I would push earlier and more forcefully for standing up a lightweight continuous listening infrastructure: regular pulse surveys and a feedback mechanism embedded in the platform itself, so that findings could be tracked longitudinally rather than treated as a snapshot.

Information Architecture
Resource the card sort earlier in the process

The card sort was ultimately identified as critical for designing the future state information architecture. Getting it resourced and scheduled earlier would have given the IA deliverables a stronger empirical foundation rather than relying more heavily on interview synthesis for navigation and content model decisions.

What this project
was really about

The most important decision I made on this project was insisting on a sample large enough to be meaningful across a genuinely heterogeneous user population. A smaller study would have found the most visible problems and missed the structural ones. The enterprise survey was not methodological excess; it was the only way to ensure that findings about one segment were not being incorrectly applied to another.
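The sample-size argument can be made concrete with a standard margin-of-error calculation (a sketch using the normal approximation at 95% confidence; the segment size of 150 below is illustrative, not a figure from the engagement):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from n responses,
    using the normal approximation. p=0.5 is the worst case, i.e. the
    widest interval any percentage estimate can have at this n."""
    return z * math.sqrt(p * (1 - p) / n)

# Full survey: 1,382 respondents -> roughly +/- 2.6 points on any
# whole-population percentage (e.g. the 36% findability figure).
print(round(margin_of_error(1382) * 100, 1))  # 2.6

# A hypothetical single segment of 150 respondents -> roughly +/- 8
# points: segment-level claims need segment-level sample sizes.
print(round(margin_of_error(150) * 100, 1))  # 8.0
```

The asymmetry is the point: an estimate that is tight for the whole population can still be unreliable for any one segment, which is why a heterogeneous user base forces a large overall sample.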

The persona validation piece also matters more than it might appear in hindsight. I was not starting from scratch. I was handed a framework that had already shaped thinking about the platform's users, and asked to determine how much of it was accurate. That requires a different kind of intellectual discipline than building fresh. You have to hold the existing framework loosely enough to see what it gets wrong while taking seriously what it gets right.

The six failure modes I surfaced were not a surprise to everyone in the room. Some of them were known at some level. Prior to the research, the tendency had been to treat each problem as a discrete issue to be addressed on its own timeline. What the research provided was a unified, evidence-grounded account that made that approach harder to sustain. Seeing six failure modes operating simultaneously on the same platform changed what solutions looked like. That reframing mattered as much as any individual finding.

When a platform is the designated enterprise standard for the entire Department of Defense, the stakes of getting the user model wrong are not academic. This was research where the cost of confirmation bias was measured in billions of dollars and the working conditions of hundreds of thousands of people.