The Pentagon's most important data platform had a user problem
Advana is the Department of Defense's enterprise-wide AI and analytics platform, originally developed to help the comptroller's office manage financial data across thousands of disparate systems. By 2023, it had grown into something far more expansive: a platform relied on daily by decision-makers from the Secretary of Defense down, integrating data from over 400 business systems and serving roughly 100,000 people across every branch of the military and the civilian defense workforce.
That growth was both the platform's greatest achievement and its most pressing challenge. Leadership at the DoD's Chief Digital and Artificial Intelligence Office (CDAO) publicly described Advana as "a victim of its own success." The platform had scaled faster than its architecture, its user experience, and the organization's understanding of who was actually using it and why.
Engagement was lower than expected given the platform's enterprise mandate. A major contract decision was approaching. And the CDAO needed a clear, research-grounded picture of its user base before committing to the next phase of investment and development.
Booz Allen Hamilton had already developed an initial set of personas and archetypes. Our team was brought in to validate them, pressure-test them against real users, and surface what the existing framework might be missing.
Research at the scale the problem demanded
The heterogeneity of Advana's user base was itself a methodological challenge. A platform serving financial analysts, logistics officers, intelligence personnel, human resources staff, and senior command leadership across every branch of the military cannot be understood through a small sample. I designed the research to match the scale and complexity of the user population, building a multi-method approach that ran simultaneously across survey, interview, and information architecture channels.
Twenty deliverables across six workstreams
The research did not end at findings. Each method fed a specific downstream deliverable, and the deliverables spanned the full pipeline from research synthesis through information architecture and future state design.
Research that fed directly into product direction
Two of the twenty deliverables were product roadmaps, not research reports. The first was developed in collaboration with the engagement's experience strategist and informed by early research findings and initial CDAO stakeholder input. It included strategic objectives, high-level themes, prioritized features, phased infrastructure improvements, milestones, and timeframes.
The second, revised roadmap came later in the engagement, after the full research picture had emerged. It incorporated both what the research had surfaced and what CDAO leadership had validated or pushed back on in interim briefings. The revision was not cosmetic. The six simultaneous failure modes the research identified changed what sequencing made sense, which infrastructure work had to come before which experience improvements, and which user segments needed to be prioritized first to unlock broader adoption.
Contributing to a product roadmap at this scale, for a platform used by 100,000 people across the entire Department of Defense, required holding the research findings and the organizational constraints in the same frame at the same time. That is a different kind of work than producing a findings report and handing it over.
Six simultaneous failure modes on a single platform
What emerged from the research was not a single root cause but a constellation of interconnected problems, each compounding the others. The platform had scaled its user base without scaling the infrastructure, the user experience, or the organizational understanding needed to support that growth.
The user base was not a single population with minor variation. Financial analysts, logistics personnel, intelligence staff, and command leadership used the platform in structurally different ways, worked with different data types, and had different definitions of what "useful" looked like. A one-size-fits-all platform architecture could not serve all of them well simultaneously.
A major influx of new users had strained the underlying architecture, creating performance and reliability issues that surfaced repeatedly in interviews. Users were building workflows and analytical processes on a foundation that could not support them at scale. This was not a perception problem. It was a structural one.
The platform had been designed around an assumed level of data literacy that was not uniformly present across its user base. Many of the users who needed the platform most were the least equipped to use it independently, creating an adoption gap that training alone could not close without interface-level changes.
Workflow mapping revealed significant divergence between designed and actual use patterns. Users had developed workarounds, partial workflows, and informal practices that reflected the platform's gaps rather than its intended capabilities. These unintended patterns were often invisible to platform leadership.
What leadership believed users needed and what users reported actually needing diverged in meaningful ways. This misalignment was shaping roadmap and investment decisions without either side fully recognizing the gap. Surfacing it required getting both perspectives into the same analytical frame.
Even with a formal DoD mandate designating Advana as the department's authoritative analytics platform, voluntary active engagement remained lower than expected. The survey data showed that a substantial share of users visited only rarely or a few times a month, and most had had access for less than a year despite the platform having existed far longer, reflecting how rapidly new users had been onboarded without corresponding support infrastructure. A mandate alone cannot substitute for a platform users find genuinely useful and navigable.
Ease of finding data was measured directly in the CX survey: 36% of respondents reported difficulty finding what they needed (combining the extremely difficult, very difficult, and fairly difficult responses), while only 20% found content very or extremely easy to locate. In open-ended responses, users described not knowing where to go, searching across multiple community spaces without success, and relying on colleagues rather than the platform's own navigation to find what they needed. This is an information architecture problem, not a user education problem, and it demanded an IA solution: the sitemap, navigation redesign, and content model that became three of the twenty deliverables.
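To make the arithmetic concrete, here is a minimal sketch of the bucketing behind figures like these. The response counts are hypothetical, not the actual CX survey data; only the collapse of a Likert-style scale into difficulty and ease buckets mirrors the analysis described above.

```python
from collections import Counter

# Hypothetical Likert responses to an ease-of-finding question;
# the real survey data is not reproduced here.
responses = (
    ["extremely difficult"] * 40
    + ["very difficult"] * 100
    + ["fairly difficult"] * 220
    + ["neither"] * 440
    + ["very easy"] * 150
    + ["extremely easy"] * 50
)

counts = Counter(responses)
total = sum(counts.values())

# Collapse the six-point scale into the two reported buckets.
difficult = {"extremely difficult", "very difficult", "fairly difficult"}
easy = {"very easy", "extremely easy"}

pct_difficult = 100 * sum(counts[r] for r in difficult) / total
pct_easy = 100 * sum(counts[r] for r in easy) / total

print(f"Reported difficulty finding content: {pct_difficult:.0f}%")  # 36%
print(f"Found content very/extremely easy: {pct_easy:.0f}%")         # 20%
```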
The platform had not failed. It had succeeded faster than the conditions for sustainable success had been built. The research made visible what scale had obscured: a user base that had outgrown the architecture, the interface, and the organizational model designed to support it.
Navigating multiple teams with competing relationships to the research
This project sat at the intersection of several stakeholder groups, each with a different relationship to the work. Within Isobar, I worked alongside a project manager, UX designer, experience strategist, developer, and business analyst. My role was to ensure the research direction held through each phase and that findings translated into actionable design inputs rather than abstract insights.
Externally, I was in ongoing communication with the Booz Allen Hamilton team whose existing persona framework we were charged with validating. That relationship required a particular kind of diplomatic rigor: being honest about where the prior framework needed revision while maintaining a productive working relationship with the team that had built it. Inheriting work from another firm and stress-testing it without creating friction is a different skill from starting from scratch.
The most consistent challenge was translating research methodology and statistical reasoning to stakeholders without research backgrounds who nonetheless had authority over project decisions. Making the case for why sample size mattered, why certain findings were more reliable than others, and why some recommendations were more urgent than the timeline suggested was constant work. It is also what has made me a better communicator of research to non-research audiences than most researchers with purely academic backgrounds.
Primary research stakeholders were the CDAO's Chief Experience Architect and Chief Product Officer, to whom I presented findings and strategic recommendations directly at multiple points across the engagement.
Structural solutions to structural problems
The findings pointed to three distinct but interconnected interventions. Each addressed a different layer of the adoption problem: the infrastructure underneath, the experience on top, and the organizational model in between.
Before adding capabilities or expanding the user base further, the platform needed to address the infrastructure strain that was degrading the experience for existing users. Continued development on an unstable foundation would compound the problems already present. I delivered this recommendation directly to the CDAO's Chief Experience Architect and Chief Product Officer through both written deliverables and a formal briefing. The DoD acted on it, pausing Advana development in June 2024 specifically to make infrastructure improvements.
The interface needed to accommodate a far wider range of data literacy than it currently assumed. This meant reducing the cognitive load for less technical users without limiting the capabilities of more sophisticated ones, and making it substantially clearer where to go and what to do for users across every role and branch.
A single universal interface could not serve the full diversity of Advana's user base. The research supported a move toward specialized components tailored to the distinct data types, workflows, and analytical needs of different user groups across branches and functions. This recommendation aligned with what the CDAO subsequently pursued through the Open DAGIR framework: a modular, multi-vendor architecture designed to serve diverse needs rather than a single monolithic system.
Research that shaped a billion-dollar decision
The engagement ran through April 2024, after which the research and strategic framework continued informing CDAO planning. In the months that followed, the DoD took several significant actions that tracked directly with the findings and recommendations:
DoD paused Advana development in June 2024 to make infrastructure improvements, addressing the structural finding at the core of the research
The subsequent $15B recompete plan centered on a modular, multi-vendor architecture, reflecting the segmentation and specialization strategy surfaced in the research
CDAO leadership publicly reframed Advana's challenges in language consistent with the research findings: "a victim of its own success" that needed to evolve its architecture and acquisition model to reach the next level of scale
In September 2024, the CDAO announced a 10-year, multi-vendor contract valued at up to $15 billion, described as the largest data and AI government acquisition in DoD history
Note on attribution: The development pause and recompete involved many stakeholders and decision-makers across the DoD. The research contributed a user-grounded evidence base and strategic framework that informed those decisions. It is not claimed as the sole cause.
Three things I would change in retrospect
The research produced findings that challenged existing assumptions about the persona framework, the platform's readiness to scale, and the infrastructure beneath it. Getting explicit alignment upfront on who had authority to act on different categories of findings would have shortened the distance between insight and action. Without it, some recommendations spent longer in deliberation than their urgency warranted.
A platform with 100,000 users and a major contract decision approaching deserved ongoing measurement, not a single research engagement. I would push earlier and more forcefully to stand up a lightweight continuous listening infrastructure: regular pulse surveys and a feedback mechanism embedded in the platform itself, so that findings could be tracked longitudinally rather than treated as a snapshot.
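As a sketch of what that longitudinal view could look like (the records and scoring here are hypothetical; no such dataset existed on the engagement), aggregating pulse responses by month turns one-off sentiment into a trend:

```python
import statistics
from collections import defaultdict

# Hypothetical pulse survey records: (month, satisfaction score on a
# 1-5 scale). Illustrative only, not data from the engagement.
pulses = [
    ("2024-01", 3.1), ("2024-01", 2.8), ("2024-02", 3.0),
    ("2024-02", 3.4), ("2024-03", 3.6), ("2024-03", 3.5),
]

by_month = defaultdict(list)
for month, score in pulses:
    by_month[month].append(score)

# The longitudinal view: mean score per month, so a shift in sentiment
# shows up as a trend instead of being lost in a one-time snapshot.
for month in sorted(by_month):
    print(month, round(statistics.mean(by_month[month]), 2))
```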
The card sort was ultimately identified as critical for designing the future-state information architecture. Getting it resourced and scheduled earlier would have given the IA deliverables a stronger empirical foundation, instead of leaving navigation and content model decisions to lean so heavily on interview synthesis.
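For readers unfamiliar with why a card sort carries empirical weight for IA: each participant's groupings can be combined into an item-item co-occurrence matrix, which clusters into candidate navigation categories. A minimal sketch with invented items and sort data (illustrative only, not the Advana study):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical open card sort results: each participant grouped content
# items into piles. Item names are invented for illustration.
items = ["budget reports", "spend plans", "shipment status",
         "inventory levels", "readiness dashboards"]
sorts = [
    [{"budget reports", "spend plans"},
     {"shipment status", "inventory levels"}, {"readiness dashboards"}],
    [{"budget reports", "spend plans", "readiness dashboards"},
     {"shipment status", "inventory levels"}],
    [{"budget reports", "spend plans"},
     {"shipment status", "inventory levels", "readiness dashboards"}],
]

# Count how often each pair of items landed in the same pile.
n = len(items)
co = np.zeros((n, n))
for piles in sorts:
    for pile in piles:
        for i, a in enumerate(items):
            for j, b in enumerate(items):
                if a in pile and b in pile:
                    co[i, j] += 1

# Convert co-occurrence share into a distance matrix and cluster it.
dist = 1 - co / len(sorts)
np.fill_diagonal(dist, 0)
clusters = fcluster(linkage(squareform(dist), method="average"),
                    t=0.5, criterion="distance")

for label in sorted(set(clusters)):
    print(f"Candidate category {label}:",
          [it for it, c in zip(items, clusters) if c == label])
```

The clusters that emerge are grounded in how many participants grouped items together, which is what makes the resulting navigation categories defensible in a way interview synthesis alone is not.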
What this project was really about
The most important decision I made on this project was insisting on a sample large enough to be meaningful across a genuinely heterogeneous user population. A smaller study would have found the most visible problems and missed the structural ones. The enterprise survey was not methodological excess; it was the only way to ensure that findings about one segment were not being incorrectly applied to another.
The persona validation piece also matters more than it might appear in hindsight. I was not starting from scratch. I was handed a framework that had already shaped thinking about the platform's users, and asked to determine how much of it was accurate. That requires a different kind of intellectual discipline than building fresh. You have to hold the existing framework loosely enough to see what it gets wrong while taking seriously what it gets right.
The six failure modes I surfaced were not a surprise to everyone in the room. Some of them were known at some level. Prior to the research, the tendency had been to treat each problem as a discrete issue to be addressed on its own timeline. What the research provided was a unified, evidence-grounded account that made that approach harder to sustain. Seeing six failure modes operating simultaneously on the same platform changed what solutions looked like. That reframing mattered as much as any individual finding.
When a platform is the designated enterprise standard for the entire Department of Defense, the stakes of getting the user model wrong are not academic. This was research where the cost of confirmation bias was measured in billions of dollars and the working conditions of hundreds of thousands of people.