The Air Force had been measuring IT success
with the wrong instruments
For decades, the Department of the Air Force managed IT the same way most large organizations do: locally, redundantly, and expensively. Over 230 separate organizations could make changes on the network. More than 80 distinct authority-to-operate boundaries fragmented the enterprise. Every base bought its own hardware, managed its own helpdesk, and defined "working IT" for itself. The result was massive duplication, inconsistent security posture, and a user experience that varied wildly from installation to installation.
EITaaS, Enterprise IT as a Service, was the Air Force's answer: transition from a base-by-base model to a centralized commercial enterprise service, and free Airmen and Guardians from first-level IT support work so they could focus on the missions they were trained for. The Risk Reduction Effort was a pilot across nine installations, designed to validate whether the model worked before the Air Force committed to scaling it across 800,000 personnel under a $5.7 billion contract.
In 2022, Michael Kanaan, then-Director of Operations for the USAF-MIT Artificial Intelligence Accelerator, posted an open letter that went viral across the defense community. It became the public articulation of what everyone who worked in DoD IT already knew privately.
"Dear DoD, You tell us to Accelerate change or lose, then fix our computers. Before buying another plane, tank, or ship, fix our computers. Yesterday, I spent an hour waiting just to log on. Fix our computers… You lost literally hundreds of thousands of employee hours last year because computers don't work… I Googled how much the computer under my desk costs in the real world. It was $108 dollars… Making computers so useless that nobody can hack them is not a strategy (yet they hack them anyway)… Sincerely and on behalf of, Every DoD employee."
The letter received over 1,700 reactions and forced a public response from DoD CIOs. The specific failures it named (hour-long login times, Excel freezing and forcing restarts, Tanium and McAfee consuming 40% of machine resources fighting each other, 10 forced restarts in a single day) were not edge cases. They were the baseline. This was the environment the EITaaS research was designed to measure, and that the program was designed to fix.
Isobar Public Sector was not part of the delivery team. We served as Trusted Advisors and Subject Matter Experts to DAF program leadership, operating under the Architecture and Integration Directorate and advising the Cyber Security Team on RRE initiatives. Our role was to bridge commercial best practices and military requirements, with a specific focus on human-centered design and modernization strategy.
I led the research and UX strategy work. My primary deliverables were the User Persona Framework and the Experience Level Agreement (XLA) model, which together changed how the Air Force defined, measured, and procured IT service outcomes. Those frameworks went directly into the Wave 1 and Wave 2 solicitation requirements.
Research that had to serve
two masters simultaneously
The research mandate was dual. I needed to understand the current state IT experience across the nine RRE pilot bases, and I needed to define what the future state should look like in terms specific enough to write into a contract. The second part is where the work became unusual: not just documenting what users experienced, but translating those experiences into procurement-grade performance standards.
Five personas that changed
what the Air Force bought
The core insight driving the persona work was that treating 800,000 Airmen and Guardians as a single user type was the root cause of both the cost problem and the experience problem. A desk-bound administrative specialist and a flightline maintenance technician have fundamentally different IT needs, operate in different environments, and define "working IT" differently. A procurement model that ignored those differences would keep buying the wrong equipment for the wrong people.
I developed and championed five strategic personas through the research, each grounded in interview and survey data from the RRE pilot bases. Each persona directly shaped technology decisions, tiered service design, and the XLA framework that went into the Wave 1 requirements.
By mapping specific mission-productivity outcomes to each persona, the Air Force could move from paying for IT staff headcount per base to paying for the attainment of experience outcomes. That is what made outcome-based contracting possible at this scale, and it required the persona research to exist first.
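The mechanics of that shift can be sketched in a few lines. This is a hypothetical illustration of outcome-based contracting, not the program's actual payment model: the persona names follow the case study, but the targets, attainment math, and fee bands are invented for the sketch.

```python
# Hypothetical sketch: the vendor is paid against XLA attainment per persona,
# not per-base staff headcount. All thresholds and fee bands are illustrative.

def attainment_rate(results: dict[str, bool]) -> float:
    """Fraction of persona-level XLA targets met in a measurement period."""
    return sum(results.values()) / len(results)

period = {
    "Garrison User: satisfaction target met": True,
    "Power User: app responsiveness target met": True,
    "Flightline User: startup-time target met": False,
    "Remote/Teleworker: first-time resolution met": True,
}

rate = attainment_rate(period)  # 0.75
# Fee keyed to attainment: full fee only when most persona outcomes are met.
fee_multiplier = 1.0 if rate >= 0.9 else 0.9 if rate >= 0.75 else 0.75
print(rate, fee_multiplier)  # 0.75 0.9
```

The point of the sketch is the dependency: none of these attainment checks can be written down until the personas exist and each one's definition of mission success has been researched.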
Measuring what the mission needs,
not just what the server does
Traditional IT contracts measure performance in terms of system availability: is the server running, how long did it take to close a ticket, what percentage of the network was up this quarter. These metrics are real and necessary. They are also insufficient for understanding whether Airmen can actually do their jobs.
I developed the Experience Level Agreement framework as a parallel measurement layer, grounded in the persona research and designed to capture what operational IT performance looked like from the user's perspective. Each XLA was tied to a specific persona's definition of mission success, and together they shifted the contract evaluation model from server-centric to human-centric.
SLAs measure whether systems meet technical thresholds. XLAs measure whether users can accomplish what they need to accomplish. The distinction matters because a system can meet every SLA and still produce a terrible user experience. For the Air Force, a terrible user experience is not just a satisfaction problem. It is a mission readiness problem.
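A minimal sketch makes the distinction concrete. The metric names and thresholds below are illustrative assumptions, not the program's actual XLA definitions; the scenario mirrors the failure mode described above, where every SLA passes while users cannot work.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    server_uptime_pct: float      # what the SLA sees
    median_login_seconds: float   # what the user experiences
    tasks_completed_pct: float    # the mission outcome

def sla_met(m: Measurement) -> bool:
    # SLA: a purely technical threshold on the system.
    return m.server_uptime_pct >= 99.9

def xla_met(m: Measurement, max_login_s: float, min_tasks_pct: float) -> bool:
    # XLA: thresholds defined by a persona's definition of mission success.
    return (m.median_login_seconds <= max_login_s
            and m.tasks_completed_pct >= min_tasks_pct)

# The server is "up" while users wait an hour to log in:
m = Measurement(server_uptime_pct=99.95,
                median_login_seconds=3600,
                tasks_completed_pct=61.0)
print(sla_met(m))                                     # True
print(xla_met(m, max_login_s=120, min_tasks_pct=90))  # False
```

The same measurement passes the contract's technical layer and fails its human layer, which is exactly the gap the XLA framework was built to close.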
The persona framework enabled the Air Force to stop buying identical equipment for every Airman regardless of need. Garrison Users could be provisioned with reliable, cost-appropriate devices. Power Users and Flightline Users could receive the hardware their roles actually required. Instead of a standard configuration across the enterprise, hardware could be matched to mission profile, producing significant cost efficiencies without degrading capability where capability mattered most.
The Tiered Enterprise Service Desk model I helped establish routed Garrison User requests through AI-enabled self-service channels for routine tasks like password resets and software requests, measuring success through First Contact Resolution rates. Complex issues from Power Users and Flightline Users escalated to specialized technicians with dedicated mean-time-to-resolution (MTTR) commitments. The result was a service model that was simultaneously more efficient and more capable across different user types.
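The triage logic described above can be sketched as a simple routing rule. The persona names come from the case study, but the routing table itself is a hypothetical illustration, not the service desk's actual implementation.

```python
# Hypothetical triage sketch for the tiered service desk model.
# Routine Garrison requests go to AI self-service (measured by First Contact
# Resolution); Power and Flightline issues escalate to specialists with
# dedicated MTTR commitments. Categories and tiers are illustrative.

ROUTINE_TASKS = {"password_reset", "software_request"}

def route_ticket(persona: str, issue: str) -> str:
    if persona == "Garrison User" and issue in ROUTINE_TASKS:
        return "ai_self_service"        # success metric: FCR rate
    if persona in {"Power User", "Flightline User"}:
        return "specialist_technician"  # success metric: MTTR commitment
    return "tier1_agent"                # default human queue

print(route_ticket("Garrison User", "password_reset"))    # ai_self_service
print(route_ticket("Flightline User", "device_failure"))  # specialist_technician
```

The efficiency claim falls out of the structure: cheap, high-volume requests never consume specialist time, while high-stakes personas never get stuck in a generic queue.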
The persona framework and XLA model were not just internal deliverables. They informed the requirements that went into the Wave 1 and Wave 2 solicitations. The multi-vendor frontend strategy Isobar helped shape, specifically designed to prevent vendor lock-in by separating the delivery model into best-of-breed commercial components, drew directly on the persona-segmented service design work I led.
A Challenge Coin is a discretionary personal commendation given directly by a senior official, not a program milestone or routine recognition. It is awarded when someone's contribution is considered personally worthy of acknowledgment by someone with the authority and no obligation to do so.
The Chief Experience Officer of the USAF awarded this coin specifically for the work of leading the transition to a user-centered service model and establishing experience-level standards across the Department of the Air Force's digital modernization program. That this recognition came from the person responsible for experience across the entire Air Force speaks to where the work landed.
It is given when the work mattered to someone who understood what it cost to do it right.
Earning influence without
contractual authority
Isobar's role on EITaaS was structurally unusual: we were not part of the primary delivery contract but served as trusted advisor and subject matter expert to Department of the Air Force program leadership, operating under the Architecture and Integration Directorate. That positioning shaped everything about how collaboration worked. Influence had to be earned through the quality of the work rather than backed by contractual authority.
Within that structure, I worked closely with DAF program leadership, the Cyber Security team on RRE initiatives, and program stakeholders across nine pilot installations. The research touched people in fundamentally different working environments: administrative staff, flightline maintenance crews, intelligence personnel, remote workers. Coordinating access and participation across that range of environments required sustained effort with the Air Force survey office, which controlled survey deployment and had its own requirements and timelines. Building that working relationship took real investment, and it was the prerequisite for being able to do the research at all.
The most important collaborative relationship was with DAF program leadership, for whom I served as the bridge between commercial UX best practices and military operational requirements. Translating concepts like experience-level agreements and persona-driven service design for an audience steeped in procurement and technical delivery language, well enough that those concepts ended up written into contract solicitation requirements, meant genuinely understanding their constraints and incentives rather than simply presenting findings and expecting them to land.
From pilot results to
a $5.7 billion commitment
The RRE produced dramatic, measurable improvements at the nine pilot installations. Senior Air Force leadership publicly cited those user experience gains as the primary basis for confidence in proceeding with Wave 1. The research, personas, and XLA framework I developed were the infrastructure through which that confidence was built, measured, and communicated to decision-makers.
The pre/post survey data told the story most clearly at Gunter Annex, which was furthest along in the EITaaS transition at the time of measurement. Overall NIPR satisfaction at Gunter reached a mean of 73.6 out of 100, compared to means of 44.4, 46.4, and 43.2 at the other three measured bases still in earlier transition stages. Speed satisfaction at Gunter was 72.9 versus mid-40s elsewhere. Reliability satisfaction was 68.2 at Gunter versus low-to-mid 40s at the others. The gap is not noise. It is what the EITaaS model looks like when it is working. The survey also captured the baseline problem: at the pre-transition bases, between 40% and 69% of users reported slow network speeds affecting their ability to work daily or weekly. The 59% faster data transfer result is more meaningful when you understand that baseline.
84% faster startup times and 59% faster data transfer at pilot bases, measured against the baseline framework established during the RRE
3x faster network speeds versus legacy AFNET; 92% reduction in security event workload for IT personnel across pilot installations
Persona framework and XLA model went directly into Wave 1 and Wave 2 solicitation requirements, shaping how a $5.7B contract was written and evaluated
Challenge Coin awarded by the Chief Experience Officer of the USAF for leading the transition to a user-centered service model
The Wave 1 program, built on the foundation of the RRE research and the persona and XLA frameworks I developed, has continued to produce measurable outcomes since full deployment.
DAF average service desk answer speed of 19 seconds, enabled by AI-powered triage routing tickets to the correct technician instantly
10% increase in customer satisfaction since early 2025, tracked through the continuous feedback loops and user touchpoint surveys built into the XLA measurement model
Significant reduction in ticket reopen rates, confirming that the Remote/Teleworker XLA threshold of first-time-right resolution is being met
The UX strategy work fed directly into the 21st Century Storefront, a centralized hardware and software hub built on ServiceNow EITSM 3.0 with AI virtual agents handling natural language troubleshooting queries
Note on the contract pause: A bid protest in early 2023 temporarily paused the engagement. This was a procurement challenge unrelated to the research or program performance, resolved when the GAO denied the protest in May 2023.
Three things I would
change in retrospect
The transition from SLA to XLA thinking was one of the most consequential things to come out of this engagement, and it emerged from the research rather than being handed down as a requirement. I would document that intellectual journey more formally: the specific interview and survey findings that made the SLA model's inadequacy visible, and the reasoning chain that led to each XLA metric. That documentation would have made the framework more defensible and more transferable to future program phases.
The XLAs we developed were grounded in what Airmen said they needed. But user needs evolve, and a framework built on a point-in-time research base needs a mechanism for updating. I would push for a standing user feedback loop built into the program from the beginning: not just measuring whether XLA thresholds were being met, but whether the thresholds themselves were still the right ones to measure.
The advisory rather than delivery structure of Isobar's role meant that influence depended entirely on relationship quality. We built a strong working relationship with DAF program leadership, but it took time that could have been shortened with more deliberate investment upfront. The same applies to the Air Force survey office relationship, which controlled our ability to deploy research instruments at scale. Both were worth more early effort than they received.
What this project
was really about
The thing I keep coming back to on this project is the XLA work, specifically the moment of arguing that "is the server up" was not a sufficient definition of IT success for a fighting force. That argument sounds obvious in hindsight. It was not obvious to everyone in the room.
SLAs exist because they are measurable, defensible, and easy to write into contracts. XLAs require you to define what human productivity looks like for five different types of people in five different operational environments, and then figure out how to measure it at scale. That is a harder problem. It also turned out to be the problem that mattered, because the Air Force was not struggling to tell whether its servers were running. It was struggling to tell whether its people could do their jobs.
The persona work is where research methodology and acquisition strategy came together. Most user research produces findings that inform design. This work produced findings that informed a procurement. The personas were not just archetypes on a slide: they were the structure that made outcome-based contracting possible. A Flightline User's success metric is meaningless unless you know what a Flightline User actually does and why startup time matters to them at 0400 on a flight deck. You cannot write that into a contract without first doing the research to understand it.
The Challenge Coin sits on my desk. It is a reminder that research done carefully enough, and communicated clearly enough, changes things beyond the slide deck it ends up in.