F500 pharma clients. FDIC-regulated banks. And no panel to run research on.
Crowley Webb is a full-service advertising agency with a meaningful book of business in regulated industries. The agency also owns Praxis, a clinical trial recruitment firm, and I technically worked for both organizations. That dual role is how the pharmaceutical side of the panel work took shape: Praxis brought the clinical and pharma client relationships, and I built the research infrastructure those relationships required. The client base included Pfizer, Eli Lilly, Merck, Johnson & Johnson, and AbbVie on the pharmaceutical side, and M&T Bank in financial services. These are environments where research data carries real compliance weight and where participant provenance actually matters to clients.
The agency had no proprietary research panel. Every study that required a screened participant pool meant going to a third-party vendor, paying vendor rates on top of the research itself, accepting whatever panel quality they offered, and working within their timelines and constraints. That was the baseline when I started. The question of whether to build something better was still open.
What "trust" means when your clients answer to regulators
Building a research panel for a general consumer brand is one thing. Building one for clients whose industries are governed by HIPAA, FDA oversight, and FDIC regulation is something else. The participants need to be who they say they are. The data handling needs to hold up to scrutiny. The privacy architecture can't be an afterthought you patch later.
Third-party panels solve for scale and speed, and on the surface they look like the obvious move. But they come with two problems that mattered here: cost and credibility. Every study that required a screened panel meant paying a vendor premium on top of the research itself, which adds up fast across a roster of regulated-industry clients. More fundamentally, third-party panels don't solve for the kind of institutional credibility that lets a pharma client actually trust the data going to a medical affairs team. If the panel itself couldn't meet that bar, everything downstream was compromised: the research, the recommendations, and the client relationship.
That wasn't a research quality problem. That was a strategic infrastructure problem, and it needed to be treated like one.
A panel that can't prove its data lineage to a regulated-industry client isn't a research tool. It's a liability dressed up as a shortcut.
The shortcut was available. The consequences weren't worth it.
When I proposed building a proprietary panel from organic recruitment, my manager pushed back. Purchasing email lists was the obvious shortcut: faster lead volume, lower upfront effort, results on a timeline the agency could sell to clients. It was a reasonable business instinct if you weren't thinking about what happens the moment a client asks how these participants were sourced.
I refused. Not because I wanted to make the project harder, but because I understood what a purchased list would actually cost us. A research panel is only as credible as its provenance. The moment you seed it with contacts who never opted in, you've introduced a contamination you can't remediate. With regulated-industry clients, that contamination doesn't stay theoretical. It surfaces during audits, in data governance reviews, and in the room when your client's legal team starts asking questions.
Refusing that shortcut meant I had to build the entire recruitment mechanism myself. There was no playbook for this, no existing infrastructure to inherit, and no timeline relief. What it did give me was a panel that could actually withstand scrutiny.
End-to-end recruitment infrastructure, designed from scratch
Without the option to buy contact lists, every participant had to be recruited through legitimate, auditable channels. That meant I needed multiple mechanisms that could generate volume while preserving consent and data integrity. I built three interlocking systems.
I designed and built a post-completion opt-in experience directly inside Qualtrics. Participants who finished a study were presented with a clear, affirmative invitation to join the panel. The flow was sequenced so that consent was genuinely informed and separable from study participation. Joining the panel was never a condition of completing the survey, which mattered for both ethical and compliance reasons.
I designed a referral system from scratch, including the attribution logic. Each existing panel member received a unique referral code. Referrals were tracked through a custom logic layer I built, allowing me to attribute new participants to their source, verify chain of consent, and eventually incentivize referrers through Giftbit, the reward fulfillment platform I integrated into the workflow. The attribution model mattered not just for panel growth accounting but for proving to clients that participant sourcing was traceable and clean.
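An added illustration, not the author's own Qualtrics or Giftbit logic: one way an auditor could check the referral attribution and chain of consent described above, walking each member back to a post-survey opt-in. The record fields, referral code format, dates and sample panel are assumptions constructed for this example.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class Member:                      # hypothetical record layout
    member_id: str
    referral_code: str             # code this member hands out
    consent_on: Optional[date]     # affirmative panel opt-in, None if no record
    referred_by: Optional[str]     # code used at sign-up, None = post-survey opt-in

def trace(member_id, members):
    """Walk the referral chain to its root; return (chain, problem_or_None)."""
    by_id = {m.member_id: m for m in members}
    by_code = {m.referral_code: m for m in members}
    chain, seen, current = [], set(), by_id[member_id]
    while True:
        if current.member_id in seen:
            return chain, "referral cycle"
        seen.add(current.member_id)
        chain.append(current.member_id)
        if current.consent_on is None:
            return chain, f"no consent record for {current.member_id}"
        if current.referred_by is None:
            return chain, None     # reached an organic, consented root
        referrer = by_code.get(current.referred_by)
        if referrer is None:
            return chain, f"unknown referral code {current.referred_by}"
        if referrer.consent_on is None or referrer.consent_on > current.consent_on:
            return chain + [referrer.member_id], f"{referrer.member_id} had not opted in before referring"
        current = referrer

def audit(members):
    return {m.member_id: trace(m.member_id, members) for m in members}

def clean(members):
    return [mid for mid, (_, problem) in audit(members).items() if problem is None]

# Constructed sample panel, not real participant data
PANEL = [
    Member("p1", "RC-A", date(2023, 1, 10), None),
    Member("p2", "RC-B", date(2023, 2, 1), "RC-A"),
    Member("p3", "RC-C", date(2023, 3, 5), "RC-B"),
    Member("p4", "RC-D", date(2023, 3, 9), "RC-ZZ"),   # code that was never issued
    Member("p5", "RC-E", None, None),                  # imported with no opt-in
    Member("p6", "RC-F", date(2023, 4, 1), "RC-E"),    # referred by p5
]
```

A single unconsented record such as p5 also taints everyone it referred, which is why the check follows the whole chain rather than validating each member in isolation.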
I built the compliance architecture in direct collaboration with the legal team, treating it as a design constraint that shaped every data handling decision rather than a post-hoc review. The resulting framework covered international and domestic requirements, including GDPR, the EU-U.S. Data Privacy Framework, APEC CBPR, and HIPAA-adjacent practices appropriate for research involving health-related data. I also worked to achieve TRUSTe certification and align the panel's data governance with ISO 27001:2022 standards. This compliance layer was not decorative; it was the thing that made the panel useful to the clients who most needed it.
Purpose-built infrastructure, not repurposed tools
The technology choices were driven by the compliance and audit requirements, not the other way around. Each platform was selected for what it could handle responsibly at the data layer, not just for feature convenience.
The certifications are the evidence, not the goal
The compliance stack wasn't assembled to check boxes. It was built to answer the question that regulated-industry clients are always implicitly asking: can we trust this data if someone scrutinizes it? Each framework addressed a specific risk surface, and together they covered the international and domestic terrain of the client base.
The panel's first real test was the caliber of the client
The panel wasn't built in anticipation of a theoretical future use case. It was built against the actual client roster: organizations whose compliance standards set the floor, not the ceiling. That the infrastructure held up across all of them was the validation.
These aren't clients that give you a pass on data governance questions. Pfizer, Lilly, Merck, J&J, and AbbVie operate in an environment where a research practice that couldn't withstand regulatory scrutiny would simply not get used. M&T Bank brought FDIC-regulated financial services constraints into the mix. The fact that the panel served all of them on their own terms, without diluting the compliance posture to fit a lower standard, was the proof of concept.
This wasn't a recruitment tool. It was enablement infrastructure.
The panel could have been scoped as a research operations efficiency play: faster studies, lower vendor spend, more control over timelines. That framing would have been accurate but incomplete. What actually got built was something more foundational: the infrastructure that let a mid-size agency credibly serve regulated-industry clients at a level typically reserved for shops with dedicated compliance teams.
Building GDPR, DPF, APEC CBPR, and HIPAA-adjacent practices into research infrastructure from the design phase rather than retrofitting them later is exactly the posture HealthTech companies need from researchers who will handle sensitive user and patient-adjacent data.
Financial services clients and their regulators care about data lineage. Building a panel where every participant's sourcing, consent, and data handling is traceable and documented is a fundamentally different capability than running studies through a third-party panel you can't fully audit.
Designing the recruitment mechanism, attribution logic, compliance stack, tech integration, and incentive fulfillment as a single coherent system, rather than stitching together vendor pieces, is exactly the operational scope that research ops roles inside product organizations actually require.
Three things I'd approach differently in retrospect
I documented decisions and built processes as I went, which meant the institutional knowledge existed. But most of it was organized around my own workflow rather than structured for someone else to operate without my context. Formalizing the methodology into a transferable playbook earlier would have made the panel easier to scale, easier to hand off, and easier for the agency to extend independently without me as the single point of continuity.
The recruitment side of the panel got a lot of architectural attention because that was where the immediate pressure was. The participant lifecycle after onboarding, including re-engagement cadences for inactive members, off-boarding logic for participants who aged out of target segments, and a structured refresh strategy, was something I addressed as it came up rather than designing upfront. Building that protocol into the initial architecture would have reduced attrition and kept the panel composition cleaner over time.
The conflict with my manager over purchased email lists surfaced after the panel build was already in motion. Getting clear written alignment on the parameters and constraints of the recruitment approach at project kickoff, rather than mid-build, would have saved time, reduced friction, and put the rationale for organic recruitment on record from the start instead of having to defend it under pressure.
What this case study demonstrates
The decision that mattered most in this project was the one I made before I built anything.
Refusing to buy email lists was a research integrity decision, but it was also a strategic one. It forced me to invent infrastructure that turned out to be considerably more valuable than a faster path to a compromised outcome would have been. The panel that emerged from that refusal was something the agency couldn't have bought: a first-party, fully auditable, compliance-ready research asset that made a new category of client trust possible.
For organizations in HealthTech and FinTech where the research function touches sensitive data, the ability to build research infrastructure that can actually be trusted, not merely used, is the differentiator. That's what this project was about.