Reimagining the Role of UX Researchers as Architects of Human-AI Collaboration

Understanding where human needs and technical feasibility align – and where they don't – is exactly why research is critical in the agentic era.
Key Takeaways
As researchers who have spent our careers studying how technology reshapes work, we now find ourselves subjects in our own study. AI tools are not only a part of our research toolkit – 90% of UX professionals use AI for analysis – they have fundamentally shifted what we study, how we work, and where we create value.
The question isn’t whether AI will change research. It’s how we’ll shape that change.
Here’s what we’ll cover:
The gap between human needs and AI hype
How the research landscape is transforming
New modes of human-AI collaboration
Three human strengths that will define research’s future
Embracing research’s unbounded future
The gap between human needs and AI hype
Stanford University’s SALT Lab recently mapped a fascinating tension: what people think AI can do versus what they actually want it to do. Studying 1,500 workers, they identified four distinct zones where human desire and AI capability intersect.
The results revealed a troubling misalignment. While 41% of investment flows to areas where workers have low desire for automation, there’s relative under-investment in places where people actually want help. This gap between technical feasibility and human needs is exactly where research becomes critical.
We help our organizations navigate these sociotechnical crosshairs and build paths for worker agency in technology development. But to do this effectively, we need to understand how our own field is changing.
How the research landscape is transforming
As the product ecosystem continues to pivot rapidly, so do the expectations of research. The demand has shifted from studying static interfaces, personas, and user journeys to designing for and evaluating adaptive human-AI systems that anticipate user needs in real time and support a growing set of jobs.
Meanwhile, the product lifecycle is compressing, as are the opportunities for research impact. Stakeholders need embedded, continuous insights rather than lengthy reports delivered weeks later. The demands and roles of designers and product managers are evolving. They’re increasingly responsible for customer insights and want to rely on always-on learning systems with real-time instrumentation and optimization.
New modes of human-AI collaboration
A lot of commentary about AI and the workforce is eager to proclaim clean shifts and wholesale job replacement. For research, as in so many other fields, the future of work is messier and more complex than any AI labor-replacement narrative suggests.
Rather than a binary of augmentation versus automation, I reframe the evolution of research work as a spectrum of human-AI collaboration modes that we cycle through. It’s about skill rebundling across those modes and the job categories emerging around them.

For defined work, we move fluidly between:
- Assistive collaboration: AI summarizing transcripts while we lead analysis
- Cooperative partnerships: drafting research plans together through iterative refinement
- Supervised automation: designing studies that AI executes under our oversight
For emergent work, we’re pioneering:
- Orchestrated sensemaking: architecting feedback systems that triangulate signals across data streams
- Co-intelligence: human-AI thinking partnerships that navigate ambiguity
- Agentic experimentation: AI agents proposing and testing optimizations under human governance
This evolution is already reshaping the job market. Research roles are merging with product experimentation and AI system learning. Insights director positions are emerging at the convergence of UX research, data science, and voice-of-customer functions. And some PM job descriptions now explicitly include research as a core responsibility.
These aren’t isolated examples — they’re signals of role rebundling and category emergence across the research function.
Three human strengths that will define research’s future
As we navigate AI’s “jagged frontier”, where some tasks are easily automated while seemingly similar ones aren’t, the most successful researchers will lean into distinctly human strengths in partnership with AI:
1. Boundary spanning
This means doing expansive work that traverses and upends traditional functional boundaries. Researchers are now building interactive demos, spinning up concept apps, and testing ideas faster than ever. But it’s also about connecting across organizational silos – pulling signals from customer support, telemetry, sales calls, and research studies to synthesize meaning while others stay in their lane.
2. Framing innovation
AI excels within a defined frame, but our greatest human strength is the ability to imagine and reframe possibilities. We can look at a retention problem and ask, “What if this is actually an onboarding issue?” That act of reframing a problem, or assigning new meaning to existing data, requires an understanding of context and culture – and the ability to imagine what doesn’t yet exist.
3. Organizational catalyst
We move from organizational influencer to catalyst, increasingly owning outcomes and doing the messy, often invisible orchestration that turns ideas into shipped reality. Converting insights into outcomes requires mobilizing attention, trust, and the right people amidst organizational messiness.
How might the research function evolve?
How should leaders think about positioning research in this future? Some argue we should move “up” into strategy as tactical work is increasingly automated. Others say embed deeper in product and own outcomes. My view: both, and more. Research itself may undergo a kind of mitosis, evolving into multiple distinct functions, and our identities will be reshaped by those new functions.
Here are three possibilities for how the research function might grow:
1. Learning architecture
We architect and leverage human-AI systems that integrate distributed, always-on data streams, conduct analysis at scale, and develop insights based on deeper human understanding of what’s meaningful. Beyond orchestration of feedback signals, we need to manage the organizational attention budget, deliver the right signal to the right person at the right moment, and weave data into narratives that travel.
2. Human-AI systems design and evaluation
We design and evaluate human-AI ecosystems that include multiple actors, feedback loops, and governance. We convert insights into product improvements and experiments while optimizing for the whole outcome, not a single touchpoint. This work requires creative pattern-making, ethical imagination, and a working grasp of how AI systems fail and learn. In this future, we may expand our skill sets and roles to support more hands-on prototyping and experimental design.
3. Embodied intelligence
As researchers (and ethnographers), we notice what matters in context because we deeply understand our participants and product space. We are our own research instruments. This allows us to co-produce insights with participants and absorb context that isn’t captured elsewhere. This intelligence fuels the framing of insights in real time to drive decision-making, and it sparks innovation in collaboration with others. We also facilitate this embodied experience of connecting with customers for our stakeholders.
Embracing research’s unbounded future
Multiple futures for research are unfolding simultaneously. The transformation may be challenging – even grief-inducing as professional boundaries shift. But we have an opportunity to actively reimagine what research becomes rather than simply react to change.

Watch my keynote on this topic at YouX. (Requires registration)
Because we’ve spent our careers understanding how technology reshapes lives and work, we’re uniquely positioned to be intentional about reshaping our own future. The emerging world is better when we contribute our voices as architects of thoughtful human-AI collaboration.