Integrating AI into UX Research

July 24, 2025 | Bob Konow

Depending on your perspective, the current state of AI is both thrilling and unsettling. The rapid pace of change drives innovation but also creates a sense of whiplash as updates and developments arrive in quick succession. It's clear AI is here to stay, even amid the noise.

In UX research, AI has quickly become an invaluable assistant. From early-stage planning and research synthesis to insight development, AI is now embedded in routine activities. But adopting AI tools responsibly, with full awareness of their limits, is essential.

Our team has experimented with a wide range of tools and approaches. We're sharing some of the best practices for AI in UX research we've uncovered along the way: practices that enhance workflows and organizational efficiency while preserving the human expertise that drives real change.

Introduction to AI Usage in Research

We currently leverage AI through three key approaches: using widely available models, working within AI-enabled environments, and developing custom solutions. Each offers unique value in innovation, efficiency, and practical experimentation.

Utilizing Mass Market Models and Tools

Our teams regularly engage with AI models and toolsets including ChatGPT, Google Gemini, and Microsoft Copilot, experimenting with chat, image generation, and other capabilities. Although immediate success isn't guaranteed, ongoing refinement can lead to real workflow improvements.

Building Experiences with AI-Enabled Tools

Numerous AI-enabled tools, such as Lovable, Bolt, v0, and Replit, provide dynamic platforms for quick prototyping and experimentation. These environments are excellent for vibe coding, rapid innovation, and exploring new concepts.

The range of research tools is growing fast, with new ones emerging almost every day. Keeping an eye out for new opportunities and tools in this evolving field is important to stay ahead.

Developing Custom AI Tools and Models

While mass market AI models and tools are trained on broad internet-scale data, they can sometimes be too generic. Internal custom tools can be finely tuned to your organization’s specific data, workflows, and business logic, enabling unique capabilities that off-the-shelf solutions cannot offer. Custom models understand industry-specific terminology and context, integrate with proprietary systems and datasets, and support specialized use cases that aren’t viable with public models.

Custom tools also allow full governance over data handling, model updates, and security protocols. This is critical for many industries with strict regulatory requirements or sensitive customer information that we support – including finance, healthcare, and government.

Keep in mind that custom AI tools and models carry a steep cost to build and maintain. Where budget and resources allow, though, they are a great option to pursue.

Responsible Experimentation with AI

No matter how your organization is leveraging AI, exercise caution when customers and their data are involved. Rather than simply trusting and verifying, we approach AI outputs with healthy skepticism and verify thoroughly before proceeding.

In research especially, we avoid uploading sensitive customer data or interview transcripts directly to AI services without clear consent from participants and customers. Enterprise-level and individual account settings can restrict data usage, and explicit notifications and agreements (such as Statements of Work or informed consent forms) remain essential.
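As a lightweight extra safeguard, transcripts can be scrubbed of obvious identifiers before any AI tool sees them. A minimal sketch in Python (the patterns and the `redact` helper are illustrative only; production redaction needs a dedicated PII-detection pass, not two regexes):

```python
import re

# Illustrative patterns only; real PII detection requires a dedicated tool.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

snippet = "Participant 7 (jane.doe@example.com, 555-867-5309) said checkout felt slow."
print(redact(snippet))
```

Even with scrubbing in place, consent and account-level data-usage restrictions remain the primary controls; redaction only reduces exposure if something slips through.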

Expanding into Deep Research Models

As UX research becomes more complex and data-rich, general-purpose models may not suffice for the demands of synthesis and discovery. This is where deep research models, AI platforms designed to handle multi-document synthesis and exploratory research, offer significant value.

Tools like Elicit, Perplexity, and other emerging domain-specific AI agents can:

  • Synthesize findings across large sets of interview notes or articles
  • Assist in literature reviews or competitor analysis
  • Identify recurring themes across qualitative sources
  • Support longitudinal research and early design explorations

These models are particularly useful during discovery and reporting phases, helping researchers scale insight generation across fragmented or lengthy inputs. However, their outputs still require contextual validation, and they should be treated as starting points, not final answers.

For UX researchers, these tools act as accelerators, not replacements, for expert-led insight development.

Challenges in Analysis and Synthesis

A primary area of caution in AI-driven research involves analysis and synthesis, due to inherent inconsistencies and accuracy risks. AI outputs can fluctuate significantly with updates, making manual analysis crucial to maintaining reliability and consistency.

Researchers can benefit from embedding AI models into their software stacks for preliminary analyses, using general models like ChatGPT or Gemini to supplement insights. Tools such as NotebookLM help reduce hallucinations by grounding responses in the data uploaded to the tool rather than a general model's broader training data. However, manual oversight ensures the reliability and depth of research findings.
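One way to think about "preliminary analysis" is as a cheap first pass that surfaces candidates for manual coding. A sketch of that idea in plain Python (the `THEMES` keyword map is hypothetical, something a researcher might seed before a proper coding pass):

```python
from collections import Counter

# Hypothetical keyword-to-theme map seeded by the researcher.
THEMES = {
    "slow": "performance",
    "confusing": "navigation",
    "lost": "navigation",
    "price": "cost",
}

def candidate_themes(notes: list[str]) -> Counter:
    """Count theme mentions across interview notes.

    This is a starting point for manual analysis, not a substitute for it:
    a human still decides what each mention actually means in context.
    """
    counts: Counter = Counter()
    for note in notes:
        words = note.lower().split()
        for keyword, theme in THEMES.items():
            if keyword in words:
                counts[theme] += 1
    return counts

notes = [
    "Checkout felt slow and confusing",
    "I got lost finding the price",
    "Search was slow",
]
print(candidate_themes(notes).most_common())
```

An LLM-based pass would replace the keyword map with model calls, but the workflow is the same: the tool proposes themes, and the researcher validates them against the raw notes.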

Preserving Critical Thinking and Expertise

Over-reliance on AI risks atrophying critical thinking skills, especially when novice researchers lean heavily on AI outputs without fully understanding the underlying methodologies. Maintaining expertise means engaging deeply with analysis and synthesis, which is essential for quality research.

AI can project misleading confidence, appearing precise and accurate. While AI-generated outputs may initially seem perfect, cracks in accuracy and relevance quickly emerge under expert scrutiny. Thus, maintaining access to expert guidance, either internal or external, is critical.

The Irreplaceable Power of Research Teams

While AI enhances efficiency and exploration, it's the research team that transforms data into meaning. The strength of a skilled team lies not just in collecting and summarizing information, but in knowing what to highlight, why it matters, and how to act on it in nuanced contexts.

Great researchers synthesize across touchpoints, align insights with business goals, and interpret patterns with cultural, emotional, and strategic depth. They can spot subtle contradictions, validate through conversation, and adapt insights dynamically as new information emerges; these are tasks that today's AI models can only mimic superficially.

Human researchers bring:

  • Curated judgment: Prioritizing what’s most relevant or urgent
  • Contextual fluency: Understanding organizational nuance, user emotion, and behavioral context
  • Collaborative interpretation: Building alignment across teams and stakeholders
  • Ethical sensibility: Evaluating implications and responsible use of insights

AI models are improving daily and are excellent aids, but research teams possess access and context AI can't replicate. We draw from proprietary sources like Forrester and Gartner reports, internal dashboards, and expert interviews that aren't available to the public or to open models. We also bring timeliness, responding to emerging trends, customer signals, and organizational shifts that AI tools, trained on older data, often miss. Our insights are not only curated but grounded in recency and relevance, something AI-generated output can struggle to deliver without strong human supervision.

In essence, the real opportunity is not choosing between AI or research but investing in teams that know how to work with AI effectively, while maintaining a clear-eyed view of its limits.

When Research Drives Real Change

The true value of UX research lies in uncovering what AI can't: rich context, behavioral nuance, and proprietary insights embedded in real-world environments.

In our recent work with a telecommunications customer, we conducted immersive field testing by visiting multiple retail locations, observing team processes, and identifying operational and experiential gaps. These direct observations, paired with business context, led to a series of targeted, actionable recommendations that AI models, lacking access to proprietary processes and nuances, could not have produced independently.

Similarly, in the dining and convenience space, we facilitated an in-depth experiential research initiative that focused on how customers navigated a high-touch, multi-step environment. By combining behavioral insights with strategic business knowledge (and in some cases, even competitive analysis), the research generated breakthrough opportunities that shaped both customer experience and operational workflows.

These examples highlight why human-led, field-based research remains indispensable. AI tools can support, analyze, and augment, but they can't substitute for the contextual, situational awareness gained through direct, intentional observation and human interpretation.

Preparing for the Future of AI in Research

To make AI useful in real-world research, teams need hands-on practice, mentorship, and a clear understanding of each tool’s strengths and limitations. Newer team members in particular should have space to experiment while also learning how to vet and refine AI-generated content. Balancing AI-driven efficiency with dedicated manual analysis and synthesis fosters both skill development and effective implementation.

AI should be the first step, not the last. It helps organize ideas, suggest directions, and fill early gaps. Then, the true value comes when human researchers validate, iterate, and connect those early threads to real business outcomes. By combining AI insights with field data, behavioral context, and team interpretation, organizations can create a robust and trustworthy research foundation.

UX research isn't just about what's said in an interview; it's about what it means in context. That's where AI can support, but not substitute for, the judgment of experienced teams. Everyone should be experimenting with AI, but nobody should rely on it alone.