Will AI Define Us? Three Possible Futures for UX Research

“The best way to predict the future is to create it.” - Peter Drucker


Whether you're in design, research, product, or data science, Peter Drucker's insight couldn't be more relevant right now, as generative AI forces us to confront a critical question: Will we shape our future, or will AI define it for us? (Faluyi, 2025; McKinsey & Company, 2025)

It's easy to get caught up in the worst-case scenario these days, especially in the tech industry: mass layoffs, mind-blowing capabilities, AGI prophets and dissenters painting a professional apocalypse that makes your stomach churn with equal parts excitement, wonder, and dread. But is that productive or healthy? (Exploding Topics, 2025)

Rather than getting consumed by the whirlpool of existential dread, I applied the best/worst/base-case foresight planning framework to imagine and define what potential futures of the research industry might look like in light of the generative AI revolution.

Curious? Feeling the same? Read on.


A brief note: While I've applied this framework to the research industry, the best/worst/base-case foresight framework is a powerful tool for grounding speculation and identifying proactive steps, and you can use it to envision your own field's futures.


The Core Topic: The impact of generative AI on the UX & Product Research community, profession, and craft.

Key areas of impact I consider:

  • The Craft: How research activities, methods, and deliverables are performed.

  • The Profession: The roles, skills, career paths, and perceived value of UX/Product Researchers.

  • The Community: How researchers interact, share knowledge, and evolve as a collective.

  • Tools and Technology: The nature and capabilities of the AI tools used.

  • Ethics and Trust: How ethical considerations and trust in research findings are handled.

  • Organizational Integration: How research teams function within companies and their influence.


Scenario 1

The Status Quo: Ambling Along

In this scenario, we continue with our current state: generative AI is adopted unevenly and without a clear overarching strategy within the Product & Design research field. (Bipartisan Policy Center, 2024; GPTZero, 2025)

  • The Craft: Generative AI tools are used primarily for basic automation (e.g., summarizing transcripts, generating reports, basic brainstorming, and survey question generation) (Looppanel, 2025; Maze, 2025a; UX-Republic, 2025). Their capabilities remain limited in understanding nuance and context. Researchers use AI as a supplementary tool, but it doesn't fundamentally change core methodologies, such as in-depth interviews or complex usability testing. With only ~30% adoption in our field, there is a mixture of excitement and frustration with the tools' inconsistencies (Maze, 2025b).

  • The Profession: The role of the researcher remains essentially unchanged, although some junior tasks may be reduced. There is a growing expectation for researchers to utilize AI tools, but formal training is lacking. The value of research is still recognized, but there's confusion about where AI adds real value versus just speed. Some researchers feel their expertise is being devalued by the perception that AI can do their job.

  • The Community: Discussions about AI in research are common but often fragmented. There's no widespread agreement on best practices or ethical guidelines for using generative AI. Knowledge sharing about practical AI applications and integrations is often ad-hoc within companies or online groups and forums.

  • Tools and Technology: Tools, tools everywhere. Niche AI tools emerge–some more effective than others. Interoperability between tools is poor. "Black box" AI is common, making it challenging to comprehend how insights are generated and compounding concerns about bias and trust. (Council of Europe, n.d.; IBM, n.d.; SAP, n.d.)

  • Ethics and Trust: Ethical considerations are often an afterthought. Data privacy concerns persist, and there are instances of misuse or accidental sharing of sensitive data through AI tools (Maiti et al., 2025; ResearchGate, 2024). Trust in AI-generated insights is variable and often depends on individual researcher scrutiny. The inherent bias of models is discussed, but not proactively addressed (Council of Europe, n.d.; USC Annenberg, n.d.).

  • Organizational Integration: AI tools are often adopted departmentally or by individuals without a cohesive organizational strategy, resulting in inconsistencies in research quality and difficulties in integrating AI-driven insights across teams.

In essence, generative AI is bolted onto existing processes without a clear strategy or understanding of its implications, causing some bumps and inefficiencies but not fundamentally reshaping the landscape (Tapptitude, n.d.). The potential of AI is recognized but not fully realized due to a lack of strategic adoption and understanding.


Scenario 2

The Worst Case: The Devaluation and Disconnect of Research

If you're like me, this was a bit easier to imagine. The destruction, devaluation, and disconnection of everything. In this scenario, the unchecked and poorly understood implementation of generative AI leads to significant negative consequences for the product and design research field.

  • The Craft: Over-reliance on generative AI for analysis and insight generation leads to a decline in the depth and quality of research. Nuance is lost, and critical human insights are missed as AI prioritizes easily identifiable patterns. "Shallow" or even misleading findings become more common, leading to poor product decisions. Traditional research skills atrophy (more on this in a later post). MDPI (2025a) notes that "frequent usage of AI has a negative correlation with the cultivation of critical thinking skills," and a Microsoft study found that AI can render human cognition "atrophied and unprepared" (404 Media, 2025).

  • The Profession: The profession is significantly devalued with the commoditization of “research insights” (Finquest, 2024). Companies view AI as a cost-effective replacement for skilled researchers, potentially leading to job losses or a shift towards more superficial "prompt engineering" roles. (Exploding Topics, 2025; Faluyi, 2025) The unique value of human empathy, critical thinking, and strategic insight is underappreciated, resulting in a lack of deep user understanding, a failure to address nuanced needs, and a disconnect between products and customer experience. Career progression is unclear and at risk.

  • The Community: The community is fragmented and demoralized. The profession faces significant contraction as companies prioritize AI over human expertise. Experienced researchers leave the field, and new talent is not attracted to a dying craft. Knowledge sharing declines as researchers compete in a shrinking job market or become isolated within their AI-driven workflows.

  • Tools and Technology: Generative AI tools become commoditized and focused on speed over accuracy or depth. Ethical safeguards are minimal or easily bypassed. AI bias is rampant and unchecked, perpetuating harmful stereotypes in research outputs. (Council of Europe, n.d.; Maiti et al., 2025; ResearchGate, 2024) Additionally, the efficacy of AI models diminishes due to a stagnation of new, human-driven research insights needed for continuous learning and refinement.

  • Ethics and Trust: Ethical disasters occur due to the misuse of AI, including privacy breaches, biased research outcomes that harm at-risk and marginalized groups, and a complete erosion of trust in research findings among stakeholders and participants. Research's reputation is severely damaged, and the function is increasingly seen as an impediment to innovation.

  • Organizational Integration: Research teams are downsized or disbanded, with AI being managed by other departments (e.g., design or product management) that lack a deep understanding of qualitative research methodologies and ethics. Research becomes a technical process divorced from user needs. Meanwhile, companies drift further from their customers, and "human-centered" design loses its focus.

In essence, generative AI is seen as a silver bullet, leading to a race to the bottom in research quality, ethical standards, and the value of human expertise. The profession is significantly harmed, and the ability to truly understand people is compromised (Adriana Lacy Consulting, 2025).


Scenario 3

The Best Case: The Empowered Research Engine

After that doomsday wallowing, let’s imagine a better world with AI. In this scenario, generative AI is strategically and ethically integrated, transforming Product & Design Research into a more powerful, efficient, and impactful function (Tapptitude, n.d.). One where our connection to our customers is strengthened, not weakened.

  • The Craft: Generative AI becomes a powerful co-pilot, automating tedious tasks and augmenting human capabilities (Greenbook.org, 2024; SmythOS, 2025). Researchers leverage AI for advanced data analysis, identifying subtle patterns across massively disparate datasets, simulating human behaviors in certain contexts, and generating creative stimuli for testing (Julius AI, 2025). This frees up researchers to focus on complex research design, in-depth qualitative exploration, strategic synthesis of AI and human insights, identifying and addressing blind spots, and communicating compelling narratives.

  • The Profession: The profession continues to evolve and thrive. Researchers upskill to become experts in human-AI collaboration, critical evaluators of AI outputs, and strategic partners who can leverage AI to answer more complex and impactful questions. (Looppanel, 2025; Merlien Institute, 2025) New roles emerge, focusing on AI-driven research strategies and the ethical implementation of AI in research. The value of the researcher is elevated as they become orchestrators of advanced research processes. Research becomes a strategic advisor and governance partner to teams looking to implement AI into the design and product lifecycle.

  • The Community: The community is vibrant and collaborative. Researchers actively share best practices, develop ethical guidelines for the use of AI, and collectively push the boundaries of what is possible with AI-augmented research. Online communities and conferences focus on advanced AI techniques and their ethical implications.

  • Tools and Technology: Sophisticated and interoperable AI platforms emerge, offering transparency into their workings where possible ("glass box" AI). Ethical AI development is prioritized, with built-in safeguards against bias and strong data privacy measures (IBM, n.d.; SAP, n.d.; UST Journals, 2024). Tools are designed to enhance, not replace, human judgment.

  • Ethics and Trust: Ethical considerations are deeply embedded in the research process. Researchers are trained in AI ethics, and frameworks are in place to ensure responsible data handling, mitigate bias, and promote transparent reporting of AI use (IBM, n.d.; SAP, n.d.; UST Journals, 2024). Stakeholders and participants trust the research process due to clear ethical guidelines and demonstrable rigor.

  • Organizational Integration: Research is recognized as a strategic function powered by human-AI collaboration (SmythOS, 2025). Research teams work closely with AI development teams to build and refine tools. Insights generated through AI-augmented methods are highly valued and directly inform product and business strategy (Looppanel, 2025; Maze, 2025b).

In essence, generative AI is a catalyst for positive transformation, elevating the craft, empowering researchers, and increasing the strategic impact of UX and Product Research within organizations (Adriana Lacy Consulting, 2025; Tapptitude, n.d.).

---

This exercise is not about predicting the future, but about preparing for potential futures and identifying actions that can help steer towards a more desirable outcome.

💭 🤔 What do you think?

  • Which of these futures do you think is most likely?

  • Which future do we want to create?

  • What are your biggest fears and hopes related to AI in your work and field?

  • What are the first steps we need to take to move away from the worst case and towards the best case?

Share your thoughts below! Let's collectively shape the future of research in the age of AI. 👇🏻 👇🏻

