Using AI for research

How UX professionals are both confident in and skeptical of LLMs.

[Image: a white robot-like creature stands in front of a blackboard with data visualizations, charts, and research notes. Generated with Microsoft’s Copilot.]

Researched and written with Irina Wagner, PhD.

The buzz over the promise of enhancing productivity with LLMs has touched nearly every white-collar industry, and user experience research is no exception. From brainstorming recruitment email copy to full-on reporting on research data and even simulating what users will say, AI-powered chatbots have entered our workspaces, and some perceive them as a threat to replace us. However, even if AI is capable of automating some of our workflows and saving us time, it is still unclear whether UX professionals are ready to outsource their research processes to a machine.

In this small study, we investigated how receptive UX practitioners who do not engage in research as part of their daily practice are to outsourcing research processes to an AI-based chatbot. Examining the correlation between UX practitioners’ expertise and their confidence in outsourcing UXR processes to LLMs, we hypothesized that non-research-focused professionals who are less familiar with research might be more receptive to AI assistance. Simultaneously, we theorized that UXR processes seen as important would be less likely to be outsourced to AI, to ensure their accuracy. This study aimed to answer two key questions:

  1. How confident are non-research UX practitioners in outsourcing UXR processes to LLMs?
  2. What, if any, UXR processes are most likely to be outsourced to LLMs?

By delving into these questions, we hoped to gain insights into how LLM integration might influence the future of UX research.

AI for UX survey

We conducted a short survey as part of a bigger project exploring behaviors associated with using AI-powered chatbots in user experience research. The survey was disseminated on LinkedIn, targeting UX professionals who do not primarily engage in research activities. Sixteen participants completed the survey, representing various subfields within UX, including design, content strategy, and management. The cohort also included professionals from adjacent disciplines, such as brand strategy, front-end development, and customer experience. Most participants rated themselves moderately to extremely familiar with user research (n=12). Note that while self-assessment of a skill set has limitations, the Dunning–Kruger effect suggests that because most participants work within UX, they are less likely to overestimate their familiarity.

The survey was designed to gauge participants’ levels of expertise, their satisfaction with three popular AI-powered chatbots available at the time, their assessment of the importance of various research processes, their likelihood of using AI chatbots for different UXR tasks, and their overall sentiment towards these tools.

Given that this survey was intended as an introductory interaction with research, we deliberately limited the number of questions to avoid overwhelming respondents. Nonetheless, the insights gathered provide a valuable foundation for understanding current trends and attitudes, which we are eager to share with the broader UX community as we delve deeper into this investigation.

Task prioritization

Our hypothesis that UX professionals are more likely to use LLMs for less important research processes held true. Tasks such as initial methodology brainstorming and screener survey creation were frequently cited as ideal for outsourcing to LLMs. While essential, these processes are often viewed as more straightforward and repetitive, and as requiring input from additional team members, making them suitable candidates for automation.

On the other hand, critical tasks like quantitative data analysis, test participant definition, creation of artifacts, and data synthesis were less likely to be entrusted to LLMs. We interpret this finding as practitioners’ reluctance to hand over responsibility for human empathy, contextual understanding, and interpretive skills in these areas to a machine.

Confidence gap

One of the key insights from our study is the relationship between a practitioner’s expertise in UX research and their confidence in using LLMs for UX research. We found that UX practitioners who were more experienced with UX research exhibited lower confidence in outsourcing critical UXR processes to LLMs. Based on free-text responses, this skepticism appears to stem from a deep understanding of the nuances and complexities involved in qualitative research, which they believe an AI might not fully grasp.

Conversely, less experienced practitioners showed higher confidence levels, as demonstrated by the broader range of UX tasks they would be willing to hand off to an LLM. This high level of confidence may be influenced by a lack of in-depth knowledge about the intricacies of UX research. Less experienced respondents viewed LLMs as a valuable tool that could bridge gaps in their skill set and streamline their workflow.

There is a surprising dynamic between the level of UXR expertise, confidence, and a practitioner’s willingness to outsource certain processes. Even though more experienced UXers are less confident in utilizing LLMs in their research, they are still ready to outsource nearly every research process to the machine. Meanwhile, less experienced practitioners are more optimistic but are interested in employing LLMs only for the most time-consuming processes, such as literature review, creation of discussion guides, surveys, analysis, and synthesis. This dynamic underscores the importance of balancing AI integration with human expertise, whereby human expertise can verify and validate the unrefined output of the machine. It also suggests that as LLM performance improves, confidence in these tools and their integration into our business practices may expand: for example, if they can be shown to handle complex tasks with the required depth and sensitivity.
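For readers who want to probe this expertise–confidence relationship in their own data, the sketch below shows one way it could be examined. It is not our analysis: the numbers are hypothetical placeholders, and Spearman’s rank correlation is simply a reasonable choice for ordinal Likert responses.

```python
# A minimal sketch, not the study's actual analysis: hypothetical Likert
# responses and Spearman's rank correlation (suitable for ordinal data).
from scipy.stats import spearmanr

# Self-rated familiarity with UX research (1 = not at all, 5 = extremely)
familiarity = [5, 5, 4, 4, 3, 3, 2, 1]
# Confidence in outsourcing UXR processes to an LLM (1 = low, 5 = high)
confidence = [2, 3, 2, 3, 4, 4, 5, 5]

rho, p_value = spearmanr(familiarity, confidence)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3f}")
# A negative rho would match the pattern described above: more research
# experience tends to go with less confidence in outsourcing to LLMs.
```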

Enthusiasm and satisfaction

At the time of the research, three major LLM chatbots were available to consumers free of charge: ChatGPT (3.5), Bard, and Bing. To understand user preferences, we asked participants about their satisfaction with each of these tools. Across the three, participants tended to be most satisfied with their past use of ChatGPT, but did not award it a high satisfaction rating overall, with an average score of 3.33 on a 5-point Likert scale. Respondents were even less satisfied with the other two (Bard averaged 2.46/5 and Bing 1.4/5). Looking at the average across all three LLMs, we recognize that it was buoyed by the higher ChatGPT ratings.
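To make the arithmetic behind that observation concrete, here is a quick back-of-the-envelope check. It is only a sketch: it uses a simple unweighted mean of the three reported per-tool averages, whereas the survey’s overall figure may have been computed differently.

```python
# Reported average satisfaction scores on a 5-point Likert scale.
scores = {"ChatGPT (3.5)": 3.33, "Bard": 2.46, "Bing": 1.4}

# Unweighted mean across the three tools (an assumption; the survey's
# overall average may have been weighted by number of responses).
overall = sum(scores.values()) / len(scores)
print(f"Across all three: {overall:.2f}")            # ~2.40

# Dropping ChatGPT shows how much it buoys the overall figure.
without_chatgpt = (scores["Bard"] + scores["Bing"]) / 2
print(f"Bard and Bing only: {without_chatgpt:.2f}")  # ~1.93
```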

When asked about their overall thoughts on using AI for UX research, respondents articulated a range of perspectives, with most comments falling into one of three categories: benefits they associate with the technology (e.g., reducing bias), ways they would use it (e.g., to help with data analysis), and concerns they have about it (e.g., privacy). These primary topics stayed consistent across levels of UX research expertise, with benefits and concerns present across all participants. Those with higher average satisfaction ratings for AI chatbots tended to leave more positive comments; in particular, satisfied participants noted the benefits of AI-powered chatbots and described how they would use the technology. This sentiment further supports our claim that those with higher satisfaction ratings have higher levels of confidence in using the technology.

Final thoughts

The observed relationship between a UX practitioner’s expertise and their inclination to use AI chatbots for research offers valuable insights into the evolving field of user experience. This knowledge can help us understand how efficiency can be harnessed in research practices, and it aligns with reports of primary AI use cases in other industries. Not unlike Superintelligent’s observation, we see that highly qualified professionals rely on AI for brainstorming rather than for saving time or money. And just as LinkedIn reports, we notice a largely optimistic outlook on using AI. Our study reveals that while UX professionals are enthusiastic about integrating AI technology, they are not looking to replace their existing workflows entirely. Instead, they seek to enhance their research processes with AI’s systematic efficiency, something that especially benefits individual practitioners and smaller teams.

If you are a non-research UX professional interested in contributing your experience with AI in research, please contact us.



Using AI for research was originally published in UX Collective on Medium.

