Critically evaluate user research data for product development – the 5 biggest pitfalls and how to avoid them

Author: Benjamin Franz

Dec 2023

In today’s digital world, the correct use and interpretation of research data is crucial for the successful design of user interfaces and products.

This blog post highlights five common pitfalls that can occur when working with such data. From the importance of selecting the right data to the challenges of data analysis and typical mistakes when dealing with user feedback, this article offers practical insights and valuable tips to avoid these pitfalls.

Whether you’re an experienced UX designer, a market researcher or simply someone interested in using data effectively, this article will give you the tools to use your data wisely and profitably.

Enjoy reading!

 

1st pitfall: Incorrect use of existing research data

Existing data is valuable for the design of user interfaces and products. However, the value of the data is very closely linked to whether you have collected data from the right people. If you have an internal B2B application installed individually by admins on the users’ computers, then the data almost inevitably comes from the right people.

However, if you run an online questionnaire, you don’t know whether the people answering it belong to the user group you want to collect data from. Whenever you cannot see the participants and only receive their data, we recommend integrating questions that allow you to filter out data from unsuitable respondents afterward. Such questions could cover, for example, age, gender, profession, or use of a particular product area. Which questions you need depends on the user group you want to reach.
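To make this concrete, here is a minimal sketch of such an after-the-fact filter in Python with pandas. The file name and the column names (“age”, “profession”, “uses_product_weekly”) are invented for illustration – the article does not prescribe any particular tooling.

```python
# Minimal sketch: filtering survey responses by screening questions.
# File and column names are hypothetical; adapt them to your own export.
import pandas as pd

responses = pd.read_csv("survey_export.csv")  # hypothetical export file

# Keep only respondents who plausibly belong to the target user group.
in_target_group = (
    responses["age"].between(25, 55)
    & responses["profession"].isin(["nurse", "physician"])
    & (responses["uses_product_weekly"] == "yes")
)

filtered = responses[in_target_group]
print(f"Kept {len(filtered)} of {len(responses)} respondents")
```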

 

Tip: Open questions as a filter

Another note: Participation in studies, tests, surveys, etc., is often associated with an expense allowance. In other words, participants receive money, vouchers, or products for taking part. This can lead to people trying to be part of the user group, even if they are not. In other words, they try to cheat their way into your survey.

We prevent this essentially by phrasing the questions used for including or excluding participants very openly, so that it is not apparent to respondents what the “correct” answer is. Many online tools support you here, for example by issuing one-time links or by terminating the survey automatically if the conditions for participation are not met. Whether this is necessary is up to you. A flexibility and risk matrix can help you decide: the riskier or less flexible your solution is, the better the data quality should be.

 

2nd pitfall: The (unintended) interpretation of research data

With qualitative data in particular, it is easy for data not to be recorded objectively but to be interpreted already during documentation. For example, an interview with a user might not be recorded on camera (objective); instead, the interviewer’s interpretation of what was said is captured in bullet points (rather subjective).

We advise against this as a first step: try to make sure that you first collect the data objectively. This can happen live, by taking notes, or afterward, based on recordings. Once you have completed the data collection, or want to take stock in between, you can look for patterns in the objective data. This does not necessarily mean transcribing everything that is said verbatim. The point is to note the essence of the observation and the essence of what was said, but not your own interpretation of it.

This has the advantage that you can view and interpret the objective data several times, and, if necessary, reinterpret it later with more knowledge. If you did not record the objective data from the outset but interpreted it directly, you do not have this option. Above all, because the interpretation of the data depends on the person interpreting it, you lose information: the assessment could have turned out differently for different people. If several people are involved in the data collection and each interprets events on the fly, they will interpret the same event differently, and you will overlook patterns that would have been obvious from the objective data. If you are the only person who collects the data and works with it, this point is only of limited importance.

Unless, of course, you have to present and defend your conclusions and results afterward. Even then, it is helpful to collect objective data first and only then interpret it; this approach also makes your conclusions easier to defend.
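If several people code the same sessions, one common way to quantify how far their interpretations diverge is an agreement statistic such as Cohen’s kappa. The article does not name a specific measure, so treat this as a sketch; the labels and the scikit-learn usage are our own illustration.

```python
# Minimal sketch: measuring agreement between two coders with Cohen's kappa.
# The category labels per observed session are invented for illustration.
from sklearn.metrics import cohen_kappa_score

coder_a = ["positive", "negative", "neutral", "positive", "negative"]
coder_b = ["positive", "neutral", "neutral", "positive", "positive"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, ~0 = chance level
```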

 

3rd pitfall: The wrong frame of comparison

People’s evaluation of a system, product, or service depends enormously on their frame of comparison. If you don’t set a frame of comparison, people will still compare; you just won’t know against what. You don’t believe that? Let us explain.

At this point, we would like to share a personal story from Michaela’s diploma thesis (yes, that was a while ago). Michaela already specialized in usability and user experience during her studies and investigated whether the aesthetics of an object influence its perceived and actual usability. Spoiler alert: no. At least not as far as the experiment could show. The experiment compared can openers with different designs: the perceived aesthetics differed, while the objective usability remained the same. But do you know what the problem is? We don’t know whether that is really the case, because each participant brought their own comparison standard. And this brings us to the topic of the frame of comparison.

 

4th pitfall: Inadequately understood evaluations and statements

In interview situations in particular, it often happens that a statement is interpreted completely differently by the interviewer than it was meant by the interviewee. For this reason, you should ideally follow up on evaluative statements with further questions in order to ensure the correct interpretation.

 

5th pitfall: The data is not complete or meaningful

Especially when you are working with quantitative data – e.g. questionnaire data – the question of data quality is very important. People are often reluctant to fill out lengthy questionnaires. The longer and more complicated the survey is, the more likely you are to either lose people completely or demotivate them along the way, so that they still tick boxes, but their answers no longer reflect what they actually think.

The first check you should therefore run on your data is whether all questions have been answered. It can easily happen that 150 people answer a question at the beginning, but only 75 at the end. This is not a problem in itself, as long as you can assume that the dropouts happen “by chance” rather than that a particular kind of person leaves the survey. Otherwise, only a certain type of person remains in the survey, and they will of course give different answers. Depending on how important this is to you, you can decide whether to exclude people who have not completed the survey from the evaluation.
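As a rough sketch of this completeness check, assume a typical export with one row per respondent and one column per question (named “q1”, “q2”, …); these names and the layout are our assumption, not from the article.

```python
# Minimal sketch: per-question answer counts and exclusion of incomplete rows.
import pandas as pd

responses = pd.read_csv("survey_export.csv")  # hypothetical export file
question_cols = [c for c in responses.columns if c.startswith("q")]

# Answers per question: a falling count shows where respondents drop out.
print(responses[question_cols].notna().sum())

# Optionally exclude everyone who did not answer all questions.
complete = responses.dropna(subset=question_cols)
print(f"{len(complete)} of {len(responses)} respondents answered everything")
```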

Secondly, you should check whether any strange response patterns appear. This is actually harder to check in the digital age. In psychology degree courses, students were taught to check paper questionnaires for patterns in how they were filled out (e.g. crosses forming an X shape, or always 1, or always 5). Digitally, this is no longer so obvious because you do not see the completed questionnaire in front of you; however, you can of course still recognize the patterns in the numbers. Usually, all respondents who always give the same answer are excluded.
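Here is how such a check might look on the same assumed export: a respondent whose answers contain only one distinct value across all rating items is flagged as a straight-liner. The strict threshold of exactly one distinct value is our assumption; in practice you might loosen it.

```python
# Minimal sketch: flagging respondents who always tick the same value.
import pandas as pd

responses = pd.read_csv("survey_export.csv")  # hypothetical export file
question_cols = [c for c in responses.columns if c.startswith("q")]

# Count distinct answers per respondent across all rating items.
distinct_answers = responses[question_cols].nunique(axis=1)
straight_liners = responses[distinct_answers == 1]

print(f"Excluding {len(straight_liners)} straight-lining respondents")
cleaned = responses.drop(straight_liners.index)
```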

Now you could say: “Hey, what if the person really just loved everything?” Then we would tell you: “Then you have a problem with your questionnaire design.” Questionnaires are usually designed so that even someone who thinks everything is great has to tick different values. You achieve this by reversing some questions (instead of “I found it very easy to use”, the item reads “I found it very difficult to use”). This turns a cross in the same place into a contradictory statement. For particularly important aspects, very similar questions are sometimes asked more than once, and the answers are then checked for consistency.
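For illustration, here is a sketch of how reversed items on a 1-to-5 scale are typically recoded before analysis. Which items are reversed (“q2”, “q4”) is an assumption for the example.

```python
# Minimal sketch: recoding reversed 1-5 items so that higher always means
# "more positive". The reversed item names are hypothetical.
import pandas as pd

responses = pd.read_csv("survey_export.csv")  # hypothetical export file
SCALE_MIN, SCALE_MAX = 1, 5
reversed_items = ["q2", "q4"]

# On a 1-5 scale, a reversed item is recoded as 6 - value
# (in general: SCALE_MIN + SCALE_MAX - value).
responses[reversed_items] = (SCALE_MIN + SCALE_MAX) - responses[reversed_items]

# Note: run the straight-lining check on the *raw* data. After recoding, a
# respondent who ticked 5 everywhere shows mixed 5s and 1s - exactly the
# contradictory pattern the reversed wording is designed to expose.
```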

Proper questionnaire construction is an art in itself. If you would like to learn more about it, we recommend the literature on market research or on test and questionnaire construction. But let’s be honest: in most cases, a less complex survey will serve you well enough. The most important thing is to make sure that the questions are clear and actually measure what you want to measure.

 

Conclusion

We summarize once again what you should avoid:

  1. Incorrect use of existing data: The value of data depends heavily on whether it has been collected from the right people. It is recommended to include questions that help to filter inappropriate data later, especially in online surveys. Open-ended questions can help to identify dishonest participants.
  2. (Unintended) interpretation of data: There is a risk that qualitative data is subjectively interpreted as soon as it is documented. It is recommended to collect data objectively and look for patterns later to avoid ambiguities and loss of information.
  3. Wrong frame of comparison: The assessment of products depends heavily on the users’ frame of comparison. Users set their comparison standards without a predefined framework, which can lead to distorted results. An example from a diploma thesis shows how different designs of can openers influenced the perceived aesthetics, although the objective usability remained the same.
  4. Inadequate understanding of evaluations and statements: In interviews, statements can be interpreted differently by the interviewer than they were meant by the interviewee. Asking follow-up questions avoids misinterpretations and yields a deeper understanding.
  5. Incomplete or meaningless data: With quantitative data, quality is critical. It should be checked whether all questions have been answered and whether there are any strange answer patterns. When designing the questionnaire, questions should be formulated in such a way that they measure the desired outcome and generate different responses.

You are also welcome to contact us directly via our contact form. We look forward to hearing from you!

 
