Human Computer Interaction

Case Study

After a busy workday, we all dream of an "Ah-mazing" kitchen assistant that cooks for us. The 21st century offers the Thermomix TM5, a machine that claims to combine the functions of 12 different appliances in one.
This study aims to test the cognitive activity involved in performing certain tasks with the Thermomix, and to develop new design concepts that either enhance the cognitive experience or eliminate cognitive stress.

The Process


1. Desktop research
2. Cognitive problems
3. Vecuritas luminare sed
4. Vecuritas luminare sed

The Process

After grouping the findings and grading their severity, we formulated a solution for each usability problem. The listed problems could then be passed on to the development team, together with the proposed solutions, so that they can be taken into account in the further development and improvement of the Thermomix.
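The handoff described above can be sketched as structured data. This is a minimal illustrative Python sketch, not part of the study itself; the class and field names are my own assumptions about how findings could be recorded and sorted by severity before being passed to a development team.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    task: str             # observed usability problem
    heuristics: list      # violated Nielsen heuristics
    severity: int         # graded severity (higher = more critical)
    solution: str         # proposed fix for the development team

# Two of the findings from this study, recorded as structured data
findings = [
    Finding("Start button too small / color misdirection",
            ["Visibility of system status", "Consistency and standards"],
            5, "Start button larger, green background"),
    Finding("'Next' button misleading (step advances automatically)",
            ["User control", "Error prevention"],
            5, "'Next' only available once the level has been set"),
]

# Hand over the list sorted by severity, most critical first
for f in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[{f.severity}] {f.task} -> {f.solution}")
```

Sorting by severity keeps the triage order explicit, so the most critical problems are addressed first.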

What we found

Negative Findings

| Task | Heuristic | Severity | Solution |
|------|-----------|----------|----------|
| Start button too small / color misdirection | Visibility of system status; Consistency and standards | 5 | Start button larger, green background |
| "Next" button misleading: while the time is being selected, it goes to the "Next" step automatically | User control; Error prevention | 5 | "Next" option only possible if the level has been set; only one action per screen |

Conclusion

We combined Cognitive Walkthrough, Heuristic Evaluation, and Thinking Aloud for our study, using three "experts" who were familiar with the terms "usability" and "user experience". With this small group, we discovered many problems on one device relatively quickly; each expert focused on as many problems as possible in each task.

If I were to rate the method against the usability components (ease of use, efficiency, effectiveness), I would rate it like this: ease of use high, efficiency high, effectiveness medium.

Ease of use is high, since the effort is largely proportional to the size and complexity of the interface being evaluated. Above all, the Cognitive Walkthrough is used to check the interaction between the interface and the user; as in our study, the focus of the method is on evaluating the design of a system through exploration.

Efficiency is relatively high, because within about one hour each expert discovered many problems with the interface. It can even be an advantage that the method does not involve real users, as this saves time.

Effectiveness, I think, is one of the method's weaknesses. The experts find many problems, but an expert or designer can never really put themselves in the role of the user: the experts' task is to focus on problems, not to use the device in real life.

The Thinking Aloud method enabled us to identify reasons for problems and anomalies, as well as possible solutions, without additional analysis. I find the method not only very simple and at the same time very effective, but also very flexible, since it can be applied to many different scenarios.

You like it?

Take a look at my other projects
