User experience designers have long relied on design heuristics to quickly evaluate their designs for usability bugs. While other methods of usability evaluation have evolved to be more robust and accurate, heuristic evaluation surfaces high-level bugs at the early stages of design. For years, UX designers have used Nielsen’s heuristics to evaluate information systems; however, with the growing variety of platforms, the traditional heuristics can no longer evaluate every design accurately. Though traditional heuristics cover guidelines broad enough to encompass human experience with an information system, they are often limited to web, desktop, or control-panel-based interfaces. Emerging platforms such as mobile, ambient technologies, virtual/augmented reality, wearable technology, and other natural interfaces require dedicated guidelines that specifically target the critical aspects of those platforms. Attempts have been made to design dedicated heuristics for mobile platforms, such as NN Group’s Mobile Usability Guidelines. Likewise, Korhonen and Koivisto (2006) designed dedicated heuristics to evaluate mobile games in particular. This clearly suggests that UX designers need to formulate special heuristics tailored to the platform they are designing for.
However, it is important to understand what makes a good heuristic set. A heuristic set is useful if it identifies more usability bugs — and, of course, more relevant and accurate ones. But what does it take for a UX team to generate such heuristics? The answer lies in a 2003 CHI publication, “Heuristic Evaluation of Ambient Displays”. In this paper, the authors demonstrate a simple yet very effective way of generating, as well as validating, design heuristics for ambient displays.
This post is my attempt at generalising their findings so that UX designers can generate heuristics for their respective platforms. To begin with, invite the interaction designers and product engineers of the platform (e.g. a VR headset, a wearable band) and discuss the core features or utilities associated with it. Then identify the primary goals of the platform and check which of the existing Nielsen heuristics can evaluate them. Finally, formulate rough guidelines for the remaining platform goals. As a result, you will have a modified set of heuristics, which essentially comprises the core Nielsen heuristics plus additional heuristics tailored to your platform.
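To make the gap-finding step above concrete, here is a minimal sketch in Python. The platform goals and their coverage mappings below are made-up placeholders for illustration, not data from the paper — the point is simply that any goal no existing heuristic covers is a candidate for a new, dedicated heuristic.

```python
# Hypothetical sketch: map each platform goal to the Nielsen heuristics
# that already cover it, then surface the gaps that need new heuristics.
# The goals and coverage below are illustrative assumptions only.

# Which existing Nielsen heuristics cover each platform goal
# (an empty set means nothing covers it yet).
goal_coverage = {
    "glanceable information": {"aesthetic and minimalist design"},
    "peripheral awareness": set(),          # no existing heuristic fits
    "smooth attention transitions": set(),  # needs a dedicated heuristic
    "error recovery": {"help users recognize and recover from errors"},
}

def uncovered_goals(coverage):
    """Return the platform goals that no existing heuristic evaluates."""
    return sorted(goal for goal, heuristics in coverage.items()
                  if not heuristics)

print(uncovered_goals(goal_coverage))
# → ['peripheral awareness', 'smooth attention transitions']
```

The goals that come back uncovered are exactly where your team drafts the additional, platform-specific heuristics.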
Now that you have an additional set of guidelines, it is time to validate them. For this purpose, you can again invite usability experts (especially ones with some experience in heuristic evaluation). It is preferable to have at least 4–6 experts so that you can divide them into two groups. One group evaluates the platform using the existing Nielsen heuristics, and the other evaluates it using the modified heuristics. If the modified heuristics surface more usability bugs, you can reasonably conclude that they can be used to evaluate the emerging platform at hand.
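The comparison between the two groups can be sketched as a simple coverage calculation. Assuming you keep a list of known (or later-confirmed) issues in the design, pool each group's findings and compare what fraction of those issues each heuristic set uncovered. All issue IDs and evaluator findings below are made-up placeholders:

```python
# Hypothetical sketch of the validation step: pool the issues each group
# of evaluators found and compare their coverage of the known issue list.
# The evaluator findings below are made-up placeholder data.

known_issues = {f"issue-{i}" for i in range(1, 11)}  # 10 known bugs

# Each evaluator reports the set of issue IDs they found.
nielsen_group = [
    {"issue-1", "issue-2", "issue-5"},
    {"issue-2", "issue-3"},
    {"issue-1", "issue-4"},
]
modified_group = [
    {"issue-1", "issue-2", "issue-6", "issue-7"},
    {"issue-3", "issue-8", "issue-9"},
    {"issue-2", "issue-5", "issue-10"},
]

def coverage(group, known):
    """Fraction of known issues found by at least one evaluator."""
    found = set().union(*group) & known
    return len(found) / len(known)

print(f"Nielsen:  {coverage(nielsen_group, known_issues):.0%}")   # 50%
print(f"Modified: {coverage(modified_group, known_issues):.0%}")  # 90%
```

A clearly higher coverage for the modified set, as in this toy data, is the kind of evidence the validation step is after; with real evaluators you would also weigh issue severity, not just counts.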
With the increasing variety of platforms in current technology, it is essential to have improved, dedicated heuristics that can evaluate our solutions more accurately. If you found this post useful or have anything to add, do comment or simply leave a note. 🙂