
Availability Heuristic and Representativeness Bias: How to Deal with Them as a Developer

The availability heuristic was first introduced in 1973 by the psychologists Amos Tversky and Daniel Kahneman in their paper “Availability: A Heuristic for Judging Frequency and Probability.” Tversky and Kahneman explain that the availability heuristic is a product of the human tendency to rely on information that is readily available, i.e., information that is easily recalled from memory.

The concept of availability, as it is used here, denotes the ease with which an idea can be brought to mind. The availability heuristic operates on the notion that if something can be recalled, it must be important, or at least more important than alternatives that are not as readily recalled. Consequently, people tend to weight their judgments heavily toward recent information, biasing new opinions toward the latest news.

The availability heuristic is a cognitive bias that commonly affects judges and jurors alike. In legal settings, it can have a significant impact on a jury’s verdict. When presented with evidence, a jury is likely to remember details that are highly available, i.e., easily recalled, and less likely to remember details that are not. In this way, the availability heuristic can cause people to overlook important details.

Related to the availability heuristic is the representativeness heuristic. It also relies on people’s memory of specific instances, but it has more to do with stereotypes, prototypes, or averages: people tend to assume that conclusions drawn from small samples faithfully represent the entire population.

The representativeness heuristic shows up in two common errors: base-rate neglect and sample-size neglect. In base-rate neglect, decision makers rely on stereotypes instead of the underlying frequencies. In sample-size neglect, decision makers, when judging the likelihood of a particular outcome, fail to consider the size of the sample their judgment is based on.
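
To make sample-size neglect concrete, here is a minimal Python sketch (the 5% failure rate and the sample sizes are made-up values for illustration). A sample of one can only ever show a 0% or 100% failure rate; only larger samples converge toward the true rate.

```python
import random

# Illustrative sketch: the true failure rate (5%) and the sample
# sizes below are assumptions, not numbers from the article.
TRUE_FAILURE_RATE = 0.05
random.seed(42)

def estimated_rate(sample_size: int) -> float:
    """Draw `sample_size` user sessions and return the observed failure rate."""
    failures = sum(random.random() < TRUE_FAILURE_RATE for _ in range(sample_size))
    return failures / sample_size

for n in (1, 10, 100, 10_000):
    # With n=1 the estimate is either 0% or 100%, wildly off in both cases.
    print(f"n={n:>6}: observed failure rate = {estimated_rate(n):.1%}")
```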

The key to avoiding the representativeness bias is to always be open to the possibility that the case before you isn’t typical. But this can be difficult if you deal with decision makers, who are often prone to exactly these errors.

A telltale sign of this error is the “It’s not working on my device” situation. Here, a sample size of n=1 (the decision maker) is taken as representative of the whole user base. On closer inspection, it often turns out that the device in question is not one the targeted user group actually uses. A typical example is the iPad: we see that only between 3 and 5 percent of regular users use iPads, while the share among decision makers is higher, around 20% (again, this might be a bias based on our own experience :-)). So if something doesn’t work well on a single iPad, the assumption is that this must be a systematic problem on all devices. In our experience, however, this is rarely the case, because iPads are special devices: they are neither smartphones nor laptops.
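
A quick back-of-the-envelope calculation with the rough, experience-based numbers from above makes the base rates explicit:

```python
# Rough base-rate check; the shares are the article's estimates, not hard data.
ipad_share_users = 0.04            # ~3-5% of regular users
ipad_share_decision_makers = 0.20  # ~20% of decision makers

# How over-represented is the iPad in the decision makers' "sample"?
factor = ipad_share_decision_makers / ipad_share_users
print(f"iPads are {factor:.0f}x over-represented among decision makers.")

# Worst case, an iPad-only bug affects ~4% of real users,
# not "all devices", as a single-device test might suggest.
print(f"An iPad-only bug hits at most ~{ipad_share_users:.0%} of the user base.")
```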

What we’ve started to do is provide statistics about the expected user base early in the development process. We show global stats on how many users are on iPads, iPhones, or Android devices (from various vendors). We show with simple diagrams that it makes sense to first get an app working well for the majority of users and then go after the smaller groups. While this might sound obvious, it is not that obvious to decision makers, precisely because of the representativeness heuristic.
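
Such a breakdown is easy to generate; here is a minimal sketch assuming we already have one device label per active user (the data below is made up for illustration):

```python
from collections import Counter

# Hypothetical per-user device labels, e.g. derived from analytics
# or user-agent parsing; the counts are invented for this example.
devices = ["Android"] * 520 + ["iPhone"] * 390 + ["Desktop"] * 50 + ["iPad"] * 40

shares = Counter(devices)
total = sum(shares.values())
for device, count in shares.most_common():
    print(f"{device:<8} {count / total:>6.1%}")
```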

It’s also a good idea to provide a device that can be used during testing. This keeps decision makers from testing on their iPads, and this simple trick makes your life as a developer a lot easier.


Photo by Andrea Piacquadio from Pexels
