Fundamental Attribution Error and Confirmation Bias
Post date: Feb 06, 2013 11:43:51 PM
Following on from my post on Cognitive Heuristics, Biases and Lean Product Delivery, I was asked by a colleague to elaborate:
"Do you have any real world examples of things like cognitive bias and ways to measure success with empirical evidence?"
Context
OK, so first of all: there are many cognitive biases (over 150), and more are being discovered all the time.
According to Wikipedia, the more commonly studied biases are:
Framing: using a too-narrow approach and description of the situation or issue.
Hindsight bias, sometimes called the "I-knew-it-all-along" effect, is the inclination to see past events as being predictable.
Fundamental attribution error is the tendency for people to over-emphasize personality-based explanations for behaviours observed in others while under-emphasizing the role and power of situational influences on the same behaviour.
Confirmation bias is the tendency to search for or interpret information in a way that confirms one's preconceptions.
Self-serving bias is the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests.
Belief bias occurs when one's evaluation of the logical strength of an argument is biased by one's belief in the truth or falsity of the conclusion.
In their talk, Megan Folsom and Dan North both touched on Confirmation bias, Pro-innovation bias, and Fundamental attribution error. They made the case that these are all biases we should be aware of when attempting to build teams that deliver a product to delight users.
Going back to the original question above, I'll try to provide examples, and I think it would be good to focus on two biases: Fundamental attribution error and Confirmation bias. The reason for this is that my original post focused on two things: enabling people to work together collaboratively by being aware of biases, and using feedback loops for empirical measurement of the results. My view is that our susceptibility to Fundamental attribution error and Confirmation bias is a major contributing factor to poor decision making and project failure.
Fundamental attribution error
Fundamental attribution error is a really interesting bias, and one I'm sure we have all engaged in and been the subject of. It describes the tendency to over-value dispositional or personality-based explanations for the observed behaviours of others, while under-valuing situational explanations for those behaviours.
There is a simple example of the Fundamental attribution error on Wikipedia that illustrates the bias well, but I'll offer two here that are more aligned with situations in a work context.
When developing software, if a team doesn't meet its estimates, an observer might conclude that the team is inexperienced and has not spent enough time estimating. If the observer were to look at the situational explanations instead, then perhaps the team doesn't understand what needs to be built, or the person requesting the feature is not clear about what they are expecting. There may be a large number of ways to solve the problem, or perhaps there's a bottleneck in the delivery process slowing the team down.
If we switch perspectives and put ourselves in that team's position, we might be able to understand the situational/contextual explanation for the observed behaviour.
On a larger scale, consider a product delivery / project that is overrunning, or is not returning on the investment it has received. A business might consider the project to be badly managed, or may seek to place blame on an individual or a group of individuals for a systemic failure, out of all proportion to their actual levels of responsibility. This type of Fundamental attribution error, or Group attribution error / Ultimate attribution error, perhaps allows the business to avoid worrying about the failed system (context/situation) by attributing the problem to an individual or group of individuals (disposition).
The problem with not recognising these errors is that we limit our ability to learn and continuously improve, because we never acknowledge the situations that influenced the results.
OK, so I know what a Fundamental attribution error is; now what?
How can we address this bias and the lack of situational thinking behind it? Perhaps the first step is awareness: knowing that you are susceptible to Fundamental attribution errors can help you guard against them, ensuring you consider the situation and take others on the journey of understanding the context, so they don't fall into the same error. This more enlightened view then leads us towards root-cause analysis to address problems, rather than simply attributing them to individuals or groups. It is also another reason to have more open and honest communication within teams and across a business, as it helps those around you appreciate the situational context.
For a more fun example, watch this: Root Cause Analysis: Lean Muppets
Confirmation bias
Wikipedia states: Confirmation bias is a tendency of people to favor information that confirms their beliefs or hypotheses. People display this bias when they gather or remember information selectively, or when they interpret it in a biased way. The effect is stronger for emotionally charged issues and for deeply entrenched beliefs.
This is an interesting one to address, as it's such a core part of our psychological response and information processing. I imagine we all recognise ourselves falling foul of this bias in our daily lives, whether in our devotion to a sports team, company brand, political party, or religion, or even in our advocacy of one methodology over another (the irony isn't lost on me here).
Awareness of this bias is therefore really important, so I will focus on two perspectives which I think are relevant to product delivery:
How to communicate effectively with others who may have different beliefs.
How to create experiments and evaluate the results in ways that don't just reinforce our original belief / hypothesis.
The first perspective is a subject for another post, and one I need to do more thinking on. The second perspective is very relevant to how we structure and deliver a product/project, and to its resulting success or failure, so I'll try to provide an example here.
Create experiments and evaluate the results...
When we start a project with some high-level objectives, and set out to plan and research all the deliverables and requirements up front, we will be influenced by that analysis and that plan throughout the project lifecycle, even if there is evidence to suggest that things are not working, or that our original hypotheses are not producing a positive outcome. This is partly due to the high investment in the previous research (which can be interpreted as a sunk cost), and partly due to the belief that what has already been uncovered through hypothesis and prediction is correct (even in the face of empirical evidence to the contrary). In this scenario we are more likely to continue with the original plan until we deliver, or run out of time or money, regardless of the resulting success or failure of the project.
Basically, we have not provided any opportunities for the team / business to pivot away from the original plan and go in the direction the evidence suggests is more advantageous. It is folly to believe that we can formulate lots of hypotheses up front, and that all of them will prove correct in one large experimental step.
This, I believe (oh, the irony...), is not an efficient way of delivering features for products/projects that involve system implementations and software that interacts with multiple end users. Given the interconnected nature of systems and users, the delivery of one feature will influence the delivery of subsequent features, so any upfront planning and requirements gathering is ultimately just an exercise in (educated) hypothesis/conjecture creation.
To combat this, we can use a more scientific method, with small incremental feature delivery mitigating the influence Confirmation bias has on the group working on a project/product delivery. For example, we can run small feature experiments, combined with a focus on measuring the results, to confirm or refute our original hypothesis.
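To make "measuring the results" a little more concrete, here is a minimal sketch in Python of how one might evaluate such a feature experiment. It assumes a hypothetical set-up where a new feature is shown to one group of users and a conversion metric is compared against a control group; the function name and all the numbers are illustrative, not from any particular project.

```python
import math

def two_proportion_z(successes_a, total_a, successes_b, total_b):
    """Two-proportion z-test: how surprising is the difference between
    variant B's conversion rate and control A's, if there were really
    no difference between them?"""
    p_a = successes_a / total_a
    p_b = successes_b / total_b
    # Pooled rate under the null hypothesis of "no difference".
    pooled = (successes_a + successes_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 1,000 users per group, new feature shown to group B.
z = two_proportion_z(successes_a=100, total_a=1000,
                     successes_b=150, total_b=1000)
# |z| > 1.96 roughly corresponds to p < 0.05 (two-tailed).
print(f"z = {z:.2f}, significant at the 5% level: {abs(z) > 1.96}")
```

If the result is not significant, the honest conclusion is that the experiment has not confirmed the hypothesis, however much we liked it.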
...in ways that don't just reinforce our original belief / hypothesis.
The risk is that Confirmation bias means we naturally want to run experiments whose results would demonstrate that we are right, rather than risk running an experiment that might show we are wrong. But by running experiments that could prove us wrong, we stand a much better chance of eventually finding the true answer to our original hypothesis.
By being aware of this bias, we can question our hypotheses and focus more on how we interpret the success or failure of the experiments we conduct.
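One practical way to do this is to decide, before the experiment runs, exactly what result would confirm the hypothesis and what result would refute it, so the outcome can't quietly be reinterpreted to fit our original belief. Below is a minimal Python sketch of that idea; the class, thresholds, and example hypothesis are all hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Experiment:
    """A pre-registered experiment: the conditions for 'confirmed' and
    'refuted' are written down before any data arrives, so the result
    cannot later be bent to fit the original belief."""
    hypothesis: str
    metric: str
    confirm_at: float  # metric at or above this value confirms
    refute_at: float   # metric at or below this value refutes

    def interpret(self, observed: float) -> str:
        if observed >= self.confirm_at:
            return "confirmed"
        if observed <= self.refute_at:
            return "refuted"
        return "inconclusive: keep measuring, don't declare victory"

# Hypothetical example: thresholds committed to up front.
exp = Experiment(
    hypothesis="One-click checkout increases completed purchases",
    metric="checkout completion rate",
    confirm_at=0.15,
    refute_at=0.11,
)
print(exp.interpret(observed=0.12))  # -> inconclusive, not "a success"
```

The important property is that the interpretation can't be argued with after the fact: the criteria for success and failure were committed to up front, before any data could tempt us to move the goalposts.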
Awareness
So how does this awareness of Fundamental attribution error and Confirmation bias influence our product delivery?
In conclusion, I propose that small cross-functional product teams, who go on the journey together to deliver a product, who understand the context of the whole delivery pipeline, and who focus on small incremental feature delivery by proposing hypotheses and conducting experiments, are better placed to guard against these two biases.
Why? The cross-functional nature of the team reduces the potential for Fundamental attribution errors and Group attribution errors throughout the delivery pipeline, because the situational context is apparent to everyone involved. Small incremental feature delivery mitigates Confirmation bias by focusing the team on a specific problem domain and iterating on features until the stated problem has been resolved, using feedback and metrics to measure the success or failure of each feature, thereby reducing the potential for Confirmation bias to influence decision making.