Bias already has our full attention!
Why don't we just hand over the Decisions to Machines?
Dealing with Bias in the Boardroom
Dealing with "Noise" in the Boardroom
In recent times there has been an avalanche of media coverage and ongoing debate highlighting the need for us all to be aware of our own biases, both implicit (unconscious) and explicit (conscious), and the effect these biases have on our decision making.
A plethora of bias awareness training programs have now become available (both online and offline).
In some cases, both in the workplace and as part of maintaining professional competency, bias awareness training courses have now become mandatory.
The aim is to better educate us about our own biases and provide us with strategies that we can all use to mitigate them.
The recognition of the need to be aware of, and eliminate, our own and others' biases has extended into the realm of Automated Decision Making (or automated decision assistance) by A.I. or rule-based algorithms.
When automated systems use data that is biased to begin with, or apply an algorithm or rule too blindly, they can perpetuate or create a bias of their own.
Regulators have started to impose requirements for Automated Decision Making to be subject to a regular "bias audit" aimed at providing assurance that the A.I. or rule-based algorithm is free from bias.
One might conclude that once we have found ways to successfully mitigate our biases, we will have arrived at the destination of better decisions, and that would be the end of the story.
After reading the book "Noise: A Flaw in Human Judgment" by Daniel Kahneman, Olivier Sibony & Cass R. Sunstein (published in 2021), it is readily apparent that research has shown that bias is only part of the problem.
The otherwise invisible part of both the problem and the solution has been labelled "Noise" by the authors.
Whenever there is human judgment there is noise.
Imagine two doctors presented with identical information about the same patient giving very different diagnoses.
Now imagine the reason for the difference is because the doctors have made their diagnosis in the morning or afternoon, or at the beginning or the end of the week.
This is “noise” – the reason human judgements that should be identical vary – which Daniel Kahneman, one of the world’s best-known psychologists and winner of the 2002 Nobel Prize in Economics, tackles in his latest book, Noise: A Flaw in Human Judgment.
A study of 208 criminal judges showed that their independently given sentencing recommendations on 16 fictitious cases varied greatly in harshness.
For example, the judges only unanimously recommended imprisonment in three of the cases, and while the recommended number of prison years in one case was 1.1 years on average, one recommendation was as high as 15 years.
"Noise" has not been identified as part of the problem in decisions to date because it only appears statistically (it is otherwise invisible).
Therefore, it is not readily apparent until after a detailed statistical analysis of decisions that should be identical but are not has been conducted.
To date, research has focused on individual decisions, not on artificially repeating the same decision (by the same person, or amongst a group of people) to see whether the outcome varies when it should be the same.
By way of example, normally a fingerprint expert does not revisit their prior fingerprint analysis to test whether or not they make the same finding the second time around.
It is only now that research studies have been specifically structured to examine these kinds of scenarios, and the statistical analysis performed, that the presence of "Noise" in decision making has been identified.
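The statistical character of noise can be illustrated with a minimal sketch of a "noise audit": several judges independently assess the same cases, and the spread across judges for each case is the noise made visible. The figures below are invented for illustration and are not taken from the study described above.

```python
import statistics

# Hypothetical noise audit: each list holds independent sentencing
# recommendations (in years) from several judges for ONE identical case.
# If judgment were noise-free, every list would contain a single repeated value.
recommendations = {
    "case_1": [1.0, 3.5, 2.0, 15.0, 4.0],
    "case_2": [2.0, 2.5, 2.0, 3.0, 2.5],
}

for case, years in recommendations.items():
    mean = statistics.mean(years)
    spread = statistics.stdev(years)  # the spread IS the noise
    print(f"{case}: mean = {mean:.1f} years, std dev = {spread:.1f} years")
```

No individual judgment looks wrong on its own; only aggregating judgments of the same case, as above, reveals how large the variability is.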
Contrast the above discussion about "Noise" in human decisions with a computer algorithm or A.I. program and you will find that there is no "Noise".
The computer system will consistently generate exactly the same results 24/7.
Whether the result is right, wrong or biased is another matter.
So if machines don't generate any "Noise" errors, why don't we just hand over the decision to machines?
Kahneman is not yet an enthusiast. He believes artificial intelligence is going to “produce major problems for humanity in the next few decades” and is not ready for many of the domains in which judgment is required.
In the longer term, however, he does see a world in which we might “not need people” to make many decisions. Once it becomes possible to structure problems in regular ways and to accumulate sufficient data about those problems, human judges could become superfluous.
Until then there is plenty to do in reducing human error by improving human judgment, rather than eliminating it by outsourcing decisions to machines.
Knowing about noise (and bias) will help with that goal.
The following Checklist is sourced from Appendix B of the book "Noise: A Flaw in Human Judgment" by Daniel Kahneman, Olivier Sibony & Cass R. Sunstein.
It is intended for Decision Observers, who are encouraged to use it as a starting point for designing a custom bias observation checklist of their own, as well as incorporating well-structured decision hygiene to reduce "Noise" in group decision making.
⭐️ We are currently working to develop a FREE online tool that incorporates this standard Checklist so that a Decision Observer can step through the Checklist and consider what tweaks they may wish to make for future use.
✅ Did the group's choice of evidence and the focus of their discussion indicate the substitution of an easier question for the difficult one they were assigned?
✅ Did the group neglect an important factor (or appear to give weight to an irrelevant one)?
✅ Is there any reason to suspect that members of the group share biases, which could lead their errors to be correlated?
✅ Conversely, can you think of a relevant point of view or expertise that is not represented in this group?
✅ Do (any of) the decision makers stand to gain more from one conclusion than another?
✅ Was anyone already committed to a conclusion? Is there any reason to suspect prejudice?
✅ Did any dissenters express their views?
✅ Is there a risk of escalating commitment to a losing course of action?
✅ Was there accidental bias in the choice of considerations that were discussed early?
✅ Were alternatives fully considered, and was evidence that would support them actively sought?
✅ Were uncomfortable data or opinions suppressed or neglected?
✅ Are the participants exaggerating the relevance of an event because of its recency, its dramatic quality, or its personal relevance, even if it is not diagnostic?
✅ Did the judgment rely heavily on anecdotes, stories, or analogies?
✅ Did the data confirm them?
✅ Did numbers of uncertain accuracy or relevance play an important role in the final judgment?
✅ Did the participants make non-regressive extrapolations, estimates, or forecasts?
✅ When forecasts are used, did people question the sources and validity?
✅ Was the outside view used to challenge the forecasts?
✅ Were confidence intervals used for uncertain numbers? Are they wide enough?
✅ Is the risk appetite of the decision makers aligned with that of the organisation?
✅ Is the decision team overly cautious?
✅ Do the calculations (including the discount rate used) reflect the organisation's balance of short- and long-term priorities?
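A tool that steps a Decision Observer through a checklist like the one above could be sketched as follows. The questions and answers here are hypothetical placeholders (a trimmed subset, not the full Appendix B checklist), and the function name is invented for illustration.

```python
# Hypothetical sketch: record a Decision Observer's answers to checklist
# questions and collect the items flagged as concerns for discussion.

CHECKLIST = [
    "Did the group substitute an easier question for the one assigned?",
    "Did the group neglect an important factor?",
    "Is there a risk of escalating commitment to a losing course of action?",
]

def flagged_concerns(answers: dict[str, bool]) -> list[str]:
    """Return the checklist questions the observer answered 'yes' to.

    Unanswered questions default to 'no concern'."""
    return [q for q in CHECKLIST if answers.get(q, False)]

observations = {
    CHECKLIST[0]: True,   # observer saw question substitution
    CHECKLIST[2]: False,  # no sign of escalating commitment
}
flagged = flagged_concerns(observations)
print(f"{len(flagged)} concern(s) flagged for discussion")
```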
The Mediating Assessments Protocol (MAP): "A Structured Approach to Strategic Decisions" was designed and developed by Daniel Kahneman & Olivier Sibony (in collaboration with Dan Lovallo*) to mitigate noise in human decision making within organisations.
The protocol can be applied broadly and whenever the evaluation of a plan or option requires considering and weighting multiple dimensions.
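The core idea of scoring dimensions separately before combining them can be sketched as below. This is an illustrative simplification of MAP, not the protocol itself: the dimension names, weights, and aggregation rule are all invented for this example, and MAP's emphasis is on keeping each assessment independent until the final evaluation.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """One independently made assessment of a single dimension of a proposal."""
    dimension: str
    score: float   # e.g. a 0-10 rating agreed for this dimension alone
    weight: float  # relative importance, fixed before scoring begins

def overall_judgment(assessments: list[Assessment]) -> float:
    """Combine the mediating assessments only at the end, as a weighted average."""
    total_weight = sum(a.weight for a in assessments)
    return sum(a.score * a.weight for a in assessments) / total_weight

# Hypothetical proposal evaluated dimension by dimension.
proposal = [
    Assessment("market size", 7.0, 0.4),
    Assessment("team strength", 8.0, 0.3),
    Assessment("execution risk", 5.0, 0.3),
]
print(f"Overall score: {overall_judgment(proposal):.2f}")
```

Deferring the overall judgment until every dimension has been scored on its own is what keeps an early global impression from contaminating the individual assessments.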
When you think about it, nearly every board meeting of a company, charitable organisation or government agency involves human decision making. Each could benefit from a combination of a Decision Observer using a bias Checklist, and the board's adoption of the Mediating Assessments Protocol (MAP), to help reduce Noise in the move towards better decisions.
"Reducing errors in human judgment requires a disciplined process."
Only required once for repeating decisions of the same kind.
For example, a venture capital fund investment funding decision.
For recurring judgments: use relative judgments, with a case scale if possible.
Whilst the research results regarding Noise are compelling, in the real world I can foresee major issues in implementing both Bias and Noise mitigation strategies.
It currently appears difficult enough for a board of directors to work towards diversity in its own composition, let alone make changes to any status quo that is unstructured and undisciplined with regard to how major strategic decisions are made.
When the rubber hits the road in the boardroom decision process, we quickly descend into the realm of corporate culture (what really happens), rather than what is supposed to happen, or what should happen to obtain the best results according to the latest research.
Many boardroom decisions are currently (when viewed objectively) a rubber stamping process.
It is common practice for parties with a vested interest in the outcome of a proposal to lobby for votes in their favour in the lead-up to the board meeting. Individual board members are influenced by initial prejudgment bias, such that decisions may for the most part already have been made before the board meeting commences.
I am also aware of boardroom cultures that abhor dissent "on the record" in board meetings (especially at public stock exchange listed companies).
Whilst I have not personally searched public stock exchange listed company archives for board meeting minutes documenting dissent, it is not hard to imagine that such a search (in a company with a boardroom culture abhorring dissent) might find only unanimous board decisions.
Only after the real financial and corporate governance costs of "Noise" in boardroom decisions have our full attention will change be possible.
The authors have successfully conducted "Noise Audits" which proved crucial and "eye-opening": they raised awareness, helped quantify the impact of "Noise", and in turn drove the adoption of "Noise" reduction strategies.
In my "Post-Noise" opinion, the opportunity to improve our decision making (by working to reduce both bias and noise) represents both a potential competitive advantage for individuals and organisations attuned to it, as well as an improvement in overall corporate governance.
* This research was supported by an Australian Research Council Discovery Grant to coauthor Dan Lovallo.
Further Reading: Daniel Kahneman on "noise" — the flaw in human judgment harder to detect than cognitive bias, dated 28 May 2021.
A recording of Daniel Kahneman’s full conversation with Ben Newell is available on the UNSW Centre for Ideas’ website.
Social Sharing Image: Courtesy of Josh Eckstein on Unsplash
Credits: This blog article was written by James D. Ford Esq., GAICD CIPP/US | Principal Solicitor, Blue Ocean Law Group℠.
This blog article is intended for general interest + information only.
It is not legal advice, nor should it be relied upon or used as such.
We recommend you always consult a lawyer for legal advice specifically tailored to your needs & circumstances.