Improving collaborative decisions by reducing noise

Ron McFarland

Open leaders are champions of open organization principles—including collaboration. But are there times when leaders should actually consider limiting collaboration among team members?

According to at least two experts, the answer is "yes."

In this chapter, I'll explore the idea that strategically limiting some aspects of collaboration might be beneficial when those aspects of collaborative discussion are counterproductive to helpful information gathering. To do this, I'll weave together two compelling works in this area: The Influential Mind by Tali Sharot and Noise: A Flaw in Human Judgment by Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein.

My argument is that open leaders don't just need to promote any and all collaboration. They need to pay attention to those aspects of collaboration that can actually hinder the benefits of bringing people together.

When conversation stifles

According to Sharot, our minds are always influenced by our surroundings: our environment impacts our recall at any given time. Therefore, Sharot argues, interacting with others too early in an investigative process can introduce cognitive bias and prevent people from obtaining the critical information they need to achieve innovations and insights. For example: In a decision-making meeting, the person who first presents a suggestion will influence everyone, and the decision-making process that unfolds could be biased as a result. Borrowing a popular phrase, Sharot calls this phenomenon "groupthink."

But what if the initial comment in that planning session wasn't necessarily the best idea? What if it actually set the stage for a conversation that was more unproductive than productive? What if the views of the most outspoken silence the ideas of the more reserved?

Open leaders face these challenges when encouraging collaboration among team members.

On bias and noise

The authors of Noise explore this problem directly (and, interestingly enough, have consulted Sharot for their work as well). They distinguish between bias and "noise," taking the bias problem one step further. To explain the relationship between the two, they imagine a shooting-gallery target with hits in various locations (see Figure 1). Consider each "hit" a human judgment or decision, made correctly or incorrectly. Simply put, at the conclusion of a collaborative discussion, we must ask: Was the decision we made a successful, accurate one, or an unsuccessful, error-filled one? If unsuccessful, how far has the whole team drifted from what it sought to achieve?

![image](https://coachingbuttons.com/images/2023/12/noise.png)

Figure 1: Adapted from Noise: A Flaw in Human Judgment

Let's examine the figure in more detail.

Everyone on Team A (no bias and no noise) hits the bullseye. This means they gathered, used, collaborated on, and agreed on good information to reach the ideal result. They are not particularly biased or impacted by what Kahneman calls "noise."

Team B (bias but no noise) was off target, but all its hits landed in the same area, in the lower left. This means the team's judgment was universally biased by something or someone specific, since all the members hit the same area on the target. They all believed the same false information. They didn't have any "noise" to speak of, however.

Team C (no bias but noise) results were roughly centered on the bullseye but scattered around it, with only one member hitting it. This means many different factors negatively affected each member's accuracy; no single, interesting but false factor skewed the whole group. This is what the authors call "noise": random, unknown factors. What constitutes noise differs for each individual and is difficult to identify, but it exists. It adversely impacts judgments, and if ignored, it can lead to poor judgments, even on extremely important issues. Imagine being sick and visiting five doctors to be cured, only to receive five different diagnoses (judgments). The book provides several useful examples of this type of variance and argues that the problem is far greater than we usually think. Its authors also offer suggestions for constructing more productive collaboration methods in open organizations, which should result in better community judgments and decisions.

Team D (bias and noise) is both biased and influenced by noise. Someone or something has biased the group's judgment, pushing it off target toward the lower left. At the same time, other factors affect each member's judgment differently, producing the spread between them. They may be ignoring important information or considering unrelated information.

So, why are they off target? Because of human error due to bias (systematic deviation) or noise (random error). Once we learn which, open leaders can consider putting bias- or noise-reduction measures in place to improve the judgments that arise from collaborative discussions. Bear in mind that both bias and noise are errors and should be reduced. Depending on the case, one may matter more than the other, but the book Noise focuses primarily on the latter: noise.
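To make the distinction concrete, here is a minimal sketch (my own illustration, not from the book) that simulates the four teams: a bias term shifts every judgment in the same direction, while a noise term scatters judgments randomly. Team sizes and parameter values are assumptions chosen only for demonstration.

```python
# A minimal sketch (not from the book) of the bias/noise distinction in
# Figure 1: "bias" shifts every judgment in the same direction, while
# "noise" scatters judgments randomly. All parameter values are illustrative.
import random

def shoot(n_members, bias=(0.0, 0.0), noise=0.0, seed=0):
    """Return (x, y) 'hits' for a team; the bullseye is at (0, 0)."""
    rng = random.Random(seed)
    return [(bias[0] + rng.gauss(0, noise), bias[1] + rng.gauss(0, noise))
            for _ in range(n_members)]

teams = {
    "A (no bias, no noise)": shoot(5, bias=(0, 0), noise=0.1),
    "B (bias, no noise)":    shoot(5, bias=(-3, -3), noise=0.1),
    "C (no bias, noise)":    shoot(5, bias=(0, 0), noise=2.0),
    "D (bias and noise)":    shoot(5, bias=(-3, -3), noise=2.0),
}

for name, hits in teams.items():
    # Bias shows up as the average offset from the bullseye ...
    mean_x = sum(x for x, _ in hits) / len(hits)
    mean_y = sum(y for _, y in hits) / len(hits)
    # ... while noise shows up as the spread of hits around their own center.
    spread = (sum((x - mean_x) ** 2 + (y - mean_y) ** 2 for x, y in hits)
              / len(hits)) ** 0.5
    print(f"Team {name}: offset ({mean_x:+.1f}, {mean_y:+.1f}), spread {spread:.1f}")
```

With these settings, teams A and B show a small spread (little noise), teams B and D show a large offset from the bullseye (bias), and team C is centered but widely scattered.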

What's noise, anyway?

According to the authors of Noise, "noise" can come from a variety of sources. It could be a person's stress level or mood while making a judgment. It could be unrelated intimidation or outside opinions. It could be distractions or dramatic events. It could be room temperature, or irrelevant, useless information that distracts the judgment process. It could simply be the time of day, which affects everyone differently.

For instance: In groups in your organization, does one individual offer a strong opinion that moves the discussion in an unhelpful direction? Who speaks last? Who speaks with confidence? Who is wearing black? Who is seated next to you? Who smiles, frowns, or makes unconscious gestures? All of these could have an impact on the group. The authors call this "social influence," and it's a form of noise.

In open organizations (as in most places), the purpose of a collaborative exercise or informative conversation is to arrive at some shared conclusion, even a loose consensus. One hopes it will result in collective action and shared direction. If members of a team always make vastly different judgments, then the results of their collaboration are likely to be poor—ineffective, out-of-scope, or unable to drive real change or innovation. This is particularly true for extremely important decisions or judgments. To address this, first identify the noise that may be affecting a team. Then develop an appropriate strategy to address it (preferably in collaboration with your team). This can result in teams or communities making overall better decisions.

Imagine that your team in your open organization is deciding on the price of some software you jointly developed, and everyone offers their best quotation. If all the quotations fall within about 10% of the average, that's normal, and reaching an agreement will be easier. But what if many are 50% to 90% above or below the average, with some double others? In that case, noise may be affecting the team's impressions.
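As a rough illustration of that check (my own sketch, not from the book), you could compare each quotation to the team average; the member names and prices below are hypothetical:

```python
# A quick, hypothetical check of how far each quotation sits from the team
# average; member names and prices are made up for illustration.
quotes = {"Ana": 95, "Ben": 105, "Chen": 100, "Dee": 190, "Eli": 55}

average = sum(quotes.values()) / len(quotes)
for member, price in quotes.items():
    deviation = (price - average) / average * 100
    flag = "  <- worth examining in an audit" if abs(deviation) > 25 else ""
    print(f"{member}: {price} ({deviation:+.0f}% vs. average of {average:.0f}){flag}")
```

Quotations flagged as far from the average aren't automatically wrong, but they are exactly what the noise audit described next should examine.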

Starting with a noise audit

Before working to reduce noise, teams need to evaluate the noise that's present and determine its impact on the team's collaboration and decision-making. Once they've identified it, leaders can introduce and oversee appropriate noise-reduction protocols. Noise will always exist and cannot be completely eliminated, but it can be reduced.

In these "noise audits," many independent individuals evaluate the same situation and observe where noise is present. After confirming that it is, the team can put strategies in place to reduce it, which tightens the wide-ranging judgments between team members.

Audits are conducted by observing how people evaluate certain things, as in my earlier example of a team setting software pricing. A team of qualified auditors is selected. Together, they develop a structured questionnaire of relevant questions, making sure to avoid irrelevant information.

Auditors then ask each member individually for a quotation and the reasoning behind it. Each member being questioned should not know that all members are asked the same questions. Once the auditors have the answers, they first look at the variation between quotations. Then they list all the perspectives side by side and compare the reasoning behind them, looking for similarities and contradictions. Finally, they evaluate each of the reasons interviewees have offered. A single person may contribute an important, informative perspective, while others' reasons may rest on incorrect information. All of that has to be exposed.

The observations of multiple independent judges can also be helpful. After the full independent evaluation, the auditors formulate an aggregate, concise, understandable finding and present it to all the participants, which triggers further thinking and stimulates higher-quality discussion. This should take place only after all perspectives and components are assembled, explained, and understood. If time permits, the judges should evaluate all the perspectives separately before any overall discussion: first discuss and evaluate each perspective or factor jointly, then evaluate each perspective against all the others. With all the perspectives visible, the group can make higher-quality judgments.

Judgments, not personal opinions

Here the word "judgments" refers to matters of accurate judgment, not matters of individual opinion or taste, where differences are entirely understandable. For judgment calls, everyone need not agree exactly, but they should be in close agreement. If the group senses sizable disagreement, then noise is probably present.

Let's look at the extremes: where noise doesn't exist at all, and where it is everywhere.

On the one hand, you have computer-generated decisions or hard rules that have no noise, because no human judgment is required. On the other, you have wildly different tastes or opinions on the issue at hand. Between them lies judgment, which carries a certain level of noise when addressing an issue. Our goal is to reduce the major noise within that judgment environment (some call this the "expectation of bounded disagreement").

What you want to study in an audit is not just each person's judgment. You also want to learn exactly what factors impact each person when making a judgment. Then, by exposing those factors, noise issues can be addressed, and a tighter agreement should result.

I recommend developing a noise audit to assess your team's judgments. Perform a review by following these steps:

  1. Break the decision into components to assess data.
  2. Use outside views and comparisons for evaluation.
  3. Keep all judgments independent and isolated from others.
  4. Review each assessment separately.
  5. Make judgments individually.
  6. Delay all final decisions until a large number of components and perspectives are exposed. In some cases, each component's judgment might be far more important than any final decision; the final decision could at best be a continual work in progress.
  7. Expose all information in a carefully decided sequence and with common frames of reference.
  8. Aggregate the findings from many independent judgments.
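Step 8 is where much of the payoff lies. Here is a minimal sketch, using hypothetical numeric judgments, of why aggregating independent judgments helps: averaging cancels much of the random (noise) component, so the aggregate usually sits closer to the mark than most individual judgments do.

```python
# A minimal sketch, with hypothetical numbers, of why aggregation helps:
# averaging many *independent* judgments cancels much of the random
# (noise) component.
import statistics

independent_judgments = [112, 96, 104, 88, 120, 101, 93, 107]

aggregate = statistics.mean(independent_judgments)
spread = statistics.stdev(independent_judgments)
# The uncertainty of the aggregate shrinks roughly with the square root of
# the number of judges, which is why many independent judgments are better
# than one loud voice.
standard_error = spread / len(independent_judgments) ** 0.5

print(f"aggregate judgment: {aggregate:.1f}")
print(f"spread of individual judgments (std dev): {spread:.1f}")
print(f"approximate uncertainty of the aggregate: {standard_error:.1f}")
```

Note that this only works if the judgments really are independent (step 3); judgments that influence one another share their errors instead of canceling them.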

Reducing noise to make better decisions

Noise reduction strategies are akin to washing your hands before eating a sandwich. You may not know which germs you're killing, but you know it improves your health. This kind of error prevention is often thankless, because you can't observe it. Likewise, noise-reduction measures prevent unwanted outcomes, but you can't tell whether it was the preventive measures or something else that kept noise from affecting the judgment. Noise is an invisible enemy, and a victory against it is also invisible.

What you have to do is a cost-benefit analysis. If washing your hands takes little time and requires only soap and clean water, and not washing your hands could get you killed, the benefit of washing your hands is greater than the cost. For every decision hygiene project, this analysis must be done to confirm its need and convince everyone of its importance. In other cases, where the effort is great and the benefits small, it might be best simply to tolerate a certain level of noise.

Principles of decision hygiene to improve collaboration

According to the authors of Noise, there are six decision hygiene principles:

  1. The goal of judgment is accuracy, not individual expression or taste.
  2. Think statistically: make comparisons and take an outside, comparison-focused view.
  3. Structure judgments by breaking them down into several independent components.
  4. Resist premature overall intuitions. (Kahneman says judgment should be a two-step process: first, individually and independently judge all components; second, combine all components into the final overall judgment.)
  5. Obtain independent judgments from multiple judges, then consider aggregating those judgments.
  6. Find comparisons to evaluate against (one thing relative to another) and develop relational scales (a framing range).

One way to improve decision hygiene and reduce the chance of noise and bias is to sequence the introduction of information. While collaborating, leaders can provide information at the moment it is needed. If it is delivered too early, it could create noise or bias, distract, or prove irrelevant. This technique is called "linear sequential unmasking."

Irrelevant information could be considered "noise" if it adversely impacts good judgment. Kahneman writes that "bias" pushes a judgment in only one direction away from the ideal, not in any direction like noise. Rules, standards, guidelines, formulas, and algorithms can reduce noise and judgment errors. These rules, formulas, and algorithms are not noisy and could be superior in accuracy. Here, unbiased software developers can be very helpful.

Think of the difference in error rates between paying an automated checkout machine and a human cashier when buying something in a store. Which has the higher chance of giving you the wrong change? Wherever there is human judgment, there is noise. If machines make the judgment, there is no noise, but relying totally on heartless computers may not be appropriate or welcome. At the same time, machines can obtain more information faster than humans can. Considering this, we need to find a balance.

Sometimes, using a detailed checklist of factors impacting any judgment is helpful. After going through the list and considering those factors, better judgments become possible.

In other cases, general guidelines (which are more flexible than firm rules) can help reduce noise. Measuring scales and comparisons can also improve judgment. For example, structured, scripted job interviews are generally more reliable than unstructured, free-form ones.

Complexity breakdown

In complex judgments, breaking the issues down into component parts, tasks and assignments can reduce wild variation in judgment. Those components can be evaluated separately, weighted and finally combined.
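As a minimal sketch of this approach, with hypothetical components, scores, and weights, each component is judged on its own and the weighted scores are then combined into an overall judgment:

```python
# A minimal sketch, with hypothetical components and weights, of judging
# components separately and then combining them into one overall judgment.
components = {
    # component: (score on a 1-10 scale, weight summing to 1.0)
    "market demand":  (7, 0.4),
    "technical risk": (5, 0.3),
    "team capacity":  (8, 0.2),
    "strategic fit":  (6, 0.1),
}

overall = sum(score * weight for score, weight in components.values())
for name, (score, weight) in components.items():
    print(f"{name}: score {score}, weight {weight:.1f}")
print(f"combined judgment: {overall:.1f} / 10")
```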

First word impact

The first words leaders use to explain concepts can introduce noise into decision-making processes. The authors of Noise call this "anchoring," and note that it reduces the chance of highly diverse judgments. The first word in a list becomes the anchor, which we have to fight to avoid weighting too heavily. We must apply critical thinking to this sequencing effect.

Consider this example list of words: "unprincipled," "cunning," "persistent," and "intelligent." Jumping to conclusions, we tend to anchor on the first word and form an overall negative judgment. If the words were instead ordered "intelligent," "persistent," "cunning," and "unprincipled," we might form a more favorable judgment.

Also, we should find quality comparisons (or anchors) to evaluate against. Words like "best," "quality," and "good" can carry different meanings for different team members, but they become more understandable when compared to something specific.

Making the right decision easy

Critical thinking requires effort. Therefore, to reach agreement easily, use justifiable data that are readily available and easy to understand.

Let's consider an example of a team making a decision. Imagine that your team in an open organization must agree on the price of a new product. To start the conversation, everyone suggests a price. If everyone's suggestions are around the same value, an agreement can be easily reached. But if the proposed prices differ greatly from each other, the group needs to find a comfortable middle ground. Here are some steps you might use to come to agreement:

  1. Assign a facilitator to govern the judgment process. The facilitator asks each team member to individually consider what they think the ideal price should be, along with reasons explaining why. Each member then prepares a document recording that price and those reasons and stores it somewhere out of sight.
  2. The members should wait for some time before reviewing the document. It could be overnight ("sleep on it"), a week later, two weeks later, or a month later depending on time available.
  3. After some time has passed, and without looking at the concealed quotation, the facilitator asks the team members again for the ideal price and their reasoning. Each member then compares the two quotations and the two sets of reasons. Hopefully, different perspectives will surface and lead to a more accurate price.
  4. Members might find themselves arguing against their own earlier reasons, and that's okay. Hopefully, each member will be in a different mood, under different time pressures, and at a different energy or emotional level the second time, so different insights are generated.
  5. Then, each member can make a second document incorporating these new insights after reviewing their original quotation (the original "anchor" that Kahneman and his co-authors write about).
  6. After all the members have completed their revised documents of the quotation and the reasons for it, they submit their quotations to the team facilitator. The facilitator calculates the average quotation submitted and the standard deviation among them (see the sketch after this list).
  7. The facilitator prepares visual slides of each document submitted (such as the "target" illustration we discussed earlier), showing each one's deviation from the average. Then, the facilitator invites all the submitting members to a discussion meeting and asks them to be open-minded about new information, perspectives, and reasoning. The group can now make a decision more transparently and swiftly. If you consider "above the bullseye" to mean overpriced, "below the bullseye" underpriced, "to the left of the bullseye" one major reason, and "to the right" another major reason, you can compare differences visually. In Target A, all four prices and reasons are about the same (all close to each other). In Target B, all are priced below what the software could sell for in the market, and for similar reasons. In Target C, one quotation is very high but shares similar reasons with another very low quotation, while the other two have similar quotations for very different reasons, one of them with very accurate reasoning. In Target D, two have similar, accurate pricing but for similar, inaccurate reasons; the other two have similar low prices, but their reasoning is not very accurate.
  8. Leaders should invite the members to explain their quotations and reasoning. The order of the presentations is randomly selected. During each member's initial presentation, don't allow comments or evaluations until all presentations have been given.
  9. Once all the presentations have been explained in detail, line up all the visuals side by side for comparison. That way, everyone can identify the quotations that deviate most from the others, along with their reasoning. Was a quotation based on something no one else considered? Did it rely on false information? Was it ego driven? The group can now discuss and explore these questions while noting similarities and contrasts across all the documents.
  10. After discussion, the team agrees on the most accurate and important reasoning and determines a jointly decided quotation.
  11. This process could be repeated after the software has been on the market for a period of time to make pricing adjustments if need be.
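For step 6, the facilitator's arithmetic is straightforward. Here is a minimal sketch, with hypothetical member names and prices, of computing the average, the standard deviation, and each member's deviation from the average before the discussion meeting:

```python
# A minimal sketch of the facilitator's arithmetic in step 6: the average
# quotation, the standard deviation, and each member's deviation from the
# average. Member names and prices are hypothetical.
import statistics

quotations = {"Member 1": 120, "Member 2": 135, "Member 3": 95, "Member 4": 128}

average = statistics.mean(quotations.values())
std_dev = statistics.stdev(quotations.values())

print(f"average quotation: {average:.2f}")
print(f"standard deviation: {std_dev:.2f}")
for member, price in quotations.items():
    print(f"{member}: {price} (deviation from average: {price - average:+.2f})")
```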

This method will result in a more reasonable quotation with both less noise and less bias.

A final note on noise

In this chapter, I covered several methods for improving group decision-making by reducing "noise": biases, unnecessary judgments, groupthink, and more. Now that you know the concepts, I recommend re-reading this chapter any time you are involved in a major decision-making project. As an open leader, you can help establish noise-reducing decision-making procedures with your teams.