This issue comes up in almost every debate on collective intelligence. I could answer in a Manichaean way: "collective intelligence ALWAYS works better than the experts", but that is not true. There are circumstances that favor one option over the other, and both have their drawbacks.
I used to think the answer lay in the "technical" complexity of the problem: if the problem was very complex and demanded specific expert skills, then the expert solution would work better. But then I realized that this is not so relevant, and that the key to success is how you design the spaces for participation.
Let's imagine for a moment that we need to solve a problem and that two alternatives are available to us:
Option-A: 5 experts, invited on the recommendation of another expert.
Option-B: 500 people who propose themselves as participants through a web page. There may be experts among them, but most likely the majority are not.
With Option-A there will always be a bias in the selection filter. People who celebrate the virtues of this system rarely talk about it: experts (or politicians) choose experts because they are friends, because they know the chosen ones will not cause problems, or simply because they don't know anyone else. Looking at the Expert Panels and "Committees of Wise Men" created to solve public challenges, one cannot help but ask: What were the criteria for choosing them? Who selected whom?
With Option-B the doors are open and (in principle) there is equality of opportunity to participate. There are no entry barriers, so anyone can sign up. The problem is no longer the "selection filter at the entrance"; instead, other problems appear:
Ability to attract the best talent: Choosing, inviting, or hiring 5 experts is not the same as hoping that talented people will find you and volunteer to join your party. There is always a reasonable risk that some of the best will not sign up.
Ability to reduce the information overload generated by a large number of participants, a significant percentage of whom may not understand the problem or may not be prepared to contribute interesting ideas.
In Option-B there may be 500 participants, of whom only 50 know the issue well and are as well prepared as the 5 experts of Option-A. By the reasoning above, we would always prefer the second model: now we have 50 experts instead of 5, and moreover they are self-selected, without "any" selection bias.
But there is another (big) drawback we could be overlooking: the "noise" generated by the other 450 participants, who will also want to be heard and will probably "bother" the 50 "who know". If they make too much noise, these 50 experts may perform less effectively than the 5 behind closed doors. Removing the entry barriers therefore turns the problem of the "selection filter" into one of a "quantity-to-quality translation" filter.
What, then, is the solution? We should keep applying an open model without entry restrictions (and thereby avoid selection bias), but introduce some discipline through a system of self-regulation that helps impose order (and reduce noise), allowing us to separate the noise from the melody.
The formula is as simple as it is complex: admission is free, but participants must earn the right to be heard. Members must prove they deserve to stay, because if what they generate is noise, the system will expel them. If this does not happen, collective intelligence can lead to worse outcomes than the expert panel. It is meritocracy, not egalitarianism.
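To make the rule concrete, here is a minimal sketch of such a self-regulation mechanism. This is a hypothetical illustration, not any real platform's algorithm: the class name, the thresholds, and the peer-rating scheme are all assumptions. Everyone joins freely with the same standing; peer ratings pull each member's reputation toward the quality of what they contribute; persistent noise leads to expulsion, and only members who raise their reputation above a "heard" threshold earn a voice.

```python
# Hypothetical sketch of "free admission, but earn the right to be heard".
# All names and threshold values are illustrative assumptions.

class OpenPanel:
    JOIN_REPUTATION = 0.5   # everyone enters with the same neutral standing
    HEARD_THRESHOLD = 0.8   # above this, a member has earned a voice
    EXPEL_THRESHOLD = 0.2   # below this, the system expels the member

    def __init__(self):
        self.reputation = {}  # member name -> current reputation

    def join(self, member):
        # No entry barrier: anyone can sign up.
        self.reputation[member] = self.JOIN_REPUTATION

    def rate_contribution(self, member, peer_score):
        # peer_score in [0, 1]: how useful the group found the contribution.
        # Reputation drifts toward the quality of what the member produces.
        old = self.reputation[member]
        self.reputation[member] = 0.7 * old + 0.3 * peer_score
        if self.reputation[member] < self.EXPEL_THRESHOLD:
            del self.reputation[member]  # persistent noise: expelled

    def heard(self):
        # Members who have earned the right to be heard.
        return sorted(m for m, r in self.reputation.items()
                      if r >= self.HEARD_THRESHOLD)


panel = OpenPanel()
for name in ["ana", "ben", "carla"]:
    panel.join(name)

# Ana contributes useful ideas; Ben produces only noise; Carla stays silent.
for _ in range(3):
    panel.rate_contribution("ana", 1.0)
for _ in range(10):
    if "ben" in panel.reputation:
        panel.rate_contribution("ben", 0.0)

print(panel.heard())               # → ['ana']
print("ben" in panel.reputation)   # → False (expelled for noise)
```

Note the asymmetry the post argues for: Carla is not expelled for joining and saying nothing (the door stays open), but neither is she heard by default; only demonstrated merit grants a voice.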
Note-1: The image in this post belongs to Gregory Pleau's album on Flickr.
Note-2: Read this post in Spanish (Lee este post en Español)