In his book The Wisdom of Crowds (2004), James Surowiecki popularized the notion that, under the right conditions, canvassing the aggregate opinions of many people can be more accurate than relying on the expertise of a few.
Jeff Howe applied this approach to business decision-making, coining the buzzword ‘crowdsourcing’ in a Wired article in June 2006.
Crowdsourcing assumes that customers know best what they want and need. Hence, more heads are better than one. We discuss why crowdsourcing may fail in a few important situations that concern social media.
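The statistical intuition behind ‘more heads are better than one’ can be sketched in a few lines: if many people estimate the same quantity and their errors are independent, the average of their guesses tends to beat most individuals. The numbers below (a jar-guessing scenario, the crowd size, the noise level) are invented for illustration, not taken from Surowiecki or this post:

```python
import random

random.seed(42)

TRUE_VALUE = 1000   # hypothetical quantity the crowd estimates (e.g. jellybeans in a jar)
CROWD_SIZE = 500

# Each person's guess is the true value plus independent individual noise.
guesses = [TRUE_VALUE + random.gauss(0, 300) for _ in range(CROWD_SIZE)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)

# Count how many individuals were more accurate than the crowd's average.
better_individuals = sum(1 for g in guesses if abs(g - TRUE_VALUE) < crowd_error)

print(f"crowd estimate: {crowd_estimate:.1f} (error {crowd_error:.1f})")
print(f"individuals more accurate than the crowd: {better_individuals} of {CROWD_SIZE}")
```

Note that this only works when the errors are independent – which is precisely the condition the rest of this post argues social media crowds often fail to meet.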
Crowds innovate – NOT
For us at ComMetrics, innovation is a step-by-step process (idea to prototype) in which each stage of development is combined with regular measurement of factors critical to success – for example, time used and money spent, set against success in the market as reflected by new subscribers and their feedback.
Our ideas came from various places, but then we went to the lab and built. Social networks came into play once we had a prototype and wanted feedback.
Our approach is reflected by Dyer, Gregersen and Christensen’s The Innovator’s DNA (December 2009).
In their Harvard Business Review article, the authors identify five ‘discovery skills’ – associating, questioning, observing, experimenting and networking – that separate true innovators from the rest of us, and conclude that innovators simply apply these behaviors more skillfully.
It seems a bit naive to think that going to Dodger Stadium or the LA Coliseum, in the hope that most people attending the game exhibit the above behaviors, would help us innovate faster.
Crowd-wisdom helps consumers – NOT necessarily
While crowds may not innovate, they can still provide wisdom when it comes to product reviews. Superusers’ product reviews on Amazon or eBay influence many. One could ask, however, how reliable these ratings and reviews really are. A recent comment on a blog post addresses this in more detail:
A real concern is the wisdom of crowds who are herded by power-users writing the first review for a product. Any attempt to turn mob opinion into a test for truth is pernicious.
The notion that a book might be a must-read because it is highly ranked by many on Amazon does not make it Nobel prize material. The earth did not stand still just because Galileo fell out of favor, nor has evolution been shown to be false due to the faith of believers.
Hence, when product reviews are driven by superusers and by crowds that follow their lead, the wisdom of crowds can only be conventional: volume wins over quality.
=> SocioTwitting: Developing metrics for Twitter – volume vs. influence

– Urs, @ComMetrics
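The herding effect the comment describes can be sketched with a toy model: assume each later reviewer posts a blend of their own private opinion and the running average of earlier reviews. The weights and rating scale below are invented for illustration; the point is only that the very first review anchors everything that follows:

```python
import random

def simulate_reviews(first_rating, n_followers=200, herd_weight=0.7, seed=7):
    """Later reviewers blend their private opinion with the running average."""
    rng = random.Random(seed)
    ratings = [first_rating]
    for _ in range(n_followers):
        private = rng.uniform(1, 5)  # the reviewer's own honest rating
        running_avg = sum(ratings) / len(ratings)
        # Herding: the posted rating leans toward what earlier reviews say.
        posted = herd_weight * running_avg + (1 - herd_weight) * private
        ratings.append(posted)
    return sum(ratings) / len(ratings)

# Same product, same reviewers (same seed) - only the first review differs.
avg_after_five_star = simulate_reviews(first_rating=5.0)
avg_after_one_star = simulate_reviews(first_rating=1.0)

print(f"average rating after a 5-star first review: {avg_after_five_star:.2f}")
print(f"average rating after a 1-star first review: {avg_after_one_star:.2f}")
```

With identical underlying opinions, the final average still differs noticeably depending on whether a power-user opened with one star or five – the mob’s verdict reflects its starting point, not just product quality.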
Thumbs Up or Down works but fails to explain why
Crowds do not drive innovation or bring it to successful fruition in the form of a marketable product. Nor are they the best source for assessing quality – whoever shouts the loudest is heard the most.
Nevertheless, crowds can tell you whether they like or dislike something. For instance, Bonobos found that a crowd can come up with a name and pick the one it likes most: the company emailed customers and asked them to name a new pair of trousers – the winner was the Dark and Stormys brand.
But using crowds for things like A/B tests (i.e. comparing several groups’ reactions – including a control group – to different versions of a webpage in order to improve it), or for a simple thumbs-up or thumbs-down vote, risks two things:
To minimize the chance of throwing the baby out with the bathwater, it is best to talk to some clients and find out the why, as we learned the hard way when redesigning this blog.
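For what it’s worth, the A/B test mentioned above boils down to one question: is the difference in conversion rates between two page versions larger than chance would explain? A minimal sketch using a two-proportion z-test – the visitor and conversion figures are made up, not from this blog’s redesign:

```python
import math

def ab_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: did variant B convert differently from A?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical numbers: old page vs. redesigned page.
p_a, p_b, z, p = ab_test(conversions_a=120, visitors_a=2400,
                         conversions_b=160, visitors_b=2400)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z={z:.2f}  p={p:.4f}")
```

Which is exactly the limitation: the test tells you that B beat A, but nothing in those numbers tells you why – for that, you still have to talk to your clients.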