Most of us have read about the Virginia restaurant that refused to serve a White House staffer and within 24 hours found ten thousand negative reviews had been posted on Yelp.
Yelp responded by saying the restaurant’s rating had been affected by reactions to news coverage rather than by reviewers’ personal experiences. TripAdvisor temporarily froze its online reviews when the same thing happened to one of its listings.
There are many terms for the posting of deliberately false reviews for personal reasons (brigading, sock puppeting, astroturfing, etc.). But more important than the lingo is that the practice is spreading rapidly. And what do you think is at the center of the phenomenon?
Business Insider reports that mathematicians have taught A.I. to write believable fake reviews that sneak past screening algorithms in an escalating cycle of one-upmanship.
Think of it like the software hacking/patching cycle, where we constantly need to update our apps.
It is the equivalent of an arms race where each new offense is countered by a new defense that is overcome by a new offense, and so on. And just like an arms race, the speed, magnitude, and coordination of these activities are growing, too.
Studies show between 80% and 90% of us use online ratings in our decision-making.
Yet estimates put the share of fake reviews at 25% (Yelp), 40% (Amazon), and 50% (Washington Post) of all reviews. The Telegraph says, “Given the clandestine nature of the fake reviews, it would be almost impossible to arrive at a credible figure.”
Studies also show 85% of us are unable to tell fake reviews from real ones. Not a problem, some say, as there is plenty of advice that helps us sort the fakes from the true ones.
They are absolutely right.
We can find scads of information on the InterWeb that tells us how to recognize fake information (shorter comments, smaller words, no details). But wiser heads know that the people behind fake reviews read this advice too, and revise their algorithms accordingly.
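The published advice boils down to a handful of surface signals. As a minimal sketch (the thresholds and the detail check are my own illustrative assumptions, not a real detector), those heuristics look something like this:

```python
# Illustrative sketch of the surface heuristics the how-to-spot-fakes
# advice describes: shorter comments, smaller words, no details.
# Thresholds are arbitrary assumptions for demonstration only.

def fake_review_signals(text: str) -> dict:
    words = text.split()
    avg_len = sum(len(w) for w in words) / len(words) if words else 0.0
    has_detail = any(ch.isdigit() for ch in text)  # crude proxy for specifics
    return {
        "too_short": len(words) < 20,   # "shorter comments"
        "simple_words": avg_len < 4.0,  # "smaller words"
        "no_details": not has_detail,   # "no details"
    }

def looks_fake(text: str) -> bool:
    # Flag a review if a majority of the naive signals fire.
    return sum(fake_review_signals(text).values()) >= 2
```

The weakness is obvious once it is written down: anyone generating fake reviews can pad the length and sprinkle in specifics to defeat exactly these checks, which is the article’s point about revised algorithms.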
The idea that we can easily see through sophisticated efforts to produce fake reviews is nonsense.
It might be wise to put less trust in online ratings and more in the old-school Human Intelligence approach: ask friends and co-workers about their experiences, process the data, and make up your own mind.