Judge wine, or flip a coin?

Why do so many people pay attention to so many wine competition results when it’s (surely) pretty obvious that they’re a load of nonsense? And surely equally clear that the competitions benefit mostly those who make a lot of money running them, and those who make a bit less judging at them (and sometimes get a freebie visit to exotic places like Barcelona and Paarl)? I think the answer is pretty simple, and I’ll come to that, but first I’ll report on a bit more hard evidence that has emerged.

It came to my attention in a recent article in the Wall Street Journal entitled (most reasonably!) “A Hint of Hype, A Taste of Illusion”. The article tells of the latest published research from Robert Hodgson, a former university teacher of statistics and now the proprietor of a small winery. Dr Hodgson’s first report had shown how utterly unreliable the wine judges he’d studied were. After noting major discrepancies at a Californian competition, he conducted a test in which wines (from the same bottle) were given to the same judges three different times, and received markedly different ratings each time. (Here’s a link to the Wines and Vines report on his research, and a link to where you can find the full report in the Journal of Wine Economics.)

The second bombshell paper, “An Analysis of the Concordance Among 13 U.S. Wine Competitions”, has also been made available in full by the same journal (click here), which published it in September – but the Wall Street Journal account was the first I’d seen of it (if it’s old news to anyone else, I’m sorry for being late).

This time Hodgson looked at wines entering a number of competitions and compared the results. As the Wall Street Journal has it, “The medals seemed to be spread around at random, with each wine having about a 9% chance of winning a gold medal in any given competition.

“To test that idea, Mr. Hodgson restricted his attention to wines entering a certain number of competitions, say five. Then he made a bar graph of the number of wines winning 0, 1, 2, etc. gold medals in those competitions. The graph was nearly identical to the one you’d get if you simply made five flips of a coin weighted to land on heads with a probability of 9%. The distribution of medals, he wrote, ‘mirrors what might be expected should a gold medal be awarded by chance alone.’”
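For the statistically curious, that “chance alone” benchmark is just the binomial distribution, and it’s easy to check what it predicts. Here’s a minimal sketch of my own (not Hodgson’s code): the 9% gold rate and the five competitions come from the Wall Street Journal account, while the 10,000 simulated wines are an arbitrary number purely for illustration.

```python
import math
import random

# From the WSJ account of Hodgson's paper: roughly a 9% chance of gold
# per competition, for wines that entered five competitions.
P_GOLD = 0.09
N_COMPS = 5

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k golds in n independent competitions."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

# What chance alone predicts for 0..5 golds.
for k in range(N_COMPS + 1):
    print(f"{k} gold(s): {binom_pmf(k, N_COMPS, P_GOLD):.3f}")

# The same thing by weighted coin-flip: 10,000 hypothetical wines,
# each "entering" five competitions.
trials = 10_000
counts = [0] * (N_COMPS + 1)
for _ in range(trials):
    golds = sum(random.random() < P_GOLD for _ in range(N_COMPS))
    counts[golds] += 1

print("simulated:", [round(c / trials, 3) for c in counts])
```

Run it and the coin-flip column lands close to the exact binomial figures – the same shape Hodgson says the real medal counts follow.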

This is not all that surprising news. A couple of times some years back, the print edition of Grape published charts showing the often ludicrously varied results achieved by the same wines in different local and international competitions. Of course there is occasionally consistency from different judging panels (witness the Christmas tree appearance of Kaapzicht Steytler Vision), but it is pretty rare. Producers and marketers also know the roulettish nature of competitions very well – which is why those that can afford to will enter as many as they can, knowing that there’s a good chance the wine will do well in at least one of them. No matter if it bombs in another – you don’t send out a press release about that competition, or proudly put the bronze sticker on the bottle.

And, as the retailers will tell you, the little stickers help sell the wine. Why? Obviously the general answer, and the ostensible reason for the competitions as well as for other guides, is the insecurity of most wine-buyers, who rely on the judgements of experts. But I don’t think those consumers are necessarily unaware of the conflicting results – any alert ones surely are. Anyone who’s interested enough to buy Wine magazine, for example, surely realises that when the magazine gives Cape Point Vineyards Sauvignon Blanc a mere one star, this is somewhat anomalous and rather throws into question the merits of the wines that score four or five.

I suspect that the motive behind buying an awarded wine is frequently not that the buyer knows that the wine is therefore likely to be good, or necessarily better than another. The point is that if the insecure wine-buyer gets hold of a bottle that bears a gold sticker – any gold sticker will probably do – then he or she is buying something like an alibi for their choice. If anyone else disparages the wine, it can always be pointed out that, well, some real experts agreed it was great!

No doubt things are a bit more complicated and varied than that. Often a bemedalled wine is chosen simply because there’s a good chance that it at least can’t be bad if it’s passed muster in a show. (If I suddenly found myself in a bottle store in Budapest without having done my homework and without someone to advise me, I daresay I’d look for the wines that had done well in Hungary’s version of Veritas.) But is that justification enough for the shows, and for the added cost per bottle of paying the entrance fees?

It’s a problem to which there’s no easy answer, I fear. Dr Hodgson doesn’t have one, but everyone except competition organisers should be grateful to him for putting on a more scientific footing the depressing and sceptical observation that many of us have made over the years. The big, blind wine competitions will continue for as long as wine producers want to make use of them for their marketing purposes, and for as long as wine-drinkers find some use in them – even if the awards could be allocated pretty much as usefully by the flip of a coin.
