by Robert Joseph

What’s the point of wine competitions?

Wine competitions are the worst way to identify the world’s best wines – apart from all the others that have been tried. I make no excuse for mangling Churchill’s famous quote about democracy and using it here, because getting a group of people to blind-taste a series of bottles is effectively a democratic process: undeniably flawed, but one that actually works better than the alternatives.

Immodestly, I can claim to know a little about wine competitions. Apart from judging at a long list of events including the SAA Wine Show in the Cape and regional and national competitions in Australia, New Zealand and Europe, I’ve actually run a few myself: 49 International Wine Challenges (IWC) in London, Japan, China, Russia, Hong Kong, Vietnam, Poland, Singapore, Sydney and Thailand, to be precise… as well as the Tri Nations in Sydney which set wines from Australia, New Zealand and South Africa against each other.

Like democratic elections, these events and others in which I have participated as a judge have differed in the ways in which they were run. Australian ‘Wine Shows’, when I took part in them, were often oddly competitive in their own right. Tasters tested themselves and each other to see how many wines they could usefully assess. I recall one event at which I had to sniff, sip and spit my way through a single ‘flight’ of 150 ‘current vintage’ Chardonnays, before rounding off the day with 40 reds and fortifieds. But there have also been French competitions at which we were expected to focus our attention on 20 or so undistinguished wines before being liberated to concentrate on the far more important business of a four-course lunch. The New World tasters have tended to have quite similar knowledge, tastes and criteria; in Europe, by contrast, I’ve found myself judging Alsace wines alongside Frenchmen for whom Gewurztraminer was a novel experience.

The competitions with which I’ve been involved have evolved too. In earlier IWCs, tasters were unaware of the country of origin of the wines they were judging, and we expected them to assess, say, a Merlot-based wine from St Emilion against ones from Stellenbosch and Sonoma. After analysing the results and talking to the judges, we abandoned that system in favour of providing information about origin and grape variety, but all of the competitions run according to OIV (International Organisation of Vine and Wine) rules still prefer to leave their judges in the dark. For a while, we tried giving judges an indication of the prices of the wines they were tasting – as the Decanter World Wine Awards (DWWA) does with its Over-£15 and Under-£15 categories – but we gave that up too. Judges at OIV competitions are forbidden to discuss the marks they have given; judges at most New World competitions, the DWWA and the IWC, have to come to a consensus over every award.

Unsurprisingly, especially given my experience over 25 years at all manner of wine competitions, I believe that the methodology we developed at the IWC is the one that works best. The decision to reveal regions and grape varieties stemmed from the realisation that not doing so occasionally led to wines being given awards that might not make sense to anyone who actually bought them. I recall tasting as part of a group at an OIV competition, all of whom decided that we were tasting Pinot Bianco, and that was the basis on which we allocated the medals. When it was too late to revise those awards, we were informed that all the wines we’d judged were labelled as Chardonnay; none of them warranted any kind of recommendation as an example of that grape.

Pricing was another issue. If it’s right for tasters to know the grape variety and origin because consumers have at least part of that information when they select a bottle, why not tell them the price as well? The problem we discovered was that price is a relative factor. To one person, £15 is a lot of money to pay for a bottle of wine; for others, who are used to paying £30 or more for Burgundy, it’s relatively cheap. One taster, when given a £20 bottle, tended to think “I’m damned if I’m going to give a wine that pricey – and, by implication, that poor value for money – a gold medal”, while another would say “£20 suggests this is a serious wine – and thus potentially more likely to be worthy of a gold than one priced at £10”. Judging without knowing prices yielded more interesting results; we found that it led to a higher proportion of inexpensive wines walking away with big prizes.

I also like the fact that wines at the IWC are tasted up to four times before they leave the process, and that all entries are sampled at least once by a ‘super jury’ – including reliable palates such as Tim Atkin MW, Charles Metcalfe and Oz Clarke – to ensure fairness and consistency. And I’m pleased that Mundus Vini, the German-based competition of which I’m a director, has finally left the OIV system. There are indeed lots of aspects of Mundus Vini, including the Germanic efficiency of its organisation and the brilliant charts that reveal the flavours and characteristics the tasters found in each wine, that explain its success in mainland Europe. But I still think that all competitions – indeed all wine judging – are fundamentally flawed.

So why is that? First, there is the question of bottle variation. Even setting aside the issue of wines that are very slightly corked – a more frequent problem than is usually recognised – wines sealed with natural corks can differ hugely thanks to the variation in the amount of oxygen these closures allow into the bottle. Far too often, when I check the half-dozen or more bottles of the same wine to be served at a dinner where I am a speaker, I find gold medal-worthy examples – and ones that would struggle to get a bronze. Next, there is the context: wines that come at the end of a line of bigger or lighter, oakier or unoaked samples can often be unfairly judged. Barometric pressure can affect the way wines are perceived, as, for those who believe in such things, can the biodynamic calendar. (I’m agnostic about this last factor, but the down-to-earth people at Marks & Spencer now schedule their press tastings to coincide with biodynamically ‘propitious’ days.)

All of these factors can affect the way a wine performs at a competition, though hopefully the fact that there is a panel of tasters mitigates the additional variation that individual judges’ physical and mental states can introduce. Wine writers are made of flesh and blood; the score they give a wine might well be explained by factors ranging from a light head cold to mortgage worries or pregnancy.

My own answer, after all these years, is simple. I look out for wines that have won awards in three or more reputable competitions. I have seen wines strike lucky or unlucky in a single wine show, but it is a very, very rare example that can bamboozle several sets of judges.