
Big data supports expert wine tasters

In the course of developing software for predicting consumer wine preferences, VineSleuth, a Houston-based startup, has shed new light on the abilities of expert wine tasters and the validity of blind tasting assessments. Contrary to popular belief, the company’s metrics, based on the work of Chief Science Officer Michael Tompkins and his team, show that vetted tasters can consistently identify aroma and flavor characteristics in blind wine evaluations.

“We have extensive experimental data which support that expert evaluators have the capacity to precisely identify wine characteristics in blind repeat samples,” said Tompkins, whose work spans thirteen years in the field of numerical methods. “During the course of our experiments, our vetted evaluators repeat sample characteristics about 90% of the time,” he said.
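For a sense of what a repeatability figure like that could measure, here is a minimal sketch, assuming a toy Jaccard-overlap score between the descriptor sets a taster reports for two blind pours of the same wine. The descriptors and the scoring rule are illustrative inventions, not VineSleuth’s unpublished methodology.

```python
# Hypothetical sketch: score descriptor repeatability between two blind
# evaluations of the same wine as the Jaccard overlap of descriptor sets.
# This is NOT VineSleuth's methodology, which has not been published.

def repeat_agreement(eval_a: set[str], eval_b: set[str]) -> float:
    """Fraction of descriptors shared between two blind evaluations."""
    if not eval_a and not eval_b:
        return 1.0  # both empty: trivially consistent
    return len(eval_a & eval_b) / len(eval_a | eval_b)

first_pour = {"cherry", "vanilla", "oak", "medium tannin"}
second_pour = {"cherry", "vanilla", "oak", "leather"}

print(f"repeat agreement: {repeat_agreement(first_pour, second_pour):.0%}")
# -> repeat agreement: 60%
```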

Michael Tompkins

VineSleuth’s data directly confront the popular misconception that consistency in the sensory evaluation of wine is a matter of chance. In developing an algorithm designed to help consumers make wine selections based on personal preference, the company has established a benchmark based on the results of its top-performing tasters (including this author) and intends to use those data to vet future tasters who participate in ongoing research and product development.
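To make the vetting idea concrete, here is a toy sketch of a benchmark check: a candidate passes if their average blind-repeat agreement meets a cutoff. The 0.90 threshold is assumed from the roughly 90% figure quoted above, and the candidate scores are invented; VineSleuth has not described its actual vetting criteria.

```python
# Hypothetical sketch of benchmark-based vetting. The threshold and the
# candidate's scores are invented for illustration only.

BENCHMARK = 0.90  # assumed cutoff, loosely based on the ~90% figure above

def vet_taster(repeat_scores: list[float], benchmark: float = BENCHMARK) -> bool:
    """Admit a taster whose mean blind-repeat agreement meets the benchmark."""
    return sum(repeat_scores) / len(repeat_scores) >= benchmark

candidate = [0.93, 0.88, 0.91, 0.95]  # agreement across four repeat trials
print("vetted" if vet_taster(candidate) else "needs more trials")  # mean 0.9175 -> vetted
```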


Amy Gross

CEO and co-founder Amy Gross stepped forward with the company’s findings in advance of a beta release of the Wine4.Me smartphone application, wine ranking engine, and website, responding to several blog posts that inferred a general lack of expert repeatability from a study conducted by winery owner Robert Hodgson and published in the Journal of Wine Economics in 2009. Hodgson’s study, which calls attention to inconsistencies in wine competition results, has been widely misinterpreted, casting doubt on the abilities of highly trained wine professionals, including those who participated in VineSleuth’s research.

The relevance of Hodgson’s 2009 study, one that relies on highly subjective data and the work of evaluators who were not equally qualified for the task, has been called into question by VineSleuth’s findings. “Just because panelists in wine competitions can’t repeat results doesn’t mean that individual experts are not able to repeatedly identify a wine’s aroma and flavor characteristics and their intensities in blind samples,” said Tompkins, who relied on experimental and statistical methodologies from the field of sensory science as the basis for VineSleuth’s data acquisition and analyses. “We’re confident that our methodology is statistically valid, and we’re eager to see it applied,” he said.
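Tompkins’s distinction between panel-level disagreement and individual-level repeatability is, in principle, testable. Below is a minimal sketch of an exact one-sided binomial test asking whether a taster who matches descriptors in 18 of 20 blind repeat trials is beating chance; the trial counts and the 25% chance-guessing rate are assumptions for illustration, not figures from VineSleuth’s study.

```python
# Hypothetical sketch: exact one-sided binomial test of whether a taster's
# blind-repeat matches beat a chance-guessing baseline. All numbers invented.
from math import comb

def binomial_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): probability of doing at least
    this well by guessing alone."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

n_trials, n_matches = 20, 18  # ~90% repeat rate, as quoted above
p_chance = 0.25               # assumed chance rate of naming a matching descriptor

p_value = binomial_tail(n_matches, n_trials, p_chance)
print(f"p-value vs. chance: {p_value:.1e}")  # tiny: far better than guessing
```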


3 Comments

  1. Jim Lapsley says

    The “big data” post is interesting. But if it is to be taken seriously, the findings and methodology should be submitted to a reputable science journal for peer review and publication.

    • deborahparkerwong says

      I agree, Jim, and I’ve encouraged VineSleuth to do just that. Hodgson’s shoddy study was published in the Journal of Wine Economics in 2009, which made it fodder for bashing sensory experts.

      • Deborah, I agree with you and VineSleuth that there are a number of extremely competent, reliable, all-around awesome “experts” on the wine judging circuit who can be depended upon on an individual basis. At the same time, having done enough judgings (including with you), I know that there are a lot of wine competition judges whom I have observed to be pretty piss-poor (i.e. the opposite of a Deborah Parker Wong). Especially people who are obviously on a judges’ panel because they “know” the organizers, or because they have participated in the competition before and have become well-liked from a personal standpoint. I get the human element of wine competition judge selection, but I’ll never get why competence doesn’t seem to be the *key* factor.

        This is not even mentioning the fact that most wine competition judges are on the older side (average age closer to 55 than 25), when studies consistently show that sensory abilities diminish with age. While younger wine professionals may have less experience, they’re pretty damned good wine tasters; yet very few wine professionals in their 20s are even invited to sit on wine competition panels!

        Nothing is more demoralizing than to sit at a table judging wine with a presumed “expert” who can’t tell a good (or bad) wine from the side of a barn. Or worse, judges who literally fall asleep at the table (too much partying with fellow wine judge “friends” the night before), or who fail to spit and are therefore zonked out of their minds (more common than many people think!). So much for blind tasting “validity.”

        This is why I also generally agree with findings that wine competition results are simply unreliable. They are exactly what they are purported to be: sum totals of almost random groups of judges who may or (probably just as often) may not fulfill the expectations of any given consumer.

        Other than the plain, basic fact that not every judge is on the ball when evaluating every wine in every round on every single day of a competition (a tall order even for the most dutiful judges, who are often called upon to evaluate over 100 wines a day, forcing them to make snap decisions within seconds; a shame when you consider the hundreds of hours of work that can go into a wine’s production, from vine to bottling), there are *always* less-than-competent judges in the mix. Hence the obvious inconsistencies in results found by observers such as Hodgson (and myself, although few people give a darn about what I think).

        Anyway, that’s my take, based upon experience, not theory.
