Close but no Cigar for Netflix Recommender System
Ponca City, We Love You writes "In October 2006, Netflix, the online movie rental service, announced that it would award $1 million to the first team to improve the accuracy of Netflix's movie recommendations by 10 percent. Each contestant was given a data set from which to make three million predictions about how certain users rated certain movies; Netflix compared those predictions with the actual ratings and generated a score for each team. More than 27,000 contestants from 161 countries submitted entries, and some got close, but not close enough. Today Netflix announced that it is awarding an annual progress prize of $50,000 to a group of researchers at AT&T Labs who improved the current recommendation system by 8.43 percent. The $1 million grand prize is still up for grabs, and a $50,000 progress prize will be awarded every year until the 10 percent goal is met. As part of the rules of the competition, the team was required to disclose their solution publicly (PDF)."
Re:Moving target? (Score:5, Informative)
Netflix is free to merge any improvements into their actual system in the meantime.
No breakthrough - just blending (Score:5, Informative)
The most noteworthy aspect of the winning entry is that their method works by combining 107 different prediction strategies.
They state that you can get pretty far by blending just the 3-4 best strategies, but of course doing so would not have netted them the progress prize.
It is kind of a sad realization that there actually is no single better method: your best bet is to use brute force and find some weighting scheme that combines known methods. This is a well-known issue in protein structure prediction competitions, by the way - for many years now, so-called meta-servers (which work merely by combining other servers' predictions) have won all the time. The joke is that we now need meta-meta-servers to combine the results of the combiners.
Also, a clarification on the progress prize: to get it you need at least a 1% improvement over the previous result. Considering that there is only 1.57% to go, there is room for only one more progress prize before it hits the Grand Prize (10% improvement over the original results).
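For anyone curious what "blending" means concretely: the idea is just to fit least-squares weights for each strategy's predictions on held-out ratings. A toy sketch (the data, noise levels, and three "strategies" here are all invented - this is not the teams' actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
true_ratings = rng.uniform(1, 5, size=1000)  # held-out "probe" ratings

# Three imperfect strategies: each is the truth plus different noise.
preds = np.column_stack([
    true_ratings + rng.normal(0, 1.0, 1000),
    true_ratings + rng.normal(0, 0.8, 1000),
    true_ratings + rng.normal(0, 1.2, 1000),
])

# Least-squares blend: weights w minimizing ||preds @ w - true_ratings||.
w, *_ = np.linalg.lstsq(preds, true_ratings, rcond=None)
blend = preds @ w

def rmse(p):
    return float(np.sqrt(np.mean((p - true_ratings) ** 2)))

print("individual RMSEs:", [round(rmse(preds[:, i]), 3) for i in range(3)])
print("blended RMSE:    ", round(rmse(blend), 3))
```

Because the strategies' errors are (partly) independent, the blend beats every individual strategy - which is exactly why stacking 107 of them keeps squeezing out small gains.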
Re:I'd say... (Score:3, Informative)
"Straightforward statistical linear models with a lot of data conditioning."
The Netflix programmers shouldn't necessarily get special recognition for using least-squares modeling, but feel free to pass on your praise to Gauss, Legendre, Galton, and Fisher.
What's amazing is how hard it is to improve drastically on these 150-year-old statistical techniques.
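To illustrate what such a "straightforward statistical linear model" looks like for ratings: a classic baseline predicts each rating as a per-user offset plus a per-movie offset, fit by ordinary least squares. A minimal sketch with invented toy data (not Netflix's actual baseline):

```python
import numpy as np

# Invented (user, movie, rating) triples for illustration only.
ratings = [
    (0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 4), (2, 2, 2),
]
n_users, n_movies = 3, 3

# Design matrix: one indicator column per user and one per movie,
# so the model is rating ~ user_offset + movie_offset.
X = np.zeros((len(ratings), n_users + n_movies))
y = np.zeros(len(ratings))
for row, (u, m, r) in enumerate(ratings):
    X[row, u] = 1.0
    X[row, n_users + m] = 1.0
    y[row] = r

# lstsq handles the rank deficiency (user and movie columns overlap)
# by returning the minimum-norm solution.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef  # fitted ratings for the observed pairs
print(np.round(pred, 2))
```

Nothing here that Gauss and Legendre wouldn't recognize, yet models of this family carried a surprising share of the improvement.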
Re:I'd say... (Score:5, Informative)
Egregious errors? It's downright useless unless you pretty much buy only one genre of book/music/whatever. Their system is heavily weighted towards whatever you most recently bought - and drops huge slabs of quasi-related stuff into your recommendation list at the slightest provocation.
I buy (among other things) serious works of culinary history, sociology, etc... Yet my recommendation list is clogged with food porn (coffee-table cookbooks) and the latest crap offerings from whichever TV chef is the flavor of the moment. It also doesn't recognize the difference between editions - if you buy a hardback, it'll happily recommend you buy the paperback. If you buy a frequently reprinted SF novel, it'll happily add each new printing/edition to your queue.