
Short-Circuit Predictors

On /r/science there's a link to a phys.org article on successfully predicting movie revenues by analyzing Wikipedia activity: Link to phys.org.

Maybe someone can conjure up the actual math paper, but there's an interesting twist apparent in the newsy article about it:

'The predicting power of the Wikipedia-based model, despite its simplicity compared with Twitter, is that many of the editors of the Wikipedia pages about the movies are committed movie-goers who gather and edit relevant material well before the release date. By contrast, the "mass" production of tweets occurs very close to the release time, and often these can be spun by marketing agencies rather than reflecting the feelings of the public.'

If this predictor were adopted, there's an obvious short circuit that can render it useless: producers can spoof the signal by unleashing hordes of wiki editors to amp up the revenue projections, and thereby make it easier to raise money for the film. This is close to the idea of 'wireheading' in AI.
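
To make the short circuit concrete, here's a minimal toy sketch. Nothing below comes from the paper; every constant and name (predict_revenue, the demand model, the spoof size) is invented for illustration. A linear revenue predictor fit on organic edit counts hands out an inflated projection the moment the edit count is artificially padded:

```python
# Toy model of the spoofing short circuit. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

# Organic world: pre-release edit activity loosely tracks true demand.
demand = rng.uniform(0, 1, size=200)                  # latent audience interest
edits = 50 * demand + rng.normal(0, 5, size=200)      # edits driven by demand
revenue = 100 * demand + rng.normal(0, 10, size=200)  # box office driven by demand

# Fit the naive predictor: revenue ~ edits.
slope, intercept = np.polyfit(edits, revenue, deg=1)

def predict_revenue(edit_count):
    return slope * edit_count + intercept

honest_film = 50 * 0.3           # a film with modest real demand (~15 edits)
spoofed_film = honest_film + 40  # same film after a paid editing campaign

print(predict_revenue(honest_film))   # modest projection
print(predict_revenue(spoofed_film))  # inflated projection, same movie
```

The regression is perfectly sound on organic data; it breaks only because, once the predictor is adopted, edit counts stop being a passive byproduct of audience interest and become a target.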

I have thought and written about this problem for a while, and wonder if there are examples from ML where self-reference interferes with models. More interestingly: what does an intelligent machine or organization do to prevent this kind of short circuit?
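
As a hedged sketch of the self-reference angle, suppose (my assumption, not the article's) that published forecasts feed back into the very editing activity the model is retrained on:

```python
# Toy closed-loop retraining. Dynamics and constants are assumptions,
# purely illustrative, not taken from any paper.
import numpy as np

rng = np.random.default_rng(1)
true_effect = 2.0   # real edits-to-revenue relationship
coef = true_effect  # model starts out correct

for round_ in range(10):
    demand = rng.uniform(0, 1, 100)
    edits = demand                  # organic signal
    # Once predictions are public, editing is partly driven by the
    # forecast itself (hype chasing the model's own output).
    forecast = coef * edits
    edits = edits + 0.5 * forecast  # self-referential contamination
    revenue = true_effect * demand + rng.normal(0, 0.1, 100)
    # Refit on the contaminated signal.
    coef = np.polyfit(edits, revenue, 1)[0]
    print(f"round {round_}: fitted coefficient = {coef:.3f}")
```

The fitted coefficient drifts away from true_effect and settles at a fixed point well below it: the model ends up partly measuring its own echo, even with no adversary in the loop.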

submitted by szza
