Let us say we have some sensors all trying to measure the temperature in a room. Each sensor makes some measurements, but has some measurement error. Some sensors are accurate, some read too high, and some too low. Given a set of sensor measurements, I want to determine which of the sensors are most trustworthy, and then infer the correct temperature by, for example, a weighted average.
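To make this concrete, here is a rough Python sketch of the kind of scheme I have in mind (the function name and the inverse-error weighting rule are just my own illustration, not an established method): alternate between estimating the truth as a trust-weighted average and re-scoring each sensor's trust by how far it deviates from that estimate.

```python
import numpy as np

def estimate_truth(readings, n_iters=20):
    """Jointly estimate true values and per-sensor trust weights.

    readings: array of shape (n_sensors, n_measurements).
    Returns (truth, weights), where truth has shape (n_measurements,)
    and weights has shape (n_sensors,) and sums to 1.
    """
    n_sensors = readings.shape[0]
    weights = np.ones(n_sensors) / n_sensors   # start with uniform trust
    for _ in range(n_iters):
        # Truth estimate: trust-weighted average across sensors
        truth = weights @ readings
        # Each sensor's error: mean squared deviation from the estimate
        errors = ((readings - truth) ** 2).mean(axis=1)
        # Larger error -> less trust; epsilon avoids division by zero
        weights = 1.0 / (errors + 1e-9)
        weights /= weights.sum()
    return truth, weights

# Toy data: one accurate sensor, one biased high, one noisy and biased low
rng = np.random.default_rng(0)
readings = np.stack([
    21.0 + rng.normal(0.0, 0.1, 50),
    22.5 + rng.normal(0.0, 0.1, 50),
    19.0 + rng.normal(0.0, 1.0, 50),
])
truth, weights = estimate_truth(readings)
print(truth.mean(), weights)   # weight should concentrate on sensor 0
```

The same loop would presumably apply to the reviewer example below, with reviewers as rows and papers as columns (and missing entries masked out, since not every reviewer sees every paper).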
Another situation is this: You are planning a scientific conference, and people have submitted a bunch of papers. Each paper is given to a set of reviewers, who each assign a mark to the paper. Now you have a set of papers, each with a set of marks. Since reviewers can be biased, we want to infer the correct marks - that is, we want to clean up the data.
What I want to know is what these systems/methods are normally called. They seem to work a lot like recommender systems, but instead of predicting missing entries, they clean up actual entries. So far I have not gotten much closer than the keywords: filtering, aggregation, truth discovery.