Halfbakery

Rating normalisation system

better ratings from diverse sparse evaluations
(+2)

It's not uncommon for lists of cultural stuff (such as movies and online games) to have crowd-sourced ratings.

There are several issues with this, but one is that each reviewer has a different model of the rating scheme, and only rates a small proportion of the content.
For example, in a one-to-five star scheme, one person may aim to rate 20% of the material at each level on average, while another may expect a bell-shaped distribution and score many more items as average, reserving 1s and 5s for the outliers. In practice, many people rate everything 1, except for a few favorites which may get a 5 if they're very lucky.

While this sort of system tends to kind-of work in aggregate when there are a large number of votes, the variance is very high when there are only a few reviews.
I suggest that the naive model could be greatly improved by keeping track of each reviewer's record, and attempting to map their scores onto a standard model. A user's over-represented ratings would hence be weighted down, reducing their effect on the overall evaluation.
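The mapping suggested above could work by quantile normalisation: look at how often a reviewer uses each level, and spread their over-used levels across a standard (here, uniform) target scale. A minimal sketch in Python, assuming a 1-to-5 star scheme; the function name and the choice of a uniform target distribution are illustrative assumptions, not part of the idea as posted:

```python
from collections import Counter

def normalise(ratings, levels=5):
    """Map one reviewer's raw 1..levels ratings onto a standard model
    via their empirical rank (quantile mapping). A level the reviewer
    over-uses gets spread toward the middle of the scale; a level they
    reserve for outliers keeps its extreme position."""
    n = len(ratings)  # assumes the reviewer has rated at least one item
    counts = Counter(ratings)
    # cumulative count of this reviewer's ratings strictly below each level
    below, total = {}, 0
    for level in range(1, levels + 1):
        below[level] = total
        total += counts.get(level, 0)
    out = []
    for r in ratings:
        # midpoint of this level's quantile band, rescaled back to 1..levels
        q = (below[r] + counts[r] / 2) / n
        out.append(1 + q * (levels - 1))
    return out
```

For the "rates everything 1 except a lucky favorite" reviewer, `normalise([1, 1, 1, 1, 5])` pulls the habitual 1s up toward the middle of the scale while keeping the rare 5 near the top, so their one enthusiastic vote no longer counts the same as a 5 from someone who hands them out freely.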

It would be reasonable to inform reviewers that such a system was in place, and show them how their record matched it.

There are obvious concerns with ensuring that such a system is functional; solving them is left as an exercise for the interested reader.

Loris, Nov 12 2018

Rating by Ranking Rating_20by_20Ranking
An alternative approach [Loris, Nov 12 2018]

CMV discussion of five-star-ism https://www.reddit....ing_all_ratings_to/
Basically, explains why something like this scheme is necessary [notexactly, Nov 12 2018]







       The problem with this is that most reviewers are idiots.
MaxwellBuchanan, Nov 12 2018
  

       Then take it one step further and when displaying the rating to someone who has given many ratings already, display the rating normalized to the scale that they use.
scad mientist, Nov 12 2018
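[scad mientist]'s suggestion amounts to inverting the normalisation: convert the site-wide rating to a quantile, then read off which of the viewer's own habitual levels covers that quantile (an empirical inverse CDF). A rough sketch under those assumptions; the function name and the quantile-lookup method are illustrative:

```python
def to_personal_scale(global_rating, user_ratings, levels=5):
    """Display a site-wide rating on one reviewer's habitual scale:
    turn the global rating into a quantile, then look up the rating
    this user typically gives at that quantile position."""
    q = (global_rating - 1) / (levels - 1)   # map 1..levels onto 0..1
    ranked = sorted(user_ratings)
    idx = min(int(q * len(ranked)), len(ranked) - 1)
    return ranked[idx]
```

So a middling global 3 would display as a 1 to the harsh rater who scores almost everything 1, while a near-top global rating would still show as their rare 5.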
  

But some people might be good at buying things that are excellent quality and which they really like - and so everything they review is genuinely a ***** item. Other people might only review those things which are ***** and not bother leaving reviews for anything else. Neither of these would give true results when put through a normalisation which assumes that people buy and review random things.

(Note - the line of five asterisks above denotes a "Five star" review and is not an obfuscated obscenity)
hippo, Nov 12 2018
  

       //Then take it one step further and when displaying the rating to someone who has given many ratings already, display the rating normalized to the scale that they use.//   

       This would be great, because it would encourage people to use the full dynamic range.   

       //But some people might be good at buying things that are excellent quality and which they really like - and so everything they review is genuinely a ***** item. Other people might only review those things which are ***** and not bother leaving reviews for anything else.//   

       Absolutely, and this is one of those concerns I left to be addressed.
It occurs to me, though, that this isn't a universal issue. If you have a large library of frequently-accessed content/items with relatively low turnover, then this is a problem: people predominantly visit the stuff which is already well-liked, because ... well, that's the point of the rating system. But in that instance you don't really need anything fancy, because standard ratings will work just fine anyway.
However, if you have a large stream of previously un-rated content, that's where you want to quickly evaluate things based on a few quirky reviewers.
Loris, Nov 12 2018
  

       // The problem with this is that most reviewers are idiots. //   

       The problem with this is that most reviewers are humans. Oh, wait ...
8th of 7, Nov 12 2018
  

       You could normalise ratings by sending all your customers the same 100 random items and getting them all to review all these items
hippo, Nov 12 2018
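[hippo]'s shared-reference-set idea gives each pair of scales a common anchor: if everyone rates the same 100 items, you can fit a per-user correction against the pooled scores. A minimal sketch, assuming an affine (mean-and-spread) fit; the function name and the choice of affine correction are assumptions, not part of the annotation:

```python
from statistics import mean, stdev

def calibration(user_scores, reference_scores):
    """From two lists of ratings for the same shared items, fit an
    affine map (gain, offset) taking this user's scale onto the
    reference scale: corrected = gain * raw + offset. Matches the
    user's mean and spread to the reference's mean and spread."""
    gain = stdev(reference_scores) / stdev(user_scores)
    offset = mean(reference_scores) - gain * mean(user_scores)
    return gain, offset
```

A user who rates the calibration items consistently one star lower than the crowd would come out with a gain of 1 and an offset of +1, so their future ratings could be shifted up before aggregation.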
  

       Perhaps a better solution might be for suppliers to rate their own product, since they will have the most experience of it. Thus, they could advertise their Premium Beluga as "really jolly good, five stars", or their lumpfish roe as "well, it's a lot cheaper but don't give it to anyone who knows real caviar - probably about two stars".   

       Of course, the more devious retailers might find loopholes in this system.
MaxwellBuchanan, Nov 12 2018
  

       Abandon star ratings and just allow people to rate things [Splendid | Marvellous | Mediocre | Slightly disappointing | Willfully repugnant]
hippo, Nov 12 2018
  

       //Abandon star ratings//   

       Star ratings are all over the place and have a serious dynamic range problem. Uber, for example, have a 5 star system. The drivers get kicked off the service if they drop below 4.5 or so. That means the real rating system spans 0.5 stars for driving quality. I wonder what the first 4.5 are for? Basic human decency? "2/5, got to destination late because driver was recruiting child soldiers"
bs0u0155, Nov 12 2018
  

       1/5 - unfortunately, driver was single-celled organism - needs few hundred million years to evolve
hippo, Nov 12 2018
  

       eBay is equally bad in that respect.
MaxwellBuchanan, Nov 12 2018
  

       In light of the last few annotations before this one, my link is now far more relevant than it was when I posted it!
notexactly, Nov 12 2018
  
      


 
