This is a follow-up to my earlier thread, "How accurate do you think the average ratings in the Lens Reviews are?". Since then, Adam has done a major overhaul of the lens rating system and added a number of sub-ratings, so I thought I would start a new thread to avoid confusion.

I think the new system is a major step forward. Hopefully it will end (or at least reduce) the confusion about how value relates to the rating. While running the experiment described below, I came across one user who rated a kit zoom higher than a Limited prime. Value has its place, but that's a bit extreme!

Although the new system is a big improvement, it doesn't address the issue of different reviewers using intrinsically different scales. At one end are people who have only reviewed the kit lens, and at the other are people who have only reviewed Limited lenses (lucky devils). There's also "ballot stuffing": I came across people who said (paraphrasing) "I'm giving an extreme score to counteract the scores I don't agree with", and people who submitted multiple reviews of the same lens, all with the same high score.

To see whether there's a smarter way to compute averages that minimizes these effects, I ran a little experiment. I scraped the reviews for a small number of lenses (listed below) and extracted each reviewer's rating of each lens. (You may want to skip the following details on a first reading.) Reviewers who rated only one lens, or who gave the same rating to every lens they reviewed, were ignored. For the remaining reviewers I scaled each person's ratings to the range 0 (their lowest rating) to 1 (their highest). From these scaled ratings I computed an average rating for each lens, and then an average rating for each user by averaging the ratings of the lenses they reviewed. I then iterated this procedure, taking the user ratings into account when recomputing the lens ratings, until the lens ratings converged. The ratings obtained this way are not on a natural scale, so I transformed them to have the same mean and variance as the original average lens ratings.
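Here's a rough sketch of that procedure in Python. It's a simplified reconstruction, not my actual scraping code; in particular, the exact way the user rating is "taken into account" (here, subtracting each reviewer's offset from the overall mean) is one reasonable choice among several, and a fixed iteration count stands in for a proper convergence test.

```python
from statistics import mean, pstdev

def iterate_ratings(reviews, n_iter=50):
    """reviews: {user: {lens: raw_rating}}.
    Returns {lens: adjusted_rating}, rescaled to the mean and
    variance of the simple per-lens averages."""
    # Drop users with a single review or with no spread in their ratings.
    kept = {u: r for u, r in reviews.items()
            if len(r) > 1 and max(r.values()) > min(r.values())}
    # Scale each user's ratings to [0, 1].
    scaled = {}
    for u, r in kept.items():
        lo, hi = min(r.values()), max(r.values())
        scaled[u] = {l: (v - lo) / (hi - lo) for l, v in r.items()}
    lenses = {l for s in scaled.values() for l in s}
    # Start from plain averages of the scaled ratings.
    lens_rating = {l: mean(s[l] for s in scaled.values() if l in s)
                   for l in lenses}
    for _ in range(n_iter):
        # Each user's rating: average rating of the lenses they reviewed.
        user_rating = {u: mean(lens_rating[l] for l in s)
                       for u, s in scaled.items()}
        overall = mean(user_rating.values())
        # Re-estimate each lens, discounting each reviewer's personal level.
        lens_rating = {
            l: mean(s[l] - (user_rating[u] - overall)
                    for u, s in scaled.items() if l in s)
            for l in lenses}
    # Rescale to match the mean/variance of the simple raw averages.
    raw_avg = {l: mean(r[l] for r in kept.values() if l in r)
               for l in lenses}
    m_raw, s_raw = mean(raw_avg.values()), pstdev(raw_avg.values())
    m_new, s_new = mean(lens_rating.values()), pstdev(lens_rating.values())
    return {l: m_raw + (v - m_new) * s_raw / s_new
            for l, v in lens_rating.items()}
```

The final rescaling means the adjusted ratings can be read on the same scale as the familiar 0-10 averages.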

The new rating and the old rating (the simple average) for the small subset of lenses I applied the procedure to are shown in the following table:

Lens                                              New rating  Old rating
SMC-Pentax-FA-31mm-F1.8-Limited                        10.08        9.71
SMC-Pentax-FA-77mm-F1.8-Limited                        10.03        9.77
SMC-Pentax-DA-Star-50-135mm-F2.8-SDM-Zoom               9.67        9.59
SMC-Pentax-FA-43mm-F1.9-Limited                         9.57        9.52
SMC-Pentax-DA-40mm-F2.8-Limited-Pancake                 9.27        9.67
SMC-Pentax-DA-70mm-F2.4-Limited                         9.06        9.42
SMC-Pentax-FA-Star-24mm-F2                              8.98        9.37
SMC-Pentax-FA-50mm-F1.4                                 8.94        8.86
SMC-Pentax-DA-18-135mm-F3.5-5.6-ED-AL-IF-DC-WR          8.74        8.19
SMC-Pentax-DA-18-250mm-F3.5-6.3-Zoom                    8.40        8.65
SMC-Pentax-DAL-55-300mm-F4-5.8-Zoom                     8.30        8.47
SMC-Pentax-DA-18-55mm-F3.5-5.6-II-Version-2-Zoom        8.25        8.23
SMC-Pentax-DAL-50-200mm-F4-5.6-Zoom                     7.56        7.40
SMC-Pentax-DAL-18-55mm-F3.5-5.6-Zoom                    7.29        7.31

Overall, I'm pretty surprised by how similar the two scores are. I'm happy that the DA 18-135 has moved up relative to the 18-55 kit lenses, and it's reassuring that the different rating scales of the various reviewers don't make much of a difference. But does it change anything significantly enough to be worthwhile?