Author

Racine, Wisconsin, United States
We (my wife and I) are celebrating the 11th Anniversary of HAPLR, and more importantly, our 38th Anniversary. The HAPLR system uses data provided by 9,000 public libraries in the United States to create comparative rankings. The comparisons are in broad population categories. HAPLR provides a comparative rating system that librarians, trustees and the public can use to improve and extend library services. I am the director of Waukesha County Federated Library System.

Sunday, December 13, 2009

Misbehaving Data...

LJ has yet to deny that the highly unlikely Public Internet Use numbers it used for San Diego County Library mean that the library's 5-star LJ Index rating is wrong.

Rebecca Miller states that “the LJ Index did precisely one of the things it was designed to do: shine a spotlight on inaccurate data so it can be corrected.”

Nearly two weeks ago I asked her: “Do you plan to shine a spotlight on the scores of other libraries that had similarly unlikely data?”

Having heard nothing for two weeks, I assume the answer is no. I also assume that LJ will have no objection if the information is “spotlighted” elsewhere.

Transparency is the watchword that LJ uses for the LJ Index, yet the published spreadsheets omit crucial data for calculating the LJ Index scores. The spreadsheets show the raw per capita figures for each library on circulation, visits, attendance, and Internet use, but we look in vain for the further calculations that turn those figures into scores.

Fair warning: the next few paragraphs get deeper into statistics than I would like.

The LJ Index uses standard scores. A standard score indicates how many standard deviations an individual value sits above or below the average (mean) for that particular measure. It is derived by subtracting the population mean from an individual raw score and then dividing the difference by the population standard deviation. This conversion process is called standardizing or normalizing.
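To make the arithmetic concrete, here is a minimal sketch in Python. The per capita figures are invented purely for illustration; they are not LJ Index data.

```python
# Minimal sketch of standardizing one measure. The per capita figures
# below are invented for illustration only, not actual LJ Index data.
import statistics

circ_per_capita = [5.2, 7.8, 10.1, 6.4, 9.3]  # hypothetical raw values

mean = statistics.mean(circ_per_capita)
stdev = statistics.pstdev(circ_per_capita)    # population standard deviation

# Standard score: how many standard deviations each library sits
# above or below the mean for this measure.
standard_scores = [(x - mean) / stdev for x in circ_per_capita]

for raw, z in zip(circ_per_capita, standard_scores):
    print(f"raw {raw:5.1f}  ->  standard score {z:+.2f}")
```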

Please, Rebecca, show me WHERE these further calculations are TRANSPARENTLY provided on the LJ Index site. I cannot find them.

In statistics, outliers (values radically different from all the other values in the dataset) are notorious for messing up the results. That is because the mean of all the standard scores in a dataset must come out to zero: some scores sit above the average (mean) and some sit below. When radical outliers are included, weird things happen. An impossibly high (or low) value drags the mean toward it and inflates the standard deviation, so the standard scores of ALL the other libraries must shift to keep that average at zero. That is why statisticians usually control for outliers (by eliminating them or capping them at a realistic level) before using standard scores. LJ could easily have done this but chose not to do so. Why?
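The effect is easy to demonstrate. Here is a small sketch, again with invented numbers, showing how a single impossible report squeezes every other library's standard score:

```python
# Illustration of how one radical outlier distorts everyone else's
# standard scores. The figures are invented, not LJ Index data.
import statistics

def standard_scores(values):
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [(x - mean) / stdev for x in values]

plausible = [1.0, 1.5, 2.0, 2.5, 3.0]   # believable Internet uses per capita
with_outlier = plausible + [40.0]       # one impossibly high report

print([round(z, 2) for z in standard_scores(plausible)])
# scores spread evenly, from about -1.41 to +1.41

print([round(z, 2) for z in standard_scores(with_outlier)])
# the five believable libraries are squeezed into a narrow band around -0.5,
# while the outlier alone sits far above everyone else
```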

LJ could have controlled for outliers in public Internet uses by applying the edit checks developed by IMLS. All they had to do was cap the possible number of Internet uses at 0.9 per visit, as IMLS specifies. They did not do so. Instead, they allowed the dataset to include as many as 8 public Internet uses for every library visit. Could visitors really have used an Internet terminal as many as 8 times on every single visit? Using this dubious data skewed the results. San Diego County Library ended up as a top ranked library based on a single, questionable score. A simple cap, easily applied by LJ and prescribed by IMLS, would have avoided the embarrassment of awarding 5 stars to a library that clearly made a mistake in reporting.
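For what it is worth, the edit check itself is trivial to apply. The sketch below uses the 0.9-uses-per-visit cap mentioned above; the visit and use counts are hypothetical.

```python
# Sketch of the IMLS-style edit check described above: cap reported
# public Internet uses at 0.9 uses per visit before using the figure.
# The library counts below are hypothetical.

def capped_internet_uses(reported_uses, visits, max_per_visit=0.9):
    """Return the reported uses, capped at max_per_visit uses per visit."""
    return min(reported_uses, max_per_visit * visits)

visits = 1_000_000
reported_uses = 8_000_000   # 8 uses per visit: the kind of implausible report at issue

print(capped_internet_uses(reported_uses, visits))  # 900000.0 instead of 8,000,000
```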

The critical question is how many more instances of this distortion of the results happened to the other libraries in the LJ Index ratings.

I had been planning to change the measures used in the HAPLR ratings for the next edition, but I have decided against doing so. I had hoped to include measures of Internet use. However, it has become clear that the Internet use measures are still unreliable, so I will wait at least one more edition before changing things. Over the years, I have taken a lot of criticism for not including electronic use measures in HAPLR ratings. When the LJ Index chose to use these “Public Internet Terminal Use” numbers, I initially assumed that the authors had satisfied themselves that the numbers were finally more reliable. I guess I was wrong.

Again, nearly two weeks ago I asked Rebecca Miller: “Do you plan to shine a spotlight on the scores of other libraries that had similarly unlikely data?” I have not received an answer. I doubt I will get one.
