The LJ Index rejects input measures because (1) the authors question the validity of spending data, and (2) budget cutters might look askance at high-spending libraries.
Then the LJ Index proceeds to use the very budget categories its authors question to sort all libraries into spending categories. How odd.
Then they argue that spending should be concealed, or at least not focused upon. They argue that in an "ideal world" high spending would be something to brag about, but that in, I guess, our real world, it could become a danger!
So, do we need or want an index that uses the very spending categories it decries to locate "star libraries" while at the same time suppressing valid information?
To prove my points, I quote the LJ Index directly below.
The problem with ranking inputs
There are two major reasons we propose to issue rankings based on outputs. First and foremost, input data present many comparability issues. Depending on a library's size, it may or may not have a payroll that includes everyone who works in the library. In small towns and mid-sized cities, the library staff may be supported substantially by other local government employees. Similarly, a complete picture of a library's fiscal status may or may not be provided by the revenues and expenditures it can report.
For instance, many public libraries owe at least some of the databases to which they provide access to consortial expenditures by state and/or regional library agencies. Expenses covered under one library's budget (e.g., utilities, telecommunications) may be paid for by the city or county to which a supposed peer library belongs. And data on collection size alone, in the absence of data on collection age, could create a misleading impression about the resources available at any particular library.
The second, and perhaps more important, reason for focusing on service outputs instead of resource inputs is the potential political catch-22 presented by high rankings on the latter. Few potential rankings users would welcome the news that their libraries topped rankings on staffing, collection size, or—least of all—funding. While such rankings should be something to brag about in an ideal world, in these tight economic times, they could invite cuts on the rationale that the library would still have “nothing to complain about,” or that maintaining outputs despite input cuts (a doubtful eventuality) would represent an improvement in the library's “efficiency.” For these reasons, we chose to leave input measures out of the LJ Index.
HAPLR uses spending as one of its input measures. You usually get what you pay for, and the HAPLR ratings demonstrate this for most libraries. I recognize the problems with the way libraries report total spending, as do the LJ Index authors. But I will not shrink from the results as they have done.
- Thomas J. Hennen Jr.
- Racine, Wisconsin, United States
- We (my wife and I) are celebrating the 11th anniversary of HAPLR and, more importantly, our 38th wedding anniversary. The HAPLR system uses data provided by 9,000 public libraries in the United States to create comparative rankings. The comparisons are made within broad population categories. HAPLR provides a comparative rating system that librarians, trustees, and the public can use to improve and extend library services. I am the director of the Waukesha County Federated Library System.
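The idea of ranking libraries separately within broad population categories can be sketched in a few lines. This is a minimal illustration only, not HAPLR's actual formula: the population cutoffs, the single `score` field, and the library data below are all hypothetical assumptions for the sake of the example.

```python
from collections import defaultdict

def population_category(pop):
    """Assign a library to a broad population band (illustrative cutoffs,
    not HAPLR's actual categories)."""
    for limit, label in [(10_000, "under 10K"), (50_000, "10K-50K"),
                         (250_000, "50K-250K")]:
        if pop < limit:
            return label
    return "250K+"

def rank_within_categories(libraries):
    """Rank libraries by a composite score, separately within each band.

    `libraries` is a list of dicts with 'name', 'population', 'score'.
    Returns {category: [(rank, name), ...]} with rank 1 = highest score.
    """
    bands = defaultdict(list)
    for lib in libraries:
        bands[population_category(lib["population"])].append(lib)
    return {
        band: [(i + 1, lib["name"])
               for i, lib in enumerate(
                   sorted(members, key=lambda l: l["score"], reverse=True))]
        for band, members in bands.items()
    }

# Hypothetical example data: libraries A and B compete in the same small
# band, while C is ranked only against other mid-sized libraries.
libs = [
    {"name": "A", "population": 8_000, "score": 72.5},
    {"name": "B", "population": 9_500, "score": 88.0},
    {"name": "C", "population": 120_000, "score": 64.0},
]
print(rank_within_categories(libs))
```

The point of banding by population is visible in the output: library C is never compared with A or B, even though its score is lower, because it serves a very different service population.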