Racine, Wisconsin, United States
We (my wife and I) are celebrating the 11th Anniversary of HAPLR, and more importantly, our 38th Anniversary. The HAPLR system uses data provided by 9,000 public libraries in the United States to create comparative rankings. The comparisons are in broad population categories. HAPLR provides a comparative rating system that librarians, trustees and the public can use to improve and extend library services. I am the director of Waukesha County Federated Library System.

Saturday, April 24, 2010

On-Site or Remote Assessments

On-Site Assessments

We do on-site or remote assessments of individual libraries. The reports include specific recommendations for action by the library. The intended result of an assessment is not to increase the library's HAPLR rating; it is to increase the library's effectiveness.

Consulting Reports

We do a variety of types of consulting in the areas of long range planning, marketing, and impact fee assessments.

For all of this consulting, we use the principles and data included in Hennen's Public Library Planner, of course.

Sunday, April 18, 2010

HAPLR 2010 Edition

FOR RELEASE on April 15, 2010
Hennen's American Public Library Ratings 2010 published.
The 2010 version of the Hennen's American Public Library Ratings (HAPLR) web site was re-opened with new data today, announced Thomas J. Hennen Jr., its author. The ratings have been published since 1999. HAPLR has become widely recognized in the public library world for these ratings and the individual library reports.

HAPLR identifies the public libraries in America with the highest input and output measures. Statistics alone cannot define library excellence, of course, but Hennen believes that the ratings numbers are still important. The HAPLR Index uses six input and nine output measures. The author added the scores for each library within a population category to develop a weighted score. The population categories change at 1,000, 2,500, 5,000, 10,000, 25,000, 50,000, 100,000, 250,000, and 500,000 residents. (500 k means over 500,000 population, 250 k means 250,000 to 499,999 population, and so forth.)
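For readers who want to see how the category boundaries work, here is a minimal sketch in Python (not the actual HAPLR code) that assigns a service population to one of the categories listed above; the label for populations under 1,000 is my own shorthand.

```python
# A minimal sketch (not the official HAPLR code) of assigning a library to a
# population category using the breakpoints listed above.
from bisect import bisect_right

# Lower bounds of the population categories named in the press release.
BREAKPOINTS = [1_000, 2_500, 5_000, 10_000, 25_000, 50_000, 100_000, 250_000, 500_000]
LABELS = ["under 1k", "1k", "2.5k", "5k", "10k", "25k", "50k", "100k", "250k", "500k"]

def population_category(population: int) -> str:
    """Return the population-category label for a library's service population."""
    return LABELS[bisect_right(BREAKPOINTS, population)]

print(population_category(275_000))  # a library serving 275,000 residents: "250k"
print(population_category(620_000))  # over 500,000 residents: "500k"
```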

The HAPLR Index has a theoretical minimum of 1 and a maximum of 1,000, although most libraries score between 260 and 730. The HAPLR Index web site provides a method for obtaining score cards and rating sheets for individual public libraries. It also provides further information on the rating index and other services provided by the author.

The previous editions all saw extensive media attention. This edition is expected to receive more attention.

A list of Top Ten Libraries in each category is available at:

This is the 10th edition of the HAPLR ratings. Ten libraries made it into all 10 editions!

They are:
Bridgeport Public Library WV
Carmel Clay Public Library IN
Columbus Metropolitan Library OH
Denver Public Library CO
Hennepin County Library MN
Naperville Public Library IL
Saint Charles City-County Library District MO
Santa Clara County Library CA
Twinsburg Public Library OH
Washington-Centerville Public Library OH

Fifteen libraries made it to the Top Ten list for the first time in 2010. They are:

Loudoun County Public Library VA
Evansville-Vanderburgh Public Library IN
Wayne County Public Library OH
Champaign Public Library IL
Westerville Public Library OH
Algonquin Area Public Library District IL
Elk Grove Village Public Library IL
Burton Public Library OH
Canal Fulton Public Library OH
John A Stahl Library NE
Orange Beach Public Library AL
Beresford Public Library SD
Grand Marais Public Library MN
Rock Creek Public Library OH
Eagle Public Library AK

Web site:

Contact by ordinary mail:

Thomas J. Hennen Jr.
6014 Spring Street
Racine, WI 53406

Contact by phone: 262-886-1625
Cell phone: 262-880-7055

Thursday, January 7, 2010

Index Requirements

Back in 2000, LJ Index co-author Keith Lance asserted in his American Libraries article “Lies, Damn Lies, and Indexes” that a “proper” index must consist of variables with pairwise correlations between 0.60 and 0.80. He produced a correlation matrix for HAPLR, noted that some of my variables fell above or below those numbers, and criticized HAPLR for that.

I could not find the correlation matrix for the LJ Index anywhere on LJ’s “transparent” web site, so I did my own calculations. By my calculations, only visits and circulation per capita meet the definition. The web site asserts that the authors rejected reference because it did not meet the criteria, but it makes no mention of the fact that attendance and public Internet use also fail the test.
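The check itself is not hard to reproduce. Here is a hedged sketch with invented per-capita figures and hypothetical column names (not LJ's or HAPLR's actual data) that flags which pairs of measures fall outside the 0.60 to 0.80 range Lance cited.

```python
# A sketch of the pairwise-correlation check described above. The figures are
# invented and the column names are hypothetical, not LJ's or HAPLR's data.
import pandas as pd

df = pd.DataFrame({
    "visits_per_capita":       [4.1, 6.3, 2.8, 9.0, 5.5, 3.7],
    "circulation_per_capita":  [7.2, 11.0, 4.9, 15.3, 9.8, 8.1],
    "attendance_per_capita":   [0.3, 0.6, 0.2, 0.9, 0.4, 0.5],
    "internet_uses_per_capita":[0.8, 1.2, 0.5, 2.0, 1.1, 1.6],
})

corr = df.corr()  # Pearson correlation matrix
for a in corr.columns:
    for b in corr.columns:
        if a < b:  # report each pair of measures once
            r = corr.loc[a, b]
            verdict = "within 0.60-0.80" if 0.60 <= r <= 0.80 else "outside 0.60-0.80"
            print(f"{a} vs {b}: r = {r:.2f} ({verdict})")
```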

I stopped calling HAPLR an index, assuming that my lack of a PhD in statistics had led me astray in naming the product. I now refer to it as a scorecard. I kept the web site name because of domain name requirements.

I have no desire to go back to calling HAPLR an index. I wonder, though, whether the “generally accepted statistical principles” Lance cited in 2000 have changed. If so, can Lance please direct me to sources? If not, will he consider a name change?

Saturday, December 19, 2009

Yes or No?

Three weeks ago I asked: “Will LJ publish the results for the many other libraries besides San Diego County that reported obviously erroneous data?”

I also asked if they would object if someone else did.

Having heard nothing, I assume the answer is no to the first question and yes to the second.

So much for transparency.

How many libraries got stars because LJ used erroneous outlier data? How many were denied them?

I am pretty sure we will hear no response to these questions either.

Sunday, December 13, 2009

Transparency, the Mystery Stats

With apologies to T.S. Eliot but not to LJ Index

Transparency and Mystery Stats

Transparency, Transparency, there’s nothing like Transparency.
LJ’s above outlier laws? It must be so, apparently.
There’s bafflement to statisticians, a standard score’s despair:
For when they give out stars to some, transparency’s not there.
You may seek it in the FAQs, you may look at Lyons’ blog
But I tell you once and once again, transparency’s not there.

Transparency’s a lofty thing; of course we all want that.
Statistics rules must be there, LJ’s laid out the stats,
Spotlights on misreporting are as dim as they may dare:
Spotlights are dusty from neglect, the numbers are contraire,
The numbers jump from place to place. Outliers should be rare,
But when you look at LJ stars, Transparency’s not there.

The edit checks went wrong, you see, but once the data’s in,
Don’t expect LJ to show what their numbers should have been.
“The LJ Index did, precisely, one of the things it
Was designed to do: shine a “spotlight” when the sums don’t fit!
All data’s here? Look, look again, for Standard scores or Means,
Transparency and so much is gone, or so it truly seems.

Misbehaving Data...

LJ has yet to deny that the very unlikely Public Internet Use numbers that they used for San Diego County Library mean that the Library’s LJ Index 5 star rating is wrong.

Rebecca Miller states that “the LJ Index did precisely one of the things it was designed to do: shine a spotlight on inaccurate data so it can be corrected.”

Nearly two weeks ago I asked her: “Do you plan to shine a spotlight on the scores of other libraries that had similarly unlikely data?”

Having heard nothing for two weeks, I assume the answer is no. I also assume that LJ will have no objection if the information is “spotlighted” elsewhere.

Transparency is the watchword that LJ uses for the LJ index, yet their published spreadsheets omit crucial data for calculating the LJ Index scores. The data indicate the raw scores for each library on circulation, visits, attendance, and Internet use per capita, but we look in vain for the further calculations that provide the scores.

Fair warning, the next four paragraphs get into statistics deeper than I would like.

The LJ Index uses standard scores. A standard score indicates how many standard deviations an individual number is above or below the average (mean) for that particular measure. It is derived by subtracting the population mean from an individual raw score and then dividing the difference by the population standard deviation. This conversion is called standardizing or normalizing the raw scores.
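Here is the same calculation as a minimal Python sketch; the visits-per-capita figures are invented for illustration.

```python
# A minimal sketch of the standard-score (z-score) calculation described above.
from statistics import mean, pstdev

def standard_scores(values):
    """(raw score - population mean) / population standard deviation"""
    mu = mean(values)
    sigma = pstdev(values)  # population standard deviation
    return [(x - mu) / sigma for x in values]

# Invented visits-per-capita figures for five libraries.
visits_per_capita = [3.0, 4.5, 5.0, 6.5, 11.0]
print([round(z, 2) for z in standard_scores(visits_per_capita)])
```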

Please, Rebecca, show me WHERE these further calculations are TRANSPARENTLY provided on the LJ Index site. I cannot find them.

In statistics, outliers, amounts radically different from all other amounts in the dataset, are notorious for messing up the data. That is because the average of all the standard scores for a dataset must be zero. Some scores are above the average (mean) and some are below. When radical outliers are included, weird things happen. An impossibly high (or low) score means that the scores of ALL the others must change radically so that the average of the standard scores remains zero. That is why statisticians usually control for outliers (by eliminating them or capping them at a realistic level) before using standard scores. LJ could easily have done this but chose not to do so. Why?

LJ could have controlled for outliers in public Internet uses by using the edit checks developed by IMLS. All they had to do was cap the possible number of Internet uses at 0.9 per visit, as IMLS specifies. They did not do so. Instead, they allowed the dataset to include as many as 8 public Internet uses for every library visit. Could all visitors really have used an Internet terminal as many as 8 times every time they visited? Using this dubious data skewed the results. San Diego County Library ended up as a top-ranked library based on a single, questionable score. A simple cap, easily applied by LJ and prescribed by IMLS, would have avoided the embarrassment of awarding 5 stars to a library that clearly made a mistake in reporting.
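Here is what that cap looks like in practice, as a sketch with invented figures for eight hypothetical libraries, one of which reports an impossibly high number.

```python
# A sketch (all figures invented) of the IMLS-style edit check described above:
# cap reported public Internet uses per capita at 0.9 per visit, then
# recompute the standard scores for that measure.
from statistics import mean, pstdev

def standard_scores(values):
    mu, sigma = mean(values), pstdev(values)
    return [(x - mu) / sigma for x in values]

visits_per_capita = [4.0, 5.5, 3.9, 6.0, 4.8, 7.2, 5.0, 4.4]
uses_per_capita   = [2.4, 3.2, 16.5, 3.5, 2.9, 4.1, 3.0, 2.6]  # third library is the outlier

capped = [min(u, 0.9 * v) for u, v in zip(uses_per_capita, visits_per_capita)]

print([round(z, 2) for z in standard_scores(uses_per_capita)])  # the outlier towers over the rest
print([round(z, 2) for z in standard_scores(capped)])           # after the cap, every score shifts
```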

The critical question is how many more instances of this distortion of the results happened to the other libraries in the LJ Index ratings.

I had been planning to change the measures used in the HAPLR ratings for the next edition, but I have decided against doing so. I had hoped to include measures of Internet use. However it has become clear that the Internet use measures are still unreliable, so I will wait for at least one more edition before changing things. Over the years, I have taken a lot of criticism for not including electronic use measures in HAPLR ratings. When the LJ Index chose to use these “Public Internet Terminal Use” numbers, I initially assumed that the authors had satisfied themselves that the numbers were finally more reliable. I guess I was wrong.

Again, nearly two weeks ago I asked Rebecca Miller: “Do you plan to shine a spotlight on the scores of other libraries that had similarly unlikely data?” I have not received an answer. I doubt I will get one.

Sunday, November 29, 2009

Still Misbehaving

LJ’s Rebecca Miller's response to my question blames federal numbers rather than the LJ Index design. The question is not just about the possibility that there are numbers that are inaccurate in the IMLS dataset. That is a given.

The question is “How can the LJ Index ‘Score Calculation Algorithm’ allow one measurement to swamp the entire score for a library?” This type of data “misbehavior” does not happen with HAPLR because it uses percentiles. When a library gets to the 99th percentile, that is the end of things and no measure swamps the entire score.
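To illustrate the difference, here is a minimal sketch of percentile ranking (not the actual HAPLR code): no matter how extreme an outlier is, it tops out at the highest percentile, and the other libraries' ranks do not move.

```python
# A minimal sketch (not the actual HAPLR code) of percentile-based scoring.
# An extreme outlier still tops out at the highest rank, so it cannot swamp
# the combined score the way an extreme standard score can.
def percentile_ranks(values):
    """Percentage of values each value is greater than or equal to."""
    n = len(values)
    return [100.0 * sum(v >= other for other in values) / n for v in values]

internet_uses_per_capita = [0.5, 0.8, 1.1, 1.4, 45.0]  # last value is an absurd outlier
print(percentile_ranks(internet_uses_per_capita))
# The outlier earns 100.0, the same rank it would earn with a value of 1.5,
# and the other libraries' ranks are unaffected by how extreme it is.
```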

In the LJ Index, three measures may be the worst in the category, but if one score is an outlier, the LJ Index Score for the library will be high. This misbehaving data is by design and it is the point that Miller misses in her response. Why design an index this way?

Miller also states: “While the circumstances are embarrassing for San Diego [County], the LJ Index did precisely one of the things it was designed to do: shine a spotlight on inaccurate data so it can be corrected.” I must have missed that “spotlight on inaccurate data” in the LJ report. She responded to my questions about the Index. It looks more like my spotlight than LJ’s that caused the discussion.

Miller argues that LJ, Bibliostat, and Baker & Taylor (the sponsors of the LJ Index) “cannot use time consuming and expensive methods” to check the information provided by IMLS. I agree about time-consuming methods, but the method I used is neither time consuming nor expensive.

I simply divided “Public Internet Uses per Capita” by “Visits per Capita.” The result is Public Internet Uses per Visit. That calculation is all it takes to spotlight nearly 100 libraries where the reported number of Internet Uses exceeded the number of visitors. No reasonable observer would fail to question such high outputs. Each San Diego County visitor was reported to have used a Public Internet Terminal over 4 times at each and every visit. In one library the number was 8 uses per visit by every visitor! There are plenty of spotlights that LJ chose to ignore until asked about the problems. Where are the remaining spotlights for these libraries?
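The whole check fits in a few lines of Python. The library names and figures here are invented for illustration; the 0.9 ceiling is the IMLS edit-check value mentioned above.

```python
# The simple screen described above: divide Internet uses per capita by visits
# per capita to get uses per visit, and flag anything over the 0.9 IMLS ceiling.
# Library names and figures are invented for illustration.
libraries = {
    "Library A": {"internet_uses_per_capita": 5.3, "visits_per_capita": 1.25},
    "Library B": {"internet_uses_per_capita": 0.9, "visits_per_capita": 4.00},
}

for name, d in libraries.items():
    uses_per_visit = d["internet_uses_per_capita"] / d["visits_per_capita"]
    if uses_per_visit > 0.9:
        print(f"{name}: {uses_per_visit:.1f} Internet uses per visit -- check the data")
```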

I have taken a lot of criticism for not including electronic use measures into HAPLR but the numbers always seemed too unreliable to me. Several years ago LJ Index co-author Keith Lance agreed with me at an ALA Conference. He noted that HAPLR should not use the electronic use data because the numbers were too unreliable. When he chose to use these numbers for the LJ Index, I assumed, apparently wrongly, that he had satisfied himself that they were now more reliable.

I liked Susan Mark's analysis. It is true, as she notes, that population, and how it is assigned to libraries, has a great deal to do with both ratings.

Monday, November 23, 2009

The LJ Index and Misbehaving Data

For more see:

Why did LJ decide to use the “outlier” numbers that caused San Diego County to get a five star rating that appears questionable? Did this decision cost other libraries star ratings?

Why does the LJ Index “Score Calculation Algorithm” allow one measurement to swamp the score? Is this data “misbehavior” intended, as one of the authors claims below?

In the LJ Index calculations, San Diego County’s incredibly high score (889% above the group average!) for Public Internet Use cancels relatively low scores for Circulation (48% below), Visits (29% below), and Program Attendance (20% below). In the latest LJ Index, San Diego ranked 4th and got 5 stars.

LJ’s February edition omitted San Diego County Library because it did not report Public Internet Use sessions. The Library received five stars in the November edition, called Round Two, in which it reported 16.5 million “Public Internet Use” sessions. Newer data on the California State Library web site reports a more likely 1.4 million. Did San Diego County, among many others, report hits rather than sessions? Didn’t the numbers surprise LJ?

For 16.5 million sessions to be correct, visitors had to have used the Internet terminals an average of 4.2 times every time they visited the library! That is highly unlikely. IMLS, the federal agency that publishes the data, has “edit checks” that are supposed to alert data coordinators to numbers that are out of range. Somehow, 132 libraries in 38 states were reported as having almost every visitor use the Public Internet Terminals at every visit. IMLS published a remarkable 8 sessions for every visit for one library. Did the process work for the latest data?

How does this affect the LJ Index Star Libraries roster? With the more reasonable 1.5 million number, wouldn’t San Diego’s score fall from 989 to 450? Rather than 5 Stars for being 4th ranked out of 36 libraries, they would fall to 22nd ranked and no stars. Isn’t that precisely what will happen with round three?

Am I wrong that this single correction changes the scores of every other library in the group? In all, 29 of the 36 libraries would change rankings with just this one outlier number corrected. Isn’t that a lot of volatility for just one data element?

Should LJ have left San Diego County Library out of the mix because of the questionable data? At the very least, should they not have acknowledged the problems? The LJ authors have certainly given me enough grief about not giving sufficient warning about the vagaries of HAPLR data over the years.

In Ain’t Misbehavin’!, LJ Index co-author Ray Lyons’ blog piece says, “LJ Index scores are not well behaved. That is, why they don’t conform to neat and tidy intervals the way HAPLR scores range from about 30 to 930.” Lyons says that the LJ Index is more informative than percentile-based rankings like HAPLR. Lyons notes that the LJ Index has a “challenging problem” with outliers that can distort the ratings. Is that what happened here? Aren’t there other examples of this happening in other spending categories?

Sunday, July 26, 2009

Hurricane Katrina, Ohio’s “PLF,” and the Great Recession of 2009

Hurricane Katrina hit in August 2005. The general devastation at affected libraries unfolded during 2006, the libraries reported that data in 2007, and the results show up in this year’s HAPLR ratings.

In the news this week, Ohio’s libraries, many of which receive most or all of their money from the state’s Public Library Fund (PLF), find that they will receive about a 30% reduction in funding. That is better than the 50% cut they had expected, thanks to a magnificent effort to have citizens contact legislators, but bad enough.

The news is similar throughout the country: 20% cuts possible in Hawaii, sizeable reductions in Pennsylvania. The more state funding libraries receive, the more all libraries in a state are caught in the crossfire at once. But we are seeing comparable massive cuts, layoffs, and furloughs at locally supported libraries everywhere.

The devastation of Katrina, with its loss of life and property, cannot of course be compared to the damage budget cuts do to libraries. But the reductions in Ohio’s long-held high ratings will be severe.

Wednesday, July 22, 2009

Planning and Public Libraries

I am greatly indebted to that great librarian and library advocate Kathleen de la Peña McCook for the tip on this article.

The article is in the Planning Commissioners Journal. It touts the economic impact of building libraries and could be a good thing to give to city councils considering library buildings.

Libraries do a lot more for economic development than sports complexes and the like. We must let that be known again and again.

For a limited time, download a complimentary pdf of this article from the site's page.

de la Peña McCook’s site can be reached at:

From the article:
Libraries at the Heart of Our Communities
by Wayne Senville

There's been a dramatic change in the mission of libraries across the country. No longer just static repositories of books and reference materials, libraries are increasingly serving as the hub of their communities, providing a broad range of services and activities. They are also becoming important "economic engines" of downtowns and neighborhood districts.

Friday, July 3, 2009

What Reporting Year Do HAPLR and the LJ Index Cover?

End Date of Reporting Period

“Not even close,” said then-FSCS guru Keith Lance when I characterized the data by its “year of reporting” back in 2000 or so. Because of that, I expected Lance and Lyons to give some attention to the reporting period of the data involved. Lyons and Lance make no such disclaimer. Their Fact Sheet states: “This 2009 edition of the LJ Index is based upon 2006 public library statistical data published by the Institute of Museum and Library Services.”

Oh well, go figure, and, as Vonnegut would have said, so it goes.

For the 2006 IMLS data used in the 2009 HAPLR and LJ Index ratings, the ending dates of the reporting periods stretch over 16 months, from September 30, 2005 to December 31, 2006. The reports were filed by the various states during calendar year 2007.

End Date: Number of Libraries
9/30/2005 : 11
10/31/2005: 1
11/30/2005: 8
12/31/2005: 286
01/31/2006: 1
02/28/2006: 14
03/31/2006: 127
04/15/2006: 1
04/30/2006: 205
05/31/2006: 52
06/30/2006: 3,644
07/31/2006: 13
08/31/2006: 41
09/30/2006: 1,081
10/31/2006: 5
11/30/2006: 1
12/31/2006: 3,717
Total: 9,208

The variety of reporting periods is concentrated in 11 states. Texas has 10 different reporting periods; Illinois, Michigan, and Missouri have 9; Maine has 8. Six other states have two or more reporting periods, while the other 39 states have but one.

State: Number of Reporting Periods

TX 10
IL 9
MI 9
MO 9
ME 8
VT 6
NE 5
NY 5
AK 2
PA 2
UT 2

Other 39 States: 1

Tuesday, June 30, 2009

10th Anniversary HAPLR Data

New Edition

We celebrate the 10th Anniversary of Hennen’s American Public Library Ratings (HAPLR) this year. In 10 years HAPLR has become widely recognized in the public library world. As is to be expected, the rating system has critics as well as fans. We are proud of HAPLR and what it has done for libraries in the nation. We look forward to many more years of ranking, assessing, and providing report cards for libraries in the U.S. I will continue to refine HAPLR based on the advice of both fans and critics.

HAPLR identifies the public libraries in America with the highest input and output measures. Statistics alone cannot define library excellence, of course, but Hennen believes that the ratings numbers are still important. The HAPLR Index uses six input and nine output measures. The author added the scores for each library within a population category to develop a weighted score. The population categories change at 1,000, 2,500, 5,000, 10,000, 25,000, 50,000, 100,000, 250,000, and 500,000 residents.

The HAPLR Index is similar to an ACT or SAT score with a theoretical minimum of 1 and a maximum of 1,000, although most libraries score between 260 and 730. The HAPLR Index web site provides a method for obtaining score cards and rating sheets for individual public libraries. It also provides further information on the rating index and other services provided by the author.

The previous editions saw extensive media attention. This edition is expected to receive more attention.

After 10 years, there is now a competitor index. Sponsored by Bibliostat, a library data gathering software firm, and Library Journal, the LJ Index uses only 4 output measures rather than the 6 input and 9 output measures in the HAPLR Ratings.

A list of Top Ten Libraries in each category is available at:

HAPLR scores are compared to their LJ Index counterparts: LJ_HAPLR_ScoreComparisons_2009-06.htm

Sunday, June 28, 2009

What Will 50% Proposed Budget Cuts do to Ohio Libraries?

First answer?

Very bad things, of course. But it will take time.

For library users the impact will be nearly immediate, of course.

For the HAPLR ratings, one can predict a number of things, but the results will take longer to see.

Cuts in 2010 will not show up until the data are published by IMLS in 2012. That will have an impact on the HAPLR Ratings in 2013.

Furthermore, budget cuts will mean reductions in such things as circulation and visits to a library but it takes a while for the funding reductions to result in reduced library use.

In no other state are libraries as dependent on State funding as in Ohio. It is not at all unusual for a library to be 80% or more funded by the state. So state funding cuts of 50% will mean cuts of 40% or more for many Ohio libraries. That level of funding reduction can only mean major reductions in staffing, materials, and hours open in Ohio libraries.

Long story short?

What the Ohio Legislature decides this month will affect the ratings of Ohio libraries through 2015 and very far beyond.

The state that dominated HAPLR ratings for a decade (a quarter of all HAPLR top ten libraries were in Ohio) will likely fade and fade fast.

Library users in Ohio are rallying and letting their legislators know about library services, however, so perhaps the Buckeye state will continue to rule the HAPLR ratings after all.

Time will tell.

New Edition, Most Common Library Name

I expect to publish the new edition of HAPLR Ratings by the end of the month. This will be the 10th Anniversary Edition.

In the process of developing the new data, I ran across an interesting data element: the most common name for a public library.

The most common name for a library in the U.S.? Carnegie you say?

No, it’s Oxford Public Library.

Well then Carnegie is second, surely?

Nope again. Runner-up honors go to Madison Public Library.

Carnegie Public Library has to share third place with Bloomfield, Franklin, and Salem Public Libraries.

Tuesday, June 2, 2009

Unsettling Population Data

Ray Lyons, one of the LJ Index authors, in “Unsettling Scores” (Public Library Quarterly, Volume 26, Numbers 3-4, pp. 49-100), notes, among other things, that HAPLR scores change when a library moves from one population category to the next. He quotes one of my articles, “Go Ahead, Name Them,” American Libraries 30(1): 72-76:

“Depending on the demographic makeup of the state, there will be differences in population assignment. So a word of caution is in order. Mileage stickers on new cars carry the disclaimer ‘your mileage may vary’ depending on the driver and the driving conditions. Depending on the actual population in your library service area, your HAPLR index rating may vary.”

He goes on to note that I repeated the caveat in 2000 and 2002 but not since, adding that: “For the most part, the library community remains unaware of this considerable drawback to the HAPLR statistical scheme.”

How odd that when he becomes one of the LJ Index authors, he makes scant mention of this “considerable drawback” in the LJ Index scheme.

Go figure.

Spending Categories in the LJ Index

The LJ Index rejects input measures because 1) the authors question the validity of spending data, and 2) budget cutters MIGHT look askance at high-spending libraries.

Then the LJ Index proceeds to use the very budget data it questions to sort all libraries into spending categories. How odd.

Then they argue that the spending should be concealed or at least not focused upon. They argue that in an "ideal world" high spending should be something to brag about, but in, I guess, our real world, it could become a danger!

So, do we need or want an index that uses the very spending categories it decries to locate "star libraries" while at the same time suppressing valid information?

To prove my points, I quote the LJ Index directly below.

The problem with ranking inputs

There are two major reasons we propose to issue rankings based on outputs. First and foremost, input data present many comparability issues. Depending on a library's size, it may or may not have a payroll that includes everyone who works in the library. In small towns and mid-sized cities, the library staff may be supported substantially by other local government employees. Similarly, a complete picture of a library's fiscal status may or may not be provided by the revenues and expenditures it can report.

For instance, many public libraries owe at least some of the databases to which they provide access to consortial expenditures by state and/or regional library agencies. Expenses covered under one library's budget (e.g., utilities, telecommunications) may be paid for by the city or county to which a supposed peer library belongs. And data on collection size alone, in the absence of data on collection age, could create a misleading impression about the resources available at any particular library.

The second, and perhaps more important, reason for focusing on service outputs instead of resource inputs is the potential political catch-22 presented by high rankings on the latter. Few potential rankings users would welcome the news that their libraries topped rankings on staffing, collection size, or—least of all—funding. While such rankings should be something to brag about in an ideal world, in these tight economic times, they could invite cuts on the rationale that the library would still have “nothing to complain about,” or that maintaining outputs despite input cuts (a doubtful eventuality) would represent an improvement in the library's “efficiency.” For these reasons, we chose to leave input measures out of the LJ Index.

HAPLR uses spending as one of its input measures. You usually get what you pay for and the HAPLR ratings demonstrate this for most libraries. I recognize the problems with the way libraries report total spending as do the LJ Index authors. But I will not shrink from the results as they have done.

Monday, April 6, 2009

Acing the LJ Index Course

There are four per capita factors in the LJ Index: visits, circulation, attendance, and public Internet users. While it is true that these four factors tend to rise and fall together, it is not always so, of course.

A library could score the worst in three of the four, the best in the other, and thereby become a five star library. Therefore, a single factor can determine the outcome of an LJ Index rating. The LJ Index factors are not equally weighted. Fully 100% of the LJ Index weight could go to any of the four factors. That is curious, indeed.

Let me put this another way. Suppose you took a course and got three Fs and an A+ and then got an A+ for the course. Suppose this happened no matter which of the four parts of the course you got an A+ in? You would have to conclude that three of the four factors did not matter as long as you aced one of them.
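To make the arithmetic concrete, here is a hedged illustration with invented numbers. It assumes, purely for illustration, that the four measures are combined by averaging standard scores; the point is that a library that is worst in three measures can still land above the group average on the strength of one extreme outlier.

```python
# A hedged, invented-numbers illustration of the point above. Combining the
# four measures by averaging standard scores is an assumption made for
# illustration, not LJ's published formula.
from statistics import mean, pstdev

def z_scores(values):
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

# Twelve libraries; library 0 is worst in the first three measures and an
# absurd outlier in the fourth.
visits      = [2.0, 2.1, 2.3, 2.5, 3.0, 3.5, 4.0, 5.0, 6.0, 8.0, 10.0, 14.0]
circulation = [v * 2 for v in visits]    # library 0 is still the worst
attendance  = [v * 0.1 for v in visits]  # library 0 is still the worst
internet    = [40.0, 0.5, 0.7, 0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 2.5]

measures = [visits, circulation, attendance, internet]
combined = [mean(cols) for cols in zip(*(z_scores(m) for m in measures))]
print(round(combined[0], 2))  # positive, i.e. above the group average of zero
```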

Thursday, April 2, 2009

Praising with faint damnation?

Consider this quote:
"Rating systems assign libraries to peer groups based on simplistic and imprecise indicators such as community population or library expenditures. Beyond ignoring possibly significant imprecision in these data, this creation of equivalent classes also ignores key differences on factors such as community demographics and needs, library mission, institutional context, and others. As a result, accuracy and validity of final rankings from these systems are compromised."

You might have thought that that quote came from someone opposed to both HAPLR and the LJ Index, but you would be wrong. These are the words of Ray Lyons in a 2008 presentation to IFLA. HAPLR uses the "simplistic and imprecise" peer groups developed over many years by state data coordinators: population categories. The LJ Index? Its peer groups are created out of whole cloth by the authors: arbitrary spending categories.

Oh, well. Go figure.

Tuesday, March 31, 2009

Reference and/or Electronic Resource measures

Why does the LJ Index not use reference data? Don’t we deserve more than the offhanded “that’s for another article,” as they put it in the LJ Index article? We need to be clear: “Users of Electronic Resources” is included as a category in the LJ Index, but Reference is not. There should be some very unhappy reference librarians out there. The federal data on which both the LJ Index and HAPLR are based have included reference queries since their inception.

The “users of electronic resources” number is new this (2006) year, although it has been in a “testing phase” for a number of years. How can the LJ Index reject, without any real comment, a measure that has been around for decades and embrace one that is brand new, also without comment?

Saturday, March 28, 2009

HAPLR and the LJ Index

Imitation is the sincerest form of flattery. So, as the author of the HAPLR ratings, I am deeply flattered by the recent publication of the LJ Index. But I am also perplexed. I am flattered because some of my chief critics have finally agreed to the need to evaluate public libraries, although they have embraced different methods. What are the differences between HAPLR and the LJ Index? They are many, but the fundamental difference is that HAPLR includes input measures while the LJ Index does not. The LJ Index looks at only one side of the library service equation. HAPLR looks at both sides.

The HAPLR system does not simply develop scores for libraries. It offers a variety of reports to libraries that compare their performance to comparably sized libraries in their state and in the nation. Over the years, thousands of libraries have used standard or specialized reports to evaluate current operations and chart future courses of action. I am pleased that many libraries have improved their funding and service profiles with these reports.

In the end I believe that competition will make both our endeavors better and welcome the LJ Index.
