How to Connect Law Library Value and Outcomes

This is less an answer than some thinking out loud about the question.  Law libraries have struggled for years to come up with a way to measure value.  In most cases, libraries develop transaction-based metrics to show activity.  How many reference questions answered?  Books loaned?  Pages copied or e-mailed?  Searches run?  The challenge remains:  how does a search or an answer translate into a measure that shows whether the library needs more or fewer resources?

The recent AALL report on the economic value of law libraries highlights that we’re no nearer to an answer.  I won’t rehash the points made by Jean Grady because I think she’s spot on.  We need fewer exhortations and generalities and more specifics.  Unfortunately, the high cost of providing legal information can skew any dollar value, because we’re often comparing a small number of lawyers against many thousands of dollars in collection costs.

Public libraries often use a calculation that connects items used to a dollar value; a rough sketch of that approach follows below.  That may be mostly for outward perception, since public libraries track the same activities that special libraries watch.  Academic libraries have some deeper resources, including COUNTER and SUSHI, to help them focus on journal usage.  This tends to be more thorough than legal research database analytics, but it doesn’t necessarily get any closer to an answer.  These sorts of analytics help law libraries justify purchases through activity, but they don’t measure value.
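
For illustration, here is a minimal Python sketch of that public-library-style calculation.  The categories and per-item dollar figures are hypothetical placeholders, not values from any actual calculator; the point is only that the method multiplies activity counts by an assumed market price.

```python
# A public-library-style value calculation: multiply usage counts by an
# assumed market price per item.  The categories and dollar figures are
# hypothetical placeholders, not numbers from any actual calculator.

usage = {
    "books loaned": 1200,
    "reference questions": 450,
    "database searches": 3800,
}
assumed_value_per_item = {
    "books loaned": 17.00,
    "reference questions": 15.00,
    "database searches": 2.00,
}

total = sum(count * assumed_value_per_item[item] for item, count in usage.items())
print(f"Estimated value delivered: ${total:,.2f}")
```

The output is a single dollar figure, which is really just activity dressed up in dollars:  it says what the library did, not what anyone gained from it.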

How do we connect what we, and others, do with our resources and time to the outcomes that matter to our organizations?

The real challenge seems to be moving from quantitative measures to qualitative ones.  Because as soon as you mention a “follow-up survey”, there are a million reasons why people won’t respond to it.  And I’m not sure that a customer-service follow-up survey – “how did we do?” – is entirely the answer either.

But I do think we should be considering additional surveys or analyses.  They might:

  • occur as part of a case-closing process, or at other checkpoints in the matter (pre-litigation, pre-trial, post-trial).  A review of the bills related to a matter would highlight the services that reached the threshold of being valuable enough to bill to the client.
  • involve asking the lawyers and staff doing the research whether it was routine (just an update to make sure nothing new was out there) or whether it had a greater impact on the matter’s resolution:  highlighting a novel argument, uncovering previously unknown persuasive authority, and so on.
  • determine whether a high volume of research – lots of quantitative inputs: searches, questions, etc. – correlates with a more lucrative outcome, whether or not those activities are billed to the client (a rough sketch of this follows the list).
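
On that last point, the correlation check is straightforward to prototype once the data is pulled together.  The sketch below uses invented sample numbers and field names; in practice the inputs would come from research logs and billing or matter-management systems, and even a strong correlation would say nothing about causation.

```python
# Invented sample data: research activity and fees per matter.  Real
# inputs would come from research logs and billing systems.
from statistics import correlation  # Python 3.10+

matters = [
    {"matter": "A", "searches": 40, "ref_questions": 3, "fees_collected": 18000},
    {"matter": "B", "searches": 120, "ref_questions": 9, "fees_collected": 52000},
    {"matter": "C", "searches": 15, "ref_questions": 1, "fees_collected": 9000},
    {"matter": "D", "searches": 75, "ref_questions": 5, "fees_collected": 31000},
]

searches = [m["searches"] for m in matters]
fees = [m["fees_collected"] for m in matters]

# Pearson r near +1 means research volume and fees rise together; near 0
# means no linear relationship.  Either way, correlation is not causation.
print(f"searches vs. fees collected: r = {correlation(searches, fees):.2f}")
```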

One universal is that the value measurement needs to be something both the library and the lawyer agree on.  It may be that the firm or the researchers don’t connect library value to dollars earned or saved.  If they see the library as an insurance policy, the value they get may be so intangible as to defeat measurement.  That framing can also make it difficult to be innovative, because your services and resources may just be there to provide assurance, not improvement.

This can be particularly challenging for a public-facing law library.  Visitors to the courthouse may use your services because you’re there, not because you’re vital.  Self-represented parties may not be available to provide feedback on outcomes.  A library would almost need to invest the time to follow those cases, whether involving legal professionals or pro ses, to their conclusion.

As I said at the start, I don’t have any answers.  But I sometimes wonder whether our hunt for a value measure is futile.  There may not be any causal connection between having used library resources and a case outcome – you could spend 100 hours researching and still lose a dog of a case – and value based on savings or reduced overhead isn’t very compelling.  In the worst case, if there is a cheaper way to reduce overhead or increase savings, the library ends up battling on a dollar basis rather than one that takes the entire service into account.

Even if we were to find a value measurement that could be reused in our organizations, what would we do with it?  And what would our organizations do if we failed to meet that measure, or exceeded it?  Would a better quantification of library value actually change anything?