Unless you are hearing a voice in your head, “if you build it … people will come” isn’t a winning strategy. I was reminded of that while reading about a new digital approach to libraries in Nebraska. I expect that library isn’t actually relying on that strategy either, because the story describes some fairly detailed plans for getting people into, and using, the space. We have an awful lot of data we can use to avoid the guesswork implicit in an “if you build it” approach.
Data for Post-Action Validation
Someone commented recently that data is good for validating an approach or theory, but that assumptions can be made in advance without it. That’s probably true to some extent. Still, it seems backwards to act on assumptions when you have data readily available.
A great example I just learned about is the Digital Inclusion Survey and, in particular, its interactive map, which reminds me of my own, much simpler map. The survey’s map combines self-reported data from libraries with U.S. census data so that anyone can see the types of people in a library’s service area – and you can define that service area yourself. I was led to the map by this post, which describes how the map and its supporting documents could be used to justify new branches, branch locations, and library staffing requirements. Rather than guessing where a library location should go or how many staff it might need, a library board or decision maker could actually know the environment.
This can be tricky in libraries, since we sometimes rely on data that isn’t terribly useful. Two common measurements in law libraries are gate count (foot traffic) and reference activity. The first is problematic because you can’t tie it to activity or service. If your foot traffic is steady but people are using your library as a cafeteria, the numbers may create a false sense of security.
Similarly, reference statistics depend heavily on how many people are available to answer questions and for how many hours. Cutting staff can lead to drops in reference activity, which can circle back into a question of why reference staff are needed at all. The benefit of browsing statistics (books taken off shelves) and database activity is that they are less prone to that kind of fluctuation.
Anecdote + Anecdote = Data
It can also be deadly to fall back on feelings. I had to laugh at this tweet because it’s so apt:
Sometimes all you have is anecdotal evidence to help you make decisions. Sometimes you want to use anecdotes to help tell the story of the data that you have. The challenge comes when the anecdotes contradict the data, and yet the storytelling wins the day. That’s when you’re headed out onto thin ice.
It probably makes me sound like I have a data fetish. I don’t. But I don’t fear data either. I’ve sat through innumerable conference calls and meetings where concerns were raised about how data was collected and whether it was skewed. I’ve been there and fixed problematic surveys. It’s rarely a reason not to use data. It just means understanding the limitations: the data doesn’t make the decision, it just gives you context.
Knowing what was asked, of how many people, who they were, and who did the asking can help you determine whether you want to rely on the data. Qualitative data can always be tricky, but libraries (can) generate, and have access to, substantial quantitative data as well. Sometimes we need to use a variety of qualitative data – as this Nevada community college library did – to better understand the activity and background of our user environment. The data may not always tell the story the library hoped it would.
As libraries continue to experience resource limitations, using those resources wisely is critical. Sometimes you have to use instinct and anecdote. If you have or can get data, though, it can help you to build the library people are expecting, not just the library you hope they’ll use.