Working with Numbers

It is trite to say that you can’t measure what you can’t count.  There are many places in life where there is no metric that can or should be applied.  But when you are dealing with a business and determining whether a project should or shouldn’t go forward, it is useful to be able to quantify some element of it.

Decisions without Data

A recent experience I had involved a change to our site navigation in order to make something easier to find.  Clearly, this is an important business goal.  The navigation change was requested because people “can’t find” the information.  That was a flag to dig a bit further to understand what the problem actually was.

The first hurdle was that we had two conflicting stories.  On the one hand, we had customers complaining that they couldn’t find something on the Web site.  Staff attempting to assist them were not able to remember how to browse to the information.  On the other hand, we had Web analytics showing that people were finding the information using search terms on Google or our site search.

While we could quantify how people visited the site using search (or got there directly), we couldn’t quantify (a) the people who never arrived at the page or (b) the number of people who called and said they couldn’t find it.  At this point, the easy option is to add a menu link and close the problem.

But that ignores a couple of things.  First, we recently revamped our navigation to drop from over 130 items to a much simpler structure.  It may not be the best, but it is simpler.  One goal, then, is to avoid navigation creep.  Second, it is hard to convert anecdotal complaints into change:

  • how many complaints are we talking about, and do we have an expectation of eliminating all calls?
  • we know something isn’t working for some people, but we don’t really know what;
  • navigation may not be the problem.

After digging further, it was clear that a couple of things were perceived problems.  One was that the content owners didn’t think their site was visible.  Another was that they had avoided site search in the past because it was terrible – it was, truly – and that habit had remained.  When a content item couldn’t be found through browsing, it was considered lost, even though we could see people finding it through search.

We have navigation issues, no doubt.  We have been collapsing Web sites into one larger bin, and navigation has been owned by a variety of groups without being corralled by any of them.  But this was a good case where a navigation change wouldn’t necessarily fix the problem.  Organizationally, we assume everyone browses our content, so a navigation change is always the first request.  This content was an unusual case, which may be why it is hard to browse to.  It doesn’t really fit into any of our current navigation nodes and is shoehorned in so that it at least lives somewhere.  It will be interesting to see where it eventually lands as we take a more holistic view of the site.

Data without Evidence

One of the things I come across when dealing with lawyers and technology is assumptions about how lawyers use technology.  A person once made the comment that we didn’t need to put something in our site navigation because, to paraphrase, “lawyers are task-oriented and they are going to look around for something even if it isn’t clear where it is.”  I doubt that.  But neither of us has any data to show that lawyers will react that way to information on the Internet.  In that case, I would fall back on generic user data or research that shows how most people react when they don’t find information in a particular situation.

I was following a legal marketing conference recently and this post went by:

[Embedded tweet from the conference: a claim that Twitter generates 120,000 tweets a second]

I’m not trying to call out the messenger; I realize this was one of those tweets posted based on something someone said at a conference.  But this is the one that caught my attention, and it was retweeted and reposted a couple more times.  I’m interested in this sort of data – Twitter generating 120,000 tweets a second – but I’m also a skeptic.  I am a pretty fair researcher, so I looked around the Web and found references to Twitter’s tweet volume but nothing like this.  I can’t find a citation for that data point anywhere else.

Which is not to say it doesn’t exist.  But when you see stats or data posted in a blog or on social media without a link, it raises a question.  As I said, though, it’s not the messenger’s fault.  Sure, it would be great if they were all Type-A and included a link or more detail.  I always keep my ears open after a particularly good data point shoots by to see how long it is before someone uses it in real life as though it were documented.  At that point, it’s good to ask for the source.  Because if the source is a random Twitter post, that’s not much to hang your hat on.

Data Masquerading as Fact

Data is important to me.  Usage data guides purchasing decisions for library database subscriptions, and visitor analytics help guide decisions on what to do with our Web site and how.  I’ve been accused of misusing data when I made a recommendation that made some people unhappy.  No big deal.

What kills me is how people select the data they use for their decisions.  Or, perhaps better, which data they decide to communicate to support their decision.  A survey that says “80% of X think Y” is interesting, but I’ll still want to know how many respondents that represents.  Four out of 5 respondents is usually far less meaningful than 8,000 out of 10,000.
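To make the respondent-count point concrete, here is a back-of-the-envelope sketch in Python.  It uses the normal-approximation margin of error, which is admittedly rough at a sample of 5, but the order-of-magnitude difference is the point:

    # Margin of error for a reported proportion, normal approximation:
    # 1.96 * sqrt(p * (1 - p) / n).  Rough at n = 5, but the scale is the point.
    from math import sqrt

    def margin_of_error(p, n):
        return 1.96 * sqrt(p * (1 - p) / n)

    for n in (5, 10_000):
        print(f"80% of {n} respondents: +/- {margin_of_error(0.8, n) * 100:.1f} points")

    # 80% of 5 respondents: +/- 35.1 points
    # 80% of 10000 respondents: +/- 0.8 points

The same headline percentage can carry wildly different weight depending on the count behind it.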

Survey respondent counts are just an easy example of the problem, though.  Percentages can be easier for people to understand, but they’re not always useful when you’re trying to weigh a project’s cost.  This is particularly true on the Web, where it can be hard to know whether particular usage means what you think it means.

Here’s an example based on real life.  A WordPress site sees a bump in mobile visitors, rising from 3% to 10% of traffic over a period of 6 months.  The rise was significant enough that their leadership decided to allocate $5,000 to make the site mobile friendly.  This would be applied to making their theme responsive or something similar.

In fact, after taking a look at the data, they hadn’t had any rise in mobile users.  It looks like they either crunched the numbers wrong or cherry-picked usage by a particular sub-group and extrapolated it out.  But let’s say they actually had a 7% jump over 6 months, comparing mobile visitors each month against those who weren’t on mobile.

This is a non-profit with a target audience of 25,000 individuals.  One thing they’d be interested in knowing is whether that 7% were people in their audience group or not.  If they were, then it might be worth spending money to make the site more usable on mobile devices, particularly if this was a growing trend for them.  For argument’s sake, let’s say they are all part of the target audience.

For me, it’s still not that easy.  In this example, 7% equates to about 300 people.  WordPress has free plugins available – Jetpack from Automattic or WPTouch – so the decision-makers were really choosing between spending $5,000 on a site that had a 6-month increase of 300 mobile visits or selecting a free alternative.  If they had that money available to improve the site, it should go toward growing overall visits, desktop and mobile, rather than just the latter.
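A quick sketch of the arithmetic helps here.  The total-visit figure below is back-derived from the numbers in this example (if 7% is about 300 visits, traffic over the period is roughly 4,300), so treat it as an assumption rather than the organization’s actual analytics:

    # Rough arithmetic for the example above.  total_visits_6mo is an assumption,
    # back-derived from "7% equates to about 300 people" (300 / 0.07 ~= 4,300).
    total_visits_6mo = 4_300
    mobile_share_before = 0.03
    mobile_share_after = 0.10
    budget = 5_000            # proposed spend on a responsive theme
    target_audience = 25_000  # the non-profit's stated audience

    extra_mobile_visits = (mobile_share_after - mobile_share_before) * total_visits_6mo
    print(f"Extra mobile visits over 6 months: ~{extra_mobile_visits:.0f}")
    print(f"Cost per extra mobile visit: ~${budget / extra_mobile_visits:.2f}")
    print(f"As a share of the target audience: {extra_mobile_visits / target_audience:.1%}")

    # Extra mobile visits over 6 months: ~301
    # Cost per extra mobile visit: ~$16.61
    # As a share of the target audience: 1.2%

Framed in absolute numbers, the same data reads very differently than “mobile traffic more than tripled.”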

The point is that the project had been framed as a 7% increase, which sounds like a big jump but was, in fact, a blip.  If the decision-makers understand that 7% is 300 visits, it sounds less exciting.  They also should have looked over a much longer time period, because you can get fluctuations that don’t actually indicate a trend.  Businesses may have the for-profit motivation to jump on these trends and eat the cost if one doesn’t pan out, but non-profits often can’t.  People get new tablets and smartphones at Christmas, and the site may simply be seeing people trying out their new devices.  If the trend continues, the decision-makers will have more data to justify their decision on how to spend their funding.
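One way to separate a seasonal blip from a real trend is to compare each month against the same month a year earlier, rather than against the last few months.  The monthly figures below are purely hypothetical, just to show the shape of the check:

    # Hypothetical monthly mobile-share figures, for illustration only.
    # A year-over-year comparison keeps a December gadget bump from
    # being read as a sustained trend.
    mobile_share = {
        "2012-11": 0.03, "2012-12": 0.08, "2013-01": 0.09,
        "2013-11": 0.03, "2013-12": 0.09, "2014-01": 0.10,
    }

    for month in ("2013-11", "2013-12", "2014-01"):
        year_ago = f"{int(month[:4]) - 1}{month[4:]}"
        change = mobile_share[month] - mobile_share[year_ago]
        print(f"{month}: {change:+.0%} vs. a year earlier")

    # 2013-11: +0% vs. a year earlier
    # 2013-12: +1% vs. a year earlier
    # 2014-01: +1% vs. a year earlier

If the holiday jump shows up every year, it is seasonality, not a trend worth $5,000.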

People know to look at numbers closely, and they look more closely the nearer the numbers get to their own pocketbook.  There seem to be occasions, though, when we get sloppy: when we’re spending other people’s money, when we’re asking people to buy or use our services, or when we’re justifying our ongoing existence.  If we fail to quantify an issue, or rely on “round” data, it makes for bad decision-making and poor choices.