I don’t know anything about cognitive computing. Professor Benjamin Alarie does and gave an intriguing presentation during LegalTech Toronto 2015 on its place in the legal profession. He’s also a founder of BlueJ Legal. He used, and as far as I can tell coined, the concept of needing to thicken the law to fill in the gaps between fact patterns and our values [if I misspeak about this, feel free to correct me in the comments]. Here’s the slide, thanks to Shaunna Mireau.
The visual suggests that, as the law thickens, the space between what the law says, the facts, and our values becomes better known. As that space is filled, it becomes easier to apply cognitive computing (which, I suppose, is itself one way of filling the space, although clearer laws could do the same) across the entire process.
It’s interesting both because it makes sense – you’d want to reduce the gaps before asking a machine to bridge the remainder on its own – and because we have gap fillers already. Equity was one that jumped to my mind, as was black letter law, but I’m sure there are others. We may not be talking about empty space, then, but merely space that AI has to learn to deal with in much the same way we already have.
This returned to me when I was reading Simon Head’s Mindless: Why Smarter Machines are Making Dumber Humans over the weekend [NYT Review]. He was discussing process and practice.
Process we are already familiar with; it refers to a series of operations and how they relate to one another. Practice, on the other hand, refers to the activities that can inhabit each operation in the process and especially to the accumulation of tacit knowledge and skill that employees bring to bear in order to perform well such embedded tasks.
Applied to law, this tacit knowledge and skill is where a legal professional fills in the gaps between the law and the facts, hopefully reaching a conclusion that meets the necessary values. Head’s point was that process eventually squeezes practice out of the way – similar to the way the thickening law is not necessarily displacing emptiness.
AI seems like a logical progression: technology reducing the world of judicial opinions to more-likely-than-not rules that can supplement legislation and rules. Skilled lawyers do that already, and we rely on precedent as a shortcut. What I’m looking forward to watching is the extent to which these processes can be made – and how quickly – and which part of legal decision making they’ll impact the most (not necessarily first).
To circle back to Prof. Alarie’s thickening and cognitive computing, I recall that the point was not necessarily to replace lawyers but that lawyers could use the tools to deliver service. Many of these technologies scale and cost in ways that make them realistically available only to larger firms, and they may be oriented to that type of legal work. I can’t help but think of smaller-firm lawyers using cognitive computing as containers themselves, layered like so many individual apps on top of underlying systems that will increasingly be shared knowledge tools but used by lawyers and law firms differently.
This post originally appeared on LinkedIn.