Anyone following artificial intelligence in law knows that its first great cost saving has been in document discovery. Machines can sort out duplicates so that associates don’t have to read the same document seven times, and they can assemble thousands of emails into a dozen quick-to-read chains. More sophisticated programs refine their abilities with the help of human input.
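The deduplication and threading that discovery software performs can be sketched in miniature. The sketch below is a simplification with hypothetical data structures (real review platforms are far more elaborate): it hashes document text to drop exact duplicates and groups emails into chains by normalized subject line.

```python
import hashlib
from collections import defaultdict

def dedupe(documents):
    """Keep one copy of each document, keyed by a hash of its text."""
    seen = set()
    unique = []
    for text in documents:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(text)
    return unique

def thread_emails(emails):
    """Group emails into chains by their normalized subject line."""
    chains = defaultdict(list)
    for email in emails:
        subject = email["subject"].strip()
        # Strip reply/forward prefixes so "Re: Deal terms" joins the
        # "Deal terms" chain.
        while subject.lower().startswith(("re:", "fw:", "fwd:")):
            subject = subject.split(":", 1)[1].strip()
        chains[subject].append(email)
    return dict(chains)
```

Production systems also use near-duplicate detection and message-ID headers, but even this toy version shows why a machine never has to read the same email twice.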

Law firms are already saving their clients millions in adopting the technology. It’s bad news for the lawyers who used to earn their livings doing extremely boring document review, but good for everyone else. As in the grocery, book, taxi and hotel businesses, the march of technology is inevitable.

Other advances in law have come with search engines such as Lex Machina, which searches a small number of databases to predict the outcome of patent cases. Other AI products, which have scanned all U.S. Supreme Court decisions, do a better job than people at predicting how the court will decide a particular case, based on the briefs submitted in a live matter and the judges deciding it.

When we think about our work gathering facts, we know that most of our searching does not happen in a closed, limited environment. We don’t look through a “mere” four million documents, as in a complex discovery, or the trivial (for a computer) collection of U.S. Supreme Court cases. Our work takes place where the entire world is the possible location of the search.

A person who seldom leaves New York may have a Nevada company with assets in Texas, Bermuda or Russia.

Until all court records in the U.S. are scanned and subject to optical character recognition, artificial intelligence won’t be able to do our job for us in looking over litigation that pertains to a person we are examining.

That day will surely come for U.S. records, and may be here in 10 years, but it is not here yet. For the rest of the world, the wait will be longer.

Make no mistake: computers are essential to our business. Still, the set of databases we often use to begin a case, including Westlaw and LexisNexis, is not as easy to use as Lex Machina or other closed systems, because those databases rely on abstracts of documents rather than on the documents themselves.

They are frequently wrong about individuals, mix up different people with the same name, and often contain outdated material. My profile on one of them, for instance, includes my company but a home phone number I haven’t used in eight years. My current home number is absent. Other databases get my phone number right, but not my company.

Wouldn’t it be nice to have a “Kayak” type system that could compare a person’s profile on five or six paid databases, and then sort out the gold from the garbage?

It would, but it might not happen so soon, and not just because of the open-universe problem.

Even assuming these databases could look to all documents, two other problems arise:

  1. They are on incompatible platforms. Integrating them would be a programming problem.
  2. More importantly, they are paid products, whereas Kayak searches free travel and airline sites. They also require licenses, and the amount of data you can retrieve is governed by the permissible use a user must declare to gain access. Integrating the sites would mean the integrator would have to vet each user for each system and process payment on any pay-per-use platform.
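If the licensing and platform hurdles were ever cleared, the core of such a “Kayak for people databases” might look like the sketch below: merge one person’s records from several sources, keep the value most sources agree on, and flag every field where the sources conflict so a human can review it. The function and field names here are hypothetical.

```python
from collections import Counter

def merge_profiles(profiles):
    """Combine one person's records from several databases.

    For each field, pick the value reported most often, and flag any
    field on which the sources disagree.
    """
    merged, conflicts = {}, []
    fields = {f for p in profiles for f in p}
    for field in sorted(fields):
        values = [p[field] for p in profiles if p.get(field)]
        if not values:
            continue
        merged[field] = Counter(values).most_common(1)[0][0]
        if len(set(values)) > 1:
            conflicts.append(field)
    return merged, conflicts
```

Majority voting is the crudest possible way to sort the gold from the garbage; a real system would weight sources by how recently each record was updated.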

These are hardly insurmountable problems, but they do help illustrate why, with AI marching relentlessly toward the law firm, certain areas of practice will succumb to more automation faster than others.

What will be insurmountable for AI is this: you cannot ask computers to examine what is not written down, and much of the most interesting information about people resides not on paper but in their minds and the minds of those who know them.

The next installment of this series on AI will consider how AI could still help steer us toward the right people to interview.

By now, if a lawyer isn’t thinking hard about how automation is going to transform the business of law, that lawyer is a laggard.

Did you see the way computers upended the taxi, hotel, book and shopping-mall businesses? It’s already started in law too. As firms face resistance over pricing and look to become more efficient, the time is now to start training people to work with – and not in fear of – artificial intelligence.

And be not afraid.

There will still be plenty of lawyers around in 10 or 20 years no matter how much artificial intelligence gets deployed in the law. But the roles those people will play will in many respects be different. The new roles will need different skills.

In a new Harvard Business Review article (based on his new book, “Humility Is the New Smart”), Professor Ed Hess of the Darden School of Business argues that in the age of artificial intelligence, being smart won’t mean the same thing as it does today.

Because smart machines can process, store and recall information faster than any person, the skills of memorizing and recall are not as important as they once were. The new smart “will be determined not by what or how you know but by the quality of your thinking, listening, relating, collaborating and learning,” Hess writes.

Among the many concrete things this will mean for lawyers are two aspects of fact investigation we know well and have been writing about for a long time.

  1. Open-mindedness will be indispensable.
  2. Even for legal research, logical deduction is out, logical inference is in.

Hess predicts we will “spend more time training to be open-minded and learning to update our beliefs in response to new data.” What could this mean in practice for a lawyer?

If all you know how to do is to gather raw information from a limited universe of documents, or perhaps spend a lot of time cutting and pasting phrases from old documents onto new ones, your days are numbered. Technology-assisted review (TAR) already does a good job sorting out duplicates and constructing a chain of emails so you don’t have to read the same email 27 times as you read through a long exchange.

But even as computers become smarter and faster, they can be overwhelmed by the vast amounts of new data coming online all the time. I wrote about this in my book, “The Art of Fact Investigation: Creative Thinking in the Age of Information Overload.”

I made the overload point with respect to finding facts outside discovery, but the same phenomenon is hitting legal research too.

In their article “On the Concept of Relevance in Legal Information Retrieval,” published earlier this year in the journal Artificial Intelligence and Law,[1] Marc van Opijnen and Cristiana Santos wrote that

“The number of legal documents published online is growing exponentially, but accessibility and searchability have not kept pace with this growth rate. Poorly written or relatively unimportant court decisions are available at the click of the mouse, exposing the comforting myth that all results with the same juristic status are equal. An overload of information (particularly if of low-quality) carries the risk of undermining knowledge acquisition possibilities and even access to justice.”

Legal research suffers from the overload problem, and so does e-discovery, despite TAR and whatever technology succeeds TAR (and something will). Whole areas of data are now searchable and discoverable that once were not. The more you can search, the more there is to search. A lot of what comes back is garbage.

Lawyers who succeed in using ever more sophisticated computer programs will need to remember that they (the lawyers), and not the computers, are in charge. Open-mindedness here means accepting that computers are great at some things, but that for many years to come an alert mind will be able to sort through results in a way a computer cannot. The kind of person who succeeds at this will be entirely active – and not passive – while using the technology. Anyone who uses TAR knows that it requires training before it can be used correctly.

One reason the mind needs to stay engaged is that not all legal reasoning is deductive, and logical deduction is the basis for computational logic. Michael Genesereth of CodeX, Stanford’s Center for Legal Informatics, wrote two years ago that computational law “simply cannot be applied in cases requiring analogical or inductive reasoning,” though if there are enough judicial rulings interpreting a regulation, the computers could muddle through.
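The kind of deductive, rule-based reasoning computational law can handle is easy to sketch. The toy forward-chaining engine below (the rule names are hypothetical; real systems use far richer logic) applies if-then rules until nothing new follows, and it illustrates the limit Genesereth describes: nothing in it can reason by analogy.

```python
def forward_chain(facts, rules):
    """Apply if-then rules until no new facts can be deduced.

    Each rule is (premises, conclusion): if every premise is a known
    fact, the conclusion becomes a fact. This is pure deduction.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules loosely modeled on a workers' compensation statute.
rules = [
    (("is_employer", "worker_injured_on_job"), "workers_comp_applies"),
    (("workers_comp_applies",), "tort_suit_barred"),
]
```

Given the facts `is_employer` and `worker_injured_on_job`, the engine deduces both conclusions; drop a premise and the chain never starts. Deciding whether a novel fact pattern is *analogous* to a premise is exactly the step this machinery cannot take.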

For logical deduction to work, you need to know what step one is before you proceed to step two. Sherlock Holmes always knew where to start because he was a character in entertaining works of fiction. In solving a puzzle, he laid out his reasoning so that it seemed the only logical solution.

But it wasn’t. In real life, law enforcement, investigators and attorneys faced with mountains of jumbled facts have to pick their way through all kinds of evidence that often produces mutually contradictory theories. The universe of possible starting points is almost infinite.

It can be a humbling experience to sit in front of a powerful computer armed with great software and connected to the rest of the world, and to have no idea where to begin looking, when you’ve searched enough, and how confident to be in your findings.

“The new smart,” says Hess, “will be about trying to overcome the two big inhibitors of critical thinking and team collaboration: our ego and our fears.”

Want to know more about our firm?

  • Visit charlesgriffinllc.com and see our two blogs, this one and The Divorce Asset Hunter;
  • Look at my book, The Art of Fact Investigation (available in free preview for Kindle at Amazon). There is a detailed section on logic and inference in the law.
  • Watch me speak about Helping Lawyers with Fact Finding, here. We offer training for lawyers, and I speak across the country to legal groups about the proper mindset for legal inquiry.
  • If you are a member of the ABA’s Litigation Section, see my piece in the current issue of Litigation Journal, “Five Questions Litigators Should Ask: Before Hiring an Investigator (and Five Tips to Investigate It Yourself).” It contains a discussion of open-mindedness.

[1] van Opijnen, M. & Santos, C. Artif Intell Law (2017) 25: 65. doi:10.1007/s10506-017-9195-8


While the U.S. Supreme Court is deciding whether it’s lawful to covertly track a suspected felon through warrantless GPS monitoring (see April 15, 2011 petition here), the European Commission is tackling a more powerful, already implemented technology that could potentially threaten everyone’s privacy if left unregulated.


Ever heard of the “Internet of Things”? The term was coined by the Radio Frequency Identification (RFID) community 10 years ago and refers to sensors that can detect physical and environmental changes and report them back over the internet. (RFID technology uses radio waves to read data from an electronic tag and has commonly been used by businesses for inventory management and logistics.)

The Internet of Things is a collection of sensors that are “readable, recognizable, locatable, addressable and/or controllable via the Internet.” Imagine these as sensors of any kind with the ability to monitor any type of action, including radiation detection.

The good news about having lots of sensors spread around: The recent devastating earthquakes and tsunami in Japan prompted a need for immediate region-wide radiation detection. During what has emerged in the last few weeks as a nuclear accident ranked as seriously as Chernobyl, the internet of things played its part in monitoring and reporting back over IP (Internet Protocol) the radiation levels in real-time to news sources, rescue and aid organizations, and the brave cleanup crews. Hundreds of radiation sensors, very much like weather sensors, were already in place – strategically positioned around the country for an event just like this disaster.

Sensors, like the ones used to monitor radiation in Japan, can all be operated remotely and businesses are beginning to use them in remarkable ways. One company allows food suppliers to trace their goods along the supply chain, allowing their customers to see where the food came from. Another lets farmers monitor the health and vitals of their livestock through sensors planted in an animal’s ear. And the technology is not reserved only for businesses, thanks to a company making recent waves in the news called Pachube.  

Now anyone can use the system to link a sensor, and have the Pachube computer control a setting. For instance, one developer uses a temperature sensor in his office and has Pachube automatically turn on the fan for him. Pachube’s sensor data is available to anyone in real-time, and the service is free. It’s clear that these “smart systems” are allowing businesses to improve their services and better allocate their resources, but they could also be used for more sinister purposes.
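The fan example reduces to a few lines of logic. The sketch below is hypothetical (it does not show Pachube’s actual API); it simply maps temperature readings, as they might arrive from a networked sensor, to on/off commands for a relay.

```python
def fan_command(temperature_c, threshold_c=27.0):
    """Decide whether the fan should be on, given one sensor reading.

    In a service like Pachube, the reading would arrive over HTTP and
    the command would go back out to a networked switch; both endpoints
    are left out of this sketch.
    """
    return "ON" if temperature_c >= threshold_c else "OFF"

def control_loop(readings, threshold_c=27.0):
    """Map a stream of temperature readings to fan commands."""
    return [fan_command(r, threshold_c) for r in readings]
```

The point is how little code stands between a public sensor feed and a physical action, which is exactly what makes “democratizing the sensor” both useful and unsettling.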

But if we let our imagination run a little, we start to see a potential problem for privacy.

Envision walking past a remotely operated sensor, monitored over a service like Pachube, while all of your clothes and electronic devices contain RFID tags. The sensor identifies exactly who you are from the tags and reports your preferences to the receiving party – the manufacturer, for instance – which already has your credit card information on file. This is where the implications and dangers of this kind of technology really begin to run rampant, and why many countries are already ahead of the game in preparing regulation.

The European Commission, along with the supply chain standards organization GS1 and the European Network and Information Security Agency (ENISA), is working on guidelines for all companies in Europe that use RFID technology, in order to address data protection. Miguel Lopera, GS1’s CEO, stated that the partnership is working so that “no personal data is actually present on a tag.” Is it then up to the individual companies to protect the purchaser’s information in some sort of gentleman’s agreement?

Sensors like the ones used to transmit radiation data in Japan are undeniably important during a crisis. But if left unchecked, this technology, along with Pachube’s efforts to “democratize the sensor,” could allow anyone to set up a sensor and secretly monitor what it is reading.

I don’t know about you, but that idea scares me.