An entire day at a conference on artificial intelligence and the law last week in Chicago produced this insight into how lawyers are dealing with a fast-changing field:

Many lawyers are like someone who knows he needs to buy a car but knows nothing about cars. He knows he needs to get from A to B each day and wants to get there faster. So, he is deposited at the largest auto show in the world and told, “Decide which car you should buy.”

Whether it’s at smaller conferences or at the gigantic, auto-show-like legal tech jamborees in Las Vegas or New York, the discussion of AI seems to be dominated by the companies that produce the stuff. Far less visible are the people who use legal AI in their everyday work.

At my conference, the keynote address and two more panels were dominated by IBM. Other familiar names in AI from the worlds of smart contracting and legal research were there, along with one of the major “old tech” legal research giants. All of the products and services sounded great, which means the salespeople were doing their jobs.

But how many presenters talked about actually using AI after buying it? Just a few (including me). “We wanted to get more users,” said one of the conference organizers, who explained that lawyers are reluctant to describe the ways they use AI, lest they give up valuable pointers to their competitors.

Most of the questions and discussion from lawyers centered around two main themes:

  1. How can we decide which product to buy when there are so many, and they change so quickly?
  2. How can we organize our firm’s business model in such a way that it will be profitable to use expensive new software (“software” being what AI gets called after you start using it)?

Law firm business models are not my specialty, but I have written about evaluating new programs before, and I spoke about it again last week.

Only you (and not the vendor) can decide how useful a program is, by testing it. Don’t let the vendors feed you canned examples of how great their program is. Don’t put in a search term or two while standing at a trade show kiosk. Instead, plug in a current problem or three while sitting in your office and see how well the program does compared to the searches you ran last week.

You mean you didn’t run the searches, but you’re deciding whether to buy this expensive package? You should at least ask the people who will do the work what they think of the offering.

I always like to put in my own company or my own name and see how accurate a fact-finding program is. Some of them (which are still useful some of the time) think I live in the house I sold eight years ago. If you’re going to buy, you should know what a program can do and what it can’t.

As with salespeople in other industries, AI sales staff won’t tell you what their programs are bad at doing. And most importantly, they won’t tell you how well or how badly (usually badly) their program integrates with other AI software you may be using.

No matter how good any software is, you will need good, inquisitive and flexible people running it and helping to coordinate the outputs of the different products you are using.

While sales staff may have subject-matter expertise in law (it helps if they are lawyers themselves), they cannot possibly specialize in all facets of the law. Their job is to sell, and they should not be criticized for it.

They have their job to do, and as a responsible buyer, you have yours.

For more on what an AI testing program could look like and what kinds of traits the best users of AI should have, see my forthcoming law review article here:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3085263


Anyone following artificial intelligence in law knows that its first great cost saving has been in the area of document discovery. Machines can weed out duplicates so that associates don’t have to read the same document seven times, and they can assemble thousands of emails into a quick-to-read series of a dozen email chains. More sophisticated programs refine their abilities with the help of human input.
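
To make those mechanics concrete, here is a minimal Python sketch of the two techniques just described: exact-duplicate detection by hashing and email threading by reply headers. The field names (“text”, “message_id”, “in_reply_to”) are my assumptions, and commercial review platforms go much further (near-duplicate detection, predictive coding), so treat this as an illustration only.

```python
import hashlib
from collections import defaultdict

def dedupe(documents):
    """Drop exact duplicates by hashing normalized text."""
    seen = set()
    unique = []
    for doc in documents:
        # Hash the normalized body; identical texts collide on the digest.
        digest = hashlib.sha256(doc["text"].strip().lower().encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

def thread_emails(emails):
    """Group emails into chains via the standard reply headers."""
    # Map each message to its parent (None for the first message in a chain).
    parent = {e["message_id"]: e.get("in_reply_to") for e in emails}

    def root(mid):
        # Walk up the reply chain; guard against malformed cycles.
        seen_ids = set()
        while parent.get(mid) and mid not in seen_ids:
            seen_ids.add(mid)
            mid = parent[mid]
        return mid

    # Group every message under the root of its chain, so a reviewer
    # reads one chain instead of every forwarded copy.
    chains = defaultdict(list)
    for e in emails:
        chains[root(e["message_id"])].append(e)
    return chains
```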

Law firms are already saving their clients millions in adopting the technology. It’s bad news for the lawyers who used to earn their livings doing extremely boring document review, but good for everyone else. As in the grocery, book, taxi and hotel businesses, the march of technology is inevitable.

Other advances in law have come with search engines such as Lex Machina, which searches a small number of databases to predict the outcome of patent cases. Other AI products, which have scanned all U.S. Supreme Court decisions, do a better job than people at predicting how the Court will decide a particular case, based on the briefs submitted in a live matter and the justices deciding it.
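
None of these vendors publishes its models, but the general shape of a text-based outcome predictor is well known: turn the briefs into features, then train a classifier on past outcomes. Here is a minimal Python sketch of that idea; the training data are invented, and this is a textbook baseline, not any product’s actual method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: text of past briefs paired with outcomes
# (1 = the side filing the brief won).
briefs = [
    "The statute plainly forecloses petitioner's reading ...",
    "Circuit precedent squarely supports reversal ...",
]
outcomes = [0, 1]

# TF-IDF features feeding a linear classifier: the standard baseline
# for predicting outcomes from legal text.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(briefs, outcomes)

# Probability of each outcome for a new, unseen brief.
print(model.predict_proba(["Petitioner argues the statute is ambiguous ..."]))
```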

When we think about our work gathering facts, we know that most of our searching is not done in a closed, limited environment. We don’t look through a “mere” four million documents, as in a complex discovery, or the trivial (for a computer) collection of U.S. Supreme Court cases. Our work takes place where the entire world is the possible location of the search.

A person who seldom leaves New York may have a Nevada company with assets in Texas, Bermuda or Russia.

Until all court records in the U.S. are scanned and subject to optical character recognition, artificial intelligence won’t be able to do our job for us in looking over litigation that pertains to a person we are examining.

That day will surely come for U.S. records, perhaps within 10 years, but it is not here yet. For the rest of the world, the wait will be longer.

Make no mistake: computers are essential to our business. Still, the set of databases we often use to begin a case, including Westlaw and LexisNexis, is not as easy to use as Lex Machina or other closed systems, because those databases rely on abstracts of documents as opposed to the documents themselves.

They are frequently wrong about individual details, mix up different people with the same name, and often contain outdated material. My profile on one of them, for instance, includes my company but a home phone number I haven’t used in eight years. My current home number is absent. Other databases get my phone number right, but not my company.

Wouldn’t it be nice to have a “Kayak”-type system that could compare a person’s profile across five or six paid databases, and then sort the gold from the garbage?

It would, but it might not happen so soon, and not just because of the open-universe problem.

Even assuming these databases could look to all documents, two other problems arise:

  1. They are on incompatible platforms, so integrating them would be a programming problem (though a modest one; see the sketch after this list).
  2. More importantly, they are paid products, whereas Kayak searches free travel and airline sites. They also require licenses to use, and the amount of data you can retrieve is governed by which of several permissible uses the user declares to gain access. An integrator would therefore have to vet the user for each system and process payment on any pay-per-use platform.
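
As the list suggests, the obstacles are commercial and legal more than computational; the comparison step itself is simple. Here is a minimal Python sketch of that step, assuming each vendor’s profile has already been retrieved as a plain dictionary (the vendor names, fields and values below are all hypothetical):

```python
from collections import Counter

def merge_profiles(profiles):
    """Compare one person's record across several databases, field by field."""
    fields = {f for record in profiles.values() for f in record}
    report = {}
    for field in fields:
        # Tally each value reported for this field across vendors.
        values = Counter(record[field] for record in profiles.values() if field in record)
        consensus, agreeing = values.most_common(1)[0]
        report[field] = {
            "consensus": consensus,
            "sources_agreeing": agreeing,
            "conflicts": [v for v in values if v != consensus],
        }
    return report

# Hypothetical vendor feeds: two agree on the phone number, one dissents.
profiles = {
    "vendor_a": {"company": "Acme Research LLC", "home_phone": "212-555-0100"},
    "vendor_b": {"company": "Acme Research LLC", "home_phone": "212-555-0199"},
    "vendor_c": {"home_phone": "212-555-0100"},
}
print(merge_profiles(profiles))
```

Note that conflicts are reported rather than silently resolved: for an investigator, a disagreement between databases is itself a lead.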

These are hardly insurmountable problems, but they do help illustrate why, with AI marching relentlessly toward the law firm, certain areas of practice will succumb to more automation faster than others.

What will be insurmountable for AI is this: you cannot ask computers to examine what is not written down, and much of the most interesting information about people resides not on paper but in their minds and the minds of those who know them.

The next installment of this series on AI will consider how AI could still help steer us toward the right people to interview.

A story in today’s Wall Street Journal, “Why the Virtual Reality Hype Is About to Come Crashing Down,” makes the simple point that computers haven’t yet mastered enough of the permutations of real life to make a virtual-reality headset experience resemble a genuine one.

A short demo is one thing, but life goes on after the short demo is finished.


“The dirty little secret about [virtual reality] is that the hardware has run ahead of the content,” says the Journal.

My view is that catching up to real life is something it is hard to see computers doing anytime soon, a point made in my recently published book, The Art of Fact Investigation: Creative Thinking in the Age of Information Overload.

The book makes the case that figuring out problems related to human behavior requires guesswork and the flexibility to change course when one series of guesses appears to be the wrong way forward. Computers are wonderfully flexible and free of emotional bias, but are completely unimaginative.

While computers can sort easily through data people enter onto their hard drives, they have a much harder time saying, “Here is something you should expect to find but do not.” Example: risk management programs failed to note the suspicious fact that Bernard Madoff’s alleged billions under management were audited by a tiny accounting firm in a suburban shopping mall. The computers did not say (because they were not programmed ahead of time to say), “I should be seeing a Big-Four auditor here but I don’t see it.”
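
To see why the absence had to be imagined in advance, consider what such a rule looks like in code. The sketch below is hypothetical (the field names and the billion-dollar threshold are my assumptions): the check fires only because a human anticipated the mismatch and wrote it down ahead of time.

```python
# Hypothetical rule: a fund of this size should have a major auditor.
BIG_AUDITORS = {"Deloitte", "PwC", "EY", "KPMG"}

def missing_expected_auditor(fund):
    """Flag a fund whose auditor is implausibly small for its size.

    The rule works only because a human imagined the anomaly in
    advance; the field names and threshold are assumptions.
    """
    if fund["assets_under_management"] > 1_000_000_000 \
            and fund["auditor"] not in BIG_AUDITORS:
        return (f"Expected a major auditor for "
                f"${fund['assets_under_management']:,}; found {fund['auditor']!r}")
    return None  # Nothing anomalous under this rule.

# Example in the spirit of the Madoff case (figures illustrative only):
print(missing_expected_auditor(
    {"assets_under_management": 17_000_000_000, "auditor": "Friehling & Horowitz"}
))
```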

But what about all the hype about “Big Data” and our ability to predict things based on billions of individual cases that only a computer can keep track of?

The problem is that in some kinds of investigations (Who is this particular person? How will this particular company react to litigation?) we are not asking what other people or companies have done in the past.

Big data aggregates lots of individual results, but sometimes when the stakes are high, we want to disaggregate and find out what this particular person did at work eight years ago to prompt a departure left off a resume, or what this particular company’s board is like when faced with a lawsuit.

You won’t find those answers in any magical database. If you are lucky and smart, you will find some clues that will help you put together a probable story.

If that sounds like less than the neat and tidy solution you were hoping for, who said real life was neat and tidy?

Want to know more?

  • Visit charlesgriffinllc.com and see our two blogs, The Ethical Investigator and the Divorce Asset Hunter;
  • Look at my book, The Art of Fact Investigation (available in free preview for Kindle at Amazon);
  • Watch me speak about Helping Lawyers with Fact Finding, here.

When you see two big book reviews and an entire special section of The Wall Street Journal devoted to a topic, a curious person should ask: how does this affect me, my family, my business, the world?

The book is Big Data: A Revolution That Will Transform How We Live, Work and Think, and one of its authors, Kenneth Cukier, is a former colleague of mine. The book has received excellent reviews and deserves to be read by anyone who wants to understand what’s going on in a big sector of world business.

It’s clear that Big Data – the ability today to process more information about people than ever before – has huge implications for businesses. But its effect on a good investigation? Harder to see.

Good investigators don’t care if 68 percent of people who drive Cadillacs also eat at a steak house once every seven weeks. That may be a statistic worth something to steakhouse owners looking for new customers or Cadillac dealers wondering which kind of restaurant social network to link to.

But for us? Hard to see how it will help. Investigations are done one at a time. If we need to know exactly where someone is going to go on a certain day, we need to follow him.

It’s not good enough to know that many of the people in his congregation have summer homes in a particular part of Pennsylvania: if we are doing an asset search, we need to see if HE has a home there. Then we need to be able to prove it.

That’s where big data stops and investigation takes over. Big data is about probabilities, and investigation is about evidence and proof.

Of course, courts do not always reject statistical evidence. The now-famous series of Shonubi decisions in the Second Circuit is well documented here. But given the choice between a statistical probability and hard proof, why would anyone who needed help with fact finding prefer the former?