An entire day at a conference on artificial intelligence and the law last week in Chicago produced this insight about how lawyers are dealing with the fast-changing world of artificial intelligence:

Many lawyers are like someone who knows he needs to buy a car but knows nothing about cars. He knows he needs to get from A to B each day and wants to get there faster. So, he is deposited at the largest auto show in the world and told, “Decide which car you should buy.”

Whether it’s at smaller conferences or at the gigantic, auto-show-like legal tech jamborees in Las Vegas or New York, the discussion of AI seems to be dominated by the companies that produce the stuff. Much less on show are people who use legal AI in their everyday lives.

At my conference, the keynote address and two more panels were dominated by IBM. Other familiar names in AI from the worlds of smart contracting and legal research were there, along with one of the major “old tech” legal research giants. All of the products and services sounded great, which means the salespeople were doing their jobs.

But the number of people who presented about actually using AI after buying it? Just a few (including me). “We wanted to get more users,” said one of the conference organizers, who explained that lawyers are reluctant to describe the ways they use AI, lest they give up valuable pointers to their competitors.

Most of the questions and discussion from lawyers centered around two main themes:

  1. How can we decide which product to buy when there are so many, and they change so quickly?
  2. How can we organize our firm’s business model in such a way that it will be profitable to use expensive new software (“software” being what AI gets called after you start using it)?

Law firm business models are not my specialty, but I have written before, and spoke last week, about evaluating new programs.

Only you (and not the vendor) can decide how useful a program is, by testing it. Don’t let the vendors feed you canned examples of how great their program is. Don’t put in a search term or two while standing at a trade show kiosk. Instead, plug in a current problem or three while sitting in your office and see how well the program does compared to the searches you ran last week.

You mean you didn’t run the searches, but you’re deciding whether to buy this expensive package? You should at least ask the people who will do the work what they think of the offering.

I always like to put in my own company or my own name and see how accurate a fact-finding program is. Some of them (which are still useful some of the time) think I live in the house I sold eight years ago. If you’re going to buy, you should know what a program can do and what it can’t.

As with other salespeople in other industries, AI sales staff won’t tell you what their programs are bad at doing. And most importantly, they won’t tell you how well or how badly (usually badly) their program integrates with other AI software you may be using.

No matter how good any software is, you will need good, inquisitive and flexible people running it and helping to coordinate the outputs of the different products you are using.

While sales staff may have subject-matter expertise in law (it helps if they are lawyers themselves), they cannot possibly specialize in all facets of the law. Their job is to sell, and they should not be criticized for it.

They have their job to do, and as a responsible buyer, you have yours.

For more on what an AI testing program could look like and what kinds of traits the best users of AI should have, see my forthcoming law review article here:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3085263

 

By now, if a lawyer isn’t thinking hard about how automation is going to transform the business of law, that lawyer is a laggard.

You see the way computers upended the taxi, hotel, book and shopping mall businesses? It’s already started in law too. As firms face resistance over pricing and look to get more efficient, the time is now to start training people to work with – and not in fear of – artificial intelligence.

And be not afraid.

There will still be plenty of lawyers around in 10 or 20 years no matter how much artificial intelligence gets deployed in the law. But the roles those people will play will in many respects be different. The new roles will need different skills.

In a new Harvard Business Review article (based on his new book, “Humility Is the New Smart”), Professor Ed Hess of the Darden School of Business argues that in the age of artificial intelligence, being smart won’t mean the same thing as it does today.

Because smart machines can process, store and recall information faster than any person, the skills of memorizing and recall are not as important as they once were. The new smart “will be determined not by what or how you know but by the quality of your thinking, listening, relating, collaborating and learning,” Hess writes.

Among the many concrete things this will mean for lawyers are two aspects of fact investigation we know well and have been writing about for a long time.

  1. Open-mindedness will be indispensable.
  2. Even for legal research, logical deduction is out, logical inference is in.

Hess predicts we will “spend more time training to be open-minded and learning to update our beliefs in response to new data.” What could this mean in practice for a lawyer?

If all you know how to do is to gather raw information from a limited universe of documents, or perhaps spend a lot of time cutting and pasting phrases from old documents onto new ones, your days are numbered. Technology-assisted review (TAR) already does a good job sorting out duplicates and constructing a chain of emails so you don’t have to read the same email 27 times as you read through a long exchange.

But even as computers become smarter and faster, they are sometimes overwhelmed by the vast amounts of new data coming online all the time. I wrote about this in my book, “The Art of Fact Investigation: Creative Thinking in the Age of Information Overload.”

I made the overload point with respect to finding facts outside discovery, but the same phenomenon is hitting legal research too.

In their article “On the Concept of Relevance in Legal Information Retrieval,” published earlier this year in the journal Artificial Intelligence and Law,[1] Marc van Opijnen and Cristiana Santos wrote:

“The number of legal documents published online is growing exponentially, but accessibility and searchability have not kept pace with this growth rate. Poorly written or relatively unimportant court decisions are available at the click of the mouse, exposing the comforting myth that all results with the same juristic status are equal. An overload of information (particularly if of low-quality) carries the risk of undermining knowledge acquisition possibilities and even access to justice.”

Legal research suffers from the overload problem, and e-discovery faces it too, despite TAR and whatever technology succeeds TAR (and something will). Whole areas of data are now searchable and discoverable when once they were not. The more you can search, the more there is to search. And a lot of what comes back is garbage.

Lawyers who will succeed in using ever more sophisticated computer programs will need to remain mindful that they (the lawyers), and not the computers, are in charge. Open-mindedness here means accepting that computers are great at some things, but that for a great many years an alert mind will be able to sort through results in a way a computer won’t. The kind of person who will succeed at this will be entirely active – and not passive – while using the technology. Anyone using TAR knows that it requires training before it can be used correctly.

One reason the mind needs to stay engaged is that not all legal reasoning is deductive, and logical deduction is the basis for computational logic. Michael Genesereth of CodeX, Stanford’s Center for Legal Informatics, wrote two years ago that computational law “simply cannot be applied in cases requiring analogical or inductive reasoning,” though if there are enough judicial rulings interpreting a regulation, the computers could muddle through.

For logical deduction to work, you need to know what step one is before you proceed to step two. Sherlock Holmes always knew where to start because he was a character in entertaining works of fiction. In solving a puzzle, he laid out his reasoning in a way that made his solution seem the only logical one.

But it wasn’t. In real life, law enforcement, investigators and attorneys faced with mountains of jumbled facts have to pick their way through all kinds of evidence that often produces mutually contradictory theories. The universe of possible starting points is almost infinite.

It can be a humbling experience to sit in front of a powerful computer armed with great software and connected to the rest of the world, and to have no idea where to begin looking, when you’ve searched enough, and how confident to be in your findings.

“The new smart,” says Hess, “will be about trying to overcome the two big inhibitors of critical thinking and team collaboration: our ego and our fears.”

Want to know more about our firm?

  • Visit charlesgriffinllc.com and see our two blogs, this one and The Divorce Asset Hunter;
  • Look at my book, The Art of Fact Investigation (available in free preview for Kindle at Amazon). There is a detailed section on logic and inference in the law.
  • Watch me speak about Helping Lawyers with Fact Finding, here. We offer training for lawyers, and I speak across the country to legal groups about the proper mindset for legal inquiry.
  • If you are a member of the ABA’s Litigation Section, see my piece in the current issue of Litigation Journal, “Five Questions Litigators Should Ask Before Hiring an Investigator (and Five Tips to Investigate It Yourself).” It contains a discussion of open-mindedness.

[1] Marc van Opijnen & Cristiana Santos, “On the Concept of Relevance in Legal Information Retrieval,” Artificial Intelligence and Law 25 (2017): 65. doi:10.1007/s10506-017-9195-8

 

One lawyer we know has a stock answer when clients ask him how good their case is: “I don’t know. The courts are the most lawless place in America.”

What he means is that even though the law is supposed to foster predictability so that we will know how to act without breaking our society’s civil and criminal rules, there is a wide variety of opinion among judges even in the same jurisdictions about the matters that make or break a case on its way to a jury.

Our friend’s answer came to mind over the weekend while reading an interesting roundup of views from experienced trial lawyers about why the trial of Bill Cosby outside Philadelphia resulted in a deadlocked jury and mistrial, announced on Saturday.

In the New York Times, the attorneys mostly fell into two camps: those who thought lead witness Andrea Constand presented the jury with credibility problems because of inconsistent testimony, and those who thought the judge’s decision to limit the admission of evidence of many other similar allegations substantially weakened the prosecution’s case.

My view is that the two reasons are linked: evidence of claims similar to Constand’s could easily have overcome the credibility problem, had the jury been able to hear from the many other women who alleged Cosby had drugged and had sexual contact with them too.

In another case with identical facts and a different judge, the other accusers’ testimony might have made it in. It’s a great example of two things we tell clients all the time:

  1. Persuasive evidence is good, but admissible evidence is what you really want when you know you’re going to trial.
  2. A lot of legal jobs are now being done by computers, but while there are human judges they will differ the way humans always do: in a way that is never 100% predictable.

Admissibility

When we are assigned to gather facts in civil or criminal matters, all of the evidence must be gathered legally and ethically. Otherwise it could easily turn out to be inadmissible. But even if you do everything right, admissibility is sometimes out of your control. The whole case can turn on it.

If all you are doing is trying to get as much information as you can, with no thought of taking it to trial, then admissibility may not be much of a concern. Think of using hearsay evidence to decide whether someone is rich enough to be worth suing, or of finding personally damaging information that may be excluded as prejudicial but that the other side would dread seeing argued in a motion. It could increase the chance of a more favorable settlement for you.

In the Cosby case the information in question would have been very helpful to the prosecution.

Ordinarily the justice system doesn’t like to see evidence of other bad acts used in a case to paint a picture of a defendant’s character. Rule 404(b) of the Federal Rules of Evidence excludes this kind of thing, but allows evidence of another act to be admitted for another purpose, “such as proving motive, opportunity, intent, preparation, plan, knowledge, identity, absence of mistake, or lack of accident.”

So the prosecution could have argued that the other accusers’ similar claims of being drugged and subjected to sexual contact were evidence of Cosby’s intent or a lack of accident, and might even have been seen as preparation for the time Constand went to Cosby’s home and was drugged.

But the judge wouldn’t let any of that in. Pennsylvania’s rules in this section are tougher on the prosecution than the federal rules. The state’s Rule 404(b)(2) “requires that the probative value of the evidence must outweigh its potential for prejudice. When weighing the potential for prejudice of evidence of other crimes, wrongs, or acts, the trial court may consider whether and how much such potential for prejudice can be reduced by cautionary instructions.”

It seems the judge feared that testimony from the other accusers would have prejudiced the jury, even with a cautionary instruction that their accounts alone did not constitute proof of Cosby’s guilt in this matter with Constand.

Unpredictability

The legal world is justifiably occupied in trying to figure out how to reduce costs by automating as many tasks as possible. Gathering of some facts can be automated, but not always, for the simple reason that facts are infinitely variable and therefore not wholly predictable.

Implicit in fact gathering is evaluating the facts as you gather them. You are constantly evaluating because you can’t look everywhere: promising leads get follow-up, the others don’t. Machines can scan millions of documents using optical character recognition because there are only so many combinations of letters out there. But the variety of human experience is limitless.

If machines can’t be trusted to properly evaluate someone’s story, imagine the problems if that story has never been written down. Think about all the things you would not want the world to know about you. How much of all of that has been written down? Probably very little. It was human effort alone that developed the other witnesses the prosecution wanted to call.

The only way a computer might have helped in this case would have been to predict – based on prior cases – which way the judge would rule on excluding the other evidence. Even that would be a tough program to write, because these decisions turn on so many unique factors. And since judges are assigned at random, it wouldn’t have helped shape the decision about whether to charge Cosby.


We don’t usually think of the law as the place our most creative people go. Lawyers with a creative bent often drift into business, where a higher risk tolerance is often required to make a success of yourself. Some of our greatest writers and artists have legal training, but most seem to drop out when their artistic calling tells them law school isn’t for them.


Still, creativity and innovation are all the rage in law schools today. Northwestern has a concentration in them, as does Vanderbilt, and Harvard has a course on Innovation in Legal Education and Practice.

Like it or not, as artificial intelligence takes over an increasing number of dreary legal tasks, there will be less room for plodding minds in law firms. The creative and innovative will survive.

This doesn’t worry us, because we’ve long talked about the need for creativity in fact finding. It’s even in the subtitle of my book, The Art of Fact Investigation: Creative Thinking in the Age of Information Overload.

The book takes up the message we have long delivered to clients: computers can help speed up searching, but computers have also made searching more complex because of the vast amounts of information we need to sort through.

  • Deadlines are ever tighter, but now we have billions of web pages to search.
  • Information about a person used to be concentrated around where he was born and raised. Today, people are more mobile and without leaving their base, they can incorporate a dozen companies across the country doing business in a variety of jurisdictions around the world.
  • Databases make a ton of mistakes. Two of them, for example, think I live in the house I sold seven years ago.
  • Most legal records are not online. Computers are of limited use in searching for them, and even less useful in figuring out their relevance to a particular matter.
  • Since you can’t look everywhere, investigation is a matter of making educated guesses and requires a mind that can keep several plausible running theories going at the same time. That’s where the creativity comes in. How do you form a theory of where X has hidden his assets? By putting yourself in his shoes, based on his history and some clues you may uncover through database and public-record research.

The idea that technological change threatens jobs is hardly new, as pointed out in a sweeping essay by former world chess champion Garry Kasparov in the Wall Street Journal.

Twenty years after losing a chess match to a computer, Kasparov writes: “Machines have been displacing people since the industrial revolution. The difference today is that machines threaten to replace the livelihoods of the class of people who read and write articles about them,” i.e. the writer of this blog and just about anyone reading it.

Kasparov argues that to bemoan technological progress is “little better than complaining that antibiotics put too many gravediggers out of work. The transfer of labor from humans to our inventions is nothing less than the history of civilization … Machines that replace physical labor have allowed us to focus more on what makes us human: our minds.”

The great challenge in artificial intelligence is to use our minds to manage the machines we create. That challenge extends to law firms. We may have e-discovery, powerful computers and databases stuffed with information, but it still requires a human mind to sort good results from bad and to craft those results into persuasive arguments.

After all, until machines replace judges and juries, it will take human minds to persuade other human minds of the value of our arguments.

Want to know more?

  • Visit charlesgriffinllc.com and see our two blogs, The Ethical Investigator and the Divorce Asset Hunter;
  • Look at my book, The Art of Fact Investigation (available in free preview for Kindle at Amazon);
  • Watch me speak about Helping Lawyers with Fact Finding, here.