Have you ever noticed that artificial intelligence seems much more frightening when people write about what it will become, yet comes across as imperfect, bumbling software when they write about AI in the present tense?

You get one of each in this morning’s Wall Street Journal. The paper paints a horrific picture of what the ruthless secret police of the world’s dictatorships will be able to do with AI in The Autocrat’s New Tool Kit, including facial recognition to track behavior more efficiently and to target specific groups with propaganda.

But then see the Journal’s story about how social-media companies have struggled to block violent content from this week’s terrorist attack on two mosques in New Zealand. With all of their computing power and some of the world’s smartest programmers and mathematicians, Facebook and YouTube allowed the killings to be streamed live on the internet. It took an old-fashioned phone call from the New Zealand police to tell them to take the live evildoing down. Just as the New York Times or CNBC would never put such a thing on their websites, neither should Facebook or YouTube.

Wouldn’t you think that technology that can precisely target where to send the most effective propaganda could distinguish between an extremely violent film and extremely violent reality? I would. After all, it’s trivial for these sites to have indexed all the movie clips already uploaded onto their systems. If facial recognition works on a billion Chinese people, why not on the thousands of known film actors floating up there on the YouTube cloud? If it’s not a film you already know and there are lots of gunshots, the video should be flagged for review.
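To make that concrete, here is a minimal sketch of the triage rule described above, in Python. Everything in it is hypothetical: the fingerprint index and the gunshot count stand in for whatever a platform actually runs, and the threshold is invented. The point is only that “unknown footage plus repeated gunfire goes to a person” is an easy rule to write down.

```python
# Hypothetical triage rule: a clip that matches no known, already-indexed film
# and whose audio contains repeated gunshot-like events is held for human review.
# The fingerprint index, gunshot count and threshold are all invented placeholders.

KNOWN_FILM_FINGERPRINTS = {"a1b2c3", "d4e5f6"}  # stand-in for an index of uploaded movies

def matches_known_film(clip_fingerprint: str) -> bool:
    return clip_fingerprint in KNOWN_FILM_FINGERPRINTS

def should_flag_for_review(clip_fingerprint: str, gunshot_events: int,
                           threshold: int = 3) -> bool:
    """Unknown footage with lots of gunfire gets a human look before it streams on."""
    if matches_known_film(clip_fingerprint):
        return False  # a film we already know; let it through
    return gunshot_events >= threshold

# An unrecognized live stream with ten detected gunshots would be flagged.
print(should_flag_for_review("ffff00", gunshot_events=10))  # True
```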

Why is this so hard? For one thing, the computing power the companies need doesn’t exist yet. “The sheer volume of material posted by [YouTube’s] billions of users, along with the difficulty in evaluating which videos cross the line, has created a minefield for the companies,” the Journal said.

It’s that minefield that disturbs me. A minefield dotted with difficulties about whether or not to show mass murder in real time? What would be the harm of having a person look at any video that features mass killing before it’s cleared to air? If the computers can’t figure out what to do with such material, let a person look at it.

What is so frightening about AI is not the computing power and the uses the world can find for it, but the abdication of self-control and ethical consideration by the people using it.

I want the police in my country to have guns and to use them on criminals who are about to kill innocent people. I don’t want police states shooting peaceful demonstrators. I’m happy to have police in the U.S. use facial recognition if it will help stop a person from blowing up the stadium where the Super Bowl is being held. I would not want cameras on every intersection automatically tracking my every movement.

Guns are neither smart nor stupid. They are artificial power that increases the harm an unarmed person can inflict. Guns are essential in maintaining freedom but can suppress freedom too.

Same for AI. There are lots of wonderful applications for it. Every bit of software in use today was called AI before it came into common use; once it did, we just called it software.

What sets off the good AI from the bad is the way people use it. Streaming on YouTube can be a wonderful thing. But just as we need political accountability to make sure the guns our armies and police have aren’t abused, we need the people at YouTube to control their technology in a responsible way.

The AI at Facebook and YouTube isn’t dumb: Dumb are the people who trusted too readily that the tool could decide for itself what the right call would be when the horrors from Christchurch began to be uploaded.

Do you ever wonder why some gifted small children play Mozart, but you never see any child prodigy lawyers who can draft a complicated will?

The reason is that the rules of how to play the piano have far fewer permutations and judgment calls than deciding what should go into a will. “Do this, not that” works well with a limited number of keys in each octave. But the permutations of a will are infinite. And by the way, child prodigies can play the notes, but usually not as soulfully as an older pianist with more experience of the range of emotions an adult experiences over a lifetime.

You get to be good at something by doing a lot of it. You can play the Mozart over and over, but how do you know what other human beings may need in a will, covering events that have yet to happen?

Not by drafting the same kind of will over and over, that’s for sure.

Reviewing a lot of translations done by people is the way Google Translate can manage rudimentary translations in a split second. Reviewing a thousand decisions made in document discovery and learning from mistakes picked out by a person is the way e-discovery software looks smarter the longer you use it.
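As an illustration of that feedback loop, here is a toy sketch using scikit-learn (my choice purely for illustration; real e-discovery platforms use their own engines and far more data). A simple classifier is trained on documents a person has already marked relevant or not, guesses on a new document, and is retrained once the reviewer corrects it.

```python
# Toy version of the human-in-the-loop training that makes review software
# "look smarter the longer you use it." scikit-learn and the sample documents
# are stand-ins for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = [
    "board approved the merger terms",      # marked relevant by a reviewer
    "merger negotiations and share price",  # relevant
    "office holiday party schedule",        # not relevant
    "cafeteria menu for next week",         # not relevant
]
labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(docs), labels)

# The model guesses on a new document; the reviewer corrects the guess,
# and the corrected example is folded back into the training set.
new_doc = "draft merger press release"
guess = model.predict(vectorizer.transform([new_doc]))[0]
docs.append(new_doc)
labels.append(1)  # the human says: relevant, whatever the model guessed
model = LogisticRegression().fit(vectorizer.fit_transform(docs), labels)
```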

But you would never translate a complex, nuanced document with Google Translate, and you sure wouldn’t produce documents without having a partner look them all over.

The craziness that can result from the mindless following of rules is an issue at the forefront of law today, as we debate how much we should rely on artificial intelligence.

Who should bear the cost if AI makes a decision that damages a client? The designers of the software? The lawyers who use it? Or will malpractice insurance evolve enough to spread the risk around so that clients pay in advance in the form of a slightly higher price to offset the premium paid by the lawyer?

Whatever we decide, my view is that human oversight of computer activity is something society will need far into the future. The Mozart line above was given to me by my property professor in law school and appeared in the preface of my book, The Art of Fact Investigation.

The Mozart line is appropriate when thinking about computers, too. And in visual art, I increasingly see parallels between the way artists and lawyers struggle to get at what is true and what is an outcome we find desirable. Take the recent exhibition at the Metropolitan Museum here in New York, called Delirious: Art at the Limits of Reason, 1950 to 1980.

It showed that our struggle with machines is hardly new, even though it would seem so with the flood of scary stories about AI and “The Singularity” that we get daily. The show was filled with the worrying of artists 50 and 60 years ago about what machines would do to the way we see the world, find facts, and remain rational. It seems funny to say that: computers seem to be ultra-rational in their production of purely logical “thinking.”

But starting from what seems to be a sensible or logical premise doesn’t mean that you’ll end up with logical conclusions. On a very early AI level, consider the databases we use today that were the wonders of the world 20 years ago. LexisNexis and Westlaw are hugely powerful tools, but what if you don’t supervise them? If I put my name into Westlaw, it thinks I still live in the home I sold in 2011. All other reasoning Westlaw produces based on that “fact” will be wrong. Noise complaints brought against the residents there have nothing to do with me. A newspaper story about disorderly conduct resulting in many police visits to the home two years ago is also irrelevant when talking about me.[1]

The idea of suppositions running amok came home when I looked at a sculpture last month by Sol LeWitt (1928-2007) called 13/3. At first glance, this sculpture would seem to have little relationship to delirium. It sounds from the outset like a simple idea: a 13×13 grid from which three towers arise. What you get when it’s logically put into action is a disorienting building that few would want to occupy.

As the curators commented, LeWitt “did not consider his otherwise systematic work rational. Indeed, he aimed to ‘break out of the whole idea of rationality.’ ‘In a logical sequence,’ LeWitt wrote, in which a predetermined algorithm, not the artist, dictates the work of art, ‘you don’t think about it. It is a way of not thinking. It is irrational.’”

Another wonderful work in the show, Howardena Pindell’s Untitled #2, makes fun of the faith we sometimes have in what superficially looks to be the product of machine-driven logic. A vast array of numbered dots sits uneasily atop a grid, and at first, the dots appear to be the product of an algorithm. In the end, they “amount to nothing but diagrammatic babble.”

Setting a formula in motion is not deep thinking. The thinking comes in deciding whether the vast amount of information we’re processing results in something we like, want or need. Lawyers would do well to remember that.

[1] Imaginary stuff: while Westlaw does say I live there, the problems at the home are made up for illustrative purposes.

Anyone following artificial intelligence in law knows that its first great cost saving has been in the area of document discovery. Machines can sort through duplicates so that associates don’t have to read the same document seven times, and they can string together thousands of emails to put together a quick-to-read series of a dozen email chains. More sophisticated programs evolve their ability with the help of human input.

Law firms are already saving their clients millions in adopting the technology. It’s bad news for the lawyers who used to earn their livings doing extremely boring document review, but good for everyone else. As in the grocery, book, taxi and hotel businesses, the march of technology is inevitable.

Other advances in law have come with search engines such as Lex Machina, which searches through a small number of databases to predict the outcome of patent cases. Other AI products that have scanned all U.S. Supreme Court decisions do a better job than people in predicting how the court will decide a particular case, based on briefs submitted in a live matter and the judges deciding the case.

When we think about our work gathering facts, we know that most searching is done not in a closed, limited environment. We don’t look through a “mere” four million documents as in a complex discovery or the trivial (for a computer) collection of U.S. Supreme Court cases. Our work is done when the entire world is the possible location of the search.

A person who seldom leaves New York may have a Nevada company with assets in Texas, Bermuda or Russia.

Until all court records in the U.S. are scanned and subject to optical character recognition, artificial intelligence won’t be able to do our job for us in looking over litigation that pertains to a person we are examining.

That day will surely come for U.S. records, and may be here in 10 years, but it is not here yet. For the rest of the world, the wait will be longer.

Make no mistake: computers are essential to our business. Still, the set of databases we often use to begin a case, including Westlaw and LexisNexis, is not as easy to use as Lex Machina or other closed systems, because those databases rely on abstracts of documents as opposed to the documents themselves.

They are frequently wrong about individual information, mix up different individuals with the same name, and often have outdated material. My profile on one of them, for instance, includes my company but a home phone number I haven’t used in eight years. My current home number is absent. Other databases get my phone number right, but not my company.

Wouldn’t it be nice to have a “Kayak” type system that could compare a person’s profile on five or six paid databases, and then sort out the gold from the garbage?

It would, but it might not happen so soon, and not just because of the open-universe problem.

Even assuming these databases could look to all documents, two other problems arise:

  1. They are on incompatible platforms. Integrating them would be a programming problem.
  2. More importantly, they are paid products, whereas Kayak searches free travel and airline sites. In addition, they require licenses to use, and the amount of data you can get depends on which of several permissible uses the user asserts to gain access to the data. A system integration of the sites would mean the integrator would have to vet the user for each system and process payment if it’s a pay-per-use platform.

These are hardly insurmountable problems, but they do help illustrate why, with AI marching relentlessly toward the law firm, certain areas of practice will succumb to more automation faster than others.

What will be insurmountable for AI is this: you cannot ask computers to examine what is not written down, and much of the most interesting information about people resides not on paper but in their minds and the minds of those who know them.

The next installment of this series on AI will consider how AI could still help point us toward the right people to interview.

By now, if a lawyer isn’t thinking hard about how automation is going to transform the business of law, that lawyer is a laggard.

You see the way computers upended the taxi, hotel, book and shopping mall businesses? It’s already started in law too.  As firms face resistance over pricing and are looking to get more efficient, the time is now to start training people to work with – and not in fear of – artificial intelligence.

And be not afraid.

There will still be plenty of lawyers around in 10 or 20 years no matter how much artificial intelligence gets deployed in the law. But the roles those people will play will in many respects be different. The new roles will need different skills.

In a new Harvard Business Review article (based on his new book, “Humility Is the New Smart”), Professor Ed Hess at the Darden School of Business argues that in the age of artificial intelligence, being smart won’t mean the same thing as it does today.

Because smart machines can process, store and recall information faster than any person, the skills of memorizing and recall are not as important as they once were. The new smart “will be determined not by what or how you know but by the quality of your thinking, listening, relating, collaborating and learning,” Hess writes.

Among the many concrete things this will mean for lawyers are two aspects of fact investigation we know well and have been writing about for a long time.

  1. Open-mindedness will be indispensable.
  2. Even for legal research, logical deduction is out, logical inference is in.

Hess predicts we will “spend more time training to be open-minded and learning to update our beliefs in response to new data.” What could this mean in practice for a lawyer?

If all you know how to do is to gather raw information from a limited universe of documents, or perhaps spend a lot of time cutting and pasting phrases from old documents onto new ones, your days are numbered. Technology-assisted review (TAR) already does a good job sorting out duplicates and constructing a chain of emails so you don’t have to read the same email 27 times as you read through a long exchange.
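For readers who want to see how mechanical those two chores really are, here is a stripped-down sketch in Python. The hashing and subject-line threading below are generic techniques, not the internals of any particular TAR product, and the sample emails are invented.

```python
# Two chores TAR handles: dropping exact duplicates and stitching emails into
# a single chain. Generic techniques only; sample data is invented.
import hashlib
from collections import defaultdict

def deduplicate(documents):
    """Keep one copy of each document, keyed by a hash of its normalized text."""
    seen, unique = set(), []
    for text in documents:
        digest = hashlib.sha256(" ".join(text.split()).lower().encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(text)
    return unique

def thread_emails(messages):
    """Group messages into chains using a normalized subject line as the key."""
    threads = defaultdict(list)
    for msg in messages:
        subject = msg["subject"].lower().strip()
        while subject.startswith(("re:", "fw:", "fwd:")):
            subject = subject.split(":", 1)[1].strip()
        threads[subject].append(msg)
    return threads

emails = [
    {"subject": "Merger terms", "body": "First draft attached."},
    {"subject": "RE: Merger terms", "body": "Comments inline."},
    {"subject": "Fwd: RE: Merger terms", "body": "Forwarding for review."},
]
print(len(thread_emails(emails)))  # 1 chain instead of 3 separate reads
```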

But as computers become smarter and faster, they are sometimes overwhelmed by the vast amounts of new data coming online all the time. I wrote about this in my book, “The Art of Fact Investigation: Creative Thinking in the Age of Information Overload.”

I made the overload point with respect to finding facts outside discovery, but the same phenomenon is hitting legal research too.

In their article “On the Concept of Relevance in Legal Information Retrieval,” published in Artificial Intelligence and Law earlier this year,[1] Marc van Opijnen and Cristiana Santos wrote that

“The number of legal documents published online is growing exponentially, but accessibility and searchability have not kept pace with this growth rate. Poorly written or relatively unimportant court decisions are available at the click of the mouse, exposing the comforting myth that all results with the same juristic status are equal. An overload of information (particularly if of low-quality) carries the risk of undermining knowledge acquisition possibilities and even access to justice.”

If legal research suffers from the overload problem, e-discovery faces it too, despite TAR and whatever technology succeeds TAR (and something will). Whole areas of data are now searchable and discoverable when once they were not. The more you can search, the more there is to search. A lot of what comes back is garbage.

Lawyers who will succeed in using ever more sophisticated computer programs will need to remain open-minded that they (the lawyers) and not the computers are in charge. Open-minded here means accepting that computers are great at some things, but that for a great many years an alert mind will be able to sort through results in a way a computer won’t. The kind of person who will succeed at this will be entirely active – and not passive – while using the technology. Anyone using TAR knows that it requires training before it can be used correctly.

One reason the mind needs to stay engaged is that not all legal reasoning is deductive, and logical deduction is the basis for computational logic. Michael Genesereth of Codex, Stanford’s Center for Legal Informatics wrote two years ago that computational law “simply cannot be applied in cases requiring analogical or inductive reasoning,” though if there are enough judicial rulings interpreting a regulation the computers could muddle through.

For logical deduction to work, you need to know what step one is before you proceed to step two. Sherlock Holmes always knew where to start because he was a character in entertaining works of fiction. In solving the puzzle, he laid it out in a way that made it seem the only logical solution.

But it wasn’t. In real life, law enforcement, investigators and attorneys faced with mountains of jumbled facts have to pick their way through all kinds of evidence that often produces mutually contradictory theories. The universe of possible starting points is almost infinite.

It can be a humbling experience to sit in front of a powerful computer armed with great software and connected to the rest of the world, and to have no idea where to begin looking, when you’ve searched enough, and how confident to be in your findings.

“The new smart,” says Hess, “will be about trying to overcome the two big inhibitors of critical thinking and team collaboration: our ego and our fears.”

Want to know more about our firm?

  • Visit charlesgriffinllc.com and see our two blogs, this one and The Divorce Asset Hunter;
  • Look at my book, The Art of Fact Investigation (available in free preview for Kindle at Amazon). There is a detailed section on logic and inference in the law.
  • Watch me speak about Helping Lawyers with Fact Finding, here. We offer training for lawyers, and I speak across the country to legal groups about the proper mindset for legal inquiry.
  • If you are a member of the ABA’s Litigation Section, see my piece in the current issue of Litigation Journal, “Five Questions Litigators Should Ask: Before Hiring an Investigator (and Five Tips to Investigate It Yourself).” It contains a discussion of open-mindedness.

[1] van Opijnen, M. & Santos, C. Artif Intell Law (2017) 25: 65. doi:10.1007/s10506-017-9195-8

 

What will it take for artificial intelligence to surpass us humans? After the Oscars fiasco last night, it doesn’t look like much.

As someone who thinks a lot about the power of human thought versus that of machines, I find it striking not that the mix-up of the Best Picture award was the product of one person’s error, but that four people flubbed what is about the easiest job imaginable in show business.

Not one, but two PwC partners messed up with the envelope. You would think that if they had duplicates, it would be pretty clear whose job it was to give out the envelopes to the presenters. Something like, “you give them out and my set will be the backup.” But that didn’t seem to be what happened.

Then you have the compounded errors of Warren Beatty and Faye Dunaway, both of whom can read and simply read off what was obviously the wrong card.

The line we always hear about not being afraid that computers are taking over the world is that human beings will always be there to turn them off if necessary. Afraid of driverless cars? Don’t worry; you can always take over if the car is getting ready to carry you off a cliff.

An asset search for Bill Johnson that reveals he’s worth $200 million, when he emerged from Chapter 7 bankruptcy just 15 months ago? A human being can look at the results and conclude the computer mixed up our Bill Johnson with the tycoon of the same name.

But what if the person who wants to override the driverless car is drunk? What if the person on the Bill Johnson case is a dimwit who just passes on these improbable findings without further inquiry? Then, the best computer programming we have is only as good as the dumbest person overseeing it.

We’ve written extensively here about the value of the human brain in doing investigations. It’s the theme of my book, The Art of Fact Investigation.

As the Oscars demonstrated last night, not just any human brain will do.

Want to know more?

  • Visit charlesgriffinllc.com and see our two blogs, The Ethical Investigator and the Divorce Asset Hunter;
  • Look at my book, The Art of Fact Investigation (available in free preview for Kindle at Amazon);
  • Watch me speak about Helping Lawyers with Fact Finding, here.

In a partially hilarious, partially disturbing article this week in The Wall Street Journal, “Facebook Has No Sense of Humor,” the Editor in Chief of the satirical website The Babylon Bee related that two patently ridiculous “news” stories had recently been fact-checked by Snopes: The Onion’s “Shelling From Royal Caribbean’s M.S. ‘Allure’ Sinks Carnival Cruise Vessel That Crossed Into Disputed Waters” and the Babylon Bee’s “Ocasio-Cortez Appears on ‘The Price Is Right,’ Guesses Everything Is Free.”

They made me laugh, but then came the more troubling part. The Babylon Bee story headlined “Senator Hirono Demands ACB Be Weighed Against a Duck to See If She Is a Witch” was blocked by the robots at Facebook because the Monty Python line “we must burn her” appeared in the body of the article.

The real problem for me came after the Bee alerted Facebook to the supposed mistake made by its robots, which had generated a warning that the “incitement to violence” could bring further repercussions if repeated. The Bee appealed to Facebook, but to no avail.

We have long argued that robots aren’t really that smart – rather, they are amoral and very quick at doing mindless drudgery – as in Artificial Intelligence: Good and Evil All at Once, Just Like its Creators.[i]

This blog doesn’t really give a toss about Facebook. It’s a useful investigatory tool as far as it goes, but if Facebook were to disappear tomorrow, no tears would be shed around here. That its robots weren’t overruled by people with good sense is all the more reason not to trust Facebook’s human judges of propriety or morality.

So why should a good investigator have a sense of humor? For the same reason, broadly, that an investigator needs empathy, just as we said on our companion blog recently. Empathy lets you make better guesses about what someone may do next or may have done previously. Humor is useful because the people we look at also have senses of humor. It’s useful the way empathy is, for filtering purposes as above, and also because humor is a great way to put people you are interviewing at ease. Not always, but when warranted.

If you act like a machine, your results will be as good as a machine’s, but you will get them a lot more slowly than a machine would have.

Where is the value (and fun) in that?

[i] In a full law review article, I argue that artificial intelligence brings a lot of promising tools to investigators, but not because it can investigate for us. Rather, AI can comb through the mountains of new data being created hourly. Before long, we will be able to search transcripts of podcasts and YouTube videos, for example. [Legal Jobs in the Age of Artificial Intelligence: Moving from Today’s Limited Universe of Data Toward the Great Beyond. Savannah Law Review, Vol. 5, No.1 (2017).]

There is a widespread belief among lawyers and other professionals that investigators, armed only with special proprietary databases, can solve all kinds of problems other professionals cannot.

While certain databases are a help, we often tell our clients that even if we gave them the output of all the databases our firm uses, they would probably still not be able to come to most of the conclusions we do. That is because databases have incomplete, conflicting outputs. You need a knowledgeable person to weigh those outputs and come up with a “right” answer that is always, at first, a best guess that requires verification.

Commercial databases are also hobbled by legal and commercial restrictions that other large information resources are not. This is one of the reasons I have argued for several years now that the rise of artificial intelligence will increase – not diminish – jobs for humans.

The full argument is in Legal Jobs in the Age of Artificial Intelligence: Moving from Today’s Limited Universe of Data Toward the Great Beyond, Savannah Law Review Vol. 5:1, 2018.

Forget what you know about SQL and relational databases. Forget about the large universes of even millions of documents put into various artificial intelligence engines for expedited document review. When users populate and control a database, they can alter the content. They can control the programming to make the databases talk to each other.

Even in hospitals, with more than a dozen different information systems struggling to interconnect, it’s possible to imagine a smoothly-running network of networks given enough computing power and programming time.

The commercial databases that depend on credit-header information, utility bills, commercial mailing lists and more are different. Commercial databases used in investigations do not play nicely with other databases for two main reasons:

  1. Competitive Barriers

 There is only one New York Secretary of State, or one recorder of deeds in a county. They are presumed to be correct as a matter of law as long as you get a certified copy of their documents.

The commercial databases are different in that they compete with one another. Databases do not share results so that we may sort out conflicts automatically. They do not suggest, for example, that if a John R. Smith of Houston  is this John R. Smith on Walnut Avenue, then this John R. Smith owns the following companies in Nevada.

Instead, one database will tell you about the man on Walnut Avenue, and a different database may give you the Nevada company information and suggest that its owner lives not on Walnut Avenue but at another address in San Diego. A third database may tell you the same person who lived on Walnut Avenue in 2014 now lives in San Diego.

You will have to stitch it all together yourself. By you, I mean you the human being, not you using some kind of database version of kayak.com that assembles travel sites and hands you results in one convenient place.

Note that for someone searching for completeness, Kayak’s no model either. If you assume that every airline and hotel in the world is listed on Kayak, you are wrong. That is because Kayak is a business that makes money from the places it lists (it’s part of a profit-making company called Booking.com). If you go to another for-profit site, Travelocity.com,  and enter flights leaving from New York for Dallas, you do not get all possible options. You can fly on Southwest between those two cities, but the other day Travelocity didn’t give you that choice. Tomorrow may be different if the market for airline pricing changes.

Commercial databases are like airlines and hotels: they too are profit-making enterprises. Just as American won’t let you board with a Delta ticket, databases don’t like to share either.

  2. Legal Impediments to Sharing

Even if databases wanted to share information, doing so would be fraught with legal difficulty. That is because the information they offer is accessible to licensees only. These users need a permissible purpose under federal law that governs credit reports, and the varying permissible purposes yield varying amounts of information.

Each database has to review the permissible uses you enter, and each has to make sure you are a paying customer. Databases could theoretically subcontract that job to a central entity that handles its competitors as well, but with just a handful of big competitors, there’s less incentive to outsource.

The other, larger problem an information “Kayak” would have is that the real travel Kayak just gives you a small number of  data points: price, when you leave and when you arrive. Database output about a person, his known residences, phone numbers, associates and more is voluminous. How would you sort the differing outputs of different competitors? “Rank by accuracy” is a laughable non-starter.
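To see why, here is a hypothetical sketch of what a database “Kayak” would actually have to do with three vendors’ answers about one person. With no accuracy ranking to fall back on, the best it can do is merge the fields where the sources agree and flag the rest for a human. The records and field names are invented.

```python
# Hypothetical merge of three vendors' profiles of the same person. Agreements
# are merged; conflicts go to a human, because "rank by accuracy" isn't possible.
# All records are invented for illustration.
profiles = [
    {"name": "John R. Smith", "address": "12 Walnut Ave, Houston", "company": None},
    {"name": "John R. Smith", "address": "88 Harbor Dr, San Diego", "company": "Smith Holdings LLC (NV)"},
    {"name": "John R. Smith", "address": "12 Walnut Ave, Houston", "company": "Smith Holdings LLC (NV)"},
]

def merge(field):
    values = {p[field] for p in profiles if p[field] is not None}
    if len(values) == 1:
        return values.pop(), []      # the sources agree
    return None, sorted(values)      # conflict: a person has to decide

for field in ("name", "address", "company"):
    agreed, conflicts = merge(field)
    print(f"{field}: {agreed if agreed else 'needs human review -> ' + '; '.join(conflicts)}")
```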

All in all, if you make your living sorting through database output and then use that to check against public filings, litigation, licenses, news stories, blogs, videos and social media, be of good cheer.

The robots haven’t even come close to usurping your duties.

Not for the first time, the most compelling piece of information in an investigation is what isn’t there.

We’ve written often before about the failure of artificial intelligence to knit together the output of various databases, and I discussed the idea of what isn’t there in my book, The Art of Fact Investigation. Remember, two of the biggest red flags for those suspicious of Bernard Madoff were the absence of a Big Four auditor for a fund the size he claimed to have, as well as the absence of an independent custodian.

The most famous example of missing evidence is from Sherlock Holmes: the dog that didn’t bark when the horse was removed from its stall during the night. No barking meant the dog knew the person who took the horse. It’s discussed by an evidence professor here.

This week, another excellent example: The Center for Advanced Defense Studies (C4ADS) provided an advance copy of its report to the New York Times and Wall Street Journal, detailing the way Kim Jong Un evades international sanctions. The report, Lux & Loaded, is an example of superior investigation.

Using public records that track ship movements, commercial databases that record the movement of goods, and publicly available photos on the internet, C4ADS makes the case that Kim was able to get his hands on two armored Mercedes-Maybach S600 cars worth more than $500,000 each.

The report details the movement of the cars from a port in the Netherlands through several transshipment points, but the key finding is that the ship last seen with the cars “vanished” for 17 days last October, while the cars were in transit off the coast of Korea. While unable to say conclusively that the cars moved from Rotterdam to North Korea, the evidence is persuasive.

Most interesting is not the evidence they could not find because it does not exist in open sources, but the evidence that vanished because a Togo-flagged ship called DN5505 turned off its transponders. It’s as if the ship just disappeared from the high seas.

Maybe we should not expect to be able to track every cargo shipment in the world every moment it’s at sea, but ships going dark for weeks at a time is something that should arouse suspicions.
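Mechanically, that kind of suspicion is easy to surface. A rough sketch, assuming you have timestamped position reports for a vessel (the dates and the seven-day threshold below are invented for illustration):

```python
# Rough sketch: flag long silences in a ship's position reports. The timestamps
# and the 7-day threshold are invented for illustration.
from datetime import datetime, timedelta

position_reports = [
    datetime(2018, 10, 1), datetime(2018, 10, 2),
    datetime(2018, 10, 19),   # roughly 17 days of silence before this report
    datetime(2018, 10, 20),
]

def dark_periods(reports, threshold=timedelta(days=7)):
    """Return (start, end) pairs where the gap between reports exceeds the threshold."""
    reports = sorted(reports)
    return [(a, b) for a, b in zip(reports, reports[1:]) if b - a > threshold]

for start, end in dark_periods(position_reports):
    print(f"Transponder dark from {start:%Y-%m-%d} to {end:%Y-%m-%d} "
          f"({(end - start).days} days)")
```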

That’s why one of the organization’s recommendations is that insurance companies require ships to keep their transponders on in order to obtain and maintain insurance.

That won’t catch sanctions busters every time, but it’s a very sensible start.

Artificial intelligence doesn’t equal artificial perfection. I have argued for a while now both on this blog and in a forthcoming law review article here that lawyers (and the investigators who work for them) have little to fear and much to gain as artificial intelligence gets smarter.

Computers may be able to do a lot more than they used to, but there is so much more information for them to sort through that humans will long be required to pick through the results just as they are now. Right now, we have no quick way to word-search the billions of hours of YouTube videos and podcasts, but that time is coming soon.

The key point is that some AI programs will work better than others, but even the best ones will make mistakes or will only get us so far.

So argues British math professor Hannah Fry in a new book previewed in her recent essay in The Wall Street Journal, here. Fry argues that instead of placing blind faith in algorithms and artificial intelligence, we should recognize that the best applications are the ones we admit work somewhat well but not perfectly, and that require collaboration with human beings.

That’s collaboration, not simply implementation. Who has not been infuriated at the hands of some company, only to complain and be told, “that’s what the computer’s telling me”?

The fault may be less with the computer program than with the dumb company that doesn’t empower its people to work with and override computers that make mistakes at the expense of their customers.

Fry writes that some algorithms do great things – diagnose cancer, catch serial killers and avoid plane crashes. But beware the modern snake-oil salesman:

Despite a lack of scientific evidence to support such claims, companies are selling algorithms to police forces and governments that can supposedly ‘predict’ whether someone is a terrorist, or a pedophile based on his or her facial characteristics alone. Others insist their algorithms can suggest a change to a single line of a screenplay that will make the movie more profitable at the box office. Matchmaking services insist their algorithm will locate your one true love.

As importantly for lawyers worried about losing their jobs, think about the successful AI applications above. Are we worried that oncologists, homicide detectives and air traffic controllers are endangered occupations? Until there is a cure for cancer, we are not.

We just think these people will be able to do their jobs better with the help of AI.

An entire day at a conference on artificial intelligence and the law last week in Chicago produced this insight about how lawyers are dealing with the fast-changing world of artificial intelligence:

Many lawyers are like someone who knows he needs to buy a car but knows nothing about cars. He knows he needs to get from A to B each day and wants to get there faster. So, he is deposited at the largest auto show in the world and told, “Decide which car you should buy.”

Whether it’s at smaller conferences or at the gigantic, auto-show-like legal tech jamborees in Las Vegas or New York, the discussion of AI seems to be dominated by the companies that produce the stuff. Much less on show are people who use legal AI in their everyday lives.

At my conference, IBM dominated the keynote address and two more panels. Other familiar names in AI in the world of smart contracting and legal research were there, along with one of the major “old tech” legal research giants. All of the products and services sounded great, which means the salespeople were doing their jobs.

But the number of people who presented about actually using AI after buying it? Just a few (including me). “We wanted to get more users,” said one of the conference organizers, who explained that lawyers are reluctant to describe the ways they use AI, lest they give up valuable pointers to their competitors.

Most of the questions and discussion from lawyers centered around two main themes:

  1. How can we decide which product to buy when there are so many, and they change so quickly?
  2. How can we organize our firm’s business model in such a way that it will be profitable to use expensive new software (“software” being what AI gets called after you start using it)?

Law firm business models are not my specialty, but I have written before about evaluating new programs, and spoke about it last week.

Only you (and not the vendor) can decide how useful a program is, by testing it. Don’t let the vendors feed you canned examples of how great their program is. Don’t put in a search term or two while standing at a trade show kiosk. Instead, plug in a current problem or three while sitting in your office and see how well the program does compared to the searches you ran last week.

You mean you didn’t run the searches, but you’re deciding whether to buy this expensive package? You should at least ask the people who will do the work what they think of the offering.

I always like to put in my own company or my own name and see how accurate a fact-finding program is. Some of them (which are still useful some of the time) think I live in the house I sold eight years ago. If you’re going to buy, you should know what a program can do and what it can’t.
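One way to make that test repeatable is a simple scorecard: a handful of facts you already know to be true, checked against whatever each candidate program returns. A hedged sketch; the lookup function and the ground-truth values are placeholders for your own facts and for whatever the vendor’s product actually exposes.

```python
# A small scorecard for test-driving a vendor's tool against facts you already
# know. `vendor_lookup` stands in for whatever the product returns; the
# ground-truth values are placeholders you would fill in yourself.
ground_truth = {
    "current_address": "the address you actually live at now",
    "company": "your current firm",
    "home_phone": "your current number",
}

def score_vendor(vendor_lookup):
    """Count how many known-true facts the program gets right."""
    hits = 0
    for field, expected in ground_truth.items():
        returned = vendor_lookup(field)
        correct = returned == expected
        hits += correct
        print(f"{field}: {'OK' if correct else 'WRONG -> ' + str(returned)}")
    return hits / len(ground_truth)

# Example: a mock vendor that still has the house you sold eight years ago on file.
mock_results = {"current_address": "the house you sold eight years ago",
                "company": "your current firm",
                "home_phone": None}
print(score_vendor(mock_results.get))   # 1 of 3 fields correct
```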

As with other salespeople in other industries, AI sales staff won’t tell you what their programs are bad at doing. And most importantly, they won’t tell you how well or how badly (usually badly) their program integrates with other AI software you may be using.

No matter how good any software is, you will need good, inquisitive and flexible people running it and helping to coordinate outputs of different products you are using.

While sales staff may have subject-matter expertise in law (it helps if they are lawyers themselves) they cannot possibly specialize in all facets of the law. Their job is to sell, and they should not be criticized for it.

They have their job to do, and as a responsible buyer, you have yours.

For more on what an AI testing program could look like and what kinds of traits the best users of AI should have, see my forthcoming law review article here:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3085263