Do you ever wonder why some gifted small children play Mozart, but you never see any child prodigy lawyers who can draft a complicated will?

The reason is that the rules of how to play the piano have far fewer permutations and judgment calls than deciding what should go into a will. “Do this, not that” works well with a limited number of keys in each octave. But the permutations of a will are infinite. And by the way, child prodigies can play the notes, but usually not as soulfully as an older pianist who has lived through the full range of adult emotion.

You get to be good at something by doing a lot of it. You can play the Mozart over and over, but how do you know what other human beings may need in a will, covering events that have yet to happen?

Not by drafting the same kind of will over and over, that’s for sure.

Reviewing a lot of translations done by people is the way Google Translate can manage rudimentary translations in a split second. Reviewing a thousand decisions made in document discovery and learning from mistakes picked out by a person is the way e-discovery software looks smarter the longer you use it.

But you would never translate a complex, nuanced document with Google Translate, and you sure wouldn’t produce documents without having a partner look them over.

The craziness that can result from the mindless following of rules is an issue at the forefront of law today, as we debate how much we should rely on artificial intelligence.

Who should bear the cost if AI makes a decision that damages a client? The designers of the software? The lawyers who use it? Or will malpractice insurance evolve enough to spread the risk around so that clients pay in advance in the form of a slightly higher price to offset the premium paid by the lawyer?

Whatever we decide, my view is that human oversight of computer activity is something society will need far into the future. The Mozart line above was given to me by my property professor in law school and appeared in the preface of my book, The Art of Fact Investigation.

The Mozart line is appropriate when thinking about computers, too. And in visual art, I increasingly see parallels between the way artists and lawyers struggle to get at what is true and what is an outcome we find desirable. Take the recent exhibition at the Metropolitan Museum here in New York, called Delirious: Art at the Limits of Reason, 1950 to 1980.

It showed that our struggle with machines is hardly new, even though it would seem so given the daily flood of scary stories about AI and “The Singularity.” The show was filled with the worries of artists 50 and 60 years ago about what machines would do to the way we see the world, find facts, and remain rational. It seems funny to say that: computers seem to be ultra-rational in their production of purely logical “thinking.”

But starting with a sensible or logical premise doesn’t mean you’ll end up with logical conclusions. On a very early AI level, consider the databases we use today that were the wonders of the world 20 years ago. LexisNexis and Westlaw are hugely powerful tools, but what if you don’t supervise them? If I put my name into Westlaw, it thinks I still live in the home I sold in 2011. All other reasoning Westlaw produces based on that “fact” will be wrong. Noise complaints brought against the residents there have nothing to do with me. A newspaper story about disorderly conduct resulting in many police visits to the home two years ago is also irrelevant when talking about me.[1]

The idea of suppositions running amok came home to me when I looked at a sculpture last month by Sol LeWitt (1928-2007) called 13/3. At first glance, this sculpture would seem to have little relationship to delirium. It begins with a simple idea: a 13×13 grid from which three towers arise. What you get when that idea is logically put into action is a disorienting building that few would want to occupy.

As the curators commented, LeWitt “did not consider his otherwise systematic work rational. Indeed, he aimed to ‘break out of the whole idea of rationality.’ ‘In a logical sequence,’ LeWitt wrote, in which a predetermined algorithm, not the artist, dictates the work of art, ‘you don’t think about it. It is a way of not thinking. It is irrational.’”

Another wonderful work in the show, Howardena Pindell’s Untitled #2, makes fun of the faith we sometimes have in what superficially looks to be the product of machine-driven logic. A vast array of numbered dots sits uneasily atop a grid, and at first, the dots appear to be the product of an algorithm. In the end, they “amount to nothing but diagrammatic babble.”

Setting a formula in motion is not deep thinking. The thinking comes in deciding whether the vast amount of information we’re processing results in something we like, want or need. Lawyers would do well to remember that.

[1] Imaginary stuff: while Westlaw does say I live there, the problems at the home are made up for illustrative purposes.