Have you ever noticed that artificial intelligence always seems much more frightening when people write about what it will become, yet like imperfect, bumbling software when they write about it in the present tense?

You get one of each in this morning’s Wall Street Journal. The paper paints a horrific picture of what the ruthless secret police of the world’s dictatorships will be able to do with AI in The Autocrat’s New Tool Kit, including facial recognition to track behavior more efficiently and to target specific groups with propaganda.

But then see Social-media companies have struggled to block violent content about this week’s terrorist attack on two mosques in New Zealand. With all of their computing power and some of the world’s smartest programmers and mathematicians, Facebook and YouTube allowed the killings to be streamed live on the internet. It took an old-fashioned phone call from the New Zealand police to tell them to take the live evildoing down. Just as the New York Times or CNBC would never put such a thing on their websites, neither should Facebook or YouTube.

Wouldn’t you think that technology that could precisely target where to send the most effective propaganda could distinguish between an extremely violent film and extremely violent reality? I would. After all, it’s like nothing for these sites to have indexed the code of all the movie clips already uploaded onto their systems. If facial recognition works on a billion Chinese people, why not on the thousands of known film actors floating up there on the YouTube cloud? If it’s not a film you already know and there are lots of gunshots, the video should be flagged for review.
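The flagging rule sketched above can be made concrete. Purely as an illustration (the fingerprints, threshold, and gunshot counts below are invented stand-ins, not anything Facebook or YouTube actually uses), a minimal version might look like this:

```python
# Hypothetical sketch of the rule in the text: if a clip's fingerprint
# matches no already-indexed film and it contains many gunshot-like audio
# events, hold it for human review. Real systems would use perceptual
# hashing and audio classifiers; these values are invented.

KNOWN_FILM_FINGERPRINTS = {"a3f9", "77bc", "d001"}  # indexed movie clips

def should_flag_for_review(fingerprint, gunshot_events, threshold=3):
    """Flag a video unless it matches a film we already know."""
    if fingerprint in KNOWN_FILM_FINGERPRINTS:
        return False  # a movie clip already on file
    return gunshot_events >= threshold

# A clip from a known film full of gunfire: not flagged.
print(should_flag_for_review("77bc", 12))  # False
# An unknown clip with repeated gunshots: flagged for a person to review.
print(should_flag_for_review("beef", 12))  # True
```

The point of the sketch is the last line of the function: the machine does not decide what airs; it decides what a person should look at.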

Why is this so hard? For one thing, the computing power the companies need doesn’t exist yet. “The sheer volume of material posted by [YouTube’s] billions of users, along with the difficulty in evaluating which videos cross the line, has created a minefield for the companies,” the Journal said.

It’s that minefield that disturbs me. A minefield dotted with difficulties about whether or not to show mass murder in real time? What would be the harm of having a person look at any video that features mass killing before it’s cleared to air? If the computers can’t figure out what to do with such material, let a person look at it.

What is so frightening about AI is not the computing power or the uses the world can find for it, but the abdication of self-control and ethical judgment by the people using it.

I want the police in my country to have guns and to use them on criminals who are about to kill innocent people. I don’t want police states shooting peaceful demonstrators. I’m happy to have police in the U.S. use facial recognition if it will help stop a person from blowing up the stadium where the Super Bowl is being held. I would not want cameras on every intersection automatically tracking my every movement.

Guns are neither smart nor stupid. They are artificial power that increases the harm an unarmed person can inflict. Guns are essential in maintaining freedom but can suppress freedom too.

Same for AI. There are lots of wonderful applications for it. Every bit of software in use today was called AI before it came into everyday use; once it did, we simply called it software.

What sets off the good AI from the bad is the way people use it. Streaming on YouTube can be a wonderful thing. But just as we need political accountability to make sure the guns our armies and police have aren’t abused, we need the people at YouTube to control their technology in a responsible way.

The AI at Facebook and YouTube isn’t dumb: Dumb are the people who trusted too readily that the tool could decide for itself what the right call would be when the horrors from Christchurch began to be uploaded.

Do you ever wonder why some gifted small children play Mozart, but you never see any child prodigy lawyers who can draft a complicated will?

The reason is that the rules of how to play the piano have far fewer permutations and judgment calls than deciding what should go into a will. “Do this, not that” works well with a limited number of keys in each octave. But the permutations of a will are infinite. And by the way, child prodigies can play the notes, but usually not as soulfully as an older pianist with more experience of the range of emotions an adult experiences over a lifetime.

You get to be good at something by doing a lot of it. You can play the Mozart over and over, but how do you know what other human beings may need in a will, covering events that have yet to happen?

Not by drafting the same kind of will over and over, that’s for sure.

Reviewing a lot of translations done by people is the way Google Translate can manage rudimentary translations in a split second. Reviewing a thousand decisions made in document discovery and learning from mistakes picked out by a person is the way e-discovery software looks smarter the longer you use it.
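The feedback loop just described can be sketched in a few lines. This is a toy, not any vendor's actual algorithm: a reviewer's corrections nudge per-term weights, so documents resembling past relevant ones score higher the next time around.

```python
# Minimal sketch of e-discovery software "learning from mistakes picked
# out by a person": each human judgment rewards the terms in relevant
# documents and penalizes the terms in irrelevant ones.

from collections import defaultdict

class RelevanceScorer:
    def __init__(self):
        self.weights = defaultdict(float)  # term -> learned weight

    def score(self, document):
        return sum(self.weights[w] for w in document.lower().split())

    def learn(self, document, relevant):
        # Reviewer feedback nudges every term in the document.
        step = 1.0 if relevant else -1.0
        for w in document.lower().split():
            self.weights[w] += step

scorer = RelevanceScorer()
scorer.learn("merger escrow schedule", relevant=True)
scorer.learn("cafeteria menu update", relevant=False)

# After two corrections, a merger document already outranks a cafeteria one.
print(scorer.score("draft merger schedule") > scorer.score("menu update"))  # True
```

Which is exactly why the partner still matters: the software only gets smarter because a person keeps telling it where it was wrong.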

But you would never translate a complex, nuanced document with Google Translate, and you sure wouldn’t produce documents without having a partner look them all over.

The craziness that can result from the mindless following of rules is an issue at the forefront of law today, as we debate how much we should rely on artificial intelligence.

Who should bear the cost if AI makes a decision that damages a client? The designers of the software? The lawyers who use it? Or will malpractice insurance evolve enough to spread the risk around so that clients pay in advance in the form of a slightly higher price to offset the premium paid by the lawyer?

Whatever we decide, my view is that human oversight of computer activity is something society will need far into the future. The Mozart line above was given to me by my property professor in law school and appeared in the preface of my book, The Art of Fact Investigation.

The Mozart line is appropriate when thinking about computers, too. And in visual art, I increasingly see parallels between the way artists and lawyers struggle to get at what is true and what is an outcome we find desirable. Take the recent exhibition at the Metropolitan Museum here in New York, called Delirious: Art at the Limits of Reason, 1950 to 1980.

It showed that our struggle with machines is hardly new, even though it would seem so from the daily flood of scary stories about AI and “The Singularity.” The show was filled with the worrying of artists 50 and 60 years ago about what machines would do to the way we see the world, find facts, and remain rational. It seems funny to say that: computers seem to be ultra-rational in their production of purely logical “thinking.”

But starting from what seems to be a sensible or logical premise doesn’t mean that you’ll end up with logical conclusions. On a very early AI level, consider the databases we use today that were the wonders of the world 20 years ago. LexisNexis and Westlaw are hugely powerful tools, but what if you don’t supervise them? If I put my name into Westlaw, it thinks I still live in the home I sold in 2011. All other reasoning Westlaw produces based on that “fact” will be wrong. Noise complaints brought against the residents there have nothing to do with me. A newspaper story about disorderly conduct resulting in many police visits to the home two years ago is also irrelevant when talking about me.[1]

The idea of suppositions running amok came home when I looked at a sculpture last month by Sol LeWitt (1928-2007) called 13/3. At first glance, this sculpture would seem to have little relationship to delirium. It sounds from the outset like a simple idea: a 13×13 grid from which three towers arise. What you get when it’s logically put into action is a disorienting building that few would want to occupy.

As the curators commented, LeWitt “did not consider his otherwise systematic work rational. Indeed, he aimed to ‘break out of the whole idea of rationality.’ ‘In a logical sequence,’ LeWitt wrote, in which a predetermined algorithm, not the artist, dictates the work of art, ‘you don’t think about it. It is a way of not thinking. It is irrational.’”

Another wonderful work in the show, Howardena Pindell’s Untitled #2, makes fun of the faith we sometimes have in what superficially looks to be the product of machine-driven logic. A vast array of numbered dots sits uneasily atop a grid, and at first, the dots appear to be the product of an algorithm. In the end, they “amount to nothing but diagrammatic babble.”

Setting a formula in motion is not deep thinking. The thinking comes in deciding whether the vast amount of information we’re processing results in something we like, want or need. Lawyers would do well to remember that.

[1] Imaginary stuff: while Westlaw does say I live there, the problems at the home are made up for illustrative purposes.

Anyone following artificial intelligence in law knows that its first great cost saving has been in the area of document discovery. Machines can sort through duplicates so that associates don’t have to read the same document seven times, and they can string together thousands of emails to put together a quick-to-read series of a dozen email chains. More sophisticated programs evolve their ability with the help of human input.

Law firms are already saving their clients millions in adopting the technology. It’s bad news for the lawyers who used to earn their livings doing extremely boring document review, but good for everyone else. As in the grocery, book, taxi and hotel businesses, the march of technology is inevitable.

Other advances in law have come with search engines such as Lex Machina, which searches through a small number of databases to predict the outcome of patent cases. Other AI products that have scanned all U.S. Supreme Court decisions do a better job than people in predicting how the court will decide a particular case, based on the briefs submitted in a live matter and the judges deciding the case.

When we think about our work gathering facts, we know that most of our searching is not done in a closed, limited environment. We don’t look through a “mere” four million documents as in a complex discovery, or the trivial (for a computer) collection of U.S. Supreme Court cases. Our work is done when the entire world is the possible location of the search.

A person who seldom leaves New York may have a Nevada company with assets in Texas, Bermuda or Russia.

Until all court records in the U.S. are scanned and subject to optical character recognition, artificial intelligence won’t be able to do our job for us in looking over litigation that pertains to a person we are examining.

That day will surely come for U.S. records, and may be here in 10 years, but it is not here yet. For the rest of the world, the wait will be longer.

Make no mistake: computers are essential to our business. Still, one set of databases we often use to begin a case, including Westlaw and LexisNexis, is not as easy to use as Lex Machina or other closed systems, because those databases rely on abstracts of documents as opposed to the documents themselves.

They are frequently wrong about individual information, mix up different individuals with the same name, and often have outdated material. My profile on one of them, for instance, includes my company but a home phone number I haven’t used in eight years. My current home number is absent. Other databases get my phone number right, but not my company.

Wouldn’t it be nice to have a “Kayak” type system that could compare a person’s profile on five or six paid databases, and then sort out the gold from the garbage?

It would, but it might not happen so soon, and not just because of the open-universe problem.

Even assuming these databases could look to all documents, two other problems arise:

  1. They are on incompatible platforms. Integrating them would be a programming problem.
  2. More importantly, they are paid products, whereas Kayak searches free travel and airline sites. In addition, they require licenses to use, and the amount of data you can get is governed by whichever of several permissible uses the user must declare to gain access to the data. A system integrating the sites would mean the integrator would have to vet the user for each system and process payment if it’s a pay-per-use platform.
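Setting those licensing and payment problems aside, the core of the “Kayak” idea is simple enough to sketch. The database names and profile fields below are invented for illustration:

```python
# Toy sketch of a "Kayak for paid databases": pull the same person's
# profile from several (imaginary) vendors and keep the value most of
# them agree on, sorting the gold from the garbage by majority vote.

from collections import Counter

profiles = {
    "DatabaseA": {"home_phone": "555-0101", "employer": "Acme LLC"},
    "DatabaseB": {"home_phone": "555-0101", "employer": "Acme Holdings"},
    "DatabaseC": {"home_phone": "555-0199", "employer": "Acme LLC"},
}

def consensus(profiles):
    merged = {}
    fields = {f for p in profiles.values() for f in p}
    for field in fields:
        votes = Counter(p[field] for p in profiles.values() if field in p)
        merged[field] = votes.most_common(1)[0][0]  # majority wins
    return merged

merged = consensus(profiles)
print(merged["home_phone"])  # 555-0101, the value two of three vendors report
```

Even this toy shows the catch: a majority of databases can agree on a stale address, which is why a person still has to check the winner against the record itself.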

These are hardly insurmountable problems, but they do help illustrate why, with AI marching relentlessly toward the law firm, certain areas of practice will succumb to more automation faster than others.

What will be insurmountable for AI is this: you cannot ask computers to examine what is not written down, and much of the most interesting information about people resides not on paper but in their minds and the minds of those who know them.

The next installment of this series on AI will consider how AI could still help steer us toward the right people to interview.

By now, if a lawyer isn’t thinking hard about how automation is going to transform the business of law, that lawyer is a laggard.

You see the way computers upended the taxi, hotel, book and shopping mall businesses? It’s already started in law too. As firms face resistance over pricing and look to get more efficient, the time is now to start training people to work with – and not in fear of – artificial intelligence.

And be not afraid.

There will still be plenty of lawyers around in 10 or 20 years no matter how much artificial intelligence gets deployed in the law. But the roles those people will play will in many respects be different. The new roles will need different skills.

In a new Harvard Business Review article (based on his new book, “Humility Is the New Smart”), Professor Ed Hess at the Darden School of Business argues that in the age of artificial intelligence, being smart won’t mean the same thing as it does today.

Because smart machines can process, store and recall information faster than any person, the skills of memorizing and recall are not as important as they once were. The new smart “will be determined not by what or how you know but by the quality of your thinking, listening, relating, collaborating and learning,” Hess writes.

Among the many concrete things this will mean for lawyers are two aspects of fact investigation we know well and have been writing about for a long time.

  1. Open-mindedness will be indispensable.
  2. Even for legal research, logical deduction is out, logical inference is in.

Hess predicts we will “spend more time training to be open-minded and learning to update our beliefs in response to new data.” What could this mean in practice for a lawyer?

If all you know how to do is to gather raw information from a limited universe of documents, or perhaps spend a lot of time cutting and pasting phrases from old documents onto new ones, your days are numbered. Technology-assisted review (TAR) already does a good job sorting out duplicates and constructing a chain of emails so you don’t have to read the same email 27 times as you read through a long exchange.

But as computers become smarter and faster, they are sometimes overwhelmed by the vast amounts of new data coming online all the time. I wrote about this in my book, “The Art of Fact Investigation: Creative Thinking in the Age of Information Overload.”

I made the overload point with respect to finding facts outside discovery, but the same phenomenon is hitting legal research too.

In their article “On the Concept of Relevance in Legal Information Retrieval” in the Artificial Intelligence and Law Journal earlier this year,[1] Marc van Opijnen and Cristiana Santos wrote that

“The number of legal documents published online is growing exponentially, but accessibility and searchability have not kept pace with this growth rate. Poorly written or relatively unimportant court decisions are available at the click of the mouse, exposing the comforting myth that all results with the same juristic status are equal. An overload of information (particularly if of low-quality) carries the risk of undermining knowledge acquisition possibilities and even access to justice.”

If legal research suffers from the overload problem, even e-discovery faces it despite TAR and whatever technology succeeds TAR (and something will). Whole areas of data are now searchable and discoverable when once they were not. The more you can search, the more there is to search. A lot of what comes back is garbage.

Lawyers who will succeed in using ever more sophisticated computer programs will need to remain open-minded that they (the lawyers) and not the computers are in charge. Open-minded here means accepting that computers are great at some things, but that for a great many years an alert mind will be able to sort through results in a way a computer won’t. The kind of person who will succeed at this will be entirely active – and not passive – while using the technology. Anyone using TAR knows that it requires training before it can be used correctly.

One reason the mind needs to stay engaged is that not all legal reasoning is deductive, and logical deduction is the basis for computational logic. Michael Genesereth of Codex, Stanford’s Center for Legal Informatics wrote two years ago that computational law “simply cannot be applied in cases requiring analogical or inductive reasoning,” though if there are enough judicial rulings interpreting a regulation the computers could muddle through.

For logical deduction to work, you need to know what step one is before you proceed to step two. Sherlock Holmes always knew where to start because he was a character in entertaining works of fiction. In solving the puzzle, he laid it out in a way that seemed as if it were the only logical solution.

But it wasn’t. In real life, law enforcement, investigators and attorneys faced with mountains of jumbled facts have to pick their ways through all kinds of evidence that produces often mutually contradictory theories. The universe of possible starting points is almost infinite.

It can be a humbling experience to sit in front of a powerful computer armed with great software and connected to the rest of the world, and to have no idea where to begin looking, when you’ve searched enough, and how confident to be in your findings.

“The new smart,” says Hess, “will be about trying to overcome the two big inhibitors of critical thinking and team collaboration: our ego and our fears.”

Want to know more about our firm?

  • Visit charlesgriffinllc.com and see our two blogs, this one and The Divorce Asset Hunter;
  • Look at my book, The Art of Fact Investigation (available in free preview for Kindle at Amazon). There is a detailed section on logic and inference in the law.
  • Watch me speak about Helping Lawyers with Fact Finding, here. We offer training for lawyers, and I speak across the country to legal groups about the proper mindset for legal inquiry.
  • If you are a member of the ABA’s Litigation Section, see my piece in the current issue of Litigation Journal, “Five Questions Litigators Should Ask: Before Hiring an Investigator (and Five Tips to Investigate It Yourself).” It contains a discussion of open-mindedness.

[1] van Opijnen, M. & Santos, C. Artif Intell Law (2017) 25: 65. doi:10.1007/s10506-017-9195-8


What will it take for artificial intelligence to surpass us humans? After the Oscars fiasco last night, it doesn’t look like much.

As a person who thinks a lot about the power of human thought versus that of machines, I find it striking not that the mix-up of the Best Picture award was the product of one person’s error, but rather that four people flubbed about the easiest job imaginable in show business.

Not one, but two PwC partners messed up with the envelope. You would think that if they had duplicates, it would be pretty clear whose job it was to give out the envelopes to the presenters. Something like, “you give them out and my set will be the backup.” But that didn’t seem to be what happened.

Then you have the compounded errors of Warren Beatty and Faye Dunaway, both of whom can read and simply read off what was obviously the wrong card.

The line we always hear about not being afraid that computers are taking over the world is that human beings will always be there to turn them off if necessary. Afraid of driverless cars? Don’t worry; you can always take over if the car is getting ready to carry you off a cliff.

An asset search for Bill Johnson that reveals he’s worth $200 million, when he emerged from Chapter 7 bankruptcy just 15 months ago? A human being can look at the results and conclude the computer mixed up our Bill Johnson with the tycoon of the same name.

But what if the person who wants to override the driverless car is drunk? What if the person on the Bill Johnson case is a dimwit who just passes on these improbable findings without further inquiry? Then, the best computer programming we have is only as good as the dumbest person overseeing it.

We’ve written extensively here about the value of the human brain in doing investigations. It’s the theme of my book, The Art of Fact Investigation.

As the Oscars demonstrated last night, not just any human brain will do.

Want to know more?

  • Visit charlesgriffinllc.com and see our two blogs, The Ethical Investigator and the Divorce Asset Hunter;
  • Look at my book, The Art of Fact Investigation (available in free preview for Kindle at Amazon);
  • Watch me speak about Helping Lawyers with Fact Finding, here.

Remember those law school exams that depended not as much on getting the right answer as on issue spotting? Usually you got a fact pattern and you had to look at all the ways those facts would present interesting legal questions for a judge to consider.

Investigation is something like this, but instead of legal issues, you are spotting factual issues. You don’t always get the entire answer, but sometimes, the very action of showing a looming factual issue means you’ve earned your fee.

Factual issue spotting often means that you are looking for the answer by figuring out first where to go to find the answer you want.

I learned the power of this way of thinking the first week I worked as an investigator, when I was asked to find a lawyer our client could hire in Liberia. This was a shattered country with no working phone system. After a fruitless hour of looking through old lists of lawyers on the internet, I pivoted.

I knew that a lot of the lawyers in Liberia had probably fled. I found a former president of Liberia on the faculty of a U.S. university, called his home number at lunchtime (he picked up with “Hello”), and asked him to recommend some lawyers. “They all carry cell phones from Ghana,” he told me, and gave me three names and cell phone numbers. We had our Liberian attorney the next day.

Here the issue was dispersion. Knowing that the lawyers, and those who would know the lawyers, were dispersed, I knew not to look in Liberia but rather elsewhere.

A few other examples:

In doing an asset search recently, I knew it would be impossible to look in each and every county all over the United States for property in the name of the subject (his name wasn’t John Smith, but it was nearly as common). Instead, I looked at property across the country by the address the tax bill was mailed to. That was a very specific query, but I was able to dig even deeper by looking at not only the subject’s current address, but also the house he had until three years ago.

And there they were: Mineral rights for twelve pieces of property in Texas.

Here the issue was all known addresses of the subject, not just the current one. He may not have updated his address with the tax authorities because his mineral properties were non-producing and so exempt from tax. They also remained nicely concealed from his soon-to-be former wife. These properties are like lottery tickets: worth a few bucks today, and maybe much more next year if a company strikes oil or gas on them and needs to pay you royalties.
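The address pivot described above amounts to a filter. As a sketch only (the records, counties, and addresses here are invented), it works like this:

```python
# Instead of searching every county for a common name, filter a property
# dataset by the mailing address on the tax bill, including the subject's
# former addresses as well as the current one.

records = [
    {"county": "Pecos, TX", "type": "mineral rights", "tax_mail": "12 Elm St"},
    {"county": "Clark, NV", "type": "residence", "tax_mail": "88 Oak Ave"},
    {"county": "Ward, TX", "type": "mineral rights", "tax_mail": "12 Elm St"},
]

# Current address plus the house the subject sold three years ago.
known_addresses = {"45 Pine Rd", "12 Elm St"}

hits = [r for r in records if r["tax_mail"] in known_addresses]
print(len(hits))  # 2: both mineral-rights parcels surface via the old address
```

Drop the old address from the set and both parcels vanish from the results, which is the whole lesson of the example.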

Finally, consider trying to figure out how much a husband makes as a financial advisor. On his FINRA report, I found that he was also a broker and an insurance salesman. He made money three different ways. A look at his company’s annual report revealed some 30 subsidiary companies that could have compensated him.

Here the issue was imagining a person’s tax return. If you want to hide money, using a secret company is a good way to do it because if taxed as a corporation, it stays off your tax return as long as it doesn’t pay you dividends. The only people who need to know about that company are the payer, the payee, and the person who perhaps prepares the company’s tax return.

For example, he could have had an arrangement that his insurance commissions be paid to a limited liability company he controls. If my client’s lawyer did not ask in discovery for all money paid by each of these subsidiaries to any company beneficially owned by the subject, his full compensation might never be reported.

Investigating is going from A to B to C to D to find what you need. If you want to plug in a person’s name and get everything there is to know about him thanks to big data and artificial intelligence, you will have to do your investigating in a movie or a TV show.

In real life, it usually isn’t so easy, even though your investigator should always be able to show you how it was done in order to prove he didn’t break the law or invade anyone’s privacy.

ChatGPT now comes up in most of the extended conversations I have with lawyers about how things are going. Many rave about how easy it is to have this robot whip up a simple motion or even, in one example, “a short speech about NATO defense capabilities.”

While it may be true that ChatGPT can do what an executive assistant or an inexperienced associate might be able to come up with, even its biggest proponents agree that you can’t just take what it gives you and put that out there as your product.

But what about for an investigator? Is ChatGPT helpful?

I wrote about this last month on our other blog, The Divorce Asset Hunter. In that article, Would an Artificial Intelligence Asset Search Help?, I argued that ChatGPT’s own disclaimers made me skeptical about its usefulness, since it depends on lots of data and has what its creators call a “limited understanding” of the world. It’s all there at ChatGPT.pro

Leaving aside that computers don’t “understand” anything, but rather imitate prior examples of understanding, I think the bigger problem is the first one. You can get a pretty good short speech on NATO out of it because there are thousands of such speeches floating around on the internet.

After I wrote that blog last month, I eventually tried ChatGPT to see for myself how restrictive those potential limitations might be. My answer is, pretty darned restrictive.

A client asked us this week to find out why an associate might have been visited by the FBI at his home, or whether perhaps these were process servers pretending to be FBI agents, since they had not been too keen to show ID up close as the Bureau requires.

I asked the chat bot, “Why would law enforcement be looking for Albert R. Jackson?” (not his real name).

ChatGPT’s answer was that it doesn’t deal with specific people or situations. That was the same answer I got when I asked whether nepotism was a problem at a particular public company (where people on chat boards had been complaining about just such an issue).

It’s not that ChatGPT gave me boilerplate on these two questions. It gave me nothing.

Do not mistake this for a blanket dismissal of the power and potential of artificial intelligence. As I wrote last month,

The more I have looked at artificial intelligence, the more bullish I have been about the rosy future for investigation. AI and greater computing power will generate volumes of data we can only dream about right now. Automatic transcripts of every YouTube video, for example, would mark an explosive change in the amount of material you would have to work with in researching someone. So would the ability to do media searches in every language, not just the small number offered by LexisNexis. I wrote about this in a law review article a few years ago, Legal Jobs in the Age of Artificial Intelligence.

The future is bright for investigators using AI. More data will mean more things to research and interpret. But who will do the research and interpretation? Smart people will, as they do today.

When not at work, I like to do many things, and one of my favorites is to watch New York Mets baseball. Since moving to New York I’ve grown to love the team and I make common cause with the many Mets fans I run into (even in my Bronx neighborhood just a few stops from Yankee Stadium).[1]

Keith Hernandez, 1986: Barry Colla Photography, Public domain, via Wikimedia Commons

One benefit of watching the Mets is that our team has what many consider to be the finest broadcasters calling their games. The radio team led by Howie Rose is unmatched for its knowledge of the game and willingness to criticize the home team if that’s merited. Most of my intake is via the TV broadcasters, often rated the best in the game. Included in most of these games is Mets alumnus Keith Hernandez, who provides an education on hitting each and every time he sits down in the booth. [2]

Some of what I’ve learned watching Mets baseball turns out to be applicable to my work life.

  1. Not everything is under your control. If your umpire that day is a “pitcher’s umpire” (one who calls a generously large strike zone), then some close pitches are, in the words of Hernandez, “too close to take.” You had better try to hit them or foul them off. Passivity can send you back to the dugout. Hernandez doesn’t criticize umpires for large or small strike zones, and goes out of his way to praise the umpires who keep their zones consistent. If an umpire is calling inside strikes all day long, shame on you for watching as a close one goes by you for strike three.

Our “umpires” are the circumstances in which we find ourselves. If we are investigating on a tight deadline and need to get records in a county that does not permit on-site searching in the courthouse, we prefer to over-order any record that could be relevant so we can sort them out later. If someone is going to rule out a document as being irrelevant, we want control of that process. If we get too many documents and need to discard some as being of no use, that’s like a two-strike foul. Better that than to fail to order the document that could turn out to be the one we need. That would be looking at strike three.

On the other hand, if we’re in a county that has a nice full set of records online, we can be much more selective up front because we can read the dockets ourselves and sometimes figure out whether the case relates to “our” John Smith or someone else with the same name.

Overseas, some countries have nice public record systems that permit searching and unambiguous reporting. Other nations have no concept of a public record. All the information you get in these places is unofficial, even if highly informative. It’s no good cursing your bad luck that the investigation takes you to a difficult location. You’re at bat and you make the best of it.

  2. You may need to change your approach a few pitches in. Hernandez talks all the time about “good two-strike hitting,” which means that you may need to change the way you want to swing when there is a much greater price to pay for being unsuccessful on the next pitch. Instead of trying to hit the ball in the most powerful way (“pulling” toward the right for left-handed hitters and vice-versa), many good hitters just “slap” or “poke” the ball through an opening the other way for a single.

In other words, if you’re not going to hit a home run, better to hit a single than to strike out.

We love investigations that turn out to be “home runs” right away. The million-dollar fraud we discovered being perpetrated by our client’s litigation opponents that forced them to drop their suit; the $60 million in traceable assets we found in even less time during a divorce asset search.

But what if you don’t find that kind of thing that quickly? Two strikes for us can mean we are running out of time (before a court-imposed deadline), running out of budget (you can’t afford to look everywhere), or both. We try to concentrate on the jurisdictions most likely to yield records about our person, rather than all thirty counties he may have lived or worked in over the past 20 years. This is what’s called Bayesian analysis: refining your searching based on what you learn as you go along, to increase your odds of success.[3]
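The Bayesian idea can be sketched in a few lines of Python. The counties, priors, and likelihoods below are invented for illustration; they are not from any real case.

```python
# Minimal sketch of Bayesian updating applied to deciding where to search next.
# All county names and probabilities are hypothetical.

def update(priors, likelihoods):
    """Return P(the key records are in each county), updated after one
    round of searching.

    priors:      {county: prior probability the records are there}
    likelihoods: {county: P(what we just observed | records are there)}
    """
    unnormalized = {c: priors[c] * likelihoods.get(c, 1.0) for c in priors}
    total = sum(unnormalized.values())
    return {c: v / total for c, v in unnormalized.items()}

# Start with a rough guess spread over the counties the subject is tied to.
priors = {"Kings": 0.40, "Queens": 0.35, "Nassau": 0.25}

# A first pass finds nothing in Nassau but a promising docket hit in Kings.
likelihoods = {"Kings": 0.9, "Queens": 0.5, "Nassau": 0.1}

posterior = update(priors, likelihoods)
next_county = max(posterior, key=posterior.get)  # concentrate the budget here
```

With each round of searching, the probabilities shift and the budget gets concentrated where it is most likely to pay off, which is all the baseball analogy is really saying.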

If we’re trying to impeach the credibility of a witness, we may find no prior conviction for fraud (a home run), but the omission of a job on his resume eight years ago could turn out to be valuable if it leads to evidence that he was fired for incompetence there (long single or double).

  1. Analytics are helpful, but you also have to trust your eye and what the pitcher is doing today. The data may tell you that with two strikes a pitcher is x% likely to throw a breaking ball, but what if the third time through the batting order you notice he has started “tipping” his pitches (involuntarily signaling what he will throw next)? That won’t be in the data, but it will be obvious if you’re watching.

We too have access to lots of data. Databases give us nice head starts on where people have lived, what companies they’ve associated with, some criminal convictions, and names of relatives. But we still subject all of that data to verification by checking it ourselves. It’s often right, but it’s wrong often enough that you can’t just trust it blindly.

One database thinks I still live in the home we sold 12 years ago, based on a grocery store discount card I obtained while living in the old house and which I continue to use today. The grocery store continues to sell my data to aggregators, who continue to report that I live where I don’t. Anyone checking the county record at the old place can see it was sold, but if they don’t check they’ll get it wrong.

Along with his equally brilliant fellow commentators Ron Darling and Gary Cohen, Hernandez is also brimming with tips about fielding, especially the placement of infielders given the situation (the hitter at bat, the score, the count). Those tips also pertain to investigation, and an article on them will be appearing before the Mets next win the World Series.[4]

[1] There are many Yankee fans around too, and I also count them among my friends. There is even, for some reason, a lifelong Bronx resident who roots for the Phillies. It only took him two years to admit it.

[2] Hernandez will have his number retired by the Mets on July 9. He has not been elected to the Baseball Hall of Fame, though his career on-base percentage of .384 is close to that of Detroit’s Miguel Cabrera, often referred to as a shoo-in for Cooperstown. His WAR (wins above replacement) of 60.3 is better than those of Yogi Berra, Willie Stargell or Vladimir Guerrero, and just 0.1 below Harmon Killebrew’s. He was also famous for revolutionizing play at first base, where he won eleven consecutive Gold Gloves.

[3] I discuss this further in Legal Jobs in the Age of Artificial Intelligence, Savannah Law Review Vol. 5, No. 1 (2018).

[4] Date to be determined.

I was puzzled this week at the reaction to a bombshell of a story by the Wall Street Journal. The paper’s rightfully cautious lawyers allowed it to go to press and declare that 131 federal judges had broken the law by hearing cases in which they or their families had a direct financial interest.

Photo: Uwe Kils, Creative Commons

Even though a number of the judges admitted they had acted incorrectly, the reaction, apart from a couple of reports by The Hill and Esquire.com, was zero.[1]

The story and its aftermath yielded a couple of interesting lessons.

  1. If a story can’t be politicized these days, it goes away fast. The judges named in the story came from across the political spectrum. Obama appointees, Bush (both of them) appointees, Clinton appointees, Trump appointees. They should have recused themselves but didn’t. No political points to be scored here.
  1. As an investigator, I found the story a wonderful illustration of the idea that the world is packed with information that nobody looks at, let alone analyzes. This story scratches the surface of what lies ahead as artificial intelligence (AI) increases both the amount of data we can search and the speed with which we can analyze it. Imagine being able to see not just the part of the iceberg above the water, but the whole thing – right away.

JUST ASK FOR IT

In this case, the paper relied on a non-profit called the Free Law Project, which put together the stock-ownership data for the judges. How did the Free Law Project get this information, which judges are required by law to submit? They asked for it. That’s it. Now it’s a public database on the project’s website.

There is a lot of other wonderful information that is there for the asking. Freedom of information requests at the federal and state government levels have given us all kinds of material over the years as we investigate.

Did someone really serve in the military? What did a charity say its mission was when it was established? What kind of things did a company import and declare to U.S. Customs? We’ve found it all by asking for it.

All sorts of other great information is there for the taking. We once linked two people using state-level lobbying records freely available on the government’s website. If your commercial database doesn’t look at lobbying records, that’s the database’s problem; don’t make it yours.

PUTTING IT ALL TOGETHER

The brilliance of the Journal’s story wasn’t just the sea of data the Free Law Project gave them. It was the critical task of seeing what happens when you match one set of data with another. The second set, of course, was looking at cases all the judges had heard. When a party before the judge turned out to have been owned in part by the judge or the judge’s family (personally or in trust and known to the judge), that was where the law was broken.

One of the keys to a good investigation is how databases and other forms of information work together. I’ve written for years that we employ an “all windows open” approach. It’s no good running a search on Bloomberg and then checking it off your list as a source already consulted. If later in the investigation you find out in an interview or court record that your subject has a company you didn’t know about when the investigation started, you have to look at that company on Bloomberg again.[2] Think of it as having each information source in a separate open window on your computer. Don’t close any of them until the investigation is done.

It probably took the Journal a long time to do its analysis, but note that the story covers only cases heard between 2010 and 2018. Thousands of cases have been heard in the nearly three years since then, so the “investigation” is far from complete.

THE FUTURE OF INVESTIGATION WITH ARTIFICIAL INTELLIGENCE

On a very basic level, the Journal used AI to parse all the data. Once parties and stockholdings are fed into a database, a computer can spit out conflicts in a second. Human beings need to carefully review the findings, but doing the entire job by hand would have taken years.
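As a toy illustration of that matching step, the core of the conflict check reduces to intersecting two data sets: what each judge reported owning, and who the parties were in the cases that judge heard. The judges, holdings, and case captions below are invented.

```python
# Toy version of the conflict check: match judges' reported holdings
# against the parties in cases they heard. All names are invented.

holdings = {
    "Judge A": {"XYZ Corp", "Acme Inc"},
    "Judge B": {"Widget Co"},
}

cases = [
    ("Judge A", "Smith v. XYZ Corp", {"Smith", "XYZ Corp"}),
    ("Judge A", "Doe v. Widget Co", {"Doe", "Widget Co"}),
    ("Judge B", "Roe v. Widget Co", {"Roe", "Widget Co"}),
]

conflicts = [
    (judge, caption)
    for judge, caption, parties in cases
    if holdings.get(judge, set()) & parties  # judge held stock in a party
]
# conflicts -> [("Judge A", "Smith v. XYZ Corp"), ("Judge B", "Roe v. Widget Co")]
```

The real work, of course, is upstream of this: cleaning the disclosure forms, resolving company names and subsidiaries, and having humans verify each hit. The intersection itself is the easy part, which is why a computer can do it in a second.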

AI helps with processing time, but it is also helping to create searchable data sets that were never searchable before. I discussed this at length in a law review article a few years ago, Legal Jobs in the Age of Artificial Intelligence. Imagine being able to search in seconds the transcripts of every podcast or YouTube video on the internet.
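Once transcripts exist as text, searching them is straightforward. A minimal sketch, with invented transcripts and an invented search phrase, might look like this:

```python
# Sketch of searching machine-generated transcripts for a phrase and
# returning a little surrounding context. Transcripts are invented.

transcripts = {
    "interview_01": "He said the warehouse numbers never added up after 2015.",
    "interview_02": "We mostly talked about the weather and the commute.",
}

def search(transcripts, phrase, context=40):
    """Return (transcript name, snippet) pairs for each transcript
    containing the phrase, with up to `context` characters around it."""
    hits = []
    for name, text in transcripts.items():
        i = text.lower().find(phrase.lower())
        if i != -1:
            start = max(0, i - context)
            hits.append((name, text[start:i + len(phrase) + context]))
    return hits

results = search(transcripts, "never added up")
```

The hard part is not the search; it is the transcription at scale and, as discussed below, the human editing of what the search returns.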

That time is coming. More data, more analysis, more human hours to make sense of it. This Wall Street Journal story is just a taste of the future.


[1] The Administrative Office of the U.S. Courts, which runs the federal judiciary, told journalists it was looking into the matter. Many cases may have to be retried. It’s a mess for the judiciary that will play out over weeks, months, and probably years.

[2] My book, The Art of Fact Investigation, discusses this concept, and I’ve included it for years in my courses and lectures to lawyers around the country.

What conveys the truth more effectively?

A snapshot of a person’s values and accomplishments in the form of a quotation? Or a long essay about that person that will contain the short clip but surround it with other facts that could contradict or water down the single line (or build on the quote and infuse it with needed context)?

Photos: ShareAlike 2.0 Japan.

It’s a good question because you can make a case that at times, either answer is preferable. It might be nice to have both. That’s why in our memos, we have bullet-point highlights on top and then all the facts (usually in chronological order) in the body of the document.

How best to convey the essence of something is, in part, the subject of a marvelous show at New York’s International Center of Photography: a fascinating pairing by Richard Choi of video and still photos plucked from the 30 seconds or so of the video. Billed as “a meditation on the stream of life and its expression as a single image, between film and photography, between life and our memory of it,” it prompted in me all kinds of thoughts about fact investigation.

To know fully what someone’s life consisted of you would have to be there for the whole time, and that’s not practical. In abstracting a life to get the essence of it, you need to make editorial decisions about what to include and what to omit. Sometimes the gaps are there in testimony and documentation, and sometimes you have so much information that you are obliged to leave some out.

One of the pairings in the exhibition that struck me was a short film of a mother and her young daughter and son kneeling in prayer in what looks like a church or chapel. The photo shows them immobile and deep in prayer. But the video reveals that the little boy couldn’t stop fidgeting for most of the time, and the photo captured him at a rare single moment of rest.

The difference between a snapshot in time and a flow of information comes up in many walks of life. In accounting, a balance sheet is a snapshot of the last second of the period, while profit and loss and cash flow are financial “movies” of the company’s life over the course of a quarter or year. You think differently when processing a movie than a photo, and that certainly goes for reading financial records. Companies can clean up a balance sheet for the end of the year or the quarter, and then go back into debt on January 2, for instance. Conversely, a “movie” of a whole year can obscure a big change that happened in the business at one end of the period or the other.

Another way the photo-versus-movie issue comes up for investigators is when we do interviews. We have to boil down for the client the remarks most relevant to their inquiry. If possible it’s nice to provide clients with a transcript of a whole interview, but in many states lawyers and their agents are discouraged, as a matter of ethics, from recording telephone calls. In other states, it’s forbidden by law to record a phone call without telling the other person that a recording is under way. We wrote about this in Taping Phone Calls Is Not Worth the Risk.

But now there exists the ability to transcribe whole YouTube videos automatically. I wrote about the revolution in investigation that this kind of computing power would bring in Legal Jobs in the Age of Artificial Intelligence. If you can easily provide the context for the quote you pull out of a one-hour interview or video, it’s always good to do so.

Soon we may be able to transcribe automatically all YouTube videos in existence and then search the texts for what we need. But you can’t hand your client 15,000 hours of transcripts of every former employee of Company X talking about what it’s like to work there.

You will need to play editor, even if it’s a matter of giving selected clips of video. In some cases, a single 10-second statement will do the trick, and here “photo” will triumph over “movie.”