ChatGPT now comes up in most of the extended conversations I have with lawyers about how things are going. Many rave about how easy it is to have this robot whip up a simple motion or even, in one example, “a short speech about NATO defense capabilities.”

While it may be true that ChatGPT can produce what an executive assistant or an inexperienced associate might come up with, even its biggest proponents agree that you can’t just take what it gives you and pass it off as your finished product.

But what about for an investigator? Is ChatGPT helpful?

I wrote about this last month on our other blog, The Divorce Asset Hunter. In that article, “Would an Artificial Intelligence Asset Search Help?,” I argued that ChatGPT’s own disclaimers made me skeptical about its usefulness, since it depends on lots of data and has what its creators call a “limited understanding” of the world.

Leaving aside that computers don’t “understand” anything, but rather imitate prior examples of understanding, I think the bigger problem is the first one: the dependence on lots of data. You can get a pretty good short speech on NATO out of it because there are thousands of such speeches floating around on the internet.

After I wrote that blog last month, I eventually tried ChatGPT to see for myself how restrictive those potential limitations might be. My answer is, pretty darned restrictive.

A client asked us this week to find out why an associate might have been visited by the FBI at his home, or whether the visitors were perhaps process servers pretending to be FBI agents, since they had not been keen to show ID up close, as the Bureau requires.

I asked the chatbot, “Why would law enforcement be looking for Albert R. Jackson?” (not his real name).

ChatGPT’s answer was that it doesn’t deal with specific people or situations. That was the same answer I got when I asked whether nepotism was a problem at a particular public company (where people on chat boards had been complaining about just such an issue).

It’s not that ChatGPT gave me boilerplate on these two questions. It gave me nothing.

Do not mistake this for a blanket dismissal of the power and potential of artificial intelligence. As I wrote last month,

The more I have looked at artificial intelligence, the more bullish I have been about the rosy future for investigation. AI and greater computing power will generate volumes of data we can only dream about right now. Automatic transcripts of every YouTube video, for example, would mark an explosive change in the amount of material you would have to work with in researching someone. So would the ability to do media searches in every language, not just the small number offered by LexisNexis. I wrote about this in a law review article a few years ago, Legal Jobs in the Age of Artificial Intelligence.

The future is bright for investigators using AI. More data will mean more things to research and interpret. But who will do the research and interpretation? Smart people will, as they do today.