There is a widespread belief among lawyers and other professionals that investigators, armed only with special proprietary databases, can solve all kinds of problems other professionals cannot.

While certain databases are a help, we often tell our clients that even if we gave them the output of all the databases our firm uses, they would probably still not be able to come to most of the conclusions we do. That is because databases have incomplete, conflicting outputs. You need a knowledgeable person to weigh those outputs and come up with a “right” answer that is always, at first, a best guess that requires verification.

Commercial databases are also hobbled by legal and commercial restrictions that other large information resources are not. This is one of the reasons I have argued for several years now that advances in artificial intelligence will increase – not diminish – jobs for humans.

The full argument is in Legal Jobs in the Age of Artificial Intelligence: Moving from Today’s Limited Universe of Data Toward the Great Beyond, Savannah Law Review Vol. 5:1, 2018.

Forget what you know about SQL and relational databases. Forget about the large universes of even millions of documents put into various artificial intelligence engines for expedited document review. When users populate and control a database, they can alter the content. They can control the programming to make the databases talk to each other.

Even in hospitals, with more than a dozen different information systems struggling to interconnect, it’s possible to imagine a smoothly running network of networks, given enough computing power and programming time.

The commercial databases that depend on credit-header information, utility bills, commercial mailing lists and more are different. Commercial databases used in investigations do not play nicely with other databases for two main reasons:

  1. Competitive Barriers

There is only one New York Secretary of State, and only one recorder of deeds in a given county. Their records are presumed to be correct as a matter of law, as long as you get a certified copy of their documents.

The commercial databases are different in that they compete with one another. Databases do not share results so that we may sort out conflicts automatically. They do not suggest, for example, that if a John R. Smith of Houston is this John R. Smith on Walnut Avenue, then this John R. Smith owns the following companies in Nevada.

Instead, one database will tell you about the man on Walnut Avenue, and a different database may give you the Nevada company information and suggest that its owner lives not on Walnut Avenue but at another address in San Diego. A third database may tell you the same person who lived on Walnut Avenue in 2014 now lives in San Diego.

You will have to stitch it all together yourself. By you, I mean you, a human being – not some kind of database version of kayak.com that assembles results from travel sites and hands them to you in one convenient place.
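
The stitching problem can be sketched in code. This is only an illustration – the records, names and similarity threshold below are all invented, and real entity resolution is far messier – but it shows why the final call stays with a person: a program can only surface candidate matches for someone to verify.

```python
from difflib import SequenceMatcher

# Hypothetical outputs from three competing databases. Every name, address
# and company here is invented for illustration.
records = [
    {"source": "db_a", "name": "John R. Smith", "address": "12 Walnut Ave, Houston"},
    {"source": "db_b", "name": "John R Smith", "address": "98 Harbor Dr, San Diego",
     "companies": ["Acme Holdings LLC (NV)"]},
    {"source": "db_c", "name": "John Smith", "address": "98 Harbor Dr, San Diego",
     "note": "prior address: 12 Walnut Ave, Houston (2014)"},
]

def similar(a: str, b: str) -> float:
    """Rough string similarity between 0 and 1."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def candidate_matches(records, threshold=0.8):
    """Pair up records whose names look alike; a human still makes the call."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            score = similar(records[i]["name"], records[j]["name"])
            if score >= threshold:
                pairs.append((records[i]["source"], records[j]["source"],
                              round(score, 2)))
    return pairs

print(candidate_matches(records))
```

The program can flag that the three records probably describe the same man; deciding whether Walnut Avenue or San Diego is his current address – and verifying the answer – is still the human’s job.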

Note that for someone searching for completeness, Kayak’s no model either. If you assume that every airline and hotel in the world is listed on Kayak, you are wrong. That is because Kayak is a business that makes money from the places it lists (it’s part of the profit-making company Booking Holdings). If you go to another for-profit site, Travelocity.com, and enter flights leaving from New York for Dallas, you do not get all possible options. You can fly on Southwest between those two cities, but when I checked recently, Travelocity didn’t offer that choice. Tomorrow may be different if the market for airline pricing changes.

Commercial databases are like airlines and hotels: they too are profit-making enterprises. Just as American won’t let you board with a Delta ticket, databases don’t like to share either.

  2. Legal Impediments to Sharing

Even if databases wanted to share information, doing so would be fraught with legal difficulty. That is because the information they offer is accessible to licensees only. These users need a permissible purpose under federal law that governs credit reports, and the varying permissible purposes yield varying amounts of information.

Each database has to review the permissible purpose you assert, and each has to make sure you are a paying customer. Databases could theoretically subcontract that job to a central entity that handles its competitors as well, but with just a handful of big competitors, there’s little incentive to outsource.

The other, larger problem an information “Kayak” would have is that the real travel Kayak gives you only a small number of data points: price, when you leave and when you arrive. Database output about a person – his known residences, phone numbers, associates and more – is voluminous. How would you sort the differing outputs of different competitors? “Rank by accuracy” is a laughable non-starter.

All in all, if you make your living sorting through database output and then use that to check against public filings, litigation, licenses, news stories, blogs, videos and social media, be of good cheer.

The robots haven’t even come close to usurping your duties.

Not for the first time, the most compelling piece of information in an investigation is what isn’t there.

We’ve written often before about the failure of databases and artificial intelligence to knit together output from various databases, and I discussed the idea of what isn’t there in my book, The Art of Fact Investigation. Remember, two of the biggest red flags for those suspicious of Bernard Madoff were the absence of a Big Four auditor for a fund the size he claimed to run, and the absence of an independent custodian.

The most famous example of missing evidence is from Sherlock Holmes: the dog that didn’t bark when the horse was removed from its stall during the night. No barking meant the dog knew the person who took the horse. It’s discussed by an evidence professor here.

This week brought another excellent example: The Center for Advanced Defense Studies (C4ADS) provided an advance copy of its report to the New York Times and Wall Street Journal, detailing the way Kim Jong Un evades international sanctions. The report, Lux & Loaded, is an example of superior investigation.

Using public records that track ship movements, commercial databases that record the movement of goods, and publicly available photos on the internet, C4ADS makes the case that Kim was able to get his hands on two armored Mercedes-Maybach S600 cars worth more than $500,000 each.

The report details the movement of the cars from a port in the Netherlands through several transshipment points, but the key finding is that the ship last seen with the cars “vanished” for 17 days last October, while the cars were in transit off the coast of Korea. While the report cannot say conclusively that the cars moved from Rotterdam to North Korea, the evidence is persuasive.

Most interesting is not the evidence the investigators could not find because it does not exist in open sources, but the evidence that was removed when a Togo-flagged ship called DN5505 turned off its transponders. It’s as if the ship just disappeared from the high seas.

Maybe we should not expect to be able to track every cargo shipment in the world every moment it’s at sea, but ships going dark for weeks at a time is something that should arouse suspicions.
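
Spotting that kind of silence is a simple gap-detection problem. Here is a minimal sketch using made-up timestamps for a single ship’s transponder pings (a real AIS feed also carries positions, headings and vessel identifiers; for this purpose only the times matter):

```python
from datetime import datetime, timedelta

# Made-up transponder pings for one ship (timestamps only; a real AIS feed
# also carries position, heading and vessel identifiers).
pings = [
    datetime(2018, 10, 1, 6, 0),
    datetime(2018, 10, 1, 12, 0),
    datetime(2018, 10, 2, 0, 0),
    datetime(2018, 10, 19, 8, 0),   # signal resumes after a long silence
    datetime(2018, 10, 19, 14, 0),
]

def dark_periods(pings, max_gap=timedelta(hours=24)):
    """Return (start, end, duration) for every silence longer than max_gap."""
    gaps = []
    for prev, curr in zip(pings, pings[1:]):
        if curr - prev > max_gap:
            gaps.append((prev, curr, curr - prev))
    return gaps

for start, end, duration in dark_periods(pings):
    print(f"dark from {start} to {end} ({duration.days} days)")
```

An insurer or regulator running something like this across a fleet would see a 17-day hole immediately; the hard part, as always, is deciding what the silence means.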

That’s why one of the organization’s recommendations is that insurance companies require ships to keep their transponders on in order to acquire and maintain insurance.

That won’t catch sanctions busters every time, but it’s a very sensible start.

Artificial intelligence doesn’t equal artificial perfection. I have argued for a while now both on this blog and in a forthcoming law review article here that lawyers (and the investigators who work for them) have little to fear and much to gain as artificial intelligence gets smarter.

Computers may be able to do a lot more than they used to, but there is so much more information for them to sort through that humans will long be required to pick through the results just as they are now. Right now, we have no quick way to word-search the billions of hours of YouTube videos and podcasts, but that time is coming soon.

The key point is that some AI programs will work better than others, but even the best ones will make mistakes or will only get us so far.

So argues British math professor Hannah Fry in a new book previewed in her recent essay in The Wall Street Journal, here. Fry argues that instead of placing blind faith in algorithms and artificial intelligence, we should favor the applications we admit work imperfectly and that require collaboration with human beings.

That’s collaboration, not simply implementation. Who has not been infuriated at the hands of some company, only to complain and be told, “that’s what the computer’s telling me”?

The fault may be less with the computer program than with the dumb company that doesn’t empower its people to work with and override computers that make mistakes at the expense of their customers.

Fry writes that some algorithms do great things – diagnose cancer, catch serial killers and avoid plane crashes. But, beware the modern snake-oil salesman:

Despite a lack of scientific evidence to support such claims, companies are selling algorithms to police forces and governments that can supposedly ‘predict’ whether someone is a terrorist, or a pedophile based on his or her facial characteristics alone. Others insist their algorithms can suggest a change to a single line of a screenplay that will make the movie more profitable at the box office. Matchmaking services insist their algorithm will locate your one true love.

Just as important for lawyers worried about losing their jobs, think about the successful AI applications above. Are we worried that oncologists, homicide detectives and air traffic controllers are endangered occupations? Until there is a cure for cancer, we are not.

We just think these people will be able to do their jobs better with the help of AI.

An entire day at a conference on artificial intelligence and the law last week in Chicago produced this insight about how lawyers are dealing with the fast-changing world of artificial intelligence:

Many lawyers are like someone who knows he needs to buy a car but knows nothing about cars. He knows he needs to get from A to B each day and wants to get there faster. So, he is deposited at the largest auto show in the world and told, “Decide which car you should buy.”

Whether it’s at smaller conferences or at the gigantic, auto-show-like legal tech jamborees in Las Vegas or New York, the discussion of AI seems to be dominated by the companies that produce the stuff. Much less on show are people who use legal AI in their everyday lives.

At my conference, the keynote address and two more panels were dominated by IBM. Other familiar names in AI in the world of smart contracting and legal research were there, along with one of the major “old tech” legal research giants. All of the products and services sounded great, which means the salespeople were doing their jobs.

But the number of people who presented about actually using AI after buying it? Just a few (including me). “We wanted to get more users,” said one of the conference organizers, who explained that lawyers are reluctant to describe the ways they use AI, lest they give up valuable pointers to their competitors.

Most of the questions and discussion from lawyers centered around two main themes:

  1. How can we decide which product to buy when there are so many, and they change so quickly?
  2. How can we organize our firm’s business model in such a way that it will be profitable to use expensive new software (“software” being what AI gets called after you start using it)?

Law firm business models are not my specialty, but I have written before, and spoke last week, about evaluating new programs.

Only you (and not the vendor) can decide how useful a program is, by testing it. Don’t let the vendors feed you canned examples of how great their program is. Don’t put in a search term or two while standing at a trade show kiosk. Instead, plug in a current problem or three while sitting in your office and see how well the program does compared to the searches you ran last week.

You mean you didn’t run the searches, but you’re deciding whether to buy this expensive package? You should at least ask the people who will do the work what they think of the offering.

I always like to put in my own company or my own name and see how accurate a fact-finding program is. Some of them (which are still useful some of the time) think I live in the house I sold eight years ago. If you’re going to buy, you should know what a program can do and what it can’t.

As with other salespeople in other industries, AI sales staff won’t tell you what their programs are bad at doing. And most importantly, they won’t tell you how well or how badly (usually badly) their program integrates with other AI software you may be using.

No matter how good any software is, you will need good, inquisitive and flexible people running it and helping to coordinate outputs of different products you are using.

While sales staff may have subject-matter expertise in law (it helps if they are lawyers themselves) they cannot possibly specialize in all facets of the law. Their job is to sell, and they should not be criticized for it.

They have their job to do, and as a responsible buyer, you have yours.

For more on what an AI testing program could look like and what kinds of traits the best users of AI should have, see my forthcoming law review article here:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3085263

 

We’ve had a great response to an Above the Law op-ed here that outlined the kinds of skills lawyers will need as artificial intelligence increases its foothold in law firms.

The piece makes clear that without the right kinds of skills, many of the benefits of AI will be lost on law firms because you still need an engaged human brain to ask the computer the right questions and to analyze the results.

But too much passivity in the use of AI is not only inefficient. It also carries the risk of ethical violations. Once you deploy anything in aid of a client, New York legal ethics guru Roy Simon says, you need to ask:

“Has your firm designated a person (whether lawyer or nonlawyer) to vet, test or evaluate the AI products (and technology products generally) before using them to serve clients?”

We’ve written before about ABA Model Rule 5.3 that requires lawyers to supervise the investigators they hire (and “supervise” means more than saying “don’t break any rules” and then waiting for the results to roll in). See The Weinstein Saga: Now Featuring Lying Investigators, Duplicitous Journalists, Sloppy Lawyers.

But Rule 5.3 also pertains to supervising your IT department. It’s not enough to have some salesperson convince you to buy new software (AI gets called software once we start using it). The lawyer or the firm paying for it should do more than rely on claims by the vendor.

Simon told a recent conference that you don’t have to understand the code or algorithms behind the product (just as you don’t have to know every feature of Word or Excel), but you do need to know what the limits of the product are and what can go wrong (especially how to protect confidential information).

In addition to leaking information it shouldn’t, what kinds of things are there to learn about how a program works that could have an impact on the quality of the work you do with it?

  • AI can be biased: Software works based on the assumptions of those who program it, and you can never get a read in advance on what a program’s biases may do to output until you use it. This is a more sophisticated cousin of the old saying “garbage in, garbage out”: there are thousands of decisions a computer makes based on definitions a person inserts, either before the thing comes out of the box or during the machine-learning process, in which people refine results with new, corrective inputs.
  • Competing AI programs can do some things better than others. Which programs are best for Task X and which for Task Y? No salesperson will give you the complete answer. You learn by trying.
  • Control group testing can be very valuable. Ask someone at your firm to do a search for which you know the results and see how easy it is for them to come up with the results you know you should see. If the results they come up with are wrong, you may have a problem with the person, with the program, or both.
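
That control-group idea can be made concrete with a toy scoring harness. The case names below are invented, and “recall” here is simply the fraction of hand-verified results the program managed to find:

```python
# Hypothetical control-group test: we researched a matter by hand, so we know
# which items a competent program should return. All names are invented.
known_relevant = {"Smith v. Jones 2015", "In re Acme 2012", "Doe deposition"}

def score_program(returned, expected=known_relevant):
    """Compare a program's output to results verified by hand."""
    returned = set(returned)
    missed = expected - returned
    extra = returned - expected
    recall = len(expected & returned) / len(expected)
    return {"recall": round(recall, 2),
            "missed": sorted(missed),
            "extra": sorted(extra)}

# A vendor demo that misses a key case and adds noise:
print(score_program(["Smith v. Jones 2015", "Doe deposition", "Unrelated v. Case"]))
```

If recall comes back low, remember the point above: the problem may be with the person running the search, with the program, or with both.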

The person who should not be leading this portion of the training is the sales representative of the software vendor. Someone competent at the law firm needs to do it, and if that person is not a lawyer, then a lawyer needs to be up on what’s happening.

[For more on our thoughts on AI, see the draft of my paper for the Savannah Law Review, Legal Jobs in the Age of Artificial Intelligence: Moving from Today’s Limited Universe of Data Toward the Great Beyond, available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3085263].

 

We don’t usually think of the law as the place our most creative people go. Lawyers with a creative bent often drift into business, where a higher risk tolerance is often required to make a success of yourself. Some of our greatest writers and artists have legal training, but most seem to drop out when their artistic calling tells them law school isn’t for them.


Still, creativity and innovation are all the rage in law schools today. Northwestern has a concentration in it, as does Vanderbilt, and Harvard has a course on Innovation in Legal Education and Practice.

Like it or not, as artificial intelligence takes over an increasing number of dreary legal tasks, there will be less room for dreary, plodding minds in law firms. The creative and innovative will survive.

This doesn’t worry us, because we’ve long talked about the need for creativity in fact finding. It’s even in the subtitle of my book, The Art of Fact Investigation: Creative Thinking in the Age of Information Overload.

The book takes up the message we have long delivered to clients: computers can help speed up searching, but computers have also made searching more complex because of the vast amounts of information we need to sort through.

  • Deadlines are ever tighter, but now we have billions of pages of internet content to search.
  • Information about a person used to be concentrated around where he was born and raised. Today, people are more mobile and without leaving their base, they can incorporate a dozen companies across the country doing business in a variety of jurisdictions around the world.
  • Databases make a ton of mistakes. Two of them, for example, think I live in the house I sold seven years ago.
  • Most legal records are not on line. Computers are of limited use in searching for them, and even less useful in figuring out their relevance to a particular matter.
  • Since you can’t look everywhere, investigation is a matter of making educated guesses and requires a mind that can keep several plausible running theories going at the same time. That’s where the creativity comes in. How do you form a theory of where X has hidden his assets? By putting yourself in his shoes, based on his history and some clues you may uncover through database and public-record research.

The idea that technological change threatens jobs is hardly new, as pointed out in a sweeping essay by former world chess champion Garry Kasparov in the Wall Street Journal.

Twenty years after losing a chess match to a computer, Kasparov writes: “Machines have been displacing people since the industrial revolution. The difference today is that machines threaten to replace the livelihoods of the class of people who read and write articles about them,” i.e. the writer of this blog and just about anyone reading it.

Kasparov argues that to bemoan technological progress is “little better than complaining that antibiotics put too many gravediggers out of work. The transfer of labor from humans to our inventions is nothing less than the history of civilization … Machines that replace physical labor have allowed us to focus more on what makes us human: our minds.”

The great challenge in artificial intelligence is to use our minds to manage the machines we create. That challenge extends to law firms. We may have e-discovery, powerful computers and databases stuffed with information, but it still requires a human mind to sort good results from bad and to craft those results into persuasive arguments.

After all, until machines replace judges and juries, it will take human minds to persuade other human minds of the value of our arguments.

Want to know more?

  • Visit charlesgriffinllc.com and see our two blogs, The Ethical Investigator and the Divorce Asset Hunter;
  • Look at my book, The Art of Fact Investigation (available in free preview for Kindle at Amazon);
  • Watch me speak about Helping Lawyers with Fact Finding, here.

Hiring good people is getting a lot harder, and not just because there are fewer candidates in a lot of industries. With AI-enabled cheating, grade inflation, and the shunning of standardized tests by colleges and graduate schools, how is a hiring manager supposed to know who’s a good fit?

My prediction: Good companies will have to think more like creative investigators to figure out who’s smart at their work, and who’s just smart at beating the system. They will have to rely less on outsourcing the evaluation of people through grades and school brands.

ChatGPT Makes. Stuff. Up.

Do you want candidates to write you an essay? I wrote recently on our other blog, The Divorce Asset Hunter, that when it comes to researching individual people or situations instead of something more general, ChatGPT just punts – it refuses to engage. ChatGPT doesn’t think – it mimics what others have written about a topic. If your topic is obscure (“What’s the reputation of this business owner in Maryland who wants to borrow $3 million?” or “What’s this 25-year-old MBA student like?”), you are out of luck with ChatGPT.

And if the topic is a more general one that AI wants to try, look out. A respected New York litigator I know, Stephen Brodsky, reported last week that a client had sent him some legal research to incorporate into a brief. His report, posted on LinkedIn:

A few weeks ago, a client emailed me a list of case citations, asking me to include them in a brief I was writing for his litigation. When I checked them, they didn’t come up. Each time, I thought I mistyped the cite. One after the other. Then I realized – THEY DID NOT EXIST. When I asked my client who gave him the “citations,” he said he had tried out Chat GPT.

Brodsky concluded: “It. Made. The. Cases. Up. My. Client. Thought. They. Were. Real.”

WHAT YOU CAN DO: As Brodsky demonstrated and as I discovered as a financial journalist, the fun is in the footnotes. If someone hands you something they’ve written, query them on the footnotes. Do the citations exist? Did they really read the material they are citing? I would avoid take-home assignments and instead make people write something right there near you – minus an internet connection and phone, of course. Give the candidate an hour to write up how they would approach a particular problem.
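
One small piece of that footnote-checking can even be automated as triage. The sketch below is a hypothetical helper, not a substitute for pulling the cases: it only flags citations that don’t even match the common “volume Reporter page” pattern. A well-formed fabrication like the ones Brodsky’s client received would sail through this check – which is exactly why format checking is only a first pass.

```python
import re

# Triage only: this regex accepts the common "volume Reporter page" shapes
# (e.g. "526 U.S. 137", "123 F.3d 456"). Passing the format check proves
# nothing about existence -- a fabricated citation can be perfectly formed.
CITE = re.compile(
    r"^\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d)?)\s+\d{1,4}$"
)

def triage(citations):
    """Split citations into plausibly formatted vs. obviously malformed."""
    ok, malformed = [], []
    for c in citations:
        (ok if CITE.match(c.strip()) else malformed).append(c)
    return ok, malformed

ok, bad = triage(["526 U.S. 137", "123 F.3d 456", "Smith v. Jones, someplace"])
print(ok, bad)
```

Anything in the “plausible” pile still has to be looked up, one citation at a time.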

We All Live in Lake Wobegon Now

What about screening people to see who gets the in-person test above? What about their academic record?

Cheating is becoming increasingly common at universities, as detailed recently in this Free Press article. It’s easier than ever, both because of ChatGPT and because of lazy teachers who give the same exam for decades.

Standardized tests at many colleges are either optional or part of a “test blind” admissions process. By 2025, the LSAT may be optional for law schools admitting prospective lawyers.

Even before the elimination of standardized testing, there was a problem at universities: they had inflated grades to such an extent that reality had caught up with Garrison Keillor’s made-up Lake Wobegon, where “all the children are above average.” Yale went from a place where John Kerry and George W. Bush got a lot of C’s (and Kerry even got four D’s) to a place where an A average was not far off the norm.

WHAT YOU CAN DO: If schools are handing out A’s like participation trophies, give the A’s the same weight an athletic recruiter would give a participation trophy. If they are going to be litigators, give them a tough evidence problem and see how they do. Whatever the position they’re up for, throw an ethical problem their way. If it looks like they would do the wrong thing if they could get away with it, find someone else. And whatever you do, use made-up names and places so that in the event they are allowed to bring the test home they can’t have the computer generate an answer.

A New Rigor for Checking References

If it’s not their first job, you can always call up former employers. I am not a big fan of calling just the references a prospect submits, because what are the chances you submit the name of someone who fired you or was happy to see the last of you in that office? If there are gaps in the resume (even just a month or two between jobs), find out whether there was another job in there that didn’t work out. Then find out why.

THE TAKEAWAY: You can’t depend on transcripts, school admission departments, and a lot of take-home or internet-assisted essays. Calling only the references a candidate submits is a rigged game. YOU the hirer need to devise the tests and the lists of people to call.

It’s that or get ready for even higher turnover in the years to come.

This blog may be one of the few publications in the Western world that has never written the word “Kardashian,” but that has now changed. In the stories about the robbery in Paris of Kim Kardashian, we found numerous issues that touch on the work we do.

After my recent book The Art of Fact Investigation came out in May, a number of people wrote to me and suggested another chapter in the next edition about what people could do to maintain privacy in the face of those who may want to dig up facts on them.

The easy advice for Kim Kardashian-West: if you are on social media a lot with information about valuable possessions and your whereabouts, criminals will easily learn about your valuable possessions and your whereabouts. Big rings on Instagram? Not a good idea. The super-secret apartment hotel in Paris? With paparazzi following you everywhere, how secret is any place you go?

The harder advice both to accept and to act on relates to some speculation in the media that the crime was an inside job, because the thieves knew that Kardashian’s security guard was not on duty that night.

When we let others into our homes and into our lives, there is always the chance that one of those people may feed information to the outside. This is why many people like a preliminary background check of the electrician or plumber they are about to admit into their home. They like a more thorough look at someone who will watch their children. But Kardashian-West isn’t just dealing with plumbers and babysitters.

How many photographs are there of her bringing home groceries, for example? She eats; therefore, food is delivered by people. When she buys something large, that too is delivered. It is unlikely that she drives her car to Jiffy Lube when it’s time for an oil change. People drive for her.

We have written before about the value of talking to workers who have been in someone’s home. Movers, gardeners, handymen – all get to know the home to an extent and the people who work there. If one of them becomes estranged because they are fired or are not paid, they have every incentive to talk about the person they used to serve.

We are not saying that Kardashian-West has been betrayed by any of her staff. Only that when police found out that the bodyguard was off duty that night, they surely wanted to know: who else knew that? And they would have started the questioning close to home.

 


 

When your defense is that the law allows you to publish garbage without fear of liability, one takeaway is simple: the internet is filled with garbage that needs to be well verified before you rely on it.

This blog thinks the Ninth Circuit got it right in exonerating Yelp this week from a lawsuit by a small business that was incorrectly identified in a negative Yelp review. The decision is here.

While we feel terribly for the locksmith whose business was tarred with a brutally negative review that Yelp erroneously attached to his business, it seems clear that the court was right in deciding that Yelp was protected from liability by the federal Communications Decency Act.

The reasoning in Congress for this and other laws that grant safe harbor to internet facilitators of exchanges (of opinions, goods or anything else) is that if the sites were held liable for the content they host, the industry would shut down or need to charge a lot of money to compensate for the risk.

As fact finders, we think the Yelp case is a handy example of why just about anything on line should be verified if you intend to make any kind of important decision based on what you read.

We recently had a case in which a negative review of a doctor became relevant in a malpractice matter. Question one for us was: is this reviewer a real person, and if so, who is she? Based on her Yelp handle and city, we managed to find her and take a statement from her that turned out to be even more valuable than what she had posted on Yelp.

But what if “she” had turned out to be a competitor, an embittered but deranged former patient, or just a crank?

This is not the first time we’ve written about this. In The Spokeo Lawsuit: Databases are Riddled with Errors, we discussed a database that spits out some free information but then asks you to pay for more (often inaccurate) information.

As we tell our clients all the time (and as I’ve written in my book, The Art of Fact Investigation), even the most expensive databases confuse people with similar names, leave out key information such as where a person really lives or works, and are mostly hopeless with linking people and their shell companies.

The internet is a wonderful, useful and time-saving place, but there is no substitute for a good critical mind to sort investigative gold from the masses of garbage you find there.

 


 

It’s cloud illusions I recall

I really don’t know clouds at all

–Joni Mitchell

Today’s decision by the Second Circuit that Microsoft did not have to hand over data stored on its server in Ireland should remind us all that information isn’t just “out there.” As with printed information, so it is sometimes with electronic data: physical location matters.

The court imposed a major limitation on the scope of a warrant issued under the Stored Communications Act. It reversed the Chief Judge of the Southern District of New York and quashed a warrant that ordered Microsoft to turn over emails stored outside the United States. The full opinion is here.

This blog doesn’t usually get into the weeds when it comes to the Stored Communications Act, but we are intensely interested in how to find things and how to get them to the clients who need them.

The case reminds us that even though a lot more information than ever before is stored electronically, it still matters greatly where it is stored. Crucially, electronic storage is not the same as accessibility via the internet.

Even in the U.S., most counties do not put all of their records on line. Those that purport to do so can have less complete recordkeeping than the data that is searchable on site at the local courthouse.

Just the other day, we read in the newspaper about an old case in Bergen County, New Jersey that would help our client. The case was nowhere to be found on line at the New Jersey courts website. When our retriever traveled to Bergen County, he was told that the case had been destroyed.

Were we out of luck? No. The same parties had gone at it in another New Jersey county, and had attached a copy of the Bergen County suit to the one in the other county. That other suit (also not on line but visible on the computers on site) had not been destroyed. We were then able to see what the Bergen suit was all about.

None of this was accomplished on the internet, which is just a series of boxes that sit in different rooms in different jurisdictions.

Which jurisdictions the boxes are in can make all the difference.

 
