Investigative Techniques

Do you ever wonder why some gifted small children play Mozart, but you never see any child prodigy lawyers who can draft a complicated will?

The reason is that the rules of how to play the piano have far fewer permutations and judgment calls than deciding what should go into a will. “Do this, not that” works well with a limited number of keys in each octave. But the permutations of a will are infinite. And by the way, child prodigies can play the notes, but usually not as soulfully as an older pianist with more experience of the range of emotions an adult experiences over a lifetime.

You get to be good at something by doing a lot of it. You can play the Mozart over and over, but how do you know what other human beings may need in a will, covering events that have yet to happen?

Not by drafting the same kind of will over and over, that’s for sure.

Reviewing a lot of translations done by people is how Google Translate manages rudimentary translations in a split second. Reviewing a thousand decisions made in document discovery, and learning from the mistakes a person picks out, is how e-discovery software looks smarter the longer you use it.
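To picture that feedback loop concretely, here is a minimal sketch of human-in-the-loop document review: a simple classifier scores the documents, a reviewer corrects the calls the model is least sure about, and the corrections feed the next round of training. The library (scikit-learn), the toy documents and the reviewer_says() stand-in are assumptions for illustration, not a description of how any particular e-discovery product works.

```python
# Minimal sketch of the "learn from human corrections" loop in e-discovery review.
# scikit-learn, the toy documents and reviewer_says() are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = [
    "merger pricing discussion with outside counsel",
    "lunch order for the team offsite",
    "draft term sheet for the acquisition",
    "fantasy football league standings",
    "acquisition diligence checklist and pricing model",
    "holiday party logistics",
]

def reviewer_says(doc):
    # Stand-in for the human reviewer correcting the machine's calls (1 = responsive).
    return 1 if ("acquisition" in doc or "merger" in doc) else 0

vec = TfidfVectorizer()
X = vec.fit_transform(docs)

# Two seed decisions from the reviewer; then the model proposes, the reviewer
# corrects, and the model retrains on the growing set of human decisions.
labeled_idx, labels = [0, 1], [reviewer_says(docs[0]), reviewer_says(docs[1])]

while len(labeled_idx) < len(docs):
    model = LogisticRegression().fit(X[labeled_idx], labels)
    remaining = [i for i in range(len(docs)) if i not in labeled_idx]
    probs = model.predict_proba(X[remaining])[:, 1]
    # Ask the reviewer about the document the model is least certain about.
    pick = remaining[min(range(len(remaining)), key=lambda j: abs(probs[j] - 0.5))]
    labeled_idx.append(pick)
    labels.append(reviewer_says(docs[pick]))   # the correction feeds the next round

print("final calls:", model.predict(X).tolist())
```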

But you would never translate a complex, nuanced document with Google Translate, and you sure wouldn’t produce documents without having a partner look it all over.

The craziness that can result from the mindless following of rules is an issue at the forefront of law today, as we debate how much we should rely on artificial intelligence.

Who should bear the cost if AI makes a decision that damages a client? The designers of the software? The lawyers who use it? Or will malpractice insurance evolve enough to spread the risk around so that clients pay in advance in the form of a slightly higher price to offset the premium paid by the lawyer?

Whatever we decide, my view is that human oversight of computer activity is something society will need far into the future. The Mozart line above was given to me by my property professor in law school and appeared in the preface of my book, The Art of Fact Investigation.

The Mozart line is appropriate when thinking about computers, too. And in visual art, I increasingly see parallels between the way artists and lawyers struggle to get at what is true and what is an outcome we find desirable. Take the recent exhibition at the Metropolitan Museum here in New York, called Delirious: Art at the Limits of Reason, 1950 to 1980.

It showed that our struggle with machines is hardly new, even though it would seem so given the daily flood of scary stories about AI and “The Singularity.” The show was filled with the worry of artists 50 and 60 years ago about what machines would do to the way we see the world, find facts, and stay rational. It seems funny to say that: computers appear ultra-rational in their production of purely logical “thinking.”

But starting from what seems to be a sensible or logical premise doesn’t mean that you’ll end up with logical conclusions. On a very early AI level, consider the databases we use today that were the wonders of the world 20 years ago. LexisNexis and Westlaw are hugely powerful tools, but what if you don’t supervise them? If I put my name into Westlaw, it thinks I still live in the home I sold in 2011. All other reasoning Westlaw produces based on that “fact” will be wrong. Noise complaints brought against the residents there have nothing to do with me. A newspaper story about disorderly conduct resulting in many police visits to the home two years ago is also irrelevant when talking about me.[1]

The idea of suppositions running amok came home when I looked at a sculpture last month by Sol LeWitt (1928-2007) called 13/3. At first glance, this sculpture would seem to have little relationship to delirium. It sounds from the outset like a simple idea: a 13×13 grid from which three towers arise. What you get when it’s logically put into action is a disorienting building that few would want to occupy.

As the curators commented, LeWitt “did not consider his otherwise systematic work rational. Indeed, he aimed to ‘break out of the whole idea of rationality.’ ‘In a logical sequence,’ LeWitt wrote, in which a predetermined algorithm, not the artist, dictates the work of art, ‘you don’t think about it. It is a way of not thinking. It is irrational.’”

Another wonderful work in the show, Howardena Pindell’s Untitled #2, makes fun of the faith we sometimes have in what superficially looks to be the product of machine-driven logic. A vast array of numbered dots sits uneasily atop a grid, and at first, the dots appear to be the product of an algorithm. In the end, they “amount to nothing but diagrammatic babble.”

Setting a formula in motion is not deep thinking. The thinking comes in deciding whether the vast amount of information we’re processing results in something we like, want or need. Lawyers would do well to remember that.

[1] Imaginary stuff: while Westlaw does say I live there, the problems at the home are made up for illustrative purposes.

We’ve had a great response to an Above the Law op-ed here that outlined the kinds of skills lawyers will need as artificial intelligence increases its foothold in law firms.

The piece makes clear that without the right kinds of skills, many of the benefits of AI will be lost on law firms because you still need an engaged human brain to ask the computer the right questions and to analyze the results.

But too much passivity in the use of AI is not only inefficient. It also carries the risk of ethical violations. Once you deploy anything in the aid of a client, New York legal ethics guru Roy Simon says you need to ask,

“Has your firm designated a person (whether lawyer or nonlawyer) to vet, test or evaluate the AI products (and technology products generally) before using them to serve clients?”

We’ve written before about ABA Model Rule 5.3 that requires lawyers to supervise the investigators they hire (and “supervise” means more than saying “don’t break any rules” and then waiting for the results to roll in). See The Weinstein Saga: Now Featuring Lying Investigators, Duplicitous Journalists, Sloppy Lawyers.

But Rule 5.3 also pertains to supervising your IT department. It’s not enough to have some salesperson convince you to buy new software (AI gets called software once we start using it). The lawyer or the firm paying for it should do more than rely on the vendor’s claims.

Simon told a recent conference that you don’t have to understand the code or algorithms behind the product (just as you don’t have to know every feature of Word or Excel), but you do need to know what the limits of the product are and what can go wrong (especially how to protect confidential information).

Beyond leaking information it shouldn’t, what else is there to learn about how a program works that could affect the quality of the work you do with it?

  • AI can be biased: Software works based on the assumptions of those who program it, and you can never get a read in advance on what a program’s biases may do to its output until you use it. This is more advanced than the old saying “garbage in, garbage out,” but a related concept: a computer makes thousands of decisions based on definitions a person inserts, either before the thing comes out of the box or during the machine-learning process in which people refine results with new, corrective inputs.
  • Competing AI programs can do some things better than others. Which programs are best for Task X and which for Task Y? No salesperson will give you the complete answer. You learn by trying.
  • Control group testing can be very valuable. Ask someone at your firm to run a search whose results you already know and see how easily they come up with what you know should be there. If their results are wrong, you may have a problem with the person, with the program, or both. (A minimal sketch of this kind of known-answer test follows this list.)
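Here is what that control-group test might look like in code: a few searches whose correct results a person has already confirmed, run against whatever interface the product exposes, with recall reported per query. The run_search() stub, the queries and the document IDs are invented for the demo; in practice you would wire run_search() to the product being evaluated.

```python
# Sketch of a known-answer ("control group") test for a search or review tool.
# run_search() is a stand-in for the product under test; queries and IDs are invented.
def run_search(query):
    canned = {
        "2015 supply agreement amendments": ["DOC-0012"],            # misses DOC-0440
        "board minutes mentioning the spin-off": ["DOC-0077"],
    }
    return canned.get(query, [])

# Searches paired with the documents a person has already confirmed should come back.
known_answers = {
    "2015 supply agreement amendments": {"DOC-0012", "DOC-0440"},
    "board minutes mentioning the spin-off": {"DOC-0077"},
}

for query, expected in known_answers.items():
    returned = set(run_search(query))
    missed = expected - returned
    recall = 1 - len(missed) / len(expected)
    print(f"{query!r}: recall {recall:.0%}, missed {sorted(missed)}")
    # Low recall could point to a problem with the tool, the person running it, or both.
```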

The person who should not be leading this portion of the training is the sales representative of the software vendor. Someone competent at the law firm needs to do it, and if that person is not a lawyer, then a lawyer needs to be up on what’s happening.

[For more on our thoughts on AI, see the draft of my paper for the Savannah Law Review, Legal Jobs in the Age of Artificial Intelligence: Moving from Today’s Limited Universe of Data Toward the Great Beyond, available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3085263].

 

Decent investigators and journalists everywhere ought to have been outraged at a report over the weekend in the Wall Street Journal that appears to have caught a corporate investigator masquerading as a Journal reporter.

According to the story, the person trying to get information about investment strategy and caught on tape pretending to be someone he wasn’t was “Jean-Charles Brisard, a well-known corporate security and intelligence consultant who lives in Switzerland and France.”

Fake news we know about, but fake reporters? It’s more common than it should be. Free societies need a free press, and for a free press to work people have to be able to trust that a reporter is who he says he is.

Good investigators working for U.S. lawyers should not pretend to be someone they are not – whether the fake identity is a journalist or some other occupation. Whether or not it breaks a state or federal impersonation statute, it’s probably unethical under the rules of professional responsibility.

Consider Harvey Weinstein’s army of lawyers and their investigators. The evidence was presented in Ronan Farrow’s second New Yorker piece on Weinstein that hit the web last night. The story says that Weinstein, through lawyer David Boies, hired former Mossad agents from a company called Black Cube.

“Two private investigators from Black Cube, using false identities, met with the actress Rose McGowan, who eventually publicly accused Weinstein of rape, to extract information from her,” the story says.

It goes on to explain that one of the investigators used a real company as cover but that the company had been specially set up as an empty shell for this investigation. The name of the company was real, but its purpose was not (it was not an investment bank). Worse, the investigator used a fake name. Courts have said this can be OK for the agents of lawyers if done in conjunction with an intellectual property, civil rights or criminal-defense matter. This was none of these.

The journalism aspect of the Weinstein/Black Cube investigation is (if accurate) just as revolting, involving a freelance journalist who was passing what people said to him not to a news outlet but to Black Cube. This produces the same result as the Brisard case above. Why talk to a journalist if he (a) may not be a journalist or (b) will be passing your material on directly to the person he’s asking you about? The freelancer in question is unidentified and told Farrow he took no money from Black Cube or Weinstein. Volunteerism at its most inspiring.

And where were the lawyers in all of this unseemliness? Boies signed the contract with Black Cube, but said he neither selected the firm nor supervised it. “We should not have been contracting with and paying investigators that we did not select and direct,” Boies told Farrow. “At the time, it seemed a reasonable accommodation for a client, but it was not thought through, and that was my mistake. It was a mistake at the time.”

Alert to lawyers everywhere: it was a mistake “at the time” and it would be a mistake anytime. Lawyers are duty-bound to supervise all of their agents, lawyer and non-lawyer alike. When I give my standard Ethics for Investigators talk, ABA Model Rule 5.3(c)(1) comes right at the top, as in this excerpt from my recent CLE for the State Bar of Arizona:

A lawyer is responsible for a non-lawyer’s conduct that violates the rules if the lawyer “orders or, with the knowledge of the specific conduct, ratifies the conduct involved.”

“Ratification” can in some cases arise from benign neglect. An initial warning “Just don’t break any rules” won’t suffice. The nightmare scenario is the famed Winnie the Pooh case in California, Stephen Slesinger, Inc. v. The Walt Disney Company, 155 Cal.App.4th 736 (2007).

Slesinger’s lawyers hired investigators and told them to be good. Then the investigators broke into Disney’s offices and stole documents, some of them privileged. The court not only suppressed the evidence, but dismissed the entire case. Part of the reasoning was that Slesinger’s lawyers, after that initial instruction, did no supervising at all.

Black Cube may not have committed any crimes, but appears from the facts in the story to have gone over the ethical line in pretending to be people they were not. Boies (or any other lawyer in a similar position) should have tried to make sure they would do no such thing. What Black Cube did was everyday fare for Mossad, the CIA and MI6, but not for the agents of U.S. lawyers.

Anyone following artificial intelligence in law knows that its first great cost saving has been in the area of document discovery. Machines can sort through duplicates so that associates don’t have to read the same document seven times, and they can string together thousands of emails to put together a quick-to-read series of a dozen email chains. More sophisticated programs evolve their ability with the help of human input.
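For a picture of what “sorting through duplicates” and “stringing together emails” mean mechanically, here is a bare-bones sketch: exact duplicates are caught by hashing the text, and messages are grouped into chains by walking the In-Reply-To header back to its root. Real e-discovery platforms go much further (near-duplicate detection, fuzzier threading), so treat the toy messages and field names as illustration only.

```python
# Bare-bones illustration of two e-discovery staples: exact-duplicate detection
# and email threading. The toy messages and field names are invented.
import hashlib
from collections import defaultdict

emails = [
    {"id": "m1", "in_reply_to": None, "body": "Kicking off the Q3 forecast."},
    {"id": "m2", "in_reply_to": "m1", "body": "Numbers attached."},
    {"id": "m3", "in_reply_to": None, "body": "Kicking off the Q3 forecast."},  # copy of m1
    {"id": "m4", "in_reply_to": "m2", "body": "Revised numbers attached."},
]

# 1. Exact duplicates: identical bodies hash to the same digest.
seen, duplicates = {}, []
for msg in emails:
    digest = hashlib.sha256(msg["body"].encode()).hexdigest()
    if digest in seen:
        duplicates.append((msg["id"], seen[digest]))   # (copy, original)
    else:
        seen[digest] = msg["id"]

# 2. Threading: walk each message up its In-Reply-To chain to find the root.
parent = {m["id"]: m["in_reply_to"] for m in emails}

def root(mid):
    while parent.get(mid):
        mid = parent[mid]
    return mid

threads = defaultdict(list)
for m in emails:
    threads[root(m["id"])].append(m["id"])

print("duplicates:", duplicates)      # [('m3', 'm1')]
print("threads:", dict(threads))      # {'m1': ['m1', 'm2', 'm4'], 'm3': ['m3']}
```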

Law firms are already saving their clients millions in adopting the technology. It’s bad news for the lawyers who used to earn their livings doing extremely boring document review, but good for everyone else. As in the grocery, book, taxi and hotel businesses, the march of technology is inevitable.

Other advances in law have come with search engines such as Lex Machina, which searches a small number of databases to predict the outcome of patent cases. Other AI products that have scanned all U.S. Supreme Court decisions do a better job than people in predicting how the Court will decide a particular case, based on the briefs submitted in a live matter and the judges deciding it.

When we think about our work gathering facts, we know that most of our searching is not done in a closed, limited environment. We don’t look through a “mere” four million documents as in a complex discovery, or the trivial (for a computer) collection of U.S. Supreme Court cases. Our work is done where the entire world is the possible location of the search.

A person who seldom leaves New York may have a Nevada company with assets in Texas, Bermuda or Russia.

Until all court records in the U.S. are scanned and subject to optical character recognition, artificial intelligence won’t be able to do our job for us in looking over litigation that pertains to a person we are examining.

That day will surely come for U.S. records, and may be here in 10 years, but it is not here yet. For the rest of the world, the wait will be longer.

Make no mistake: computers are essential to our business. Still, the databases we often use to begin a case, including Westlaw and LexisNexis, are not as easy to use as Lex Machina or other closed systems, because they rely on abstracts of documents rather than the documents themselves.

They are frequently wrong about individual information, mix up different individuals with the same name, and often have outdated material. My profile on one of them, for instance, includes my company but a home phone number I haven’t used in eight years. My current home number is absent. Other databases get my phone number right, but not my company.

Wouldn’t it be nice to have a “Kayak” type system that could compare a person’s profile on five or six paid databases, and then sort out the gold from the garbage?

It would, but it might not happen so soon, and not just because of the open-universe problem.

Even assuming these databases could look to all documents, two other problems arise:

  1. They are on incompatible platforms. Integrating them would be a programming problem.
  2. More importantly, they are paid products, whereas Kayak searches free travel and airline sites. They require licenses to use, and the amount of data you can get is governed by the permissible use the user must certify to gain access. A system that integrated the sites would have to vet the user for each one and process payment for any pay-per-use platform. (A rough sketch of what such a comparison layer might look like follows this list.)
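If someone did build that comparison layer, its core might look something like the sketch below: pull a subject’s profile from each licensed source through that source’s own client, line the fields up, and flag disagreements for a human to resolve. The provider clients, field names and canned records are invented; every real vendor has its own API, license terms and permissible-use gating that this glosses over.

```python
# Sketch of a "Kayak-style" comparison across several paid people-search databases.
# Provider clients, field names and records are invented for illustration.
from collections import defaultdict

def fetch_profiles(subject_name, providers):
    """Ask each licensed provider for its profile of the subject."""
    return {name: client.lookup(subject_name) for name, client in providers.items()}

def compare(profiles, fields=("home_address", "phone", "employer")):
    """Line up each field across providers and flag disagreements for human review."""
    report = {}
    for field in fields:
        values = defaultdict(list)
        for provider, profile in profiles.items():
            values[profile.get(field)].append(provider)
        report[field] = {"values": dict(values), "agreed": len(values) == 1}
    return report

# Demo with canned responses standing in for the real (paid, licensed) lookups.
class FakeClient:
    def __init__(self, record):
        self.record = record
    def lookup(self, name):
        return self.record

providers = {
    "db_one": FakeClient({"home_address": "12 Elm St", "phone": "555-0100", "employer": "Acme"}),
    "db_two": FakeClient({"home_address": "98 Oak Ave", "phone": "555-0100", "employer": "Acme"}),
}

for field, result in compare(fetch_profiles("Jane Doe", providers)).items():
    print(field, "agreed" if result["agreed"] else f"conflict: {result['values']}")
```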

These are hardly insurmountable problems, but they do help illustrate why, with AI marching relentlessly toward the law firm, certain areas of practice will succumb to more automation faster than others.

What will be insurmountable for AI is this: you cannot ask computers to examine what is not written down, and much of the most interesting information about people resides not on paper but in their minds and the minds of those who know them.

The next installment of this series on AI will consider how AI could still help point us toward the right people to interview.

We don’t usually think of the law as the place our most creative people go. Lawyers with a creative bent often drift into business, where a higher tolerance for risk is required to make a success of yourself. Some of our greatest writers and artists have legal training, but most seem to drop out when their artistic calling tells them law school isn’t for them.


Still, creativity and innovation are all the rage in law schools today. Northwestern has a concentration in it, as does Vanderbilt, and Harvard has a course on Innovation in Legal Education and Practice.

Like it or not, as artificial intelligence takes over an increasing number of dreary legal tasks, there will be less room for dreary, plodding minds in law firms. The creative and innovative will survive.

This doesn’t worry us, because we’ve long talked about the need for creativity in fact finding. It’s even in the subtitle of my book, The Art of Fact Investigation: Creative Thinking in the Age of Information Overload.

The book takes up the message we have long delivered to clients: computers can help speed up searching, but computers have also made searching more complex because of the vast amounts of information we need to sort through.

  • Deadlines are ever tighter, but now we have billions of pages on the internet to search.
  • Information about a person used to be concentrated around where he was born and raised. Today, people are more mobile and without leaving their base, they can incorporate a dozen companies across the country doing business in a variety of jurisdictions around the world.
  • Databases make a ton of mistakes. Two of them, for example, think I live in the house I sold seven years ago.
  • Most legal records are not online. Computers are of limited use in searching for them, and even less useful in figuring out their relevance to a particular matter.
  • Since you can’t look everywhere, investigation is a matter of making educated guesses and requires a mind that can keep several plausible running theories going at the same time. That’s where the creativity comes in. How do you form a theory of where X has hidden his assets? By putting yourself in his shoes, based on his history and some clues you may uncover through database and public-record research.

The idea that technological change threatens jobs is hardly new, as pointed out in a sweeping essay by former world chess champion Garry Kasparov in the Wall Street Journal.

Twenty years after losing a chess match to a computer, Kasparov writes: “Machines have been displacing people since the industrial revolution. The difference today is that machines threaten to replace the livelihoods of the class of people who read and write articles about them,” i.e. the writer of this blog and just about anyone reading it.

Kasparov argues that to bemoan technological progress is “little better than complaining that antibiotics put too many gravediggers out of work. The transfer of labor from humans to our inventions is nothing less than the history of civilization … Machines that replace physical labor have allowed us to focus more on what makes us human: our minds.”

The great challenge in artificial intelligence is to use our minds to manage the machines we create. That challenge extends to law firms. We may have e-discovery, powerful computers and databases stuffed with information, but it still requires a human mind to sort good results from bad and to craft those results into persuasive arguments.

After all, until machines replace judges and juries, it will take human minds to persuade other human minds of the value of our arguments.

Want to know more?

  • Visit charlesgriffinllc.com and see our two blogs, The Ethical Investigator and the Divorce Asset Hunter;
  • Look at my book, The Art of Fact Investigation (available in free preview for Kindle at Amazon);
  • Watch me speak about Helping Lawyers with Fact Finding, here.

Step one: don’t have a manual. That’s the message in an information-packed new book about the inner workings of the SEC just after the Madoff and the now largely forgotten (but just as egregious) Allen Stanford frauds.


In his memoir of five years at the agency, former SEC Director of Investment Management Norm Champ (now back in private practice) writes that he was stunned to arrive in public service in 2010 and find that examiners had no set procedures, either for examining regulated entities or for following up on their findings.

“If SEC inspectors ever arrived at a financial firm for an examination and discovered that the firm had no manual about how to comply with federal securities laws, that firm would immediately be cited for deficiencies and most likely subject to enforcement action,” he writes in Going Public (My Adventures Inside the SEC and How to Prevent the Next Devastating Crisis).

Among his proudest achievements were instituting such procedures at the SEC and holding accountable anyone at the agency who takes up a whistle-blower’s report – the kind the Commission ignored in relation to Madoff and Stanford.

We’ve written and spoken lots about our methodology for due diligence. You start from scratch and look not just to verify what you’ve been handed, but for information the person or company doesn’t want you to see. You don’t close investigative doors prematurely, even though human nature makes you want to do just that.

Starting from scratch means that you assume nothing. You don’t assume Madoff had all those assets under management unless you check. It would have been easy to do, but nobody asked. Anyone who was suspicious of the absence of an independent custodian or a major auditor similarly let it slide.

This is what we refer to as a Paint-by-Numbers investigation: the forms and relationships are all taken as givens, and all you get to do is decide on color. In Madoff’s case, the “forms” (the existence of invested money) were illusory. Who cares about the color (say, the risk profile of the “securities”) of something that doesn’t exist?

In Stanford’s case, there was lots of information he wouldn’t have been proud of. An April 2007 FINRA report on the Stanford Group Company said the firm had been found to be operating a securities business while failing to maintain its required minimum net capital.  A former employee of Stanford’s alleged in an April 2006 complaint in Florida state court that Stanford was operating a Ponzi scheme.

Without internal accountability procedures in place, did all of the people at the SEC just sit there? No. Champ (who arrived post-Madoff and Stanford) describes an agency packed with a lot of dedicated professionals but with a good bit of deadwood immune to the disciplines of the private-sector job market. As we read about the federal budget proposals that seek to cut funding at a variety of agencies, this book contains two other pertinent messages:

  1. If you could fire people in government the way you can in the private sector, it would be easier for the government to save money.
  2. That battle is so tough that most people (including Champ) just try to work with the good people they can find and leave personnel reform for someone else.

Champ makes no promises that there won’t be more Ponzi schemes, but hopes that his organizational reforms will reduce the chances. As in any due diligence, you can’t promise that you will always catch everything – only that if there are repeated indications of a problem staring you in the face (complete with former employees blowing whistles), you will follow up.

Among Champ’s recommendations for blunting the damage of the next crisis, one is especially welcome: eliminate the scandalous government sponsorship of lotteries. Lotteries are the world’s worst investment, and yet the poorest members of society spend like crazy on them, all prompted by a lot of misleading and predatory government advertising “far beyond what private businesses are allowed.”

Champ asks us to imagine what could be done with all that money people waste if it were properly invested and devoted to investor education.

We agree. The millionaires who lost with Madoff could at least have afforded $2,000 of due diligence on their investment. The poor who play the lottery, and who should be saving their money, are the ones who need the most help from the SEC and from state governments that need to find a less repugnant way to raise revenue.


 

What to do when the databases you rely on start stripping out the very data you are paying for?

Word in today’s Wall Street Journal that the main credit reporting firms will be removing many civil judgments and tax liens from credit reports prompts us to restate one of our core beliefs:

Not only do databases routinely mix people up, they are far from complete in the information they contain.

Now they will be even further from complete, because the credit reporting companies will require several identifiers on each piece of adverse information before including it in a credit report. Even if there is only one person in the United States with a particular name, if his address and Social Security number are not included in a court filing against him, that filing may never make it onto his report. From what we’ve seen, there are almost no SSNs in the filings we review.
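The matching rule being described can be shown in a few lines: a public filing attaches to a consumer’s file only if enough identifiers line up, so a judgment filed under a name alone never makes it in. The field names and the two-of-three threshold below are assumptions for illustration, not the bureaus’ actual criteria.

```python
# Illustration of an identifier-matching rule like the one described above.
# The fields checked and the two-of-three threshold are assumptions.
def attaches_to_file(filing, consumer, required_matches=2):
    """A filing joins the credit file only if enough identifiers match."""
    checks = [
        filing.get("name") == consumer["name"],
        filing.get("address") == consumer["address"],
        filing.get("ssn") == consumer["ssn"],
    ]
    return sum(checks) >= required_matches

consumer = {"name": "John Q. Smith", "address": "12 Elm St", "ssn": "123-45-6789"}

# Most court filings carry a name, sometimes an address, almost never an SSN.
judgment = {"name": "John Q. Smith", "address": None, "ssn": None}

print(attaches_to_file(judgment, consumer))   # False: the judgment never reaches the report
```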

As a result of this new policy, the credit scores of a lot of people are about to go up, says the Journal.

To answer the question posed at the top of this posting: you go after the information yourself. You (or a competent pro you hire) look at databases and courthouse records for liens, litigation and other information people use every day to evaluate prospective associates, counterparties and debtors. If there’s enough money at stake, you may want to conduct interviews, not only with references but with people not on the resume.

The idea that databases are missing a lot is old news to anyone who stops to take a careful look.

The next time you are searching in a paid database, you may notice a little question mark somewhere around the box where you enter your search terms. Click on that and prepare to be shocked.

“Nationwide” coverage of marriage licenses may include only a handful of states, because such licenses are not public information in many jurisdictions. In other cases, the information is public but the database doesn’t include it because it’s too expensive to gather data that has not been scanned and stored electronically.

Of course, sending someone to a courthouse costs more than a few clicks performed while sitting at your desk. But does it cost more than lending to the wrong person who defaulted on a big loan six months ago?


What will it take for artificial intelligence to surpass us humans? After the Oscars fiasco last night, it doesn’t look like much.

As someone who thinks a lot about the power of human thought versus that of machines, I find it striking not that the mix-up of the Best Picture award was the product of one person’s error, but that four people flubbed what may be the easiest job imaginable in show business.

Not one, but two PwC partners messed up with the envelope. You would think that if they had duplicates, it would be pretty clear whose job it was to give out the envelopes to the presenters. Something like, “you give them out and my set will be the backup.” But that didn’t seem to be what happened.

Then you have the compounded errors of Warren Beatty and Faye Dunaway, both of whom can read and simply read off what was obviously the wrong card.

The line we always hear about not being afraid that computers are taking over the world is that human beings will always be there to turn them off if necessary. Afraid of driverless cars? Don’t worry; you can always take over if the car is getting ready to carry you off a cliff.

An asset search for Bill Johnson that reveals he’s worth $200 million, when he emerged from Chapter 7 bankruptcy just 15 months ago? A human being can look at the results and conclude the computer mixed up our Bill Johnson with the tycoon of the same name.

But what if the person who wants to override the driverless car is drunk? What if the person on the Bill Johnson case is a dimwit who just passes on these improbable findings without further inquiry? Then, the best computer programming we have is only as good as the dumbest person overseeing it.
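One partial answer is to build the sanity check into the workflow, so that improbable results are at least flagged before anyone passes them along; a human still has to resolve the flag. A minimal sketch, with invented field names and thresholds:

```python
# Minimal sketch of a plausibility flag for asset-search results. The field names,
# the 24-month window and the $10 million threshold are invented for illustration.
from datetime import date

def flag_improbable(result, as_of):
    """Return warnings a human should resolve before the result goes out the door."""
    warnings = []
    discharge = result.get("last_bankruptcy_discharge")
    if discharge and result["estimated_net_worth"] > 10_000_000:
        months_since = (as_of.year - discharge.year) * 12 + (as_of.month - discharge.month)
        if months_since < 24:
            warnings.append(
                f"${result['estimated_net_worth']:,} net worth only {months_since} months "
                "after a Chapter 7 discharge; possible mix-up with someone of the same name"
            )
    return warnings

result = {
    "subject": "Bill Johnson",
    "estimated_net_worth": 200_000_000,
    "last_bankruptcy_discharge": date(2015, 11, 30),
}

print(flag_improbable(result, as_of=date(2017, 2, 26)))   # flags the 15-month gap
```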

We’ve written extensively here about the value of the human brain in doing investigations. It’s the theme of my book, The Art of Fact Investigation.

As the Oscars demonstrated last night, not just any human brain will do.


There is a huge branch of the “fake news” business that gets no attention at all: the fake news consumed each day by corporate America that has nothing to do with politics, but everything to do with business – the bulk of the $18 trillion U.S. economy.

We’ve been sorting through this kind of thing for years; it’s often why our clients hire us. I’ve also been talking on the subject recently in a speech called Fighting Fake News (see an excerpt here).

The everyday term for figuring out what’s fake and what isn’t is due diligence. Good businesses are good at it; bad ones aren’t.

Six months ago, the term “fake news” meant false political information that the originator or spreader of the “news” knew was false. It’s hardly a new phenomenon, as the Wall Street Journal helpfully pointed out this week with Vladimir Putin’s Political Meddling Revives Old KGB Tactics.

By now, the term has been expanded to mean anything that’s partly or wholly untrue in the eye of the beholder, whether or not it was intentionally misstated.

What is corporate fake news? The massive amount of company, financial and personal information reported but never checked. Plenty of what’s put out is accurate, but a lot isn’t. Ask any public relations professional you know who will give you a frank appraisal of his business. If you issue a news release that’s well written, with nice quotes from your client, what happens to it?

In many cases, it will be printed word for word as a news story. There will be a news byline over it, but the body of the release will be all but unchanged. The “story” will be on dozens of television news department websites, in local newspapers, and then reproduced again based on that “reporting.”

Do “quality journalists” do this? Not that way.

Off the Beaten Track

But consider a company that is not sexy and attractive to Wall Street bankers or a lot of investors – perhaps a mid-sized printing company in Ohio or a private auto-parts manufacturer in Indiana. If that company issues a dull news release, the New York Times or the Chicago Tribune will almost certainly devote zero hours to verifying what’s in that news release. They may not report on the company at all.

If the company is public, you may get a couple of lines with earnings, usually in the context of “beating” or “missing” what analysts had predicted the earnings would be. Good luck relying on that. You would need to ask, are those the analysts who missed the dot-com bubble, the housing crisis, last year’s plunge in oil prices?

What are you to do then, when you are considering hiring someone who worked at one of these thinly covered companies? Or if you may want to enter into a long-term contract with one of them, or perhaps acquire one? Of what use will the “news” about the company be when you start looking?

There is another dimension to the problem aside from what a company says about itself. A company’s valuation is always relative to the health of its competitors, and those competitors have the same interest in promoting themselves, as well as an interest in casting their rivals in a negative light.

If there is good news about fake news in politics today, it’s that people have heard a lot about made-up “news” sites, and reputable news outlets have devoted resources to reporting on them. Whatever your political viewpoint, there are plenty of places to go that will scrutinize the other side’s speeches and writings.

But where do you go if you need to scrutinize a thinly-traded or private company in refrigerated freight? Printing? A company that imports socks from Italy or manganese from Africa?

If you care enough, if the issue is valuable to you, you do your own research. Just as in the political realm, you read widely from a variety of sources and make your own decision.

Gray Matter

The problem with any kind of fake news detection comes when what is said is partially true. Neither black nor white, but gray. Evaluating gray takes the kind of gray matter a computer does not offer.

In politics, we see this all the time. President Obama’s promise “If you like your doctor, you can keep your doctor” has been given evolving degrees of truthfulness ratings since the time he said it. Many people have been able to keep their doctors; many have not (absent paying several times what they used to pay).

In business, things are almost always a shade of gray. During due diligence, an interview with someone who has posted an enthusiastic recommendation of a person on LinkedIn can reveal notes of hesitancy or qualification. You can ask questions that relate to matters not covered in the recommendation.

If a company has posted wonderful earnings, in-depth analysis of the figures can show you that “wonderful” can mean “better than expected, but not sustainable because the company keeps selling assets to make its numbers.” Interviews can tell you it’s a lousy place to work, which could mean something if it’s a service business and may reflect poorly on the CEO and board.

As we tell our clients all the time, if you are about to hand the keys to a $30 million business to someone, doesn’t it make sense to make a few calls about that person to people not listed as references, and to see if there are jobs not listed on the person’s resume you’re holding?

In the world of due diligence, the most damaging fake news can come from omission — the information that is never written. Our challenge is to find it.

 


A story in the Wall Street Journal, “Google Uses Its Search Engine to Hawk Its Products,” serves as a useful reminder of something we tell clients all the time: Google is there to make money, and if your ideal search result won’t make Google money, you may get a less-than-useful result.


Google is an indispensable tool when searching for facts, but Google is not a disinterested party, like a good reference librarian. Google is in business to make money.

The story reports that Google buys some of its own ads, so when you search for a particular thing that Google’s parent company Alphabet sells, guess what? Alphabet’s products have a way of turning up at the top of the list.

One of the first things ever written on this blog more than five years ago was an entry called Google is Not a Substitute for Thinking, and it was one of the most read entries we’ve ever posted.

Among the arguments advanced there as to why a Google search will hardly ever suffice in a factual inquiry was that Google’s search results are stacked in favor of those that are paid for or that Google judges to be commercially advantageous. A Google entry about a dry cleaner in Joplin, Missouri that has no website would not be very profitable for Google, but if that dry cleaner owes you $50,000, you would want him at the top of page one.

The best way to think about Google is to treat it as a meta-search engine. Imagine not that Google will be able to give you the final answer, but a clue as to where to find the final answer.

If your dry cleaner has no website, Google may point you to a site such as Yelp that rates a different dry cleaner in Joplin. Yelp may then have the dry cleaner you want, but that Yelp listing won’t necessarily come up on Google. Or, you may notice via Google that Joplin or the state of Missouri may require a permit to operate a dry cleaner. Google can help you find where to look up such a permit.

Remember, any dolt at the public library can use Google. It takes a person with the capacity to think creatively to use Google to its greatest potential.
