An entire day at a conference on artificial intelligence and the law last week in Chicago produced this insight about how lawyers are dealing with the fast-changing world of artificial intelligence:

Many lawyers are like someone who knows he needs to buy a car but knows nothing about cars. He knows he needs to get from A to B each day and wants to get there faster. So, he is deposited at the largest auto show in the world and told, “Decide which car you should buy.”

Whether it’s at smaller conferences or at the gigantic, auto-show-like legal tech jamborees in Las Vegas or New York, the discussion of AI seems to be dominated by the companies that produce the stuff. Much less on show are people who use legal AI in their everyday lives.

At my conference, the keynote address (and two more panels) were dominated by IBM. Other familiar names in AI in the world of smart contracting and legal research were there, along with one of the major “old tech” legal research giants. All of the products and services sounded great, which means the salespeople were doing their jobs.

But the number of people who presented about actually using AI after buying it? Just a few (including me). “We wanted to get more users,” said one of the conference organizers, who explained that lawyers are reluctant to describe the ways they use AI, lest they give up valuable pointers to their competitors.

Most of the questions and discussion from lawyers centered around two main themes:

  1. How can we decide which product to buy when there are so many, and they change so quickly?
  2. How can we organize our firm’s business model in such a way that it will be profitable to use expensive new software (“software” being what AI gets called after you start using it)?

Law firm business models are not my specialty, but I have written before, and spoke last week, about evaluating new programs.

Only you (and not the vendor) can decide how useful a program is, by testing it. Don’t let the vendors feed you canned examples of how great their program is. Don’t put in a search term or two while standing at a trade show kiosk. Instead, plug in a current problem or three while sitting in your office and see how well the program does compared to the searches you ran last week.

You mean you didn’t run the searches, but you’re deciding whether to buy this expensive package? You should at least ask the people who will do the work what they think of the offering.

I always like to put in my own company or my own name and see how accurate a fact-finding program is. Some of them (which are still useful some of the time) think I live in the house I sold eight years ago. If you’re going to buy, you should know what a program can do and what it can’t.

As with other salespeople in other industries, AI sales staff won’t tell you what their programs are bad at doing. And most importantly, they won’t tell you how well or how badly (usually badly) their program integrates with other AI software you may be using.

No matter how good any software is, you will need good, inquisitive and flexible people running it and helping to coordinate outputs of different products you are using.

While sales staff may have subject-matter expertise in law (it helps if they are lawyers themselves) they cannot possibly specialize in all facets of the law. Their job is to sell, and they should not be criticized for it.

They have their job to do, and as a responsible buyer, you have yours.

For more on what an AI testing program could look like and what kinds of traits the best users of AI should have, see my forthcoming law review article here:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3085263

 

Do you ever wonder why some gifted small children play Mozart, but you never see any child prodigy lawyers who can draft a complicated will?

The reason is that the rules of how to play the piano have far fewer permutations and judgment calls than deciding what should go into a will. “Do this, not that” works well with a limited number of keys in each octave. But the permutations of a will are infinite. And by the way, child prodigies can play the notes, but usually not as soulfully as an older pianist with more experience of the range of emotions an adult experiences over a lifetime.

You get to be good at something by doing a lot of it. You can play the Mozart over and over, but how do you know what other human beings may need in a will, covering events that have yet to happen?

Not by drafting the same kind of will over and over, that’s for sure.

Reviewing a lot of translations done by people is the way Google Translate can manage rudimentary translations in a split second. Reviewing a thousand decisions made in document discovery and learning from mistakes picked out by a person is the way e-discovery software looks smarter the longer you use it.

But you would never translate a complex, nuanced document with Google Translate, and you sure wouldn’t produce documents without having a partner look them all over.

The craziness that can result from the mindless following of rules is an issue at the forefront of law today, as we debate how much we should rely on artificial intelligence.

Who should bear the cost if AI makes a decision that damages a client? The designers of the software? The lawyers who use it? Or will malpractice insurance evolve enough to spread the risk around so that clients pay in advance in the form of a slightly higher price to offset the premium paid by the lawyer?

Whatever we decide, my view is that human oversight of computer activity is something society will need far into the future. The Mozart line above was given to me by my property professor in law school and appeared in the preface of my book, The Art of Fact Investigation.

The Mozart line is appropriate when thinking about computers, too. And in visual art, I increasingly see parallels between the way artists and lawyers struggle to get at what is true and what is an outcome we find desirable. Take the recent exhibition at the Metropolitan Museum here in New York, called Delirious: Art at the Limits of Reason, 1950 to 1980.

It showed that our struggle with machines is hardly new, even though it would seem so given the daily flood of scary stories about AI and “The Singularity.” The show was filled with the worry of artists 50 and 60 years ago about what machines would do to the way we see the world, find facts, and remain rational. It seems funny to say that: computers seem to be ultra-rational in their production of purely logical “thinking.”

But what seems to be a sensible or logical premise doesn’t mean that you’ll end up with logical conclusions. On a very early AI level, consider the databases we use today that were the wonders of the world 20 years ago. LexisNexis and Westlaw are hugely powerful tools, but what if you don’t supervise them? If I put my name into Westlaw, it thinks I still live in the home I sold in 2011. All other reasoning Westlaw produces based on that “fact” will be wrong. Noise complaints brought against the residents there have nothing to do with me. A newspaper story about disorderly conduct resulting in many police visits to the home two years ago is also irrelevant when talking about me.[1]

The idea of suppositions running amok came home when I looked at a sculpture last month by Sol LeWitt (1928-2007) called 13/3. At first glance, this sculpture would seem to have little relationship to delirium. It sounds from the outset like a simple idea: a 13×13 grid from which three towers arise. What you get when it’s logically put into action is a disorienting building that few would want to occupy.

As the curators commented, LeWitt “did not consider his otherwise systematic work rational. Indeed, he aimed to ‘break out of the whole idea of rationality.’ ‘In a logical sequence,’ LeWitt wrote, in which a predetermined algorithm, not the artist, dictates the work of art, ‘you don’t think about it. It is a way of not thinking. It is irrational.’”

Another wonderful work in the show, Howardena Pindell’s Untitled #2, makes fun of the faith we sometimes have in what superficially looks to be the product of machine-driven logic. A vast array of numbered dots sits uneasily atop a grid, and at first, the dots appear to be the product of an algorithm. In the end, they “amount to nothing but diagrammatic babble.”

Setting a formula in motion is not deep thinking. The thinking comes in deciding whether the vast amount of information we’re processing results in something we like, want or need. Lawyers would do well to remember that.

[1] Imaginary stuff: while Westlaw does say I live there, the problems at the home are made up for illustrative purposes.

We’ve had a great response to an Above the Law op-ed here that outlined the kinds of skills lawyers will need as artificial intelligence increases its foothold in law firms.

The piece makes clear that without the right kinds of skills, many of the benefits of AI will be lost on law firms because you still need an engaged human brain to ask the computer the right questions and to analyze the results.

But too much passivity in the use of AI is not only inefficient. It also carries the risk of ethical violations. Once you deploy anything in the aid of a client, New York legal ethics guru Roy Simon says you need to ask,

“Has your firm designated a person (whether lawyer or nonlawyer) to vet, test or evaluate the AI products (and technology products generally) before using them to serve clients?”

We’ve written before about ABA Model Rule 5.3 that requires lawyers to supervise the investigators they hire (and “supervise” means more than saying “don’t break any rules” and then waiting for the results to roll in). See The Weinstein Saga: Now Featuring Lying Investigators, Duplicitous Journalists, Sloppy Lawyers.

But Rule 5.3 also pertains to supervising your IT department. It’s not enough to have a salesperson convince you to buy new software (AI gets called software once we start using it). The lawyer or the firm paying for it should do more than rely on claims by the vendor.

Simon told a recent conference that you don’t have to understand the code or algorithms behind the product (just as you don’t have to know every feature of Word or Excel), but you do need to know what the limits of the product are and what can go wrong (especially how to protect confidential information).

In addition to leaking information it shouldn’t, what kinds of things are there to learn about how a program works that could have an impact on the quality of the work you do with it?

  • AI can be biased: Software works based on the assumptions of those who program it. You can never get a read in advance on what a program’s biases may do to output until you use the program. It’s a more sophisticated version of the old saying “garbage in, garbage out,” but a related concept: there are thousands of decisions a computer needs to make based on definitions a person inserts, either before the thing comes out of the box or during the machine-learning process, where people refine results with new, corrective inputs.
  • Competing AI programs can do some things better than others. Which programs are best for Task X and which for Task Y? No salesperson will give you the complete answer. You learn by trying.
  • Control group testing can be very valuable. Ask someone at your firm to do a search for which you know the results and see how easy it is for them to come up with the results you know you should see. If the results they come up with are wrong, you may have a problem with the person, with the program, or both.
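
The control-group test in the last bullet can be sketched in a few lines of code. This is a hypothetical harness, not any vendor’s API: it assumes the program under test returns a list of document IDs, and the benchmark is a set of results you already know are correct. The function name and data are illustrative only.

```python
# Hypothetical control-group test: score a search tool's results
# against a benchmark whose correct answers you already know.

def evaluate_search(returned_ids, expected_ids):
    """Score one benchmark search against known-good results."""
    returned, expected = set(returned_ids), set(expected_ids)
    true_hits = returned & expected
    recall = len(true_hits) / len(expected) if expected else 1.0
    precision = len(true_hits) / len(returned) if returned else 0.0
    return {"recall": recall, "precision": precision,
            "missed": sorted(expected - returned),   # known docs the tool missed
            "extra": sorted(returned - expected)}    # docs the tool should not have returned

# Benchmark: you already know documents 101, 102 and 103 are responsive.
expected = [101, 102, 103]
# The program under test (run by a person at your firm) returned these:
returned = [101, 103, 250]

score = evaluate_search(returned, expected)
print(score["missed"])  # [102] -- follow up: person, program, or both?
```

If the recall is low, the follow-up question is the one from the bullet above: is the problem the person running the search, the program, or both?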

The person who should not be leading this portion of the training is the sales representative of the software vendor. Someone competent at the law firm needs to do it, and if that person is not a lawyer, then a lawyer needs to be up on what’s happening.

[For more on our thoughts on AI, see the draft of my paper for the Savannah Law Review, Legal Jobs in the Age of Artificial Intelligence: Moving from Today’s Limited Universe of Data Toward the Great Beyond, available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3085263].

 

Decent investigators and journalists everywhere ought to have been outraged at news over the weekend in the Wall Street Journal that appears to have caught a corporate investigator masquerading as a Journal reporter.

According to the story, the person trying to get information about investment strategy and caught on tape pretending to be someone he wasn’t was “Jean-Charles Brisard, a well-known corporate security and intelligence consultant who lives in Switzerland and France.”

Fake news we know about, but fake reporters? It’s more common than it should be. Free societies need a free press, and for a free press to work people have to be able to trust that a reporter is who he says he is.

Good investigators working for U.S. lawyers should not pretend to be someone they are not – whether the fake identity is a journalist or some other occupation. Whether or not it breaks a state or federal impersonation statute, it’s probably unethical under the rules of professional responsibility.

Consider Harvey Weinstein’s army of lawyers and their investigators. The evidence was presented in Ronan Farrow’s second New Yorker piece on Weinstein that hit the web last night. The story says that Weinstein, through lawyer David Boies, hired former Mossad agents from a company called Black Cube.

“Two private investigators from Black Cube, using false identities, met with the actress Rose McGowan, who eventually publicly accused Weinstein of rape, to extract information from her,” the story says.

It goes on to explain that one of the investigators used a real company as cover but that the company had been specially set up as an empty shell for this investigation. The name of the company was real, but its purpose was not (it was not an investment bank). Worse, the investigator used a fake name. Courts have said this can be OK for the agents of lawyers if done in conjunction with an intellectual property, civil rights or criminal-defense matter. This was none of these.

The journalism aspect of the Weinstein/Black Cube investigation is (if accurate) just as revolting, involving a freelance journalist who was passing what people said to him not to a news outlet but to Black Cube. This produces the same result as the Brisard case above. Why talk to a journalist if he (a) may not be a journalist, or (b) will be passing your material on directly to the person he’s asking you about? The freelancer in question is unidentified and told Farrow he took no money from Black Cube or Weinstein. Volunteerism at its most inspiring.

And where were the lawyers in all of this unseemliness? Boies signed the contract with Black Cube, but said he neither selected the firm nor supervised it. “We should not have been contracting with and paying investigators that we did not select and direct,” Boies told Farrow. “At the time, it seemed a reasonable accommodation for a client, but it was not thought through, and that was my mistake. It was a mistake at the time.”

Alert to lawyers everywhere: it was a mistake “at the time” and it would be a mistake anytime. Lawyers are duty-bound to supervise all of their agents, lawyer and non-lawyer alike. When I give my standard Ethics for Investigators talk, ABA Model Rule 5.3(c)(1) comes right at the top, as in this excerpt from my recent CLE for the State Bar of Arizona:

A lawyer is responsible for a non-lawyer’s conduct that violates the rules if the lawyer “orders or, with the knowledge of the specific conduct, ratifies the conduct involved.”

“Ratification” can in some cases be interpreted as benign neglect. An initial warning of “Just don’t break any rules” won’t suffice. The nightmare scenario is the famed Winnie the Pooh case in California, Stephen Slesinger, Inc. v. The Walt Disney Company, 155 Cal.App.4th 736 (2007).

Slesinger’s lawyers hired investigators and told them to be good. Then the investigators broke into Disney’s offices and stole documents, some of them privileged. The court not only suppressed the evidence but dismissed the entire case. Part of the reasoning was that Slesinger’s lawyers, after that initial instruction, did no supervising at all.

Black Cube may not have committed any crimes, but appears from the facts in the story to have gone over the ethical line in pretending to be people they were not. Boies (or any other lawyer in a similar position) should have tried to make sure they would do no such thing. What Black Cube did was everyday fare for Mossad, the CIA and MI6, but not for the agents of U.S. lawyers.

Just back from the ABA’s family law conference where I gave a talk on asset searching, I heard a wonderful talk from the “father of unbundling” of legal services, Forrest “Woody” Mosten. He is a high-priced divorce lawyer practicing in Beverly Hills, yet makes a great living without ever going to court.

Instead, he lets others do that and spends a lot of time working for an hour or two at a time for clients who can’t (or won’t) pay the kinds of top-dog rates that many lawyers know are meeting such resistance (even from people who could afford to pay but just don’t feel like it).

Unbundling for a family lawyer may look like this: A two-hour meeting to lay out strategy that someone less expensive could handle, or that a divorcing person may take into court during self-representation.

The key thing to remember about unbundling is not that the results people get are just as good as if the top-priced lawyer had done all the work for them. It’s that the alternative is that they won’t hire you at all.

I realized that without thinking about it, our firm had been unbundling for years. We are always happy to take a quick look for a couple of hours and then see what preliminary findings we come up with. As long as you are up front that a two-hour investigation is not the same as the first two hours of a ten-hour engagement, your cost-conscious client may be happy for the smaller bit of work they can afford.

Unbundling is not a new concept, but what was so surprising to me was that this seemed to be the first time the 80 lawyers in the room were encountering the concept. There was stiff resistance at first, but in the end, this seemed to soften.

Mosten says you may want to sell them a Cadillac, but it’s better to sell them a Chevy because otherwise they’ll have to walk.

Unbundling is not for everyone – not for every provider or consumer of services. But if any service professional is there to serve clients, sometimes a bumpy ride is better than no ride at all.

Anyone following artificial intelligence in law knows that its first great cost saving has been in the area of document discovery. Machines can sort through duplicates so that associates don’t have to read the same document seven times, and they can string together thousands of emails to put together a quick-to-read series of a dozen email chains. More sophisticated programs evolve their ability with the help of human input.
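
The duplicate-sorting described above can be approximated very crudely: hash each document’s normalized text and keep one copy per hash. Real e-discovery platforms do far more (near-duplicate detection, email threading), but a minimal sketch, with invented documents, might look like this:

```python
import hashlib

def dedupe(documents):
    """Keep one copy of each distinct document body.

    Normalizes whitespace and case before hashing, so trivial
    variants of the same text collapse into a single entry.
    """
    seen = set()
    unique = []
    for doc in documents:
        normalized = " ".join(doc.lower().split())
        digest = hashlib.sha256(normalized.encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = [
    "Meeting moved to Friday.",
    "meeting  moved to  friday.",   # same text, different spacing and case
    "Q3 numbers attached.",
]
print(len(dedupe(docs)))  # 2 -- the associate reads each document once
```

Exact-hash matching is the simplest possible version; the commercial tools the paragraph describes earn their keep on the harder cases this sketch would miss.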

Law firms are already saving their clients millions in adopting the technology. It’s bad news for the lawyers who used to earn their livings doing extremely boring document review, but good for everyone else. As in the grocery, book, taxi and hotel businesses, the march of technology is inevitable.

Other advances in law have come with search engines such as Lex Machina, which searches through a small number of databases to predict the outcome of patent cases. Other AI products that have scanned all U.S. Supreme Court decisions do a better job than people in predicting how the court will decide a particular case, based on briefs submitted in a live matter and the judges deciding the case.

When we think about our work gathering facts, we know that most of our searching is not done in a closed, limited environment. We don’t look through a “mere” four million documents as in a complex discovery, or the trivial (for a computer) collection of U.S. Supreme Court cases. Our work is done when the entire world is the possible location of the search.

A person who seldom leaves New York may have a Nevada company with assets in Texas, Bermuda or Russia.

Until all court records in the U.S. are scanned and subject to optical character recognition, artificial intelligence won’t be able to do our job for us in looking over litigation that pertains to a person we are examining.

That day will surely come for U.S. records, and may be here in 10 years, but it is not here yet. For the rest of the world, the wait will be longer.

Make no mistake: computers are essential to our business. Still, the set of databases we often use to begin a case, including Westlaw and LexisNexis, is not as easy to use as Lex Machina or other closed systems, because those databases rely on abstracts of documents as opposed to the documents themselves.

They are frequently wrong about individual information, mix up different individuals with the same name, and often have outdated material. My profile on one of them, for instance, includes my company but a home phone number I haven’t used in eight years. My current home number is absent. Other databases get my phone number right, but not my company.

Wouldn’t it be nice to have a “Kayak” type system that could compare a person’s profile on five or six paid databases, and then sort out the gold from the garbage?

It would, but it might not happen so soon, and not just because of the open-universe problem.

Even assuming these databases could look to all documents, two other problems arise:

  1. They are on incompatible platforms. Integrating them would be a programming problem.
  2. More importantly, they are paid products, whereas Kayak searches free travel and airline sites. They also require licenses to use, and the amount of data you can get depends on which of several permissible uses the user declares to gain access to the data. A system integrating the sites would have to vet the user for each system and process payment if it’s a pay-per-use platform.
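
Setting the licensing and platform problems aside, the core of the “Kayak” idea is simple to sketch: pull the same person’s profile from several sources and flag every field on which the sources disagree, so a human can sort the gold from the garbage. The database names and records below are invented for illustration:

```python
# Hypothetical sketch of a cross-database comparison: given one
# person's profile from several paid sources, flag every field on
# which the sources disagree so a person can review it.

def compare_profiles(profiles):
    """profiles: {source_name: {field: value}} -> fields needing review."""
    fields = {f for p in profiles.values() for f in p}
    conflicts = {}
    for field in fields:
        values = {src: p[field] for src, p in profiles.items() if field in p}
        if len(set(values.values())) > 1:   # sources disagree on this field
            conflicts[field] = values
    return conflicts

profiles = {
    "DatabaseA": {"home": "123 Oak St (sold in 2011)", "phone": "555-0100"},
    "DatabaseB": {"home": "9 Elm Ave", "phone": "555-0100"},
}
conflicts = compare_profiles(profiles)
print(sorted(conflicts))  # ['home'] -- only the address needs human review
```

Note that the sketch can only surface disagreements; deciding which source is right is exactly the human judgment the rest of this post argues for.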

These are hardly insurmountable problems, but they do help illustrate why, with AI marching relentlessly toward the law firm, certain areas of practice will succumb to more automation faster than others.

What will be insurmountable for AI is this: you cannot ask computers to examine what is not written down, and much of the most interesting information about people resides not on paper but in their minds and the minds of those who know them.

The next installment of this series on AI will consider how AI could still work to help us toward the right people to interview.

By now, if a lawyer isn’t thinking hard about how automation is going to transform the business of law, that lawyer is a laggard.

You see the way computers upended the taxi, hotel, book and shopping mall businesses? It’s already started in law too.  As firms face resistance over pricing and are looking to get more efficient, the time is now to start training people to work with – and not in fear of – artificial intelligence.

And be not afraid.

There will still be plenty of lawyers around in 10 or 20 years no matter how much artificial intelligence gets deployed in the law. But the roles those people will play will in many respects be different. The new roles will need different skills.

In a new Harvard Business Review article (based on his new book, “Humility Is the New Smart”), Professor Ed Hess at the Darden School of Business argues that in the age of artificial intelligence, being smart won’t mean the same thing as it does today.

Because smart machines can process, store and recall information faster than any person, the skills of memorizing and recall are not as important as they once were. The new smart “will be determined not by what or how you know but by the quality of your thinking, listening, relating, collaborating and learning,” Hess writes.

Among the many concrete things this will mean for lawyers are two aspects of fact investigation we know well and have been writing about for a long time.

  1. Open-mindedness will be indispensable.
  2. Even for legal research, logical deduction is out, logical inference is in.

Hess predicts we will “spend more time training to be open-minded and learning to update our beliefs in response to new data.” What could this mean in practice for a lawyer?

If all you know how to do is to gather raw information from a limited universe of documents, or perhaps spend a lot of time cutting and pasting phrases from old documents onto new ones, your days are numbered. Technology-assisted review (TAR) already does a good job sorting out duplicates and constructing a chain of emails so you don’t have to read the same email 27 times as you read through a long exchange.

But as computers become smarter and faster, they are sometimes overwhelmed by the vast amounts of new data coming online all the time. I wrote about this in my book, “The Art of Fact Investigation: Creative Thinking in the Age of Information Overload.”

I made the overload point with respect to finding facts outside discovery, but the same phenomenon is hitting legal research too.

In their article “On the Concept of Relevance in Legal Information Retrieval” in the Artificial Intelligence and Law Journal earlier this year,[1] Marc van Opijnen and Cristiana Santos wrote that

“The number of legal documents published online is growing exponentially, but accessibility and searchability have not kept pace with this growth rate. Poorly written or relatively unimportant court decisions are available at the click of the mouse, exposing the comforting myth that all results with the same juristic status are equal. An overload of information (particularly if of low-quality) carries the risk of undermining knowledge acquisition possibilities and even access to justice.”

Legal research suffers from the overload problem, and e-discovery faces it too, despite TAR and whatever technology succeeds TAR (and something will). Whole areas of data are now searchable and discoverable when once they were not. The more you can search, the more there is to search. A lot of what comes back is garbage.

Lawyers who will succeed in using ever more sophisticated computer programs will need to remain open-minded that they (the lawyers) and not the computers are in charge. Open-minded here means accepting that computers are great at some things, but that for a great many years an alert mind will be able to sort through results in a way a computer won’t. The kind of person who will succeed at this will be entirely active – and not passive – while using the technology. Anyone using TAR knows that it requires training before it can be used correctly.

One reason the mind needs to stay engaged is that not all legal reasoning is deductive, and logical deduction is the basis for computational logic. Michael Genesereth of Codex, Stanford’s Center for Legal Informatics wrote two years ago that computational law “simply cannot be applied in cases requiring analogical or inductive reasoning,” though if there are enough judicial rulings interpreting a regulation the computers could muddle through.

For logical deduction to work, you need to know what step one is before you proceed to step two. Sherlock Holmes always knew where to start because he was a character in entertaining works of fiction. In solving the puzzle, he laid it out in a way that made it seem the only logical solution.

But it wasn’t. In real life, law enforcement, investigators and attorneys faced with mountains of jumbled facts have to pick their way through all kinds of evidence that often produces mutually contradictory theories. The universe of possible starting points is almost infinite.

It can be a humbling experience to sit in front of a powerful computer armed with great software and connected to the rest of the world, and to have no idea where to begin looking, when you’ve searched enough, and how confident to be in your findings.

“The new smart,” says Hess, “will be about trying to overcome the two big inhibitors of critical thinking and team collaboration: our ego and our fears.”

Want to know more about our firm?

  • Visit charlesgriffinllc.com and see our two blogs, this one and The Divorce Asset Hunter;
  • Look at my book, The Art of Fact Investigation (available in free preview for Kindle at Amazon). There is a detailed section on logic and inference in the law.
  • Watch me speak about Helping Lawyers with Fact Finding, here. We offer training for lawyers, and I speak across the country to legal groups about the proper mindset for legal inquiry.
  • If you are a member of the ABA’s Litigation Section, see my piece in the current issue of Litigation Journal, “Five Questions Litigators Should Ask: Before Hiring an Investigator (and Five Tips to Investigate It Yourself).” It contains a discussion of open-mindedness.

[1] van Opijnen, M. & Santos, C. Artif Intell Law (2017) 25: 65. doi:10.1007/s10506-017-9195-8

 

One lawyer we know has a stock answer when clients ask him how good their case is: “I don’t know. The courts are the most lawless place in America.”

What he means is that even though the law is supposed to foster predictability so that we will know how to act without breaking our society’s civil and criminal rules, there is a wide variety of opinion among judges even in the same jurisdictions about the matters that make or break a case on its way to a jury.

Our friend’s answer came to mind while reading an interesting roundup of experienced trial lawyers over the weekend about why the trial of Bill Cosby outside Philadelphia resulted in a deadlocked jury and mistrial, announced on Saturday.

In the New York Times, the attorneys mostly fell into two camps: those who thought lead witness Andrea Constand presented the jury with credibility problems because of inconsistent testimony, and those who thought the judge’s decision to limit the admission of evidence of many other similar allegations substantially weakened the prosecution’s case.

My view is that the two reasons are linked: the credibility problem could easily have been overcome if the jury had been able to hear about the many other women who alleged Cosby had drugged and had sexual contact with them too.

In another case with identical facts and a different judge, the other accusers’ testimony might have made it in. It’s a great example of two things we tell clients all the time:

  1. Persuasive evidence is good, but admissible evidence is what you really want when you know you’re going to trial.
  2. A lot of legal jobs are now being done by computers, but while there are human judges they will differ the way humans always do: in a way that is never 100% predictable.

Admissibility

When we are assigned to gather facts in civil or criminal matters, all of the evidence we get must always be gathered legally and ethically. Otherwise it could easily turn out to be inadmissible. But even if you do everything right, admissibility is sometimes out of your control. The whole case can turn on it.

If all you are doing is trying to get as much information as you can without any thought of taking it to trial, then admissibility may not be much of a concern. Think about using hearsay evidence to decide whether someone is rich enough to be worth suing, or finding personally damaging information that might be excluded as prejudicial but whose mere mention in a motion would be too much for the other side to bear. It could increase the chance of a more favorable settlement for you.

In the Cosby case the information in question would have been very helpful to the prosecution.

Ordinarily the justice system doesn’t like to see evidence of other bad acts used to paint a picture of a defendant’s character. Rule 404(b) of the Federal Rules of Evidence excludes this kind of thing, but allows admission of evidence of another act for purposes “such as proving motive, opportunity, intent, preparation, plan, knowledge, identity, absence of mistake, or lack of accident.”

So the prosecution could have argued that all the other accusers making similar claims that they were drugged and subjected to sexual contact were evidence of Cosby’s intent, or a lack of accident, and may even have been seen as preparation for the time Constand went to Cosby’s home and was drugged.

But the judge wouldn't let any of that in. In Pennsylvania, the rules in this section are tougher on the prosecution than are the federal rules. The state's Rule 404(b)(2) "requires that the probative value of the evidence must outweigh its potential for prejudice. When weighing the potential for prejudice of evidence of other crimes, wrongs, or acts, the trial court may consider whether and how much such potential for prejudice can be reduced by cautionary instructions."

It seems the judge feared that testimony from the other accusers would have prejudiced the jury even with a cautionary instruction that their accounts alone did not constitute proof of Cosby's guilt in the matter involving Constand.

Unpredictability

The legal world is justifiably preoccupied with figuring out how to reduce costs by automating as many tasks as possible. Some fact gathering can be automated, but not all of it, for the simple reason that facts are infinitely variable and therefore not wholly predictable.

Implicit in fact gathering is evaluating the facts as you get them. You are constantly evaluating because you can't look everywhere, so promising leads get follow-up and the others don't. Machines can scan millions of documents using optical character recognition because there are only so many combinations of letters out there. But the variety of human experience is limitless.

If machines can’t be trusted to properly evaluate someone’s story, imagine the problems if that story has never been written down. Think about all the things you would not want the world to know about you. How much of all of that has been written down? Probably very little. It was human effort alone that developed the other witnesses the prosecution wanted to call.

The only way a computer might have helped in this case would have been to predict, based on prior cases, which way the judge would rule on excluding the other evidence. Even that would be a tough program to write, because these decisions turn on so many unique factors. And since judges are assigned at random, it wouldn't have helped shape the decision about whether or not to charge Cosby.

Want to know more about our firm?

  • Visit charlesgriffinllc.com and see our two blogs, this one and The Divorce Asset Hunter;
  • Look at my book, The Art of Fact Investigation (available in free preview for Kindle at Amazon);
  • Watch me speak about Helping Lawyers with Fact Finding, here.
  • If you are a member of the ABA’s Litigation Section, see my piece in the current issue of Litigation Journal, “Five Questions Litigators Should Ask Before Hiring an Investigator (and Five Tips to Investigate It Yourself).”

Another EB-5 visa fraud, more burned investors. For people outside the United States trying to pick a reputable investment that will get them permanent residency in the U.S., sorting through hundreds of projects is often the hardest part of the job.

EB-5 due diligence

There is plenty written about what you should do before you invest, one of the latest guides being from the North American Securities Administrators Association, here. You can read up on EB-5 frauds here.

What are the warning signs of fraud? Last year’s revelation of a huge fraud at a Vermont development that had sucked in hundreds of investors led many to wonder, “How could we have known this would blow up?”

There is no guaranteed way to spot fraud, but if you see things that would give a prudent investor pause; if the project’s sponsors don’t have a good track record; or if you don’t understand the risks of the project (and they all have risks), walk away.

Remember, many reputable immigration lawyers refuse to recommend an EB-5 investment because they don’t want to be sued if the investment encounters problems, whether of a normal business variety or because of fraud. Even if your lawyer recommends an investment, you should still perform due diligence on the project.

Even more surprising to some non-Americans, once the government spots EB-5 fraud, it’s often too late for the investors who have put in their money. Sometimes investors can recover and sometimes not, but the green cards they wanted will not be delivered and they have lost time in addition to money.

Looking at the track record of a developer is much easier than going through the hundreds of pages of documents you and your lawyer will need to examine before you invest your money. You will always need to do both, but as you sort through five or ten possible investments, start with the track records.

The Vermont Fraud Warning Signs

One of the most celebrated of all the projects was the group of investments in the northeastern state of Vermont, near the border with Canada. Jay Peak was an old ski hill that fell into the hands of a Canadian operating company, which began with the EB-5 program by raising money for one project. Then in 2008 the Canadian company sold the business to a man the local press described as “mysterious,” Ariel Quiros. He grew up in New York and was of Puerto Rican and Venezuelan background, but had spent years in Korea building unspecified businesses that supposedly gave him the ability to buy Jay Peak for $25 million.

Once Quiros bought the mountain, the EB-5 projects accelerated, with six more projects for hotels and finally, before the scheme was exposed, a bio-technology park that was supposed to flourish among the ski hills and dairy farms of far-northern Vermont.

The main thing an investor should have asked about Jay Peak was: who exactly is Ariel Quiros, the owner? The whole sickening unravelling of the investment project is available at vtdigger.org (going from most recent to oldest story). But anyone investing after January 14, 2014 would have had an easy way to throw this one in the wastebasket. A Vermont Digger article available online described Quiros’ track record this way:

  • He lost his seat on the board of Bioheart Inc. after AnC Bio [Quiros’ company] failed to make the second installment in a $4 million investment.
  • Quiros also survived a Texas lawsuit in which two investors alleged breach of contract after they didn’t get their money back in full in 10 years.
  • And a Florida man claims he never received almost $16,000 worth of equipment from a [Quiros] company called Q Vision, but he appears to have dropped his pursuit of the matter.

Of course, full due diligence could involve verifying the assertions in this article, but if they turned out to be true, who would entrust half a million dollars and a green card to someone with a track record of not following through on investments and unhappy investors alleging breach of contract?

If Quiros occasionally had disputes with investors and partners, you would also ask a more basic question: how did he make the money that bought Jay Peak in the first place?

The article in January 2014 said,

“Quiros has melded street smarts from New York, military sensibilities from the Korean Demilitarized Zone and a love of adventure into a business empire that spans the globe, starting with international trade from Korea in the early 1980s… GSI Group, where he got his start in Korea, imported and exported goods ranging from shoes to women’s blouses to radios…He specialized in raw materials, much of it for the Korean government, he says.”

In addition, Bloomberg says that “Mr. Quiros serves as a Director and Principal of GSI Group, a raw materials procurement company for the South Korean manufacturing community with offices in Seoul, Beijing, Sydney, Hong Kong and Miami.”

The only problem is, GSI is one difficult company to find. Quiros shows up on open-source databases as a corporate officer of 96 companies, but these are all in Florida, Panama and Vermont. None of the Florida companies are called GSI.

Online, there is www.GSIkoreanet., but the site mentions no overseas offices. GSI Australia’s website says it is a company dealing in poultry, swine and grain, with no Korean links evident. And it is based in Queensland and Victoria, not New South Wales, where Sydney is. The Australian companies registry provides no evidence of any Korean trading company registered in New South Wales.

In Hong Kong, a search of directors of all Hong Kong companies shows that nobody named Quiros and no company called GSI directs any Hong Kong company.

A search of regulatory filings in the U.S. turns up nothing on Quiros until 2010, after he bought Jay Peak. A news search on Bloomberg turns up only GSI Group Inc., a maker of agricultural equipment.

The earliest mention of Quiros in securities filings in the U.S. is in 2011, as an investor in a U.S. biotech company. His Korean address in this filing was: 10th Floor, H&S Tower, 119-2 Nonhyun-Dong, Gangnam-Gu, Seoul, Korea 135-820. A reverse search of this address turns up nothing on GSI.

Are we therefore stunned to learn today that according to the U.S. Securities and Exchange Commission, Quiros never used his own money to buy Jay Peak in the first place? Instead, according to the judicial complaint filed in 2016, Quiros took money investors had already put into Jay Peak when it was owned by the Canadians, and used that cash to buy the ski resort.

Subsequent cash that came in for new projects funded prior projects, but eventually the game was up when Quiros told investors that their hotel project was cancelled and their stake converted into a loan. They would get their money back, he promised, but green cards would not be forthcoming. Quiros is fighting the SEC, while his company’s president has settled with the agency.

In the Bernard Madoff Ponzi scheme, there were red flags that sent many prudent investors away: a small-time accountant for what was supposed to be a multi-billion-dollar enterprise, and no independent custodian for the investor money.

In the case of Quiros and the Vermont project, a history of unhappy investors and a murky source of funds should have been enough for investors to say, “Not this one.”

 

About the firm:

Charles Griffin Intelligence is an independent consulting firm that performs investor due diligence for hedge funds, corporations and individuals both inside and outside the United States. We never do work for any EB-5 developer or regional center. We do not provide legal advice, but can help investors and their lawyers assess the business risk of an investment.

For more information about the firm, please see the website at www.charlesgriffinllc.com. You can also read our blog, The Ethical Investigator, at www.ethicalinvestigator.com

 

Lawyers need to find witnesses. They look for assets to see if it’s worth suing or if they can collect after they win. They want to profile opponents for weaknesses based on past litigation or business dealings.

Every legal matter turns on facts. Most cases don’t go to trial, and fewer still go to appeal, but all need good facts. Without decent facts, cases face dismissal or don’t even get past the complaint stage.

Better innovation in law firms

Do law schools teach any of these skills? Ninety-nine percent do not. Good fact-finding requires something not taught at most law schools: innovation and creativity. Of course, good judges can advance the law through creative decisions, and good lawyers are rightly praised for creative ways to interpret a regulation or to structure a deal.

But when it comes to fact gathering, the idea for most lawyers seems to be that you can assign uncreative, non-innovative people to plug data into Google, Westlaw or Lexis, and out will come the data you need.

This is incorrect, as anyone with a complex matter who has tried just Googling and Westlaw research will tell you.

The innovative, creative fact finder follows these three rules:

  1. Free Yourself from Database Dependency. If there were a secret trove of legally obtained information, you would be able to buy it, because this is America, where good products get packaged and sold if there is sufficient demand for them. There is no such trove, and Google won’t do it all: most documents in the U.S. are not online, so Google won’t find them. For any given person, there could be documents sitting in paper form in any one of the more than 3,000 counties in this country.
  • If you use a database, do you know how to verify the output? Is your John C. Wong the same John C. Wong who got sued in Los Angeles? How will you tell the difference? You need a battle plan. Can your researcher arrange to have someone go into a courthouse 2,000 miles away from your office?
  • How will you cope with conflicting results when one source says John C. Wong set up three Delaware LLC’s last year, and another says he set up two in Delaware and two in New York?
  2. Fight Confirmation Bias. Ask, “What am I not seeing?” Computers are terrible at the kind of thought that comes naturally to people. No risk management program said about Bernard Madoff, “His auditor can’t be up to the task because his office is in a strip mall in the suburbs.”
  • For your researchers, find people who can put themselves in the shoes of those they are investigating. Not everyone can say, “This report must be wrong. If I were in the high-end jewelry business, I wouldn’t run it out of a tiny ranch house in Idaho. Either this is a small business or Idaho’s not the real HQ.” If someone doesn’t notice a discrepancy as glaring as this, they are the wrong person to be doing an investigation that requires open-mindedness.
  3. Don’t Paint by Numbers. Begin an investigation on a clean sheet of paper. Don’t base your investigation on what someone’s resume says he did. Verify the whole thing.
  • Look not just at what’s on the resume, but for what was left off: jobs that didn’t go well, and people who don’t like the person.
  • Whatever your client tells you, they don’t know everything (if they did, they wouldn’t hire you). If your client thinks you will never find a subject’s assets outside of Texas, look outside of Texas anyway. You owe it to your client.

Want to know more?

  • Visit charlesgriffinllc.com and see our two blogs, The Ethical Investigator and The Divorce Asset Hunter;
  • Look at my book, The Art of Fact Investigation (available in free preview for Kindle at Amazon);
  • Watch me speak about Helping Lawyers with Fact Finding, here.