We’ve had a great response to an Above the Law op-ed here that outlined the kinds of skills lawyers will need as artificial intelligence increases its foothold in law firms.

The piece makes clear that without the right kinds of skills, many of the benefits of AI will be lost on law firms because you still need an engaged human brain to ask the computer the right questions and to analyze the results.

But too much passivity in the use of AI is not only inefficient; it also carries the risk of ethical violations. Once you deploy anything in aid of a client, New York legal ethics guru Roy Simon says, you need to ask:

“Has your firm designated a person (whether lawyer or nonlawyer) to vet, test or evaluate the AI products (and technology products generally) before using them to serve clients?”

We’ve written before about ABA Model Rule 5.3 that requires lawyers to supervise the investigators they hire (and “supervise” means more than saying “don’t break any rules” and then waiting for the results to roll in). See The Weinstein Saga: Now Featuring Lying Investigators, Duplicitous Journalists, Sloppy Lawyers.

But Rule 5.3 also pertains to supervising your IT department. It’s not enough to let a salesperson convince you to buy new software (and AI, once we start using it, is just another piece of software). The lawyer or the firm paying for it should do more than rely on the vendor’s claims.

Simon told a recent conference that you don’t have to understand the code or algorithms behind the product (just as you don’t have to know every feature of Word or Excel), but you do need to know what the limits of the product are and what can go wrong (especially how to protect confidential information).

Beyond leaking information it shouldn’t, what else should you learn about how a program works that could affect the quality of the work you do with it?

  • AI can be biased: Software reflects the assumptions of the people who program it. You can never fully predict what a program’s biases will do to its output until you use it. This goes far beyond the old saying “garbage in, garbage out,” but it is a related concept: a computer makes thousands of decisions based on definitions a person supplies, either before the product comes out of the box or during the machine-learning process, when people refine results with new, corrective inputs.
  • Competing AI programs can do some things better than others. Which programs are best for Task X and which for Task Y? No salesperson will give you the complete answer. You learn by trying.
  • Control-group testing can be very valuable. Ask someone at your firm to run a search for which you already know the correct results, and see how easily they arrive at them. If their results are wrong, you may have a problem with the person, with the program, or both.

The person who should not be leading this portion of the training is the software vendor’s sales representative. Someone competent at the law firm needs to do it, and if that person is not a lawyer, then a lawyer needs to stay on top of what’s happening.

[For more on our thoughts on AI, see the draft of my paper for the Savannah Law Review, Legal Jobs in the Age of Artificial Intelligence: Moving from Today’s Limited Universe of Data Toward the Great Beyond, available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3085263].


Decent investigators and journalists everywhere ought to have been outraged at a report over the weekend in the Wall Street Journal that appears to have caught a corporate investigator masquerading as a Journal reporter.

According to the story, the person trying to get information about investment strategy and caught on tape pretending to be someone he wasn’t was “Jean-Charles Brisard, a well-known corporate security and intelligence consultant who lives in Switzerland and France.”

Fake news we know about, but fake reporters? It’s more common than it should be. Free societies need a free press, and for a free press to work people have to be able to trust that a reporter is who he says he is.

Good investigators working for U.S. lawyers should not pretend to be someone they are not – whether the fake identity is a journalist or some other occupation. Whether or not it breaks a state or federal impersonation statute, it’s probably unethical under the rules of professional responsibility.

Consider Harvey Weinstein’s army of lawyers and their investigators. The evidence was presented in Ronan Farrow’s second New Yorker piece on Weinstein that hit the web last night. The story says that Weinstein, through lawyer David Boies, hired former Mossad agents from a company called Black Cube.

“Two private investigators from Black Cube, using false identities, met with the actress Rose McGowan, who eventually publicly accused Weinstein of rape, to extract information from her,” the story says.

It goes on to explain that one of the investigators used a real company as cover but that the company had been specially set up as an empty shell for this investigation. The name of the company was real, but its purpose was not (it was not an investment bank). Worse, the investigator used a fake name. Courts have said this can be OK for the agents of lawyers if done in conjunction with an intellectual property, civil rights or criminal-defense matter. This was none of these.

The journalism aspect of the Weinstein/Black Cube investigation is (if accurate) just as revolting, involving a freelance journalist who was passing what people told him not to a news outlet but to Black Cube. This produces the same result as the Brisard case above. Why talk to a journalist if he (a) may not be a journalist, or (b) will pass your material directly to the person he’s asking you about? The freelancer in question is unidentified and told Farrow he took no money from Black Cube or Weinstein. Volunteerism at its most inspiring.

And where were the lawyers in all of this unseemliness? Boies signed the contract with Black Cube, but said he neither selected the firm nor supervised it. “We should not have been contracting with and paying investigators that we did not select and direct,” Boies told Farrow. “At the time, it seemed a reasonable accommodation for a client, but it was not thought through, and that was my mistake. It was a mistake at the time.”

Alert to lawyers everywhere: it was a mistake “at the time,” and it would be a mistake anytime. Lawyers are duty-bound to supervise all of their agents, lawyer and non-lawyer alike. When I give my standard Ethics for Investigators talk, ABA Model Rule 5.3(c)(1) comes right at the top, as in this excerpt from my recent CLE for the State Bar of Arizona:

A lawyer is responsible for a non-lawyer’s conduct that violates the rules if the lawyer “orders or, with the knowledge of the specific conduct, ratifies the conduct involved.”

In some cases, courts can find “ratification” in what amounts to benign neglect. An initial warning of “Just don’t break any rules” won’t suffice. The nightmare scenario is the famed Winnie the Pooh case in California, Stephen Slesinger, Inc. v. The Walt Disney Company, 155 Cal.App.4th 736 (2007).

Slesinger’s lawyers hired investigators and told them to be good. Then the investigators broke into Disney’s offices and stole documents, some of them privileged. The court not only suppressed the evidence but dismissed the entire case, reasoning in part that Slesinger’s lawyers, after that initial instruction, did no supervising at all.

Black Cube may not have committed any crimes, but appears from the facts in the story to have gone over the ethical line in pretending to be people they were not. Boies (or any other lawyer in a similar position) should have tried to make sure they would do no such thing. What Black Cube did was everyday fare for Mossad, the CIA and MI6, but not for the agents of U.S. lawyers.