The New York Times published an interesting piece this week that was among its most popular: I Shared My Phone Number. I Learned I Shouldn’t Have.

In it, the paper’s personal tech columnist Brian X. Chen explained how much information people can get about you with just your phone number. This includes “my current home address, its square footage, the cost of the property and the taxes I pay on it,” as well as past addresses, relatives’ names and past phone numbers.

He is right, but that kind of information is out there for many of us, with or without a phone number. If you have an unusual name, all of it is easy to find. Even a pretty common name can be isolated on Ancestry.com and the information vacuumed up. It’s even easier if you are the only person with your name in your entire state, which happens a lot.

The Times says Brian X. Chen lives in San Francisco. There may be more than 100 Brian Chens in the U.S., but in California there appear to be a few Brian X. Chens, one of whom is listed in San Francisco. With or without a phone number, your property ownership is an open book if you own the property in your own name.

If you are interested in keeping your address secret, some steps you could take are:

  • Own the property in trust or in the name of a limited liability company. If your name is Joseph L. Sullivan, it would help not to name the trust the Joseph L. Sullivan 2019 Trust. You can name it just about anything you want, and the same goes for LLCs, as long as the name isn’t already in use.
  • Don’t get utility bills sent to the house. Use a PO box or a UPS Store. Be aware that if you use your cell phone to order a pizza to your house, the number will be linked to your home address in commercial databases. Some of that information may make it into web-based services such as the one Chen writes about, but some may be limited to more expensive databases available only by license.
  • Use a made-up number when you get discount cards for groceries and pharmacy purchases.
  • Don’t announce your address in a loud voice into a telephone in public, followed by a slow, clear recitation of your phone number. I overhear these recitations all the time and, not being a criminal, do not act on the information. Others will.

As Chen says, there is no easy answer to counter the trend that we are increasingly identified with our mobile phone numbers, especially if we have given up land lines. His main example of when not to give out your number is Facebook, and I couldn’t agree more.

But what if you want to reserve a table at a restaurant? They often need a phone number, which is then available to a large number of restaurant employees. Here in New York, many of us get our dry cleaning delivered home, with a cell phone number to pull up our account. If I don’t trust these people with a phone number, does it make sense to give them a credit card number?

It’s true that hackers can do a lot of damage with the public records available in the U.S., but one thing Chen didn’t suggest, which I have been doing for a long time, is this: make the answers to your password-recovery security questions something that is not easily found on the internet. If your mother’s maiden name is easy to find online (don’t forget about online obituaries), don’t make it what someone needs to reset your banking password.

Try something like the name of your first school or (unless you put it all over social media) the city where you honeymooned. If you’re not married, pick the city where you would like to have a honeymoon.

As I wrote in my book The Art of Fact Investigation, we often say that a Google search is not enough for a good investigation because only a small amount of what you know about yourself is on Google. Everywhere you’ve ever lived? The names of your best friends from third grade? All your old bosses?

Use that information to your benefit and the crooks will have a harder time hacking into your electronic life today.

Not for the first time, the most compelling piece of information in an investigation is what isn’t there.

We’ve often written before about the failure of databases and artificial intelligence to knit together output from various sources, and I discussed the idea of what isn’t there in my book, The Art of Fact Investigation. Remember, two of the biggest red flags for those suspicious of Bernard Madoff were the absence of a Big Four auditor for a fund the size he claimed to run and the absence of an independent custodian.

The most famous example of missing evidence is from Sherlock Holmes: the dog that didn’t bark when the horse was removed from its stall during the night. No barking meant the dog knew the person who took the horse. It’s discussed by an evidence professor here.

This week, another excellent example: The Center for Advanced Defense Studies (C4ADS) provided an advance copy of its report to the New York Times and Wall Street Journal, detailing the way Kim Jong Un evades international sanctions. The report, Lux & Loaded, is an example of superior investigation.

Using public records that track ship movements, commercial databases that record the movement of goods, and publicly available photos on the internet, C4ADS makes the case that Kim was able to get his hands on two armored Mercedes-Maybach S600 cars worth more than $500,000 each.

The report details the movement of the cars from a port in the Netherlands through several transshipment points, but the key finding is that the ship last seen with the cars “vanished” for 17 days last October while the cars were in transit off the coast of Korea. The report cannot say conclusively that the cars moved from Rotterdam to North Korea, but the evidence is persuasive.

Most interesting is not the evidence C4ADS could not find because it does not exist in open sources, but the evidence that was removed when a Togo-flagged ship called the DN5505 turned off its transponders. It’s as if the ship just disappeared from the high seas.

Maybe we should not expect to be able to track every cargo shipment in the world every moment it’s at sea, but a ship going dark for weeks at a time should arouse suspicion.

That’s why one of the organization’s recommendations is that insurance companies require ships to keep their transponders on in order to acquire and maintain coverage.

That won’t catch sanctions busters every time, but it’s a very sensible start.
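To make the idea concrete, here is a minimal sketch of what such a screening rule could look like, assuming you had timestamped transponder reports for each ship. The ship names, dates and the two-day threshold are all invented for illustration, not taken from the C4ADS report:

```python
from datetime import datetime, timedelta

# Illustrative transponder (AIS) position reports: (ship_id, timestamp of ping).
# In practice these would come from a commercial ship-tracking feed.
REPORTS = [
    ("DN5505", datetime(2018, 10, 1, 6, 0)),
    ("DN5505", datetime(2018, 10, 2, 9, 30)),
    ("DN5505", datetime(2018, 10, 19, 14, 0)),   # reappears after a long silence
    ("OTHER1", datetime(2018, 10, 1, 0, 0)),
    ("OTHER1", datetime(2018, 10, 1, 12, 0)),
]

SUSPICIOUS_GAP = timedelta(days=2)  # threshold is a guess; an insurer would set its own

def dark_periods(reports, threshold=SUSPICIOUS_GAP):
    """Return (ship_id, gap_start, gap_end) for every silence longer than threshold."""
    by_ship = {}
    for ship, ts in reports:
        by_ship.setdefault(ship, []).append(ts)
    flags = []
    for ship, times in by_ship.items():
        times.sort()
        for earlier, later in zip(times, times[1:]):
            if later - earlier > threshold:
                flags.append((ship, earlier, later))
    return flags

for ship, start, end in dark_periods(REPORTS):
    print(f"{ship} went dark for {(end - start).days} days: {start:%Y-%m-%d} to {end:%Y-%m-%d}")
```

An insurer would obviously tune the threshold and work from real tracking feeds, but the red flag itself is that simple: long silences deserve an explanation.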

If you haven’t seen the amusing and disturbing piece in the Wall Street Journal this week about Black Cube, the band of former Mossad (Israeli intelligence) agents, it’s worth a look.

The article explains that Black Cube’s people run around the world pretending to be people they are not, in order to investigate private, commercial or legal matters for high-paying clients. It recounts a series of foul-ups, blown cover identities, arrests and other problems with Black Cube’s operations, which make for interesting reading.

For this blog’s purposes, the most interesting part was the question of whether lawyers ought to use Black Cube. On this question, the article is not of much help:

Private investigators have long resorted to a form of deception known as “pretexting” to gather information in corporate or personal disputes. Most developed countries prohibit impersonating people to obtain private information such as phone, bank or medical records, but using assumed identities can be legal in other contexts. Such deceptions are critical to how Black Cube operates.

What struck us about the article is that while it explored what investigator conduct is legal, the issue of what is ethical was left for another day. For lawyers, that is just as important as the question of legality, but the words “ethics” or “ethical” don’t even make it into the story. That is a pity for any lawyer hoping to learn from the article whether Black Cube is a good candidate for any job.

Black Cube has been in trouble before, and we’ve written about them in connection with what they did on behalf of Harvey Weinstein, in The Weinstein Saga: Now Featuring Lying Investigators, Duplicitous Journalists, Sloppy Lawyers.

There, we argued that lawyers are bound by professional rules that require not just behaving legally, but ethically as well. Not only that, but agents of those lawyers – anybody the lawyer hires to do work for the lawyer – need to follow the rules too. Lawyers have an ethical duty to supervise their agents.

When it comes to pretexting, we take the position that no agent of a lawyer in the U.S. should be lying about his identity. Courts have carved out narrow exceptions for criminal defense and intellectual property work, but for background checks, commercial disputes and anything else a lawyer needs help with, pretexting should be out.

What happens if you get caught? You risk professional discipline, but there’s more: In the past two years, two federal judges in the Southern District of New York have thrown out evidence gathered unethically by investigators.

We often give thanks for robust government intelligence services that do the dirty work on behalf of their countries. U.S. spies in World War II gathered invaluable information about the Nazis and Japan, and later on the Soviet Union. Anyone who likes the idea of a strong Israel should understand that the Mossad is vital to that country’s survival.

A former CIA operative once said to me, “If I wasn’t breaking the law of the countries I was posted to while I was working for the CIA, I wasn’t doing my job.”

Breaking laws is not what lawyers in the U.S. (and their agents) are supposed to do. If you want to be a spy, take the entrance exam at Langley.

Get ready for the college admissions scandal, phase II, and maybe phases III, IV and V.

The reason I think so? Because of the way it was discovered.

Prosecutors didn’t break up the ring that bribed college coaches and exam proctors by using vast computing power, databases and algorithms, but by interviewing somebody. According to multiple reports, a suspect in a securities fraud case had heard about the admissions scamming going on and used it to bargain with federal authorities for more lenient treatment.

One interview led to one new suspect, a college coach, who gave the FBI other suspects, which won judges’ approval to tap phones, and that brings us up to today.

There will be more people accused, because all of those arrested coaches will have lawyers, and all of those arrested will be looking to reduce the chance that they will have to spend years in prison.

Do we really believe that Rick Singer’s ring of bribery of college coaches and exam proctors is the only one of its kind in America? If not, who else would know about the other Rick Singers still in business? Perhaps the coaches who have lost their jobs and are facing charges of accepting bribes.

We wrote in Real Due Diligence Can Never be Mass Produced that proper due diligence about a person, especially someone who will receive security clearance to handle sensitive information, requires interviews. “Think about how much information you can find about yourself on the internet: everyone you’ve ever worked with? Dated? Had an argument with? Those things are gathered not by looking at databases, the web and social media exclusively, but by interviewing.”

In Talk Isn’t Cheap Even When Offline we said, “to find out about people, you nearly always have to talk to others about them. Of course, a lot of what you may hear could turn out to be gossip. But being gossip doesn’t always mean something isn’t true. It can also mean that it’s factual information someone doesn’t want you to know about.”

Interview the coaches, find out who the other Rick Singers are, and get to the parents offering bribes through them. Charge, arrest, repeat.

Have you ever noticed that artificial intelligence seems much more frightening when people write about what it will become, but like imperfect, bumbling software when they write about it in the present tense?

You get one of each in this morning’s Wall Street Journal. The paper paints a horrific picture of what the ruthless secret police of the world’s dictatorships will be able to do with AI in The Autocrat’s New Tool Kit, including facial recognition to track behavior more efficiently and to target specific groups with propaganda.

But then see the Journal’s report on how social-media companies have struggled to block violent content about this week’s terrorist attack on two mosques in New Zealand. With all of their computing power and some of the world’s smartest programmers and mathematicians, Facebook and YouTube allowed the killings to be streamed live on the internet. It took an old-fashioned phone call from the New Zealand police to tell them to take the live evildoing down. Just as the New York Times or CNBC would never put such a thing on their websites, neither should Facebook or YouTube.

Wouldn’t you think that technology that could precisely target where to send the most effective propaganda could distinguish between an extremely violent film and extremely violent reality? I would. After all, it’s nothing for these sites to have indexed all the movie clips already uploaded onto their systems. If facial recognition works on a billion Chinese people, why not on the thousands of known film actors floating up there on the YouTube cloud? If it’s not a film you already know and there are lots of gunshots, the video should be flagged for review.
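To show how simple the rule I’m proposing is, here is a toy sketch in Python. The two signals (a fingerprint match against already-indexed films and a violence score from an audio/visual classifier) and the 0.7 threshold are my own illustrative assumptions, not anything Facebook or YouTube actually runs:

```python
from collections import namedtuple

# Hypothetical pre-computed signals; real platforms compute far richer features.
Video = namedtuple("Video", ["matches_known_film", "violence_score"])

def should_flag_for_human_review(video):
    """Route footage to a human reviewer before it is cleared to air widely."""
    if video.matches_known_film:
        # Fingerprint matched an already-indexed movie clip: known fiction.
        return False
    if video.violence_score >= 0.7:
        # Unrecognized footage with heavy gunfire or violence: a person decides.
        return True
    return False

print(should_flag_for_human_review(Video(matches_known_film=True, violence_score=0.95)))   # False
print(should_flag_for_human_review(Video(matches_known_film=False, violence_score=0.95)))  # True
```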

Why is this so hard? For one thing, the computing power the companies need doesn’t exist yet. “The sheer volume of material posted by [YouTube’s] billions of users, along with the difficulty in evaluating which videos cross the line, has created a minefield for the companies,” the Journal said.

It’s that minefield that disturbs me. A minefield dotted with difficulties about whether or not to show mass murder in real time? What would be the harm of having a person look at any video that features mass killing before it’s cleared to air? If the computers can’t figure out what to do with such material, let a person look at it.

What is so frightening about AI is not the computing power and the uses the world can find for it, but the abdication of self-control and ethical judgment by the people using the AI.

I want the police in my country to have guns and to use them on criminals who are about to kill innocent people. I don’t want police states shooting peaceful demonstrators. I’m happy to have police in the U.S. use facial recognition if it will help stop a person from blowing up the stadium where the Super Bowl is being held. I would not want cameras on every intersection automatically tracking my every movement.

Guns are neither smart nor stupid. They are artificial power that increases the harm an unarmed person can inflict. Guns are essential in maintaining freedom but can suppress freedom too.

Same for AI. There are lots of wonderful applications for it. Every bit of software in use today was called AI before people started using it; once it was in use, we just called it software.

What separates the good AI from the bad is the way people use it. Streaming on YouTube can be a wonderful thing. But just as we need political accountability to make sure the guns our armies and police have aren’t abused, we need the people at YouTube to control their technology in a responsible way.

The AI at Facebook and YouTube isn’t dumb: Dumb are the people who trusted too readily that the tool could decide for itself what the right call would be when the horrors from Christchurch began to be uploaded.

The non-legal press doesn’t usually get very deep into questions of legal ethics, but New York Magazine did a reasonable job of it in its hard-hitting piece this week on “The Bad, Good Lawyer” David Boies.

The article asks whether Boies has crossed an ethical line, principally in his work on behalf of Harvey Weinstein (This blog argued before that he did, in The Weinstein Saga: Now Featuring Lying Investigators, Duplicitous Journalists, Sloppy Lawyers.)

While admirably tough on Boies, it’s a shame the piece conflates unethical, illegal or even bad behavior with the decision by Boies to represent Russian oligarch Oleg Deripaska, Republican fundraiser Elliott Broidy or former Malaysian Prime Minister Najib Razak, who is accused of money laundering. There is no indication Boies enables these people or is somehow complicit in what they did to get themselves into trouble.

Similarly unnecessary in a serious look at a lawyer’s ethics are throwaway lines such as Boies’ “cozy personal relationship” with Bill Clinton. If that’s a negative, you could say the same about dozens of lawyers and hundreds of famous people.

But the information in the story about the involvement of Boies’ daughter in movies produced by Weinstein’s company while Boies was advising Weinstein was interesting, as were Boies’ attacks on one outspoken Weinstein Company director, Lance Maerov, who turned out to be asking good questions about Weinstein’s personal conduct. Unlike some of the Weinstein Company directors, Maerov was doing his job.

Just as disturbing as the way Boies and his firm failed to supervise a fraudulent investigation into Weinstein accusers and others by Israeli company Black Cube is the defense of the practice by lawyers interviewed by the magazine. As we have written before, the agents of a U.S. lawyer shouldn’t go around pretending to be people they are not. A U.S. lawyer has the duty to supervise any agents the lawyer hires. Period.

Yet New York reports that “Some corporate litigators shrug off the Black Cube revelations, saying the only thing that was surprising was that all the embarrassing details escaped the usual vault of attorney-client confidentiality. ‘That happens, it doesn’t shock me,’ [prominent entertainment lawyer] Bert Fields says of the firm’s impersonation practices.”

Even worse was the quote from “another attorney who has dealt with Boies in the past,” who brushed the fraud off this way: “The technique is a tool … Lizzie Borden misused the ax.” Many lawyers said similar things to the article’s author: “This is just what lawyers do.”

If this is just what lawyers do, those lawyers ought to be disciplined for it.

Lawyers who know anything about professional responsibility know it is wrong to send investigators out to commit fraud. A quick instruction to “follow the rules” is not enough to qualify as adequate supervision.

Getting away with it is hardly justification. Imagine if someone defended unauthorized dipping into client escrow accounts. As long as the money gets paid back and the client is no wiser, who is harmed?

No lawyer would dare make that argument, but in the case of using fraudulent techniques, it’s all supposed to be OK if you don’t get caught.

For anyone who wants to hire a lawyer and be sure that lawyer won’t cross ethical lines, here is a good test question: Is it OK to hire investigators to set up fake identities to lure people into interviews?

For more good ways to screen for lawyers and investigators who know and abide by the rules, see my American Bar Association article, Five Questions Litigators Should Ask Before Hiring an Investigator (and Five Tips to Investigate it Yourself).

Artificial intelligence doesn’t equal artificial perfection. I have argued for a while now both on this blog and in a forthcoming law review article here that lawyers (and the investigators who work for them) have little to fear and much to gain as artificial intelligence gets smarter.

Computers may be able to do a lot more than they used to, but there is so much more information for them to sort through that humans will long be required to pick through the results just as they are now. Right now, we have no quick way to word-search the billions of hours of YouTube videos and podcasts, but that time is coming soon.

The key point is that some AI programs will work better than others, but even the best ones will make mistakes or will only get us so far.

So argues British mathematician Hannah Fry in a new book previewed in her recent essay in The Wall Street Journal, here. Fry argues that instead of placing blind faith in algorithms and artificial intelligence, we should recognize that the best applications are the ones we admit work somewhat well but not perfectly, and that require collaboration with human beings.

That’s collaboration, not simply implementation. Who has not been infuriated at the hands of some company, only to complain and be told, “that’s what the computer’s telling me”?

The fault may be less with the computer program than with the dumb company that doesn’t empower its people to work with and override computers that make mistakes at the expense of their customers.

Fry writes that some algorithms do great things – diagnose cancer, catch serial killers and avoid plane crashes. But beware the modern snake-oil salesman:

Despite a lack of scientific evidence to support such claims, companies are selling algorithms to police forces and governments that can supposedly ‘predict’ whether someone is a terrorist, or a pedophile based on his or her facial characteristics alone. Others insist their algorithms can suggest a change to a single line of a screenplay that will make the movie more profitable at the box office. Matchmaking services insist their algorithm will locate your one true love.

As importantly for lawyers worried about losing their jobs, think about the successful AI applications above. Are we worried that oncologists, homicide detectives and air traffic controllers are endangered occupations? Until there is a cure for cancer, we are not.

We just think these people will be able to do their jobs better with the help of AI.

An entire day at a conference on artificial intelligence and the law last week in Chicago produced this insight about how lawyers are dealing with the fast-changing world of artificial intelligence:

Many lawyers are like someone who knows he needs to buy a car but knows nothing about cars. He knows he needs to get from A to B each day and wants to get there faster. So, he is deposited at the largest auto show in the world and told, “Decide which car you should buy.”

Whether it’s at smaller conferences or at the gigantic, auto-show-like legal tech jamborees in Las Vegas or New York, the discussion of AI seems to be dominated by the companies that produce the stuff. Much less on show are people who use legal AI in their everyday lives.

At my conference, IBM dominated the keynote address and two more panels. Other familiar names in AI from the worlds of smart contracting and legal research were there, along with one of the major “old tech” legal research giants. All of the products and services sounded great, which means the salespeople were doing their jobs.

But the number of people who presented about actually using AI after buying it? Just a few (including me). “We wanted to get more users,” said one of the conference organizers, who explained that lawyers are reluctant to describe the ways they use AI, lest they give up valuable pointers to their competitors.

Most of the questions and discussion from lawyers centered around two main themes:

  1. How can we decide which product to buy when there are so many, and they change so quickly?
  2. How can we organize our firm’s business model in such a way that it will be profitable to use expensive new software (“software” being what AI gets called after you start using it)?

Law firm business models are not my specialty, but I have written before, and spoke last week, about evaluating new programs.

Only you (and not the vendor) can decide how useful a program is, by testing it. Don’t let the vendors feed you canned examples of how great their program is. Don’t put in a search term or two while standing at a trade show kiosk. Instead, plug in a current problem or three while sitting in your office and see how well the program does compared to the searches you ran last week.

You mean you didn’t run the searches, but you’re deciding whether to buy this expensive package? You should at least ask the people who will do the work what they think of the offering.

I always like to put in my own company or my own name and see how accurate a fact-finding program is. Some of them (which are still useful some of the time) think I live in the house I sold eight years ago. If you’re going to buy, you should know what a program can do and what it can’t.
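One way to make that test concrete is to keep a short list of facts you already know to be true about yourself or a recent matter, run the same questions through each trial program, and score the answers. A rough sketch, with the facts and field names entirely made up for illustration:

```python
# Ground truth you already know about yourself; the entries are illustrative.
KNOWN_FACTS = {
    "current_address": "123 Current St, New York, NY",
    "previous_address": "45 Old Rd, Brooklyn, NY",   # sold eight years ago
    "employer": "Example Investigations LLC",
}

def score_vendor(program_output, known_facts=KNOWN_FACTS):
    """Compare a fact-finding program's answers against what you know is true.

    `program_output` is a dict keyed the same way as the tool's results.
    Returns (hits, misses) so different vendors can be compared on the same questions.
    """
    hits, misses = [], []
    for field, truth in known_facts.items():
        found = program_output.get(field)
        (hits if found == truth else misses).append((field, truth, found))
    return hits, misses

# Example: a tool that still thinks you live in the house you sold years ago.
output = {"current_address": "45 Old Rd, Brooklyn, NY", "employer": "Example Investigations LLC"}
hits, misses = score_vendor(output)
print(f"{len(hits)} right, {len(misses)} wrong or missing")
for field, truth, found in misses:
    print(f"  {field}: expected {truth!r}, got {found!r}")
```

The point is not the scoring itself but that you, not the vendor, choose the questions and know the right answers in advance.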

As with other salespeople in other industries, AI sales staff won’t tell you what their programs are bad at doing. And most importantly, they won’t tell you how well or how badly (usually badly) their program integrates with other AI software you may be using.

No matter how good any software is, you will need good, inquisitive and flexible people running it and helping to coordinate outputs of different products you are using.

While sales staff may have subject-matter expertise in law (it helps if they are lawyers themselves), they cannot possibly specialize in all facets of the law. Their job is to sell, and they should not be criticized for it.

They have their job to do, and as a responsible buyer, you have yours.

For more on what an AI testing program could look like and what kinds of traits the best users of AI should have, see my forthcoming law review article here:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3085263


Do you ever wonder why some gifted small children play Mozart, but you never see any child prodigy lawyers who can draft a complicated will?

The reason is that the rules of how to play the piano have far fewer permutations and judgment calls than deciding what should go into a will. “Do this, not that” works well with a limited number of keys in each octave. But the permutations of a will are infinite. And by the way, child prodigies can play the notes, but usually not as soulfully as an older pianist who has experienced the full range of adult emotions over a lifetime.

You get to be good at something by doing a lot of it. You can play the Mozart over and over, but how do you know what other human beings may need in a will, covering events that have yet to happen?

Not by drafting the same kind of will over and over, that’s for sure.

Reviewing a lot of translations done by people is the way Google Translate can manage rudimentary translations in a split second. Reviewing a thousand decisions made in document discovery and learning from mistakes picked out by a person is the way e-discovery software looks smarter the longer you use it.
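For readers who like to see the mechanics, here is a toy sketch of that feedback loop, assuming the scikit-learn library; the documents, labels and simulated reviewer are invented for illustration and stand in for a real technology-assisted review workflow, in which an attorney supplies every correction:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny invented document pool: 1 = responsive, 0 = not responsive.
docs = [
    "merger pricing discussion with opposing party",
    "lunch order for the office party",
    "draft term sheet for the acquisition",
    "fantasy football league standings",
    "due diligence checklist for the deal",
    "weekend plans and family photos",
]
labels = [1, 0, 1, 0, None, None]   # None = not yet reviewed by a person

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)

for round_ in range(2):
    labeled = [i for i, y in enumerate(labels) if y is not None]
    unlabeled = [i for i, y in enumerate(labels) if y is None]
    if not unlabeled:
        break
    model = LogisticRegression().fit(X[labeled], [labels[i] for i in labeled])
    # Ask the human reviewer about the document the model is least sure of.
    probs = model.predict_proba(X[unlabeled])[:, 1]
    uncertain = unlabeled[min(range(len(unlabeled)), key=lambda k: abs(probs[k] - 0.5))]
    print(f"Round {round_ + 1}: please review -> {docs[uncertain]!r}")
    # Stand-in for the attorney's judgment; in real life a person supplies this label.
    labels[uncertain] = 1 if "deal" in docs[uncertain] or "diligence" in docs[uncertain] else 0
```

The model only ranks and suggests; the corrected labels come from a person, and a person still signs off on what gets produced.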

But you would never translate a complex, nuanced document with Google Translate, and you sure wouldn’t produce documents without having a partner look them over.

The craziness that can result from the mindless following of rules is an issue at the forefront of law today, as we debate how much we should rely on artificial intelligence.

Who should bear the cost if AI makes a decision that damages a client? The designers of the software? The lawyers who use it? Or will malpractice insurance evolve enough to spread the risk around so that clients pay in advance in the form of a slightly higher price to offset the premium paid by the lawyer?

Whatever we decide, my view is that human oversight of computer activity is something society will need far into the future. The Mozart line above was given to me by my property professor in law school and appeared in the preface of my book, The Art of Fact Investigation.

The Mozart line is appropriate when thinking about computers, too. And in visual art, I increasingly see parallels between the way artists and lawyers struggle to get at what is true and what is an outcome we find desirable. Take the recent exhibition at the Metropolitan Museum here in New York, called Delirious: Art at the Limits of Reason, 1950 to 1980.

It showed that our struggle with machines is hardly new, even though it would seem so with the daily flood of scary stories about AI and “The Singularity.” The show was filled with the worries of artists 50 and 60 years ago about what machines would do to the way we see the world, find facts and remain rational. It seems funny to say that: computers seem to be ultra-rational in their production of purely logical “thinking.”

But what seems to be a sensible or logical premise doesn’t mean that you’ll end up with logical conclusions. On a very early AI level, consider the databases we use today that were the wonders of the world 20 years ago. LexisNexis and Westlaw are hugely powerful tools, but what if you don’t supervise them? If I put my name into Westlaw, it thinks I still live in the home I sold in 2011. All other reasoning Westlaw produces based on that “fact” will be wrong. Noise complaints brought against the residents there have nothing to do with me. A newspaper story about disorderly conduct resulting in many police visits to the home two years ago is also irrelevant when talking about me.[1]

The idea of suppositions running amok came home when I looked at a sculpture last month by Sol LeWitt (1928-2007) called 13/3. At first glance, this sculpture would seem to have little relationship to delirium. It sounds from the outset like a simple idea: a 13×13 grid from which three towers arise. What you get when it’s logically put into action is a disorienting building that few would want to occupy.

As the curators commented, LeWitt “did not consider his otherwise systematic work rational. Indeed, he aimed to ‘break out of the whole idea of rationality.’ ‘In a logical sequence,’ LeWitt wrote, in which a predetermined algorithm, not the artist, dictates the work of art, ‘you don’t think about it. It is a way of not thinking. It is irrational.’”

Another wonderful work in the show, Howardena Pindell’s Untitled #2, makes fun of the faith we sometimes have in what superficially looks to be the product of machine-driven logic. A vast array of numbered dots sits uneasily atop a grid, and at first, the dots appear to be the product of an algorithm. In the end, they “amount to nothing but diagrammatic babble.”

Setting a formula in motion is not deep thinking. The thinking comes in deciding whether the vast amount of information we’re processing results in something we like, want or need. Lawyers would do well to remember that.

[1] Imaginary stuff: while Westlaw does say I live there, the problems at the home are made up for illustrative purposes.