# What We Can Learn About A.I. from Mickey Mouse

As a kid, when the arm came down at railroad crossings, I would pay close attention to the freight cars as they went by. I made up a game: hold as many of the six-digit railcar I.D. numbers in memory as I could, with the goal of finding consecutive numbers among the jumble of cars that were most definitely not in numerical order. The faster the train went, the harder the game got. I tell you about my math-nerd roots because I just read a book by Cathy O’Neil, a childhood math whiz who made up a game harder than that.
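For the curious, that railcar game boils down to a simple search: collect the I.D. numbers you can hold onto, then look for pairs that differ by one. Here is a playful sketch in Python; the train and its numbers are invented for illustration.

```python
# The railcar game: scan a jumble of six-digit I.D. numbers
# (in no particular order) and report any consecutive pairs.

def consecutive_pairs(car_ids):
    """Return every (n, n + 1) pair found among the observed IDs."""
    seen = set(car_ids)
    return sorted((n, n + 1) for n in seen if n + 1 in seen)

# A made-up train of six cars, two hidden consecutive pairs:
train = [403217, 991042, 403218, 118550, 991041, 600003]
print(consecutive_pairs(train))  # [(403217, 403218), (991041, 991042)]
```

A set makes each "have I seen the neighbor?" check instant, which is exactly the part that gets hard when you play it in your head with the train speeding up.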

As a little girl in the back seat of the family sedan, Cathy studied the license plate numbers of passing cars. Her self-invented game was to factor the registration numbers she saw into their constituent primes. What a wonderfully geeky thing for a child to do! She went on to become a math major who wrote a thesis on algebraic number theory rooted in her childhood game. From there she became what the popular press these days calls a quant, turning all that theory into practice. Her experiences disturbed her enough that she wrote *Weapons of Math Destruction*, because mathematics, her great love, was fueling many of the world’s problems, including the housing crisis and the collapse of major financial institutions.
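Her game has a textbook name: prime factorization. A minimal sketch of it, using trial division (the plate numbers below are invented):

```python
# Cathy's back-seat game: break a number into its constituent primes
# by trial division, the simplest factoring method there is.

def prime_factors(n):
    """Return the prime factorization of n as a list, e.g. 84 -> [2, 2, 3, 7]."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:   # divide out d as many times as it goes
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(84))    # [2, 2, 3, 7]
print(prime_factors(1234))  # [2, 617]
```

Trial division is slow for huge numbers, but for the five- or six-digit figures on a license plate it is more than fast enough, and it is plausibly close to what a sharp kid does in her head: peel off small factors until only a prime is left.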

What she had witnessed reminded me of the scene in *The Sorcerer’s Apprentice* where Mickey Mouse, bored with his exhausting chore, automates it by putting on the magic hat. He uses his new powers to turn simple wooden brooms into robots, giving them instructions and setting them in motion. Immensely pleased with himself, he dozes off, then awakens horrified to find the automatons have caused a disaster. He cannot undo what he has done and is helpless until the real Sorcerer returns and saves the day.

Cathy describes her reaction to math gone bad: “If we had been clear-headed, we all would have taken a step back to figure out how math had been misused and how we could prevent a similar catastrophe in the future. But instead, new mathematical techniques were hotter than ever, and expanding into still more domains. They churned 24/7 through petabytes of information (Big Data), much of it scraped from social media or e-commerce websites. And increasingly they focused not on the movements of global financial markets but on human beings, on us. Mathematicians and statisticians were studying our desires, movements, and spending power. They were predicting our trustworthiness and calculating our potential as students, workers, lovers, criminals. This was the Big Data economy, and it promised spectacular gains. A computer program could speed through thousands of résumés or loan applications in a second or two and sort them into neat lists, with the most promising candidates on top. By 2010 or so, mathematics was asserting itself as never before in human affairs, and the public largely welcomed it.”

It worries her that most of us think the automated decisions machines make are objective, when they are anything but. The dark reality is that these systems are not brains at all – they are tools that only know how to follow orders, executing subjective (and therefore fallible) choices made by human beings. At first the people behind the algorithms were actual scientists, but as A.I. came to be seen as the easy answer and more off-the-shelf, plug-and-play tools appeared, robotic data handling was taken over by people who have not the least understanding of how things work and who think all they need to know is how to flip a switch.

The inner workings of these tools are what experts call black boxes, the term for complex, murky, and mysterious mechanisms invisible to all but the quants who build them. The amateurs who have taken over from the pros believe the processes are infallible because they’re automated – even when the “answers” they produce are wrong. Cathy calls these harm-causing mathematical models Weapons of Math Destruction, saying “statistical systems require regular and ongoing feedback – something to tell them when they’re off track. Statisticians use errors to train their models and make them smarter. Imagine if Amazon.com, through a faulty correlation, started recommending lawn care books to teenage girls. The clicks would plummet and the algorithm would be tweaked until it got it right. Without feedback, statistical engines can continue spinning out faulty and damaging analyses while never learning from their mistakes.”
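The feedback loop Cathy describes can be made concrete in a few lines. This is a toy illustration, not Amazon's actual recommender – the score, the learning rate, and the data are all invented – but it shows the mechanism: every shown-but-ignored recommendation nudges the model's score down, so a faulty correlation corrects itself. Remove the feedback and the bad score never moves.

```python
# Toy version of the feedback loop in the quote: a recommendation score
# that moves toward 1 on a click and toward 0 on a miss.
# All names and numbers here are hypothetical.

def update_score(score, clicked, learning_rate=0.2):
    """Nudge the score toward the observed outcome (1 = click, 0 = miss)."""
    target = 1.0 if clicked else 0.0
    return score + learning_rate * (target - score)

# A faulty correlation starts "lawn care books for teenage girls"
# at a confident 0.9. Ten recommendations, zero clicks:
score = 0.9
for _ in range(10):
    score = update_score(score, clicked=False)
print(round(score, 3))  # 0.097 -- the feedback has all but killed the bad score
```

Without that update step, the score would sit at 0.9 forever – which is exactly the "spinning out faulty and damaging analyses while never learning" failure mode the quote warns about.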

In most cases where people and companies look to A.I. as the magic solution to their problems, not only is there no one around who knows how to provide that critical feedback, there is no one who knows enough to care. Things are even worse when the data feeding the machines are of poor quality, which happens more and more as the ever-increasing drive for faster, cheaper research forces corners to be cut. Cathy’s goal is to “mobilize people against the use of sloppy statistics and biased models that created their own toxic feedback loops” because they produce opaque, unquestioned, and unaccountable results. As she says, “Welcome to the Dark Side of Big Data.”

Of the thousands of people I meet who base decisions on data, not one in a hundred ever asks where the numbers come from. Every one of them should not only ask but also be able to evaluate what they’re being told. I will teach you those things and raise your A.I. I.Q. so that you become one of the 1% who come out of the dark and into the light.