Neither would millions of people until Lee Lantz came along. A seafood merchant, he found a mild, flaky fish that tasted great. Its name, Patagonian Toothfish, was as unappetizing as its looks. He couldn’t change its appearance, so he made up a new name. He called it Chilean Sea Bass, and it became a sought-after, high-priced catch. Same thing, new label, just like Artificial Intelligence. AI is to computer programming as sanitation engineers are to janitors, marketing is to sales, and HR’s admin assistants are to secretaries in the Personnel Department. When AI was called computer programming, only scientists used it.
History credits Charles Babbage’s “difference engine” as the first mechanical computer.
It was the mathematical equivalent of an 11-foot-long, five-ton manual typewriter. The first fully electronic computer was ENIAC. It was 50 feet long, weighed 30 tons, and had 17,468 vacuum tubes. Digital Trends tells us it was programmed through a physical system of adjusting switches and cables by hand. Debugging a program meant climbing inside the ENIAC in search of faulty connections.
The Founding Mothers of computer programming.
The first programmers were called Computers, meaning those who determine by calculation. ENIAC’s six were all women, specially chosen because they were gifted mathematicians and brilliant problem-solvers. There were no programming manuals, languages, or tools, so these six had to figure it all out on their own by studying engineering diagrams. Everything had to be designed and implemented with extreme precision. Setting up a single calculation could take days and a single program could take weeks. The three-year project was completed in 1946. When ENIAC was unveiled to the press and the public, the world’s first computer programmers were never even introduced. Righting old wrongs, the ENIAC Programmers Project produced a 20-minute documentary film called The Computers.
And then along came UNIVAC.
The first commercial computer correctly predicted a 1952 presidential election landslide for Eisenhower after sampling just one percent of the voting population. Computer power has never stopped growing, and the statistical programs running on these machines churn through ever-larger mountains of raw data.
For decades, computers and software were the domain of scientists. Now that statistical programs are automated, just about everyone believes they can analyze data as well as scientists do. The problem is that non-experts don’t realize automated analysis mostly just zooms around looking for relationships between variables. These automated processes cannot determine whether the links they find make any sense at all, because they do not understand conditions, circumstances, or extraneous factors, the things we call context.
When you have huge data sets, it is inescapable that you will find thousands of correlations that are statistically significant but absolutely meaningless. Errors like this are inevitable even in analyses that crow about their 95% confidence levels. In their New York Times article, Eight (No, Nine!) Problems With Big Data, Gary Marcus and Ernest Davis say what is important for people to understand is that AI is very good at detecting connections between things but cannot tell us which are meaningful and which are only fluky happenstances.

A favorite example is Christie Aschwanden’s article You Can’t Trust What You Read About Nutrition, in which she examined the correlations found among 1,000 variables. With too many combinations for us to grasp (1,000 x 999 x 998…), false positives inevitably popped up like thousands of weeds. Among the “findings,” people who trim fat from their steaks are more likely to be atheists, cabbage eaters have innie bellybuttons, and egg rolls lead to dog ownership. It’s that same “science” that produces the claims we regularly hear about, including Blueberries Prevent Memory Loss, Lose 20 Pounds Eating Grapefruit, and Pistachios Cure Erectile Dysfunction. It is no exaggeration to say that every nutrient you can think of has been linked to some health outcome. This explains why we get so many back-and-forth headlines about things like how nuts, coffee, and chocolate are good for us, then bad for us, then good again.
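The multiple-comparisons trap behind those weed-like false positives is easy to demonstrate. The sketch below is illustrative, not Aschwanden’s actual analysis: it uses 50 made-up variables instead of 1,000 to keep it fast, correlates columns of pure random noise, and counts how many pairs still clear the standard p < 0.05 bar anyway.

```python
import math
import random

random.seed(0)

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sx * sy)

# 50 columns of pure random noise, 30 "subjects" each --
# no real relationship exists anywhere in this data.
n_vars, n_obs = 50, 30
data = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]

# |r| > 0.361 counts as "statistically significant" at p < 0.05
# (two-tailed) when n = 30.
CRITICAL_R = 0.361

pairs = n_vars * (n_vars - 1) // 2          # 1,225 pairwise comparisons
false_positives = sum(
    1
    for i in range(n_vars)
    for j in range(i + 1, n_vars)
    if abs(pearson_r(data[i], data[j])) > CRITICAL_R
)

print(pairs)            # 1225
print(false_positives)  # dozens of "significant" correlations from noise alone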
What actually is this thing we call AI?
- Writing in the MIT Technology Review, Stanford computer scientist Jerry Kaplan calls AI “a fable cobbled together from a grab bag of disparate tools and techniques.”
- Geert Verstraeten, writing on LinkedIn, says “In general, there is quite some confusion about what AI really means and covers.”
- Machine learning gets lumped in, too. In Forbes, Tikhon Jelvis says the ideal system for solving a lot of hard problems has to be a hybrid: some machine learning based on data, some explicit modeling, and some interactive ways to take advantage of experts.
Generally speaking, people think AI, machine learning, and algorithms are mostly the same thing: computer programs designed to calculate complex statistics on huge data sets. The term AI has also become synonymous with Big Data (BD). A better way to look at it is this: Big Data is the large batches of stuff we collect, and AI is one way to look at them. Think of BD as the fuel and AI as the tool. Too many people see machine learning as a magic potion. This has real consequences, because AI is just a computer program. It is not intelligent; the intelligent ones are the scientists who write the programs. Many companies benefit from consumer confusion by re-labeling old tools as “AI.”
Most who use AI solutions ignore the fact that it really isn’t intelligence at all.
As kids we were taught to wish for things we wanted. Some would write to Santa, others would pray, and most would wish before they blew out the candles on their birthday cakes. When most people look at data and information, they are wishing for a particular outcome, so they see what they want and ignore the rest. Think about the people who claim to see faces on the side of an abandoned refrigerator or the back of a turtle. These faces are typically religious figures. Popular culture merchants saw an opportunity and jumped in with the Grilled Cheesus Sandwich Maker and the Holy Toast Bread Stamper.
Instagrammable images aside, people see what they’re looking for.
When you bought that new car, you all of a sudden noticed ones like it everywhere, didn’t you? You noticed because you started actively looking for cars like yours. They were there all along but you didn’t see them in the rivers of cars going by you every day because you weren’t looking for them. If you want to trade your cow for a handful of magic AI beans, that’s your business.
BD and AI are like mules.
They can do only what someone tells them to do and no more. They lack cognitive abilities that human brains take for granted. They are hardware made by people, running software made by other people. The result is artificial idiot savants that can excel at tasks with well-defined boundaries, but get things very wrong when conditions change. Or have you forgotten how stores with AI-managed inventories ran out of toilet paper?
AI needs humans to tell it what to do.
For many years those humans were actual scientists, but the constant pressure for faster and cheaper created a market for off-the-shelf, plug-and-play tools. The business world’s infatuation with ever-faster and ever-cheaper solutions reminds me of the first marketing joke I ever heard. An executive, when told it would take one woman nine months to produce a baby, said to put nine women on it and finish it in a month.
Open the pod bay doors, HAL.
The amateurs who have taken over from the pros believe the processes are infallible because they’re automated – even when the answer is “I’m sorry, Dave.” With the popularity of AI, amateurs can now set complex and mysterious mechanisms in motion by pushing a button. People without a glimmer of understanding of the deep math and science governing algorithms and data are put in charge. It is like choosing as captain of your ship someone who knows zero about radar, sonar, navigation, or currents. The doctors are no longer running the asylum.
Cathy O’Neil, author of Weapons of Math Destruction, isn’t happy about people’s willingness to blindly trust black boxes and murky algorithms. She says we should ask the tough questions, uncover the truth, and demand change.
Nearly 200 years ago, Charles Babbage introduced his mechanical calculator.
At the unveiling, attending dignitaries asked, “If you put wrong figures into your machine, will the right answers come out?” The answer was no, and by 1963 putting the wrong figures into machines was happening so often that the term GIGO came into use. World Wide Words says the first known use of the acronym appeared in reference to what happened when the Internal Revenue Service switched over to computerized records.
Garbage In, Garbage Out is the notion that when faulty data are fed into a computer, the information produced from those data is also faulty. When the data lack integrity, so does the output. One bad apple spoils the whole bunch.
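The principle fits in a few lines. This tiny sketch uses made-up monthly sales figures in which a single entry was mis-keyed (the garbage in), and shows how one bad apple poisons the summary statistic (the garbage out).

```python
# Hypothetical monthly sales figures; in the second list one entry
# was mis-keyed on input: 4,200 became 42,000. Garbage in.
clean = [3900, 4100, 4200, 4000, 3800]
dirty = [3900, 4100, 42000, 4000, 3800]

print(sum(clean) / len(clean))  # 4000.0  -- a sensible average
print(sum(dirty) / len(dirty))  # 11560.0 -- garbage out
```

The computer did its arithmetic perfectly both times; only the integrity of the input changed.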
What AI claims as “signal” is often statistical “noise.”
Noise is the annoying static and interference you hear when your cellphone connection is bad. It interferes with your ability to hear and understand what’s being said. The more interference, the worse the quality of the signal. In the case of BD and AI, the more noise, the worse the quality of the information.
Results are limited by the data and by how we analyze them.
Even with good data, poor analysis ruins things. Keeping with our barnyard theme, you can’t make an analytical silk purse out of a data sow’s ear. Like those mules, AI uses only what it is fed. It would seem essential to start with good information instead of bad – and have a scientist at the wheel.
Like most shiny new things, AI is overrated.
Many who have climbed aboard the AI bandwagon did so because it’s new and popular. Recent studies find artificial intelligence to be the most over-hyped term in marketing today. Duke University’s Center on Science & Society says that with clever marketing, AI became a fashionable trend because it was an easy way to get answers. Some executives have gone so far as to turn over determining corporate strategy to machines. FOMO is at play, too, just like the hysterical reactions to the end-of-the-world-as-we-know-it forecasts of Y2K. For those who don’t remember, a mere 2-digit coding error was projected to bring the world’s computers crashing down. The reality underwhelmed us when, on January 1, 2000, nothing much happened. There was much talk about solution sellers using scare tactics to make money.
Nonsense, you say, what about self-driving cars? Isn’t that AI?
Yes, it is. In 1925 the New York Times said the world’s first radio-controlled car was guided “as if a phantom hand were at the wheel.” Since then, many have worked on this grand idea.
- In 2002, DARPA, the U.S. Defense Advanced Research Projects Agency, offered $1 million to the team whose car was able to navigate a 140-mile course. The farthest team took 2 hours to go 8 miles before catching fire.
- In 2004, MIT and Harvard economists said a computer would never be able to drive a car because of the enormous complexity of information involved.
- In 2014, Nissan promised to deliver a car with “autonomous drive technology” by 2020.
- In 2015, Elon Musk predicted his Teslas would be capable of “complete autonomy” by 2017.
What will our future vehicles look like? Take a look at what was predicted in the 1940s, 50s, and 60s at Retro Future Transportation.
Robocars are still will-o’-the-wisps.
In an article I wrote three years ago, I said nearly everyone is confident driverless vehicles are right around the corner. One big reason they haven’t got to that corner, much less turned it, is the staggering amount of data that needs to be managed flawlessly by Big Data and AI. The system will probably be operated by a federal bureaucracy like the Federal Aviation Administration, but oh, so much bigger and mind-bendingly more complicated. The FAA’s computers manage 225 million airline routes. That’s a lot of data, at least until we compare it to the new agency’s need to monitor, manage, and coordinate 24 quadrillion possible routes for cars and trucks alone. The math on that is nuts.

When will robocars arrive? Published reports continue to assure investors fully autonomous cars are just around the corner, but those with a deeper understanding of the issues and the challenges say we shouldn’t hold our breath because it’s likelier to be another 10 or 20 years before we get to that corner.
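How nuts is that math? A quick back-of-the-envelope division, using only the two figures quoted above, shows the scale gap between the robocar problem and today’s air traffic control.

```python
# Both figures come straight from the text above.
faa_routes = 225_000_000              # airline routes the FAA's computers manage
road_routes = 24_000_000_000_000_000  # 24 quadrillion possible road routes

print(road_routes // faa_routes)  # 106666666 -- roughly 100 million times bigger
```

A hundred-million-fold jump in scale, before adding pedestrians, weather, or potholes.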
The Economist tells us this is not the first wave of AI-related excitement. In the mid-1950s we were told it would take only a few years to build human-level intelligence. A second wave began in the 1980s and once again the field’s grandest promises went unmet. Like Florida land booms every few decades, here we go again.
Much modern AI technology has been quite successful. Billions of people use the AI inside their smartphones every day, mostly without noticing. Despite accomplishments like this, the fact remains that many of the bombastic claims made about AI have once again failed to become reality, and confidence is eroding as scientists start to wonder whether the technology has hit a wall. AI continues to butt up against newfound limits and has failed to deliver on some of its proponents’ more grandiose promises. Doubts are creeping in about whether today’s AI is really the magic solution we are told it is.
I’m not against AI. I am against bad AI flying the airliner without a pilot’s license.
Please send this article to someone you think would enjoy reading it. While you’re at it, be bold and send it to someone you know who needs to read it.