Let's Take A Closer Look

Explaining complicated subject matter simply since 1986

Long before automated resumé screening, a friend complained long and loud that every time he needed to hire someone to work in the office, the phone would ring all day. Most of the applicants who called about the job were not the kind of people who had resumés or references. When he asked me for ideas, I said I’d write his ad for him. His needs were simple: he wanted someone who could read and write, follow instructions, and operate a fax machine. So the ad asked applicants to write a short note explaining why they would be a good employee and to fax it, along with a resumé, to 901-xxx-xxxx. I did not include the name of the company (only the general neighborhood) or a voice telephone number. The day after the ad went live, he called and said his fax line had been squealing all morning, with caller after caller hanging up. He took this to mean that many people who saw the ad had dialed the only number listed rather than faxing anything. That was the point: the only candidates he had to deal with were those who could read and write, follow instructions, and successfully fax something.

Human Resources and Artificial Intelligence

HR departments have found that algorithms help them do their jobs faster. And A.I. makes it so easy to screen candidates that many HR personnel have entirely surrendered decision-making to imperfect machines running imperfect programs. They have become reliant on what they assume are infallible robo-gatekeepers without understanding that A.I. does not remove human bias. Everything A.I. does is built by humans, and those humans write their own personal and professional biases into the system.

Laszlo Bock

When he was head of Google’s HR department, Laszlo Bock decided the term Human Resources was too old-fashioned and he changed the HR department’s name to People Operations. Laszlo wanted to expand HR’s sphere of influence. This meant reinventing the department, which he did by laying claim to being data-driven and analytically sophisticated. He also wanted to take hiring decisions out of the hands of managers and have HR make them. And of course what he really wanted was to have HR run as much of the business as he could. In 2015 the Association for Change Management named him the HR Professional of the Decade for transitioning from the current state to a new state. A year later, Bock left Google to start Humu, whose mission is to “make work better everywhere through machine learning, science, and a little bit of love.” My favorite Bock quote? “The focus on process rather than purpose creates an insidious opportunity for sly employees to manipulate the system.”

Google concluded that algorithm-driven recruitment leads to higher-quality hires.

Applicants believe the opposite. In a Pew Research Center study, seventy-six percent of U.S. workers said they would not want to apply for a job at a company that used algorithms to scan their CVs, because they think algorithms do a worse job than humans.

What’s not to like about reducing bias?

Job site Monster asks that question without mentioning that automated resumé screening reduces bias only insofar as it is blind to gender and ethnicity. Nor does Monster mention that automated screening cannot assess such important personal characteristics as the ability to work well with people from different backgrounds.

Eager to jump on the A.I. bandwagon, HR personnel have little or no understanding of what goes into hiring algorithms or data analytics

When one organization found a correlation between candidates who entered lots of keystrokes and future job performance, it began screening for applications with high keystroke counts. This, of course, automatically screened out those who can express themselves well in a few words.
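
To make the problem concrete, here is a minimal Python sketch of what such a volume filter amounts to. The cutoff and the sample applications are invented for illustration:

```python
# Hypothetical illustration of screening on a volume proxy:
# applications below an arbitrary keystroke threshold are rejected
# before anyone reads them, regardless of quality.

MIN_KEYSTROKES = 2000  # arbitrary cutoff, for illustration only

applications = [
    {"name": "A", "text": "Led a five-person team; cut fulfillment errors 40%."},
    {"name": "B", "text": "I am a hard worker. " * 150},  # padded, low-content
]

for app in applications:
    keystrokes = len(app["text"])
    verdict = "advance" if keystrokes >= MIN_KEYSTROKES else "reject"
    print(app["name"], keystrokes, verdict)

# The padded application advances; the concise, substantive one is rejected.
```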

Scientific, maybe; irreproachable, no

Algorithms do not automatically eliminate the human tendency to favor people who are like ourselves. Biases are built in by programmers, who bury theirs deep down where you can’t see them, just like termites.

Hiring algorithms are not neutral

Writing in Harvard Business Review, Gideon Mann and Cathy O’Neil say that more and more HR managers are relying on data-driven algorithms to screen potential job candidates. Some systems are so good at rejecting applicants that three out of four resumés are weeded out without a human ever seeing them. One wonders how many babies are thrown out with the bathwater.

Most HR personnel in charge of automation:
  • Don’t realize there are drawbacks to what on the surface appears to be efficiency.
  • Are unaware of A.I.’s many limitations.
  • Do not have a plan for how to evaluate results.

Hiring algorithms are not designed to find the best employees

They are designed to weed out applicants. Algorithms are good for simple screens like background checks. But algorithms cannot be objective because programmers’ biases, opinions, and assumptions are lodged deeply in strings of code. Algorithms are trained to learn from past successes, which further embeds existing bias.

Executive tip: Stop making hard decisions based solely on algorithms 
  • Let decisions be guided by an algorithm-informed process.
  • Do not allow an algorithm to do the choosing.
  • Build and operate a rigorous observation and control system.
  • Insist experienced people oversee selection and evaluation.
  • Conduct random spot-checks on machine decisions and put them through extensive human review. You want to see which candidates the algorithm has been selecting and why.
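
To make that last point concrete, here is a minimal Python sketch of random spot-checking. The record format and the sampling rate are assumptions, not a prescription:

```python
import random

def spot_check(decisions, rate=0.05, seed=None):
    """Randomly sample machine decisions for full human review.

    `decisions` can be any list of (candidate, verdict, reason)
    records pulled from the screening system; the sample goes to
    an experienced reviewer who checks which candidates the
    algorithm selected, and why.
    """
    rng = random.Random(seed)
    k = max(1, int(len(decisions) * rate))
    return rng.sample(decisions, k)

# Illustrative records; in practice these come from the screening system.
decisions = [(f"candidate-{i}", "reject", "low match score") for i in range(200)]
for record in spot_check(decisions, seed=42):
    print(record)  # hand these to a human reviewer
```
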
How to game automated resumé screening

Like video games, automated systems are pre-programmed, and anything that is pre-programmed can be hacked. Resumé-preparation services sell software that promises to improve your CV match score: it culls keywords from job descriptions and embeds them in your resumé. Thanks to pre-written templates, growing numbers of applicants submit cloned CVs designed to get through the door, not to give a good description of the applicant. And those who game often make unfounded claims, which A.I. has no way of detecting.
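
A minimal sketch of that keyword-matching idea, with hypothetical text and a deliberately crude scoring rule, shows why pasting a posting’s vocabulary into a CV inflates the match score:

```python
import re

def keywords(text, min_len=4):
    """Crude keyword set: lowercase words of at least `min_len` letters."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) >= min_len}

def match_score(resume, job_description):
    """Fraction of the posting's keywords that also appear in the resume."""
    wanted = keywords(job_description)
    return len(wanted & keywords(resume)) / len(wanted)

posting = "Seeking detail-oriented coordinator: scheduling, invoicing, logistics."
honest = "Ran office operations, handled billing and vendor calendars."
gamed = honest + " Keywords: detail oriented coordinator scheduling invoicing logistics."

print(match_score(honest, posting))  # 0.0 -- no overlap, though the person fits
print(match_score(gamed, posting))   # ~0.86 -- same person, stuffed keywords
```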

Employers who use automated screening should ask for personal letters

Don’t let machines scan the letters, though. Have them read and assessed by humans, who understand context and can get a sense of who the applicant really is.

Computers and software are the domain of scientists

But now that so many statistical programs are automated, just about everyone believes they can analyze data as well as scientists can. The problem is that non-experts don’t know that automated analysis mostly just zooms around looking for relationships between variables. These automated processes cannot determine whether the links they find make any sense, because they do not understand conditions, circumstances, or extraneous factors, the things we call context.

A.I. cannot understand context. All it knows is statistics

The leading approach to A.I. right now is machine learning, in which programs are trained to pick out patterns in large amounts of data. Programs do this without the essential ability to determine whether the patterns make any sense at all. Science journalist Bianca Nogrady says our increasing reliance on machine learning means decisions about important things are being made without enough scrutiny. Instead of worrying about robots ruling the world, we should worry that we are putting too much trust in the automated tools we use now. Once trained, the software is put to work analyzing fresh, unseen data within the exact limits set by its programmers. When the computer spits out answers, we never see where they came from or how it got there.

Systems are only as good as the data they are fed

When it comes to data, one bad apple spoils the whole bunch. Garbage In, Garbage Out is the notion that when faulty data are fed into a computer, the output produced from those data is also faulty. When the data lack accuracy, completeness, and consistency, so does the output. Because most of the data used by business is not complete, accurate, or consistent, most of the output is untrustworthy. Rob Vermiller writes that today’s A.I. neural networks are shallow and imperfect. They learn everything the same way and at the same level, finding coincidences that they call correlations. Unlike people, machines have zero ability to know what they don’t know, nor are they aware of their own thinking processes, an ability in humans known as metacognition.
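
Garbage In, Garbage Out fits in a few lines. In this sketch the flaw is one I am assuming for illustration, missing values coded as -999 that nobody removed; the arithmetic is performed perfectly on data that are wrong:

```python
# GIGO in miniature: a sentinel value (-999, a common stand-in for
# "not recorded") left in the data drags the average into nonsense.

ages = [34, 29, 41, -999, 38, -999, 45]  # -999 means "not recorded"

naive_mean = sum(ages) / len(ages)
clean = [a for a in ages if a >= 0]
clean_mean = sum(clean) / len(clean)

print(f"naive mean age:   {naive_mean:.1f}")  # about -258.7: garbage out
print(f"cleaned mean age: {clean_mean:.1f}")  # 37.4
```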

Data scientists report that 80% of their time is spent wrestling with messy data

Writing in Forbes, Gary Bloom says any and all types of intelligence are only as good as the data from which they draw inferences. He says what’s usually missing from discussions of A.I.’s shortcomings is the elephant in the room: data quality. And data kept today by companies is often an incompatible mess, like an informational Tower of Babel. “Some of it is valuable, some of it actually belongs with stuff in a different closet and some of which no one knows is even there,” he says. “If A.I. is a recipe for increased efficiency, then good data is the essential ingredient.”

Talent beats algorithms every time

The global investment banker Jefferies Group says “Data and talent – not algorithms – will be the two significant sources of competitive advantage in the A.I. war.” Gartner’s Kasey Panetta said “If the data is of poor quality, machines won’t be able to make reliable decisions.” Joaquin Candela, Director of Applied Machine Learning at Facebook, recently told the Harvard Business Review that he’s focused on getting better data, not better algorithms. “I’m not saying don’t work with algorithms at all. I’m saying we need to focus on feeding them better data,” he said. A.I. needs robust, clean, and current data. Everyone needs to know where it came from, how it was collected, and how it was processed before being sent to an A.I. engine.

Ice cream and forest fires 

Let’s say that A.I. finds a correlation between variable A and variable B. Here are the possibilities:

  1. Variable A causes variable B.
  2. B causes A.
  3. Neither causes the other; the correlation is coincidence.
  4. A third variable, C, causes both A and B.

Panetta uses the example of the high correlation between forest fires and ice cream consumption among children. It doesn’t take a statistician to realize that eating ice cream probably does not cause fires, nor do fires cause ice cream eating. What is very likely is that a third variable causes increases in both ice cream consumption and forest fires: heat (variable C), which is highest in summer, when more ice cream is sold and more forest fires occur.
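
The confounder is easy to simulate. In the Python sketch below the numbers are invented: heat (C) drives both series, the two series never interact, and the correlation between them comes out strong anyway:

```python
import random
import statistics  # statistics.correlation needs Python 3.10+

random.seed(0)

# C: summer heat drives both series; A and B never touch each other.
heat = [random.uniform(10, 35) for _ in range(365)]           # variable C
ice_cream = [h * 3 + random.gauss(0, 5) for h in heat]        # A = f(C) + noise
fires = [max(0, h * 0.4 + random.gauss(0, 2)) for h in heat]  # B = g(C) + noise

r = statistics.correlation(ice_cream, fires)
print(f"correlation(ice cream, fires) = {r:.2f}")  # strongly positive
# Neither causes the other; control for heat and the "relationship" disappears.
```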

The larger the data set, the more patterns turn up by coincidence and get wrongly interpreted as meaningful

Today’s data sets include millions and billions of data points, guaranteeing that some of the relationships found will look statistically meaningful. But “statistically meaningful” is sometimes no more than mere coincidence. Sorry to have to put it this way, but it is not meaningful in any meaningful way. False positives pop up like a lawn full of dandelions. Among the findings from one A.I. report were these:

  • People who trim fat from their steaks are more likely to be atheists.
  • Cabbage eaters have innie bellybuttons.
  • Egg rolls lead to dog ownership.

This is the same “science” that leads to product pitches like Lose 20 Pounds Eating Grapefruit, Blueberries Prevent Memory Loss, and Pistachios Cure Erectile Dysfunction. Claims like this are easy to dismiss because these relationships are nonsensical. But we can’t rely on our instincts to determine which ones are real and which ones aren’t. That’s where we need some help.
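
The dandelion effect is easy to demonstrate. In the sketch below, every column is pure random noise and no real relationship exists anywhere, yet dozens of variable pairs clear a naive cutoff for a “finding” (the thresholds are made up for illustration):

```python
import random
import statistics  # statistics.correlation needs Python 3.10+

random.seed(1)

N_ROWS, N_VARS, THRESHOLD = 100, 40, 0.2  # treat |r| > 0.2 as a "finding"

# Forty columns of pure noise: no real relationships exist anywhere.
data = [[random.gauss(0, 1) for _ in range(N_ROWS)] for _ in range(N_VARS)]

findings = sum(
    1
    for i in range(N_VARS)
    for j in range(i + 1, N_VARS)
    if abs(statistics.correlation(data[i], data[j])) > THRESHOLD
)
pairs = N_VARS * (N_VARS - 1) // 2
print(f"{findings} 'meaningful' correlations out of {pairs} pairs of pure noise")
```

That is where findings like egg rolls and dog ownership come from.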

Amazon penalized women

In The Guardian, Kenan Malik says Amazon abandoned a secret A.I. system that was supposed to automate their HR recruitment process. The system gave job candidates scores of one to five stars. Upon close inspection, they saw the program tended to give five stars to men and one star to women. According to Reuters, it “penalized resumés that included the word women’s, as in women’s chess club captain and it marked down applicants who had attended women-only colleges.” Amazon had been feeding it the details of its own recruitment program over the previous ten years. Most applicants had been men, as had most recruits. So of course, what the program learned was that men were good candidates and women were not.
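
To see the mechanism (this is a deliberately tiny sketch, not Amazon’s actual system), train a toy scorer on a hypothetical hiring history that skews male. It learns the gendered words as evidence, which is exactly the failure Reuters described:

```python
from collections import Counter

# Hypothetical history: mostly male hires, so gendered words end up
# correlated with the "hired" label even though they signal nothing
# about ability.
history = [
    ("captain of men's rugby team", 1),  # 1 = hired
    ("men's chess club president", 1),
    ("warehouse shift supervisor", 1),
    ("women's chess club captain", 0),   # 0 = rejected
    ("attended a women's college", 0),
    ("inventory team lead", 1),
]

hired, rejected = Counter(), Counter()
for text, label in history:
    (hired if label else rejected).update(text.split())

def score(resume):
    """Sum per-word evidence learned from the skewed history."""
    return sum(hired[w] - rejected[w] for w in resume.split())

print(score("men's chess club captain"))    # positive: resembles past hires
print(score("women's chess club captain"))  # negative: penalized for "women's"
```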

All this should teach us three things:
  1. Every automated program has built-in biases.
  2. We should stop thinking of machines as being objective, because machines are only as good (or bad) as the humans programming them.
  3. Humans can judge context in a way no machine can. We have a sense of right and wrong and the ability to challenge injustice. Machines do not.

Without context, A.I. is only numbers

The word context comes from the Latin for woven together. At a higher level, context can be defined as the circumstances that surround an event, statement, or idea, without which it cannot be fully understood. Ron Sellers wrote on greenbook.org that context is the difference between research and insights: “Research too often produces numbers in a vacuum, rather than providing the context that will allow for meaningful analysis of the findings.” This is where A.I. is as helpless as a kitten up a tree, except the cat knows it’s in trouble and A.I. doesn’t.

If we set the bar low enough, we can feel superior 

During a discussion of proven oil reserves, several of my University of the West Indies MBA students boasted that Trinidad & Tobago is #1 in the Caribbean. So I told them about frames of reference, pointing out that, compared with the rest of the world, T&T ranks 43rd. Many became defensive and a few got angry. Most preferred the context that made them feel better.

Social norms depend on context, too

A bathing suit is appropriate beachwear for a pool party. A tuxedo is not. The reverse is true for dinner at the embassy.

The thoughts behind the actions

Psychologists Laura Brady, Stephanie Fryberg, and Yuichi Shoda write about cultural context. They say stand-alone statistics lack interpretive power: the ability to understand individuals’ experiences and behaviors in relation to their cultural contexts. Cognition, emotion, motivation, and behavior are shaped by individuals’ cultural values and norms, and the same behavior takes on different meanings in different cultural contexts. To accurately understand human behavior, scientists must understand the cultural context in which the behavior occurs and measure the behavior in culturally relevant ways. When scientists lack this interpretive power, they are prone to drawing inaccurate conclusions and building incomplete or misguided theories.

Machine learning is very brittle 

It requires lots of preparation by human researchers and engineers, special-purpose coding, special-purpose sets of training data, and a custom learning structure for each new problem. Today’s machine learning is linear, not the sponge-like learning of humans. When people heard that a computer beat the world chess champion (in 1997), they tended to think that it was “playing” the game just as a human would. No way – those programs had no idea what a game was, or even that they were playing one. 

Like plants, algorithms need weeding and pruning

They require constant monitoring and fine-tuning. These responsibilities too often fall to HR personnel who lack the statistical literacy to understand what A.I. can and cannot do. While HR algorithms are good for increasing diversity and inclusion, they can no more measure honesty, integrity, and motivation than a yardstick can measure room temperature.

Suitcase words

Marvin Minsky was one of the founding fathers of artificial intelligence. He used the term suitcase words to describe words that carry many meanings. Learning is a powerful suitcase word because it can refer to so many different types of learning experiences. Learning about artificial intelligence is a very different experience from memorizing your multiplication tables, just as learning to operate a bulldozer is a very different experience from learning the words to “I’m A Little Teapot.”
