Your cheesy corporate cloud has potential.

There’s a word cloud hanging on the wall at work. I stare at it while I’m waiting for the elevator at the end of my day, spacing out, feeling a bit sorry for the smallest of the words. 

Unflappable.

Corporate lore tells the story of how this little word cloud came to be. The leadership team held a brainstorming session. Each person visualized a favorite coworker and offered up qualities that made that person great. 

Humble. Smart. Conscientious. Productive. Customer-Focused.

My company is in the middle of a deliberate growth spurt, so I have begun to regard the little word cloud as more than cheap office decor. Given its origins, I’d like to treat it more like a fortune-teller, asking, “Little word cloud, what kinds of people will my company hire next?”

Since I am likely the first person to ever speak with it, the little word cloud would be immediately grouchy and fatalistic. “Silly woman! I don’t matter much here. You and I, we work at an engineering services company headquartered in the Pacific Northwest. Half of all future new hires shall be white guys in plaid shirts who enjoy rally racing or camping!”

Having great empathy for the socially unpracticed, I would not be deterred by its outburst. I would give the little word cloud a pep talk, retelling its origin story with the CEO, Directors and Vice Presidents. And I would tell it about my passion for greater diversity and transparency, both in the STEM community and in our own little company. I enjoy working with the plaid shirts, but I am hungry to work with lots of different kinds of people. My arguments would be compelling. The cloud would be bolstered. It would pause, then slowly proclaim, “Your bosses are looking for humble, productive, smart, conscientious, customer-focused candidates.”

I would nod, then wonder.

What would Paul Meehl think of all this?

In 1954, practicing psychotherapist and rat researcher Paul Meehl published a book called “Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence.” For a variety of situations — from high school counselors predicting college success to psychiatrists deciding whether to use electroshock therapy — Meehl showed that statistical measures are better at prediction than clinical ones. That is, in such scenarios, it is better to take a few quantitative measures and do a rule-based lookup than it is to sit with a person, interview them for 45 minutes, and make a face-to-face assessment.
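
To make the contrast concrete, here is a tiny Python sketch of what a statistical prediction looks like. The measures, weights, and cutoff are invented for illustration; Meehl’s point is the shape of the method, not these particular numbers: take a few quantitative measures, combine them with a fixed rule, and read off the answer.

```python
# A toy "statistical" predictor in the spirit of Meehl's comparison.
# The measures, weights, and cutoff below are made up for illustration only.

def predict_college_success(high_school_gpa: float, test_percentile: float) -> str:
    """Combine two quantitative measures with a fixed, mechanical rule."""
    score = 0.6 * (high_school_gpa / 4.0) + 0.4 * (test_percentile / 100.0)
    return "likely to succeed" if score >= 0.5 else "at risk"

# No 45-minute interview, no gut feel: the same inputs always produce the same answer.
print(predict_college_success(high_school_gpa=3.2, test_percentile=70))
```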

Like any diligent researcher, Meehl anticipated the reactions of the haters, the readers who would not be able to accept statistical measures as better predictors. He wrote,

“Those who dislike the method consider it mechanical, atomistic, additive, cut and dried, artificial, unreal, arbitrary, incomplete, dead, pedantic, fractionated, trivial, forced, static, superficial, rigid, sterile, academic, oversimplified, pseudoscientific, and blind.” (Meehl 1954, p 4).

Despite all his careful wording, Meehl’s book was controversial. It was probably a difficult revelation to accept. Whatever Meehl might have written, his colleagues may have only been able to receive one message.

You went to college. You did your rotations and your residencies. You worked graveyard shift at the state mental hospital, even got attacked by a patient with a knife. So what. A four-question multiple choice test is better at predicting things about your patients than you are.

That is a message difficult to believe.

And perhaps because it is so unbelievable, in the sixty years after Meehl’s work, hundreds more studies have reported similar contests between statistical and clinical prediction. Moreover, there are a variety of domains in which Meehl’s controversial conclusions have been retested and revalidated (Kahneman 2011, p 223):

  • the longevity of cancer patients
  • the length of hospital stays
  • the diagnosis of cardiac disease
  • the prospects of success for new businesses
  • the evaluation of credit risks by banks
  • the future career satisfaction of workers
  • the suitability of foster parents
  • the winners of football games
  • the future prices of Bordeaux wine

Why are humans so particularly bad at these kinds of prediction?

In the nineties, cognitive neuroscientist James McClelland published a collection of papers with one main idea: the human brain has two complementary learning systems, colloquially called “long-term memory” and “short-term memory.” Long-term memory, or the neocortical system, is the non-conscious learning system. I can button this shirt I am wearing because, as a kid, I practiced buttoning over and over again. It’s the same reason I can tie my own shoes. The lessons from this kind of simple yet repeated motor learning are recorded in the neocortex.

Conversely, short-term memory refers to the hippocampal system, an area of the brain that constructs temporary mental models of the present moment, allowing one to react flexibly to the world (Macrae 2000, p 2). Sure, I know how to button my own shirt, but when I go to the lunchroom and notice that my coworker Kim is wearing the same shirt, and maybe even joke with her about it, that is largely thanks to my hippocampus. Unless an event is particularly novel, consolidation from the short-term system to the long-term one can take a long time, up to 15 years (McClelland 1995, p 419).

The evolutionarily older neocortical system has structural elements in common with even primitive vertebrates (Karabanov 2010), including the lamprey, a prehistoric eel-like creature with a toothy sucker mouth at one end. So it feels simultaneously awe-inspiring and depressing that a learning system that humans share with such tube sock nightmares is responsible for more than just motor learning. It is also the source of other implicit knowledge developed unconsciously over time, including the ways in which humans categorize other people.

Lamprey. Source: http://www.bbc.com/earth/story/20151102-meet-a-lamprey-your-ancestors-looked-just-like-it

As humans navigate their social world — interacting with pierced baristas, plaid-shirted co-workers, grocery clerks, or homeless people on the bus — the two complementary learning systems work in concert. Automatically, unconsciously and very quickly, the neocortical system categorizes each person, not wasting valuable attentional resources on the familiar (Macrae 2000, p 106). Then the hippocampal system works to perceive novel and unexpected events. When a person is multi-tasking, stressed, or otherwise cognitively busy, even fewer resources are put toward perceiving the unique traits of any given individual. 

Project Implicit is a nonprofit organization that seeks to educate people about these categories, the implicit biases constructed by the neocortical system. Since 2005, the Harvard-based group has offered an online site to assess an individual’s implicit biases in fourteen different areas, including race, disability, gender, and careers (Project Implicit 2016). Based on the tests I have taken, I have learned that I have an implicit bias associating women with science, an automatic association of “Foreign” with Asian Americans, and little to no automatic preference between Arab Muslims and other people.

I’m not special. All people have some set of implicitly learned categories and any single person may not always rationally agree with her own implicit biases. Even the average American second grader has a strong, implicit bias associating boys with math (Cvencek 2011). But it is immensely helpful to me that I know what my own implicit biases are so that I can practice better mindfulness and keep them in check.

When it comes time for me to participate in an interview, within ten seconds of meeting the candidate, my neocortical system will have already pigeonholed him according to whatever automatic, unconscious categorization has taken place in my neocortex over the last forty years. As my company grows, as I participate in interview scenarios, I would like to outsmart my own neocortex. I would like to find a diverse group of qualified candidates, whether or not they fit under my implicit biases. I would like to leverage the lessons from the six-decade reaction to Paul Meehl’s book and turn the little word cloud’s proclamation into a repeatable, quantitative interview protocol.

Who are the humble, productive, smart, conscientious, customer-focused candidates?

According to a 1998 study by Frank Schmidt and John Hunter, two effective predictors of how a candidate will perform at a job are: 

1) A work sample test, which explains 29% of an employee’s performance.
2) A structured interview, which explains 26% of an employee’s performance (Bock 2015).

For work sample tests, companies may offer an assignment that a candidate can complete on her own time. I got an in-person interview at Puppet Labs last fall only after I wrote a program that automatically deployed an nginx web server. It was something I had never done before. On the one hand, it was easy enough to search for the basic mechanics of the task. On the other, it was a good opportunity to demonstrate my conscientious coding style. I implemented the core functionality, but I also handled multiple error conditions and offered up user-friendly messages when something went wrong. I took care to write a clear set of instructions on how to use the program. Such work sample tests are useful when potential employers can create a small scope of work that adequately represents what the candidate will actually be expected to do.
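
That program is long gone, but a minimal sketch of the same kind of work sample might look like the one below. It assumes a Debian-style host with apt and systemd, needs root privileges, and the function names are my own; it illustrates the error-handling style I am describing, not the code I actually submitted.

```python
#!/usr/bin/env python3
"""Install nginx, start it, and verify it answers on port 80."""

import subprocess
import sys
import urllib.request


def run(description, command):
    """Run a shell command, translating failures into friendly messages."""
    try:
        subprocess.run(command, check=True, capture_output=True)
    except FileNotFoundError:
        sys.exit(f"Could not {description}: '{command[0]}' is not available on this host.")
    except subprocess.CalledProcessError as err:
        sys.exit(f"Could not {description}: {err.stderr.decode().strip()}")


def main():
    run("install nginx", ["apt-get", "install", "-y", "nginx"])
    run("start nginx", ["systemctl", "start", "nginx"])
    try:
        with urllib.request.urlopen("http://localhost:80", timeout=5) as response:
            print(f"nginx is up and answering with HTTP {response.status}.")
    except OSError as err:
        sys.exit(f"nginx started but is not answering on port 80: {err}")


if __name__ == "__main__":
    main()
```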

Alternatively, structured interviews comprise a fixed set of questions, asked of every candidate in the same order and scored with a consistent rubric (Bock 2015). The questions themselves are generic and dry. The United States Office of Personnel Management offers motivation and guidance for developing structured interviews. Consider this question, taken from the guide (USOPM 2008, p 10):

Describe a situation in which you had to deal with individuals who were difficult, hostile, or distressed. Who was involved? What specific actions did you take and what was the result?

The purpose of this particular question is to assess a candidate’s interpersonal skills. The one-page rubric that accompanies this question defines five levels of quality, including concrete examples. A level-5 expert in interpersonal skills might describe a time when he had to explain to an irate leadership team why an already-expensive project was late and over budget. A candidate with a level-1 awareness of interpersonal skills might describe a time she referred an angry middle manager to her direct supervisor.

In a similar vein, I could work with HR, my boss and my software development team to define what questions might uncover the extent to which a developer is “humble” or “productive.” I would commit to gathering feedback from a variety of different people in my own professional network, since concrete examples of what a “conscientious” developer looks like might vary across the many different cultures of Americana.
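
As a thought experiment, here is a rough sketch of how such a protocol could be written down so that every candidate gets the same questions and the same scoring scale. The question text is the USOPM example quoted above; the rubric anchors, competency names, and simple averaging rule are placeholders of my own, not anything HR has blessed.

```python
# A sketch of a structured-interview protocol as data: same questions, same
# order, same rubric for every candidate. Content here is illustrative only.
from statistics import mean

PROTOCOL = [
    {
        "competency": "interpersonal skills",
        "question": ("Describe a situation in which you had to deal with individuals "
                     "who were difficult, hostile, or distressed. Who was involved? "
                     "What specific actions did you take and what was the result?"),
        "rubric": {
            1: "Referred an angry middle manager to a direct supervisor.",
            3: "Calmed a frustrated customer and resolved the complaint.",
            5: "Explained a late, over-budget project to an irate leadership team.",
        },
    },
    # ...one entry per quality from the word cloud: humble, productive, and so on.
]


def score_candidate(ratings):
    """Average one rubric rating per question; every candidate is scored the same way."""
    if len(ratings) != len(PROTOCOL):
        raise ValueError("Every question in the protocol must receive a rating.")
    return mean(ratings)


print(score_candidate([4]))  # one rating per question currently in PROTOCOL
```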

All that said, I am not naive about the economics. Both the work sample test and the structured interview are expensive and cumbersome. A structured interview, with its questions and rubrics, takes time to develop and pilot, yet it only predicts 26% of an employee’s performance. Still, for someone interested in greater diversity and transparency, there are not many attractive alternatives. There are interviews that begin with “If you could be any kind of cheese, what kind of cheese would you be?” Such unstructured interview questions predict only 14% of an employee’s performance. Moreover, they don’t seem to assess any meaningful qualities in the candidate. Instead, they permit the interviewer to cognitively validate whatever hidden biases her neocortical system barfed up in those first ten seconds (Bock 2015) and to justify hiring people who are like herself (USOPM 2008).

Are interview protocols enough to eliminate implicit bias?

Even with carefully-crafted protocols like work sample tests and structured interviews, it is still necessary to take steps to override the brain’s automatic categorization that can happen in an interview scenario. 

A very good first step is simply to be aware that implicit bias exists (Macrae 2000, p 109). Even better is to learn about one’s own implicit biases and to talk through them during deliberations. Research conducted by Project Implicit has shown that just naming implicit bias can improve hiring committee outcomes.

A second step is to leverage what is known about these complementary learning systems. Avoid making hiring decisions under time pressure (USOPM 2008). Under stress, fewer cognitive resources are put toward individuating any particular candidate; employees may interview fewer candidates and rely more heavily on their automatic, unconscious categorizations.

I am awestruck by the human brain. It is an impressive, organic machine. Like any machine, it takes knowledge and care to make it work efficiently and produce the desired outcomes. Of course, achieving greater diversity and transparency at my company and in the greater STEM community requires more than a little word cloud. But as Paul Meehl might offer, “Your cheesy corporate word cloud has potential.”

Sources Cited

Bock, Laszlo. “Here’s Google’s Secret to Hiring the Best People.” Wired.com. April 07, 2015. http://www.wired.com/2015/04/hire-like-google/.

Cvencek, Dario, Andrew N. Meltzoff, and Anthony G. Greenwald. “Math-Gender Stereotypes in Elementary School Children.” Child Development 82, no. 3 (2011): 766–79.

Kahneman, Daniel. Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux, 2011.

Karabanov, A., S. Cervenka, O. De Manzano, H. Forssberg, L. Farde, and F. Ullen. “Dopamine D2 Receptor Density in the Limbic Striatum Is Related to Implicit but Not Explicit Movement Sequence Learning.” Proceedings of the National Academy of Sciences 107, no. 16 (2010): 7574–579.

Macrae, C. Neil, and Galen V. Bodenhausen. “Social Cognition: Thinking Categorically about Others.” Annual Review of Psychology 51, no. 1 (2000): 93–120.

McClelland, James L., Bruce L. McNaughton, and Randall C. O’Reilly. “Why There Are Complementary Learning Systems in the Hippocampus and Neocortex: Insights from the Successes and Failures of Connectionist Models of Learning and Memory.” Psychological Review 102, no. 3 (1995): 419–57.

Meehl, Paul E. Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence. Minneapolis: University of Minnesota Press, 1954.

Project Implicit. “About Us.” Accessed August 07, 2016. https://implicit.harvard.edu/implicit/aboutus.html.

U.S. Office of Personnel Management (USOPM). “Structured Interviews: A Practical Guide.” September 2008.
