Hello! Could you please write your own four paragraph (5-6 sentences per paragraph) take away or reflection of the below information? Please complete in 24 hours if possible. Thank you!
IRIS BOHNET THINKS firms are wasting their money on diversity training. The problem is, most programs just don’t work. Rather than run more workshops or try to eradicate the biases that cause discrimination, she says, companies need to redesign their processes to prevent biased choices in the first place. Bohnet directs the Women and Public Policy Program at the Harvard Kennedy School and cochairs its Behavioral Insights Group. Her new book, What Works, describes how simple changes—from eliminating the practice of sharing self-evaluations to rewarding office volunteerism—can reduce the biased behaviors that undermine organizational performance. In this edited interview with HBR senior editor Gardiner Morse, Bohnet describes how behavioral design can neutralize our biases and unleash untapped talent.
HBR: Organizations put a huge amount of effort into improving diversity and equality but are still falling short. Are they doing the wrong things, not trying hard enough, or both? Bohnet: There is some of each going on. Frankly, right now I am most concerned with companies that want to do the right thing but don’t know how to get there, or worse, throw money at the problem without its making much of a difference. Many U.S. corporations, for example, conduct diversity training programs without ever measuring whether they work. My colleague Frank Dobbin at Harvard and many others have done excellent research on the effectiveness of these programs, and unfortunately it looks like they largely don’t change attitudes, let alone behavior. [See “Why Diversity Programs Fail,” by Frank Dobbin, in this issue.] I encourage anyone who thinks they have a program that works to actually evaluate and document its impact. This would be a huge service. I’m a bit on a mission to convince corporations, NGOs, and government agencies to bring the same rigor they apply to their financial decision making and marketing strategies to their people management. Marketers have been running A/B tests for a long time, measuring what works and what doesn’t. HR departments should be doing the same.
What would a diversity evaluation look like? There’s a great classroom experiment that’s a good model. John Dovidio and his colleagues at Yale evaluated the effect of an antibias training program on first and second graders in 61 classrooms. About half the classrooms were randomly assigned to get four weeks of sessions on gender, race, and body type with the goal of making the children more accepting of others who were different from them. The other half didn’t get the training. The program had virtually no impact on the children’s willingness to share or play with others. This doesn’t mean you can’t ever teach kids to be more accepting—just that improving people’s inclination to be inclusive is incredibly hard. We need to keep collecting data to learn what works best. So the point for corporations is to adopt this same methodology for any program they try. Offer the training to a randomly selected group of employees and compare their behaviors afterward with a control group. Of course, this would also mean defining success beforehand. For diversity training programs to go beyond just checking the box, organizations have to be serious about what they want to change and how they plan to evaluate whether their change program worked.
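The evaluation Bohnet describes is a randomized controlled comparison. A minimal sketch of that design, in Python with invented data and function names (the article prescribes the method, not any particular code): randomly split employees into a treatment group that gets the training and a control group that does not, pick the success metric in advance, then ask whether the observed difference between groups is larger than chance, here via a simple permutation test.

```python
# Illustrative sketch of a randomized training evaluation.
# All names, data, and thresholds are assumptions, not from the article.
import random
import statistics

def assign_groups(employee_ids, seed=0):
    """Randomly split employees into treatment and control halves."""
    rng = random.Random(seed)
    ids = list(employee_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]

def permutation_test(treated_outcomes, control_outcomes,
                     n_permutations=10_000, seed=0):
    """P-value for the observed difference in mean outcomes:
    how often does a random relabeling produce a gap at least as large?"""
    rng = random.Random(seed)
    observed = (statistics.mean(treated_outcomes)
                - statistics.mean(control_outcomes))
    pooled = list(treated_outcomes) + list(control_outcomes)
    n_treated = len(treated_outcomes)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = (statistics.mean(pooled[:n_treated])
                - statistics.mean(pooled[n_treated:]))
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_permutations
```

The essential point from the interview survives translation into code: the outcome metric and the comparison must be defined before the program runs, or the exercise is just checking the box.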
What does behavioral science tell us about what to do, aside from measuring success? Start by accepting that our minds are stubborn beasts. It’s very hard to eliminate our biases, but we can design organizations to make it easier for our biased minds to get things right. HBR readers may know the story about how orchestras began using blind auditions in the 1970s. It’s a great example of behavioral design that makes it easier to do the unbiased thing. The issue was that fewer than 10% of players in major U.S. orchestras were women. Why was that? Not because women are worse musicians than men but because they were perceived that way by auditioners. So orchestras started having musicians audition behind a curtain, making gender invisible. My Harvard colleague Claudia Goldin and Cecilia Rouse of Princeton showed that this simple change played an important role in increasing the fraction of women in orchestras to almost 40% today. Note that this didn’t result from changing mindsets. In fact, some of the most famous orchestra directors at the time were convinced that they didn’t need curtains because they, of all people, certainly focused on the quality of the music and not whether somebody looked the part. The evidence told a different story.
So this is good news. Behavioral design works. Yes, it does. The curtains made it easier for the directors to detect talent, independent of what it looked like. On the one hand, I find it liberating to know that bias affects everyone, regardless of their awareness and good intentions. This work is not about pointing fingers at bad people. On the other hand, it is of course also depressing that even those of us who are committed to equality and promoting diversity fall prey to these biases. I am one of those people. When I took my baby boy to a Harvard day care center for the first time a few years back, one of the first teachers I saw was a man. I wanted to turn and run. This man didn’t conform to my expectations of what a preschool teacher looked like. Of course, he turned out to be a wonderful caregiver who later became a trusted babysitter at our house—but I couldn’t help my initial gut reaction. I was sexist for only a few seconds, but it bothers me to this day.

Seeing is believing. That is, we need to actually see counterstereotypical examples if we are to change our minds. Until we see more male kindergarten teachers or female engineers, we need behavioral designs to make it easier for our biased minds to get things right and break the link between our gut reactions and our actions.
What are examples of good behavioral design in organizations? Well, let’s look at recruitment and talent management, where biases are rampant. You can’t easily put job candidates behind a curtain, but you can do a version of that with software. I am a big fan of tools such as Applied, GapJumpers, and Unitive that allow employers to blind themselves to applicants’ demographic characteristics. The software allows hiring managers to strip age, gender, educational and socioeconomic background, and other information out of résumés so they can focus on talent only.
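The blinding idea is simple enough to sketch in a few lines. This is not the API of Applied, GapJumpers, or Unitive (those are commercial products whose internals the article doesn't describe); it is a toy illustration, with assumed field names, of the software analogue of the orchestra curtain: remove demographic fields from a record before a hiring manager sees it.

```python
# Toy sketch of résumé blinding; field names are assumptions.
BLINDED_FIELDS = {"name", "age", "gender", "photo", "school", "address"}

def blind_application(application: dict) -> dict:
    """Return a copy of the application with demographic fields removed."""
    return {k: v for k, v in application.items() if k not in BLINDED_FIELDS}

candidate = {
    "name": "A. Example",            # removed before review
    "age": 41,                       # removed
    "school": "Example University",  # removed: a socioeconomic signal
    "skills": ["Python", "SQL"],     # kept: job-relevant
    "work_samples": 3,               # kept
}
# blind_application(candidate) keeps only "skills" and "work_samples"
```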
There’s also a robust literature on how to take bias
out of the interview process, which boils down to
this: Stop going with your gut. Those unstructured
interviews where managers think they’re getting a feel for a candidate’s fit or potential are basically a
waste of time. Use structured interviews where every
candidate gets the same questions in the same
order, and score their answers in order in real time.
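A structured interview reduces to a small amount of mechanism: a fixed question list and a fixed rubric. The sketch below is a hypothetical illustration of that discipline (the questions and 1-5 rubric are invented, not from the article): every candidate answers every question, in order, and is scored on identical evidence.

```python
# Minimal sketch of structured-interview scoring; all content invented.
QUESTIONS = [
    "Describe a project you led end to end.",
    "Walk through how you would debug a failing process.",
    "Tell me about a time you changed your mind based on evidence.",
]

def score_candidate(answer_scores):
    """answer_scores: one 1-5 rubric score per question, in order."""
    if len(answer_scores) != len(QUESTIONS):
        raise ValueError("every candidate must answer every question")
    if not all(1 <= s <= 5 for s in answer_scores):
        raise ValueError("rubric scores must be 1-5")
    return sum(answer_scores) / len(answer_scores)
```

Because every candidate is scored on the same questions with the same rubric, the resulting numbers are comparable across candidates in a way that gut impressions from free-form conversations are not.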
You should also be thinking about how your recruitment approach can skew who even applies. For instance, you should scrutinize your job ads for language that unconsciously discourages either men or women from applying. A school interested in attracting the best teachers, for instance, should avoid characterizing the ideal candidate as “nurturing” or “supportive” in the ad copy, because research shows that can discourage men from applying. Likewise, a firm that wants to attract men and women equally should avoid describing the preferred candidate as “competitive” or “assertive,” as research finds that those characterizations can discourage female applicants. The point is that if you want to attract the best candidates and access 100% of the talent pool, start by being conscious about the recruitment language you use.
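An ad-language audit can be as simple as scanning the copy against a word list. The sketch below uses only the four words the interview itself flags; a real audit would use a validated word list from the research literature rather than this toy set.

```python
# Toy job-ad wording audit; word lists are limited to the examples
# mentioned in the interview and are not a validated instrument.
FEMALE_CODED = {"nurturing", "supportive"}   # may deter male applicants
MALE_CODED = {"competitive", "assertive"}    # may deter female applicants

def audit_ad(text: str) -> dict:
    """Report which flagged words appear in the ad copy."""
    words = {w.strip(".,;:!?\"'").lower() for w in text.split()}
    return {
        "female_coded": sorted(words & FEMALE_CODED),
        "male_coded": sorted(words & MALE_CODED),
    }

ad = "We seek a competitive, assertive self-starter."
# audit_ad(ad) flags "assertive" and "competitive"
```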
What about once you’ve hired someone? How do you design around managers’ biases then? The same principle applies: Do whatever you can to take instinct out of consideration and rely on hard data. That means, for instance, basing promotions on someone’s objectively measured performance rather than the boss’s feeling about them. That seems obvious, but it’s still surprisingly rare.
Be careful about the data you use, however. Using the wrong data can be as bad as using no data. Let me give you an example. Many managers ask their reports to do self-evaluations, which they then use as part of their performance appraisal. But if employees differ in how self-confident they are—in how comfortable they are with bragging—this will bias the manager’s evaluations. The more self-promoting ones will give themselves better ratings. There’s a lot of research on the anchoring effect, which shows that we can’t help but be influenced by numbers thrown at us, whether in negotiations or performance appraisals. So if managers see inflated ratings on a self-evaluation, they tend to unconsciously adjust their appraisal up a bit. Likewise, poorer self-appraisals, even if they’re inaccurate, skew managers’ ratings downward.

This is a real problem, because there are clear gender (and also cross-cultural) differences in self-confidence.
To put it bluntly, men tend to be more overconfident than women—more likely to sing their own praises. One meta-analysis involving nearly 100 independent samples found that men perceived themselves as significantly more effective leaders than women did when, actually, they were rated by others as significantly less effective. Women, on the other hand, are more likely to underestimate their capabilities. For example, in studies, they underestimate how good they are at math and think they need to be better than they are to succeed in higher-level math courses. And female students are more likely than male students to drop courses in which their grades don’t meet their own expectations. The point is, do not share self-evaluations with managers before they have made up their minds. They’re likely to be skewed, and I don’t know of any evidence that having people share self-ratings yields any benefits for employees or their organizations.
But it’s probably not possible to just eliminate
all managerial activities that allow biased thinking.
Right. But you can change how managers
do these things. One message here is to examine
whether practices that we thought were genderneutral
in fact lead to biased outcomes. Take the
SAT, for example. Your score shouldn’t have been
affected by whether you’re male or female. But it
turns out it was. The test once penalized students
for incorrect answers in multiple-choice questions.
That meant it was risky to guess. Research by Katie
Baldiga Coffman of Ohio State University shows that
this matters, especially for women. Among equally
able test takers, male students are more likely
to guess, while female students are more likely to
skip questions, fearing the penalty and thus ending
up with lower scores. Katie’s research reveals
that gender differences in willingness to take risk
account for about half of the gender gap in guessing.
An analysis of the fall 2001 mathematics SAT
scores suggests that this phenomenon alone explains
up to 40% of the gap between male and
female students in SAT scores. The 2016 SAT has
been redesigned so that it doesn’t penalize for
incorrect answers. Taking risk out of guessing
means that different appetites for risk taking will
no longer affect students’ final scores. This can
be expected to level the playing field for male and
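The arithmetic behind the guessing penalty is worth spelling out. The pre-2016 SAT had five answer choices per question and deducted a quarter point for each wrong answer, so a pure random guess had an expected value of exactly zero: guessing was free on average, but only for a test taker indifferent to risk, which is precisely where the gender difference in risk appetite enters.

```python
# Expected value of a blind guess on a multiple-choice question,
# using exact rational arithmetic to avoid float noise.
from fractions import Fraction

def expected_guess_value(n_choices, penalty):
    """EV = P(right) * 1 point + P(wrong) * (-penalty)."""
    p_right = Fraction(1, n_choices)
    return p_right * 1 + (1 - p_right) * (-penalty)

# Pre-2016 SAT: 5 choices, -1/4 point per wrong answer.
old_sat = expected_guess_value(5, Fraction(1, 4))   # 1/5 - (4/5)(1/4) = 0
# Redesigned SAT: 4 choices, no penalty, so guessing strictly pays.
new_sat = expected_guess_value(4, Fraction(0))      # 1/4
```

With zero expected value, only the risk-averse skip; with no penalty at all, skipping is never rational, so differences in risk appetite stop mattering to the final score.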
Notice that the new SAT doesn’t focus on changing the students’ mindsets about risk but instead corrects for different risk tolerances. After all, the test is meant to measure aptitude, not willingness to take risk. Organizations should take a page from this book: Look around and see whether your practices by design favor one gender over the other and discourage some people’s ability to do their best work. Do meetings, for example, reward those most willing to hold forth? If so, are there meeting formats you can use that put everyone on an equal footing?
How can firms get started? Begin by collecting data. When I was academic dean at the Harvard Kennedy School, one day I came to the office to find a group of students camped out in front of my door. They were concerned about the lack of women on the faculty. Or so I thought. Much to my surprise, I realized that it was not primarily the number of female faculty that concerned them but the lack of role models for female students. They wanted to see more female leaders—in the classroom, on panels, behind the podium, teaching, researching, and advising. It turns out we had never paid attention to—or measured—the gender breakdown of the people visiting the Kennedy School.

So we did. And our findings resembled those of most organizations that collect such data for the first time: The numbers weren’t pretty.
Here’s the good news. Once you collect and study
the data, you can make changes and measure progress.
In 1999, MIT acknowledged that it had been
unintentionally discriminating against female faculty.
An examination of data had revealed gender
differences in salary, space, resources, awards, and
responses to outside offers. The data had real consequences.
A follow-up study, published in 2011,
showed that the number of female faculty in science
and engineering had almost doubled, and several
women held senior leadership positions.
Companies can do their own research or turn to consultants for help. EDGE, where I serve as a scientific adviser, is a Swiss foundation and private company that helps organizations across sectors measure how well they do in terms of gender equality. A firm named Paradigm is another. I came across it when I was speaking with tech firms in Silicon Valley and San Francisco. It helps companies diagnose where the problems are, starting by collecting data, and then come up with possible solutions, often based on behavioral designs.