Psychology of Intelligence Analysis
by Richards J. Heuer, Jr.





Author’s Preface
Foreword
Introduction
PART I—OUR MENTAL MACHINERY
Chapter 1: Thinking About Thinking
Chapter 2: Perception: Why Can’t We See What Is There To Be Seen?
Chapter 3: Memory: How Do We Remember What We Know?
PART II—TOOLS FOR THINKING
Chapter 4: Strategies for Analytical Judgment: Transcending the Limits of Incomplete Information
Chapter 5: Do You Really Need More Information?
Chapter 6: Keeping an Open Mind
Chapter 7: Structuring Analytical Problems
Chapter 8: Analysis of Competing Hypotheses
PART III—COGNITIVE BIASES
Chapter 9: What Are Cognitive Biases?
Chapter 10: Biases in Evaluation of Evidence
Chapter 11: Biases in Perception of Cause and Effect 
Chapter 12: Biases in Estimating Probabilities
Chapter 13: Hindsight Biases in Evaluation of Intelligence Reporting
PART IV—CONCLUSIONS
Chapter 14: Improving Intelligence Analysis

This book was prepared primarily for the use of US Government officials,
and the format, coverage, and content were designed to meet their specific
requirements.
Because this book is now out of print, this Portable Document File (PDF)
is formatted for two-sided printing to facilitate desktop publishing. It
may be used by US Government agencies to make copies for government
purposes and by non-governmental organizations to make copies
for educational purposes. Because this book may be subject to copyright
restriction, copies may not be made for any commercial purpose.
This book will be available at www.odci.gov/csi.
All statements of fact, opinion, or analysis expressed in the main text
of this book are those of the author. Similarly, all such statements in
the Foreword and the Introduction are those of the respective authors
of those sections. Such statements of fact, opinion, or analysis do
not necessarily reflect the official positions or views of the Central
Intelligence Agency or any other component of the US Intelligence
Community. Nothing in the contents of this book should be construed
as asserting or implying US Government endorsement of factual
statements or interpretations.

ISBN 1-929667-00-0
Originally published in 1999.


 Author’s Preface
This volume pulls together and republishes, with some editing,
updating, and additions, articles written during 1978–86 for internal
use within the CIA Directorate of Intelligence. Four of the articles also
appeared in the Intelligence Community journal Studies in Intelligence
during that time frame. The information is relatively timeless and still
relevant to the never-ending quest for better analysis.
The articles are based on reviewing cognitive psychology literature
concerning how people process information to make judgments on incomplete
and ambiguous information. I selected the experiments and
findings that seem most relevant to intelligence analysis and most in need
of communication to intelligence analysts. I then translated the technical
reports into language that intelligence analysts can understand and
interpreted the relevance of these findings to the problems intelligence
analysts face.
The result is a compromise that may not be wholly satisfactory to
either research psychologists or intelligence analysts. Cognitive psychologists
and decision analysts may complain of oversimplification, while
the non-psychologist reader may have to absorb some new terminology.
Unfortunately, mental processes are so complex that discussion of them
does require some specialized vocabulary. Intelligence analysts who have
read and thought seriously about the nature of their craft should have
no difficulty with this book. Those who are plowing virgin ground may
require serious effort.
I wish to thank all those who contributed comments and suggestions
on the draft of this book: Jack Davis (who also wrote the Introduction);
four former Directorate of Intelligence (DI) analysts whose names cannot
be cited here; my current colleague, Prof. Theodore Sarbin; and my editor
at the CIA’s Center for the Study of Intelligence, Hank Appelbaum.
All made many substantive and editorial suggestions that helped greatly
to make this a better book.
—Richards J. Heuer, Jr.
Foreword
By Douglas MacEachin
My first exposure to Dick Heuer’s work was about 18 years ago, and
I have never forgotten the strong impression it made on me then. That
was at about the midpoint in my own career as an intelligence analyst.
After another decade and a half of experience, and the opportunity during
the last few years to study many historical cases with the benefit of
archival materials from the former USSR and Warsaw Pact regimes, reading
Heuer’s latest presentation has had even more resonance.
I know from first-hand encounters that many CIA officers tend to
react skeptically to treatises on analytic epistemology. This is understandable.
Too often, such treatises end up prescribing models as answers to the
problem. These models seem to have little practical value to intelligence
analysis, which takes place not in a seminar but rather in a fast-breaking
world of policy. But that is not the main problem Heuer is addressing.
What Heuer examines so clearly and effectively is how the human
thought process builds its own models through which we process information.
This is not a phenomenon unique to intelligence; as Heuer’s
research demonstrates, it is part of the natural functioning of the human
cognitive process, and it has been demonstrated across a broad range of
fields ranging from medicine to stock market analysis.
The process of analysis itself reinforces this natural function of the
human brain. Analysis usually involves creating models, even though
they may not be labeled as such. We set forth certain understandings and
expectations about cause-and-effect relationships and then process and
interpret information through these models or filters.
The discussion in Chapter 5 on the limits to the value of additional
information deserves special attention, in my view—particularly for an 
 intelligence organization. What it illustrates is that too often, newly acquired
information is evaluated and processed through the existing analytic
model, rather than being used to reassess the premises of the model
itself. The detrimental effects of this natural human tendency stem from
the raison d’être of an organization created to acquire special, critical information
available only through covert means, and to produce analysis
integrating this special information with the total knowledge base.
I doubt that any veteran intelligence officer will be able to read this
book without recalling cases in which the mental processes described by
Heuer have had an adverse impact on the quality of analysis. How many
times have we encountered situations in which completely plausible
premises, based on solid expertise, have been used to construct a logically
valid forecast—with virtually unanimous agreement—that turned out
to be dead wrong? In how many of these instances have we determined,
with hindsight, that the problem was not in the logic but in the fact
that one of the premises—however plausible it seemed at the time—was
incorrect? In how many of these instances have we been forced to admit
that the erroneous premise was not empirically based but rather a conclusion
developed from its own model (sometimes called an assumption)?
And in how many cases was it determined after the fact that information
had been available which should have provided a basis for questioning
one or more premises, and that a change of the relevant premise(s) would
have changed the analytic model and pointed to a different outcome?
The commonly prescribed remedy for shortcomings in intelligence
analysis and estimates—most vociferously after intelligence “failures”—is
a major increase in expertise. Heuer’s research and the studies he cites
pose a serious challenge to that conventional wisdom. The data show that
expertise itself is no protection from the common analytic pitfalls that
are endemic to the human thought process. This point has been demonstrated
in many fields besides intelligence analysis.
A review of notorious intelligence failures demonstrates that the analytic
traps caught the experts as much as anybody. Indeed, the data show
that when experts fall victim to these traps, the effects can be aggravated
by the confidence that attaches to expertise—both in their own view and
in the perception of others.
These observations should in no way be construed as a denigration
of the value of expertise. On the contrary, my own 30-plus years in the
business of intelligence analysis biased me in favor of the view that, endless
warnings of information overload notwithstanding, there is no such
thing as too much information or expertise. And my own observations
of CIA analysts sitting at the same table with publicly renowned experts
have given me great confidence that attacks on the expertise issue are
grossly misplaced. The main difference is that one group gets to promote
its reputation in journals, while the other works in a closed environment
in which the main readers are members of the intelligence world’s most
challenging audience—the policymaking community.
The message that comes through in Heuer’s presentation is that information
and expertise are a necessary but not sufficient means of making
intelligence analysis the special product that it needs to be. A comparable
effort has to be devoted to the science of analysis. This effort has to
start with a clear understanding of the inherent strengths and weaknesses
of the primary analytic mechanism—the human mind—and the way it
processes information.
I believe there is a significant cultural element in how intelligence
analysts define themselves: Are we substantive experts employed by CIA,
or are we professional analysts and intelligence officers whose expertise
lies in our ability to adapt quickly to diverse issues and problems and
analyze them effectively? In the world at large, substantive expertise is far
more abundant than expertise on analytic science and the human mental
processing of information. Dick Heuer makes clear that the pitfalls the human
mental process sets for analysts cannot be eliminated; they are part of
us. What can be done is to train people how to look for and recognize these
mental obstacles, and how to develop procedures designed to offset them.
Given the centrality of analytic science for the intelligence mission,
a key question that Heuer’s book poses is: Compared with other areas of
our business, have we committed a commensurate effort to the study of
analytic science as a professional requirement? How do the effort and resource
commitments in this area compare to, for example, the effort and
commitment to the development of analysts’ writing skills?
Heuer’s book does not pretend to be the last word on this issue.
Hopefully, it will be a stimulant for much more work.
—–
1. Douglas MacEachin is a former CIA Deputy Director of Intelligence. After 32 years with the
Agency, he retired in 1997 and became a Senior Fellow at Harvard University’s John F. Kennedy School of Government.


Introduction
Improving Intelligence Analysis
at CIA: Dick Heuer’s Contribution
to Intelligence Analysis
by Jack Davis
I applaud CIA’s Center for the Study of Intelligence for making the
work of Richards J. Heuer, Jr. on the psychology of intelligence analysis
available to a new generation of intelligence practitioners and scholars.
Dick Heuer’s ideas on how to improve analysis focus on helping
analysts compensate for the human mind’s limitations in dealing with
complex problems that typically involve ambiguous information, multiple
players, and fluid circumstances. Such multi-faceted estimative challenges
have proliferated in the turbulent post-Cold War world.
Heuer’s message to analysts can be encapsulated by quoting two
sentences from Chapter 4 of this book:
Intelligence analysts should be self-conscious about their reasoning
processes. They should think about how they make
judgments and reach conclusions, not just about the judgments
and conclusions themselves.
Heuer’s ideas are applicable to any analytical endeavor. In this
Introduction, I have concentrated on his impact—and that of other pioneer
thinkers in the intelligence analysis field—at CIA, because that is
the institution that Heuer and his predecessors, and I myself, know best,
having spent the bulk of our intelligence careers there.
Leading Contributors to Quality of Analysis
Intelligence analysts, in seeking to make sound judgments, are always
under challenge from the complexities of the issues they address
and from the demands made on them for timeliness and volume of production.
Four Agency individuals over the decades stand out for having
made major contributions on how to deal with these challenges to the
quality of analysis.
My short list of the people who have had the greatest positive impact
on CIA analysis consists of Sherman Kent, Robert Gates, Douglas
MacEachin, and Richards Heuer. My selection methodology was simple.
I asked myself: Whose insights have influenced me the most during my
four decades of practicing, teaching, and writing about analysis?
Sherman Kent
Sherman Kent’s pathbreaking contributions to analysis cannot be
done justice in a couple of paragraphs, and I refer readers to fuller treatments
elsewhere. Here I address his general legacy to the analytical profession.
Kent, a professor of European history at Yale, worked in the Research
and Analysis branch of the Office of Strategic Services during World War
II. He wrote an influential book, Strategic Intelligence for American World
Power, while at the National War College in the late 1940s. He served as
Vice Chairman and then as Chairman of the DCI’s Board of National
Estimates from 1950 to 1967.
Kent’s greatest contribution to the quality of analysis was to define
an honorable place for the analyst—the thoughtful individual “applying
the instruments of reason and the scientific method”—in an intelligence
world then as now dominated by collectors and operators. In a second
(1965) edition of Strategic Intelligence, Kent took account of the coming
computer age as well as human and technical collectors in proclaiming
the centrality of the analyst:
Whatever the complexities of the puzzles we strive to solve and whatever the sophisticated techniques we may use to collect the pieces and store them, there can never be a time when the thoughtful man can be supplanted as the intelligence device supreme.
More specifically, Kent advocated application of the techniques of
“scientific” study of the past to analysis of complex ongoing situations
and estimates of likely future events. Just as rigorous “impartial” analysis
could cut through the gaps and ambiguities of information on events
long past and point to the most probable explanation, he contended, the
powers of the critical mind could turn to events that had not yet transpired
to determine the most probable developments.
To this end, Kent developed the concept of the analytic pyramid,
featuring a wide base of factual information and sides comprised of
sound assumptions, which pointed to the most likely future scenario at
the apex.
In his proselytizing and in practice, Kent battled against bureaucratic
and ideological biases, which he recognized as impediments to sound
analysis, and against imprecise estimative terms that he saw as obstacles
to conveying clear messages to readers. Although he was aware of what
is now called cognitive bias, his writings urge analysts to “make the call”
without much discussion of how limitations of the human mind were to
be overcome.
Not many Agency analysts read Kent nowadays. But he had a profound
impact on earlier generations of analysts and managers, and his
work continues to exert an indirect influence among practitioners of the
analytic profession.

Robert Gates

Bob Gates served as Deputy Director of Central Intelligence (1986–
1989) and as DCI (1991–1993). But his greatest impact on the quality
of CIA analysis came during his 1982–1986 stint as Deputy Director for
Intelligence (DDI).
 Initially schooled as a political scientist, Gates earned a Ph.D. in
Soviet studies at Georgetown while working as an analyst at CIA. As
a member of the National Security Council staff during the 1970s, he
gained invaluable insight into how policymakers use intelligence analysis.
Highly intelligent, exceptionally hard-working, and skilled in the
bureaucratic arts, Gates was appointed DDI by DCI William Casey in
good part because he was one of the few insiders Casey found who shared
the DCI’s views on what Casey saw as glaring deficiencies of Agency analysts.
Few analysts and managers who heard it have forgotten Gates’ blistering
criticism of analytic performance in his 1982 “inaugural” speech
as DDI.
Most of the public commentary on Gates and Agency analysis
concerned charges of politicization levied against him, and his defense
against such charges, during Senate hearings for his 1991 confirmation as
DCI. The heat of this debate was slow to dissipate among CIA analysts,
as reflected in the pages of Studies in Intelligence, the Agency journal
founded by Sherman Kent in the 1950s.
I know of no written retrospective on Gates’ contribution to Agency
analysis. My insights into his ideas about analysis came mostly through an
arms-length collaboration in setting up and running an Agency training
course entitled “Seminar on Intelligence Successes and Failures.” During
his tenure as DDI, only rarely could you hold a conversation with analysts
or managers without picking up additional viewpoints, thoughtful
and otherwise, on what Gates was doing to change CIA analysis.
Gates’s ideas for overcoming what he saw as insular, flabby, and incoherent
argumentation featured the importance of distinguishing between
what analysts know and what they believe—that is, to make clear
what is “fact” (or reliably reported information) and what is the analyst’s
opinion (which had to be persuasively supported with evidence). Among
his other tenets were the need to seek the views of non-CIA experts, including
academic specialists and policy officials, and to present alternate
future scenarios.
Gates’s main impact, though, came from practice—from his direct
involvement in implementing his ideas. Using his authority as DDI, he
reviewed critically almost all in-depth assessments and current intelligence
articles prior to publication. With help from his deputy and two
rotating assistants from the ranks of rising junior managers, Gates raised
the standards for DDI review dramatically—in essence, from “looks
good to me” to “show me your evidence.”
As the many drafts Gates rejected were sent back to managers who
had approved them—accompanied by the DDI’s comments about inconsistency,
lack of clarity, substantive bias, and poorly supported judgments—
the whole chain of review became much more rigorous. Analysts
and their managers raised their standards to avoid the pain of DDI rejection.
Both career advancement and ego were at stake.
The rapid and sharp increase in attention paid by analysts and managers
to the underpinnings for their substantive judgments probably was
without precedent in the Agency’s history. The longer term benefits of
the intensified review process were more limited, however, because insufficient
attention was given to clarifying tradecraft practices that would
promote analytic soundness. More than one participant in the process
observed that a lack of guidelines for meeting Gates’s standards led to a
large amount of “wheel-spinning.”
Gates’s impact, like Kent’s, has to be seen on two planes. On the one
hand, little that Gates wrote on the craft of analysis is read these days.
But even though his pre-publication review process was discontinued
under his successors, an enduring awareness of his standards still gives
pause at jumping to conclusions to many managers and analysts who
experienced his criticism first-hand.
Douglas MacEachin
Doug MacEachin, DDI from 1993 to 1996, sought to provide an
essential ingredient for ensuring implementation of sound analytic standards:
corporate tradecraft standards for analysts. This new tradecraft was
aimed in particular at ensuring that sufficient attention would be paid to
cognitive challenges in assessing complex issues.
MacEachin set out his views on Agency analytical faults and correctives
in The Tradecraft of Analysis: Challenge and Change in the CIA. My
commentary on his contributions to sound analysis is also informed by a
series of exchanges with him in 1994 and 1995.
MacEachin’s university major was economics, but he also showed
great interest in philosophy. His Agency career—like Gates’—included
an extended assignment to a policymaking office. He came away from
this experience with new insights on what constitutes “value-added” intelligence
usable by policymakers. Subsequently, as CIA’s senior manager
on arms control issues, he dealt regularly with a cadre of tough-minded
policy officials who let him know in blunt terms what worked as effective
policy support and what did not.
By the time MacEachin became DDI in 1993, Gates’s policy of
DDI front-office pre-publication review of nearly all DI analytical studies
had been discontinued. MacEachin took a different approach; he
read—mostly on weekends—and reflected on numerous already-published
DI analytical papers. He did not like what he found. In his words,
roughly a third of the papers meant to assist the policymaking process
had no discernible argumentation to bolster the credibility of intelligence
judgments, and another third suffered from flawed argumentation. This
experience, along with pressures on CIA for better analytic performance
in the wake of alleged “intelligence failures” concerning Iraq’s invasion
of Kuwait, prompted his decision to launch a major new effort to raise
analytical standards.10
MacEachin advocated an approach to structured argumentation
called “linchpin analysis,” to which he contributed muscular terms designed
to overcome many CIA professionals’ distaste for academic nomenclature.
The standard academic term “key variables” became drivers.
“Hypotheses” concerning drivers became linchpins—assumptions
underlying the argument—and these had to be explicitly spelled out.
MacEachin also urged that greater attention be paid to analytical processes
for alerting policymakers to changes in circumstances that would
increase the likelihood of alternative scenarios. 
MacEachin thus worked to put in place systematic and transparent
standards for determining whether analysts had met their responsibilities
for critical thinking. To spread understanding and application of the
standards, he mandated creation of workshops on linchpin analysis for
managers and production of a series of notes on analytical tradecraft.
He also directed that the DI’s performance on tradecraft standards be
tracked and that recognition be given to exemplary assessments. Perhaps
most ambitious, he saw to it that instruction on standards for analysis
was incorporated into a new training course, “Tradecraft 2000.” Nearly
all DI managers and analysts attended this course during 1996–97.
As of this writing (early 1999), the long-term staying power of
MacEachin’s tradecraft initiatives is not yet clear. But much of what he
advocated has endured so far. Many DI analysts use variations on his
linchpin concept to produce soundly argued forecasts. In the training
realm, “Tradecraft 2000” has been supplanted by a new course that teaches
the same concepts to newer analysts. But examples of what MacEachin
would label as poorly substantiated analysis are still seen. Clearly, ongoing
vigilance is needed to keep such analysis from finding its way into
DI products.

Richards Heuer

Dick Heuer was—and is—much less well known within the CIA
than Kent, Gates, and MacEachin. He has not received the wide acclaim
that Kent enjoyed as the father of professional analysis, and he has lacked
the bureaucratic powers that Gates and MacEachin could wield as DDIs.
But his impact on the quality of Agency analysis arguably has been at
least as important as theirs.
Heuer received a degree in philosophy in 1950 from Williams
College, where, he notes, he became fascinated with the fundamental
epistemological question, “What is truth and how can we know it?” In
1951, while a graduate student at the University of California’s Berkeley
campus, he was recruited as part of the CIA’s buildup during the Korean
War. The recruiter was Richard Helms, OSS veteran and rising player in
the Agency’s clandestine service. Future DCI Helms, according to Heuer,
was looking for candidates for CIA employment among recent graduates
of Williams College, his own alma mater. Heuer had an added advantage
as a former editor of the college’s newspaper, a position Helms had held
some 15 years earlier.11
In 1975, after 24 years in the Directorate of Operations, Heuer
moved to the DI. His earlier academic interest in how we know the truth
was rekindled by two experiences. One was his involvement in the controversial
case of Soviet KGB defector Yuriy Nosenko. The other was
learning new approaches to social science methodology while earning a
Master’s degree in international relations at the University of Southern
California’s European campus.
At the time he retired in 1979, Heuer headed the methodology unit
in the DI’s political analysis office. He originally prepared most of the
chapters in this book as individual articles between 1978 and 1986; many
of them were written for the DI after his retirement. He has updated the
articles and prepared some new material for inclusion in this book.

Heuer’s Central Ideas
 
Dick Heuer’s writings make three fundamental points about the
cognitive challenges intelligence analysts face:
• The mind is poorly “wired” to deal effectively with both inherent
uncertainty (the natural fog surrounding complex, indeterminate
intelligence issues) and induced uncertainty (the man-made fog
fabricated by denial and deception operations).
• Even increased awareness of cognitive and other “unmotivated”
biases, such as the tendency to see information confirming an already-
held judgment more vividly than one sees “disconfirming”
information, does little by itself to help analysts deal effectively
with uncertainty.
• Tools and techniques that gear the analyst’s mind to apply higher
levels of critical thinking can substantially improve analysis on
complex issues on which information is incomplete, ambiguous,
and often deliberately distorted. Key examples of such intellectual devices
include techniques for structuring information, challenging
assumptions, and exploring alternative interpretations.
The following passage from Heuer’s 1980 article entitled “Perception:
Why Can’t We See What Is There to be Seen?” shows that his ideas were
similar to or compatible with MacEachin’s concepts of linchpin analysis.
Given the difficulties inherent in the human processing of complex
information, a prudent management system should:
• Encourage products that (a) clearly delineate their assumptions
and chains of inference and (b) specify the
degree and source of the uncertainty involved in the
conclusions.
• Emphasize procedures that expose and elaborate alternative
points of view—analytic debates, devil’s advocates,
interdisciplinary brainstorming, competitive
analysis, intra-office peer review of production, and
elicitation of outside expertise.
Heuer emphasizes both the value and the dangers of mental models,
or mind-sets. In the book’s opening chapter, entitled “Thinking About
Thinking,” he notes that:
[Analysts] construct their own version of “reality” on the basis
of information provided by the senses, but this sensory input
is mediated by complex mental processes that determine
which information is attended to, how it is organized, and the
meaning attributed to it. What people perceive, how readily
they perceive it, and how they process this information after
receiving it are all strongly influenced by past experience, education,
cultural values, role requirements, and organizational
norms, as well as by the specifics of the information received.
This process may be visualized as perceiving the world through
a lens or screen that channels and focuses and thereby may distort
the images that are seen. To achieve the clearest possible image . . .
analysts need more than information . . .
They also need to understand the lenses through which this information
passes. These lenses are known by many terms—mental models,
mind-sets, biases, or analytic assumptions.
In essence, Heuer sees reliance on mental models to simplify and
interpret reality as an unavoidable conceptual mechanism for intelligence
analysts—often useful, but at times hazardous. What is required of analysts,
in his view, is a commitment to challenge, refine, and challenge again
their own working mental models, precisely because these steps are central
to sound interpretation of complex and ambiguous issues.
Throughout the book, Heuer is critical of the orthodox prescription
of “more and better information” to remedy unsatisfactory analytic performance.
He urges that greater attention be paid instead to more intensive
exploitation of information already on hand, and that in so doing,
analysts continuously challenge and revise their mental models.
Heuer sees mirror-imaging as an example of an unavoidable cognitive
trap. No matter how much expertise an analyst applies to interpreting
the value systems of foreign entities, when the hard evidence runs out
the tendency to project the analyst’s own mind-set takes over.
In Chapter 4, Heuer observes:
To see the options faced by foreign leaders as these leaders see
them, one must understand their values and assumptions and
even their misperceptions and misunderstandings. Without
such insight, interpreting foreign leaders’ decisions or forecasting
future decisions is often nothing more than partially informed
speculation. Too frequently, foreign behavior appears
“irrational” or “not in their own best interest.” Such conclusions
often indicate analysts have projected American values
and conceptual frameworks onto the foreign leaders and societies,
rather than understanding the logic of the situation as it
appears to them.

Competing Hypotheses
 
To offset the risks accompanying analysts’ inevitable recourse to mirror-imaging,
Heuer suggests looking upon analysts’ calculations about foreign beliefs
and behavior as hypotheses to be challenged. Alternative hypotheses need to
be carefully considered—especially those that cannot be disproved on the
basis of available information.
Heuer’s concept of “Analysis of Competing Hypotheses” (ACH) is
among his most important contributions to the development of an intelligence
analysis methodology. At the core of ACH is the notion of
competition among a series of plausible hypotheses to see which ones
survive a gauntlet of testing for compatibility with available information.
The surviving hypotheses—those that have not been disproved—are subjected
to further testing. ACH, Heuer concedes, will not always yield the
right answer. But it can help analysts overcome the cognitive limitations
discussed in his book.
Some analysts who use ACH follow Heuer’s full eight-step methodology.
More often, they employ some elements of ACH—especially the
use of available information to challenge the hypotheses that the analyst
favors the most. 
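
For readers who think in code, the scoring idea at the heart of the ACH matrix can be sketched in a few lines of Python. This is only a toy illustration of the principle; the hypotheses, evidence items, and ratings below are invented and do not come from Heuer's book, which lays out the full procedure in Chapter 8.

```python
# Toy sketch of the scoring step in Analysis of Competing Hypotheses (ACH).
# Everything below (hypotheses, evidence, ratings) is invented for
# illustration; Heuer's full eight-step procedure is in Chapter 8.

# Each item of evidence is rated against each hypothesis:
#   "C" = consistent, "I" = inconsistent, "N" = not applicable.
matrix = {
    "E1: troop movements observed": {"H1: exercise": "C", "H2: invasion": "C"},
    "E2: reserves not mobilized":   {"H1: exercise": "C", "H2: invasion": "I"},
    "E3: hostile public rhetoric":  {"H1: exercise": "N", "H2: invasion": "C"},
}

def inconsistency_score(hypothesis: str) -> int:
    """ACH ranks hypotheses by how much evidence argues AGAINST them,
    not by how much evidence is merely compatible with them."""
    return sum(1 for ratings in matrix.values() if ratings[hypothesis] == "I")

for h in sorted(["H1: exercise", "H2: invasion"], key=inconsistency_score):
    print(f"{h}: {inconsistency_score(h)} item(s) of inconsistent evidence")

# Note that E1, being consistent with every hypothesis, has no diagnostic
# value: it cannot help discriminate among the competing explanations.
```

Ranking by inconsistencies rather than confirmations reflects Heuer's central point: evidence compatible with everything proves nothing, and the hypothesis that survives is the one with the least evidence against it.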

Denial and Deception

 
Heuer’s path-breaking work on countering denial and deception
(D&D) was not included as a separate chapter in this volume. But his
brief references here are persuasive.
He notes, for example, that analysts often reject the possibility of deception
because they see no evidence of it. He then argues that rejection
is not justified under these circumstances. If deception is well planned
and properly executed, one should not expect to see evidence of it readily
at hand. Rejecting a plausible but unproven hypothesis too early tends
to bias the subsequent analysis, because one does not then look for the
evidence that might support it. The possibility of deception should not
be rejected until it is disproved or, at least, until a systematic search for
evidence has been made and none has been found.
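
The force of this argument can be seen in a toy Bayesian calculation. The numbers below are invented for illustration and appear nowhere in Heuer's text; they simply formalize the observation that well-executed deception rarely leaves visible traces.

```python
# Toy Bayesian reading of the argument above; all numbers are invented.
# If competent deception seldom leaves visible evidence, then observing
# no evidence only weakly argues against the deception hypothesis.

prior_deception = 0.30               # analyst's prior belief in deception
p_quiet_given_deception = 0.80       # good deception hides its traces
p_quiet_given_no_deception = 0.95    # honest activity also shows little

# Bayes' rule: P(deception | no evidence observed)
numerator = p_quiet_given_deception * prior_deception
posterior = numerator / (
    numerator + p_quiet_given_no_deception * (1 - prior_deception)
)

print(f"P(deception) moves only from {prior_deception:.2f} to {posterior:.2f}")
# Output: from 0.30 to 0.27 -- seeing nothing is weak grounds for ruling
# deception out, exactly as Heuer argues.
```
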
Heuer’s Impact
Heuer’s influence on analytic tradecraft began with his first articles.
CIA officials who set up training courses in the 1980s as part of then-
DDI Gates’s quest for improved analysis shaped their lesson plans partly
on the basis of Heuer’s findings. Among these courses were a seminar on
intelligence successes and failures and another on intelligence analysis.
 The courses influenced scores of DI analysts, many of whom are now
in the managerial ranks. The designers and teachers of Tradecraft 2000
clearly were also influenced by Heuer, as reflected in reading selections,
case studies, and class exercises.
Heuer’s work has remained on reading lists and in lesson plans for
DI training courses offered to all new analysts, as well as courses on warning
analysis and on countering denial and deception. Senior analysts and
managers who have been directly exposed to Heuer’s thinking through
his articles, or through training courses, continue to pass his insights on
to newer analysts.

Recommendations

Heuer’s advice to Agency leaders, managers, and analysts is pointed:
To ensure sustained improvement in assessing complex issues, analysis
must be treated as more than a substantive and organizational process.
Attention also must be paid to techniques and tools for coping with
the inherent limitations on analysts’ mental machinery. He urges that
Agency leaders take steps to:
• Establish an organizational environment that promotes and rewards
the kind of critical thinking he advocates—for example,
analysis on difficult issues that considers in depth a series of plausible
hypotheses rather than allowing the first credible hypothesis
to suffice.
• Expand funding for research on the role such mental processes
play in shaping analytical judgments. An Agency that relies on
sharp cognitive performance by its analysts must stay abreast
of studies on how the mind works—i.e., on how analysts reach
judgments.
• Foster development of tools to assist analysts in assessing information.
On tough issues, they need help in improving their mental
models and in deriving incisive findings from information they
already have; they need such help at least as much as they need
more information. 
I offer some concluding observations and recommendations, rooted
in Heuer’s findings and taking into account the tough tradeoffs facing
intelligence professionals:
Commit to a uniform set of tradecraft standards based on the insights
in this book.
Leaders need to know if analysts have done their
cognitive homework before taking corporate responsibility for
their judgments. Although every analytical issue can be seen as
one of a kind, I suspect that nearly all such topics fit into about
a dozen recurring patterns of challenge based largely on variations
in substantive uncertainty and policy sensitivity. Corporate
standards need to be established for each such category. And the
burden should be put on managers to explain why a given analytical
assignment requires deviation from the standards. I am
convinced that if tradecraft standards are made uniform and
transparent, the time saved by curtailing personalistic review of
quick-turnaround analysis (e.g., “It reads better to me this way”)
could be “re-invested” in doing battle more effectively against
cognitive pitfalls. (“Regarding point 3, let’s talk about your assumptions.”)
Pay more honor to “doubt.” Intelligence leaders and policymakers
should, in recognition of the cognitive impediments to sound
analysis, establish ground rules that enable analysts, after doing
their best to clarify an issue, to express doubts more openly. They
should be encouraged to list gaps in information and other obstacles
to confident judgment. Such conclusions as “We do not
know” or “There are several potentially valid ways to assess this
issue” should be regarded as badges of sound analysis, not as dereliction
of analytic duty.
Find a couple of successors to Dick Heuer. Fund their research. Heed
their findings.
—–

2. Jack Davis served with the Directorate of Intelligence (DI), the National Intelligence
Council, and the Office of Training during his CIA career. He is now an independent contractor
who specializes in developing and teaching analytic tradecraft. Among his publications is
Uncertainty, Surprise, and Warning (1996).
3. See, in particular, the editor’s unclassified introductory essay and “Tribute” by Harold P. Ford
in Donald P. Steury, Sherman Kent and the Board of National Estimates: Collected Essays (CIA,
Center for the Study of Intelligence, 1994). Hereinafter cited as Steury, Kent.
4. Sherman Kent, Writing History, second edition (1967). The first edition was published
in 1941, when Kent was an assistant professor of history at Yale. In the first chapter, “Why
History,” he presented ideas and recommendations that he later adapted for intelligence analysis.
5. Kent, “Estimates and Influence” (1968), in Steury, Kent.
6. Casey, very early in his tenure as DCI (1981–1987), opined to me that the trouble with
Agency analysts is that they went from sitting on their rear ends at universities to sitting on
their rear ends at CIA, without seeing the real world.
7. “The Gates Hearings: Politicization and Soviet Analysis at CIA,” Studies in Intelligence
(Spring 1994); “Communication to the Editor: The Gates Hearings: A Biased Account,” Studies
in Intelligence (Fall 1994).
8. DCI Casey requested that the Agency’s training office provide this seminar so that, at the
least, analysts could learn from their own mistakes. DDI Gates carefully reviewed the statement
of goals for the seminar, the outline of course units, and the required reading list.
9. Unclassified paper published in 1994 by the Working Group on Intelligence Reform, which
had been created in 1992 by the Consortium for the Study of Intelligence, Washington, DC.
10. Discussion between MacEachin and the author of this Introduction, 1994.
11. Letter to the author of this Introduction, 1998.


Read more: https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/copy_of_index.html