the rules of deductive
logic as norms for reasoning in this area.
When we're interested in
reasoning about chance and uncertainty, we appeal
to probability theory and statistics to give us
norms for correct reasoning
in this area.
If we're talking
about making choices that will maximize certain
goals under conditions of uncertainty, then we appeal
to formal decision theory.
If we add the situation where
other actors are making choices that can affect the
outcomes of our decisions, then we're moving into
what's called game theory.
So over time, we've
developed a number of different theories
of rationality that give us norms for correct
reasoning in different domains.
Now, this is great, of course.
These are very powerful
and useful tools.
Now, when it comes to the study
of how human reasoning actually works, before Kahneman and
Tversky's work in the 1970s, there was a widely shared
view that more often than not,
the mind, or the brain,
processes information in ways that mimic the
formal models of reasoning and decision-making that were
familiar from our normative
models of reasoning--
from formal logic, from probability theory,
from decision theory.
What Kahneman and
Tversky showed is that more often than
not, this is not the way our minds work.
They showed that there's
a gap between how our normative theories
say we should reason and how we in fact reason.
This gap can manifest
itself in different ways, and there's no one single
explanation for it.
One reason, for example, is
that in real-world situations, the reasoning
processes prescribed by our normative
theories of rationality
can be computationally
very intensive.
Our brains would need to process
an awful lot of information to implement our best normative
theories of reasoning.
But this kind of information
processing takes time.
And in the real
world, we often need to make decisions much quicker,
sometimes in milliseconds.
You can imagine
this time pressure being even more
intense if you think about the situations facing
our Homo sapiens ancestors.
If there's a big
animal charging you, and you wait too long to figure
out what to do, you're dead.
So the speculation is that
our brains have evolved various shortcut mechanisms
for making decisions, especially when the problems
we're facing are complex,
we have incomplete information,
and there's risk involved.
In these situations, we
sample the information available to us.
We focus on just
those bits that are most relevant to
our decision task, and we make a decision
based on a rule of thumb.
These rules of thumb
are the heuristics in the so-called biases
and heuristics literature.
Two important
things to note here.
One is that we're
usually not consciously aware of the heuristics
that we're using, or the information
that we're focusing on.
Most of this is going
on below the surface.
The second thing to note is
that these heuristics aren't designed to give us the best
solutions to our decision problems, all things considered.
What they're designed
to do is give us solutions that are good
enough for immediate purposes.
But good enough might
mean good enough in our ancestral
environments, where these cognitive
mechanisms evolved.
In contexts that
are more removed from these ancestral
environments, we can end up making
systematically
bad choices or errors in
reasoning because we're automatically, subconsciously
invoking the heuristic in a situation where that
heuristic isn't necessarily
the best rule to follow.
So the term "bias,"
in this context, refers to the systematic gap
between how we're actually disposed to behave
or reason and how
we ought to behave or
reason, by the standards of some normative theory of
reasoning or decision-making.
The heuristic is
the rule of thumb that we're using to make the
decision or the judgment.
The bias is the
predictable effect of using that rule of thumb
in situations where it doesn't give an optimal result.
I know this is all
pretty general, so let me give you an
example of a cognitive bias and its related heuristic.
This is known as the anchoring
heuristic, or the anchoring effect.
Kahneman and Tversky
did a famous experiment where they asked a
group of subjects to estimate the percentage
of countries in Africa that are members of the United Nations.
Now, for one group of subjects,
they asked the question, is this percentage
more or less than 10%?
For another group of subjects,
they asked the question, is it more or less than 65%?
And the average of the
answers of the two groups differed significantly.
In the first group, the
average answer was 25%.
In the second group, the
average answer was 45%.
The second group estimated
higher than the first group.
Why?
Well, this is what
seems to be going on.
If subjects were exposed
to a higher number, their estimates were
anchored to that number.
Give them a higher number,
they estimate higher.
Give them a lower number,
they estimate lower.
So the idea behind this
anchoring heuristic is that when people
are asked to estimate a probability or an
uncertain number,
rather than try to perform
a complex calculation in their heads, they start with an
implicitly suggested reference point, the anchor, and make
adjustments from that reference point.
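The anchor-and-adjust idea can be sketched as a toy model. To be clear, this is purely illustrative: the `anchored_estimate` function, the `adjustment_rate` parameter, and the numbers below are hypothetical assumptions, not figures from Kahneman and Tversky's data.

```python
def anchored_estimate(anchor, prior_belief, adjustment_rate=0.5):
    """Toy anchor-and-adjust model: start at the anchor and move only
    part of the way toward one's underlying belief, so the final
    estimate stays biased toward the anchor."""
    return anchor + adjustment_rate * (prior_belief - anchor)

# Two groups share the same underlying belief (say, 30%) but are
# shown different anchors, mirroring the 10% vs. 65% conditions:
low_anchor_group = anchored_estimate(anchor=10, prior_belief=30)
high_anchor_group = anchored_estimate(anchor=65, prior_belief=30)
print(low_anchor_group, high_anchor_group)  # 20.0 47.5
```

Because the adjustment is incomplete (the rate is less than 1), the group shown the higher anchor always ends up with the higher average estimate, which is the qualitative pattern the experiment found.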
Now, you might think, in this
case, it's not just the number.
It's the way the
question is phrased that biased the estimates.
You might think the
subjects are assuming that the researchers
know the answer, and the reference number is
therefore related in some way
to the actual answer.
But researchers have done
this experiment many times in different ways.
In one version, for
example, the subjects are asked the same question,
to estimate the percentage of African nations in the UN.
But before they
answer, the researcher spins a roulette wheel
in front of a group, waits for it to land on a number
so they can all see the number,
then asks them if the
percentage of African nations is larger or smaller than the
number on the roulette wheel.
The results are the same.
If the number is high,
people estimate high.
If the number is low,
people estimate low.
And in this case, the
subjects couldn't possibly assume the number on the
roulette wheel had any relation to the actual percentage of
African nations in the UN,
but their estimates were
anchored to this number anyway.
Now, results like
these have proven to be really important
for understanding how human beings
process information
and make judgments on
the basis of information.
The anchoring effect shows up in
strategic negotiation behavior, consumer shopping
behavior, in the behavior of stock and real
estate markets--
it shows up everywhere.
It's a very widespread
and robust effect.
Now note also that
this behavior is, by the standards of
our normative theories of correct reasoning,
systematically irrational.
This is an example
of a cognitive bias.
Now, this would be interesting
but not deeply significant if the anchoring effect
were the only cognitive bias that we had discovered.
But if you go to
Wikipedia and type in, "list of cognitive
biases," you'll find a page that lists just
over 100 of these biases.
And the list is not exhaustive.
I encourage everyone
to check it out.
If you spend much time
looking at these links, you'll get a crash course
in cognitive biases.
So what's the upshot of all this
for us, as critical thinkers?
Well, I'm going to get into this
a bit more in the next podcast episode.
But it's clear that
at the very least, we all need to acquire a
certain level of cognitive bias literacy.
We don't need to become
experts, but we should all be able to recognize the most
important and most discussed cognitive biases.
We should all know what
confirmation bias is, what the base rate fallacy is,
what the gambler's fallacy is, and so on.
These are just as
important as understanding the standard logical fallacies.
Why?
Because as critical
thinkers, we need to be aware of the processes
that influence our judgments, especially if those processes
systematically bias us
in ways that make us prone
to error and bad decisions.
Also, we want to
be on the lookout for conscious manipulation and
exploitation of these biases by people who are in the
influence business, whose
job it is to make people think
and act in ways that further their interests rather
than your interests.
We know that marketing firms
and political campaigns hire experts in these areas to
help them craft their messages.
Now, let me give you a
hypothetical example, though I know some people who'd
say this is not hypothetical.
Let's say you're a media advisor
to a government that has just conducted a major military
strike on a foreign country, and there were
civilian casualties
resulting from the strike.
Now, if the number of
civilians killed is high, then that's bad
for the government.
It will be harder to
maintain popular support for this action.
Let's say our
intelligence indicates that the number of casualties
is in the thousands.
This is not a good number.
It's going to be hard to
sell this action if that's the number that everyone reads
in the news the next day.
So as an advisor
to this government, what do you recommend doing?
I'll tell you what I would
do if all I cared about was furthering the
government's interests.
I'd say, Mr.
President-- or whoever's in charge-- we need to issue a
statement before the press gets ahold of this.
And in this statement,
we need to say that the number of estimated
casualties resulting from this strike is low,
maybe 100 or 200, at the most.
Now, why would I advise this?
Because I know about
the anchoring effect.
I know that the
public's estimate of the real number
of casualties is going to be anchored to
the first number they're
exposed to.
And if that number's low,
they'll estimate low.
And if data eventually comes out
with numbers that are higher, the public's
estimates will still be lower than they
would be if we
didn't get in there
first and feed them that first low number.
Now, that's what I would
do, if all I cared about was manipulating public opinion.
This is a hypothetical
example, but trust me when I tell you that
decisions like these are made every day under
the advice of professionals
who are experts in this
psychological literature.
So there's a broader
issue at stake.
This is the kind of
background knowledge that is important if
our ultimate goal is to be able to claim
ownership and responsibility
for our own beliefs and values.
And that's what critical
thinking is all about.
Well, that's going to wrap
it up for this episode.
In the next episode, I'm going
to look at a few more case studies to help highlight how
important cognitive biases are, and maybe give you some more
incentive to look into them.
I'll leave some links
in the show notes to some online
resources, which you can find at that
criticalthinkerpodcast.com.
Thanks for listening, and
we'll see you next time.